“Strong Teeth”—a study protocol for an early-phase feasibility trial of a complex oral health intervention delivered by dental teams to parents of young children
Background
Dental attendance provides an important opportunity for dental teams to explore with parents the oral health behaviours they undertake for their young children (0–5 years old). For these discussions to be effective, dental professionals need to be skilled in behaviour change conversations. The current evidence suggests that dental teams need further support, training and resources in this area. Therefore, the University of Leeds and Oral-B (Procter & Gamble Company) have worked with the local community and dental professionals to co-develop “Strong Teeth” (an oral health intervention), which is delivered in a general dental practice setting by the whole dental team. The protocol for this early phase study will explore the feasibility and acceptability of the Strong Teeth intervention to parents and the dental team, as well as explore short-term changes in oral health behaviour.

Methods
Forty parents (20 of children aged 0–2 years old, and 20 of children aged 3–5 years old) who are about to attend the dentist for their child’s regular dental check-up will be recruited to the study. Parents and children will be recruited from 4 to 8 different dental practices. In the home setting, consent and baseline oral health behaviour data will be collected. The researchers will ask parents questions about their child’s oral health behaviours, including toothbrushing and diet. Three different proxy objective measures of toothbrushing will be collected and compared with self-report measures of parental supervised toothbrushing (PSB).

Discussion
The parent and child will then attend their dental visit and receive the Strong Teeth intervention, delivered by the dental team. This intervention should take 5–15 min to deliver, in addition to the routine dental check-up. Furthermore, children aged 0–2 years old will receive an Oral-B manual children’s toothbrush, and children aged 3–5 years old will receive an Oral-B electric rechargeable children’s toothbrush. At 2 weeks and 2–3 months following the Strong Teeth intervention, further self-report and objective measures will be collected in the parent/child’s home. These data will be supplemented with purposively sampled qualitative interviews with parents (approximately 3 months following the intervention) and dental team members (following delivery of the intervention).

Trial registration
ISRCTN Register, ISRCTN10709150.

Electronic supplementary material
The online version of this article (10.1186/s40814-019-0483-9) contains supplementary material, which is available to authorized users.
Background
Dental caries (tooth decay) is the most prevalent preventable childhood disease and a major public health priority [1]. Caries is a disease of health inequality. In England, 12% of 3- and 23% of 5-year-olds are affected by caries, with figures rising to 17% and 40%, respectively, for children living in deprived parts of Yorkshire [2].
Both Public Health England [3] and the National Institute for Health and Care Excellence [4] identify young children and their parents as a key focus for oral health advice. Supporting parents to initiate and adopt protective home-based oral health behaviours in early life is critical to the development of long-term oral health habits, thereby reducing common oral diseases such as caries and periodontal disease across the life course [5][6][7]. Both dental teams and parents [8][9][10] have identified that changing poor oral health behaviours for children is challenging, especially once dental disease has already been identified. Therefore, an approach which is strongly supported by local communities [11] is to encourage good oral health behaviours from the outset, with different early years professionals skilled in providing appropriate support and advice. Following the development of our generic complex oral health intervention [11], our research group have adapted the intervention for different health and early years professionals. One such example is the HABIT intervention, which is focused on the universal developmental review undertaken by health visitors [12]. This home visit with parents of children aged 9-12 months covers a wide range of general health topics, including a short conversation around oral health. The HABIT intervention involves training of health visitors to improve the structure, content and quality of these oral health conversations, as well as providing supporting paper-based and digital resources.
During the development of the generic and HABIT interventions, the community and study participants have repeatedly identified the need for preventive oral health conversations delivered by the primary care dental team. However, nearly two thirds (65.9%) of 0-4-year-olds did not attend the dentist in the 12 months up to June 2018 [13] and hence the need for effective oral health conversations in both the dental and community settings. These dental attendance figures are a key driver for a national oral health initiative in England, Dental Check by One (DCby1), which aims to encourage parents to take their child to the dentist before their first birthday (https://dentalcheckbyone.co.uk/) and establish regular dental attendance behaviours. The frequency of attendance is determined by the dental team based on an oral health risk assessment and can vary between 3 and 12 months [14]. Although Dental Check by One is aimed at tackling non-attendance, attendance in itself does not necessarily mean prevention advice is provided or adopted. To maximise the benefits of dental attendance, dental teams need to be able to have effective behaviour change conversations. As an example, a recent randomised controlled trial undertaken in Northern Ireland showed over a third of children developed dental caries by the age of 6 years old, despite regular attendance at the dentist over the previous 3 years [15]. In this study, preventive advice followed national Public Health England guidelines [3]. This highlights that changing oral health behaviours is challenging and requires more than simply providing information to parents.
There have been several studies that focus on the experiences of dental teams in providing oral health advice to patients [16][17][18][19][20]. These have identified a number of challenges, including the "ad hoc" nature of the content and delivery of oral health advice, the lack of training, knowledge and personal skills, as well as pressures related to insufficient finances, staff, facilities and time. Whilst national guidelines [3] have clarified what oral health behaviours should be promoted, they do not identify how to effectively undertake these behaviour change conversations.
Oral health behaviours (for example, brushing teeth twice a day with a fluoride toothpaste and reducing the frequency and amount of sugar consumed) are complex as they are influenced at multiple levels (i.e. individual, interpersonal, community, organisational and environmental), which can act as both barriers and facilitators to adoption [11,21]. As such, effective oral health interventions must embrace appropriate complex intervention methodology (complex interventions are traditionally defined as interventions with several interacting components), underpinned by psychological theory, as outlined by the Medical Research Council [16]. This is the approach that has been taken when developing the "Strong Teeth" intervention: as well as providing the evidence-based guidance from "Delivering Better Oral Health", there is a strong recognition and appreciation of the challenges families with young children face and how these can impact on caring for their children's teeth, based on our previous research underpinned by the Theoretical Domains Framework and socioecological model [11,21]. For example, despite lacking the capability to effectively brush their own teeth, many young children are responsible for their own toothbrushing, yet children are not always engaged or cooperative with parental involvement. This is one of the reasons why, in the early-phase evaluation of the Strong Teeth intervention, we have included the provision of an electric toothbrush in the 3-5-year-old group, as the novelty of the brush may increase child engagement with toothbrushing and parental involvement. However, assessing the acceptability and impact of electric toothbrushes in terms of engagement and toothbrushing behaviours, as well as other issues such as cost and ease of use, will be essential in the present study to determine whether this forms a key component of the intervention. The key strength of the Strong Teeth intervention is the training and focus on the conversation between dental professional and parent. Utilising a whole-team approach, the conversation is tailored to the needs of each family and encourages parents to identify their own challenges and, subsequently, the solutions to overcome these challenges, yet allows each conversation to be delivered with consistency and clarity due to its structured and hierarchical format.
In collaboration with Oral-B (Procter & Gamble Company), the University of Leeds has undertaken a programme of research to develop a complex oral health intervention, delivered by dental teams to parents of young children. This programme of work included undertaking a series of rapid reviews to identify (1) the barriers and facilitators to toothbrushing and healthy eating in respect to oral health for children aged 0-11 years old; and (2) interventions already developed for use in general dental practice and their efficacy in reducing dental caries. As we had previously qualitatively explored the experiences of parents of children aged 0-6 years old [21], a second workstream explored qualitatively the experiences of dental teams (n = 27), parents (n = 37) and children (aged 7-10 years old, involving five classes in three different schools) in delivering and receiving oral health advice and what impact this had on parents' and children's behaviour. This was to assess what the range and scope of the intervention should be (i.e. whether a combined or separate approach was needed for different age groups). This work led to the Strong Teeth intervention concentrating on the 0-5 year age group. Using our earlier generic complex intervention work [11,17,21] in conjunction with this research, we have worked with Oral-B to co-develop the Strong Teeth intervention (https://www.dentalcare.co.uk/en-gb/strong-teeth-strong-kids). As part of a co-production approach to development, 12 focus groups with dental professionals (n = 4, k = 27) and parents (n = 8, k = 41) were undertaken to review and incrementally improve the intervention. Full details of the rapid reviews, qualitative interviews and co-production process are beyond the scope of the current paper and will be reported elsewhere. Nevertheless, the Strong Teeth intervention is now finalised and ready for an early-phase evaluation to explore its acceptability to parents and dental teams, the feasibility of delivery, and whether it leads to behaviour change.
Feasibility study primary aim
To undertake an early-phase feasibility trial of the Strong Teeth intervention delivered by dental teams to parents of children aged 0-5 years old.
Feasibility study primary objectives
Using a mixed-methods approach (including self-report questionnaires, dental examinations, filming the toothbrushing interaction between parent and child, and qualitative interviews):

1. To explore with NHS dental teams the acceptability and feasibility of delivering the Strong Teeth intervention to parents of children aged 0-5 years old
2. To review study findings against progression criteria (see Table 1) and determine whether progression to a definitive trial is appropriate

Feasibility study secondary objectives

The secondary objectives are as follows:

1. To explore with parents of children aged 0-5 years old the acceptability of the Strong Teeth intervention
2. To study the mechanisms of action for the Strong Teeth intervention
3. To correlate different proxy objective measures of toothbrushing with parental self-reports of parental supervised toothbrushing (PSB, i.e. the parent actively brushing their child's teeth)
4. To describe the changes in dietary behaviour and PSB as a result of the Strong Teeth intervention in children aged 0-5 years old
5. To examine the impact of providing children aged 3-5 years old with an Oral-B electric rechargeable toothbrush, with respect to acceptability and toothbrushing behaviours
Design/methods
This mixed-methods study will involve two participant groups: Group A-dental teams working in NHS dental practices (n = 4-8 practices) and Group B-parents of children aged 0-5 years old (n = 40) to allow the objectives to be achieved and to capture the perspectives of all relevant stakeholders. Involvement of participants from different backgrounds is essential to ensure the sample is representative of the local population. Therefore, this study will seek to involve parents from different socio-economic and ethnic minority groups.
Overall design of the study
In parts of Yorkshire (Bradford, Leeds and surrounding areas) where many children are at high risk of dental caries, 40 parents who are about to attend the dentist for their child's regular dental check-up (20 parents of children aged 0-2 years old, and 20 parents of children aged 3-5 years old) will be recruited from 4 to 8 different dental practices.
In the home setting, consent and baseline oral health behaviour data will be collected. The researcher will ask parents questions about their children's oral health behaviours, including toothbrushing [18] and dietary behaviours [19], based on validated measures (the full baseline questionnaire can be found in Additional file 1: Appendix 1). Three different proxy objective measures of PSB will be collected and compared to self-reported parental behaviours: (1) children's pre-brushing plaque levels per sextant [20]; (2) duration of toothbrushing and parent-child interaction during toothbrushing: the researcher will film the parent/child toothbrushing using a small action camera (GoPro HERO5, GoPro Inc.) and this will be subsequently evaluated by the research team using an established toothbrushing index (please see Additional file 1: Appendix 2) [22]; and (3) toothbrushing activity: parents will be provided with either a paper Magic Timer diary or the Disney Magic Timer app for their phone/tablet, which records the frequency and duration of toothbrushing. It is imperative to obtain objective as well as self-reported measures of toothbrushing, as research has shown there tends to be a mismatch between reported and observed behaviours [23,24]. The dental team member will also collect the gingivitis rating per sextant [25] and the number of teeth present, missing and decayed, following training and calibration using British Association for the Study of Community Dentistry (BASCD) standards [26,27].
The parent and child will then attend their NHS dental check-up and receive the Strong Teeth intervention delivered by the dental team. The Strong Teeth resources, training manual and videos are targeted at the whole dental team to enable them to have effective oral care conversations with parents of young children in their practice. The Strong Teeth intervention serves to provide a structure and hierarchy to the conversation and can be roughly broken into three sections: (1) Check motivation: why is oral health important? (2) Check brushing technique: how to brush? (3) Identify other barriers to oral health (e.g. healthy eating, influence of family and friends, managing the child's behaviour to enable brushing, remembering to brush): how to overcome these barriers? A variety of paper-based and digital resources for both dental professionals and parents are available to support the conversation (a full implementation guide, including the behaviour change techniques underlying the intervention and the Delivering Better Oral Health guidance covered by the intervention, is available from https://www.dentalcare.co.uk/en-gb/strong-teeth-strong-kids).
Two weeks and 2-3 months following the Strong Teeth intervention, further self-reports of toothbrushing and dietary behaviours and objective measures of PSB will be collected in the parent/child's home. This measurement schedule is shaped by the time taken for habitual behaviours to become established [28].
Recruitment and retention rates will be recorded, as this will be essential to establish the feasibility of undertaking a definitive trial (see Table 1 for the full progression criteria). The design for each group (Group A: NHS dental teams; Group B: parents of children aged 0-5 years old) will now be discussed in turn.
Acceptability and feasibility to dental teams delivering the Strong Teeth intervention to parents of children 0-5 years old

Training
Each dental team member who will deliver the Strong Teeth intervention will attend a training session delivered by members of the research team (PD, KG-B, AB, JP, LR, JO and KT). The session will include evidence-based techniques for undertaking a behaviour change conversation and different approaches to engaging and motivating parents, including those who initially display resistance to behaviour change. Dental team members will then be guided through all the components of the Strong Teeth intervention. To ensure fidelity of the Strong Teeth intervention, dental team members will discuss the practicalities of delivering the intervention in their practice and agree upon a consistent approach to its delivery. Delivery will be reinforced with role play scenarios. An Oral-B representative (Professional Oral Health Territory Manager) will attend the training and provide a short tutorial on how to instruct parents to use the Oral-B electric rechargeable toothbrush with their child. During the study, a study team Dental Nurse (JP) will visit each practice and provide further training, role play and support to maximise the consistency of the Strong Teeth intervention.
Delivery of the Strong Teeth intervention
We will recruit dental teams from 4 to 8 dental practices who will deliver the Strong Teeth intervention as part of the child's dental check-up and/or at a subsequent visit/s. Each dental team member delivering the Strong Teeth intervention will attend the training outlined above. In addition, parents will receive a toothbrush and guidance on how to use it. For children 0-2 years old, this will be a manual Oral-B toothbrush; for children 3-5 years old, a rechargeable Oral-B electric toothbrush will be provided.
Data analysis
The acceptability and feasibility of delivering the Strong Teeth intervention by the dental team will be explored in two ways. First, after delivering each intervention, dental team members will complete a semi-structured diary exploring how the visit went, what oral health barriers were identified, and what Strong Teeth resources were used. Second, having fully completed delivery of the Strong Teeth intervention for all the parents recruited, individual qualitative interviews and/or focus groups with the wider dental practice team will be undertaken. Interviews will be audio recorded, transcribed verbatim, and managed in NVivo. Data will be analysed using framework analysis guided by Ayala and Elder [29] recommendations and the Sekhon, Cartwright [30] theoretical framework of acceptability. This will be coded independently by two researchers, who will then compare codes and resolve any disagreements by discussion [31,32].
Data regarding progression criteria (see Table 1), including recruitment and retention rates will be used to inform the decision to progress to a definitive trial, with the sample characteristics and overall recruitment and retention data being critical to the trial design.
Acceptability of the Strong Teeth intervention for parents of children aged 0-5 years old and other outcome measures

Sample size
Twenty parents of children 0-2 years old and 20 parents of children 3-5 years old will be recruited to the study. The sample size has been derived to satisfy the best practice recommendations of Lancaster, Dodd [33], requiring at least 30 participants, and will provide a 95% confidence interval of (74%, 96%) for a minimum anticipated retention rate of 85%. The data from the current feasibility study will inform and modify the sample size calculation for the subsequent definitive trial, while accepting that the design (probably involving fewer home visits), primary outcome (dental decay) and follow-up (3 years) may differ.
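As a purely illustrative check (not part of the study's statistical analysis plan), the quoted interval can be reproduced with a normal-approximation (Wald) confidence interval for a proportion, assuming the stated n = 40 and an 85% retention rate; the choice of interval method here is an assumption, not taken from the protocol.

```python
from math import sqrt

# Illustrative only: 95% Wald CI for a retention proportion of 0.85 with n = 40.
n = 40          # planned number of parent/child dyads
p = 0.85        # minimum anticipated retention rate
z = 1.96        # two-sided 95% normal quantile

se = sqrt(p * (1 - p) / n)
lower, upper = p - z * se, p + z * se
print(f"95% CI for retention: ({lower:.0%}, {upper:.0%})")  # approx (74%, 96%)
```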
Inclusion criteria:

• Children 0-5 years old about to visit their general dental practice for a dental check-up
• Children attending a general dental practice where the dental team is trained to deliver the Strong Teeth intervention

Exclusion criteria:

• Only one sibling can be recruited per household
• A parent must be present at the baseline home visit to ensure valid consent

Purposive sampling of parents and children will be undertaken to ensure the sample includes participants from different ethnic groups, living in areas of varying levels of deprivation, and with differing severities of dental decay. However, due to resource constraints, only parents who can understand intervention sessions delivered in English will be included.
Acceptability to parents/children of the Strong Teeth intervention
The outcome measures and the measurement schedule will be captured through structured questionnaires at baseline, as well as 2 weeks and 2-3 months after the intervention. In addition, qualitative interviews will take place in the parental home at around 3 months after the intervention. An analytical approach using NVivo and theoretical framework analysis will be undertaken, similar to that described for dental teams above.
Mechanism of action of the Strong Teeth intervention
Qualitative and quantitative data will be used to explore intervention mechanisms, with questionnaires and interview topic guides explicitly developed to include questions mapped onto the Theoretical Domains Framework [31] and to take account of the wider family and community context, as tested and refined through our previous work [11,12,17,21]. The intervention mechanism (i.e. what the active ingredients within the intervention are, and how they are exerting their effect) will be evaluated, and our generic intervention logic model refined [11].
Adoption and maintenance of appropriate oral health behaviours
Changes in self-report and objective measures of PSB behaviours will be collated. The adoption and maintenance of good oral health behaviours will be measured against national guidance: for example, parental supervised toothbrushing undertaken twice a day with the appropriate amount and strength of fluoride toothpaste [3]. The validity of parent/child reports of PSB behaviours will be compared with three proxy objective measures (1-3, listed in the "Design/methods" section). We will formulate a preliminary measurement model and calculate the factor loadings available from it. By generating a standardised model in which the variance of each objective measure is scaled to unity, the associated standardised factor loadings will effectively rank the measures according to the strength of their contributions to PSB. These will be taken as the quantitative assessment for each measure. The same model was used for our HABIT early-phase study and can be seen in Fig. 1 [12]. Other measures of toothbrushing behaviour (such as duration of brushing, amount and strength of fluoride toothpaste used and spitting out toothpaste residue after brushing) will be considered for inclusion in the model.
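To give a feel for how standardised factor loadings rank the three proxy measures, the sketch below uses the closed-form solution for a single latent factor measured by exactly three indicators. The correlation values are invented for illustration; this is a simplified stand-in, not the authors' measurement model or data (the study will fit the full model shown in Fig. 1).

```python
import numpy as np

# Hypothetical pairwise correlations between the three proxy measures of PSB
# (plaque score, filmed brushing index, timer-app record); invented values.
r12, r13, r23 = 0.42, 0.35, 0.30

# For one latent factor with exactly three indicators, r_ij = lambda_i * lambda_j,
# so each standardised loading has a closed form, e.g. lambda_1 = sqrt(r12*r13/r23).
lam = np.sqrt([r12 * r13 / r23, r12 * r23 / r13, r13 * r23 / r12])

for name, loading in zip(["plaque", "video index", "timer app"], lam):
    print(f"{name:12s} standardised loading = {loading:.2f}")
# Ordering the loadings mirrors how the measures would be ranked by the strength
# of their contribution to the latent PSB construct.
```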
The dietary data collected at baseline, 2 weeks and 2-3 months will allow changes in dietary behaviour to be evaluated with respect to the frequency of sugary foods and drinks consumed by children. This quantitative dietary data will be used in conjunction with qualitative findings.
Impact of an Oral-B electric rechargeable toothbrush for children aged 3-5 years old
The impact of providing children aged 3-5 years old (n = 20) with an Oral-B electric rechargeable toothbrush will be evaluated. This will include assessing the acceptability of the electric toothbrush to children and parents. Furthermore, toothbrushing behaviours (frequency of toothbrushing, duration, amount and strength of fluoride toothpaste and spitting out toothpaste residue after brushing) will be explored in the home setting during data collection visits (at 2 weeks and 2-3 months post intervention) and with parents who agree to participate in the qualitative interviews (please see Fig. 2 for a detailed flowchart of the recruitment and data collection process).
Discussion
This early phase study is designed to evaluate the Strong Teeth complex oral health intervention and inform the design of a definitive study to explore the impact of the intervention on dental caries in children. It will provide invaluable information regarding the acceptability, feasibility and impact of the intervention on both dental teams and parents of children aged 0-5 years old. Specifically, it will describe the capabilities and skills of dental teams and outline what training and support is needed for the successful delivery of the Strong Teeth intervention in a general dental practice setting. It will provide deeper insight into the internal (e.g. motivation) and external (e.g. cultural, societal, interactional, contextual) factors underlying parental oral health behaviours. Furthermore, the study will evaluate whether and how the Strong Teeth intervention shapes oral health behaviour changes and characterise the impact of providing children aged 3-5 years old with an Oral-B electric rechargeable toothbrush.
In conjunction with our HABIT early phase study exploring the feasibility and acceptability of an oral health intervention delivered by health visitors to parents of children aged 9-12 months old in the UK [12], this study will continue the important work in addressing the lack of objective measures of PSB adoption. Whilst there are robust measures of dental caries, these require long-term follow-up (a minimum of 3 years) and are consequently more expensive and at high risk of attrition. Whilst short-term parental self-reports of PSB exist, these are at high risk of social desirability bias [34]. The size of this bias and the lack of objective measures that robustly characterise PSB behaviour is a key evidence gap that will be further addressed in this study. Whilst our earlier HABIT study focused on children aged 9-15 months, this study will examine the acceptability, feasibility and utility of these measures in older children aged 0-5 years old.

Fig. 1 The measurement (top model) and growth (bottom model) models for the three proxy objective measures of parental supervised toothbrushing (PSB). Published with permission from Eskyte et al. [12]
"year": 2019,
"sha1": "68db2473a22dc3fcab3c29cc9d67a0cc3fd2abc5",
"oa_license": "CCBY",
"oa_url": "https://pilotfeasibilitystudies.biomedcentral.com/track/pdf/10.1186/s40814-019-0483-9",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "68db2473a22dc3fcab3c29cc9d67a0cc3fd2abc5",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
A novel biomarker TERTmRNA is applicable for early detection of hepatoma
Background
We previously reported a highly sensitive method for serum human telomerase reverse transcriptase (hTERT) mRNA for hepatocellular carcinoma (HCC). α-fetoprotein (AFP) and des-γ-carboxy prothrombin (DCP) are good markers for HCC. In this study, we verified the significance of hTERTmRNA in a large-scale, multi-centered trial, collating quantified values with clinical course.

Methods
In 638 subjects, including 303 patients with HCC, 89 with chronic hepatitis (CH), 45 with liver cirrhosis (LC) and 201 healthy individuals, we quantified serum hTERTmRNA using real-time RT-PCR. We examined its sensitivity and specificity in HCC diagnosis, its clinical significance, ROC curve analysis in comparison with other tumor markers, and its correlations with clinical parameters using Pearson correlation and multivariate analyses. Furthermore, we performed a prospective and comparative study to observe the change of biomarkers, including hTERTmRNA, in HCC patients receiving anti-cancer therapies.

Results
hTERTmRNA was demonstrated to be independently correlated with the clinical parameters tumor size and tumor differentiation (P < 0.001, each). The sensitivity/specificity of hTERTmRNA in HCC diagnosis was 90.2%/85.4%. hTERTmRNA proved to be superior to AFP, AFP-L3, and DCP in the diagnosis and underwent an indisputable change in response to therapy. The detection rate of small HCC by hTERTmRNA was superior to the other markers.

Conclusions
hTERTmRNA is superior to conventional tumor markers in the diagnosis and detection of recurrence of HCC at an early stage.
Background
Since the discovery of circulating nucleic acids (CNAs) in plasma in 1948, many diagnostic applications have emerged. Recently, CNAs rather than proteins have appeared on the scene of practical diagnostic assays, reflecting the finding that cell-free CNAs in the plasma/serum of cancer patients have the characteristics of tumor-derived nucleic acids. In addition to DNA derived from tumor cells [1][2][3][4], a recent development in this new field is the finding of tumor-related RNA in the plasma/serum of cancer patients [5]. These include tyrosine kinase mRNA [6], telomerase components [7,8], the mRNAs encoded by different tumor-related genes [9][10][11][12][13], and viral mRNA [14]. In one study, two telomerase markers in breast cancer yielded a 44% positive rate [7]. Nevertheless, telomerase RNA seems to be a promising marker because it can be found even in the serum of patients with small, undifferentiated breast cancers without any metastatic lesions. Dasi et al. showed that circulating telomerase RNA is a sensitive marker, using real-time reverse transcription-PCR (RT-PCR) [8].
HCC ranks high among the most common and fatal malignancies associated with hepatitis B virus (HBV) and hepatitis C virus (HCV) infection [5]. Although HCC patients receive available medical treatments such as transcatheter arterial chemoembolization/embolization (TACE/TAE), radiofrequency ablation (RFA), and surgery for primary tumors, intrahepatic and extrahepatic recurrence frequently limits patients' survival [6]. Although modalities such as ultrasonography (US) and conventional tumor markers such as α-fetoprotein-L3 (AFP-L3) and DCP are widely used and important for HCC detection in clinical practice [7], they still do not provide an entirely satisfactory solution for detecting HCC at an early stage. Since HCC has recently been classified as a complex disease with a wide range of risk factors, and many cellular signaling pathways have been reported to be involved in hepatocarcinogenesis, a novel biomarker for HCC is required [21]. We previously reported that measurement of serum hTERTmRNA by a real-time RT-PCR method was sensitive in detecting tumor-derived hTERTmRNA even in HCC patients whose AFP levels were low [9], and was also useful for other malignancies such as non-small cell lung cancer, ovarian cancer, and gastric cancer [22][23][24]. In this large-scale study, which includes follow-up cases, we focused on HCC of all the malignancies and assessed the clinical significance of hTERTmRNA measurement in HCC diagnosis and monitoring of the clinical course.
Patients and Sample Collection
Four hundred and thirty-seven consecutive patients (303 patients with HCC, 89 with CH, and 45 with LC), who were admitted to Tottori University-related hospitals, Osaka Red Cross Hospital, and Fukuoka University Chikushi Hospital between November 2002 and December 2006, were enrolled in this study. All the HCC patients had LC or CH as the underlying liver disease. The mean ages of patients with HCC, LC, and CH were 65, 66, and 61 years, respectively. One hundred and sixty-seven patients were infected with HCV, 97 with HBV, 24 with both viruses and 15 had no viral markers. The patients were diagnosed by blood chemistry, US, computed tomography (CT), AFP and/or biopsy under US. The clinicopathological findings (age, gender, etiology, underlying liver disease (adjacent lesion), Pugh score, Child classification, total bilirubin (TB), albumin (Alb), alanine aminotransferase (ALT), AFP, AFP-L3, DCP, HCV titer, HCV subtype, tumor number, tumor size, differentiation degree of tumor, and presence of metastasis) were evaluated (Figure 1). HCC was diagnosed according to the AASLD guidelines and the differentiation of HCC was diagnosed by liver biopsy. Two hundred and one healthy individuals, including 144 females (24-87 years old; mean age 57 years), served as controls. Informed consent was obtained from each patient and the study protocols followed the ethical guidelines of the 1975 Declaration of Helsinki and were approved by the human research committee of Tottori University. The therapies for HCC included TAE, transcatheter arterial infusion (TAI), percutaneous ethanol injection therapy (PEIT), and RFA. For follow-up patients, blood samples were taken basically every two months.
RNA extraction and Real-time quantitative RT-PCR
Harvesting of serum samples was performed as previously described [9]. RNA was extracted from serum with DNase treatment as reported previously [4,9]. The quantitative RT-PCR for hTERT was performed as described previously [5,10]. The RT-PCR conditions were an initial incubation at 50°C for 30 min, followed by a 12-min incubation at 95°C, then 50 cycles of 95°C (0 s), 55°C (10 s), and 72°C (15 s), and a 20-s melting step at 40°C. The dynamic range of the real-time PCR analysis for hTERTmRNA extended down to approximately 5 copies in this assay, and we were able to exclude the possibility of false negativity in serum samples from patients with CH, LC and controls. The PCR yielded products of 143 bp for hTERT (data not shown). The RT-PCR assay was repeated twice and the quantification was confirmed using a LightCycler (Roche, Basel, Switzerland) with good reproducibility.
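For readers unfamiliar with absolute quantification by real-time PCR, the sketch below illustrates the usual standard-curve calculation (threshold cycle regressed on log copy number). The Ct values are invented, and this is a generic illustration rather than the authors' LightCycler analysis.

```python
import numpy as np

# Hypothetical standard curve: Ct values for RNA standards of known copy number.
copies = np.array([1e1, 1e2, 1e3, 1e4, 1e5])
ct = np.array([38.1, 34.8, 31.4, 28.0, 24.7])

# Fit Ct = slope * log10(copies) + intercept.
slope, intercept = np.polyfit(np.log10(copies), ct, 1)
efficiency = 10 ** (-1 / slope) - 1  # amplification efficiency implied by the slope

def copies_from_ct(sample_ct: float) -> float:
    """Read a sample's copy number off the fitted standard curve."""
    return 10 ** ((sample_ct - intercept) / slope)

print(f"slope = {slope:.2f}, efficiency = {efficiency:.0%}")
print(f"sample with Ct = 30.0 -> {copies_from_ct(30.0):,.0f} copies")
```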
hTERTmRNA during the treatment and detection of small HCC
We examined the therapeutic effectiveness of hTERT-mRNA during the clinical course. Serum hTERTmRNA was measured before and 7 days after TAE in 16 HCC patients. In comparison with AFPmRNA, the half-life of hTERTmRNA was examined. By monitoring gene expression in serum up to 6 months after the beginning of therapy such as TAE, TAI, RFA, PEIT, surgical treatment, the effect of therapies were estimated in 20 patients. Furthermore, we examined hTERTmRNA expression and level of other conventional tumor markers after they were categorized by the tumor size (less than 10 mm, 11-20 mm, 21-30 mm, more than 30 mm).
Statistical analysis
Multivariate analysis was performed using SPSS 13.0 (SPSS Corp., Tokyo, Japan). Stratified categories in each clinical parameter were evaluated by one-way ANOVA and multivariate analysis using a logistic regression analysis model. To assess the accuracy of the diagnostic tests, the matched data sets (chronic liver disease patients and HCC patients) regarding AFP, AFP-L3, DCP, and hTERTmRNA were analyzed using receiver operator characteristic (ROC) curve analysis. The correlation of hTERTmRNA between HCC tissue and serum was analyzed using both the paired t test and Spearman's test. The detection rates of HCC in comparison with tumor size were evaluated by the Friedman test.
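As a generic illustration of ROC-based cut-off selection of the kind used here, the following sketch chooses the threshold that maximises sensitivity while keeping specificity at or above 85% on synthetic log-transformed marker values. The data are simulated, so the numbers have no relation to the study; the cut-off of 9,332 copies/0.2 ml reported in the Results was derived from the actual patient series.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)

# Synthetic log10 marker values for controls and HCC cases (invented distributions).
controls = rng.normal(3.2, 0.6, 300)
cases = rng.normal(4.3, 0.6, 300)
scores = np.concatenate([controls, cases])
labels = np.concatenate([np.zeros(300), np.ones(300)])

fpr, tpr, thresholds = roc_curve(labels, scores)
print("AUC:", round(roc_auc_score(labels, scores), 3))

# Pick the cut-off that maximises sensitivity subject to specificity >= 0.85,
# mirroring the strategy of stressing the higher specificity.
allowed = (1 - fpr) >= 0.85
best = np.argmax(tpr * allowed)
print(f"cut-off = {thresholds[best]:.2f} log10 copies, "
      f"sensitivity = {tpr[best]:.2f}, specificity = {1 - fpr[best]:.2f}")
```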
Results

RNA extraction and Real-time quantitative RT-PCR
In each quantitative assay, a strong linear relation was demonstrated between copy number and PCR cycles using RNA controls of known concentration (r² > 0.99; data not shown). hTERTmRNA expression showed stepwise upregulation with disease progression, and the quantification was significantly higher in HCC than in LC, CH and healthy individuals (P < 0.001, P < 0.01 and P < 0.001, respectively; Figure 2A). ROC curve analyses showed that the sensitivity/specificity of hTERTmRNA for HCC were 90.2%/85.4% (Figure 2B). The optimal cut-off value for hTERTmRNA expression was predicted as 9,332 copies/0.2 ml by stressing the higher specificity. Forty-six (15%) of the HCC patients, whose AFP, AFP-L3, and DCP were within normal limits, had log hTERTmRNA values of 4.23 ± 0.32, and 20 of these 46 patients were positive by this assay.
Multivariate analysis showed that hTERTmRNA was associated with tumor size and differentiation degree of tumor (P < 0.001, each; Figures 1 and 3). However, hTERTmRNA was not associated with age, gender, etiology, background lesion or number of tumors. On the other hand, AFP was related to tumor size and differentiation (P = 0.008 and P = 0.0199), AFP-L3 was related to number of tumors and differentiation degree (P = 0.003 and P = 0.001), and DCP was associated only with number of tumors (P = 0.029). By Pearson's correlation test, serum hTERTmRNA was significantly associated with tumor size and number of tumors (P < 0.033 and P < 0.003, respectively; Table 1). Importantly, among the tumor markers, hTERTmRNA was related only to DCP (P = 0.03).
ROC curve analyses showed that the sensitivity/specificity of hTERTmRNA for HCC were 90.2%/85.4% (Table 2). The sensitivity/specificity of AFP, AFP-L3, and DCP were 76.6/66.2, 60.5/88.7, and 83.4/80.3, respectively. Thus, hTERTmRNA was superior to the other markers, especially in sensitivity. The positive predictive value (PPV)/negative predictive value (NPV) of hTERTmRNA were 83.0/85.9. On the other hand, the PPV/NPV for AFP, AFP-L3, and DCP were 74.6/67.7, 59.6/92.2, and 78.4/73.5, respectively. Consequently, hTERTmRNA was superior to the other markers in the diagnosis of HCC. Combination of hTERTmRNA with the AFP level improved the sensitivity/specificity up to 96.0%/87.2%. ROC curve analysis categorized by virus was also examined, and the sensitivity/specificity in HBV-infected cases was similar to that in HCV-infected cases (Additional file 1). hTERTmRNA and the other markers in LC were not statistically significantly different from those in CH.
Estimation of the therapeutic effect and the possibility of early HCC detection by hTERTmRNA in comparison with other biomarkers
To examine the significance of hTERTmRNA before and after TAE, serum hTERTmRNA was measured before and 7 days after TAE in 16 HCC patients (Figure 4A). As a result, hTERTmRNA significantly decreased after TAE (P = 0.018), suggesting that changes in hTERTmRNA are indicative of therapeutic effects on HCC. Comparison of the follow-up data for hTERTmRNA and AFP (Figure 4B, C) showed that the half-life of hTERTmRNA was shorter than that of AFP.
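For orientation only, an apparent half-life can be estimated from two serum measurements under a simple first-order decay assumption; the values below are invented to show the arithmetic and do not reproduce the data in Figure 4.

```python
from math import log

def apparent_half_life(c_pre: float, c_post: float, days: float) -> float:
    """Apparent half-life assuming first-order decay between two samples."""
    return days * log(2) / log(c_pre / c_post)

# Hypothetical pre- and 7-day post-treatment serum levels (arbitrary units).
print(f"hTERTmRNA: t1/2 = {apparent_half_life(50_000, 2_000, 7):.1f} days")
print(f"AFP:       t1/2 = {apparent_half_life(400, 220, 7):.1f} days")
```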
To clarify the significance of hTERTmRNA in monitoring the effect of therapies in comparison with other biomarkers, two representative cases are depicted in Figure 5. The quantification of hTERTmRNA was performed before, and 2 and 5 months after, RFA in a 73-year-old male patient whose HCC was a single 21-mm lesion (Figure 5A). hTERTmRNA changed similarly to AFP, AFP-L3, and DCP, suggesting that hTERTmRNA is useful for monitoring the clinical course of HCC. In a 78-year-old female patient whose HCC was a single 38-mm lesion, a surgical operation was performed (Figure 5B). The values of AFP, DCP, and hTERTmRNA were measured before, and 2 and 7 months after, the operation. The operation was performed successfully in this patient; however, recurrence was found by dynamic CT at 7 months after the operation. Although neither AFP nor DCP detected the recurrence, hTERTmRNA did. In all the cases in which hTERTmRNA detected recurrence at an earlier stage, no imaging modality could detect it at the same time, but once HCC could be found on imaging such as US, CT, or MRI, the other markers began to rise.
Finally, we examined the relationship between the positive rates of the biomarkers and tumor size. The positive rate of hTERTmRNA was higher than that of the other markers in each category of tumor size (6-10 mm, 11-20 mm, 21-30 mm, over 31 mm) by the Friedman test (P = 0.017) (Figure 3). However, the positivity of hTERTmRNA expression tended to be slightly reduced in tumors with diameters exceeding 51 mm (5.2 ± 1.9 for 56 patients with HCC of 31-50 mm, 5.0 ± 1.8 for 43 patients with HCC over 51 mm; mean ± S.D.) (Additional file 2). A dot plot of the correlation of hTERTmRNA quantification with tumor differentiation is shown in Additional file 3. In a 6-mm HCC case, no marker other than hTERTmRNA was elevated and only abdominal US detected evidence of HCC (Figure 6(a) A, B).
Immunohistochemistry
Immunohistochemical analysis showed that Ki-67 positivity was observed in the nuclei of cancer cells ( Figure 6(b) A). hTERT was observed in both the nuclei and cytoplasm of cancer cells (Figure 6(b) B). Some TUNEL-positive cells were present in cancerous lesions, however the prevalence was low (Figure 6(b) C). hTERT expression was significantly associated with the labeling index of Ki-67 (P = 0.023). When the labeling indices of Ki-67, hTERT and TUNEL were compared with the differentiation degree of HCC, both hTERT and Ki-67 were higher in poorly differentiated HCC than in well and moderately differentiated HCC (Figure 6(b) D).
Discussion
Since HCC has recently been classified as a complex disease with a wide range of risk factors, and many cellular signaling pathways have been reported to be involved in hepatocarcinogenesis, a novel biomarker for HCC is required [21]. Since an epoch-making assay to detect telomerase activity was established [11], telomerase has been examined in many kinds of cancers, precancerous lesions and normal tissues using the telomeric repeat amplification protocol, and its correlation with telomere length has been investigated [29,30]. Although telomerase was an unprecedented candidate tumor marker owing to its specificity to cancer, it has remained clinically inapplicable because telomerase expression has not been detected stably in body fluids [12]. In serum, hTERTmRNA derived from cancer cells was thought to be undetectable because it is destabilised by RNase in blood. However, because RNAs in serum are unexpectedly stable within 24 hrs after drawing blood, owing to their particle-associated complex structure [13,14], they can generally be detected even in RNase-rich blood. Indeed, hTERTmRNA can be detected in serum from breast cancer patients, with a maximum sensitivity and specificity of at most 40% and 100%, respectively [4]. The sensitivity in patients with HCC rose to 89.7% in the semi-quantitative assay, and thus compared favorably with previous findings in which the sensitivity and specificity of AFPmRNA were 69% and 50% for HCC, respectively [31]. Besides, with respect to HCC detection, AFPmRNA was superior to the AFP level used routinely in the clinic [32]. In the present study, we refined the method for detecting these nucleotides in blood, including RNA extraction with centrifugation steps at less than 1500 × g to remove cellular proteins from serum, and a primer set that detects hTERTmRNA more efficiently than the primers in previous reports (data not shown). We previously reported that hTERT expression was very faint in serum from normal individuals, indicating that lymphocytes and circulating normal cells express very low levels of hTERTmRNA [9]. Because hTERTmRNA in lymphocytes is very low, elevated hTERTmRNA levels in serum may mean that hTERTmRNA is derived from cancer cells. Since we could detect negligible amounts of lymphocyte markers after three steps of centrifugation of blood samples, the RNA extraction procedure seemed to remove lymphocytes effectively. In addition, normal or damaged hepatocytes express negligible amounts of hTERT [33,34]. Furthermore, we previously showed a significant correlation of hTERTmRNA expression between tumor tissue and serum [32]. These data suggest that hTERTmRNA detected in serum is derived from tumor cells.
Previously, we reported that qualitative analysis of serum hTERTmRNA was superior to AFP for the purpose of early detection of HCC, because hTERTmRNA was detectable in HCC patients with normal AFP levels [9]. AFP is widely used as a reliable marker of HCC, not in the earlier stage but in the advanced stage [35]. In this study, however, AFP was not able to distinguish HCC from non-cancerous liver diseases, nor was hTERTmRNA correlated with the AFP level (P = 0.201), suggesting that quantitative analysis of serum hTERTmRNA is much more sensitive for HCC diagnosis, even at the early stage. Because the introduction of abdominal (enhanced) US, CT, and MRI into clinical practice has enabled us to detect smaller HCC [36], the sensitivity of AFP in the early detection of HCC has become less than 70%. Unlike the AFP level, AFPmRNA was significantly correlated with hTERTmRNA (P < 0.001) and more sensitive than AFP. In the present study, we measured AFP-L3, since AFP-L3 has been reported to be a more HCC-specific marker than AFP [37]. Indeed, the level of AFP-L3 correlated significantly with differentiation and number of HCC, whereas that of AFP correlated with tumor size and differentiation.
In the present study, of 303 HCC patients, 24 were below the calculated cut-off value (9,332 copies; 3.97 as a logarithmic value) for serum hTERTmRNA and thus negative. Although the reason why hTERTmRNA was negative in these patients is not clear, eleven of the 24 hTERTmRNA-negative HCC patients had decompensated liver cirrhosis as the underlying disease. It has been reported that decompensated liver cirrhosis is associated with higher levels of serum TGF-β, which promotes apoptosis of immortalized hepatocytes; in these cases, elevated TGF-β may stimulate apoptosis, resulting in a reduction of hTERTmRNA [34,38,39]. hTERT-negative cases had no common characteristics with respect to age, gender, etiology, Child classification, etc., other than tumor size, ALT, and surrounding lesion. In 23 cases (95.8%), ALT was within 1.5-fold of the normal limit. In 17 cases (70.8%), the surrounding lesion was LC, including decompensated cirrhosis. Tumor size in 12 cases (50%) was over 30 mm, reflecting the biological features of the cancer itself, as described in Norton-Simon models of tumor growth [40]. AFP and DCP were positive in 16 (66.7%) and 11 (45.8%) of these cases, respectively, suggesting that combined use of these markers contributes to improving the diagnostic specificity.
Thus, hTERTmRNA is not only improved in both sensitivity and specificity but also closely correlated with tumor size and number at an early stage of HCC. Since HCC, as a biological characteristic, repeatedly recurs polyclonally after any treatment, the measurement of serum hTERTmRNA makes it possible to recognise recurrence or therapeutic effect in detail, as well as being useful for one-point diagnosis. In this respect, follow-up studies after the treatment of HCC are needed [24]. hTERTmRNA expression was closely associated with well to moderate differentiation of HCC and was enhanced with proliferation. We should clarify which alterations of other molecules during cancer progression allow serum hTERTmRNA to be detected [41][42][43]. In less differentiated HCC, tumor cells are proliferating, hTERTmRNA tends to correlate with the differentiation degree, and apoptotic events do not appear to account for the serum detection of cancer cell-derived mRNAs (Figure 6). Nakashio et al. previously reported a significant correlation of HCC differentiation with telomerase expression [44]. The results of the present study confirmed their findings. hTERTmRNA showed higher sensitivity and specificity compared with AFPmRNA in HCC patients. However, in liver diseases other than HCC, hTERTmRNA was not correlated with AFPmRNA. The higher specificity of hTERTmRNA for HCC may be related to the fact that AFPmRNA is produced in both HCC cells and injured hepatocytes, whereas hTERT is produced mainly in HCC cells. Furthermore, we could detect serum hTERTmRNA expression even in HCC patients with moderately differentiated tumors of less than 10 mm, indicating that hTERT is upregulated during the rapid proliferation of tumors at the early phase of oncogenesis and dedifferentiation.
Waguri et al. showed that circulating cancer cells derived from the original HCC tissue exist in blood and that hTERTmRNA can be detected in blood [45]. The present study suggests that quantification of hTERTmRNA in serum has diagnostic implications for HCC. If apoptosis of cancer cells does not account for the early detection of HCC using serum mRNA, the essential mechanism may be immunoreactions [46]. The development of microvessels may also be involved in this step [47]. We will evaluate the correlation of prognosis with hTERTmRNA and the usefulness of hTERTmRNA in other cancers by comparison with other tumor markers [48], and will study its usefulness in inflammatory diseases in which cellular reactions are active [49]. This method depends on RNA stability at each step of RNA purification, storage, and quantification. In light of its superior positivity compared with other markers, the assay will be applied for clinical use under strict conditions, because serum RNA must be kept as it is in blood and degradation of RNA quality must be avoided. We are now improving RNA stability and PCR conditions to improve the cost/benefit of this assay. In the future, another large-scale study will be required to confirm our results for monitoring HCC and to assess the feasibility of its detection even at the primary care level.
Conclusions
In sum, our results support the suggestion that quantification of circulating hTERTmRNA expression is clinically useful for the early detection of HCC. Furthermore, hTERTmRNA is superior to conventional tumor markers in the diagnosis and detection of recurrence of HCC at an early stage.
Competing interests
The authors declare that they have no competing interests.
Authors' contributions

YO analyzed biomedical data and provided blood samples as main researcher in Osaka Red Cross Hospital. MN analyzed biomedical data in Kinki University. MK analyzed biomedical data and provided blood samples as main researcher in Kinki University. KY analyzed biomedical data and provided blood samples in San-in Labor Welfare Hospital. TK analyzed clinical data and performed the practical analysis in San-in Labor Welfare Hospital. KO analyzed HCC imaging data and performed the analysis in Saiseikai Gotsu General Hospital. YK was in charge of the case study in Saiseikai Gotsu General Hospital. SM analyzed biomedical data and provided blood samples as main researcher in Saiseikai Gotsu General Hospital. EN was in charge of the case study and biomedical analysis in Saiseikai Gotsu General Hospital. YH analyzed clinical and biomedical data comprehensively as main researcher in Saiseikai Gotsu General Hospital. MK analyzed biomedical data and provided blood samples as main researcher in Matsue City Hospital. SS analyzed biomedical data and provided blood samples as main researcher in Fukuoka University Chikushi Hospital. YH performed biomedical and clinical analysis of surgical cases in Tottori University. HK analyzed biomedical data and provided blood samples as chief researcher in San-in Labor Welfare Hospital. JH provided the environment to analyze the data comprehensively. All authors read and approved the final manuscript.
"year": 2010,
"sha1": "7cce300a1d6c9343f80af70d67665b204ccd1992",
"oa_license": "CCBY",
"oa_url": "https://bmcgastroenterol.biomedcentral.com/track/pdf/10.1186/1471-230X-10-46",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "9b66f8e2f30f2d59002d1a511502355c4265206c",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
An asymptotic Robin inequality
The conjectured Robin inequality for an integer $n>7!$ is $\sigma(n)<e^\gamma n \log \log n,$ where $\gamma$ denotes Euler constant, and $\sigma(n)=\sum_{d | n} d $. Robin proved that this conjecture is equivalent to Riemann hypothesis (RH). Writing $D(n)=e^\gamma n \log \log n-\sigma(n),$ and $d(n)=\frac{D(n)}{n},$ we prove unconditionally that $\liminf_{n \rightarrow \infty} d(n)=0.$ The main ingredients of the proof are an estimate for Chebyshev summatory function, and an effective version of Mertens third theorem due to Rosser and Schoenfeld. A new criterion for RH depending solely on $\liminf_{n \rightarrow \infty}D(n)$ is derived.
Introduction
1.1. History. The conjectured Robin inequality for an integer $n > 7! = 5040$ is $\sigma(n) < e^\gamma n \log \log n$, where $\gamma \approx 0.577\cdots$ denotes Euler's constant, and $\sigma$ is the sum-of-divisors function $\sigma(n) = \sum_{d \mid n} d$. This inequality has been shown to hold unconditionally for families of integers that are

• odd > 9 [4]
• square-free > 30 [4]
• a sum of two squares and > 720 [2]
• not divisible by the fifth power of a prime [4]
• not divisible by the seventh power of a prime [11]
• not divisible by the eleventh power of a prime [3]

Ramanujan showed that the Riemann Hypothesis implied that conjecture [8]. Robin proved the converse statement [9], thus making that conjecture a criterion for RH. This criterion was made popular by [6], which derives an alternate criterion involving Harmonic numbers.
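As a concrete illustration (not part of the paper), the following Python sketch evaluates $\sigma(n)$ against the bound $e^\gamma n \log \log n$ at a few sample points, showing the failure at $n = 5040$ and the inequality holding at some larger test values.

```python
from math import exp, log

GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

def sigma(n: int) -> int:
    """Sum of divisors of n by trial division (adequate for small n)."""
    s, d = 0, 1
    while d * d <= n:
        if n % d == 0:
            s += d + (n // d if d != n // d else 0)
        d += 1
    return s

def robin_bound(n: int) -> float:
    return exp(GAMMA) * n * log(log(n))

# Robin's inequality sigma(n) < e^gamma * n * log(log(n)) fails at n = 5040,
# the largest known exception, and holds at these larger test values.
for n in (5040, 10080, 55440, 720720):
    print(n, sigma(n), round(robin_bound(n)), sigma(n) < robin_bound(n))
```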
1.2. Contribution. Denote the difference between the right-hand side and the left-hand side of Robin's inequality by $D(n) = e^\gamma n \log \log n - \sigma(n)$. Let $d(n) = \frac{D(n)}{n}$. The main result of this note is

Theorem 1. For large $n$ we have $\liminf_{n \rightarrow \infty} d(n) = 0$.
Its proof will depend on the following intermediate result.
Theorem 2. For large $n$ the quantity $\liminf_{n \rightarrow \infty} d(n)$ is finite and $\geq 0$.

The main ingredients of the proof of the latter are a combinatorial inequality between arithmetic functions (Lemma 1), an effective version of Mertens' third theorem due to Rosser and Schoenfeld (Lemma 2), and an asymptotic estimate of Chebyshev's first summatory function (Lemma 4). Also needed is a result of Ramanujan of 1915, first published in 1997 [8].
MSC 2010 Classification: Primary 11A25, Secondary 11B75

We also study the asymptotic behavior of $D(n)$. Recall that a number is Colossally Abundant (CA) if it is a left-to-right maximum of the function $x \mapsto \frac{\sigma(x)}{x^{1+\epsilon}}$ with domain the integers, where $\epsilon$ is a real parameter. Thus $n$ is CA iff $m < n \Rightarrow \frac{\sigma(m)}{m^{1+\epsilon}} < \frac{\sigma(n)}{n^{1+\epsilon}}$.

Theorem 3. We have the following limits when $n$ ranges over Colossally Abundant numbers.
• If RH is false then $\liminf_{n \rightarrow \infty} D(n) = -\infty$.
• If RH is true then $\lim_{n \rightarrow \infty} D(n) = \infty$.

This result constitutes a new criterion for RH. Its proof will depend, for the RH-false part, on an oscillation theorem of Robin [9], modelled after and depending upon an oscillation theorem of Nicolas [7] for the Euler totient function. For the RH-true case, we use a result of Ramanujan from 1915, first published in 1997 [8].
1.3. Organization. The material is arranged as follows. The next section contains the proof of Theorem 1, Section 3 that of Theorem 2, and Section 4 that of Theorem 3. Section 5 concludes and gives some open problems.
Proof of Theorem 1
The result will follow from Theorem 2 if we exhibit a sequence of integers $n_m$ with $\lim_{m \rightarrow \infty} D(n_m) = 0$. We follow the approach of [4, §4, proof of Lemma 4.1, 1), p. 366]. Consider $n$ of the shape $n = \prod_{p \leq x} p^{t-1}$, with $t > 1$ an integer and $x$ real, both going to infinity and to be specified later. By this reference we have an expression for $\frac{\sigma(n)}{n}$, with $\zeta$ the Riemann zeta function. The error term can be made effective as follows: [10, (3.28), (3.30)] gives two bounds, and from the Euler product of $\zeta$ and [4, Lemma 6.4] we derive two more. Combining these four bounds together and reporting, we get an estimate for $d(n)$. To achieve $d(n) \rightarrow 0$, we need both $\log \log n \ll 2^t$ and $\log \log n \ll \frac{x}{t-1}$. This is ensured if we take $x = p_m$ and $t = m + 1$. In that case we have $\log \log n = \log m + \log \theta(p_m)$. By Lemma 4 below, $\log \theta(p_m) \sim \log p_m$. On the other hand, $p_m \sim m \log m$, as is well known (see e.g. [5]). Combining the last two estimates we see that $\log \log n \sim 2 \log m \ll 2^m$. Similarly, $\log \log n \ll \frac{p_m}{m}$.
Proof of Theorem 2
If $\liminf_{n \rightarrow \infty} d(n) = \infty$ then $\lim_{n \rightarrow \infty} D(n) = \infty$, and by Robin's criterion RH holds. We know then by [8, p. 25] that the sequence $d(n) \sqrt{\log n}$ admits finite upper and lower limits when $n$ ranges over CA numbers (see §4), which is a contradiction.
Assume therefore that $\liminf_{n\to\infty} d(n)$ is finite and let us show that it is $\ge 0$. For any integer $n$, write its decomposition into prime powers as $n = \prod_{i=1}^{m} q_i^{a_i}$, where the $q_i$'s are prime numbers indexed in increasing order and the $a_i$'s are positive integers. Denote by $p_i$ the $i$-th prime number and, for any integer $n$, let $\bar{n} = \prod_{i=1}^{m} p_i^{a_i}$. Note that, by definition, for each $i = 1, 2, \ldots, m$ we have $q_i \ge p_i$ and, therefore, $n \ge \bar{n}$. With this notation, $\sigma(n)/n$ stays bounded in terms of $m$; thus, if $m$ is bounded and $n \to \infty$, we see that $d(n) \to \infty$. We can therefore assume, when considering $\liminf_{n\to\infty} d(n)$, that $m \to \infty$. We prepare for the proof with a series of lemmas.

Lemma 1. For any integer $n \ge 1$, we have $d(n) \ge d(\bar{n})$.
Proof. Write $d(n) = f_1(n) - f_2(n)$, with $f_1(n) = e^{\gamma}\log\log n$ and $f_2(n) = \sigma(n)/n$. The monotonicity of the logarithm and $n \ge \bar{n}$ yield $f_1(n) \ge f_1(\bar{n})$. Writing $f_2(n) = \prod_{i=1}^{m} g(a_i, q_i)$ with $g(a, x) = \sigma(x^a)/x^a$, we see that, for fixed $a$, the function $x \mapsto g(a, x)$ is nonincreasing in $x$. This implies that $g(a_i, q_i) \le g(a_i, p_i)$ for each $i = 1, 2, \ldots, m$ and therefore, multiplying $m$ inequalities between nonnegative numbers, that $f_2(n) \le f_2(\bar{n})$. The result then follows from $d(n) = f_1(n) - f_2(n)$.
Lemma 2. For any $n$ large enough we have $\frac{\sigma(n)}{n} < e^{\gamma} \log p_m \left(1 + \frac{1}{\log^2 p_m}\right)$.
Proof. Note that, with the notation of the proof of Lemma 1, we have $g(a, x) \le \frac{x}{x-1}$ for $x \ge 2$ and $a \ge 1$, and therefore $\frac{\sigma(n)}{n} \le \prod_{i=1}^{m} \frac{q_i}{q_i - 1} \le \prod_{i=1}^{m} \frac{p_i}{p_i - 1}$. The result then follows from [10, Th. 8, (39)].
Lemma 3. With the above notation, $\log \bar{n} \ge \theta(p_m)$, where $\theta$ is Chebyshev's first summatory function.

Proof. By definition, $\bar{n} = \prod_{i=1}^{m} p_i^{a_i} \ge \prod_{i=1}^{m} p_i$, so $\log \bar{n} \ge \sum_{i=1}^{m} \log p_i = \theta(p_m)$.

Lemma 4. A classical result, related to the Prime Number Theorem, is $\theta(x) \sim x$ as $x \to \infty$.

Proof. An effective version is in [10, Th. 4]. See for instance [5, Th. 4.7] for a sharper error term in $O(x \exp(-\sqrt{\log x}/15))$.
We are now ready for the proof of Theorem 2.
Proof. By Lemma 1, $d(n) \ge d(\bar{n})$. By Lemma 2 we have an upper bound on $\sigma(\bar{n})/\bar{n}$ in terms of $\log p_m$, and by Lemmas 3 and 4 we have a matching lower bound on $e^{\gamma}\log\log \bar{n}$, where the last step uses $\log(1+u) \sim u$ as $u \to 0$. Adding inequalities (1) and (3), the terms in $\log p_m$ cancel and we obtain an inequality for $d(\bar{n})$ whose right-hand side goes to zero for large $n$; hence $\liminf_{n\to\infty} d(n) \ge 0$.
Proof of Theorem 3
Recall the standard notation for oscillation theorems [5, p. 194]. If $f, g$ are two real-valued functions of a real variable $x$, with $g > 0$, we write $f(x) = \Omega_+(g(x))$ if $\limsup_{x\to\infty} f(x)/g(x) > 0$, $f(x) = \Omega_-(g(x))$ if $\liminf_{x\to\infty} f(x)/g(x) < 0$, and $f(x) = \Omega_\pm(g(x))$ if both hold. We refer the reader to [9] for the definition of Colossally Abundant (CA) numbers. By [9, Proposition, §4], if RH is false then, for CA numbers, an oscillation estimate holds for $D(n)$ involving a power $n^b$ for some $b \in (0, 1)$. This implies, using the infinitude of CA numbers [9], that $\liminf_{n\to\infty} D(n) = -\infty$. If RH holds, then by [8, p. 25] the sequence $\frac{D(n)\sqrt{\log n}}{n}$ admits upper and lower limits for $n$ CA that are finite and $> 0$. Thus there are reals $A, B > 0$ such that $A\frac{n}{\sqrt{\log n}} \le D(n) \le B\frac{n}{\sqrt{\log n}}$ when $n$ is CA. Therefore $\lim_{n\to\infty} D(n) = \infty$.
Conclusion and open problems
In this note we have studied the quantity $D(n)$, the difference between the two sides of the Robin inequality, and its normalization $d(n) = D(n)/n$. While the asymptotic behavior of $d(n)$ can be determined unconditionally (Theorem 1), that of $D(n)$ depends crucially on the truth of RH (Theorem 3). It would be desirable to extend Theorem 3 to integers that are not CA. It seems impossible to use Theorem 1 and Theorem 3 together to prove that RH holds: for instance, one cannot rule out that $D(n)$ behaves like $-\sqrt{n}$ as $n \to \infty$, which would not contradict the fact that $\liminf_{n\to\infty} d(n) = 0$.
"year": 2016,
"sha1": "0ccbae6ae51e9d275267f0117405f860b2a29831",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "0ccbae6ae51e9d275267f0117405f860b2a29831",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics",
"Computer Science"
]
} |
PREVALENCE OF CTX-M-PRODUCING GRAM-NEGATIVE UROPATHOGENS IN SOKOTO, NORTH-WESTERN NIGERIA
Objective: Infections of the urinary tract remain among the most common bacterial infections, and many of the implicated organisms are Gram-negative and increasingly resistant to antimicrobial agents. The aim of the study was to evaluate the resistance of ESBL-producing Gram-negative Enterobacteriaceae to commonly prescribed antibiotics and the prevalence of CTX-M genes in these isolates using polymerase chain reaction (PCR). Methods: The isolates were collected from urine over a period of 4 mo and identified using the Microgen Identification Kit (GN-ID). Susceptibility testing was performed by the modified Kirby-Bauer disc diffusion method, and results were interpreted according to the Clinical and Laboratory Standards Institute (CLSI). Extended-Spectrum Beta-Lactamase (ESBL) production was detected by the double-disc synergy test (DDST). Molecular characterization was based on the isolates that were positive on phenotypic detection of ESBL. Results: Sixty-one (61) isolates of Gram-negative uropathogens were identified. Of these, 19 (31.2%) were E. coli, 15 (24.6%) were Salmonella arizonae, 7 (11.5%) were Klebsiella pneumoniae, 3 (4.9%) were Klebsiella oxytoca, 6 (9.8%) were Enterobacter gergoviae, 4 (6.6%) were Citrobacter freundii, 4 (6.6%) were Serratia marcescens, and 1 (1.6%) each were Enterobacter aerogenes, Proteus mirabilis and Edwardsiella tarda. Analysis of the bacterial susceptibility to antibiotics revealed most isolates to be generally resistant to cotrimoxazole (73.3%), nalidixic acid (66.7%), norfloxacin (53.5%), ciprofloxacin (50.5%), gentamicin (48.6%) and amoxicillin/clavulanate (45%), while the least resistance was displayed to nitrofurantoin (30%). Of the 15 ESBL producers, 11 (73.3%) were harbouring bla CTX-M genes. Conclusion: The study revealed a high susceptibility to nitrofurantoin, whereas susceptibility to cotrimoxazole was lowest. It further portrays a high prevalence of Enterobacteriaceae isolates harbouring bla CTX-M genes in the Sokoto metropolis.
INTRODUCTION
Urinary Tract Infections (UTIs) are a serious public health issue, particularly in the developing world, where there are high levels of poverty, ignorance and poor hygienic practices [1]. UTIs are the most prevalent bacterial infections in humans, with Gram-negative pathogens, most especially E. coli, regarded as the most important cause of nosocomial infections [2], in adults as well as in pediatric groups [3].
Wagenlehner and Naber [4] (2006) further noted that there are two important aims in the antimicrobial treatment of UTIs: a rapid, effective response to therapy with prevention of recurrence in the treated patient, and prevention of the emergence of resistance to antimicrobial chemotherapy.
Since antibiotics were introduced into clinical medicine, antibiotic-resistant bacteria have evolved. In 2016, the World Health Organization officially stated that "antimicrobial resistance is a global and societal challenge and threat". The constant increase of simultaneous resistance to various classes of antibiotics significantly reduces the possibility of treating infections caused by ESBL producers. Several studies [5-7] have also reported that the management of UTIs has become increasingly challenging due to the production of ESBLs. These organisms are a worrying global public health issue due to their associated higher morbidity and mortality; hence, they represent a clear and present danger to public health [8].
The production of ESBLs is one of the most prevalent resistance mechanisms in Gram-negative bacilli. ESBLs are enzymes whose rates of hydrolysis of extended-spectrum beta-lactam antibiotics such as ceftazidime, cefotaxime or aztreonam are >10% of that for benzylpenicillin [9]. ESBLs are predominantly described in K. pneumoniae and E. coli, but recently the enzymes have been found in other genera of the Enterobacteriaceae family [6,10].
Initially, the Temoneira (TEM) and sulphydryl variant (SHV) enzymes were recognized as the main ESBLs, but in recent times the cefotaximase-Munich (CTX-M) enzymes have become more prominent and are considered the most prevalent beta-lactamases found in clinical isolates of E. coli globally [11]. Currently, the three major ESBL types are TEM, SHV and CTX-M [9].
There are reported cases of CTX-M-producing uropathogen isolates [12], and of isolates from orthopedic patients [13], in Nigeria. The CTX-M enzymes are recognized as an increasingly serious public health concern worldwide and have been noted as the cause of outbreaks throughout the world [14]. These CTX-M genes are usually present in large plasmids that also carry additional resistance genes, and have been found on plasmids ranging in size from 7 to 430 kb [9].
The ongoing global spread and increased prevalence of CTX-M-type ESBLs in Enterobacteriaceae is of great concern [15]. Due to the explosive dissemination of CTX-M around the world and its increasingly frequent description, Canton et al. [16] have referred to it as the "CTX-M pandemic".
This study evaluates the resistance of ESBL-producing Gram-negative Enterobacteriaceae to commonly prescribed antibiotics and investigates the prevalence of CTX-M genes in these isolates using PCR.
MATERIALS AND METHODS
Approval to carry out this study was obtained from the ethics committee of the Specialist Hospital Sokoto (SHS) with approval number ESHS/239839. The sample collection was based on informed consent. The study was carried out in the Pharmaceutical Microbiology Laboratory of Department of Pharmaceutics and Pharmaceutical Microbiology, Faculty of Pharmaceutical Sciences, Usmanu Danfodiyo University, Sokoto.
Chemicals and reagents
The culture media and antibiotic discs used in this study were all sourced from Oxoid, UK and prepared as per the manufacturer's instructions. The identification kit used was the Microgen ID Kit (GN-ID, UK), the plasmid extraction kit was from ZymoPURE™, and primers were sourced from Inqaba Biotech, South Africa.
Inclusion criteria
Included in the study were urine samples from outpatients with UTIs aged ≥ 18 y. A UTI in this study was defined as a positive urine culture of ≥ 10^8 colony-forming units per milliliter (cfu/ml) of pure bacterial growth.
Bacteria cell preparation
Luria-Bertani (LB) broth was used; this is a rich medium commonly used to culture members of the Enterobacteriaceae or as a general-purpose bacterial culture medium for a variety of facultative organisms. Single colonies were picked from freshly streaked isolates on NA, inoculated into 5 ml of LB broth and incubated overnight at 37 °C. Bacterial cells were then harvested in Eppendorf tubes by centrifugation at 16,000 rpm for 30 seconds at 4 °C in a refrigerated micro-centrifuge. The supernatant was decanted and the cells harvested.
Plasmid extraction
Plasmid extraction was carried out using the ZymoPURE™ Plasmid Miniprep Kit according to the manufacturer's instructions. To ascertain that plasmids were extracted, the extracted plasmids were subjected to agarose gel electrophoresis.
Collection of clinical isolates
A total of three hundred and sixty-five (365) non-repetitive urine samples were collected from patients over four (4) months. Early-morning, mid-stream, clean-catch urine samples were collected by the patients in sterile disposable containers. Before urine collection, patients were counseled on how to collect urine samples observing all aseptic conditions to avoid contamination. Urine samples were inoculated on Cysteine Lactose Electrolyte Deficient (CLED) agar using a calibrated wire loop and incubated under aerobic conditions for 18-24 h at 37 °C. Pure cultures of the individual isolates were obtained by sub-culturing on nutrient agar (NA).
Identification of bacterial isolates
An 18-24 hour pure culture of the bacterial isolate to be identified was used. Oxidase test was carried out on the isolate before strip inoculation. Only oxidase negative isolates were considered. A loopful was emulsified from an 18-24 hour culture in 3 ml sterile 0.9 % saline for the GN A microwell strip (Microgen Identification Kit) and was mixed thoroughly. Using a sterile Pasteur pipette, 3-4 drops (approximately 100μl) of the bacterial suspension was added to each well of the strip(s). The GN A microwell strips were read after 18-24 h incubation for Enterobacteriaceae according to manufacturer's instruction.
Screening for ESBL production
Multidrug-resistant (MDR) isolates were further screened for ESBL production. ESBL screening was performed by disc diffusion using cefotaxime (CTX, 30µg) and ceftazidime (CAZ, 30µg) on MHA for the initial screening test. The tests were interpreted according to CLSI guidelines.
Zone diameters (in mm) below the screening breakpoints for the respective antibiotics indicate potential ESBL producers. For any isolate suspected to be an ESBL producer, a phenotypic confirmatory test was carried out: a suspension of each test organism was prepared in freshly prepared 0.9% normal saline to give an inoculum equivalent to a 0.5 McFarland standard, and the test organisms were inoculated on the surface of MHA plates. Confirmation of the ESBL phenotype was performed by DDST using antibiotic discs containing two cephalosporins and amoxicillin/clavulanate: ceftazidime, CAZ (30 µg), cefotaxime, CTX (30 µg) and AMC (30 µg). A sterile needle was used to place the CTX and CAZ discs on the agar at a distance of 20 mm, center to center, from a combination disc of AMC. The plates were incubated at 37 °C and examined for an extension of the edge of the zone of inhibition of the cephalosporin discs towards the disc containing AMC; such an extension is interpreted as synergy and considered positive for the presence of ESBL. E. coli ATCC 25922 and K. pneumoniae ATCC 700603 strains were used as negative and positive controls, respectively, in this study.
Amplification of CTX-M genes
Amplification of plasmid DNA fragments was carried out using DreamTaq™ DNA polymerase (an enhanced Taq DNA polymerase optimized for high-throughput PCR applications). DreamTaq™ PCR master mix (2X) was vortexed and centrifuged for 30 seconds at 8,000 rpm. A thin-walled PCR tube was placed on an ice pack and the following components were added for each isolate for a single 10 µl reaction: 0.2 µl of DreamTaq™ PCR master mix, 0.5 µl of dNTP mix (2 mmol each), 0.5 µl of the CTX-M forward primer, 0.5 µl of the CTX-M reverse primer, 3.0 µl of template DNA (plasmid DNA) and 1 µl of 10X PCR Taq buffer. Nuclease-free water was added to the PCR tube to make up a total volume of 10 µl. The samples were vortexed gently and spun down. For 15 isolates, the volumes of the forward and reverse primers, Taq buffer, dNTPs, Taq polymerase, water and template were multiplied by 15, making a total volume of 150 µl.
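As a quick check of the volume arithmetic above, the following sketch scales the per-reaction recipe to 15 reactions. Note that in practice template DNA is usually added per tube rather than to a master mix; here all components are scaled by 15 as the text describes.

```python
# Per-reaction volumes in µl, as listed above for a single 10 µl reaction.
PER_REACTION = {
    "DreamTaq PCR master mix": 0.2,
    "dNTP mix (2 mmol each)": 0.5,
    "CTX-M forward primer": 0.5,
    "CTX-M reverse primer": 0.5,
    "template (plasmid) DNA": 3.0,
    "10X PCR Taq buffer": 1.0,
}
TOTAL_PER_REACTION = 10.0  # µl, topped up with nuclease-free water

def scaled_mix(n_reactions):
    # Multiply each component by the number of reactions and compute the water top-up.
    mix = {name: vol * n_reactions for name, vol in PER_REACTION.items()}
    water_per_reaction = TOTAL_PER_REACTION - sum(PER_REACTION.values())
    mix["nuclease-free water"] = water_per_reaction * n_reactions
    return mix

for reagent, volume in scaled_mix(15).items():
    print(f"{reagent}: {volume:.1f} µl")
print(f"total: {sum(scaled_mix(15).values()):.0f} µl")  # 150 µl for 15 reactions
```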
Primers and conditions used for PCR
PCR was performed with the primers used in this study, as shown in table 1, and the thermal cycling conditions as shown in table 2.
Detection of PCR products on agarose gel
A 2% agarose gel was used to resolve the PCR plasmid DNA fragments with their primers. The amplicons were separated by electrophoresis and photographed under an ultraviolet illuminator using a gel documentation system (BIORAD, USA).
Statistical analysis
Data were analyzed using Microsoft Excel.
Identification and screening for ESBL production in Gram-negative uropathogens
The identification and screening for ESBL production in Gram-negative uropathogens found that, of the 61 Gram-negative isolates, 18 (29.5%) were potential ESBL producers, while 70.5% were not.
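The reported proportions can be recomputed directly from the isolate counts in the Results (a sketch; small rounding differences from the published percentages are possible):

```python
# Species counts among the 61 Gram-negative urinary isolates reported above.
counts = {
    "E. coli": 19, "Salmonella arizonae": 15, "Klebsiella pneumoniae": 7,
    "Enterobacter gergoviae": 6, "Citrobacter freundii": 4,
    "Serratia marcescens": 4, "Klebsiella oxytoca": 3,
    "Enterobacter aerogenes": 1, "Proteus mirabilis": 1, "Edwardsiella tarda": 1,
}
total = sum(counts.values())  # 61
for species, n in counts.items():
    print(f"{species}: {n} ({100 * n / total:.1f}%)")

esbl_screen_positive = 18   # potential ESBL producers on screening
esbl_confirmed = 15         # DDST-confirmed ESBL producers
ctx_m_positive = 11         # blaCTX-M detected by PCR
print(f"screening-positive: {100 * esbl_screen_positive / total:.1f}%")
print(f"blaCTX-M among confirmed producers: {100 * ctx_m_positive / esbl_confirmed:.1f}%")
```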
Amplification of CTX-M on gel electrophoresis
The plasmid DNA PCR of the CTX-M gene on ESBL isolates revealed the positions of the amplification products, which were estimated against the position of the molecular weight marker, as shown in fig. 3. Eleven (11) of the 15 isolates showed a band in the region of the expected 909 bp amplicon.
DISCUSSION
Although UTIs caused by ESBL-producing Gram-negative Enterobacteriaceae are a cause of concern due to clinical failure of empirical treatment, the occurrence of ESBL-positive strains expressing MDR to antibiotics has remained the dominant problem in the therapy of infections caused by Gram-negative bacilli [6,19]. The overall frequency of ESBL-producing Enterobacteriaceae among urinary tract pathogens in this study was 83.3%. In comparison, both lower and higher isolation rates have been recorded in other studies: 44% in Saudi Arabia [20], 66.7% in India [21], 67.9% in Portugal [22], and 84% in Turkey [23].
Our study demonstrates an increasing trend in the emergence of ESBLs in community-acquired UTI. It is therefore of great concern that Gram-negative uropathogens carrying CTX-M are widespread in the Sokoto metropolis. Hence, the need for antimicrobial stewardship and for guidance on the management of these complex MDR infections can never be overemphasized.
CONCLUSION
The study revealed a high susceptibility of the Gram-negative uropathogens to nitrofurantoin, whereas susceptibility of these isolates to cotrimoxazole was lowest. It further portrays a high prevalence of Enterobacteriaceae isolates harboring CTX-M genes, thus demonstrating the emergence of ESBLs in community-acquired UTI in the study area.
AUTHORS CONTRIBUTIONS
Busayo Olalekan Olayinka, Nuhu Tanko and Rebecca Olajumoke Bolaji conceptualized the study, while the methodology was carried out by Busayo Olalekan Olayinka and Nuhu Tanko. Formal analysis and investigation were done by Nuhu Tanko and Rebecca Olajumoke Bolaji. Writing of the original draft was prepared by Nuhu Tanko and Busayo Olalekan Olayinka. Review and editing were done by Eugene Ong Boon Beng. The study was supervised by Busayo Olalekan Olayinka, Rebecca Olajumoke Bolaji and Eugene Ong Boon Beng.
"year": 2019,
"sha1": "831a3489447e965827301cbf00e8caadb2084c17",
"oa_license": "CCBYNC",
"oa_url": "https://innovareacademics.in/journals/index.php/ijpps/article/download/35863/21332",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "67431d9d09eb0b194d94d13413b39eda464a7a78",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Biology"
]
} |
A Parallel Hybrid Controller based on the Backstepping Method and S-plane Control for Three-dimensional Tracking of a Semi-pelagic Trawl System
Abstract: Aiming at improving the security of a semi-pelagic trawl in a complex working environment, this paper proposes a three-dimensional tracking control system for guiding a semi-pelagic trawl along a complicated trajectory. The flexible nonlinear trawl system is simplified as a mass-spring-bar model. A hierarchical backstepping controller is then designed for trajectory tracking of the trawl net and two otter boards. However, high-order state variables caused by the non-strict-feedback characteristic of the trawl system appear frequently in the recursive process. Hence, a control algorithm based on the Sigmoid function is applied to construct the control outputs for these high-order variables. The stability of the proposed hybrid control method is analyzed based on the Lyapunov theorem when interference with an unknown upper limit occurs. Finally, simulation and contrast examinations show that the control algorithm is effective.
INTRODUCTION
The semi-pelagic trawl is a relatively advanced type of trawl system. It is mainly adopted to catch demersal fish such as squid and butterfish. Compared with the bottom trawl, it has proved to be friendly to the undersea environment [1]. Similar to the mid-water trawl, the semi-pelagic trawl system is hydrodynamically balanced, which means that the sweeps, bridles and trawl net maintain a short distance from the seabed, but the otter boards are still on the sea bottom [2]. A diagrammatic sketch of a semi-pelagic trawl system is shown in Fig. (1). In the process of trawling, rough undersea mountains and obstacles such as subsea production equipment and pipelines can sometimes hook the otter boards, which affects the catch rate and damages the trawl net. This situation may even threaten the safety of the vessel in bad weather. The purpose of this work is to study a three-dimensional trajectory tracking approach for a semi-pelagic trawl system, which can control the trajectories of the otter boards and trawl net. Owing to the complexity of the semi-pelagic trawl system, it is difficult for a single control strategy to achieve this control objective, so a combined control method is needed. The otter boards can then leave the seabed and the trawl net can follow a complicated trajectory. After a collision with the seabed, the trawl system can rapidly return to a steady state of motion.
Tracking control of the trawl system is difficult because of the hulking physical size of the entire system and the flexibility of the trawl net and warps. At present, only a few approaches have been studied. In the study of control modeling, Umeda proposed a first-order simplified model to control the depth of the trawl net [3]. Lee et al. presented a nonlinear trawl mathematical model, which considers the trawler, otter board and net as three material points, with all ropes simplified as elastic links [4,5]. This nonlinear model has been employed widely. In the study of control methods, several conventional PID controllers have been adopted to control the trajectory of the trawl net by regulating the otter board and considering the horizontal movement [6-8]. The fuzzy control approach and the robust H2/H∞ method have been attempted to realize controlled directed fishing with the mid-water trawl and semi-pelagic trawl [9,10]. However, the above literature is mainly aimed at depth control of the trawl net. To ensure the safety of the semi-pelagic trawl in a complex environment, trajectory tracking control in three-dimensional space is required. Moreover, the motion state parameters of the two otter boards should be seriously considered because they provide the lateral spread of the trawl net. Few studies have investigated three-dimensional tracking of the semi-pelagic trawl. Nevertheless, many approaches have been proposed to control the trajectory of under-actuated unmanned underwater vehicles (UUVs), which pose a research question quite similar to that of this work. The commonly used tracking control methods for UUVs are neural-network adaptive control, sliding-mode control and the backstepping method. Yu et al. presented a direct adaptive control algorithm for UUVs based on a generalized dynamic fuzzy neural network [11]. Jia et al. approached a virtual guide method to establish the space motion error equation of UUVs, and a three-dimensional path controller was designed based on nonlinear iterative sliding mode [12]. Do et al. adopted the Lyapunov direct method and backstepping technique to design an adaptive path tracking controller for UUVs, which considered the parameter uncertainty of the system model [13]. Jon et al. combined the backstepping method and a feedback control model to control slender-body under-actuated UUVs and carried out an experiment [14]. Zhu et al. proposed a backstepping tracking control algorithm for UUVs combined with a bio-inspired neurodynamics model, which reduced the speed jump of the conventional backstepping controller [15,16]. Xu et al. applied a virtual velocity to replace attitude tracking in the backstepping design, and adaptive sliding-mode control was adopted to increase the adaptive ability of UUVs in dynamic uncertain environments [17,18].
In this work, the three-dimensional trajectory tracking strategy for the trawl net and otter boards is considered. First, a simple kinetic model of the semi-pelagic trawl system considering the three-degree-of-freedom motions of the two otter boards is established.
A hybrid control algorithm based on the hierarchical backstepping method and another nonlinear control method is then proposed. The backstepping method is used to design the virtual control variables of each subsystem from lower to higher orders. Owing to the non-strict-feedback characteristics of the model state equation, the backstepping method is unavailable for some subsystems that contain high-order variables. A parallel nonlinear controller called "S-plane control" can solve for these high-order variables directly.
Subsequently, the stability of the proposed control algorithm is proved based on Lyapunov theory. To illustrate the superiority of the proposed control approach, a simulation experiment comparison is conducted on the MATLAB/Simulink platform.

MODELING OF THE SEMI-PELAGIC TRAWL SYSTEM

The semi-pelagic trawl is a complex, flexible and under-actuated system. A kinematic model is established based on the following simplified conditions. The trawl net and otter boards have planar motions in three degrees of freedom. The trawler moves in the horizontal plane with two translational degrees of freedom and one rotational degree of freedom. As shown in Fig. (2), two rectangular coordinate systems have been defined, i.e., the earth-fixed frame $(X_E, Y_E, Z_E)$ and the ship-fixed frame $(X_T, Y_T, Z_T)$. The trawler, two otter boards and trawl net are simplified as four mass points. The two pairs of warps and sweeps are simplified as flexible links. The tension of all ropes can be calculated by Hooke's law. The simplified model of the semi-pelagic trawl can be named the "four-mass-point model".
The state variables of the trawl system can be defined as follows: $q_n$ and $\dot q_n$ are the position and velocity of the trawl net; $q_{oz}$ and $\dot q_{oz}$ are the position and velocity of the port otter board; $q_{oy}$ and $\dot q_{oy}$ are the position and velocity of the starboard otter board; and $q_s$ and $\dot q_s$ are the position and velocity of the trawler. $l_{wz}$ and $l_{wy}$ are the lengths of the two warps, and $\dot l_{wz}$ and $\dot l_{wy}$ are the release rates of these warps.
The external environment of the semi-pelagic trawl is rather complicated. The trawler is under the joint action of wind, waves and current. Based on the nonlinear mathematical model for ships [19], the kinetic equations of the trawler can be obtained.
The main source of environmental disturbance for the otter boards and trawl net is the current. Based on D'Alembert's principle, the kinetic equations of the port otter board can be expressed in terms of the following quantities: $k_{2sz}$ is the elasticity coefficient of the port sweep; $m_0$, $W_0$ and $B_0$ are the equivalent mass, the gravity and the resistance coefficient of the otter board, respectively; and $\omega_{oz}$ is the environmental disturbance. The equations of the two otter boards are basically the same. When the system motion state changes, the fluid resistance coefficient $B_0$ is slowly varying. Owing to the limitations of fluid theory, $B_0$ is obtained by parameter estimation.
The kinetic equations of the trawl net mass point are similar to Eq. (2). According to the mechanical properties of the hydraulic winch, the kinetic equations of the port trawl winch can be expressed in terms of the following quantities: $J_w$ is the rotational inertia of the winch, $W_d$ is the tension produced by the power source of the winch, $B_w$ is the damping coefficient, and $r_w$ is the radius of the roller.
Combined with the equations of these subsystems, the state-space equations of the three-mass-point model were obtained in [20]. For the proposed four-mass-point model, the trawl net is pulled by two sweep lines. Analogously, the trawler is pulled by two warps. Hence, the state equations of these two subsystems must be modified. The otter boards and the trawl winches each require two sets of equations to describe their motion states.
After the above modification, the state equations of the semi-pelagic trawl system can be expressed in terms of the following quantities: $m_n$ and $B_n$ are the equivalent mass and damping coefficient of the trawl net, and $k_{ls}$ is the elasticity coefficient of the sweep lines. $l_{rz}$ and $l_{ry}$ are the length vectors of the two sweep lines before they extend, and the underwater gravity vector of the net also enters the equations.
For the fishing vessel, $u_s$, $u_{wz}$ and $u_{wy}$ are the control inputs provided by the marine propulsion system and the two trawl winches. $M_s$ is the inertia matrix of the vessel after coordinate transformation, $B_s$ is the damping force matrix of the vessel, $C_s$ is the Coriolis force matrix of the vessel, and $T_s$ is the thrust of the vessel produced by the propeller and rudder, which is the control vector.
TRAJECTORY TRACKING CONTROLLER DESIGN
The three-dimensional trajectory of the trawl net is the main control aim. The movements of the two otter boards should also be highly regarded. Moreover, the state-space model requires extra virtual control inputs. Thus, the tracking error of the trawl system can be defined with respect to $x_{1d}$, the target trajectory of the trawl net, and the virtual control inputs of all state variables.
Based on the backstepping method, the design process starts from the lowest differential equation of Eq. (4), and the Lyapunov functions related to the control objectives can be built step by step.
Step 1. Construct a Lyapunov function $V_1 = \frac{1}{2}\tilde x_1^{\mathsf T}\tilde x_1$, where $\tilde x_1 = x_1 - x_{1d}$, and take its derivative $\dot V_1$. To ensure that $\dot V_1 \le 0$, the virtual control input $x_{2d}$ is defined with control parameter $k_1 \ge 0$; plugging the expression of $x_{2d}$ into Eq. (7) gives $\dot V_1 = -k_1 \tilde x_1^{\mathsf T}\tilde x_1$.

Step 2. Based on Eq. (6), a new Lyapunov function $V_2$ is defined by augmenting $V_1$ with the quadratic error of the next state, and its derivative $\dot V_2$ is taken. In Eq. (10) the high-order state variable $x_5$ appears, which means that the backstepping method cannot be used directly. Therefore, an entire variable $x_{so}$, defined as $x_{so} = x_3 + x_5$, is introduced. To ensure that $\dot V_2 \le 0$, the new virtual control input is constructed with control parameter $k_2 \ge 0$.
Step 3: Construct a new Lyapunov function $V_3$ and take its derivative. To ensure that $\dot V_3 \le 0$, the virtual states $x_{4d}$ and $x_{6d}$ are chosen with positive control parameters $k_3$ and $k_5$. Plugging the expressions of $x_{4d}$ and $x_{6d}$ into Eq. (13) gives the derivative of $V_3$.

Step 4: Based on Eq. (12), a new Lyapunov function $V_4$ is defined and its derivative taken. In Eq. (16), the high-order state variables $x_9$ and $x_{11}$ appear. Similar to Step 2, new entire virtual control inputs $x_{szd}$ and $x_{syd}$ are introduced. Obviously, this yields two expressions for the single vector $x_{7d}$ in Eq. (17). The tensions of the two warps are not equal when the vessel swerves, so this condition leads to a conflict in solving for the variable $x_{7d}$.
To solve for the high-order variables $x_{7d}$, $x_{9d}$ and $x_{11d}$, a method called "S-plane control" is applied. This method is derived from the Sigmoid function and partly embodies the idea of fuzzy control [21-23].
The expression of S-plane control is $f = \frac{2}{1 + e^{-(k_a e + k_b \dot e)}} - 1$, where $k_a$ and $k_b$ are the control parameters, $e$ and $\dot e$ are the inputs expressing the deviation and the change in deviation, respectively, and $f$ is the control output.
The structure of S-plane control is similar to that of PID control, but S-plane control is nonlinear. It is very suitable for the motion control problems of a towing system, which is nonlinear and thus has a mathematical model that is difficult to acquire precisely. Its essential similarity to PD control effectively ensures the quality of motion control.
There are coupling effects among the three-degree-of-freedom motions of the vessel. To simplify the controller design, a decoupled controller is designed with one controller for each degree of freedom. Thus, there are three S-plane controllers for the vessel.
Assume that the target trajectory of the vessel in the X-direction is $x_{7dx}$ and the tracking error is $e_x = x_{7dx} - x_{7x}$. The output of each S-plane controller then takes the form of Eq. (18) scaled by $X_{\max}$, the maximum output, with the deviation normalized by $e_{\max}$, the maximum permissible error. Changes in buoyancy and current can be seen as steady interference forces over a period of time. $\Delta f$ is an adjustable variable for adapting to the interference of the environment; this fixed deviation can be eliminated by adjusting the offset of the S-plane. The adaptive variable $\Delta f$ is built from $S(\Delta e_x, 0)$, the control function of the S-plane, where $e_t$ is the deviation at time $t$ and $\Delta e_x$ is the adaptive adjustment amount of the deviation. $\gamma$ is the enabling factor: when the absolute value of the deviation change is less than the threshold value $\dot e_0$ and the deviation shows an increasing tendency, $\gamma$ is set to 1; otherwise $\gamma$ is set to 0. $\alpha_t$ is the fading factor at time $t$: when $\gamma$ is 0, $\alpha_t = 0.9\,\alpha_{t-1}$; when $\gamma$ is 1, $\alpha_t = 1$. $\beta_t$ is the vanishing factor at time $t$: when $\Delta e$ and $e_t$ are opposite in sign, $\beta_t = 0.25\,\beta_{t-1}$; otherwise $\beta_t = 1$. $\delta$ is the low-pass filter parameter and $n$ is the number of historical data points.
Analogously, the expressions of the virtual control inputs $x_{9d}$ and $x_{11d}$ are formally consistent with Eq. (19). Hence, there are nine S-plane controllers altogether for $x_{7d}$, $x_{9d}$ and $x_{11d}$.
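A minimal sketch of the S-plane law of Eqs. (18)-(19) is given below. The gains, normalization and output limit are hypothetical values, and the adaptive offset $\Delta f$ of Eq. (20) is represented only by a constant parameter rather than the full $\gamma$/$\alpha$/$\beta$ bookkeeping described above.

```python
import math

def s_plane(e, e_dot, k_a=3.0, k_b=1.0, f_max=1.0, e_max=1.0, delta_f=0.0):
    # Sigmoid control surface: output bounded in (-f_max, f_max), shifted by delta_f.
    # The deviation e is normalized by the maximum permissible error e_max.
    s = 2.0 / (1.0 + math.exp(-(k_a * e / e_max + k_b * e_dot))) - 1.0
    return f_max * s + delta_f

# The larger the deviation, the closer the output gets to saturation.
for e in (-2.0, -0.5, 0.0, 0.5, 2.0):
    print(f"e = {e:+.1f}  ->  f = {s_plane(e, 0.0):+.3f}")
```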
With the expressions of $x_{7d}$, $x_{9d}$ and $x_{11d}$, the recursive process is able to continue. Plugging the expressions of these variables into Eq. (14), the derivative $\dot V_4$ can be obtained.
Step 5: Based on Eq. (15), a new Lyapunov function $V_5$ is given, and its derivative taken. To ensure that $\dot V_5 \le 0$, the virtual control inputs $x_{8d}$, $x_{10d}$ and $x_{12d}$ are defined with positive control parameters $k_7$, $k_9$ and $k_{11}$.
Step 6: The last Lyapunov function $V_6$ is given. According to the derivative of $V_6$, the controlled quantities of the vessel and the two winches are defined with control parameters $k_8 \ge 0$, $k_{10} \ge 0$ and $k_{12} \ge 0$. With this, the three-dimensional trajectory tracking controller of the trawl system has been acquired.
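To make the recursive construction concrete, here is a minimal backstepping sketch for a generic double integrator rather than the full twelve-state trawl model; the gains and the sinusoidal reference are hypothetical, and the Lyapunov function $V = \frac{1}{2}e_1^2 + \frac{1}{2}e_2^2$ decreases along trajectories in the same spirit as Steps 1-6 above.

```python
import math

k1, k2 = 2.0, 2.0           # hypothetical backstepping gains
dt, T = 0.001, 10.0
x1, x2, t = 0.0, 0.0, 0.0   # plant state: x1' = x2, x2' = u

while t < T:
    # Reference trajectory and its first two derivatives.
    x1d, x1d_dot, x1d_ddot = math.sin(t), math.cos(t), -math.sin(t)

    e1 = x1 - x1d
    x2d = x1d_dot - k1 * e1                   # Step 1: virtual control for x2
    e2 = x2 - x2d
    x2d_dot = x1d_ddot - k1 * (x2 - x1d_dot)  # derivative of the virtual control
    u = x2d_dot - e1 - k2 * e2                # Step 2: actual control input

    # Euler integration of the double integrator.
    x1 += dt * x2
    x2 += dt * u
    t += dt

print(f"final tracking error: {x1 - math.sin(T):.2e}")
```

With these choices, the error dynamics become $\dot e_1 = -k_1 e_1 + e_2$ and $\dot e_2 = -e_1 - k_2 e_2$, so $\dot V = -k_1 e_1^2 - k_2 e_2^2 \le 0$.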
STABILITY ANALYSIS
For the proposed S-plane controller, it is assumed that the real trajectory is equal to the desired trajectory over a long time. At this moment, the desired output of the controller is $f_0$.
Define $s = -k_1 e + k_2 \dot e$. A Lyapunov function can then be expressed as $V(s) = \frac{1}{2}s^2$, whose derivative is $\dot V(s) = s\dot s$.
When $f > f_0$ and when $f < f_0$, the sign of $\dot V(s)$ can be determined, and an equality for the sign function of $s$ is established. It follows that, over enough time or for a sufficiently large value of the argument, $\dot V(s) \le 0$, and therefore $V(s)$ is asymptotically stable.
To ensure the stability of the trawl system with a bounded input signal under external environmental disturbance, disturbances $\omega_n$, $\omega_{oz}$ and $\omega_{oy}$ are added to the trawl net and otter boards.
Based on Eq. (23), and considering that nonzero but bounded disturbances exist, a Lyapunov function $V_7$ can be defined. By a standard quadratic inequality, the derivative of $V_7$ satisfies the bound of Eq. (30). Taking definite integrals of Eq. (30) yields Eq. (31). Thus, Eq. (31) proves that, in the presence of interference with an unknown upper limit, the system is asymptotically stable and the control outputs converge to a small neighborhood of the origin within the stable region.
SIMULATION EXPERIMENT
To simulate the three-dimensional motion tracking of a semi-pelagic trawl net over complex seabed terrain, the target trajectory of the trawl net, $x_{1d}$, is defined as a nonlinear time-varying function. The semi-pelagic trawl system is assumed to move in uniform flow. The masses of the mass points are defined as $m_s = 5{,}718{,}000$ kg, $m_o = 5{,}200$ kg and $m_n = 47{,}916$ kg. The lengths of the warp and sweep lines are $l_w = 3000$ m and $l_r = 206$ m. When the time t = 1000 s, a disturbing force $\omega_0$ acts on the otter board on the left side, and it is removed at time t = 2000 s. At time t = 1500 s, $\omega_0$ acts on the trawl net, and it is removed at time t = 2500 s. $\omega_0$ is defined as $6 \times 10^5$. As shown in Eq. (28), the target trajectory is rather complex. For comparison, a conventional linear controller is also evaluated. The condition in which the semi-pelagic trawl moves without disturbing forces is simulated first. The simulation result of the linear controller is shown in Fig. (3a), and the result of the improved backstepping controller is shown in Fig. (3b). Comparing these two results, it can be seen that the tracking curve of the conventional linear controller has obvious deviation. In contrast, the deviation is effectively reduced under the action of the proposed control method. For further quantitative analysis, a real-time relative error comparison of the two controllers is provided. As shown in Fig. (5), in the X-direction, the tracking errors of the two controllers are nearly the same. In the Y-direction, the two tracking relative error curves both have mutations at times t = 2000 s and t = 4000 s; this may be caused by the reversal of the velocity direction. However, for most of the time, the proposed controller has a much lower tracking error. In the Z-direction, the tracking error of the linear controller is much larger, sometimes even higher than 0.1, while the proposed controller maintains the tracking error at a low level. Fig. (6) shows the real-time relative error comparison in the presence of a disturbance force. It can be seen that, while the disturbing force exists, the tracking errors in the Y-direction change little. In the Z-direction, the maximum relative error increases to more than 0.3. After the disturbing force disappears, the improved backstepping controller rapidly reduces the error to the previous level.
CONCLUSION
According to the actual working conditions of a semi-pelagic trawl system, a new simplified model is presented in this paper. By using the backstepping method and S-plane control in combination, a compound nonlinear controller that handles the high-order state variables appearing in the recursion through decoupling is presented. The robustness and stability of this system under constant external disturbances have been proved based on Lyapunov stability theory. Finally, the simulation results show the superiority and effectiveness of the proposed control approach. When a constant disturbance force acts on the system, the depth of the trawl net changes more than its horizontal position. In future work, a model experiment will be presented to verify the effectiveness of the proposed control scheme in practical applications, and further modification of the control model will be considered to improve model accuracy in a complex flow field.
CONFLICT OF INTEREST
The authors confirm that this article content has no conflict of interest.
Fig. (4). Trajectories of the trawl net under the disturbing forces. Similar to Fig. (3), Fig. (4a) shows the trajectory of the linear controller and Fig. (4b) shows the trajectory of the proposed controller. Apparently, the path deviation in the Z-direction is larger than in the other directions during the action period of the disturbing forces. The trajectory tracking errors of the proposed controller are much lower than those of the linear controller. When the disturbing forces disappear, the proposed controller eliminates the deviation rapidly and stably, whereas the trajectory of the linear controller still oscillates for a long time.
"year": 2016,
"sha1": "d469e820adac7804d4b0bebb17fdb3bd10927f26",
"oa_license": "CCBYNC",
"oa_url": "http://benthamopen.com/contents/pdf/TOCSJ/TOCSJ-10-180.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "d469e820adac7804d4b0bebb17fdb3bd10927f26",
"s2fieldsofstudy": [
"Engineering",
"Environmental Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
Down-regulation of miR-361-5p promotes the viability, migration and tube formation of endothelial progenitor cells via targeting FGF1
Abstract Transplantation of bone marrow-derived endothelial progenitor cells (EPCs) may be a novel treatment for deep venous thrombosis (DVT). The present study probed into the role of microRNA (miR)-361-5p in EPCs and DVT recanalization. EPCs were isolated from male Sprague-Dawley (SD) rats and identified using confocal microscopy and flow cytometry. The viability, migration and tube formation of EPCs were examined using MTT assay, wound-healing assay and tube formation assay, respectively. The target gene and potential binding sites between miR-361-5p and fibroblast growth factor 1 (FGF1) were predicted by StarBase and confirmed by dual-luciferase reporter assay. Relative expressions of miR-361-5p and FGF1 were detected using quantitative real-time polymerase chain reaction (qRT-PCR) and Western blot as needed. A DVT model in SD rats was established to investigate the role of EPCs carrying miR-361-5p antagomir in DVT by Hematoxylin-Eosin (H&E) staining. EPCs were identified as 87.1% positive for cluster of differentiation (CD)31, 2.17% positive for CD133, 85.6% positive for von Willebrand factor (vWF) and 94.8% positive for vascular endothelial growth factor receptor-2 (VEGFR2). MiR-361-5p antagomir promoted the proliferation, migration and tube formation of EPCs and up-regulated FGF1 expression, thereby dissolving thrombus in the vein of DVT rats. FGF1 was the target of miR-361-5p, and overexpressed FGF1 reversed the suppressive effects of up-regulating miR-361-5p on EPCs. Down-regulation of miR-361-5p enhanced thrombus resolution in vivo and promoted EPC viability, migration and angiogenesis in vitro through targeting FGF1. Therefore, miR-361-5p may be a potential therapeutic target for DVT recanalization.
Introduction
Deep venous thrombosis (DVT) refers to the formation of a blood clot within a deep vein, in contrast with venous thromboembolism, which also includes superficial thrombophlebitis and pulmonary embolism [1]. DVT leads to morbidity and mortality in several conditions [2]. Anti-coagulation is a major therapeutic strategy for DVT, but it cannot dissolve thrombus or restore valve function; at the same time, thrombosis may recur together with post-thrombotic syndrome (PTS) [3]. Thus, it is of great significance to explore new therapeutic methods for DVT treatment.
Recent discoveries showed that successful resolution of DVT-related thrombi plays a key role in DVT treatment [4]; however, the detailed mechanisms remain obscure. Endothelial progenitor cells (EPCs), derived from bone marrow and resident in tissues, act as precursors for endothelial cells. EPCs have the ability to differentiate into mature endothelial cells and play a major role in vascular integrity maintenance and repair of endothelial damage, as well as in the resolution of thrombus in vivo [5-7]. EPCs can home to and integrate into the injured blood vessel and thrombus to secrete angiogenic factors, thus increasing the formation of new blood vessels and improving the resolution of vascular thrombosis when DVT occurs [2]. Therefore, effective recruitment of EPCs into thrombus may help treat DVT.
As previous studies demonstrated, microRNAs (miRNAs, miRs) are involved in the biological functions of EPCs [8]. MiRNAs are a highly conserved family of small non-coding RNAs (ncRNAs), 19-25 nucleotides in length, that regulate gene expression through binding to the 3′-untranslated region (3′-UTR) of target messenger RNAs (mRNAs) [9]. MiR-361-5p, in particular, has been reported to suppress the progression of many human cancers, such as hepatocellular carcinoma [10], hemangioma [11] and papillary carcinoma [12]. According to Wang et al.'s study, miR-361-5p suppresses vascular endothelial growth factor (VEGF) expression and EPC activity [13]. We predicted that a target gene of miR-361-5p was fibroblast growth factor 1 (FGF1), which aroused our research interest. FGF1, which plays a part in cell proliferation, migration, invasion and angiogenesis, can promote the viability of EPCs and neovascularization [14,15]. Herein, the roles of miR-361-5p in the development and progression of DVT were explored by examining its regulatory functions in EPCs based on an animal model, hoping to find a potential therapeutic approach for DVT.
Materials and methods
EPC isolation and culture

EPC isolation was conducted following a previous description [16]. Male Sprague-Dawley (SD) rats (3 weeks old, 80-100 g) were purchased from Guangdong Medical Laboratory Animal Center (Foshan, China) and kept in microisolator cages under a 12-hour (h) day/night cycle at 23 °C with free access to standard laboratory diet and tap water for 2 weeks before the experiment. After 2 weeks of stabilization, five male SD rats were anesthetized and sacrificed with intraperitoneal injection of ketamine (100 mg/kg) and xylazine (10 mg/kg). Subsequently, bone marrow of the rats was harvested from the femurs and tibias. Mononuclear cells were obtained by density-gradient centrifugation with Ficoll-Paque (GE Healthcare, Piscataway, NJ, U.S.A.). EPCs at a density of 0.8-1.0 × 10^6 cells/cm^2 were inoculated into culture flasks and cultured in microvascular endothelial cell growth medium-2 (EGM™-2 MV; catalog number CC-3125; Lonza, Greenwood, SC, U.S.A.) containing penicillin-streptomycin (P4333, Sigma-Aldrich, St Louis, MO, U.S.A.) at 37 °C with 5% CO2. Non-adherent cells were washed off after 4 days of culture, and the medium was refreshed every 2 days. EPCs at passage 3 were selected for subsequent experiments.
MTT assay
MTT assay was performed to measure EPC viability. In brief, EPCs (1 × 10^3 cells/well) were cultured in 96-well plates, and 10 μl of MTT reagent (#30006, Biotium, Inc., Fremont, CA, U.S.A.) was added to the wells at 12, 24 and 48 h of culture. The supernatant was discarded after 4 h of incubation at 37 °C, and 100 μl of dimethyl sulfoxide (DMSO; 472301, Sigma-Aldrich, U.S.A.) was added to the wells to dissolve the formazan crystals. The OD values at an absorbance of 490 nm were measured and recorded by a microplate reader (Model 680, Bio-Rad, U.S.A.).
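Relative viability from the recorded OD490 readings can be computed as below (a sketch; the OD values are hypothetical, blank-subtracted, with three replicates per group):

```python
# Hypothetical blank-subtracted OD490 readings, three replicate wells per group.
od_control = [0.52, 0.49, 0.55]
od_treated = [0.31, 0.29, 0.33]

def mean(values):
    return sum(values) / len(values)

# Viability of the treated EPCs expressed as a percentage of the control group.
viability = 100 * mean(od_treated) / mean(od_control)
print(f"relative viability: {viability:.1f}%")
```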
Wound healing assay
After 48 h of cell transfection, EPCs (1 × 10^4 cells/ml) were seeded in a 24-well tissue culture plate. A straight wound was then created in the middle of the culture with a sterile pipette tip after the cells reached 100% confluence. After washing the cells twice with PBS to smoothen the edge of the scratch and remove floating cells, the EPCs were incubated at 37 °C with 5% CO2. Cell images at 0 and 48 h were captured under an inverted optical microscope (SW380T, Swift Optical Instruments, Schertz, TX, U.S.A.). Cell migration was measured with Image-Pro Plus analysis software (Version 6.0, Media Cybernetics Company, U.S.A.).
Tube formation assay
The vascular formation capacity of EPCs was evaluated using tube formation assay and Matrigel plug assay. Matrigel (R&D Systems, Minneapolis, MN, U.S.A.), pre-thawed at 4 °C overnight, was diluted with serum-free medium, layered in 96-well plates and incubated at 37 °C for 30 min to allow polymerization. Subsequently, EPCs (2 × 10^4 cells/ml) were plated onto the Matrigel layer in EGM™-2 MV medium, and the formation of capillary-like structures was later captured using an inverted microscope (IRB20, Microscope World, Carlsbad, CA, U.S.A.) with tube formation ACAS image analysis software (v.1.0, ibidi GmbH, Gräfelfing, Germany).
Target gene and dual-luciferase reporter assay
StarBase (http://www.starbase.sysu.edu.cn) predicted that the target gene of miR-361-5p was FGF1, which aroused our research interest. A previous study found that miR-361-5p plays an important role in cell proliferation and invasion [18]. FGF1 has also been found to promote the activity, proliferation, angiogenesis and anti-apoptosis of EPCs [15]. The predicted targeting relationship was subsequently confirmed by dual-luciferase reporter assay. A pMIR-REPORT luciferase vector (catalog number: AM5795; Thermo Fisher Scientific, U.S.A.) containing the sequences of wild-type or mutated FGF1 3′-UTR was cloned into the pmirGLO reporter vector (Promega, Madison, WI, U.S.A.) to form FGF1-WT and FGF1-MUT. EPCs (1 × 10^4 cells/ml) were then co-transfected with FGF1-WT or FGF1-MUT and miR-361-5p agomir or miR-NC agomir using Lipofectamine 2000 transfection reagent (Thermo Fisher Scientific, U.S.A.) at 37 °C. The Renilla reporter gene in the luciferase reporter vector was used as an internal control. Cells were harvested 48 h after transfection for luciferase detection with a dual-luciferase reporter assay system (E1910; Promega, Madison, WI, U.S.A.) following the producer's protocols. Firefly luciferase activity was normalized to Renilla luciferase activity.
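The normalization step can be expressed as below, a sketch with hypothetical luminescence readings: each firefly signal is divided by its Renilla internal control, and the mean ratio is expressed relative to the miR-NC agomir group.

```python
# Hypothetical (firefly, Renilla) luminescence pairs per replicate well.
nc_agomir = [(9800, 5100), (10250, 5300), (9900, 5000)]
mir_361_5p_agomir = [(4100, 5200), (3900, 4900), (4300, 5150)]

def mean_normalized(readings):
    # Normalize firefly to the Renilla internal control, then average replicates.
    ratios = [firefly / renilla for firefly, renilla in readings]
    return sum(ratios) / len(ratios)

relative_activity = mean_normalized(mir_361_5p_agomir) / mean_normalized(nc_agomir)
print(f"relative luciferase activity (FGF1-WT): {relative_activity:.2f}")
```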
Construction of animal model
SD rats (10 weeks old, 280-300 g), regardless of sex, were obtained from Guangdong Medical Laboratory Animal Center (Foshan, Guangdong, China) and kept in specific pathogen-free animal rooms. The rat model construction was well described in previous studies [2,16]. In brief, the rats were anesthetized by intraperitoneal injection of 7% pentobarbital and underwent midline laparotomy to dissect the inferior vena cava (IVC) from the aorta. The IVC was subsequently ligated just below the upper renal vein using 7-0 Prolene sutures; meanwhile, the posterior venous branches were tightened. Then, flow at the confluence of the iliac veins was discontinued using a pair of vascular clips for 15 min. After that, the incision was closed and the rats were allowed to recover after surgery. The rats in the Sham group received a dissection of the IVC but without ligation.
Three days after model construction, the SD rats were divided into four groups at random (total number = 40; n = 10 per group): (A) the Sham group received an injection of 2 ml of EGM™-2 MV medium (Lonza, U.S.A.) with IVC exposure; (B) the Model group received an injection of 2 ml of EGM™-2 MV medium after model construction; (C) the EPC group received an injection of 1 × 10^6 EPCs carrying miR-NC via the tail vein; and (D) the EPC + miR-361-5p antagomir group received an injection of 1 × 10^6 EPCs transfected with miR-361-5p antagomir via the tail vein.
Histopathologic examination with Hematoxylin-Eosin (H&E) staining
The rats were anesthetized and sacrificed with intraperitoneal injection of ketamine (100 mg/kg) and xylazine (10 mg/kg) 7 days after the injection. Segments of the IVC containing the thrombus were harvested with caution, fixed in 4% paraformaldehyde and embedded in dissolved paraffin. Excess blood on the thrombi was removed using filter paper. Specimens were finally stained with Hematoxylin-Eosin (H&E) and analyzed with an inverted light microscope (CKX53; Olympus, Tokyo, Japan) in the dark.
Statistical analysis
In our study, all experiments were independently performed more than three times. The experimental data are expressed as mean ± standard deviation. Statistical analysis was performed with SPSS 21.0 software (IBM Corporation, Armonk, NY, U.S.A.). Normal distribution and variance homogeneity were tested for all data. Differences between multiple groups were compared by one-way ANOVA, and differences between two groups by Student's t test. A P-value < 0.05 was considered statistically significant.
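A minimal sketch of the described analysis using SciPy in place of SPSS (the group values are hypothetical):

```python
from scipy import stats

# Hypothetical measurements from three independent experiments per group.
sham = [1.00, 0.97, 1.03]
model = [0.61, 0.58, 0.64]
treated = [0.88, 0.85, 0.91]

# One-way ANOVA for differences among multiple groups.
f_stat, p_anova = stats.f_oneway(sham, model, treated)
# Student's t test for differences between two groups.
t_stat, p_ttest = stats.ttest_ind(model, treated)

print(f"ANOVA: F = {f_stat:.2f}, P = {p_anova:.4f}")
print(f"t test (Model vs. EPC + antagomir): t = {t_stat:.2f}, P = {p_ttest:.4f}")
print("significant (P < 0.05)" if p_ttest < 0.05 else "not significant")
```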
Culture and identification of EPCs
In line with previous studies, changes in the morphology and number of EPCs were observed under an inverted optical microscope. Shortly after isolation, colonies of peripheral blood mononuclear cells (PBMCs) exhibited a round morphology and were suspended in the medium (Figure 1A). Then, 7 days after culture, an elongated spindle-shaped morphology with formed central clusters was observed (Figure 1A). The PBMCs also began to merge at passage 3 (Figure 1A). The isolated PBMCs were identified by confocal microscopy, and double staining with the functional markers FITC-UEA-I and Dil-Ac-LDL suggested that the isolated PBMCs were EPCs (Figure 1B). The EPCs were further characterized by flow cytometry. CD31, CD133, vWF and VEGFR2 are markers of EPCs [20], and therefore their expressions were measured by flow cytometry to confirm the identity of the EPCs. As shown in Figure 1C, the flow cytometry results showed that P10 cells were 87.1% positive for CD31, 2.17% positive for CD133, 85.6% positive for vWF and 94.8% positive for VEGFR2, suggesting that the isolated mononuclear cells were EPCs.
To further uncover the effects of miR-361-5p on EPC functions, we measured the viability, migration and tube formation of EPCs after transfection with miR-361-5p agomir or antagomir. MTT assay showed that the viability of EPCs at 24 and 48 h was reduced in the miR-361-5p agomir group (Figure 2B, P<0.01), while that in the miR-361-5p antagomir group showed the opposite trend at 24 and 48 h (Figure 2B, P<0.05), indicating that down-regulating miR-361-5p could promote cell viability. The migration of EPCs was then determined using wound-healing assay. The data revealed that the relative migration rate of EPCs in the miR-361-5p agomir group was decreased (Figure 2C, P<0.001), while that in the miR-361-5p antagomir group was increased (Figure 2C, P<0.05), suggesting that down-regulating miR-361-5p expression promoted the migration of EPCs. Finally, EPC tube formation was detected with tube formation assay; both the branch points and the relative tube length in the miR-361-5p agomir group were reduced (Figure 2D, P<0.01), while those in the miR-361-5p antagomir group were increased (Figure 2D, P<0.01), suggesting that EPC tube formation could be enhanced by down-regulating miR-361-5p.
FGF1 was the target of miR-361-5p and overexpressed FGF1 reversed the effects of miR-361-5p agomir on FGF1 expression
MiRNAs combine with the 3′-UTR of target mRNAs to regulate gene expression [9]. By applying StarBase, we found that FGF1 might be a possible target of miR-361-5p, because it contains miR-361-5p binding sites in its 3′-UTR (Figure 3A). To further confirm that miR-361-5p could bind FGF1, we built a luciferase reporter vector containing the 3′-UTR. The results demonstrated that the relative luciferase activity in the FGF1-WT group was reduced in the presence of miR-361-5p agomir (Figure 3B, P<0.001). However, no significant difference was detected in the luciferase activity with miR-361-5p agomir in the FGF1-MUT group (Figure 3B). These results suggested that FGF1 is a target of miR-361-5p.
Overexpression of FGF1 reversed the inhibitory effects of miR-361-5p agomir on the viability, migration and tube formation of EPCs
To uncover the effects of miR-361-5p and FGF1 on EPC viability, migration and tube formation, EPCs were transfected with miR-361-5p agomir together with FGF1 overexpression vector or NC. MTT assay showed that EPC viability was reduced after miR-361-5p agomir was transfected into the cells, while overexpressed FGF1 had the opposite effect (Figure 4C, P<0.001). In addition, overexpressed FGF1 reversed the effects of miR-361-5p agomir on EPC viability (Figure 4C, P<0.01). In wound-healing assay, the relative migration of EPCs in the miR-361-5p agomir + NC group was reduced, whereas overexpression of FGF1 led to the opposite result (Figure 4D, P<0.001). Furthermore, overexpressing FGF1 in EPCs reversed the effects of miR-361-5p agomir on cell migration (Figure 4D, P<0.001). Moreover, tube formation assay showed that the relative branch points and tube length of the EPCs were reduced following up-regulation of miR-361-5p (Figure 4E, P<0.01), while overexpressed FGF1 produced the opposite effect (Figure 4E, P<0.001), and overexpression of FGF1 in the EPCs reversed the effects of miR-361-5p agomir (Figure 4E, P<0.01).
EPCs with miR-361-5p antagomir showed promotion on thrombus resolution in the vein
As shown in Figure 5, H&E staining showed a normal vein in the Sham group, while in the Model group, nucleated cells (monocytes, endothelial cells and neutrophil granulocytes) were found entering the thrombus perimeter on day 7. Moreover, the red blood cells, platelets and fibrin were stained red in the center of the thrombus in the Model group. In the EPC/miR-NC antagomir group, more nucleated cells and channels, with reduced thrombus, were detected on day 7 compared with the Model group. In the EPC/miR-361-5p antagomir group, even more nucleated cells were found entering the thrombus, and we observed small fractures in the perimeter of the thrombus and the formation of tube structures containing red blood cells. Collectively, these results suggested that EPCs with miR-361-5p antagomir promoted thrombus resolution in the vein.
Discussion
Thrombosis, which refers to blood clot formation inside blood vessels, can obstruct blood flow in the circulatory system [16]. Endothelial cells in the normal state express molecules with anticoagulant effects and inhibit the formation of fibrin [21]. Moreover, as endothelial cells may induce tissue repair and tube formation [22], they now play an important role in thrombosis prevention and treatment. EPCs function as precursor cells for mature endothelial cells, and their potential to differentiate into all capillary niches allows them to contribute to vascularizing engineered tissues [23]. Many studies have uncovered the relation between the proliferation, migration and tube formation of EPCs and DVT recanalization. Mo et al. demonstrated that down-regulation of miR-195 could regulate the proliferation, migration, angiogenesis and autophagy of EPCs by targeting GABA type A receptor-associated protein-like 1 (GABARAPL1), and Li et al. discovered that miR-3120 was implicated in the mechanisms by which the long non-coding RNA Wilms Tumor 1 Associated Protein Pseudogene 1 (LncRNA WTAPP1) promoted EPC migration and angiogenesis [24,25]. In our study, consistent with these previous discoveries, we found that EPCs transfected with miR-361-5p antagomir could partially promote thrombus resolution in the vein.
Recently, the functions of miRNAs in the regulation of vascular development, homeostasis and differentiation have been widely explored [26,27]. MiRNAs also affect EPC function in angiogenesis [28]. MiR-361-5p, in particular, has been found to be overexpressed in vascular cells, including EPCs, where it inhibits their activities [13]. Wang et al. demonstrated that miR-361-5p could suppress EPC activity by targeting VEGF in patients with coronary artery disease [29]. In the present study, we examined the biological behaviors of EPCs at the cellular level and found that transfection of miR-361-5p antagomir promoted the viability, migration and tube formation of EPCs, suggesting that down-regulating miR-361-5p promotes EPC function, consistent with previous studies [29]. Conversely, upregulating miR-361-5p significantly reduced the viability, migration and angiogenesis of EPCs. These results encouraged us to study the underlying mechanism. Using the starBase database, we predicted the target genes and potential binding sites of miR-361-5p. FGF1, predicted here as a target of miR-361-5p, has been reported to promote the activity, proliferation and angiogenesis of EPCs [15]. We then confirmed the targeting relationship between miR-361-5p and FGF1 by dual-luciferase reporter assay.
FGF1 is a member of the FGF family and a growth factor involved in cell proliferation, migration, invasion and angiogenesis [14]. Up-regulation of FGF1 expression has been reported to ameliorate atherosclerosis [30]. FGF1 induced by ERK1/2 signaling can reciprocally regulate the proliferation and smooth muscle cell differentiation of ligament-derived EPC-like cells [31]. In addition, adipose-derived mesenchymal stem cells (AD-MSCs) transfected with FGF1 have been found to promote angiogenic proliferation [32]. However, the relationship between FGF1 and miR-361-5p has rarely been examined. In our study, bioinformatics analysis identified a binding site for miR-361-5p in FGF1, suggesting that FGF1 is a target of miR-361-5p. We then discovered that down-regulating miR-361-5p promoted FGF1 expression, and that FGF1 overexpression reversed the inhibitory effects of miR-361-5p up-regulation on the viability, migration and tube formation of EPCs. MiRNAs also participate in promoting DVT recanalization and resolution [28], but the role of miR-361-5p in DVT recanalization had not yet been examined. Our in vitro experiments showed that miR-361-5p regulates the proliferation, migration and angiogenesis of EPCs through FGF1. We then carried out in vivo experiments to explore the effect of miR-361-5p on venous thrombosis and found that miR-361-5p plays an important role in DVT recanalization: histopathological observation showed that EPCs transfected with miR-361-5p antagomir promoted DVT recanalization, indicating that down-regulating miR-361-5p expression in EPCs promotes thrombus resolution in the vein and that miR-361-5p could be a potential target for DVT treatment.
Our study has limitations that should be noted. With only limited in vivo experiments, the mechanisms of action of miR-361-5p on EPCs and DVT recanalization have not been fully elucidated; this will be addressed in future studies. In addition, the present study did not assess cell senescence, which is another limitation.
In conclusion, our study revealed a novel role of miR-361-5p in DVT recanalization in a rat model in vivo, and showed that down-regulation of miR-361-5p promotes the viability, migration and tube formation of EPCs by targeting FGF1. Down-regulation of miR-361-5p could therefore serve as a potential therapeutic strategy for DVT in clinical practice. | 2020-10-06T13:06:03.816Z | 2020-09-28T00:00:00.000 | {
"year": 2020,
"sha1": "4a3f25de7acdc4784566c90d605310a167d1a0f5",
"oa_license": "CCBY",
"oa_url": "https://portlandpress.com/bioscirep/article-pdf/40/10/BSR20200557/895176/bsr-2020-0557.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "6b4e9462fa6b9339c70f3937999fbdccf7b2bcaa",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
} |
151384219 | pes2o/s2orc | v3-fos-license | The Effect of Generate Argument’ Instruction Model to Increase Reasoning Ability of Seventh Grade Students on Interactions of Living Thing with their Environment
This study aims to examine the effect of the Generate an Argument instruction model on students' thinking skills, especially reasoning ability, in the lesson material on interactions of living things with their environment. The study uses a weak experimental method with a one-group pretest-posttest design. The sample consists of 34 seventh-grade students at a junior high school in Ciamis. The instrument used to collect data is an essay-question reasoning ability test based on Marzano's reasoning framework, which comprises eight indicators: comparing, classifying, induction, deduction, constructing support, analyzing perspectives, analyzing errors, and abstraction. Overall, the results show a significant increase in the students' reasoning ability (Sig. = 0.000). The increase in reasoning ability was also examined by gender, and no significant difference (Sig. = 0.168) was found between male and female students. The increases in reasoning ability fell into two categories: middle and low.
Introduction
One of the thinking skills that needs to be developed in students studying science is reasoning ability, which is closely associated with argumentation. The two abilities are related to one another; this is in line with Basel et al., whose analysis of students' arguments on the concept of evolutionary theory suggests that analysing arguments gives us the opportunity to concentrate on practised skill in reasoning and arguing [4]. Moreover, while the low complexity of students' arguments seems obvious, the use of various argument schemes suggests that these arguments come from everyday experience. This shows that all students appear to have basic argumentation skills derived from everyday life or other scientific contexts. According to Venville and Dawson, argumentation skills play an important role in formal reasoning [24]; in addition, argument-based interventions positively affect the quality of formal reasoning about problems, mainly socioscientific ones. Konstantinidou and Macagno explain that there are strategies and methods that can encourage the active participation of students in developing reasoning ability through argumentation; recent research shows increasing application of argumentation as an educational strategy and teaching method [7]. Based on these problems, we need teaching strategies or methods that can develop reasoning skills. One way to do this is to implement the Generate an Argument instructional model, a learning model that encourages students to argue actively in groups. According to Sampson and Schleigh, this model is designed to give small groups of students the opportunity to develop claims that answer research questions based on the available data [17].
In classroom learning, not all students are always actively involved in their group, so students' contributions to learning and group discussion can be uneven. Active participation of students during the learning process is influenced by gender. Lowrie and Diezmann conducted research on gender differences among secondary school students in a problem-solving test related to graphs; the results showed gender differences favouring boys in the graphical languages of mathematics among 9- to 12-year-old students [10]. Other studies (Lorenzo, Crouch & Mazur, 2006; Pollock, Finkelstein & Kost, 2007, in Nieminen et al., 2012) also showed that male students outperform female students in understanding the concepts of force, motion, electricity, and magnetism in physics.
Sampson and Schleigh argue that the Generate an Argument instructional model was designed to give small groups of students the opportunity to develop claims answering questions about a problem on the basis of the available data [17]. As part of this process, each group constructs a tentative argument and then presents its claims and the supporting evidence using media that can be viewed by others. Each group then has the opportunity to share its ideas during the argumentation session. Experience with this model shows that it offers useful benefits for teachers because it can be used as a template or guide for designing lessons aimed at delivering content in the existing curriculum [16]. The Generate an Argument instructional model consists of four stages: identification of the problem, question, and task; generation of a tentative argument; the argumentation session; and group sense-making [16], [13], [20].
Stiggins suggests that reasoning is the application of knowledge in the context of problem solving [21]. Septiana argues that students' reasoning is the result of receiving and processing information during the learning process [18]. Kusumawati and Woro [8] argue that reasoning requires one to derive specific results from observation, facts and conjectures. Reasoning ability is one aspect of cognitive intelligence possessed by each individual, and individuals differ in their capacity for reasoning. Reasoning ability is required of students when they face tests, when solving problems, and throughout the learning process.
Students' reasoning ability develops naturally and continuously as their level of education increases. According to Inhelder and Piaget, basic reasoning ability develops from about the age of 4, but grows more rapidly when a person enters adolescence [5]. The development of reasoning ability differs across ages; in adolescence it can be developed through simple problem-solving strategies, through which one can integrate knowledge and problem-solving ability as part of the learning process [19].
Although everyone basically possesses reasoning ability from an early age, this ability must be continually trained in order to develop properly. The development of reasoning ability is also a result of successive teaching and practice [15]. Students' reasoning ability needs to be developed in adolescence because it is essential for learning the concepts of science [9]. Reasoning ability can be developed through instructional designs that require students to use it; for example, inquiry-based science and mathematics learning can help develop students' reasoning ability [3].
Reasoning leads to complex thought [12]. Reasoning ability can be measured through a series of tests prepared according to a particular framework. The conceptual framework offered by Marzano consists of cognitive and affective components [21]. The reasoning process comprises five dimensions: attitudes and good perceptions of learning, acquiring and combining knowledge, extending and refining knowledge, using knowledge meaningfully, and habits of mind [11], [12]. According to Dugari, the cognitive dimensions of Marzano's reasoning framework can be applied as a learning strategy that improves achievement in science [2]. In addition, according to Salamat, learning based on Marzano's cognitive dimensions can improve critical thinking skills in understanding physics concepts [2]. Another study (Baz, 2001) stated that chemistry learning through strategies adapted from Marzano's cognitive dimensions improved student achievement in higher-level thinking skills such as decision making, critical thinking, and creative thinking [2]. In this study, the assessment of students' reasoning ability focuses on the second cognitive dimension, which consists of eight aspects of reasoning: comparing, classifying, induction, deduction, error analysis, constructing support, abstraction, and analyzing perspectives.
Research methodology
This research uses a weak experimental method with a one-group pretest-posttest design. The sample consists of 34 seventh-grade students at a junior high school in Ciamis. Students' reasoning skills are assessed through a reasoning ability test consisting of 13 essay questions covering the 8 reasoning indicators. The test is administered twice: before treatment (pretest) and after treatment (posttest).
Results and discussion
Based on the results of data processing, the average posttest score of students' reasoning ability increased compared with the pretest score. A paired-samples t-test showed a significant difference (Sig. = 0.000 < 0.05), as detailed in Table 1. The indicators of reasoning ability assessed in this study consisted of eight indicators: comparing, classifying, induction, deduction, constructing support, analyzing perspectives, analyzing errors, and abstraction; the score for each is shown in Table 2. In addition, students' ability was examined by gender, distinguishing groups of male and female students. Based on the results of data processing, the average posttest score increased compared with the pretest in both groups, as shown in Table 3. The stages of the Generate an Argument instructional model help students develop reasoning related to the third dimension of Marzano's framework, extending and refining their knowledge. From the first stage to the final stage, students are required to actively develop their knowledge; the stimulus is given at the beginning of learning in the form of problem presentation. In addition, students are given the opportunity to refine their knowledge through the argumentation session, which allows them to express opinions not only in their home groups but also to other groups that visit. Thus students not only acquire knowledge through the learning process but can also develop and refine the knowledge gained.
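For readers who want to reproduce the two statistics used here, the sketch below shows a paired-samples t-test and a normalized-gain calculation. It assumes Hake's formula g = (post − pre)/(max − pre) with the usual cut-offs (low: g < 0.3; middle: 0.3 ≤ g < 0.7; high: g ≥ 0.7), which matches the low/middle categories reported, although the paper does not state its N-Gain formula explicitly; the scores are illustrative.

```python
# Sketch of the two analyses reported: paired-samples t-test and N-Gain.
# Assumes Hake's normalized gain; the scores below are illustrative only.
from scipy import stats

pretest  = [30, 42, 35, 28, 40, 33]   # hypothetical reasoning-test scores
posttest = [45, 55, 50, 39, 52, 47]
MAX_SCORE = 100

t, p = stats.ttest_rel(posttest, pretest)
print(f"paired t-test: t = {t:.2f}, Sig. = {p:.3f}")

def n_gain(pre, post, max_score=MAX_SCORE):
    """Hake's normalized gain for one student."""
    return (post - pre) / (max_score - pre)

def gain_category(g):
    return "high" if g >= 0.7 else "middle" if g >= 0.3 else "low"

gains = [n_gain(a, b) for a, b in zip(pretest, posttest)]
mean_g = sum(gains) / len(gains)
print(f"mean N-Gain = {mean_g:.2f} ({gain_category(mean_g)})")
```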
Learning experience is very important for student achievement and learning outcomes. Through the various phenomena and problems presented during the learning process, students gain knowledge and experience that is more concrete because they are directly involved and experience it themselves. The learning applied in this study paired the Generate an Argument instruction model with direct observation while students prepared their arguments. The Generate an Argument instruction model places more emphasis on study conducted outside the laboratory, so students are not required to obtain data directly but draw on literature and data from previous observations; the model was designed to develop arguments based on data obtained from the literature, given the limitations of direct observation and data collection [16]. Students' thinking ability, in this case reasoning ability, improved significantly after the lessons were completed. Presenting problems at the start of learning, developing a tentative argument, and the argumentation sessions helped students to think systematically and logically and to develop answers based on observation combined with theory or concepts from the literature.
Although students' reasoning abilities increased significantly, as seen from the average posttest scores, obstacles still occurred during learning, and the low category of improvement based on the N-Gain value is likely due to them. Students' fairly good response was also evident from their enthusiasm, which increased from one lesson to the next, although in the first lesson most students were still confused by a learning pattern different from the one they use every day. Students' ability to communicate was still relatively low: in the first lesson students did not yet dare to ask questions or to express or rebut opinions. This is because the learning process commonly used every day includes little practice of higher-level thinking skills.
Looking at the N-Gain scores, the average increase in reasoning ability of both male and female students falls into the low category; however, the N-Gain of the male group is higher than that of the female group. The researchers initially suspected that the increase in the male group would differ significantly from that of the female group. But the concept of gender itself is not absolute; it is also influenced by other factors in the surroundings. Gender denotes traits attached to men and women socially and culturally, traits that can be interchanged, may change over time, and differ from one place to another [6]. Nevertheless, some research shows more positive science achievement among male students than female students [23].
The absence of a significant difference between the male and female groups suggests that the Generate an Argument instructional model works equally well for both groups, so the two groups' achievements did not differ significantly. A study of gender differences in concept understanding and scientific reasoning ability on the concept of force reported the same: learning the concept of force showed no significant difference between male and female students, although male students' pretest and posttest results were better, indicating that the learning methods used were effective for both groups [14]. Other results differ, showing that male students' reasoning ability is lower than that of female students [22]. Further studies likewise show that male students' reasoning abilities are generally lower than girls', except on the indicator of drawing conclusions logically [8].
All stages of the Generate an Argument instructional model were followed by both the male and the female group, so all students had an equal opportunity to develop their thinking. In addition, the material studied involves concepts close to students' everyday lives, namely the interaction of living things with the environment. Consequently, male and female students did not differ much in reasoning ability at the end of learning. This shows that the characteristics of the subject matter also influence the achievement of male and female students, so subject characteristics affect the significance of gender differences.
Observation of student learning activity showed that each stage was generally followed well. Looking at enthusiasm for learning, however, male students appeared more enthusiastic during group learning and observation activities. This is presumably because male students prefer student-centred learning relevant to problem solving, whereas female students prefer writing activities [1]. This may also help explain why there was no difference between the reasoning ability of male and female students, since the reasoning ability test was administered in writing. In this case, then, gender is not the major determinant of success on a reasoning-based test, because external factors, such as differences in the learning styles preferred by male and female students, also indirectly affect the achievement of reasoning abilities.
Conclusion
Based on the analysis, it can be concluded that there is a significant difference between students' pretest and posttest reasoning abilities after learning with the Generate an Argument instructional model, with a paired-samples t-test significance of 0.000 < α (0.05). This means the Generate an Argument instructional model has a measurable effect on reasoning ability, even though the N-Gain score is still in the low category. Viewed by gender, there is in general no difference in the increase in reasoning ability; this is indicated by average N-Gain scores in the same (low) category. | 2019-05-10T13:09:36.219Z | 2017-02-01T00:00:00.000 | {
"year": 2017,
"sha1": "7c24c7ef1fc42cea9dd4458d06e29bf7dcbed0e5",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/812/1/012042",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "b52c12878facc4528ad24986249c929d6ffc89b4",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Physics",
"Psychology"
]
} |
6383291 | pes2o/s2orc | v3-fos-license | Nonlysosomal vesicles (acidosomes) are involved in phagosome acidification in Paramecium.
Although acidification of phagocytic vacuoles has received a broadened interest with the development of pH-sensitive fluorescent probes to follow the pH changes of vacuoles and acidic vesicles in living cells, the mechanism responsible for the acidification of such vacuoles still remains in doubt. In previous studies of the digestive vacuole system in the ciliate Paramecium caudatum we observed and described a unique population of apparently nonlysosomal vesicles that quickly fused with the newly released vacuole before the vacuole became acid and before lysosomes fused with the vacuole. In this paper we report the following: (a) these vesicles, named acidosomes, are devoid of acid phosphatase; (b) these vesicles accumulate neutral red as well as acridine orange, two observations that demonstrate their acid content; (c) cytochalasin B given 15 s after exposure of the cells to indicator dye-stained yeast will inhibit the acidification of yeast-containing vacuoles; and that (d) we observed using electron microscopy, that fusion of acidosomes with the vacuole is inhibited by cytochalasin B. We conclude that the mechanism for acidification of phagocytic vacuoles in Paramecium resides, at least partially if not entirely, in the acidosomes.
Recent papers that have used pH-sensitive fluorescent probes (1) to follow the time course of the pH changes in phagosomes (2-5) and endosomes (1, 4, 6-8) of various cells suggest that the pH of a vesicle or vacuole that derives from the cell surface often becomes acid before such an organelle fuses with lysosomes. Thus the mechanism for the initial acidification of these organelles would not come from the lysosomes, though some lysosome membranes, such as those of rat liver cells, have now been shown to have a proton pump (9, 10), nor would the acidification in these cases result from the digestive processes that would typically follow lysosome fusion. In Paramecium it has long been known that the pH of the digestive vacuole falls quickly following the release of the vacuole from the oral region (11, 12). We recently reported a detailed study of the time-course of the pH changes in the vacuoles of Paramecium caudatum grown in axenic medium and showed that the vacuolar pH falls from 7.0 to 3.0 within 5 min (13). By 8 min, the pH begins to rise rapidly to neutrality. The initial acidification parallels the early rapid vacuole condensation (13), while the rise in pH at 8 min corresponds to the approximate time when vacuoles first fuse with lysosomes (14) and acquire acid phosphatase activity (15).
We have also described a population of rather large vesicles (up to 1 μm in diameter), which we termed phagosomal fusion vesicles (PFVs) (16, 17). These PFVs were observed to bind to the forming digestive vacuole membrane, but they do not fuse with the vacuole until the vacuole has been released from the oral region of the cell and has moved a distance of some 30 μm to the posterior end of the cell. By 15 to 30 s these PFVs have fused with the vacuole. Because these PFVs fuse with digestive vacuoles around the time the vacuolar pH begins to fall, we investigated their potential role in vacuole acidification.
MATERIALS AND METHODS
For bright-field microscopy, log-phase P. caudatum cells growing in an axenic medium at pH 7.0 (18) were concentrated in nylon cloth before incubating for ~8 min in neutral red (0.05 mg/ml axenic medium). Cells were washed several times with fresh axenic medium and allowed to stand for 10 min or longer. Cells were then observed and photographed using a Zeiss microscope after their swimming had been impeded by the pressure of the coverslip. This pressure had no other observable effects, and normal cellular activities such as the filling and emptying of the contractile vacuoles and the rapid posterior movement of newly formed digestive vacuoles continued.
For fluorescence microscopy, concentrated cells were incubated in acridine orange (50 μg/ml) for 1.5 min. The cells were then washed free of acridine orange with fresh axenic medium and observed using a Zeiss epifluorescence microscope.
To test the effects of cytochalasin B (CB) on vacuole acidification, we first placed a population of cells in medium containing heat-killed, bromcresol green-stained yeast for 15 s before the addition of CB (at 143 μg/ml in a final concentration of 0.4% dimethylsulfoxide [DMSO], vol/vol). (The typical fluorescence-probe techniques used by others are not useable below pH 4, such as is attained in digestive vacuoles of Paramecium.) Half of the cells formed at least one digestive vacuole during this pre-CB pulse. At predetermined intervals aliquots of cells were removed, spread on albuminized slides, and air dried. Air drying required roughly 2 min. The digestive vacuoles containing yeast were then scored for their colors, taking care to exclude those vacuoles near to and perhaps still continuous with the oral region. Bromcresol green-stained yeast were blue at pH 7 or above, blue green at pH 6 to 6.5, green at pH 4.5 to 5.5, and yellow at pH 4 or lower (13). Colors were scored within 4 h of drying, since the yeast-containing vacuoles reverted back to blue when allowed to stand overnight. Control cells were placed in axenic medium containing yeast and 0.4% DMSO without CB.
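The colour-to-pH calibration quoted above lends itself to a simple scoring table. The sketch below encodes it as a lookup, purely to illustrate how scored vacuole colours could be tallied; the original scoring was, of course, done by eye.

```python
# Sketch: tallying bromcresol green-stained vacuoles by colour,
# using the pH calibration stated in the text.
BCG_PH = {
    "blue":       "pH >= 7",
    "blue-green": "pH 6 to 6.5",
    "green":      "pH 4.5 to 5.5",
    "yellow":     "pH <= 4",
}

def tally(observed_colors):
    """Count vacuoles per colour and attach the calibrated pH range."""
    counts = {}
    for c in observed_colors:
        counts[c] = counts.get(c, 0) + 1
    return {c: (n, BCG_PH[c]) for c, n in counts.items()}

# Hypothetical scoring of ten vacuoles
print(tally(["blue", "green", "yellow", "green", "blue-green",
             "yellow", "green", "blue", "yellow", "green"]))
```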
For electron microscopy (EM) cells were transferred to axenic medium which contained both polystyrene latex beads (to label the vacuoles for EM) and CB (only CB was omitted in control cells) for periods ranging from 0.25 to 30 min before fixation. Latex beads were used in place of yeast since Paramecium takes up these smaller particles more readily than yeast. Latex beads were suspended in axenic (nutrient) medium to ensure that vacuole processing would be unaffected. Processing for EM was carried out as previously described (19). The Gomori technique (20) was used to show the presence of acid phosphatase.
Phagosome Fusion Vesicles
PFVs are defined as the large vesicles that (a) specifically associate with the nascent vacuole membrane, (b) have an irregular shape, and (c) contain very little detectable material in electron micrographs (16). These vesicles are illustrated in Fig. 1 surrounding a nascent vacuole. In this experiment we exposed cells to a short pulse of horseradish peroxidase. The electron-opaque horseradish peroxidase-reaction product following incubation in H2O2 and diaminobenzidine can be seen lining the luminal surface of the vacuole and adsorbed to the latex beads. The absence of horseradish peroxidase-reaction product in these PFVs lying against the vacuole membrane indicates there had been no openings between these PFVs and the vacuole.
Accumulation of Weak Bases
Neutral red was distributed in a punctate pattern throughout the cell and around certain digestive vacuoles. Particularly prominent were the neutral red-containing vesicles close to the forming digestive vacuole (Fig. 2a). These vesicles were observed to remain near the vacuole's surface as the released vacuole moved to the cell's posterior end (Fig. 2 a).
Acridine orange also became concentrated in vesicles, giving a punctate pattern similar to that seen following neutral red exposure. The vesicles around the developing vacuoles (Fig. 2b) and newly released vacuoles (Fig. 2c) were especially bright orange. As a digestive vacuole was forming it acquired a coat of these vesicles, which moved posteriorly with the released vacuole (Fig. 2, b and c). Since the PFVs are the only vesicles found to line the nascent vacuoles in electron microscopy (Fig. 1), we conclude that the vesicles containing either neutral red or acridine orange that border nascent vacuoles are the PFVs.
CB Effects
We also investigated the effects of CB on the acidification of digestive vacuoles. Having observed that microfilaments occupy the space between the PFVs and vacuole membranes (16), we reasoned that this microfilament-active drug might affect the fusion of these vesicles. If the PFVs were indeed responsible for vacuole acidification, blocking their fusion with the newly released vacuoles should prevent acidification; this drug has been shown to reduce pH changes in macrophage (2) and neutrophil (3) phagosomes. In control cells placed in bromcresol green-stained yeast and 0.4% DMSO some 70% of the yeast-containing vacuoles became blue-green (pH 6-6.5), green (pH 4.5-5.5), or yellow (pH _<4) within 6 min (upper curve, Fig. 3). In cells treated with CB only 10 percent of the yeast-containing vacuoles formed during the 15-s pre-CB pulse became blue-green and none became green or yellow during the same time period (lower curve, Fig. 3). Thin sections of CB-treated cells showed the labeled vacuoles that were separated from the oral region to have PFVs remaining around them (Fig. 4 b) long after the PFVs would have fused in control cells (Fig. 4a). Serial sections were studied to determine that the PFVs were unfused. This concentration of CB stopped vacuole release (but not increase in diameter) for several minutes after which the vacuole number per cell increased slowly.
Acid Phosphatase Cytochemistry
Acid phosphatase reaction product was found in some digestive vacuoles, in lysosomes, and in smaller vesicles in the cytoplasm, as previously reported (12, 15, 21). However, no reaction product was found in the PFVs either in the cytopharynx region or around the newly formed digestive vacuole (Fig. 5). Reaction product was never found in any digestive vacuole with which PFVs were associated.
DISCUSSION
The observations reported here showed that the PFVs found around the developing and newly released vacuoles are themselves acid, for they accumulate both neutral red and acridine orange. Neutral red and acridine orange, which are weak bases, have been shown to accumulate in vesicles such as lysosomes that have an acid pH (22-25). This is presumably due to diminished membrane permeability by the protonated forms of these bases (6). The observation that acridine orange changes in color from green to yellow to red orange when this weak base becomes increasingly concentrated in an acid environment has previously been used to identify acid vesicles in cells (25) as well as to follow phagosome-lysosome fusion in macrophages (26). Our results confirm the observation of Mast (11) that neutral red granules (which seem to us to encompass both lysosomes and PFVs [16]) are located around developing vacuoles and that they move with the vacuole to the cell's posterior pole. Furthermore, our studies have shown that these acidic PFVs fuse with the vacuole before the vacuole becomes acid (13, 16, 17). Thus by fusing with newly released digestive vacuoles these PFVs will contribute their load of protons to the vacuole as well as the mechanism whereby the PFVs can accumulate these ions. This is analogous to the addition of proton pump-containing vesicles to the luminal surface of epithelial cells in turtle bladders (25). We conclude that these PFVs are responsible, at least in part, for bringing about the acidification of digestive vacuoles in P. caudatum and we propose that these now be called acidosomes.
To our knowledge, this is the first report of nonlysosomal vesicles being involved in acidification of phagocytic vacuoles. That these vesicles are nonlysosomal is deduced from the facts that (a) they fuse with the vacuole several minutes before the vacuole acquires acid phosphatase activity (15), (b) cytochemical studies show no acid phosphatase reaction product in these vesicles, and (c) morphologically they do not resemble lysosomes. Acidosomes contain neither the prominent glycocalyx on their luminal membrane surface nor the paracrystalline matrix material that is characteristic of lysosomes in P. caudatum (18, 27). Furthermore lysosomes concentrate only around the condensed and most acidic vacuoles (~5 min old) and fuse only with the vacuoles that are ≥8 min old (14). These vacuoles then become acid phosphatase positive (15).

FIGURES 1 and 2. Fig. 1: An electron micrograph of a digestive vacuole still attached to the oral region (or) of Paramecium caudatum. This cell had been pulsed 15 s with polystyrene latex beads (0.8 μm diam) and horseradish peroxidase, whose reaction product following incubation in 3,3'-diaminobenzidine and H2O2 formed the electron-opaque deposit lining the luminal side of the vacuole membrane and adsorbed to the latex beads. Acidosomes (previously called PFVs) (x) are bound to but are not fused with the forming digestive vacuole membrane, as indicated by their lack of horseradish peroxidase-reaction product. Bar, 2 μm. × 8,400. Fig. 2: Wet mounts of living P. caudatum incubated in neutral red (A) and acridine orange (B and C). Both compounds accumulated in a punctate pattern in the cells and were found within granules that rim the forming digestive vacuole (arrows, A and B); these granules moved with the released vacuole toward the posterior pole of the cell (arrowheads, A and C). The dark granules bordering the forming and newly released vacuoles in A were red, and the rims of similar vacuoles in B and C fluoresced bright orange, demonstrating that both neutral red and acridine orange were concentrated in the acidosomes that line these vacuoles and the cytopharynx (c) area. Both B and C are of the same cell; the developing vacuole in B (arrow) is seen in C (arrowhead) a few seconds after its release from the oral region (or). Other digestive vacuoles fluoresced yellow (y) or orange (o), depending on their pH and consequent concentration of acridine orange, or not at all (unlabeled circular dark profiles). n, macronucleus; cv, contractile vacuole. Black and white pictures were reproduced from Ektachrome 400 color film of living cells with original exposure times of 1/60 s (A) and 4 s (B and C). The long exposure time required in B and C coupled with cyclosis of the cytoplasm resulted in some blurring and in the lack of definition of individual vesicles around developing and newly released vacuoles. The apparent fluorescence within the developing vacuole in B is most likely an artifact caused by the bright orange acidosomes surrounding the vacuole but above and below the plane of focus. Bar, 10 μm. × 1,200.
Caution should be exercised in generalizing this finding to other cell types. Whereas the phagosomal membrane in Paramecium is derived from a pool of discoidal vesicles (19, 28), the phagosomal membrane in the phagocytic cells of mammals (29) or even in other protozoa such as amebae (30) is derived from the plasma membrane. If the plasma membrane of these other cell types contains proton pumps, their phagosome-acidifying mechanisms may be acquired directly from the plasma membrane rather than from a population of vesicles such as the acidosomes. Subsequent investigations are needed to clarify this question.
Fig. 3: …formation. When treated with CB (lower curve) following a 15-s preexposure to yeast, only 10% of these free vacuoles became acid, and then only mildly so (pH ~6.0). During this preexposure some 50% of the cells formed a single vacuole. Results are given as percentage values, mean ± SD of two to three experiments, and each point represents the average of 50-100 vacuoles. Each time point represents the time (~2 min) at which an aliquot of cells was spread on a slide and air dried. FIGURES 4 and 5. Fig. 4: (A) Acidosomes fused (arrowheads) with the membrane of a 30-s-old digestive vacuole containing latex beads (l) in a control cell. (B) Acidosomes remained bound to but unfused with a free labeled digestive vacuole that formed while the cell was continuously exposed to CB and latex beads (l) for 10 min. The actual time of the release of this vacuole is unknown, but only one acidosome was found to be continuous with a vacuole among the five vacuoles studied which had formed during the 10-20-min exposure to 143 μg/ml of CB. Sequences of serial sections were followed to establish that the membranes of acidosomes were not continuous with the vacuole membrane. Bar, 0.5 μm. × 25,000. Fig. 5: Portions of two vacuoles are shown, one containing electron-opaque acid phosphatase reaction product and one without. Acidosomes line the margin of the acid phosphatase-negative vacuole and are themselves devoid of acid phosphatase reaction product. The smaller lysosomes (arrows) contain reaction product. Bar, 0.5 μm. × 25,000. | 2014-10-01T00:00:00.000Z | 1983-08-01T00:00:00.000 | {
"year": 1983,
"sha1": "34d864a17eeabda89e107d351da2806b6cee1eeb",
"oa_license": "CCBYNCSA",
"oa_url": "https://rupress.org/jcb/article-pdf/97/2/566/1077128/566.pdf",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "34d864a17eeabda89e107d351da2806b6cee1eeb",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
266281445 | pes2o/s2orc | v3-fos-license | Indicator analysis as a way for the Second Department of Polish General Staff to counter the risk of chemical warfare being unleashed by Germany in the 1920s
After regaining its independence in 1918, Poland faced a number of security challenges. The most important of these was survival in the face of revisionist steps taken by aggressive neighbours, including Germany and the USSR. One important aspect of this threat was determining the risk of the Weimar Republic unleashing chemical warfare against the Second Republic. In order to cope with this intelligence task, the Second Department of Polish General Staff developed a number of instructions whose structure and internal logic are comparable to the indicator analysis technique developed only 60 years later by the American Intelligence Community. On the basis of material preserved in the State Archive in Gdańsk and contemporary textbooks on information analysis techniques, it is shown how officers of Polish military intelligence, decades before the method of indicator analysis was formalised, developed their own approach that is essentially identical to it. This demonstrates the remarkable innovation and organisational capacity of the newly forming intelligence service of the reborn state.
After independence in 1918 and during the formation of its borders until 1922, the reborn Republic was vulnerable to German revisionism. There were at least three reasons for this threat. Firstly, Poland had 2/3 of the industrial potential of Upper Silesia, which, together with the Ruhr and the also lost Alsace and Lorraine, was the most important industrial area of the Reich. Secondly, Greater Poland had been, before the First World War, an agricultural region renowned for its productivity, one of the most important for Germany's food economy. Thirdly, the Pomeranian Corridor divided Germany into two areas and physically cut off East Prussia from the main national territory. Therefore, from the very beginning of the reborn Polish state, the main task of its military intelligence was to obtain information that could provide evidence of Berlin's war preparations.

One of the most important changes in the methods of warfare brought about by World War I was the advent of the first weapons of mass destruction, i.e. war gases. The psychological effects associated with the use of these weapons were enormous. Whether through memoirs described in literature, witness accounts or images and photographs, the vision of a weapon capable of killing or permanently maiming thousands of people in an instant will always accompany humanity. It would be erroneous to associate the use of war gases only with the Western Front: in the eastern theatre of operations as well, including the areas of the reborn Second Republic, soldiers of the warring sides and civilians could come into contact with this deadly weapon. Between January and July 1915, the Germans attempted several times to break through the Russian positions near Bolimow using battle gas. In August of the same year, unable to capture the Osowiec fortress, they also opted for a chemical attack, which led to one of the most gruesome events in the history of wars, dubbed the attack of the dead 1. Later on, information about the use of poisonous gases appeared in reports from Greater Poland troops taking part in the uprising 2.

1 А.А. Черкасов, А.А. Рябцев, В.И. Меньковский, «Атака мертвецов» (Осовец, 1915 г.): миф или реальность (Eng. "The attack of the dead" (Osowiec, 1915): myth or reality), "Былые годы" 2011, no. 4, pp. 5-11. Despite a six-month siege, the Germans were unable to take the Osowiec fortress, so Marshal Paul von Hindenburg, commanding the German army, gave the order to launch a battle gas attack on Fort Zarzeczny and its advanced positions on the right bank of the Biebrza River. As a result of this attack, the entire 226th Infantry Regiment, which was defending the positions, was poisoned. The German soldiers, in a force of about 7,000, moved forward, thinking that they would take the abandoned positions. Meanwhile, opposite them, a company of horribly burned, dying Russians moved to counterattack. This caused such a shock to the attacking soldiers that they threw themselves into panicked flight, convinced that they were being fought by the living dead. The history of the struggle and the German gas attacks on the Russian positions on the Rawka River, commonly referred to as the Battle of Bolimow, has already been well researched and described in the national literature.
One of the most important changes in the methods of warfare brought about by World War I was the advent of the first weapons of mass destruction, i.e. war gases.The psychological effects associated with the use of these weapons were enormous.Whether through memoirs described in literature, witness accounts or images and photographs, the vision of a weapon capable of killing or permanently maiming thousands of people in an instant will always accompany humanity.To associate the use of war gases only with the Western Front is erroneous.Also in the eastern theatre of operations, including the areas of the reborn Second Republic, soldiers of the warring sides and civilians could come into contact with this deadly weapon.Between January and July 1915, the Germans attempted several times to break through the Russian positions near Bolimow, using battle gas.In August of the same year, unable to capture the Osowiec fortress, they also opted for a chemical attack, which led to one of the most gruesome events in the history of wars, dubbed the attack of the dead 1 .Later on, information about the use of poisonous gases appeared in reports from Greater Poland troops taking part in the uprising2 . 1 А.А.Черкасов, А.А.Рябцев, В.И.Меньковский, «Атака мертвецов» (Осовец, 1915 г.): миф или реальность, "Былые годы" 2011, no. 4, pp.5-11.Despite a six-month siege, the Germans were unable to take the Osowiec fortress, so Marshal Paul von Hindenburg, commanding the German army, gave the order to launch a battle gas attack on Fort Zarzeczny and its advanced positions on the right bank of the Biebrza River.As a result of this attack, the entire 226th Infantry Regiment, which was defending the positions, was poisoned.The German soldiers, in a force of about 7,000, moved forward, thinking that they would take the abandoned positions.Meanwhile, opposite them, a company of horribly burned, dying Russians moved to counterattack.This caused such a shock to the attacking soldiers that they threw themselves into a panicked flight, convinced that they were being fought by the living dead.The history of the struggle and the German gas attacks on the Russian positions on the Rawka River, commonly referred to as the Battle of Bolimow, has already been well researched and described in the national literature.
Therefore, the study of issues related to the possibility of the use of weapons of mass destruction by Germany was an important task for the Second Department of Polish General Staff (later the General Staff of the Polish Army, the Staff of the Commander-in-Chief).This is clearly demonstrated by the Polish intelligence documents analysed in this article, preserved in the State Archives in Gdańsk (hereinafter: SAG).
However, information acquisition is only one phase of the so-called intelligence cycle.It cannot function in isolation from the other phases, namely prior planning and tasking and subsequent analysis and distribution.The purpose of this article is to show how the above-mentioned documents reflect the principles concerning the organisation of analytical and information work adopted at the Second Department of Polish General Staff (hereinafter: Second Department).
Research methods, sources and definitions
The primary research method used in the article is a comparative analysis of the source intelligence documents of the Second Department in the light of the theory and practice of analytical and information operations.Particular attention has been paid to the definition of the intelligence cycle and indicators (indices).
The sources used in this article are documents collected in the collection Organization of intelligence and instruction, marked with the number 1107 and located in SAG 3. This collection consists of a relatively small body of only 212 pages of material relating to the work of Office 7 of the Second Department in Gdańsk. Most of it consists of personal data of agents, information on the dislocation of German troops, operational instructions, cipher instructions and guidelines on the concealment of writing. Among the materials, however, are four letters dealing directly with the acquisition of information on chemical weapons.
The book Theory and practice of analytical and informational activities4 by Józef Kozłowski (definition of indicators, the intelligence cycle) and English-language of the Wielkopolska Uprising.The Battle of the windmills and other skirmishes), Institute of National Remembrance, https://pw.ipn.gov.pl/pwi/historia/przebieg-walk-powstancz/sladami-powstaniawielk/8464,BITWA-POD-WIATRAKAMI-I-INNE-POTYCZKI.html [accessed: 5 VI 2023].According to Kozłowski, the intelligence cycle is divided into five stages.The first is planning, during which the tasks for the acquisition apparatus are defined.The second is the acquisition of information from various personal and technical sources.This is followed (the third stage) by their processing, when intelligence information is pre-processed, broken down into smaller pieces and verified.The fourth stage is their analysis, i.e. re-integration and aggregation, after which the intelligence information becomes intelligence information and thusimplicitly -a verified and objective picture of the situation for which the intelligence service already takes responsibility.The fifth stage is information dissemination.After this, the decision-makers, already familiar with the facts, set the next tasks, which start the planning of the new intelligence cycle 5 .
In this article, indicator analysis is understood as a periodic review of observed events and trends, done to track them, monitor targets, detect new trends and warn of unforeseen changes 6. It is therefore a structured analytical technique by which historical data, trends and doctrinal documents are examined in order to identify imminent threats at an early stage. This most often (although not exclusively) involves events such as the outbreak of armed conflict or internal upheaval. Indicator analysis thus involves observing reality for the occurrence or non-occurrence of specific events, phenomena and actions that may indicate an impending crisis. In addition to early warning, one of the purposes of this type of analysis is to identify information needs and to prepare the resulting tasks for the acquiring intelligence apparatus, as well as to create possible scenarios for the development of the situation 7. Indicators are therefore an extremely important part of the intelligence cycle and occur in two of its five stages: planning (tasking) and analysis.
In the US literature, structured research techniques are considered to have started in the 1980s. Their intensive development took place at the beginning of the 21st century, influenced by the reflections following the terrorist attack of 11 September 2001, which was one of the greatest failures in the history of the United States Intelligence Community 8.
Chemical intelligence in the collection of the State Archive in Gdańsk
In the documents of the Second Department preserved in the SAG, one can find information indicating concerns about the possibility of chemical warfare with Germany. In the collection numbered 1107 and entitled Organisation of intelligence and instruction, one can find a general instruction and three intelligence tasks for the Gdańsk office of the Second Department related to the acquisition of information concerning chemical armament in Germany 9.
The first of the documents, actually a set of four preserved annexes, unfortunately without a cover letter, sets out the tasks of chemical intelligence for Germany, East Prussia and Gdańsk. The importance attached to these tasks is best demonstrated by the fact that the document was signed by the Chief of the General Staff of the Polish Army himself, General Władysław Sikorski. Based on a reference in a later document, it should be dated mid-June 1921 10.
The document is divided into four parts. The first contains general instructions; the next three contain detailed instructions for chemical intelligence work in Germany, East Prussia and the Free City of Gdańsk. The General Staff instructed the offices of the Second Department to acquire, among other things: chemical warfare manuals, plans and blueprints for gas protection equipment, plans and specimens of the latest chemical weapons, all scientific literature on the production of chemical weapons, and photographs of technical equipment for the production of chemical warfare gases (after infiltrating selected chemical plants in Germany) 11, as well as penetrating the immediate environment of German scientists such as Fritz Haber and the disciples of Emil Fischer 12. The last point of this section of the manual is significant: It is important to note all the ways in which foodstuffs and their surrogates are made, cultures of useful micro-organisms (food yeast, glycerine yeast, lemon yeast, etc.), bacteriological work, studies of plague, typhoid and other infectious diseases 13.

11 Ibid., p. 147. Among the plants mentioned in the manual were the laboratories of Höchst, the Badische Anilin- und Soda-Fabrik (BASF), Bayer and Merck. The first three of these, together with the AGFA concern, formed the IG Farben corporation in 1925, which in time came to play an important role in the economy of the Nazi regime in Germany and was complicit in the production of poisonous gases and the crimes of genocide committed in the German extermination and concentration camps.
The tasks assigned to the Prussian area were of a different nature. Among other things, they included determining whether chemical troops were stationed in the area, whether factories producing war gases were located there, whether field conditions for their use were being prepared, and whether tanks in which poisonous substances could be stored appeared on the border with Poland 14.
Tasks assigned to the area of the Free City of Gdańsk included the observation of transports of substances that could serve as semi-finished products for the production of chemical weapons, such as: calcium, sulphur, tin and titanium chlorides, liquefied chlorine, ethylene, carbon tetrachloride, alanine, ferrocyanide salts, chlorhydrin, bromine, mercury and arsenic derivatives, acid-resistant steels, explosives, and organic toxins 15. The next assignment, dated 27 August 1921, is a general instruction on the conduct of chemical intelligence in Germany and refers to the previously sent assignments 16. Attention is drawn to a sentence in this instruction in which the Second Department admits that it has serious problems in obtaining information on German chemical weapons and is only just working out its working methods.
The next document found in the SAG 17 was written on 8 October 1921 and is addressed to Rittmaster Karol Dubicz, in charge of the Gdańsk office of the Second Department 18. Its author was Major Kazimierz Kierzkowski, then head of the Intelligence Unit of the Second Department 19. The first paragraph of this document reads as follows: In view of the great difficulty of directly observing the state of chemical armaments in East Prussia and the Mazurians, it is necessary to take advantage of the perhaps already meagre indications that can be drawn in this respect from the nature of the transports passing in an easterly direction through the Gdańsk corridor 20. The following paragraph says a lot about the reasoning of the analysts of the Second Department: Among the materials transported, some may bear witness to certain preparatory procedures preceding the great chemical armament (...) 21.
The goods transported were divided into two groups: ammunition and the components used to produce it, and chemicals. The former included: ready-made gas masks, glasses for them or masses for their manufacture, rubber materials or impregnated leather, fine coal, sheet metal cases or sheet metal for their manufacture, flexible or sprung rubber bands, waterproof clothing or materials impregnated with oil, filled or empty gas grenades or parts thereof, filled or empty gas shells, steel gas flasks or empty gas bombs, parts of mine-throwers or prepared throwers, threaded or smooth pipes between 4 and 8 inches in diameter, and sprayers and hydropults 22 of all systems 23. Among the chemicals, the following were considered to be of interest in the context of studying the issue of chemical weapons: chlorine liquefied in flasks, sulphur, sulphur chloride 24, ethyl and methyl alcohol 25, ethylene in gaseous tanks, toluol, xylol, benzol, concentrated nitric acid, liquid bromine or iron bromide, …

20 SAG, Organizacja wywiadu…, ref. no. 1107, p. 17. 21 Ibid.
22 Hydropult: a small hand pump used to extinguish small fires and pump out water. 23 SAG, Organizacja wywiadu…, ref. no. 1107, p. 17. 24 It was used to vulcanise rubber products. 25 In the original: ordinary and woody.
…part of them might go by sea or pass through the corridor at large intervals in order to confuse their interdependence. Nonetheless, keeping as meticulous a note as possible of the items listed above may prove useful in some cases 27.
Source: State Archive in Gdańsk.
The SAG also contains information about another, later assignment that may be related to the issue described. On 5 December 1921, Major Kierzkowski instructed Rittmaster Dubicz to obtain the latest edition of a German textbook on organic chemistry published by Beilstein and entitled Handbuch der Organischen Chemie 28.
Tasks for Rittmaster Karol Dubicz and indicator analysis
The instructions to Rittmaster Dubicz are a typical task sent from the headquarters of the intelligence service to the capturing apparatus and represent the beginning of a new intelligence cycle. At the same time, they are the result of the completion of another cycle: there is no doubt that the Second Department first had to obtain information on the methods and means of producing chemical weapons. Part of this may have been information gained from experts in the field (e.g. scientists).

The description of the situation presented in the four documents analysed is an accurate reflection of the starting point for indicator analysis. Due to the impossibility of obtaining intelligence that constitutes a direct response to a task (e.g. a document, an order, etc.), HQ decides to start looking for signs of impending change and for facts and phenomena that could serve as indications in this respect.

27 SAG, Organizacja wywiadu…, ref. no. 1107, p. 18. 28 SAG, Organizacja wywiadu…, ref. no. 1107, letter from the Second Department of the General Staff no. 14074a/II.Inf.III.B.5 to Office no. 2 of 5 December 1921, p. 5 (the correct title of this textbook is Beilsteins Handbuch der Organischen Chemie; it was published in 1921 by Verlag von Julius Springer; editor's note).
Exactly as defined by Richard Heuer and Randolph Pherson 29, on the basis of historical experience and doctrinal (scientific) knowledge, the Second Department developed a list of phenomena, the possible confirmation of which by Rittmaster Dubicz could provide a premise proving the conduct of chemical armament in the area of East Prussia. This was particularly true of the initial tasks for chemical intelligence in Prussia and the Free City of Gdańsk set in the spring of 1921. They were not concerned with ascertaining the presence of chemical weapons in these two territories and their dislocation to frontline units, but with gaining information either on substrates and intermediates that could be used in the production of chemical weapons, or on logistical preparations in the field (clearing vegetation, deploying tanks). It is interesting to note the very phrase "indication" used in the document of 8 October 1921, which in the given context can be translated into English precisely as indicator. Very significantly, indicator analysis as an analytical method emerged, as mentioned earlier, in the US intelligence community in the 1980s. Meanwhile, Polish military intelligence officers used the indicator category, although they did not formalise it, already at the beginning of the free Republic.
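The Second Department's approach, a fixed list of observable phenomena whose confirmation raises a warning, maps directly onto how indicator analysis is operationalised today. The sketch below illustrates this with a few indicators taken from the 1921 tasking; the matching logic and the warning threshold are my own illustrative assumptions, not anything prescribed in the documents.

```python
# Sketch: an indicator watchlist in the spirit of the 1921 tasking.
# Indicator names come from the document; the threshold logic is illustrative.
WATCHLIST = {
    "gas masks or mask components",
    "liquefied chlorine in flasks",
    "filled or empty gas shells",
    "steel gas flasks or gas bombs",
    "waterproof or oil-impregnated clothing",
}

def assess(observed_transports, warn_at=2):
    """Return the confirmed indicators and whether to raise a warning."""
    confirmed = WATCHLIST & set(observed_transports)
    return confirmed, len(confirmed) >= warn_at

confirmed, warn = assess([
    "liquefied chlorine in flasks",
    "timber",
    "filled or empty gas shells",
])
print(sorted(confirmed), "-> warning" if warn else "-> no warning")
```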
Undoubtedly, the Second Department took the risk of chemical warfare with the Germans seriously. This was, at least in part, the result of the experience of Polish society in the years of World War I, as well as knowledge of the tragic events on the Western Front and the Southern Front. Documents preserved in the SAG show how creatively the emerging Polish military intelligence service tried to develop a methodology for early warning of an attack with weapons of mass destruction. This was only the third year of its existence and, as such, it could not have had a sufficient agent network in enemy territory to gain direct evidence of such a threat. Therefore, an attempt was made to define phenomena that would allow sufficient advance preparation for a possible attack. Using contemporary intelligence terminology, it can be said that officers of the Second Department developed a set of early warning indicators, which they later monitored, analysed and used to assign tasks to the capturing apparatus.
Its consideration of the need to keep track of research into infectious microorganisms, as evidenced by its first set of tasks, should also be regarded as evidence of the great perspicacity and creative thinking of Polish military intelligence. In addition to observing phenomena occurring in German food production during World War I, including the production of synthetic surrogates (German: Ersatz) of food, the Second Department was interested in bacteriological research, recognising the dangers arising from the possibility of its results being used as weapons of mass destruction, in time to be called biological weapons.
In the US literature, structured research techniques are considered to have started in the 1980s. Their intensive development took place at the beginning of the 21st century, influenced by the reflections following the terrorist attacks of 11 September 2001 [7, 8].
Notes
3. State Archive in Gdańsk (hereinafter: SAG), Organizacja wywiadu i instrukcja (Eng. Intelligence organisation and instruction) (hereinafter: Organizacja wywiadu…), ref. no. 1107. When quoting original texts, their spelling has been adapted to modern Polish and obvious spelling errors have been corrected. […] studies on analytical techniques were used to provide definitions of the intelligence concepts studied.
7. R.J. Heuer, R.H. Pherson, Structured Analytic Techniques for Intelligence Analysis, Washington 2011, p. 24; J. Kozłowski, Teoria i praktyka…, pp. 150-151; P. Grunt, Structured Analytic Techniques: Taxonomy and Technique Selection for Information and Analysis Practitioners, "Journal of Management…".
8. R.H. Pherson, The Five Habits of the Master Thinker, "Journal of Strategic Security" 2013, vol. 6, no. 3, pp. 54-55.
27. SAG, Organizacja wywiadu…, ref. no. 1107, p. 18.
28. SAG, Organizacja wywiadu…, ref. no. 1107, letter from the Second Department of the General Staff no. 14074a/II.Inf.III.B.5 to Office no. 2 of 5 December 1921, p. 5 (the correct title of this textbook is Beilsteins Handbuch der Organischen Chemie; it was published in 1921 by Verlag von Julius Springer; editor's note). | 2023-12-16T16:19:20.846Z | 2023-12-06T00:00:00.000 | {
"year": 2023,
"sha1": "f9605233fb515c86cd3a909416359d7941ab934b",
"oa_license": "CCBYNCSA",
"oa_url": "https://www.ejournals.eu/pliki/art/24770/",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "4e89a31c69b0141e68b6337b46f17b06191bf6ac",
"s2fieldsofstudy": [
"History",
"Chemistry",
"Political Science"
],
"extfieldsofstudy": []
} |
202899091 | pes2o/s2orc | v3-fos-license | Optimum Design for Controlling the Scouring on Bridge Piers
Scouring around bridge piers is among the most important causes of bridge failure. We therefore investigated the problem using physical pier models: a single pier fitted with a square collar, a single pier fitted with a circular collar, and two interacting piers, all tested in a 1 m wide laboratory channel at three flow velocities (0.1, 0.08, and 0.07 m/s). The aim of this experimental investigation was to determine the optimum collar shape and location for a single pier and to compare the single pier with the interaction of two piers. The results showed that both square and circular collars reduce the scour depth, but the square collar is more effective at reducing scouring, with the best location at bed level for a single pier. Comparing the results for the single pier with those for two interacting piers, the interaction of two piers without any countermeasure reduced the scour depth by about 58%.
Introduction
Scour is a natural process caused by the erosive action of water flowing in channels and river beds. It occurs when sediment particles are removed from around a structure located in flowing water. This erosive action lowers the river bed level and can cause a foundation to fail by converting it from a stable to an unstable state [1]. Many factors can cause sediment scour and produce changes in the bed level over different time scales. Under normal flow, scour occurs slowly as sediment is moved from around the structure and transported downstream; this type of scour can be managed by observation and maintenance. The main cause of large-scale sediment scour and structural damage, however, is flooding [2].
The presence of a bridge pier in a flowing stream produces a three-dimensional flow pattern. An additional pressure head develops upstream of the pier as the water strikes it; the flow is deflected downwards into the scour hole and a horseshoe vortex is formed. The flow accumulating at the surface is pushed back, creating a bow wave, while the water passing around the pier moves downwards and produces a wake vortex. Local scour can therefore be defined as the combined result of the horseshoe and wake vortices, and the scour depth as the reduction of the bed sediment level around the obstruction due to the removal of sediment [3]. The study of scour is necessary because the complex flow and scouring mechanisms responsible for a high percentage of bridge failures in many countries are difficult to understand, and effective solutions are needed to predict and control scour around bridge piers and prevent bridge failure [4].
The magnitude of the scour depth is governed by the following factors: velocity of the approach flow, flow depth, pier width, flow rate, pier length if skewed to the flow, size and gradation of the bed material, angle of attack of the approach flow to the pier, shape of the pier or abutment, bed configuration, ice formation or jams, and debris [5].
Deng and Cai (2010) studied the effect of a collar of rounded shape and found that the collar divides the flow region into two parts, above and below the collar. The collar acts as an obstacle against the downflow, decreasing its strength and diminishing the scour depth; the efficiency of a collar therefore depends on its size and its location relative to the bed level [6]. Wang et al. (2017) reviewed previous studies and concluded that collars are environmentally friendly countermeasures with good stability, efficiency, and economy [7]. Kumar et al. (1999) studied the effect of collar size and location on scour depth reduction using a circular collar, and found that a larger collar at a lower level gave a smaller scour depth than a smaller collar at a higher level [8]. Negm et al. (2009) made an experimental investigation of four collar types (circular, trapezoidal, triangular, and rectangular) on a cylindrical pier of 3 cm diameter, with collar widths of 1.5, 2, 3, 3.5, 4, 4.5, and 5 times the pier diameter. The percentages of scour reduction were 62, 65, 68, and 72% for the triangular, trapezoidal, circular, and rectangular collar shapes, respectively. For all collar shapes, increasing the collar width reduced the scour depth towards its minimum; the circular collar reduced the scour depth by 32, 48, 61, 69, 75, 83, 85, and 87% for the successive collar widths [9]. Jahangirzadeh et al. (2012) investigated the use of a rectangular collar and its optimum dimensions, and found that the scour depth decreases as the collar width increases; the collar width is therefore a very important and effective factor in the scouring process [10]. Ardeshiri et al. (2014) compared three collar types (lozenge, circular, and square) around cylindrical piers of diameter D = 40, 30, and 21 mm with a collar width of 2D, and found that the lozenge collar was more efficient than the square: the scour depth reduction was 70% for the square collar and 65% for the circular collar [11]. Jahangirzadeh et al. (2014) made an experimental and numerical investigation of rectangular and circular collars; the results indicated that the rectangular collar gave a smaller scour depth than the circular collar, with a reduction of 79% for the rectangular collar [12]. Chen et al. (2018) investigated experimentally the use of a hooked collar on a 4 cm diameter cylindrical pier, with a hooked-collar diameter of 1.25 times the pier diameter. The collar was placed in two positions: at 0.25 pier diameters above bed level, where the scour hole was reduced by 24% compared with the unprotected pier, and at bed level, where the reduction was 100%, i.e., the scour depth was zero [13].
Khodashenas et al. (2018) compared two collar types, circular and square, with dimensions twice the pier diameter, at three locations: below bed level, at bed level, and above bed level. They found that the square collar was more efficient than the circular collar, with scour depth reductions of 70% for the square collar and 50% for the circular collar, and that the best location for both collars was below the bed [14]. Moncada-M et al. (2009) made an experimental investigation on a cylindrical pier of 7.3 cm diameter using circular collars of different sizes (W = 2D and 3D, where W is the collar size and D the pier diameter) at different locations; the results showed that the wider collar placed near bed level gave the best reduction of scour depth [15]. Vittal et al. (1994) compared three cases: a group of three piers at 120° to one another, a single pier with a slot, and a single pier with a collar. The results showed that the pier group was the most effective of the three, giving a scour reduction of about 40% [16]. Shrestha (2015) made an experimental investigation of the interaction of two piers with different spacings (L/D = 0 to 12, where L is the spacing between the two piers from centre to centre and D the pier diameter); the results showed that the scour depth at the second pier is smaller than at the first pier and smaller than for a single pier [17]. Raleigh (2015) experimentally used a triple collar on a cylindrical pier, with collar dimensions of three times the pier diameter and a spacing of 1/6 of the pier diameter between collars; the results showed a scour depth reduction of about 82% compared with the unprotected single pier [18]. An experimental investigation of twin piers found that the scour depth at the first pier was higher than at the second pier [19]. Keshavarzi et al. (2018) made an experimental investigation of the interaction of two piers (L/D from 0 to 12, where L is the centre-to-centre spacing and D the pier diameter) under different flow intensities, and found that the scour depth increases for spacings of 1 < L/D < 2.5; that is, as the spacing increases in this range, the scour depth at the upstream (front) pier increases [20]. Vikas et al. investigated experimentally the use of a collar plate at bed level with different sizes; the largest, three times the pier diameter, gave zero scour depth. They also studied the use of a triple collar with different spacings between the collars, and found that a spacing of D/6 reduces the scour by 84% compared with a pier without any protection [21].
Malik and Baldev (2018) investigated experimentally the effect of the arrangement and spacing of pier groups, considering three arrangements: tandem, side by side, and staggered. For the tandem arrangement, they concluded that the scour depth upstream of the front pier is higher than that at the second pier, and that when the spacing between two piers reaches 16 pier diameters (16D), the piers show independent behaviour, i.e., each behaves as a single pier [22]. The interaction of three piers in a tandem arrangement was also investigated in [23]; it was found that the scour depth upstream of the front pier was the same as for a single pier, that the spacing between piers is the most important factor for tandem interaction, and, in comparison with the interaction of two piers, that the critical velocities of the rear pier were larger than in the two-pier case.
Laboratory Channel
The channel used for the experiments was built from block and concrete at Kufa University and consists of several parts. In addition to the single-pier tests, we used the interaction of two piers with L/D = 3.5 in a tandem arrangement, where L is the spacing between the two piers from centre to centre and D the pier diameter, with a pier length of 60 cm and no countermeasure; we then used a square collar of dimensions 24 × 24 cm at bed level (30 cm) for each pier, as shown in Figures 6 and 7. We also used a triple square collar for each pier (pier diameter 8 cm, length 60 cm) in a tandem arrangement, with a spacing of D/4 between the collars and collar dimensions of 24 × 24 cm, as shown in Figure 8. 2- The water supplied to the lateral basin was lifted to the head basin by the main pump of the channel.
3- After the water was lifted to the head basin, it was admitted to the flume by opening the head gate; before reaching the flume, the water passed through the stilling screens to make the flow more serene.
4- The laboratory pier model was fixed in the centre of the sand layer, which was 2 m long and 0.3 m thick. The piers were made from MDF wood and coated with varnish to prevent the MDF from swelling in water. Following the recommendations of Chiew and Melville (1987), the pier diameter should not exceed 10% of the flume width to avoid wall effects on scouring; in this study, the flume width is more than 10 times the pier width [25].
5- The model was covered with water. Once the water reached the end of the flume, the water depth was kept constant by lifting the tail gate to the required height; the water then returned to the lateral basin, and the circulation of water continued for five and a half hours.
6- The main pump was switched off and, after all the water had drained from the sand, the scour depth around the pier was measured with the point gauge.
7- The sand was re-levelled and the previous procedure was repeated for the next case.
Dimensional Analysis
The variables that affect the local scour mechanism are listed in the relationship

d_s = f(B, D, K_s, α, D_c, L_c, Z_c, L_int, S_0, y, V, V_c, ρ_s, d_50, σ_g, ρ, g, µ, t)

where d_s is the scour depth, B the width of the channel, D the diameter of the pier, K_s the shape factor of the pier, α the angle of attack of the approach flow, D_c the diameter of the collar, L_c the length of the collar, Z_c the collar distance from bed level, L_int the distance between the two piers from centre to centre, S_0 the bed slope of the channel, y the water depth, V the velocity of the water, V_c the critical velocity of the flow, ρ_s the density of the sediment, d_50 the median particle size of the sand, σ_g the geometric standard deviation, ρ the fluid density, g the gravitational acceleration, µ the fluid viscosity, and t the time. Three of these variables are selected as repeating variables, and the dimensionless groups that affect the scour process follow.
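As a sketch only, assuming the fluid density ρ, the approach velocity V, and the water depth y as the repeating variables, a Buckingham π reduction of the relationship above would take a form such as:

```latex
\frac{d_s}{y} = \phi\left(
  \frac{B}{D},\ K_s,\ \alpha,\
  \frac{D_c}{D},\ \frac{L_c}{D},\ \frac{Z_c}{y},\ \frac{L_{int}}{D},\
  S_0,\ \frac{V}{V_c},\ \frac{\rho_s}{\rho},\ \frac{d_{50}}{D},\ \sigma_g,\
  \frac{\rho V y}{\mu},\ \frac{V}{\sqrt{g y}},\ \frac{V t}{y}
\right)
```

Here the last three groups are the Reynolds number, the Froude number, and a dimensionless time; the exact grouping used in the original study may differ.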
Test Program
In this study, we used three different velocities: 0.1, 0.08, and 0.07 m/s. The flow intensity V/V_c must be less than one to obtain the clear-water condition. A point gauge was used to measure the maximum scour depth in each case and to set the required water depth. The median particle size of the sand used (d_50) was 0.72 mm and the depth of the sand layer was 0.3 m. The test conditions for each case are summarised in Table 1.
Results of Square Collar
We list the results for square collars of dimensions 16 × 16 cm and 24 × 24 cm in two positions, at bed level and above bed level, in Tables 2 and 3. From the results in Table 2, the percentage reduction in scour depth for the 16 × 16 cm collar at bed level was 91, 94, and 100% for the three discharges; compared with the reduction percentages of the collar above bed level (17, 28, and 33%), it was more effective to place the collar at bed level.
Comparing this with the results in Table 3 for the square collar of wider size (24 × 24 cm), the percentage reduction at bed level was 97, 100, and 100% for the three discharges, and that of the collar above bed level was 47, 42, and 41%. From these percentages, we conclude that the 24 × 24 cm square collar was more effective than the 16 × 16 cm collar, and the best location was at bed level. Khodashenas et al. (2018) studied the effect of square-collar location and found the collar more effective in reducing scour depth when close to the bed [14]; Jahangirzadeh et al. (2012) investigated the optimum collar dimensions and found that the effectiveness of the collar increased with its size [10].
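The reduction percentages quoted in these tables all follow from one formula: the scour depth with the countermeasure compared against the unprotected scour depth at the same discharge. A minimal sketch of that calculation, using hypothetical depth values rather than the measured data:

```python
def scour_reduction_percent(ds_unprotected: float, ds_protected: float) -> float:
    """Percentage reduction in scour depth relative to the unprotected pier."""
    if ds_unprotected <= 0:
        raise ValueError("Unprotected scour depth must be positive")
    return 100.0 * (ds_unprotected - ds_protected) / ds_unprotected

# Hypothetical example: an unprotected scour depth of 6.0 cm reduced to
# 0.36 cm by a collar at bed level corresponds to a 94% reduction.
print(round(scour_reduction_percent(6.0, 0.36)))  # -> 94
```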
Results of Circular Collar
We list the results for circular collars of diameters 16 cm and 24 cm in two positions, at bed level and above bed level, in Tables 4 and 5. The percentage reductions in scour depth given in Table 4 for the 16 cm diameter collar at bed level were 90, 93, and 100% for the three discharges, respectively, compared with 14, 21, and 16% for the collar above bed level; from this comparison, the effective position of the collar was at bed level.
The percentage reductions in scour depth from Table 5 for the 24 cm diameter collar at bed level were 96, 97, and 100% for the three discharges, compared with 35, 32, and 37% for the collar above bed level. From this comparison, the effective position was again at bed level, and the wider circular collar was the better at reducing scour depth. This agrees with Kumar et al. (1999), who studied the effect of circular collar size and location on scour reduction and found that the larger collar at the lower level gave a smaller scour depth than the smaller collar at the higher level [8].
Comparing the reduction percentages of the square and circular collars in Tables 2-5, the square collar was found to be more effective than the circular collar. This agrees with Khodashenas et al. (2018), who compared circular and square collars with dimensions twice the pier diameter at three locations (below bed level, at bed level, and above bed level) and found the square collar more efficient, with a scour depth reduction of 70% against 50% for the circular collar [14].
Results of Interaction of Two Piers
In this case, we tested the interaction of two piers at the highest discharge with L/D = 3.5, where L is the distance between the two piers from centre to centre and D the pier diameter. Three configurations were examined: two piers without collars, two piers each fitted with a 24 × 24 cm square collar at bed level (30 cm), and two piers each fitted with a triple collar with a spacing of D/4 between collars; the results are shown in Table 6. It is clear from Table 6 that the scour depth at pier 2 was very small. Compared with the single pier at the high discharge, the reduction in scour depth at pier 2 without any countermeasure was 58%; with a collar on each pier the reduction was 100%, and with the triple collar the reductions were 30% and 28%, respectively, so the collar at bed level performed better than the triple collar. Comparing the single pier, with or without a countermeasure, with the interaction of two piers, we conclude that the two-pier interaction without any countermeasure reduced the scour depth by about 58%. Vittal et al. (1994) compared three cases (a group of three piers at 120° to one another, a single pier with a slot, and a single pier with a collar) and found the pier group the most effective, giving a scour reduction of about 40% [16]. The corresponding scour depths for the three cases in this paper are shown in the figures.
Conclusion
Many researchers have studied bridge scour and the use of countermeasures; it is an important subject for preventing structural failure. From our experimental work, we conclude that using a collar can reduce the scour depth: the wider collar positioned near bed level gave zero scour depth, and in a comparison of square and circular collars, the square collar was the better at reducing scour. The interaction of two piers without any countermeasure gave a scour depth about 58% smaller than that of the single pier; we therefore consider the two-pier interaction the best configuration in our experimental investigation.
Conflicts of Interest
The authors declare no conflict of interest. | 2019-09-17T02:40:43.275Z | 2019-09-01T00:00:00.000 | {
"year": 2019,
"sha1": "8ae34ab4f42a65f989cac8148728c9b9d280d69d",
"oa_license": "CCBY",
"oa_url": "https://www.civilejournal.org/index.php/cej/article/download/1693/pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "9ff8fe12dc067babbc978e92c538820de4adaacb",
"s2fieldsofstudy": [
"Engineering",
"Environmental Science"
],
"extfieldsofstudy": [
"Geology"
]
} |
11485018 | pes2o/s2orc | v3-fos-license | Effect of Two Polyethylene Covers in Prevention of Hypothermia among Premature Neonates
Abstract. Background: After the umbilical cord is cut, premature neonates face numerous problems, including hypothermia. In view of the serious complications of hypothermia and the inability of conventional methods to preserve neonates' temperature after admission, the researcher decided to conduct a study on the effects of two polyethylene covers in the prevention of hypothermia among premature neonates. Materials and Methods: This clinical trial was conducted on 96 neonates aged 28-32 weeks, randomly allocated by drawing of lots to three 32-subject groups as follows: intervention group 1 (a plastic bag cover and a cotton hat), intervention group 2 (a plastic bag cover and a plastic hat), and a control group receiving routine care. Data were analysed by descriptive and inferential statistics using SPSS v. 14. Results: Mean axillary temperatures in intervention groups 1 and 2 differed between admission and 1 and 2 h later, but this difference was not significant, and the mean axillary temperature increased with time. Mean axillary temperature in the control group showed no significant difference at these time points and did not increase with time. Mean temperatures in the preterm infants were significantly higher in the intervention groups after admission and at 1 and 2 h after birth, compared with the control group. Mean axillary temperature in intervention group 2 was significantly higher than in intervention group 1. Conclusions: The use of a plastic bag cover and a plastic hat (with no risk of hyperthermia) is more effective in preventing hypothermia among neonates aged 28-32 weeks than the use of a plastic bag cover and a cotton hat.
Birth is a beautiful, miraculous, and sometimes the most risky phenomenon in one's life. The human body needs extraordinary physiologic regulation and coordination immediately after birth [1]. Of all creatures, human beings need the longest time to develop and for their abilities and capacities to blossom, as they are born with the lowest abilities and need much special care [2]. It is even more important to provide this sort of care for premature neonates [3]. Early delivery and premature neonatal birth are among the major health problems and the most common causes of neonatal mortality [4]. The World Health Organization (WHO) has reported 15 million premature births happening each year in different countries [5]. Neonates, especially premature neonates, face the common problem of heat loss [6]. Warming a neonate at the moment of birth is a crucial issue [7,8]. Hypothermia is a dangerous sign that can increase neonatal mortality among premature neonates at birth. The main cause is the high surface-to-body-weight ratio of neonates: the body surface of a newly born term infant, relative to its weight, is threefold that of an adult [7], and reaches fivefold to sixfold in very low birth weight neonates. In addition, body dehydration due to evaporation in very low birth weight neonates is 8-10-fold that of adults, so high evaporation plays a pivotal role in the metabolism and heat loss of premature neonates. Due to the low fat content of the epidermis, the absence of protective fat on premature neonates' skin, inadequate energy to warm their body, and a reduced vasomotor response to cold stress, they are at high risk of hypothermia [9]. The incidence of hypothermia among premature neonates under 1500 g is between 31% and 78% [10]. Based on the WHO criterion, temperatures between 36 and 36.4°C are considered minor hypothermia, between 32 and 35.9°C moderate hypothermia, and less than 32°C acute hypothermia [7,8]. Although cold stress can be used to trigger the neonate's respiration mechanism in the delivery room, neonates should not be exposed to low temperature for a long time [11]. Long-term exposure to a low environmental temperature causes destructive complications in neonates, such as hypoglycemia, metabolic acidosis, cold hands, legs, and body, neonatal mottling, irregular and slow respiration, bradycardia, apnea or respiratory distress, coagulation and circulation disorders, renal failure, necrotizing enterocolitis, and defects in thermoregulation (hyper- or hypothermia); in acute cases, it causes death [7,12]. With regard to the aforementioned issues, one of the important duties of nurses is the prevention of hypothermia and keeping neonates warm during their transition from the operating room or delivery room to the ICU [9]. Creating appropriate conditions and regulating neonates' environmental temperature immediately after birth is among nurses' responsibilities [13]. One way to keep premature neonates warm is to use transparent polyethylene (nylon) covers; such nylon bags are also used for packing food [9]. This transparent cover is used as a swaddle for neonates of less than 30 weeks immediately after birth to prevent heat loss [13]. The cover reduces skin evaporation and heat loss, as the skin is not directly exposed to air, and it acts as insulation against heat passing from the neonate's body. In addition, as the infant is laid in the bag without drying, the vernix caseosa remains on its skin and prevents heat loss [10]. In a study conducted in Turkey on 60 premature neonates under 1500 g, the infants who were laid in polyethylene covers reached their normal body temperature sooner than controls [9]. In a study on the incidence of hypothermia conducted in England, although the incidence of hypothermia in the group covered by polyethylene bags was reduced from 25% to 16%, hyperthermia was reported in a high number of these neonates (12.5% vs 39.8%, respectively) [14]. A study in Italy showed that the group covered with a polyethylene bag and a polyethylene hat had a higher temperature than controls; the authors concluded that a polyethylene hat and bag were efficient in preventing heat loss from premature neonates [6]. A study in Iran showed a reduction in the prevalence of hypothermia in the group laid in a polyethylene plastic bag compared with controls; resuscitation time was also significantly lower in this group, and only one case of hypothermia was reported [3].
It is noteworthy that some recent studies have reported controversial results, concluding that the use of polyethylene plastic bags led to hyperthermia in neonates and its related complications. A study conducted on neonates under 30 weeks of age during their transition to the neonatal ward reported that the use of polyethylene bags after delivery led to hyperthermia [15]. Another study, in France in 2010, reported that the use of polyethylene plastic bags to cover premature neonates of 29-31 weeks could predispose them to the risk of hyperthermia and its complications [16]. In most of the studies, the neonate's body was laid in a polyethylene cover up to the neck without drying, although the neonate's head is much bigger relative to the rest of the body (one-fourth of its height) and its circumference is greater than the chest circumference; about 40% of a neonate's body mass is its head [17]. In comparison with a term neonate, a premature neonate's body surface-to-weight ratio is higher and its head is larger relative to its trunk, so the risk of hypothermia is higher [18]. Even in a study conducted in New Zealand, the subjects' heads were not covered by a plastic hat, and the researchers suggested covering the neonate's head with a plastic cover in future studies to prevent hypothermia, arguing that the large size of the neonatal head and its high proportion of the body surface was the reason [19]. With regard to the complications and the importance of preventing hypothermia in premature neonates, and as there was no comparative study in this context, the researchers decided to compare two interventional protocols, a plastic cover with a plastic hat and a plastic cover with a cotton hat, in the prevention of premature neonates' heat loss at birth in Alzahra and Shahid Beheshti hospitals in Isfahan in 2013.
Ethical considerations
This clinical trial was conducted in the neonatal wards of Alzahra and Shahid Beheshti hospitals during November-March 2013 on 96 neonates selected through convenience sampling and assigned by random allocation to two intervention groups, polyethylene bag with cotton hat and polyethylene bag with polyethylene hat, and one control group (n = 32 in each group).
Inclusion criteria were: neonates of 28-32 weeks; neonates without neural tube defects, obvious congenital anomalies, congenital dermal diseases, or abdominal wall defects; neonates not born to mothers with fever; and neonates of gestational age more than 28 weeks and weight more than 900 g. Exclusion criteria were: the parents' loss of interest in continuing with the study, the neonate's death, the neonate's urination in the polyethylene bag, and unstable vital signs 30 min after arrival at the neonatal intensive care unit (NICU). The researcher started the intervention after the subjects' parents had signed a consent form and received explanations of the research goal and method. The data collection tool was a checklist. Demographic characteristics were collected by referring to the subjects' medical files, by observation, and by interviewing their mothers, and were recorded on an information sheet. The neonates in intervention group 1 were laid, by the researcher and her colleagues, in a 25-40 cm heat-resistant polyethylene plastic bag covering them up to the neck, which had already been heated under a warmer, without drying, immediately after birth and the cutting of the umbilical cord in the labour room or operating room. Their heads were covered with a cotton hat after drying. The neonates in intervention group 2 underwent the same procedure, but their heads were covered, without drying, with an already warmed polyethylene hat with no strips.
The control group underwent routine care (being dried with a cloth and placed under a warmer). Neonates were transferred from the labour room or operating room to the neonatal ward, after checking that their vital signs (respiration, pulse, and colour) were stable, in a portable incubator already set to 35°C, and were then placed under a warmer. The neonates' axillary temperature was measured with a paediatric digital thermometer (Omron, Japan) through a hole made in the polyethylene cover. The length of the intervention (wearing the polyethylene cover) was 1 h. All physical interventions were administered over the plastic bag; in cases of an emergency need for an umbilical venous catheter or a pulse oximeter, small holes were made in the bag. The neonates' body temperature was measured at all stages (at admission and at 1 and 2 h after admission) in the three groups. The obtained data were entered into SPSS version 14 and analysed by the Chi-square test, repeated-measures analysis of variance (ANOVA), and the independent t-test.
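The analysis described above was run in SPSS; purely as an illustration, the same three tests can be reproduced in Python with SciPy and statsmodels. The sketch below uses synthetic temperatures, not the study data, and assumes a long-format table with one row per neonate per time point.

```python
import numpy as np
import pandas as pd
from scipy import stats
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)

# Synthetic axillary temperatures for one group: 32 neonates x 3 time points.
df = pd.DataFrame({
    "subject": np.repeat(np.arange(32), 3),
    "time": np.tile(["admission", "1h", "2h"], 32),
    "temp": rng.normal(36.5, 0.3, size=96),
})

# Repeated-measures ANOVA across the three time points within one group.
print(AnovaRM(df, depvar="temp", subject="subject", within=["time"]).fit())

# Independent t-test comparing two groups at a single time point.
group1 = rng.normal(36.6, 0.3, 32)
group2 = rng.normal(36.9, 0.3, 32)
print(stats.ttest_ind(group1, group2))

# Chi-square test of sex distribution across the three groups (boys, girls).
counts = np.array([[17, 15], [17, 15], [17, 15]])
print(stats.chi2_contingency(counts))
```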
Results
There were 98 subjects in the present study, of whom 2 were excluded: one due to urination in the bag during transfer, and another due to a critical condition requiring resuscitation, during which the neonate died. In total, 96 subjects were randomly assigned to the three groups. Most of the subjects in each group were boys (53.1%), and the Chi-square test showed no significant difference between the three groups (P = 0.85). The independent t-test showed no significant difference in the subjects' demographic characteristics (gestational age, birth weight, transfer time, APGAR score) between the three groups (P > 0.05), and the Chi-square test also showed no significant difference in the frequency distribution of delivery mode between the three groups (P = 0.73). One-way ANOVA showed a significant difference in the neonates' mean temperature at birth and at 1 and 2 h after admission between the three groups (P < 0.001). Repeated-measures ANOVA showed no significant difference in the neonates' mean temperature in the control (P = 0.32), cotton hat (P = 0.48), and plastic hat (P = 0.41) groups between the three time points [Table 1]. The post-hoc least significant difference (LSD) test showed a significant difference in the neonates' temperature at admission and at 1 and 2 h after admission in both the polyethylene-bag-with-cotton-hat and the polyethylene-bag-with-polyethylene-hat groups compared with the control (P = 0.001). The difference in mean temperature between the two intervention groups was also significant [Table 2].
Discussion
The results showed a significant difference in the neonates' temperature at admission and at 1 and 2 h after admission between the three groups. There was a non-significant difference in the neonates' temperature within the control (P = 0.32), cotton hat (P = 0.48), and polyethylene hat (P = 0.41) groups between the three time points. In fact, the mean temperature in the two intervention groups was lower at admission than at 1 h after admission, and lower at 1 h than at 2 h after admission (temperature at admission < 1 h after admission < 2 h after admission). The mean temperature of the neonates in the cotton hat group nearly reached 36.5°C at 1 h after admission, while it reached 37°C in the polyethylene hat group. Farhadi et al. concluded that covering neonates up to the neck with a polyethylene cover increased their temperature within the first hour after admission (from 36.60 to 37.31°C). Trevisanuto et al. showed that covering neonates with a polyethylene bag up to the neck increased their temperature within the first hour (from 36.1 to 36.5°C), which is in line with the present study [3]. Our results showed a significant difference in the neonates' mean temperature between the control group and the polyethylene bag with cotton hat group, and between the control group and the polyethylene bag with polyethylene hat group (P = 0.001). There was also a significant difference between the polyethylene bag with cotton hat and the polyethylene bag with polyethylene hat groups at admission and at 1 and 2 h after admission. Gathwala et al. showed that covering neonates up to the neck with a vinyl bag increased the recorded axillary temperature at admission in the intervention group compared with the control (P < 0.01); the mean axillary temperature was slightly higher in the intervention group at 1 h after admission, but the difference was not significant, and they therefore recommended the use of vinyl bags during resuscitation. This is not in line with the present study, despite the fact that the neonates' bodies were covered with a vinyl bag up to the neck in that study [10]. Farhadi et al. concluded that the mean axillary temperature of the neonates covered with a polyethylene bag up to the neck was significantly higher than that of controls at admission to the neonatal ward and 1 h later, and that the prevalence of hypothermia was lower in the polyethylene group than in the control [3]. Trevisanuto et al. reported that the neonates' temperature was higher in the polyethylene bag and polyethylene hat groups than in the control, and claimed that polyethylene hats and bags were efficient in preventing premature neonates' heat loss [6], which is in line with the present study. It is noteworthy that some controversial studies have concluded that using a polyethylene bag may lead to hyperthermia and its complications in neonates. A study conducted on neonates under 30 weeks of age during their transfer to the neonatal ward showed that the use of polyethylene bags after birth led to hyperthermia in neonates [16].
Overall, our findings showed that using a plastic cover and a plastic hat for premature neonates aged 28-32 weeks is more efficient in preventing hypothermia (with no risk of hyperthermia) than using a plastic bag and a cotton hat. These findings can be considered in the care of neonates during resuscitation, transfer, and admission. The low number of subjects can be considered a limitation of the present study; another study with a higher number of subjects is therefore suggested for comparison with the present one.
Conclusion
Based on our results, it is suggested that neonatal hypothermia and its complications be prevented through educational programmes and the widespread use of a polyethylene cover for the neonate's whole body, in order to lower hospitalisation costs. This cover is especially recommended for hospitals in which the operating or labour rooms are not adjacent to the neonatal ward, as the use of transparent covers for neonates under the warmer prevents their heat loss during transfer to the ward.
Table 1: Mean temperatures of premature neonates in the three groups at various time points
SD: Standard deviation, ANOVA: Analysis of variance | 2017-04-10T07:31:42.769Z | 2015-05-01T00:00:00.000 | {
"year": 2015,
"sha1": "62c41a2be77593cb7752dd438c644f1fc6f99792",
"oa_license": "CCBYNCSA",
"oa_url": null,
"oa_status": null,
"pdf_src": "Grobid",
"pdf_hash": "62c41a2be77593cb7752dd438c644f1fc6f99792",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
267591145 | pes2o/s2orc | v3-fos-license | Enhancing Stress Detection: A Comprehensive Approach through rPPG Analysis and Deep Learning Techniques
Stress has emerged as a major concern in modern society, significantly impacting human health and well-being. Statistical evidence underscores the extensive social influence of stress, especially in terms of work-related stress and associated healthcare costs. This paper addresses the critical need for accurate stress detection, emphasising its far-reaching effects on health and social dynamics. Focusing on remote stress monitoring, it proposes an efficient deep learning approach for stress detection from facial videos. In contrast to research on wearable devices, this paper proposes novel Hybrid Deep Learning (DL) networks for stress detection based on remote photoplethysmography (rPPG), employing Long Short-Term Memory (LSTM), Gated Recurrent Unit (GRU), and 1D Convolutional Neural Network (1D-CNN) models with hyperparameter optimisation and augmentation techniques to enhance performance. The proposed approach yields a substantial improvement in accuracy and efficiency in stress detection, achieving up to 95.83% accuracy on the UBFC-Phys dataset while maintaining excellent computational efficiency. The experimental results demonstrate the effectiveness of the proposed Hybrid DL models for rPPG-based stress detection.
Introduction
Stress in humans is related to mental health and well-being [1]. It is the biological response to a situation such as a threat, a challenge, or a physical or psychological barrier [2]. The sympathetic nervous system (SNS) and the parasympathetic nervous system (PNS) are the two components of the autonomic nervous system (ANS) that directly affect how the body reacts to stress [3,4]. In highly stressful events, the SNS executes the fight-or-flight survival response, and the body redirects its efforts toward fighting off threats. Given its subjective nature, identifying and monitoring the onset, duration, and severity of stressful events is challenging. This is especially true in workplace situations [5], where there is often a deliberate choice to ignore stress for professional gain. Recent studies have shown an increase in stress levels in the office environment [6]. Due to the plasticity of the brain, chronic or persistent stress has been shown to increase the volume of the amygdala, a structure within the limbic system that defines and regulates emotions, stores emotional memories, and, most importantly, executes the fight-or-flight response [7]. Similarly, chronic stress is associated with a reduction in the mass of the prefrontal cortex [8], which is used to intelligently regulate thoughts, actions, and emotions.
Recent research in the field has introduced various sensor-based solutions for stress detection, as evidenced by studies such as [4,9,10]. Although some of these solutions use only a single type of sensor, others employ multimodal sensing. Traditionally, electrocardiography (ECG) has been used to measure heart rate variability (HRV) for stress detection [11].
Biomarkers such as galvanic skin response (GSR), electrodermal activity (EDA), respiration, and electromyography (EMG) are increasingly recognised for assessing affective states and stress levels [12-14], utilising sensing devices. While these traditional sensor types are considered the gold standard and provide excellent opportunities for measuring stress-related biomarkers, their ease of use in practical scenarios is a challenge, as experimentation can only be carried out in a designated, equipped setting. The focus of research is shifting to developing simpler and more convenient sensing solutions, applicable to everyday life, for measuring physiological parameters. Recent advances in technology have led to significant developments in wearable and personal sensing devices with applications in healthcare, for example, the use of a wearable device to capture physiological data for health monitoring [15-20]. These devices include chest bands [15,16,21,22] and portable ECG devices [17,23], among others. HRV parameters can be measured using wristbands such as the Empatica E4 wristband [18,24], Microsoft Band 2 [19,25], Polar watch [20,26], and Fitbit watch [20,26]. Researchers analyse personal data from these devices to provide relevant insights into an individual's physical and health status. Although these devices show promise and provide a non-intrusive means of acquiring data for stress detection models, a major limitation relates to their size, which makes them uncomfortable for practical use cases [27].
In contrast, rPPG technology measures Blood Volume Pulse (BVP) using a camera, eliminating the need for attached sensors [28,29]. By extracting skin pixels from facial data captured by the camera, rPPG exploits the changes in skin colour corresponding to the heartbeat to obtain the BVP signal [28,30-32]. This method simplifies the measurement, reduces sensor complexity, and avoids attachment-related problems. Furthermore, rPPG can be used to capture HRV measures for analysis, especially in healthcare applications. The widespread availability of cameras in the form of webcams and smartphones makes rPPG technology easily accessible. Due to these advantages, rPPG finds applications in healthcare, fitness, and forensic science. Integration of rPPG technology into smart mirrors or smartphones increases its potential as a professional health indicator. Although still at an early stage, rPPG-based non-contact affective computing has become a growing area of research in recent years, which can drastically improve real-time human-computer interaction for stress detection. This paper explores the feasibility of end-to-end methods for recognising stress by proposing an rPPG-based stress detection system that leverages non-contact physiological techniques, facilitating continuous, pervasive, long-term monitoring of biomedical signals. The contributions made in this paper are as follows: the proposal of Hybrid DL networks (LSTM-, GRU-, and 1D-CNN-based) for rPPG-based stress detection; the use of hyperparameter optimisation and data augmentation techniques to enhance performance; and an evaluation on the UBFC-Phys dataset achieving up to 95.83% accuracy with excellent computational efficiency.
Related Work
The term stress was initially introduced into medical terminology in 1936, referring to a syndrome produced by diverse nocuous agents that seriously threaten homeostasis [33]. Selye's experiments demonstrated that prolonged exposure to severe stress could lead to disease and tissue damage [34]. Recently, research on stress, its causes, and its implications has gained traction [4,9,10,12-14]. Stress is a complex interactive phenomenon, arising when a situation is deemed important, carries the possibility of harm, and requires psychological, physiological, and/or behavioural action [4,9,10]. Understanding stress involves distinguishing between stressors, stress responses, and stress biomarkers. Stressors are stimuli that disrupt normal activity, stress responses are the symptoms triggered by stressors, and biomarkers reflect interactions between a biological system and potential hazards [3,4,9,10]. The human body responds to stressors through mechanisms such as the hypothalamic-pituitary-adrenal (HPA) axis, the ANS, and the immune system [35]. The HPA axis releases hormones, including cortisol, in response to stressors, initiating the "fight or flight" response, which leads to physiological reactions from the ANS, increasing SNS activity and decreasing PNS activity [3,4]. Cortisol levels and other physiological measures such as body temperature, respiration rate, pulse rate, HRV, and blood pressure (BP) have been identified as standard stress biomarkers [15-17,21-23]. Methods for stress detection include questionnaires, ECG, electroencephalography (EEG), BP measurement using an arm cuff, and sampling of salivary cortisol and other biomarkers from blood tests [36-38]. Self-reporting tools such as the Perceived Stress Scale and the Depression Anxiety Stress Scale are widely used to measure perceived stress but have limitations such as biased responses and subjectivity [39]. ECG measures changes in heart rhythm due to emotional experiences; obtaining information about HRV usually requires a visit to a medical facility. EEG captures electrical signals in the brain, correlating brain waves (beta and alpha) with stress, but conventional EEG machines are impractical for managing daily stress [40,41]. Biomarkers such as cortisol in salivary and hair samples are associated with chronic stress, but their collection is invasive and time-consuming. Blood pressure measured with a sphygmomanometer is accurate but requires a trained professional [36-38]. Ambulatory Blood Pressure Measurement (ABPM) devices offer home monitoring but lack widespread validation and can be influenced by factors other than stress [42]. While traditional sensor types are acknowledged as the gold standard, offering excellent opportunities for measuring stress-related biomarkers, their practical use in everyday situations poses a significant challenge. Emerging technologies have therefore focused on developing simpler and more convenient sensing solutions, applicable to daily life, for measuring physiological biomarkers. Wearable and personal sensing devices, such as chest bands, wrist bracelets, and portable ECG devices [15,18,21,24], have played a pivotal role in this evolution.
Conventional approaches to stress detection have drawbacks that are not in line with modern lifestyles and real-time monitoring: they are invasive, prone to bias, incur substantial costs, and require time-consuming travel to clinical settings. Over the past two decades, there has been a noticeable shift towards technology-driven approaches for more efficient, cost-effective, and less intrusive stress measurement compatible with modern lifestyles. Wearable devices, mobile applications, and Machine Learning (ML) algorithms have revolutionised stress detection and measurement. One approach is measuring HRV using wearable devices such as smartwatches, fitness trackers, and chest straps, allowing continuous and long-term monitoring of stress levels [16,17,20,23,26]. As HRV measures are inherently nonlinear, ML algorithms and other statistical data-driven methods, such as the Modified Varying Index Coefficient Autoregression Model (MVICAR) [43], can be applied in stress detection systems. ML algorithms have enabled accurate and efficient HRV-based stress detection and classification systems [29,44-47]. EDA, which measures the electrical activity of sweat glands, is another signal that can be monitored with wearable devices, providing continuous, real-time monitoring of stress levels. Mobile applications using EDA-based biofeedback help individuals manage stress by providing real-time feedback and stress reduction techniques [16,25]. However, EDA measurement is sensitive to environmental factors, skin conditions, and medications, which affects its precision.
The COVID-19 pandemic has stimulated interest in remote healthcare, leading to research using cameras for the estimation of rPPG signals and real-time monitoring, addressing the need for non-invasive, contactless, and accessible methods of stress assessment [48,49]. rPPG offers a non-invasive means of measuring BVP remotely, requiring only a camera and an ambient light source. With it, HRV measures, pulse rate, and breathing rate can be obtained from facial video captured by an everyday camera, enabling remote detection and monitoring of stress [28,30-32]. A growing number of research papers address this topic. For example, Benezeth et al. [46] proposed an rPPG-based algorithm that estimates HRV using a simple camera, showing a strong correlation between HRV features and different emotional states. Similarly, Sabour et al. [29] proposed an rPPG-based stress estimation system with an accuracy of 85.48%. Other work on the use of rPPG is encouraging, indicating that non-contact measurement of human physiological parameters (e.g., breathing rate (BR) and heart rate (HR)) is promising and has great potential for applications such as health monitoring [47,50] and affective computing [51-53]. While these contributions are noteworthy, this paper advances the field by introducing Hybrid Deep Learning (DL) networks and models for rPPG signal reconstruction and HR estimation. This approach yields a substantial improvement in accuracy and efficiency in stress detection, achieving up to 95.83% accuracy on the UBFC-Phys dataset. The integration of Hybrid DL networks offers enhanced capabilities for signal reconstruction and stress classification. Considering these points, rPPG is well suited to both business and everyday applications and has the significant advantage of providing measurements comparable to those of ECG and photoplethysmography (PPG).
Wearable and contactless devices offer promising alternatives for stress measurement, providing convenient and non-invasive methods for continuous monitoring. However, the quality and accuracy of the data generated by these devices can vary. A major limitation in adopting rPPG is the decrease in signal-to-noise ratio, which requires advanced signal processing. Many articles lack peer review and validation in clinical settings, raising concerns about data reliability. Wearable devices can be sensitive to factors such as movement, heat, and perspiration, leading to inaccurate measurements, and their ease of use, especially during sleep or physical activities, is another major limitation. Individuals with skin sensitivities, allergies, or specific health conditions may also find wearing these devices intolerable.
Method
The proposed methodology consists of three main parts, as shown in Figure 1. The primary objective is to detect social stress using contactless physiological signals extracted from facial videos through DL techniques. In the first part, the pyVHR toolbox (a Python framework for virtual heart rate estimation) [54] is used to capture and estimate beats per minute (BPM) from the facial video data. In the second part, the estimated BPM series is augmented and then input into four DL models (Recurrent Neural Network (RNN), LSTM, GRU, and 1D-CNN). The performance of these models is then evaluated and compared on the basis of specific metrics. The proposed methodology is implemented in Python 3 with the relevant data manipulation libraries, leveraging an NVIDIA graphics processing unit (GPU) with Compute Unified Device Architecture (CUDA) version 12.2 and the CUDA Deep Neural Network (cuDNN) library. It should be noted that the default pyVHR parameters, including a window size of 8, a patch size of 40, and pre/post filtering, were used for BPM estimation. The selected methods include two Region of Interest (ROI) approaches, holistic and convex hull, together with CuPy CHROM, Torch CHROM, and CuPy POS. Refer to Table 1 for a brief overview of the methods.
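A minimal sketch of this BPM-estimation step is given below. It assumes the pyVHR 2.x Pipeline API as documented in the toolbox README; keyword names and defaults may differ between versions, and the video path is hypothetical.

```python
from pyVHR.analysis.pipeline import Pipeline

# Hypothetical clip from the UBFC-Phys dataset.
video_path = "UBFC-Phys/s1/vid_s1_T1.avi"

pipe = Pipeline()

# Keyword names follow the pyVHR 2.x README; values mirror the settings
# described above (8 s windows, holistic ROI, convex hull, CuPy CHROM).
times, bpm, uncertainty = pipe.run_on_video(
    video_path,
    winsize=8,
    roi_approach="holistic",
    roi_method="convexhull",
    method="cupy_CHROM",
)

print(len(bpm))  # one BPM estimate per analysis window
```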
Dataset and Data Processing
The UBFC-Phys dataset includes data from 56 healthy subjects, with 12 participants excluded due to technical and consent issues [29]. The participants, aged between 19 and 38 (mean age 21.8, standard deviation 3.11), comprise 46 women and 10 men. In the study, stress was induced using a modified version of the Trier Social Stress Test (TSST) [55]. The participants completed three tasks: a 10-minute rest task serving as a baseline, a speech task, and an arithmetic task. The speech and arithmetic tasks aimed to induce stress through a social-evaluative threat. The speech task simulated a job interview, introducing an additional expert via video call to heighten the social-evaluative threat; the arithmetic task involved a countdown with variations. For the purposes outlined in this paper, attention is given to the ground-truth (GT) BVP signals labelled T1 and T2 for the non-stress and stress classes, respectively. These signals, obtained using the Empatica E4 wristband at a 64 Hz sampling rate, consist of vectors of 11,520 data points each (64 Hz × 180 s = 11,520). The first 500 data points of the GT BVP signals for subjects s1 to s4 were plotted to visually depict the impact of the stress and non-stress conditions on signal behaviour; refer to Figure 2 for these graphs.
Data processing included the application of the Fast Fourier Transform (FFT) to generate frequency-domain features from the BVP signals. In addition, data augmentation was implemented with linear interpolation and Gaussian white noise.
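As an illustration of this step, the sketch below applies a real FFT to a synthetic stand-in for a BVP segment sampled at 64 Hz; the magnitudes of the resulting spectrum are the kind of frequency-domain features referred to above.

```python
import numpy as np

fs = 64                      # Empatica E4 BVP sampling rate (Hz)
t = np.arange(11520) / fs    # 180 s of samples, as in the dataset

# Synthetic BVP stand-in: ~72 bpm (1.2 Hz) pulse plus noise.
bvp = np.sin(2 * np.pi * 1.2 * t) \
    + 0.1 * np.random.default_rng(0).normal(size=t.size)

spectrum = np.abs(np.fft.rfft(bvp))          # magnitude spectrum
freqs = np.fft.rfftfreq(bvp.size, d=1 / fs)  # frequency axis in Hz

print(freqs[np.argmax(spectrum[1:]) + 1])    # dominant frequency, ~1.2 Hz
```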
CuPy POS
POS (plane-orthogonal-to-skin) is another method used to infer the pulse signal from RGB traces, projecting onto a plane orthogonal to the skin tone; here it is built with the CuPy library. Linear interpolation, as illustrated by Equation (1), augments the data by estimating values between existing data points, creating straight lines connecting these points:

y = y_1 + (x - x_1)(y_2 - y_1)/(x_2 - x_1)    (1)

where x_1 and y_1 are the first coordinates, x_2 and y_2 are the second coordinates, x is the point at which the interpolation is performed, and y is the interpolated value.
Alternatively, the Gaussian white noise augmentation method generates a series of random values drawn from the Gaussian distribution of Equation (2); the resulting sequence exhibits white-noise characteristics:

f(x) = (1/(σ√(2π))) exp(-(x - µ)²/(2σ²))    (2)

where µ is the mean (zero for white noise) and σ is the standard deviation of the noise. Gaussian white noise serves multiple purposes beyond dataset expansion; it is valuable for simulating the uncertainty, randomness, and inherent variability present in real-world data.
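A minimal sketch of the two augmentation steps, using NumPy on a synthetic BPM series, is shown below; the target length of 11,009 points matches the interpolated length reported later, and the noise level σ is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in for the estimated BPM series from one video
# (~173 points from the pyVHR toolbox).
bpm = 70 + 5 * rng.standard_normal(173)

# Linear interpolation (Equation (1)), applied point-by-point by np.interp,
# upsamples the series to 11,009 points without altering its shape.
x_old = np.linspace(0.0, 1.0, bpm.size)
x_new = np.linspace(0.0, 1.0, 11_009)
bpm_interp = np.interp(x_new, x_old, bpm)

# Gaussian white noise (Equation (2)); sigma chosen here for illustration only.
sigma = 0.5
bpm_noisy = bpm_interp + rng.normal(0.0, sigma, size=bpm_interp.size)

print(bpm_interp.shape, bpm_noisy.shape)  # (11009,) (11009,)
```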
Deep Learning Models
A set of DL models was selected to detect stress and to evaluate the effectiveness and efficiency of the models. Due to the intrinsic structural differences between the RNN-based DL models (specifically LSTM and GRU) and Convolutional Neural Networks (CNN), three 1D-CNN-Multilayer Perceptron (MLP) models were designed. One of these models closely mirrors the architectures of the RNN-based models in terms of the number of neurons, represented as "filters" in CNNs; however, instead of LSTM or GRU layers, one-dimensional convolutional (Conv1D) layers were used. These models also include MaxPooling1D and Flatten layers, along with specific parameters and functions such as the kernel size and the Rectified Linear Unit (ReLU) activation. The other two 1D-CNN models have additional CNN and MLP layers and different pool sizes. It should be noted that the limited sample size of the estimated BPM signals (only 172 data points per video) from the pyVHR toolbox prevented the evaluation of 1D-CNN model versions 2 and 3, given their respective architectures. For detailed architecture, layer descriptions, parameters, and functions of the 1D-CNN-MLP models, please refer to Table 2. The design flow of the 1D-CNN with 3 CNN and 2 MLP layers, labelled "CNNv2", is illustrated in Figure 3.
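As a sketch of the "CNNv2" design flow (three Conv1D blocks followed by two MLP layers), a minimal Keras version is given below; the filter counts, kernel size, and pool size are illustrative assumptions rather than the values of Table 2.

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_cnnv2(input_len: int, n_classes: int = 2) -> tf.keras.Model:
    """Sketch of a 1D-CNN with 3 CNN and 2 MLP layers ('CNNv2')."""
    model = tf.keras.Sequential([
        # Three Conv1D blocks with ReLU and max pooling (filter counts assumed).
        layers.Conv1D(64, kernel_size=3, activation="relu",
                      input_shape=(input_len, 1)),
        layers.MaxPooling1D(pool_size=2),
        layers.Conv1D(128, kernel_size=3, activation="relu"),
        layers.MaxPooling1D(pool_size=2),
        layers.Conv1D(128, kernel_size=3, activation="relu"),
        layers.MaxPooling1D(pool_size=2),
        layers.Flatten(),
        # Two MLP layers; softmax output over the stress / no-stress classes.
        layers.Dense(64, activation="relu"),
        layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Example: build for the augmented series length used in this study.
model = build_cnnv2(input_len=11_009)
model.summary()
```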
Performance Evaluation
The metrics chosen to evaluate the models needed to be suitable for the classification of the categorical variables "stress" and "no-stress". For that reason, the metrics Accuracy (Ac), Recall (Re), Precision (Pr), and F1-Score (F1) were selected. Each of these metrics assesses the models' classification performance from a different perspective.
Accuracy-It provides a general sense of how well the model performs in separating the stress and non-stress classes. The higher the value, the greater the model's accuracy.
Recall-This metric is also known as the sensitivity, or true positive rate. It computes the proportion of true positive predictions out of all actual positive instances. In the context of this research project, a high recall value indicates that the model is sensitive in detecting social stress, which is critical for its practical application.
Precision-This metric calculates the proportion of true positive predictions out of all predicted positive instances. The higher the value, the more accurately the model predicts true positive instances. This helps minimise false positives, which is crucial when dealing with stress assessment.
F1-This metric provides a balanced view of the model's performance by considering both precision and recall. In stress classification, achieving a balance between minimising false positives (Pr) and false negatives (Re) is vital. A high F1 indicates that the model accurately identifies instances of social stress and minimises false classifications.
These metrics are computed as Ac = (TP + TN) / (TP + TN + FP + FN), Pr = TP / (TP + FP), Re = TP / (TP + FN), and F1 = 2 · (Pr · Re) / (Pr + Re), where the classification outcomes are True Positive (TP), True Negative (TN), False Positive (FP), and False Negative (FN).
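For reference, these four metrics can be computed directly from predicted and true labels, for example with scikit-learn; the paper does not state which library was used, so the snippet below is only an illustrative sketch with toy labels.

```python
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

y_true = [1, 0, 1, 1, 0, 0, 1, 0]   # 1 = stress, 0 = non-stress (toy labels)
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]

print("Accuracy :", accuracy_score(y_true, y_pred))
print("Precision:", precision_score(y_true, y_pred))
print("Recall   :", recall_score(y_true, y_pred))
print("F1-Score :", f1_score(y_true, y_pred))
```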
Experimental Results
The visualisations provided in Figure 4 offer a distinct view of the contrasting characteristics between the non-stress task (T1) and the stress-induced task (T2) in both the time and frequency domains. In the time domain analysis, the T1 signal exhibits fluctuations within the range of −250 to 250 units, while in the presence of stress during T2 this range becomes wider, spanning from −500 to 500 units. This change in range suggests a potentially heightened physiological response during the stress task. Likewise, when we delve into the frequency domain, we notice a parallel pattern. In the frequency domain representation, the T1 signal presents values oscillating between 0 and 1, whereas the T2 signal exhibits a wider span of 0 to 5. This expanded variation in the frequency domain further emphasises the distinction between the non-stress and stress-induced states. Moreover, the implications of these observations extend beyond mere visualisation. The frequency domain signal has immense potential as a feature for training and testing deep learning methods aimed at stress classification. While the raw BVP signal encapsulates temporal patterns, the frequency domain offers insight into the underlying frequency components that contribute to those patterns. By extracting features from the frequency domain, deep learning models can potentially capture and leverage distinctive spectral characteristics related to stress. The plots in Figure 4 illustrate the GT BVP signals of subject 1 during tasks T1 and T2 before and after the FFT is applied to the data.
Figure 5 shows the estimated heart rate (BPM) extracted from video T1 of subject 1, using the CuPy CHROM method from the pyVHR toolbox. This visualisation illustrates the state before and after augmentation using linear interpolation, where it is possible to infer that expanding the original dataset of 173 data points to 11,009 data points did not alter the underlying signal, reinforcing the consistency between the original and augmented data. The processed and augmented dataset is then partitioned into training, validation, and test datasets, using 10% for validation and 10% for testing.
Likewise, Figure 6 shows the estimated heart rate (BPM) plotted from the T1 and T2 videos of subject 1, using the CuPy CHROM method from the pyVHR toolbox. This visualisation illustrates the state before and after augmentation using white noise, where it is possible to infer that expanding the original dataset of 173 data points to 11,180 data points did not alter the underlying signal.
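The 80/10/10 partitioning of the processed and augmented dataset described above could, for instance, be realised with two calls to scikit-learn's train_test_split, as sketched below; the stratification, random seed and toy data are assumptions rather than details taken from the paper.

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Toy stand-ins for feature windows (X) and stress / non-stress labels (y)
X = np.random.rand(200, 128)
y = np.random.randint(0, 2, size=200)

# First split off 20 %, then halve that part into validation and test sets
X_train, X_tmp, y_train, y_tmp = train_test_split(X, y, test_size=0.2, random_state=42, stratify=y)
X_val, X_test, y_val, y_test = train_test_split(X_tmp, y_tmp, test_size=0.5, random_state=42, stratify=y_tmp)
print(len(X_train), len(X_val), len(X_test))   # -> 160, 20, 20
```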
Classification Results
Three distinct DL methods (LSTM, GRU, 1D-CNN), each with different architectures (as detailed in Table 2), were implemented to identify the optimal model to effectively classify stress levels. Although this work focuses on building the best DL model to accurately classify stress status by extracting rPPG from face videos, the classification task was also conducted separately on the GT BVP signals of the UBFC-Phys dataset, in order to compare the performance of the DL models on the GT-BVP and the rPPG data. The results in Table 3 present the top-performing results achieved in this article for both the raw GT (TD) data and the processed GT (FD) data. The results are arranged in descending order, highlighting the best-performing models and their respective accuracy. Additionally, the computation time for each model is provided to allow a comparison of the execution times of the different models. There is a noticeable difference in computational efficiency between the CNN models and the LSTM and GRU models. The 1D-CNNv1 model completed 50 epochs in just 4.24 s, while the LSTMv2 model required approximately 1 min and 30 s to achieve the same. The accuracy of the models varies between approximately 41.67% and 83.33%, and the best results were obtained using the TD data. However, some models exhibit different performances depending on the domain. For example, the 1D-CNNv2 model achieves significantly better accuracy (83.33%) in the time domain compared to its accuracy (50.00%) in the frequency domain. On the contrary, the GRUv2 model demonstrates a higher accuracy (62.50%) in the FD compared to its accuracy (58.33%) in the TD. Concerning the number of epochs for training and testing the models, the majority of the models needed only 50 or fewer epochs. On the other hand, the 1D-CNNv2 model achieved its highest performance around the 60th epoch, as can be seen in Figure 7. Regarding the precision and recall in Table 3, precision is in some cases balanced with recall, while in others trade-offs are evident. As previously mentioned, models with both high precision and high recall scores are effective at correctly classifying stress instances (true positives) and minimising both false positives and false negatives. For instance, the 1D-CNNv2 model achieved this balance, with an accuracy of 83.33% and precision and recall of 83.33%. On the other hand, models with high recall but lower precision predict more instances as stressed, including those that are uncertain. This is useful when capturing all stress instances is a priority, even if it means more false positives. The GRUv1 model in the FD shows this pattern, with a recall of 91.67% but a precision of 61.11%. It is also clear that the 1D-CNNv2 model achieved the highest accuracy (83.33%) among the tested methods. This suggests that it might be the most effective model for classifying stress and non-stress states from the GT-BVP signals. From Table 4, it can be inferred that the results achieved by the traditional machine learning method employed by the dataset's authors (75%) and by the CNN-MLP model utilised in the study by Hasanpoor et al. (82%) [54] were both exceeded in this work (83.33%).
With regard to these results, several conclusions can be drawn from Table 5. From a wider perspective, the accuracy ranges from 79.17% to 95.83%, indicating the DL models' effectiveness in distinguishing between stress and non-stress states, which, in the opinion of the authors, can be considered a very good performance across the models. Precision and recall values vary across all models, with some achieving 100% and others slightly lower (the lowest being 73.33%), and the F1 score follows the same trend. Considering the domains and augmentation techniques, the majority of the models excelled in the frequency domain, whereas the 1D-CNNv3 demonstrated high scores across all metrics in the TD. In terms of augmentation techniques, interpolation and no additional augmentation achieved the best performances across all models. Furthermore, both the CuPy-CHROM and Torch-CHROM pyVHR methods can be a good choice for estimating BPM from facial videos for stress classification, because all three CNN models achieved high performances with them, although with distinct augmentation techniques and domains. Regarding the train and test times, these range from a few seconds to over two minutes, with the CNN models having the best execution times compared with the LSTM and GRU models. In terms of the number of epochs for training and testing, for the great majority of the models fewer than 50 epochs were needed, with a few exceptions, as in the case of the model that achieved the best overall performance, 1D-CNNv1 with the white-noise and FD configuration, whose performance slightly improved from around 91.70% to 95.83%. The validation loss and accuracy curves also reflect that difference, where it can be seen that the model's performance slightly improved after around 60 epochs, with an increase in the testing curve and a decrease in the loss curve (refer to Figure 8). Considering the importance of the accuracy, precision, and recall metrics, along with the focus on real-world deployment utilising edge devices, the following models appear to be the strongest candidates: 1D-CNNv1, using the CuPy-CHROM method, white noise augmentation, FD, and 100 epochs, with a mere 7.8 s of execution time; 1D-CNNv2, also using the CuPy-CHROM method, with linear interpolation augmentation, FD, and 50 epochs; and 1D-CNNv3, using the Torch-CHROM method, with linear interpolation augmentation, TD, and 50 epochs. These models, as illustrated in the normalised confusion matrix in Figure 9, consistently achieve high accuracy (95.83%), precision, and other metrics across TD, FD, and pyVHR methods. They are well-suited for real-time applications due to their relatively lower training times compared to the LSTM and GRU models. Furthermore, these models demonstrate that they are efficient in processing sequential data such as time series, making them suitable for processing heart rate data extracted from videos. Moreover, the balanced precision and recall they offer make them well-suited for stress and non-stress classification, as avoiding false positives and false negatives is crucial. As shown in Table 6, two of the three CNN models (1D-CNNv2 and 1D-CNNv3) achieved perfect scores (100%) in all performance metrics. These results were omitted from the best results in Table 5 and are likely the consequence of overfitting, due to training a heavy model on a small dataset. The authors believe that it is reasonable to assume that the deployment of these models, along with their associated weights, to real-world data scenarios would probably yield performance outcomes that are less impressive.
Conclusions and Future Work
This paper has successfully established a robust framework for remote stress detection through the analysis of physiological signals derived from facial videos. The primary goal was to ascertain an advanced DL model for stress classification, surpassing the capabilities of traditional ML techniques. The adoption of three DL methods (LSTM, GRU, and CNN) and their refinement through empirical optimisation yielded significant achievements, including an impressive 95.83% accuracy in classifying stress from rPPG signals. The outstanding computational efficiency of the best-performing DL model, 1D-CNNv1, aligns seamlessly with the prospect of deploying the framework on edge devices. The exploration of augmentation techniques, particularly linear interpolation and the absence of augmentation, showcased promising outcomes, highlighting their efficacy in enhancing model performance. The proposed methodology holds significant potential to influence stress-related policies, practices, and management, potentially fostering increased user engagement with stress detection tools. However, it is crucial to acknowledge a major limitation inherent in the rPPG approach, centred around privacy concerns stemming from the utilisation of cameras and the diversity of the participants. The privacy issue emphasises the need for user consent and necessitates a careful balance between the potential advantages of the approach and the preservation of individual privacy rights. It is imperative to underscore that the rich insights provided by this approach should be accompanied by stringent privacy measures, ensuring that user consent is sought and respected throughout the stress detection process. Future work will focus on improving signal extraction through alternative physiological sensing tools and optimising parameters in existing toolboxes. Exploring additional augmentation techniques and advancing DL methods, particularly focusing on 1D-CNN, stand as promising paths for further enhancement. Rigorous validation through cross-validation and testing on diverse datasets is paramount to assess model robustness and ensure generalisation across various scenarios. Furthermore, future investigations could also consider the potential influence of participant ethnicity on model accuracy, recognising the importance of addressing diversity in the dataset and its implications for the broader applicability of the stress detection framework.
Figure 1. Stress detection framework. The video frames serve as inputs to the pyVHR toolbox, enabling the extraction of rPPG BPM signals from facial regions within the frames. The derived BPM signals are subsequently channelled through DL models (LSTM, GRU, and 1D-CNN), culminating in stress classification outcomes.
Figure 4. Graphs depicting the Time Domain (TD) and Frequency Domain (FD) representations of the GT BVP signals for subject 1 during tasks T1 and T2.
Figure 5. Plot of estimated BPM extracted from video T1 of subject 1, using the method CuPy CHROM, before and after augmentation using linear interpolation.
Figure 6. Plot of estimated BPM extracted from videos T1 of subject 1, using the method CuPy CHROM, before and after augmentation using white noise.
4.1.1. Performance Analysis of the DL Methods Applied to the GT Signal
Figure 7. Validation loss and training accuracy curves of the GT-1D-CNNv2 model.
4.1.2. Performance Analysis of the DL Methods Applied to the rPPG Signal
Moving forward to the performance of the DL models on the estimated BPM, these were obtained considering the different methods of BPM extraction on the pyVHR toolbox (CuPy-CHROM, CuPy-POS, and Torch-CHROM), different epochs (50-100), augmentation techniques (none, linear interpolation, and white noise), DL model versions, input domains (TD and FD), evaluation metrics (accuracy, precision, recall, and F1 score), and execution times. The training and testing generated over two hundred lines of results. The best results per DL model version and per pyVHR method are depicted in Table 5.
Figure 8. Plot of estimated BPM extracted from videos T1 of subject 1, using the method CuPy CHROM, before and after augmentation using white noise.
Figure 9. Confusion matrix showing performance across different models.
Table 1. Parameters and methods used for rPPG with pyVHR toolbox.
Table 4. Comparison of different papers' results on the UBFC-Phys data.
Table 5. Best DL method results from the rPPG data.
Table 6. Overfitted results of the rPPG data. | 2024-02-11T16:32:45.244Z | 2024-02-01T00:00:00.000 | {
"year": 2024,
"sha1": "0cd881cf8d687cb3dda3d766abda4827cf827c44",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "622e14fe2773131d7f9e939f7fc60f78127ee81c",
"s2fieldsofstudy": [
"Computer Science",
"Psychology",
"Medicine"
],
"extfieldsofstudy": []
} |
55945101 | pes2o/s2orc | v3-fos-license | Analysis of changes in post-seismic landslide distribution and its effect on building reconstruction
Six years after the devastating Ms 8.0 Wenchuan earthquake, new landslides, debris flows, and flash floods still occur frequently in the earthquake-stricken regions. This shows that the geological hazards that occur after a major earthquake in a mountainous environment can be a long-term threat. However, post-earthquake reconstruction and relocation of local residents often neglect this evolving threat, and its interaction with existing and rebuilt houses has not been well studied. Here we show that the evolving mountain environment, including the changed geographic distribution of new landslides and the continuously uplifting riverbed, creates emerging risks for existing and rebuilt houses. We use spatial analysis of landslide debris and the location of houses from high-resolution images and field survey in the study area and find that new landslides and the houses rebuilt after the Wenchuan earthquake have a similar trend of moving to lower elevations, gentler slopes, and closer to rivers. This study confirms that the persistent downward movement of landslide debris has rapidly filled up riverbeds over the past 6 years. The elevated riverbeds make the study area extremely susceptible to flash floods, creating further risks to newly rebuilt houses that are closer to the river. We highlight the often neglected dynamic process that involves changes in the natural environment and man-made constructions and their interaction. This dynamic process requires long-term monitoring and adaptive management of mountainous regions after major earthquakes that can fully consider the sophisticated evolving risks caused by the changing environment, exposure, and vulnerability in the region.
Introduction
Geohazards after major mountain earthquakes not only pose an imminent threat to lives by coseismic landslides and quake-dam breach floods (Cui et al., 2011; Fan et al., 2012a) but may also persist for more than 10 years before recovery to pre-earthquake conditions (Cui et al., 2011; Huang, 2011). Post-seismic reconstruction often occurs quickly during that time, while geodynamic activity still continues. The lack of understanding between a changing mountain environment and reconstruction of buildings in the affected areas results in more damage in the years following the earthquake (Cheng et al., 2005; Sudmeier-Rieux et al., 2011).
The evolution of post-seismic landslides is a complex issue that may have distinct patterns in different geographic settings. Studying typhoon events before and after the Chi-Chi earthquake, Lin et al. (2006) found that the density of post-earthquake rainfall-induced landslides increased significantly compared to those of pre-seismic typhoon events, and newly triggered landslides were located on steeper slopes. However, research on spatiotemporal landslide changes after the 2005 Kashmir earthquake showed that apart from the heaviest monsoon seasons, landslide changes are insignificant 2 years after the earthquake (Khattak et al., 2010; Saba et al., 2010; Khan et al., 2013).
Considering the large number and complex types of landslides triggered by the Wenchuan earthquake (Gorum et al., 2011), it is difficult for most post-seismic landslide-affected slopes to fully revert to the pre-seismic condition covered by stable vegetation (Lin et al., 2008; Wang et al., 2014). Furthermore, the large number of earthquake-related landslides near the drainage system may have an effect on post-seismic sediment flux over a decadal timescale (Fan et al., 2012b). On a regional scale, erosion caused by earthquake-related landslides has a long-lasting effect on the landscape (Keefer, 1994; Dadson et al., 2003). Based on studies of sediments in check-dam basins, Koi et al. (2008) concluded that mountain earthquakes can lead to high sediment discharge for 100 years. Chen (2009) studied the sedimentary impacts of coseismic and post-seismic landslides in rainfall events in the Tachia River basin after the Chi-Chi earthquake and found that landslides provided materials for numerous debris flows, filled up the riverbed, and led to flash floods inundating downstream areas. After the Wenchuan earthquake, Cui et al. (2011) and Huang (2011) predicted that the debris generated by landslides may evolve into major post-seismic geohazards, such as debris flows, which can last for at least 10 years before recovery to the pre-seismic conditions. Materials generated from mountain earthquakes more than 100 years ago may have led to the disasters that occurred in Zhouqu, China, that claimed more than 1700 lives in 2010 (Xin, 2010).
Post-disaster reconstruction always occurs almost immediately following a disaster but can have long-term impacts for the people affected (Jha and Duyne, 2010). Furthermore, most studies on post-seismic reconstruction have mainly been carried out in urban areas. After major earthquake disasters, the lifeline systems are regarded as the most urgent infrastructure to recover in order to reduce the impact of the disaster in cities (Kozin and Zhou, 1990). Building reconstruction after the 1999 Chi-Chi and the 1994 Northridge earthquakes was completed 35 months after the main earthquake (Wu and Lindell, 2004), whereas the reconstruction was much faster for the 2008 Wenchuan earthquake. Because of the dominant role played by the central Chinese government, reconstruction of rural housing was virtually completed 1 year after the main earthquake, which was 2 years less than the overall plan (Dunford and Li, 2011). Housing recovery usually takes place in four stages from emergency shelter to permanent housing (Quarantelli, 1982). Accelerated reconstruction of permanent housing is often selected at the location of temporary housing (UNDRO, 1982), which is usually built in the chaos following the disaster using incomplete information (Johnson, 2007).
The reconstruction of housing and roads has an impact on mountain geohazards after a major earthquake (Atta-ur-Rahman et al., 2010, 2013; Sudmeier-Rieux et al., 2011). After the 2005 Pakistan earthquake, Khattak et al. (2010) observed that the locations that showed an increase in landslides were situated along rivers and/or roads. After the Wenchuan earthquake, the lack of reconstruction space in the mountainous regions caused some rebuilt infrastructures to be placed in the hazardous area. Because of the rough terrain in the study area, most buildings and other infrastructure before and after the earthquake were located along rivers. Several quake-dam breaches and landslides along the riverbank provided sediment that caused the rising and widening of the riverbed. The competition for space between housing reconstruction and sediment transportation posed further risks in this area.
To understand post-earthquake landslide-related disasters, analysis of hazards, elements at risk, and other related factors should be integrated (Shi, 1996; Schwendtner et al., 2013). The typology, location, and intensity of geohazards are still changing, and the hasty reconstruction after the major mountain earthquake does not take all factors into consideration. Therefore, it would be impossible to understand post-seismic disasters from single studies of changing mountain hazards or the study of change patterns of reconstructed infrastructures. This paper will use a small basin in the Wenchuan area to study the effect of changing earthquake-related landslides, the varying patterns of reconstructed houses, and their interaction to understand the mechanism of mountain disasters after major earthquakes.
Data and methods
The study area is part of east Pingwu County in the lower part of the Hongxi River watershed, which is a first-order tributary on the right side of the upper Fujiang River in northern Sichuan, as shown in Fig. 1. This region experienced Modified Mercalli Intensity scale X and XI and suffered extensive damage, with more than 60 % of the housing collapsing and most of the remaining structures being heavily destroyed during the 2008 Wenchuan earthquake (Mao et al., 2009). The main fault of the Wenchuan earthquake began near the epicenter close to the town of Yingxiu and ran more than 200 km northeast through this watershed, with maximum vertical and horizontal surface rupture offsets of 6.5 and 4.9 m, respectively (Xu et al., 2009). Most buildings within 50 m of this surface rupture during the earthquake suffered damage ranging from serious damage to total collapse. More than 30 % of the buildings were found to have collapsed within 1000 m of the surface rupture in the Nanba region (Zhao et al., 2012). The Hongxi River drains the watershed from the northeast to the southwest, with its major watercourse running parallel to the fault.
After the 2008 earthquake, landslide debris in the watershed experienced dramatic changes, with some small, shallow landslide debris being re-vegetated and the shape of most of the large landslides evolving into flow-like morphology over the following years. The morphological change was caused by the continued downward movement of landslide debris. The satellite images used in this study are listed in Table 1. A 25 m resolution DEM was derived from a 1:50 000 scale topographic map. Using RPCs and at least four GCPs for each image, orthorectification was done with a root mean square error of less than 2 m for all images, based on the 2002 IKONOS image, which had the minimum off-nadir angle of 4.2°.
Manual interpretation of buildings was carried out on the platform of ENVI 4.8 based on the shape, spectrum, texture, and other information of houses on the 2002 and 2011 images. Given the low economic activity of this mountainous rural area, the IKONOS image in 2002 and the WorldView image in 2011 could be used to represent the pre- and post-earthquake condition. Field survey revealed that the average size of pre- and post-quake buildings was greater than 4 m × 4 m. The footprint of each house was represented as a polygon by digitizing building outlines. Post-earthquake houses were also validated by field surveys in 2012 and 2013. Buildings destroyed by landslides during the 2008 earthquake and damages caused by debris flows and floods in 2012 were validated in selected areas along the valleys in the study area. Thirty-one buildings in Maanshi and 51 in Hejiashan were completely destroyed by landslides during the 2008 earthquake. Flood- and debris-affected buildings in 2013 were also surveyed and mapped based on GPS tracks, with 185 and 65 buildings affected by floods and debris, respectively.
Along with sediment transportation and deposition, the riverbed provides a key link between landslides in the upstream basin and houses rebuilt on the lower streams. Riverbeds before and after the earthquake were interpreted based on very high-resolution images in 2002 and 2011. In these images, the riverbed was interpreted based on its tone in stereo images draped over the 25 m DEM. The riverbed in 2002 was defined to incorporate floodplains, which were covered by very little vegetation with similar elevation near the watercourse. The mapping of the riverbed in 2011 was based on image interpretation of texture, where deposited rocks of various sizes were obvious, and confirmed with field validation in 2012 and 2013.
The landslides in 2008 and 2011 were interpreted using high-resolution images and validated by field surveys from September 2012. Coseismic landslides in 2008 were mapped by comparing two SPOT5 images pre- and post-earthquake and validated based on other existing landslide inventories in this area (Xu et al., 2014).
Field surveys from 2012 and 2013 showed significant accumulation of debris in the riverbed. Based on field measurements from August 2012 and September 2013, we selected buried houses at four sites located on the riverbed as reference points to estimate the depth change of deposits along the stream (Fig. 2). After the rainstorm in August 2013, the accumulation of debris in these locations on the riverbed rose from 1.4 to 3.2 m. The rising and broadening river morphology show in combination that the amount of sediment deposition on the riverbed is substantial.
Results
The main surface rupture of the Wenchuan earthquake runs through the bottom of the Hongxi River valley in the study area (Xu et al., 2009). About 68.9 % of the pre-seismic housing was located within 1000 m of the surface rupture. Within this distance, less than 10 % of the total houses survived the major earthquake in 2008 (Zhao et al., 2012). Despite this heavy damage along the surface rupture, an increased percentage of post-seismic houses, up to 75.2 %, is located within 1000 m of the surface rupture. A major increase in post-seismic housing was observed between 50 and 250 m from the surface rupture (Fig. 3). The distances between housing and the surface rupture displayed a similar distribution for the post-seismic houses as for the pre-seismic ones. The number of these houses decreased dramatically with distance from the surface rupture, which reflects the distribution of the scarce construction space in the study area.
Because of the steep terrain, both pre- and post-earthquake houses are sparsely distributed within the study area. Based on overlay analysis, 145 houses were affected by coseismic landslides in 2008, including houses that were fully or partially covered by the landslides (Fig. 4a). Twenty-three landslides were identified as damaging coseismic landslides that affected houses in the watershed. Two of these are listed among the top 20 fatal landslides of the Wenchuan earthquake: the Maanshi landslide and the Zhengjiashan landslide cluster (Yin, 2008). The spatial distribution of post-earthquake houses differs from that of pre-earthquake houses: some relocations were made from previously occupied locations to unoccupied new sites, the post-earthquake houses tend to cluster within major locations, and rebuilt housing clustered along riverbanks near Wenjiaba and east Jiankang (Fig. 4b and c). In addition, major decreases in housing were observed east of Hongxi village, and the movement of housing clusters from east of Jiankang to Jiankang is obvious.
Compared to the pre-earthquake buildings, the number of post-earthquake houses increased from 2139 to 2371.
Figure 5a shows that regions with slopes of 30 to 35° have the largest area of new and enlarged landslides, and there seem to be fewer new and enlarged landslides on gentle slopes. However, because of the smaller total area with gentle slopes, a higher percentage of gentle slope areas between 5 and 15°, and of steeper slopes above 40°, is threatened by enlarged landslides (Fig. 5b). More than 90 % of the housing was located below 35° both before and after the 2008 earthquake, with large increases in post-earthquake houses on slopes gentler than 25° (Fig. 5b).
Figure 6a shows that there are more enlarged than newly generated landslides at all elevation intervals. New and enlarged landslides occur mainly below 1000 m, with fewer at higher elevations. Elevation in the watershed ranges from 684 to 2286 m, while the altitude of the residential houses ranges from 688 to 1650 m. The highest residential house pre- and post-earthquake was located near 1650 m at the eastern margin of the watershed. Relocated houses were mainly distributed at lower elevations near downstream Wenjiaba and Jiankang (Fig. 4b and c). Two main clusters of houses occurred at elevation intervals of 688 to 900 m and 1000 to 1300 m for pre- and post-earthquake houses, respectively (Fig. 6b). Compared to the elevation of pre-earthquake houses, the major increase for post-earthquake houses occurs at elevations ranging from 688 to 800 m, showing that the lower elevations were preferred for the rebuilding of houses. Above 1200 m, fewer post-earthquake houses were built compared to the pre-earthquake situation.
The dominance of enlarged landslides over newly generated landslides indicates that most of the post-earthquake landslides are related to existing coseismic landslides. A greater percentage of new and enlarged landslides at lower elevations between 800 and 1100 m indicates that the low elevation areas are more susceptible to post-earthquake landslide activities. The active enlarged landslides on the lower slope gradients and elevations are caused by the downward movement of landslide debris to the valley bottom, where most post-earthquake reconstructed buildings become significantly susceptible to post-earthquake landslide activities.
Compared to the pre-earthquake water channel, an increase in river width was observed in the upstream area and a decrease in width in the downstream area (Fig. 7). We investigated the width of the pre- and post-earthquake water channel within this watershed from images and field surveys. Moving upstream from the river mouth at 300 m intervals, different cross-sections of the riverbed were measured along the main riverbed (Fig. 7). We compared the pre- and post-earthquake riverbed widths, and the changes to the post-earthquake riverbed are shown for each riverbed cross-section in Fig. 7. After the earthquake, the riverbed width increased dramatically, especially in the upstream section and in segments of the downstream part, with three substantial increases at upper Hongxi and Jiankang villages (Fig. 2). However, a decrease in the riverbed width was also observed upstream of Wenjiaba village. This decrease was caused by land reclamation from floodplains for industrial use, which was also observed in the field work. The most significant increase in riverbed width was located between Wenjiaba and Jiankang, where the terrain is flat and sediment was widely deposited.
Based on digitized and field-validated riverbed data from 2002 and 2011, the distances of houses from the main riverbed were calculated by buffer analysis for the pre- and post-earthquake conditions (Fig. 8). For every distance unit from the riverbank, a higher percentage of rebuilt houses was located within 70 m of the riverbank compared with the pre-earthquake housing, which places more post-earthquake houses in flood areas during flood events, such as the August 2013 flood. Compared to 17 % of houses located within 100 m of the riverbank before the earthquake, 23 % of post-earthquake houses were located within 100 m of the riverbed after the reconstruction. This is most likely due to the trend of rebuilding towards lower elevation and gentler slope areas along riverbanks in this mountainous region.
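The authors performed this buffer and distance analysis in a GIS environment; a modern equivalent could be sketched with GeoPandas as below. The file names, the projected coordinate reference system (UTM zone 48N) and the 100 m threshold used for the printout are placeholders for illustration, not the study's actual data or workflow.

```python
import geopandas as gpd

# Hypothetical inputs: digitised house footprints and the mapped riverbed polygon,
# reprojected to a metric CRS so distances are in metres (UTM zone 48N assumed here).
houses = gpd.read_file("houses_2011.shp").to_crs(epsg=32648)
riverbed = gpd.read_file("riverbed_2011.shp").to_crs(epsg=32648)

river_geom = riverbed.geometry.unary_union   # merge riverbed parts into one geometry

# Distance of every house to the riverbed and the share of houses within 100 m
houses["dist_to_river_m"] = houses.geometry.distance(river_geom)
share_within_100m = (houses["dist_to_river_m"] <= 100).mean()
print(f"Houses within 100 m of the riverbed: {share_within_100m:.1%}")
```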
Fieldwork conducted in September 2013 showed that flash floods affected houses distributed along the river, mainly concentrated in three locations: Hongxi, Jiankang, and Wenjiaba (red polygons in Fig. 2). Most of the flood-affected areas lie along river channels, apart from a few others located in debris-filled valleys east of Jiankang. All three inundated areas coincide with dramatic river width changes. Rebuilt houses in Hongxi and Shikan are located in the upstream area characterized by deep valleys, where the elevation contrast between houses and the river channel is low, whereas the flood-affected Wenjiaba area in the lower stream is located near the segments of decreased channel width (Fig. 2).
Discussion
Although we only analyzed the lower part of the watershed, the study area covers over 90 % of all coseismic landslides within the studied watershed, in particular the largest ones, which have connections to the riverbed. Based on fieldwork and satellite images, most small and shallow coseismic landslides were rapidly covered by vegetation, leaving most large, deep-seated landslides along the stream as the main debris sources to the riverbed.
Spatial analysis between landslides and houses at different times reveals that the geoenvironment and the elements at risk have changed over time. Before the earthquake, few landslides existed in the watershed; most of the landslides were shallow slides caused by road building, and debris flows originated from steep ridges along the deeply incised valley. Because of the monsoon climate of the region, there are no extreme rainfall events comparable to the cyclone sequences after the 1999 Taiwan earthquake (Lin et al., 2006). However, most of the rainfall events occur during the summer season from May to September, and these have triggered many landslides (Tang et al., 2011). During the earthquake, there were widespread landslides in the study area, and these coseismic landslides evolved into flow-like morphology 3 years after the main earthquake (Fig. 4a). The downward movements of hillslope materials have buried the previous riverbed and the nearby terrace (Fig. 4a), leading to common events of river overflow in the watershed. At the same time, flat areas in the watershed were concentrated along both riverbanks, where most new houses were built (Fig. 4c). The similar trend between the rebuilt houses and the patterns of debris movement from landslides indicates that more flood events might occur in the region.
Conclusions
The impact of an earthquake in a mountain region can persist for a long time and can change from landslides to debris flows in the valley and flash floods near the river course. The increased area of rebuilt houses on gentle slopes and at lower altitudes places more newly built houses at risk of debris flows and flash floods as the riverbed is continually filled by debris from coseismic landslides. The main geohazards in this watershed have gradually changed from landslides and debris flows to more frequent flash floods. Comparing the distribution of landslide debris and houses pre- and post-earthquake on the Fujiang River tributary, we find the following: 1. Post-seismic houses were built closer to the surface ruptures and concentrated on gentler slopes, at lower elevations, and closer to the riverbed.
2. After the earthquake, there were more enlarged than newly generated landslides. Material originated mainly from coseismic landslides. Landslide back-scarp retreat was the main landslide activity of the enlarged landslides.
3. The width and depth of the riverbed have changed dramatically because of material from landslides. The changing width and uplift of the riverbed increase the risk of flash floods in the watershed.
Because of the limited land resources for reconstruction after the Wenchuan earthquake, the rapid post-seismic reconstruction process has encountered significant challenges regarding future seismic, flood, and other related geohazards. We predict that this area might suffer long-term geohazards in the future, and the task of preventing geohazards should be a high priority.
Figure 1. Schematic map of the study area.
Figure 2. Accumulation of debris on the riverbed at four locations and distribution of flooded houses during the August 2013 flood (photos a1, b1, c1, and d1 were taken in August 2012; a2, b2, c2, and d2 were taken in September 2013).
Figure 4. Spatial changes in landslides and buildings.
Figure 5. Distribution of new or enlarged landslides and buildings on different slope units.
Figure 6. Distribution of new or enlarged landslides and buildings on different elevation units.
Figure 7. Riverbed width change before and after the earthquake.
Figure 8. Distribution of housing in different distance buffers from the river in the study area.
Table 1. Inventory of images used in the study. | 2018-12-11T03:23:10.438Z | 2014-08-01T00:00:00.000 | {
"year": 2014,
"sha1": "91376018e6b58046c49aaf22e8e93b43b7542ff4",
"oa_license": "CCBY",
"oa_url": "https://nhess.copernicus.org/articles/15/817/2015/nhess-15-817-2015.pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "1fdbc0566fac1b9a6dd1242b7b5060163af925ab",
"s2fieldsofstudy": [
"Environmental Science",
"Engineering",
"Geology"
],
"extfieldsofstudy": [
"Geology"
]
} |
84844002 | pes2o/s2orc | v3-fos-license | Measurement of the Chair Rise Performance of Older People Based on Force Plates and IMUs
An early detection of functional decline with age is important in order to start interventions at an early stage and to prolong functional fitness. In order to assure such an early detection, functional assessments must be conducted on a frequent and regular basis. Since the five time chair rise test (5CRT) is a well-established test in the geriatric field, this test should be supported by technology. We introduce an approach that automatically detects the execution of the chair rise test via an inertial sensor integrated into a belt. The system's suitability was evaluated with 20 subjects aged 72–89 years (78.2 ± 4.6 years); the test was measured by a stopwatch, the inertial measurement unit (IMU), a Kinect® camera and a force plate. A Multilayer Perceptrons-based classifier detects transitions in the IMU data with an F1-Score of around 94.8%. Valid executions of the 5CRT are detected based on the correct occurrence of sequential movements via a rule-based model. The results of the automatically calculated test durations are in good agreement with the stopwatch measurements (correlation coefficient r = 0.93 (p < 0.001)). The analysis of the duration of the single test cycles indicates beginning fatigue towards the end of the test. The comparison of the movement patterns within one person shows similar patterns, which differ only slightly in form and duration, whereas different subjects show variations regarding their performance strategies.
Introduction
Facing the challenge of the demographic shift, the maintenance of the independence of older adults becomes more and more important [1]. Therefore, preventive measures of physical function are necessary to identify functional decline early, to start intervention programs (such as individual fitness trainings), and to reduce the risk of downstream decline [2]. The decline can be detected via functional assessments covering different aspects of functioning such as balance, mobility, endurance and power [3]. In order to assure such an early detection, functional tests must be conducted on a frequent and regular basis and should be supported by automated technology to keep the additional burden for medical experts as low as possible. Therefore, we introduce an automated measuring system for detecting age-related functional decline. Bean et al. emphasized the importance of muscle power for physical function in older adults. The advantage of the chair rise test in comparison to other geriatric tests is its simplicity and the possibility to perform this test in clinical as well as in home environments. Even with an already existing functional impairment, patients are able to perform this test. Therefore, this test is a suitable test for our measurement system.
As another benefit, the chair rise test requires not only lower limb strength and power but also good balance and coordination [5] and, therefore, covers several components of physical function. In the traditional approach, the duration of the chair rise test is measured by stopwatches as single indicator [5][6][7]. This does not fully exploit the potential of the test as more performance parameters can be derived, which may be more medically relevant than the total test duration. However, technology is needed, in order to measure these comprehensive parameters. As a result, the interest in assessments supported by technology has risen significantly to ensure objective and more detailed analyses. A growing body of literature has examined the use of inertial measurement units (IMU) for sit-to-stand analyses [8][9][10][11][12]. However, video-based systems [13] or force plates [14][15][16] have also been utilized for analyses of chair rise tests. Additionally, various approaches have been proposed to investigate the sit-to-stand (STS) performance, especially in older people. From this, for example, the mean power can be estimated based on the test duration [6,7]. Several groups focused also on the trunk movement [8,11,16,17]. However, research has tended to focus on the sit-to-stand transition rather than the stand-to-sit transition [18], which might also hold crucial information of the functional status.
In order to introduce a novel approach of an automatic and more generalized chair rise test evaluation system for the early detection of functional decline, we present an IMU-based system that automatically detects the execution of the chair rise test via an inertial sensor integrated into a belt and analyzes this test regarding various aspects, which are mentioned in the following. In contrast to existing studies, we do not only focus on the evaluation of the sit-to-stand transition parameters but also consider differences during the single cycles of chair rise test and therefore reveal the characteristic fatigue and unsteadiness associated with the test, which is not considered in most studies. In addition, in this article, different performance strategies of older adults within the five time chair rise test (5CRT) are considered, which enables a more generalized investigation of the chair rise performance. We compare the repeated movements by a single person as well as across different persons. These movements are monitored by a multi-sensor based assessment system, which includes inertial, optical and mechanical sensors. Our main system consists of an IMU integrated into a belt and automatically detects the single transition as well as the whole 5CRT-sequence with a machine learning classifier. After the recognition of the corresponding sequence, the system calculates different performance parameters such as the duration of each transition or of the whole test. The consideration of individual transitions is of special interest, since enabling a separate consideration for clinical assessment as potential predictors of functional decline. Since, in conventional settings, the test duration is the only parameter used to assess the performance, we thereby investigate the association of these separate transitions towards other classical geriatric assessments. Furthermore, we use a force plate and a Kinect camera as reference systems. In order to understand the performance strategies, an understanding of biomechanics of the sit-to-stand and stand-to-sit transitions is necessary. Therefore, we also focus on the biomechanical signal interpretation until we move on to performance evaluations.
Materials and Methods
In this study, the 5CRT was measured in a conventional manner with manual stopwatch measurements, and technically via a force plate, a Microsoft Kinect depth camera (Redmond, WA, USA) and a wearable sensor system. The wearable system consists of one IMU integrated into a belt. The positioning in a belt assures an unobtrusive sensor-placement and can also be easily handled by older adults at home independently.
In the 5CRT, the participant sits on a height-adjustable chair without armrests. The legs are positioned at a 90° angle, and the feet stand firmly on a force plate. In a conventional assessment, the time is measured to complete five cycles of rising from a chair until standing fully erect in a straight and upright position and then sitting down again with arms folded across the chest. A lower test duration means a better performance. The corresponding cut-off values for the performance evaluation are listed in Table 1. The 5CRT is a part of the Short Physical Performance Battery (SPPB) [20], which consists of two other tests besides the 5CRT, namely a 4 m gait speed test and balance tests (side-by-side stand, semi-tandem stand, and tandem stand). In addition to the SPPB, several other geriatric tests (Frailty Criteria according to Fried [21], Timed Up and Go Test (TUG) [22,23], 6 Minute Walk Test (6 min WT) [24], De Morton Mobility Index (DEMMI) [25,26], Stair Climb Power Test (SCPT) [27], and Counter Movement Jump [28]) were conducted in this study. The description of the complete study can be found in [29]. All tests were measured conventionally by medical professionals and additionally by our multi-sensor system. The study has been approved by the appropriate ethics committee (ethical vote: Hanover Medical School No. 6948) and conducted in accordance with the Declaration of Helsinki.
Sensor Belt
Our main system consists of a commercially available sensor unit (DX-3, Humotion GmbH, Münster, Germany), which includes a triaxial accelerometer, gyroscope, and magnetometer, as well as a barometer. The sensor unit is integrated into a hip-worn belt, and the sampling rate is 100 Hz for all sensors. The hip positioning was chosen to enable easy use and to assure a fixed positioning and an equal orientation of the sensor unit among all participants. The orientation of the sensor belt is illustrated in Figure 2. Due to its low weight (140 g) and compact dimensions (11 cm × 2.5 cm including battery), the belt enables flexible, easy, and unobtrusive measurements. The correct placement of the sensor belt, as well as the position of the sensor unit inside the belt, was checked and adjusted according to the hip circumference for each participant of our study by our medical professionals. The correct alignment between the L3 and L5 lumbar vertebral bodies was important, especially regarding our machine learning approach and a good classification performance.
Force Plate
Force plates measure the ground reaction force (GRF), which acts on the plate. Figure 3 shows the orientation and dimensions of the AccuPower force platform (Advanced Mechanical Technology, Inc. (AMTI), Watertown, MA, USA) used in our study. The main force acts perpendicular to the plate in the vertical direction. However, forces in the mediolateral direction and in the anterior-posterior direction are also taken into account because they might hold noteworthy information about the balance ability. The force plate in our study measures with a sample rate of 200 Hz. The subject sits on a height-adjustable seat beside the force-plate while his feet stand firmly on the plate.
Microsoft Kinect
A Microsoft Kinect V2 depth camera (30 fps) was positioned in front of the force plate and the chair. The Kinect depth data is used as optical reference system, especially for the biomechanical signal interpretation. Figure 4 shows an example of the depth images in front view and a calculated side view. To ensure a precise synchronization of the data time across all used devices, every computer synchronizes its system time via the Network Time Protocol (NTP) with an NTP server. However, especially, for the direct comparison of the measurements of different system, the synchronization was also checked by significant points in the signals. Figure 5 shows the acceleration in the vertical direction during the functional assessments in our study. The 5CRT is marked in red. In order to realize an automated 5CRT analysis based on the IMU data, we use a hierarchical machine learning (ML) algorithm. First, the state of the activity (static, dynamic, transition) is classified. The subsequent classifier recognizes the relevant movements (namely, sitting (static), standing-up (transition), standing (static) and sitting down (transition)). We therefore trained classification models under consideration of various ML-approaches (Boosted Decision Trees (BDT), Boosted Decision Stumps (BDS), Multilayer Perceptrons (MLP), Adaptive Multi-Hyperlane Machine (AMM)), features and sliding window parameters. A five-fold cross validation was used to validate the performance of our classifier. The complete study and more details of our machine learning algorithm can be found in [30]. We used the following features in our study:
Machine Learning Classification-Model for Activity
A detailed description of each feature and its calculation can be found in [31]. The different approaches were evaluated by the F1-Score, which was calculated by following equations: where precision and recall are defined by: and with tp as true positives, fp as false positives and fn as false negatives. Additionally, a low-pass filter (cut-off frequency: f c = 4.5 Hz) was used for noise reduction. Further details are discussed in [30]. Figure 6 shows the resulting classification-and 5CRT detection-workflow. In order to automatically detect sequences of 5CRT sequences, the annotations resulting from the ML classifier are processed regarding the occurrence of sequential activities via the following rule-based approach. Therefore, we check the motion labels, which were automatically created by our machine learning algorithm and search for valid test sequences. A valid sequence consists of five iterations of the SIT_STAND label directly followed by STAND_SIT label: Since the participants are allowed to hold the position of sitting or standing for a while during the test, the labels SIT and STAND between the corresponding transitions are also valid: All other motion labels are not allowed during the test interval. After one valid sequence is found, the analysis algorithm will be executed for a performance assessment.
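The rule-based check described above can be sketched in a few lines of Python. The label names follow the text (SIT, STAND, SIT_STAND, STAND_SIT); collapsing repeated window labels into events, and accepting the sequence both with and without a final sit-down, are implementation assumptions rather than details taken from the paper.

```python
def collapse(window_labels):
    """Collapse consecutive identical window labels into single events."""
    events = []
    for label in window_labels:
        if not events or events[-1] != label:
            events.append(label)
    return events

def is_valid_5crt(window_labels):
    """Check whether a label sequence forms a valid 5CRT execution."""
    events = collapse(window_labels)
    allowed = {"SIT", "STAND", "SIT_STAND", "STAND_SIT"}
    if any(event not in allowed for event in events):
        return False                       # other motions invalidate the test interval
    transitions = [e for e in events if e in ("SIT_STAND", "STAND_SIT")]
    # Five stand-ups, each followed by a sit-down; whether the final sit-down is
    # strictly required is ambiguous, so both variants are accepted here.
    full = ["SIT_STAND", "STAND_SIT"] * 5
    short = ["SIT_STAND", "STAND_SIT"] * 4 + ["SIT_STAND"]
    return transitions in (full, short)

labels = ["SIT", "SIT_STAND", "STAND", "STAND_SIT"] * 5
print(is_valid_5crt(labels))                        # True
print(is_valid_5crt(labels[:-1] + ["WALK"]))        # False (disallowed label)
```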
Statistical Data Analysis
The data of the automatically recognized CRT sequence is analyzed in various ways. A linear regression, as well as a Bland-Altman plot, is used to analyze the relationship between the overall test duration measured by the IMU-based system and by a stopwatch.
For estimating relationships among different CRT parameters (duration of the 1st/last/average sit-to-stand, 1st/last/average stand-to-sit, and whole test duration measured by the IMU system) and other geriatric tests (duration of the Timed Up and Go (TUG), the walk test of the Frailty Criteria and the Stair Climb Power Test (SCPT) measured by a stopwatch), a linear regression analysis is utilized and the significance is evaluated by the p-value, whereby a p-value < 0.05 indicates a significant relation. A normalized cross-correlation measures the similarity of two signals and is used to examine the intra- and interpersonal variability in performing the CRT cycles. The results were evaluated regarding the rule of thumb for interpreting the size of a correlation coefficient [32]: 0.9–1.0 very high correlation, 0.7–0.9 high correlation, 0.5–0.7 moderate correlation, 0.3–0.5 low correlation, and 0–0.3 negligible correlation.
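A minimal sketch of such a normalised cross-correlation between two movement cycles is given below; taking the maximum coefficient over all lags is an assumption, since the paper does not specify the exact variant used.

```python
import numpy as np

def normalized_xcorr_max(a, b):
    """Maximum of the normalised cross-correlation between two 1-D signals."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    a = (a - a.mean()) / (a.std() * len(a))
    b = (b - b.mean()) / b.std()
    return float(np.max(np.correlate(a, b, mode="full")))

t = np.linspace(0, 1, 200)
cycle_1 = np.sin(2 * np.pi * t)
cycle_2 = np.sin(2 * np.pi * (t - 0.05))        # slightly shifted copy of the same cycle
print(normalized_xcorr_max(cycle_1, cycle_2))   # close to 1 -> very high similarity
```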
Results
At the beginning of this section, we present the study population and the results of our machine learning algorithm to detect the five time chair rise test in our raw data. In order to understand the biomechanics of the detected chair rise test cycles, we first focus on the signal pattern and the qualitative progress of the movements until we move on to performance analysis. After the calculation of the temporal test parameters and their evaluations, a comparison of the performance strategies within our study population follows.
Study Population
Overall, 20 healthy participants aged 72-89 years (78.2 ± 4.6 years) performed the 5CRT. The characteristics of the study population are listed in Table 2. Table 2. Characteristics of our study population (n = 20) with minimum (min), maximum (max) and mean-value (mean) as well as the standard deviation (SD) of age in years, body weight in kg, body height in cm, and results for the five time chair rise test in s.
Sensitivity and Specificity of Applied Machine Learning Classifiers
Among the considered machine learning approaches (Boosted Decision Trees (BDT), Boosted Decision Stumps (BDS), Multilayer Perceptrons (MLP), Adaptive Multi-Hyperlane Machine (AMM)), the best results for the recognition of the state were achieved with a Boosted Decision Tree and for static activities with a Multilayer Perceptrons approach. Details of the classifiers can be found in Table 3. Table 3. Classifier configuration: method, size and step-width of the sliding window, as well as the noise reduction filter and feature set (Root Mean Square (RMS), Signal Energy (SE), Auto Correlation (AC), Correlation (C), Signal Magnitude Area (SMA), Standard Deviation (SD)). The used data for each feature are specified in brackets at the end of the line: acceleration data (Acc), gyroscope data (Gyro). The abbreviations HL and HN stand for hidden layers and hidden nodes. The cut-off frequency of the specific filters is f_c.
The best results for transitions (sit-to-stand, stand-to-sit) were achieved with a Multilayer Perceptrons classifier (four hidden layers, 40 hidden nodes), a window size of 1.135 s and a step width of 0.073 s. The obtained F1-Score is 96.6% for the state classifier, 97.3% for the static classifier and 94.8% for the transition classifier.
Evaluation of Overall 5CRT Duration Accuracy
The calculated 5CRT duration from the IMU-based classification results have been compared with the corresponding manual stopwatch measures for the considered 20 participants. As we have shown in [29], the stopwatch measures have a low inter-rater variability and thus represent a suitable reference measure. Regarding the classification results, the 5CRT duration was calculated as the sum of the durations for the considered sub-activities (sit-to-stand, standing, stand-to-sit and sitting) as described in Section 2.2.
The results of a linear regression between the IMU and stopwatch measurements are shown in Figure 7 and confirm a strong association. In addition, a very strong Pearson correlation with r = 0.93 (p < 0.001) has been identified. To evaluate the medical sensitivity, we analyzed the classification with regard to the cut-off values for the 5CRT. For this, we accounted for the bias of the IMU data by using the equation of the linear regression. Only one participant was classified in the wrong category (stopwatch: 1 point, IMU: 2 points); the other 19 participants were classified correctly (2 × 1 point, 8 × 2 points, 5 × 3 points and 4 × 4 points). This classification also reflects our heterogeneous study population, which includes all point categories except those who were unable to complete five cycles (0 points). Additionally, a Bland-Altman plot was used to analyze the agreement between both systems (see Figure 8). Since the mean difference is −0.61 s, our IMU-based system has a slight fixed bias.
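The sketch below illustrates the two evaluation steps just described: summing the 5CRT duration from per-window activity labels and summarizing the agreement with the stopwatch times by a least-squares line and the Bland-Altman bias. The label strings, the step width reused from the classifier configuration and the example numbers are placeholders, not the study's data.

```python
import numpy as np

def test_duration(labels, step_s=0.073,
                  active=("sit-to-stand", "standing", "stand-to-sit", "sitting")):
    """5CRT duration as the summed duration of the classified sub-activities."""
    return sum(step_s for lab in labels if lab in active)

def agreement(imu_times, watch_times):
    """Least-squares line (watch ~ imu), Pearson r, and Bland-Altman bias/limits."""
    imu = np.asarray(imu_times, dtype=float)
    watch = np.asarray(watch_times, dtype=float)
    slope, intercept = np.polyfit(imu, watch, 1)
    r = np.corrcoef(imu, watch)[0, 1]
    diff = imu - watch
    bias = diff.mean()
    loa = (bias - 1.96 * diff.std(ddof=1), bias + 1.96 * diff.std(ddof=1))
    return slope, intercept, r, bias, loa

# Placeholder durations (seconds) for a handful of participants:
imu_s = [9.8, 12.4, 15.1, 11.0, 18.6]
watch_s = [10.3, 13.0, 15.9, 11.8, 19.0]
print(agreement(imu_s, watch_s))
```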
Evaluation of Distinct Transitions
Besides the total test duration, the durations of the single cycles and distinct transitions might be indicators of functional decline. In order to identify differences in the performance of each cycle, we evaluated the duration of the single cycles. Figure 9 shows a histogram of the distribution of the fastest cycle among all five cycles over all subjects. In most cases, the first cycle has the shortest duration, followed by cycle two. In contrast, Figure 10 shows a histogram of the slowest cycle, which is clearly one of the last two cycles in a sequence of five cycles. Besides the overall 5CRT duration, the evaluation of the single transitions could be worthwhile. Therefore, we calculated the duration of each transition (sit-to-stand and stand-to-sit) automatically with our machine learning classifier. A linear regression shows the relationship between our response variables and the Timed Up and Go (TUG) test duration, the Stair Climb Power Test (SCPT) and the duration of the walking test of the Frailty Criteria (distance: 4.57 m), which are other well-established tests or parts of functional geriatric tests. We analyzed the following response variables: the first sit-to-stand duration, due to its standardized execution with the forced equal starting position; the last sit-to-stand duration, due to the beginning fatigue; the first and last (fourth) stand-to-sit durations; the average sit-to-stand duration (over five cycles); the average stand-to-sit duration (over four cycles, because the test ends after the fifth sit-to-stand movement); and the overall 5CRT duration. Table 4 lists the p-values of our regression analysis. Among these variables, only the first and average sit-to-stand durations show a significant relationship (p-value < 0.05) to the TUG test, with resulting regression lines of t_TUG = (1.80 ± 0.84) · t_1st sit-to-stand + (5.72 ± 1.53) and t_TUG = (1.72 ± 0.87) · t_average sit-to-stand + (5.73 ± 1.64). Neither the stand-to-sit durations, the overall duration, nor the durations of the last transitions show a significant relationship to the other tests. The TUG test is the only one among the mentioned tests which includes transitions, and thus a relationship was to be expected.
Signal-Pattern and Qualitative Progress of the Movements
Furthermore, the signal patterns and the qualitative progress of the CRT movements have been studied. The pattern of a sit-to-stand and stand-to-sit cycle is shown in Figure 11 and represents the acceleration data of the IMU in the mediolateral (ML), vertical and anterior-posterior (AP) directions during this sequence. Due to the acceleration of gravity, the vertical acceleration amounts to Acc_vertical ≈ 9.81 m/s². The second graph shows the angular velocity of the IMU during the same sequence. The third graph shows the force during the same sequence measured by the force plate. The resting weight (weight on the plate while sitting on the chair), as well as the body weight, is marked in the figure. The specific phases of the chair rise test cycle are marked in this figure based on the force plate data (see markers). The Kinect camera data was used as a reference system. In the first phase, the subject is sitting on a chair. The legs should form a right angle, and the feet stand firmly on the ground. During this phase, the accelerations and the forces are almost constant. The second phase describes the rising, with a short preparation phase in which the subject gains momentum and lifts the feet from the ground with a light backward movement before shifting the weight forward in order to rise from the chair. The force increases to a maximum at this point. The rising phase (sit-to-stand) ends when the vertical force reaches the body weight again, after decreasing and increasing once more following the peak force. During the standing and stabilization phase, the force oscillates around the body weight until the subject starts the stand-to-sit movement, which shows motion sequences similar to the rising phase.
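A minimal sketch of how such phase markers can be derived from the vertical force trace is given below: it locates the peak force and the point where the force first returns to body weight after the peak, mirroring the verbal description of the rising phase. The threshold rule, the synthetic force curve and the body-weight value are simplifying assumptions, not the processing actually used with the force plate.

```python
import numpy as np

def mark_rise_phase(force, body_weight_n, fs):
    """Very simplified marking of one sit-to-stand on a vertical force trace.
    Returns (time of peak force, time when the force first drops back to
    body weight after the peak), both in seconds.  force in N, fs in Hz."""
    i_peak = int(np.argmax(force))
    after = np.where(force[i_peak:] <= body_weight_n)[0]
    i_end = i_peak + int(after[0]) if after.size else len(force) - 1
    return i_peak / fs, i_end / fs

# Synthetic example: resting weight 350 N, body weight 700 N, overshoot to ~850 N.
fs = 100
t = np.linspace(0, 2, 2 * fs)
force = 350 + 500 * np.clip(np.sin(np.pi * t), 0, None)
print(mark_rise_phase(force, body_weight_n=700, fs=fs))
```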
The following parameters are often used for analysis: time to stand up, power, maximum ground-reaction force (GRF) or the overshoot of vertical GRF over body weight. In particular, the forces in mediolateral and anterior-posterior direction can be used for sway or balance analysis besides the oscillations in the vertical direction during the stabilization phase. This also applies to the acceleration in the corresponding directions. In comparison to the force plate measurements, the sensor belt also shows significant movements in anterior-posterior direction due to its placement at the hip, while the feet stand firmly on the ground during the whole sequence. The angular velocities can determine the orientations of the hip in detail during the movements.
From a biomechanical point of view, the sit-to-stand (STS) transfer requires complex transfer skills with adequate lower-limb muscle strength and balance control. This transitional movement from sitting to an upright standing posture also requires horizontal and vertical displacement of the whole body's centre of mass (COM) from a stable to a less stable position, with the COM located posterior to the extended lower extremities [33,34]. During a common clinical assessment, the STS transfer can be divided into four basic phases (Figure 12). The upper body, including trunk and pelvis, can be described as a mobile momentum in the context of biomechanical movements. The initial Phase 1, "Flexion Momentum", starts by bending the trunk and the pelvis prior to the moment when the buttocks leave the chair base (seat-off). Subsequently, in Phase 2, "Momentum Transfer", the upper body is transferred and the COM is shifted, producing the forward and upward movement of the whole body. This phase lasts from seat-off to maximum ankle dorsiflexion. An upright body position is achieved during Phase 3, "Extension", and completed by the fully extended position of the hip and knee joints. A straight and stable standing position characterizes the end of the transfer in Phase 4, "Stabilization" [35].
Investigation of Movement Patterns
Besides the duration of the whole test and the single cycles, the comparison of the movement pattern could be worthwhile. It makes sense to distinguish interpersonal and intrapersonal patterns in order to observe differences in the execution of cycles in one person and to examine differences between different persons. First, we concentrate on the differences between the different cycles of the same person.
Intrapersonal-Variability: Cycles of the Same Participant
In order to evaluate the differences in the movement pattern between the different cycles of one person, we first compared the five sit-to-stand and stand-to-sit cycles during the test. Figure 13 shows, as an example, the force plate data during the five cycles and Figure 14 the IMU acceleration data in the vertical (including the acceleration of gravity) and anterior-posterior directions. For clarity, the mediolateral direction (which shows the smallest changes in amplitude) was excluded from the graph. The curves of the movements show similar patterns for both measurement systems. The variance in the duration is mostly related to the standing and descending phases, and only slight differences can be seen in the form of the pattern. Therefore, the intrapersonal variability is low.
To compare the patterns from a more objective point of view, we calculated the correlation coefficients by a normalized cross-correlation analysis of the first test cycle with the following cycles. The correlation coefficients based on the force plate data are shown in Figure 15. In general, the evaluation shows high correlation coefficients and, therefore, a significant similarity between these patterns. This indicates that the participants use a similar performance strategy for all cycles. Table 5 lists the statistics of the correlation coefficients of the first cycle with the subsequent cycles, which range between 0.71 and 0.99, with an average value of about 0.94. Regarding the rule of thumb for interpreting the size of a correlation coefficient [32], these results lie between a high and a very high correlation. The largest deviation seems to be between the first and the last cycle, with a range of 0.28. This might be an indicator of physical exhaustion. Another interesting point is that there is a slight trend towards a lower similarity between the cycles of the faster participants (lower test duration) than of the slower ones, which indicates a higher intrapersonal variability for faster performances. This can be seen from the linear regression analysis over all correlation coefficients, which results in a regression line with the equation y = 3.53x + 8.74. Table 5. Descriptive statistics (minimum (min), maximum (max), standard deviation (SD)) of correlation coefficients of the first test cycle with the following cycles. Subsequently, we also tested the interpersonal variability of the sit-to-stand and stand-to-sit patterns. We considered the first cycle due to its standardized start condition (angle of 90°). Figure 16 shows the correlation coefficients of the first cycle of participant A (the slowest participant among all considered subjects) with the first cycle of the other participants. The correlation coefficients are again determined by a normalized cross-correlation. Therefore, they describe the deviations in the form of the pattern. Overall, this graph indicates that there exist differences in the pattern among different participants. Table 6 lists the descriptive statistics of the correlation coefficients. In comparison to the intrapersonal variability, the correlation coefficients between different persons are significantly lower, with a minimum of 0.65 and a mean correlation coefficient of 0.88. Table 6. Descriptive statistics (minimum (min), maximum (max), standard deviation (SD)) of correlation coefficients of the first test cycle of different participants. In order to analyze the different strategies, we compared the patterns of participants A and B, which show the lowest correlation. Figures 17 and 18 show, as examples, the force plate data (measured amplitude and time, and normalized amplitude and time) during cycles one and two of participants A and B. Besides the large difference in test duration, the faster participant B shows significantly more dynamic movements (range of force). In particular, the overshoot is substantially higher than for participant A. In contrast, participant A shows an almost steady movement with low dynamics. However, this participant has a higher sway, which might indicate a lower balance ability. Another striking point is the higher intrapersonal variability between the cycles of participant B, which was already mentioned in the previous subsection.
A reason could be that fitter participants push closer to their limits and therefore show higher physical exhaustion. Analogous to the force plate data, these findings can also be observed in the acceleration data. Figure 19 shows the normalized acceleration over all three axes for the first 5CRT cycle.
The acceleration of gravity was excluded. The time axis was also normalized to focus on the differences in the form of the movements. We can observe the significantly higher dynamic in the movements of participant B and the higher swaying of participant A.
Discussion
Due to the importance of early-stage detection of functional decline, we introduced a multi-sensor system to analyze the five-times chair rise test (5CRT). This test is a well-established assessment item for the evaluation of functional status. Using an inertial-sensor-based system integrated into a belt, we automatically recognized the relevant activities of the test (sit-to-stand, stand-to-sit, sitting, and standing) via a machine-learning-based classifier. Valid test cycles are detected by a rule-based model. The classifier achieved good results in the detection of transitions (sit-to-stand, stand-to-sit), with an F1-score of around 94.8%. This result is comparable with previous work. For example, Allen et al. [36] reached an accuracy of 93.1 for sit-to-stand transitions and 88.3 for stand-to-sit transitions with a Gaussian mixture model and a waist-mounted triaxial accelerometer. Gupta et al. [37] achieved an accuracy of 95.4 using k-nearest neighbor (k-NN) and 97.7 using Naive Bayes for transitions. The recognition of the transitions is important for automated measurements and subsequent evaluations of the test.
Since conventional assessments usually consider the total test duration only, we investigated whether our automatic IMU-based system is also capable of calculating correct test durations, and we achieved good results with a significant correlation of 0.93. The medical sensitivity could also be initially confirmed, since 19 of 20 participants were correctly classified with regard to the cut-off values for the 5CRT. Although our heterogeneous study population included all categories (except zero points), a larger study population is needed to verify the significance of this result. Additionally, we evaluated the agreement between the two measurement techniques with a Bland-Altman plot. The Bland-Altman plot shows a small fixed bias of −0.61 s, which must be considered. The minimal detectable change (MDC) of the 5CRT differs in the literature depending on the observed study population. For example, Goldberg et al. [38] found an MDC of 2.5 s in older females and Blackwood [39] an MDC of 3.54 s in older adults with early cognitive loss. Therefore, differences within the mean ± 1.96 SD might not be clinically important, and the two methods may be used interchangeably.
In the examination of the durations of the single cycles and, subsequently, of the single transitions, we found that the first two cycles are often the fastest and the last two cycles the slowest. It has to be mentioned that the 5CRT ends at the fifth standing. Therefore, the duration of cycle 5 can be influenced by the end of the test. However, since the last two cycles typically show the longest durations, this could be an indicator of beginning physical exhaustion and could hold crucial information about changes in functional status.
For further investigations, the number of repetitions should be increased to confirm this result. Roldán-Jiménez et al. [40] investigated the muscular activity and fatigue of healthy adults during different repetitions (5, 10, 30) and speeds of STS tasks using surface electromyography in lower-limb and trunk muscles and found significant differences in fatigue in the M. vastus medialis of the quadriceps between the different STS tests, whereby EMG activity emerges, in order of relevance, in M. vastus medialis, M. tibialis anterior, M. rectus femoris and M. erector spinae during the seat-off sequence [41]. The study of Roldán-Jiménez et al. supports our presumption that fatigue already occurs with few repetitions in our heterogeneous study population. In some cases, the first cycle belongs to the slowest ones. This is due to the forced starting position of the legs at an angle of 90°, which makes the rising quite exhausting. Usually, the participants slightly change the sitting position during the test to a more comfortable position for rising.
The investigation of associations between the durations of the transitions and other geriatric test results showed that, among the considered variables, only the first and average sit-to-stand durations show a significant relationship to the TUG test. This might be an indication that the evaluation of the sit-to-stand durations is a more sensitive indicator than the overall test duration or the stand-to-sit duration. The stair climb test and the walk test differ strongly in their motion sequences, whereas the TUG includes transitions, and thus a relationship is more plausible than for the other tests. In contrast to our results, Goldberg et al. [38] confirmed a significant correlation between the 5CRT (overall test duration) and the TUG. We suspect that we did not find a significant correlation due to our small study population.
Additionally, we evaluated the movement patterns of the separate test cycles via a normalized cross-correlation based on the force plate data. The patterns of the same person show similar curves for the five cycles, which differ slightly in form and duration. In particular, the first and last cycles differ most, which seems to be an indicator of physical exhaustion, as already mentioned. Another point was that the intrapersonal variability was slightly higher for fitter participants than for slower ones. This could indicate that less fit subjects move more steadily and distribute their power over the whole test duration, in contrast to fitter ones, who respond well to minor deviations in the movements. The comparison of the patterns of different participants indicated the existence of different performance strategies, which is worthwhile to investigate in detail. It is known that age-related changes in the biomechanical process of standing up can be seen in the strategies adopted to perform the task. For example, Gross et al. [34] revealed that elderly participants show higher muscle activity in M. tibialis anterior and more hip flexion prior to lift-off from the chair seat in Phases 1 and 2 (see Section 3.5). We showed two examples of strategies observed in our study, which differ significantly in time and execution of movements and resulted in either higher sway (less fit) or a higher dynamic range (fit). The comparison of performance strategies is a major topic and should be investigated in greater detail. In particular, the separate examination of the movements in the AP and ML directions could be worthwhile. In contrast to force plate measurements (which measure the center of pressure), IMU measurements also allow statements about movements in the anterior-posterior and mediolateral directions due to the central positioning at the hip (center of mass). Therefore, the IMU system alone can already be used as an adequate or even advantageous measurement system, with the other sensors acting as reference systems.
The use of other pattern-matching analyses, for example dynamic time warping (DTW), could also be useful to investigate the variability in the CRT performance, since this approach is considered more robust than cross-correlation [42]. For a more detailed analysis of differences in the performance strategies, it could also be worthwhile to analyze sub-sequences of the cycle instead of the entire cycle. This would reduce the influence of the overall process and thus emphasize differences in the sub-phases more strongly.
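As a sketch of the alternative mentioned here, a textbook dynamic-programming implementation of DTW is shown below; it is not the implementation of any cited work, and the placeholder cycles only illustrate that sequences of different lengths can be compared directly.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) dynamic time warping distance between two
    1-D sequences, using the absolute difference as local cost."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])

# Two placeholder cycles of different length:
c1 = np.sin(np.linspace(0, np.pi, 120))
c2 = np.sin(np.linspace(0, np.pi, 150)) + 0.05
print(dtw_distance(c1, c2))
```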
However, due to the limited cohort size of the current study, our results should be confirmed in a larger study population. Despite these limitations, we nevertheless believe that our findings are worthwhile for future developments in geriatric assessments and technology-supported assessments, especially regarding high-frequency assessment of functional decline at an early stage.
Conclusions
We introduced an approach for an automatic chair rise test detection and evaluation system via an inertial sensor integrated into a belt and a machine learning classifier. We also considered differences between the single cycles of the chair rise test and thereby revealed the characteristic fatigue and unsteadiness associated with the test. Another point of this paper was the consideration of different performance strategies of older adults. We therefore compared the repeated movements of a single person as well as across different persons and found, from a correlation analysis, that persons maintain their performance strategy during the test but differ from each other interpersonally.
"year": 2019,
"sha1": "3bbbbd670fb9914fc6a77164758a4683ee027f80",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1424-8220/19/6/1370/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "3bbbbd670fb9914fc6a77164758a4683ee027f80",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Medicine",
"Computer Science"
]
} |
Mulching an Arenic Hapludult at Umudike: Effects on saturated hydraulic conductivity and rhizome yield of turmeric
A study was carried out over two cropping seasons on an Arenic Hapludult at Umudike, southeastern Nigeria, to determine the quantity and type of mulch material that would optimize the rhizome yield of turmeric (Curcuma longa Linn) and improve the saturated hydraulic conductivity of the soil. The turmeric was planted at two depths: 5 and 10 cm. Two types of mulch, straw (elephant grass) and wood shavings, were evaluated at three rates (0, 4 and 8 t/ha). Results showed that the effect of mulching on rhizome yield was significant. Yields increased significantly with the rate of mulch. Rhizome yield of turmeric was significantly influenced by the depth of planting and the mulch type used. The 10 cm planting depth out-yielded the 5 cm depth, and straw mulch out-yielded wood shavings. Optimum values of Ksat occurred at the 4 t/ha mulch rate. Bulk density, macro porosity and micro porosity were the most important physical properties influencing Ksat of the soil. Total porosity (Pt) and void ratio (Ve) were not good indicators of Ksat even though they positively explained 98 and 96% of its variations, respectively.
INTRODUCTION
Turmeric (Curcuma longa Linn), a shallow-rooted crop, is a relative of ginger and is indigenous to India. It is a spice used extensively by all classes of people in India. India exports 20 different spices, among which turmeric ranks fourth in terms of foreign exchange earnings (Naidu and Murthy, 1989). The soil and agro-climatic conditions at Umudike are favorable for successful cultivation of turmeric (Olojede et al., 2003).
Soil conditions at the rooting depth of a crop influence the full exploitation of its genetic potential. Mulching affects the conditions in the surface layer of the soil and, in consequence, the crops with shallow root systems. However, the magnitude of mulch effects on nutrient supply and on the improvement of soil physical properties depends on the quantity and quality of mulch, soil properties and environment (Lal, 1995). Lal (2000) reported, for an Alfisol, high variability in the data on Ksat with no consistent trends with regard to mulch rate or depth. Mulch rate had no significant effect on Ksat for any of the layers (0 to 50 cm depth). However, Ksat changed significantly with depth (Lal, 2000). It therefore becomes necessary to: (1) determine the quantity and type of mulch material that will optimize the saturated hydraulic conductivity of an Ultisol at Umudike; and (2) evaluate their effects on rhizome yield of turmeric.
Experimental design and soil sampling
A factorial combination of two planting depths, two mulch types and three mulch rates with three replications was arranged in a split-split plot format using a randomized complete block design. The main plot treatments were two depths of planting (5 and 10 cm). The subplot treatments were two mulch types (straw and wood shavings).
The sub-sub-plot treatments comprised three mulch rates (0, 4 and 8 t/ha). The dimensions of each sub-sub-plot were 3 m x 2 m. Planting was done in June each year. Mulch treatments (dry) were applied immediately after planting. Fertilizer (N.P.K. 15:15:15) was applied 8 weeks after planting (WAP) at the rate of 400 kg/ha. Primextra and Gramazone herbicides were applied as pre-emergence treatments at the recommended concentrations, while supportive roguing was done at regular intervals to keep the plots weed-free.
Harvesting was done at 28 WAP and data collected on rhizome yield. Two undisturbed core samples were collected from each plot at 0-5 cm and 5-10 cm depths for saturated hydraulic conductivity and bulk density determinations.
Laboratory analysis
Measurements were taken on saturated hydraulic conductivity using the constant head method as described by Klute (1986), using the equation Ksat = (Q L)/(A t H), where Q = volume of water per unit time; A = area of core sampler; t = unit time in min/h; L = length of core; and H = hydraulic head difference. Bulk density (BD) was determined according to Blake and Hartge (1986). Total porosity (Pt) and macro-porosity (Pe) were obtained from BD values with an assumed particle density (dp) of 2.65 g/cm3, as follows: Pt = 100 (1 - BD/dp) (1), where θ is the volumetric water content. Particle size distribution in Calgon was measured by the hydrometer method (Day, 1965). Percent organic carbon (%O.C.) was determined by the dichromate oxidation method of Walkley and Black (Nelson and Sommers, 1982); organic matter (O.M.) was determined by multiplying %O.C. by the conventional Van Bemmelen factor of 1.724.
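The short sketch below works through these laboratory calculations, assuming the standard constant-head relation Ksat = (Q L)/(A t H) and Equation (1) for total porosity; the core dimensions and measured values are invented for illustration and do not come from this study.

```python
def ksat_constant_head(volume_cm3, length_cm, area_cm2, time_h, head_cm):
    """Saturated hydraulic conductivity (cm/h) from a constant-head run,
    assuming Ksat = (Q * L) / (A * t * H)."""
    return (volume_cm3 * length_cm) / (area_cm2 * time_h * head_cm)

def total_porosity(bulk_density, particle_density=2.65):
    """Equation (1): Pt = 100 * (1 - BD / dp), in percent."""
    return 100.0 * (1.0 - bulk_density / particle_density)

# Hypothetical core: 250 cm3 of water collected in 0.5 h through a 5 cm long,
# 20 cm2 core under a 6 cm head; bulk density 1.35 g/cm3.
print(round(ksat_constant_head(250, 5, 20, 0.5, 6), 1))  # cm/h
print(round(total_porosity(1.35), 1))                    # %
```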
Data analysis
The statistical method of analysis of variance for a split-split plot design, as outlined by Steel and Torrie (1980), was used for the analysis.
Mean separation for significant effects (P < 0.05) was carried out using F-LSD, as described by Obi (1986). Correlation coefficients, coefficients of determination and regression equations were used to explain relationships between yield and soil properties, and between soil properties and soil indices.
Effect of treatments on rhizome yield of turmeric
Effects of planting depth, mulch type and rate on rhizome yield of turmeric are shown in Table 1. The higher yield recorded at the 10 cm planting depth may be due to the improvement of Ksat and macro-porosity, and the reductions in micro-porosity and bulk density at the 5 cm depth, which subsequently enriched the soil moisture content and aeration status of the soil at the 10 cm depth. Nwokocha et al. (2006) reported that straw mulch contains a higher percent OM and improved soil structure better than wood shavings. These may have implications for the higher rhizome yield observed with straw mulch. The lower micro-porosity and BD, and the higher Ksat and macro-porosity, observed in mulched plots and as mulch rate increased may be responsible for the increased yield due to mulch rate, and may have resulted from the beneficial effect of a mulch cover in breaking down the kinetic energy of the raindrops, thus reducing their impact pressure on the soil and, consequently, reducing soil compaction, aggregate disintegration and crusting. Also, according to Nwokocha et al. (2006), the decomposition of the mulch materials provides SOM, which helps to stabilize soil aggregates, thus making the soil conducive to rhizome development (Table 1).
Effect of treatments on saturated hydraulic conductivity, total porosity, macro and micro porosities and bulk density
Effects of mulch type, rate and sampling depth on saturated hydraulic conductivity are shown in Table 2. Mulch rate and sampling depth significantly influenced Ksat. Mulching at the 4 t/ha rate was found to be optimum for Ksat. However, the increase in Ksat due to mulch rate was more pronounced in straw than in wood shavings at both sampling depths (Table 3). Nwokocha et al. (2006) reported that straw (elephant grass) has a higher organic matter content (5.10%) than wood shavings (1.31%). This may have been responsible for the pronounced percent increase in Ksat recorded for straw (443.2%) as against that for wood shavings (118.7%). A greater percent increase in Ksat occurred within the 10 cm soil depth than within the 5 cm soil depth. The increase in macro-porosity and the decreases in micro-porosity and bulk density recorded at the 5 cm depth may have facilitated the higher Ksat value (54.4 cm/h) obtained within the 5 cm soil depth (Table 2). A significant interaction was also obtained between mulch rates and sampling depths (Table 3). The optimum value of Ksat (69.7 cm/h) was obtained in the 4 t/ha x 5 cm depth interaction. The order was 8 t/ha x 5 cm depth (71.2 cm/h) = 4 t/ha x 5 cm depth (69.7 cm/h) > 8 t/ha x 10 cm depth (48.7 cm/h) > 4 t/ha x 10 cm depth (40.2 cm/h) > 0 t/ha x 5 cm depth (22.2 cm/h) > 0 t/ha x 10 cm depth (14.2 cm/h). Higher Ksat and Pe and lower BD observed in mulched plots, and as mulch rate increased, may be responsible for the increased yield due to mulch rate. Lack of crusts and high earthworm activity observed on the mulched plots may have contributed more to their having higher Ksat rates, compared to the unmulched plots. The reduction in saturated hydraulic conductivity as sampling depth increased from 5 cm to 10 cm could be due to increased BD and reduced macro-porosity with increasing soil depth. This is in agreement with the findings of Mbagwu (1995), who reported that Ksat and macro-porosity decreased with increase in soil depth. Lal (2000) also observed that Ksat changed significantly with soil depth.

Table 4. Coefficients of determination and regression analysis between saturated hydraulic conductivity and physical properties of an arenic hapludult following mulch application (N = 72).
Macro porosity (Pe, %): R2 = 0.88**, Ksat = -13.44 + 2.87(Pe)
Micro porosity (Pm, %): R2 = 0.94**, Ksat = 110.82 - 2.15(Pm)
Total porosity (Pt, %): R2 NS, Ksat = -5.1 + 0.97(Pt)
Bulk density (BD, g/cm3): R2 = 0.98**, Ksat = 281.7 - 152.9(BD)
Void ratio (Ve): R2 NS, Ksat = 23.3 + 19.8(Ve)
** and NS = significant at 1% and not significant at 5% alpha levels, respectively.
Saturated hydraulic conductivity and soil physical properties
The correlation and regression analyses given in Table 4 show that bulk density, macro porosity and micro porosity were the most important physical properties influencing Ksat of the soil. The negative linear regression relationship between Ksat and bulk density indicated that as bulk density increased to 1.84 g/cm3, Ksat decreased and approached zero. This linear regression explained 98% of the variation in Ksat. The linear relationship between Ksat and Pe explained 88% of the variation in Ksat. It predicted negative Ksat for Pe below 4.58%, an unrealistic regression relationship. A similar result was reported by Mbagwu (1995), who used an exponential model with Pe as the independent variable to explain the variation in Ksat and to produce a relationship having acceptable physical interpretations over the range of measured Pe values. Table 5 shows that macro porosity was the more consistent property explaining most of the variation in Ksat at either of the two sampling depths. Total porosity (Pt) and void ratio (Ve) were not good indicators of Ksat even though they positively explained 98 and 96% of its variations, respectively. By definition, Pt is the sum of Pe and Pm and, therefore, the overall contribution of Pt to Ksat is the sum of the individual contributions of Pe and Pm. The positive effect of Pe on Ksat (r = 0.94) was more than the negative effect of Pm (r = -0.97), implying that the overall contribution of Pt to Ksat was positive, though not significant at P < 0.05. This is consistent with the findings of Rasse et al. (2000), who reported that saturated hydraulic conductivities were significantly and positively correlated with macro porosity.
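To make these regression relationships concrete, the sketch below plugs sample values into two of the fitted equations from Table 4 (the coefficients are copied from the table; the input values are illustrative only), reproducing the observation that Ksat approaches zero as bulk density approaches about 1.84 g/cm3 and that the Pe model turns negative at low macro porosity.

```python
def ksat_from_bd(bd):
    """Table 4: Ksat = 281.7 - 152.9 * BD (cm/h)."""
    return 281.7 - 152.9 * bd

def ksat_from_pe(pe):
    """Table 4: Ksat = -13.44 + 2.87 * Pe (cm/h)."""
    return -13.44 + 2.87 * pe

print(round(ksat_from_bd(1.84), 2))  # ~0.36 cm/h, i.e. Ksat approaches zero
print(round(ksat_from_bd(1.20), 2))  # ~98 cm/h for a looser soil
print(round(ksat_from_pe(4.0), 2))   # negative: the unrealistic range noted above
```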
Conclusions
From these results, mulching increased the rhizome yield of turmeric, with yield increasing with increasing mulch rate. The planting depth of 10 cm out-yielded the 5 cm depth. Straw proved a better mulch material than wood shavings. There was no significant difference between the effects of straw mulch and wood shavings on Ksat of the soil. Optimum values of Ksat occurred at the 4 t/ha mulch rate. Bulk density, macro porosity and micro porosity were the most important physical properties influencing Ksat of the soil. The linear relationship between Ksat and Pe was unrealistic. Total porosity (Pt) and void ratio (Ve) were not good indicators of Ksat even though they positively explained 98 and 96% of its variations, respectively.
Table 1. Rhizome yield of turmeric as influenced by planting depth, mulch type and rate at Umudike.
Table 2. Effects of mulch types, rates and sampling depth on selected physical properties of an arenic hapludult.
Table 3. Percent increase in saturated hydraulic conductivity due to sampling depth and mulch application on an arenic hapludult.
Table 5. Simple linear correlation between saturated hydraulic conductivity and physical properties of an arenic hapludult at two sampling depths following mulch application (N = 36).
"year": 2007,
"sha1": "9d898db1fd87c969fbddef5387a0303b60247af7",
"oa_license": "CCBY",
"oa_url": "https://academicjournals.org/journal/AJB/article-full-text-pdf/72877666260.pdf",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "9d898db1fd87c969fbddef5387a0303b60247af7",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Biology"
]
} |
Free resolutions of some Schubert singularities
In this paper we construct free resolutions of certain class of closed subvarieties of affine spaces (the so-called"opposite big cells"of Grassmannians). Our class covers the determinantal varieties, whose resolutions were first constructed by A. Lascoux (Adv. Math., 1978). Our approach uses the geometry of Schubert varieties. An interesting aspect of our work is its connection to the computation of the cohomology of homogeneous bundles (that are not necessarily completely reducible) on partial flag varieties.
Introduction
A classical problem in commutative algebra and algebraic geometry is to describe the syzygies of defining ideals of interesting varieties. Let k ≤ n ≤ m be positive integers. The space D k of m × n matrices (over a field k) of rank at most k is a closed subvariety of the mn-dimensional affine space of all m × n matrices. When k = C, a minimal free resolution of the coordinate ring k[D k ] as a module over the coordinate ring of the mn-dimensional affine space (i.e. the polynomial ring in mn variables) was constructed by A. Lascoux [Las78]; see also [Wey03, Chapter 6].
In this paper, we construct free resolutions for a larger class of singularities, viz., Schubert singularities, i.e., the intersection of a singular Schubert variety and the "opposite big cell" inside a Grassmannian. The advantage of our method is that it is algebraic group-theoretic, and is likely to work for Schubert singularities in more general flag varieties. In this process, we have come up with a method to compute the cohomology of certain homogeneous vector-bundles (which are not completely reducible) on flag varieties. We will work over k = C.
Let N = m + n. Let GL N = GL N (C) be the group of N × N invertible matrices. Let B N be the Borel subgroup of all upper-triangular matrices and B − N the opposite Borel subgroup of all lower-triangular matrices in GL N . Let P be the maximal parabolic subgroup corresponding to omitting the simple root α n , i.e., the subgroup of GL N comprising the matrices in which the (i, j)-th entry (i.e., in row i and column j) is zero if n + 1 ≤ i ≤ N and 1 ≤ j ≤ n; in other words, P consists of the invertible block upper-triangular matrices with diagonal blocks of sizes n × n and m × m. We have a canonical identification of the Grassmannian of n-dimensional subspaces of k N with GL N /P . Let W and W P be the Weyl groups of GL N and of P , respectively; note that W = S N (the symmetric group) and W P = S n × S m . For w ∈ W/W P , let X P (w) ⊆ GL N /P be the Schubert variety corresponding to w (i.e., the closure of the B N -orbit of the coset wP (∈ GL N /P ), equipped with the canonical reduced scheme structure). The B − N -orbit of the coset (id · P ) in GL N /P is denoted by O − GL N /P , and is usually called the opposite big cell in GL N /P ; it can be identified with the mn-dimensional affine space. (See 2.2.) Write W P for the set of minimal representatives (under the Bruhat order) in W for the elements of W/W P . For 1 ≤ r ≤ n − 1, we consider certain subsets W r of W P (Definition 3.11); there is w ∈ W n−k such that D k = X P (w) ∩ O − GL N /P . (The first author was supported by a CMI Faculty Development Grant. The second author was supported by NSA grant H98230-11-1-0197 and NSF grant 0652386.)
First we recall the Kempf-Lascoux-Weyman "geometric technique" of constructing minimal free resolutions. Suppose that we have a commutative diagram of varieties (1.1), where A is an affine space, Y a closed subvariety of A and V a projective variety. The map q is the first projection, q ′ is proper and birational, and the inclusion Z ֒→ A × V is a sub-bundle (over V ) of the trivial bundle A × V . Let ξ be the dual of the quotient bundle on V corresponding to Z. Then the derived direct image Rq ′ * O Z is quasi-isomorphic to a minimal complex F • with F i = ⊕ j≥0 H j (V, ∧ i+j ξ) ⊗ R(−i − j). Here R is the coordinate ring of A; it is a polynomial ring and R(k) refers to twisting with respect to its natural grading. If q ′ is such that the natural map O Y −→ Rq ′ * O Z is a quasi-isomorphism (for example, if q ′ is a desingularization of Y and Y has rational singularities), then F • is a minimal free resolution of C[Y ] over the polynomial ring R.
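As a sanity check on this formula (a standard special case, not an argument from this paper): if V is a single point and Z = Y is a linear subspace of codimension c in A, then ξ is the c-dimensional space of linear forms vanishing on Y, only H^0 contributes, and F • reduces to the Koszul complex resolving C[Y].

```latex
% Special case: V = \{\mathrm{pt}\}, \; Z = Y \subseteq A a linear subspace of codimension c.
% Then \xi \cong \mathbb{C}^{c} (the linear forms vanishing on Y) and H^{j}(V,-) = 0 for j > 0, so
F_i \;=\; H^{0}\bigl(V, \wedge^{i}\xi\bigr) \otimes_{\mathbb{C}} R(-i)
     \;=\; \wedge^{i}\mathbb{C}^{c} \otimes_{\mathbb{C}} R(-i),
\qquad 0 \le i \le c,
% which is the Koszul complex on the c linear forms cutting out Y.
```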
The difficulty in applying this technique in any given situation is two-fold: one must find a suitable morphism q ′ : Z −→ Y such that the map O Y −→ Rq ′ * O Z is a quasi-isomorphism and such that Z is a vector-bundle over a projective variety V ; and, one must be able to compute the necessary cohomology groups. We overcome this for opposite cells in a certain class (which includes the determinantal varieties) of Schubert varieties in a Grassmannian, in two steps.
As the first step, we need to establish the existence of a diagram as above. This is done using the geometry of Schubert varieties. We take A = O − GL N /P and Y = Y P (w) := X P (w) ∩ O − GL N /P . LetP be a parabolic subgroup with B N ⊆P P . The inverse image of O − GL N /P under the natural map GL N /P −→ GL N /P is O − GL N /P × P/P . Letw be the representative of the coset wP in WP . Then XP (w) ⊆ GL N /P (the Schubert subvariety of GL N /P associated tow) maps properly and birationally onto X P (w). We may chooseP to ensure that XP (w) is smooth. Let ZP (w) be the preimage of Y P (w) in XP (w). We take Z = ZP (w). Then V , which is the image of Z under the second projection, is a smooth Schubert subvariety of P/P . The vector-bundle ξ on V that we obtain is the restriction of a homogeneous bundle on P/P . Thus we get: See Theorem 3.7 and Corollary 3.9. In this diagram, q ′ is a desingularization of Y P (w). Since it is known that Schubert varieties have rational singularities, we have that the map O Y −→ Rq ′ * O Z is a quasi-isomorphism, so F • is a minimal resolution.
As the second step, we need to determine the cohomology of the homogeneous bundles ∧ t ξ over V . There are two ensuing issues: computing cohomology of homogeneous vector-bundles over Schubert subvarieties of flag varieties is difficult and, furthermore, these bundles are not usually completely reducible, so one cannot apply the Borel-Weil-Bott theorem directly. We address the former issue by restricting our class; if w ∈ W r (for some r) then V will equal P/P . Regarding the latter issue, we inductively replaceP by larger parabolic subgroups (still inside P ), such that at each stage, the computation reduces to that of the cohomology of completely reducible bundles on Grassmannians; using various spectral sequences, we are able to determine the cohomology groups that determine the minimal free resolution. See Proposition 5.5 for the key inductive step. In contrast, in Lascoux's construction of the resolution of determinantal ideals, one comes across only completely reducible bundles; therefore, one may use the Borel-Weil-Bott theorem to compute the cohomology of the bundles ∧ t ξ.
Computing cohomology of homogeneous bundles, in general, is difficult, and is of independent interest; we hope that our approach would be useful in this regard. The best results, as far as we know, are due to G. Ottaviani and E. Rubei [OR06], which deal with general homogeneous bundles on Hermitian symmetric spaces. The only Hermitian symmetric spaces in Type A are the Grassmannians, so their results do not apply to our situation.
Since the opposite big cell O − GL N /P intersects every B N -orbit of GL N /P , Y P (w) captures all the singularities of X P (w) for every w ∈ W . In this paper, we describe a construction of a minimal free resolution We hope that our methods could shed some light on the problem of construction of a locally free resolution of O X P (w) as an O GL N /P -module.
The paper is organized as follows. Section 2 contains notations and conventions (Section 2.1) and the necessary background material on Schubert varieties (Section 2.2) and homogeneous bundles (Section 2.3). In Section 3, we discuss properties of Schubert desingularization, including the construction of Diagram (1.2). Section 4 is devoted to a review of the Kempf-Lascoux-Weyman technique and its application to our problem. Section 5 explains how the cohomology of the homogeneous bundles on certain partial flag varieties can be computed; Section 6 gives some examples. Finally, in Section 7, we describe Lascoux's resolution in terms of our approach and describe the multiplicity and Castelnuovo-Mumford regularity of C[Y P (w)].
Acknowledgements
Most of this work was done during a visit of the first author to Northeastern University and the visits of the second author to Chennai Mathematical Institute; the two authors would like to thank the respective institutions for the hospitality extended to them during their visits. The authors thank V. Balaji, Reuven Hodges and A. J. Parameswaran for helpful comments. The computer algebra systems Macaulay2 [M2] and LiE [LiE] provided valuable assistance in studying examples.
Preliminaries
In this section, we collect various results about Schubert varieties, homogeneous bundles and the Kempf-Lascoux-Weyman geometric technique.
2.1. Notation and conventions. We collect the symbols used and the conventions adopted in the rest of the paper here. For details on algebraic groups and Schubert varieties, the reader may refer to [Bor91,Jan03,BL00,Ses07].
Let m ≥ n be positive integers and N = m + n. We denote by GL N (respectively, B N , B − N ) the group of all (respectively, upper-triangular, lower-triangular) invertible N × N matrices over C. The Weyl group W of GL N is isomorphic to the group S N of permutations of N symbols and is generated by the simple reflections s i , 1 ≤ i ≤ N − 1, which correspond to the transpositions (i, i + 1). For w ∈ W , its length is the smallest integer l such that w = s i 1 · · · s i l as a product of simple reflections. For every 1 ≤ i ≤ N − 1, there is a minimal parabolic subgroup P i containing s i (thought of as an element of GL N ) and a maximal parabolic subgroup P i not containing s i . Any parabolic subgroup can be written as P A := i∈A P i for some A ⊂ {1, . . . , N − 1}. On the other hand, for A ⊆ {1, . . . , N − 1} write P A for the subgroup of GL N generated by P i , i ∈ A. Then P A is a parabolic subgroup and P {1,...,N −1}\A = P A .
The following is fixed for the rest of this paper: (a) P is the maximal parabolic subgroup P n of GL N ; (b) for 1 ≤ s ≤ n − 1,P s is the parabolic subgroup P {1,...,s−1,n+1,...,N −1} = ∩ n i=s P i of GL N ; (c) for 1 ≤ s ≤ n − 1, Q s is the parabolic subgroup P {1,...,s−1} = ∩ n−1 i=s P i of GL n . We write the elements of W in one-line notation: (a 1 , . . . , a N ) is the permutation i → a i . For any A ⊆ {1, . . . , N − 1}, define W P A to be the subgroup of W generated by {s i : i ∈ A}. By W P A we mean the subset of W consisting of the minimal representatives (under the Bruhat order) in W of the elements of W/W P A . For 1 ≤ i ≤ N , we represent the elements of W P i by sequences (a 1 , . . . , a i ) with 1 ≤ a 1 < · · · < a i ≤ N since under the action of the group W P i , every element of W can be represented minimally by such a sequence.
For w = (a 1 , a 2 , . . . , a n ) ∈ W P , let r(w) be the integer r such that a r ≤ n < a r+1 .
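For instance (a small worked example, consistent with Example 3.12 below, where n = m = 4 and hence N = 8):

```latex
% n = m = 4, N = 8.  Take w = (2,4,7,8) \in W^{P}, written in one-line notation
% as an increasing sequence of length n = 4.
w = (a_1, a_2, a_3, a_4) = (2, 4, 7, 8), \qquad
a_2 = 4 \le n = 4 < a_3 = 7, \qquad \text{so } r(w) = 2.
```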
, then we call the the flag variety a full flag variety; otherwise, a partial flag variety. The LetP be any parabolic subgroup containing B N and τ ∈ W . The Schubert variety XP (τ ) is the closure inside GL N /P of B N · e w where e w is the coset τP , endowed with the canonical reduced scheme structure. Hereafter, when we write XP (τ ), we mean that τ is the representative in WP of its coset. The opposite big cell O − GL N /P in GL N /P is the B − N -orbit of the coset (id ·P ) in GL N /P . Let YP (τ ) := XP (τ ) ∩ O − GL N /P ; we refer to YP (τ ) as the opposite cell of XP (τ ). We will write R + , R − , R + P , R − P , to denote respectively, positive and negative roots for GL N and forP . We denote by ǫ i the character that sends the invertible diagonal matrix with t 1 , . . . , t n on the diagonal to t i .
2.2. Précis on GL n and Schubert varieties. LetP be a parabolic subgroup of GL N with B N ⊆P ⊆ P . We will use the following proposition extensively in the sequel. (d) For 1 ≤ s ≤ n − 1, P/P s is isomorphic to GL n /Q s . In particular, the projection map O − GL N /P × P/P −→ P/P s is given by Proof. (a): Note that U − P is the subgroup of GL N generated by the (one-dimensional) root sub- It is easy to check that this is an isomorphism.
(b): Suppose that zP ∈ O − GL N /P . By (a), we see that there exist matrices A ′ n×n , C ′ n×m , D ′ m×n and E ′ m×m such that Since z ∈ GL N , z 2 ∈ P . (c): Let z ∈ U − P P ⊆ GL N . Then we can write z = z 1 z 2 uniquely with z 1 ∈ U − P and z 2 ∈ P . For, suppose that ) and E = E ′ . Hence U − P × C P = U − P P . Therefore, for any parabolic subgroup P ′ ⊆ P , U − P × C P/P ′ = U − P P/P ′ . The asserted isomorphism now follows by taking P ′ =P s .
For the next statement, let with A invertible (which we may assume by (b)). Then we have a decomposition (in GL N )
Finally,
There is a surjective morphism of C-group schemes P −→ GL n , This induces the required isomorphism. Notice that the element Then using Proposition 2.2.1(a) and its proof, O − GL N /P can be identified with the affine space of lower-triangular matrices with possible non-zero entries x ij at row i and column j where For the maximal parabolic group P l , we have, Then the Plücker co-ordinate p (l) sα on the Grassmannian GL N /P l lifts to a regular function on GL N /P , which we denote by the same symbol. Its restriction to O − G/P is the the l × l-minor with column indices {1, 2, . . . , l} and row indices {1, . . . , j − 1, j + 1, . . . , l, i}. In particular, In general p (l) sα need not be a linear form, or even homogeneous; see the example discussed after Definition 3.2.
Homogeneous bundles and representations.
Let Q be a parabolic subgroup of GL n . We collect here some results about homogeneous vector-bundles on GL n /Q. Most of these results are well-known, but for some of them, we could not find a reference, so we give a proof here for the sake of completeness. Online notes of G. Ottaviani [Ott95] and of D. Snow [Sno14] discuss the details of many of these results. Let L Q and U Q be respectively the Levi subgroup and the unipotent radical of Q. Let E be a finite-dimensional vector-space on which Q acts on the right; the vector-spaces that we will encounter have natural right action. Definition 2.3.1. Define GL n × Q E := (GL n × E)/ ∼ where ∼ is the equivalence relation (g, e) ∼ (gq, eq) for every g ∈ GL n , q ∈ Q and e ∈ E. Then π E : GL n × Q E −→ GL n /Q, (g, e) → gQ, is a vector-bundle called the vector-bundle associated to E (and the principal Q-bundle GL n −→ GL n /Q). For g ∈ GL n , e ∈ E, we write [g, e] ∈ GL n × Q E for the equivalence class of (g, e) ∈ GL n ×E under ∼. We say that a vector-bundle π : E −→ GL n /Q is homogeneous if E has a GL n -action and π is GL n -equivariant, i.e, for every y ∈ E, π(g · y) = g · π(y).
In this section, we abbreviate GL n × Q E as E. It is known that E is homogeneous if and only if E ≃ E for some Q-module E. (If this is the case, then E is the fibre of E over the coset Q.) A homogeneous bundle E is said to be irreducible (respectively indecomposable, completely reducible) if E is a irreducible (respectively indecomposable, completely reducible) Q-module. It is known that E is completely reducible if and only if U Q acts trivially and that E is irreducible if and only if additionally it is irreducible as a representation of L Q . See [Sno14, Section 5] or [Ott95, Section 10] for the details.
so the assignment g → e defines a function φ : GL n −→ E. This is Q-equivariant in the following sense: for every q ∈ Q and g ∈ GL n .
Conversely, any such map defines a section of π E . The set of sections H 0 (GL n /Q, E) of π E is a vector-space with (αφ)(g) = α(φ(g)) for every α ∈ C, φ a section of π E and g ∈ GL n . It is finite-dimensional.
Note that GL n acts on GL n /Q by multiplication on the left; setting h · [g, e] = [hg, e] for g, h ∈ GL n and e ∈ E, we extend this to E. We can also define a natural GL n -action on H 0 (GL n /Q, E) as follows. For any map φ : The action of GL n on the sections is on the left: is a GL n -module for every i. Suppose now that E is one-dimensional. Then Q acts on E by a character λ; we denote the associated line bundle on GL n /Q by L λ .
Hence the irreducible homogeneous vectorbundles on GL n /Q are in correspondence with Q-dominant weights. We describe them now. If Q = P n−i , then GL n /Q = Gr i,n . (Recall that, for us, the GL n -action on C n is on the right.) On Gr i,n , we have the tautological sequence Here S µ denotes the Schur functor associated to the partition µ. Now suppose that Q = P i 1 ,...,it with 1 ≤ i 1 < · · · < i t ≤ n − 1. Since the action is on the right, GL n /Q projects to Gr n−i,n precisely when i = i j for some 1 ≤ j ≤ t. For each 1 ≤ j ≤ t, we can take the pull-back of the tautological bundles R n−i j and Q i j to GL n /Q from GL n /P i j . The irreducible homogeneous bundle corresponding to a Q-dominant weight λ is S (λ 1 ,..., Hereafter, we will write U i = Q * i . Moreover, abusing notation, we will use R i , Q i , U i etc. for these vector-bundles on any (partial) flag varieties on which they would make sense. [Wey03,p. 114]. Although our definition looks like Weyman's definition, we should keep in mind that our action is on the right. We only have to be careful when we apply the Borel-Weil-Bott theorem (more specifically, Bott's algorithm). In this paper, our computations are done only on Grassmannians. If µ and ν are partitions, then (µ, ν) will be Q-dominant (for a suitable Q), and will give us the vector-bundle S µ Q * ⊗ S ν R * (this is where the right-action of Q becomes relevant) and to compute its cohomology, we will have to apply Bott's algorithm to the Q-dominant weight (ν, µ). (In [Wey03], one would get S µ R * ⊗ S ν Q * and would apply Bott's algorithm to (µ, ν).) See, for example, the proof of Proposition 5.4 or the examples that follow it.
Proof. For Q 2 (respectively, Q 1 ), the category of homogeneous vector-bundles on GL n /Q 2 (respectively, GL n /Q 1 ) is equivalent to the category of finite-dimensional Q 2 -modules (respectively, finite-dimensional Q 1 -modules). Now, the functor f * from the category of homogeneous vectorbundles over GL n /Q 2 to that over GL n /Q 1 is equivalent to the restriction functor Res Q 2 Q 1 . Hence their corresponding right-adjoint functors f * and the induction functor Ind Q 2 Q 1 are equivalent; one may refer to [Har77, II.5, p. 110] and [Jan03, I.3.4, 'Frobenius Reciprocity'] to see that these are indeed adjoint pairs. Hence, for homogeneous bundles on GL n /Q 1 , R i f * can be computed using R i Ind Q 2 Q 1 . On the other hand, note that Ind Q 2 Q 1 (−) is the functor H 0 (Q 2 /Q 1 , GL n × Q 1 −) on Q 1 -modules (which follows from [Jan03, I.3.3, Equation (2)]). The proposition now follows.
Properties of Schubert desingularization
This section is devoted to proving some results on smooth Schubert varieties in partial flag varieties. In Theorem 3.4, we show that opposite cells of certain smooth Schubert varieties in GL N /P are linear subvarieties of the affine variety O − GL N /P , whereP =P s for some 1 ≤ s ≤ n − 1. Using this, we show in Theorem 3.7 that if X P (w) ∈ GL N /P is such that there exists a parabolic subgroupP P such that the birational model XP (w) ⊆ GL N /P of X P (w) is smooth (we say that X P (w) has a Schubert desingularization if this happens) then the inverse image of Y P (w) inside XP (w) is a vector-bundle over a Schubert variety in P/P . This will give us a realization of Diagram (1.2).
Recall the following result about the tangent space of a Schubert variety; see [BL00, Chapter 4] for details. It is immediate that if XP (τ ) has the linearity property then it is smooth. The converse is not true, as the following example shows. Let τ = (2, 4, 1, 3) and consider X B (τ ) ⊆ GL 4 /B. Note that X B (τ ) is smooth. The reflections (i, j) (with i > j) that satisfy (i, j) ≤ τ (in W = S 4 ) are precisely (3, 1), (4, 1) and (4, 2). For these reflections, we note that the relevant restrictions of the Plücker coordinates to O − GL 4 /B that vanish on YP (τ ) are as follows: p We are interested in the parabolic subgroupsP =P s for some 1 ≤ s ≤ n − 1. Take such aP . We will show below that certain smooth Schubert varieties in GL N /P have the linearity property. From Discussion 2.2.2 it follows that {x ij | j ≤ n and i ≥ max{j + 1, s + 1}} is a system of affine coordinates for O − GL N /P . Notation 3.3. For the remainder of this section we adopt the following notation: Let w = (a 1 , a 2 , . . . , a n ) ∈ W P . Let r = r(w), i.e., the index r such that a r ≤ n < a r+1 . Let 1 ≤ s ≤ r. We writeP =P s . Letw be the minimal representative of w in WP . Let c r+1 > · · · > c n be such that {c r+1 , . . . , c n } = {1, . . . , n} \ {a 1 , . . . , a r }; let w ′ := (a 1 , . . . , a r , c r+1 , . . . , c n ) ∈ S n , the Weyl group of GL n .
Theorem 3.4. With notation as above, suppose that the Schubert variety XP (w) of GL N /P is smooth. Then it has the linearity property.
On the other hand, note that the reflections (i, j) with j ≤ n and i ≥ max{a j + 1, s + 1} are precisely the reflections We have the following immediate corollary to the proof of Theorem 3.4. See Figure 1 for a picture. Figure 1). Then we have an identification of
Corollary 3.5. Suppose that XP (w) is smooth, and identify
the space of all m × n matrices) given by 1 ≤ j ≤ r(w) and for every i, or, r(w) + 1 ≤ j ≤ n − 1 and a j − n < i ≤ m.
and V ′ w is the linear subspace of O − P/P (being identified with M m,n , the space of all m × n matrices)given by x ij = 0 for every 1 ≤ j ≤ r(w) and for every i ≥ max{a j + 1, s + 1}.
Proof. As seen in the proof of Theorem 3.4, we have that YP (w) is the subspace of the affine space O − G/P given by x ij = 0 for every j ≤ n and for every i ≥ max{a j + 1, s + 1}. This fact together with the identification of
Using the injective map GL n −→ GL N , A ↦ diag(A, I m ), B n can be thought of as a subgroup of B N . With this identification, we have the following Proposition: Proposition 3.6. ZP (w) is B n -stable (for the action on the left by multiplication). Further, p is B n -equivariant.
Proof. Let Then z ′ ∈ B Nw B N , so z ′ (modP ) ∈ XP (w). By Proposition 2.2.1(b), we have that A is invertible, and hence AA ′ is invertible; this implies (again by Proposition 2.2.1(b) ) that z ′ (modP ) ∈ ZP (w). , which arises as the restriction of the vector-bundle on GL n /Q s associated to the Q s -module V w (which, in turn, is a Q s -submodule of O − GL N /P ). We believe that all the assertions above hold without the hypothesis that XP (w) is smooth.
Proof. (a): The map X_P̂(w) ↪ GL_N/P̂ → GL_N/P is proper and its (scheme-theoretic) image is X_P(w); hence X_P̂(w) → X_P(w) is proper. Birationality follows from the fact that w̃ is the minimal representative of w (see Remark 2.2.5).

(c): From Theorem 3.4 it follows that p(Y_P̂(w)) = V′_w ⊆ X_{Q_s}(w′). Since Y_P̂(w) is dense inside Z_P̂(w) and X_{Q_s}(w′) is closed in GL_n/Q_s, we see that p(Z_P̂(w)) ⊆ X_{Q_s}(w′). The other inclusion X_{Q_s}(w′) ⊆ p(Z_P̂(w)) follows from (b). Hence p(Z_P̂(w)) equals X_{Q_s}(w′).
Next, to prove the second assertion in (c), we shall show that for every A ∈ GL_n with A mod Q_s ∈ X_{Q_s}(w′), the fibre p^{-1}(A mod Q_s) is a translate of V_w (3.8). Towards proving this, we first observe that p^{-1}(e_id) equals V_w (in view of Corollary 3.5). Next, we observe that every B_n-orbit inside X_{Q_s}(w′) meets V′_w (= Y_{Q_s}(w′)); further, p is B_n-equivariant (see Proposition 3.6). The assertion (3.8) now follows.
(d): First observe that, for the action of right multiplication by GL_n on O^-_{G/P} (being identified with M_{m,n}, the space of m × n matrices), V_w is stable; we thus get the homogeneous bundle GL_n ×_{Q_s} V_w → GL_n/Q_s (Definition 2.3.1). Now, to prove the assertion about Z_P̂_s(w) being a vector bundle over X_{Q_s}(w′), we will show that there is a commutative diagram as below, with ψ an isomorphism. The map α is the homogeneous bundle map and β is the inclusion map; φ is defined using the fibrewise description in (3.8). Using Proposition 2.2.1(c) and (3.8), we conclude the following: φ is well-defined and injective; β ∘ p = α ∘ φ; hence, by the universal property of products, the map ψ exists; and, finally, the injective map ψ is in fact an isomorphism (by dimension considerations).
Corollary 3.9. If X_P̂(w) is smooth, then we have the following realization of the diagram in (1.2).

We now describe a class of smooth varieties X_P̂_s(w) inside GL_N/P̂_s.
Proposition 3.10. (a) (Kempf's desingularization.) For every w ∈ W^P, the Schubert variety X_P̂_1(w) is smooth. (b) For every w ∈ W^P of the form described in Definition 3.11 below (with s = r), X_P̂_s(w) is smooth.

Proof. For both (a) and (b): Let w_max ∈ W (= S_N) be the maximal representative of w̃. We claim that w_max = (a_s, a_{s−1}, …, a_1, a_{s+1}, a_{s+2}, …, a_n, b_{n+1}, …, b_N) ∈ W. Assume the claim. Then w_max is a 4231- and 3412-avoiding element of W; hence X_{B_N}(w_max) is smooth (see [LS90], [BL00, 8.1.1]). Since w_max is the maximal representative (in W) of w̃P̂_s, we see that X_{B_N}(w_max) is a fibration over X_P̂_s(w) with smooth fibres P̂_s/B_N; therefore X_P̂_s(w) is smooth.
To prove the claim, we need to show that X_{P_i}(w_max) = X_{P_i}(w) for every s ≤ i ≤ n and that w_max is the maximal element of W with this property. This follows, since for every τ := (c_1, …, c_N) ∈ W, the Schubert variety X_{P_i}(τ) is determined by c_1, …, c_i written in increasing order.

In light of Proposition 3.10(b) we make the following definition; our concrete descriptions of free resolutions will be for this class of Schubert varieties.

Definition 3.11. W_r := {(n − r + 1, …, n, a_{r+1}, ⋯, a_{n−1}, N) ∈ W^P : n < a_{r+1} < ⋯ < a_{n−1} < N}.
Example 3.12. This example shows that even with r = s, X_{Q_s}(w′) need not be smooth for arbitrary w ∈ W^P. Let n = m = 4 and w = (2, 4, 7, 8). Then r = 2; take s = 2. Then we obtain w_max = (4, 2, 7, 8, 5, 6, 3, 1), which has a 4231 pattern.

Free resolutions

Write R for the coordinate ring of the affine space A of Diagram (1.1) and m for its homogeneous maximal ideal, i.e., the ideal defining the origin in A. (The grading on R arises as follows. In Diagram (1.1), A is thought of as the fibre of a trivial vector bundle, so it has a distinguished point, its origin. Now, being a sub-bundle, Z is defined by linear equations in each fibre; i.e., for each v ∈ V, there exist s := dim A − rk_V Z linearly independent linear polynomials ℓ_{v,1}, …, ℓ_{v,s} that vanish along Z and define it. Now Y = {y ∈ A : there exists v ∈ V such that ℓ_{v,1}(y) = ⋯ = ℓ_{v,s}(y) = 0}. Hence Y is defined by homogeneous polynomials. This explains why the resolution obtained below is graded.) Then:

Theorem 4.1 ([Wey03, Basic Theorem 5.1.2]). With notation as above, there is a finite complex (F_•, ∂_•) of finitely generated graded free R-modules that is quasi-isomorphic to Rq′_* O_Z.
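In Weyman's formulation, which we take as read here, the terms of this complex are built from the cohomology of exterior powers of the bundle ξ (the dual of the quotient bundle corresponding to the sub-bundle Z, as in Diagram (1.1)):

$$F_i \;=\; \bigoplus_{j \ge 0} H^j\!\Big(V,\ \bigwedge^{i+j}\xi\Big) \otimes_{\mathbb{C}} R(-i-j).$$

In particular, producing the resolution reduces to computing the cohomology groups H^j(V, ∧^t ξ); this is the computation taken up in Section 5.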
We give a sketch of the proof because one direction of the equivalence is only implicit in the proof of [Wey03, 5.1.3].
Sketch of the proof. One constructs a suitable q_*-acyclic resolution I^• of the Koszul complex that resolves O_Z as an O_{A×V}-module, so that the terms in q_* I^• are finitely generated free graded R-modules. One places the Koszul complex on the negative horizontal axis and thinks of I^• as a second-quadrant double complex, thus to obtain a complex G_• of finitely generated free R-modules whose homology at the ith position is R^{-i} q_* O_Z. Then, using standard homological considerations, one passes from G_• to a quasi-isomorphic minimal complex F_• of the asserted form.

Our situation. We now apply Theorem 4.1 to our situation. We keep the notation of Theorem 3.7. Theorem 4.1 and Corollary 3.9 yield the following result (Theorem 4.2), in which U_w denotes the dual of the quotient bundle corresponding to the sub-bundle Z_P̂(w). Two cases will matter below. In the first case, Q_s = B_n, so p makes Z_P̂_1(w) a vector bundle over a smooth Schubert subvariety X_{B_n}(w′) of GL_n/B_n. In the second case, w′ is the maximal word in S_n, so X_{Q_r}(w′) = GL_n/Q_r; see Discussion 4.3 for further details.
Computing the cohomology groups required in Theorem 4.2 in the general situation of Kempf's desingularization (Proposition 3.10(a)) is a difficult problem, even though the relevant Schubert variety X_{B_n}(w′) is smooth. Hence we are forced to restrict our attention to the subset of W^P considered in Proposition 3.10(b).
The stipulation that, for w ∈ W_r, w sends n to N is not very restrictive. This can be seen in two (related) ways. Suppose that w does not send n to N. Then, firstly, X_P(w) can be thought of as a Schubert subvariety of a smaller Grassmannian. Or, secondly, U_w will contain the trivial bundle U_n as a summand, so H^0(GL_n/Q_r, U_w) ≠ 0, i.e., R(−1) is a summand of F_1. In other words, the defining ideal of Y_P(w) contains a linear form.

Discussion 4.3. We give some more details of the situation in Proposition 3.10(b) that will be used in the next section. Let w = (n − r + 1, n − r + 2, …, n, a_{r+1}, …, a_{n−1}, N) ∈ W_r. The space of m × n matrices is a GL_n-module with a right action; the subspace V_w is Q_r-stable under this action. Thus V_w is a Q_r-module, and gives an associated vector bundle GL_n ×_{Q_r} V_w on GL_n/Q_r. The action on the right of GL_n on the space of m × n matrices breaks up by rows; each row is a natural n-dimensional representation of GL_n. For each 1 ≤ j ≤ m, there is a unique r ≤ i_j ≤ n − 1 such that a_{i_j} < j + n ≤ a_{i_j + 1}. (Note that a_r = n and a_n = N.) In row j, V_w has rank n − i_j and is a sub-bundle of the natural representation. Hence the vector bundle associated to the jth row of V_w is the pull-back of the tautological sub-bundle (of rank n − i_j) on Gr_{n−i_j, n}. We denote this by R_{n−i_j}. Therefore GL_n ×_{Q_r} V_w is the vector bundle R_w := ⊕_{j=1}^m R_{n−i_j}. Let Q_w := ⊕_{j=1}^m Q_{i_j}, where the Q_{i_j} are the tautological quotient bundles corresponding to the R_{n−i_j}. Then the vector bundle U_w on GL_n/Q_r that was defined in Theorem 4.2 is Q*_w.
Cohomology of Homogeneous Vector-Bundles
It is, in general, difficult to compute the cohomology groups H^j(GL_n/Q_r, ∧^t U_w) in Theorem 4.2 for arbitrary w ∈ W_r. In this section, we will discuss some approaches. We believe that this is a problem of independent interest. Our method involves replacing Q_r inductively by increasingly bigger parabolic subgroups, so we give the general set-up below.
Setup 5.1. Let 1 ≤ r ≤ n − 1. Let m_r, …, m_{n−1} be non-negative integers such that m_r + ⋯ + m_{n−1} = m. Let Q be a parabolic subgroup of GL_n such that Q ⊆ P_i for every r ≤ i ≤ n − 1 with m_i > 0. We consider the homogeneous vector bundle ξ = ⊕_{i=r}^{n−1} U_i^{⊕m_i} on GL_n/Q. We want to compute the vector spaces H^j(GL_n/Q, ∧^t ξ).
Lemma 5.2. Let f : X′ → X be a fibration with fibre some Schubert subvariety Y of some (partial) flag variety. Then f_* O_{X′} = O_X and R^i f_* O_{X′} = 0 for every i ≥ 1. In particular, for every locally free coherent sheaf L on X, H^i(X, L) = H^i(X′, f^* L) for all i.

Proof. The first assertion is a consequence of Grauert's theorem [Har77, III.12.9] and the fact (see, for example, [Ses07, Theorem 3.2.1]) that H^0(Y, O_Y) = C while H^i(Y, O_Y) = 0 for i ≥ 1. The second assertion follows from the projection formula and the Leray spectral sequence.

Proposition 5.3. Suppose that ξ is the pull-back of a homogeneous bundle (also denoted ξ) on GL_n/Q′, for a parabolic subgroup Q′ ⊇ Q. Then H^j(GL_n/Q, ∧^t ξ) = H^j(GL_n/Q′, ∧^t ξ) for all j and t.

Proof. The assertion follows from Lemma 5.2, noting that ∧^t ξ on GL_n/Q is the pull-back of ∧^t ξ on GL_n/Q′ under the natural morphism GL_n/Q → GL_n/Q′. Note that there is a permutation σ such that σ · α = α; the proposition now follows.
An inductive approach. We are looking for a way to compute H^*(GL_n/Q, ∧^t ξ) for a homogeneous bundle ξ = ⊕_{i∈A} U_i^{⊕m_i}, where A ⊆ {r, …, n − 1} and m_i > 0 for every i ∈ A. Using Proposition 5.3, we assume that Q = P_A. (Using Proposition 5.8 below, we may further assume that m_i ≥ 2, but this is not necessary for the inductive argument to work.) Let j be such that Q ⊆ P_j and Q_j (equivalently U_j) is of least dimension; in other words, j is the smallest element of A. If Q = P_j (i.e., |A| = 1), then ∧^t ξ is completely reducible, and we may use the Borel–Weil–Bott theorem to compute the cohomology groups. Hence suppose that Q ≠ P_j; write Q = Q′ ∩ P_j non-trivially, with Q′ a parabolic subgroup, and consider the natural projections p_1 : GL_n/Q → GL_n/Q′ and p_2 : GL_n/Q → GL_n/P_j. Note that ∧^t ξ decomposes as a direct sum of bundles of the form (p_1)^* η ⊗ (p_2)^*(∧^{t_1} U_j^{⊕m_j}), where η is a homogeneous bundle on GL_n/Q′. We must compute H^*(GL_n/Q, (p_1)^* η ⊗ (p_2)^*(∧^{t_1} U_j^{⊕m_j})). Using the Leray spectral sequence and the projection formula, we can compute this from H^*(GL_n/Q′, η ⊗ R^•(p_1)_*(p_2)^*(∧^{t_1} U_j^{⊕m_j})). The bundle ∧^{t_1} U_j^{⊕m_j}, in turn, decomposes as a direct sum of bundles S_μ U_j, so we must compute H^*(GL_n/Q′, η ⊗ R^•(p_1)_*(p_2)^* S_μ U_j). The Leray spectral sequence and the projection formula respect the various direct-sum decompositions mentioned above. It would follow from Proposition 5.5 below that for each μ, at most one of the R^p(p_1)_*(p_2)^* S_μ U_j is non-zero, so the abutment of the spectral sequence is, in fact, an equality.
Proposition 5.5. With notation as above, let θ be a homogeneous bundle on GL_n/P_j. Then R^i(p_1)_*(p_2)^* θ is the homogeneous vector bundle on GL_n/Q′ associated to the Q′-module H^i(Q′/Q, θ|_{Q′/Q}).

Proof. Follows from Proposition 2.3.5.
We hence want to determine the cohomology of the restriction of S_μ U_j to Q′/Q. It follows from the definition of j that Q′/Q is a Grassmannian whose tautological quotient bundle and its dual are, respectively, Q_j|_{Q′/Q} and U_j|_{Q′/Q}. We can therefore compute H^i(Q′/Q, S_μ U_j|_{Q′/Q}) using the Borel–Weil–Bott theorem.
Example 5.6. Suppose that n = 6 and that Q = P_{{2,4}}. Then we have the diagram with p_1 : GL_6/Q → GL_6/P_4 and p_2 : GL_6/Q → GL_6/P_2. The fibre of p_1 is isomorphic to P_4/Q, which is a Grassmannian of two-dimensional subspaces of a four-dimensional vector space. Let μ = (μ_1, μ_2) be a weight. Then we can compute the cohomology groups H^*(P_4/Q, S_μ U_2|_{P_4/Q}) by applying the Borel–Weil–Bott theorem [Wey03, (4.1.5)] to the sequence (0, 0, μ_1, μ_2). Note that H^*(P_4/Q, S_μ U_2|_{P_4/Q}) is, if it is non-zero, S_λ W, where W is a four-dimensional vector space that is the fibre of the dual of the tautological quotient bundle of GL_4/P_4 and λ is a partition with at most four parts. Hence, by Proposition 5.5, we see that R^i(p_1)_*(p_2)^* S_μ U_2 is, if it is non-zero, S_λ U_4 on GL_6/P_4.
We summarize the above discussion as a theorem.

Theorem 5.7. For w ∈ W_r, the modules in the free resolution of C[Y_P(w)] given in Theorem 4.2 can be computed.
We end this section with some observations.

Proposition 5.8. Suppose that there exists i such that r + 1 ≤ i ≤ n − 1 and such that ξ contains exactly one copy of U_i as a direct summand; write ξ = ξ′ ⊕ U_i, and, for 1 ≤ j ≤ n, let ω_j denote the jth fundamental weight. We claim that R_{n−i+1}/R_{n−i} is the line bundle associated to the character ω_i − ω_{i−1}. Assume the claim. Then we have an exact sequence involving ∧^{t−1} ξ′ ⊗ L_{ω_{i−1}−ω_i}; moreover Q = Q′ ∩ P_i. Let p : GL_n/Q → GL_n/Q′ be the natural projection; its fibres are isomorphic to Q′/Q ≅ GL_2/B_2 ≅ P^1. Note that ∧^{t−1} ξ′ ⊗ L_{ω_{i−1}} is the pull-back along p of some vector bundle on GL_n/Q′; hence it is constant on the fibres of p.
On the other hand, L_{ω_i} is the ample line bundle on GL_n/P_i that generates its Picard group, so L_{−ω_i} restricted to any fibre of p is O(−1). Hence ∧^{t−1} ξ′ ⊗ L_{ω_{i−1}−ω_i} on any fibre of p is a direct sum of copies of O(−1), and hence it has no cohomology. By Grauert's theorem [Har77, III.12.9], R^i p_*(∧^{t−1} ξ′ ⊗ L_{ω_{i−1}−ω_i}) = 0 for every i, so, using the Leray spectral sequence, we conclude that H^*(GL_n/Q, ∧^{t−1} ξ′ ⊗ L_{ω_{i−1}−ω_i}) = 0. This gives the proposition.

Now to prove the claim: let e_1, …, e_n be a basis for C^n such that the subspace spanned by e_i, …, e_n is B_N-stable for every 1 ≤ i ≤ n. (Recall that we take the right action of B_N on C^n.) Hence R_{n−i+1}/R_{n−i} is the invertible sheaf on which B_N acts through the character ω_i − ω_{i−1}, which implies the claim.
Remark 5.9 (Determinantal case). Recall (see the paragraph after Definition 3.11) that Y_P(w) = D_k if w = (k + 1, …, n, N − k + 1, …, N) ∈ W_{n−k}. In this case, the required cohomology groups reduce to cohomology on the Grassmannian, where the first equality comes from a repeated application of Proposition 5.8 and the second one follows by Lemma 5.2, applied to the natural map f : GL_n/Q → GL_n/P_{n−k}. Hence our approach recovers Lascoux's resolution of the determinantal ideal [Las78]; see also [Wey03, Chapter 6].
Examples
We illustrate our approach with two examples. Firstly, we compute the resolution of a determinantal variety using the inductive method from the last section.
We put these together to compute h^l(∧^t ξ); the result is listed in Table 1. From this we get the resolution. Note, indeed, that dim Y_Q(w) = dim X_Q(w) = 4 + 4 + 5 + 5 + 6 + 6 = 30 and that dim O^-_{GL_N/P} = 6 · 6 = 36, so the codimension is 6. Since the variety is Cohen–Macaulay, the length of a minimal free resolution is 6.
Further remarks
A realization of Lascoux's resolution for determinantal varieties. We already saw in Remark 5.9 that when Y_P(w) = D_k, computing H^*(GL_n/Q_{n−k}, ∧^• ξ) is reduced, by a repeated application of Proposition 5.8, to computing the cohomology groups of (completely reducible) vector bundles on the Grassmannian GL_n/P_{n−k}. We thus realize Lascoux's resolution of the determinantal variety using our approach.
In this section, we give yet another desingularization of D_k (for a suitable choice of the parabolic subgroup) so that the variety V of Diagram (1.2) is in fact a Grassmannian. Recall (the paragraph after Definition 3.11, or Remark 5.9) that Y_P(w) = D_k if w = (k + 1, …, n, N − k + 1, …, N) ∈ W_{n−k}. Let P̂ = P_{{n−k, n}} ⊆ GL_N. Let w̃ be the representative of the coset wP in W^P̂.
Proposition 7.1. X_P̂(w̃) is smooth, and the natural map X_P̂(w̃) → X_P(w) is proper and birational; i.e., X_P̂(w̃) is a desingularization of X_P(w).
Proof. The proof is similar to that of Proposition 3.10. Let w_max = (k + 1, …, n, N − k + 1, …, N, N − k, …, n + 1, k, …, 1) ∈ W. Then X_{B_N}(w_max) is the inverse image of X_P̂(w̃) under the natural morphism GL_N/B_N → GL_N/P̂, and w_max is a 4231- and 3412-avoiding element of W = S_N.
We have P/P̂ ≅ GL_n/P_{n−k}. As in Section 3, we have the following. Denoting by Z the preimage inside X_P̂(w̃) of Y_P(w) (under the restriction to X_P̂(w̃) of the natural projection G/P̂ → G/P), we have Z ⊂ O^- × P/P̂, and the image of Z under the second projection is V := P/P̂ (≅ GL_n/P_{n−k}). The inclusion Z ↪ O^- × V is a sub-bundle (over V) of the trivial bundle O^- × V. Denoting by ξ the dual of the quotient bundle on V corresponding to Z, we have that the homogeneous bundles ∧^{i+j} ξ on GL_n/P_{n−k} are completely reducible, and hence their cohomology may be computed using Bott's algorithm.
Multiplicity. We describe how the free resolution obtained in Theorem 4.2 can be used to get an expression for the multiplicity mult_id(w) of the local ring of the Schubert variety X_P(w) ⊆ GL_N/P at the point e_id. Notice that Y_P(w) is an affine neighbourhood of e_id. We noticed in Section 4 that Y_P(w) is a closed subvariety of O^-_{GL_N/P} defined by homogeneous equations. In O^-_{GL_N/P}, e_id is the origin; hence in Y_P(w) it is defined by the unique homogeneous maximal ideal of C[Y_P(w)]. Therefore C[Y_P(w)] is the associated graded ring of the local ring of C[Y_P(w)] at e_id (which is also the local ring of X_P(w) at e_id). Hence mult_id(w) is the normalized leading coefficient of the Hilbert series of C[Y_P(w)].
Observe that the Hilbert series of C[Y_P(w)] can be obtained as an alternating sum of the Hilbert series of the modules F_i in Theorem 4.2. Write h^j(−) = dim_C H^j(X_{Q_s}(w′), −) for coherent sheaves on X_{Q_s}(w′). Then the Hilbert series of C[Y_P(w)] is

$$\frac{\sum_{i,j \ge 0} (-1)^i\, h^j\!\big(\bigwedge^{i+j} U_w\big)\, t^{\,i+j}}{(1-t)^{mn}}. \tag{7.2}$$

We may harmlessly change the range of summation in the numerator of (7.2) to −∞ < i, j < ∞; this is immediate for j, while for i, we note that the proof of Theorem 4.1 implies that h^j(∧^{i+j} U_w) = 0 for every i < 0 and for every j. Hence we may write the numerator of (7.2) as (with k = i + j)

$$\sum_{k} (-t)^k \sum_{j} (-1)^j\, h^j\!\big(\bigwedge^{k} U_w\big) \;=\; \sum_k (-t)^k\, \chi\!\big(\bigwedge^k U_w\big). \tag{7.3}$$

Since ∧^k U_w is also a T_n-module, where T_n is the subgroup of diagonal matrices in GL_n, one may decompose ∧^k U_w as a sum of rank-one T_n-modules and use the Demazure character formula to compute the Euler characteristics above.
It follows from generalities on Hilbert series (see, e.g., [BH93, Section 4.1]) that the polynomial in (7.3) is divisible by (1 − t)^c, where c is the codimension of Y_P(w) in O^-_{GL_N/P}, and that after we divide by it and substitute t = 1 in the quotient, we get mult_id(w). This gives an expression for mult_id(w) apart from those of [LW90, KL04].
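In summary, combining (7.2) and (7.3) with this divisibility, the recipe condenses to a single expression:

$$\operatorname{mult}_{\mathrm{id}}(w) \;=\; \left[\frac{\sum_{k}(-t)^k\,\chi\big(\bigwedge^{k}U_w\big)}{(1-t)^{c}}\right]_{t=1},$$

with c the codimension of Y_P(w) in O^-_{GL_N/P}, as above.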
Castelnuovo-Mumford Regularity. Since C[Y_P(w)] is a graded quotient ring of C[O^-_{GL_N/P}], it defines a coherent sheaf over the corresponding projective space P^{mn−1}.
Let F be a coherent sheaf on P^n. The Castelnuovo–Mumford regularity of F (with respect to O_{P^n}(1)) is the smallest integer r such that H^i(P^n, F ⊗ O_{P^n}(r − i)) = 0 for every 1 ≤ i ≤ n; we denote it by reg F. Similarly, if R = k[x_0, …, x_n] is a polynomial ring over a field k with deg x_i = 1 for every i and M is a finitely generated graded R-module, the Castelnuovo–Mumford regularity of M is the smallest integer r such that the local cohomology modules satisfy H^i_{(x_0,…,x_n)}(M)_j = 0 for every i and every j > r − i.

Now let w = (n − r + 1, n − r + 2, …, n, a_{r+1}, …, a_{n−1}, N) ∈ W_r. We would like to determine reg C[Y_P(w)] = max{j : H^j(GL_n/Q_r, ∧^• U_w) ≠ 0}. Recall that a_r = n and a_n = N. For r ≤ i ≤ n − 1, define m_i = a_{i+1} − a_i. Note that U_i appears in U_w with multiplicity m_i and that m_i > 0. Based on the examples that we have calculated, we have the following conjecture, stated here in the form that both computations below follow:

$$\operatorname{reg}\, \mathbb{C}[Y_P(w)] \;=\; \sum_{i=r}^{n-1} (m_i - 1)\, i.$$

(Note that since Y_P(w) is Cohen–Macaulay, reg C[Y_P(w)] = reg O_{Y_P(w)}.) Consider the examples in Section 6. In Example 6.1, m_2 = 2 and m_3 = 1, and reg C[Y_P(w)] = (2 − 1)·2 + 0 = 2. In Example 6.3, m_2 = m_4 = 2 and m_3 = m_5 = 1, so reg C[Y_P(w)] = (2 − 1)·2 + 0 + (2 − 1)·4 + 0 = 6, which indeed is the case, as we see from Table 1. | 2015-04-17T01:16:07.000Z | 2015-04-17T00:00:00.000 | {
"year": 2015,
"sha1": "2922532d741c412555cd3a8d1133f9e984e99ec6",
"oa_license": null,
"oa_url": "http://msp.org/pjm/2015/279-1/pjm-v279-n1-p14-s.pdf",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "2922532d741c412555cd3a8d1133f9e984e99ec6",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
486305 | pes2o/s2orc | v3-fos-license | The impact of regular school closure on seasonal influenza epidemics: a data-driven spatial transmission model for Belgium
Background School closure is often considered as an option to mitigate influenza epidemics because of its potential to reduce transmission in children and then in the community. The policy is still however highly debated because of controversial evidence. Moreover, the specific mechanisms leading to mitigation are not clearly identified. Methods We introduced a stochastic spatial age-specific metapopulation model to assess the role of holiday-associated behavioral changes and how they affect seasonal influenza dynamics. The model is applied to Belgium, parameterized with country-specific data on social mixing and travel, and calibrated to the 2008/2009 influenza season. It includes behavioral changes occurring during weekend vs. weekday, and holiday vs. school-term. Several experimental scenarios are explored to identify the relevant social and behavioral mechanisms. Results Stochastic numerical simulations show that holidays considerably delay the peak of the season and mitigate its impact. Changes in mixing patterns are responsible for the observed effects, whereas changes in travel behavior do not alter the epidemic. Weekends are important in slowing down the season by periodically dampening transmission. Christmas holidays have the largest impact on the epidemic, however later school breaks may help in reducing the epidemic size, stressing the importance of considering the full calendar. An extension of the Christmas holiday of 1 week may further mitigate the epidemic. Conclusion Changes in the way individuals establish contacts during holidays are the key ingredient explaining the mitigating effect of regular school closure. Our findings highlight the need to quantify these changes in different demographic and epidemic contexts in order to provide accurate and reliable evaluations of closure effectiveness. They also suggest strategic policies in the distribution of holiday periods to minimize the epidemic impact. Electronic supplementary material The online version of this article (doi:10.1186/s12879-017-2934-3) contains supplementary material, which is available to authorized users.
We describe here in detail the inference procedure used to approximate fluxes of commuters per age class i = a, c in Belgium. For each commuting link l of the French commuting network, we computed the commuting distance d(l) and the fraction ρ(l) of commuters of age class i. We filtered all links having fewer than 30 commuters. We considered seven bins of distance, according to the definition used in the "Enquête National Transport et Déplacements 2008" (National survey on transport and mobility, 2008): [0, 2] km, ]2, 5] km, ]5, 10] km, ]10, 20] km, ]20, 40] km, ]40, 80] km, and > 80 km. The distribution obtained for each distance bin is then used to infer the fraction of Belgian commuters in age class i traveling within the same distance bin. A comparison between the empirical distributions obtained from the French commuting data and the reconstructed distributions for Belgium is shown in Figure S1. A good agreement is found for all distance bins, with a noisier behavior obtained for the bin class d(l) ≤ 2 km, due to poor statistics. Plots also show that children commute at shorter distances than adults. Figure S1: Probability distribution of the fraction of children commuters at specific distance bins: comparison between the empirical distributions obtained from the French commuting data and the reconstructed distributions for Belgium.
Contact Matrices
The average number of contacts made by participants in age class i = 1, 2 with people in age class j = 1, 2 is given by M_ij. The per capita contact rates are then summarised in the contact rate matrix, which is rescaled to a normalised contact matrix (Eq. S1), where N_tot is the total population of Belgium. We note here that the values C_ij are scale invariant. We consider C_ij to describe the social interaction in each patch, i.e. C^(p) following Eq. (S1), with N^(p)(t) being the total population in the patch at time t.
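A standard scale-invariant normalisation consistent with this description, written here as an assumption rather than as a quotation of the paper's Eq. (S1), is

$$C_{ij} \;=\; \frac{M_{ij}\, N_{\mathrm{tot}}}{N_j},$$

which depends on the age-group sizes only through the fraction N_j/N_tot and is therefore unchanged under a uniform rescaling of all populations; within a patch p, the same expression would be evaluated with the patch population N^(p)(t) in place of N_tot.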
Details of the compartmental model in each patch
Each patch receives commuters from k_in(p) patches and moves residents to k_out(p) other patches, with k_in and k_out representing the indegree and outdegree, respectively, of patch p in the commuting network. Commuters are modeled with separate compartments, in order to track them in their movements from residence to destination and back. At each time step t, the population of a patch p is composed of the following subpopulations, each described by a two-age-class SEIR disease progression model:

• individuals who reside in patch p and do not commute;

• k_out(p) subpopulations of individuals who reside in patch p and commute to another patch q, with q a neighbor of p, and i = c, a;

• k_in(p) subpopulations of individuals who reside in a patch q and commute from patch q to patch p, with q a neighbor of p, and i = c, a.
Accounting for commuting, we can then write the force of infection for a susceptible individual of age class i in patch p at time t.
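A minimal sketch of such a force of infection, assuming frequency-dependent transmission with per-contact rate β and suppressing the bookkeeping over resident and commuter subpopulations, is

$$\lambda^{(p)}_i(t) \;=\; \frac{\beta}{N^{(p)}(t)} \sum_{j \in \{c,\,a\}} C^{(p)}_{ij}\, I^{(p)}_j(t),$$

where I^(p)_j(t) counts all infectious individuals of age class j present in patch p at time t.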
Influenza transmission
Here we describe influenza transmission for the subpopulation of residents of a given patch p (we drop the p for simplicity). The extension to the other subpopulations present in the patch is straightforward. The probabilities associated to the SEIR transitions for age class i in a small enough time interval dt are given by the corresponding rates multiplied by dt. The number of individuals in age class i newly entering the E, I, and R classes are extracted with binomial distributions (B).
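As a stand-alone illustration of the binomial extraction step, here is a minimal C++ sketch of one update for a single subpopulation. The rate names eps (exit from latency) and mu (recovery), and all numerical values, are illustrative assumptions; the study's own implementation is the C++/Boost code described under Computational details below.

```cpp
#include <iostream>
#include <random>

// One stochastic SEIR update step: the numbers of individuals moving
// S->E, E->I and I->R in [t, t+dt) are drawn from binomial distributions.
struct SEIR {
    long S, E, I, R;
};

static long binomial_draw(std::mt19937_64& rng, long n, double p) {
    std::binomial_distribution<long> B(n, p);
    return B(rng);
}

static void seir_step(SEIR& x, double lambda, double eps, double mu,
                      double dt, std::mt19937_64& rng) {
    const long new_E = binomial_draw(rng, x.S, lambda * dt); // infections
    const long new_I = binomial_draw(rng, x.E, eps * dt);    // end of latency
    const long new_R = binomial_draw(rng, x.I, mu * dt);     // recoveries
    x.S -= new_E;
    x.E += new_E - new_I;
    x.I += new_I - new_R;
    x.R += new_R;
}

int main() {
    std::mt19937_64 rng(2009);
    SEIR x{9990, 0, 10, 0};
    const long N = x.S + x.E + x.I + x.R;
    for (int step = 0; step < 1000; ++step) {
        // Illustrative single-class force of infection, beta * I / N.
        const double lambda = 0.6 * static_cast<double>(x.I) / N;
        seir_step(x, lambda, /*eps=*/0.5, /*mu=*/0.33, /*dt=*/0.1, rng);
    }
    std::cout << x.S << ' ' << x.E << ' ' << x.I << ' ' << x.R << '\n';
}
```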
Derivation of the next generation matrix
In calculating the values of R we disregard mobility. The in-patch model therefore becomes a two-age-class stochastic SEIR model, whose deterministic counterpart can be written down directly. Using Diekmann's approach, we linearize the equations of the infectious compartments E, I around the disease-free state with the correct immunity fraction a and obtain a system of linear equations restricted to the infectious compartments. The next generation matrix in the patch then follows (with I the identity matrix), and in components it gives the result reported in the main text.
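A sketch of that construction, assuming the frequency-dependent force of infection above and illustrative rate names ε (exit from latency) and μ (recovery), runs as follows: linearizing around the disease-free state with immunity fraction a gives

$$\dot E_i \;=\; (1-a)\,\frac{N_i}{N}\,\beta\sum_j C_{ij}\,I_j \;-\; \varepsilon E_i, \qquad \dot I_i \;=\; \varepsilon E_i \;-\; \mu I_i,$$

whence the next generation matrix and the reproduction number would be

$$K_{ij} \;=\; \frac{\beta}{\mu}\,(1-a)\,\frac{N_i}{N}\,C_{ij}, \qquad R \;=\; \rho(K),$$

with ρ(·) the spectral radius of the 2 × 2 matrix K.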
Computational details
The code of the simulations was written in C++ and made use of the Mersenne Twister random generator and the binomial extraction procedures as provided by the Boost Libraries v1.58.0. Compiling was done using the GNU C++ compiler version 4.8.1 with optimization level 3. Table S1 presents the list of district names and associated IDs used in the study.
Calibration procedure
We minimized the Weighted Least Square function WLS(β_n, α_n) computed on the median normalized incidence curves, considered from the start of the epidemic up to the peak time. The calibration is performed on the Brussels district only, and for each set of parameters (β_n, α_n) we performed 1,000 simulations. Here β_n is the explored per-contact transmission rate, and α_n is the rescaling factor for the simulated incidence in the Brussels district, which accounts for possible sampling biases in the initial condition. The calibration is performed on normalized incidence curves to discount the effects of unknown GP consultation rates.
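One standard weighted least-squares form, given here as an illustrative assumption rather than as the paper's exact definition, weights each time point by the observed incidence:

$$\mathrm{WLS}(\beta_n,\alpha_n) \;=\; \sum_{t\,\le\,t_{\mathrm{peak}}} \frac{\big(I_{\mathrm{obs}}(t)-\alpha_n\,\tilde I_{\mathrm{sim}}(t;\beta_n)\big)^2}{I_{\mathrm{obs}}(t)},$$

where I_obs and Ĩ_sim denote, respectively, the normalized observed and median simulated incidence curves for the Brussels district.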
To reduce the number of points to explore and to cope with stochastic fluctuations, we considered iterative resampling through a particle filter/bootstrap method. For each level l we calculate a weight distribution w_l(β_n^l, α_n^l) for each particle (β_n, α_n), which allows us to define a filtering/resampling transition probability

$$p(\beta, \alpha;\, l+1 \mid \beta^l_1, \ldots, \beta^l_n, \alpha^l_1, \ldots, \alpha^l_n;\, l) \;=\; \sum_{\beta^l_n,\, \alpha^l_n} w_l(\beta^l_n, \alpha^l_n)\, V_{(\beta^l_n, \alpha^l_n)}(\beta, \alpha), \tag{S13}$$

where V_{(a_0,b_0)}(a, b) is the uniform distribution over the Voronoi cell centred in (a_0, b_0). We can then resample N particles at level l + 1 given the M particles at level l, and repeat the process iteratively until the filtering probability is almost uniform, and therefore the filter does not work any more. Here we used 20 particles at each level (except the first), and we stopped when the number of effective particles defined in (S14) was greater than 19, which corresponds to uniformity of the filtering probability. For each level l, we thus obtain a set of 20 pairs (β_n^l, α_n^l) whose distribution is used to estimate the values of α and β that minimize WLS(β, α).

Table S1: List of district names and associated IDs.
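The effective-particle count of (S14) is taken here to be the standard effective sample size of the normalised weights, an assumption consistent with the stopping rule above:

$$N_{\mathrm{eff}} \;=\; \left(\sum_n \tilde w_l(\beta^l_n,\alpha^l_n)^2\right)^{-1}, \qquad \tilde w_l(\beta^l_n,\alpha^l_n) \;=\; \frac{w_l(\beta^l_n,\alpha^l_n)}{\sum_m w_l(\beta^l_m,\alpha^l_m)},$$

so that N_eff equals 20 exactly when all 20 weights are equal, and the threshold N_eff > 19 detects near-uniformity of the filtering probability.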
Additional validation results
Calibration results are listed in Table S2. Figure S2 shows the comparison in the peak timing between simulations calibrated with the values of Table S2 and surveillance data. Figure S2: Left: Boxplot of the peak time difference ΔT_d per district between simulations and empirical data. Numbers represent Belgian districts; see Table S1 for corresponding names. Right: Geographical map of the median peak time difference per district. | 2018-01-16T22:28:01.873Z | 2017-12-07T00:00:00.000 | {
"year": 2018,
"sha1": "faf7a4e9db24c30637224f708ad868ecfc1d9da6",
"oa_license": "CCBY",
"oa_url": "https://bmcinfectdis.biomedcentral.com/track/pdf/10.1186/s12879-017-2934-3",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "dc3150dec198edfa1fd0127b946c9e65e41d1172",
"s2fieldsofstudy": [
"Environmental Science",
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Geography"
]
} |
266549866 | pes2o/s2orc | v3-fos-license | Chronic subdural hematoma associated with type II and type III Galassi arachnoid cysts: illustrative cases
BACKGROUND Arachnoid cysts (ACs) are congenital abnormalities that can be located anywhere within the subarachnoid space along the cerebrospinal axis, although they are most often found on the left side in the temporal fossa and sylvian fissure. ACs comprise approximately 1% of all intracranial space-occupying lesions and are considered potential risk factors for subdural hematoma (SDH) in individuals of all age groups who have experienced traumatic brain injury. Although it is uncommon for an intracystic hemorrhage of an AC to occur without evidence of head trauma, it may be more common among children and young adults. Here, the authors present three cases of spontaneous AC intracystic hemorrhage with chronic SDH. Additionally, they provide a thorough review of the existing literature. OBSERVATIONS All three patients with AC were adolescent males. In all cases, AC was identified using the Galassi classification (type II or III) and associated with spontaneous intracystic hemorrhage and chronic SDH as seen on imaging. LESSONS Spontaneous intracystic hemorrhage is a rare complication and occurs most commonly on the left side. Surgery is the definitive treatment, requiring either craniotomy or burr hole for hematoma evacuation and microsurgical fenestration to drain the cyst into the subarachnoid cisterns.
Arachnoid cysts (ACs) are classified as congenital accumulations of cerebrospinal fluid (CSF) that are situated within the arachnoid membrane. These cysts are most often found in the middle cranial fossa. ACs tend to prefer the left side and often do not produce any symptoms but can become symptomatic if they grow sizably or if there is bleeding within the cyst or subdural space.1 Although ACs are typically discovered incidentally, they can sometimes present with clinical signs and symptoms such as macrocephaly in the neonatal period, headaches, hydrocephalus, and epileptic seizures.2 The diagnosis of asymptomatic ACs has increased in recent years due to the extensive utilization of computed tomography (CT) and magnetic resonance imaging (MRI) techniques.3 Head trauma is one of the most significant risk factors for the development of intracystic hemorrhage in ACs and subsequent subdural hematoma (SDH). However, fewer than 182 cases of spontaneous intracystic hemorrhage have been reported in the literature.3,4 In this report, we present illustrative cases of three young patients who experienced spontaneous AC intracystic hemorrhage followed by chronic SDH. Additionally, we provide a thorough review of the existing literature.
Illustrative Cases

Case 1
An 18-year-old male experienced a mild traumatic brain injury (TBI) after a head collision in a traffic accident. During the acute trauma phase, there were no indications of loss of consciousness (LOC) or other severe symptoms. However, 3 months after the incident the patient experienced a worsening headache that became continuous, unresponsive to analgesic treatment, and associated with vomiting. Neurological examination revealed a Glasgow Coma Scale (GCS) score of 15, with no observed changes in pupillary responses, language, or motor function, or signs of epilepsy. After a head CT, the patient's lesion was identified as a left temporal Galassi type III arachnoid cyst with corresponding chronic SDH (Fig. 1). The patient was urgently admitted to the hospital for microsurgical intervention. The corrective procedure involved a left pterional craniotomy, drainage of the hematoma, and microsurgical arachnoid cyst fenestration. This surgical approach aimed to establish communication between the cyst and interpeduncular cistern. The patient's headache improved during the immediate postoperative period. He was discharged after 2 weeks with no residual deficits.
Case 2
A 14-year-old male presented with worsening headache that started 2 days prior to admission, without associated trauma. He then came to the hospital for further evaluation. On examination his GCS score was 15, with no pupillary, language, or motor changes. Other than headache, he was asymptomatic, with no signs of epilepsy. An initial CT head identified a left temporal Galassi type II AC with corresponding chronic SDH (Fig. 2). The patient underwent a left pterional craniotomy, hematoma drainage, and microsurgical AC fenestration. The patient's headache improved in the immediate postoperative period, and he was discharged after 2 weeks with good outcomes.
Case 3
A 21-year-old male experienced a mild TBI after a head collision in a traffic accident. During the acute trauma phase, there were no indications of LOC or other severe symptoms. However, 2 months after the incident the patient's headache intensified and became continuous, unresponsive to analgesic treatment, and accompanied by vomiting. Neurological examination revealed a GCS score of 15, with no pupillary, motor, language, or epileptic signs. An initial CT head identified the lesion as a left temporal Galassi type II AC with associated chronic SDH (Fig. 3). The patient was placed under local anesthesia, and a burr hole was created for hematoma evacuation. Upon performing the procedure, the brain promptly re-expanded. The AC was not excised during surgery, and a subdural drain was not required for additional drainage. The patient was discharged after 10 days with good postoperative outcomes.
Patient Informed Consent
The necessary patient informed consent was obtained in this study.
Discussion
Observations

All participants included in our study were adolescent males. Previous studies have reported the prevalence of AC in the adult population to fall between 0.23% and 2.43%. In the pediatric population (those 18 years or younger), the estimated prevalence is 2.6%. Notably, both adult and pediatric populations exhibit a higher prevalence of AC among male patients.5 SDHs are common, with an annual incidence of 13 to 14 cases per 100,000 person-years, and many cases remain undiagnosed. Most SDHs are associated with head trauma, wherein deceleration forces cause the rupture of bridging veins within the subdural space. Spontaneous SDHs are less frequent, accounting for 2% to 5% of atraumatic cases. The etiology of these hematomas, excluding those associated with arteriovenous malformations or fistulae, is not fully understood. However, they have been observed in patients with a tendency to bleed due to hematological disorders, malignancies, anticoagulation therapy, hypertension from preeclampsia, infections, and hypervitaminosis. Rupture of cortical arteries at their adhesion sites with the dura mater has also been reported. Spontaneous SDHs may occur during Valsalva maneuvers (e.g., coughing, straining, or weight training), or in association with intracranial hypotension caused by CSF leakage, excessive shunt drainage, exercise, or dehydration.6

In adults, ACs are frequently encountered as an incidental finding on imaging. Although ACs may present with symptoms during childhood, they often remain asymptomatic until adulthood. According to one study, the prevalence of ACs was found to be 1.4% in patients who underwent MRI. Only a small percentage (5.3%) of the cysts were symptomatic.7 Despite being considered benign, ACs can give rise to complications such as SDH, hygroma, and intracystic hemorrhage. These complications can occur spontaneously or due to minor head trauma. The bleeding is believed to be caused by the absence of structural support in the veins surrounding the ACs. This hypothesis is supported by the vascular nature of the cystic membrane as well as the presence of bridging veins traversing the cyst. Notably, the reduced compliance of the cyst compared to normal brain tissue may contribute to bridging vein rupture, particularly during instances of increased intracranial pressure (ICP).2

Gosalakkal8 identified that ACs within the middle cranial fossa were present in 2.43% of patients diagnosed with chronic SDHs or hygromas. Patients with chronic SDH had five times the prevalence of ACs when compared with controls.8

In our study, all three ACs bled with associated chronic SDH. All cases were associated with mild head trauma and large cyst size according to the Galassi classification (types II and III). These cases demonstrate the influence of mild TBI as a risk factor for subsequent AC rupture. A case-control study revealed that recent trauma within the past 30 days might elevate the risk of AC rupture by as much as 26.5-fold. Additionally, cyst size greater than 50 mm was identified as another risk factor for rupture.9

Clinical signs or symptoms often seen in the setting of ACs include epilepsy, attention-deficit/hyperactivity disorder, speech or developmental delay, signs of obstructive hydrocephalus, aphasia, increased ICP, headaches, and vomiting.5,8 In our study, none of the patients experienced significant symptoms other than headache or vomiting prior to hemorrhage.
The most effective treatment approach for chronic SDH associated with AC remains a topic of debate. Treatment for ACs is based on their location and the presence of symptoms. Symptomatic cysts may require surgical intervention, but most neurosurgeons do not recommend treating asymptomatic cysts. ACs that are linked to subdural or epidural hematomas may also resolve on their own. In cases in which conservative measures are ineffective, neurosurgical intervention can be considered for AC management. Surgical options include shunting procedures, craniotomy with cyst fenestration, use of burr holes, and endoscope-assisted cyst fenestration into the CSF spaces.2,5,9 Burr hole and craniotomy are the primary surgical techniques employed for draining chronic SDH.
In certain cases, surgeons may opt not to treat ACs when performing burr hole procedures. Others may choose to address the cyst through various methods, including partial cyst removal or microsurgical cyst fenestration toward the interpeduncular or optic chiasmatic cisterns.5,9 Wu et al.4 conducted a comprehensive review focusing on AC-associated chronic SDH in both adult and pediatric patients (n = 182). Their study revealed recurrence rates of 8.2% with burr hole procedures and 1.5% with craniotomy.4 Some authors have proposed the use of burr holes without any manipulation of the AC as the initial preferred approach for symptomatic patients and for selected instances of recurrence.4 Other authors have suggested that the optimal treatment modality for patients with AC with associated hemorrhage involves hematoma evacuation and radical marsupialization.10 After cyst rupture and hemorrhage, endoscopic surgery presents technical challenges due to limited visibility and orientation within the cyst and surrounding membranes. These factors hinder the safe execution of fenestration procedures.9

In the first two cases, we assumed that the cyst wall had been opened because the contents within the AC and chronic SDH were of similar density. In both cases a frontotemporal craniotomy was performed, and the membranes of the hematoma and AC were completely removed to achieve complete evacuation. In the third case, a burr hole was made to evacuate the chronic SDH because the boundary between the cyst wall and hematoma was visible.
Lessons
AC with spontaneous intracystic hemorrhage and corresponding chronic SDH is a relatively rare condition. Surgery is considered the most effective treatment approach. In addition to evacuating the chronic SDH, microsurgical fenestration and membranectomy are recommended to prevent recurrence.
FIG. 1. Left: Preoperative axial head CT with contrast showing a left temporal Galassi type III AC and chronic left SDH. Right: Postoperative axial head CT without contrast demonstrating total resection of the AC and SDH.
FIG. 2. Left: Preoperative axial head CT with contrast showing a left temporal Galassi type II AC with chronic left SDH. Right: Postoperative axial head CT without contrast demonstrating total resection of the AC and SDH. | 2023-12-27T06:16:51.657Z | 2023-12-25T00:00:00.000 | {
"year": 2023,
"sha1": "0e03b0f88084a0300afd88ea8ba96ce12593e47f",
"oa_license": "CCBYNCND",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "1865027757e3c36a17c4940138f3378fe1b12232",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
16377442 | pes2o/s2orc | v3-fos-license | Evolutionarily Stable Association of Intronic snoRNAs and microRNAs with Their Host Genes
Small nucleolar RNAs (snoRNAs) and microRNAs (miRNAs) are integral to a range of processes, including ribosome biogenesis and gene regulation. Some are intron encoded, and this organization may facilitate coordinated coexpression of host gene and RNA. However, snoRNAs and miRNAs are known to be mobile, so intron-RNA associations may not be evolutionarily stable. We have used genome alignments across 11 mammals plus chicken to examine positional orthology of snoRNAs and miRNAs and report that 21% of annotated snoRNAs and 11% of miRNAs are positionally conserved across mammals. Among RNAs traceable to the bird–mammal common ancestor, 98% of snoRNAs and 76% of miRNAs are intronic. Comparison of the most evolutionarily stable mammalian intronic snoRNAs with those positionally conserved among primates reveals that the former are more overrepresented among host genes involved in translation or ribosome biogenesis and are more broadly and highly expressed. This stability is likely attributable to a requirement for overlap between host gene and intronic snoRNA expression profiles, consistent with an ancestral role in ribosome biogenesis. In contrast, whereas miRNA positional conservation is comparable to that observed for snoRNAs, intronic miRNAs show no obvious association with host genes of a particular functional category, and no statistically significant differences in host gene expression are found between those traceable to mammalian or primate ancestors. Our results indicate evolutionarily stable associations of numerous intronic snoRNAs and miRNAs and their host genes, with probable continued diversification of snoRNA function from an ancestral role in ribosome biogenesis.
Introduction
Noncoding RNAs (ncRNAs) are known to have a diverse range of roles in eukaryotes (Eddy 2001;Mattick 2003;Stefani and Slack 2008). Among the numerous groups of ncRNA described, several abundant classes of small ncRNA with a broad phylogenetic distribution are known, including C/D and H/ACA box small nucleolar RNAs (snoRNAs) and microRNAs (miRNAs). SnoRNAs have well-documented roles in cleavage-based processing and modification, primarily of rRNAs (Kiss 2002), but have also been documented to modify other RNA targets including small nuclear RNAs of the spliceosome (Ganot et al. 1999;Jády and Kiss 2001;Bachellerie et al. 2002;Darzacq et al. 2002). More recently, a role in regulation of alternative splicing of mRNA has been described (Kishore and Stamm 2006). MiRNAs, on the other hand, have welldocumented roles in gene regulation across a broad range of species and biological processes. They act to repress gene expression posttranscriptionally through direct pairing to a target mRNA (Bartel 2009;Carthew and Sontheimer 2009). The genomic arrangement of both classes of RNA is varied and includes independent transcripts, genomic clusters consisting of multiple RNAs and residence within the introns of protein-coding genes (Weinstein and Steitz 1999;Mattick 2003;Brown et al. 2008;Royo and Cavaillé 2008).
The intronic location of ncRNAs is interesting in that it represents a situation where two distinct gene products may be expressed from the same transcript. Expression of in-tronic ncRNAs is largely (though not exclusively) splicing dependent (Hirose et al. 2003;Baskerville and Bartel 2005;Brown et al. 2008). Assuming that expression profiles of both intronic ncRNA and host gene are subject to natural selection, one may envisage several explanations for this arrangement. One is that ncRNAs in introns primarily emerge de novo (Lu et al. 2008) and that a given intronic ncRNA is retained by selection on the basis of it performing some selectively advantageous function within the scope of the host gene expression profile. Another model builds upon the observation that ncRNAs, including snoRNAs, have been documented to be mobile (Weber 2006;Zemann et al. 2006;Schmitz et al. 2008) and may move between genomic locations over evolutionary time via reverse transcription (Volff and Brosius 2007). Mobility may result in a copy of an existing ncRNA becoming intronically located (from some other position, either intronic or not) and being retained at that site because overlap of ncRNA and host gene expression is beneficial. Under both models, which are not mutually exclusive, coexpression of host gene and intronic ncRNA may result in some optimal expression profile for both products, with maximum overlap and minimum trade-off. This might potentially be achieved by switching from one host gene to another (Enerly et al. 2003). Note that the mobility model results in ncRNA duplication (via segmental duplication or retrotransposition), which may in some cases lead to functional divergence of the copies (Volff and Brosius 2007).
Anecdotal observations support evolutionarily stable ncRNA-host gene relationships (Cervelli et al. 2002), mobility (Weber 2006; Schmitz et al. 2008), and segmental duplication (Zemann et al. 2006; Nahkuri et al. 2008). However, short lengths and limited sequence conservation among small RNAs make it nontrivial to distinguish orthology and paralogy. Genome alignments make assignment of orthology between small ncRNAs more reliable than by simple sequence similarity alone, and within this framework it is possible to systematically examine the association between ncRNAs and their host genes (Tanaka-Fujita et al. 2007). We therefore made use of available multispecies whole-genome alignments (Hubbard et al. 2009) to examine the degree to which intron occupancy by miRNAs and snoRNAs is stable across mammalian genomes. For both classes of ncRNA, around 50% of all annotated ncRNAs appear to be intronic in the genomes we studied, and we report a high degree of evolutionary conservation between intronic ncRNAs and their host genes across mammals. Out of the several hundred snoRNAs and miRNAs annotated in the respective genomes (e.g., 717 snoRNAs and 1664 miRNAs in humans), 87 snoRNAs and 103 miRNAs are traceable to the mammalian ancestor using synteny established from genome alignments. Of these, almost all snoRNAs (87/89) and the majority of miRNAs (61/80) are intronic.
At the same time, many snoRNAs and miRNAs are restricted to specific lineages within the mammalian tree, suggesting either ancestral losses or a more recent evolutionary origin. In the case of miRNAs, the latter is generally assumed given the well-documented role this class of ncRNA plays in gene regulation. Although data are emerging to support a broader regulatory role for snoRNAs (Kishore and Stamm 2006; Royo and Cavaillé 2008), such snoRNAs are found in clusters and are generally not intronic (though some may have evolved from intronic snoRNAs [Nahkuri et al. 2008], and some are found in the introns of nontranslated mRNAs [Tycowski et al. 1996]).
We compared the functions of mammalian genes carrying intronic ncRNAs whose intronic positions are stable and ancient (conserved across all 11 mammalian genomes in our data set) with the functions of those that have been in their current location more recently (restricted to primates). For the stable ancient snoRNAs, there appears to be significant overrepresentation of host genes involved in protein synthesis and ribosome biogenesis, whereas no functions are significantly overrepresented among the less stable lineage-specific snoRNAs. Against the backdrop of stable association between ncRNAs and their host genes, this may suggest that snoRNAs have taken on additional roles during the diversification of mammals, in line with suggestions that mammals (and vertebrates, see Heimberg et al. 2008) employ extensive RNA regulatory networks for fine-tuning function and gene expression (Mattick 2001, 2009).
Data set
A precompiled genomic alignment of 11 mammals and one bird was retrieved from release 54 of the Ensembl Compara database ("12 amniota vertebrates Pecan," id 338), comprising the following species: Homo sapiens (Human), Pan troglodytes (Chimpanzee), Pongo pygmaeus (Orangutan), Macaca mulatta (Macaque), Rattus norvegicus (Rat), Mus musculus (Mouse), Canis familiaris (Dog), Equus caballus (Horse), Bos taurus (Cow), and Gallus gallus (Chicken). We examined the conservation of annotated snoRNAs and miRNAs across this data set. Annotated snoRNAs and miRNAs in release 54 are derived from the Rfam (Griffiths-Jones et al. 2003) and miRBase (Griffiths-Jones 2006) databases. The annotation pipelines employ primary, manually curated seed sequences from these databases, as follows. Seed sequences are used in Blast searches against each genome to identify putative ncRNA genes. Because both classes of ncRNA possess secondary structure motifs, in silico folding of sequences is subsequently performed to check for characteristic structural motifs as a means to ascertain functionality, using covariance models (snoRNAs; see Nawrocki et al. 2009) or stem-loop folding (miRNAs; Hofacker et al. 1994), respectively. A description of the Ensembl release 54 annotation pipeline can be found in the FAQs at www.ensembl.org.
Assigning Orthology to ncRNAs using Synteny

SnoRNA and miRNA orthology across species were established using two criteria. The first is simple assignment of homology based on common Rfam and miRBase IDs. IDs in these databases are assigned to ncRNAs based on similarity to the covariance model or seed alignment describing each "family" (each family corresponds to a particular id).
Next, genomic locations of ncRNAs were overlaid onto the genome alignment to identify cases of positional conservation. We draw a distinction between candidate ncRNAs that fall within aligned regions and those that fall outside identified syntenic regions; only the former are used in our analyses (table 1) on the grounds that it is nontrivial to assign orthology for the latter group.
To account for slight positional variations in ncRNA predictions, and minor inaccuracies, gaps and small indels in the genomic alignments, we only infer orthology among similar ncRNA sequences across the genome alignment where the alignment falls within a range of ±80 nucleotides for snoRNAs and ±40 nucleotides for (the generally shorter) miRNAs across the entire alignment. In both cases, we stayed under the total length of individual genes to avoid unwanted overlap with adjacent paralogues within an RNA cluster. Although these range constraints may result in the loss of data (i.e., false negatives), larger ranges may result in inclusion of false positives in our data set; the latter is of greater concern than the former. Manual vetting of the data indicate that for most cases of positional conservation across the genome alignment, the range is considerably smaller (,5 nt).
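To make the windowing rule concrete, the following is a minimal C++ sketch of the positional test; the field names are hypothetical, and in practice the coordinates would be those of each ncRNA projected onto the Compara alignment.

```cpp
#include <cstdlib>

// A candidate ncRNA with its annotated interval projected onto the
// multi-species genome alignment.
struct MappedNcRNA {
    long aln_start;   // projected start coordinate in the alignment
    long aln_end;     // projected end coordinate in the alignment
    bool is_mirna;    // true for miRNA, false for snoRNA
};

// Two candidates are treated as positional orthologues when their
// projected coordinates agree within a class-specific window over the
// whole alignment: +/-80 nt for snoRNAs, +/-40 nt for miRNAs.
bool positionally_conserved(const MappedNcRNA& a, const MappedNcRNA& b) {
    if (a.is_mirna != b.is_mirna) return false;
    const long window = a.is_mirna ? 40 : 80;
    return std::labs(a.aln_start - b.aln_start) <= window &&
           std::labs(a.aln_end - b.aln_end) <= window;
}
```

Keeping the window below the typical gene length avoids matching adjacent paralogues within an RNA cluster, as noted above.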
Analysis of ncRNA Conservation across the Mammalian Tree
Relationships between the 11 mammals used in our analysis were established from a recently published mammalian supertree (Bininda-Emonds et al. 2007); chicken was added manually as an out-group by assuming a divergence time of 310 Myr (Hedges 2002). We inferred the ncRNA status of each internal node in the tree with maximum parsimony using DolloP from the PHYLIP package (Felsenstein 2004), using the subset of ncRNAs where positional conservation could be established between at least two genomes (table 1). Maximum likelihood was not used owing to the absence of an accurate evolutionary model to statistically describe the gain and loss of ncRNAs.
Analysis of Host Gene Function
The analysis of host gene function was based upon GO terms from the Gene Ontology project (Ashburner et al. 2000). Given that many GO terms are assigned on the basis of sequence similarity to experimentally characterized homologs, we restricted our analysis to genes from H. sapiens as one of the better studied genomes. The graphical representation (fig. 4) was created from data from the Gene Ontology Term Mapper (http://go.princeton.edu). Statistical support was computed using the GoStat web server (Beissbarth and Speed 2004), employing a stringent cutoff of P ≤ 0.001 and Benjamini correction for false positive detection. Expression data for a statistical comparison of Shannon entropy and strength of expression (approximated as the sum across all tested tissues) were obtained from the human transcriptome atlas (Su et al. 2004). Data were obtained from ArrayExpress, accession E-TABM-145. To estimate the expression level of each gene, we calculated the median array signal across all tissues (removing duplicates, such as brain subsamples). To estimate the expression breadth, we calculated the Shannon entropy as S = −Σ_i (P_i × ln P_i), where the total expression T is the sum of the expression values across all tissues, E_i is the expression of the gene in tissue i, and the proportion of expression in tissue i is P_i = E_i/T.
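A minimal C++ sketch of these two summary statistics for a single gene follows; the per-tissue signal values are hypothetical stand-ins for one row of the expression atlas.

```cpp
#include <algorithm>
#include <cmath>
#include <iostream>
#include <vector>

// Expression level: median array signal across tissues.
double median_signal(std::vector<double> e) {
    std::sort(e.begin(), e.end());
    const std::size_t n = e.size();
    return (n % 2 == 1) ? e[n / 2] : 0.5 * (e[n / 2 - 1] + e[n / 2]);
}

// Expression breadth: Shannon entropy S = -sum_i P_i * ln(P_i),
// with P_i = E_i / T and T the total expression over all tissues.
double shannon_entropy(const std::vector<double>& e) {
    double T = 0.0;
    for (double ei : e) T += ei;
    if (T <= 0.0) return 0.0;
    double S = 0.0;
    for (double ei : e) {
        if (ei <= 0.0) continue;   // treat 0 * ln(0) as 0
        const double p = ei / T;
        S -= p * std::log(p);
    }
    return S;
}

int main() {
    // Hypothetical per-tissue signals for one gene.
    const std::vector<double> gene = {120.0, 95.0, 4000.0, 80.0, 110.0};
    std::cout << "median: " << median_signal(gene) << '\n'
              << "entropy: " << shannon_entropy(gene) << '\n';
}
```

Low entropy flags tissue-restricted expression (the distribution is dominated by one tissue), while entropy near the logarithm of the number of tissues indicates broad expression.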
Results and Discussion
Extensive Conservation of snoRNAs and miRNAs across the Mammalian Tree

We made use of available whole-genome alignments across 12 vertebrates (11 mammals plus chicken; Hubbard et al. 2009) and evolutionary conservation of annotated snoRNAs and miRNAs from Rfam (Griffiths-Jones et al. 2003) and miRBase (Griffiths-Jones 2006) to examine positional conservation of orthologous ncRNAs across the mammalian tree. Only snoRNAs and miRNAs located in syntenic regions were considered, yielding a set of 3041 unique miRNA and 1648 snoRNA groups in the 6.5 gigabase pair long alignment. Out of these, 648 snoRNAs and 964 miRNAs were present in more than one genome and formed the basis for our analysis (table 1). Given genome alignments and the evolutionary relationships between mammalian groups, we performed a parsimony-based analysis of conservation of miRNAs and snoRNAs using DolloP from the PHYLIP package to establish the ncRNA content at different stages during mammalian evolution (as represented by internal nodes in the tree, see fig. 1).
Our results indicate that a considerable number of snoRNAs and miRNAs can be traced back to the mammalian ancestor on the basis of genome-alignment-aided orthology assignment: 135 snoRNAs (135/648 = 21%) and 103 miRNAs (103/964 = 11%; see Node 2, fig. 1). We refer to these as ancestral positionally conserved (APC) RNAs, indicating that we can be confident of an ancestral conserved location for these ncRNAs. Other snoRNAs and miRNAs are present in a more limited number of nodes. Because this analysis cannot distinguish between a genuine de novo origin of a particular ncRNA within a particular lineage and an earlier origin with mobility or loss in deeper branching lineages, we collectively refer to these as novel location (NL) RNAs.
Clearly, there will be false discoveries and false negatives with automated ncRNA predictions (Griffiths-Jones 2007), and this may impact our results. Likewise, assembly errors in individual genomes are a potential source of either missing or duplicated data, though overall these problems are likely to have a smaller overall impact than ncRNA annotation. The risk of including false positive ncRNA annotations will be higher for NL ncRNAs because inferences rely upon sequence data from only a few species. For deeper divergences, false positives become less likely because sequence conservation and consistent spurious ncRNA prediction is less likely. However, the greater sequence divergence between ancient ncRNAs may mean that the initial Blast-based screens fail to identify a putative ncRNA in the first place (see Materials and Methods). Therefore, we probably "underestimate" the true number of APC ncRNAs and "overestimate" the true number of NL ncRNAs. The result is that our predictions for the percentages of APC ncRNAs (21% of snoRNAs and 11% of miRNAs) are expected to be conservative. Our analysis includes only the ncRNAs in aligned regions of the genomes, which were predicted in at least two species. Consequently, our data set includes only approximately 50% of all the annotated ncRNAs for these genomes (table 1). To examine how representative our analysis is of ncRNA gene paralogs, we reconstructed the ancestral states for individual snoRNA and miRNA families (as defined by Rfam and miRBase) based on their presence or absence in individual genomes (without reference to aligned regions and not taking into account copy numbers or location). We then compared these numbers with those obtained from our data set. The results (fig. 2) indicate that our analysis has good coverage of ncRNA families: approximately 80% of snoRNA families and 70% of miRNA families conserved across the entire 12 genome data set are included (node 2, fig. 2). This suggests that the remaining 20-30% are either mobile or located in regions too divergent to be alignable across larger evolutionary distances.
To confirm that our APC ncRNAs are more conserved than NL ncRNAs, we calculated percentage identity and median genomic evolutionary rate profiling (GERP) scores (GERP method, Cooper et al. 2005) from the genomic alignment. APC RNAs showed significantly greater percentage identity (Mann-Whitney U test, P = 2.2 x 10^-16) and significantly higher median GERP scores (Mann-Whitney U test, P = 3.486 x 10^-14) than NL RNAs inferred to have emerged along the branches leading to primates.

[Fig. 1 caption (partial): ...2007), with modifications as described in Materials and Methods. Counts of positionally conserved ncRNAs are derived from a maximum parsimony analysis using DolloP from the PHYLIP package (see Materials and Methods). The numbers of ncRNAs inferred from synteny to be present at each internal node are listed (red: snoRNAs, blue: miRNAs). The first number indicates the total count of orthologous mi/snoRNAs inferred present at a given node, followed by the number of intronic ncRNAs at that node (a subset of the first value).]
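The conservation comparison itself reduces to two-sample Mann-Whitney tests. A minimal sketch with invented per-ncRNA scores (the real values came from the alignment and the GERP pipeline):

apc.scores <- c(3.1, 2.8, 3.4, 2.9, 3.3, 3.0)  # hypothetical median GERP scores, APC RNAs
nl.scores  <- c(1.2, 2.0, 1.5, 1.8, 1.1, 1.6)  # hypothetical median GERP scores, NL RNAs
wilcox.test(apc.scores, nl.scores, alternative = "greater")  # Mann-Whitney U test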
We observe that the vast majority of APC ncRNAs are intronic in all mammalian lineages represented in our study; 97% (131/135) of snoRNAs and 79% (81/103) of miRNAs traceable to the mammalian ancestor are intronic. This trend extends to the common ancestor of birds and mammals (Node 1, fig. 1). Thus, many ncRNAs appear to be stably associated with the same intron of the same host gene over considerable evolutionary timescales, possibly indicating a selective advantage for this arrangement over an intergenic location.
To examine patterns of intronic and intergenic ncRNA conservation across mammalian evolution, we compared the ncRNA inventory of the mammalian ancestor (node 2, fig. 1) with the numbers obtained for the primate ancestor. We used the ncRNAs from the primate ancestor (node 4, fig. 1), rather than data from a specific species, because elements present across several species will have a lower false-positive rate. We find that 104 of the 374 primate snoRNAs (28%) are also present in the mammalian ancestor (table 2). Similarly, 92 of the 490 primate miRNAs (19%) are also present in the mammalian ancestor. The majority of these are intronic: 102 snoRNAs (102/104; 98%) and 72 miRNAs (72/92; 78%). Intronic snoRNAs (chi-squared = 22.33; P << 0.001) are thus significantly more positionally stable than intergenic elements, whereas no such trend was found for miRNAs (chi-squared = 1.71; P = 0.19).
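The counts in table 2 are enough to reproduce the shape of this chi-squared test; note that the exact test variant the authors used (for example, with or without continuity correction) is not stated, so the statistic below need not match theirs exactly.

# 2 x 2 table from table 2: genomic location vs whether the primate snoRNA is ancestral
snoRNA.tab <- matrix(c(102, 293 - 102,   # intronic: ancestral vs primate-only
                       2,   81 - 2),     # intergenic: ancestral vs primate-only
                     nrow = 2, byrow = TRUE,
                     dimnames = list(c("intronic", "intergenic"),
                                     c("ancestral", "primate-only")))
chisq.test(snoRNA.tab)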
The majority of ncRNAs used in our analysis (table 1) are specific to a particular mammalian order. Numbers in Laurasiatherians (horse, cow, and dog) are likely low on account of limited experimental study of ncRNAs among the Laurasiatherian genomes included in this study. The intensive experimental focus on human ncRNA (e.g., Fejes-Toth et al. 2009), particularly for miRNA identification (e.g., Bar et al. 2008; Wyman et al. 2009), is likely to be responsible for inflation of the numbers of annotated ncRNAs among primates. Given strong miRBase growth, it is unclear whether the jump in primate miRNA numbers (fig. 1 and fig. 2) is due to a corresponding jump in miRNA disparity (sensu Heimberg et al. 2008) within this group; in the current analysis, we cannot exclude the possibility that this is an artifact of greater experimental focus on miRNAs in H. sapiens (supplementary fig. S1, Supplementary Material online). Analysis of reported expression profiles of miRNA host genes (Su et al. 2004) failed to detect a correlation with a particular tissue (data not shown); a significant correlation might have been expected if newly emerging miRNAs were predominantly involved in, for example, brain development. This should not be taken as evidence against a general correlation between the evolution of the human brain and miRNA genesis; our conclusion is limited to intronic miRNAs present in the primate ancestor.
Rfam and miRBase Families in the Common Ancestor of Mammals and Birds Are Represented by both Orthologues and Paralogues
Both snoRNAs and miRNAs are grouped into families by Rfam and miRBase, respectively, on the basis of sequence similarity. Using this information, we sought to extend our analysis to include the presence or absence, as well as secondary losses, of such families. Amongst primate NL ncRNAs, 130 out of 189 snoRNAs (68.78%) and 80 out of 220 miRNAs (36.36%) belong to families already present in the mammalian ancestor (supplementary tables S1 and S2, Supplementary Material online). The positionally conserved ncRNA content of the common ancestor of birds and mammals consists of both single-family representatives and cases of paralogy for both snoRNAs and miRNAs (supplementary tables S3 and S4, Supplementary Material online; this of course excludes those ncRNAs that are not positionally conserved). Thus, mobility is clearly a feature of numerous ncRNAs.
Interestingly, this also includes cases of evolutionarily conserved within-gene duplication. The most striking examples are miRNA miR-302 and box C/D snoRNA snoRD58 (of which there are four copies each; supplementary tables S3 and S4, Supplementary Material online).
MiR-302 has diversified into four distinct RNA species (miR-302a-d) as a result of within-intron duplication within the LARP7 gene prior to the bird-mammal split. A related fifth miRNA, miR-367 (miRBase accession: MI0000738), is also conserved in this cluster (supplementary fig. S3, Supplementary Material online). Homologues of miR-302 are known in Xenopus (miR-427) and zebrafish (miR-430), and recent experimental data demonstrate that human miR-302a and Xenopus miR-427 are involved in embryonic mesendoderm differentiation in both species through regulation of Nodal signaling (Choi et al. 2007; Rosa et al. 2009). It is unclear exactly what role the four mammalian miR-302 paralogues may have, but this broad vertebrate family appears to play numerous roles in addition to the above partially conserved functional roles (Ketting 2009). The functional significance of the association between LARP7 and the miR-302 cluster is as yet unclear; LARP7 is involved in negative regulation of RNA polymerase II genes via the 7SK RNP, of which it is a constituent (He et al. 2008; Markert et al. 2008). However, we note that all miRNAs in this "intronic" cluster are coded antisense to LARP7 (supplementary fig. S3, Supplementary Material online), and, consequently, it is unclear to what extent their expression profiles overlap. In the case of the snoRD58 cis-duplicates, all four are found in different introns of the gene coding for the ribosomal protein RPL17 (supplementary fig. S2, Supplementary Material online). Two of these have been shown previously to direct 2'-O-methylation of 28S rRNA (snoRD58a and b; Nicoloso et al. 1996), and snoRD58c has been predicted to modify this same rRNA molecule (Yang et al. 2006). The role of snoRD58d has not been established, but its conservation across mammals and birds suggests it is not a degenerate nonfunctional copy, as has been suggested (http://www-snorna.biotoul.fr/plus.php?id=U58C; Lestrade and Weber 2006).

[Fig. 2 caption: Representation of snoRNA and miRNA families among the subset of positionally conserved ncRNAs in this study. Our analysis is based on genomic alignments and thus excludes approximately 50% of annotated snoRNAs and miRNAs located in nonsyntenic regions of the respective genomes (table 1). To estimate the coverage of snoRNA and miRNA families (as defined by Rfam and miRBase), we performed a per-node reconstruction of family presence/absence irrespective of copy number or positional conservation (genome) and compared these numbers (family count) with the family representation in our analysis (alignment). The results indicate that our study set provides good coverage (between 70% and 90%) of snoRNA and miRNA families. The remaining ncRNA families are likely located in regions not alignable across genomes.]

Table 2
                            Intronic   Intergenic   Total
snoRNA  All primates             293           81     374
        Mammalian ancestor*      102            2     104
miRNA   All primates             351          139     490
        Mammalian ancestor*       72           20      92
* ncRNAs conserved across primates (node 7, fig. 1) that were already present in the mammalian ancestor (node 2, fig. 1).
Such cis-duplications do not appear to be widespread across our data set; most intronic ncRNA-bearing genes in humans carry only a single snoRNA or miRNA (fig. 3). It is likely that the three processes of cis-duplication, trans-duplication, and de novo emergence all contribute to ncRNA evolution (Weber 2006; Zemann et al. 2006; Lu et al. 2008; Schmitz et al. 2008); however, our analysis suggests that the latter two processes may play a greater role in the evolution of snoRNA and miRNA genes in mammals.
Analysis of Host Gene Functions Suggests Recent Diversification of snoRNA Functions during Primate Evolution
Previous reports suggest that many of the more widely conserved snoRNAs are involved in rRNA processing (Lafontaine and Tollervey 1998; Dieci et al. 2009). Because transcriptional overlap may well be common between intronic ncRNAs and their host genes (Baskerville and Bartel 2005), we examined the difference in host gene function between our set of APC snoRNAs (mammalian ancestor) and snoRNA groups of putatively more recent origin (NL). To describe host gene function, we used Gene Ontology (Ashburner et al. 2000), expression level, and gene expression breadth (amongst tissues) data derived from human host genes. We reasoned that if snoRNA-host gene relationships are evolutionarily stable, host gene function and tissue distribution may provide information regarding emergent roles among intronic snoRNAs, as has been considered for miRNAs (Rodriguez et al. 2004).
Of 60 human host genes dating back to the mammalian ancestor (i.e., hosting an APC snoRNA), 21 are associated with the biological process "translation" and 18 with the molecular function "RNA binding." In contrast, of the 123 host genes recruited along the branches leading to primates (i.e., hosting exclusively NL snoRNAs), only 14 associate with translation and 15 with RNA binding (fig. 4). This provides strong support (P = 9.14 x 10^-22, see Beissbarth and Speed 2004) for an overrepresentation of human host genes involved in ribosome function among those traceable back to the mammalian ancestor compared with those specific to primates.
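At its core, a per-term enrichment comparison of this kind is a 2 x 2 contingency test; GoStat does considerably more (it walks the GO graph and computes corrected E values across all terms), but a minimal sketch for the single term "translation", using the counts quoted above, looks like this:

go.tab <- matrix(c(21, 60 - 21,    # APC host genes: annotated to "translation" vs not
                   14, 123 - 14),  # NL host genes:  annotated to "translation" vs not
                 nrow = 2, byrow = TRUE,
                 dimnames = list(c("APC", "NL"), c("translation", "other")))
fisher.test(go.tab)  # single-term test; GoStat additionally corrects over all GO terms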
We also expected that host genes of APC snoRNAs would be expressed in a wider range of tissues than host genes of NL snoRNAs, consistent with a role in more fundamental cellular processes. To test this expectation, we used the GNF/Novartis human gene expression data set, containing expression profiles for 33,698 genes in 38 tissues (Su et al. 2004). As a measure of the breadth of host gene expression, we calculated the Shannon entropy for each gene expression profile. Briefly, Shannon entropy measures the degree to which a quantity is "randomly" distributed amongst categories (tissues in our case). High entropy indicates ubiquitous expression, whereas low entropy indicates expression limited to one or a few tissues. A comparison of Shannon entropies for those host genes where expression data were available (see Materials and Methods) revealed significantly higher entropy for APC snoRNA host genes (P = 4.18 x 10^-6, Mann-Whitney U test), indicative of broad expression. We also considered whether APC host genes were more highly expressed than NL host genes. The median expression levels (measured across all tissues) of genes containing an APC snoRNA were significantly higher than those of host genes containing NL snoRNAs (P = 2.621 x 10^-7, Mann-Whitney U test). These observations indicate that, in the human snoRNA data set, NL snoRNAs reside in the introns of more tissue-specific, low-expression genes.
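Shannon entropy over a normalized expression profile is a one-liner in R; the sketch below uses two invented 38-tissue profiles to show the contrast between broad and restricted expression.

shannon.entropy <- function(expr) {
  p <- expr / sum(expr)  # treat the profile as a distribution over tissues
  p <- p[p > 0]          # by convention, 0 * log(0) contributes nothing
  -sum(p * log2(p))
}
shannon.entropy(rep(10, 38))         # ubiquitous expression: entropy = log2(38), about 5.25 bits
shannon.entropy(c(100, rep(1, 37)))  # one dominant tissue: much lower entropy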
The dependence of snoRNA expression on host gene expression cannot be assumed if the ncRNA and host gene are encoded on opposite strands. Only a fraction of intronic NL snoRNAs fall into this category (approximately 12% in the primate ancestor), whereas all deeply conserved intronic snoRNAs (mammalian ancestor) are on the sense strand (supplementary table S5, Supplementary Material online). This finding therefore strongly supports the notion of overlapping expression profiles.

[Fig. 3 caption: Inspection of the number of (A) miRNAs and (B) snoRNAs per host gene in Homo sapiens reveals that most host genes carry only a single ncRNA (black). This suggests that de novo emergence and/or trans-duplications (including ncRNA retroposition) of existing families are more prevalent in mammals than cis-duplication.]
We performed equivalent analyses for human miRNA host genes. We found no functional association for host genes of miRNAs, regardless of node depth (data not shown), nor any significant differences between the expression level or breadth of APC and NL miRNA host genes. There is no a priori expectation that this class of regulatory ncRNA should be associated with regulation of a specific process, and our result likely reflects the broad range of cellular processes in which miRNAs are involved (and the large number of potential target mRNAs). We also note that intronic miRNAs are more frequently housed antisense to the host gene (up to 30% per node). This may indicate that the expression of a significant fraction of intronic miRNAs is not directly dependent on host gene expression.
Conclusions
We have analyzed the positional conservation of snoRNAs and miRNAs across a multiple genome alignment of 11 mammals, using the chicken genome as an out-group. We found 3041 miRNAs and 1649 snoRNAs to be present in two or more species. Of these, 169 are APC ncRNAs (89 snoRNAs and 80 miRNAs), and the vast majority (98% of snoRNAs and 76% of miRNAs) are located in the introns of protein-coding genes. Intronic snoRNAs and miRNAs are significantly more likely to be positionally stable than intergenic RNAs.
Our results thus demonstrate the utility of genome alignments for examining ncRNA orthology across considerable evolutionary timescales and complement sequence-similarity-guided approaches. Comparative genome analyses of ncRNAs are still in their infancy, necessitating a conservative approach, but as ncRNA annotations and genome assemblies improve, additional questions will become tractable. The current analysis does not enable us to establish whether intronic APC ncRNAs are ancestrally intronic or whether they have migrated from other genomic locations. Among ncRNAs showing positional conservation among primates (270 snoRNAs and 398 miRNAs), some may be new RNAs that have arisen de novo in the lineage leading to primates. However, family assignments based on Rfam and miRBase classifications also indicate that numerous primate NL ncRNAs (130 snoRNAs and 82 miRNAs, supplementary tables S1 and S2, Supplementary Material online) are paralogs of families dating back to the mammalian ancestor, suggesting that ncRNAs positionally conserved among primates are likely to have inserted into their current location from elsewhere.

[Fig. 4 caption: A comparison of human host gene function between the ancestor of mammals and primates suggests a diversification in the roles of snoRNA host genes in more recent evolutionary history. Whereas half of the host genes in the earliest mammals (mammalian ancestor, graphs on left) are involved in ribosome formation or protein production ([A] biological process: translation; [B] molecular function: RNA binding), no such bias can be found for snoRNA host genes traceable to the primate ancestor (minus those also in the mammalian ancestor, graphs on right). The scale on the y axis corresponds to the total number of genes from Homo sapiens used in each analysis. Only the top 10 categories are shown. E values for significantly overrepresented GO terms were calculated using GoStat (Beissbarth and Speed 2004).]
This indicates that only a minority of snoRNAs and miRNAs (primarily intron-encoded ncRNAs) have remained in the same location during mammalian evolution. The general patterns we observe (greater positional conservation for intronic ncRNAs, with few such locations being demonstrably ancestral) suggest that intronic location may confer an advantage but that ncRNAs only rarely arise de novo within introns.
Finally, we report that intronic APC snoRNAs are more likely to be present in the introns of genes involved in ribosome biogenesis, and more likely to be broadly and highly expressed, than genes containing an NL snoRNA. SnoRNAs function in ribosome biogenesis across all eukaryotes and are known to be encoded in the introns of ribosomal protein genes in species as evolutionarily distant as yeast (Bachellerie et al. 2002); it will therefore be of interest to establish whether intronic APC snoRNAs have been ancestrally associated with these host genes or whether the various intronic locations have arisen by convergent evolution. In contrast to APC snoRNAs, miRNA host genes show no significant associations with specific biological processes or functions, and we detect no expression differences between ancestral and NL miRNAs, as measured by expression breadth across tissues or levels of expression. Interestingly, examination of host genes for NL snoRNAs reveals a pattern similar to that observed for miRNAs, suggesting that snoRNAs may have been co-opted into a broader range of (possibly regulatory) roles in the course of the diversification of mammals. This suggests that during the course of mammalian evolution, snoRNAs have undergone gradual diversification from their ancestral functions in translation, which may date to early stages in cellular evolution (Omer et al. 2000; Penny et al. 2009).
"year": 2009,
"sha1": "a4509af4eb4a47c7285b3940d0bf72059a9a5837",
"oa_license": "CCBYNC",
"oa_url": "https://academic.oup.com/gbe/article-pdf/doi/10.1093/gbe/evp045/17918151/evp045.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "617be5f4d223a2c7f5f44739515c63753b8a6cda",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
Selective REM-Sleep Deprivation Does Not Diminish Emotional Memory Consolidation in Young Healthy Subjects
Sleep enhances memory consolidation and it has been hypothesized that rapid eye movement (REM) sleep in particular facilitates the consolidation of emotional memory. The aim of this study was to investigate this hypothesis using selective REM-sleep deprivation. We used a recognition memory task in which participants were shown negative and neutral pictures. Participants (N = 29 healthy medical students) were separated into two groups (undisturbed sleep and selective REM-sleep deprived). Both groups also worked on the memory task in a wake condition. Recognition accuracy was significantly better for negative than for neutral stimuli and better after the sleep than the wake condition. There was, however, no difference in the recognition accuracy (neutral and emotional) between the groups. In summary, our data suggest that REM-sleep deprivation was successful and that the resulting reduction of REM-sleep had no influence on memory consolidation whatsoever.
Introduction
Sleep and emotion modulate memory consolidation, and the role of sleep in memory consolidation is supported by a multitude of studies [1,2]. Similar effects of sleep on consolidation have been found in most memory systems, including emotional memory. For example, Wagner et al. [3] identified a selective benefit for retrieval of the content of an emotional story, compared with retrieval of the content of a neutral story, when sleep occurred during the retention interval. Payne et al. [4] showed their participants scenes with neutral or negative objects in front of a neutral background. Unlike a retention interval spent awake, a retention interval spent asleep led to selective consolidation of the negative objects at the cost of memory for the neutral background. In summary, there are several studies in support of the hypothesis that emotional aspects of memory, relative to neutral aspects, are selectively enhanced by sleep [5]. On the other hand, Lewis et al. [6] found contradicting results. These authors showed their participants pictures with either negative or neutral contexts and compared recognition performance after wake and sleep retention intervals. Their data showed a smaller decay of context memory over sleep than over wakefulness; however, emotional and neutral context memories were protected to the same extent during sleep. Nevertheless, they were able to record a stronger increase in activity in brain areas specific for emotional processing, such as the left amygdala and right parahippocampus, during correct retrieval of negative contexts after sleep.
Recent papers discuss the function of different sleep phases in memory consolidation. Specifically, it is hypothesized that slow-wave sleep primarily enhances the consolidation of hippocampus-dependent, declarative memories [1,2]. In addition, REM sleep is believed to selectively enhance the consolidation of emotionally tagged memory content while attenuating the emotion itself [5]. In line with the earlier study by Wagner and colleagues [3], Groch et al. [7] showed better recognition of emotional stimuli than of neutral stimuli after late, REM-rich nocturnal sleep compared with early, slow-wave-rich sleep (SWS). Furthermore, correlations between REM-sleep during a short nap [8] or overnight sleep [9] and the consolidation of emotional memory have been reported. In contrast, some recent studies failed to replicate any selectively enhancing effect of REM-sleep on the consolidation of emotional memory. Hu et al. [10] did not find a benefit for emotional relative to neutral stimuli when they asked for "remember" judgments; the assumed influence of REM-sleep was only reproducible for "know" judgments. In a recognition task of negative and neutral stimuli, Baran et al. [11] could not identify any relationship between emotional memory and measures of REM-sleep; they were, however, able to identify such a relationship for emotional salience. In summary, there is some evidence in favor of the hypothesis that REM sleep fosters the consolidation of emotional memory. It is unclear, however, whether REM sleep is necessary for the consolidation of emotional memory.
To the best of our knowledge, this is the first study to investigate the effect of selective REM-sleep deprivation on emotional memory consolidation. Emotional memory was tested using negative pictures of the IAPS [12]. Since most of the recent studies using other study designs suggested a link between REM sleep and emotional memory performance [3,7-10], we hypothesized that selective REM-sleep deprivation would result in decreased emotional memory retention.
Participants
All participants gave written informed consent before participating. The study was approved by the local ethics committee of the School of Medicine of the Christian-Albrechts University of Kiel (proposal no. A 449/11). Each participant was compensated with payment for participating in this study. Participants were 39 medical students recruited at the University of Kiel campus. Exclusion criteria were a history of neurologic or psychiatric disorders, sleep disorders, medications, or a body mass index above 30. Another exclusion criterion was a recognition performance of less than .7 in the encoding control of the memory paradigm. Neurologic or psychiatric symptoms were measured by self-report and the German version of the standardized symptom checklist (SCL-90-R) [13]. Two of the main scores of the SCL-90-R, the Global Severity Index (GSI) and the Positive Symptom Total Score (PST), were used. The GSI is an indicator of general psychological stress, while the PST reflects the number of symptoms. For both scores, participants with results outside the average range were excluded (T-values < 40 or > 60). Sleep disturbances were screened with the Pittsburgh Sleep Quality Index (PSQI) [14]; a cut-off score of 5 or above, as suggested by Buysse et al. [14], was used to exclude participants with unhealthy sleep. Because of a possible influence on sleep architecture [15], the sample was limited to right-handed participants, as measured by self-report and the Edinburgh Handedness Inventory [16], and to non-smokers [17]. The Digit Span Forward test from the WMS-R [18] was used to make sure that the concentration and working memory of all participants were at least average. The Digit Span test took place before the learning phase of the memory paradigm.
The data of 6 participants could not be used for further analysis, either due to technical problems during data recording or because the participants were not able to fall asleep. The data of 2 more participants were discarded because they had too much REM-sleep within the experimental group (40.5 min) or insufficient REM-sleep within the control group (8.53 min), respectively. Two more participants failed the encoding control test and were therefore excluded (for details, see below). The final sample forming the basis of the data analysis consisted of 29 participants (18 female, 11 male) aged between 19 and 25 years (mean 23, SD = 1.71). Fourteen participants were in the control group, and 15 participants were REM-sleep deprived.
The participants' age, handedness, and self-reported sleep quality did not differ significantly between groups. There were also no significant differences between the experimental groups in the GSI and PST scores from the SCL-90-R (Table 1).
Procedure
The participants spent two nights in our sleep laboratory. The purpose of the first night was to exclude severe sleep disorders such as sleep apnea syndrome and to adapt the participants to the conditions in the sleep laboratory. At 10:00 p.m. before the second night in the laboratory, the participants were shown the stimulus pictures (encoding phase) and were then immediately tested with a small subsample of pictures to ensure sufficient encoding (encoding control test). Sleep was recorded between lights off (approx. 10:40 p.m.) and lights on in the morning (approx. 6:45 a.m.). The participants performed the recognition test at 8:00 a.m. Participants were randomly assigned to either the control or the experimental group (gender was parallelized). Participants in the experimental group were REM-deprived during the experimental night, while the sleep of participants in the control group was undisturbed. One week before or after the sleep condition (order counterbalanced), a wake condition was employed. For this purpose, the participants arrived at 9:00 a.m. (±1 h) and attended a parallel version of the learning phase and encoding control test of the memory paradigm. They were asked to spend the following retention interval awake (without even napping). At 07:00 p.m. (±1 h), the recognition test phase of the memory paradigm followed.
Memory Task
Stimuli were two sets of 260 pictures (130 emotional and 130 neutral). Most of them were taken from the International Affective Picture System (IAPS) [12], whereas a few pictures (20%) were chosen from an in-house picture set that included images similar to the IAPS set. These pictures were used in the studies of Prehn-Kristensen et al. [19,20] and were approved for this research. The emotional pictures from the IAPS ranged in arousal from 4.46 to 7.26 on a 9-point scale (from 1, not arousing, to 9, maximally arousing) and in valence from 1.48 to 4.85 (9-point scale from 1, negative, to 9, positive). Means and standard deviations were 5.72 ± 0.72 (arousal) and 2.81 ± 0.69 (valence) for the emotional pictures, and 3.33 ± 0.78 (arousal) and 5.47 ± 0.76 (valence) for the neutral pictures. Picture selection was oriented to the IAPS norms [12]. Both picture sets were parallelized with respect to arousal and valence ratings. The pictures in both sets were presented in pseudorandom order, and the order of presentation was the same for all participants, who were aware that this was a memory test. During the learning phase, 130 pictures (65 emotional and 65 neutral) were presented on a computer screen for 1.5 s each. Participants were asked to rate the arousal of each picture on a 9-point scale (SAM) [12,21]. Afterwards, the participants completed the encoding control test. The encoding control stimuli included 20 pictures (10 emotional and 10 neutral) from the learning phase (targets) mixed with 20 novel pictures (10 emotional and 10 neutral) (distractors). Every stimulus in the encoding control was shown for 1.5 s, followed by an old/new memory judgment. The encoding control did not include enough pictures for a reliable analysis of a recognition baseline; therefore, its results were only used to ensure that the participants complied with the instructions, attended to the task, and showed sufficient encoding performance (accuracy rate cut-off .7). Exploratory t-tests revealed no significant differences in encoding performance between the experimental groups or between conditions. The retention interval was followed by the final recognition test, in which 220 pictures were shown for 1.5 s each, and the participants were asked for an "old"/"new" judgment without a time limit for their answer. The 110 targets from the learning phase (55 emotional and 55 neutral) were mixed with 110 previously unseen distractors (55 emotional and 55 neutral). Accuracy (the difference between hit-rate and false-alarm rate) was used as a measure of memory performance [7]. In addition, we calculated the sensitivity d' and the response bias c (see Supplements S1).
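For a single participant, the three scores reduce to a few lines of R. The hit and false-alarm counts below are invented, and d' and c follow the standard signal detection formulas; see Supplements S1 for the authors' exact treatment.

hits <- 40; misses <- 15  # responses to the 55 old pictures (hypothetical counts)
fas  <- 10; crs    <- 45  # responses to the 55 new pictures (hypothetical counts)
hit.rate <- hits / (hits + misses)
fa.rate  <- fas / (fas + crs)
accuracy <- hit.rate - fa.rate                       # measure used in the main analysis
d.prime  <- qnorm(hit.rate) - qnorm(fa.rate)         # sensitivity d'
c.bias   <- -(qnorm(hit.rate) + qnorm(fa.rate)) / 2  # response bias c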
Participants of both experimental groups were asked to rate their individual emotional state once after sleep, before memory retrieval, on a 9-point valence (1 = happy, pleased, content, optimistic; 9 = unhappy, bugged, discontent, sad, desperate) and arousal (1 = relaxed, calm, lazy, sleepy; 9 = excited, frantic, nervous, wide awake, aroused) scale (SAM) [21]. We introduced these measures to control for possible effects of sleep inertia on memory retrieval. Considering these covariates in the ANOVA did not change the outcome, which is why they are not reported below. There were no significant differences in emotional state in the morning between the participants of the two experimental conditions, as shown in Table 2.
Sleep Recording and Deprivation
Sleep during the experimental night was recorded by standard procedures using a digital electroencephalogram (EEG), electromyogram (EMG), electrooculogram (EOG), and electrocardiogram (EKG). The EEG montage according to the 10-20 system included the positions C4 referenced to A1, O2 referenced to A1, and F4 referenced to A1; F3 referenced to A2, C3 referenced to A2, and O1 referenced to A2 were used as backup positions. To record sleep parameters, the polysomnographic recording system SOMNOscreen PSG plus (SOMNOmedics, Randersacker, Germany) was used. Data were analyzed according to the specifications provided in the revised AASM manual [22] by a certified rater unaware of the hypotheses. Sleep spindles (11-16 Hz) were visually identified in all epochs scored as N2-sleep. Spindles exceeded 0.5 seconds and had a typical waxing and waning spindle morphology. Sleep spindle density was calculated as the ratio of the number of sleep spindles counted in N2-sleep to the number of minutes of N2-sleep. REM-sleep was classified according to the standard criteria of rapid eye movements, low muscle tone, and rapid low-voltage EEG and was monitored online by two trained psychologists. As soon as the first 30-s REM-sleep epoch was identified, participants were awakened. At the beginning of the night, it was usually sufficient to awaken the participants using an intercom. If participants did not wake up completely this way, one of the investigators went to the participant's room and addressed him or her personally. If this was not sufficient either, the participants were asked to sit up for a moment before being allowed to sleep again.
Statistical Analysis
The manipulation check of the REM-sleep deprivation was performed using two-sample t-tests. A mixed ANOVA model with the between-subjects factor GROUP (control versus REMD) and the within-subjects factors CONDITION (wake versus sleep) and AFFECT (emotional versus neutral) was used to assess the effect of REM-deprivation and sleep on the consolidation of the (emotional) stimulus material. The degrees of freedom for within-subject effects were corrected according to Greenhouse-Geisser. T-tests were carried out as post-hoc tests. The level of significance was set at 5%. Data analysis was performed with SPSS for Windows, version 20.0 (SPSS Inc., Chicago, IL, USA).
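The authors fitted this model in SPSS; an equivalent base-R sketch, with simulated accuracy scores standing in for the real data and column names that are assumptions, would be:

set.seed(1)
recog <- expand.grid(id        = factor(1:29),
                     condition = c("wake", "sleep"),
                     affect    = c("neutral", "emotional"))
recog$group    <- ifelse(as.integer(recog$id) <= 14, "control", "REMD")
recog$accuracy <- rnorm(nrow(recog), mean = 0.5, sd = 0.1)  # placeholder data

# between-subjects factor: group; within-subjects factors: condition and affect
fit <- aov(accuracy ~ group * condition * affect +
             Error(id / (condition * affect)), data = recog)
summary(fit)
# with only two levels per within-subjects factor, sphericity holds trivially,
# so the Greenhouse-Geisser correction changes nothing in this toy version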
Manipulation Check
When comparing the percentage of REM sleep across the two nights for each experimental group, we found a highly significant decrease of REM sleep from 11.9% ± 1.2 (mean ± SEM; adaptation night) to 1.4% ± 0.4 (experimental night) in the REM deprivation group (p < .001; paired t-test) and a slight increase in the undisturbed sleep group (12.4% ± 1.2 and 15.7% ± 1.4; p = .07). Values of the sleep parameters are shown in Figure 1.

Before the learning phase in both the wake and sleep conditions, participants took part in the Digit Span memory task. The total score of this task did not differ significantly between the experimental groups in the wake and sleep conditions, showing similar attention and short-term memory in both groups (p > .21). Our data also showed no significant differences between the two groups in the arousal and valence ratings in the morning after the experimental night (Table 2).

[Table 2 legend: Short-term memory is measured by the Digit Span total score; the emotional state of the participants is measured by ratings on the 9-point self-assessment manikin (SAM) valence (1 = happy, pleased, content, optimistic; 9 = unhappy, bugged, discontent, sad, desperate) and arousal (1 = relaxed, calm, lazy, sleepy; 9 = excited, frantic, nervous, wide awake, aroused) scales.]
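The REM-percentage comparison above is a paired t-test; a minimal sketch with invented values for five participants:

rem.adaptation <- c(11.2, 13.0, 12.4, 10.8, 12.1)  # % REM, adaptation night (invented)
rem.deprived   <- c(1.0, 1.8, 2.1, 0.6, 1.4)       # % REM, deprivation night (invented)
t.test(rem.adaptation, rem.deprived, paired = TRUE)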
Effects of Sleep on Memory
The main question was whether (REM-)sleep deprivation diminishes the consolidation of (emotional) pictures, which should result in lower accuracy scores in the recognition test after the retention interval. An ANOVA revealed a significant main effect for the factor SLEEP (wake vs. sleep) [F(1,27) = 6.52, p = .017]. Figure 2 shows that recognition accuracy for pictures (emotional + neutral) was higher after a night of sleep than after a retention interval spent awake. The expected main effect for the factor AFFECT (emotional vs. neutral) was also significant [F(1,27) = 8.21, p = .008]. Memory for emotional pictures was better than for neutral ones (Fig. 3). There was no significant main effect for the factor DEPRIVATION (control vs. REMD) (Table 4). None of the single comparisons was significant (p > .16). Correlations between recognition accuracy (neutral and emotional memory) and sleep stage N3 or sleep spindle density were also not significant (p > .1).
Discussion
The aim of this study was to specify the role of REM-sleep in the consolidation of emotional memory. Using the technique of selective REM-sleep deprivation, we investigated whether REM-sleep is necessary to consolidate declarative memories and, more specifically, whether REM-sleep is necessary to enhance the consolidation of emotional compared with neutral content.
We replicated the effect that sleep fosters the consolidation of declarative memories [23], regardless of the emotional arousal attached to the stimuli. After an 8-h retention interval containing a night of sleep, our participants performed better in the recognition test than after a wake retention interval of the same length. Furthermore, we replicated the memory-enhancing effect of emotional arousal [24]. In both conditions (wake and sleep), emotional pictures were better recognized than neutral pictures.
Emotional memory in particular is supposed to benefit from sleep after encoding [3,4,10,25]. Wagner et al. [3] used the Ekstrand paradigm (early vs. late night sleep) and found that sleep selectively favors the retention of emotional texts relative to neutral texts and that this benefit was only present following late-night sleep (rich in REM-sleep). In their nap paradigm, Nishida et al. [8] used negative emotional and neutral pictures and found a selective offline emotional memory advantage that correlated with the amount of REM-sleep and the extent of right-dominant prefrontal theta waves during this sleep stage.
In two other studies, a benefit of sleep (especially for emotional memory) was found only for high-confidence answers but not for recognition accuracy overall. Hu et al. [10] failed to find a preferential benefit of the sleep condition on accuracy for arousing relative to neutral stimuli for "remember" judgments (low confidence answers); they only found this benefit for "know" judgments (high confidence answers). Also, Groch et al. [7] showed a significant difference only for answers given with high confidence. For all answers (including low-confidence responses), the observed advantage of memory for emotional stimuli over neutral ones was significant only for the hit-rate results but not for accuracy (hit-rate minus false-alarm rate).
Our findings are coherent with some recent research which also failed to show a selective emotional-memory-enhancing effect of (REM-)sleep. Lewis et al. [6] showed that context memory decayed less across an overnight retention interval containing sleep than across an equivalent retention interval containing daytime wakefulness. They found that the emotional content of contextual memories did not interact with the reduction in forgetting. Baran et al. [11] used a memory paradigm similar to ours, in which their participants learned negative and neutral IAPS pictures [12]. When comparing wake and sleep conditions, they also found better recognition memory following sleep compared with wake, but only for emotional and neutral pictures together; moreover, their data did not show a relationship between memory and measures of REM either.
Different attempts have been made to analyze the functional significance of neuronal and hormonal processes known to take place during REM-sleep by pharmacological or behavioral suppression of REM-sleep [26]. Studies with selective REM-sleep deprivation have provided indications of a role of REM sleep in the consolidation of procedural memory in humans [27,28]. The technique of REM-sleep deprivation is therefore an interesting approach for analyzing sleep-related emotional memory processes as well. In this study, we successfully employed a rigorous technique of selective REM-sleep deprivation in which the amount of REM-sleep during an eight-hour sleep window was minimized to a mean of 4.8 min. Furthermore, the procedure did not affect the total amount of non-REM-sleep. Compared with the technique of partial REM-sleep deprivation using the split-night paradigm [3,29], we obtained a much greater reduction of REM-sleep. Therefore, any proposed effect of REM-sleep on consolidation should be much stronger than in studies using the split-night paradigm or studies that simply correlate the relative amount of REM-sleep with performance. Moreover, we used comparable or even larger sample sizes and a comparable number of items in the memory test as other studies in the field [6-8].
Despite the fact that our study should have more power than previous studies, we did not find the expected effect of REM-sleep deprivation on memory consolidation. Nevertheless, several authors have argued that the technique of selective REM-sleep deprivation and the repeated disturbance of sleep might cause stress and therefore diminish memory retrieval [7,30]. The participants' sensation of stress accompanying the repeated awakenings could lead to an increase in cortisol levels. There are contradictory results concerning the influence of cortisol on memory [31]. Increasing glucocorticoid levels during sleep by post-learning administration of dexamethasone impaired the consolidation of declarative memory [32]. Cortisol suppression with metyrapone also impaired the retention of neutral stimuli without altering the recognition of emotional memory [33]. In a study by Gonnissen et al. [34], sleep fragmentation due to awakenings every 90 minutes during the night caused a significant but slight reduction of waking cortisol in the morning. More pronounced fragmentation with arousals every 30 seconds led to an increase in morning cortisol measured after the second night [35]. Decreased plasma cortisol levels have been reported in a former study with REM-sleep deprivation [36]. The self-evaluated emotional and arousal state of the participants, which could have been influenced by the stress of awakenings in the REM-deprived group, did not differ significantly between the two experimental groups as measured by self-report of valence and arousal, and therefore most likely did not influence memory consolidation.
Our study may be limited by the use of only negative emotions. This is a common method to manipulate emotional arousal, but it limits the results to this emotional state. Another limitation is that our encoding control consisted of only 20 pictures and was therefore too small for an acceptable baseline measure. A further limitation is that our selective REM-sleep paradigm was successful at reducing REM-sleep but also produced a significant increase in light sleep (N1) and significant decreases in total sleep time and SWS. For a more detailed analysis of possible memory effects of SWS, an SWS-deprived control group would be helpful and is a promising next research step; the results of a study by Groch et al. [37] also point in this direction.
Another explanation of our results is that other processes that are important for memory consolidation and normally associated with REM-sleep, such as high cholinergic activity or coherent theta activity in the amygdala and PFC [1,24,38], may persist during REM-sleep deprivation and thereby result in the consolidation of emotional memory during sleep. It is also possible that other aspects of sleep that are undisturbed by REM-sleep deprivation, such as stage 2 sleep spindles, may also be important in the processing of emotional memory. A possible role of sleep spindles in emotional memory performance, however, was not supported by our own correlational analyses, but it was supported by a recent pharmacology study [39]. In that study, hypnotics increased sleep spindle density and enhanced recognition of negative and high-arousal memories. These results raise the possibility that sleep spindles may causally facilitate emotional memory consolidation. However, further studies are required to elucidate possible interactions between sleep spindles and emotional memory processes (e.g., pharmacological enhancement of spindle activity during REM-sleep deprivation).
In summary, our data suggest that REM-sleep deprivation was successful and that the resulting massive reduction in REM-sleep had no influence on memory consolidation whatsoever. It seems that sleep-dependent emotional memory consolidation does not solely rely on intact amounts of REM-sleep throughout a night of sleep.
"year": 2014,
"sha1": "1fbea03967f570f46215b5b30cdefbfd0ebc4b73",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0089849&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "1fbea03967f570f46215b5b30cdefbfd0ebc4b73",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
Survival analysis: part II - applied clinical data analysis
As a follow-up to a previous article, this review provides several in-depth concepts regarding survival analysis. Several codes for specific survival analyses are also listed, to enhance the understanding of such analyses and to provide an applicable survival analysis method. The proportional hazard assumption is an important concept in survival analysis, and validation of this assumption is crucial. For this purpose, a graphical analysis method and a goodness-of-fit test are introduced, along with detailed codes and examples. In the case of a violated proportional hazard assumption, extended models of Cox regression are required. Simplified concepts of a stratified Cox proportional hazard model and time-dependent Cox regression are also described. Source code for an actual analysis using an available statistical package, with a detailed interpretation of the results, can enable readers to perform survival analyses on their own data. To enhance the statistical power of survival analysis, evaluation of the basic assumptions and of the interaction between variables and time is important. In doing so, survival analysis can provide reliable scientific results with a high level of confidence.
Introduction
The previous article, 'Survival analysis: Part I - analysis of time-to-event' [1], introduced the basic concepts of survival analysis. To narrow the gap between clinical data and statistical analysis, this article presents several extended forms of the Cox proportional hazards (CPH) model in series.
The most important aspect of the CPH model is the proportional hazard assumption during the observation period. The hazard of an event occurring during an observation cannot always remain constant, and hence the hazard ratio cannot always be maintained at a constant level. This is the main obstacle to clinical data analysis using a CPH model.
The basic concepts required to understand and interpret the results of a survival analysis were covered in a previous article [1]. Part II of this article, described herein, focuses on analytical methods for applying clinical data and on coping with problems that can occur during an analysis. These include methods for validating a proportional hazard assumption using clinical data, and several extended Cox models that overcome the problem of a violated proportional hazard assumption. This article also includes the R codes used for estimating several Cox models based on clinical data. For those familiar with statistical analysis, the R codes can easily enable an extension of the Cox model estimation. 1)
Proportional Hazard Assumption
Refer to the previous article [1] for a description of diagnostic methods applied to a CPH model; here, we consider only the proportional hazard assumption. A hazard is defined as the probability of an event occurring at a time point (t). The survival function of a CPH model is an exponential function, and the hazard ratio (λ) is constant during an observation; thus, the survival function is defined as an exponential form of the hazard ratio at a time point (equation 1) [1].
s(t) = e^(-λt)    (equation 1)

s(t): survival function based on the CPH model
t: specific time point
λ: hazard ratio

To estimate the hazard ratio included in the survival function, a hazard function (h) is required; it contains a specific explanatory variable (X), which indicates a specific treatment or exposure to a specific circumstance. At time point t, the hazard function of the control group is defined as the baseline hazard function (h_0(t)), and the hazard function of the treatment group as the combination of the baseline hazard function and a certain function of the explanatory variable (X). The hazard ratio is the ratio of the hazard functions of the treatment and control groups (equation 2) [2].
h_C(t) = h_0(t)
h_T(t) = h_0(t) e^(βX)
λ = h_T(t) / h_C(t) = e^(βX)    (equation 2)

h_C(t): hazard function of the control group
h_T(t): hazard function of the treatment group
λ: hazard ratio
h_0(t): baseline hazard function at time t
t: specific time point
X: explanatory variable
β: coefficient for X

As shown in equation 2, the CPH model processes the analysis under the constant hazard ratio assumption, with an explanatory variable that is not affected by time [3]. When the hazard ratio remains constant, the hazards of the two groups keep a constant distance at any time point and never meet graphically during the observation. However, graphically non-crossing hazards do not guarantee satisfaction of the proportional hazard assumption. In a clinical setting, one hazard can remain lower or higher than the other while their ratio is not constant, because the treatment effect may vary owing to various factors. Therefore, we need a statistical method to prove the satisfaction or violation of the proportional hazard assumption.
Validation of Proportional Hazard Assumption
There are three representative validation methods for a proportional hazard assumption. One is a graphical approach, another uses the goodness of fit (GOF), and the last applies a time-dependent covariate [4,5].
Graphical analysis for validation of proportional hazard assumption
As mentioned in the previous article, the log-minus-log plot (LML plot) is one of the most frequently used methods for validation of a proportional hazard assumption [1]. The log transformation is applied twice during the mathematical process of estimating the survival function. The first log transformation results in negative values, because the probability values from the survival function lie between zero and 1, and these values are made positive before the second log transformation; the name of the LML plot reflects this process. A survival function is the exponential form of a hazard ratio, and the hazard ratio is constituted from the hazard function, which is an exponential form of an explanatory variable. As a result of the LML transformation, the survival function is converted into a linear functional form, and the difference in the explanatory variable creates a constant distance on the y-axis at any time point. Ultimately, survival functions that are log-transformed twice become parallel over the observation period. Deductively, two parallel curves on an LML plot indicate that the hazard ratio remains constant during the observation period [4].
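In symbols, and using standard CPH notation rather than anything specific to this article: under proportional hazards, the survival function of the treatment group satisfies S_T(t) = [S_C(t)]^exp(βX), so taking log(-log(.)) of both sides gives

log(-log S_T(t)) = βX + log(-log S_C(t)).

The two twice-log-transformed curves therefore differ by the constant βX at every time point, which is exactly the parallel separation the LML plot checks for.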
There is a risk of a subjective decision regarding the validation of a proportional hazard assumption using an LML plot, because this method is based on a visual check. It is recommended that the interpretation be as conservative as possible, except under strong evidence of a violation, such as curves that cross each other or clearly converge. A continuous explanatory variable should be converted into a categorical variable of two or three levels to produce an LML plot. When doing so, the data thin out, and different results can be reached depending on the criteria used for dividing the variable [5].
1) Sample data (Survival2_PONV.csv) and the R console output of the entire code are provided as supplemental information. Refer to the online help or R statistical textbooks for detailed explanations of the arguments. The included R code covers the process beginning with the survival analysis introduced in [1]. A detailed description of a violation of a proportional hazard assumption is provided in [14].
R codes for Kaplan-Meier survival analysis under the assumption of a proportional hazard
The sample data 'Survival2_PONV.csv' contain the imaginary data of 104 patients regarding the first onset time of postoperative nausea and vomiting (PONV). All patients received one of two types of antiemetics (Drug A or B). The columns represent the patient number (No), type of antiemetic (Antiemetics), age (Age), body weight (Wt), amount of opioid used during anesthesia (Inopioid), the first PONV onset time (Time), and whether PONV occurred (PONV). To load these data into R software 3.5.2 (R Development Core Team, Vienna, Austria, 2018), the following code can be used; in this code, the location of the CSV file on the hard drive is 'd:\', and users should modify the path accordingly. This code provides the first several lines of the data (Table 1). To conduct a survival analysis using R, two R packages are required, 'survival' 2) and 'survminer'. 3) When these packages are not supplied by default, manual installation is straightforward using the command 'install.packages("packagename")'. These packages are then called.
#Load Package: survival, survminer
library(survival)
library(survminer)

Then, a Kaplan-Meier survival analysis is applied. The following code covers a Kaplan-Meier analysis, a comparison of PONV between groups using a log-rank test, and the LML plot introduced in part I of this article [1]. Small modifications of this code can enable a survival analysis with the user's own data. R applies a Kaplan-Meier analysis using the new variable 'Survobj'. The results of the Kaplan-Meier analysis and a survival table are presented in Table 2. Out of 104 patients, 63 patients suffered from PONV, and the median onset time was 10 h. A graphical presentation is also possible (Fig. 1). Here, 'ggsurvplot' produces survival curves; its complex arguments allow fine-tuning of the options to draw intuitive graphs.
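The code boxes that accompanied this passage did not survive extraction. A plausible reconstruction, using the file name, the variable name 'Survobj', and the functions named in the text (the exact arguments are assumptions), is:

# import the sample data (path taken from the text; adjust as needed)
PONV.raw <- read.csv("d:/Survival2_PONV.csv", header = TRUE)
head(PONV.raw)  # first several lines of the data (Table 1)

# Kaplan-Meier analysis for the whole sample
Survobj <- with(PONV.raw, Surv(Time, PONV == 1))
km.all  <- survfit(Survobj ~ 1, data = PONV.raw)
km.all           # overall summary (Table 2)
summary(km.all)  # full survival table
ggsurvplot(km.all, data = PONV.raw, conf.int = TRUE, surv.median.line = "hv")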
The next code estimates the survival curves according to the two antiemetics and conducts a log-rank test. Table 3 and Fig. 2 show the results of this code. The antiemetics are coded as 0 for Drug A and 1 for Drug B; namely, 'Antiemetics = 0' and '1' represent Drugs A and B, respectively, in the Table and Figure.
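This code box is likewise missing; a plausible reconstruction of the group-wise fit and log-rank test (function names from the survival package, arguments assumed):

km.group <- survfit(Surv(Time, PONV == 1) ~ Antiemetics, data = PONV.raw)
km.group                                                        # medians per drug (Table 3)
survdiff(Surv(Time, PONV == 1) ~ Antiemetics, data = PONV.raw)  # log-rank test
ggsurvplot(km.group, data = PONV.raw, pval = TRUE)              # Fig. 2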
In the interpretation of the log-rank test, the survival functions of the two antiemetics are statistically different (P = 0.009), and the median PONV-free time is 13 and 6 h for Drugs A and B, respectively.
The log-rank test is also based on the proportional hazard assumption, and an LML plot can be used to validate this assumption. The code for this process is as follows, and the output graph is shown in Fig. 3. 4,5)

# LML plot
plot(survfit(Surv(Time, PONV == 1) ~ Antiemetics, data = PONV.raw),
     fun = "cloglog")

[Figure caption: A 95% confidence interval (estimated from the log hazard) is presented in the shaded area. The dashed line indicates the median survival time.]
4) There are several ways to draw an LML plot in R; 'plot.survfit' with the argument 'fun = "cloglog"' provides an LML plot with a log-scaled x-axis. Most statistical references describe a log-scaled x-axis LML plot, whereas others describe a standard linear-scaled x-axis LML plot. The R code for a non-log-scaled LML plot can be created as shown below.
The goodness-of-fit test (GOF test)

The second method for validating a proportional hazard assumption is a GOF test between the observed and estimated survival function values. This provides a P value and is hence a more objective method than a visual check [5].
A Schoenfeld residual test is a representative GOF test for the validation of a proportional hazard assumption [6-8]. A Schoenfeld residual is the difference between the value of an explanatory variable observed in the real world and that estimated using a CPH model for patients who experience an event. Schoenfeld residuals are thus calculated for all explanatory variables included in the model; if the CPH model includes two explanatory variables, two Schoenfeld residuals are produced for one patient at a time. 6) Because the hazard ratio is constant during the observation period (the proportional hazard assumption), Schoenfeld residuals are independent of time. A violation of the proportional hazard assumption may be suspected when the Schoenfeld residual plot presents a relationship with time. A Schoenfeld residual test is also possible under the null hypothesis that 'there is no correlation between the Schoenfeld residuals and ranked event time'. 7) Schoenfeld residual tests cannot be used to validate a proportional hazard assumption in a Kaplan-Meier estimation, because they are based on values estimated using the CPH model. A Schoenfeld residual test is lacking in terms of the statistical hypothesis testing process. Null hypothesis significance testing applies a statistical process to validate 'no difference', and when the null hypothesis is rejected at a significance level, the alternative hypothesis is accepted except for the probability of the significance level; that is, differences exist between the comparatives within the probability of significance. A Schoenfeld residual test, in contrast, determines whether a proportional hazard assumption is violated based on the probability of the correlation statistics: correlation statistics with a probability higher than the significance level result in acceptance of the proportional hazard assumption without a proper hypothesis test, and this cannot guarantee sufficient evidence for the decision. Furthermore, the P value depends on the sample size: a large sample will produce significance with a minimal violation of the assumption, whereas an apparent violation may be insignificant in a small sample. Although a Schoenfeld residual test is more objective than an LML plot, the simultaneous use of both methods is recommended owing to the problems listed above [4,5,7].

[Table 3 legend: Chisq = 6.8 on 1 degree of freedom, P = 0.009. Antiemetics = 0 and 1 indicate Drugs A and B, respectively; because the variable 'Antiemetics' is coded as 0 for Drug A and 1 for Drug B, the R output describes these only as 'Antiemetics = 0 and 1'. n: total number of cases; Events: number of patients who experienced postoperative nausea and vomiting; Median: median survival time; 0.95LCL: lower limit of the 95% confidence interval; 0.95UCL: upper limit of the 95% confidence interval; n.risk: number at risk; n.event: number of events; Survival: survival rate; std.err: standard error of the survival rate; Lower/upper 95% CI: lower/upper limits of the 95% confidence interval; Chisq: chi-squared statistic.]

6) A Schoenfeld residual exists only for patients who experienced the event. It is the difference between the observed value of an explanatory variable at a specific time and the expected value of that variable (covariate) at that time, which is a weighted average, by likelihood of the event, over the risk set at that time point.

7) Some statistical software provides a method using scaled Schoenfeld residuals. Under specific circumstances these two approaches give different results, although they mostly produce similar results.
R codes for the Cox proportional hazard regression model and GOF test
To estimate a CPH model, libraries used in a Kaplan-Meier analysis are also required. After importing the data and calling the required libraries, the CPH model can be estimated according to the antiemetics using the following code.
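A minimal sketch of this step, assuming the 'survival' and 'survminer' libraries together with the PONV.raw dataset used throughout this article (the object name 'cph.antiemetics' is purely illustrative), is:

library(survival)
library(survminer)
# Cox proportional hazard model with the antiemetic group as the only covariate (cf. Table 4)
cph.antiemetics <- coxph(Surv(Time, PONV == 1) ~ Antiemetics, data = PONV.raw)
summary(cph.antiemetics)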
After examining the full model including all covariates (summary(cph.full)), the most compatible model is confirmed through covariate selection (summary(cph.selection)), and a clean result is finally obtained (summary(cph.selected)). Table 5 shows the final model. According to the result, the PONV hazard is estimated to increase 2.021-fold (95% CI, 1.217-3.358, P = 0.007) according to the antiemetic used, and 1.013-fold (95% CI, 1.008-1.018, P < 0.001) per unit of intraoperative opioid usage.
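A sketch consistent with the object names in this paragraph, assuming the covariates of the sample data (Antiemetics, Age, Wt, Inopioid) and a backward stepwise selection (the use of step() is our assumption, not a statement of the original procedure), is:

cph.full <- coxph(Surv(Time, PONV == 1) ~ Antiemetics + Age + Wt + Inopioid, data = PONV.raw)
summary(cph.full)
# Covariate selection; AIC-based backward elimination is one plausible implementation
cph.selection <- step(cph.full, direction = "backward")
summary(cph.selection)
# Final model retained in Table 5
cph.selected <- coxph(Surv(Time, PONV == 1) ~ Antiemetics + Inopioid, data = PONV.raw)
summary(cph.selected)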
The next code draws survival curves against the antiemetics for the final model (Fig. 4). The R code for an LML plot is described above. For categorical variables, an LML plot provides an easy-to-interpret and intuitive validation method for a proportional hazard assumption. 9) Validation of the proportional hazard assumption of the antiemetics, which is a categorical variable, is possible using an LML plot (Fig. 5).

#LML for CoxPH
plot(survfit(coxph(Surv(Time, PONV == 1) ~ strata(Antiemetics), data = PONV.raw)), fun = "cloglog")

The proportional hazard assumption of the antiemetics is not violated according to the graphs shown in Fig. 5. The covariate "Inopioid" is a continuous type of variable, and an LML plot using this variable is impossible to achieve without a categorical transformation.
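Returning to the survival curves of the final model mentioned at the start of this passage (Fig. 4): the published figure was drawn with 'ggsurvplot' (see the footnote to Table 5). A simpler base-graphics sketch, in which 'Inopioid' is fixed at its mean value (an assumption made here purely for illustration), is:

# Predicted survival curves for Drug A (Antiemetics = 0) and Drug B (Antiemetics = 1)
newdat <- data.frame(Antiemetics = c(0, 1), Inopioid = mean(PONV.raw$Inopioid))
plot(survfit(cph.selected, newdata = newdat), col = c("black", "grey"), conf.int = TRUE,
     xlab = "Time (h)", ylab = "Survival probability")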
A Schoenfeld residual test is shown below. Here, the 'cox.zph' function (from the 'survival' package) enables this test. The results are listed in Table 6, and the graphical output is shown in Fig. 6.
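A minimal sketch of this test, assuming the final model 'cph.selected' from above (object names are illustrative), is:

test.ph <- cox.zph(cph.selected)
print(test.ph)      # correlation statistics and P values (Table 6)
ggcoxzph(test.ph)   # Schoenfeld residual plots via 'survminer' (Fig. 6); plot(test.ph) is the base-graphics alternative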
value over time, but not continuously. In this way, a Schoenfeld residual test provides more objective results than an LML plot, which is strictly conservative.
Adding a time-dependent covariate
To validate a proportional hazard assumption in a CPH model, a time-dependent covariate is intentionally added into the estimated model. This covariate can be made using a time-independent variable and time, or a function of time. For example, the process compares two models, namely, a CPH model that assumes the proportional hazard assumption has not been violated, and another model incorporating a combined covariate of the explanatory variable and time (or a function of time) in the estimated CPH model. A likelihood ratio test or Wald statistics are used for the comparison. This type of method has certain advantages, including a simultaneous comparison with multiple covariates and various time functions; note that the results may change depending on the covariates and types of functions selected [5,10,11]. 10)
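A hedged sketch of this check on the sample data (the choice of log(t) as the time function and the object name 'fit.tt' are ours) is:

# Covariate-by-time interaction added through the tt() mechanism of coxph
fit.tt <- coxph(Surv(Time, PONV == 1) ~ Antiemetics + tt(Antiemetics),
                data = PONV.raw, tt = function(x, t, ...) x * log(t))
summary(fit.tt)   # a significant tt(Antiemetics) term suggests the hazard ratio changes with time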
Cox Proportional Hazard Regression Models with Time-dependent Covariates
Covariates violating the proportional hazard assumption in a CPH model should be adequately adjusted. This section introduces a stratification and a time-dependent Cox regression to deal with covariates violating the proportional hazard assumption.
Stratified Cox proportional hazard model
To fit the CPH model with variables violating the proportional hazard assumption, one method is to apply a stratified CPH model. This method produces one integrated result from the results of each stratum of a categorical variable classified based on a certain criterion. Unlike the Mantel-Haenszel method, which is based on the sample size of each stratum, stratification in the CPH model sets a different baseline hazard for each stratum, and a statistical estimation is then applied to obtain common coefficients for the remaining explanatory variables other than the stratified variables. 11) This provides hazard ratios in which the effects of the variables violating the proportional hazard assumption are controlled [12].
A stratified CPH model can be applied to control the variables violating a constant hazard assumption, as well as to control confounding factors that influence the results but have little or no clinical significance. Stratification always requires categorical variables, so continuous variables must first be converted into categorical ones. Under this situation, care should be taken because the sample size of each stratum is reduced (the data are thinned out) and the information held by the continuous variable is simplified. Therefore, conversion into a categorical variable should use as small a number of strata as possible, set ranges with clinical or scientific meaning, and maintain a balance among the strata [12]. Because various application methods and their variations are available, they are not discussed in detail herein.
11)
This is a non-interaction stratified CPH model. Several survival functions are estimated through stratification, and if the explanatory variables interact with each other, the coefficients in each stratum may differ. In this case, fitting an interaction model between the explanatory variables and performing a likelihood ratio test provide clues to judge whether an interaction exists. That is, if two or more variables are included in the model, it is necessary to check whether an interaction between them exists.
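A concrete illustration of this check on the sample data (object names are ours; anova() on two nested coxph fits performs a likelihood ratio test) is:

fit.main <- coxph(Surv(Time, PONV == 1) ~ Antiemetics + Inopioid, data = PONV.raw)
fit.int  <- coxph(Surv(Time, PONV == 1) ~ Antiemetics * Inopioid, data = PONV.raw)
anova(fit.main, fit.int)   # likelihood ratio test for the Antiemetics-by-Inopioid interaction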
R codes for stratified Cox proportional hazard model
In the previous CPH modeling, the variable 'Inopioid' violated the constant hazard assumption based on the Schoenfeld residual test (Fig. 6). Here, 'Inopioid' is a continuous variable that records the dose of intraoperatively used opioid. To apply stratified CPH modeling, continuous variables should be converted into categorical variables. For convenience, the following code converts 'Inopioid' into a categorical variable of 0 or 1, for not used or used, respectively.

##### Stratified Cox regression
### Add categorical variables from Inopioid
PONV.raw <- transform(PONV.raw, Inopioid_c = ifelse(Inopioid == 0, 0, 1))
head(PONV.raw)

According to this, the categorical variable 'Inopioid_c' is recorded as 0 or 1 and is newly added to the dataset (Table 7).
Next, the code for a stratified CPH model is given below. In the resulting output (Table 8), 'Antiemetics' is coded as 0 for Drug A or 1 for Drug B in the original data; 'Inopioid_c' is a newly created categorical variable based on 'Inopioid', which is coded as 0 for an opioid not used or 1 for an opioid used during the operation; coef: the value of the coefficient, exp(coef): exponential value of the coefficient, se(coef): standard error of the coefficient, z: z-statistic, Pr(>|z|): P value of the given z-statistic, Signif. codes: codes for significance marking.
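A minimal sketch of such a call, assuming stratification on 'Inopioid_c' as in Table 8 and Fig. 7 (the object name 'cph.strat' is ours), is:

# Stratified CPH model: a separate baseline hazard per Inopioid_c stratum, a common coefficient for Antiemetics
cph.strat <- coxph(Surv(Time, PONV == 1) ~ Antiemetics + strata(Inopioid_c), data = PONV.raw)
summary(cph.strat)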
Time-dependent Cox regression
Most clinical situations change over time, and the variables affected by a specific treatment also change even when the treatment remains constant during the observation period [13]. For example, consider an analgesic having a toxic effect on hepatobiliary function in patients with chronic pain. A periodic liver function test will be crucial, and all laboratory results will vary at every follow-up time. The administered dose may also vary according to the laboratory results or analgesic effects. Moreover, the laboratory results may no longer be valid after the patient is censored or after the event occurs. These variables are common in clinical practice, and the existence of time-dependent variables should be considered and checked before starting the data collection for a survival analysis. If an adequate measurement method is developed, a time-dependent covariate Cox regression will be possible. Another type of time-dependent variable is a covariate with a time-dependent coefficient [14]. If the analgesic mentioned above produces tolerance, its effect decreases over time. This indicates that the risk of breakthrough pain may be higher as time passes, which apparently violates the proportional hazard assumption. In this case, the effect of the analgesic can be included in the survival function, expressed as a covariate with a coefficient that is a function of time.
As mentioned above, a time-dependent covariate is incorporated into the analysis as a single value per repeated observation interval. For example, a patient taking the analgesic has an initial liver function test result of 40 IU/L, 100 IU/L after four weeks with continued pain, and 130 IU/L at eight weeks with pain, whereas at 12 weeks after the start of analgesic administration the pain has subsided and the medication is discontinued without a further laboratory test. The laboratory data input for the time-dependent covariate are then 40 until the 4th week without an event, 100 from the 4th to the 8th week without an event, and 130 from the 8th to the 12th week, with the event occurring at the 12th week.
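In counting-process form this example could be laid out as follows (the data frame and column names, e.g., 'lft' and 'lab', are purely illustrative):

# (tstart, tstop] intervals with the laboratory value measured at the start of each interval
lft <- data.frame(id     = 1,
                  tstart = c(0, 4, 8),
                  tstop  = c(4, 8, 12),
                  lab    = c(40, 100, 130),
                  event  = c(0, 0, 1))
# coxph(Surv(tstart, tstop, event) ~ lab, data = lft) would then treat 'lab' as a time-dependent covariate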
Clinical studies in the area of anesthesiology often include variables related to the response or effect of a specific treatment or medication. Depending on the characteristics and measurement methods of the variables, once a specific treatment or medication is applied, its effects may gradually decrease over time or be delayed until an onset time. Because the effects of a treatment or medication change over time, the coefficient of these effects can be expressed as a function of time; for Cox regression, a step function is frequently applied. A step function applies different coefficient values to different time intervals. A Cox regression can thus be established and the integrated results output [15]. In addition, a continuous parametric function of time can be used for the time-dependent coefficient instead of a step function [14].
R code for time-dependent coefficient Cox regression model: step function
As shown in Fig. 6, the Schoenfeld residuals of 'Antiemetics' and 'Inopioid' turn from positive to negative or vice versa at approximately 3 and 6 h. The data are arbitrarily separated using these time points.

# The survSplit call (the left-hand side of the formula is assumed from standard usage of the 'survival' package)
tdc <- survSplit(Surv(Time, PONV) ~ ., data = PONV.raw, cut = c(3, 6), episode = "tgroup", id = "id")
head(tdc)

The command 'survSplit' separates the patient data according to the established time intervals, where the value for each interval is the value measured at the left side of the interval (start time, 'tstart'), and 'Time', the end of the interval, carries over as the start of the next interval. That is, one interval is closed at the left and open at the right, and if an event occurs during an interval, the survival function is estimated using the variables measured at the left side of that interval (Table 9). Although the data appear to be duplicated at the end of one interval and the start of the next, problems do not occur because the divided times do not overlap. It is possible to apply a Cox regression and GOF test with these separated data.
All personal data are separated according to the preset time periods (at 3 and 6 h). The same 'id' number indicates the same person. For example, the data with id = 1 are separated into two time periods. The first period starts at time = 0 (tstart = 0) and ends at 3 (Time = 3), and PONV does not occur.
The second period starts at 3 and ends at 4 (the observation ended prematurely, before 6), and PONV does not occur. The time period is indicated as tgroup (time group) in the last column. The other variables are the same as in Table 7.
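The model fit that produces Table 10 can be sketched as follows, consistent with the object names used below ('fit.tdc', 'sf.tdc') and the terms listed in Table 10; the exact formula is our reconstruction:

# Time-dependent coefficient for Antiemetics via a step function over the tgroup intervals
fit.tdc <- coxph(Surv(tstart, Time, PONV == 1) ~ Inopioid + Antiemetics:strata(tgroup), data = tdc)
summary(fit.tdc)             # Table 10
sf.tdc <- cox.zph(fit.tdc)   # GOF test on the refitted model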
print(sf.tdc)
par(mfrow = c(2, 2))
plot(sf.tdc[1])
abline(h = coef(fit.tdc)[1], lty = "dotted")
plot(sf.tdc[2])
abline(h = coef(fit.tdc)[2], lty = "dotted")
plot(sf.tdc[3])
abline(h = coef(fit.tdc)[3], lty = "dotted")
plot(sf.tdc[4])
abline(h = coef(fit.tdc)[4], lty = "dotted")

Table 10 shows the estimated Cox regression and GOF test results, and Fig. 9 presents a plot of the Schoenfeld residuals. The risk of PONV increases 1.0126-fold (95% CI: 1.0078-1.017, P < 0.001) per unit of intraoperative opioid. For the antiemetics, the group taking drug B showed an increased PONV risk of 3.6545-fold (95% CI: 1.2024-11.107, P = 0.022) until 3 h post-operation and 3.8969-fold (95% CI: 1.4020-10.831, P = 0.009) until 6 h post-operation, with no significant difference from then until the end of the observation (risk ratio = 0.9382, 95% CI: 0.4242-2.075, P = 0.957). The results of the Schoenfeld residual test (Table 10 and Fig. 9) indicate that none of the variables violate the proportional hazard assumption. These results cannot provide a single desired outcome, and it is necessary to combine them. To compare the results of the two antiemetics, the data divided by 'survSplit' are combined to enable an interpretation (combine.tdc). The results are shown in Table 11. The survival model considering the time-dependent coefficient increases the sample size because the data of one patient are separated at the established time points. Note that the median survival times in this model are 31 and 16 h, and they differ from the median survival times of the proportional-hazard-assumed Kaplan-Meier analysis (Table 11).

Legend for Table 11: The proportional-hazard-assumed Kaplan-Meier analysis results are presented in the upper part of the table; note that this result is the same as in Table 3. The lower part of the table presents the results of a Cox regression with a time-dependent coefficient, whose median survival differs from the proportional-hazard-assumed analysis. Antiemetics = 0 and 1 indicate Drugs A and B, respectively. n: total number of cases, Events: number of patients who experienced postoperative nausea and vomiting, 0.95LCL: lower limit of the 95% confidence interval, 0.95UCL: upper limit of the 95% confidence interval.
Conclusions
Clinical studies in the area of anesthesiology have rarely presented statistical results using survival analysis. In recent years, studies on the survival or recurrence of cancer according to the anesthetics used have been actively published [16-18]. Survival analysis also has the power to present clear and comprehensive results in studies on pain control or the effects of medications. Previous articles have focused on the basic concepts of survival analysis and interpretations of the published results [1], and the present article covered the process of conducting a survival analysis using clinical data, finding errors, and achieving adequate results. Although this article does not include all existing survival analysis methods, it introduced several R codes to enable an intermediate level of survival analysis for clinical data in the field of anesthesiology. 12) Some clinical papers dealing with a survival analysis have presented statistical results without considering a proportional hazard assumption or an interaction between the covariates and time. The power of a log-rank test, which is commonly used to compare two groups, tends to decrease when a proportional hazard assumption is violated and can generate an incorrect result [19,20]. An investigation into the reporting of survival analysis results in leading medical journals indicated that the use of survival analysis has increased significantly, although several problems still exist, including descriptions of censoring, sample size calculation, validation of the constant proportional hazard ratio assumption, and GOF testing [21]. Because most statistical analyses require several basic assumptions, survival analysis also requires some essential assumptions. In a Kaplan-Meier analysis, the likelihood of the event of interest and of censoring should be independent of each other, and the survival probabilities of patients who entered the study earlier and later should be similar. A log-rank test also requires the previously described assumptions as well as the proportional hazard assumption [22]. A CPH model requires a proportional hazard assumption, independence of the survival times of different patients, and a multiplicative relationship between the predictors and the hazard [23]. A clustered event time analysis and an accelerated failure time analysis are also often applied as survival analysis methods in clinical studies. A clustered event time analysis is similar to a stratified CPH model and has certain advantages when each stratum has insufficient event cases. It has two types of processes: a marginal approach that estimates the survival function for the overall cluster from the pooled effect of each stratum, and a conditional approach that estimates the survival function from the heterogeneity between clusters. An accelerated failure time analysis estimates the model similarly to a linear regression, based on a Weibull or log-logistic distribution. Unlike a CPH model, which maintains a constant risk ratio for the covariates, this model assumes that the disease process can be accelerated or decelerated over time.
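For readers who want to try the accelerated failure time approach mentioned above on the sample data, a hedged sketch using survreg() from the 'survival' package (the object name is ours) is:

aft.weibull <- survreg(Surv(Time, PONV == 1) ~ Antiemetics + Inopioid, data = PONV.raw, dist = "weibull")
summary(aft.weibull)
# dist = "loglogistic" fits the log-logistic alternative mentioned in the text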
Fig. 1. Kaplan-Meier curve of overall survival status with sample data. A 95% confidence interval (estimated from a log hazard) is presented in the shadowed area. The dashed line indicates the median survival time.
Fig. 2. Kaplan-Meier curves of two antiemetics with sample data. The P value is estimated based on a log-rank test. A 95% confidence interval (estimated from a log hazard) is presented in the shadowed area. The dashed lines indicate the median survival times of the groups taking Drugs A and B. Drug A is coded as 'Antiemetics = 0' and Drug B is coded as 'Antiemetics = 1' in the original data.
Fig. 3. Log minus log plot of Kaplan-Meier estimation with log-rank test between two antiemetics. The two curves do not meet during the observation period, indicating satisfaction of the proportional hazard assumption. The log-time scale is shown on the x-axis.
Fig. 5. LML plot of the Cox proportional hazards model based on antiemetics with sample data.
Fig. 4. Survival curves of antiemetics estimated using the Cox proportional hazards regression model. A solid black line indicates Drug A (Antiemetics = 0) and a solid grey line indicates Drug B (Antiemetics = 1). Dashed lines present the 95% CI range. Drug A is coded as 'Antiemetics = 0' and Drug B is coded as 'Antiemetics = 1' in the original data.
Fig. 7. Examples of the stratified Cox proportional hazard model and corresponding LML plot. (A) Survival curves of the estimated stratified Cox proportional hazard model. Stratification is achieved using the categorical variable 'Inopioid_c'. (B) Log-minus-log plot for evaluation of the proportional hazard assumption against the two antiemetics. Note that parallelism below 2 h is not assured, whereas the overall curves are roughly parallel without crossing.
Fig. 8. Schoenfeld residual test for the stratified Cox proportional hazard model. For the covariate 'Antiemetics', the probability was estimated as 0.039, and a violation of the proportional hazard assumption was strongly suggested under the controlled covariate 'Inopioid' (the dotted horizontal line shows the estimated coefficient of 'Antiemetics').
Fig. 10. Graphical comparison between survival models of Kaplan-Meier and Cox regression with a time-dependent coefficient. Black curves indicate the model fitted using a Kaplan-Meier analysis, and the grey curves are from a Cox regression with a time-dependent coefficient. The solid lines indicate Antiemetics = 0 (Drug A), and the dashed lines indicate Antiemetics = 1 (Drug B).
Table 1 .
First Three Data Imported as PONV.raw
Table 2 .
Results of Kaplan-Meier Estimation and Survival Table
Table 4 summarizes the results. The PONV incidence rate is 1.9471-fold higher (95% CI, 1.174-3.229, P = 0.010) in the drug B group than in the drug A group.
Table 4 .
Results of the Cox Proportional Hazard Model Estimation Using Antiemetics with Sample Data. Call: coxph(formula = Surv(Time, PONV == 1) ~ Antiemetics, data = PONV.raw). 'Antiemetics' is coded as 0 for Drug A or 1 for Drug B in the original data. coef: the value of the coefficient, exp(coef): exponential value of the coefficient, se(coef): standard error of the coefficient, z: z-statistic, Pr(>|z|): P value of the given z-statistic, Signif. codes: codes for significance marking.
Table 5 .
Multivariate Cox Proportional Hazard Model with Sample Data. Call: coxph(formula = Surv(Time, PONV == 1) ~ Antiemetics + Inopioid, data = PONV.raw). 'Antiemetics' is coded as 0 for Drug A or 1 for Drug B in the original data. 'Inopioid' is the amount of opioid used during surgery. coef: the value of the coefficient, exp(coef): exponential value of the coefficient, se(coef): standard error of the coefficient, z: z-statistic, Pr(>|z|): P value of the given z-statistic, Signif. codes: codes for significance marking.
8) The command 'ggadjustedcurves' included in the 'survminer' library easily produces the survival curves of the CPH model. Unfortunately, this command still has minor functional errors, such as in printing the 95% CI or labelling, and the somewhat more complex 'ggsurvplot' is used in this example.
Table 6 .
Results of the Schoenfeld Residual Test. 'Antiemetics' is coded as 0 for Drug A or 1 for Drug B in the original data. 'Inopioid' is the amount of opioid used during surgery. rho: Spearman's ρ statistic, chisq: chi-squared statistic, P: P value.
Table 7 .
PONV.raw with the New Categorical Variable 'Inopioid_c' Added from the Variable 'Inopioid'. 'Survobj' is a variable created by an R command during the process of the Kaplan-Meier estimation and indicates a survival object. 'Inopioid_c' is a newly created categorical variable based on 'Inopioid', which is coded as 0 for an opioid not used or 1 for an opioid used during the operation. From left, each column contains a coded variable: the first column has a number automatically generated by R, the variable 'No' is a coded number in the original data, 'Antiemetics' has a value of 0 for Drug A and 1 for Drug B, 'Age' and 'Wt' are the actual patients' age and body weight, 'Inopioid' is the amount of opioid used during surgery, 'Time' indicates the onset time of postoperative nausea and vomiting, and 'PONV' is coded as 1 when the patient experienced postoperative nausea and vomiting.
Table 8 .
Results of the Stratified Cox Proportional Hazard Model. Stratification with 'Inopioid_c'.
Table 9 .
Data Divided by survSplit Function
Table 10 .
Results of the Time-dependent Coefficient Cox Regression Using a Step Function and the Schoenfeld Residual Test. 'Antiemetics' is coded as 0 for Drug A or 1 for Drug B in the original data. 'Inopioid' is the amount of opioid used during surgery. The split time periods are presented as Antiemetics:strata(tgroup)tgroup = 1 for the time period from 0 to 3, Antiemetics:strata(tgroup)tgroup = 2 for the time period from 3 to 6, and Antiemetics:strata(tgroup)tgroup = 3 for the time period from 6 to the end of the observation. coef: the value of the coefficient, exp(coef): exponential value of the coefficient, se(coef): standard error of the coefficient, z: z-statistic, Pr(>|z|): P value of the given z-statistic, Signif. codes: codes for significance marking.
Table 11 .
Comparison Kaplan-Meier Analysis and Survival Analysis with Time-dependent Coefficient | 2019-05-18T13:03:49.924Z | 2019-05-17T00:00:00.000 | {
"year": 2019,
"sha1": "9775aab5272ff6665282490a3510676feb3c1227",
"oa_license": "CCBYNC",
"oa_url": "https://ekja.org/upload/pdf/kja-19183.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "0001a09827ae0d862c8e67c819e9898234c4281f",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
257941753 | pes2o/s2orc | v3-fos-license | A misdiagnosed case of a 150-cm umbilical cord coiled twice around the fetal neck with a true cord knot: A rare Syrian case report
The normal umbilical cord is a crucial component during pregnancy, but sometimes it could become compromised due to some abnormalities such as excessive long umbilical cord, and though they usually end up with a healthy baby, they may lead to severe consequences. Excessive long umbilical cords are found in 4% of pregnancies and represent a risk factor for nuchal cords and true knots. We report a case of a 37-year-old Syrian pregnant woman who presented to the hospital at 37 weeks of gestation asking for a C-section for a fear of ambiguous ultrasound findings that have been interpreted as fetal malformation. At delivery, a healthy baby was born with a 150-cm umbilical cord, a true knot, and double-looped nuchal cords; the formation of the loops and the knot had been attributed to the elongated cord. Besides, ultrasound imaging could sometimes be deceptive and lead to unnecessary interventions; therefore, cord anomalies should always be kept in mind because they do not always represent a justification for a C-section.
Introduction
The umbilical cord (UC) is the vital connection between the fetus and the placenta. The placenta provides the crucial resources needed for fetal development. 1 The UC begins to develop in the embryologic period between Week 3 and Week 7 of gestation. 1 It contains the umbilical vessels: two arteries that carry deoxygenated blood and a single vein that carries oxygenated blood. 2 Elongated UCs may affect cardiac dynamics and increase peripheral vascular resistance; excessive long umbilical cords (ELUCs) predispose to stasis, which is a risk factor for thrombosis (Virchow's triad). 3 Generally, UC abnormalities such as a short cord, a long cord, knots, hyper-coiling, hypo-coiling, stricture and a single umbilical artery can lead to increased morbidity and mortality of the fetus. 4 To be considered abnormally long, the cord must be longer than 100 cm. 5 This is present in 4% of placentae, increasing the risk of cord entanglement. 6 The incidence of true knots of the UC is not only very low, but the majority of them also do not present distinct symptoms. However, several fetuses can present with serious complications, given the possibility of circulatory alteration (low perfusion) and subsequent intrauterine growth restriction (IUGR), an increased incidence of premature birth, or even death. A true UC knot has been described in 0.3%-2.1% of all births worldwide, and its prenatal diagnosis is extremely difficult and rarely described in the literature. 7 In particular, the presence of a single true knot in the navel string is a rare disorder occurring in roughly 0.3%-2% of pregnancies. 5 In our case, we report a rare condition of a 150-cm UC with two nuchal cords and a rare true knot.
Case presentation
A 37-year-old woman at 37 weeks of gestation, gravida 4 para 2, presented to our hospital for an elective C-section delivery after a private OB/GYN clinician had interpreted the woman's ambiguous third-trimester ultrasound findings as an undefined malformation. The patient had a history of two cesarean deliveries and one spontaneous complete abortion in the first trimester. She does not suffer from any illnesses or allergies and does not take any prescribed medications. On the first day of arrival at our hospital, the woman underwent a full clinical examination; her vital signs were as follows: blood pressure = 110/80, heart rate = 82 bpm and temperature = 37°C. A series of laboratory tests were performed and the findings were as follows: complete blood count (hemoglobin = 12.2 g/dL, white blood cells = 10.8 × 10^9 cells/L, 168,000 platelets/µL), body mass index (BMI) = 28 kg/m^2, and her blood type is O positive. Ultrasound imaging showed normal findings; the placenta was positioned anteriorly and the fetal measurements were suitable for gestational age (biparietal diameter = 38W and femur length = 37W + 6D). The non-stress test (NST) results were normal. On the second day of hospitalization, cesarean delivery was performed at the request of the patient. A male infant was delivered. The Apgar score was 8 at 1 min, and the newborn weighed 3500 g. Examination of the placenta showed an ELUC measuring 150 cm, with a double loop of the nuchal cord and a true knot (Figure 1).
The patient was monitored at regular intervals and was later discharged with normal vital signs and no bleeding. No complications were observed.
Discussion
For a healthy fetus development, a normal UC is critical. It is a three-vessel cord, one vein responsible for supplying the fetus with nutrients and oxygen from the placenta, and two arteries through which deoxygenated blood and waste products get removed. This conduit begins to form at 5 weeks and elongates until it reaches full length by the 28th week of gestation, with an average length of 50-60 cm. 8 The blood flows through the cord vessels at a speed of 35 mL/min at 20 weeks of gestation and 240 mL/min at 40 weeks of gestation. Several abnormalities can affect the UC, including single artery, prolapse, knots, ELUCs and entanglements, all can lead to severe outcomes such as intrauterine growth retardation and stillbirth which is defined as fetal death inside the womb during labor and birth or after 20 weeks of gestation, about 2.5%-30% of abnormalities are associated with stillbirth. 9,10 In our case, we will discuss a cord with a length of 150 cm, double-loop nuchal cords and a true cord knot.
ELUCs are defined as cords longer than 100 cm occurring in 4% of placentae. 5,6 Increased parity, high BMI, increased placental weight and a history of chronic diseases such as hypertension and diabetes mellitus increase the risk of an elongated cord; gender plays a big role as well where male fetuses tend to develop longer cords than female ones, and this difference is observed after gestational week 28. 11 ELUC leads to very dangerous outcomes for the mother and the fetus, that is, pre-eclampsia, preterm premature rupture of membranes (PPROM), intrauterine and perinatal death, low 5-min Apgar score and even transfer to the neonatal intensive care unit (NICU). 11 The mechanism of pre-eclampsia as a result of prolonged UC is still not fully understood, but abnormal vascular endothelial growth factor (VEGF) family protein and messenger ribonucleic acid (mRNA) expression were observed in both pre-eclampsia and the UC anatomical abnormalities (UCAA), which may be the reasons for severe maternal outcomes. 12 ELUC is considered a strong risk factor for entanglements, which is the most observed abnormality with an incidence of 6% at 20 weeks of gestation and increased with advancing gestational age till peaking at 29% at 42 weeks of gestation. 12 Entanglement is a feature in which one or more loops are encircled around any part of the fetus' body due to random fetal movements and when that part is the neck it is called the nuchal cord; it is the most common kind of entanglement occurring in 15%-34% of all pregnancies. 6 In addition to ELUC, oligohydramnios and low birth weight increase the risk of nuchal cord formation. Nuchal loops could be loose or tight. Tight ones are more severe for the fetus because they could lead to the obstruction of blood flow causing hypoxia or even death, but, fortunately, most nuchal loops are benign, the frequency of a single loop is estimated to be 24%-28%, while the frequency of multiple loops is 0.5%-3.3%. Moreover, three loops or more are needed to cause adverse outcomes. 6 Besides these abnormalities, true cord knots in which the UC loops upon itself and can be physically untied represent a very rare condition occurring in roughly 0.3%-2% of all deliveries, unlike false knot which is varicosity or redundancy of an umbilical vessel (usually the vein) within the cord substance and cannot be physically released. Risk factors influencing the formation of cord knots include polyhydramnios, gestational diabetes, chronic hypertension, mono-amniotic twins, male and small fetuses, long cord and multi-parity. 13 Knots can cause either severe outcomes or benign due to their level of tightness just like nuchal loops. 14 A rare but classic radiological finding used to diagnose a true knot is the "hanging noose sign," and it shows a crosssection of the UC surrounded by one of the UC loops. 15 Maybe this sign was interpreted as a malformation by the private doctor in our case.
In addition to our case, which describes a cord with a length of 150 cm and two loops, two special cases about ELUC with entanglements were reported in the medical literature. One with a 160-cm cord coiled eight times around the fetal neck causing IUGR and fetal distress, 16 and another one with a longer cord but fewer loops, a 190-cm cord wrapped six times around the neck with no complications at all. 17 The studies regarding UC length-besides the two previous cases-are controversial. Some showed poor perinatal outcomes such as asphyxia and fetal demise, 17,18 while others showed otherwise; a Japanese study published in March 2019, which was conducted on 32,315 women, concluded that ELUC may contribute to the decreased risk factors of intrauterine fetal demise in singleton pregnancies delivered at >34 weeks' gestation and suggested that ELUC could be a preventive factor of miserable outcomes when combined with true knots by explaining that ELUCs decrease the probability of knots being tightened and closed whenever the fetus moves inside the womb. 19 This study could interpret the result of our case, a baby with two entanglements and a knot could survive with no complications, the 150-cm cord represents a logical reason for that. However, a case similar to ours reported a 93-cm cord with four loops and a true knot suffered from IUGR and needed an emergency delivery. 6 This case ensures that more research is needed to be done about the combined effects of multiple UC abnormalities.
In this case, the entanglements and the true cord knot were interpreted as a result of ELUC, and the reason behind the survival of the baby is also "ELUC." Our patient requested to deliver her baby by C-section though all other investigations were normal because she thought that would be the best for her baby's health as the echo image finding was wrongly diagnosed as a fetal malformation by a private OB/GYN. According to studies, the decision of pregnant women to choose the mode of birth (MOB) is still debatable and there are no standards that prevent women from that, and sometimes all doctors can do is give the pros and cons of each MOB. 20
Conclusion
Ultrasound images could sometimes be misleading due to UC abnormalities, leading to confusion about whether a finding is a malformation or not and consequently resulting in unnecessary surgical interventions. As doctors, we therefore have to bear in mind all of the cord abnormalities when faced with such unclear ultrasound findings, because knots and nuchal loops, unlike malformations, are not a cause for concern and do not represent a justification for a C-section, especially when every other aspect of the mother's and the fetus's health is normal. Not all cord abnormalities end up with devastating results, even when two or more abnormalities are combined.
Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship and/or publication of this article. | 2023-04-05T15:27:10.145Z | 2023-01-01T00:00:00.000 | {
"year": 2023,
"sha1": "357619052da992c64e679207433118bdca8d8f50",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.1177/2050313x231164858",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d9071aca6d872d191d5f567f2c555ecc4b6bfe7e",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
50783097 | pes2o/s2orc | v3-fos-license | Risk factors for recurrence in deep vein thrombosis patients following a tailored anticoagulant treatment incorporating residual vein obstruction
Abstract Background Finding the optimal duration of anticoagulant treatment following an acute event of deep vein thrombosis (DVT) is challenging. Residual venous obstruction (RVO) has been identified as a risk factor for recurrence, but data on management strategies incorporating the presence of RVO and associated recurrence rates in defined clinical care pathways (CCP) are lacking. Objectives We aimed to investigate the long‐term clinical outcomes and predictors of venous thromboembolism (VTE) recurrence in a contemporary cohort of patients with proximal DVT and managed in a CCP incorporating the presence of RVO. Patients All patients treated at the Maastricht University Medical Center within an established clinical care pathway from June 2003 through June 2013 were prospectively followed for up to 11 years in a prospective management study. Results Of 479 patients diagnosed with proximal DVT, 474 completed the two‐year CCP (99%), and 457 (94.7%) the extended follow‐up (2231.2 patient‐years; median follow‐up 4.6 years). Overall VTE recurrence was 2.9 per 100 patient‐years, 1.3 if provoked by surgery, 2.1 if a non‐surgical transient risk factor was present and 4.0 if unprovoked. Predictors of recurrent events were unprovoked VTE (adjusted hazard ratio [HR] 4.6; 95% CI 1.7, 11.9), elevated D‐dimer one month after treatment was stopped (HR 3.3; 1.8, 6.1), male sex (HR 2.8; 1.5, 5.1), high factor VIII (HR 2.2; 1.2, 4.0) and use of contraceptives (HR 0.1; 0.0, 0.9). Conclusions Patients with DVT managed within an established clinical care pathway incorporating the presence of RVO had relatively low incidences of VTE recurrence.
| INTRODUCTION
The optimal management strategy for the prevention of recurrent venous thromboembolism (VTE) is still uncertain. Venous thromboembolism contributes significantly to global disease burden. 1 Within the European Union, it is estimated that the annual incidence of deep-vein thrombosis (DVT) and pulmonary embolism (PE) cases is 684 000 and 435 000, respectively, and VTE-related deaths exceed 543 000. 2 While prevention of recurrent VTE is important to reduce the burden of disease, 1 anticoagulation treatment, the mainstay of VTE prevention, is accompanied by a significant risk of bleeding complications. 3 Efficient prevention thus critically depends on optimal assessment of recurrence risk in individual patients. Even though a number of clinical criteria, laboratory assays, and even imaging tests have been proposed as risk factors for recurrent VTE, 4,5 their clinical value is limited. 4 Recent guidelines comment on previous study results and suggest that the demonstration of residual vein obstruction (RVO) at the end of the regular period of anticoagulation treatment might improve risk assessment and management of recurrent VTE. 3,6 Indeed, observational data has established RVO as a risk factor for recurrent VTE with a relative risk of about 1.5. 7-11 These data were confirmed in a randomized controlled trial that compared RVO-guided anticoagulation therapy vs stopping anticoagulation treatment after 3 to 6 months. 10 However, it is not known if RVO is useful for risk assessment in clinical practice, although it is used in combination with other diagnostic and prognostic tools in a management strategy. In particular, the effects of a management strategy comprising the presence of RVO in clinical practice are unknown.
Applying evidence-based health care is a difficult task, particularly in VTE patients. Clinical care pathways (CCPs) have been introduced to guide diagnostic and therapeutic decisions for patients with defined clinical problems in complex organisations. 12 CCPs aim to translate evidence-based medicine into clinical practice, improve collaboration among multiple specialized care providers and standardize health care procedures. 13,14 CCPs have been introduced for patients with a variety of clinical problems including venous thromboembolism. [15][16][17][18][19] However, knowledge on the long-term effects of CCPs on clinical outcomes of patients with VTE is lacking. The present investigation aimed to investigate both the long-term clinical outcomes in patients with DVT managed in a defined CCP incorporating the presence of RVO and also to determine risk factors for VTE recurrence in a contemporary cohort of patients with proximal DVT managed within a defined CCP. Patients diagnosed with proximal DVT at the MUMC between 2003 and 2013 were followed for two years within the CCP, and additional outcome data were collected for an extended period.
| Study design and population
No exclusion criteria were applied. However, certain patient groups were usually not treated within the CCP: patients with distal DVT, DVT complicated by PE, patients who received further treatment at institutions other than the MUMC, and patients with cancer. The study was carried out in accordance with the Declaration of Helsinki, and the study protocol and collection of data were approved by the local MUMC ethical committee (METC 15-4-256).
| Clinical care pathway
In 2003, a CCP was implemented at the MUMC to guide management of patients diagnosed with proximal DVT. All patients objectively diagnosed with proximal DVT (popliteal vein, femoral vein, common femoral vein, or iliac vein) at the MUMC are managed in a specialized outpatient clinic according to a strict protocol. Regular visits are scheduled 0.5, 3, 6, 12, and 24 months after diagnosis. Structured history and physical examination as well as an assessment of clinical risk factors are performed at the first visit. The Villalta score is performed at every visit. 20 Laboratory tests are performed 1 month after cessation of anticoagulation treatment, and 12 and 24 months after diagnosis (levels of D-dimer, factor VIII, and C-reactive protein [CRP]). Thrombophilia markers are not ordered routinely. RVO is assessed by ultrasonography 1 week before the intended cessation of anticoagulant treatment (after 3 or 6 months, respectively).
| Risk assessment and treatment decisions
Essentials
• Outcomes of clinical care pathways (CCP) for treatment of deep vein thrombosis (DVT) are unknown.
• We followed 479 DVT patients treated within a CCP incorporating RVO for a median of five years.
• Patients had relatively low incidences of VTE recurrences and deaths.
• Unprovoked DVT, D-dimer, male sex, factor VIII and contraceptive use predicted recurrent events.
The criteria by which the risk of recurrent VTE is assessed are delineated by a strict protocol, and the instructions are in line with current guidelines. The major principles are illustrated in Figure 1. Patients are assigned to three different categories: (i) patients with a provoked DVT in the course of a reversible risk factor such as recent surgery are assigned to three months of anticoagulation treatment; (ii) patients with an unprovoked DVT are assigned to 6 months of anticoagulation therapy and extensive risk assessment; (iii) high-risk patients are assigned to an indefinite anticoagulant treatment regimen. "Provoked DVT" was defined as DVT with the presence of a reversible risk factor (surgery within 2 months, contraceptive use, pregnancy, long-distance travel of more than 10 hours, and immobilization). "Unprovoked DVT" was defined as DVT without the presence of a reversible risk factor (see above). "High-risk patients" were defined as patients with unprovoked DVT in the presence of recurrent VTE, elevated D-dimer, high factor VIII, known high-risk thrombophilia, inflammation, or active cancer. High-risk thrombophilia was defined as protein S or C deficiency, homozygous factor V Leiden mutation, antithrombin deficiency, or antiphospholipid antibody syndrome. 21 Antithrombin deficiency was defined as functional antithrombin <70%. 22 Protein S deficiency was defined as free protein S below the reference range (<2.5th percentile), and protein C deficiency was defined as protein C below the reference range (<2.5th percentile), both in the absence of vitamin K deficiency. 23 Inflammation was defined as the presence of a systemic inflammatory disease such as Crohn's disease, ulcerative colitis, or connective tissue disease. The presence of elevated D-dimer, high factor VIII, and persistently elevated CRP was considered for risk assessment only at the time point 1 month after anticoagulation was stopped.
Presence of RVO at the time of planned treatment discontinuation is the primary risk factor upon which treatment duration is further tailored. If no RVO is present at this time point, anticoagulation treatment will be stopped. In the case of detected RVO, anticoagulation will be prolonged for another 3 months (provoked DVT) or 6 months (unprovoked DVT), respectively. Treatment will be prolonged only once. Deviations from the protocol could be made at the discretion of the treating physician to address patients' preferences.
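To make the tailoring rule concrete, a minimal sketch in R (the function and argument names are hypothetical and not part of the CCP documentation; Inf stands for indefinite treatment) is:

# Months of anticoagulation according to the CCP rules described above
assign_duration <- function(category = c("provoked", "unprovoked", "high_risk"), rvo_present = FALSE) {
  category  <- match.arg(category)
  base      <- switch(category, provoked = 3, unprovoked = 6, high_risk = Inf)
  extension <- switch(category, provoked = 3, unprovoked = 6, high_risk = 0)
  if (is.finite(base) && rvo_present) base + extension else base   # treatment is prolonged only once
}
assign_duration("unprovoked", rvo_present = TRUE)   # returns 12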
| Assessment of RVO
RVO was assessed using compression ultrasound (CU) as previously described, 24 and a protocol was implemented for conducting a series of standardized ultrasound measurements as follows. Measurements were taken at (i) the common femoral vein, just below the inguinal ligament, and (ii) the popliteal vein. No iliac or calf veins were assessed. B-mode images were taken in a transverse plane. RVO was defined according to the definition of Prandoni as a residual vein diameter during compression of more than 2 mm. 24 Several studies have confirmed an acceptable accuracy and inter-observer reproducibility of this method, [24][25][26][27] and agreement between observers was achieved by close teamwork among team members. A formal assessment of the inter-observer agreement was not conducted.
| Collection of data
All data were prospectively recorded as part of routine clinical practice.
A structured database was implemented. Outcome data were also documented as part of clinical routine. After completion of 2 years of follow-up in the course of CCP, outcome data were additionally collected over the course of further outpatient visits and accessing MUMC and general practitioner records.
| Determination of laboratory tests
Laboratory data were determined in a certified MUMC+ laboratory using established methods as previously described. 28 Venous blood samples were collected in commercially available tubes with and without citrate 0.106 mol/l as appropriate following an established protocol to ensure adequate preanalytic conditions. Samples were centrifuged according to recent guidelines (10 minutes at 1500 g or 5 minutes at 2500 g, respectively). D-dimer and CRP was measured after centrifugation. Plasma for factor VIII measurements were snapfrozen at −80°C. D-dimer were determined using the Vidas assay until
| Outcomes, predictor variables, and co-variables
We defined the time to recurrent VTE as the primary outcome.
Recurrent VTE was defined as objectively confirmed proximal or distal DVT, PE, or other venous thrombosis as determined by CU, spiral computed tomography, or ventilation-perfusion lung scanning.
Recurrent DVT was defined as: (i) a new non-compressible vein in the contralateral leg, (ii) a new non-compressible vein of the same leg as the first event (previously unaffected), (iii) a clear proximal extension of the known thrombus, or (iv) a new non-compressible site of a vein that was affected but previously re-canalized. 11,26,29 The secondary outcome was time to death from any cause. A number of variables previously identified or suspected as predictors of recurrent VTE were recorded as a potential predictor or a co-variable, respectively (Table 1). Due to organizational reasons, bleeding events were not recorded.
| Patient characteristics
Four-hundred and seventy-nine patients diagnosed with proximal lower extremity DVT were treated within the CCP; the flow of the patients is shown in Figure 1, and detailed patient characteristics are reported in Table 1. The overall incidence rate was 4.2 per 100 patient-years for the first two years and 2.9 for the extended observation period (Table 2).
| VTE recurrence
The Kaplan-Meier estimate of overall recurrence is shown in Figure 2.
In patients with DVT provoked by surgery, incidence rate was 1.1 per 100 patient-years (CCP) or 1.3, respectively (extended observation period). For patients with non-surgical transient risk factors, the incidence rate was 3.1 or 2.1, respectively. In contrast, the incidence rate was 6.1 or 4.0, respectively, in patients with unprovoked VTE. The cumulative incidence according to these groups is illustrated in Figure 3.
Subgroup analyses revealed higher incidence rates for the following risk factors: male sex, traveling history, inflammatory disease, elevated D-dimer, high factor VIII and elevated CRP ( Table 2). Kaplan-Meier estimates are shown in Figure 3. No recurrent VTE events occurred during anticoagulation treatment.
| Risk factors for recurrent VTE
Unadjusted and adjusted hazard ratios (HR) are reported in Table 3.
| DISCUSSION
In our study of a cohort of patients with proximal DVT who were managed with a CCP that incorporated the presence of RVO, we found that predictors of recurrent VTE were unprovoked VTE, elevated D-dimer 1 month after anticoagulant treatment was stopped, male sex, and high factor VIII. We also documented that 99% of patients diagnosed with proximal DVT and treated within the CCP completed the 2 year treatment protocol, the overall rate of recurrence was relatively low, and recurrence rates were lowest in women with VTE provoked by oral contraceptives.
Minimal data is available on clinical outcomes of patients treated within a particular CCP or treatment program, and research to date has instead focused on the short-term management. Tillman and colleagues evaluated an outpatient DVT treatment program in 391 patients 19 resulting in a VTE incidence rate of 6.0 per 100 patient-years.
An elevated risk of mortality was stated in a different cohort of 131 patients, but no incidence rates were reported. 34 Equivalent numbers of recurrent VTE compared to usual care were recorded in another CCP in the community setting, but only short-term effects were observed. 17 There have been several other investigations studying small cohorts, but few or no clinical outcomes were reported. 17,[35][36][37][38] In contrast, we studied the long-term clinical outcomes in large cohort of patients treated within a defined CCP considering risk assessment and long-term treatment strategies. 32 In accordance with previous investigations, patients with unprovoked VTE were found to carry a high risk of recurrence. 32 Both observational data [39][40][41] and interventional studies have also confirmed the predictive value of elevated D-dimer 1 month after cessation of treatment for VTE recurrence. 42 Men had a higher risk of recurrence than women, in our setting as well as in others. 43,44 In contrast to earlier cohorts, risk of recurrence was very low in patients taking estrogen containing contraceptives at the time of thrombosis. 45 This difference is most probably reflected by the increased awareness of the associated risk and the subsequent strict avoidance of estrogen-containing drugs in patients that have suffered from VTE. In line with previous data, high factor VIII was associated with recurrent thromboembolism. 46,47 In addition, inflammatory conditions, as well as elevated CRP were associated with VTE recurrence, at least in the univariate analysis. 48 RVO was not associated with recurrent VTE, 11 perhaps because the duration of anticoagulation was tailored according to the presence of RVO in this CCP.
We are, however, faced with some limitations. The number of patients and recurrent events were limited, resulting in imprecise estimations for some of the more infrequent predictor variables or those with a smaller effect. This effect was intensified by a relevant number of missing values with regard to the variables not assessed at baseline. However, we did not find apparent discrepancies in sensitivity analyses (eg, after multiple imputation). Another limitation is that only very few patients with active cancer were included in our cohort. Thus, the results of our investigation cannot be extended to this specific population. 49 Also concerning the efficacy of the management strategy some limitations have to be considered. We were not able to record bleeding events due to organizational reasons. Therefore, our conclusions are limited to the efficacy of the CCP. Moreover, we did not implement a formal assessment of the compliance with the CCP. The risk of relevant protocol-deviations is however estimated to be low because the duration of anticoagulation fits well with the risk categories (85% of patients with provoked DVT were treated three to six months, 94% of patients with unprovoked DVT were treated six to 12 month, and 92% of high-risk patients were treated indefinitely).
In this study, as in many others, 7,8,10,50-55 we did not formally assess the inter-observer agreement with regard to RVO. We cannot fully exclude that this might have introduced a certain variability in the results, obscuring a possible effect on clinical outcomes. However, this reflects the clinical practice of a clinical care pathway. Another limitation is that we did not record whether the recurrent event was provoked or unprovoked.
Our investigation has several strengths. Firstly, it is one of the first studies investigating the clinical outcomes of DVT patients treated within a CCP and the first study investigating a management strategy incorporating RVO in clinical practice. Secondly, we have conducted a long-term follow-up, facilitating long-term predictions and counseling of patients.
CCPs aim to translate evidence-based medicine into clinical practice, improve collaboration among multiple specialized caregivers, and standardize health-care procedures. In addition, the structured management and follow-up of a CCP allows for the prospective collection of data on clinical outcomes. Risk factors for VTE recurrence were unprovoked VTE, male sex, elevated D-dimer, as well as factor VIII one month after cessation of treatment, and inflammation.
FIGURE 2 Cumulative incidence of VTE recurrence in patients managed in a clinical care pathway incorporating the presence of residual vein obstruction. The overall incidence rate was 2.9 per 100 patient-years.
RELATIONSHIP DISCLOSURE
None of the authors have any disclosures relevant to this paper. TABLE 3 Hazard ratios of recurrent VTE by risk factors | 2018-08-06T13:39:54.202Z | 2018-02-03T00:00:00.000 | {
"year": 2018,
"sha1": "d39b908902f6255be9e83a1a5f1bd73b2341a0ea",
"oa_license": "CCBYNCND",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/rth2.12079",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d39b908902f6255be9e83a1a5f1bd73b2341a0ea",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
219690315 | pes2o/s2orc | v3-fos-license | An Improved Faster R-CNN for High-Speed Railway Dropper Detection
Overhead contact systems (OCSs) are the power supply facility of high-speed trains and plays a vital role in the operation of high-speed trains. The dropper is an important guarantee for the suspension system of the OCS. Faults of the dropper, such as slack and breakage, can cause a certain threat to the power supply system. How to use artificial intelligence technologies to detect faults is an urgent technical problem to be solved. Because droppers are very small in whole images, a feasible solution to the problem is to identify and locate the droppers first, then segment them, and then identify the fault type of the segmented droppers. This paper proposes an improved Faster R-CNN algorithm that can accurately identify and locate droppers. The innovations of the method consist of two parts. First, a balanced attention feature pyramid network (BA-FPN) is used to predict the detection anchor. Based on the attention mechanism, BA-FPN performs feature fusion on feature maps of different levels of the feature pyramid network to balance the original features of each layer. After that, a center-point rectangle loss (CR Loss) is designed as the bounding box regression loss function of Faster R-CNN. Through a center-point rectangle penalty term, the anchor box quickly moves closer to the ground-truth box during the training process. We validate the improved Faster R-CNN through extensive experiments on the VOC 2012 and MSCOCO 2014 datasets. Experimental results prove the effectiveness of the proposed network combined with attention feature fusion and center-point rectangle loss. On the OCS dataset, the accuracy using the combination of the improved Faster R-CNN and ResNet-101 reached 86.8% mAP@0.5 and 83.9% mAP@0.7, which was the best performance among all results.
I. INTRODUCTION
In recent years, high-speed railway transport has developed rapidly worldwide. The overhead contact system (OCS) is the key equipment for powering electric locomotives. The continuous operation of the OCS ensures the high-speed running of the train. The dropper is one of the important components in the chain suspension of the OCS, and the carrier cable is suspended on the OCS through the dropper.
Due to the open-air work all year round, the dropper is prone to breakdown. Once the dropper is loose or dropped, The associate editor coordinating the review of this manuscript and approving it for publication was Vivek Kumar Sehgal . it will have a great impact on the power supply system of the high-speed railway, threatening the normal operation of trains and the safety of passengers. At present, the railway system still relies on manually viewing video images acquired through the 2C system to find dropper faults. Because of the influence of various human factors, omissions or misjudgments can easily occur. Image processing is a method for replacing manpower for fault diagnosis of droppers, the first step of which is to use an efficient detector to detect and locate the dropper in the high-definition image. With the development of artificial intelligence, it is an urgent problem to realize the dropper detection method based on deep learning.
Convolutional neural networks (CNNs) can learn robust, deep feature representations of an image and have shown good performance in computer vision. From LeNet [1] to AlexNet [2], which won the ImageNet [3] competition in 2012, and then to VGGNet [4] and ResNet [5], CNNs have become deeper in pursuit of better performance. With the development of CNNs, increasingly powerful object detection algorithms have appeared one after another, such as the YOLO-series networks [6]-[8] and Faster R-CNN [9], which are widely used in the engineering field. Using object detection networks to accurately locate and identify droppers is of great significance for further research on dropper fault diagnosis. Therefore, the main purpose of this paper is to find a high-performance object detector.
However, the structure of the OCS components is complex and diverse, and the background is extremely complicated, which leads to poor feature representation of the dropper. Many non-target parts, such as wrist arms and wire rods, greatly affect the feature extraction of the dropper. Therefore, using a deep learning network to achieve accurate dropper identification requires a more efficient object detection framework. With the introduction of Faster R-CNN [9], the accuracy of detection was greatly improved. Faster R-CNN is widely used in computer vision tasks in the engineering field and can solve the detection problem of small objects of different sizes. Due to the abundance of semantic information, the deep layers of feature extraction networks play an important role in the classification stage, while the lower layers, with more detailed information and content description, are easily ignored. Thus, the feature fusion of FPN [10] is of great significance to the performance improvement of object detection tasks. For example, PANet [11] enhances the feature pyramid through a bottom-up path, which can obtain more accurate positioning information from low-level features. In addition, the attention mechanism focuses information on key parts of the image and shows good performance in image classification and object detection tasks.
In this paper, to address the problem of dropper detection, we propose an improved Faster R-CNN with two innovations. The first is a balanced attention feature pyramid network (BA-FPN), proposed to obtain a fused feature from the multilevel feature maps. Specifically, by relying on an integrated semantic feature map to balance the original features of each layer of the pyramid, each resolution in the feature pyramid can obtain equal information from the other layers. The image-information imbalance problem of FPN [10] is thus mitigated by a better fusion of shallow detailed information and deep semantic information. In addition, based on the attention mechanism, a new network module named the ''mixed attention block'' is designed to act on the integrated semantic feature map. By acquiring channel- and spatial-wise attention, the mixed attention block reduces information redundancy and extracts more useful image features. The second innovation is a center-point rectangle loss (CR loss), proposed to accelerate convergence and improve the accuracy of the model. In CR loss, we add a center-point rectangle penalty term to the coordinate regression loss function. The vertices of the center-point rectangle consist of the center points of the ground-truth box and the anchor box. By optimizing the area of the rectangle, the center distance between the anchor box and the ground-truth box is directly minimized, which provides a moving direction for the bounding box and accelerates convergence. In summary, the contributions of this paper are as follows:
1) We propose BA-FPN, a feature pyramid model based on an attention mechanism, which can better extract useful features.
2) We propose a center-point rectangle loss function, which uses a center-point rectangle penalty term to accelerate convergence.
3) We use the improved Faster R-CNN as the basic object detection network and validate the proposed method on VOC 2012 [12], MSCOCO 2014 [13] and our OCS datasets. Our method achieves state-of-the-art performance.
The remainder of this paper is organized as follows. Section II reviews recent research on engineering applications of OCSs and the development of detection tasks in the computer vision field. The dropper detection method proposed in this paper is described in Section III. Section IV presents the experimental datasets and parameter settings, and the experimental results are analyzed in detail. The relevant conclusions are given in Section V.
II. RELATED WORK
A. THE OCS ANALYSIS AND DROPPER DETECTION
The OCS is an important part of the electrified railway system and is responsible for transferring the electric energy in the traction network to the electric locomotive. The specific structure of the OCS is shown in Figure 1. There are complex mechanical and electrical interactions between the pantograph and the catenary device. The vibration and impact generated by the long-term operation of trains will inevitably cause failures of the catenary support device, such as missing fasteners and breakage of the load-bearing cable, which can seriously affect train operation. In recent years, researchers have attempted to use image processing methods to detect the key components of the OCS. Karakose et al. [14] proposed a new approach using image-processing-based tracking to diagnose faults in the pantograph-catenary system. Liu et al. [15] proposed a unified deep learning architecture for the detection of all catenary support components. Qu et al. [16] used a genetic optimization method based on an Adadelta deep neural network to predict the comprehensive monitoring status of the pantograph and catenary. Zhong et al. [17] introduced a CNN-based defect inspection method to detect catenary split pins in high-speed railways.
This paper focuses on the dropper detection of the OCS. The dropper is one of the important components in the catenary suspension and is of great significance to the normal operation of trains. Similar to the detection of other parts, dropper detection also suffers interference from the noise in the background of the complex OCS images. In addition, the main body of the dropper is filamentous and very small in the image, which creates difficulties in feature extraction. Several years ago, Petitjean et al. [18], [19] introduced an original system for the automatic detection of droppers in the catenary, which used prior knowledge to obtain the location of the dropper. With the advancement of computer vision technology, Xu [20] used a Faster R-CNN to locate dropper images and then used the Hough transform to recognize dropper faults. Liu et al. [21] proposed a deep learning method based on depthwise separable convolution for dropper detection. To address the impact of image complexity, we propose an attention-based feature fusion method combined with a high-precision Faster R-CNN network to form an effective object detector and realize dropper detection in complex backgrounds.
B. OBJECT DETECTION NETWORK
With the development of CNNs, image processing and object detection technology have shifted from traditional machine learning methods to deep learning. Girshick et al. [22] proposed R-CNN based on region proposals, which made two-stage object detection a mainstream detection approach. He et al. [23] used SPPNet to effectively solve the problem of the computational redundancy of candidate regions. On the basis of R-CNN [22] and SPPNet [23], Fast R-CNN [24] realized a multitask learning method by simultaneously training object classification and bounding-box regression. Immediately afterward, Ren et al. [9] proposed a region proposal network in Faster R-CNN to fuse the region proposal with CNN classification and realized a complete end-to-end CNN object detection model. After that, Cascade R-CNN [25] expanded Faster R-CNN [9] into a multistage detector through a powerful cascade structure. Lin et al. [10] proposed the feature pyramid network (FPN), which lets detection heads attached to different levels of the network detect objects of different scales. FPN [10] has now become a basic component in many detectors. In the path aggregation network proposed by Liu et al. [11], a bottom-up path augmentation structure was introduced to fuse FPN features and make full use of the features of the shallow layers.
A one-stage detection model obtains the final detection result directly in a single pass and therefore has a fast detection speed. YOLO [6] was the first one-stage detection algorithm, which directly obtained the position of the bounding box and the class of the object through a single convolutional neural network. Liu et al. [26] proposed the SSD algorithm, which absorbed the advantages of YOLO's speed and the precise positioning of the RPN [9]. SSD [26] adopted the multi-reference-window technique of the RPN and detected objects on multiple feature maps with different resolutions. To improve the detection accuracy of one-stage methods, Lin et al. [27] proposed the ''focal loss'' to modify the traditional cross-entropy loss function, which greatly improved the detection precision. The high precision of many detectors relies on dense anchor strategies, resulting in a large number of redundant anchor boxes and a serious imbalance between positive and negative samples. To solve this problem, Wang et al. [28] proposed GA-RPN, which predicts the position and shape of the anchor to generate sparse and arbitrarily shaped anchors.
At present, object detection technology based on deep learning is also gradually being used in various fields. Chen et al. [29] applied an attention mechanism to ship detection in satellite images. Cao et al. [30] designed an improved Faster R-CNN for small object detection. In the field of railway engineering, Wei et al. [31] used Faster R-CNN to detect railway track fasteners. Juan et al. [32] proposed FB-NET, a deep-learning-based detector for the shape of railways and dangerous obstacles. In addition, He et al. [33] combined SSD and Faster R-CNN to detect foreign matter in high-speed trains.
C. ATTENTION MECHANISM
The attention mechanism essentially imitates the way that humans observe objects. In recent years, most research on combining deep learning with visual attention mechanisms has focused on the use of masks: an attention mechanism is formed by assigning weights to network layers so as to identify the key features of the image. Wang et al. [34] introduced a residual attention network using a trunk-and-mask attention mechanism. The trunk branch is similar to a traditional convolutional network, extracting features through multiple convolution operations; the mask branch is an encoder-decoder model that outputs the attention weights. Fu et al. [35] proposed RA-CNN, which combines region determination with fine-grained feature extraction: a region with a dense distribution of important features can be used as a key recognition region for further accurate judgment, promoting feature extraction. Hu et al. [36] designed a squeeze-and-excitation block to explore the relationship between channels, which calculates the attention weight of each channel through a global pooling operation. Woo et al. [37] proposed the convolutional block attention module, which, in addition to the channel attention weights, also adds a spatial attention branch.
In different visual tasks, the attention mechanism has been applied accordingly. Ling et al. [38] proposed a self-residual attention network for deep face recognition. In the image translation task, a channel attention network was designed by Sun et al. [39], with which the original function in the encoder and the conversion function in the decoder can be better integrated. In addition, Liu et al. [40] proposed a spatiotemporal attention module for video action recognition. Gao et al. [41] introduced a residual attention mechanism into a single-convolutional-layer object tracking network to avoid data imbalance.
III. OUR PROPOSED METHODS
To improve the performance of dropper detection, we develop an improved Faster R-CNN network. The architecture of the improved Faster R-CNN is shown in Figure 2. The proposed method contains two aspects: a balanced attention feature pyramid network (BA-FPN) and a center-point rectangle loss (CR loss).
The BA-FPN model balances the original features of each layer by relying on an integrated semantic feature map. First, the feature maps of the different levels of the feature pyramid are fused into an integrated semantic feature map. Then, we use the mixed attention block to extract the channel and spatial attention of the integrated feature map, which in turn acts on the integrated semantic feature map to generate an attention map. We combine the attention map with the feature maps of the pyramid to balance the original features. CR loss is an optimized bounding-box regression loss function: based on the regression of the prediction box vertices, we add a rectangular-area penalty term to the function. The two diagonal vertices of the rectangle are composed of the center points of the predicted anchor box and the ground-truth box. By optimizing the rectangle penalty term, the convergence of the loss is accelerated and the accuracy is improved. In Section A, we introduce the feature extractor used in the proposed method. In Section B, we review the structure of the FPN and introduce the BA-FPN model in detail. In Section C, the proposed CR loss function is stated. Section D describes the generation process of the predicted bounding box.
A. FEATURE EXTRACTOR
It is important to select a high-performance convolutional neural network for the performance of the detection model. The depth and parameter settings of the feature extraction network directly affect the performance of the proposed method. A deep network can generate a feature map with rich semantic information, which is useful for achieving better feature pyramid fusion.
In this paper, we choose ResNet as the basic feature extractor of the proposed method. Instead of attempting to learn the mapping between the input and output directly, as in VGGNet, ResNet learns the residual between the input and output by using multiple residual blocks. The residual block is shown in Figure 3. Learning residuals is much easier than directly learning the input-output mapping, as demonstrated by extensive experiments.
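As an illustrative sketch (assuming a standard basic block; the exact ResNet variant and layer widths used in the experiments may differ), a residual block can be written in PyTorch as:

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Basic residual block: output = ReLU(F(x) + x).

    The block learns the residual F(x) instead of the full
    input-to-output mapping, as described above.
    """

    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        residual = self.relu(self.bn1(self.conv1(x)))
        residual = self.bn2(self.conv2(residual))
        return self.relu(residual + x)  # identity shortcut
```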
In the experiments, we used models pretrained on ImageNet [3] to initialize the ResNet parameters.
B. BALANCED ATTENTION FPN
There are objects of different sizes in an image, and different objects have different characteristics: simple objects can be distinguished by shallow features, while complex objects require deep features. The emergence of the FPN solves this problem to some extent. FPN is an enhancement of the image-information expression output by traditional CNN networks and can be flexibly applied to different tasks. Figure 4 demonstrates the overall architecture of the FPN. FPN efficiently computes strong features through the hierarchical structure of the CNN. By combining bottom-up and top-down pathways, FPN obtains strong semantic features, improving the performance of object detection and semantic segmentation on multiple datasets. For small objects, FPN can utilize the high-level semantic information after the top-down pass, which increases the resolution of the feature map; operating on a larger feature map yields more useful information about small objects.
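For reference, a minimal PyTorch sketch of the standard FPN top-down pathway described above is given below; the backbone channel counts are assumptions matching a typical ResNet.

```python
import torch.nn as nn
import torch.nn.functional as F

class FPN(nn.Module):
    """Standard FPN top-down pathway: lateral 1x1 convolutions plus
    upsample-and-add, producing {P2..P5} from backbone maps {C2..C5}."""

    def __init__(self, in_channels=(256, 512, 1024, 2048), out_channels=256):
        super().__init__()
        self.lateral = nn.ModuleList([nn.Conv2d(c, out_channels, 1) for c in in_channels])
        self.smooth = nn.ModuleList([nn.Conv2d(out_channels, out_channels, 3, padding=1)
                                     for _ in in_channels])

    def forward(self, c_levels):
        # Lateral projections of C2..C5 to a common channel width.
        laterals = [lat(c) for lat, c in zip(self.lateral, c_levels)]
        # Top-down: upsample the coarser map and add it to the finer one.
        for i in range(len(laterals) - 2, -1, -1):
            laterals[i] = laterals[i] + F.interpolate(
                laterals[i + 1], size=laterals[i].shape[-2:], mode="nearest")
        return [s(l) for s, l in zip(self.smooth, laterals)]
```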
However, in FPN, the semantic information contained in nonadjacent layers will be diluted in the information fusion process, resulting in information fusion imbalances of different scales. On the basis of FPN, BA-FPN fuses the feature maps of each level into an integrated semantic feature map, which in turn acts on the maps of the corresponding scales to balance the differences between the levels and enhance useful feature expression. The general framework of BA-FPN is shown in Figure 5.
Assuming the number of layers in the feature pyramid is L, the outputs of Conv2, Conv3, Conv4 and Conv5 are adopted here, denoted as {C_2, C_3, C_4, C_5}. To integrate features of different levels and retain their semantic information, the features of the different levels {C_2, C_3, C_4, C_5} are first rescaled to the size of C_4 through interpolation or max-pooling, yielding {F_2, F_3, F_4, F_5}. After that, the integrated semantic feature map F_b is obtained by calculating the mean value of {F_2, F_3, F_4, F_5}:

$$F_b = \frac{1}{L}\sum_{l=2}^{5} F_l. \quad (1)$$

To reduce the information redundancy of the balanced semantic features and further enhance useful feature expression, we design a mixed attention block (MA block) based on an attention mechanism, including a channel attention branch and a spatial attention branch. The structure of the MA block is shown in Figure 6. The feature representation of the balanced semantic feature can be enhanced effectively by extracting channel- and spatial-wise attention. Thus, the output of the MA block focuses on the most significant components of the information.
We take the integrated semantic feature map $F_b$ as the input of the MA block, where $F_b \in \mathbb{R}^{C\times H\times W}$. By calculating the channel attention branch and the spatial attention branch simultaneously, the corresponding attention maps are generated. In the channel attention branch, we aggregate the spatial information of $F_b$ through an average-pooling operation to generate the spatial context descriptor $F^c_{avg} \in \mathbb{R}^{C\times 1\times 1}$, which generates a channel attention map $M_c \in \mathbb{R}^{C\times 1\times 1}$ through a multilayer perceptron (MLP). The hidden layer size of the MLP is set to $\mathbb{R}^{C/r\times 1\times 1}$, where r is the reduction ratio. Additionally, in the spatial attention branch, channel information is aggregated by an average-pooling operation along the channel axis to generate a feature descriptor $F^s_{avg} \in \mathbb{R}^{1\times H\times W}$. Then, a convolutional layer is applied to $F^s_{avg}$ to produce a spatial attention map $M_s \in \mathbb{R}^{1\times H\times W}$. The overall attention process can be summarized as

$$M_c = \sigma\big(W_1(W_0(F^c_{avg}))\big), \quad F^c_{avg} = \mathrm{AvgPool1}(F_b), \quad (2)$$
$$M_s = \sigma\big(f^{7\times 7}(F^s_{avg})\big), \quad F^s_{avg} = \mathrm{AvgPool2}(F_b), \quad (3)$$

where σ denotes the sigmoid function, $W_0 \in \mathbb{R}^{C/r\times C}$ and $W_1 \in \mathbb{R}^{C\times C/r}$ are the weight parameters of the MLP in the channel attention branch, $f^{7\times 7}$ represents a convolution operation with a 7 × 7 kernel in the spatial attention branch, and AvgPool1 and AvgPool2 are the channel- and spatial-wise global average-pooling operations, respectively.
After the above operations, we obtain the attention maps $M_c$ and $M_s$ acting on $F_b$. At the end of the MA block, the final refined attention feature map A is obtained as

$$A = (1 + M_c \otimes M_s) \otimes F_b, \quad (4)$$

where ⊗ denotes elementwise multiplication. Considering that $M_c \otimes M_s$ belongs to [0, 1], multiplying it directly by $F_b$ would weaken the output response of the feature map; using $1 + M_c \otimes M_s$ avoids this problem.
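A minimal PyTorch sketch of the MA block, following Eqs. (2)-(4), might read as follows; the module and variable names are ours, and the reduction ratio default is an assumption.

```python
import torch
import torch.nn as nn

class MixedAttentionBlock(nn.Module):
    """Channel + spatial attention acting on the integrated feature map F_b."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # Channel branch: global average pool -> MLP (C -> C/r -> C).
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )
        # Spatial branch: channel-wise average pool -> 7x7 convolution.
        self.conv = nn.Conv2d(1, 1, kernel_size=7, padding=3)

    def forward(self, f_b: torch.Tensor) -> torch.Tensor:
        n, c, h, w = f_b.shape
        # M_c = sigmoid(MLP(AvgPool1(F_b))), shape (N, C, 1, 1).
        m_c = torch.sigmoid(self.mlp(f_b.mean(dim=(2, 3)))).view(n, c, 1, 1)
        # M_s = sigmoid(f7x7(AvgPool2(F_b))), shape (N, 1, H, W).
        m_s = torch.sigmoid(self.conv(f_b.mean(dim=1, keepdim=True)))
        # A = (1 + M_c x M_s) x F_b avoids suppressing the response.
        return (1.0 + m_c * m_s) * f_b
```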
To feed the balanced semantic feature information back to each level, the output A of the MA block is rescaled to the size of each level of {C_2, C_3, C_4, C_5}, yielding {A_2, A_3, A_4, A_5}, which are then added to {C_2, C_3, C_4, C_5} to obtain {P_2, P_3, P_4, P_5}. The process is expressed as follows:

$$P_l = C_l + A_l, \quad l = 2, 3, 4, 5. \quad (5)$$

Compared with {C_2, C_3, C_4, C_5}, {P_2, P_3, P_4, P_5} balances the differences among the layers and enhances the original feature of each layer. For the subsequent object detection, the following processing of the model is the same as in FPN.
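Putting the pieces together, the BA-FPN balancing step of Eqs. (1) and (5) can be sketched as follows, reusing the `MixedAttentionBlock` above. For brevity, nearest-neighbor interpolation is used for both up- and down-scaling, whereas the text also mentions max-pooling for downsizing.

```python
import torch.nn.functional as F

def balance_features(c_levels, ma_block):
    """BA-FPN balancing: {C_l} -> {P_l = C_l + A_l}.

    c_levels: list of feature maps [C2, C3, C4, C5], all with the
    same channel count; the third entry (C4) sets the reference size.
    """
    ref_size = c_levels[2].shape[-2:]  # spatial size of C4
    # Resize every level to the reference size.
    resized = [F.interpolate(c, size=ref_size, mode="nearest") for c in c_levels]
    # Integrated semantic feature map: the mean over levels (Eq. 1).
    f_b = sum(resized) / len(resized)
    # Refined attention map from the MA block.
    a = ma_block(f_b)
    # Rebroadcast A to each level and add it to the original feature (Eq. 5).
    return [c + F.interpolate(a, size=c.shape[-2:], mode="nearest")
            for c in c_levels]
```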
C. CENTER-POINT RECTANGLE LOSS
From L1 loss and L2 loss to the proposal of the smooth L1 loss, the optimization of the regression loss has made the training process increasingly efficient. When the predicted value differs greatly from the target value, the gradient of the L2 loss grows with the error, which is prone to gradient explosion, while the gradient of the L1 loss is constant. At present, in the Faster R-CNN object detection network, the smooth L1 loss is generally used as the loss function for bounding-box regression: when the predicted value differs greatly from the target value, gradient explosion is prevented by switching from the L2 to the L1 behavior. The loss function of the original Faster R-CNN is expressed as follows:

$$L(\{p_i\},\{t_i\}) = \frac{1}{N_{cls}}\sum_i L_{cls}(p_i, p_i^*) + \lambda\,\frac{1}{N_{reg}}\sum_i p_i^*\, L_{reg}(t_i, t_i^*), \quad (6)$$

where i is the index of the predicted anchor box, and $p_i$ represents the predicted probability of the i-th anchor box. $p_i^*$ is the label of the i-th ground-truth box: if the anchor is a positive sample, the value of $p_i^*$ is 1; otherwise, it is 0. $t_i$ and $t_i^*$ are the coordinate vectors of the predicted anchor box and ground-truth box, respectively. λ is the coefficient used to balance the regression loss and the classification loss, and was set to 1 in the experiments. $N_{cls}$ and $N_{reg}$ are the normalization parameters, weighted by λ. $L_{reg}$ denotes the basic regression loss function (smooth L1 loss),
$$L_{reg}(t_i, t_i^*) = S_{L1}(t_i - t_i^*), \quad (7)$$

where

$$S_{L1}(x) = \begin{cases} 0.5x^2, & |x| < 1 \\ |x| - 0.5, & |x| \ge 1 \end{cases} \quad (8)$$

The smooth L1 loss performs excellently in the Faster R-CNN network. This paper attempts to optimize the loss function by shortening the spatial distance between the predicted anchor box and the ground-truth box. In the DIoU loss function, Zheng et al. [42] rapidly reduced the distance between the predicted anchor box and the ground-truth box by adding a center-distance penalty term to the IoU loss. In this paper, the center-point rectangle loss (CR loss) is designed based on the smooth L1 loss function. We add a center-point rectangle term to L. The vertices of the center-point rectangle consist of the central points of the ground-truth box and the predicted anchor box. By optimizing the rectangular area, the distance between the two center points is directly minimized so that the anchor box quickly moves closer to the ground-truth box. As shown in Figure 7, our goal is to reduce the area of the rectangular box enclosed by the red dotted line. The formula of the CR loss function is defined as follows:

$$L_{CR}(t_i, t_i^*) = S_{L1}(t_i - t_i^*) + \frac{\mathrm{Area}\big(R(b_i, b_i^{gt})\big)}{\mathrm{Area}(R_i)}, \quad (9)$$

where $b_i$ and $b_i^{gt}$ are the center points of the anchor box and the ground-truth box, $R(b_i, b_i^{gt})$ is the center-point rectangle, and $R_i$ represents the smallest rectangular box that can just contain both the anchor box and the ground-truth box. We replace $S_{L1}(t_i - t_i^*)$ with $L_{CR}(t_i, t_i^*)$ in the total loss function. In the experiments, the proposed loss function is proven to be effective.
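A sketch of the CR loss in PyTorch follows. Note that the normalization of the penalty term by the area of the smallest enclosing rectangle is our reading of Eq. (9), by analogy with DIoU, rather than a form confirmed beyond the definitions above.

```python
import torch

def smooth_l1(x: torch.Tensor) -> torch.Tensor:
    """Piecewise smooth-L1 of Eq. (8)."""
    return torch.where(x.abs() < 1.0, 0.5 * x * x, x.abs() - 0.5)

def cr_loss(pred_boxes: torch.Tensor, gt_boxes: torch.Tensor) -> torch.Tensor:
    """Center-point rectangle loss: smooth-L1 plus a normalized
    center-rectangle penalty. Boxes are (x1, y1, x2, y2) tensors of
    shape (N, 4)."""
    reg = smooth_l1(pred_boxes - gt_boxes).sum(dim=1)
    # Center points of the predicted and ground-truth boxes.
    pc = (pred_boxes[:, :2] + pred_boxes[:, 2:]) / 2.0
    gc = (gt_boxes[:, :2] + gt_boxes[:, 2:]) / 2.0
    # Area of the center-point rectangle R(b, b_gt).
    center_rect = (pc - gc).abs()
    area_r = center_rect[:, 0] * center_rect[:, 1]
    # Smallest rectangle enclosing both boxes (assumed normalization).
    tl = torch.min(pred_boxes[:, :2], gt_boxes[:, :2])
    br = torch.max(pred_boxes[:, 2:], gt_boxes[:, 2:])
    wh = (br - tl).clamp(min=1e-6)
    area_enclose = wh[:, 0] * wh[:, 1]
    return (reg + area_r / area_enclose).mean()
```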
D. DETECTION BOUNDING BOX GENERATION
Multilevel feature maps output by BA-FPN are used as the inputs of the RPN, whose structure is shown in Figure 8. An n × n sliding window moves over the shared convolutional feature map, with a maximum of k anchor boxes generated at each position. After a 3 × 3 convolution operation, the feature map enters the regression layer and the classification layer. The regression layer and classification layer produce 4k and 2k outputs, which represent the coordinate values of the corresponding candidate regions and the probability of whether the area is foreground.
The loss functions of the regression layer and the classification layer are the CR loss and the cross-entropy loss, respectively. The total loss function is defined as in Eq. (6), with $L_{reg}$ replaced by $L_{CR}$. Then, anchor boxes selected by NMS are output to train the Fast R-CNN. The position information output by the RPN is mapped to the original feature map to obtain the corresponding region proposals. These region proposals generate feature maps of size 7 × 7 through RoI pooling, which are then sent to the fully connected layer and softmax layer for the subsequent classification operation. Additionally, the regression operation is applied again to refine the region proposals and obtain more accurate object anchor boxes.
IV. EXPERIMENTS
To validate the effectiveness of the proposed method, we first test the improved Faster R-CNN on VOC 2012 [12] and MSCOCO 2014 [13]. The results show that the proposed method yields a significant performance improvement. Then, we apply the method to our OCS dataset and compare its performance with the experimental results of SSD [26] and RetinaNet [27]. In this section, we introduce the datasets used in the experiments and the implementation details. After that, the method is thoroughly tested on the different datasets, and the results are presented. Finally, we conduct a detailed analysis of the experimental results.
A. DATASET
In the experiments, VOC 2012 and MSCOCO 2014 are used as validation datasets for the performance of the method. Specifically, VOC 2012 has 20 object categories and contains 5,717 images for training and 5,823 images for validation. MSCOCO 2014 is another well-known object detection dataset with 80 object categories, containing 82,783 images for training and 40,504 images for validation.
In this paper, 1,465 high-resolution OCS images are selected from the high-speed rail 2C system for engineering tests. Each OCS image contains several to dozens of dropper objects. We annotate them in VOC format to perform the dropper recognition experiments. The training set contains 1,172 images, and the test set contains 293 images.
B. IMPLEMENTATION DETAILS
1) TRAINING DETAILS
In the validation phase, we used Faster R-CNN as the basic detector and ResNet [5] as the feature extraction network to carry out experiments on the proposed method. On the VOC 2012 dataset, we trained the detector for 20 epochs with an initial learning rate of 0.01 and used stochastic gradient descent (SGD) with a momentum of 0.9 and a weight decay of 0.0001. On the MSCOCO 2014 dataset, except that the number of epochs was set to 12, the settings were the same as for VOC 2012.
In the test phase of dropper detection, we tested several detectors on the OCS dataset, including our improved Faster R-CNN, SSD512 and RetinaNet. Faster R-CNN and RetinaNet use ResNet as the feature extraction network. We set the input sizes for training and testing to 1333 × 800 and 960 × 800, respectively, for Faster R-CNN; the other Faster R-CNN settings were the same as for VOC 2012. We trained RetinaNet for 20 epochs with an input size of 960 × 800, an initial learning rate of 0.01 and a weight decay of 0.0005. SSD512 was trained for 24 epochs with an initial learning rate of 0.001 and a weight decay of 0.0005.
The experimental environment was as follows: the PyTorch 1.1.0 deep learning framework, CentOS 7, and an NVIDIA Tesla P100 GPU.
2) METRICS
The classification and localization performance of the models in the object detection task need to be evaluated, and each image may contain different objects of different categories. We use mAP (mean average precision) to evaluate the accuracy of the method. The precision P and the recall R are defined as

$$P = \frac{TP}{TP + FP}, \qquad R = \frac{TP}{TP + FN},$$

where the average precision (AP) is the area under the precision-recall curve and mAP is the mean of the AP over all categories. TP is the number of positive samples correctly classified as positive, FN is the number of positive samples incorrectly classified as negative, and FP is the number of negative samples incorrectly classified as positive. TP + FN is the number of all actual positive samples, and TP + FP is the total number of samples classified as positive. TP and FP are judged based on the IoU (intersection-over-union) threshold. The IoU calculation formula is as follows:

$$IoU = \frac{|A \cap B|}{|A \cup B|},$$

where A represents the ground-truth box and B represents the anchor predicted by the detection model. The initial IoU threshold was set to 0.5: if IoU > 0.5, the sample is counted as TP; otherwise, as FP.
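As a minimal sketch (not the authors' evaluation code), the IoU and the precision/recall quantities defined above can be computed as follows; the example boxes are arbitrary.

```python
import numpy as np

def iou(box_a: np.ndarray, box_b: np.ndarray) -> float:
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def precision_recall(tp: int, fp: int, fn: int) -> tuple:
    """P = TP / (TP + FP), R = TP / (TP + FN)."""
    return tp / (tp + fp), tp / (tp + fn)

# A detection counts as TP when its IoU with a ground-truth box
# exceeds the threshold (0.5 here); otherwise it counts as FP.
print(iou(np.array([0, 0, 10, 10]), np.array([1, 1, 11, 11])) > 0.5)  # True
```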
C. EXPERIMENTAL RESULTS AND ANALYSIS
In the performance experiment on the VOC 2012 dataset, we used Faster R-CNN as the basic detector and ResNet as the feature extraction network to evaluate the proposed model. A total of 5,717 images were used to train the model, and 5,823 images were used for testing. First, to verify the effectiveness of BA-FPN, detection results for selected small targets are shown in Table 2; the detection results for small targets improved considerably. Compared with FPN, the experimental results of BA-FPN showed a clear performance improvement, indicating the effectiveness of the attention mechanism in FPN feature fusion. Table 3 shows the performance of CR loss on the VOC 2012 dataset. First, in the absence of BA-FPN, we compared the detection results of the original smooth L1 loss and CR loss. The mAP@0.5 of the model using CR loss was 0.3% and 0.4% higher than that of the model using smooth L1 loss on ResNet50 and ResNet101, respectively. Combining CR loss with BA-FPN, the performance of the detector improved further: ResNet50 with BA-FPN and CR loss increased mAP@0.5 by 1.5% to 72.9% compared with plain ResNet50, and ResNet101 with BA-FPN and CR loss increased mAP@0.5 by 1.2% to 74.4% compared with plain ResNet101.
To further verify the performance of the proposed method, we tested the model on the MSCOCO 2014 dataset. The MSCOCO 2014 dataset contains 80 object categories and more than 80,000 images for training, which tests the performance of the detector more thoroughly. In this paper, we used the training set for training and the val set for testing. The average mAP over IoU thresholds from 0.5 to 0.95 was used for evaluation. The experiment used the Faster R-CNN detector and tested it on ResNet. The purpose of this experiment was to examine the effect of the combination of BA-FPN and CR loss on the whole detection network, so any performance improvement proves its contribution. Table 4 shows the results. After testing on VOC 2012 and MSCOCO 2014, we carried out model testing on an engineering dataset for dropper detection. In this part, we chose three different detectors for comparative experiments: Faster R-CNN, RetinaNet and SSD. Considering that the resolution of the OCS dataset is high and the detection targets are small, we used SSD512 rather than the faster SSD300. The experimental performance of the different detectors is shown in Table 5. From Table 5, we learn that Faster R-CNN shows obvious advantages in test accuracy across the whole experiment, where ResNet101 with BA-FPN and CR loss achieved 86.8% mAP@0.5 and 83.9% mAP@0.7, respectively, reaching the best performance. ResNet50 combined with BA-FPN and CR loss also improved over plain ResNet50. RetinaNet performed best on ResNet101, reaching 78.8% mAP@0.5 and 72.7% mAP@0.7. Compared with Faster R-CNN and RetinaNet, the input size of SSD is 512 × 512; SSD was faster than the other detectors but performed poorly in accuracy, achieving only 67.6% mAP@0.5.
To further illustrate the good performance of the proposed method in the dropper detection task, we trained the different detection models on the OCS dataset and tested two input images from the dataset for performance verification. Figure 9 shows the detection results of the different detectors. The visualization results show that the Faster R-CNN with BA-FPN and CR loss had the best detection effect, significantly better than SSD512 and RetinaNet, and slightly better than the unimproved Faster R-CNN. The results also show the feasibility of the proposed method in the engineering testing task for droppers.
According to the comprehensive analysis, the OCS dataset used in this experiment for the engineering detection of high-speed railways consists of ultra-high-definition images, and the detection targets are very small, which requires a more efficient and detailed object detection network. On the basis of the experimental results in Table 5 and Figure 9, Faster R-CNN shows great advantages in dropper recognition. On the premise that real-time detection is not required, Faster R-CNN becomes the preferred method for this project. BA-FPN and CR loss further improved the performance of Faster R-CNN in dropper detection.
V. CONCLUSION
This paper proposes an improved Faster R-CNN for OCS dropper detection, including the balanced attention feature pyramid network (BA-FPN) and the center-point rectangle loss (CR loss). First, we used an integrated semantic feature map to balance the original features of FPN and designed a mixed attention module to enhance the effective features by using an attention mechanism, making the feature fusion of different scales more efficient. Second, CR loss accelerates the convergence of the regression function by optimizing the area of the rectangle formed by the center points of the ground-truth box and the predicted anchor box. We carried out experiments on the VOC 2012 and MSCOCO 2014 datasets to verify the effectiveness of the proposed method and achieved great performance. In addition, compared with RetinaNet and SSD, the application experiment on the OCS dataset shows the effectiveness and feasibility of the proposed method in dropper detection, which lays a solid foundation for further dropper fault diagnosis. | 2020-06-11T09:04:28.658Z | 2020-06-08T00:00:00.000 | {
"year": 2020,
"sha1": "3b032734c18c7e931a6492bfb4aba2453ddbf95e",
"oa_license": "CCBY",
"oa_url": "https://ieeexplore.ieee.org/ielx7/6287639/8948470/09110596.pdf",
"oa_status": "GOLD",
"pdf_src": "IEEE",
"pdf_hash": "23408c9ca262b006dd3f8996e481bfc157f953d5",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
250493052 | pes2o/s2orc | v3-fos-license | Uncertainty Quantification for Neutrino Opacities in Core-Collapse Supernovae and Neutron Star Mergers
We perform an extensive study of the correlations between the neutrino-nucleon inverse mean free paths (IMFPs) and the underlying equations of state (EoSs). Strong-interaction uncertainties in the neutrino mean free path are investigated in different density regimes. The nucleon effective mass, the nucleon chemical potentials, and the residual interactions in the medium play an important role in determining neutrino-nucleon interactions in a density-dependent manner. We study how the above quantities are constrained by an EoS consistent with (i) nuclear mass measurements, (ii) proton-proton scattering phase shifts, and (iii) neutron star observations. We then study the uncertainties of both the charged-current and the neutral-current neutrino-nucleon inverse mean free paths due to the variation of these quantities, using the Hartree-Fock plus random phase approximation (HF+RPA) method. Finally, we calculate the Pearson correlation coefficients between (i) pairs of EoS-based quantities; (ii) EoS-based quantities and IMFPs; and (iii) pairs of IMFPs. We find a strong impact of residual interactions on neutrino opacity in the spin and spin-isospin channels, which are not well constrained by current nuclear models.
I. INTRODUCTION
More than 98% of the gravitational binding energy of proto-neutron stars (PNSs) is emitted in the form of neutrinos and anti-neutrinos produced by electron captures and proton decays during the explosion of core-collapse supernovae (CCSNe). The neutrino opacity of hot and dense matter plays an important role in the CCSNe explosion mechanism, particularly because the kinetic energy of the explosion is small compared with the total energy released [1][2][3]. It also heavily influences the nucleosynthesis process in the neutrino-driven wind (NDW) [4][5][6].
The neutrino reactions in CCSNe matter can be classified into two main types: (1) neutral-current (NC) neutrino interactions and (2) charged-current (CC) neutrino interactions. NC interactions are flavor-blind; consequently, the NC neutrino opacities are similar for the different flavors of neutrinos. On the other hand, the major source of CC neutrino opacity in CCSNe matter is the $\nu_e/\bar{\nu}_e$ absorption/emission reactions. The CC reactions involving $\nu_\mu$, $\nu_\tau$, $\bar{\nu}_\mu$ and $\bar{\nu}_\tau$ are suppressed by the mass of the heavy leptons µ and τ. The NC neutrino scattering reactions may influence the neutrino cooling rate, the proto-neutron star contraction speed in CCSNe, and the neutrino re-heating in the external layers of CCSNe [7]. The CC neutrino absorption/emission reactions determine the $\nu_e/\bar{\nu}_e$ spectrum and thus the electron fraction in the NDW [4][5][6]8].
Neutrino interaction rates in CCSNe matter are sensitive to the many-body correlations in dense and hot matter, as well as in the finite nuclei which still exist in the external layers of the CCSNe close to the neutrino sphere [9]. Close to the neutrino sphere, the neutrino mean free path is comparable to the size of the proto-neutron star, and the neutrino transport properties outside of this region can be well described by free streaming. Pioneering works on neutrino opacities in dense and hot matter use the Hartree-Fock (HF) or HF plus random phase approximation (HF+RPA) to estimate the many-body corrections to neutrino-nucleon interactions, in both the non-relativistic limit, see Refs. [10][11][12][13][14][15], and the relativistic limit, see Refs. [6,11,12].
In the long-wavelength limit [16], the many-body corrections to NC neutrino-nucleon interactions are solely determined by the equation of state (EoS). Recent progress on the description of NC neutrino-nucleon scattering rates includes works using a virial EoS to calculate the inverse mean free path (IMFP) near the CCSNe neutrino sphere in a model-independent way [16,17]. As the density increases, the virial expansion method gradually loses its predictive power because of our lack of understanding of the higher-order virial coefficients. Early efforts in calculating NC neutrino-nucleon interactions in the high-density regime include works using HF or HF+RPA approximations [11,12,[14][15][16][17]. Additionally, recent works based on lattice effective field theory (EFT) provide an ab-initio calculation of the static structure factor of neutron matter over a wide range of densities at finite temperatures [18,19]. The results from lattice EFT calculations agree with those based on the virial method in the low-density regime and may give insight into the many-body corrections to NC neutrino interactions in the high-density regime.
In CC neutrino-nucleon reactions, the transferred neutrino energy and momentum can be much larger than those in the NC reactions, since they are governed by the in-medium single-particle energy difference between the neutrons and the protons. This difference is determined by the symmetry energy, which is expected to be larger than 30 MeV in dense matter above saturation density [20]. Consequently, both the static and the dynamic response of the many-nucleon system are important for understanding the many-body corrections to CC neutrino interactions. To our knowledge, no model-independent description of CC reactions has been performed in the context of CCSNe. Early efforts calculating CC reactions include [9,11,12,21], where HF or HF+RPA approximations were applied.
As discussed in the pioneering works on neutrino-dense matter interactions based on HF+RPA calculations, see Refs. [10,12,13], the main source of uncertainty in the neutrino reaction rates is the poorly constrained density-dependent nuclear interactions. Indeed, the density dependence of the nuclear residual interactions was ignored in some early endeavors using HF+RPA to calculate neutrino-nucleon interaction rates, and the uncertainties of neutrino-nucleon interactions due to the uncertainties of nuclear interactions have not been systematically studied.
In this work, we apply HF+RPA to calculate the dynamic and static responses of nucleons to neutrinos in CC and NC reactions. We improve the description of the density-dependent residual interactions by deriving them from the virial model at low densities and from Skyrme models at higher densities, where they are expected to be constrained by astrophysical observations and nuclear experiments, and guided by fundamental theory. Note that we limit the densities explored in this study and consider the high-density limit to be 0.2 fm−3. In our approach, the uncertainties of the EoS-based quantities at high densities are captured by the Skyrme models and propagate into the calculation of the neutrino-nucleon reaction rates. In this work, we quantitatively study the uncertainties of the neutrino opacities due to the EoS uncertainties at different densities, in the framework of the HF+RPA approach. As pointed out by several CCSNe numerical simulations, the neutrino opacities in different density regimes are sensitive to different CCSNe physics. By studying the density-dependent correlations between the neutrino opacities and the underlying EoSs, the critical EoS-based quantities determining the neutrino opacities may be unveiled at different densities.
In section II, we introduce the formalism for both the CC and NC neutrino-nucleon reaction rates. In section III, we present the IMFPs and the dynamic responses of both CC and NC reactions. We also present the Pearson correlations between (1) two different EoS-based quantities; (2) EoS-based quantities and IMFPs; and (3) two IMFPs taken at different densities. Finally, we conclude this analysis in section IV.
II. FORMALISM AT FINITE TEMPERATURE
The differential cross section of the neutrino-nucleon reaction $l_1 + N_2 \to l_3 + N_4$ is given by

$$\frac{1}{V}\frac{d^2\sigma}{d\cos\theta\,dE_3} = \frac{G_F^2}{4\pi^2}\,E_3^2\,\Big[(1+\cos\theta)\,V^2\,S_V(q_0,q) + (3-\cos\theta)\,A^2\,S_A(q_0,q)\Big]\,F_{Pauli}, \quad (1)$$

where $l_{1/3}$ are the incoming/outgoing leptons, $N_{2/4}$ are the initial/final-state nucleons, and θ is the scattering angle. The vector and axial couplings V and A stand, in the NC reactions, for $C_V/2 = -0.50$ and $C_A/2 = -0.615$, respectively; in the CC reactions, they stand for $g_V = 1$ and $g_A = 1.23$, respectively. The Pauli blocking factor is $F_{Pauli} = 1 - f_{l_3}(E_3)$, where $f_{l_3}$ is the Fermi distribution of the final-state leptons; in NC reactions, we take $F_{Pauli} = 1$. In the mean-field approximation, the response functions $S_V$ and $S_A$ associated with the Fermi and Gamow-Teller operators are indistinguishable, and we have $S_A = S_V = S_0$. We first discuss the response function $S_0$ for the neutrino-nucleon neutral-current (NC) and charged-current (CC) reactions.
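As a sanity check on the kinematic factors, a direct numerical transcription of the reconstructed cross-section formula above might look as follows; the function name and all inputs in the example call are our own illustrative assumptions, with the response functions assumed to be supplied externally (e.g. from the mean-field or RPA expressions derived below).

```python
import numpy as np

G_F = 1.166e-11  # Fermi constant in MeV^-2 (natural units)

def d2sigma(E3, cos_theta, S_V, S_A, V=1.0, A=1.23, f3=0.0):
    """Differential cross section per unit volume, Eq. (1): prefactor
    G_F^2 E3^2 / (4 pi^2), angular weights (1 + cos) and (3 - cos)
    for the vector and axial responses, and final-state Pauli
    blocking (1 - f3)."""
    return (G_F**2 * E3**2 / (4.0 * np.pi**2)
            * ((1.0 + cos_theta) * V**2 * S_V
               + (3.0 - cos_theta) * A**2 * S_A)
            * (1.0 - f3))

# Illustrative CC-like call: E3 in MeV, S_V and S_A in MeV^-1 fm^-3.
print(d2sigma(E3=20.0, cos_theta=0.0, S_V=1e-4, S_A=1e-4))
```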
A. Mean Field Approximation for the Response functions
We start from the Hartree-Fock residual propagator $G^0_{\tau\tau'}$ at finite temperature, defined as [22]

$$G^0_{\tau\tau'}(\vec{k},\vec{q},q_0) = \frac{f_\tau(\vec{k}) - f_{\tau'}(\vec{k}+\vec{q})}{q_0 + \epsilon_\tau(\vec{k}) - \epsilon_{\tau'}(\vec{k}+\vec{q}) + i\eta}, \quad (2)$$

where the neutron and proton Fermi-Dirac distributions are

$$f_\tau(\vec{k}) = \Big[1 + e^{(\epsilon_\tau(\vec{k}) - \mu_\tau)/T}\Big]^{-1}, \quad (3)$$

with τ and τ′ both referring to either neutrons or protons. The quantities $\epsilon_\tau$ are the mean-field single-particle energies, see Eq. (6), and $\mu_\tau$ are the chemical potentials. Given the propagator, the imaginary part of the polarization function $\Pi^0$ in the mean-field approximation [11,22] is

$$\Pi^0(q_0,q) = 2\int\frac{d^3k}{(2\pi)^3}\,G^0_{\tau\tau'}(\vec{k},\vec{q},q_0),$$
$$\mathrm{Im}\,\Pi^0(q_0,q) = -2\pi\int\frac{d^3k}{(2\pi)^3}\,\big[f_\tau(\vec{k}) - f_{\tau'}(\vec{k}+\vec{q})\big]\,\delta\big(q_0 + \epsilon_\tau(\vec{k}) - \epsilon_{\tau'}(\vec{k}+\vec{q})\big). \quad (4)$$

The detailed balance theorem and $1/(\omega+i\eta) = \mathcal{P}(1/\omega) - i\pi\delta(\omega)$ are used to derive the second line of Eq. (4). In linear response theory, the dynamic structure factor $S_0(q_0,q)$ at finite temperature is defined as [5,13,22]

$$S_0(q_0,q) = -\frac{1}{\pi}\,\frac{\mathrm{Im}\,\Pi^0(q_0,q)}{1 - e^{-q_0/T}}. \quad (5)$$

In the non-relativistic limit of the mean-field approximation, the nucleon energy spectrum $\epsilon_\tau$ is given as the sum of an effective-kinetic and a mean-field term:

$$\epsilon_\tau(\vec{k}) = \frac{k^2}{2m^*_\tau} + U_\tau, \quad (6)$$

where $m^*_\tau$ is the Landau effective mass of the nucleon with isospin τ, and $U_\tau$ is the nucleon potential. Note that the energy delta function in $S(q_0,q)$ can be written in terms of the angle θ between $\vec{q}$ and $\vec{k}$:

$$\delta\big(q_0 + \epsilon_\tau(\vec{k}) - \epsilon_{\tau'}(\vec{k}+\vec{q})\big) = \frac{M^*_{\tau'}}{kq}\,\delta(\cos\theta - \cos\theta_0), \quad (7)$$

where

$$\cos\theta_0 = \frac{M^*_{\tau'}}{kq}\left(c + \chi\,\frac{k^2}{2M^*_\tau}\right), \quad (8)$$

and with $\chi = 1 - M^*_\tau/M^*_{\tau'}$ and $c = q_0 + U_\tau - U_{\tau'} - q^2/(2M^*_{\tau'})$. For charged-current (CC) reactions, τ ≠ τ′, and we focus in this work on $\nu_e + n \to p + e^-$, with τ = n and τ′ = p. By using the delta function (7), the $S(q_0,q)$ in the CC channel becomes

$$S(q_0,q) = \frac{M^*_\tau M^*_{\tau'}}{2\pi^2 q\,(1 - e^{-q_0/T})}\int_{e_-}^{\infty} d\epsilon\,\big[f_\tau(\epsilon) - f_{\tau'}(\epsilon + q_0)\big], \quad (9)$$

where ε denotes the kinetic energy of the initial nucleon, the distributions are evaluated with the full single-particle energies of Eq. (6), and $e_-$ is the minimal kinetic energy allowed by the condition $|\cos\theta_0| \le 1$. For neutral-current (NC) reactions, τ = τ′, and the NC $S(q_0,q)$ reduces to

$$S(q_0,q) = \frac{M^{*2}_\tau T}{2\pi^2 q\,(1 - e^{-q_0/T})}\,\ln\!\left[\frac{1 + e^{(\nu_\tau - e_-)/T}}{1 + e^{(\nu_\tau - e_- - q_0)/T}}\right], \quad (12)$$

with $\nu_\tau = \mu_\tau - U_\tau$. Note that in NC reactions χ → 0. By performing a Taylor expansion of the second term in Eq. (9), $e_-$ reduces to [11]

$$e_- = \frac{M^*_\tau}{2q^2}\,c^2. \quad (13)$$

B. HF+RPA Response Functions at finite temperature

To go beyond the Hartree-Fock approximation by including the long-range correlations, we calculate the HF+RPA residual propagator and solve the Bethe-Salpeter integral equation [22][23][24][25][26]:

$$\Pi^{\alpha\alpha'}(\vec{k},\vec{q},q_0) = \Pi^0\,\delta_{\alpha\alpha'} + \sum_{\alpha''}\int\frac{d^3k'}{(2\pi)^3}\,G^0(\vec{k},\vec{q},q_0)\,V^{\alpha,\alpha''}_{ph}(\vec{k},\vec{k}',\vec{q})\,\Pi^{\alpha''\alpha'}(\vec{k}',\vec{q},q_0). \quad (14)$$

In this equation, α and α′ are quantum numbers of residual pairs, e.g., in spin-isospin channels α = (S, T), $\vec{k}$ and $\vec{k}'$ are hole momenta, and $V^{\alpha,\alpha'}_{ph}(\vec{k},\vec{k}',\vec{q})$ is the residual interaction matrix element which describes the RPA collective excitations of the system built on a mean-field (Hartree-Fock) ground state.
Neglecting the possible mixing between spin/isospin channels, $V^{\alpha,\alpha'}_{ph} = \delta(\alpha,\alpha')\,V^\alpha_{ph}$, and we get the RPA polarization function as

$$\Pi^{RPA}_\alpha(q_0,q) = \frac{\Pi^0(q_0,q)}{1 - V^\alpha_{ph}\,\Pi^0(q_0,q)}. \quad (15)$$

In the next step, we discuss the residual interactions relevant more specifically to the NC and CC neutrino-nucleon reactions.
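For orientation, the chain from Eqs. (2)-(5) to the RPA dressing of Eq. (15) can be sketched numerically as below. This is a rough illustration rather than the production code behind the results of Sec. III, and every numerical input in the example call (effective mass, chemical potential, residual interaction strength) is an arbitrary assumption.

```python
import numpy as np

HBARC = 197.327  # MeV fm

def pi0(q0, q, mstar, mu, T, eta=0.5, nk=400, nth=200):
    """Mean-field polarization Pi0(q0, q), Eqs. (2)-(4):
    2 * integral d^3k/(2pi)^3 [f(k) - f(k+q)] / (q0 + e_k - e_{k+q} + i*eta).
    Momenta in 1/fm, energies in MeV; eta is a small regulator."""
    k = np.linspace(1e-3, 5.0, nk)                 # 1/fm
    ct = np.linspace(-1.0, 1.0, nth)               # cos(theta)
    K, CT = np.meshgrid(k, ct, indexing="ij")
    kq = np.sqrt(K**2 + q**2 + 2.0 * K * q * CT)   # |k + q|
    e_k = (HBARC * K) ** 2 / (2.0 * mstar)
    e_kq = (HBARC * kq) ** 2 / (2.0 * mstar)
    fermi = lambda e: 1.0 / (1.0 + np.exp((e - mu) / T))
    integrand = K**2 * (fermi(e_k) - fermi(e_kq)) / (q0 + e_k - e_kq + 1j * eta)
    # Azimuthal angle already integrated: factor 2*pi / (2*pi)^3 = 1/(4*pi^2).
    dk, dct = k[1] - k[0], ct[1] - ct[0]
    return 2.0 * integrand.sum() * dk * dct / (4.0 * np.pi**2)

def s_rpa(q0, q, vph, mstar, mu, T):
    """Structure factor with single-channel RPA dressing, Eqs. (5) and (15):
    S = -Im[Pi0 / (1 - Vph*Pi0)] / (pi * (1 - exp(-q0/T)))."""
    p0 = pi0(q0, q, mstar, mu, T)
    p_rpa = p0 / (1.0 - vph * p0)
    return -p_rpa.imag / (np.pi * (1.0 - np.exp(-q0 / T)))

# Illustrative NC-like call: neutrons, T = 10 MeV, repulsive Vph in MeV fm^3.
print(s_rpa(q0=5.0, q=0.3, vph=200.0, mstar=939.0, mu=-20.0, T=10.0))
```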
In the monopolar Landau approximation, where only the low-energy ℓ = 0 interaction is considered and the momenta are taken at the Fermi surface, the residual interaction reduces to the combination

$$V^\alpha_{ph}(\vec{k},\vec{k}',\vec{q}) = W^\alpha_1 + W^\alpha_{1R} + W^\alpha_2\big(k^2 + k'^2\big)\Big|_{k=k'=k_F},$$

where $W^\alpha_1$, $W^\alpha_{1R}$ and $W^\alpha_2$ are the strength functions of the residual interactions. Note that $W^\alpha_{1R}$ is the contribution from the rearrangement term, i.e., the term which derives from the density dependence of the effective nuclear interaction. For the Skyrme force, since the density-dependent term involves only the isoscalar density ρ, the rearrangement term does not contribute to the spin-density channel. The strength functions $W^\alpha_1$, $W^\alpha_{1R}$ and $W^\alpha_2$, see Appendix B, can be written in terms of the Skyrme parameters as well as of the isoscalar density ρ [24]. The density- and spin-density-dependent residual interactions in the $(pp^{-1},pp^{-1})$, $(nn^{-1},nn^{-1})$, $(pp^{-1},nn^{-1})$ and $(nn^{-1},pp^{-1})$ transitions, $f^{pp}$, $f^{nn}$, $f^{pn}$, $f^{np}$ and the corresponding $g$'s, are then given by Eqs. (16)-(19). The strength functions $W^\alpha_i$ (i = 1, 1R, 2) in symmetric nuclear matter (SNM) can be obtained from the general expression in the $(\tau,\tau')$ channel, $W^{\tau\tau',S}_i$, as in [21], from which the Landau parameters in SNM, $f_0$, $f_0'$, $g_0$ and $g_0'$, are obtained.

We now focus on the NC processes, e.g., $\nu + n \to \nu + n$, where the residual interaction $V_{ph}$ can be expressed in terms of the Landau parameters of Eqs. (16)-(19). In the (n,p,e) beta-equilibrium condition, residual interactions corresponding to $(nn^{-1},pp^{-1})$ transitions are also involved in the NC RPA calculation. The vector and axial-vector residual interactions can be written in matrix form as

$$V^V_{ph} = \begin{pmatrix} f^{nn} & f^{np} \\ f^{pn} & f^{pp} \end{pmatrix}, \qquad V^A_{ph} = \begin{pmatrix} g^{nn} & g^{np} \\ g^{pn} & g^{pp} \end{pmatrix}.$$

Furthermore, the Hartree-Fock polarization function may be written as a 2 × 2 diagonal matrix, $\Pi^{HF} = \mathrm{diag}\big(\Pi^0_{nn}, \Pi^0_{pp}\big)$. We then get [12]

$$\Pi^{RPA} = \big[\,1 - \Pi^{HF}\,V_{ph}\,\big]^{-1}\,\Pi^{HF}.$$

A compact form of the RPA polarization functions including the coupling constants is

$$\Pi^{RPA} = c_n^2\,\Pi^{RPA}_{nn} + c_n\,c_p\,\big(\Pi^{RPA}_{np} + \Pi^{RPA}_{pn}\big) + c_p^2\,\Pi^{RPA}_{pp}, \quad (28)$$

where $c_n$ and $c_p$ are the neutrino-neutron and neutrino-proton coupling constants in the given channel. When calculating the NC vector-current interactions, we drop the second and third terms on the right-hand side of Eq. (28), which are proportional to the vector-current neutrino-proton coupling constant $c_{p,V} \approx 0$. The RPA polarization function in the NC vector-current channel is then

$$\Pi^{RPA}_{nn,V} = \frac{\Pi^0_{nn}\,\big(1 - f^{pp}\,\Pi^0_{pp}\big)}{\big(1 - f^{nn}\,\Pi^0_{nn}\big)\big(1 - f^{pp}\,\Pi^0_{pp}\big) - f^{np}\,f^{pn}\,\Pi^0_{nn}\,\Pi^0_{pp}}. \quad (30)$$

Note that the effect of the Coulomb force on the NC vector polarization functions has been taken into account through a corresponding modification of the $(pp^{-1},pp^{-1})$ residual interaction. The axial-vector polarization function is given by the same expression with the $f$'s replaced by the $g$'s:

$$\Pi^{RPA}_{nn,A} = \frac{\Pi^0_{nn}\,\big(1 - g^{pp}\,\Pi^0_{pp}\big)}{\big(1 - g^{nn}\,\Pi^0_{nn}\big)\big(1 - g^{pp}\,\Pi^0_{pp}\big) - g^{np}\,g^{pn}\,\Pi^0_{nn}\,\Pi^0_{pp}}. \quad (32)$$

The NC polarization functions have a form similar to those derived in Ref. [12]. Additionally, note that in SNM we have $f^{nn} = f^{pp} = f_0 + f_0'$ and $g^{nn} = g^{pp} = g_0 + g_0'$. By replacing $f^{nn}$ and $f^{pp}$ with $f_0 + f_0'$, Eq. (30) reproduces the vector RPA polarization function in [13], and by assuming $g^{nn} = g^{pp} = -g^{np}$, Eq. (32) reproduces the axial RPA polarization function in Ref. [13]. For CC, the residual interactions for the charge-exchange (CE) process are given in terms of the strength functions $W^{CE,S}$ evaluated at the Fermi momentum $k_F$ of the hole states [21], from which we obtain the residual interactions $V_f$ (S = 0, vector channel) and $V_{gt}$ (S = 1, axial-vector channel) corresponding to $(pn^{-1},pn^{-1})$ transitions. The functions $W^{CE,S}$ for S = 0 and 1 are given in terms of the Skyrme parameters in Eqs. (35)-(36). In SNM, $f^{CE}_0 = 2f_0'$ and $g^{CE}_0 = 2g_0'$, which is consistent with the CC residual interactions used in [12]. The relationship between the CC and NC residual interactions is discussed in more detail in Appendix A. The Landau parameters in Eqs. (35)-(36) are relevant to $V_{ph}$ in the CC process.
Indeed, in $\nu_e + n \to e^- + p$, the polarization functions for the Fermi and Gamow-Teller operators are

$$\Pi^{RPA}_V = \frac{\Pi^0_{np}}{1 - V_f\,\Pi^0_{np}}, \quad (41)$$
$$\Pi^{RPA}_A = \frac{\Pi^0_{np}}{1 - V_{gt}\,\Pi^0_{np}}. \quad (42)$$

In the low-density region, where the nucleon gas is hot and dilute, we use the virial EoS [16] to deduce the residual interactions. In the virial EoS, the nucleon densities and pressures are written in terms of the spin- and isospin-dependent fugacities $z_i = e^{\mu_i/T}$, where $z_i$ can be $z_{n\uparrow}$, $z_{n\downarrow}$, $z_{p\uparrow}$ or $z_{p\downarrow}$. The thermal wavelength is $\lambda = (2\pi/(mT))^{1/2}$, the spin-like virial coefficients are $b_1$, and the spin-opposite virial coefficients are $b_0$. The pressure in the virial expansion reads

$$P = \frac{T}{\lambda^3}\Big[\sum_i z_i + b_1\big(z_{n\uparrow}^2 + z_{n\downarrow}^2 + z_{p\uparrow}^2 + z_{p\downarrow}^2\big) + 2b_0\big(z_{n\uparrow}z_{n\downarrow} + z_{p\uparrow}z_{p\downarrow}\big) + 2b_{pn,1}\big(z_{p\uparrow}z_{n\uparrow} + z_{p\downarrow}z_{n\downarrow}\big) + 2b_{pn,0}\big(z_{p\uparrow}z_{n\downarrow} + z_{p\downarrow}z_{n\uparrow}\big)\Big]. \quad (43)$$

Given the pressure, the nucleon density of specified spin and isospin can be derived using

$$n_i = \frac{z_i}{T}\left(\frac{\partial P}{\partial z_i}\right)_{T,\,z_{j\neq i}}. \quad (44)$$

We further write down the free energy in terms of the virial coefficients $b_i$ and the fugacities $z_i$:

$$f = n_n\mu_n + n_p\mu_p - P = \frac{T}{2}\big(n_{n\uparrow}\ln z_{n\uparrow} + n_{n\downarrow}\ln z_{n\downarrow} + n_{n\uparrow}\ln z_{n\downarrow} + n_{n\downarrow}\ln z_{n\uparrow} + n_{p\uparrow}\ln z_{p\uparrow} + n_{p\downarrow}\ln z_{p\downarrow} + n_{p\uparrow}\ln z_{p\downarrow} + n_{p\downarrow}\ln z_{p\uparrow}\big) - P, \quad (45)$$

where the second line was derived by using the facts that $n_{n(p)} = n_{n\uparrow(p\uparrow)} + n_{n\downarrow(p\downarrow)}$ and $z_{n(p)} = \sqrt{z_{n\uparrow(p\uparrow)}\,z_{n\downarrow(p\downarrow)}}$.
Note that the free energy of the non-interacting nucleon gas, $f_0$, in the virial expansion can easily be calculated by replacing the virial coefficients: $b_{n,1} \to b^{free}_{n,1} = -1/(4\sqrt{2})$, $b_{n,0} \to b^{free}_{n,0} = 0$, $b_{pn,1} \to b^{free}_{pn,1} = 0$ and $b_{pn,0} \to b^{free}_{pn,0} = 0$. Finally, we have the nucleon potential energy $U_i$ in the virial expansion,

$$U_i = \frac{\partial(f - f_0)}{\partial n_i}, \quad (46)$$

which is defined similarly as in Ref. [9]. Given the potential energy density $E = f - f_0$ and the single-nucleon potential $U_i$, in principle we can derive the spin- and isospin-dependent residual interactions by taking the double functional derivative of the potential energy density E with respect to the densities,

$$V^{ij}_{ph} = \frac{\partial^2 E}{\partial n_i\,\partial n_j}, \quad (47)$$

where the indices i, j indicate the spin and isospin of the nucleon states. In SNM, the monopolar Landau parameters are defined as [15]

$$f_0 = \frac{\partial^2 E}{\partial n^2}, \quad f_0' = \frac{\partial^2 E}{\partial n_{3,0}^2}, \quad g_0 = \frac{\partial^2 E}{\partial n_{0,3}^2}, \quad g_0' = \frac{\partial^2 E}{\partial n_{3,3}^2},$$

where $n_{3,0} = n_p - n_n$, $n_{0,3} = n_{p\uparrow} - n_{p\downarrow} + n_{n\uparrow} - n_{n\downarrow}$ and $n_{3,3} = n_{p\uparrow} - n_{p\downarrow} - n_{n\uparrow} + n_{n\downarrow}$. Following Ref. [9], we invert Eq. (44) to second order in the densities; in this way, the free energy becomes a function of the densities up to second order. Note that the nucleon potential $U_i$ in Eq. (46) reproduces the nucleon potentials in spin-symmetric matter in [9]. Since the virial EoS includes virial coefficients up to second order, f is a function of the $n_i$ up to $\mathcal{O}(n^2)$. Consequently, the residual interactions based on the low-density virial expansion are density- and isospin-density-independent; they follow from Eq. (47) applied to the second-order free energy and are expressed in terms of the virial coefficients $b_{0,1}$ for spin-opposite and spin-like particles, the coefficients $b^{free}$ of the non-interacting nucleon gas, and the length parameter $\lambda = (2\pi/(MT))^{1/2}$. The residual interactions are evaluated with the virial coefficients at T = 10 MeV. Furthermore, $V_f$ and $V_{gt}$, the residual interactions entering the CC vector and axial-vector RPA polarization functions, are obtained in the same way in the low-density region where the virial EoS is valid. Similarly, these residual interactions follow the form of $V_f$ and $V_{gt}$ in SNM, since the residual interactions are approximately density- and isospin-density-independent in the low-density regime where the virial approximation applies. Finally, following Ref. [27], a thermodynamically consistent approach is employed to connect the residual interactions calculated by the virial approach with the ones calculated using Skyrme models. We discuss this method in more detail in Appendix A.
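To make the fugacity bookkeeping concrete, a small sketch of the virial pressure and of the densities of Eq. (44) is given below. The pairing of the coefficients with the fugacity products follows the reconstruction of Eq. (43) above, and all numerical inputs (free-gas coefficients, T = 10 MeV) are illustrative assumptions.

```python
import numpy as np

def virial_pressure(z, T, lam, b1, b0, bpn1, bpn0):
    """Virial pressure for spin-resolved fugacities z = (z_nu, z_nd, z_pu, z_pd):
    b1 pairs like spins, b0 opposite spins, bpn1/bpn0 proton-neutron pairs."""
    znu, znd, zpu, zpd = z
    quad = (b1 * (znu**2 + znd**2 + zpu**2 + zpd**2)
            + 2.0 * b0 * (znu * znd + zpu * zpd)
            + 2.0 * bpn1 * (zpu * znu + zpd * znd)
            + 2.0 * bpn0 * (zpu * znd + zpd * znu))
    return (T / lam**3) * (sum(z) + quad)

def densities(z, T, lam, *b, h=1e-6):
    """n_i = (z_i / T) * dP/dz_i, Eq. (44), by central finite differences."""
    n = []
    for i in range(4):
        zp, zm = list(z), list(z)
        zp[i] += h
        zm[i] -= h
        dP = (virial_pressure(zp, T, lam, *b) - virial_pressure(zm, T, lam, *b)) / (2 * h)
        n.append(z[i] * dP / T)
    return n

# Thermal wavelength lambda = hbar*c * sqrt(2*pi/(m*T)) in fm (m, T in MeV).
T = 10.0
lam = 197.327 * np.sqrt(2.0 * np.pi / (939.0 * T))
# Free-gas coefficients: b_n1 = -1/(4*sqrt(2)), all others zero.
print(densities([0.05, 0.05, 0.01, 0.01], T, lam, -1 / (4 * np.sqrt(2)), 0.0, 0.0, 0.0))
```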
C. EoS and its Constraints
Concerning the EoS, we use an approach similar to that of Refs. [27,28]. In particular, the benefit of this EoS is that it reproduces exactly the virial approximation in the non-degenerate limit, while in the high-density limit it reproduces the Skyrme interaction. The low-density behavior is important for the neutrino sphere in core-collapse supernovae [9], where the Skyrme interaction fails to accurately describe the EoS. Our free energy is defined as an interpolation between the virial and Skyrme free energies, weighted by a switching function $\eta(z_n, z_p)$ of the neutron and proton fugacities $z_n$ and $z_p$ in the virial expansion. Note that, in Ref. [27], slightly different coefficients were used in the denominator of η, but these have been modified here to ensure that the entropy is positive everywhere. The Skyrme free energy, $f_{Sk}$, is taken from the UNEDF posterior distribution from Refs. [29,30]. The posterior distribution from Ref. [30] takes the form of a table of 1000 Skyrme models, each selected from a likelihood function based on nuclear masses, charge radii, fission barriers, and other nuclear data. In Ref. [27] and in this work, we randomly select from that table of 1000 Skyrme models in order to obtain our results. The Skyrme parameters which we select are thus not uncorrelated; they retain the correlations which were obtained in Ref. [30] as a result of the matching to the experimental data. These correlations are discussed below and displayed in Fig. 8. In some cases, we replace this Skyrme model with an alternate parameterization to test the variation beyond that obtained in this posterior. We use NRAPR [31] because it has been shown to be a good model for high-density matter [32] and is often used in EoS for core-collapse supernovae [33]. We use SGII because it was explicitly constructed to fix the spin instability encountered in Skyrme models [34].
We also use the UNEDF0 [35] and UNEDF2 [29] EoSs to compare with the original posterior distribution used in Refs. [27,28]. As in Ref. [27], we also randomly select Skyrme models from the posterior distribution generated in Ref. [30]. All of these models give a reasonable description of the binding energies, charge radii, and other experimental nuclear properties. The symmetry energy S and its derivative L are not taken from the posterior but are used as additional parameters. Results obtained from EoSs selected in this way are referred to as "MC" hereafter. In order to focus on the uncertainty of the neutrino opacities, we approximate the electron fraction in beta equilibrium to be the same for all models; it is derived once and then applied in the neutrino opacity calculations. Note that the effective mass is defined differently in relativistic models (the "Dirac mass") and in non-relativistic models (the "Landau mass"). Consequently, one cannot directly use the non-relativistic effective mass derived from Skyrme EoSs in relativistic neutrino opacity calculations.
In the present study, the uncertainties in the neutrino opacities result directly from the uncertainties in the EoS. The aforementioned Skyrme-type EoSs differ from each other mainly because they are constrained by different experimental measurements and astronomical observations. In this way, we evaluate the impact on the neutrino opacities of changing the Skyrme interaction, e.g., NRAPR, SGII, UNEDF0, UNEDF2 and SVmin. Similarly, the impact of the MC EoSs on the neutrino opacities is also evaluated. Since the MC EoSs are constrained differently from the other Skyrme interactions previously mentioned, one can estimate the contribution of the nuclear and astrophysical uncertainties to the prediction of the neutrino opacities. This will be done in the discussion of our results in the following.
D. Correlations and Uncertainties
In the following, we discuss the uncertainties of the neutrino IMFPs in the framework of HF+RPA. As shown in Eqs. (42), (41), (25) and (24), the inputs for the calculation of the IMFPs based on HF+RPA are EoS-based quantities such as $M^*_\tau$, $M^*_{\tau'}$, $\mu_\tau$, $\mu_{\tau'}$ and $V_{ph}$. We first investigate the Pearson correlations between (1) two different EoS-based quantities; (2) EoS-based quantities and IMFPs; and (3) two IMFPs at different densities. The Pearson coefficient is given by

$$r_{AB} = \frac{\sum_M \big(A_M - \bar{A}\big)\big(B_M - \bar{B}\big)}{\sqrt{\sum_M \big(A_M - \bar{A}\big)^2}\,\sqrt{\sum_M \big(B_M - \bar{B}\big)^2}}, \quad (65)$$

where, for a specific model M, A and B are (1) two EoS-based quantities; (2) an EoS-based quantity and an IMFP; or (3) two IMFPs. In this work, the EoS-based quantities are obtained from N hybrid EoSs, which were introduced in more detail in Ref. [28]. Given the EoS-based quantities from N models, we define the covariance matrix $C_{ij}$ [36]:

$$C_{ij} = \frac{1}{N}\sum_M x_{i,M}\,x_{j,M}, \quad (66)$$

where $x_{i,M} = (P_{i,M} - \bar{P}_i)/\bar{P}_i$ and $P_{i,M}$ is the i-th EoS-based quantity (e.g. nucleon effective mass, residual interaction, etc.) predicted in EoS model M. The variance of an IMFP A is then

$$\sigma_A^2 = \sum_{ij}\frac{\partial A}{\partial x_i}\,\frac{\partial A}{\partial x_j}\,C_{ij}. \quad (67)$$

From Eqs. (66) and (67), it is clear that the IMFP uncertainties are determined not only by the diagonal matrix elements of $C_{ij}$, but also by its off-diagonal matrix elements. Thus, the density-dependent correlations between EoS-based quantities may induce non-trivial density-dependent IMFP uncertainties and non-trivial correlations between IMFPs in different channels.
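A compact numerical sketch of Eqs. (65)-(66) follows; `numpy.corrcoef` would give the same Pearson values, and the toy data stand in for the actual EoS posterior samples.

```python
import numpy as np

def pearson(a: np.ndarray, b: np.ndarray) -> float:
    """Pearson correlation coefficient between two quantities sampled
    over the same set of EoS models, Eq. (65)."""
    da, db = a - a.mean(), b - b.mean()
    return float((da * db).sum() / np.sqrt((da**2).sum() * (db**2).sum()))

def covariance_matrix(P: np.ndarray) -> np.ndarray:
    """Normalized covariance C_ij of EoS-based quantities, Eq. (66).
    P has shape (N_models, N_quantities); x_{i,M} = (P_{i,M} - mean_i)/mean_i."""
    x = (P - P.mean(axis=0)) / P.mean(axis=0)
    return x.T @ x / P.shape[0]

# Toy example: 1000 models, two correlated quantities plus one independent.
rng = np.random.default_rng(0)
base = rng.normal(size=(1000, 1))
P = np.abs(1.0 + 0.1 * np.hstack([base, 0.8 * base, rng.normal(size=(1000, 1))]))
print(pearson(P[:, 0], P[:, 1]))
print(np.round(covariance_matrix(P), 4))
```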
III. RESULTS
In this section we show our results for the IMFP, as well as the residual interactions which have been used in our study.
A. Neutrino response from low-density to high-density region

In this subsection, we present the density-dependent residual interactions based on the set of EoSs previously introduced (the full derivation of the EoS-based residual interactions is presented in Appendix A), the IMFPs from low to high densities, and the dynamic responses at different densities. In Fig. 1, the density-dependent residual interactions are shown. To illustrate the impact of the EoSs on the residual interactions, two groups of residual interactions are shown. The first group is based on various Skyrme models from the low-density to the high-density regime. The second group, denoted "MC", consists of Monte Carlo realizations based on the hybrid EoS model first introduced in [27], which reproduces the virial EoS in the low-density regime [16,17]. In the high-density regime, the residual interactions based on the hybrid EoSs sample the uncertainties in the parameter distribution originating from the UNEDF2 model. More details about the density-, temperature-, and isospin-dependent residual interactions are given in Appendix A. Note that we do not consider the uncertainties in the virial EoS resulting from the contribution of higher-order virial coefficients, since they are not well constrained by experiments and observations; they contribute only as the density increases. At low densities, the residual interactions from all of the MC EoSs converge because they are based on the same second-order virial coefficients. In our approach, the residual interactions in the low-density regime are constrained, for the first time, by a model-independent virial EoS. The density-dependent residual interactions beyond this low-density regime are, however, still poorly constrained by data.
In the following, we briefly summarize the qualitative features of the residual interactions in the various channels. Around saturation density, CCSNe matter is very neutron-rich. In neutron-rich matter, the residual interactions f nn and g nn play a major role in NC interactions, while the influence of f pp (which is dominated by the Coulomb interaction) is weak. Indeed, in pure neutron matter, f nn = f 0 and g nn = g 0 . As the density increases, f nn increases and becomes positive, which is a feature expected on general grounds and originating from the dominant contribution of the repulsive vector meson. The density dependence of g nn is not well constrained by nuclear experiments probing collective modes, even at saturation density. In Skyrme models, we observe that g nn decreases as the density increases, and may become negative at high densities. In CC reactions, V f and V gt are the residual interactions relevant to the vector and axial vector neutrino-nucleon reactions, respectively. These residual interactions are calculated consistently from the low-density to the high-density regime, and their detailed derivation is shown in the Appendix (see Eqs. (A45) and (A46)).
All the Skyrme models that we have tested in our study predict consistent values for V f , which decreases as the density increases. In the axial channel, however, the dispersion among the Skyrme predictions is larger than in the vector channel. As shown in the upper left panel of Fig. 1, the values of V gt based on Skyrme models diverge in the high-density regime. Overall, we observe that the uncertainties of the residual interactions in spin-dependent channels are noticeably larger than the uncertainties in spin-independent channels, which reflects the lack of experimental constraints for the time-odd terms in Skyrme EDFs and thus for the phenomenological spin-dependent forces.
In Fig. 2, the vector (left panels) and axial vector (right panels) IMFPs of neutrino-nucleon reactions are plotted. Since the calculation of the neutrino opacities is consistent with the underlying EoSs, we observe that the uncertainties of the EoS-based quantities (e.g. residual interactions, effective masses, nucleon chemical potentials) result in variations of the IMFPs. In the high-density regime, the uncertainties of the axial vector IMFPs in both CC (upper right panel) and NC (lower right panel) are larger than the uncertainties of the vector IMFPs (upper left and lower left panels). This is because the axial vector IMFPs are sensitive to the residual interactions V gt , g nn , g np and g pp , which are derived from the poorly-constrained time-odd part of the Skyrme EDF.
In Eqs. (41), (42), (30) and (32), we observe that the polarization functions containing the real parts have poles, and these poles may result in collective excitations [13,23]. For both the vector and the axial vector channels, the collective excitations in the medium appear if the reaction has a (q, q 0 ) pair that satisfies the mode's dispersion relation. The resonances may enhance the IMFPs for the neutrino-nucleon reactions, similarly to the effect of the giant dipole resonance and the Gamow-Teller resonance on scattering off finite nuclei. As shown in the lower right panel of Fig. 2, a significant increase is observed in those IMFPs based on UNEDF and MC EoSs in the high-density regime. In this region, the g nn values decrease and become negative, representing an attractive potential enhancing the IMFP. The increase of the IMFPs observed in the axial NC panel may be due to the Gamow-Teller collective mode. However, we stress that the collective modes are sensitive to the strength of the related residual interactions, which are not well constrained, and may result in the creation of a ferromagnetic unstable region. There are no clear experimental observations to support the existence of a ferromagnetic instability in neutron stars and CCSNe. We show in Fig. 2 that if the ferromagnetic unstable region appears at (sub)saturation density, the NC neutrino opacity increases by several orders of magnitude, and such an increase may be associated with observable modulations of late-time neutrino signals from a future galactic CCSN. We include the NC axial IMFPs with high peaks at high densities in Fig. 2 to maintain the consistency between the underlying EoSs and the neutrino IMFPs. In future work, we will construct new EoSs which allow us to quantify the uncertainties near saturation density without a spin instability while still matching laboratory experiments.
The ferromagnetic instability can be obtained from the long wave-length limit of the HF+RPA response. In this limit, the density at which matter undergoes a ferromagnetic instability is denoted n crit , and it is defined by the vanishing of the determinant of the spin stability matrix built from the dimensionless Landau parameters G nn , G pp , G np and G pn . The values of n crit as a function of the proton fraction Y p are shown in Fig. 3 for different EoSs.
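As an illustration of how such a critical density can be located numerically, the sketch below scans the density and finds the zero of the determinant of a 2×2 spin stability matrix; the specific criterion det[[1+G_nn, G_np],[G_pn, 1+G_pp]] = 0 and the toy density dependence of the Landau parameters are assumptions for demonstration only and do not reproduce any EoS used in this work.

```python
import numpy as np
from scipy.optimize import brentq

def G(n):
    """Toy dimensionless spin Landau parameters as functions of density n (fm^-3).
    Purely illustrative: G_nn drifts negative with density, mimicking the trend
    discussed for some Skyrme models."""
    G_nn = 0.8 - 12.0 * n      # becomes strongly negative at high density
    G_pp = 0.6 - 4.0 * n
    G_np = G_pn = 0.2
    return G_nn, G_pp, G_np, G_pn

def stability_det(n):
    # Assumed ferromagnetic stability criterion: zero of this determinant
    G_nn, G_pp, G_np, G_pn = G(n)
    return (1.0 + G_nn) * (1.0 + G_pp) - G_np * G_pn

# Bracket and refine the critical density where the determinant changes sign
grid = np.linspace(1e-3, 0.4, 400)
vals = stability_det(grid)
idx = np.where(np.sign(vals[:-1]) != np.sign(vals[1:]))[0][0]
n_crit = brentq(stability_det, grid[idx], grid[idx + 1])
print(f"critical density n_crit ~ {n_crit:.3f} fm^-3")
```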
The ferromagnetic unstable region in UNEDF0 and UNEDF2 extends from n ≈ 0.05 fm −3 to higher densities. The UNEDF EoSs were constructed to describe the bulk ground state and collective excited state properties of various spherical atomic nuclei. However, in most atomic nuclei, the ground state and collective excited state properties are not sensitive to the spin-dependent interactions. Consequently, the spin-dependent residual interactions in the UNEDF approach may be poorly constrained. In Fig. 4, the IMFPs with and without the RPA correlations are shown in four different channels (CC vector, CC axial, NC vector and NC axial). In the CC vector and CC axial channels, the RPA correlations lead to a decrease of the neutrino opacity. In the NC axial channel, the RPA correlations reduce (increase) the neutrino opacity in the high-density regime for the NRAPR and SGII (UNEDF) EoSs, where the increase is due to the collective modes. However, as mentioned above, the spin-dependent residual interactions in UNEDF may be poorly constrained and the RPA-amplified IMFPs may reside in the ferromagnetic unstable region. In the NC vector channel, the many-body effect manifested by the RPA correlations increases the neutrino opacity, which quantitatively agrees with the results for the NC vector neutrino-nucleon scattering opacity based on the model-independent virial expansion. Indeed, as shown in Fig. 1, the residual interaction f nn , which mainly controls the behavior of the RPA correlations in the NC vector channel, was derived with its full density dependence, and in the low-density regime f nn reproduces the virial predictions.
By comparing the IMFPs with and without RPA correlations, the variation of the neutrino opacity resulting from two different basic many-body methods (HF and HF+RPA) can be evaluated. The variation between the HF and the HF+RPA predictions is denoted as ∆(1/λ) mb in the following. By comparing IMFPs with RPA corrections but with different underlying Skyrme EoSs, the variation of the neutrino opacity resulting from the choice of Skyrme EoS constraints can be evaluated. This type of variation is denoted ∆(1/λ) EOS−Skyrme in the following. By comparing IMFPs consistent with the MC EoSs, we estimate the variation of the neutrino opacity due to the spread across the Monte Carlo EoSs, which reflects the inability of the Skyrme model to precisely reproduce the experimental data. This type of variation is denoted ∆(1/λ) EOS−MC in the following. Based on Figs. 2 and 4, an estimate of the hierarchy of these three types of neutrino opacity variations can be constructed over a wide range of densities. We find that, approximately, ∆(1/λ) EOS−MC ≤ ∆(1/λ) EOS−Skyrme < ∆(1/λ) mb . However, in the axial vector channel, where the spin-dependent residual interactions are poorly constrained, ∆(1/λ) EOS−MC may be comparable with or even larger than ∆(1/λ) mb . The hierarchy of the relative neutrino opacity uncertainties as a function of density is shown in Fig. 5.
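The comparison of the three variation types can be organized as in the following sketch, which takes hypothetical IMFP tables (one per model and many-body method) on a common density grid and summarizes their spreads; the data, the choice of spread measure (max−min versus mean absolute difference) and the array shapes are illustrative assumptions.

```python
import numpy as np

# Hypothetical IMFP tables on a common density grid (random stand-in values):
# rows = densities, columns = models. Units and numbers are not from this work.
rng = np.random.default_rng(1)
n_grid = np.linspace(0.01, 0.30, 30)                 # fm^-3
imfp_rpa_skyrme = rng.lognormal(0.0, 0.15, (30, 5))  # HF+RPA, 5 Skyrme EoSs
imfp_rpa_mc = rng.lognormal(0.0, 0.05, (30, 50))     # HF+RPA, 50 MC EoSs
imfp_hf = rng.lognormal(-0.3, 0.10, (30, 5))         # HF only, same Skyrme EoSs

# Many-body variation: spread between HF and HF+RPA for the same EoS,
# summarized here as the mean absolute difference over models.
delta_mb = np.abs(imfp_rpa_skyrme - imfp_hf).mean(axis=1)

# EoS variations: spread (max - min) across Skyrme models and across MC draws.
delta_eos_skyrme = imfp_rpa_skyrme.max(axis=1) - imfp_rpa_skyrme.min(axis=1)
delta_eos_mc = imfp_rpa_mc.max(axis=1) - imfp_rpa_mc.min(axis=1)

for n, d1, d2, d3 in zip(n_grid[::10], delta_eos_mc[::10],
                         delta_eos_skyrme[::10], delta_mb[::10]):
    print(f"n={n:.2f} fm^-3: d_MC={d1:.3f}, d_Skyrme={d2:.3f}, d_mb={d3:.3f}")
```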
In Figs. 6 and 7, the dynamic responses are plotted for various densities. Since for most regions of the phase space the axial response dominates the neutrino-nucleon IMFPs, only S A (q, q 0 ) CC and S A (q, q 0 ) N C are shown here. As the density increases, we observe that the transferred energy q 0 favored by the CC dynamic responses deviates from q 0 = 0 MeV and becomes negative, due to the modifications from the nucleon potential shifts in CC reactions [11]. For the NC dynamic response functions, the difference between neutron and proton single-particle energies has no influence on these reactions, which conserve nucleon isospin. Consequently, S N C (q, q 0 ) is centered at q 0 ≈ 0 MeV at all densities. Finally, we observe that the uncertainties in the dynamical response functions increase with density, due to the increasing uncertainties of the EoS-based quantities (e.g. the residual interactions, the nucleon effective masses, and the nucleon single-particle energies).
B. Correlations between EoSs and Neutrino Response
In this section we present the Pearson correlations ρ between (1) pairs of EoS-based quantities; (2) EoS-based quantities and IMFPs; and (3) pairs of IMFPs, using Eq. (65). The Pearson correlation coefficient quantitatively describes the connection between two quantities and allows us to estimate the degree of correlation between different quantities, whether or not they can be measured or observed. In particular, a value of ρ(A, B) = ±1 implies that the two observables are fully correlated/anti-correlated, whereas a value of ρ(A, B) = 0 means that the observables are totally uncorrelated.
In Fig. 8, the correlation coefficients among the Skyrme parameters are plotted. In Fig. 9, the correlation coefficients among the residual interactions are plotted for n = 10 −4 , 0.005, 0.15 fm −3 . Only the off-diagonal correlations are of interest in these two figures. Note that several residual interactions in off-diagonal blocks are highly correlated/anti-correlated. For example, at n = 10 −4 fm −3 we observe that ρ(f nn , f pp ) ≈ 1 and ρ(g nn , g pp ) ≈ 1. Indeed, in the low-density regime, where the residual interactions are calculated based on the virial approximation, we have f nn = f pp and g nn = g pp . Interestingly, the correlation coefficients between the residual interactions are density dependent. As shown in Fig. 9, ρ(f nn , f pp ) decreases as the density increases. For the three different densities explored in Fig. 9, the correlations among g nn , g np and g pp remain high. The density dependence of the correlations between EoS-based quantities may result in (1) non-trivial correlations between IMFPs and EoS-based quantities and (2) non-trivial correlations between different IMFPs.
In Figs. 10 and 11, the correlations between EoS-based quantities and IMFPs are plotted. Correlations between IMFPs and EoS-based quantities reveal the sensitivity of a particular type of IMFP to the EoS-based quantities. In Fig. 10, the correlations between the EoS-based quantities and the CC IMFPs in both the vector and the axial vector channels are shown. We observe that (1) the sensitivity of M * n (M * p ) to the CC vector IMFPs is higher than that to the CC axial vector IMFPs; (2) the density dependence of the sensitivity of U n − U p to the CC IMFPs is different in the vector and in the axial vector channels; and (3) the density dependence of the sensitivity of the residual interactions to the CC IMFPs is different in the vector and in the axial vector channels. In Fig. 11, the correlations between the EoS-based quantities and the NC IMFPs in both the vector and the axial vector channels are shown. We observe that (1) the sensitivity of M * n (M * p ) to the NC IMFPs is moderate in both the vector and the axial vector channels; (2) the sensitivity of U n − U p to the NC IMFPs is ≈ 0, reflecting the fact that the NC IMFPs are not modified by nucleon potential shifts; and (3) the density dependence of the sensitivity of the residual interactions to the NC IMFPs is different in the vector and in the axial vector channels.
In Fig. 12, the correlations between pairs of IMFPs are plotted. A strong correlation between a theoretically (or experimentally) well-determined IMFP and an IMFP not accessible either experimentally or observationally may provide a clear path for the determination of the latter. As expected, the correlations in the diagonal blocks are ≈ 1 at all three densities. At n = 0.15 fm −3 , we observe that ρ(IMFP ax CC , IMFP ax N C ) ≈ 0.8. This high correlation at n = 0.15 fm −3 may result from the relatively high correlation between g nn and V gt at this density (see the upper left panel of Fig. 9). At n = 0.005 fm −3 , we observe relatively high ρ(IMFP ax N C , IMFP vec N C ), ρ(IMFP ax N C , IMFP vec CC ) and ρ(IMFP vec N C , IMFP vec CC ). Again, these strong correlations in the off-diagonal blocks may be due to the correlations between the various EoS-based quantities at n = 0.005 fm −3 .
IV. CONCLUSION
Strong interactions between nucleons can significantly modify neutrino-nucleon cross sections via many-body effects at the densities and temperatures of relevance to CCSNe. The main difficulty in improving the description of the neutrino opacities may come from the poorly-constrained density-dependent nucleon-nucleon interactions. The nucleon-nucleon interactions also play an important role in determining the properties of EoSs. In this way, the uncertainties of EoS-based quantities propagate to the uncertainties of the neutrino opacities.
In this work we calculate the neutrino-nucleon interactions in both the vector and the axial vector coupling channels, in the framework of the HF+RPA approximation. In the low-density regime, the neutrino opacity based on the HF+RPA calculations is consistent with a virial EoS, which generates a model-independent nucleon-nucleon interaction (in both spin-independent and spin-dependent channels). The low-density virial EoS used in this work naturally evolves into a series of Skyrme EoSs as the density increases, as in Ref. [28]. Note that in our framework the neutrino opacity calculations are always consistent with the underlying EoSs at all densities. Although spin-dependent interactions play an important role in determining the dominant axial-vector neutrino-nucleon interactions, they only slightly influence the properties of nuclei, which are mostly spin-saturated or close to it, and which are employed to fit Skyrme forces. Consequently, we observe that the EoS-based quantities in the spin-dependent channels are not well constrained and have large uncertainties at high densities, which further induce large uncertainties in the neutrino opacities in the axial-vector channel in our self-consistent calculations.
In the last several decades, our understanding of nuclear EoSs has increased thanks to the progress made in experimental measurements of nuclear properties and in astronomical observations of neutron star properties. These measurements and observations provide valuable constraints on the spin-independent nucleon-nucleon interactions. However, compared to the former, the spin-dependent nucleon-nucleon interactions are still poorly constrained, even though they play a crucial role in neutrino-nucleon interactions at high densities, in electron capture reactions and in pion condensation. In the future, we will construct: (1) EoSs with extended terms stabilizing the spin-dependent functional at high densities [37]; (2) EoSs that are constrained by recent measurements of the Gamow-Teller giant resonance in various finite nuclei [38]; and (3) an improved connection between the low-density and the high-density models, where not only the energy but also its first and second derivatives will be accurately described.
The description of the correlations between (1) pairs of EoS-based quantities; (2) EoS-based quantities and IMFPs; and (3) pairs of IMFPs may not be affected by the large uncertainties of the EoS-based quantities and IMFPs at high densities. In this work, for the first time, we study these correlations in the framework of the RPA at different densities. How large is the influence of the uncertainties of the EoS-based quantities on the neutrino-nucleon IMFPs at different densities? The study of density-dependent correlations may help to answer this question and motivate an accurate determination of the EoS-based quantities, in order to better determine the neutrino opacities at the densities of interest in the future.
In the future, it could be interesting to study the impact of neutrino opacity uncertainties on the CCSNe explosion mechanism, on the CCSNe neutrino signals and on the CCSNe light curves.

The resulting strength function W (τ,τ ,0) i agrees with Ref. [24]. We now discuss the spin-density-dependent Landau parameters in the Skyrme model. To begin with, we define the spin densities of neutrons and protons as n n,3 = n n↑ − n n↓ and n p,3 = n p↑ − n p↓ .
We further define the spin-kinetic energy densities τ n,3 and τ p,3 accordingly. The contribution to the Skyrme Hamiltonian due to the spin asymmetry is expressed in terms of H n,3 k , H p,3 k and H s pot (n n,3 , n p,3 ), which are defined as follows: H n,3 k = (1/8) τ n,3 [n n,3 t 1 (−1 + x 1 ) + n p,3 t 1 x 1 + n n,3 t 2 (1 + x 2 ) + …], with H p,3 k defined analogously and H s pot collecting the corresponding potential contribution. Note that in spin-saturated matter the term H s Sk vanishes by construction and the total Skyrme Hamiltonian reduces to Eq. (A4). The Landau parameters describing the spin-density fluctuations follow from these expressions, and the resulting strength function of the residual interactions W (τ τ ,1) i agrees with Ref. [24]. We then discuss the vector and the axial vector Landau parameters in the CC reaction n + ν e → p + e − based on Skyrme models, where V f is expressed in terms of the strengths of the Landau parameters. Note the absence of the rearrangement terms in the CC channel, since the density-dependent term of the Skyrme interaction does not contribute here; the corresponding rearrangement term W (τ τ ,0) 1R enters only the NC channel. There are no rearrangement contributions to the spin-density-dependent residual interactions. In this work the Landau parameters in the high-density regime are derived from Skyrme models, while in the low-density regime they are derived from model-independent virial interactions. In the low-density region where the virial interaction applies, we assume that m * ≈ m, and the kinetic energy density terms in the virial Hamiltonian are density-independent. In the first-order approximation, where only the first-order virial coefficients are involved, the potential energy density part (including both density and spin-density terms) of the virial Hamiltonian contains terms of the form −(1/2) b pn,0 T λ 3 [(n n + n n,3 )(n p − n p,3 ) + (n n − n n,3 )(n p + n p,3 )] and −(1/2) b pn,1 T λ 3 [(n n + n n,3 )(n p + n p,3 ) + (n n − n n,3 )(n p − n p,3 )], together with the analogous neutron-neutron terms proportional to b n,0 T and b n,1 T , where the notation of the virial coefficients follows Refs. [9,16]. Since in the virial Hamiltonian the kinetic energy density is density-independent, we have H n,virial k = τ n /(2m n ) and H p,virial k = τ p /(2m p ). The Landau parameters in the virial model simplify accordingly. Note that in H virial pot the density-dependent terms include up to O(n 2 ), which means that the virial Landau parameters are density-independent in the first order of approximation. Hence, there are no rearrangement contributions in the virial Landau parameters, and the CC Landau parameters based on the virial method are given by the corresponding differences of the virial parameters, in particular V gt,virial = g virial nn − g virial np . (A33) Finally, we discuss the Landau parameters that apply in the whole density regime. As described above, we construct a global Hamiltonian H g that applies in the whole density regime, where η(z n , z p ) is given in Eq. (64). The general density-dependent Landau parameters in the NC channel then follow directly. For the CC interaction, we again need to remove the terms due to rearrangement effects in the Hamiltonian and the single-particle potentials. We derive the rearrangement term in the single-particle potential U ≡ ∂H pot /∂n in the Skyrme model to be U Re = (αt 3 /12) n α−1 ((1 + x 3 /2)n 2 − (x 3 + 1/2)(n 2 n + n 2 p )). (A44) The η function was first applied in [27] to construct a Hamiltonian valid from the low-density to the high-density regime. This η function ensures that in the low-density regime the EoS in [27] reproduces the features predicted by a virial EoS and allows the EoS to smoothly transition into a Skyrme EoS as the density increases.
However, the location of the transition region where the η function smoothly decreases from ≈ 1 to ≈ 0 may be model-dependent and, in principle, temperature-dependent as well. A precise connection between the low- and the high-density regimes shall be performed in the future, ensuring that this connection does not impact the first and second derivatives of the energy density. Since this is not performed in the present study, the functional form of the η function in this transition region may influence the behavior of W 1,η , W 2,ηn , W 2,ηp , ∂η ∂nn U Re and ∂η ∂np U Re . In the present study, we simply neglect the contribution of the derivatives of the η function to the Landau parameter strengths for simplicity. This shall, however, be investigated in more detail in the future. | 2022-07-14T01:16:00.042Z | 2022-07-13T00:00:00.000 | {
"year": 2022,
"sha1": "456b2df0bf590527fbeeaaa04fd6cfdf96c9e02c",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/2207.05927",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "456b2df0bf590527fbeeaaa04fd6cfdf96c9e02c",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
254326530 | pes2o/s2orc | v3-fos-license | Adherence to dermatologic treatment: A retrospective cross-sectional study on geriatric patients
OBJECTIVE With the prolongation of the average life expectancy worldwide, diseases including dermatological disorders of the elderly are gaining importance. The presence of comorbidities in this age group may affect the treatment strategies; compliance with follow-up and adherence to medication can be poor. The aim of this study is to evaluate the dermatological disorders of patients aged 65 and over and to determine their adherence to dermatologic treatment. METHODS A retrospective and cross-sectional study was conducted on patients aged 65 and over who applied to a single tertiary dermatology clinic between April 2021 and April 2022. Diagnoses were established clinically, and diagnostic tests were performed only when necessary. RESULTS A total of 207 admissions to the dermatology clinic by 135 patients were evaluated. Eczema (23.05%) and infections (25.2%) were the most common dermatological diagnoses. The percentage of patients with precancerous and cancerous lesions was 11.9%. Among 123 patients who needed at least one follow-up visit, only 37 patients (30.1%) applied for follow-up as advised, and medicines were taken regularly by 23 of these patients (62.2%). Compliance with follow-up was lower among men (OR 0.365, 95% CI 0.160–0.834, and p=0.02) and patients who were treated only with local therapy agents (OR 0.345, 95% CI 0.138–0.863, and p=0.20). CONCLUSION Eczema and infections were the most common dermatological diagnoses among geriatric patients in the present study. The majority of geriatric patients with skin conditions were not applying for follow-up visits. Women and patients treated with systemic therapy agents were more compliant. The prevalence of basal cell carcinoma was not low, and this emphasizes the importance of a careful dermatological examination regardless of the primary complaint in this age group.
The population aged 65 and over, which is considered as the geriatric population, increased by 24% in the past 5 years in Turkiye [1]. According to population projections, it was predicted that the proportion of the elderly population would be 11% in 2025 and 16.3% in 2040 [1]. With the prolongation of the average life expectancy all over the world, especially in developed and developing countries, physicians will see increasing numbers of patients over the age of 65 years [2]. Dermatologists, like physicians from all other branches, have already begun to encounter the diseases of this group of patients more frequently [3,4]. The presence of comorbidities in this age group can complicate the diagnosis and treatment choices [5]. Moreover, compliance with follow-up and adherence to medication can be poor [6]. In this age group, non-compliance can be observed in many ways, including not taking the medications, not taking them as prescribed, using non-prescribed medications, and missing follow-ups [7]. Although there are several studies on statistical data of skin disorders among geriatric patients from our country, the rates of compliance with follow-up and adherence to dermatologic medication are not clear. The aim of this study is to evaluate the dermatological diseases among geriatric patients who applied to the dermatology clinic, their treatments and responses, as well as to determine their adherence to dermatologic treatment, including follow-up and medication.
MATERIALS AND METHODS
This retrospective, cross-sectional, and single-center study was conducted on patients aged 65 and over who applied to the author's clinic at a tertiary healthcare institution between April 2021 and April 2022. Patients aged 65 and over who applied to the clinic within the planned period of the retrospective study were identified from the electronic health records, and their ages, genders, dermatological signs and symptoms, diagnoses, planned treatments, compliance with planned follow-up, adherence to medications, and treatment responses were recorded. All patients were examined and treated by a single dermatologist. This study complies with the Declaration of Helsinki and was performed according to local ethics committee approval (2022-09/01).
Statistical Analysis
The data were tabulated for statistical analysis with SPSS-22 software (SPSS Inc., Chicago, IL, USA). Continuous data were presented as means ± standard deviation (SD) and range. The Chi-square test or Fisher's exact test was used to compare categorical variables. Student's t-test was used to compare continuous variables. Values of p<0.05 were considered statistically significant.
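The group comparisons reported below (odds ratios with 95% confidence intervals and chi-square p-values) can be reproduced from a 2×2 contingency table as in the following sketch; the counts used here are invented for illustration, and the Wald-type confidence interval is one common choice, not necessarily the exact procedure used by SPSS.

```python
import numpy as np
from scipy.stats import chi2_contingency, fisher_exact

# Illustrative 2x2 table (counts are made up, not the study's data):
# rows = [men, women], columns = [compliant, non-compliant]
table = np.array([[12, 48],
                  [25, 38]])

chi2, p, dof, expected = chi2_contingency(table)

# Odds ratio for being compliant (men vs. women) with a Wald 95% CI
a, b = table[0]          # men: compliant, non-compliant
c, d = table[1]          # women: compliant, non-compliant
odds_ratio = (a * d) / (b * c)
se_log_or = np.sqrt(1/a + 1/b + 1/c + 1/d)
ci_low, ci_high = np.exp(np.log(odds_ratio) + np.array([-1.96, 1.96]) * se_log_or)

print(f"chi2={chi2:.2f}, p={p:.3f}")
print(f"OR={odds_ratio:.3f}, 95% CI {ci_low:.3f}-{ci_high:.3f}")

# Fisher's exact test is preferred when expected cell counts are small
print("Fisher exact p:", fisher_exact(table)[1])
```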
Highlight key points
• Eczema and infections were the most common dermatological diagnoses among geriatric patients.
• The majority of geriatric patients with skin conditions were not applying for follow-up visits.
• Women and patients treated with systemic therapy agents were more compliant with the follow-up during dermatologic treatment.
RESULTS

Table 1. General features of the presented series

Lichen simplex chronicus was the most common eczema subtype (n=7, 5.2%), followed by seborrheic dermatitis (n=5, 3.7%), nummular dermatitis (n=4, 3.0%), and allergic contact dermatitis (n=3, 2.2%) (Table 2). The most common eczema localization was the hands (n=9), followed by the feet (n=7), the legs (n=6), and other sites (n=8). Although eczema cases were accompanied by itching at varying rates, the percentage of patients whose primary complaint was itching without any lesions was 8.1%. Among the infections, the most common were dermatophyte infections (10.4%), followed by herpes zoster (5.9%) and scabies (1.5%) (Table 2). The percentage of patients with precancerous and cancerous lesions was 11.9%. Of the 10 patients who presented with a primary cutaneous malignancy, eight patients (seven males and one female) presented with BCC, one with Bowen's disease, and the other with Kaposi's sarcoma (KS) (Table 2). All of the primary cutaneous carcinomas were newly diagnosed except the KS, which the patient had been diagnosed with 8 years earlier. Skin metastasis of a systemic non-Hodgkin lymphoma, presenting as a dense nodule on the chest, was seen in one female patient. All the cases with cutaneous carcinoma were confirmed by histopathologic evaluation through punch biopsy and were re-excised by a plastic surgeon after the diagnosis was established. Five of the BCC cases were located on the nose, one on the forehead and one on the back of the hand. Only in one patient were multiple BCC lesions seen at different locations (forehead, cheek, and lower lip). Bowen's disease was located on the shoulder of one male patient. The duration of the existence of the skin cancer before diagnosis ranged from 2 months to 15 years (median, 1 year). All of the actinic keratoses (n=5) developed on the head and face. The total clinic visits (n=207) consisted of 135 first applications and 72 follow-up visits of 37 patients. Topical (n=68) or systemic ± topical treatments (n=37) were planned for 105 patients at their first visits. In the rest of the patients, excision, cryosurgery, or electrocauterization was applied, or a wait-and-see approach was adopted (Table 1). The medical records showed that 123 (91%) patients were advised to apply for at least one follow-up appointment at the time of their first submission. The overall compliance with follow-up was calculated as 30.1% (37/123) in these patients. Compliance with follow-up was lower in males compared to females (Chi-square, OR 0.365, 95% CI 0.160-0.834, and p=0.02) (Table 3). Furthermore, it was lower in patients treated with only topical therapy compared to systemic ± topical therapy (Chi-square, OR 0.345, 95% CI 0.138-0.863, and p=0.20) (Table 3). There was not a significant difference between compliant and non-compliant patients in terms of age (Table 3). Self-reported adherence to medication by those complying with scheduled clinic visits was 62.2% (23/37). Among these patients, full remission was seen in nine and partial remission in 14 patients during varying numbers of follow-up visits. All four patients who were given systemic steroid therapy due to eczema or urticaria were compliant with the follow-up and showed adherence to treatment; full remission was seen. Among the four patients who were treated with systemic antifungal drugs, three continued their follow-up and almost fully recovered without any side effects. Two of the eight patients with herpes zoster who were treated with systemic anti-viral drugs applied for a second visit and showed remission. Among nine patients who were treated with systemic
antibiotics, five patients reapplied and showed remission. In the patient who had been treated with methotrexate for 1 year, the treatment was switched to ustekinumab due to unresponsiveness to the former. The rest of the systemic drugs used in the series were antihistaminic drugs.
DISCUSSION
Turkiye is expected to be considered an "aged" society by 2040, since the percentage of the elderly will be higher than 14% [1]. Since the geriatric population constitutes a quickly growing part of the population in our country, as in the rest of the world, physicians and medical systems need to adapt to new population characteristics according to the needs of the geriatric community [2]. Thus, determining the prevalence and types of skin diseases in geriatric patients is very important in planning the prevention and treatments in this age group.
Throughout the aging process, the skin shows degenerative structural, metabolic, and physiological changes that occur due to intrinsic and extrinsic aging, the latter mainly due to solar radiation [8]. Due to these changes, skin disorders are more commonly seen in the elderly population and the distribution of skin diseases is different from other age groups [9]. Moreover, systemic comorbidities, which commonly exist in the elderly, may worsen skin conditions, restrict the choice of optimal treatments for dermatological disorders, or cause new skin problems in this age group [10].
Although eczematous skin disorders (lichen simplex chronicus, atopic dermatitis, contact dermatitis [allergic and irritant], nummular eczema, and seborrheic dermatitis) are not considered fatal, they were found to be the most common group of skin diseases in geriatric patients in most of the studies, carry high morbidity, and significantly decrease the quality of life of the elderly [11]. Similar to the literature, different types of eczema constituted the majority of skin disorders (31%) among the geriatric patients of the present study. Lichen simplex chronicus, which was seen as the most common presentation of eczematous disorders among the presented patients, is characterized by a variety of pruritic and lichenified lesions in irregular shapes [12]. It may involve anywhere on the body, mainly including the legs, arms, neck, upper trunk, and genital region [12]. Although geriatric patients' susceptibility to allergic contact dermatitis is expected to be low due to the decreased delayed-type hypersensitivity reaction and vascular reactivity compared to other age groups, irritant contact dermatitis is expected to be common among the geriatric population due to behavioral changes including decreased care and cognition and immobility, and skin changes including decreased lipid content, skin dryness, and impaired epidermal barrier and immune function [11]. Although eczema cases are accompanied by itching at varying rates, the percentage of patients whose primary complaint was itching without any lesions was not low (8.1%) in the presented series. In elderly patients, pruritus can be caused by a variety of conditions, but the most frequent cause is skin dryness [13]. Skin dryness is a very common skin problem in the elderly population due to many factors, including decreased lipid content [14]. Since pruritus can be a disease itself or a symptom of a systemic disease in the elderly, geriatric patients need a detailed evaluation [15].
The second largest group of skin disorders in this study was infections, which is concordant with the data in the literature [16][17][18][19]. Tinea pedis (2.2%) and onychomycosis (7.4%) were the most common fungal infections in the present study. This is probably due to commonly observed habits in the community, such as leaving the feet wet after contact with water [18]. Viral infections, especially herpes zoster, appear commonly in elderly patients (5.9%) due to the expected weakening of the immune system with aging [19]. The lifetime risk of herpes zoster has been reported to be about 20-30%, which increases with age, and it is important not to forget that herpes zoster can have a huge impact on elderly patients due to the long-standing pain and can prevent them from returning to their usual lifestyle [19].
The percentage of patients with precancerous and cancerous lesions was 11.9% in the presented series. The ratio of primary skin carcinomas was 7.4%, and BCC constituted the majority. Although all of the BCCs were located on sun-exposed areas in the presented patients, it is very important to examine elderly patients carefully regardless of the primary complaint [20]. Moreover, the ratio of patients who presented with AK emphasizes the importance of regular follow-up in geriatric patients, especially in fair skin types, and of strongly advising the use of sunscreens in this age group [20].
Patients with skin diseases usually need to apply to dermatology clinics repeatedly due to the chronic nature of the skin diseases or the slow response to treatments [21]. Thus, compliance with follow-up and adherence to treatment are always difficult for all individuals with chronic skin diseases [22]. However, they are even more difficult and lower in elderly patients due to multiple factors including lack of motivation, impaired cognition, possible vision problems or handicaps, or difficulties in reaching the health-care facilities due to transportation problems [23,24]. Compliance has been defined as a patient's behavior toward treatment and also includes following the scheduled follow-up visits, taking medications regularly as prescribed, and following all the suggestions [24]. The overall compliance with appointments was calculated as very low (27.4%), and medicines were taken regularly by 62.1% of the attending patients in the present study. Although it is concordant with the literature that female patients were more compliant with the follow-up, as far as we know, the higher compliance among the patients treated with systemic therapies seems to be a new finding [8]. It is probably due to the fear of side effects, driven mainly by the warnings from the physician or general beliefs. Although simple topical regimens are more likely to be preferable to maximize compliance and efficacy according to the literature, our results showed that a systemic treatment approach with close follow-up can be an effective and controllable choice compared to topical treatments in suitable situations [17].
The main limitation of this study was its retrospective design. Due to the very low compliance with follow-up among the presented patients, a clear evaluation of self-reported adherence to medication was not fully possible.
Conclusion
Skin diseases, especially eczematous conditions and infections, are common among the geriatric population. Management is often less than optimal due to the limitations of this age group. Comorbidities may restrict treatment plans and compliance can be poor; therefore, it is important to adopt a realistic approach toward the dermatological diseases of the elderly. The prevalence of precancerous and cancerous lesions being not low emphasizes the importance of a careful dermatological examination regardless of the primary complaint in all patients.
Table 2. Main groups of skin disorders. *Twenty-seven patients who were not given any topical or systemic dermatological therapy were excluded.
Table 3. Comparison between compliant and non-compliant patients in terms of age, sex, and type of medication | 2022-12-07T16:07:05.592Z | 2023-11-22T00:00:00.000 | {
"year": 2023,
"sha1": "fd585ae5e2dfc64215d0236c221fbbeac0eb9f50",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.14744/nci.2022.20788",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "4c54d5b652ce980f1e045cfca3776a511b9e8a85",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
51925155 | pes2o/s2orc | v3-fos-license | A Survey on Surrogate Approaches to Non-negative Matrix Factorization
Motivated by applications in hyperspectral imaging we investigate methods for approximating a high-dimensional non-negative matrix $\mathbf{\mathit{Y}}$ by a product of two lower-dimensional, non-negative matrices $\mathbf{\mathit{K}}$ and $\mathbf{\mathit{X}}.$ This so-called non-negative matrix factorization is based on defining suitable Tikhonov functionals, which combine a discrepancy measure for $\mathbf{\mathit{Y}}\approx\mathbf{\mathit{KX}}$ with penalty terms for enforcing additional properties of $\mathbf{\mathit{K}}$ and $\mathbf{\mathit{X}}$. The minimization is based on alternating minimization with respect to $\mathbf{\mathit{K}}$ or $\mathbf{\mathit{X}}$, where in each iteration step one replaces the original Tikhonov functional by a locally defined surrogate functional. The choice of surrogate functionals is crucial: It should allow a comparatively simple minimization and simultaneously its first order optimality condition should lead to multiplicative update rules, which automatically preserve non-negativity of the iterates. We review the most standard construction principles for surrogate functionals for Frobenius-norm and Kullback-Leibler discrepancy measures. We extend the known surrogate constructions by a general framework, which allows the addition of a large variety of penalty terms. The paper finishes by deriving the corresponding alternating minimization schemes explicitly and by applying these methods to MALDI imaging data.
Introduction
Matrix factorization methods for large scale data sets have seen increasing scientific interest recently due to their central role for a large variety of machine learning tasks. The main aim of such approaches is to obtain a low-rank approximation of a typically large data matrix by factorizing it into two smaller matrices. One of the most widely used matrix factorization methods is the principal component analysis (PCA), which uses the singular value decomposition (SVD) of the given data matrix. In this work, we review the particular case of non-negative matrix factorization (NMF), which is favorable for a range of applications where the data under investigation naturally satisfies a non-negativity constraint. These include dimension reduction, data compression, basis learning, feature extraction as well as higher level tasks such as classification or clustering [11,26,27,30]. PCA based approaches without any non-negativity constraints would not lead to satisfactory results in this case since possible negative entries of the computed matrices cannot be easily interpreted for naturally non-negative datasets. Typically, the NMF problem is formulated as a minimization problem. The corresponding cost function includes a suitable discrepancy term, which measures the difference between the data matrix and the calculated factorization, as well as penalty terms to tackle the non-uniqueness of the NMF, to deal with numerical instabilities, but also to provide the matrices with desirable properties depending on the application task. The NMF cost functions are commonly non-convex and require tailored minimization techniques to ensure a monotone decrease of the cost function as well as the non-negativity of the matrix iterates. This leads us to the so-called surrogate minimization approaches, which are also known as majorize-minimization algorithms [18,23,36]. Such surrogate methods have been investigated intensively for some of the most interesting discrepancy measures and penalty terms [11,13,16,23,25,33,36]. The idea is to replace the original cost function by a so-called surrogate functional, such that its minimization induces a monotonic decrease of the objective function. It should be constructed in such a way that it is easier to minimize and that the deduced update rules preserve the non-negativity of the iterates, which typically leads to alternating, multiplicative update rules. It appears that these constructions are obtained case-by-case, employing different analytical approaches and different motivations for their derivation. The purpose of this paper, first of all, is to give a unified approach to surrogate constructions for NMF discrepancy functionals. This general construction principle is then applied to a wide class of functionals obtained by different combinations of divergence measures and penalty terms, thus extending the present state of the art for surrogate based NMF constructions. Secondly, one needs to develop minimization schemes for these functionals. Here we develop concepts for obtaining multiplicative minimization schemes, which automatically preserve non-negativity without the need for further projections. Finally, we exemplify some characteristic properties of the different functionals with MALDI imaging data, which are particularly high-dimensional and challenging hyperspectral data sets. The paper is organized as follows. Section 2 introduces the basic definition of the considered NMF problems.
Section 3 gives an overview of the theory of surrogate functionals as well as the construction principles. This is then exemplified in Section 4 for the most important cases of discrepancy terms, namely the Frobenius norm and the Kullback-Leibler divergence, as well as for a variety of penalty terms. Section 5 discusses alternating minimization schemes for these general functionals with the aim to obtain non-negativity-preserving, multiplicative iterations. Finally, Section 6 contains numerical results for MALDI imaging data.
Notation
Throughout this work, we will denote matrices in bold capital Latin or Greek letters (e.g. Y , K , Ψ , Λ) while vectors will be written in small bold Latin or Greek letters (e.g. c, d , β, ζ). The entries of matrices and vectors will be indicated in a non-bold format to distinguish between the i-th entry x i of a vector x and n different vectors x j for j = 1, . . . , n. In doing so, we write M ij for the entry of a matrix M in the i-th row and the j-th column and x i for the i-th entry of a vector x . The same holds for an entry of a matrix product: the ij-th entry of the matrix product MN will be indicated as (M N ) ij . Furthermore, we will use a dot notation to indicate rows and columns of matrices. For a matrix M we will write M •,j for the j-th column and M i,• for the i-th row of the matrix. What is more, we will use ‖ · ‖ for the usual Euclidean norm, ‖M ‖ 1 := Σ ij |M ij | for the 1-norm and ‖M ‖ F for the Frobenius norm of a matrix M . Besides that, we will use equivalently the terms function and functional for a mapping into the real numbers. Finally, the dimensions of the matrices in the considered NMF problem are reused in this work and will be introduced in the following section.
Non-negative Matrix Factorization
Before we introduce the basic NMF problem, we give the following definition to clarify the meaning of a non-negative matrix.
Definition 1 (Non-negative Matrix) A matrix M ∈ R n×m is called non-negative if all of its entries are non-negative, i.e. M ij ≥ 0 for all i = 1, . . . , n and j = 1, . . . , m. In this case we write M ∈ R n×m ≥0 .

The non-negativity of an arbitrary matrix M will be abbreviated for simplicity as M ≥ 0 in the later sections of this work. The basic NMF problem is to approximately decompose a given non-negative matrix Y ∈ R n×m ≥0 into two smaller non-negative matrix factors K ∈ R n×p ≥0 and X ∈ R p×m ≥0 , such that p ≪ min(n, m) and Y ≈ KX . For an interpretation let us assume that we are given m data vectors Y •,j ∈ R n for j = 1, . . . , m, which are stored column-wise in the matrix Y . Similarly, for k = 1, . . . , p we denote by K •,k , respectively X k,• , the column vectors of K , respectively the row vectors of X . We then obtain the following approximation for the data vectors: Y •,j ≈ (KX ) •,j = Σ k=1,...,p X kj K •,k , i.e. each data vector is approximated by a non-negative linear combination of the p columns of K . Furthermore, we define accordingly the β-divergences D β (Y , KX ) used as discrepancy measures in this work. The corresponding matrix divergences are defined componentwise, i.e. β = 2 yields the Frobenius norm and β = 1 the Kullback-Leibler divergence.
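The componentwise β-divergences for the two cases used later can be evaluated as in the following sketch; the scalar formulas d_2(y, x) = ½(y − x)² and d_1(y, x) = y ln(y/x) − y + x are the standard choices from the NMF literature and are assumed here, since the survey's own display of the definition is not reproduced above.

```python
import numpy as np

def beta_divergence(Y, X_hat, beta):
    """Componentwise beta-divergence D_beta(Y, X_hat) summed over all entries.
    beta = 2: (half) squared Frobenius distance; beta = 1: generalized
    Kullback-Leibler divergence. Standard textbook forms, assumed here."""
    Y = np.asarray(Y, dtype=float)
    X_hat = np.asarray(X_hat, dtype=float)
    if beta == 2:
        return 0.5 * np.sum((Y - X_hat) ** 2)
    if beta == 1:
        # entries with Y_ij = 0 contribute only X_hat_ij (0*log 0 := 0)
        mask = Y > 0
        kl = np.sum(Y[mask] * np.log(Y[mask] / X_hat[mask]))
        return kl - Y.sum() + X_hat.sum()
    raise ValueError("only beta in {1, 2} implemented in this sketch")

rng = np.random.default_rng(0)
Y = rng.random((6, 8))
K = rng.random((6, 3))
X = rng.random((3, 8))
print("Frobenius discrepancy:", beta_divergence(Y, K @ X, beta=2))
print("Kullback-Leibler discrepancy:", beta_divergence(Y, K @ X, beta=1))
```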
Definition 4 (NMF Minimization Problem) For a data matrix Y ∈ R n×m ≥0 , we consider the following generalized NMF minimization task:
min K , X ≥ 0 F (K , X ),  (3)
with the cost functional
F (K , X ) := D β (Y , KX ) + Σ ℓ α ℓ ϕ ℓ (K , X ).  (4)
Furthermore, we call (i) D β (Y , KX ) the discrepancy term, (ii) α ℓ the regularization parameters or weights, and (iii) ϕ ℓ (K , X ) the penalty terms.
The functional in (4) is typically non-convex in (K , X ). Hence, algorithms based on alternating minimization with respect to K or X are favourable, i.e.
K [d+1] = argmin K ≥ 0 F (K , X [d] ),  (5)
X [d+1] = argmin X ≥ 0 F (K [d+1] , X ),  (6)
where the index d denotes the iteration index of the corresponding matrices. This yields simpler, often convex, restricted problems with respect to either K or X . Considering for example the minimization of the NMF functional with Frobenius norm and without any penalty term yields the high-dimensional linear system K ⊤ Y = K ⊤ KX , which, however, would need to be solved iteratively. Instead, so-called surrogate methods for computing NMF decompositions have been proposed recently and are introduced in the next section. They also consider alternating minimization steps for K and X , but they replace the restricted minimization problems in (5) and (6) by simpler minimization tasks, which are obtained by locally replacing F by surrogate functionals for K and X separately.
Surrogate Functionals
In this section, we discuss general surrogate approaches for minimizing general non-convex functionals, which are then exemplified for specific NMF functionals in later sections. Let us consider a general functional F : Ω → R where Ω ⊂ R N and the minimization problem min x ∈Ω F (x ).
We will later add suitable conditions guaranteeing the existence of minimizers or at least the existence of stationary points. Surrogate concepts replace this task by minimizing a sequence of comparatively simpler and convex surrogate functionals, which can be minimized efficiently. These methods are also commonly referred to as surrogate minimization (or maximization) algorithms (SM) or as MM algorithms, where the first M stands for majorize and the second M for minimize (see also [18,23,36]). Such approaches have been demonstrated to be very useful in many fields of inverse problems, in particular for hyperspectral imaging [11], medical imaging applications such as transmission tomography [13,14] as well as MALDI imaging and tumor typing applications [26]. Replacing a non-convex functional by a series of convex problems is the main motivation for such surrogate approaches. However, if constructed appropriately, they can also be used to replace non-differentiable functionals by a series of differentiable problems, and they can be tailored such that gradient-descent methods for minimization yield multiplicative update rules which automatically incorporate non-negativity constraints without further projections. From this point on it is important to note that possible zero denominators during the derivation of the NMF algorithms as well as in the multiplicative update rules themselves will not be discussed explicitly throughout this work. Usually, this issue is handled in practice by adding a small positive constant in the denominator during the iteration scheme. In fact, the instability of NMF algorithms due to the convergence of some entries in the matrices to zero is not sufficiently discussed in the literature and still needs proper solution techniques. We will not focus on this problem and turn now to the basic definition and properties of surrogate functionals.
Definitions and Basic Properties
As in [25], we use the following definition of a surrogate functional.
Definition 5 (Surrogate Functional)
Let Ω ⊆ R N denote an open set and F : Ω → R a functional defined on Ω. Then Q F : Ω × Ω → R is called a surrogate functional or a surrogate for F, if it satisfies the following conditions:
(i) Q F (x , a) ≥ F (x ) for all x , a ∈ Ω,
(ii) Q F (x , x ) = F (x ) for all x ∈ Ω.
This is the most basic definition, which does not require any convexity or differentiability of the functional. However, it already allows us to prove that the iteration
x [d+1] := argmin x ∈Ω Q F (x , x [d] )  (8)
yields a sequence which monotonically decreases F .
Lemma 1 (Monotone Decrease) Let Q F be a surrogate functional for F according to Definition 5 and let the sequence {x [d] } d be generated by the iteration (8). Then
F (x [d+1] ) ≤ F (x [d] ) for all d.  (9)
Proof The monotone decrease (9) follows directly from the defining properties of surrogate functionals, see Definition 5: We obtain
F (x [d+1] ) ≤ Q F (x [d+1] , x [d] ) ≤ (∗) Q F (x [d] , x [d] ) = F (x [d] ),
where (∗) follows from the definition of x [d+1] in (8).
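The monotone decrease of Lemma 1 is easy to observe numerically. The following sketch majorizes the smooth but non-convex toy function f(x) = x² + 3 sin(x) by the quadratic Q(x, a) = f(a) + f′(a)(x − a) + (Λ/2)(x − a)² with Λ = 5 ≥ f″(x); the function, the bound and the starting point are arbitrary choices for illustration and are not taken from the paper.

```python
import numpy as np

def f(x):
    return x**2 + 3.0 * np.sin(x)

def f_prime(x):
    return 2.0 * x + 3.0 * np.cos(x)

LAMBDA = 5.0  # upper bound on f''(x) = 2 - 3*sin(x) <= 5, valid for all x

x = 4.0  # arbitrary starting point
values = [f(x)]
for _ in range(20):
    # Minimizer of the quadratic surrogate Q(., x): one damped gradient step
    x = x - f_prime(x) / LAMBDA
    values.append(f(x))

# The surrogate properties guarantee a monotonically decreasing sequence
assert all(v1 >= v2 - 1e-12 for v1, v2 in zip(values, values[1:]))
print("f(x^[d]):", np.round(values, 4))
```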
Remark 1 (Addition of Surrogate Functionals)
Let Ω ⊆ R n be an open set, F, G : Ω → R pointwise defined functionals and Q F , Q G corresponding surrogates. Then Q F + Q G is a surrogate functional for F + G.
For each functional F there typically exists a large variety of surrogate functionals, and we can aim at optimizing the structure of the surrogate functionals. The following additional property is the key to simple and efficient minimization schemes for surrogate functionals.

Definition 6 (Separable Surrogate Functional) A surrogate functional Q F : Ω × Ω → R is called separable if it can be written as a sum of functions each of which depends only on a single component of its first argument, i.e. Q F (x , a) = Σ i=1,...,N Q i (x i , a). In this case the minimization of Q F (· , a) decouples into N scalar minimization problems.
Lemma 1 above only ensures the monotonic decrease of the cost functional, which is not sufficient to guarantee convergence of the sequence {x [d] } to a minimizer of F or at least to a stationary point of F . The convergence theory for surrogate functionals is far from being complete (see also the works [23] and [36]). Despite this lack of theoretical foundation, surrogate based minimization yields strictly decreasing sequences for a large variety of applications. In particular, surrogate based methods can be constructed such that first order optimality conditions lead to multiplicative update rules, which -in view of applications to NMF constructions -is a very desirable property. We now turn to discussing three different construction principles for surrogate functionals.
Jensen's Inequality
The starting point is the well known Jensen's inequality for convex functions, see [10].
Lemma 2 (Jensen's Inequality) Let Ω ⊆ R N denote a convex set, F : Ω → R a convex function and λ i ∈ [0, 1] non-negative numbers for i ∈ {1, . . . , k} with Σ i λ i = 1. Then for all x 1 , . . . , x k ∈ Ω we have F ( Σ i λ i x i ) ≤ Σ i λ i F (x i ).
In this subsection we consider functionals F which are derived from continuously differentiable and convex functions f via F (x ) = f (c · x ) for Ω ⊆ R N ≥0 and some auxiliary variable c ∈ Ω. This also implies that F is convex, since it is the composition of a convex function with a linear map. We now choose λ i ∈ [0, 1] with Σ i=1,...,N λ i = 1 and α ∈ R N and define a corresponding decomposition of the inner product c · x around some b ∈ Ω. Applying Jensen's inequality to this decomposition yields an upper bound, and the resulting functional Q F : Ω × Ω → R defines a surrogate for F, which can be seen from the inequality above and by observing that Q F (x , x ) = F (x ).
Low Quadratic Bound Principle
This concept is based on a Taylor expansion of F in combination with a majorization of the quadratic term. This so-called low quadratic bound principle (LQBP) has been introduced in [5] and was used in particular for the computation of maximum-likelihood estimators. These methods do not require that F itself is convex, and the construction is based on the following lemma.
Lemma 3 Let f : Ω → R be twice continuously differentiable and let Λ ∈ R N ×N be a matrix such that Λ − ∇ 2 f (x ) is positive semi-definite for all x ∈ Ω. We then obtain the quadratic majorization
f (x ) ≤ f (a) + ∇f (a) ⊤ (x − a) + 1/2 (x − a) ⊤ Λ (x − a) =: Q f (x , a),
and Q f is a surrogate functional for f.
Proof The proof of this classical result is based on the second-order Taylor polynomial of f and shall be left to the reader.
The related update rule for surrogate minimization can be stated explicitly under natural assumptions on the matrix Λ.
Corollary 1 Assume that the assumptions of Lemma 3 hold. In addition, assume that Λ is a positive definite and symmetric matrix. Then, the corresponding surrogate Q f is strictly convex in its first variable and we have from (8)
x [d+1] = x [d] − Λ −1 ∇f (x [d] ).  (17)
Proof For an arbitrary α ∈ {1, . . . , N }, we have that
∂Q f /∂x α (x , a) = ∂f/∂x α (a) + 1/2 Σ i (Λ αi + Λ iα )(x i − a i ) = (∗) ∂f/∂x α (a) + (Λ(x − a)) α ,
where (∗) utilizes the symmetry of Λ. Hence, it holds that ∇ x Q f (x , a) = ∇f (a) + Λ(x − a). The Hessian matrix of Q f then satisfies ∇ 2 x Q f (x , a) = Λ. This implies the positive definiteness of the functional; hence, it has a unique minimizer, which is given by x = a − Λ −1 ∇f (a). This is the update rule above.
The computation of the inverse of Λ is particularly simple if Λ is a diagonal matrix. Furthermore, the diagonal structure ensures the separability of the surrogate functional mentioned in Definition 6. Therefore, we consider diagonal matrices of the form
Λ ii (a) := (M a) i / a i + κ i ,  (18)
where M denotes the matrix to be majorized (see Lemma 4 below), a ∈ R N >0 , and κ i ≥ 0 has to be chosen individually depending on the considered cost function. We will see that an appropriate choice of κ i will finally lead to the desired multiplicative update rules of the NMF algorithm.
The matrix Λ(a) in (18) fulfills the conditions in Corollary 1, as can be seen from the following lemma. Therefore, if Λ is constructed as in (18), the update rule in (17) can be applied immediately.
Lemma 4 Let M ∈ R N ×N ≥0 denote a symmetric matrix. With a ∈ R N >0 and κ i ≥ 0, we define the diagonal matrix Λ such that Λ ii (a) := (M a) i / a i + κ i for i = 1, . . . , N. Then Λ and Λ − M are positive semi-definite.
Proof Let ζ ∈ R N denote an arbitrary vector and let δ denote the Kronecker symbol. Then
ζ ⊤ (Λ − M )ζ = Σ i,j ( δ ij ((M a) i / a i + κ i ) − M ij ) ζ i ζ j = Σ i,j M ij a j ζ i 2 / a i + Σ i κ i ζ i 2 − Σ i,j M ij ζ i ζ j = 1/2 Σ i,j M ij a i a j ( ζ i /a i − ζ j /a j ) 2 + Σ i κ i ζ i 2 ≥ 0,
where the symmetry and the entrywise non-negativity of M have been used. The positive semi-definiteness of Λ follows from its diagonal structure.
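To see why this diagonal choice is attractive in practice, the following sketch applies the update (17) with the majorizer of Lemma 4 (κ_i = 0) to the simple quadratic f(x) = ½ xᵀM x − bᵀx with entrywise non-negative M and b; the concrete matrices are random illustrative data, not from the paper. The step then reduces to the multiplicative rule x ← x ⊙ b ⊘ (M x), so positivity of the iterates is preserved automatically.

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.random((8, 5))
M = A.T @ A                # symmetric, entrywise non-negative, positive semi-definite
b = rng.random(5) + 0.1    # strictly positive right-hand side

def f(x):
    return 0.5 * x @ M @ x - b @ x

x = np.ones(5)             # positive starting point
for _ in range(200):
    grad = M @ x - b
    Lam_diag = (M @ x) / x           # diagonal majorizer, kappa_i = 0
    x = x - grad / Lam_diag          # update (17); equals x * b / (M @ x)
    assert np.all(x > 0)             # multiplicative form keeps iterates positive

print("objective f(x):", f(x))
print("residual ||Mx - b||:", np.linalg.norm(M @ x - b))
```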
Further Construction Principles
So far we have discussed two major construction principles based on either Jensen's inequality or on upper bounds for the quadratic term in Taylor expansions. [23] lists further construction principles, which, however, will not be used for the NMF constructions in the subsequent sections of this paper. For completeness we briefly list their main properties. A relaxation of the approach based on Jensen's inequality is achieved by choosing the weights differently; a typical choice is based on |c j | p , which leads to surrogate functionals for p ≥ 0. This type of surrogate was originally introduced in the context of medical imaging, see [12], for positron emission tomography. Another approach is based on combining arithmetic with geometric means and can be used for constructing surrogates for posynomial functions. For α, v , a ∈ R N >0 , one obtains a corresponding surrogate by bounding each posynomial term with the arithmetic-geometric mean inequality.
Surrogates for NMF Functionals
In this section we apply the general construction principles of Section 3 to the NMF problem as stated in (3). The resulting functional F (K , X ) depends on both factors of the matrix decomposition, and minimization is attempted by alternating minimization with respect to K and X as described in (5) and (6). However, we replace the functional F in each iteration by suitable surrogate functionals, which allow an explicit minimization. Hence, we avoid the minimization of F itself, which even for the most simple quadratic formulation requires solving a high-dimensional linear system. We start by considering the discrepancy terms for β = 2 (Frobenius norm) and β = 1 (Kullback-Leibler divergence) and determine surrogate functionals with respect to X and K . We then add several penalty terms and develop surrogate functionals accordingly.
Frobenius Discrepancy and Low Quadratic Bound Principle
We start by constructing a surrogate for the minimization with respect to X for the Frobenius discrepancy F (X ) := 1/2 ‖Y − KX ‖ F 2 . Let Y •,j , resp. X •,j , denote the column vectors of Y , resp. X . The separability of F yields F (X ) = Σ j=1,...,m 1/2 ‖Y •,j − K X •,j ‖ 2 =: Σ j f Y•,j (X •,j ). Hence, the minimization separates for the different f Y•,j terms. The Hessian of these terms is given by ∇ 2 f Y•,j = K ⊤ K , and the LQBP construction principle of the previous section with the diagonal majorizer (18) yields the surrogate functionals Q f Y•,j (x , a) = f Y•,j (a) + ∇f Y•,j (a) ⊤ (x − a) + 1/2 (x − a) ⊤ Λ(a)(x − a). An appropriate choice of κ k ensures the multiplicativity of the final NMF algorithm. In the case of the Frobenius discrepancy term, we will see that suitable κ k can be chosen depending on the ℓ1 regularization terms in the cost function, which are not included up to now (see Subsection 4.4 and Appendix A.1 for more details on this issue). Due to the absent ℓ1 terms, we set κ k = 0 to get the desired multiplicative update rules. Summing up the contributions of the columns of X yields the final surrogate for F. The equivalent construction for K can be obtained by regarding the rows of K separately. We summarize this surrogate construction in the following theorem.
Theorem 1 (Surrogate Functional for the Frobenius Norm with LQBP)
We consider the cost functionals F (X ) : define separable and convex surrogate functionals.
Frobenius Discrepancy and Jensen's Inequality
Again we focus on deriving a surrogate functional for X ; the construction for K will be very similar. Expanding the Frobenius discrepancy yields Putting v := X •,j ∈ R p ≥0 and c := K i,• ∈ R p ≥0 allows us to define such that Hence we have separated the Frobenius discrepancy suitably for applying Jensen's inequality. Following the construction principle in Subsection 3.2, we define with the auxiliary variable A ∈ R p×m Inserting this into the decomposition of the Frobenius discrepancy yields the surrogate Q F,2 (X , A) by The construction of a surrogate for K proceeds in the same way. We summarize the results in the following theorem.
Theorem 2 (Surrogate Functional for the Frobenius Norm with Jensen's Inequality) We consider the cost functionals F (X ) : define separable and convex surrogate functionals.
These surrogates are equal to the ones proposed in [11]. We will later use first-order necessary conditions of the surrogate functionals for obtaining algorithms for minimization. We already note that, despite the rather different derivations, the update rules for the surrogates obtained by LQBP and Jensen's inequality will be identical.
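For orientation, the first-order conditions of these surrogates lead to the classical multiplicative Frobenius updates; a minimal NumPy sketch (plain Frobenius case without penalty terms, with a small eps in the denominators as a purely numerical safeguard) is:

import numpy as np

def nmf_frobenius(Y, p, n_iter=200, eps=1e-12, seed=0):
    # Classical multiplicative updates for 1/2 ||Y - K X||_F^2.
    rng = np.random.default_rng(seed)
    n, m = Y.shape
    K = rng.random((n, p)) + eps
    X = rng.random((p, m)) + eps
    for _ in range(n_iter):
        X *= (K.T @ Y) / (K.T @ K @ X + eps)   # update of X
        K *= (Y @ X.T) / (K @ X @ X.T + eps)   # update of K
    return K, X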
Surrogates for Kullback-Leibler Divergence
The case β = 1 in Definition 3 yields the so-called Kullback-Leibler divergence (KLD). For matrices M , N ∈ R n×m >0 , it is defined as and has been investigated intensively in connection with non-negative matrix factorization methods [11,16,24,25]. In our context, we define the cost functional for the NMF construction by We will focus in this subsection on Jensen's inequality for constructing surrogates for the KLD since they will lead to the known classical NMF algorithms (see also [11,24,25]). However, it is also possible to use the LQBP principle to construct a suitable surrogate functional for the KLD which leads to different, multiplicative update rules (see Appendix B).
We start by deriving a surrogate for the minimization with respect to X , i.e. we consider Using the same λ k and α k as in the section above and applying it to the convex function f (t) := − ln(t), we obtain Multiplication with Y ij ≥ 0 and the addition of appropriate terms yields The condition Q F,1 (X , X ) = F (X ) follows by simple algebraic manipulations, such that Q F,1 is a valid surrogate functional for F. The approach by Jensen's inequality is very flexible and we obtain different surrogate functionals Q F,2 and Q F,3 by using i.e.
Inserting the same λ k and α k as before in Equation (15), we obtain immediately the surrogates It is easy to check that the partial derivatives for all three variants are the same; hence, the update rules obtained in the next section based on first-order optimality conditions will be identical. Applying the same approach to obtain a surrogate for K yields the following theorem. Then define separable and convex surrogate functionals.
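Analogously to the Frobenius case, the first-order conditions of these KLD surrogates lead to the well-known multiplicative KL updates; a minimal sketch (eps again only as a numerical safeguard):

import numpy as np

def nmf_kl(Y, p, n_iter=200, eps=1e-12, seed=0):
    # Classical multiplicative updates for the Kullback-Leibler divergence KL(Y, K X).
    rng = np.random.default_rng(seed)
    n, m = Y.shape
    K = rng.random((n, p)) + eps
    X = rng.random((p, m)) + eps
    ones = np.ones_like(Y)
    for _ in range(n_iter):
        X *= (K.T @ (Y / (K @ X + eps))) / (K.T @ ones + eps)   # column sums of K in denominator
        K *= ((Y / (K @ X + eps)) @ X.T) / (ones @ X.T + eps)   # row sums of X in denominator
    return K, X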
Surrogates for ℓ1- and ℓ2-Norm Penalties
Computing an NMF is an ill-posed problem, see [11]; hence one needs to add stabilizing penalty terms to obtain reliable matrix decompositions. The most standard penalties are ℓ1- and ℓ2-terms for the matrix factors, leading to for β ∈ {1, 2}. The ℓ2-penalty prevents exploding norms for each matrix factor and the ℓ1-term promotes sparsity in the minimizing factors, see [21,28] for a general exposition.
Combinations of ℓ1- and ℓ2-norms are sometimes called elastic net regularization [20], due to their importance in medical imaging. These penalty terms are convex and they separate; hence, they can be used as surrogates themselves. For the case of the Kullback-Leibler divergence this leads to the following surrogate for minimization with respect to X : where Q KL is the surrogate for the Kullback-Leibler divergence of Theorem 3 for X .
The Frobenius case cannot be treated in the same way. If we used the penalty terms as surrogates themselves and obtained the standard minimization algorithm by first-order optimality conditions, this would not lead to a multiplicative algorithm which preserves the non-negativity of the iterates. It can easily be seen that the quadratic part of the cost function yields the Hessian ∇ 2 f y (a) = K^T K + νI . The choice of κ k is made dependent on the ℓ1 regularization term of the cost function f y , as already described in Subsection 4.1. It can be shown in the derivation of the NMF algorithm that κ k = λ for all k leads to multiplicative update rules. A more general cost function is considered in Appendix A.1, where the concrete effect of κ k is described in more detail. This yields the following diagonal matrix Λ f y (a): The surrogate for minimization with respect to X is then Similarly, for the minimization with respect to K we obtain the surrogate by using the diagonal matrix Λ g y (a) kk := ((a(XX^T + µI p×p )) k + ω)/a k .
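A sketch of the resulting multiplicative update of X for the Frobenius discrepancy with an ℓ2-penalty (weight ν) and an ℓ1-penalty (weight λ) on X, i.e. the choice κ k = λ discussed above; eps is a numerical safeguard only, and the parameter values are placeholders:

import numpy as np

def update_X_regularized(Y, K, X, nu=0.1, lam=0.05, eps=1e-12):
    # One multiplicative step for 1/2||Y - K X||_F^2 + nu/2||X||_F^2 + lam||X||_1.
    numer = K.T @ Y
    denom = K.T @ K @ X + nu * X + lam + eps
    return X * numer / denom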
Surrogates for Orthogonality Constraints
The observation that a non-negative matrix with pairwise orthogonal rows has at most one non-zero entry per column is the motivation for introducing orthogonality constraints for K or X . This will lead to strictly uncorrelated feature vectors, which is desirable in several applications, e.g. for obtaining discriminating biomarkers from mass spectra, see Section 6 on MALDI Imaging. We could add the orthogonality constraint K^T K = I as an additional penalty term σ K ||K^T K − I|| 2 . However, this would introduce fourth-order terms. Hence we introduce additional variables V and W and split the orthogonality condition into two second-order terms leading to Surrogates for the terms ||I − V K|| 2 F and ||I − X W|| 2 F can be calculated via Jensen's inequality (see Subsection 4.2). The other penalties can be used as surrogates themselves and therefore, we obtain Theorem 4 (Surrogate Functionals for Orthogonality Constraints) We consider the cost functionals define separable and convex surrogate functionals.
Surrogates for Total Variation Penalties
Total variation (TV) penalty terms are the second important class of regularization terms besides ℓp-penalty terms. TV-penalties aim at smooth or even piecewise constant minimizers; hence they are defined in terms of first-order or higher-order derivatives [7]. Originally, they were introduced for denoising applications in image processing [31] but have since been applied to inpainting, deconvolution and other inverse problems, see e.g. [8]. The precise mathematical formulation of the total variation in the continuous case is described in the following definition.
Definition 7 (Total Variation (Continuous))
Let Ω ⊂ R N be open and bounded. The total variation of a function u ∈ L^1_loc (Ω) is defined as There exist several analytic relaxations of TV based on L^1-norms of the gradient, which are more tractable for analytical investigations. For numerical implementations one rather uses the L^1-norm of the gradient, ||∇f||_{L^1}, as a more computationally tractable substitute. For discretization, the gradient is typically replaced by a finite difference approximation [9]. For applying TV-norms to data, we assume that the row index in the data matrix Y refers to spatial locations and the column index to so-called channels. In this case, we consider the most frequently used isotropic TV for applying it to measured, discretized hyperspectral data.
Definition 8 (Total Variation (Discrete))
For fixed ε TV > 0, the total variation of a matrix K ∈ R n×p ≥0 is defined as where ψ k ∈ R ≥0 is a weighting of the k-th data channel and N i ⊆ {1, . . . , n} \ {i} denotes the index set referring to spatially neighboring pixels.
We will use the following short hand notation which can be seen as a finite difference approximation of the gradient magnitude of the image K •,k at pixel K ik for some neighbourhood pixels defined by N i . A typical choice for neighbourhood pixels in two dimensions for the pixel (0, 0) is N (0,0) := {(1, 0), (0, 1)} to get an estimate of the gradient components along both axes. Finally, by introducing the positive constant ε TV > 0, we get a differentiable approximation of the total variation penalty. In Section 6, we will discuss the application of NMF methods to hyperspectral MALDI imaging datasets, which has a natural 'spatial structure' in its columns. In this section we stay with a generic choice of N i as well as of the ψ k and we construct a surrogate following the approach of the groundbreaking works of [13] and [29]. For t ≥ 0 and s > 0 we use the inequality (linear majorization) and apply it in order to compare ∇ ik K with values obtained by an arbitrary non-negative matrix A: Summation with respect to i, multiplication with ψ k and summation with respect to k leads to This yields a candidate for a surrogate Q Oli TV for the TV-penalty term, which is the same as the one used in [29]. However, it is not separable, hence we aim at a second, separable approximation. For arbitrary a, b, c, d ∈ R we have This leads to Therefore, we have the following Theorem 5 (Surrogate Functional for TV Penalty Term) We consider the cost functional F (K ) := TV(K ) with the total variation defined in (34). Then defines a separable and convex surrogate functional.
The separability of the surrogate is not obvious. The proof (see Appendix C) delivers the following notation, which we also need for a description of the algorithms in the next section. First of all, we need the definition of the so-called adjoint neighborhood pixels Ñ i given by One then introduces matrices P (A) ∈ R n×p ≥0 and Z(A) ∈ R n×p ≥0 via Using these notations, it can be shown that the surrogate can be written as such that we obtain the desired separability. Here, C(A) denotes some function depending only on A. The description of Q TV with the help of P (A) ik and Z(A) ik will also allow us to compute the partial derivatives in a much more convenient way (see also Appendix A.2).
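Returning to Definition 8, the shorthand ∇ ik K and the smoothed discrete TV of one channel might be computed as in the following sketch, assuming the neighborhood N (0,0) = {(1, 0), (0, 1)} described above and the common form sqrt(sum of squared forward differences + ε TV ); the exact placement of ε TV inside the square root is an assumption here:

import numpy as np

def smoothed_tv_channel(img, eps_tv=1e-6):
    # img: one column K_{.,k} of K, reshaped to the 2-D pixel grid.
    dx = np.zeros_like(img)
    dy = np.zeros_like(img)
    dx[:-1, :] = img[1:, :] - img[:-1, :]        # difference toward neighbor (1, 0)
    dy[:, :-1] = img[:, 1:] - img[:, :-1]        # difference toward neighbor (0, 1)
    grad_mag = np.sqrt(dx**2 + dy**2 + eps_tv)   # the shorthand "nabla_ik K"
    return grad_mag.sum()

# TV(K) is then a psi_k-weighted sum of smoothed_tv_channel over all channels k.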
Surrogates for Supervised NMF
As a motivation for this section, we consider classification tasks. We view the data matrix Y as a collection of n data vectors, which are stored in the rows of Y . Moreover, we have an expert annotation u i for i = 1, . . . , n, which assigns a label to each data vector. For a classification problem with two classes we have u i ∈ {0, 1}. As already stated, the rows of the matrix X of an NMF decomposition can be regarded as a basis for approximating the rows of Y . Hence, one assumes that the correlations between a row Y i,• of Y and all row vectors of X , i.e. computing Y i,• X^T , contain the relevant information of Y i,• . The vector of correlations yields a so-called feature vector of length p. A classical linear regression model, which uses these feature vectors, then asks to compute weights β k for k = 1, . . . , p, such that Y i,• X^T β ≈ u i (for more details on linear discriminant analysis methods, we refer to Chapter 4 in [4]).
In matrix notation and using least squares, this is equivalent to computing β as a minimizer of ||u − Y X^T β|| 2 .
We now use X and β to define the vector x * . In tumor typing classifications, where the data matrix Y is obtained by MALDI measurements, the vector x * can be interpreted as a characteristic mass spectrum of some specific tumor type and can be directly used for classification tasks in the arising field of digital pathology (see also Section 6 and [26]). The classification of a new data vector y is then simply obtained by computing the scalar product w = ⟨x * , y⟩ and assigning either the class label 0 or 1 by comparing w with a pre-assigned threshold s. This threshold is typically obtained in the training phase of the classification procedure by computing Y X^T β for some given training data Y and choosing s such that a performance measure of the classifier is optimized. The approach we have described is based on first computing an NMF, i.e. K and X , and then computing the weights β of the classifier. Hence, the computation of the NMF is done independently of the class labels u, which is also referred to as an unsupervised NMF approach. We might expect that computing the NMF by minimizing a functional which includes the class labels, i.e.
will lead to an improved classifier. In the application field of MALDI imaging, this supervised approach yields an extraction of features from the given training data which allow a better distinction between spectra obtained from different tissue types, such as tumorous and non-tumorous regions (see also [26]). Surrogates for the first term have been determined in the previous section for the case of the Frobenius norm and the Kullback-Leibler divergence. Hence, we need to determine surrogates of the new penalty term for minimization with respect to X and β: Surrogates can be obtained by extending the Jensen principle to the matrix-valued case. Here, we consider a convex subset Ω ⊂ R N ×M >0 and define, with a convex and continuously differentiable function f and auxiliary variables c ∈ R N >0 and d ∈ R M >0 , We now use the following generalized Jensen's inequality Setting for some i ∈ {1, . . . , n} yields by inserting λ jk and α jk into (45) The computation of a surrogate for minimization with respect to β proceeds analogously. We summarize the results in the following theorem.
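To make the unsupervised variant concrete, here is a hedged sketch of the feature extraction, least-squares weights and threshold classification (NumPy; X can come, for instance, from the nmf_frobenius sketch above, and the midpoint threshold is a simplification of the performance-based tuning described in the text):

import numpy as np

def train_threshold_classifier(Y, u, X):
    # Y: (n, m) data matrix, u: (n,) binary labels in {0, 1}, X: (p, m) NMF factor.
    Y, u, X = np.asarray(Y, float), np.asarray(u, float), np.asarray(X, float)
    F = Y @ X.T                                      # feature vectors Y_{i,.} X^T
    beta, *_ = np.linalg.lstsq(F, u, rcond=None)     # least-squares weights beta
    x_star = X.T @ beta                              # characteristic spectrum x*
    w = F @ beta                                     # training scores <x*, y_i>
    s = 0.5 * (w[u == 0].mean() + w[u == 1].mean())  # simplistic threshold choice
    return x_star, s

def classify(y, x_star, s):
    return int(np.dot(y, x_star) > s)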
A big advantage of linear regression models is their simplicity and manageability. However, they are far from the optimal approach for approximating the binary output data u with a continuous model. Logistic regression models offer a much more natural method for binary classification tasks. Together with the supervised NMF as a feature extraction method, this overall workflow led in [26] to excellent classification results and outperformed classical approaches. However, the proposed model is based on a gradient descent approach, such that the non-negativity of the iterates can only be guaranteed by a projection step. Appropriate surrogate functionals for this workflow are still ongoing research and could lead to even better outcomes (see also [35,36]).
Surrogate Based NMF Algorithms
In the previous section we have defined surrogate functionals for various NMF cost functions. Besides the necessary surrogate properties, we also expect that the minimization of these surrogates is straightforward and can be computed efficiently. In our case we additionally demand that the minimization scheme based on solving the first-order optimality conditions leads to a separable algorithm and that it only requires multiplicative updates, which automatically preserve the non-negativity of the iterates. Let us start by stating the most general functional with the Kullback-Leibler divergence; the Frobenius case follows similarly. For constructing non-negative matrix factorizations, we incorporate ℓ2-, sparsity-, orthogonality- and TV-constraints and also the penalty terms coming from the supervised NMF. Of course, in most applications one only uses a subset of these constraints for stabilization and for enhancing certain properties. The corresponding algorithms can readily be obtained from the general case by setting the respective regularization parameters to zero. The resulting update rules are classical results and can be found in numerous works [11,24,25].
≥0 , β ∈ R p ≥0 and a set of regularization parameters λ, µ, ν, ω, τ, σ K ,1 , σ K ,2 , σ X ,1 , σ X ,2 , ρ ≥ 0, we define the NMF minimization problem by The choice of the various regularization parameters occurring in Definition 9 is often based on heuristic approaches. We will not focus on that issue in this work and refer instead to [19] and the references therein, where two methods are investigated for the general case of multi-parameter Tikhonov regularization. The algorithms studied in this section start with positive initializations for K , X , V , W and β. These matrices are updated alternatingly, i.e. all matrices except one are kept fixed and only the selected matrix is updated by solving the respective first-order optimality condition. We will focus in this section on the derivation of the update rule for K (see also Appendix A.2). The iteration schemes for the other matrices follow analogously. For that, we only have to consider those terms in the general functional which depend on K , i.e. we aim at determining a minimizer for Instead of minimizing this functional we replace it by the previously constructed surrogate functionals, which leads to with the surrogates Q KL for the Kullback-Leibler divergence in Theorem 3, Q TV for the TV penalty term in Theorem 5 and Q Orth for the orthogonality penalty terms in Theorem 4. The computation of the partial derivatives leads to a system of equations for ξ ∈ {1, . . . , n} and ζ ∈ {1, . . . , p}. This leads to a system of quadratic equations Solving for K ξζ and denoting the Hadamard product by • as well as the entrywise matrix division by a fraction line yields the following update rule for K . (Note that the notation for P (A) and Z(A) was introduced in the section on TV-regularization above.) In the above update rule, 1 n×p denotes an n × p matrix with ones in every entry and Ψ ∈ R n×p ≥0 is defined as Details on the derivation can be found in Appendix A.2.
The partial derivatives with respect to X are computed similarly. Defining leads to the update The updates for V and W are straightforward and we obtain the following theorem.
Theorem 7 (Alternating Algorithm for NMF Problem in Definition 9)
The initializations and the iterative updates lead to a monotonic decrease of the cost functional It is easy to see that the classical, regularized NMF algorithms described in [11,24,25] can be regained by setting the corresponding regularization parameters to zero. In the case of ℓ1- and ℓ2-regularized NMF, this leads to the cost function The classical update rule for X is obtained by setting which, in connection with the update rule for X of the previous theorem, leads to the update rule described in [11]. By the same approach and with the surrogate functionals derived in Section 4, we obtain the update rules for the Frobenius discrepancy term, i.e. we consider the functional A monotone decrease of this functional is obtained by the following iteration in combination with the update rules for V , W , β as in Theorem 7 (see also Appendix A.1 for more details on the derivation of these algorithms).
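A compact sketch of such an alternating scheme for the ℓ1/ℓ2-regularized Frobenius functional, using the regularized update of X from Subsection 4.4 together with the analogous update of K, and checking the monotone decrease of the objective along the iteration (the parameter names follow the convention λ, ν for X and ω, µ for K; the concrete values are placeholders):

import numpy as np

def regularized_nmf(Y, p, lam=0.05, nu=0.1, om=0.05, mu=0.1, n_iter=300, eps=1e-12, seed=0):
    rng = np.random.default_rng(seed)
    n, m = Y.shape
    K = rng.random((n, p)) + eps
    X = rng.random((p, m)) + eps

    def cost(K, X):
        # Since K and X are non-negative, their entry sums equal the l1 norms.
        return (0.5 * np.sum((Y - K @ X) ** 2)
                + 0.5 * nu * np.sum(X ** 2) + lam * np.sum(X)
                + 0.5 * mu * np.sum(K ** 2) + om * np.sum(K))

    prev = cost(K, X)
    for _ in range(n_iter):
        X *= (K.T @ Y) / (K.T @ K @ X + nu * X + lam + eps)   # multiplicative update of X
        K *= (Y @ X.T) / (K @ X @ X.T + mu * K + om + eps)    # multiplicative update of K
        cur = cost(K, X)
        assert cur <= prev + 1e-8   # monotone decrease expected from the surrogate principle
        prev = cur
    return K, X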
MALDI Imaging
As a test case we analyse MALDI imaging data (matrix-assisted laser desorption/ionization) of a rat brain. MALDI imaging is a comparatively novel modality, which unravels the molecular landscape of tissue slices and allows a subsequent proteomic or metabolic analysis [1,6,22]. Clustering these data reveals, for example, different metabolic regions of the tissue, which can be used for supporting the pathological diagnosis of tumors. The data used in this paper were obtained by a MALDI imaging experiment, see Figure 2 for a schematic experimental setup. In our numerical experiments, we used a classical rat brain dataset which has been used in several data processing papers before [2,3,22]. It constitutes a standard test set for hyperspectral data analysis. The tissue slice was scanned at 20185 positions. At each position a full mass spectrum with 2974 m/z (mass over charge) values was collected. That is, instead of three color channels, as is usual in image processing, these data have 2974 channels, each channel containing the spatial distribution of molecules having the same m/z value.
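As a purely hypothetical usage sketch, such a 20185 × 2974 data matrix could be factorized into p = 6 components with an off-the-shelf multiplicative-update NMF; the file name and the absence of any preprocessing are placeholders and do not describe the pipeline used for the results below:

import numpy as np
from sklearn.decomposition import NMF

Y = np.load("rat_brain_maldi.npy")   # assumed array of shape (20185, 2974)
model = NMF(n_components=6, solver="mu", beta_loss="frobenius", init="nndsvda", max_iter=500)
K = model.fit_transform(Y)           # (20185, 6): pseudo channels (columns of K)
X = model.components_                # (6, 2974): pseudo spectra (rows of X)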
The following numerical examples were obtained with the multiplicative algorithms described in the previous section. We just illustrate the effect of the different penalty terms for some selected functionals. One can either display the columns of K as the pseudo channels of the NMF decomposition or the rows of X as pseudo spectra characterizing the different metabolic processes present in the tissue slice, see Figures 3-6. Both ways of visualization have their respective value. Looking at the pseudo channels in connection with orthogonality constraints leads to a clustering of the spectra and to a subdivision of the tissue slice into regions with potentially different metabolic activities, see [22]. Considering instead the different pseudo spectra, which were constructed in order to have a basis which allows a low-dimensional approximation of the data set, is the basis for subsequent proteomic analysis. For example, one may target pseudo spectra where the related pseudo channels are concentrated in regions which were annotated by pathological experts. Mass values which are dominant in those spectra may stem from proteins/peptides relating to biomarkers as indicators for certain diseases. Hence, classification schemes based on NMF decompositions have been widely investigated [26,30,34].
Fig. 3: NMF of the rat brain dataset for p = 6. Orthogonality constraints on the channels with σ K ,1 = 1 and σ K ,2 = 1.
Conclusion
In this paper, we investigated methods based on surrogate minimization approaches for the solution of NMF problems. The interest in NMF methods is related to their importance for several machine learning tasks. Applications to large data sets require that the resulting algorithms are very efficient and that the iteration schemes only need simple matrix-vector multiplications. The state of the art for constructing appropriate surrogates is based on case-by-case studies for the different NMF models considered. In this paper, we embedded the different approaches in a general framework, which allowed us to analyze several extensions of the NMF cost functional, including ℓ1- and ℓ2-regularization, orthogonality constraints, total variation penalties as well as extensions which led to supervised NMF concepts. Secondly, we analyzed surrogates in the context of the related iteration schemes, which are based on first-order optimality conditions. The requirement of separability as well as the need for multiplicative updates, which preserve non-negativity without additional projections, were analyzed. This resulted in a general description of algorithms for alternating minimization of constrained NMF functionals. The potential of these methods is confirmed by numerical tests using hyperspectral data from a MALDI imaging experiment. Several further directions of research would be of interest. First of all, besides the most widely used penalty terms discussed in this paper, further penalty terms, e.g. higher-order TV terms, could be considered. Secondly, construction principles for more general discrepancy terms could be analyzed (see also [16]). Potentially more importantly, this paper contains only very first results for combining NMF constructions directly with subsequent classification tasks. The question of an appropriate surrogate functional for the supervised NMF model with logistic regression used in [26] remains unanswered, and also the comparison with algorithmic alternatives such as ADMM methods needs to be explored.
Fig. 6: NMF of the rat brain dataset for p = 6. Orthogonality constraints on the channels with σ K ,1 = 10 and σ K ,2 = 10 and a sparsity penalty term on X with λ = 0.06. The sparsity penalty term has, in connection with the orthogonality constraint, a comparatively strong influence on the NMF computation: the sparsity of the spectra increases significantly, and thus their biological interpretability, whereas the anatomic structure in the pseudo channels diminishes.
Appendix A Details on the Derivation of the Algorithms in Section 5
In this section, we give a more detailed derivation of the algorithms presented in Section 5. We start with the less complex case of the Frobenius norm as discrepancy term and then turn to the Kullback-Leibler divergence. To cover both aspects, we derive the update rules of X for the Frobenius discrepancy term and of K in the case of the KLD. We will also take a closer look at the effect of κ in equation (18) with respect to the LQBP construction principle.
A.1 Frobenius Norm
We consider the general cost function described in Section 5 for the case of the Frobenius norm. To compute the update rules for X , it is enough to examine the function where all terms independent of X are omitted. Based on Remark 1 and following the discussion of Section 4.4, the construction of a surrogate for F can be done separately for F 1 and the remaining penalty terms. The construction of a surrogate for F 1 with the LQBP principle, as it has been done similarly in Subsection 4.4, is essential. If we instead used a surrogate for the discrepancy term 1/2 ||Y − KX|| 2 F from Subsection 4.1 or 4.2 and took the ℓ1-penalty term λ||X|| 1 as a surrogate itself, it is easy to see that this would not lead to multiplicative update rules. It is the ℓ1-penalty term which causes the difficulty. Computing the first-order optimality condition for the corresponding surrogate Q̃ F (X , A) := λ||X|| 1 + Q F (X , A) with respect to X would lead to where the second term on the right-hand side does not depend on λ. Hence, we get a minus sign in front of λ when solving the equation for X ξζ and we will not obtain multiplicative updates for X . A correct surrogate is obtained by applying the LQBP principle to F 1 and leads to and with the diagonal matrix The functionals Q Orth resp. Q LR are the surrogates obtained from Theorem 4 resp. Theorem 6. It will turn out that an appropriate choice of κ k ensures a multiplicative NMF algorithm. The computation of the first-order optimality condition for Q F leads to One can see immediately that the choice κ ξ := λ for all ξ ∈ {1, . . . , p} is appropriate to get rid of the problematic term λ. Hence, we obtain Reordering the terms leads to (K Y ) ξζ + (σ X ,1 + σ X ,2 ) W ξζ + ρ β ξ (Y u) ζ = (X ξζ / A ξζ) · [ (K KA) ξζ + ν A ξζ + λ + ρ β ξ (Y Y A β) ζ + σ X ,1 (AW W ) ξζ + σ X ,2 A ξζ ].
By exploiting the surrogate minimization principle as described in Lemma 1, we get finally the update rule for X presented in Section 5.
This equation holds for arbitrary ξ ∈ {1, . . . , n} and ζ ∈ {1, . . . , p}. We can therefore extend this relation to the whole matrix K and obtain exactly the update rule described in Section 5.
Appendix B Kullback-Leibler Divergence Discrepancy and LQBP
In this section, we will use the LQBP construction principle to derive a multiplicative algorithm for the cost function It follows for the partial derivatives of f such that P st (A) · K 2 st takes all quadratic terms of the matrix entries of K in the surrogate functional into account. The same can be done with the linear terms K st , which leads to the coefficient Therefore, the surrogate Q TV can be written as for some function C, which only depends on A. This shows the separability of the surrogate. | 2018-08-09T12:17:16.000Z | 2018-08-06T00:00:00.000 | {
"year": 2018,
"sha1": "f52d930a56b72ce889ff8e15e220086d0428683c",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s10013-018-0315-x.pdf",
"oa_status": "HYBRID",
"pdf_src": "Arxiv",
"pdf_hash": "51d896ee08a276ef30451af6f1ecfebe00101994",
"s2fieldsofstudy": [
"Mathematics",
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Mathematics"
]
} |
9440650 | pes2o/s2orc | v3-fos-license | The Effect of 17α-Ethynylestradiol on Steroidogenesis and Gonadal Cytokine Gene Expression Is Related to the Reproductive Stage in Marine Hermaphrodite Fish
Pollutants have been reported to disrupt the endocrine system of marine animals, which may be exposed through contaminated seawater or through the food chain. Although 17α-ethynylestradiol (EE2), a drug used in hormone therapies, is widely present in the aquatic environment, current knowledge on the sensitivity of marine fish to estrogenic pollutants is limited. We report the effect of the dietary intake of 5 µg EE2/g food on different processes of testicular physiology, ranging from steroidogenesis to pathogen recognition, at both pre-spermatogenesis (pre-SG) and spermatogenesis (SG) reproductive stages, in the gilthead seabream (Sparus aurata L.), a marine hermaphrodite teleost. A differential effect between pre-SG and SG specimens was detected in the sex steroid serum levels and in the expression profile of some steroidogenic-relevant molecules, vitellogenin, doublesex- and mab-3-related transcription factor 1 and some hormone receptors. Interestingly, EE2 modified the expression pattern of some immune molecules involved in testicular physiology. These differences probably reflect a developmental adjustment of the sensitivity to EE2 in the gilthead seabream gonad.
Introduction
A wide variety of chemicals discharged from industrial and municipal sources have been reported to disrupt the endocrine system of marine animals, which may be exposed via the food chain or directly through contaminated seawater [1]. Recent evidence suggests that endocrine disruption as a consequence of estrogen exposure may have very serious consequences for the wild fish populations [2]. An inevitable consequence of the increasing consumption of pharmaceuticals is an increased level of contamination of surface and ground waters by these biologically active drugs, accompanied by a greater potential for adverse effects on aquatic wildlife [3]. Estrogenic pollutants are adsorbed to the sediment and could be included in the benthic food chain, in the end affecting pelagic fish [4]. Moreover, it has been demonstrated that contaminated marine sediments alter the expression of genes that are biomarkers for fish endocrine disruption [5]. 17α-Ethynylestradiol (EE 2 ), a pharmaceutical compound used in oral contraceptives and hormone replacement therapy with a strong affinity for estrogen receptors (ER) [6], has a widespread presence in the aquatic environment [7]. It reaches concentrations of 0.5 to 62 ng/L in European sewage and surface waters [8,9]. Several fish species have been bath-exposed to environmental concentrations of EE 2 (up to 10 ng/L) to ascertain any effect on reproduction [10]. Importantly, long term exposure to environmental estrogens has been shown to have an impact on the severity of the subsequent effects on reproductive development and fertility: Concentrations of EE 2 as low as 4-6 ng/L are able to affect the fertility of the F1 generation, but not the fertility of the F0 generation [10]. Moreover, several food-web models have predicted the bioaccumulation of EE 2 throughout the food chain [11] and this ability should not be underestimated. The determination of the impact of even low concentrations of EE 2 on fish reproduction is therefore advisable.
Reproduction in fish is subject to hormonal regulation by gonadal steroids [12,13]. Dihydrotestosterone (DHT) is one of the most physiologically important androgens in many male vertebrates [14], with the exception of teleost fish, in which testosterone (T) and 11-ketotestosterone (11KT) are generally considered the major and most potent circulating male androgens [15]. T levels increase in both females and males during gonadal development, while 11KT is considered to be the dominant androgen in males [13,15]. While 17β-estradiol (E 2 ) has been considered to be the main sex steroid of female fish, recent studies have suggested that estrogens are "essential" for normal male reproduction [16][17][18]. However, little is known about the local immune regulation that takes place in the fish testis that provides protection for the developing male germ cells, while permitting qualitatively normal inflammatory responses and protection against infection [19].
The gilthead seabream (Sparus aurata L.) is a seasonally breeding, marine, protandrous hermaphrodite teleost. The specimens are male during the first 2 years of life and subsequently change into females. During the male phase, the bisexual gonad has functional testicular and non-functional ovarian areas [20,21]. Therefore, the gonad of this species could be considered a complex model in which both ovarian and testicular regulatory mechanisms coexist. This fish species has recently been used to describe the biological effects of contaminated marine sediments in light of its importance as a commercial food and its use as sentinel fish for environmental studies and monitoring [5]. The gilthead seabream is common in the Mediterranean Sea and, due to its euryhaline and eurythermal habits, the species is found in both marine and brackish water environments such as coastal lagoons and estuarine areas, particularly during the initial stages of its life cycle [22]. Moreover, the production of gilthead seabream in marine farms in the Mediterranean Sea is an industry with a promising future, whose current economic value is also significant, particularly in Spain, where turnover reached 88.8 million Euros in 2009 [23]. Levels of some xenobiotics are much higher in the Mediterranean Sea than in other seas and oceans [24], since, among other reasons, it has limited exchange of water with the Atlantic Ocean, and it is surrounded by some of the most heavily populated and industrialized countries in the world [25]. The reproductive cycle of gilthead seabream males is divided into four stages: spermatogenesis (SG), spawning, post-spawning and resting [20]. During the spermatogenesis stage all the different germ cell types develop from a testicular area, mainly formed by spermatogonia stem cells and cysts of primary spermatogonia. In fish, spermatogenesis occurs in a cystic structure in which all germ cells develop synchronously surrounded by a cohort of Sertoli cells, which nurse one germ cell type at a time [20,26]. During the spermatogenesis process, the germ cells reduce their chromosome content by meiosis and differentiate into spermatozoa. Once the spermatozoa are formed, they are released from the germinal epithelium into the lumen of the tubules together with the seminal fluid produced by Sertoli cells, where they remain until spawning. Afterwards, in the efferent duct system, spermatozoa are capacitated to fertilize the eggs. Once they make contact with seawater, the change in the osmolarity of the medium induces the motility of the spermatozoa and their final maturation [27].
In the gilthead seabream, the levels of E 2 , T and 11KT [28], as well as the gene expression and production of several cytokines [19], vary during the reproductive cycle. Interestingly, in male gilthead seabream specimens, E 2 serum levels increase after the spawning stage, when a massive infiltration of acidophilic granulocytes (AGs, the professional phagocytes in the gilthead seabream) into the gonad takes place [29][30][31], although this cell type does not express any of the three known nuclear ER, namely ERα, ERβ1 and ERβ2 [32]. Moreover, AG infiltration also occurs in the testis of specimens during the morphogenesis process [33]. These data, together with the expression pattern of cytokines and metalloproteinases (MMPs) by this cell type, suggested that AGs are essential for testicular tissue formation, remodeling and cell renewal [29]. We have recently reported that EE 2 dietary intake disrupts spermatogenesis and promotes leukocyte infiltration in the gonad by up-regulating the expression of several genes involved in regulating leukocyte trafficking in the testis of SG stage fish [34]. Moreover, bath-exposure to EE 2 might alter the capacity of gilthead seabream to appropriately respond to infection, although this synthetic estrogen does not behave as an immunosuppressor [35]. Furthermore, it is known that the ability to respond to sex steroids or endocrine disruptors depends on the maturation stage of the fish [36].
In this framework, the present study tries to fill in some gaps in this knowledge by providing data about the effects of EE 2 on the local immune regulation that takes place in the gonad at two different physiological stages of the reproductive cycle of a marine hermaphrodite fish, the gilthead seabream. Moreover, we analyze the gene expression profile of some molecules involved in the reproductive processes with the idea of using them as biomarkers of endocrine disruption. With this aim in mind, gilthead seabream specimens in pre-SG (the stage just prior to the spermatogenesis stage of the first reproductive cycle) and SG stages were fed for 28 days with a pellet diet containing 5 μg EE 2 /g food, in order to determine whether EE 2 promotes an estrogenic response by measuring the sperm quantity and quality, the serum levels of the main sex steroids and the gene expression of vitellogenin (vtg).
EE 2 Reduces the Stripped Volume of Seminal Fluid and Sperm Motility in Specimens in the Spermatogenesis Stage
In the SG stage, the dietary intake of EE 2 decreased the stripped volume of seminal fluid and sperm motility but did not modify the stripped sperm concentration (Table 1). No data are presented for pre-SG specimens as no stripped sperm was obtained.
Table 1. Effects of the dietary intake of 5 μg 17α-ethynylestradiol (EE 2 )/g food during 7 and 28 days on the volume of seminal fluid (mL), sperm concentration (cells/mL) and motility index at different exposure times. Data represent means ± SEM of six independent fish per group. * Asterisks denote statistically significant differences between treatment and control groups according to a Student t test (p ≤ 0.05).
EE 2 Modifies Serum Sex Steroid Levels and Modulates the Gene Expression Profile of Some Steroidogenic Enzymes
We have previously demonstrated in SG specimens of gilthead seabream that 5 μg EE 2 /g food promotes an increase in E 2 and T serum levels after 7 days of treatment and a decrease in T and 11KT levels after 28 days [29]. Moreover, in pre-SG specimens, the dietary intake of 5 μg EE 2 /g food increased the E 2 ( Figure 1a), T (Figure 1b) and 11KT ( Figure 1c) serum levels after 7 days of treatment, but did not significantly modify the same sex steroid levels at the end of the experiment.
We next investigated the gonadal gene expression of several steroidogenic enzymes (Figure 2). First, it was observed that the gene expression levels of steroidogenic acute regulatory protein (star) (Figure 2a)
Figure 1. 17β-estradiol (E 2 ) (a), testosterone (T) (b) and 11-ketotestosterone (11KT) (c) serum levels were determined in gilthead seabream specimens in the pre-spermatogenesis (pre-SG) stage after the dietary intake of 0 (control) and 5 µg EE 2 /g food during 7 and 28 days. The serum sex steroid levels (ng/mL) from five to six fish/group were analyzed by ELISA. The asterisks denote statistically significant differences after Student t test between the untreated control group and the EE 2 treated group at each time point. * p < 0.05 and ** p < 0.01. ns, not significant.
Figure 2. EE 2 modulates the expression of genes coding for steroidogenic-relevant molecules in the gonad. Specimens at both pre-SG and SG stage were treated with 0 (control) and 5 µg EE 2 /g food during 7 and 28 days. Afterwards, the mRNA levels of star (a); cyp11a1 (b); hsd3b (c); cyp19a1a (d); cyp11b1 (e); srd5a (f) and hsd11b (g) were determined in the gonad by real-time reverse transcription polymerase chain reaction (RT-PCR). Total RNA was obtained after pooling the same amount of mRNA from five to six fish/group. Data represent means ± S.E.M. of triplicates of the same pooled sample. The asterisks denote statistically significant differences after Student t test between: (i) the untreated control group and the EE 2 treated group at each time point and spermatogenic condition and (ii) the untreated control groups of the two spermatogenic conditions within the same sampling date. * p < 0.05, ** p < 0.01 and *** p < 0.001. ns, not significant.
EE 2 Increases the Expression Profile of the Hepatic vtg Gene
The expression of vtg, a gene induced by activation of nuclear ER [37], was slightly higher in pre-SG than in SG specimens. Moreover, the vtg expression levels were significantly up-regulated in the liver at both reproductive stages and EE 2 exposure times assayed (Figure 3a).
Figure 3. EE 2 promotes an estrogenic response and modulates the expression of genes coding for hormone receptors. Specimens at both pre-SG and SG stages were treated with 0 and 5 µg EE 2 /g food during 7 and 28 days. Afterwards, the mRNA levels of vtg were determined in the liver (a) and the mRNA levels of dmrt1 (b); era (c); fshr (d) and lhr (e) in the gonad by real-time RT-PCR. Total RNA was obtained after pooling the same amount of mRNA from five to six fish/group. Data represent means ± SEM of triplicates of the same pooled sample. The asterisks denote statistically significant differences after Student t test between: (i) the untreated control group and the EE 2 treated group at each time point and spermatogenic condition and (ii) the untreated control groups of the two spermatogenic conditions within the same sampling date. * p < 0.05; ** p < 0.01 and *** p < 0.001. ns, not significant.
EE 2 Modulates the Expression of Testicular Specific Protein, Dmrt1, and Some Hormone Receptor Genes in the Gonad
As expected, expression of the gene that codes for the testicular-specific protein, doublesex- and mab-3-related transcription factor 1 (dmrt1), was higher in the gonad of SG than in pre-SG specimens (Figure 3b). EE 2 decreased the dmrt1 expression levels in SG specimens after 7 days of exposure, but the effect disappeared after 28 days. No significant changes were observed in the dmrt1 expression levels in pre-SG specimens after 7 or 28 days of exposure.
Interestingly, the mRNA expression levels of estrogen receptor α (era) (Figure 3c), follicle stimulating hormone (FSH) receptor (fshr) (Figure 3d) and luteinizing hormone (LH) receptor (lhr) (Figure 3e) were higher in the gonad of pre-SG than SG specimens except for the era levels at day 28 of treatment. EE 2 increased the era expression levels in the gonad of pre-SG specimens after 7 and 28 days or only after 28 days of exposure in SG (Figure 3c). In pre-SG specimens, EE 2 decreased the fshr (Figure 3d) and lhr (Figure 3e) expression levels after 7 days of exposure, but increased lhr expression after 28 days (Figure 3e). Nevertheless, in SG specimens, fshr expression levels increased only after 28 days of EE 2 dietary intake (Figure 3d), while the lhr expression levels were slightly lower after 7 days of EE 2 exposure (Figure 3e).
EE 2 Modifies the Gene Expression of Molecules Relevant in the Immune Response in the Gonad
To explore the local immune regulation that occurred in the gonad, we analyzed the expression of genes coding for several pro- and anti-inflammatory cytokines, MMPs, molecules related with pathogen recognition, antigen presentation and leukocyte recruitment, and B lymphocyte markers (Figures 4 and 5). Interestingly, almost all of the immune-related genes analyzed showed higher expression levels in pre-SG specimens than in SG, except those for tumor necrosis factor α (tnfa) (Figure 4b), matrix metalloproteinase (mmp) 9 (Figure 4d) and major histocompatibility complex I α protein (mhc1a) (Figure 4g), which were more highly expressed in SG specimens on at least one of the times analyzed.
Figure 4. EE 2 modulates the expression of genes coding for immune-relevant molecules in the gonad. Specimens at both pre-SG and SG stage were treated with 0 and 5 µg EE 2 /g food during 7 and 28 days. Afterwards, the mRNA levels of il1b (a); tnfa (b); tgfb1 (c); mmp9 (d); mmp13 (e); tlr9 (f); and mhc1a (g) were determined in the gonad by real-time RT-PCR. Total RNA was obtained after pooling the same amount of mRNA from five to six fish/group. Data represent means ± SEM of triplicates of the same pooled sample. The asterisks denote statistically significant differences after Student t test between: (i) the untreated control group and the EE 2 treated group at each time point and spermatogenic condition and (ii) the untreated control groups of the two spermatogenic conditions within the same sampling date. * p < 0.05; ** p < 0.01 and *** p < 0.001. ns, not significant.
Figure 5. EE 2 modulates the expression of genes involved in regulating leukocyte trafficking and B lymphocyte markers. Specimens at both pre-SG and SG stage were treated with 0 and 5 µg EE 2 /g food during 7 and 28 days and 28 days, respectively. Afterwards, the mRNA levels of ccl4 (a); il8 (b); sele (c); ighm (d) and ight (e) were determined in the gonad by real-time RT-PCR. Total RNA was obtained after pooling the same amount of mRNA from 5 to 6 fish/group. Data represent means ± SEM of triplicates of the same pooled sample. The asterisks denote statistically significant differences after Student t test between: (i) the untreated control group and the EE 2 treated group at each time point and spermatogenic condition and (ii) the untreated control groups of the two spermatogenic conditions at day 28. * p < 0.05; ** p < 0.01 and *** p < 0.001. ns, not significant.
EE 2 was seen to differently modulate the expression levels of these immune-related genes in pre-SG and SG specimens. Thus, EE 2 inhibited the expression level of interleukin 1β (il1b) after 7 days and increased it after 28 days in pre-SG specimens, while no significant differences were observed in SG specimens (Figure 4a). Moreover, the expression levels of tnfa (Figure 4b) and mmp9 (Figure 4d) increased after EE 2 exposure in both pre-SG and SG specimens, the increase being more pronounced in SG specimens. In contrast, the expression levels of transforming growth factor β1 (tgfb1) (Figure 4c), mmp13 (Figure 4e) and toll-like receptor (tlr) 9 (Figure 4f) were increased by EE 2 in pre-SG specimens at both times analyzed, and only after 28 days of exposure in SG specimens.
As regards the antigen presentation-related gene that codes for the MHC Iα protein, named spau-UAA following the accepted nomenclature [38], EE 2 increased the mhc1a mRNA levels in both pre-SG and SG specimens at both times analyzed, except after 7 days of exposure in pre-SG specimens, where a decrease was observed (Figure 4g).
We previously demonstrated that the dietary intake of EE 2 promoted an up-regulation in the gonad of the genes coding for CC chemokine ligand (ccl4), CXC chemokine interleukin 8 (il8) and leukocyte adhesion molecule E-selectine (sele), and the B lymphocyte markers, heavy chain of immunoglobulin M (ighm) and heavy chain of immunoglobulin T (ight), in SG specimens after 7, 14 and 21 days of exposure, which occurred simultaneously with an infiltration of AGs and lymphocytes [34]. Here, we explore the differential regulation by EE 2 of the expression of ccl4 (Figure 5a), il8 (Figure 5b), sele (Figure 5c), ighm ( Figure 5d) and ight (Figure 5e), in pre-SG and SG specimens, after 7 and 28 days of exposure. As mentioned above, the expression levels of all these genes were higher in pre-SG than in SG specimens ( Figure 5 and [34]). Similarly to what was previously observed after 7 days of EE 2 exposure in SG specimens, EE 2 increased the expression levels of all these genes in the gonad of SG specimens after 28 days. Nevertheless, in pre-SG specimens, EE 2 modulated the expression levels of these genes in a different way. Thus, although ccl4, sele and ight expression levels increased after certain times of EE 2 exposure, the expression levels of ccl4 decreased and the transcription of il8 and ighm were unchanged after 7 days of EE 2 exposure. Moreover, sele and ighm expression levels fell after 28 days of EE 2 exposure.
Discussion
EE 2 is an environmental estrogen considered as an endocrine-disrupting compound (EDC) with strong estrogenic effects and a widespread presence in the aquatic environments [8,10]. Fish represent the animal group most affected by EDC exposure since they are continuously and directly exposed to these contaminants. Most authors agree that EE 2 promotes an immature stage of the male gonads by blocking their development or by inducing the ablation of post-meiotic germ cells when immature fish or spermatogenically active fish are treated, respectively [39,40]. Moreover, in gonochoristic fish species, a widely observed effect of estrogenic compounds is the modification of sperm quality, sex steroid levels and hepatic Vtg production [37,41]. However, little is known about these effects in hermaphroditic fish species.
In the gilthead seabream, a hermaphroditic protandrous seasonal breeder, the spermatogenesis process proceeds in the testicular area, where it is orchestrated by high androgen levels; however, an increase in endogenous E 2 levels coincides with spawning. EE 2 promotes an estrogenic response, as seen from the increase in vtg gene expression levels in pre-SG and SG specimens. Interestingly, E 2 serum levels increased after 7 days of exposure in both pre-SG and SG [34] specimens. However, T and 11KT levels differed between pre-SG and SG specimens: they both increased in pre-SG specimens after 7 days of exposure and decreased in SG after 28 days, probably due to the fact that T and 11KT serum levels were already very high in SG specimens compared with pre-SG specimens [34]. Moreover, exogenous E 2 treatment of spermatogenically active males accelerated the final events of spermatogenesis and inhibited the proliferation of spermatogonia in early stages, promoting a post-spawning stage [42], which coincided with the decrease in sperm quality (volume of seminal fluid and motility index) observed. No development of the ovary has been observed in gilthead seabream after 1 month of E 2 [42] or EE 2 [34] treatments, although primary oocytes appeared in the protandrous male black porgy after 5 months of E 2 treatment [43].
Looking at the transcript regulation of the most relevant steroidogenic molecules involved in their production, we found that EE 2 down-regulated the transcripts of star and cyp11a1 (two molecules that are synthesized rapidly in response to acute tropic hormone stimulation [44]), and hsd3b, cyp11b1 and srd5a (steroidogenic molecules involved in androgen production) in the gonad of both pre-SG and SG specimens at all time points analyzed, while the hsd11b (steroidogenic enzyme involved in androgen production) and cyp19a1a (steroidogenic enzyme involved in estrogen production) expression levels were up-regulated in the gonad of SG males. Interestingly, cyp19a1a was down-regulated in the gonad of pre-SG males. In the gilthead seabream, cyp19a1a expression gradually increased during the spermatogenesis and spawning stages, reaching a maximum at post-spawning. The serum levels of E 2 increased progressively with each reproductive cycle [28]; moreover, during the second reproductive cycle the expression of cyp19a1a reached a higher level than during the first reproductive cycle [19]. These data, together with our present data, suggest that there is a reciprocal action between the estrogen serum levels and the expression of the cyp19a1a gene in SG specimens, while this mechanism is not effective in pre-SG specimens. This hypothesis would explain why, in the gilthead seabream, E 2 seems to be essential for the renewal of the testis during the first two reproductive cycles [20,42], although high levels of this hormone are also needed in the sex change process that occurs at the beginning of the third reproductive cycle [45].
To assess whether any testicular reproductive parameters could be used as markers of endocrine disruption in gilthead seabream, the gene expression levels of the testicular specific protein, dmrt1 and of some hormone receptor genes were analyzed in both stages of the reproductive cycle. In mammals, the depletion of Dmrt1 gene expression led to the loss of mitotic germ cells, which had precociously entered meiosis [46]. In gilthead seabream males, dmrt1 gene expression was down-regulated at the end of the second reproductive cycle and the beginning of sex change [21]. Moreover, upon short-term estrogenic treatment, the testicular area of the gonad was depleted of pre-meiotic germ cells and showed an increase in spermatozoa [34]. Our data related to dmrt1 expression levels in pre-SG and SG specimens explain these observations, as the dmrt1 expression levels were not affected by the dietary intake of EE 2 in pre-SG specimens, while in SG specimens, EE 2 was seen to lower dmrt1 expression levels after 7 days, although this effect had disappeared by the end of the treatment.
Male germ cell development is regulated by the brain-pituitary axis, which has evolved in vertebrates as a hormonal master control system over spermatogenesis and reproduction in general. Within this system, the pituitary gonadotropins, LH and FSH, play pivotal roles by regulating testis functions via their respective cognate receptors, LH receptor (LHR) and FSH receptor (FSHR). However, gonadal steroids and other agents that bind or prevent binding to steroid hormone receptors also regulate testicular functions [47]. In fact, in mammals, estrogens regulate testicular steroidogenesis acting through ERα [47]. Our data show that, in the gilthead seabream, EE 2 treatment increased the expression of the era gene in the gonad of pre-SG and SG fish, although at different times. We have previously recorded increases in era gene expression upon E 2 treatment in endothelial cells and macrophages in vitro [32,48] and upon EE 2 bath-exposure in head-kidney leukocytes in vivo [35]. Although era gene expression was seen to have increased in all the analyses carried out and could well be used as a biomarker of endocrine disruption [49], the magnitude of the response differed from that observed for vtg gene expression, and the time at which the effect became evident varied between stages and between tissues. Furthermore, these differences suggest changes in the sensitivity to estrogens during sexual maturation and point to the need for further studies to clearly determine the life stages that are susceptible to estrogenic pollutants in fish. Moreover, EE 2 dietary intake during 7 days down-regulated the fshr and lhr expression levels, while EE 2 dietary intake during 28 days up-regulated the fshr expression level in SG specimens and the lhr expression levels in pre-SG specimens. Interestingly, all these data agree with the disruption of spermatogenesis, the recrudescence of the testicular area of the gonad and the non-induction of sex change previously observed in the SG gilthead seabream gonad upon EE 2 dietary intake [34].
A relevant role for immune molecules in the regulation of spermatogenesis and/or steroidogenesis has been described in various vertebrates, including fish [19,50]. In the gilthead seabream, the testis undergoes abrupt morphological changes, especially after spawning, including a massive infiltration of AGs [20,21,29,51]. AGs are produced in the head-kidney and, when they infiltrate the testis, they show heavily impaired functions [52]. Interestingly, the expression of genes coding for pro-inflammatory and anti-inflammatory mediators, innate immune receptors, lymphocyte receptors, anti-bacterial and anti-viral proteins and molecules related to leukocyte infiltration shows a testicular pattern that depends on the reproductive stage of the gilthead seabream specimens [19] and which guarantees and modulates reproductive functions. In addition, endogenous increases of E 2 in serum are correlated with AG infiltration into the gonad after spawning [29], while the dietary intake of EE 2 by SG specimens of gilthead seabream induces the recruitment of AGs and B lymphocytes and up-regulates the expression of genes coding for molecules involved in leukocyte trafficking [34]. Moreover, specimens bath-exposed to EE 2 show alterations in their capacity to appropriately respond to infection [35]. Although EE 2 modulates the expression pattern of immune molecules in gilthead seabream macrophages, which are known to be a key cell type in the immune-modulatory role played by E 2 in the gilthead seabream gonad [32,35], little is known about the effects of EE 2 and other environmental estrogens on the gene expression of immune-relevant molecules in the gonad of fish in general.
In the gilthead seabream, EE 2 promotes an increase in the gonadal transcripts of the pro-inflammatory cytokines, il1b and tnfa, and the anti-inflammatory cytokine, tgfb1, although the response differs between pre-SG and SG specimens. These increases could be correlated with the decrease in androgen production and suggest, as occurs in mammals [53], that these cytokines are involved in testicular steroidogenesis; however, further studies are needed to confirm this observation. A similar conclusion was reached for the goldfish testis, in which a heterologous recombinant IL1, murine IL1β, inhibited basal and human chorionic gonadotrophin-stimulated T production [54]. Mmp9 and mmp13 gene expression in the testis of gilthead seabream suggests a pivotal role for them in the regulation of the testicular physiology and, in particular, in the organization of the cysts during spermatogenesis and post-spawning, as well as in AG infiltration [55]. EE 2 dietary intake promotes the transcription of mmp9 and mmp13 genes in the gonad of both pre-SG and SG specimens, which concords with the induction of the post-spawning stage and AG infiltration upon estrogen (E 2 or EE 2 ) exposure [34,42,55]. TLRs play important roles in the innate immunity of the male mammalian reproductive tract [56]. Although several tlr gene sequences have been reported in gilthead seabream, tlr9 is the only one which is expressed in the gonad [19]. Our data show that the dietary intake of EE 2 increases the expression of tlr9 and mhc1a genes in the gonad of both pre-SG and SG fish, suggesting that this estrogenic pollutant stimulates the ability of the gonad to recognize and respond to pathogens. This is important when we consider that there are some pathogens that use the gonads to be transmitted to the next generations or to other animals [57]. However, further studies are needed to clearly determine the ability of the gonad to respond to gonad invasive pathogens under estrogenic pollutant conditions.
Finally, concerning the expression of genes that code for molecules involved in leukocyte recruitment (AGs and lymphocytes) into the gonad [29,58], and for B lymphocyte markers (IgM and IgT), higher expression levels were observed in the control group of pre-SG specimens than in the control group of SG specimens [34]. Moreover, the dietary intake of EE 2 promotes larger changes in most of those genes in pre-SG than in SG specimens. These differences in gene expression between pre-SG and SG specimens probably resulted in differences in the leukocyte influx into the gonad in response to EE 2 .
The experiment was performed using 30 pre-SG specimens (June, with a body weight of 110 ± 20 g, 14 months old) and 30 SG specimens (November, with a body weight of 405 ± 25 g, 19 months old) of gilthead seabream males. The fish were kept in 2 m 3 tanks with a flow-through circuit, suitable aeration and filtration system and natural photoperiod. The water temperature ranged from 14.6 to 17.8 °C. The environmental parameters, mortality and food intake were recorded daily. The EE 2 was incorporated into the commercial food (44% protein, 22% lipids, Skretting, Spain) at doses of 0 (control) and 5 μg/g food, using the ethanol evaporation method (0.3 L ethanol/kg of food) as described elsewhere [59]. The specimens were fed ad libitum three times a day for 28 days and fasted for 24 h before sampling. Sampling was carried out after 7 and 28 days of EE 2 exposure (n = 6 fish/group). For this, specimens were anesthetized with 40 µL/L of clove oil and the urogenital pore was dried before collecting sperm as described below. The specimens were then decapitated, weighed, and the livers and gonads were removed and processed for gene analysis, as described below. Serum samples from trunk blood were obtained by centrifugation and immediately frozen and stored at −80 °C until use. The experiments complied with the Guidelines of the European Union Council (86/609/EU), the Bioethical Committee of the University of Murcia (Spain) and the Instituto Español de Oceanografía (Spain) for the use of laboratory animals.
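As a small worked example of the dosing arithmetic above, the sketch below computes the amount of EE 2 and ethanol needed for one batch of treated feed. Only the 5 µg/g dose and the 0.3 L ethanol/kg figure come from the protocol; the function name and the batch size are hypothetical.

```python
def ee2_feed_batch(feed_kg, dose_ug_per_g=5.0, ethanol_l_per_kg=0.3):
    """Amount of EE2 (mg) and ethanol (L) needed for one batch of treated feed."""
    ee2_mg = dose_ug_per_g * feed_kg * 1000 / 1000   # µg/g times g of feed, reported in mg
    ethanol_l = ethanol_l_per_kg * feed_kg
    return ee2_mg, round(ethanol_l, 3)

# e.g. 2 kg of feed at 5 µg/g needs 10 mg EE2 dissolved in 0.6 L ethanol,
# which is evaporated off after coating the pellets.
print(ee2_feed_batch(2))
```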
Measurement of the Volume of Seminal Fluid and Sperm Concentration and Motility
Stripped sperm was obtained by gentle abdominal massage, collecting and measuring the sperm at the genital pore with a syringe as the semen flowed out (urine-contaminated samples were discarded). The total semen from six fish of each group was used immediately to determine cell concentration and motility. To determine the sperm concentration, semen was diluted in 1% formol (Panreac) and 5% NaHCO 3 (Sigma) in water at a ratio of 1:400 and the spermatozoa were counted using a Neubauer chamber. Motility was analyzed by activating 1 μL of sperm (diluted in Ringer 200 mOsm solution at the optimal dilution of 1:5) with 20 μL of seawater [60]. The duration of sperm motility was determined by measuring the time elapsing between the initiation of sperm motility and the cessation of cell displacement using a light microscope at 400× magnification. The motility index was expressed on a relative scale of 0 to 5 [61].
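The conversion from chamber counts to a concentration is simple arithmetic; the sketch below uses the 1:400 dilution stated above but assumes the improved Neubauer geometry (0.1 µL per large counting square), which is not specified in the text, and the counts are illustrative only.

```python
def sperm_concentration(counts_per_square, dilution=400, square_volume_ml=1e-4):
    """Spermatozoa per mL of undiluted semen from counts in large Neubauer squares."""
    mean_count = sum(counts_per_square) / len(counts_per_square)
    return mean_count / square_volume_ml * dilution

# Five large squares counted on a 1:400 dilution of the stripped sample.
print(f"{sperm_concentration([52, 48, 55, 50, 49]):.2e} cells/mL")
```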
Analytical Techniques
Serum levels of E 2 , T, and 11KT were quantified by ELISA following the method previously described [62]. Steroids were extracted from 20 μL of serum in 0.6 mL of methanol (Panreac). The methanol was then evaporated at 37 °C and the steroids were resuspended in 400 µL of reaction buffer [0.1 M phosphate buffer with 1 mM EDTA (Sigma), 0.4 M NaCl (Sigma), 1.5 mM NaN 3 (Sigma) and 0.1% albumin from bovine serum (Sigma)]. Then, 50 µL were used in each well so that 2.5 µL of serum were used in each well for all the assays. E 2 and T standards were purchased from Sigma-Aldrich. The 11KT standard, mouse anti-rabbit IgG monoclonal antibody (mAb), and specific anti-steroid antibodies and enzymatic tracers (steroid acetylcholinesterase conjugates) were obtained from Cayman Chemical. Microtiter plates (MaxiSorp) were purchased from Nunc. A standard curve from 6.13 × 10 −4 to 2.5 ng/mL (0.03-125 pg/well) was established in all the assays. Standards and extracted serum samples were run in duplicate. The lower limit of detection for all the assays was 12.21 pg/mL. The intra-assay coefficients of variation (calculated from duplicate samples) were 3.98% ± 0.57% for E 2 , 8.26% ± 1.33% for T, and 8.80% ± 1.68% for 11KT for the pre-SG specimens. Details on cross-reactivity for specific antibodies were provided by the supplier (0.01% of anti-11KT reacts with T; 2.2% of anti-T reacts with 11KT; and 0.1% of anti-E 2 reacts with T).
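The standard-curve range and well volume reported above are consistent with a two-fold serial dilution of the top standard; the sketch below generates such a curve (assuming 13 dilution points and 50 µL per well, matching the 6.13 × 10 −4 to 2.5 ng/mL and 0.03-125 pg/well figures) and computes an intra-assay coefficient of variation from duplicate wells. The duplicate values are made up for illustration.

```python
import statistics

def standard_curve(top_ng_per_ml=2.5, points=13, well_ul=50):
    """(ng/mL, pg/well) pairs for a two-fold serial dilution of the top standard."""
    return [(top_ng_per_ml / 2**i, top_ng_per_ml / 2**i * well_ul) for i in range(points)]

def intra_assay_cv(duplicate_pairs):
    """Mean coefficient of variation (%) across duplicate wells."""
    cvs = [statistics.stdev(p) / statistics.mean(p) * 100 for p in duplicate_pairs]
    return statistics.mean(cvs)

print(standard_curve()[0], standard_curve()[-1])     # (2.5, 125.0) down to (~6.1e-4, ~0.03)
print(intra_assay_cv([(2.10, 2.22), (0.95, 1.01), (4.3, 4.1)]))
```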
Calculation and Statistics
All data related to the stripped volume of seminal fluid, sperm concentration and motility index, sex steroid serum levels and gene expressions were analyzed by a Student t-test to determine differences between untreated control and the treated group for each time point. The critical value for statistical significance was taken as p ≤ 0.05. The asterisks mean: * p < 0.05; ** p < 0.01 and *** p < 0.001. All statistical analyses were carried out using the GraphPad Prism 5 program.
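The same per-time-point comparison can be reproduced outside GraphPad Prism; the sketch below uses SciPy's two-sample t-test with made-up serum values, and the asterisk convention follows the thresholds listed above.

```python
from scipy import stats

def compare_groups(control, treated):
    """Two-sample Student t-test with the asterisk convention used in the figures."""
    t, p = stats.ttest_ind(control, treated)
    stars = "".join("*" for a in (0.05, 0.01, 0.001) if p < a) or "ns"
    return round(float(t), 3), p, stars

# Hypothetical 11KT serum levels (ng/mL), n = 6 fish per group at one sampling point.
print(compare_groups([4.1, 3.8, 4.5, 4.0, 3.9, 4.3], [1.2, 0.9, 1.5, 1.1, 1.3, 1.0]))
```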
Conclusions
Our data demonstrate that the dietary intake of EE 2 promotes an estrogenic response and modifies the expression pattern of steroidogenic molecules, cytokines and other immune-related molecules involved in different processes of testicular physiology, ranging from steroidogenesis to pathogen recognition. Interestingly, a developmental adjustment of the sensitivity to EE 2 in the gilthead seabream gonad was observed, pointing to the need for further studies to clearly determine the life stages most susceptible to estrogenic pollutants in fish.
Declaration
Genetic nomenclature used in this manuscript follows the guidelines of the Zebrafish Nomenclature Committee (ZNC) for fish genes and proteins and the HUGO Gene Nomenclature Committee for mammalian genes and proteins.
"year": 2013,
"sha1": "3ad5fbf2d8646e819b92f49a2d6344bff05a4192",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1660-3397/11/12/4973/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "3ad5fbf2d8646e819b92f49a2d6344bff05a4192",
"s2fieldsofstudy": [
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
Considerations for Eye Tracking Experiments in Information Retrieval
In this survey I discuss ophthalmic neurophysiology and the experimental considerations that must be made to reduce possible noise in an eye-tracking data stream. I also review the history, experiments, technological benefits and limitations of eye-tracking within the information retrieval field. The concepts of aware and adaptive user interfaces are also explored, in a humble attempt to synthesize work from the fields of industrial engineering and psychophysiology with information retrieval.
INTRODUCTION
On the nature of learning, I think about my son. A 1-year-old at the time of this writing, he plays by waving his arms, looking around, yelling, and putting his mouth on every object in sight. In these moments, I observe him without explicit instruction unless of course danger lurks. His sensorimotor connections to the world around him provide information on the rewards and penalties he needs to be well-adjusted -behave optimally, safely, curiously. Learning through interaction is the foundation of our existence. Equally, we can take a computational approach to this information interaction in the context of human and machine where now the roles are reversed. The machine is the human, and the human, the environment.
What would we need to understand in order to interact with an information system (machine) with our eyes or have the machine interact with us based on what it perceives in our eyes? Well of course, the machine would require a direct interface to an eye-tracking device which would provide a data stream. Consider gaze point as an example signal.
What are the operational characteristics of this signal? Investigation of the speed and sensitivity of the signals is a fundamental objective for this interaction to make sense. Additionally, a human knows precisely when they wish to click, touch, or use their voice, to execute interactions. What can we say about the machine? How would the machine learn when to provide a context menu, retrieve a specific document, adjust the presentation, or filter the information? If such a system existed, how would we democratize it? As I will discuss later in great detail, Pupil Center Corneal Reflection (PCCR) eye tracking devices are extraordinarily expensive and thus research with them becomes self-limiting for building real-time adaptive systems as I have outlined above.
Generally, the traditional methodology for information retrieval experiments has been to study gaze behavior and then report the findings in order to optimize interface layout or improve relevance feedback. If I were to ask where the technology is and how one can interact with it, what we would find is that these devices are confined to aseptic laboratories. Sophisticated eye trackers utilize infrared illumination and pupil center corneal reflection methods to capture raw gaze coordinates and classify the ocular behavior as an all-in-one package. Local cues within highly visual displays of information are intended to be used to assess and navigate spatial relationships [45,46]. Functions that enable rapid, incremental, and reversible actions with continuous browsing and presentation of results are pillars of visual information seeking design [1]. Moreover, "how does visualization amplify cognition?" By grouping information together and using positioning intelligently between groups, reductions in search and working memory can be achieved, which is the essence of "using vision to think" [9, see pages 15-17]. Thus, by studying the ocular behavior of information retrieval processes, engineers can optimize their systems. This short review provides a historical background on ophthalmic neurophysiology, eye tracking technology, information retrieval experiments, and experimental considerations for those beginning work in this area.
OPHTHALMIC NEUROPHYSIOLOGY
Millions of years of evolution through physical, chemical, genetic, molecular, biological, and environmental pathways of increasing complexity naturally selected humans for something beautiful and fundamental to our senses and consciousness -visual perception. The knowledge gained since the first comprehensive anatomic descriptions of neural cell types that constitute the retina in the 19 th century, followed by electron microscopy, microelectrode recording techniques, immunostaining, and pharmacology in the 20 th century [44], is immature in comparison to the forces of nature.
Now, here we are in the first quarter of the 21 st century and human-machine interaction research scientists are asking the question "how can I leverage an understanding of vision and visual perception in my research and development process?" As research scientists in the information field, we should bear this responsibility with conviction and depth to try and understand every possible angle of the phenomena we seek to observe and record. This section on Ophthalmic Neurophysiology is an elementary introduction on how vision works and should be our prism through which we plan and execute all eye-tracking studies. Figure 1 shows the basic anatomy of the eye. First, light passes through the cornea which, due to its shape, can bend light to allow for focus. Some of this light enters through the pupil which has its diameter controlled by the iris. Bright light causes the iris to constrict the pupil which lets in less light. Low light causes the iris to widen the pupil diameter to let in more light. Then, light passes through the lens which coordinates with the cornea via muscles of the ciliary body to properly focus the light on the light-sensitive layer of tissue called the retina. Photoreceptors then translate the light input into an electrical signal that travels via the optic nerve to the brain. Figure 2 shows the slightly more complex anatomy of the eye as a cross-section. We will focus on the back of the eye (lower portion of the figure). The fovea is the center of the macula and provides sharp vision that is characteristic of attention on a particular stimulus in the world while leaving the peripheral vision somewhat blurred. You may notice the angle of the lens and fovea are slightly off-center. More on this later. The optic nerve is a collection of millions of nerve fibers that relay signals of visual messages that have been projected onto the retina from our environment to the brain. The electrical signals in transit to the brain first have to be spatially distributed across the five different neural cell types shown in figure 3. The photoreceptors (rods and cones) are the first-order neurons in the visual pathway. These receptors synapse (connect and relay) with bipolar and horizontal cells which function primarily to establish brightness and color contrasts of the visual stimulus. The bipolar cells then synapse with retinal ganglion and amacrine cells which intensify the contrast that supports vision for structure, shape, and is the precursor for movement detection. Finally, the visual information that has been translated and properly organized into an electrical data structure is delivered to the brain via long projections of the retinal ganglion cells called axons. Described thus far is, broadly, the visual pathway from external stimulus to retinal processing. Sensory information must reach the cerebral cortex (outer layer of the brain) to be perceived. We must now consider the visual pathway from retina to cortex as shown in the cross-section of figure 4. The optic nerve fibers intersect contralaterally at the optic chiasm. The axons in this optic tract end with various nuclei (cell bodies). The thalamus is much like a hub containing nerve fiber projections in all directions that exchange information with the cerebral cortex (among many other regulatory functions). Within the midbrain, involved in motor movements, there is the superior colliculus that plays an essential role in coordinating eye and head movements to visual stimuli (among other sensory inputs).
For example, the extraocular muscles are shown in figure 5. Within the thalamus, the lateral geniculate nucleus coordinates visual perception as shown in figure 6. Lastly, the pretectum controls the pupillary light reflex. Based on the introductory ophthalmic neurophysiology reviewed in this section, human-machine interaction experimenters should consider (at a minimum) certain operating parameters:
• Pupillary response to lighting conditions is sensitive. Control for this by maintaining stable lighting throughout an experiment, as one may not be able to defend that changes in pupil diameter are in-fact due to changes in focus/attention on the machine rather than changes in ambient lighting.
• Screen participants for no previous history of ophthalmic disease. If the visual system is impaired at any level, the neurophysiological responses are no longer a reliable dependent variable, as an excitatory, absent, or delayed ophthalmic response may not accurately represent a neurophysiological transition state with respect to machine interaction.
• Many ophthalmic diseases are age-related. For the examination of human-machine interaction in the context of spatial/visual information, recruit study participants who are under the age of 40 to minimize the likelihood of confounding variables.
After reviewing the itemized list above, some may reason that these preliminary screening criteria are too narrow, given that neuroadaptive systems will soon emerge on the technological landscape and that aging populations are increasingly engaging with technology; therefore their neurophysiological responses should be studied in order to make technology inclusive, not exclusive. I happen to agree with this logic. However, as we will review later, many limitations in current measuring devices exist, and some are related to ophthalmic diseases or deficiencies.
EYE-TRACKING TECHNOLOGY
In this section I will explain the history, theory, practice, and standardization of eye-tracking technology. The pioneers of eye-tracking date all the way back to Aristotle as can be seen in the clock-wise chronological arrangement in figure 7.
Although his work did not use the terms fixations and saccades (rapid movement between fixations), it provided a framework for understanding the terms we use today. In the same year, the German physiologist Ewald Hering and the French ophthalmologist Louis Émile Javal described the discontinuous eye movements during reading. Dr. Javal was an ophthalmic laboratory director at the University of Paris (Sorbonne), worked on optical devices and the neurophysiology of reading, and introduced the term saccades, which derives from the Old French (8 th to 14 th century) saquer, to pull, and in modern French translates to a violent pull.
"the eye makes several saccades during the passage over each line, about one for every 15-18 letters of text" [30]. (French to English translation).
About twenty years later, the psychologist Edmund Burke Huey appeared to be the first American to cite Javal's work, describing that the consistent neurophysiological accommodation (referring to the lens of the eye) from having to read laterally across a page increases extraocular muscle fatigue and reduces reading speed [25]. Moreover, Dr. Huey described his motivations for building an experimental eye-tracking device: "the eye moved with along the line by little jerks and not with a continuous steady movement. I tried to record these jerks by direct observation, but finally decided that my simple reaction to sight stimuli was not quick enough to keep up... It seemed needful to have an accurate record of these movements; and it seemed impossible to get such record without a direct attachment of recording apparatus to the eye-ball. As I could find no account of this having been done, I arranged an apparatus for the purpose and have so far succeeded in taking 18 tracings of the eye's movements in reading." A drawing of this apparatus is shown in figure 8. Dr. Huey went on to write the famous book on Psychology and Pedagogy of Reading in 1908 [26]. For an excellent historical overview of eye-tracking developments in the study of fixations, saccades, and reading, in the 19 th and 20 th centuries please see sections 6 and 6.1 in [56]. Additionally, and of particular interest, is the work in the 1960's of the British engineering psychologist, B. Shackel, who worked on the inter-relation of man and machine and the optimum design of such equipment for human use. Of specific note is his early work on measures and viewpoint recording of electro-oculography (electrical potential during eye rotation) for the British Royal Navy on human-guided weapon systems [51,52] (see figs. 9 to 11). The Russian psychologist Alfred L. Yarbus studied the relation between fixations and interest during image studies that used a novel device developed in his laboratory (figure 12). Please see Chapter IV in [58] for a thorough review of his experiments. still the fundamental technology for state-of-the-art eye-trackers today, in the year 2020. Although the design, as it related to form factor, dark-room requirements, and restriction of head movement, was sub-optimal for "use in the wild". Unfortunately, as history has shown us, when mission-critical United States military-funded research projects fail on deliverables, the research community follows in its abandonment of theory and practice, and thus many years passed before innovations in eye-tracking emerged once again. However, metrics of performance were the overarching contribution of the early pioneers and include, but are not limited to:
• Pupil and iris detection.
• Freedom of head movement.
• Adjustments for human anatomical eye variability.
• Adjustment for uncorrected and corrected human vision.
• Ease of calibration.
• Form factor and cost.
Let's discuss the Pupil Center Corneal Reflection (PCCR) method in more detail. Near-infrared illumination creates reflection patterns on the cornea and lens called Purkinje images [17] (see figure 13), which can be captured by image sensors; the resulting vectors describing eye-gaze direction can then be calculated in real-time. This information can be used to analyze the behavior and consciousness of a subject [14]. Lastly, geometric characteristics of a subject's eyes must be estimated to reliably measure eye-gaze point calculations (see figure 15). Therefore, a calibration procedure involves bright/dark pupil adjustments for lighting conditions, light refraction/reflection properties of the cornea, lens, and fovea, and an anatomical 3D eye model to estimate the foveal location responsible for the visual field (focus, full color).
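As a rough illustration of how a calibrated mapping from pupil-glint vectors to on-screen gaze points can be set up, the sketch below fits a second-order polynomial by least squares. This is a deliberately simplified stand-in for the physiological 3D eye model and bright/dark pupil compensation described above, not the method used by any particular commercial tracker, and all names and data formats are hypothetical.

```python
import numpy as np

def _features(v):
    dx, dy = v[:, 0], v[:, 1]
    return np.column_stack([np.ones_like(dx), dx, dy, dx * dy, dx**2, dy**2])

def calibrate(pupil_glint_vectors, screen_points):
    """Least-squares fit of a 2nd-order polynomial map (needs >= 6 calibration targets)."""
    A = _features(np.asarray(pupil_glint_vectors, float))
    B = np.asarray(screen_points, float)
    coef, *_ = np.linalg.lstsq(A, B, rcond=None)
    return coef                      # shape (6, 2): one column per screen axis

def gaze_point(coef, vector):
    """Map one pupil-glint vector to estimated screen coordinates."""
    return (_features(np.asarray([vector], float)) @ coef)[0]
```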
EYE-TRACKING IN SEARCH AND RETRIEVAL
In 2003, the first study on eye-tracking and information retrieval (IR) from search engines was conducted [49]. The authors of the study wanted to understand if it was possible to infer relevance from eye movement signals. In 2004, Granka et al. [20] investigated how users interact with search engine result pages (SERPs) in order to improve interface design processes and implicit feedback of the engine while Klöckner et al. [31] asked the more basic question of search list order and eye movement behavior to understand depth-first or breadth-first retrieval strategies. In 2005, similar to the previous study, Aula et al. [3] wanted to classify search result evaluation style in addition to depth-first or breadth-first strategies. The research revealed that users can be categorized as economic or exhaustive in that the eye-gaze of experienced users is fast and decisions are made with less information (economic). conducted during utilization of facets for filtering and refining a non-transactional search strategy [8].
In 2010, Balatsoukas and Ruthven [4] argued that "there are no studies exploring the relationship between relevance criteria use and human eye movements (e.g. number of fixations, fixation length, and scan-paths)". I believe there was some truth to this statement, as the only research close to their work was that of inferring relevance, at the macro-level, from eye-tracking [49]. Their work uncovered that topicality explained much of the fixation data. Dinet et al. [11] studied visual strategies of young people from grades 5 to 11 on how they explored the search engine results page and how these strategies were affected by typographical cuing such as font alterations while Dumais et al. [12] examined individual differences in gaze behavior for all elements on the results page (e.g. results, ads, related searches).
In 2012, Balatsoukas and Ruthven extended their previous work on the relationship between relevance criteria and eye-movements to include cognitive and behavioral approaches with grades of relevance (e.g. relevant, partial, not) and the relationship to length of eye-fixations [5] while Marcos et al. [36] studied patterns of successful vs. unsuccessful information seeking behaviors; specifically, how, why, and when, users behave differently with respect to query formulation, result page activity, and query re-formulation. In 2013, Maqbali et al. studied eye-tracking behavior with respect to textual and visual search interfaces as well as the issue of data quality (e.g. noise reduction, device calibration) at a time when the existing software 14 did not support such features [2].
In 2014, Gossen et al. studied the differences in perception of search results and interface elements between late-elementary school children and adults with the goal of developing methodologies to build search engines for engaging and educating young children based on previous evidence that search behavior varies widely between children and adults [19]. Gwizdka examined the relationship between the degree of relevance assigned to a retrieval result by a user, the cognitive effort committed to reading the documented result, and inferring the relationship with eye-movement patterns [21] while Hofmann et al. examined interaction and eye-movement behavior of users with query auto completion rankings (also referred to as query suggestions or dynamic queries) [24].
In 2015, Eickhoff et al. argued that query suggestion approaches were "attention oblivious" in that without mapping mouse cursor movement at the term-level of search engine result pages, eye-tracking signals, and query reformulations, efforts of user modeling were limited in their value, based solely on previous, popular, or related searches, and not entirely obvious that such suggestions were relevant for users with non-transactional information needs [13]. ...in order to contextualize experimental responses [38]. Prior to their position, experimental concerns were focused on data quality (e.g. noise reduction) and device calibration, not human response calibration.
In 2017, Gwizdka et al. revisited previous work on inferring relevance judgements for news stories albeit with a higher resolution eye-tracking device and the addition of more complex neurophysiological approaches such as electroencephalography (EEG) to identify relevance judgement correlates between eye-movement patterns and electrical activity in the brain [22] while Low et al. applied eye-tracking, pupillometry, and EEG to model user search behavior within a multimedia environment (e.g. an image library) in order to operationalize the development of an assistive technology that can guide a user throughout the search process based on their predicted attention, and latent intention [34].
In 2019, the first neuroadaptive implicit relevance feedback information retrieval system was built and evaluated by Jacucci et al. [29]. The authors demonstrated how to model search intent with eye and brain-computer interfaces for improved relevance predictions while Wu et al. examined eye-gaze in combination with electrodermal activity (EDA), which measures neurally mediated effects on sweat gland permeability, while users examined search engine result pages to predict subjective search satisfaction [57]. In 2020, Bhattacharya et al. re-examined relevance prediction for neuroadaptive IR systems with respect to scanpath image classification and reported up to 80% accuracy in their model [6].
EYE-TRACKING IN AWARE AND ADAPTIVE USER INTERFACES
In this section, I will review only those works that satisfy the criteria of a system (machine) that utilizes implicit signals from an eye-tracker to carry out functions and interact or collaborate with a human.
iDict was an eye-aware application that monitored gaze path (saccades) while users read text in foreign languages.
When difficulties were observed by analyzing the discontinuous eye movements, the machine would assist with the translation [27]. Later, an affordable "Gaze Contingent Display" was developed for the first time that was operating system and hardware integration agnostic. Such a display was capable of rendering images via the gaze point and thus had applications in gaze contingent image analysis and multi-modal displays that provide "focus+context" as can be found with volumetric medical imaging [41].
Children with autism spectrum disorder have difficulties with social attention. Particularly, they do not focus on the eyes or faces of those communicating with them. It is thought that forms of training may offer benefit. An amusement ride machine was engineered and outfitted with various sensors and an eye-tracker. The ride was an experiment that would elicit various responses from the child and require visual engagement of a screen that would then reward with auditory and vestibular experiences, and thus functioned as a gaze contingent environment for socially training the child on the issue of attention [48].
Fluid and pleasant human communication requires visual and auditory cues that are respected by two or more people.
For example, as I am speaking to someone and engaged in eye contact, perhaps I will look away for a moment or fade my tone of voice and pause. These are social cues that are then acted upon by another person where they then engage me with their thought. This level of appropriateness is not embedded in devices, although the concept of "Attentive User Interfaces" that utilize eye-tracking to become more conscious about when to interrupt a human or group of humans has been studied [53]. Utilizing our visual system as a point and selection device for machine interactions instead of a computer mouse or touch screen would seem like a natural progression in the evolution of interaction. There are two avenues of engineering along this thread. The first simply requires a machine to interact with, an accurate eye tracking device, and thresholds for gaze fixation in order to select items presented by the machine. The second requires that we study the behaviors of interaction (eye, peripheral components) and their correlates in order to build a model of what the typical human eye does precisely before and after selections are made.
With this information we may then be able to have semi-conscious machines that understand when we would like to select something or navigate through an environment. A machine of the first kind was in-fact built and experimented on for image search and retrieval [42], whereby a threshold of 80 millisecond gaze fixation was used as the selection device.
The experiment asked that users identify the target image within a library of images that were presented in groups. All similarity calculations were stored as metadata prior to the experiment. The user would have to iteratively gaze at related images for at least 80 milliseconds for the group of images to filter and narrow with a change of results. The results indicated that the speed of gaze contingent image search was faster than an automated random selection algorithm.
However, the gaze contingent display was not experimented against a traditional interaction like the computer mouse.
Later, a similar system was built and an experiment was conducted using Google image search [15]. The authors in [40] also presented a similar gaze threshold (100 ms) based system called GazeNoter. The gaze-adaptive note-taking system was built and tested for online PowerPoint slide presentations. Essentially, by analyzing a user's gaze, video playback speed would adjust and notes would be recommended for particular areas of interest (e.g. bullet points, images, etc.).
The prototype was framed around the idea that video lectures require the user to obtrusively pause the video, lose focus, write a note, then continue. In-fact, the experiments reported show that users generated more efficient notes and preferred the gaze adaptive system in comparison to a baseline system that had no adaptive features.
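Dwell-time selection of the kind used in these prototypes (80 ms in [42], 100 ms in GazeNoter) reduces to checking how long consecutive gaze samples stay inside a target region. A minimal sketch, with hypothetical sample and target formats, is:

```python
def dwell_selection(samples, targets, dwell_ms=80.0):
    """samples: (t_ms, x, y) tuples; targets: name -> (xmin, ymin, xmax, ymax) boxes."""
    current, entered = None, None
    for t, x, y in samples:
        hit = next((name for name, (x0, y0, x1, y1) in targets.items()
                    if x0 <= x <= x1 and y0 <= y <= y1), None)
        if hit != current:                    # gaze moved to a new region (or off-target)
            current, entered = hit, t
        elif hit is not None and t - entered >= dwell_ms:
            return hit, t                     # selection fires once the dwell threshold is met
    return None, None

gaze = [(i * 8.3, 310 + i, 205) for i in range(20)]          # ~120 Hz stream drifting right
print(dwell_selection(gaze, {"thumbnail_3": (300, 180, 420, 260)}))
```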
In [43], the authors note that implementation of eye-tracking in humanoid robots has been done before. However, no experiment had been conducted on the benefits for human-robot collaboration. The eye-tracker was built into the humanoid robot "iCub" as opposed to being an externally visible interface. This engineering design enabled a single-blind experiment where the subjects had no knowledge of any infrared cameras illuminating the cornea and pupil or the involvement of eye-tracking in the experiment. The robot and human sat across from each other at a table. The humans were not asked to interact with the robot in any particular way (voice, pointing, gaze, etc.) but were asked to communicate with the robot in order to receive a specific order of toy blocks to build a structure. The robot was specifically programmed in this experiment to only react to eye gaze, which it did successfully in under 30 seconds across subjects.
Cartographers encode geographic information on small scale maps that represent all the topological features of our planet. This information is decoded with legends that enable the map user to understand what and where they are looking at. Digital maps have become adaptive to user click behavior and therefore the legends reflect the real-time interaction. Google Earth is an excellent example of this. New evidence indicates that gaze-based adaptive legends are just as useful as, and perhaps more useful than, traditional legends [18]. This experiment included two versions of a digital map (e.g. static legend, gaze-based adaptive legend). Although participants in the study performed similarly for time-on-task, they preferred the adaptive legend, indicating its perceived usefulness.
Technology
The standardization of eye-tracking technology is not without limitation. A number of advancements in the fundamental technology of PCCR based eye-trackers are still required. For example, the image processing algorithms have difficulty on a number of scenarios involving the pupil center corneal reflection method: • Reflections from eye-glasses and contact lenses worn by the subject can cause image processing artifacts.
• Eye-lashes that occlude the perimeter of the pupil cause problems for time-series pupil diameter calculations.
• Large pupils reflect more light than small pupils. The wide dynamic range in reflection can be an issue for image processors. • The eye blink reflex has a complex neural circuit involving the oculomotor nerve (cranial nerve III), trigeminial nerve (cranial nerve V), and the facial nerve (cranial nerve VII). 17,18 When a pathology in this reflex is present the subject does not blink during an experimental task therefore dry and congealed corneas is the result, which makes corneal reflection difficult for the image processor.
• High-speed photography by the image capture modality is required as saccadic eye movements have high velocity, and head movements may at times be also high in velocity causing blurred images of the corneal reflection.
• Squinting causes pupil center and corneal reflection distortion during image processing.
• The trade-off between PCCR accuracy and freedom of head movement may be overcome by robotic cameras that "eye follow" although this is not available in most affordable eye-trackers.
Additionally, sampling frequencies should be thoughtfully understood in order to design an experiment that potentially answers a question or set of questions (see figure 16). Essentially, at the highest frequency (1200 Hz), 1200 data points for each second of eye movement are recorded and each eye movement will be recorded approximately every 0.83 milliseconds (sub-millisecond). At the lowest end of the frequency spectrum (60 Hz), on the other hand, 60 data points for each second of eye movement are recorded and each eye movement will be recorded approximately every 16.67 milliseconds. These sampling frequencies are important to understand because certain eye phenomena can only be observed at certain frequencies. For example, low-noise saccades are observed at frequencies greater than 120 Hz, which are sampled every 8.33 milliseconds, while low-noise microsaccades are observed at frequencies greater than 600 Hz, which are sampled every 1.67 milliseconds (see https://www.tobiipro.com/learn-and-support/learn/eye-tracking-essentials/eye-tracker-sampling-frequency/). Higher sampling frequencies will provide higher sample sizes and levels of certainty over the same unit of time. In terms of stratifying a data stream accurately and building user models for adaptive feedback within a system, high sampling frequency is a pre-requisite and provides more granularity for fixations, fixation duration, pupil dilation, saccades, saccade velocity, microsaccades, and spontaneous blink rate.
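The arithmetic above is worth keeping at hand when choosing a device; a small sketch, where the frequency thresholds are those quoted in this section, is:

```python
def sampling_profile(freq_hz):
    """Inter-sample interval and which ocular events a given sampling rate can resolve."""
    return {
        "interval_ms": round(1000.0 / freq_hz, 2),
        "low_noise_saccades": freq_hz > 120,        # per the thresholds quoted above
        "low_noise_microsaccades": freq_hz > 600,
    }

for hz in (60, 120, 600, 1200):
    print(hz, sampling_profile(hz))
```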
Psychophysiology
With this data, we can begin to ask questions related to moment-by-moment actions and their relationship to neurophysiology. For example, it is not possible to move your eyes (voluntarily or involuntarily) without making a corresponding shift in focus/attention and disruption to working memory. This is especially true in spatial environments [54,55].
Perhaps, by modeling a user's typical pattern of eye movement over time, a system can adapt and learn when to politely re-focus the user and/or more accurately model the eye-as-an-input.
Moreover, the eyes generally fixate on objects of thought, although this may not always be the case in scenarios where we are looking at nothing but retrieving representations in the background [16]. Think of a moment where you gestured with your hand at a particular area of a room or place that someone you spoke to earlier was in. Therefore, in the context of a human-machine interaction, how would the machine learn to understand the difference in order to execute system commands, navigate menus, or remain observant for the next cue? For information systems, at least, this is the argument for supplementary data collection from peripheral components which allow for investigation and potential discovery of correlates that the machine can be trained on to understand the difference. However, an accepted theory of visual perception is that it is the result of both feedforward and feedback connections, where the initial feedforward stimulation generates interpretation(s) that are fed-backward for further processing and confirmation, known as a reentrant loop. Experiments have demonstrated varying cycle times for reentrant loops when subjects are presented with information in advance (a specific task) for sequential image processing and detecting a target.
Detection performance increased as the duration of an image being presented increased from 13-80 milliseconds [47].
Another limitation with this interaction is the manipulation device (computer mouse), as the literature has suggested that average mouse pointing time for web search appears to range from 600-1000 milliseconds [39] while pupil dilation can have latencies of only 200 milliseconds. This suggests that visual perception during information seeking tasks is significantly faster than the ability to act on it with our motor movements; it is thus likely that the eye-as-an-input device is more efficient, and a significant delay appears to exist between the moment a user decides upon a selection item and when the selection item is actuated. On this particular issue, experimental protocols should outline a specific manner in which to understand or operationalize this gap.
Even when a user is focused and attentive, their comprehension may still lack that of an expert. How would an adaptive system learn about a user to the extent that, although attentive, their comprehension is not optimal, and perhaps recommend material to build a foundation before returning later? Most scientists in the field would likely argue that this is the purpose of objective questioning as an assessment. However, these assessments cannot distinguish correctly guessed answers, or misunderstanding in the wording of a question leading to an incorrect answer. Additionally, fewer fixations and longer saccades may be indicative of proficient comprehension and have been shown to be predictive of higher percentage scores on objective assessments [50].
CONCLUSION
In this short review I have discussed ophthalmic neurophysiology and the experimental considerations that must be made to reduce possible noise in an eye-tracking data stream. I have also reviewed the history, experiments, technological benefits, and limitations of eye-tracking studies within the information retrieval field. The concepts of aware and adaptive user interfaces were also explored that humbly motivated my investigations and synthesis of previous work from the fields of industrial engineering, psychophysiology, and information retrieval.
As I stated at the beginning of this review, on the nature of learning I consistently think about my son. Learning from his environment is the foundation of his existence. His interaction with ambient information reinforces or discourages certain behaviors. Throughout this writing I attempted to express these ideas within the context of human-information-machine interaction. More precisely, I attempted to express the need for establishing a foundation that measures the decision making process with lower latency but also with the ability to be operationalized non-intrusively and as an input device; achieving such a goal requires a window to the mammalian brain that is achievable only with eye-tracking, which I firmly believe to be the future of ocular navigation for information retrieval.
"year": 2020,
"sha1": "689456c3059e07f33742d0ad63279dad4036e03f",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "689456c3059e07f33742d0ad63279dad4036e03f",
"s2fieldsofstudy": [
"Computer Science",
"Psychology",
"Engineering"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
The Qth-power algorithm in characteristic 0
The Qth-power algorithm produces a useful canonical P-module presentation for the integral closures of certain integral extensions of $P:=\mathbf{F}[x_n,...,x_1]$, a polynomial ring over the finite field $\mathbf{F}:=\mathbf{Z}_q$ of $q$ elements. Here it is shown how to use this for several small primes $q$ to reconstruct similar integral closures over the rationals $\mathbf{Q}$ using the Chinese remainder theorem to piece together presentations in different positive characteristics, and the extended Euclidean algorithm to reconstruct rational fractions to lift these to presentations over $\mathbf{Q}$.
mod q), with equality for most q; and that the reconciled version of the presentations over Z q for various q is lifted back to something over Q which has fractions with the same set of leading monomials as those of the fractions in its mod q images. So if the lifted version has an isomorphic image of the original ring inside it (which will not happen unless the product of the distinct primes used is large enough), it necessarily must be the integral closure of that original ring.
The fact that the extended Euclidean algorithm gives essentially inverse results of the mod q map when q is sufficiently large gives, as a corollary, that the integral closure for large primes q can be gotten from those for several smaller primes; this is a useful result here, in that the Qth-power algorithm should, by its nature, be expected to perform significantly better for smaller primes q.
It should be pointed out that in both characteristic 0 and characteristic q > 0 the main advantage of the Qth-power algorithm is that it takes highly structured input and produces highly structured output, namely a strict affine P-algebra presentation for the integral closure with an induced (as opposed to default) monomial ordering based on the weighted monomial ordering on the input. This allows for a fairly simple determination of the existence of a better Noether normalization than the given one, giving a smaller presentation with the same type of structure and information. But in characteristic 0, it gives a presentation with relatively small rational coefficients rather than a presentation over the integers with overly large integer coefficients, and one that specializes mod q to that of the image mod q for all large q (and a subset of the integral closure for all smaller q for which the image makes sense). Section 2 contains notation to describe the main algorithm, the algorithm itself, and an outline of what will be proved to justify it. The technical definitions and other details are postponed to section 3. Section 4 deals with the computation of canonical conductor elements based on the Jacobian. Section 5 is a discussion of the application of the Chinese remainder theorem and the extended Euclidean algorithm in this context. And section 6 contains the theory and proofs needed to justify the algorithm. There are numerous small examples throughout to help with the concepts and notation. But there is a larger example, relegated to the Appendix, that is useful as well for comparison to other implementations of integral closure computations. There are several other such examples on the author's website. Code for the Qth-power algorithm has been available for a while in Magma and more recently in Macaulay2 as well. The latter has documentation containing more examples.
Overview of the algorithm
Let P (0) := Q[x n , . . . , x 1 ] be the polynomial ring over the rationals in the (independent) variables x n , . . . , x 1 . Let S (0) := P (0) [y]/ f (y) for some monic polynomial f (T ) ∈ P [T ], be an integral extension. Suppose also that it is an affine domain of type I with a weight function defining a weight-over-grevlex monomial ordering (as in [6]), though more general hypotheses may suffice.
Let Q(S (0) ) denote the field of fractions of S (0) . Let ∆ (0) ∈ P (0) be the canonical monic conductor element computed from Jacobian(B (0) ) as described below, so that the integral closure C(S (0) , Q(S (0) )) of S (0) in Q(S (0) ) satisfies S (0) ⊆ C(S (0) , Q(S (0) )) ⊆ 1/∆ (0) S (0) , and is known to be the union of all rings lying between S (0) and 1/∆ (0) S (0) . [Since S (0) is assumed to be an integral extension of P (0) , it probably makes more sense to think of it as C(P (0) , Q(S (0) )), in that P (0) is a minimal subring over which this has a finite module structure. And once a fixed conductor element ∆ (0) ∈ P (0) has been chosen, it makes more sense to use the notation C(P (0) , 1/∆ (0) S (0) ) (even though 1/∆ (0) S (0) is not a ring) to emphasize that the elements are from 1/∆ (0) S (0) and are integral over P (0) . So that is the notation that will be used here.] The objective here is to find a canonical ordered set of monic polynomials (that is, with leading coefficient 1 relative to the monomial ordering being used) g (0) j , 0 ≤ j ≤ J 0 , such that the fractions g (0) j /∆ (0) , 0 ≤ j ≤ J 0 , form a P (0) -module generating set for C(P (0) , 1/∆ (0) S (0) ); then to use y (0) J 0 , . . . , y (0) 1 as new variable names for the non-trivial fractions, defining R̄ (0) := Q[y (0) J 0 , . . . , y (0) 1 ; x n , . . . , x 1 ] with grevlex-over-weight monomial ordering induced by the weight function of R (0) (as in [6] but described below as well); and then to compute the monic polynomials b (0) k , 1 ≤ k ≤ K 0 , forming a minimal, reduced Gröbner basis B̄ (0) for the ideal Ī (0) of induced relations. Then S̄ (0) := R̄ (0) /Ī (0) is a strict affine P (0) -algebra presentation of the integral closure C(P (0) , 1/∆ (0) S (0) ). While P (0) is an explicit subring of S̄ (0) , S (0) need not be; so let ψ (0) : S (0) → S̄ (0) be the inclusion map (the identity on the x i identified in each copy of P throughout, but not on the y which is mapped to combinations of the y j and the x i ), so that ψ (0) (S (0) ) ⊆ S̄ (0) . Now consider what needs to happen for there to be an image of all this over Z q for some prime q, gotten by applying the mod q map, µ q .
It is easy enough to define P (q) := Z q [x n , . . . , x 1 ] by identifying the variables x i , and similarly to define R (q) := Z q [y; x n , . . . , x 1 ] by further identifying the variable y.
If q doesn't divide any denominator β of any rational coefficient α/β, gcd(α, β) = 1, of any b k , then B (q) := µ q (B (0) ) makes sense, and is still a minimal, reduced Gröbner basis for the ideal I (q) of R (q) that it generates, though the quotient ring S (q) := R (q) /I (q) need no longer be even a reduced ring, let alone an affine domain of any sort. [Whether it is an integral extension of P (q) can depend on how one views such things as Z q [y 1 ; x 1 ]/⟨y 1 ²⟩ in which x 1 doesn't appear in the defining relation.] And if q doesn't divide LC(d (0) ), for d (0) = LC(d (0) )∆ (0) the conductor as computed from the Jacobian of B (0) over Z, then ∆ (q) := µ q (∆ (0) ) is the monic canonical conductor element that would have been computed using the Jacobian over Z q .
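Testing that µ q is defined (as needed again in step (3) of the algorithm below) reduces to a divisibility check on the reduced coefficient denominators. A minimal sketch, with hypothetical helper names and coefficients modeled as Python Fractions, is:

```python
from fractions import Fraction

def mu_q_defined(coefficients, q):
    """mu_q makes sense iff q divides no reduced denominator among the given coefficients."""
    return all(Fraction(c).denominator % q != 0 for c in coefficients)

def mu_q(coefficients, q):
    """Reduce rational coefficients alpha/beta to alpha * beta^(-1) mod q."""
    out = []
    for c in coefficients:
        f = Fraction(c)
        out.append(f.numerator * pow(f.denominator, -1, q) % q)
    return out

coeffs = [Fraction(3, 4), Fraction(-1, 6), Fraction(5)]
print(mu_q_defined(coeffs, 5), mu_q(coeffs, 5))   # True [2, 4, 0]
print(mu_q_defined(coeffs, 3))                    # False: 3 divides the denominator 6
```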
The Qth-power algorithm is meant to work in positive characteristic to produce a strict affine P (q) -algebra presentation with P (q) -module generating set of fractions with monic numerators g (q) Jq /∆ (q) , . . . , g (q) 1 /∆ (q) , and g (q) 0 := ∆ (q) , over a ring R̄ (q) := Z q [y Jq , . . . , y 1 ; x n , . . . , x 1 ] (having the grevlex-over-weight monomial ordering induced by that on R (q) ), with monic polynomials b (q) k , 1 ≤ k ≤ K q , forming a minimal, reduced Gröbner basis B̄ (q) for the ideal Ī (q) of induced relations, defining the presentation S̄ (q) := R̄ (q) /Ī (q) . The steps in the proposed characteristic 0 algorithm based on this are then simple to understand:
Algorithm 1
(1) Start with the finite ordered set of (independent) variables (x n , . . . , x 1 ) defining the Noether normalization P (0) in characteristic 0, the (dependent) variable name y used to define the ring R (0) , and the finite ordered set of monic relations (b 1 , . . . , b K ) forming a minimal, reduced Gröbner basis for the ideal of relations I (0) for a presentation of the input quotient ring S (0) = R (0) /I (0) .
(2) Compute a canonical conductor element ∆ (0) ∈ P (0) for S (0) from Jacobian(B (0) ).
(3) For successive primes, q l , test that the (mod q l ) map, µ q l , is defined (that is, that q l doesn't divide β for any rational coefficient α/β, gcd(α, β) = 1, in any of the basis relations b k or of ∆ (0) ) and that S (q l ) is still an integral extension of the image P (q l ) .
(4) Compute a canonical conductor element ∆ (q l ) ∈ P (q l ) for S (q l ) , skipping q l if it is one of the (finite number of) primes for which ∆ (q l ) ≠ µ q l (∆ (0) ).
(5) Use the Qth-power algorithm in characteristic q l (as a black box for the purposes of this paper) to compute a canonical ordered set of (numerator) polynomials (g (q l ) j ), 0 ≤ j ≤ J q l (with the common denominator polynomial being g (q l ) 0 := ∆ (q l ) ) for the fractions forming a P (q l ) -module generating set for a strict affine P (q l ) -algebra presentation of the integral closure C(P (q l ) , 1/∆ (q l ) S (q l ) ), together with the monic polynomials (b (q l ) k ), 1 ≤ k ≤ K q l , forming a minimal, reduced Gröbner basis for the ideal of relations relative to the induced grevlex-over-weight monomial ordering.
(6) If (q l ), l ∈ L is a sequence of distinct primes for which presentations S̄ (q l ) have been computed, and
• J L := J q l and the leading monomials LM(g (q l ) j ), 0 ≤ j ≤ J L , are independent of l ∈ L;
• K L := K q l and the leading monomials LM(b (q l ) k ), 1 ≤ k ≤ K L , are independent of l ∈ L;
then use the Chinese remainder theorem on the canonical ordered sets (g (q l ) j ) l∈L , 1 ≤ j ≤ J L , and also on the sequences (b (q l ) k ) l∈L , 1 ≤ k ≤ K L , to get similar canonical ordered sets of polynomials and relations with coefficients modulo N L := ∏ l∈L q l ; then use the extended Euclidean algorithm to lift those coefficients to small fractions α/β ∈ Q with α² + β² minimal, to get a canonical ordered set of polynomials (g (0,N L ) j ) and relations (b (0,N L ) k ) over Q.
To explain why this works, the important steps are to show that µ q (B (0,N L ) ) = B̄ (q) whenever everything involved makes sense, and that B (0,N L ) is still a minimal, reduced Gröbner basis for I (0,N L ) .
These are proven as Lemma 9 and Theorem 15 below. It should be noted that (regardless of characteristic) any P -module between S and 1 ∆ S has a canonical ordered set of polynomials (g j ) j (as defined in the next section) with LM(g j ) j a measure of the size of the P -module. That is, were the de Jong algorithm implemented relative to a fixed Noether normalization P and a canonical conductor element ∆ ∈ P , then the sequence of nested rings produced would have a sequence of canonical ordered sets of polynomials (g j ) j with LM(g j ) j nested and getting larger. The reverse is true of the Qth-power algorithm in that the P -modules produced have canonical ordered sets of polynomials (g j ) j with LM(g j ) j nested and getting smaller. Both approaches meet in the middle with a P -module that is a ring that must be the integral closure sought.
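The coefficient-level machinery in step (6) of Algorithm 1, Chinese remaindering followed by lifting residues to small rationals with the extended Euclidean algorithm, can be sketched as below. This illustrative version uses the common bound |α|, |β| ≤ √(N/2) rather than the α² + β² minimality criterion stated above, and all function names are mine, not the QthPower package's.

```python
from math import isqrt

def ext_gcd(a, b):
    if b == 0:
        return a, 1, 0
    g, x, y = ext_gcd(b, a % b)
    return g, y, x - (a // b) * y

def crt(residues, moduli):
    """Combine x = r_l (mod q_l) for pairwise coprime moduli into x mod N, N = prod q_l."""
    x, n = residues[0], moduli[0]
    for r, m in zip(residues[1:], moduli[1:]):
        g, s, t = ext_gcd(n, m)          # s*n + t*m = 1
        x = (x * t * m + r * s * n) % (n * m)
        n *= m
    return x, n

def rational_lift(a, n):
    """Lift a mod n to a fraction alpha/beta with alpha = beta*a (mod n), both small."""
    r0, r1, t0, t1 = n, a % n, 0, 1
    bound = isqrt(n // 2)
    while r1 > bound:
        q = r0 // r1
        r0, r1, t0, t1 = r1, r0 - q * r1, t1, t0 - q * t1
    return (r1, t1) if t1 > 0 else (-r1, -t1)

# -1/2 reduced mod 7, 11, 13 gives residues 3, 5, 6; CRT and lifting recover -1/2.
x, n = crt([3, 5, 6], [7, 11, 13])
print(x, n, rational_lift(x, n))   # 500 1001 (-1, 2)
```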
Also, Jacobian(B (0) ), over the integers, Z, is used below to define canonical conductor elements ∆ (0) ∈ P (0) and ∆ (q) ∈ P (q) for all primes q at the same time. This is discussed in its own section. The use of the Chinese remainder theorem and the extended Euclidean algorithm, while discussed below as applied in this context, are assumed to be elementary. The proof that the Qth-power algorithm works in positive characteristic was dealt with in the author's previous papers cited in the introduction, though certain parts of it are discussed below. [It should be noted however, that it would not take too much work to put other implementations of other integral closure algorithms in a form that would also work here, though at present, few if any give a similar canonical result in characteristic 0 that directly specializes to the result they give in positive characteristic. It makes mathematical sense to rewrite them to reflect this connection between integral closures in characteristic 0 and positive characteristic q.]
Definitions and other details
The following material describes the structure that is used to describe integral closures of integral extensions of a given Noether normalization P , and explains the mindset of the Qth-power algorithm approach to same. The idea is to have an integral extension S := R/I of P := F[x n , . . . , x 1 ], with R := F[y; x n , . . . , x 1 ], I := ⟨f (y)⟩ the ideal of relations, and P a Noether normalization of S, and with S having a weight function induced on it by P , [6]. Then its integral closure C(P, S) should have a presentation S̄ := R̄/Ī with an induced weight function. That is, for R̄ := F[y J , . . . , y 1 ; x n , . . . , x 1 ] with y j a name for the (non-trivial) P -module generator g j /g 0 having wt(y j ) := wt(g j ) − wt(g 0 ) as its induced weight.
Moreover a presentation S̄ of the integral closure C(S, Q(S)) should have a nice structure as an affine P -algebra.
Definition 1 A strict affine P -algebra presentation S̄ := R̄/Ī, with R̄ := F[y, x] and Ī the ideal of induced relations, is one with a minimal, reduced Gröbner basis B̄ for Ī consisting of P -quadratic relations of the form y i y j − ∑ k c i,j,k y k , c i,j,k ∈ P , describing the P -algebra multiplication, with possibly some monic P -linear relations of the form ∑ k a k y k , a k ∈ P , if the P -module generators, y k , are not independent over P , [6]. This is ensured by the grevlex-over-weight monomial ordering, in that all products of total degree 2 in the y's are reduced to P -linear combinations of the P -module generators with total degree less than two, and all P -syzygies only involve monomials of total degree less than 2 in them.
Definition 2 For P := F[x n , . . . , x 1 ] a polynomial ring, R := F[y; x n , . . . , x 1 ], I an ideal of R such that the quotient ring S := R/I is an integral extension of P , an ordered set (g j ∈ R : 0 ≤ j ≤ J) of polynomials is said to be canonical for some submodule 1 ∆ T ⊆ 1 ∆ S iff (1) each g j is monic (has leading coefficient 1 relative to the monomial ordering being used); (2) g 0 = ∆ ∈ P is a conductor element for T ; for any j 2 = j 1 and x α ∈ P .
Making sure that one produces a canonical strict affine P -algebra presentation, independent of characteristic, is crucial in being able to reconstruct a canonical presentation in characteristic 0 from canonical one in positive characteristic.
Moreover the induced grevlex-over-weight monomial ordering is based on a weight function on the input extended to the output: Definition 3 A weight function wt : R\I → N n (n the number of independent variables) is a function satisfying: , the columns of any non-singular matrix M P defining a (global) monomial ordering on P also define a weight function on (the nonzero elements of) P . Weight functions (relative to a given ideal I) have the important property that wt(g) = wt(NF (g, I)), as otherwise their difference (an element of I) would have a defined weight. This, in turn, implies that all standard monomials have different weights. Integral extensions with weight functions have at least this much more structure than those that don't. The integral extensions such as those considered here have a weight function.
One can extend a weight function on S naturally to (the non-zero elements of) Q(S) (the field of fractions of S) by wt Q(S) (g/h) := wt S (g) − wt S (h) if one allows values in Z n . But for g/h ∈ C(S, Q(S)), wt(g/h) will not have negative entries and will represent an induced weight on g/h.
There are weight-over-grevlex and grevlex-over-weight monomial orderings defined by the non-singular matrices gotten by replacing the top n or bottom n rows of a grevlex monomial ordering matrix by the weight matrix, respectively. The former emphasizes the property that wt(LT (f )) = wt(LT (f − LT (f ))), whereas the latter emphasizes the desired strict affine P -algebra presentation.
The conductor element ∆ and the numerators g j produced are all assumed to be monic. What will be computed are pairs of finite canonical ordered sets of polynomials and finite sequences of relations, with maps between such pairs. The induced weights are kept track of as well, given that they define the monomial orderings involved.
What is necessary to know about the Qth-power algorithm is that it treats the input ring S as a P -module with a natural induced monomial ordering, computes a conductor element ∆ ∈ P , starts with a dual module such as the default M 0 := 1/∆ S, and computes a nested sequence of P -modules by the simple definition M i+1 := {f ∈ M i : f^q ∈ M i }. Necessarily each M i (and hence the integral closure itself) is a P -module with a natural induced monomial ordering. Moreover it naturally produces a strict affine P -algebra presentation R̄/Ī relative to a canonical ordered set (g j ) j of polynomials. [This approach works theoretically for any characteristic and any integer power at least 2, but is only linear when q is (a power of) the characteristic.] Consider the reasonably generic example: This is worked out in detail in the Appendix, not only by the methods being described here and implemented in the author's QthPower package in Macaulay2 [10], but also using the other existing applicable implementations of integral closure and/or normalization algorithms, normal in Singular, integralClosure in Macaulay2, and both Normalisation and IntegralClosure in Magma.
Computing a canonical conductor element
Standard methods to compute a conductor element ∆ ∈ S (meaning an element for which C(S, Q(S)) ⊆ (1/∆)S) use determinants of n × n minors of a Jacobian. This can be easily done by column-reducing the Jacobian matrix Jacobian(B) of B; and this computation can be done over R instead of S by appending columns, one for each basis element of I and each row of Jacobian(B). It is then possible to consider those entries C_{i,j} ∈ P of the column-reduced form C for which C_{k,l} = 0 for k > i and l ≤ j. An appropriate monomial ordering must be chosen relative to which this is done, so that the elements C_{i,j} ∈ P considered will correspond to diagonal entries in n × n minors whose determinants necessarily produce conductor elements; greatest common divisors of those in the same row can then be used, and a scaled product of those gcds over all rows gives a canonical conductor element ∆ ∈ P. For this purpose, any block ordering treating the dependent variables any way, but using the given monomial ordering described by M_P on the lowest block consisting of the (independent) variables in P, will suffice. [Note that when computing ∆^(0) ∈ P^(0) over Q, it is possible to do this over the integers Z instead (if denominators are cleared first), in order to see in one computation for which (finite set of) primes q it might be that ∆^(q) ≠ ∆^(0) (mod q), by seeing which primes occur anywhere in the column reduction C.]

The method qthConductor exported from the author's QthPower package [10] in Macaulay2 can be used to compute such a canonical conductor element, by letting Macaulay2 do the column-reduction and then using a simple loop to compute the product of the gcds described. This computation is not a point of this paper, other than to ensure that there is a canonical conductor element that can be computed, that it is an element of the given Noether normalization P, and that the computation in positive characteristic mirrors the computation in characteristic 0. Consider the following instructive example, meant originally to test minimality and form of presentation, but which, as a byproduct, was also used to catch bugs in various implementations.
There are rational functions (f_0 := 1, f_4, f_5, f_9, f_10, f_14, f_15, f_19) (with the subscripts corresponding to the weights) forming a P^(0)-module basis for the integral closure S^(0). Then the presentation of S^(q) can be gotten by reading S^(0) modulo q for all primes q ≠ 3, 5. Curiously, the smallest conductor element that could be used is δ^(0) = δ^(q) = x^13 for all primes except δ^(5) = x^13(x^3 + 1)^2. It is tempting to conjecture that ∆^(q) = ∆^(0) (mod q) implies that S^(q) = S^(0) (mod q). It is clearly not true that δ^(q) = δ^(0) (mod q) implies that S^(q) = S^(0) (mod q), as q = 3 shows in this example. Since it is computationally easy to avoid all the (finitely many) primes q for which ∆^(q) ≠ ∆^(0) (mod q) (necessarily divisors of some coefficient in the computation over Z), it is possible to simplify subsequent computations by so doing.
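For orientation only, the following minimal sketch shows where a conductor element in P can come from in the simplest (hypersurface) special case: for S = P[y]/(f) with f monic in y and the extension separable, the discriminant disc_y(f) = ±Res_y(f, ∂f/∂y) lies in P and multiplies C(S, Q(S)) into S. This is not the canonical, column-reduced ∆ computed by qthConductor; both the toy curve and the use of Python/sympy here are assumptions made purely for illustration (the paper's own computations are done in Macaulay2 and Magma).

```python
# Sketch (assumed toy example): a conductor element in P = Q[x] for a hypersurface
# S = P[y]/(f), obtained as the discriminant disc_y(f) = +/- Res_y(f, df/dy).
# This is generally far from the minimal (or canonical) conductor element.
from sympy import symbols, resultant, diff, factor

x, y = symbols('x y')
f = y**2 - x**3 - x**2            # nodal cubic; its integral closure is strictly larger than S

delta = resultant(f, diff(f, y), y)   # an element of P with delta * C(S, Q(S)) contained in S
print(factor(delta))                   # +/- 4*x**2*(x + 1)
```

For this curve x alone already suffices as a conductor element (y/x is integral and x·(y/x) = y ∈ S), which illustrates why the paper works to produce a structured canonical ∆ rather than settling for a discriminant.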
Note especially that in implementing this approach, care must be taken to assure that the integral closure algorithm produces the same canonical result for each good prime q. That is, for most primes q, the integral closure over Z q should look exactly like that of the integral closure over Q, but with coefficients reduced mod q.
For each presentation of S := R/I and each presentation of its integral closure S̄ := R̄/Ī, there is a map ψ : R → R̄, necessarily with ψ(I) ⊆ Ī, so that ψ can be viewed as an inclusion map ψ : S → S̄.
Definition 7
The rational fraction reconstruction map (see, for instance, [2]) sends a residue c modulo N to a fraction α/β, with α and β small, such that α ≡ βc (mod N). Reduction modulo N and reconstruction are almost inverse operations in the sense that, for −N/2 < c < N/2, an integer c is recovered exactly, and a fraction α/β with α² + β² small relative to N is recovered from its residue. Both maps naturally extend to polynomials, by applying them to coefficients and mapping variables to corresponding variables; so we shall abuse notation and use the same function names when applying them to polynomials.
Definition 8
Similarly the Chinese remainder map standardly takes ordered sets of remainders (a_l)_{l∈L} and ordered sets of respective moduli (q_l)_{l∈L}, and produces a (mod N_L), for N_L := ∏{q_l : l ∈ L}, such that a ≡ a_l (mod q_l) for all l ∈ L when the moduli are all relatively prime, as they will necessarily be here when the q_l are distinct primes.
We shall call this map CRT regardless of the number of inputs, and regardless of whether we are applying it to integers or extending it to polynomials by applying it to the coefficients.
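A minimal sketch of this CRT map on integers follows (its extension to polynomials just applies it coefficient-wise). The function name crt, the returned (value, modulus) pair, and the use of Python are illustrative assumptions; the paper's own code is written in Magma and Macaulay2.

```python
def crt(remainders, moduli):
    """Chinese remainder map: return (a mod N_L, N_L) with a congruent to a_l mod q_l
    for every l, assuming the moduli are pairwise relatively prime (e.g. distinct primes)."""
    N = 1
    for q in moduli:
        N *= q
    a = 0
    for a_l, q_l in zip(remainders, moduli):
        M = N // q_l                      # product of the other moduli
        a += a_l * M * pow(M, -1, q_l)    # pow(M, -1, q_l) is the inverse of M mod q_l
    return a % N, N

# For example, crt([2, 4], [5, 11]) returns (37, 55), and 37 is -18 mod 55.
```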
Corollary 11. If (g_j^(0) : 0 ≤ j ≤ J) is canonical for C(P^(0), (1/∆^(0))S^(0)) and the mod q map makes sense, then (g_j^(q) : 0 ≤ j ≤ J) is canonical for C(P^(q), (1/∆^(q))S^(q)) if q is a good prime (and only for some subring if it is not a good prime).
Clearly if q divides any denominator of any rational coefficient α/β of any term of any b_k^(0), it is not good. If q divides any numerator of any rational coefficient α/β of any term of any b_k^(0), it may not be good, especially if the extension mod q is no longer really an extension. And if ∆^(q) ≠ µ_q ∆^(0), q may not be good. So computationally one can try to avoid such primes that are not good or may not be good (since these form a finite, predictable set of primes).
The Euclidean algorithm, applied to N_L =: r_{−1} and any r_0 > 0, produces sequences (r_i) and (Q_i) such that r_{i−2} = Q_i r_{i−1} + r_i with 0 ≤ r_i < r_{i−1}, and r_n = 0. Part of the extended Euclidean algorithm produces a sequence (u_i) with u_{−1} := 0, u_0 := 1, and u_i := u_{i−2} − Q_i u_{i−1}. Of these there is necessarily some i ≥ 0 with r_i² + u_i² minimum, choosing i minimum as well if this is not unique.
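A minimal sketch of this lifting step: run the extended Euclidean algorithm on r_{−1} := N_L and r_0 := c and return r_i/u_i at the index minimizing r_i² + u_i² (so that r_i ≡ u_i·c (mod N_L), i.e. c lifts to the fraction r_i/u_i). The function name, the sign normalization, and the use of Python are assumptions made for illustration.

```python
from math import gcd

def rational_reconstruction(c, N):
    """Lift c (mod N) to a fraction alpha/beta with alpha = beta*c (mod N), taking the
    Euclidean step i >= 0 that minimizes r_i**2 + u_i**2 (earliest such i on ties)."""
    c %= N
    r_prev, r_cur = N, c              # r_{-1} := N_L,  r_0 := c
    u_prev, u_cur = 0, 1              # u_{-1} := 0,    u_0 := 1
    best = (r_cur * r_cur + u_cur * u_cur, r_cur, u_cur)
    while r_cur != 0:
        Q, r_next = divmod(r_prev, r_cur)
        u_next = u_prev - Q * u_cur   # u_i := u_{i-2} - Q_i * u_{i-1}
        r_prev, r_cur = r_cur, r_next
        u_prev, u_cur = u_cur, u_next
        size = r_cur * r_cur + u_cur * u_cur
        if r_cur != 0 and size < best[0]:
            best = (size, r_cur, u_cur)
    _, alpha, beta = best
    if beta < 0:                      # normalize so the denominator is positive
        alpha, beta = -alpha, -beta
    g = gcd(abs(alpha), beta)
    return alpha // g, beta // g      # (numerator, denominator)

# rational_reconstruction(37, 55) returns (1, 3), i.e. 37 lifts to 1/3 (mod 55).
```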
Now define the composite map given by following the Chinese remainder map with the rational fraction reconstruction map described above.
Suppose the variables y_j represent the fractions g_j^(q)/∆^(q), for g_j^(q) := µ_q g_j^(0) and ∆^(q) := µ_q ∆^(0). If q is a good prime, then these should be variables and (a Gröbner basis of) relations for the integral closure S^(q).
Since the object here is to go in the reverse direction, by reconciling various presentations S^(q) and reconstructing the presentation S^(0) from them using the Chinese remainder map and the extended Euclidean algorithm map, consider the candidates for S^(0), namely S^(0,N_L) with polynomial ring R^(0,N_L), for N_L := ∏{q_l : l ∈ L}.
Theory
Lemma 13. If q = N_1 is a good prime larger than α² + β² for any coefficient α/β ∈ Q needed to be reconstructed to produce the presentation R^(0)/I^(0), then R^(q)/I^(q) lifts to this presentation. [And the canonical polynomial set (g_j^(q) : 0 ≤ j ≤ J) necessarily lifts to a canonical polynomial set (g_j^(0,q) : 0 ≤ j ≤ J).] The point is that any c ≡ α/β (mod q) lifts to α/β using the extended Euclidean algorithm as described above.
Corollary 14. If (q_l)_{l∈L} is a set of distinct good primes and N_L := ∏{q_l : l ∈ L} is larger than α² + β² for any rational coefficient needed to be reconstructed to produce the presentation R^(0)/I^(0), and the R^(q_l)/I^(q_l) are compatible in the sense of reconciling under the Chinese remainder map, then R^(0)/I^(0) can be reconstructed from them.

Since S^(0) is not known ahead of time, it is not clear how big N_L must be to apply the proposition or corollary above. It is therefore better to have a theorem independent of this knowledge. So the following is a way of knowing that N_L is sufficiently large without knowing just how large sufficiently large is.
If F_0 = Q, and a_0 := 1/3 and b_0 := 8/7, then the image in characteristic q is not defined for q = 3, 7, and is not an affine domain for q = 2. For F_5 = Z_5, a_5 = 2 and b_5 = −1 would lift to a_0 = 2 and b_0 = −1, giving a presentation of the wrong integral closure (one with the right form but these wrong coefficients). Using F_11 = Z_11 as well would give a_11 = 4 and b_11 = −2, reconciled to give a_55 = −18 and b_55 = 9, and lifted to a_0 = 1/3 and b_0 = −1/6, again giving a presentation of the wrong integral closure. Using in addition F_13 = Z_13 would produce a_13 = −4 and b_13 = 3, reconciled to give a_715 = −238 and b_715 = −101, lifted to the correct a_0 = 1/3 and b_0 = 8/7.
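These numbers can be checked directly with the crt and rational_reconstruction sketches given above (illustrative Python helpers, not the paper's Magma/Macaulay2 code; the helper image_mod below is likewise an assumption):

```python
from fractions import Fraction

def image_mod(frac, q):
    """Image of a rational a/b in Z_q, for a prime q not dividing b."""
    return (frac.numerator * pow(frac.denominator, -1, q)) % q

a0, b0 = Fraction(1, 3), Fraction(8, 7)
a5, a11, a13 = (image_mod(a0, q) for q in (5, 11, 13))   # 2, 4, 9 (= -4 mod 13)
b5, b11, b13 = (image_mod(b0, q) for q in (5, 11, 13))   # 4 (= -1), 9 (= -2), 3

a55, _ = crt([a5, a11], [5, 11])      # 37, i.e. -18 (mod 55)
b55, _ = crt([b5, b11], [5, 11])      # 9
print(rational_reconstruction(a55, 55), rational_reconstruction(b55, 55))    # (1, 3) and (-1, 6)

a715, _ = crt([a5, a11, a13], [5, 11, 13])   # 477, i.e. -238 (mod 715)
b715, _ = crt([b5, b11, b13], [5, 11, 13])   # 614, i.e. -101 (mod 715)
print(rational_reconstruction(a715, 715), rational_reconstruction(b715, 715))  # (1, 3) and (8, 7)
```

With only q = 5 and 11 the modulus 55 is not yet larger than 8² + 7², so b is lifted to the wrong fraction −1/6; adding q = 13 pushes the modulus past that bound and both coefficients are recovered correctly, as Corollary 14 predicts.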
The details for this example are as follows. The primes q = 2, 7 are bad because they divide denominators of fractions defining the problem. The image for q = 3 is not even a reduced ring, so probably should be avoided as well. δ^(0) = 7x − 8, and q = 7 is already to be avoided.
The presentation found (but not minimized) is then Q[y; x]/⟨y² − (3/2)x⟩, with inclusion map defined by ψ(y) = y(x − 8/7). [The minimized presentation here would have been just the polynomial ring Q[y], with x = (2/3)y² and y = y((2/3)y² − 8/7) both unnecessary except for defining the inclusion.]

It is envisioned that the code and the relevant examples relative to this paper on the website http://www.dms.auburn.edu/~leonada will be updated as various packages change for the better. The code for the Qth-power algorithm in positive characteristic and the extra code to extend it to characteristic 0 for this paper are both written in Magma and in Macaulay2 and are available from the author.
Since there is only one free variable in this example, Magma's IntegralClosure gives an answer. At least this necessarily gives a P-module basis and an answer over Q instead of Z. But there is obviously no way to give weights, and the presentation is only implicit. | 2013-01-25T17:40:34.000Z | 2013-01-25T00:00:00.000 | {
"year": 2013,
"sha1": "2838bf009722d82fba13790b56dcb943ec4f9c6a",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "f1d9bcdda344a3ee4d480b08a8effb4d4bbc1f7f",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
11715426 | pes2o/s2orc | v3-fos-license | Impaired degradation of WNK1 and WNK4 kinases causes PHAII in mutant KLHL3 knock-in mice
Pseudohypoaldosteronism type II (PHAII) is a hereditary disease characterized by salt-sensitive hypertension, hyperkalemia and metabolic acidosis, and genes encoding with-no-lysine kinase 1 (WNK1) and WNK4 kinases are known to be responsible. Recently, Kelch-like 3 (KLHL3) and Cullin3, components of the KLHL3-Cullin3 E3 ligase, were newly identified as responsible for PHAII. We have reported that WNK4 is the substrate of KLHL3-Cullin3 E3 ligase-mediated ubiquitination. However, WNK1 and the Na–Cl cotransporter (NCC) were also reported to be substrates of the KLHL3-Cullin3 E3 ligase by other groups. Therefore, it remains unclear which molecule is the target(s) of KLHL3. To investigate the pathogenesis of PHAII caused by KLHL3 mutation, we generated and analyzed KLHL3 R528H/+ knock-in mice. KLHL3 R528H/+ knock-in mice exhibited salt-sensitive hypertension, hyperkalemia and metabolic acidosis. Moreover, the phosphorylation of NCC was increased in the KLHL3 R528H/+ mouse kidney, indicating that the KLHL3 R528H/+ knock-in mouse is an ideal mouse model of PHAII. Interestingly, the protein expression of both WNK1 and WNK4 was significantly increased in the KLHL3 R528H/+ mouse kidney, confirming that increases in these WNK kinases activated the WNK-OSR1/SPAK-NCC phosphorylation cascade in KLHL3 R528H/+ knock-in mice. To examine whether mutant KLHL3 R528H can interact with WNK kinases, we measured the binding of TAMRA-labeled WNK1 and WNK4 peptides to full-length KLHL3 using fluorescence correlation spectroscopy, and found that neither WNK1 nor WNK4 bound to mutant KLHL3 R528H. Thus, we found that increased protein expression levels of WNK1 and WNK4 kinases cause PHAII by the KLHL3 R528H mutation due to impaired KLHL3-Cullin3-mediated ubiquitination.
INTRODUCTION
Pseudohypoaldosteronism type II (PHAII) is a hereditary disease characterized by salt-sensitive hypertension, hyperkalemia, and metabolic acidosis (1,2). Mutations in the with-no-lysine kinase 1 (WNK1) and WNK4 genes were reported to be responsible for PHAII (3). It was previously demonstrated that the WNK kinase family phosphorylates and activates oxidative stress-responsive kinase 1 (OSR1) and STE20/SPS1-related proline/alanine-rich kinase (SPAK) (4,5), and that activated OSR1/SPAK kinases could phosphorylate and activate the Na-Cl cotransporter (NCC), constituting the WNK-OSR1/SPAK-NCC phosphorylation cascade. It has been reported that KLHL3 interacts with Cullin3 and WNK4, induces WNK4 ubiquitination and reduces WNK4 protein levels in cultured cells and Xenopus laevis oocytes (21)(22)(23)(24). Interestingly, it was also reported that WNK1 could be a substrate of KLHL3-Cullin3 E3 ubiquitin ligase (21,24). Moreover, another group reported that KLHL3 was able to bind to NCC and regulate its intracellular localization in cultured cells (16). Therefore, it remains unclear which molecule involved in the pathogenesis of PHAII is the in vivo target of KLHL3. In addition, the above experiments were performed in cultured cells. Thus, it is necessary to clarify the role of KLHL3 mutation in PHAII pathogenesis in vivo.
In this study, to answer these questions, we generated KLHL3 R528H/+ knock-in mice that carry the same mutation as autosomal dominant type PHAII patients. This KLHL3 R528H/+ knock-in PHAII model mouse revealed that increased protein expression levels of WNK1 and WNK4 kinases, due to impaired binding of KLHL3 with WNK kinases, cause PHAII in vivo. These results also indicated that both WNK1 and WNK4 are physiologically regulated by KLHL3-Cullin3-mediated ubiquitination in vivo.
RESULTS
Generation of KLHL3 R528H/+ knock-in mice
KLHL3 R528H/+ knock-in mice were generated using homologous recombination in Baltha1 embryonic stem (ES) cells to create a mutant allele (25). Exon 15 of the Klhl3 gene was replaced by a cassette expressing a neomycin selective marker flanked by loxP sites, which was followed by the mutant exon 15 (R528H) (Fig. 1A). Recombinant ES cell clones were injected into morulae to generate chimeric mice. The neo cassette was deleted by crossing the mutant KLHL3 flox/+ mice with CAG promoter-Cre transgenic mice. Successful generation of KLHL3 R528H/+ knock-in mice was confirmed by genomic sequencing (Fig. 1B-E). In addition to the generation of KLHL3 R528H/+ heterozygous mice, we generated KLHL3 R528H/R528H homozygous mice to more readily detect the pathological effects of mutant KLHL3 R528H.
PHAII phenotypes of KLHL3 R528H/+ knock-in mice
There were no significant differences in body weight and physical appearance between KLHL3 R528H/+ and wild-type mice. To confirm the KLHL3 R528H/+ mouse as an accurate model of PHAII, we measured systolic blood pressure of mice fed a normal-salt diet. The systolic blood pressure in the KLHL3 R528H/+ mice fed a normal-salt diet did not differ from that of wild-type mice (Table 1). Since PHAII shows salt-sensitive hypertension, we then measured blood pressure in mice fed a high-salt diet, which revealed that a high-salt diet produced significantly higher systolic blood pressure in KLHL3 R528H/+ mice compared with wild-type mice (133.5 ± 1.6 mmHg versus 120.1 ± 5.1 mmHg, respectively; n = 5 and 4, P < 0.05). Moreover, as shown in Table 2, KLHL3 R528H/+ mice exhibited hyperkalemia and metabolic acidosis similar to PHAII patients. These data clearly indicate that the KLHL3 R528H/+ mouse is an ideal model of PHAII caused by KLHL3 mutation. The severity of hyperkalemia and metabolic acidosis was not changed under a high-salt diet (Supplementary Material, Fig. S1). We also performed blood pressure measurement and analysis of blood biochemical characteristics of KLHL3 R528H/R528H homozygous knock-in mice (Tables 1 and 2). Although KLHL3 R528H/R528H mice also exhibited salt-sensitive hypertension, hyperkalemia and metabolic acidosis, the blood pressure and blood biochemistries did not significantly differ from those of KLHL3 R528H/+ mice.
Increased protein expression levels of WNK1 and WNK4 in KLHL3 R528H/+ mouse kidney
To investigate the pathogenesis of PHAII caused by the R528H mutation of KLHL3 in vivo, we examined the protein expression and phosphorylation of molecules constituting the WNK signaling pathway. As shown in Figure 2A and B, protein expression levels of WNK1 and WNK4 were significantly increased 1.8- and 1.4-fold, respectively, in the kidney of KLHL3 R528H/+ mice compared with those of wild-type mice. Accordingly, phosphorylation of OSR1, SPAK and NCC was also increased in KLHL3 R528H/+ mice. The KLHL3 R528H/R528H homozygous mouse showed obvious increases of WNK1 and WNK4 protein levels (6.9- and 2.4-fold, respectively, compared with wild-type mice) and increased phosphorylation of OSR1 and SPAK. However, we also found that the protein level and the phosphorylation status of NCC in KLHL3 R528H/R528H homozygous knock-in mice were not significantly increased compared with those in KLHL3 R528H/+ heterozygous knock-in mice, suggesting that the levels of increased WNK1 and WNK4 in the KLHL3 R528H/+ heterozygous knock-in mice might be high enough to fully phosphorylate and activate NCC. Considering that constitutive activation of NCC is the cause of PHAII, this saturated phosphorylation status of NCC could explain why the blood pressure and blood chemistries in KLHL3 R528H/R528H homozygous knock-in mice did not differ from those of KLHL3 R528H/+ mice (Tables 1 and 2).
In addition, we confirmed that mRNA levels of WNK1 and WNK4 were not increased in the KLHL3 R528H/+ heterozygous mouse kidney (Fig. 2C), indicating that the increased protein levels of WNK1 and WNK4 were due to impaired degradation rather than transcriptional activation. To confirm that protein expression levels of WNK1 and WNK4 were increased in distal convoluted tubules (DCTs) where NCC is present, we performed double immunofluorescence of the mouse kidney. As shown in Figure 3A and B, most of the KLHL3, WNK1 and WNK4 signals were colocalized with NCC, and signal intensities of WNK1 and WNK4 at DCT were apparently higher in the KLHL3 R528H/+ mouse kidney. Considering that both WNK1 and WNK4 transgenic mice were reported to exhibit activation of the WNK-OSR1/SPAK-NCC phosphorylation cascade (9,21), these data clearly indicated that the essential pathogenesis of PHAII caused by KLHL3 mutation is due to increased WNK1 and WNK4 in DCT, leading to activation of the WNK-OSR1/SPAK-NCC phosphorylation signaling cascade.
Defective binding between the acidic motif of WNK1/WNK4 and mutant KLHL3 R528H
We had previously reported that KLHL3 mutation in the Kelch domain decreased its binding to the acidic domain of WNK4, leading to impaired ubiquitination and reduced WNK4 protein levels in HEK 293 cells (21). To confirm that the increased protein levels of WNK1 and WNK4 in the KLHL3 R528H/+ mouse kidney were caused by impaired binding between WNK kinases and the mutant KLHL3 R528H, we measured the diffusion time of the TAMRA-labeled acidic motif of WNK1 or WNK4 peptide using fluorescence correlation spectroscopy (FCS) in the presence of different concentrations of GST-fusion proteins of wild-type and mutant KLHL3 R528H; the binding of KLHL3 to these peptides is observed as increased diffusion time of the fluorescent peptide (21,26,27). Wild-type KLHL3 increased the diffusion time of the TAMRA-labeled WNK1 and WNK4 peptides, confirming that wild-type KLHL3 can interact with the acidic motif of WNK1 as well as that of WNK4 (Fig 4). On the other hand, mutant KLHL3 R528H did not increase the diffusion time of either peptide, indicating that it cannot bind the acidic motif of WNK1 or WNK4.
Regulation of epithelial Na+ channels and ROMK in KLHL3 R528H/+ mice
We investigated epithelial Na+ channels (ENaC) and renal outer medullary K+ channels (ROMK), two important channels for Na+ reabsorption and K+ secretion in cortical collecting ducts (CCD), in KLHL3 R528H/+ mice. As shown in Figure 5, although KLHL3 R528H/+ mice did not show a significant change in the protein levels of the total ENaC α subunit (85 kDa) compared with wild-type mice, the levels of the cleaved ENaC α subunit (35 kDa) were increased in KLHL3 R528H/+ mice. The level of the ENaC β subunit was also increased in KLHL3 R528H/+ mice. Similar to the ENaC α subunit, KLHL3 R528H/+ mice showed no significant change in the total protein level of the ENaC γ subunit (85 kDa). However, a significant increase of the cleaved ENaC γ subunit (70 kDa) was found in the KLHL3 R528H/+ mouse kidney. Similar to WNK4 D561A/+ knock-in mice (7), these results indicate that ENaC is activated in the KLHL3 R528H/+ mouse kidney. On the other hand, KLHL3 R528H/+ mice did not show a significant difference in protein levels of ROMK in immunoblot analysis of whole kidney (Fig 5).
DISCUSSION
Investigation of the pathophysiology of PHAII is extremely important, not only to increase knowledge of this rare inherited disease, but also for the discovery of novel mechanisms of salt handling in the kidney. Although KLHL3 was identified as responsible for PHAII, several molecules have recently been reported as substrates that interact with KLHL3 in cultured cells. It was demonstrated that the loss of interaction between KLHL3 and WNK4 induced impaired ubiquitination of WNK4 and increased protein levels of WNK4 (21)(22)(23)(24). In addition, WNK1 was also reported to bind to KLHL3 (21,24). In contrast, Louis-Dit-Picard et al. (16) reported that KLHL3 is responsible for direct regulation of NCC membrane expression. Therefore, the pathophysiological role of KLHL3 in PHAII required investigation in the kidney in vivo. For this purpose, we generated KLHL3 R528H/+ knock-in mice that carry a mutation found in human PHAII patients (15,16). This KLHL3 R528H/+ knock-in mouse exhibited salt-sensitive hypertension, hyperkalemia and metabolic acidosis, which are characteristic symptoms of PHAII patients. Moreover, increased NCC phosphorylation was also observed. These results clearly confirmed that our KLHL3 R528H/+ mouse is an ideal model of mutant KLHL3-induced PHAII.
To investigate the mechanisms of PHAII in KLHL3 R528H/+ mice, we assessed the protein levels and phosphorylation status of the WNK-OSR1/SPAK-NCC phosphorylation signaling cascade. Interestingly, KLHL3 R528H/+ heterozygous mice showed increased protein levels of both WNK1 and WNK4 in the kidney, and KLHL3 R528H/R528H homozygous mice more clearly demonstrated increased WNK1 and WNK4 protein levels. We further demonstrated using an immunofluorescence assay that protein levels of both WNK1 and WNK4 are increased in DCT in KLHL3 R528H/+ mice. We also confirmed by FCS assay that mutant KLHL3 R528H did not bind to the acidic motif of either WNK1 or WNK4. Considering these observations, we have demonstrated for the first time that the essential mechanism of mutant KLHL3-induced PHAII involves impaired ubiquitination and increased protein levels of both WNK1 and WNK4 in DCT in vivo (Fig. 6). Importantly, these facts suggest that Cullin3-KLHL3 E3 ligase complexes physiologically regulate the WNK-OSR1/SPAK-NCC phosphorylation cascade in vivo, indicating that KLHL3 plays an important role in the physiological mechanisms of sodium handling in the kidney. To date, physiological regulators of the WNK-OSR1/SPAK-NCC phosphorylation signal cascade, such as insulin (28-31), angiotensin II (10,(32)(33)(34)) and aldosterone (32,(35)(36)(37)), have been reported. However, the regulatory mechanism of these factors in WNK signaling remains unknown. The novel KLHL3-mediated regulation of WNK signals might be involved in these mechanisms. Further investigation is required to clarify this issue.
It was reported that KLHL3 was able to bind to NCC and regulate its intracellular localization in cultured cells (16). However, WNK and OSR1/SPAK were activated in KLHL3 R528H/+ mice, but were not down-regulated by increased NCC phosphorylation. Moreover, it was reported that simple over-expression of NCC did not produce the PHAII phenotype in NCC transgenic mice (38), indicating that increased phosphorylation, but not increased protein expression, of NCC is required for the PHAII phenotype in vivo. Considering these in vivo observations, the essential pathogenesis of PHAII caused by KLHL3 mutation is not due to impaired ubiquitination or regulation of NCC, but to the impaired ubiquitination of WNK kinases. This discrepancy could be due to the difference in experimental systems, i.e. a genetically engineered mouse model versus cultured cells.
Boyden et al. (15) reported that symptoms of human PHAII patients caused by KLHL3 mutation are more severe than those caused by mutation in WNK1 or WNK4. WNK4 D561A/+ knock-in (7) and WNK1 +/FHHt (9) PHAII mouse models exhibited increases of only a single kind of WNK kinase that carries a mutation. The increased severity of KLHL3 mutation-induced PHAII in human patients may be explained by the physiological difference between the increase of 'both WNK1 and WNK4 kinases' and 'a single WNK kinase'. Accumulation of both WNK1 and WNK4 kinases by KLHL3 mutation could result in further increases in phosphorylation of downstream components compared with the accumulation of a single WNK kinase (Fig. 6). However, the KLHL3 R528H/+ mouse shows a less severe phenotype than other mouse models of PHAII previously reported (7,9), which could be explained by the difference of origin of the ES cells used for the generation of the knock-in mice, i.e. the difference of genetic background of these mice. In this study we utilized Baltha1 ES cells derived from the C57BL/6 mouse, which is a one-renin-gene mouse strain. However, the other mouse models of PHAII were established with ES cells derived from two-renin-gene mouse strains, 129Sv (7,9). It was reported that 129Sv mice showed increased blood pressure response to salt intake, compared with C57BL/6 mice (39). Further investigation to compare the phenotypes of the three PHAII mouse models under the same genetic background will be required.
Finally, we discuss the involvement of ENaC and ROMK in PHAII caused by KLHL3 mutation. ENaC was activated in KLHL3 R528H/+ heterozygous knock-in mice, similar to WNK4 D561A/+ mice (7). However, in contrast to the KLHL3 R528H/+ and WNK4 D561A/+ mice, WNK1 +/FHHt mice are reported to show no activation of ENaC (9). This difference might be explained by whether WNK4 is increased or not. On the other hand, in the case of ROMK, we did not find differences in ROMK expression in the kidney between wild-type and KLHL3 R528H/+ mice. However, we were unable to perform microdissection of kidney, and the available anti-ROMK antibody detects all ROMK variants. Therefore, to clarify the in vivo effect of KLHL3 on ROMK in CCD, further investigation is required.
In summary, we established and analyzed KLHL3 R528H/+ knock-in mice, and clarified the essential pathogenesis of mutant KLHL3-induced PHAII. Mutant KLHL3 causes accumulation of both WNK1 and WNK4 at DCT due to loss of binding ability of KLHL3 to WNK kinases. This regulation of the WNK-OSR1/SPAK-NCC phosphorylation cascade by Cullin3-KLHL3 E3 ligase complexes plays important physiological and pathophysiological roles for sodium handling in the kidney in vivo.
Generation of KLHL3 R528H/+ knock-in mice
To generate KLHL3 R528H/+ knock-in mice, the targeting vector was prepared using the BAC recombineering system (40). The point mutation (R528H) was introduced into exon 15 of the targeting vector by the galK selection system (41). The targeting vector was then transfected into Baltha1 ES cells (25), which are derived from C57BL/6 mice, by electroporation as previously reported (42). After selection with 150 µg/ml G418 and 2 µM ganciclovir, targeted ES clones were selected by PCR with a sense primer F1 (5′-ATA GCA GAG CCG TCT CTG TG-3′) located within the neo cassette and an antisense primer R1 (5′-ACT TGT GTA GCG CCA AGT GC-3′) located following exon 15, Southern blotting, and sequencing of the mutation site. Selected ES clones were injected into C57BL/6 morulae. Chimeric males were bred with C57BL/6 females to produce mutant KLHL3 flox/+ (R528H) mice, and the neo cassette was then deleted by crossing these mutant KLHL3 flox/+ mice with transgenic mice expressing Cre recombinase under the control of the CAG promoter (43). Offspring were genotyped by PCR with sense primer F2 (5′-CAC AGG GTA ACT GGG GCT GGT-3′) and antisense primer R2 (5′-GGA AGA ACT GTG ACC CCC GC-3′) flanking the remaining loxP site and exon 15.
Animals
Studies were performed using 10-week-old mice that had free access to food and water. Mice of each genotype were placed on a normal-salt diet (NaCl 0.4% w/w) or a high-salt diet (8.0% w/w) for 1 week. All experiments were performed 1 week after dietary change. The Animal Care and Use Committee of Tokyo Medical and Dental University approved the experimental protocol.
Measurement of blood pressure
We measured blood pressure by using a radiotelemetric method in which a blood pressure transducer (Data Sciences International, St. Paul, MN, USA) was inserted into the left carotid artery. Seven days after implantation, each mouse was housed individually in a standard cage on a receiver under a 12 h light-dark cycle. Systolic and diastolic blood pressure was recorded every minute via radiotelemetry. For each mouse, we measured blood pressure values for more than 3 consecutive days and calculated the mean ± SE of all values. These experiments were performed under a normal-salt (NaCl 0.4% w/w) or high-salt (8.0% w/w) diet.
Blood analysis
Blood was drawn from the retro-orbital sinus under light ether anesthesia. Serum data were determined using the i-STAT system (FUSO Pharmaceutical Industries, Osaka, Japan).
Quantitative PCR analysis
Quantitative PCR analysis was performed on kidney as previously described (46). Total RNA from mouse kidneys was extracted using TRIzol reagent (Invitrogen, Carlsbad, CA, USA), according to the manufacturer's instructions. Total RNA was reverse transcribed using Omniscript reverse transcriptase (Qiagen, Hilden, Germany). Quantitative real-time PCR by Thermal Cycler Dice (Takara Bio, Otsu, Japan) was performed using the primer sets shown in a previous report (47).
Fluorescence correlation spectroscopy
Fluorescent TAMRA-labeled WNK1 and WNK4 peptides covering the acidic motif were prepared (Hokkaido System Science Co., Ltd., Hokkaido, Japan). Human full-length KLHL3 (wild-type and R528H mutant) was cloned into pGEX6P-1 vectors. The recombinant GST-fusion KLHL3 protein expressed in BL21 Escherichia coli cells was purified using glutathione Sepharose beads. The TAMRA-labeled WNK peptides were incubated at room temperature for 30 min with different concentrations of GST-KLHL3 (0–2 µM) in 1× PBS containing 0.05% Tween 20 (reaction buffer). FCS measurements using the FluoroPoint-light analytical system (Olympus, Tokyo, Japan) were performed as previously described (26,27). The measurements were repeated five times per sample.
Statistics
Data are presented as means ± SE. A Student's t-test was used for comparisons between groups. ANOVA and Tukey's test were used for multiple comparisons. | 2017-04-15T07:55:40.672Z | 2014-10-01T00:00:00.000 | {
"year": 2014,
"sha1": "3e15e397851761bfc80c07a446b775978955648d",
"oa_license": "CCBY",
"oa_url": "https://academic.oup.com/hmg/article-pdf/23/19/5052/17260600/ddu217.pdf",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "e2ba21a090b70f2ce2304d9e61319ecaa2d61f0e",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
11114783 | pes2o/s2orc | v3-fos-license | A New Taxon of Basal Ceratopsian from China and the Early Evolution of Ceratopsia
Ceratopsia is one of the best studied herbivorous ornithischian clades, but the early evolution of Ceratopsia, including the placement of Psittacosaurus, is still controversial and unclear. Here, we report a second basal ceratopsian, Hualianceratops wucaiwanensis gen. et sp. nov., from the Upper Jurassic (Oxfordian) Shishugou Formation of the Junggar Basin, northwestern China. This new taxon is characterized by a prominent caudodorsal process on the subtemporal ramus of the jugal, a robust quadrate with an expansive quadratojugal facet, a prominent notch near the ventral region of the quadrate, a deep and short dentary, and strongly rugose texturing on the lateral surface of the dentary. Hualianceratops shares several derived characters with both Psittacosaurus and the basal ceratopsians Yinlong, Chaoyangsaurus, and Xuanhuaceratops. A new comprehensive phylogeny of ceratopsians weakly supports both Yinlong and Hualianceratops as chaoyangsaurids (along with Chaoyangsaurus and Xuanhuaceratops), as well as the monophyly of Chaoyangsauridae + Psittacosaurus. This analysis also weakly supports the novel hypothesis that Chaoyangsauridae + Psittacosaurus is the sister group to the rest of Neoceratopsia, suggesting a basal split between these clades before the Late Jurassic. This phylogeny and the earliest Late Jurassic age of Yinlong and Hualianceratops imply that at least five ceratopsian lineages (Yinlong, Hualianceratops, Chaoyangsaurus + Xuanhuaceratops, Psittacosaurus, Neoceratopsia) were present at the beginning of the Late Jurassic.
Phylogenetic analysis
To assess the systematic position of Hualianceratops, a new character list and matrix were compiled and analyzed (S1 and S2 Files). Our data matrix was mainly based on those of Ryan et al. [15] and Farke et al. [9], which were in turn modified from the matrices of Makovicky and Norell [8], Makovicky [16], Xu et al. [17], and Lee et al. [18]. However, these matrices contain few characters germane to the basalmost ceratopsians. To rectify this situation, characters were added from the matrices of Xu et al. [1] and Butler et al. [19]. Additionally, 19 new characters were added based on personal observation of basal ceratopsians, and four new taxa (Aquilops, Auroraceratops, Psittacosaurus lujiatunensis, Yinlong wucaiwanensis) were coded into the matrix. The character codings for Auroraceratops and Aquilops were based on Morschhauser [11] and Farke et al. [9], respectively. Psittacosaurus lujiatunensis was coded from Zhou et al. [20], as well as from the complete adult skull IVPP V12617 [21]. The codings for Yinlong, Archaeoceratops, Liaoceratops, Xuanhuaceratops, and Chaoyangsaurus were also modified where needed from previous analyses based on first-hand observation (see S1 for details).
The final data matrix consists of 210 characters scored for 27 ingroup taxa. Ten outgroups were chosen to accurately polarize characters and determine the composition of Ceratopsia. The outgroup taxa include the basal ornithischians Lesothosaurus, Heterodontosaurus, and Agilisaurus, the non-iguanodontian ornithopods Haya, Orodromeus and Jeholosaurus, and the pachycephalosaurians Wannanosaurus, Goyocephale, Homalocephale and Stegoceras. The matrix was analyzed using TNT [22], and all characters were treated as equally weighted. Nine characters (19, 20, 70, 98, 128, 146, 171, 174, 178; i.e., characters 18, 19, 69, 97, 127, 145, 170, 173 and 177 in TNT's character numbering, which starts from 0) were treated as ordered (additive). The analysis was conducted with the maximum number of trees set to 99,999 and zero-length branches collapsed, using a heuristic search with 1000 replicates of Tree Bisection and Reconnection (TBR) holding 10 trees per replicate, followed by tree swapping using TBR on the trees in memory. Standard bootstrap values (absolute frequencies) were calculated using a traditional heuristic search (100 TBR replicates per bootstrap replicate, 10 trees saved per TBR search) with 1000 bootstrap replications. Bremer supports were calculated by running the script "Bremer.run", checked using heuristic searches saving suboptimal trees up to 8 steps longer and running "Bremer supports" in TNT, and then repeated with 10 different random seeds.
Nomenclatural Acts
The electronic edition of this article conforms to the requirements of the amended International Code of Zoological Nomenclature, and hence the new names contained herein are available under that Code from the electronic edition of this article. This published work and the nomenclatural acts it contains have been registered in ZooBank, the online registration system for the ICZN. The ZooBank LSIDs (Life Science Identifiers) can be resolved and the associated information viewed through any standard web browser by appending the LSID to the prefix "http://zoobank.org/". The LSID for this publication is: urn:lsid:zoobank.org:pub:AA4CAF05-484C-40E7-995D-9DE61D488111. The electronic edition of this work was published in a journal with an ISSN (1932-6203), and has been archived and is available from the following digital repositories: PubMed Central (http://www.ncbi.nlm.nih.gov/pmc), LOCKSS (http://www.lockss.org).
Systematic Paleontology
Definition. A stem-based taxon defined as all ceratopsians more closely related to Chaoyangsaurus youngi than to Psittacosaurus mongoliensis [26] or Triceratops horridus [27].
Revised Diagnosis. Chaoyangsaurids may be distinguished from other ceratopsians by the following synapomorphies: semicircular ventral process near the medial face of the mandibular glenoid [3], expanded, flat dorsal surface of the squamosal with a stalked quadrate process, deep sulcus dividing the quadrate condyles, ventral margin of the angular extending laterally to form a ridge with a distinct concavity formed above the ridge, predentary reduced and much shorter than premaxillary oral margin, dorsal and ventral margin of the dentary converged rostrally more than 20% of the depth.
Diagnosis. A basal ceratopsian distinguished from other chaoyangsaurids by five autapomorphies: a prominent caudodorsal process on the infratemporal ramus of the jugal, a robust quadrate with an expansive quadratojugal facet, a transversely expanded rostral margin of the quadrate above the quadratojugal facet, a prominent notch near the ventral region of the quadrate, and a deep and short dentary. Unique combination of character states includes: strongly rugose sculpturing present on the lateral surface of the dentary (distinct from Chaoyangsaurus and Yinlong, but present in Xuanhuaceratops), transversely expanded ventral surface of the infratemporal ramus of the jugal (distinct from Yinlong, but present in Chaoyangsaurus), ventral grooves present on the quadratojugal for articulating with the caudal end of the jugal (distinct from Xuanhuaceratops, but present in Chaoyangsaurus and Yinlong).
Description and Comparisons
Skull
The skull is estimated to be 25 cm long (from the rostral end to the quadrate condyles) based on the preserved materials. It is significantly larger and more robust than the Yinlong downsi holotype (IVPP V14530), which measures 18 cm long, and slightly smaller than the largest Yinlong skull (IVPP V18637) based on the jugal, quadrate and dentary (Table 1). The preorbital region is missing, but the ventral margin of a large antorbital fossa is preserved. The orbit is circular and seems to be the largest opening of the skull. The infratemporal fenestra is deep and subelliptical in outline, and slightly narrower than the orbit. The jugal-postorbital bar is wide compared to that of Psittacosaurus [29], but still relatively narrower than that of other neoceratopsians, such as Liaoceratops [4]. The textured ornamentation is well developed and present in most of the preserved skull bones, including the jugal, quadratojugal, postorbital, squamosal, dentary, surangular, and angular. It is similar in development to that of Xuanhuaceratops [3], and more developed than in other basal ceratopsians. Textured ornamentation is usually present on the subtemporal ramus of the jugal, postorbital, and angular in chaoyangsaurids and the basal neoceratopsians Liaoceratops, Archaeoceratops, and Auroraceratops, but is absent in Aquilops and more derived neoceratopsians [9]. These ornamentations are usually textured and grooved, and differ from the nodular-like ornamentation of pachycephalosaurs [30].
Maxilla
The right maxilla is missing its rostral region as well as part of its dorsal and caudal margins (Figs 1 and 2). Most of the tooth-bearing portion is preserved although most crowns are missing; at least nine alveoli are preserved (Fig 2). The antorbital fossa is partially preserved.
The toothrow is gently bowed medially, as in all ceratopsians and most ornithischians. The maxilla is rostrocaudally elongate, and its caudal end is overlapped almost entirely by the infraorbital ramus of the jugal. The toothrow is medially inset to form a deep buccal emargination, which is deeper than that of Yinlong [1,14]. The emargination is well-defined caudally, but quickly becomes less distinct rostrally. At least six large, subcircular nutrient foramina form a line along the lateral surface of the maxilla, parallel to the alveolar margin and immediately below the buccal emargination.
The dorsal margin of the preserved maxilla is smooth and slightly curved where it forms the well-defined ventral edge of the large antorbital fossa. The antorbital fossa has a relatively straight caudal margin (formed by the jugal) and a rostroventrally sloping ventral margin. The shape of the complete antorbital fossa cannot be determined. The ventral edge of the antorbital fossa is significantly lower than that of the orbit, as in other ceratopsians. Caudally, the maxilla tapers and is broadly overlapped by the ventromedial surface of the infraorbital ramus of the jugal. The medial surface of the maxilla is rugose and flattened, and seems to have been compressed during preservation. The articulations with the lacrimal, premaxilla, and nasal are not preserved. In overall morphology, the preserved portion of the maxilla is similar to that of Yinlong.
Jugal
The right jugal and the caudal portion of the left jugal are well preserved (Figs 1 and 3). The infraorbital ramus of the right jugal is separated from the main body and attached to the maxilla (Fig 2). The depths of the suborbital and subtemporal rami are relatively equal, as in most basal ceratopsians; the infratemporal ramus is substantially deeper in more derived neoceratopsians (e.g., Liaoceratops [4], Yamaceratops [8]). The infraorbital ramus is much shorter than the infratemporal ramus, as in Psittacosaurus and derived neoceratopsians such as Protoceratops, but in contrast to the basal neoceratopsians Liaoceratops, Archaeoceratops and Yamaceratops which have relatively shorter infratemporal rami [4,5,8]. The lateral surface of the entire jugal is rugose and strongly textured, as in Xuanhuaceratops [3], Archaeoceratops [5] and Auroraceratops [11]. In Yinlong, the textured surface is restricted to the postorbital and infratemporal rami and absent on the infraorbital ramus [14]. This feature is absent or weak in Chaoyangsaurus [2], Psittacosaurus [29], Liaoceratops [4] and Aquilops [9]. Large oval nodules are present on the jugal in Yinlong [14], but are absent in Hualianceratops.
The infraorbital ramus is transversely thick, tapers rostrally, and forms the entire caudal margin of the antorbital fossa. The dorsal margin of the ramus is smooth and curved rostrodorsally where it forms the ventral margin of the orbit. The body of the infraorbital ramus thins mediolaterally to the narrow orbital margin. The medial surface of the infraorbital ramus is not exposed.
The tall postorbital ramus of the jugal is rostrocaudally wide and contributes to a wide jugal-postorbital bar, as in basal neoceratopsians (e.g., Liaoceratops [4]; Archaeoceratops [5]). In Psittacosaurus the jugal-postorbital bar is much narrower [31]. In medial view, the distal end of the postorbital process of the jugal tapers caudodorsally, reaching to the medioventral surface of the main body of the postorbital. In lateral view, it is broadly overlapped by the jugal process of the postorbital so that most of the postorbital ramus of the jugal is concealed, as in Yinlong and Psittacosaurus [31], but unlike the situation in Liaoceratops and Archaeoceratops where the exposed jugal occupies the rostral half of the jugal-postorbital bar [4,5].
The robust infratemporal ramus is strongly arched laterally like that of Yinlong, Chaoyangsaurus and Psittacosaurus, whereas it is nearly straight or just slightly curved in basal neoceratopsians, such as Liaoceratops, Archaeoceratops and Auroraceratops [4,5,11] (Fig 4). It has a subtriangular cross section with a thin dorsal edge and thickened ventral edge (Figs 1 and 3). The surface of the ventral margin is textured and flattened in both jugals. This is similar to the situation in Chaoyangsaurus (IGCAGS V371) and Psittacosaurus (e.g., IVPP V12617), but unlike Yinlong (e.g., IVPP V14530) and other basal neoceratopsians which have thin ventral edges on their jugal infratemporal rami. There is no trace of an epijugal or jugal horn, as in Yinlong and Chaoyangsaurus [32].
The caudal end of the infratemporal ramus bifurcates into a caudomedial process and a caudodorsal process (Figs 1 and 3). The larger caudomedial process tapers and curves medially, reflecting the lateral arching of the entire ramus. The dorsal margin of its distal end is thin and fits into a deep sulcus on the lateral surface of the quadratojugal, as in Yinlong [14]. The smaller caudodorsal process extends nearly directly dorsally from the infratemporal ramus. It is transversely compressed, and triangular in lateral view (Figs 1 and 3); its rostral margin forms the caudoventral margin of the infratemporal fenestra. The caudodorsal process fits onto a facet on the lateral surface of the rostral quadratojugal, nearly eliminating the quadratojugal from the margin of the infratemporal fenestra. This differs from the jugal of Yinlong, where the caudodorsal process is relatively smaller and extends caudally along the inferior margin of the infratemporal fenestra, allowing the quadratojugal a larger contribution to the infratemporal fenestra margin [1]. In Psittacosaurus, the caudodorsal process is nearly as long as the caudomedial process, but as in Yinlong extends caudally and does not curve dorsally along the rear of the infratemporal fenestra (Fig 4C). In most neoceratopsians the caudodorsal process is lost (e.g., Liaoceratops) (Fig 4). A prominent node is present on the ventral surface of the left infratemporal ramus of Hualianceratops, but is absent from the right jugal (Figs 1 and 3); this feature may vary among individuals.
Quadratojugal
Both of the quadratojugals are partially preserved (Figs 3 and 5). The rostral part of the left quadratojugal is preserved and articulated with the jugal and quadrate (Fig 3). The caudoventral part of the right quadratojugal is also preserved and articulated with the quadrate (Fig 5). The preserved quadratojugal is mediolaterally compressed and is ornamented on its exposed lateral surface.
In lateral view, the rostral margin of the quadratojugal follows the curve of the infratemporal fenestra, but is nearly completely covered laterally by the jugal. Only a small sliver of rostrodorsal quadratojugal was apparently exposed along the margin of the infratemporal fenestra, although poor preservation of the associated quadrate makes this difficult to confirm. Dorsally, the quadratojugal appears to extend onto the rostromedial surface of the quadrate. However, this area is poorly preserved and the quadratojugal is broken along its dorsal margin. The quadratojugal extended distally to reach the lateral quadrate condyles.
The rostral quadratojugal is incised by a deep sulcus for articulation with the caudomedial process of the infratemporal ramus of the jugal, as in Yinlong (Fig 5C and 5F). This feature is very weakly developed in Chaoyangsaurus and Xuanhuaceratops (pers. obs.), and absent in other ceratopsians. The quadratojugal tapers rostrally and extends forward to reach the main body of the jugal, deep to its infratemporal ramus (Fig 3D). The straight ventral margin is thin and oriented horizontally, while the rostrodorsal margin curves along the caudoventral margin of the infratemporal fenestra. The caudoventral margin of the quadratojugal is indented by a large, oval fossa or groove that faces rostrolaterally (Fig 5C). This fossa is adjacent to a large notch in the lateral margin of the quadratojugal wing of the quadrate.
Quadrate
The right quadrate is well preserved, but is missing most of the pterygoid wing (Fig 5). The shaft is robust and rostrocaudally wide. The dorsal portion of the shaft is strongly deflected caudodorsally (Fig 5A), as in Psittacosaurus [33], but unlike that of Yinlong and other neoceratopsians which have relatively straight to slightly curved proximal quadrate shafts [1]. Because only one quadrate is preserved, the possibility that the strong deflection of the quadrate head may be accentuated by or due to distortion cannot be eliminated. The shaft becomes rostrocaudally narrower but maintains a robust rostral margin (about 10.5 mm in transverse width) above the quadratojugal wing as it curves caudally to approach the quadrate head, as in Psittacosaurus (IVPP V12617). This condition differs from that of Yinlong and other basal neoceratopsians which have transversely narrow rostral margins [14]. The pterygoid wing extends rostromedially from the shaft as a deep, thin plate that is broken near its base. Distal to the thickened rostrodorsal margin of the shaft, the broad quadratojugal wing supports an enormous quadratojugal facet whose width is 57% its length (27.8 mm in maximum width, 49 mm in length). This facet occupies half the length of the quadrate shaft, as in the largest individual of Yinlong (IVPP V18637). However, the latter has a much narrower facet (38% width to length; 15 mm in maximum width, 40 mm in length) (Fig 5G). In other known neoceratopsians and Psittacosaurus, the quadratojugal facet occupies less than half the length of the distal quadrate shaft [33]. This indicates an extensive overlap between the quadrate and quadratojugal in Hualianceratops. The caudal margin of the quadratojugal facet is distinct and raised relative to the quadrate shaft. The facet occupies nearly the entire width of the lateral margin of the quadrate shaft, and extends distally nearly to the condyles.
A small, oval foramen pierces through the quadrate shaft caudal to the quadratojugal facet, as in Yinlong and basal neoceratopsians (e.g., Auroraceratops [11]; Liaoceratops, IVPP V12738). Ventrally, the quadrate expands slightly forming the lateral and medial condyles. The medial condyle is just slightly smaller but extends more ventrally than the lateral condyle. The condyles are compressed rostrocaudally and separated by a broad, shallow sulcus. This is different from Yinlong (IVPP V18637), Chaoyangsaurus (IGCAGS V371), and Xuanhuaceratops [3], where the condyles are more rounded and prominent and separated by a deep and narrow intercondylar sulcus (Fig 5H and 5I). In this respect, the condyles of Hualianceratops are more similar to those of other neoceratopsians (e.g., Liaoceratops) and Psittacosaurus (e.g., IVPP V12617).
A prominent V-shaped notch incises the ventral region of the quadratojugal wing of the quadrate (Fig 5C and 5F). This notch is covered laterally by the quadratojugal, but is visible in medial view. A quadrate notch is present in Yinlong (e.g., IVPP V18637) (Fig 5G), but is shallow, incises the rostral margin of the quadratojugal facet, and is positioned slightly above the midlength of the quadratojugal wing (Fig 5G). Quadrate notches (also referred to as paraquadratic notches) such as that found in Hualianceratops and Yinlong are widespread in ornithopods, but are absent/weak or unknown in other ceratopsians [19]. The notch shape in Hualianceratops appears to be unique among ornithischians.
Postorbital
The right postorbital is partially preserved in articulation with the jugal and the postorbital ramus of the squamosal (Fig 1). The articulations with the frontal and parietal are missing. The main body of the postorbital was compressed and flattened mediolaterally during preservation. Its lateral surface retains the same rugose texturing as found on the jugal and quadratojugal.
The jugal process of the postorbital is mediolaterally compressed and tapers rostroventrally. It broadly overlaps a facet on the rostrolateral surface of the postorbital process of the jugal, and forms about half of the caudal margin of the orbit and the rostral half of the broad postorbital-jugal bar.
The caudal margin of the jugal process of the postorbital grades smoothly into the ventral margin of the squamosal process. The squamosal process expands medially into a subtriangular cross section, and tapers caudally. The tapered rostral end of the postorbital process of the squamosal extends nearly to the rostral limit of the infratemporal fenestra and appears to articulate with the dorsomedial aspect of the squamosal process of the postorbital. The squamosal process of the postorbital appears to be relatively smaller than that of Yinlong, suggesting a relatively smaller infratemporal fenestra. The postorbital appears to have formed a portion of the rostrolateral margin of the supratemporal fenestra. The caudal and lateral margins of the postorbital are eroded, and no nodes or nodules can be detected.
Squamosal
The left squamosal is partially preserved (Figs 1 and 6), and is very similar to that of Yinlong in having a flat, expanded dorsum and a constricted, stalked quadrate process. The dorsal surface is flattened and textured, and two large nodes are present along the caudal margin, as in Yinlong. These thick nodes become somewhat dorsoventrally compressed towards the margin of the element. The lateral node occurs at the caudolateral corner of the squamosal and is more robust than the more medial one.
The lateral node extends laterally to overhang the quadrate process at an angle of 120 degrees, whereas the caudal node extends caudodorsally at a right angle to the quadrate process.
Ventrally, the squamosal contracts to form a discrete, stalk-like quadrate process. The cotylus and cotylar processes are broken and absent. In lateral view, a fossa is present on the lateral surface of the quadrate process that is confluent with the caudodorsal margin of the lower temporal fenestra, as occurs in many ornithischians (Fig 6C).
Mandible
The paired mandibles are partially preserved in articulation but are missing most of the postdentary series (Fig 7). The right dentary is compressed and mediodorsally distorted; the left dentary is straight in lateral view although the surface is seriously damaged.
Predentary. The predentary is partially preserved. It is triangular in ventral view, and converges to a narrow rostral margin; it appears to have had a rounded rather than a sharp ventral keel. The caudoventral process is well-preserved and covers most of the mandibular symphysis, although the caudalmost symphysis remains exposed. Caudally the process bifurcates, as in other basal ceratopsians (e.g., Liaoceratops [4]; Psittacosaurus, IVPP V12617), and most ornithopods (e.g., Haya [34] and Changchunsaurus [35]) (Fig 7C). The condition in Yinlong is not known. Both dorsolateral processes are missing. The tip of the predentary is broken below the level of the dorsal margin of the dentary.
Dentary. Both dentaries are preserved. The right one is nearly complete but its oral margin is damaged (Fig 7). Although the entire toothrow is preserved, an accurate tooth count is not possible due to remaining matrix, but at least seven teeth are preserved in the left dentary. The first tooth lies immediately behind the predentary, precluding a diastema.
The dentary is deep and short, and its dorsal and ventral margins converge rostrally. It measures 83.2 mm in length along its ventral margin, and has depths of 26.7 mm (32% length) at the rostral end, 43.3 mm (52% length) at the rear of the toothrow, and 63.3 mm (76% length) at the apex of the coronoid process. In the holotype of Yinlong (IVPP V14530), the dentary has a length of 81.6 mm at the base, just slightly shorter than that of Hualianceratops (Fig 8A and 8C). However, the Yinlong holotype has depths of 19.8 mm (24% length) at the rostral end, 26.2 mm (32% length) at the rear of the tooth row, and 40.4 mm (50% length) at the apex of the coronoid process. Therefore, the dentary is much deeper relative to its length in Hualianceratops than in Yinlong. A relatively deep dentary is a derived feature that occurs in Psittacosaurus [29,33] and some derived neoceratopsians such as Protoceratops [36] and Leptoceratops [15] (Fig 8).
The tooth row is medially inset from the lateral surface of the dentary to form a shallow buccal emargination. The emargination is not sharply defined. A row of nine subcircular nutrient foramina is present immediately dorsal to the emargination. The lateral surface is strongly textured, as in Xuanhuaceratops [3], the basal neoceratopsian Archaeoceratops [5], and pachycephalosaurs [37], but unlike Yinlong and Psittacosaurus [33] which have smooth, untextured dentaries. In ventral view, the rostroventral end of each dentary curves medially to meet its partner at the midline, forming a spout-shaped symphysis as in all ornithischians [19].
A large subround external mandibular fenestra is present at the junction of the dentary, surangular, and angular. The dentary then projects caudodorsally to form the rostral portion of the low coronoid eminence. The dentary meets the surangular along a straight, caudodorsally oriented suture, and the angular along a similarly oriented but shorter suture. The dentary does not project caudally below the angular. On its medial surface, the dentary has an extensive contact with the splenial (see below), as well as a short contact with the rostralmost part of the prearticular.
Surangular. The rostral part of the surangular is preserved with the right mandible ( Fig 7). It is thin and slightly bowed laterally, and forms the rostroventral margin of the external mandibular fenestra. The ventral part of the lateral surface is strongly textured like the dentary. This differs from Yinlong [1], where the surangular lacks external texturing (Fig 8A and 8C). The dorsal edge of the surangular is expanded mediolaterally to form a bar-like dorsal margin that is oriented nearly horizontally in its preserved rostral portion, and reaches to the highest point of the dentary on the coronoid eminence.
Angular. The rostral portion of the right angular is preserved with the right mandible, and forms the rostroventral margin of the external mandibular fenestra. The surface of the angular is strongly textured on both its lateral and ventral surfaces. The ventral portion of the angular expands medially to form a thick and robust ventral mandibular margin. The medial surface of the angular is partially covered by the caudal splenial.
Splenial. Both splenials are exposed in medial view, but are missing their caudalmost ends (Fig 7C and 7D). The plate-like splenial is applied to the medial surface of the dentary, surangular, and angular. The tapered rostral end extends nearly to the dentary symphysis. The ventral margin of the splenial extends caudally, paralleling the ventral margin of the dentary. The dorsal margin expands rapidly caudodorsally until it covers nearly the entire medial dentary at its apex near the coronoid process of the dentary (44 mm deep at this point). A large opening on the dorsal margin has broken margins and is likely due to damage. Behind the apex of the splenial, the caudal margin of the splenial plunges sharply caudoventrally to 22 mm deep at the base of the caudal ramus. This margin also forms the rostral and ventral margins of an oval internal mandibular fenestra. Behind the internal mandibular fenestra, the dorsal and ventral margins of the splenial begin to converge as the element tapers caudally and covers the internal surface of the ventral angular.
Prearticular. A thin sliver of the rostral prearticular is preserved along the medial surface of the mandible (Fig 7C and 7D). The rostral prearticular is dorsoventrally narrow, and extends up the rear of the splenial, terminating close to its apex, to form the dorsal and caudal margin of the internal mandibular fenestra.
Dentition
No premaxillary teeth are preserved. The apical half of an unworn tooth crown is exposed in the right maxilla, approximately in the middle of the toothrow (Fig 2C and 2D). The exposed crown is triangular with four large, tabular denticles along the mesial and distal carinae, flanking a similarly sized apical denticle. Denticular ridges are absent; damage to the surface of the tooth precludes determining whether an apical ridge was present.
Several teeth are present in the left dentary, but all are poorly preserved. They are triangular in lingual view, expanding at the base and tapering to their apex. Weak ridges are present on the lingual surface, as in basal ceratopsians Yinlong and Chaoyangsaurus. No prominent apical ridge is present, in contrast to neoceratopsians (e.g., Archaeoceratops) and Psittacosaurus [31].
Postcranial skeleton
The skull material was found associated with a partial skeleton, although most of this material is in very poor condition and little information can be gleaned at present.
Two co-ossified sacral centra are preserved (Fig 9A and 9B), but more were certainly present in life; one is nearly complete and the other is fragmentary. The centrum length exceeds its width, which is greater than the height. Fragments of the proximal right tibia and fibula are present, but severely compressed and poorly preserved. The shaft of the left fibula is fractured but otherwise well-preserved. It is slender and straight, robust at the proximal end and narrows distally. The articular ends are not preserved.
Part of the left pes is well preserved in articulation and partially prepared from the block ( Fig 9C); it includes the distal ends of metatarsals (MT) I and II, the complete second digit, and parts of the third and fourth digit.
The distal end of MT I is visible in ventral view. Its distal end is slightly curved medially, and the shaft is semicircular in cross section. The ventral surface is flattened to slightly concave. The distal condyles are partially damaged, although the lateral condyle appears more prominent than the medial. The distal end of MT II is more robust than that of MT I. The distal condyles of MT II are dorsoventrally expanded and well separated from one another by a deep ginglymus. A deep tendon insertion pit is centered on the lateral surface of the exposed condyle.
The phalanges of digits II-IV are well preserved. Both phalanges are preserved for digit II. Digit III preserves the distal end of the first phalanx and both distal phalanges. Digit IV is represented by the distal end of the proximal phalanx and the three more distal phalanges. All phalanges are taller than they are mediolaterally wide, have well-developed ginglymi, and possess deep tendon insertion pits on their lateral surfaces. As in most ornithischians, the phalanges decrease in overall size from digit II through digit IV.
The ungual phalanges of digits II and III are missing their tips, but that of digit IV is complete. All unguals are slightly downcurved, mediolaterally compressed, and have an elongate sulcus on their medial and lateral surface. In all respects they resemble those of Yinlong.
Phylogenetic analysis results
The analysis recovered 216 most parsimonious trees of 461 steps each (Consistency Index = 0.49; Retention Index = 0.78). The strict consensus tree is shown in Fig 10 along with bootstrap and Bremer support values. This analysis places Yinlong and Hualianceratops within a monophyletic Chaoyangsauridae, along with Chaoyangsaurus and Xuanhuaceratops, as suggested by Morschhauser [11] and Han [38] (Fig 10). No unambiguous synapomorphies support this clade, but nine characters (61, 62, 67, 70, 76, 109, 119, 138, 143) under either ACCTRAN or DELTRAN optimize to this node in the strict consensus. Nevertheless, the bootstrap value for this node is 61, while Bremer support is 2, similar to the support for the Psittacosaurus-Chaoyangsauridae clade that has six unambiguous synapomorphies. Relationships among the four chaoyangsaurids are unresolved. This is likely due, in part, to the preponderance of missing data and non-overlapping data in Hualianceratops (81% missing data), Xuanhuaceratops (78%), and Chaoyangsaurus (50%). Until more complete materials of these taxa are discovered, their relationships to one another may remain impossible to decipher.
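For readers unfamiliar with the tree statistics reported here, the ensemble consistency and retention indices follow the standard definitions (CI = Σm/Σs and RI = (Σg − Σs)/(Σg − Σm), summed over characters). The sketch below (Python) shows only the arithmetic, using made-up step counts for three hypothetical characters; it does not use the actual character matrix of this analysis.

```python
def consistency_and_retention(chars):
    """chars: list of (min_steps m, observed_steps s, max_steps g) per character."""
    m = sum(c[0] for c in chars)
    s = sum(c[1] for c in chars)
    g = sum(c[2] for c in chars)
    ci = m / s                    # ensemble Consistency Index
    ri = (g - s) / (g - m)        # ensemble Retention Index
    return ci, ri

# Hypothetical step counts for three characters (illustration only).
example = [(1, 2, 4), (1, 3, 5), (2, 2, 6)]
ci, ri = consistency_and_retention(example)
print(f"CI = {ci:.2f}, RI = {ri:.2f}")
```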
Despite their co-occurrence in the Shishugou Formation and general similarity, Yinlong and Hualianceratops are supported as sister taxa in only half (108) of the 216 most parsimonious trees (MPTs). In the MPTs where they are sister taxa, they are united by a single character: a prominent sulcus on the ventral quadratojugal for articulation with the jugal (character 70). A sulcus is present but weakly developed in Chaoyangsaurus, and absent in Xuanhuaceratops. Additionally, Yinlong and Hualianceratops are the only ceratopsians known to have a dorsally flat, laterally and caudally expanded squamosal (character 61). However, this character also occurs in pachycephalosaurs and the condition is unknown in both Chaoyangsaurus and Xuanhuaceratops (though it is absent in Psittacosaurus); it may therefore prove to be a synapomorphy of Marginocephalia. Because these characters are ambiguously distributed among the MPTs, neither can support a sister-group relationship between Yinlong and Hualianceratops, and thus the two taxa cannot be considered congeneric. Other MPTs place Yinlong closer than Hualianceratops to the other chaoyangsaurids based on character 74, or Hualianceratops as sister to Xuanhuaceratops based on character 120, with Chaoyangsaurus sister to these two taxa based on character 144.
The phylogenetic analysis suggests a sister-taxon relationship between Chaoyangsauridae and Psittacosaurus that is weakly supported, with a bootstrap value of 62 and Bremer support of 2; in trees two steps longer, Psittacosaurus is instead the sister taxon to Neoceratopsia. Six unambiguous synapomorphies support this clade, including a short preorbital region less than 40% of the length of the skull (character 5), a length from the rostral end of the skull to the caudal edge of the maxilla less than 55% of the length of the skull (character 7), an infratemporal ramus of the jugal that is longer than the infraorbital ramus (character 46), a jugal infratemporal process that is strongly arched laterally (character 48), an infratemporal process of the jugal that extends caudally along the ventral margin of the quadratojugal (character 49), and a surangular length more than 50% of the total mandibular length (character 131). Two of these features relate to the relatively short snout, and three concern the morphology of the infratemporal portion of the jugal. The sixth feature, a relatively long surangular, is recognized here for the first time. This relationship suggests an early divergence between Psittacosaurus (late Barremian-Albian) [39,40] and chaoyangsaurids (Late Jurassic) prior to the Late Jurassic, and thus a long ghost lineage (about 38 Ma) for Psittacosaurus (Fig 11).
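The ~38 Ma figure follows from the gap between the oldest chaoyangsaurid occurrence and the oldest Psittacosaurus record; the sketch below (Python) illustrates the arithmetic using approximate ages that are assumptions for illustration only, not values quoted in this paper (the paper's geochronology follows ref. [41]).

```python
# Illustrative ages in Ma (assumed for this sketch, not quoted from the paper).
oldest_chaoyangsaurid_ma = 161.0   # approximate early Late Jurassic age of the Shishugou chaoyangsaurids
oldest_psittacosaurus_ma = 123.0   # approximate late Barremian first appearance of Psittacosaurus

# A ghost lineage spans the interval between a taxon's implied divergence and its first fossil record.
ghost_lineage_ma = oldest_chaoyangsaurid_ma - oldest_psittacosaurus_ma
print(f"Implied Psittacosaurus ghost lineage: ~{ghost_lineage_ma:.0f} Ma")  # ~38 Ma
```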
This analysis weakly supports a basal split between a Psittacosaurus + Chaoyangsauridae clade and Neoceratopsia, a novel result. This removes the chaoyangsaurids from Neoceratopsia following the definition of the group by Sereno [42]: the most inclusive clade including Triceratops horridus but not Psittacosaurus mongoliensis. The monophyly of Neoceratopsia in this analysis is supported by a bootstrap value of 78 and Bremer support of 2, and Psittacosaurus groups with Neoceratopsia only in trees three or more steps longer than the MPTs. The relationships found in our analysis imply that this basal split in the Ceratopsia occurred prior to the Late Jurassic (Fig 11). Currently, the earliest known neoceratopsian is Liaoceratops from the lowermost part of the Early Cretaceous Yixian Formation of China, where it co-occurs with Psittacosaurus [4,40]. This phylogeny suggests a long, undiscovered ghost lineage for Neoceratopsia through the Late Jurassic.
Fig 11. Ghost lineages implied by this phylogenetic analysis. Ghost lineages (gray lines) implied by the stratigraphic distributions of taxa and the most parsimonious trees. Thick black lines indicate single occurrences; their length reflects uncertainty in dating. One of the phylogenies implying the fewest lineages is shown. Geochronology from [41].
Discussion and Conclusion
Hualianceratops (IVPP V18641) represents the second species of basal ceratopsian present in the upper part of the Shishugou Formation at the Wucaiwan locality. Though Yinlong possesses a number of autapomorphies, the incompleteness of the Hualianceratops material does not allow all of these characters to be evaluated. While two characters have been recognized that are uniquely shared by these taxa (a deep sulcus on the ventral surface of the quadratojugal for articulation with the jugal, and a squamosal with a flat dorsal surface that expands both laterally and caudally), neither unambiguously defines a sister-group relationship between these taxa (see above). Hualianceratops is distinct from Yinlong in possessing the following characters: a prominent dorsal process on the infratemporal ramus of the jugal, a robust quadrate with an expanded rostral margin above the quadratojugal facet, an expansive quadratojugal facet, a deep notch on the ventral jugal wing of the quadrate, a shallow sulcus between the quadrate condyles, and strongly rugose sculpturing on the lateral surface of the dentary. None of these characters occurs in individuals of Yinlong of any size, suggesting that they are not ontogenetically dependent [14].
Yinlong downsi shares some derived features with both Psittacosaurus and neoceratopsians [14]. Interestingly, Hualianceratops shares more derived characters with Psittacosaurus than with basal neoceratopsians. These include the divergent quadratojugal process, the flattened ventral surface of the jugal, the caudodorsally curved quadrate head, and the deep, short dentary. However, the large antorbital fossa, the preserved squamosal and the sculptured lateral surfaces of most bones are quite different from those of Psittacosaurus. Additionally, the wide jugal-postorbital bar is more like that of basal neoceratopsians.
The age of the two Shishugou species, within the dating error for the beginning of the Oxfordian [41], coupled with the most parsimonious phylogenies, implies that at least five lineages of ceratopsians were present at the beginning of the Late Jurassic (Fig 11), including the two Shishugou species. The grouping of Psittacosaurus with chaoyangsaurids (Fig 11) implies long ghost lineages for Psittacosaurus and Neoceratopsia. By comparison, if there were no morphological constraints on the phylogeny, then only two ceratopsian lineages, the two Shishugou species, would be minimally necessary at the beginning of the Oxfordian. Furthermore, all of the alternative MPTs indicate that at least three lineages of chaoyangsaurids were present (assuming the autapomorphies of the two Shishugou taxa debar them from being direct ancestors of any other taxa). Three lineages are implied when the two Shishugou taxa are sister taxa to a Chaoyangsaurus-Xuanhuaceratops clade, or when the former are paraphyletic with respect to the latter, but four lineages are implied when Chaoyangsaurus and Xuanhuaceratops are paraphyletic to a Yinlong-Hualianceratops clade. The presence of at least five lineages at the beginning of the Late Jurassic contrasts with previously published analyses, which indicated a minimum of only two lineages at this time [1,9] (Yinlong and all other ceratopsians); prior to 2006, no ceratopsians were known from the beginning of the Late Jurassic. In any case, this phylogeny implies that ceratopsian phylogenetic diversification was well established by the beginning of the Late Jurassic. | 2016-05-12T22:15:10.714Z | 2015-12-09T00:00:00.000 | {
"year": 2015,
"sha1": "31dfff910f4e0297fb60fecabf2f2090b0b95349",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0143369&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "71cc5636673a94ca8123939f6953bdf8658a9e09",
"s2fieldsofstudy": [
"Geography"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
132188128 | pes2o/s2orc | v3-fos-license | Assessment of the Effects of the Free Maternal Health Policy on Maternal Health: A Case Study of New Juaben Municipality, Koforidua, Ghana
The main objective of the study was to verify the effect of the free maternal care policy on maternal health in the New Juaben Municipality, Ghana. The population for the study encompassed women of child-bearing age (10-49) in the New Juaben Municipality. Both primary and secondary sources of data were employed for this study. Purposive and accidental sampling procedures were used to select the respondents. The sample size was two hundred (200). This study used trend analysis as its main approach to analyzing the available data. The findings show that antenatal attendance has been increasing over the years, which is attributed to the introduction of the free maternal health policy. However, quality of care remains a problem due to the enormous attendance. It was recommended that there is still a great need to introduce other measures to reduce maternal mortality in the municipality, that quality of care must be addressed, and that more effort should be made to improve the services provided to patients. It was concluded that, although the policy has not eradicated maternal mortality completely, it has contributed to its significant reduction in the region.
BACKGROUND TO THE STUDY
At the turn of this century, 189 countries endorsed the Millennium Declaration and signed up to meeting eight goals. One of these (Millennium Development Goal 5) is to "improve maternal health" (Lancet, 2005). The concept of maternal mortality has become an everyday phenomenon of the contemporary world. Women of childbearing age all over the world, irrespective of race, education, occupation or marital status, are faced with the agony of pregnancy-related challenges leading to their death. Before this era, the concern centred on the ability to bear children, and people, though saddened by maternal deaths, took comfort in the children's survival. Global attention, however, began to focus more seriously on maternal deaths in the 1980s, precisely in 1985, when Rosenfield and Maine published a thought-provoking article in the Lancet (Senah, 2003). In the article, titled 'Maternal Mortality-a neglected tragedy-where is the M in MCH?', Rosenfield and Maine warned the world that many countries were neglecting this important problem and that existing programs were unlikely to reduce the high maternal mortality rates (Senah, 2003). Hence, in 2008, the Government of Ghana introduced the free maternal policy, which aimed at tackling this menace.
STATEMENT OF THE PROBLEM
The Eastern Regional Hospital in Koforidua recorded a total of 41 maternal deaths in 2010, as against 36 recorded in 2009. According to Dr Anim Boamah, head of the Department of Obstetrics and Gynaecology of the Regional Hospital, the high rate of maternal deaths is attributable to abortion-related deaths, primary postpartum haemorrhage, hypertensive diseases in pregnancy, ruptured uterus, HIV-related deaths and pre-natal infections. The others included placental abruption, cardiac failure, sickle cell disease with anaesthetic death, ectopic pregnancy, pulmonary embolism, obstructed labour and hypoglycaemia, each recording one death (GNA, 2010). The reasons attributed to these deaths were late referrals; the documents accompanying most of those referrals were either incomplete or omitted vital information. Dr. Anim Boamah indicated that in 2010, 27 out of 37 maternal mortality cases resulted from late referrals. Five (5) of these pregnant women were brought in already dead. However, in 2003, to improve maternal health and survival, the government of Ghana implemented a new policy that removed delivery fees in health facilities in the four most-deprived regions of the country. Less than two years later, the government extended the policy to the rest of Ghana, removing delivery fees in all public, private and mission facilities (Impact, PRB, 2007). On July 1st, 2008, the President of Ghana announced that the government was providing free maternal care for pregnant women to improve the attainment of MDGs 4 and 5. The decision was to implement this through the National Health Insurance Scheme (NHIS) to give all mothers full-package access to antenatal, prenatal and postnatal care (NHIS, 2008). As a result of the high rate of maternal mortality recorded in the district, it was necessary to undertake research in order to carefully assess the major causes of maternal mortality despite the availability of free maternal health care.
OBJECTIVES OF THE STUDY
The main objective of the study is to assess the effect of the free maternal health policy on utilization of maternal health services in the New Juaben Municipality. More specifically, the study aimed to:
I. Assess the trend in antenatal attendance in the health care facilities after the implementation of the free maternal health care policy.
II. Analyze the effect of the policy on the quality of care in the health facilities.
III. Examine the prospects and challenges of the policy and make recommendations to address the problem.
LOCATION
New Juaben is the regional capital of the Eastern Region and has an estimated population of 152,858. It covers a land area of 110.0 square kilometres with a population density of 1,483 persons per square kilometre. The Municipality shares boundaries with the East Akim District to the north, Akwapim North to the south, Yilo Krobo to the east and Suhum Kraboa Coaltar to the west. It lies between latitudes 6°N and 7°N (New Juaben Municipal Health Directorate, 2012).
HEALTH FACILITIES
The inhabitants seek health services through the following network of health facilities;
TARGET POPULATION AND SAMPLING TECHNIQUES
The population for the study encompasses women of child-bearing age (10-49) in the New Juaben Municipality, as prescribed by the New Juaben Municipal Health Directorate. Pregnant women who are using or have ever used the free maternal health care were chosen for this study because they are viewed as much more informed about the pros and cons of the policy and can throw more light on the issues during the interviews. The Municipal Health Directorate has divided the municipality into sub-municipalities comprising Oyoko/Jumapo, Effiduase/Asokore/Akwadum, Koforidua/Zongo, Regional Hospital, Medical Village, Private Maternity, Traditional Birth Attendants and SDA Hospital. Secondary data from all these facilities were used in the analysis.
However out of these health facilities, five (5) health facilities that admit and deliver pregnant women were chosen as the target health facilities from which respondents were picked for the primary data. They are the Eastern Regional Hospital, Zongo Health Centre, Akwadum RCH, Jumapo Health Centre and Pat Maternity Home. The study requires a focus on a targeted sample of women in the New Juaben municipality who have used and are using the free maternal health services in the last five years. Two sampling techniques namely the purposive sampling and accidental sampling techniques were employed to select respondents to participate in the study.
After purposively selecting those beneficiaries who have used these services in the stated years, the accidental sampling technique was used to gather information from three hundred (300) women who are using the free maternal health policy from the five selected facilities that admit and deliver pregnant women in the New Juaben municipality.
Two hundred (200) questionnaires were administered. Of these, one hundred (100) respondents were selected from the Regional Hospital, twenty-five (25) from Zongo Health Centre, twenty-five (25) from Akwadum RCH, twenty-five (25) from Jumapo Health Centre and twenty-five (25) from Pat Maternity Home. The quota of respondents assigned to each health facility reflects the level of attendance at that facility.
DATA COLLECTION PROCEDURE
The researcher carried out a pre-test to check the validity and reliability of his interview guide before the start of the actual fieldwork. A total of ten (10) questionnaires and two (2) interviews were completed during the period of the pre-test. The questionnaires were then restructured and the necessary corrections made before the actual fieldwork was carried out. To avoid biased information during the actual field study, the women used for the pre-test were different from those used for the actual study.
RESEARCH DESIGN
Secondary data (quantitative) supported by structured interviews (qualitative) were used as the main instruments to gather data from the women at the five health facilities. The first section of the interview for the women solicited information on the background of the beneficiaries; it comprised questions about age, education, marital status and occupation, while the questions in the second section addressed their utilization of and access to the health care services and the challenges they faced. The research design chosen for this work was the descriptive survey design. The study also used trend analysis as its main approach to analyzing the available data.
Both primary and secondary sources of data were employed for this study. The secondary data consulted include annual reports, presentations at key conferences, annual statistics, half-year reports, policy documents, books and other documents about maternal care in the New Juaben Municipality; these were used to buttress points on free maternal care and utilization rates and to complement the field data. They also gave direction to the study by providing the researcher with fair knowledge about the impact of the free maternal care policy on maternal care services.
The primary data, on the other hand, were obtained from respondents in the New Juaben Municipality through interviews with the help of a semi-structured interview guide. The use of this technique gave the researcher the chance to appraise the validity and reliability of the respondents' answers. It also gave the investigator access to vital information which the secondary data could not provide.
DATA ANALYSIS
The study used trend analysis as its main method of data analysis; this choice arose from the type of study being carried out. Rosenberg (1997) states that the selection of a strategy for analyzing trend data depends in part on the purpose of the analysis. She indicates that, once there is a sound conceptual framework, tables, graphs and statistical analysis are tools for examining and analyzing trend data; graphs, in particular, are an effective tool for presenting the pattern of change over time.
Under trend analysis, regardless of whether statistical techniques are used for analyzing data over time, the most straightforward and intuitive first step in assessing a trend is to plot the actual observed numbers or rates of interest by year (or some other period deemed appropriate).
From Table 4.1.1, it is observed that a total of two hundred (200) women were interviewed. The 20-29 age group has the highest representation, at 50 percent, and the 40+ age group the least, at 10 percent. Most of the respondents are therefore youth between the ages of 20 and 29 years, which suggests that policy should focus on the 20-29 age group because they have the highest percentage. Of the two hundred (200) respondents, the majority (45 percent) were married, 30 percent were single and 25 percent were separated; this suggests that those who are married used the services more than those who were not married. Overall, the proportion of antenatal attendees who received care has fluctuated from 2007 to 2012, but has remained above 80 percent, which is encouraging.
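As an illustration of the plotting step described above, the short sketch below (Python with matplotlib) plots yearly antenatal registrations. The 2007 and 2012 totals are the ones reported later in this paper; the intermediate values are placeholders for illustration only, not the study's actual data.

```python
import matplotlib.pyplot as plt

years = [2007, 2008, 2009, 2010, 2011, 2012]
# 4884 (2007) and 5570 (2012) are reported in the text;
# the intermediate values are placeholders for illustration only.
registered_antenatal = [4884, 5000, 5100, 5250, 5400, 5570]

plt.plot(years, registered_antenatal, marker="o")
plt.title("Trend Analysis Plot for Antenatal Attendance")
plt.xlabel("Year")
plt.ylabel("Registered antenatal attendances")
plt.tight_layout()
plt.show()
```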
THE TREND IN THE ANTENATAL ATTENDANCE AFTER THE POLICY
From Figure 1, the Trend Analysis Plot for Antenatal Attendance, the index values 1, 2, 3, 4, 5 and 6 represent the years 2007, 2008, 2009, 2010, 2011 and 2012, respectively.
From Table 4.3.1, 50 percent of respondents said that the overall performance of the free policy was satisfactory, followed by 20 percent who said it was good, 15 percent who said it was poor and 10 percent who said it was very good; the remaining 5 percent rated it excellent. From Table 4.3.3, 40 percent indicated that the attitude of the health professionals was satisfactory, 30 percent said it was not good, 20 percent said it was good and 10 percent said it was very good. Table 4.3.4 indicates the responses on the level of satisfaction with the service providers: 60 percent had little satisfaction with the service providers, 25 percent were satisfied, 10 percent were dissatisfied and 5 percent were very satisfied. More effort should therefore be made in the services being provided to patients. From Table 4.4.1, 35 percent said the challenges with the policy were the delay at the service points and that patients with NHIS were not treated well, 20 percent said the drugs were not of good quality, and 10 percent said the challenge was with card acquisition. From Table 4.4.2, 35 percent said the time spent at the facility should be reduced and that patients with NHIS should be treated well, 20 percent said more quality drugs should be added, and 10 percent said the bottlenecks in card production and distribution should be removed. From Table 4.4.3, 54 percent said the policy should be maintained because it helps those who cannot afford health care to access it, and 46 percent said it should be maintained because it encourages pregnant women to go for regular treatment.
ANTENATAL ATTENDANCE IN THE HEALTH CARE FACILITY AFTER THE IMPLEMENTATION OF THE FREE MATERNAL HEALTH CARE POLICY
The Ghana Health Insurance scheme (under which the free maternal care policy falls) allows six (6) ANC visits (NHIS, 2010). As shown in Table 4
THE EFFECT OF THE POLICY ON THE QUALITY OF CARE IN THE HEALTH FACILITY
The presence and quality of care provided by the health service providers and the availability of equipment and medical supplies in the health facility determine the decision of needy women to visit the facility (United Nations Economic and Social Commission for Asia and the Pacific, 2008). Some mortality cases in the municipality arise out of shortage of blood or non-availability of fresh frozen plasma, and lack of adequate intravenous fluids and other supplies (NJMHD, 2009).
Table 4.3.1 shows that 50 percent of respondents said the overall performance of the free policy was satisfactory, 20 percent said it was good, 15 percent said it was poor, 10 percent said it was very good and 5 percent said it was excellent. Regarding the attitude of the health professionals, close to 40 percent said it was satisfactory, 30 percent said it was not good, 20 percent said it was good and 10 percent said it was very good.
From Table 4.3.3, 60 percent said the effectiveness of the NHIS medicines given was satisfactory, 20 percent and 15 percent said the medicines were very good and good respectively, and 5 percent said the medicines were not good.
From Table 4.3.4, 60 percent had little satisfaction with the service providers, 25 percent were satisfied, 10 percent were dissatisfied and 5 percent were very satisfied.
Hence, the overall findings can be summarized as follows. The study showed that registered antenatal attendance, which totaled 4,884 in 2007, kept increasing in subsequent years, reaching 5,570 in 2012. Only 10 percent of respondents rated the attitude of the health professionals as very good. Further, 60 percent of the respondents were satisfied with the NHIS medicines given to them, while 60 percent had little satisfaction with the service providers. The study revealed that the challenges of the policy were the delay at the service points and that patients with NHIS were not treated well.
RECOMMENDATIONS
The issue of maternal mortality in Ghana is of concern and will continue to be of concern if certain parts of the problem are not addressed well. Ghana needs to focus on several key areas to help reduce maternal mortality. The ultimate goal of MDG 5 is to reduce maternal mortality by three quarters by the year 2015. The country will be able to make strong reductions in the specific areas of difficulty it currently faces with the high rate of maternal mortality if other strategies, apart from free health care, are also employed. Ghana is heading in the right direction with its sense of urgency in developing solutions to the problem and its willingness to draw on various expertise. The study therefore makes the following recommendations. Since registered antenatal attendance has kept increasing over the years, the government should strengthen the existing policy so that more pregnant women register for antenatal care. Management should train service providers to improve the quality of care. Since only 10 percent of the respondents said the attitude of the professionals was very good, management should organize workshops for the health professionals to improve their attitude. Time spent at the service point should be reduced, and patients with NHIS should be treated with the same care as patients without NHIS.
CONCLUSION
Based on the problem posed by maternal mortality and its effect on achieving the target set by the Millennium Development Goals, the following objectives, elaborated below, were formulated for the investigation. Firstly, the research aimed at establishing the trends in antenatal attendance in the health care facilities after the implementation of the free maternal health care policy. The number of women, out of the expected pregnancies, who registered their pregnancies has been increasing over the years. Similarly, the average number of visits per client has also improved, from 3.0 in 2007 to 4.1 in 2012. This increase in antenatal attendance in the municipality can be attributed to the introduction of the free maternal health care policy. Hence, the policy has directly affected antenatal attendance.
Secondly, the research aimed at establishing the effect of the policy on the quality of care in the health facilities. Skilled care needs to be scaled up, which would mean better use of the human resources available. Therefore, even though the policy has had an effect on quality of care, the effect has not been remarkable, meaning that more work needs to be done. Lastly, the research aimed at establishing the prospects and challenges of the policy. Even though the policy faces many challenges, it can be concluded that it should be maintained and strengthened, and that other measures should be introduced to reduce maternal mortality. | 2019-04-26T14:23:16.904Z | 2016-08-22T00:00:00.000 | {
"year": 2016,
"sha1": "7cd78b7e2610be7616b2e2a4adfc8a413ca7024d",
"oa_license": "CCBYNC",
"oa_url": "https://thejournalofbusiness.org/index.php/site/article/download/292/620",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "c4f591377556ed0934bbc6f8c5b695671e3727f6",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
10601475 | pes2o/s2orc | v3-fos-license | The first Iranian recommendations on prevention, evaluation and management of high blood pressure.
This paper presents the complete report of the first Iranian Recommendations on Prevention, Evaluation and Management of High Blood Pressure. The purpose is to provide an evidence-based approach to the prevention, management and control of hypertension (HTN) by adapting the most internationally known and used guidelines to the local health care status, with consideration of the currently available data and based on locally conducted research on HTN as well as social and health care requirements. A working group of national and international experts participated in discussions and collaborated in decision-making, writing and reviewing the whole report. Multiple subcommittees worked together to review the recent national and international literature on HTN in different areas. We used the evaluation tool called "AGREE" and considered a score of > 60% as a high score. We adapted the Canadian Hypertension Education Program (CHEP), the United Kingdom's National Institute for Health and Clinical Excellence (NICE) and the US-based Joint National Committee on Prevention, Detection, Evaluation and Treatment of High Blood Pressure (JNC7) guidelines. The key topics highlighted in this report include: the importance of ambulatory and self-measurement of blood pressure, evaluation of cardiovascular risk in HTN patients, the role of lifestyle modification in the prevention of HTN and its control with more emphasis on salt intake reduction and weight control, introducing pharmacotherapy suitable for uncomplicated HTN or specific situations and the drugs available in Iran, highlighting the importance of angiotensin-converting enzyme inhibitors, angiotensin-receptor blockers and calcium channel blockers as first-line therapy in many situations, the non-use of beta blockers as first-line treatment except in specific conditions, treating HTN in women, children, obese and elderly patients, the importance of patient compliance for improving HTN control, practical guidelines to improve patients' knowledge of their own risk and self-care, as well as a quick reference guide that can serve as simplified guidelines for physicians. The working team decided to update these recommendations every two years.
Introduction
Global perspective. Hypertension (HTN) is a major cause of disability and is recognized as a leading risk factor for death in the world, causing an estimated 7.5 million deaths per year (13% of all deaths). 1 HTN is a known independent risk factor for cardiovascular disease (CVD) events. As blood pressure (BP) level increases, so does the risk of heart attack, heart failure, stroke and kidney disease. 2 More specifically, according to data from the International Society of Hypertension 3 on the global burden of blood-pressure-related disease, approximately 54% of stroke, 47% of ischemic heart disease and 25% of other CVD worldwide were attributable to high BP. For those aged 40 to 70, each increment of 20 mmHg in systolic BP or 10 mmHg in diastolic BP doubles the risk of CVD across the BP range of 115/75 to 185/115 mmHg. 4 Elevations in BP can result in acute end-organ damage 5 such as stroke, dementia and chronic kidney disease, whilst lowering diastolic BP by 5 mmHg is estimated to reduce the risk of stroke by 34% and ischemic heart disease by 21% from any pre-treatment level; there appears to be no threshold. 6 The benefits of lowering BP are well documented.
Clinical trials have shown antihypertensive therapy to be associated with reductions in stroke risk of 35-40%, myocardial infarction of 20-25% and heart failure of over 50%. 7 Iranian perspective. The current national picture for Iran suggests a high prevalence of CVD risk factors in both adult and younger populations. According to one study that examined risk factors in about 15,000 subjects in Tehran aged 30 or older, 78% of men and 80% of women presented at least one CVD risk factor. Hypertension was present in 20.4% of adults and 12.7% of children and adolescents. It was also concluded that sustained hypertension is on the rise in the younger generation of school-age children. 8 The national surveillance STEPwise study conducted by the Ministry of Health (MOH) demonstrated a prevalence of 25% for HTN among adults aged 25-64 years. 9 On the other hand, HTN was identified as the strongest risk factor for CVD events in an Iranian population. 10 According to available data from Iran, awareness, treatment and control of hypertension are generally low. Many studies estimated the awareness and treatment of hypertension in Iranians to be approximately 50% and 35%, respectively, while the control rate for hypertension is less than 16% (Table 1). 11 Two previous studies conducted to compare the prevalence of hypertension in the rural and urban areas of Guilan and Isfahan provinces demonstrated that both systolic and diastolic BP were lower in the mountain villages and rural areas compared with cities, in both sexes. 12,13 Through the education of patients and physicians, it is possible to achieve BP control in the Iranian population, just as other countries have been striving to achieve. In the US, hypertension affects an estimated 65 million people. The good news is that with the implementation of national guidelines, public education programs and policies, and with increased awareness, prevention and effective treatment of hypertension, significant positive results have been achieved: hypertension control increased from 10% in NHANES II (1976-1980) to 31% in 1999-2000. 13 In one study in Iran, comprehensive community-based interventional strategies for CVD prevention and health promotion led to a significant reduction in blood pressure and salt intake (Figure 1). 14 This document aims to provide clinicians and their patients with the necessary tools and evidence-based information to better guide them in hypertension prevention and management, as set forth in various guidelines from Canada, Europe and the US. It is possible to learn from developed countries what will grant us the most success in controlling hypertension and curbing the upward trend in related cardiovascular disease in our country.
Materials and Methods
Clinical guidelines for the management
To develop the guideline, we decided to use a comprehensive approach to review and adapt a few of the most internationally known and used HTN guidelines. Therefore, we used the evaluation tool "AGREE" and considered a score of > 60% as a high score. We adapted the CHEP, NICE and JNC7 guidelines with the following outlines:
1. Prioritized the evidence-based recommendations
2. Evaluated and discussed the required field and health care gaps among the hypertension experts from all related medical specialties
3. Conducted a systematic evaluation of the current strategies for HTN prevention and treatment in order to ensure consistency
4. Considered recommendations which have the flexibility to be adapted and modified for special cases
5. Included the references and resources that were used
Search methods
The search strategy for this guideline was as follows:
1. The keywords "hypertension" and "blood pressure" were used for the search in MeSH.
2. In the PubMed (www.pubmed.gov) search, with advanced search settings of "practice guideline" and "guideline", all adults 19+ years and publication from 2005 onwards, a total of 31 guidelines for hypertension was found.
3. In the TRIP (www.tripdatabase.com) search, for all publications from 2005 onwards, a total of 91 guidelines from different countries was found.
4. On the National Guideline Clearinghouse website (www.guideline.gov), we used the keyword "hypertension" with an advanced search of adults 19+ years, publication from 2005 onwards and "meta-analysis of RCT". A total of 13 references was found.
5. In Cochrane (www.cochrane.gov), when EBM guidelines were searched, no guideline was found.
6. On the NICE website (www.nice.org.uk), the UK hypertension guideline was found.
7. No Iranian guideline on hypertension was found in a manual search of the previous evidence.
8. From the above publications, three hypertension guidelines were selected. These three guidelines were evaluated with the "AGREE" tool by two experts who were familiar with the use of this tool. Of the three guidelines, only one rated > 60% in the "AGREE" tool. The highest-rated guideline was then selected.
9. After selection of the guideline, the search was continued for all RCTs on "hypertension" in adults between the years 2008 and 2011, and 47 publications were chosen.
10. The first draft was edited based on the searched publications and the selected guideline. The draft was re-evaluated with the use of the "AGREE" tool and then presented at the advisory meeting.
The recommendations in this guideline are based on scientific evidence and, where there was a gap in the current evidence, the experts' panel and clinical experience were used. Our goal is to re-evaluate the current guideline in 2 years to ensure the evidence used is up to date and current. These recommendations will be implemented in a few sites before being sent by the Ministry of Health to be cascaded down all over the country.
High blood pressure risk factors
Age and gender, race and ethnicity, family history, obstructive sleep apnea, lifestyle factors, salt and potassium intake, physical activity level and exercise, diet, medications and street drugs, kidney problems and other medical problems are all well-known risk factors for high blood pressure. 15
Accurate BP measurement
a) BP measurement in the office
Blood pressure should be assessed whenever appropriate at visits, which includes when screening for hypertension, assessing cardiovascular risk and monitoring anti-hypertensive treatment. Use of standardized measurement techniques is recommended (grade D). Automated blood pressure measurements can be used in the office setting (grade D). Automated office-measured systolic blood pressure (SBP) of 140 mmHg or higher or diastolic blood pressure (DBP) of 90 mmHg or higher should be considered analogous to mean awake ambulatory SBP of 140 mmHg or higher and DBP of 90 mmHg or higher, respectively (grade D). 16
b) Self-measurement of BP
In much of the hypertension education information directed at consumers, a common message is "Know your numbers". Promoting awareness of hypertension and its health risks is augmented by home monitoring of BP between doctor visits. Self-measurement of BP is helpful in diagnosing hypertension. It is also a useful adjunct method of monitoring BP between in-office measurements. Regular home monitoring of BP is particularly useful for patients with diabetes mellitus, chronic kidney disease, suspected nonadherence, a demonstrated white coat effect, or BP controlled in the office but not at home (masked hypertension).
However, health care professionals should ensure that patients who measure their BP at home have adequate training, and if necessary, repeat training in measuring their BP. Patients should be observed to determine that they measure BP correctly and should be given adequate information about interpreting these readings. 16 Patients should only use validated devices (including electronic devices), which must be regularly checked against a device of known calibration.
Home BP values for assessing white coat hypertension or sustained hypertension should be based on duplicate measures, morning and evening, for an initial seven-day period. First day home BP values should be disregarded.
Home SBP values ≥ 135 mmHg or DBP values ≥ 85 mmHg should be considered elevated and associated with an increased overall mortality risk, analogous to office SBP readings of ≥ 140 mmHg or DBP ≥ 90 mmHg. 16 The method of proper self-measurement of BP is fully explained in the complete report. 17
c) Ambulatory BP
Measuring ambulatory BP is useful for evaluating the possibility of office-induced, or "white coat", hypertension. When home monitoring of BP by the patient suggests white coat hypertension, its presence should be confirmed with ambulatory BP before making treatment decisions. Note that the absence of a 10-20 percent BP decrease during sleep may indicate increased CVD risk. 2 Therapy adjustment should be considered in patients with a mean 24-hour ambulatory SBP of ≥ 130 mmHg or DBP of ≥ 80 mmHg, or a mean awake SBP of ≥ 135 mmHg or DBP of ≥ 85 mmHg (grade C). 16 After accurate BP measurement, BP is classified according to Table 2.
d) Identifying resistant hypertension
In some patients, high blood pressure may not respond to interventions such as lifestyle modifications and/or drug treatment. Assessing why this is happening is essential to modifying treatment recommendations to optimize blood pressure control. A number of contributing factors may affect high BP and make it resistant to treatment, such as improper BP measurement, excess sodium intake, inadequate diuretic therapy, medication issues (i.e. inadequate doses, drug actions and interactions), excess alcohol intake and identifiable causes of hypertension. 2
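To make the numeric thresholds above concrete, the sketch below (Python) averages a week of duplicate morning and evening home readings, discards the first day as recommended, and flags means at or above the 135/85 mmHg home threshold. The sample readings are invented for illustration only.

```python
# Each entry: (day, systolic, diastolic); duplicate morning and evening readings per day.
# All readings below are invented for illustration.
readings = [
    (1, 142, 92), (1, 138, 90), (1, 140, 91), (1, 137, 88),
    (2, 136, 86), (2, 134, 85), (2, 133, 84), (2, 135, 86),
    (3, 137, 87), (3, 136, 85), (3, 134, 84), (3, 138, 88),
    (4, 133, 83), (4, 135, 86), (4, 132, 84), (4, 136, 85),
    (5, 138, 88), (5, 134, 84), (5, 137, 86), (5, 133, 85),
    (6, 135, 85), (6, 136, 87), (6, 134, 83), (6, 137, 86),
    (7, 136, 86), (7, 134, 85), (7, 138, 87), (7, 135, 84),
]

# First-day values are disregarded, per the recommendation above.
kept = [(s, d) for day, s, d in readings if day > 1]
mean_sbp = sum(s for s, _ in kept) / len(kept)
mean_dbp = sum(d for _, d in kept) / len(kept)

# Home threshold stated above: SBP >= 135 mmHg or DBP >= 85 mmHg is considered elevated.
elevated = mean_sbp >= 135 or mean_dbp >= 85
print(f"Mean home BP (days 2-7): {mean_sbp:.0f}/{mean_dbp:.0f} mmHg; elevated: {elevated}")
```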
Evaluation of CVD Risk
A global cardiovascular risk assessment should be performed for all hypertensive patients (grade A). This approach allows a more accurate prediction of an individual's cardiovascular risk and the use of more appropriate pharmacological treatment. Informing patients of their global cardiovascular risk improves the effectiveness of risk factor modification and adherence to treatment (grade B). Assessment of total CVD risk should be based on epidemiologic risk factor data appropriate to the population to which it is applied. The well-known Framingham Heart Study produced a sex-specific coronary heart disease risk assessment in a white, middle-class population. Due to its limitations in terms of generalizability and the need for calibration before use in certain populations, a new cardiovascular disease risk assessment tool was proposed by a group of Canadian and European hypertension experts.
The tool that allows clinicians to better identify patients at high total risk of developing cardiovascular disease is called the Systematic Coronary Cerebrovascular Risk Evaluation (SCORE). The SCORE risk assessment was derived from a large dataset of prospective European studies and predicts fatal atherosclerotic cardiovascular disease over a ten-year period. The risk estimation is based on the following risk factors: gender, age, smoking, systolic blood pressure and total cholesterol. The threshold for high risk based on fatal cardiovascular events is defined as "higher than 5%", instead of the previous "higher than 20%" using a composite coronary endpoint. 18 The SCORE model has originally been calibrated according to each European country's mortality statistics; in other words, if used on the entire population aged 40-65, it will predict the exact number of fatal CVD events that will eventually occur after 10 years. It could also be adapted for the Iranian population and used by our health care professionals in the near future.
Laboratory Tests
In patients diagnosed with hypertension, various laboratory tests should be done at given intervals to monitor BP and treatment efficacy. 16 In addition, testing is done to assess the possibility of target organ damage (e.g. hypertensive damage to the heart and kidneys), the presence of other diseases (e.g. diabetes) and to identify potential secondary causes of hypertension (e.g. kidney disease). 19 In all patients with hypertension, routine laboratory tests should include: 16 complete blood count (grade D), urinalysis (grade D), urinary albumin excretion in patients with diabetes (grade D), biochemistry (potassium, sodium, blood urea nitrogen and creatinine) (grade D), fasting blood glucose (grade D), a lipid panel including fasting serum total cholesterol, high-density lipoprotein cholesterol, low-density lipoprotein cholesterol and triglycerides (grade D), and standard 12-lead electrocardiography (grade C).
It is also suggested to:
Assess urinary albumin excretion in all patients (grade D)
Monitor all treated hypertensive patients for the new appearance of diabetes as per currently established diabetes guidelines (grade B)
Repeat tests during the maintenance phase of hypertension management (e.g. electrolytes, creatinine and fasting lipids) at a frequency that reflects the clinical situation (grade D)
Assess uric acid (grade D)
Secondary Hypertension
Although in more than 90% of patients with high blood pressure no underlying cause can be identified, up to 10% of hypertensives have secondary hypertension. This emphasizes the role of screening in order to rule out underlying causes of hypertension, or so-called secondary hypertension. Screening begins with a complete medical history, physical examination and laboratory investigations. 20 The most common cause of secondary hypertension is chronic renal disease. Other common causes include renovascular hypertension, primary aldosteronism and pheochromocytoma. The following are some steps to follow when investigating the secondary causes of hypertension. Kidney disease. An elevated creatinine or reduced eGFR indicates kidney disease. Further investigations to confirm the diagnosis are kidney ultrasonography, urinalysis, electrolyte measurement and metabolic evaluation. Renovascular hypertension. This is the second most common cause of secondary hypertension. 21 The following findings are associated with the presence of renovascular disease: more than 1.5 cm difference in length between the two kidneys on ultrasonography, abdominal bruit, deterioration of renal function in response to ACE inhibitors, generalized arteriosclerotic occlusive disease with hypertension, malignant or accelerated hypertension, new onset of hypertension after 50 years of age (suggestive of atherosclerotic renal stenosis), refractory hypertension, significant hypertension (diastolic blood pressure > 110 mmHg) in a young adult (< 35 years old), and sudden development or worsening of hypertension.
In patients with normal renal function on initial screening tests, magnetic resonance angiography and/or computed tomographic angiography of the kidneys is recommended as the next evaluation step. In patients with diminished renal function on initial screening, duplex Doppler ultrasonography of the kidneys is recommended as the next evaluation step. Primary aldosteronism (PA). If any of the following conditions is identified on initial screening, further investigation for PA is mandatory: unexplained hypokalemia (spontaneous or diuretic-induced), resistant hypertension, severe hypertension (> 160 mmHg systolic or > 100 mmHg diastolic), early-onset (juvenile) hypertension (< 20 years) and/or stroke (< 50 years), an incidentally discovered, apparently nonfunctioning adrenal mass (incidentaloma), and evidence of target organ damage (left ventricular hypertrophy, diastolic dysfunction, atrioventricular block, carotid atherosclerosis, microalbuminuria, endothelial dysfunction), particularly if disproportionate to the severity of hypertension.
The initial screening test, after careful preparation of the patient, is measurement of plasma aldosterone levels and plasma renin activity, expressed as the aldosterone-to-renin ratio (ARR). Pheochromocytoma.
Pheochromocytoma should be suspected in those who have one or more of the following conditions: hyperadrenergic spells (e.g. self-limited episodes of non-exertional palpitations, diaphoresis, headache, tremor or pallor), resistant hypertension, a familial syndrome that predisposes to catecholamine-secreting tumors (e.g. MEN-2, NF1, VHL), a family history of pheochromocytoma, an adrenal incidentaloma, a pressor response during anesthesia, surgery or angiography (hypotension or hypertension with or without cardiac arrhythmia), onset of hypertension at a young age (< 20 years), idiopathic dilated cardiomyopathy, a history of gastric stromal tumor or pulmonary chondromas (Carney triad), and hypertension with diabetes.
Detection of secondary hypertension is important for proper management and selection of the right treatment.
Approach to primary prevention of hypertension
Hypertension can be prevented by complementary application of strategies that target the general population and individuals and groups at higher risk for high blood pressure. Lifestyle interventions are more likely to be successful and the absolute reductions in risk of hypertension are likely to be greater when targeted in persons who are older and those who have a higher risk of developing hypertension compared with their counterparts who are younger or have a lower risk. However, prevention strategies applied early in life provide the greatest long-term potential for avoiding the precursors that lead to hypertension and elevated blood pressure levels and for reducing the overall burden of blood pressure related complications in the community. A population-based approach that is aimed at achieving a downward shift in the distribution of blood pressure in the general population is an important component for any comprehensive plan to prevent hypertension. A small decrement in the distribution of systolic blood pressure is likely to result in a substantial reduction in the burden of blood pressure-related illness as mentioned earlier. Public health approaches, such as lowering sodium content or caloric density in the food supply, and providing attractive, safe, and convenient opportunities for exercise are ideal population-based approaches for reduction of average blood pressure in the community. Enhancing access to appropriate facilities (parks, walking trails and bike paths) and to effective behavior change models is a useful strategy for increasing physical activity in general population. 22
Management of hypertension
The optimal management of hypertension includes primary prevention of CVD through lifestyle modifications as well as, possibly, the use of antihypertensive drug therapy. Antihypertensive drug therapy should be considered when BP is ≥ 140 mmHg with organ damage or other CVD risk factors (grade A). Consider treatment in all patients with the above indications regardless of age (grade B). Pay particular attention when managing patients who are elderly and frail. Drug therapy should also be considered for stage I hypertension without target organ damage or other comorbidities after 3 to 6 months of nonpharmacological treatment (grade A).
a) Setting treatment goals
Setting a blood pressure target depends on a patient's level of hypertension (stage 1 or 2) as well as other medical conditions. Blood pressure targets vary according to which of the following criteria a patient meets: 16
Hypertension with no target organ damage or other medical condition - Target: less than 140/90 mmHg (grade A)
Patients with diabetes - Target: less than 130/80 mmHg (grade A)
Patients with chronic kidney disease - Target: less than 130/80 mmHg (grade A)
Treatment compliance is a key factor in successfully meeting and maintaining blood pressure targets. Studies showed that good adherence to therapy is associated with lower mortality when compared to poor adherence. 33 For drug therapy, single-pill antihypertensive combinations are recommended, since a simplified regimen may help increase patient treatment compliance. 16
b) Lifestyle modification
Promoting and helping patients to make healthy lifestyle changes can be effective for preventing and managing hypertension, as well as for reducing cardiovascular disease risk. For patients with mild hypertension, recommending lifestyle changes may be enough to control blood pressure (grade D). Modifications can include quitting smoking, regular exercise (grade D), losing weight and reducing abdominal obesity (grade C), limiting alcohol intake (grade B), eating a healthy diet (grade B) that is low in fat and high in fiber, decreasing sodium intake (grade B) and stress management. Losing weight, for example, can reduce blood pressure by 1.6/1.1 mmHg for every 1 kg (2.2 lb.) of weight lost. 23 For patients who require treatment with medication, making lifestyle changes in addition to drug therapy can help them achieve better BP control.
In addition, some patients may be motivated to make lifestyle changes if it means they can reduce or stop the use of antihypertensive drug therapy. Treated patients who have a low CVD risk and well-controlled BP can be offered a trial reduction or withdrawal of therapy combined with appropriate lifestyle recommendations and continued monitoring. 24 Lifestyle modifications can have a significant positive effect on blood pressure reduction: 25
Weight reduction: Maintain normal body weight (body mass index 18.5-24.9 kg/m2). Average BP reduction: 5-20 mmHg per 10 kg of weight loss.
Dietary sodium reduction: Reduce dietary sodium intake to < 100 mmol per day (2.4 g sodium or 6 g sodium chloride). Average BP reduction: 2-8 mmHg.
DASH eating plan: Adopt a diet rich in fruits, vegetables, and low-fat dairy products with a reduced content of saturated and total fat. Average BP reduction: 8-14 mmHg. The American National Institutes of Health recommends the DASH (Dietary Approaches to Stop Hypertension) diet to lower blood pressure. 24 More details on the DASH diet are explained in the complete report. 17
Aerobic physical activity: Regular aerobic physical activity (e.g., brisk walking) at least 30 minutes per day, most days of the week. Average BP reduction: 4-9 mmHg. Aerobic exercise is more effective than exercise involving resistance training (e.g., weight lifting). 26
Stress management: In patients in whom stress could be a contributing factor to their hypertension, relaxation techniques have been shown to be effective (grade B).
c) Pharmacologic treatment 16
While lifestyle changes can be a very effective part of managing hypertension, patients usually need a combination of lifestyle changes and drug therapy to achieve target blood pressures. Patients often need more than one type of antihypertensive medication, or combination therapy, to reach their blood pressure goal (see Figures 2 and 3).
Angiotensin-converting enzyme inhibitors (ACEIs)
Their safety profile is good. ACEIs have been shown to reduce mortality and morbidity in patients with congestive heart failure and in post-myocardial infarction patients with reduced left ventricular ejection fraction. In patients at increased cardiovascular risk, ACEIs have been shown to reduce morbidity and mortality, as well as cardiovascular mortality in diabetic patients. In addition, they have been shown to prevent the onset of microalbuminuria, reduce proteinuria and retard the progression of renal disease, including nondiabetic renal disease. In patients with established vascular disease but normal left ventricular function, ACEIs reduce mortality, myocardial infarction, stroke and new-onset congestive heart failure. These benefits are independent of their effects on left ventricular function and blood pressure. Adverse effects include cough and, rarely, angioedema. In patients with renovascular disease or renal impairment, deterioration in renal function may occur. Serum creatinine should be checked before initiation and repeated within one to two weeks after initiation (then for 2 months, and at longer intervals thereafter). Any increase should be confirmed promptly and monitored; if serum creatinine rises by more than 30% from baseline within two months, ACEIs should be stopped (a simple check of this rule is sketched after the next paragraph). ACEIs may increase fetal and neonatal mortality and are therefore contraindicated in pregnancy, and should be avoided in those planning pregnancy (Table 3).
Angiotensin receptor blockers (ARBs)
ARBs are drugs that specifically block the angiotensin II receptor. Unlike with ACEIs, persistent dry cough is less of a problem. ARBs are recommended in patients intolerant of ACEIs. As with ACEIs, they are contraindicated in pregnancy and bilateral renal artery stenosis. ARBs are effective in preventing progression of diabetic nephropathy and may reduce the incidence of major cardiac events in patients with heart failure, hypertensive LVH and diastolic heart failure. The Blood Pressure Lowering Treatment Trialists' Collaboration (BPLTTC), in a meta-analysis of 21 randomized trials, found no clear differences between ACEIs and ARBs for the outcomes of stroke and heart failure. Combination of ARBs and ACEIs is not recommended except for reducing proteinuria (Table 4).
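As referenced above, the renal-function safety rule for ACEIs reduces to a single threshold check. A minimal sketch; the function name and return values are our own illustration, not guideline wording:

def should_stop_acei(baseline_creatinine, current_creatinine):
    """Flag ACEI discontinuation if serum creatinine rises by more than 30%
    from baseline within the first two months of therapy (see text)."""
    rise_fraction = (current_creatinine - baseline_creatinine) / baseline_creatinine
    return rise_fraction > 0.30

# Example: baseline 1.0 mg/dl, repeat value 1.4 mg/dl -> 40% rise -> stop and reassess.
print(should_stop_acei(1.0, 1.4))  # True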
Calcium channel blockers (CCBs)
Long-acting CCBs have been shown to be safe and effective in lowering blood pressure, both as first-line agents and in combination with other classes of antihypertensive drugs (Table 5). There are three major classes of CCBs (phenylalkylamines, dihydropyridines and benzothiazepines) with different characteristics, and all are effective in lowering BP. With few exceptions, they have no undesirable metabolic effects and their safety profile in hypertension is good. Long-acting dihydropyridine CCBs are particularly effective in reducing isolated systolic hypertension. They are also effective in reducing cerebrovascular events by 10% compared with other active therapies. Short-acting CCBs are no longer recommended and should be phased out. The use of sublingual nifedipine is also discouraged. Long-acting CCBs may also be useful in treating hypertensive patients with coronary heart disease. Adverse effects include initial tachycardia, headache, flushing, constipation and ankle edema. Unlike dihydropyridine CCBs, verapamil may reduce heart rate, and care should be exercised when it is used with beta-blockers.
β-Blockers
β-Blockers have long been established in the treatment of hypertension. They are particularly useful in hypertensive patients with effort angina, tachyarrhythmias or previous myocardial infarction, where they have been shown to reduce cardiovascular morbidity and mortality. Certain beta-blockers such as carvedilol (a combined beta- and alpha-blocker), bisoprolol and long-acting metoprolol have been shown to be beneficial in patients with heart failure. Beta-blockers are absolutely contraindicated in patients with active obstructive airways disease and heart block (2nd and 3rd degree) and are relatively contraindicated in peripheral arterial disease and first-degree AV block. They are generally well tolerated. Reported adverse effects include dyslipidemia, masking of hypoglycaemia, an increased incidence of new-onset diabetes mellitus, erectile dysfunction, nightmares and cold extremities. However, with the advent of newer antihypertensive agents with better efficacy and safety profiles, concern has been voiced over their widespread use in the treatment of hypertension (Table 6).
Diuretics
The use of diuretics is well established in the treatment of hypertension (Table 7). Thiazide diuretics are especially cheap and are among the most widely used antihypertensive agents. When used in patients with essential hypertension and relatively normal renal function, thiazides are more effective than loop diuretics in lowering blood pressure. However, in patients with renal insufficiency (serum creatinine ≥ 2.5 mg/dl), thiazides are less effective and loop diuretics should be used instead. Diuretics may be used as initial therapy. They also enhance the efficacy of other classes of antihypertensive drugs when used in combination. In the elderly with no comorbid conditions, diuretics are the drugs of choice in the treatment of systolic-diastolic hypertension and isolated systolic hypertension. Diuretics reduce not only the incidence of fatal and non-fatal strokes but also cardiovascular morbidity and mortality. Thiazide diuretics should be used with care in patients with gout, with an absolute contraindication in patients presenting with active gout, as they may precipitate an acute attack. Potassium-sparing diuretics may cause hyperkalemia if given together with ACEIs or ARBs or in patients with underlying renal insufficiency. Aldosterone antagonists and potassium-sparing diuretics should be avoided in patients with serum potassium > 195 mg/L (approximately 5 mmol/L). Adverse effects are uncommon unless high doses are used. These include increased serum triglyceride, glucose and uric acid levels; decreased potassium, sodium and magnesium levels; and erectile dysfunction. Serum electrolytes, in particular potassium, should be closely monitored.
The α-blockers and the combined α-/β-blockers
The peripheral α1-adrenergic blockers lower BP by reducing peripheral resistance. They also reduce prostatic and urethral smooth muscle tone and provide symptomatic relief for patients with early benign prostatic hypertrophy (BPH). They are therefore the logical choice for hypertensive patients with BPH. The use of non-specific α-adrenergic blockers such as phentolamine and phenoxybenzamine is restricted to the treatment of pheochromocytoma (Tables 8-10).
Alpha-1 adrenergic blockers have favorable effects on lipid metabolism. However, postural hypotension is a known side effect, especially at the initiation of therapy. Combined α-/β-blockers offer enhanced neurohormonal blockade. Labetalol has been in use for over 20 years and is safe in pregnancy. Its intravenous formulation is useful in hypertensive emergencies, including pre-eclampsia and eclampsia. Carvedilol has been shown to be effective in hypertension and to improve mortality and morbidity in patients with heart failure. In addition, it has no adverse effects on insulin resistance and lipid metabolism. However, its safety in pregnancy has not been established. Table 9 lists the direct vasodilators used in hypertension treatment. In some patients treated once daily, the antihypertensive effect may diminish towards the end of the dosing interval (trough effect). Blood pressure should be measured just prior to dosing to determine whether satisfactory blood pressure control is obtained; if not, an increase in dose or dosing frequency may need to be considered. Appropriate drug choices for hypertensive patients with compelling indications are presented in Table 11. Additional notes: Caution is required when prescribing to women of childbearing potential, as ACEIs, ARBs and direct renin inhibitors are potential teratogens. It is not recommended to combine an ACE inhibitor with an ARB except in special situations judged by specialists (grade A).
Combination therapy using two first-line agents may also be considered as initial treatment of hypertension (grade C) if systolic BP is 20 mmHg above target or if diastolic BP is 10 mmHg above target. Caution should be exercised when using first-line combination therapy in frail and elderly patients. A combination of an ACE inhibitor (or an ARB) and a dihydropyridine CCB is preferred to an ACE inhibitor (or an ARB) and hydrochlorothiazide (grade A).
Global vascular protection treatment for hypertensive adults with no additional disease
Statin therapy is recommended in hypertensive patients older than 40 years with three or more cardiovascular risk factors (grade A) or in patients with established atherosclerotic disease regardless of age (grade A).
Strong consideration should be given to the addition of low-dose acetylsalicylic acid therapy in hypertensive patients (grade A). In patients older than 50 years, caution should be exercised if BP is not controlled (grade C). These recommendations are detailed in the full report 17 and briefly presented here (Table 12).
High blood pressure in paediatrics
The presence and need for management of modifiable cardiovascular risk factors, such as hypertension, dyslipidemia, diabetes and obesity, in the pediatric population is becoming increasingly apparent. Obesity in childhood is predictive of obesity in adulthood and is a major cardiovascular risk factor. While childhood obesity is most widespread in the most industrialized countries, it is also becoming a fast-growing problem in developing countries, such as Iran. 27 The childhood obesity epidemic appears to be associated with the increasing prevalence of HTN. Systolic blood pressure has been recognized as a stronger predictor of mortality than diastolic blood pressure in middle-aged and older adults. However, in younger populations, isolated diastolic hypertension is more common. According to some studies, elevated diastolic blood pressure (with a risk threshold of 90 mmHg) in adolescent males was more consistently related to mortality than systolic hypertension. 28 Hypertension guidelines such as the European Society of Hypertension guidelines (2009) and the US JNC 7 have recognized the significance of hypertension in children and adolescents, and how it contributes to the growing burden of cardiovascular disease worldwide.
Pediatric BP targets are: 29
In general: BP < 90th age-, sex- and height-specific percentile
BP < 75th percentile in children without proteinuria
BP < 50th percentile in cases of proteinuria
24-hour ambulatory BP (ABP) measurement is strongly recommended.
The most common causes of hypertension by age group in pediatrics are presented in the full report. 17
High blood pressure in obesity
Recent epidemiological studies have revealed that the prevalence of obesity in Iran is equal to or higher than in Europe and the United States and that it is the primary cause of the rising prevalence of important comorbid states such as hypertension, cardiovascular and renal disease. This is also in line with the present etiologies of death in Iran, with cardiovascular disease and cancer accounting for nearly 60% of causes of nontraumatic death. The prevalence of obesity in Iran has reached epidemic proportions and is specifically affecting women and younger age groups. An increasing prevalence of obesity and metabolic syndromes might therefore be expected. Thus, it is not surprising that death from cardiovascular diseases is presently the most common cause of mortality in Iran and constitutes 45% of etiologies for all types of death in this country. The prevalence of obesity in Iran varies widely across age and sex groups: it was about 5.5% in those younger than 18 years and 21.5% in the older group. In adults, women reported a higher prevalence of obesity than men, and the difference between women and men increased with age. [30][31][32][33][34][35][36][37][38][39][40] Obesity (body mass index ≥ 30 kg/m2) in hypertensive patients presents a unique challenge to optimal management of blood pressure. It is an increasingly common risk factor for both hypertension and CVD. The overall prevalence of hypertension increases in relation to rising obesity levels. 41 In treated patients, obesity may be associated with a poor response to antihypertensive therapy 16 and can be a secondary cause of resistant hypertension. An epidemiological study found that as weight increased, greater levels of BP (from 145.5/84.5 to 149.5/89 mmHg), worse control of BP (from 29.6% to 15.4%) and a greater prevalence of metabolic syndrome (from 20.8% to 66.9%) were observed. 42 In addition, increases in both body mass index and abdominal obesity were associated with worse control of BP. Reducing abdominal obesity and achieving a healthy weight should be encouraged in all hypertensive patients.
High blood pressure in the elderly
Hypertension in the elderly deserves particular attention. Hypertension occurs in more than two-thirds of individuals after age 65 years, and this is the population with the lowest rates of BP control. Treatment recommendations for older people with hypertension, including those who have isolated systolic hypertension, should follow the same principles outlined for the general care of hypertension. However, when treating the elderly for hypertension, it is also necessary to consider the other medical conditions that they may have; some of these conditions may make the patients more prone to side effects from the medications. It is recommended that antihypertensive medications be started at low doses and increased slowly to avoid a too rapid or excessive lowering of blood pressure; however, standard doses and multiple drugs are needed in the majority of older people to reach appropriate blood pressure targets. 2
High blood pressure in sleep apnea
The seventh report of the Joint National Committee 2 identified obstructive sleep apnea as an important identifiable cause of hypertension. As many as half of all patients with sleep apnea may have underlying hypertension, and many patients with hypertension, particularly resistant hypertension, may have obstructive sleep apnea. In fact, there seems to be an interaction between obstructive sleep apnea severity and resistance to antihypertensive medications. The first-line treatment for obstructive sleep apnea-induced hypertension is treatment of the sleep apnea itself, together with antihypertensive medications as indicated. The most effective methods of treating sleep apnea include continuous positive airway pressure, postural adjustments and weight control. Tracheostomy is considered a last resort in difficult-to-treat, medically complicated obstructive sleep apnea. 43
High blood pressure in women
In women, use of oral contraceptive pills may increase blood pressure, and the risk of hypertension increases with duration of use. Therefore, in women taking oral contraceptives, blood pressure should be measured regularly. In women with hypertension or at risk of developing hypertension, other forms of contraception might be considered. In contrast, menopausal hormone therapy does not raise blood pressure.
Women with hypertension who become pregnant should be followed carefully because of increased risks to mother and fetus. Methyldopa, β-blockers, and vasodilators are preferred medications for the safety of the fetus. ACEIs and ARBs should not be used during pregnancy because of the potential risk for the fetus and should be avoided in women who are likely to become pregnant.
Preeclampsia, which occurs after the 20th week of pregnancy, is characterized by new-onset or worsening hypertension, albuminuria, and hyperuricemia, sometimes with coagulation abnormalities. In some patients, preeclampsia may develop into a hypertensive urgency or emergency and may require hospitalization, intensive monitoring, early fetal delivery, and parenteral antihypertensive and anticonvulsant therapy. 2
Hypertensive crisis
Definition
Hypertensive crisis is defined as acute (severe, rapid) elevation of BP, with a systolic blood pressure usually greater than 180 mmHg (in HTN emergency it is often higher than 220 mmHg) or a diastolic blood pressure greater than 110 mmHg (in HTN emergency it is often higher than 120 mmHg). The rapidity of the rise may be more important than the absolute level in producing acute vascular damage.
Hypertensive crisis is divided into two categories based on the presence or absence of target organ damage (TOD). When acute evidence of impending or ongoing damage of a target organ (i.e. cardiovascular, renal, central nervous system) is present, the condition is considered a hypertensive emergency and the rapid reduction of BP is indicated to minimize TOD. Examples include hypertensive encephalopathy, intracranial hemorrhage, acute myocardial infarction, unstable angina, acute left ventricular failure with pulmonary edema, dissecting aneurysm, acute renal failure and eclampsia of pregnancy.
If no acute clinical evidence of TOD is present in a patient with severely elevated BP, the condition is considered a hypertensive urgency. In this situation, evidence of pre-existing TOD is often present, but it is nonprogressive. Examples include upper levels of stage II hypertension (usually higher than 180/110 mmHg) associated with severe headache, shortness of breath, epistaxis, pedal edema or severe anxiety.
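The distinction drawn above between hypertensive emergency and urgency is essentially a two-variable rule: the severity of the BP elevation and the presence of acute target organ damage. A minimal illustrative sketch with thresholds and labels taken from the definitions above; the function itself is our own simplification (it ignores, for example, the rapidity of the rise) and is not a clinical tool:

def classify_crisis(systolic, diastolic, acute_target_organ_damage):
    """Classify a severe BP elevation using the definitions in the text."""
    crisis = systolic > 180 or diastolic > 110
    if not crisis:
        return "not a hypertensive crisis"
    if acute_target_organ_damage:
        return "hypertensive emergency: rapid, monitored BP reduction indicated"
    return "hypertensive urgency: lower BP over about 24 hours, usually as an outpatient"

# Example: BP 210/125 mmHg with acute pulmonary edema -> emergency.
print(classify_crisis(210, 125, acute_target_organ_damage=True))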
Accelerated hypertension is defined as a recent significant increase over baseline BP that is associated with TOD. This is usually seen as vascular damage on funduscopic examination, such as flame-shaped hemorrhages or soft exudates, but without papilledema. In contrast to hypertensive emergency, no evidence suggests a benefit from rapid and aggressive reduction in BP in patients with hypertensive urgency. In fact, such aggressive therapy may harm the patient, resulting in cardiac, renal, or cerebral hypoperfusion.
The primary goal of the emergency physician is to determine which patients with acute hypertension are exhibiting symptoms of end-organ damage and require immediate intravenous (IV) parenteral therapy. In contrast, patients presenting with acutely elevated BP without symptoms should have initiation of medical therapy and close follow-up in the outpatient setting. The emergency physician must be capable of appropriately evaluating patients with an elevated BP, correctly classifying the hypertension, determining the aggressiveness and timing of therapeutic interventions, and making disposition decisions.
Etiology
The rapidity of onset suggests a triggering factor superimposed on pre-existing hypertension. Most patients with a hypertensive crisis have previously diagnosed but uncontrolled (primary) hypertension and have recently or abruptly discontinued previously prescribed antihypertensive therapy, or adhere poorly to their medications. The lack of a primary care physician and failure to adhere to prescribed antihypertensive regimens are major risk factors for hypertensive emergencies.
History and physical examination
The symptoms and signs of hypertensive crises vary from patient to patient (Table 13). The most common presentations of hypertensive emergencies at an emergency room are neurologic symptoms and signs of hypertensive encephalopathy (headache, altered level of consciousness) and/or focal neurologic signs or cerebral infarction, acute left ventricular failure (pulmonary edema or merely dyspnea) and coronary ischemic syndromes (chest discomfort, acute myocardial infarction). In some patients, severe injury to the kidneys may lead to acute renal failure with oliguria and/or hematuria.
A patient with a hypertensive emergency almost always has retinal papilledema as well as flame-shaped hemorrhages and exudates.
BP must be checked in both arms to screen for aortic dissection. Furthermore, screen for carotid or renal bruits, palpate the precordium for a sustained left ventricular lift, and auscultate for a third or fourth heart sound or murmurs (Table 14).
Laboratory evaluation
A complete blood count and smear (to exclude a microangiopathic anemia), electrolytes, blood urea nitrogen, creatinine, urinalysis (to evaluate for renal impairment), and electrocardiogram should be obtained in all patients. Chest radiography should be obtained in patients with shortness of breath or chest pain, and a brain computed tomography scan should be obtained in patients with neurologic symptoms. In patients with unequal pulses and/or evidence of a widened mediastinum on the chest radiograph, a chest computed tomography or magnetic resonance imaging should be considered.
Medical management
An important point to remember in the management of a patient with any degree of BP elevation is to "treat the patient and not the number"; the Hippocratic edict "first, do no harm" is also advisable. Patients with hypertensive emergencies are usually admitted to a critical care unit for continuous cardiac monitoring, frequent assessment of neurologic status and urine output, and administration of intravenous antihypertensive medications and fluids.
On the other hand, patients with hypertensive urgencies do not mandate admission to the hospital. The goal of therapy in these cases is to reduce BP within 24 hours, which can be achieved as an outpatient.
The initial goal of treatment in hypertensive emergencies is to reduce the mean arterial pressure by no more than 25% within 30-60 minutes and then, if the patient is stable, to reach a goal BP of 160/100-110 mmHg within 2 to 6 hours. However, the BP should not be lowered to normal levels within the first 24 hours. 43 If this level of BP is well tolerated and the patient is clinically stable, further gradual reductions toward a normal BP can be implemented over the next 24-48 hours. There are exceptions to this rule (a short worked example of the mean arterial pressure arithmetic follows the exceptions):
1. Acute ischemic stroke: Excessive or rapid reductions in BP should be avoided in acute stroke. In patients with intracerebral hematomas, controlled lowering of the BP is currently recommended only when the SBP is > 200 mmHg or the DBP is > 110 mmHg. 44
2. Aortic dissection: In these patients, the SBP should be lowered to below 100-110 mmHg, if tolerated, within 10-20 minutes.
3. Patients who urgently need thrombolytic therapy must have their BP lowered to 160/110 mmHg or below before thrombolytics are initiated.
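The mean arterial pressure (MAP) goal in the general rule above can be made concrete with a short worked sketch. It uses the standard approximation MAP ≈ DBP + (SBP − DBP)/3; the function names and the example numbers are illustrative, not taken from the guideline:

def mean_arterial_pressure(systolic, diastolic):
    """Standard approximation: MAP = DBP + (SBP - DBP) / 3."""
    return diastolic + (systolic - diastolic) / 3.0

def initial_map_floor(systolic, diastolic, max_reduction=0.25):
    """MAP should fall by no more than 25% in the first 30-60 minutes (see text)."""
    map0 = mean_arterial_pressure(systolic, diastolic)
    return map0 * (1.0 - max_reduction)

# Example: a patient presenting at 220/130 mmHg.
map0 = mean_arterial_pressure(220, 130)   # 160.0 mmHg
floor = initial_map_floor(220, 130)       # 120.0 mmHg: do not go below this initially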
In hypertensive urgency, the goal of lowering BP by about 20% should be achieved over 24 to 48 hours, and it makes sense to initiate a medication that will be indicated for long-term use. Excessive falls in pressure that may precipitate renal, cerebral, or coronary ischemia should be avoided. For this reason, short-acting nifedipine (oral or sublingual) is no longer considered acceptable in the initial treatment of hypertensive emergencies or urgencies and is not recommended. Clonidine and angiotensin-converting enzyme inhibitors are long acting and poorly titratable, but these agents are particularly useful in the management of hypertensive urgencies.
Critical care nurses should monitor BP every 5 to 10 minutes until goals are reached.
As mentioned, most patients with hypertensive emergencies are volume depleted. After initial reduction of BP, gentle volume repletion with intravenous crystalloid will serve to restore organ perfusion and prevent the precipitous fall in blood pressure that may occur with antihypertensive therapy. 44 Oral therapies can be started as the intravenous agents are slowly titrated down.
For follow-up and reassessment, all patients should be seen again within 1-2 weeks to ensure that the BP is improving and there are no further complications of uncontrolled hypertension.
In pregnant patients, an SBP persistently ≥ 180 mmHg or a DBP persistently ≥ 110 mmHg (105 mmHg in some institutions) is considered a hypertensive crisis (hypertensive emergency) and requires immediate pharmacologic management using intravenous drugs. In the vast majority of cases, this process can only be terminated by delivery. 44 Before delivery, it is desirable to maintain the DBP above 90 mmHg in these patients, because this pressure allows adequate utero-placental perfusion and prevents acute fetal distress from progressing to in utero death or perinatal asphyxia. 44
Treatment
No single ideal agent exists. Drugs are chosen based on their rapidity of action, ease of use, special situations and convention, as well as the target organ that is involved. 44 For selected hypertensive emergency settings, the drugs of choice and target BP are presented in Table 15. All treatment agents, whether intravenous or oral, together with their doses, side effects and contraindications, are presented in the full report. 17
High blood pressure in perioperative patients
Perioperative hypertension is commonly encountered in patients who undergo surgery. Despite many attempts to standardize how intraoperative hemodynamics are characterized, methods still vary widely. In addition, there is a lack of consensus concerning treatment thresholds and appropriate therapeutic targets, which makes absolute recommendations about treatment difficult. Nevertheless, perioperative hypertension requires careful management. When treatment is necessary, therapy should be individualized for the patient.
The goal of controlling perioperative hypertension is to protect organ function, and treatment is currently recommended on the assumption that the risk of complications will be reduced and outcomes improved. However, the treatment of acute elevations in blood pressure (defined as an increase in systolic BP, diastolic BP, or mean arterial pressure by > 20% over baseline in the perioperative period) lacks a uniform approach. In general, the treatment goal should be based on the patient's preoperative BP. A conservative target would be approximately 10% above that baseline; however, a more aggressive approach to lowering blood pressure may be warranted for patients at very high risk of bleeding or with severe heart failure who would benefit from afterload reduction. Careful monitoring of the patient's response to therapy, and adjustment of treatment, are paramount to safe and effective treatment of perioperative hypertension. After surgery, the clinician can safely transition the patient to an effective oral antihypertensive regimen to manage the long-term risks of hypertension and cardiovascular disease. The ideal choices, methods of administration and new agents are explained in the full report. 17
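The two numerical rules above (an acute elevation defined as more than 20% over the preoperative baseline, and a conservative treatment target of roughly 10% above that baseline) can be expressed compactly. A minimal sketch with our own function names and example values:

def is_acute_elevation(current, baseline, threshold=0.20):
    """Acute perioperative elevation: SBP, DBP or MAP > 20% over baseline (see text)."""
    return current > baseline * (1.0 + threshold)

def conservative_target(baseline, margin=0.10):
    """Conservative treatment target: about 10% above the preoperative baseline."""
    return baseline * (1.0 + margin)

# Example: preoperative MAP 93 mmHg, intraoperative MAP 118 mmHg.
print(is_acute_elevation(118, 93))  # True (118 > 111.6)
print(conservative_target(93))      # 102.3 mmHg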
How to improve patient compliance
Poor adherence to prescribed therapies is common in patients with hypertension and should be considered in the evaluation of the hypertensive patient with poor blood pressure control. When initiating treatment in patients newly diagnosed with hypertension and when monitoring patients with existing disease, providers should identify barriers to medication adherence and actively engage patients in shared decision-making regarding their treatment. These activities will facilitate adherence, which may lead to improved outcomes for patients with hypertension and other chronic cardiovascular diseases. When possible, the following strategies may help improve patient compliance: simplifying the prescribed medication to once-daily dosing, replacing multiple-pill antihypertensive combinations with single-pill combinations, educating patients and their families about their disease and prescribed medication, and assessing adherence to pharmacological and nonpharmacological treatment at each office visit.
When to refer to specialists
Most patients can be effectively managed by general practitioners; some patients, however, should be referred to a specialist for further assessment and appropriate management. Patients with the following conditions are recommended to be referred to appropriate specialists (a minimal sketch of these criteria as a simple checklist is given after this list):
1. Patients with accelerated or malignant hypertension, in whom blood pressure is usually higher than 180/110 mmHg, with signs of papilledema and/or retinal hemorrhage.
2. Patients with resistant hypertension, in whom hypertension may often be associated with subclinical organ damage and added cardiovascular risk.
3. Patients in whom blood pressure goals are not achieved within 6 months, or in whom previously good control is lost.
4. Pregnant women with hypertension, who should be referred to an obstetrician for further management.
5. De novo onset of hypertension or aggravation of BP levels during the postpartum period, especially if there is significant proteinuria.
6. Patients with recent onset of target organ damage.
7. Children < 18 years old.
8. Younger patients (i.e. < 40 years) with uncomplicated stage 1 hypertension, who should be referred to a specialist for exclusion of secondary causes of hypertension and detailed evaluation of target organ damage.
9. Hypertensive patients who do not reach targets or whose HDL-cholesterol or triglyceride levels remain abnormal.
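As referenced above, the referral criteria amount to a simple any-of checklist. A minimal sketch; the flag names are our own shorthand for the numbered conditions and this is purely illustrative, not a decision tool:

def needs_specialist_referral(flags):
    """flags: dict of booleans mirroring the numbered referral criteria above."""
    criteria = [
        "accelerated_or_malignant_htn",   # 1
        "resistant_htn",                  # 2
        "goals_not_met_in_6_months",      # 3
        "pregnant",                       # 4
        "postpartum_onset_or_worsening",  # 5
        "recent_target_organ_damage",     # 6
        "age_under_18",                   # 7
        "under_40_uncomplicated_stage1",  # 8
        "persistent_lipid_abnormality",   # 9
    ]
    return any(flags.get(name, False) for name in criteria)

# Example: a 35-year-old with uncomplicated stage 1 hypertension.
print(needs_specialist_referral({"under_40_uncomplicated_stage1": True}))  # True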
Patient information
For patients, understanding why they need to monitor, prevent, and manage high blood pressure is a challenge. Since elevated BP does not usually produce any symptoms, it is not the kind of ailment, such as a painful condition, that prompts patients to see their physician. Because hypertension is a silent condition, physicians need to impress upon patients how it relates to cardiovascular risk and other diseases. A quick, simple and practical educational package targeting patients and their families accompanies these recommendations, with two main goals: 1. helping patients know their CVD risk factors 45 and 2. teaching methods of self-care for HTN patients and their families. 46 The whole educational package is introduced in the full report. 17 Even if you already have high blood pressure, healthy lifestyle measures can help you to control it better and maybe even reduce or remove the need for blood-pressure medication. Managing your blood pressure can also help you prevent cardiovascular disease and other health conditions so you can be healthier for longer.
"year": 2012,
"sha1": "f0dc0f676ee8f447014f2f36e1b07eb19e8a03a4",
"oa_license": "CCBYNC",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "f0dc0f676ee8f447014f2f36e1b07eb19e8a03a4",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
The Comparative Exploration of the Ice Giant Planets with Twin Spacecraft: Unveiling the History of our Solar System
In the course of the selection of the scientific themes for the second and third L-class missions of the Cosmic Vision 2015-2025 program of the European Space Agency, the exploration of the ice giant planets Uranus and Neptune was defined as "a timely milestone, fully appropriate for an L class mission". Among the proposed scientific themes, we presented the scientific case of exploring both planets and their satellites in the framework of a single L-class mission and proposed a mission scenario that could allow this result to be achieved. In this work we present an updated and more complete discussion of the scientific rationale and of the mission concept for a comparative exploration of the ice giant planets Uranus and Neptune and of their satellite systems with twin spacecraft. The first goal of comparatively studying these two similar yet extremely different systems is to shed new light on the ancient past of the Solar System and on the processes that shaped its formation and evolution. This, in turn, would reveal whether the Solar System and the very diverse extrasolar systems discovered so far all share a common origin or if different environments and mechanisms were responsible for their formation. A space mission to the ice giants would also open up the possibility of using Uranus and Neptune as templates in the study of one of the most abundant types of extrasolar planets in the galaxy. Finally, such a mission would allow a detailed study of the interplanetary and gravitational environments at a range of distances from the Sun poorly covered by direct exploration, improving the constraints on the fundamental theories of gravitation and on the behaviour of the solar wind and the interplanetary magnetic field.
Introduction
The planets of our Solar System are divided into two main classes: the terrestrial planets, populating the inner Solar System, and the giant planets, which dominate the outer Solar System. The giant planets, in turn, can be divided into the gas giants Jupiter and Saturn, whose mass is mostly constituted of H and He, and the ice giants Uranus and Neptune, whose bulk composition is instead dominated by the combination of the astrophysical ices H2O, NH3 and CH4 with metals and silicates. While H and He constitute more than 90% of the masses of the gas giants, they constitute no more than 15-20% of those of the ice giants (Lunine 1993). The terrestrial planets and the gas giants have been extensively studied with ground-based observations and with a large number of dedicated space missions. The bulk of the data on the ice giants, on the contrary, has been supplied by the NASA mission Voyager 2, which performed a fly-by of Uranus in 1986 followed by one of Neptune in 1989.
The giant planets likely appeared extremely early in the history of the Solar System, forming across the short time-span when the Sun was still surrounded by a circumstellar disk of gas and dust and therefore predating the terrestrial planets. The role of the giant planets in shaping the formation and evolution of the young Solar System was already recognized in the pioneering works by Oort and Safronov in the 1950s and 1960s. In particular, Safronov (1969) suggested that Jupiter's formation would have injected new material, in the form of planetesimals scattered by the gas giant, into the formation regions of Uranus and Neptune. More recently, a revised interpretation of planetary formation, obtained by studying extrasolar planetary systems, gave rise to the idea that the Solar System could have undergone a much more violent evolution than previously imagined (e.g. the Nice Model for the Late Heavy Bombardment), in which the giant planets played a major role in shaping the current structure of the Solar System. The scientific case of a space mission to both the ice giants Uranus and Neptune and their satellite systems and the associated mission concept were first illustrated in the white paper "The ODINUS Mission Concept" 1 submitted to the European Space Agency (ESA) in response to its call for white papers 2 for the definition of the scientific themes of the L2 and L3 missions. The purpose of this paper is to provide an updated and expanded discussion, building on the feedback the ODINUS white paper received from ESA and the scientific community at large, of this scientific case and of its relevance for advancing our understanding of the ancient past of the Solar System and, more generally, of how planetary systems form and evolve. While we will mainly focus on the open questions that the comparative exploration of the ice giants can address, to better illustrate the challenges presented by performing such a task within a single space mission and the feasibility of the proposed approach, we will also provide a concise but updated description of the ODINUS mission concept, based on the ideas discussed in the white paper and on the results of the subsequent interactions with ESA and the scientific community.
1 http://odinus.iaps.inaf.it or, on the ESA website, http://sci.esa.int/jump.cfm?oid=52030. The ODINUS acronym is derived from the main fields of investigation of the mission concept: Origins, Dynamics and Interiors of the Neptunian and Uranian Systems.
From the perspective of the ESA Cosmic Vision 2015-2025 program, the focus of such a mission and of this paper is on the first scientific theme, "What are the conditions for planetary formation and the emergence of life?" (see Fig. 1). The study of the formation of the Solar System, however, cannot be separated from that of its present state and of the physical processes that govern its evolution. In discussing the scientific case for a mission to the ice giants, therefore, we will also address the second and third scientific themes of the Cosmic Vision 2015-2025 program, i.e. "How does the Solar System work?" and "What are the fundamental physical laws of the Universe?" (see Fig. 1). In the following we will use these scientific themes of the ESA Cosmic Vision 2015-2025 program as the guideline to discuss the scientific case for a mission to the ice giants and their satellite systems (Sects. 2, 3 and 4). The ODINUS mission concept and the scientific rationale of its twin spacecraft approach will be discussed in Sect. 5, together with the preliminary assessment of its feasibility performed by ESA. Finally, in Sect. 6 we will summarize the outcomes of the selection of the scientific themes for the L2 and L3 missions by ESA, with a focus on the evaluation of the scientific relevance and timeliness of the exploration of the ice giants and on future prospects.
Theme 1: What are the conditions for planetary formation and the emergence of life?
In this section we will briefly summarize how our understanding of the processes of planetary formation has evolved across the years, discuss their chronological sequence for what concerns the Solar System and highlight how the exploration of Uranus, Neptune and their satellite systems can provide deeper insight and better understanding of the history of the Solar System and how it compares to what we learned from the extrasolar systems discovered to date. It must be noted that the present knowledge on this subject is limited by current observational capabilities and likely supplies only an incomplete view. However, it is not easy to provide a sense of how our knowledge of exoplanets will evolve by the time an L class mission to the ice giants will be launched (currently, no earlier than L4 or 2040). Future space telescopes like ESA M3 Plato and NASA Transiting Exoplanet Survey Satellite (TESS) will explore regions of the phase-space currently unreachable, making it difficult to predict whether the new exoplanets will conform to the partial picture we can draw so far or if we are going to be surprised once more. Concerning the characterization of exoplanets, the James Webb Space Telescope (JWST) will surely provide information on the atmospheric composition of several extrasolar planets but dedicated missions, like ESA M3 mission candidate Exoplanet Characterization Observatory (EChO), for the systematic investigation of atmospheric composition are not currently planned. For further discussion on the subject we refer the readers to Turrini, Nelson & Barbieri (2014) and references therein.
For a long time, the study of our own Solar System brought about the idea that planetary formation was a local, orderly process that produced regular, well-spaced and, above all, stable planetary systems and orbital configurations. However, with the discovery of more and more planetary systems and of free-floating planets (Sumi et al. 2011) through ground-based and space-based observations, it is becoming apparent that planetary formation can result in a wide range of outcomes, most of them not necessarily consistent with the picture derived from the observations of the Solar System. The orbital structure of the majority of the discovered planetary systems seems to be strongly affected by planetary migration due to the exchange of angular momentum with the circumstellar disks in which the forming planets are embedded (see e.g. Papaloizou et al. 2007 and references therein), and by the so-called "Jumping Jupiters" mechanism (Weidenschilling & Marzari 1996; Marzari & Weidenschilling 2002), which invokes multiple planetary encounters, generally after the dispersal of the circumstellar disk, with chaotic exchange of angular momentum between the different planetary bodies involved and the possible ejection of one or more of them.
The growing body of evidence that dynamical and collisional processes, often chaotic and violent, can dramatically influence the evolution of young planetary systems gave rise to the idea that also our Solar System could have undergone the same kind of evolution and represent a "lucky" case in which the end result was a stable and regular planetary system. The most successful attempt to date to apply this kind of evolution to the Solar System was the so-called Nice Model (Levison et al. 2011), a Jumping Jupiter scenario formulated to link the event known as the Late Heavy Bombardment (LHB in the following, see e.g. Hartmann et al. 2000 for a review) to a phase of dynamical instability involving all the giant planets. In the Nice Model, the giant planets of the Solar System are postulated to be initially located on a more compact orbital configuration than their present one and to interact with a massive primordial trans-Neptunian region. The gravitational perturbations among the giant planets are initially mitigated by the trans-Neptunian disk, whose population in turn is eroded. Once the trans-Neptunian disk becomes unable to mitigate the effects of the interactions among the giant planets, the orbits of the latter become excited and a series of close encounters takes place. The net result of the ensuing Jumping Jupiters process, in those scenarios that reproduce more closely the present orbital structure of the Solar System, is a small inward migration of Jupiter and marked outward migrations of Saturn, Uranus and Neptune (Levison et al. 2011).
The importance of the Nice Model lies in the fact that it strongly supports the idea that the giant planets did not form where we see them today or, in other words, that what we observe today is not necessarily a reflection of the Solar System as it was immediately after the end of its formation process. Particularly interesting in the context of the study of Uranus and Neptune is that, in about half the cases considered in the Nice Model scenario, the ice giants swapped their orbits. The success of the Nice Model in explaining several features of the Solar System opened the road to more extreme scenarios, also based on the migration of the giant planets and the Jumping Jupiters mechanism, either postulating the existence of a now lost fifth giant planet (Batygin et al. 2012; Nesvorny and Morbidelli 2012) or postulating an earlier phase of migration and chaotic evolution more violent and extreme than the one described in the Nice Model (Walsh et al. 2011). One of the most fascinating aspects of these scenarios is that they all invoke a certain degree of mixing of the solid materials that compose the Solar System. The mixing is generally larger the earlier the causing event occurs in the Solar System's lifetime. As an example, the "Grand Tack" scenario (Walsh et al. 2011) implies a much stronger remixing than the one that the LHB would cause in the framework of the Nice Model (see e.g. Levison 2009).
A more or less extensive migration of the giant planets is not required, however, to have a remixing of the solid material in the Solar Nebula. As the pioneering work of Safronov (1969) pointed out, the formation of Jupiter would scatter the planetesimals in its vicinity both inward and outward from its orbit (the "Jovian Early Bombardment" scenario, see Fig. 2 and Turrini et al. 2012; Turrini 2013; Turrini & Svetsov 2014). In particular, the outward flux of ejected material was postulated by Safronov (1969) to raise the density of solid material in the formation regions of Uranus and Neptune and increase their accretion rate to make it consistent with the lifetime of the Solar Nebula. The inward flux crosses the regions of the terrestrial planets and the asteroid belt, with potentially important implications for the collisional and compositional evolution of the inner Solar System (see Fig. 2 and Weidenschilling 1975; Weidenschilling et al. 2001; Turrini et al. 2012; Turrini 2013; Turrini & Svetsov 2014). The influence of Jupiter's formation, however, is not limited to the scattering of neighbouring planetesimals: the orbital resonances with the planet would extract planetesimals from more distant regions and put them on orbits crossing those of the other forming giant planets (see Fig. 2 and Weidenschilling et al. 2001; Turrini et al. 2012; Turrini 2013; Turrini & Svetsov 2014; Turrini, Nelson & Barbieri 2014).
Figure 2: Orbital distribution of the Solar Nebula 2×10^5 years after the beginning of the accretion of the nebular gas by Jupiter in the simulations performed by . The cases considered encompass the classical scenario with no migration (top left), moderate migration (0.25-0.5 au, top right and bottom left) and extensive migration (1 au, bottom right). Planetesimals that formed between 2 and 4 au are indicated in red, those that formed between 4 and 7 au in light blue and those that formed between 7 and 10 au in dark blue. The open circles are the positions of Jupiter at the beginning of the simulations, the filled ones are the positions of Jupiter once fully formed. The excited planetesimals outside 6 au represent the outward flux predicted by Safronov (1969).
One of the regions affected by the orbital resonances is the asteroid belt (see Fig. 2 and Turrini et al. 2012; Turrini, Nelson & Barbieri 2014): rocky material can therefore be extracted from the inner Solar System and, as in the original idea from Safronov (1969), possibly be accreted by the forming giant planets (see e.g. Nelson, Turrini and Barbieri 2013 and Turrini, Nelson and Barbieri 2014 for the case of Jupiter) or captured in their circumplanetary disks and incorporated in their satellites.
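The reach of Jupiter's mean-motion resonances mentioned above can be estimated from Kepler's third law: a body in a p:q resonance with Jupiter (completing p orbits for every q of Jupiter) sits at a semi-major axis a = a_J (q/p)^(2/3). A minimal sketch, assuming Jupiter at its present semi-major axis of 5.2 au; the set of resonances listed is illustrative:

A_JUPITER = 5.2  # au, assumed present-day semi-major axis

def resonance_location(p, q, a_planet=A_JUPITER):
    """Semi-major axis of the p:q interior mean-motion resonance (Kepler's third law)."""
    return a_planet * (q / p) ** (2.0 / 3.0)

# A few resonances that fall in and near the asteroid belt region:
for p, q in [(4, 1), (3, 1), (5, 2), (2, 1), (3, 2)]:
    print(f"{p}:{q} resonance at {resonance_location(p, q):.2f} au")
# 4:1 -> 2.06 au, 3:1 -> 2.50 au, 5:2 -> 2.82 au, 2:1 -> 3.28 au, 3:2 -> 3.97 au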
The Role of Ice Giants in Unveiling the Past of the Solar System
As discussed in the previous section, during its history the Solar System went through a series of violent processes that shaped its present structure. The main actors of these processes were the giant planets Jupiter and Saturn. Due to their smaller masses and their likely later formation, Uranus and Neptune were strongly affected by these very same processes together with the rest of the protoplanetary disk. In this section, we will reorganize the events discussed in Sect. 2.1 in chronological order and discuss their implications for Uranus and Neptune and their satellite systems. If we follow the description of the history of the Solar System by , we can divide it into three main phases: the Solar Nebula, the Primordial Solar System and the Modern Solar System. This schematic view of the evolution of the Solar System is summarized in Fig. 3, where we report the main events that took place across the different phases.
The Solar Nebula
From the point of view of the giant planets, the Solar Nebula (see Fig. 3) is the period across which they were forming in the circumsolar disk and migrating due to disk-planet interactions. While the gas giants Jupiter and Saturn are forming, the sudden increase of their gravitational perturbations (due to their rapid gas accretion phases) causes a sequence of bombardment events throughout the protoplanetary disk, which has been called the Primordial Heavy Bombardment. The prototype of the Primordial Heavy Bombardment is the Jovian Early Bombardment (see Fig. 2, Sect. 2.1 and Turrini et al. 2012; Turrini 2013; Turrini & Svetsov 2014), triggered by the formation of Jupiter, which was likely the first gas giant to form. The later formation of Saturn would cause a second, similar event, plausibly of lower intensity due to its smaller mass with respect to Jupiter. One of the consequences of the Primordial Heavy Bombardment is that, after the formation of the first giant planet, each successive giant planet forms from a more and more evolved and remixed disk, in which the abundances of the various elements and materials are different from the original ones, with implications for the rock/ice ratio and the ratio between different ices in the cores of the giant planets and in the material available for the forming satellites. Measuring the composition of the ice giants and their satellites, and in particular the abundances of noble gases and the isotopic ratios of the different elements, therefore provides a window on the dynamical evolution of the Solar System.
In the classical view of the formation of the Solar System (Safronov 1969), the migration of the giant planets due to their exchange of angular momentum with the circumsolar disk was absent and the main role in reshuffling the protoplanetary disk was played by the Primordial Heavy Bombardment. Recent results in the study of the implications of the Jovian Early Bombardment for the asteroid (4) Vesta (Turrini 2013; Turrini & Svetsov 2014) suggest, however, that this view should be amended and a moderate migration of Jupiter (of the order of 0.25-0.5 au) is required to fit the observational data. As shown in Fig. 2, even such a limited displacement of the giant planet would have implications for the reshuffling of the different materials in the Solar System. In the alternative scenarios we discussed in Sect. 2.1, the proposed extreme migration of the giant planets would have played a more significant role in the reshuffling of the different materials in the Solar System. Specifically, in the "Grand Tack" scenario (Walsh et al. 2011) the giant planets are hypothesized to migrate extensively across the Solar System. Their formation regions, in this case, would be markedly different from those assumed by the classical scenario (both in the case of a moderate migration and in that of a more marked migration like the one shown in Fig. 2) and the composition of their planetary cores would be affected by it.
Part of the planetesimals that the giant planets scatter or excite while forming and migrating would collide with the giant planets themselves, resulting in a "late veneer" of high-Z elements delivered into their atmospheres (Turrini, Nelson & Barbieri 2014). The contribution of high-Z elements provided by this phase of late accretion could have contributed to the super-solar abundances of C, N, S, Ar, Kr and Xe in the atmosphere of Jupiter measured by the probe released by the NASA mission Galileo and to those measured in the atmospheres of the other giant planets (see Wong et al. 2008 and Turrini, Nelson & Barbieri 2014 for a more in-depth discussion of the measured abundances and the proposed causes). All these remixing events, moreover, affect the source materials, captured in the form of planetesimals by the circumplanetary disks, from which the regular satellites of the giant planets can form (see for a review). Depending on the formation time of the relevant giant planet and on the amount of radiogenic sources incorporated in the rocky fraction of the source material, the regular satellites could already differentiate across this phase of the life of the Solar System. Finally, across the Solar Nebula phase a first generation of irregular satellites of the giant planets could have been captured from the protoplanetary disk due to collisions, the effects of gas drag or a combination of the two (see e.g. Mosqueira et al. 2010 for a discussion). This first generation of irregular satellites, however, would not survive the LHB, if the latter is associated with a dynamical instability of the outer Solar System like the one hypothesized by the Nice Model, and a second generation would be created by the LHB itself.
The Primordial Solar System
Somewhere between the Solar Nebula and the Primordial Solar System phases (see Fig. 3), two events contributed to shape the Uranian and Neptunian satellite systems. One was the giant impact of a planetary embryo with Uranus, suggested to be responsible for its 98° obliquity. As discussed by and references therein, it is possible that the original satellite system of the ice giant was destroyed during this event and new satellites formed from the debris of the original ones. The second event was the capture of Triton by Neptune and the following shrinking and circularization of its orbit, which caused the removal of most of the original regular satellites of the ice giant (see e.g. Mosqueira et al. 2010 for a discussion). Across these events and during the first 100 Ma of the life of the Solar System, the giant planets would continue perturbing the planetesimals and planetary embryos residing in the inner and outer Solar System: part of these perturbed bodies (a few per cent in the classical scenario, Guillot & Gladman 2000) would impact against the giant planets themselves, resulting in a secular phase of late accretion. The mass captured during this secular accretion appears to be of the same order of magnitude as that delivered by the Primordial Heavy Bombardment (Turrini, Nelson & Barbieri 2014).
Throughout the Primordial Solar System, the Nice Model predicts that the giant planets would still be on different, closer orbits with respect to their present ones. Once the dynamical instability responsible for the LHB takes place, icy planetesimals from what will become the trans-Neptunian region are more efficiently excited into high-eccentricity, giant planet-crossing orbits analogous to those of the present-day Centaurs. A fraction of these planetesimals will impact the giant planets (Matter et al. 2009), but their contribution to the late enrichment of their atmospheres is not enough to explain the currently observed abundances (Matter et al. 2009) and is more limited with respect to that of the planetesimals captured at earlier times (Guillot & Gladman 2000; Turrini, Nelson & Barbieri 2014). A fraction of these planetesimals will also impact the satellites of the giant planets, contributing to their contamination by exogenous material and possibly supplying energy for their late differentiation (Barr & Canup 2010). In particular, Barr & Canup (2010) argue that the LHB could cause the differentiation of Ganymede but not that of Callisto, in agreement with the available data on their internal structures. Another implication of the Nice Model is that any preexisting population of irregular satellites would be destroyed as a consequence of the close encounters between the giant planets. However, it has been shown that three-body effects between the giant planets and the planetesimals during the planetary encounters invoked by the Nice Model would naturally supply a way to re-populate the satellite systems of the giant planets with irregular satellites. It must be noted that these studies are based on the earlier formulation of the Nice Model and that the implications of its more recent formulations (Levison et al. 2011) are still to be addressed. Nevertheless, they show that the evolution of the Solar System across the Primordial Solar System phase could have had a non-negligible role in shaping the present-day Uranus and Neptune and their satellite systems.
The Modern Solar System
The Modern Solar System phase starts after the end of the LHB (see Fig. 3) and, differently from the previous two phases, is dominated by more regular, secular processes instead of violent ones. Moreover, the population of small bodies in the outer Solar System is significantly smaller than that at earlier times, so that collisional processes are less intense than before. Most of the information that we can gather through crater counting on the surfaces of the satellites of the giant planets refers to this long, more quiescent phase, especially if the satellites are still geophysically active and undergo resurfacing, as appears to be the case for Triton (see Schubert et al. 2010 for a discussion). In the case of geophysically active satellites, moreover, the surface features and composition supply us with information on their more recent internal state, i.e. they again give us insights into the processes that acted across the Modern Solar System phase. Depending on the degree of geophysical activity and the flux of impactors (both planetocentric, i.e. other satellites, and heliocentric, e.g. comets and Centaurs), the surfaces of the satellites can be contaminated to various degrees by exogenous materials (see e.g. Mosqueira et al. 2010; Schubert et al. 2010 for a discussion), an effect that has to be taken into account while interpreting spectral data, which typically probe just the first few mm of the satellite surfaces. Across the Modern Solar System, moreover, the secular effects of space weathering due to various exogenous sources (e.g. solar wind, magnetospheric plasma, cosmic rays) contribute to the surface evolution of the satellites in ways that are still poorly quantified or even understood.
The exploration of Uranus and Neptune and the history of the Solar System
As Sects. 2.1 and 2.2 highlight, our view of the processes of planetary formation and of the evolution of the Solar System has greatly changed over the last twenty years, but most of the new ideas are still in the process of growing to full maturity or need new observational data to test them against. The comparative study of Uranus and Neptune and their satellite systems will enable outstanding problems to be addressed, as the ice giants were affected more than the other planets by the violent processes that sculpted the early Solar System and yet they are the least explored and most mysterious of the giant planets. In particular, the exploration of Uranus and Neptune and of their satellite systems allows probing those phases of the life of the Solar System preceding the formation of the terrestrial planets, which completed their assembly only after a few 10⁷ years (see Fig. 3).
The primary pieces of information that a mission to Uranus and Neptune should gather to investigate the history of the Solar System are: What are the atmospheric composition and enrichment with respect to the solar abundances of the two planets? What are the bulk densities and the masses of the ice giants and their satellites? What are the interior structures and density profiles of the ice giants and their satellites? What is the surface composition of the regular and irregular satellites? Which satellites are fully or partially differentiated and which ones are undifferentiated? Using these data, the open questions that such a mission can help to answer are: When and where did the planets form? Did they migrate? If so, how much? Did Uranus and Neptune swap their positions as hypothesized by the Nice Model? What is the ice-to-rock ratio of the cores of the ice giants and of their satellites? How much "non-local" material was available to them when they formed? Where did this "non-local" material originate from? Are the satellites of Uranus primordial or did they reform after the planet tilted its spin axis?
What were the effects of the capture of Triton on the Neptunian satellites? Where did the irregular satellites originate? Can they be used to constrain the dynamical evolution of the ice giants? Note that the questions and the related measurements reported here do not aim to address all the possible investigations that a mission to the ice giants could perform, but focus on the primary driver of the proposed mission concept, i.e. the study of the past history of the Solar System. A discussion of several other measurements and studies that such a mission would allow, albeit non-exhaustive, is provided in Sect. 3.
Uranus and Neptune as templates for the extrasolar planets
As we detailed in Sect. 2.3, a mission to the ice giants has the potential to provide precious information on the history of our Solar System and on the processes that shaped its formation and evolution. Moreover, the study of the ice giants is also important to gain deeper insight into one of the most abundant classes of extrasolar planets according to the observational sample to date. Based on the data supplied by the NASA mission Kepler, once corrected for selection effects, about one star out of five in our galaxy should possess at least a Neptune-like planet (Fressin et al. 2013). While there is a growing amount of effort devoted to the characterization of the atmospheric composition of giant exoplanets with ground-based or space-based facilities (see e.g. Turrini, Nelson & Barbieri 2014 and references therein), the only observational ground-truth we possess on this class of planets, especially from the point of view of their interiors, is represented by the observations performed by Voyager 2 during its fly-bys of Uranus in 1986 and of Neptune in 1989, and by ground-based observations that, however, cannot supply the same coverage over all phase angles and observing geometries and cannot achieve the same spatial resolution as the one obtained from a spacecraft.
It is important to point out that the Neptune-like candidates discovered by Kepler so far have orbital periods of less than about 1 year: they are therefore characterized by orbits between one and two orders of magnitude closer to their host stars than their solar counterparts Uranus and Neptune. Because of this, these exoplanets are generally classified as "warm" or "hot" Neptunes depending on their atmospheric temperatures. Their atmospheric composition and meteorology are expected to be extremely different from those of the ice giants in the Solar System; currently, however, data are available only for the atmospheric composition of one exo-Neptune (source: www.exoplanet.eu), GJ 436 b (Madhusudhan & Seager 2011). Nevertheless, Uranus and Neptune are the only examples of this class of planets within the reach of a space mission and can represent the templates to interpret the data that present and future missions, devoted to the discovery and characterization of exoplanets, will gather. From this point of view, it is particularly important to characterize both ice giants in the Solar System and not just one of them, as we presently don't know whether extreme obliquities like that of Uranus are common or not from a galactic perspective. The study of Uranus and Neptune can therefore provide a key to identify similar configurations in other planetary systems and properly interpret them.
Theme 2: How does the Solar System work?
A mission devoted to exploring the ice giants and their satellites to unveil the history of the Solar System would gather a wealth of data on the present status of the Uranian and Neptunian systems. The collected data would enable a more complete understanding of how the surfaces and interiors of icy satellites evolve so far from the Sun. Moreover, the coupled investigation of Uranus and Neptune, so similar and yet so different, would provide fundamental new insights into the cause of their different atmospheric and thermal behaviours.
Atmospheres of Uranus and Neptune
The Herschel observations of Uranus and Neptune (Feuchtgruber et al., 2013) confirmed that the ice giants have a remarkably similar D/H content (4.4±0.4×10⁻⁵ and 4.1±0.4×10⁻⁵ respectively), suggesting a common source of icy planetesimals in the protoplanetary disk. Further insight on the conditions of the disk in its outer regions can be derived from the relative enrichment (with respect to the Solar values) of C, N, S and O, by determination of the abundances of the corresponding reduced forms. To date, methane is still the only one of these reduced forms that has been directly detected in both ice giants (e.g. Baines et al., 1994). Recent analyses by Karkoschka and Tomasko (2009) and Tice et al. (2013) indicate that the methane mixing ratio varies with latitude. An extensive investigation of the minor gases in the atmospheres of the ice giants (with special attention to their horizontal and vertical variations) is therefore extremely urgent to ultimately characterize the emergence of our Solar System.
The post-Voyager 2 observations of Uranus by ground-based and space telescopes revealed a progressive increase of meteorological activity (cloud and dark spot occurrence) in the proximity of the Northern Spring equinox (see, e.g. Sromovsky et al., 2012). While this evolution is undoubtedly related to the extreme obliquity of the planet, the relative roles of solar illumination and internal heating (and its possible variations) remain to be assessed by detailed studies at high spatial resolution. Even more importantly, spacecraft infrared observations will provide an extensive coverage of the night hemisphere. The possibility of comparing the atmospheric behaviour of Uranus with the extremely dynamic meteorology of Neptune provides a unique opportunity to gain insights into the response of thick atmospheres to time-variable forcing, representing therefore a new testing ground for future atmospheric global circulation models, in conditions not found in terrestrial planets or gas giants.
Uranus' zonal winds are currently characterized by moderately retrograde values (-50 m s⁻¹) at the equator that progressively become prograde, reaching a maximum value of 200 m s⁻¹ at 50°N (Sromovsky et al., 2012). On Neptune, a similar pattern is observed, but the absolute speed values are strongly amplified, reaching (despite the limited solar energy input) some of the most extreme values (400 m s⁻¹ or more) observed in the Solar System (Martin et al., 2012). Wind speed fields are the most immediate proxy for atmospheric circulation and their modeling can provide constraints on very general properties of the atmosphere, such as the extent of deep convection (Suomi et al., 1991). While ground-based observers have considerably expanded the results of Voyager 2, an extensive, long-term, high spatial resolution cloud tracking campaign remains essential to study the ultimate causes of these extreme phenomena.
The patterns of the zonal winds of the ice giants as revealed by the available data are also noteworthy for their lack of coherence (variation of absolute values, high dispersions and differences in results from different spectral bands) when compared to the Jupiter and Saturn cases (see Hammel et al., 2001 and Hammel et al., 2005 for Uranus, Sromovsky et al., 1993 and Fitzpatrick et al., 2014 for Neptune). The assessment of the relative roles of different phenomena (such as vertical wind shear, transient clouds due to dynamically driven sublimation and condensation, and temporal variations on different time scales) will greatly benefit from long-term, high spatial resolution monitoring.
Neptune shows an unexpected temperature of 750 K in its stratosphere (Broadfoot et al., 1989) that cannot be justified by the small solar UV flux available at that heliocentric distance. More complex mechanisms, such as energy exchange with magnetospheric ions (Soderlund et al. 2013), must become predominant in these regions. Uranus, on the other hand, offers unique magnetospheric geometries because of its high obliquity and the strong inclination of its magnetic axis (see also Sect. 3.3).
The satellites of Uranus and Neptune
The geological history and the composition of the satellites of Uranus and Neptune are poorly known due to the limited resolution and surface coverage of the Voyager 2 observations. The Uranian satellites Ariel and Miranda showed a complex surface geology, dominated by extensional tectonic structures plausibly linked to their thermal and internal evolution (see Prockter et al. 2010 and references therein). Umbriel appeared featureless and dark, but the analysis of the images suggests an ancient tectonic system (see Prockter et al. 2010 and references therein). Little is known about Titania and Oberon, as the resolution of the images taken by Voyager 2 was not enough to distinguish tectonic features, but their surfaces both appeared to be affected by the presence of dark material. The partial coverage of the surface of Triton revealed one of the youngest surfaces of the Solar System, suggesting the satellite is possibly more active than Europa (see Schubert et al. 2010 and references therein). Notwithstanding this, the surface of Triton showed a variety of cryovolcanic, tectonic and atmospheric features and processes (see Prockter et al. 2010 and references therein). The improved mapping of these satellites, both in terms of coverage and resolution, would enable much improved measurements of their crater records and their surface morphologies, which in turn would provide a deeper insight into their past collisional and geophysical histories.
From the point of view of their surface composition, the Uranian satellites are characterized by the presence of crystalline H₂O ice (see Dalton et al. 2010 and references therein). The spectral features of Ariel, Umbriel and Titania also showed the presence of CO₂ ice, which however should be unstable over timescales of the order of the life of the Solar System, while CO₂ ice was not observed on Oberon (see Grundy et al. 2006; Dalton et al. 2010 and references therein). In the case of Miranda, the possible presence of ammonia hydrate was observed, but both the presence of the spectral band and its interpretation are to be confirmed (see Dalton et al. 2010 and references therein). The confirmation of the presence of ammonia would be of great importance due to its antifreeze role in the satellite interiors. The spectra of Triton possess the absorption bands of five ices: N₂, CH₄, CO, CO₂, and H₂O (Dalton et al. 2010). The detection of the HCN ice band has been reported, which could imply the presence of more complex materials of astrobiological interest (see Dalton et al. 2010 and references therein). Triton also possesses a tenuous atmosphere mainly composed of N₂ and CO, which undergoes seasonal cycles of sublimation and re-condensation (see Dalton et al. 2010 and references therein). Images taken by Voyager 2 revealed active geyser-like vents on the surface of Triton, indicating that the satellite is still geologically active (even if at present it is not tidally heated, see Schubert et al. 2010 and references therein) and, similarly to the Saturnian satellite Enceladus (Spencer et al. 2009 and references therein), possesses liquid water in its interior, sharing its astrobiological potential as a possible sub-surface habitable environment.
Both Uranus and Neptune possess a family of irregular satellites. Neptune, in particular, possesses the largest irregular satellite (not counting Triton) in the Solar System, i.e. Nereid. Aside from their estimated sizes and the fact that observational data suggest they might be more abundant than those of Jupiter and Saturn (Jewitt and Haghighipour 2007), almost nothing is known of these bodies. The collisional evolution of the irregular satellites results in the secular production of dust, as supported by observational data in the Jovian and Saturnian systems (Krivov et al. 2002, Verbiscer et al. 2009). Depending on their sizes, the non-gravitational forces can either strip away the dust particles from their planetocentric orbits or cause them to spiral inward and impact with the regular satellites or the planets (see Schubert et al. 2010 and references therein; see also Tosi et al. 2010 and Tamayo et al. 2011 for the specific case of the Saturnian system). In the case of the Uranian system, Tamayo et al. (2013) recently showed that the latter effect would affect the surfaces of the four outermost regular satellites (due to the dynamical instability caused by the obliquity of Uranus) and could explain the increasing trend of leading-trailing color asymmetries of the hemispheres of the satellites with planetocentric distance observed by Buratti and Mosher (1991). The study of the irregular satellites would therefore constrain the origin of the dark material, and likely other contaminants, observed on some of the Uranian satellites and discriminate whether it originated from the irregular satellites or whether it was the result of local (e.g. the interaction with the magnetosphere) or endogenous processes.
Magnetosphere-Exosphere-Ionosphere coupling in the Uranian and Neptunian systems
Neptune and Uranus have strong non-axial multipolar magnetic field components compared with the axial dipole component (Connerney et al., 1991; Herbert, 2009). The magnetic fields of both planets are generated in the deep, electrically conducting regions of their interiors, i.e. in electrolyte layers composed of water, methane and ammonia (Hubbard et al., 1991; Nellis et al., 1997) or superionic water (Redmer et al., 2011). Numerous modelling efforts have shown that the mechanism of a dynamo operating in a thin shell surrounding a stably-stratified fluid interior produces magnetic field morphologies similar to those of Uranus and Neptune (Hubbard et al., 1995; Holme and Bloxham, 1996; Stanley and Bloxham, 2006). In addition, Gómez-Pérez and Heimpel (2007) showed that weakly dipolar and strongly tilted dynamo fields are stable in the presence of strong zonal circulation and when the flow has a dominant effect over the magnetic fields. Guervilly et al. (2012) proposed that if some mechanism is able to transport angular momentum from the surface down to the deep, fully conducting region, then the zonal motions may influence the generation of the magnetic field. Such zonal jets at the giant planets may exert, by viscous or electromagnetic coupling, an external forcing at the top of the deeper conducting envelope. The model by Guervilly et al. (2012) assumes an idealized one-way coupling between the outer and deep regions, assuming a constant (throughout the whole modeled layer) conductivity and ignoring the back reaction of the deep layer onto the outer layer. In order to assess the role of zonal winds in the generation and topology of the magnetic fields of Uranus and Neptune, the determination of the compressibility of the layers, of the radial profiles of the electrical conductivity, of the viscosity and of the viscous coupling between electrically insulating and conducting regions is necessary. These quantities cannot be estimated from direct measurements. Nonetheless, the proposed mission concept can provide key constraints for further modeling efforts by characterizing the longitudinal profile of the zonal winds at the cloud top and their possible secular variations (by means of visible and IR imaging), the magnetic field and the gravitational field. Namely, the orbits of the two spacecraft can be optimized to allow determination of the gravity fields at least up to order 12, to assess the scale height of the exponential decay of the zonal winds along the rotation axis (Kaspi et al., 2010), which constrains the degree of dynamic coupling between surface and interior.
The highly non-symmetric internal magnetic fields of Uranus and Neptune (Ness et al. 1986, 1989; Connerney et al. 1991; Guervilly et al., 2012), coupled with the relatively fast rotation and the unusual inclination of the rotation axes from the orbital planes, imply that their magnetospheres are subject to drastic geometrical variations on both diurnal and seasonal timescales. The relative orientations of the planetary spin and magnetic dipole axes and the direction of the solar wind flow determine the configuration of each magnetosphere and, consequently, the plasma dynamics in these regions.
Due to the planet's large obliquity, Uranus' asymmetric magnetosphere varies from a pole-on to an orthogonal configuration during a Uranian year (84 Earth years) and changes from an "open" to a "closed" configuration during a Uranian day. At solstice (when Uranus' magnetic dipole simply rotates around the vector of the direction of the solar wind flow) plasma motions driven by the rotation of the planet and by the solar wind are effectively decoupled (Selesnick and Richardson, 1986; Vasyliunas, 1986). Moreover, the Voyager 2 plasma observations showed that when the Uranus dipole field is oppositely directed to the interplanetary field, injection events to the inner magnetosphere (likely driven by reconnection every planetary rotation period) are present (Sittler et al., 1987). The time-dependent modulation of the magnetic reconnection sites, the details of the solar wind plasma entry into the inner magnetosphere of Uranus and the properties of the plasma precipitation to the planet's exosphere and ionosphere are unknown. Models indicate that Uranus' ionosphere is dominated by H⁺ at higher altitudes and H₃⁺ lower down (Capone et al., 1977; Chandler and Waite, 1986; Majeed et al., 2004), produced by either energetic particle precipitation or solar ultraviolet (UV) radiation. Our current knowledge of the aurora of Uranus is limited since it is based only on 1) a spatially resolved observation of the UV aurora (by the Ultraviolet Spectrograph on board Voyager 2, Herbert 2009), 2) observations of the FUV and IR aurora with the Hubble Space Telescope (Ballester, 1998), and 3) observations from ground-based telescopes (e.g., Trafton et al., 1999). The details of the solar wind plasma interaction with the planet's exosphere, ionosphere and upper atmosphere (possibly through charge exchange, atmospheric sputtering, or pick-up by the local field), the seasonal and diurnal variation of the efficiency of each mechanism as well as the total energy balance (deposition/loss) due to magnetosphere-exosphere-ionosphere coupling are unknown. Since the exact mechanism providing the required additional heating of the upper atmosphere of Uranus is also unknown, new in situ plasma and energetic neutral particle observations could become of particular importance to determine the extent to which plasma precipitation to the exosphere plays a key role in this context. The magnetospheric interaction with the Uranian moons and rings can be studied through in situ measurements of the magnetic field, charged particles, and energetic neutrals emitted from the surfaces. Finally, remote imaging of charge exchange energetic neutral atoms (ENAs) would offer a unique opportunity to monitor the plasma circulation where moons and/or Uranus' exosphere are present.
Neptune's magnetic field (Ness et al., 1989; Connerney et al. 1991) has a complex geometry that includes relatively large contributions from localized sources or higher order magnetic multipoles, or both, which are not yet well determined (Ness et al. 1989). Neptune is a relatively weak source of auroral emissions at UV and radio wavelengths (Broadfoot et al., 1989; Bishop et al., 1995; Zarka et al., 1995). Although this non-observation does not rule out an active magnetosphere per se, it ruled out processes similar to those associated with the aurora observed at Uranus. Whereas the plasma in the magnetosphere of Uranus has a relatively low density and is thought to be primarily of solar-wind origin, at Neptune the distribution of plasma is generally interpreted as indicating that Triton is a major source (Krimigis et al., 1989; Mauk et al., 1991, 1994; Belcher et al., 1989; Richardson et al., 1991). The escape of neutral hydrogen and nitrogen from Triton maintains a large neutral cloud (the Triton torus) that is believed to be a source of neutral hydrogen and nitrogen (Decker and Cheng, 1994). The escape of neutrals from Triton could be an additional plasma source for Neptune's magnetosphere (through ionization). Our knowledge of the plasma dynamics in the magnetosphere of Neptune as well as of the neutral particle production in Triton's atmosphere is limited. New in situ plasma and energetic neutral particle observations focused on Triton's region can provide important information on the role of the combined effects of photoionization, electron impact ionization, and charge exchange in the context of the coupling of a complex asymmetric planetary magnetosphere with a satellite exosphere at large distances from the Sun.
Planetary and satellite interiors
The available constraints on interior models of Uranus and Neptune are limited. The gravitational harmonics of these planets have been measured only up to fourth degree (J₂, J₄), and the planetary shapes and rotation periods are not well known (see e.g. Helled et al. 2011 and references therein). The response coefficients of Uranus and Neptune suggest that the latter is less centrally condensed than the former (De Pater and Lissauer 2010).
The thermal structures of these planets are also intriguing (see e.g. Helled et al. 2011 and references therein). Uranus stands uniquely among the outer planets for the extremely low value (0.042±0.047 W m⁻²) of its internal energy flux (Pearl et al., 1990). This figure sharply contrasts with Neptune, for which Voyager 2 determined a value of 0.433±0.046 W m⁻² (Pearl et al., 1991). The two ice giants must therefore differ in their internal structure, heat transport mechanisms, and/or formation history. Substantial differences in internal structure are suggested by the analysis of the available gravitational data for the two planets. Namely, the Uranus gravity data are compatible with layered convection in the shell, which inhibits the transport of heat. Alternative views call, among others, for a later formation age of Neptune (Gudkova et al., 1988). Consequently, heat fluxes represent, along with gravity, magnetic data and wind fields (Soderlund et al. 2013), the key experimental constraints to characterize the interiors of Uranus and Neptune and their evolution.
The information on the interior structure of the satellites of Uranus and Neptune is even more limited and is mostly derived from their average densities, which are used to infer the rock-to-ice ratios, and their surface geology, which suggests that across their lives they possessed partially or completely molten interiors (De Pater and Lissauer 2010). As a consequence, the data that can be collected by a mission to the ice giants on their interiors will play an important role in filling up this gap in our understanding of the icy satellites in the outer Solar System.
Gravity data can indeed be used to constrain the internal structure and composition of the planets. The gravitational potential due to a body with rotational symmetry can be represented by a harmonic expansion of the type
U(r, θ) = (GM/r) [1 − Σₙ (R/r)²ⁿ J₂ₙ P₂ₙ(cos θ)] + ½ ω² r² sin²θ
(see e.g. Helled et al. 2011), where the sum runs over n ≥ 1, r, θ, φ are spherical coordinates (with θ the colatitude), G is the Newtonian gravitational constant, M the mass of the primary body, R its reference equatorial radius, ω its rotational angular velocity and P₂ₙ the Legendre polynomials. The specific potential depends on the zonal coefficients J₂ₙ. Such deviations of the primary body's gravitational field from spherical symmetry (due to its rotational state and internal structure and composition) perturb the orbit of the spacecraft and can be extracted via a precise orbit determination and parameter estimation procedure from the tracking data (usually range and range-rate in a typical radio science experiment). Fundamental to this objective is a proper modelling of the spacecraft dynamics, both gravitational and non-gravitational. This could be non-trivial in the case of a complex spacecraft (the ideal would be a test mass) and, in selected cases, could also require the use of an on-board accelerometer (Iafolla et al., 2010). In the case of Uranus, measurements of the precession of its elliptical rings should add to the list of observables. Of course, this investigation of the internal structure of the primaries can also be extended to their satellites. Indeed, selected fly-bys of the satellites will allow for the determination of their gravitational coefficients and, at least, of their lowest-degree multipoles. The set of estimated parameters could also include the masses of the planets or of their satellites (see the right-hand side of the previous equation). This is a measurement that is difficult to perform remotely, and a direct result of having a probe orbiting the various bodies of the system.
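To make the role of the zonal coefficients more concrete, the following minimal sketch (in Python, not part of the original white paper) evaluates the truncated expansion above; every physical value in it is a rough, roughly Uranus-like placeholder assumption rather than a measured coefficient.

```python
# Illustrative sketch of the truncated zonal-harmonic potential discussed above.
# All values below are assumed placeholders (approximately Uranus-like), not the
# coefficients a real radio science experiment would estimate.
import numpy as np
from scipy.special import eval_legendre

G = 6.674e-11                    # m^3 kg^-1 s^-2
M = 8.68e25                      # kg, approximate planetary mass
R = 2.556e7                      # m, assumed reference equatorial radius
omega = 1.01e-4                  # rad s^-1, assumed rotation rate
J = {2: 3.3e-3, 4: -3.0e-5}      # assumed low-degree zonal coefficients J_2n

def potential(r, colat):
    """Potential per unit mass at radius r (m) and colatitude colat (rad),
    truncated at the degrees for which a J_2n value is supplied."""
    mu = np.cos(colat)
    series = sum((R / r) ** deg * Jn * eval_legendre(deg, mu)
                 for deg, Jn in J.items())
    return (G * M / r) * (1.0 - series) + 0.5 * omega ** 2 * r ** 2 * np.sin(colat) ** 2

# Equator-to-pole difference of the potential on the reference sphere: a crude
# illustration of how rotation and the J_2n terms distort the equipotentials.
print(potential(R, np.pi / 2) - potential(R, 0.0))
```

In a real radio science experiment the J₂ₙ values would of course be the estimated parameters, recovered from the tracking residuals rather than assumed.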
An alternative and complementary method to probe the internal structures of Uranus and Neptune consists of using seismic techniques that were developed for the Sun (helioseismology, see e.g. Goldreich & Keeley 1977), then successfully applied to stars with the ESA space mission CoRoT and the NASA space mission Kepler (Michel et al. 2008, Borucki 2009), and tested on Jupiter (Gaulme et al. 2011). Seismology consists of identifying the acoustic eigenmodes, whose frequency distribution reflects the interior sound speed profile. The main advantage of seismic methods with respect to gravity moments is that waves propagate down to the central region of the planet, while gravitational moments are mainly sensitive to the outer 20% of the planetary radius. The second advantage is that the inversion problem is not model dependent: it depends on neither the equation of state nor the abundances that we want to measure. As regards Uranus and Neptune, the difference in internal energy flux should appear as a difference in the amplitude of the acoustic modes. Moreover, a by-product of the seismological observations is the map of the wind fields in the atmospheres of the giant planets (Schmider et al. 2009; Murphy et al. 2012), which as mentioned previously provides additional constraints on the interior structure of the planets themselves (Soderlund et al. 2013). As for helioseismology, two approaches may be used to perform such seismic measurements, either Doppler spectro-imaging (e.g. Schmider et al. 2007) or visible photometry (Gaulme & Mosser 2005). A dedicated study needs to be conducted to determine whether a seismological investigation is feasible in the framework of the mission concept described in Sect. 5 and, if so, which method is the most appropriate for these two planets.
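As a rough illustration of the kind of quantity such measurements target, the sketch below (a toy example under stated assumptions, not taken from the text) computes the asymptotic large frequency separation of low-degree acoustic modes, Δν ≈ [2∫dr/c(r)]⁻¹, for a made-up monotonic sound-speed profile; a real application would use c(r) from interior models of Uranus or Neptune.

```python
# Toy sketch of the asymptotic large frequency separation of p modes,
# Delta_nu ~ 1 / (2 * integral_0^R dr / c(r)), for a made-up sound-speed profile.
# The radius and sound speeds are placeholders, not a real Uranus/Neptune model.
from scipy.integrate import quad

R_planet = 2.47e7                       # m, roughly Neptune-sized radius (assumed)
c_center, c_surface = 2.0e4, 1.0e3      # m/s, assumed central and near-surface sound speeds

def sound_speed(r):
    """Toy linear profile: fast in the deep interior, slower near the top."""
    x = r / R_planet
    return c_center * (1.0 - x) + c_surface * x

acoustic_radius, _ = quad(lambda r: 1.0 / sound_speed(r), 0.0, R_planet)  # seconds
delta_nu = 1.0 / (2.0 * acoustic_radius)                                  # Hz
print(f"Large frequency separation ~ {delta_nu * 1e6:.0f} microHz")
```

The observable would then be a comb of modes separated by roughly this amount in the Doppler or photometric power spectrum, while the difference in internal energy flux between the two planets would show up mainly in the mode amplitudes.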
Heliosphere science
During the mission cruise phase, it will be possible to obtain important information on the properties of the interplanetary medium at different distances from the Sun as well as on the heliospheric structure and its interactions with the interstellar medium. Although there is plenty of information on how the solar wind and coronal mass ejections interact with the interplanetary medium at 1 au from the Sun, little is known about how this interaction works at larger distances. The measurements of the interplanetary magnetic field fluctuations and plasma density variations at different distances from the Sun, such as those that a mission to the ice giants would allow, can provide information for understanding the origin of turbulence in the solar wind and its evolution from its source to the heliopause. A mission to the ice giants, therefore, would give an opportunity to obtain constraints on the processes of energy transfer in different regions of the Solar System and to understand the mechanisms of energy dissipation.
In order to answer a series of fundamental questions concerning particle acceleration in the Solar System, the galactic cosmic ray modulation and the interaction of plasma with planetary bodies, it is important to have knowledge of the overall structure of the heliosphere. Prevailing models of the shape of the heliosphere suggest a cometary-type interaction with a possible bow shock and/or heliopause, heliosheath, and termination shock (Axford, 1973; Fichtner et al., 2000). However, recent energetic neutral atom images obtained by the Ion and Neutral Camera (INCA) onboard the NASA spacecraft Cassini did not conform to these models. Specifically, the map obtained by Cassini/INCA revealed a broad belt of energetic protons with a non-thermal pressure comparable to that of the local interstellar magnetic field. In October 2008, the NASA mission Interstellar Boundary Explorer (IBEX) was launched with energetic neutral atom cameras specifically designed to map the heliospheric boundary at lower (<6 keV) energies (McComas et al., 2009; Funsten et al., 2009). Both IBEX and INCA identified in the energetic neutral atom images dominant topological features (a ribbon or belt) that can be explained on the basis of a model that considers an energetic neutral atom-inferred non-thermal proton pressure filling the heliosheath from the termination shock to the heliopause.
ENA imaging is a promising technique for remote imaging of the heliospheric boundary. Hydrogen ENAs are generated in the heliosheath through charge exchange between the shocked solar wind protons and the cold neutral interstellar hydrogen gas. The shocked protons in this region are mostly isotropic and some fraction of the resulting ENAs will propagate radially inwards, unimpeded by the interplanetary magnetic field (Hsieh et al., 1992; Gruntman et al., 2001). Synchronized ENA observations with the dual spacecraft of the proposed mission concept will provide a mapping of the shocked solar wind protons, and will reveal information on the heliosheath structure and the properties of the complex interstellar interaction. The proposed measurements of the heliosheath structure, to be performed from the Uranus and Neptune orbits, address only additional science objectives rather than the primary ones. However, such measurements could still be of significant interest, since they would complement the IBEX observations by extending them to a different solar cycle and possibly with a better angular resolution, since the spacecraft will be closer to the heliopause. Moreover, in case the spacecraft arrives when Uranus is on the heliotail side, the expected angular separation between the primary and secondary oxygen populations will be higher than the one at 1 au. As a result, these two populations could be discriminated with higher accuracy than in the Earth's orbit case (McComas et al., 2009; Möbius et al., 2009). Different models of the interaction of the solar wind with the interstellar medium could be constrained by ENA observations over a wide energy range encompassing the IBEX and INCA ranges. Different vantage points for ENA imaging could also be useful to reconstruct the ENA generation geometries.
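In the optically thin limit commonly used for such estimates, the ENA intensity along a line of sight is roughly the parent proton intensity times the charge-exchange cross-section times the neutral hydrogen column; the sketch below is a minimal illustration with assumed, order-of-magnitude numbers, not heliosheath measurements.

```python
# Minimal sketch of the optically-thin ENA line-of-sight estimate,
# j_ENA(E) ~ integral of j_proton(E, s) * sigma_cx(E) * n_H(s) ds.
# All numbers are illustrative, assumed placeholders.
sigma_cx = 2.0e-19        # m^2, assumed H charge-exchange cross-section at ~1 keV
n_H = 1.0e5               # m^-3, assumed neutral interstellar H density (~0.1 cm^-3)
j_proton = 1.0e4          # m^-2 s^-1 sr^-1 keV^-1, assumed shocked proton intensity
au = 1.496e11             # m
path_length = 40 * au     # assumed emitting column through the heliosheath

# With constant values the integral collapses to a product; with radial
# profiles it would become a numerical integral along the line of sight.
j_ena = j_proton * sigma_cx * n_H * path_length
print(f"ENA intensity ~ {j_ena:.1e} per (m^2 s sr keV)")
```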
The design of the ENA cameras is intended to meet the requirements for measuring the ENAs generated in the heliosheath. Combined ENA and magnetic field measurements at the orbits of Uranus and Neptune will provide complementary information (to the one obtained from the Earth's orbit) for addressing the question whether the interaction of the heliosphere with the interstellar magnetic field takes place at the termination shock or at the heliopause.
Theme 3: What are the fundamental physical laws of the Universe?
Since the early days of interplanetary exploration, spacecraft have been used as (nearly ideal) test masses to probe the gravitational machinery of the Solar System and, more generally, as a test for fundamental physics. Though general relativity is currently regarded as a very effective description of gravitational phenomena, having so far passed all the experimental tests (in both the weak- and strong-field regimes, see e.g. Will 2006), it is challenged by theoretical scenarios (e.g. Grand Unification, Strings; Damour et al. 2002) and by cosmological findings (Turyshev, 2008). Stringent tests of general relativity have been obtained in the past by studying the motion of spacecraft during their cruise phase, as well as the propagation of electromagnetic waves between spacecraft and Earth (see e.g. Bertotti et al. 2003 for the measurement of the Shapiro time delay and the corresponding improved bound on the post-Newtonian parameter γ). In this respect, the spacecraft are considered as test masses subject (mainly) to the gravitational attraction of Solar System bodies. Well-established equations of motion can then be tested against the experimental data, in order to place strong constraints on possible deviations from what is predicted by general relativity. At the same time, the spacecraft, acting as a virtual bouncing point for microwave pulses, enables a precise measurement of the propagation of electromagnetic waves between Earth and spacecraft (e.g., the Shapiro time delay). Having proved very effective in the past in ruling out possibilities of "exotic physics" (e.g., the so-called "Pioneer Anomaly", see Anderson et al. 1998b), such tests could be pursued further in the future. The very-weak-field environment of the outermost regions of the Solar System is particularly interesting, in that "exotic" phenomenology such as MOND could be probed (see e.g. Famaey & McGaugh 2012). While it could be possible to replicate the Cassini test for the measurement of γ, the most interesting possibility offered by the mission will be the opportunity of testing the gravitational interaction at a scale of distances at which few precision measurements are available. Since the standard scenario predicts nothing new at these scales, any signal that could be clearly traced to a gravitational origin would be a strong candidate for new phenomenology. This possibility, however, depends on the availability of a very stable reference point given by the spacecraft itself. This implies a strong reduction (or knowledge at the same level of accuracy) of all non-gravitational dynamics. These tests would help extend the scale at which precision information on gravitational dynamics is available; this will contribute to bridging the "local" scale (at which precise measurements of gravitational dynamics are available) to more "global" scales (subject to puzzling phenomenology such as dark matter and dark energy).
The experimental setup needed to perform the previous tests also allows for constraining the amount of non-luminous matter left over from Solar System formation (e.g., in the trans-Neptunian region), as well as the presence of some form of dark matter that could be trapped in a halo around the Sun (Anderson et al. 1989; Anderson et al. 1995). At least two approaches for placing constraints on such an amount of matter can be considered, and in fact have been used with regard to the Pioneer 10, Pioneer 11 and Voyager 2 trajectories.
The first assumes a spherically symmetric matter distribution around the Sun, and estimates its gravitational perturbations on bodies outside the distribution (as, case by case, Jupiter, Uranus and Neptune) using range points obtained during flybys. Anderson et al. (1995) obtained limits of 0.32 ± 0.49 for Uranus and -1.9 ± 1.8 for Neptune, both in units of 10⁻⁶ M⊙ (these limits were obtained with a particular choice of fit; other, similar estimates have been provided under different fit assumptions). The negative sign in Neptune's result is interesting: it may point to a non-spherically symmetric mass distribution inside Neptune's orbit.
The second approach considered ten years of the Pioneer 10 trajectory inside what has been supposed to be the trans-Neptunian region; bounds on the density of small-size particles have been obtained from the lack of detectable damage to the spacecraft (namely to the propellant tank). For example, having parameterized the density distribution of Kuiper Belt particles with n(r) = n₀ r^(−γ), and taking a large γ, a bound of M⊕/3 for ρ < 0.4 g cm⁻³ and of M⊕/10 for ρ < 0.133 g cm⁻³, ρ being the bulk density of the particles, was obtained in Anderson et al. (1998a).
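As a purely illustrative sketch of how such a parameterized profile maps onto a total mass (not the actual analysis of Anderson et al. 1998a, which works through the expected damage rate to the spacecraft), the snippet below integrates an assumed population of equal-size grains following n(r) = n₀ r^(−γ) between assumed belt boundaries; every number in it is a placeholder.

```python
# Hypothetical example: total mass implied by a power-law number density
# n(r) = n0 * (r / 1 au)**(-gamma) of equal-size particles between two
# heliocentric distances. Normalization, grain size, slope and limits are
# all assumptions made only for illustration.
import numpy as np
from scipy.integrate import quad

AU = 1.496e11                              # m
n0 = 1.0e-9                                # particles m^-3 at 1 au (assumed)
gamma = 2.0                                # assumed radial slope
grain_radius = 1.0e-2                      # m, assumed particle size
rho = 400.0                                # kg m^-3, one of the bulk densities quoted above
m_grain = rho * (4.0 / 3.0) * np.pi * grain_radius ** 3

def shell_mass_per_metre(r):
    """Mass per unit radial distance of a spherical shell at radius r."""
    n = n0 * (r / AU) ** (-gamma)
    return 4.0 * np.pi * r ** 2 * n * m_grain

total_mass, _ = quad(shell_mass_per_metre, 30 * AU, 50 * AU)   # assumed belt limits
print(f"Total mass ~ {total_mass / 5.97e24:.2e} Earth masses")
```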
Such approaches could well be applied to a mission towards the two ice giants, to place further constraints on non-luminous matter. In general, precisely reconstructing the orbit of the probe(s) during the entire cruise will enable a possible repetition of this test at various distances from the Sun. An advantage of an orbiter at the ice giants, with respect to previous measurements, is that the former would provide a rather long series of measurements instead of the few (one for each fly-by pass) reported in the past.
We note that this type of experiment, performed instead when the spacecraft is in orbit around one of the two planets, implies an improvement of the corresponding planetary ephemerides. Since in the current best implementation of the Solar System ephemerides (see e.g. Folkner et al., 2008) the orbits of Uranus and Neptune are not as well determined as those of the inner bodies, due in particular to the lack of recent spacecraft tracking (the only available tracking data are those from the Voyager 2 fly-bys), any further data in this direction will help to enhance the ephemerides themselves.
The ODINUS mission concept and the scientific rationale of the twin spacecraft approach
The approach proposed to ESA in the white paper "The ODINUS Mission Concept" was to use a set of twin spacecraft, each to be placed in orbit around one of the two ice giant planets (see Fig. 4). The traditional approach for the exploration of the giant planets in the Solar System is to focus either on the study of a planetary body and its satellites (e.g. the NASA missions Galileo and Cassini to the Jovian and Saturnian systems) or on the investigation of more specific aspects (e.g. the NASA mission Juno to study the interior of Jupiter and the ESA mission JUICE to explore the Jovian moons Ganymede, Callisto and Europa). This is a well-tested approach that allows for a thorough investigation of the subject under study and for the collection of large quantities of highly detailed data. The only drawback of this approach is that comparative studies of the different giant planets are possible only after decades, especially since the datasets provided by the different missions are not necessarily homogeneous or characterized by the same level of completeness, as the different missions generally focus on different investigations. In the case of the well-studied Jovian and
Saturnian systems, about 10 years passed before it became possible to compare the dataset supplied by the Galileo mission with the first data supplied by the Cassini mission. Moreover, in order to be able to perform a detailed comparative study of the satellites of these two giant planets it will be necessary to wait until the completion of the JUICE mission, due to the limited coverage of the data from Galileo. As a consequence, about half a century would be required before we can fully address the differences and similarities between the Jovian and Saturnian systems.
Exploring the Uranian and Neptunian systems with the traditional approach would require either half a century of effort or the focus on this exclusive goal over two consecutive L-class missions of ESA's Cosmic Vision program or its future counterpart. In a scenario where, to balance the different needs of the astrophysics community as in the recent selection of the science themes for the L2 and L3 missions, ESA would devote the L4 and L6 missions to the exploration of these two giant planets, the launch of the L6 mission would occur in 2052 or later (assuming the temporal distance between L4, L5 and L6 is 6 years, as is the nominal interval between L2 and L3): assuming travel times to Uranus and Neptune of about 13 and 16 years respectively, as in the scenarios assumed for the Uranus Pathfinder (Arridge et al. 2012) and the OSS (Outer Solar System, Christophe et al. 2012) mission proposals and in the studies conducted by ESA (ESOC 2010) and NASA, the completion of the two missions would occur no earlier than 2068, i.e. more than half a century from now. In the unrealistic scenario of devoting both the L4 and L5 missions to the exploration of the ice giants, it would be possible to complete this task by about 2060, but at the cost of not having L-class missions devoted to astrophysics before L6. The approach proposed by the ODINUS mission concept is different from the traditional one in that it focuses on the use of two M-class spacecraft to be launched toward two different targets in the framework of the same mission (see Fig. 4). The use of twin spacecraft, aside from limiting the development cost of the mission, allows for performing measurements with the same set of instruments in the Uranian and Neptunian systems, supplying data of similar quality and potential completeness. Moreover, during at least part of the cruise the two spacecraft will fly on independent orbits, allowing for the study of the interplanetary medium at different angular positions. The main drawback of this approach is the limit it places on the number of instruments that can be included in the scientific payload, implying a less in-depth exploration of the two systems with respect to what would be possible with two dedicated missions. As we discussed in the mission concept that we presented in the white paper "The ODINUS Mission Concept" and that we will now discuss concisely, a careful selection of the instruments and design of the spacecraft can mitigate the importance of this drawback (see Sect. 5.4). Finally, we want to emphasize that, due to the different travel times needed to reach the two planets, the high-activity phases at the Uranian and Neptunian systems will not overlap (see Sects. 5.1, 5.2 and 5.4), thus limiting the complexity of the mission management.
The twin spacecraft and their cruise to the ice giants
As we mentioned previously, the founding idea of the ODINUS mission concept is to have a set of twin spacecraft (which we dubbed Freyr and Freyja after the twin gods of the Norse pantheon) to be placed in orbit around Uranus and Neptune respectively (see Fig. 4). In order to fit the budget of an L-class mission, a conservative, straw-man configuration for the ODINUS mission could be based on two spacecraft similar to that of the NASA mission New Horizons, i.e.: about 6 instruments in the scientific payload plus radio science; about 600 kg of dry mass for each spacecraft; hybrid (solar electric and chemical) propulsion; a radioisotope-powered energy source. The scientific payload and the dry mass of the spacecraft were estimated, in the original white paper, from the assessment of the fuel budget needed to reach the ice giants and to insert the spacecraft into planetocentric orbits in the worst case scenario. Specifically, we considered the Hohmann transfer orbit between Earth and Uranus (or Neptune) with an orbital insertion at about 2×10⁷ km from the planet on a highly eccentric orbit, obtaining a required Δv of about 5 km s⁻¹, which in turn translated into a wet-to-dry mass ratio of about five for the spacecraft. This implies that 600 kg of dry mass for each spacecraft requires a wet mass at launch of about 3000 kg. Such a wet mass at launch would make the mission feasible either considering a single launch of the Freyr and Freyja spacecraft with an Ariane V rocket or two separate launches with Soyuz rockets. As we mentioned above, however, this configuration of the spacecraft and of the orbital transfer to the ice giants is extremely conservative and based on the worst case scenario. Preliminary assessments of optimized transfer orbits performed by Thales Alenia Space (J. Poncy, private communication) indicate that a far lower Δv, estimated to be of the order of 1.5 km s⁻¹, could allow for reaching the ice giants and inserting the spacecraft into orbit around them while at the same time relaxing the constraints on the wet-to-dry mass ratio.
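The wet-to-dry mass ratio of about five quoted above is simply the Tsiolkovsky rocket equation evaluated for a Δv of 5 km s⁻¹ and a typical chemical specific impulse; the short sketch below reproduces the figures, with the specific impulse being our own assumed value rather than one taken from the white paper.

```python
# Sanity check of the wet-to-dry ratio via the rocket equation,
# m_wet / m_dry = exp(dv / (isp * g0)). The specific impulse is an assumed,
# typical value for a chemical bipropellant engine, not an ODINUS design figure.
import math

g0 = 9.81             # m s^-2
isp = 320.0           # s, assumed chemical specific impulse
dry_mass = 600.0      # kg per spacecraft, as in the straw-man configuration

for dv in (5000.0, 1500.0):   # m/s: worst-case Hohmann estimate vs optimized transfer
    ratio = math.exp(dv / (isp * g0))
    print(f"dv = {dv / 1000:.1f} km/s -> wet/dry = {ratio:.1f}, "
          f"wet mass at launch = {ratio * dry_mass:.0f} kg")
```

For the worst-case Δv this recovers a ratio close to five and a wet mass near 3000 kg; for the optimized 1.5 km s⁻¹ transfer the same dry mass would launch at roughly 1000 kg, which is the sense in which the constraints on the mass budget relax.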
A related problem is that the studies performed by ESA (ESOC, 2010), NASA and Thales Alenia Space (J. Poncy, private communication) all indicate that the launch window for a mission to Uranus falls between 2025 and 2030, after which Jupiter will not be in a favourable position. Both studies performed by ESA (ESOC, 2010) and NASA indicate an even smaller launch window for a mission to Neptune, falling between 2025 and 2028. Launch dates at later times would result in an increase of the time of flight, the required fuel or both. However, these constraints can be loosened with the use of a hybrid solar electric-chemical propulsion system where the solar electric propulsion can be used up to the orbital region of Jupiter in order to by-pass the problem of the unfavourable position of Jupiter and reduce the time of flight (Safa et al. 2013).
A more complete assessment of the orbital path, the wet-to-dry mass ratio and the mass budget for the scientific payload was beyond the scope of the white paper and, more generally, of ESA's call, as the outcome is strongly influenced by the currently undetermined possible launch dates. As a consequence, in this work we maintained the original, conservative estimate of the masses for the spacecraft and the scientific payload reported above. In terms of duration of the cruise phases, we adopted nominal times of flight of 13 years for Uranus and 16 years for Neptune based on the results of ESA's (ESOC, 2010) and NASA's studies. The scenario contemplating two separate launches with Soyuz rockets allows for the two trajectories to be optimized independently, thus allowing for the largest savings of either fuel or travel time. A preliminary check of the orbital positions of Uranus and Neptune showed, however, that the two ice giants will be in a favourable position to launch the two spacecraft together with an Ariane V rocket and then separate their paths between Jupiter and Uranus.
Finally, the distance between the ice giants and the Sun makes the use of solar panels difficult in terms of both the size and the weight of the panels themselves (Arridge et al. 2011; J. Poncy, private communication). A mission to the ice giants therefore requires the use of a radioisotope-powered energy source. For a launch date beyond 2034 (the nominal launch date for L3), Am-based thermoelectric generators should be available to European space missions with an adequate technological maturity (TRL 5 foreseen for 2018, Ambrosi 2013). Such generators should be able to provide 1.5-2 W kg⁻¹ (Ambrosi 2013; Safa et al. 2013). Assuming a power budget of the order of 200 W for each spacecraft, the mass required by the power generators and sources would be of the order of 100 kg, thus not affecting in any significant way the mass budget or the possibility to use either Soyuz or Ariane V rockets.
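A back-of-the-envelope check of that generator mass, using only the specific power range quoted above and the assumed 200 W power budget:

```python
# Generator mass implied by the assumed 200 W power budget and the quoted
# 1.5-2 W/kg specific power of Am-based thermoelectric generators.
power_budget = 200.0                      # W per spacecraft (assumed above)
for specific_power in (1.5, 2.0):         # W/kg
    print(f"{specific_power} W/kg -> {power_budget / specific_power:.0f} kg of generators")
```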
The post-insertion orbital tour of the spacecraft and the exploration strategy of the Uranian and Neptunian systems
In the white paper, the original idea we proposed was to have the spacecraft enter their planetocentric orbits in the regions populated by the irregular satellites using chemical propulsion, and then to take advantage of ion propulsion to slowly spiral inward toward the planets. During their inward drift, the spacecraft would have crossed the orbital regions of the regular satellites and, as an end-mission scenario, eventually entered the planetary atmospheres of the two ice giants to perform in situ measurements. However, the energy budget available to the spacecraft, as discussed in Sect. 5.1, would make it impossible to use ion propulsion once at Uranus and Neptune (Safa et al. 2013). As a consequence, the orbital tour of the two systems should realistically be planned based only on the use of chemical propulsion and gravitational assists of the satellites, but these constraints should allow for maintaining the basic exploration strategy we proposed in the white paper (Safa et al. 2013).
As mentioned above, the insertion orbits are chosen to place the spacecraft on high eccentricity orbits at the orbital distance of the irregular satellites, and to have one or more fly-bys of members of this family of small bodies. The spacecraft will then change their orbits (either by performing a manoeuvre or by taking advantage of a gravitational assist by a regular satellite, thanks to the initial high eccentricity orbit) to transfer to the regions populated by the regular satellites, possibly maintaining eccentric orbits to allow for the contemporary observation of the regular satellites and of the planets or their ring systems. The nominal duration of the orbital tours of the two spacecraft, once in orbit around Uranus and Neptune, is planned to be three years. As three years is also the difference between the durations of the cruise phases of Freyr and Freyja, the Freyr spacecraft would complete its mission at Uranus more or less at the same time as the beginning of Freyja's mission at Neptune. The nominal duration of the orbital tours therefore allows for having only one spacecraft fully operational at a given time, minimizing the complexity of the ODINUS mission in terms of management and optimizing the use of the ground receiving stations.
In the case of a moderate eccentricity of the orbits of the two spacecraft after insertion, the orbital tours of the systems of the ice giants would be divided into two phases: a first 1.5-2 year long phase focusing on the investigation of the satellites and a 1-1.5 year long phase focusing instead on the study of the planets and their ring systems. In the case, instead, of a high eccentricity of the post-insertion orbits, the two spacecraft could in principle observe the planets and their ring systems while at pericentre and the satellites while farther away from the planets: the orbital tours could then simply be planned as a single 3 year long phase. The high obliquity values of Uranus and Neptune imply that the regular satellites orbit on planes significantly inclined with respect to the ecliptic plane. As a consequence, unless the fuel budget and the orbital studies indicate the possibility of inserting the spacecraft on high-inclination orbits, the orbital paths of the spacecraft will need to be optimized to allow for as many close encounters as possible with the regular satellites in the lifetime of the mission. This is particularly important in the case of Uranus, where the satellites orbit almost perpendicularly to the ecliptic plane: a spacecraft orbiting near the latter would therefore allow only for short close encounters with the regular satellites when they are approaching and crossing the ecliptic plane itself. Based on NASA's studies for a mission to Uranus, a 2 year long phase focused on the exploration of the satellites would allow for two fly-bys of each of the five major satellites of the giant planet.
At the end of the nominal mission at the planets, the updated scenario we propose is to perform a manoeuvre to put the spacecraft on eccentric orbits whose pericentres are located at the boundaries of the atmospheres of the planets, and then to perform a second manoeuvre to change their orbits into low eccentricity, high altitude orbits inside the very atmospheres of the planets. The studies performed by NASA for a mission to Uranus indicate that in the stratosphere of the planet (the same should hold true for Neptune) there is a 300 km wide window where such an atmospheric entry would be feasible without putting at risk the integrity of the spacecraft due to thermal stresses. This end-mission scenario would allow for performing in situ measurements of the atmospheric compositions and/or densities (depending on the scientific payload, see Sect. 5.3) without putting at risk the other phases of the mission, while at the same time taking advantage of the previous phases of characterization of the planets and their ring systems to minimize the risks of damaging the spacecraft through impacts with dust particles and micrometeorites. Note that the atmospheric entry at Uranus should plausibly occur at one of the poles, as the innermost ring of the planet seems to extend down to the boundary of the planetary atmosphere (De Pater et al. 2013).
The straw-man payload of the twin spacecraft
A possible straw-man payload for the two spacecraft, which could allow for the achievement of the goals of the ODINUS mission, is composed of: a Camera (Wide and Narrow Angle); a VIS-NIR Imaging Spectrometer; a Magnetometer; a Mass Spectrometer (Ions and Neutrals, INMS); a Doppler Spectro-Imager (for seismic measurements); a Microwave Radiometer; and a Radio-science package. The choice to limit the number of instruments on board the spacecraft is due to the budget constraints, i.e. to the need of keeping the ODINUS mission inside the cost cap of an L-class mission (i.e. about 1 G€). Given the long times required to explore the ice giant planets (i.e. it would take 16 years from launch to explore Uranus and 19 to explore Neptune, see Sects. 5.1 and 5.2), the development of a highly integrated payload would allow for maximizing the number of instruments, and thus the scientific return of the mission, and is therefore of critical importance (see also Sect. 5.4). Four instruments that would significantly improve the completeness of the exploration of Uranus and Neptune and their satellites and the scientific return of the mission are: a Thermal IR Mapper; an Energetic Neutral Atoms Detector (to complement the measurements of the INMS); a Plasma Package; and a High-sensitivity Accelerometer (for the post-atmospheric entry phase). As discussed in Sect. 3.4, the alternative approach based on seismological measurements can be coupled to the more traditional investigation of the gravitational moments to study the interiors of Uranus and Neptune. Of the two possible approaches (Doppler spectro-imaging or visible photometry) to perform seismological measurements, should visible photometry prove to be the technique of choice, the Doppler Spectro-Imager indicated in the straw-man payload could be replaced by one (or more) alternative instruments. Similarly, a lower wet-to-dry mass ratio than the very conservative one we adopted (see Sect. 5.1) would allow for increasing the dry mass of the spacecraft and, as a consequence, the number of instruments in the scientific payload.
Critical aspects, mitigation strategies and enabling technologies of the ODINUS mission
The preliminary feasibility assessment of the mission concept performed by ESA's Future Missions Preparation Office evaluated the ODINUS mission as feasible within the budget of an L-class mission for the L3 launch window (Safa et al. 2013), with present-day technology and technologies currently under development in Europe. The two spacecraft are modelled after that of the ongoing New Horizons mission, and their wet masses, according to our first-order estimates, would fit either the Soyuz (two-launch scenario) or the Ariane V (single-launch scenario) payload capabilities. With an estimated final cost of about 550 MEuro (source: NASA) for the New Horizons mission, and taking into account that the development costs would be shared between the two spacecraft, the ODINUS mission would be feasible also from the point of view of the expected cost.
The most critical aspects for the success of the ODINUS mission are: 1. the availability of radioisotope-powered energy sources; 2. the achievable transfer rate (mainly in downlink); 3. the achievable wet-to-dry mass ratio and the mass constraints on the scientific payload; 4. the possibility of performing gravitational assist manoeuvres at Jupiter and/or Saturn. The first two critical aspects are due to the large distances of Uranus and Neptune from the Sun. Concerning critical aspect 1, said distances make the use of solar panels for energy generation impractical, as already pointed out in Sect. 5.1. The development of the required technology and the identification of an affordable and reliable energy source compliant with ESA's policies are therefore mandatory for the feasibility of the ODINUS mission. However, Am-based thermoelectric generators are already under study in Europe and their availability should not present a problem for launch dates later than the nominal one of L2 (Ambrosi 2013; Safa et al. 2013). Concerning critical aspect 2, possible mitigation strategies involve expanding the capabilities of ESA's network of receiving stations on the ground, calibrating the data volume to be collected during the mission phases at the ice giants to the achievable downlink data rate, or a combination of both. Finally, critical aspects 3 and 4 are intimately linked but, as we discussed in Sect. 5.1, the adopted wet-to-dry mass ratio is extremely conservative and the use of solar electric propulsion up to the orbital region of Jupiter should allow for by-passing the need for gravitational assist manoeuvres at one of the gaseous giant planets.
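To make critical aspects 2 and 3 more concrete, the sketch below works through the two pieces of arithmetic behind them: the Tsiolkovsky rocket equation, which links the wet-to-dry mass ratio to the delta-v available for insertion and the orbital tour, and a simple data-volume versus downlink-rate calculation. All numerical values (specific impulse, delta-v budget, data volume, downlink rate) are illustrative assumptions for this sketch, not figures from the ODINUS study.

```python
import math

# --- Critical aspect 3: wet-to-dry mass ratio vs. available delta-v ---
# Tsiolkovsky rocket equation: delta_v = Isp * g0 * ln(m_wet / m_dry)
g0 = 9.81          # standard gravity, m/s^2
isp = 320.0        # s, illustrative value for a bipropellant engine (assumption)
wet_to_dry = 3.0   # illustrative wet-to-dry mass ratio (assumption)

delta_v = isp * g0 * math.log(wet_to_dry)   # delta-v this mass ratio provides
print(f"delta-v available with mass ratio {wet_to_dry}: {delta_v:.0f} m/s")

# Inverting the relation: a more demanding delta-v budget requires a larger
# wet-to-dry ratio, which eats into the dry mass left for the payload.
required_dv = 2500.0  # m/s, illustrative insertion + tour budget (assumption)
required_ratio = math.exp(required_dv / (isp * g0))
print(f"mass ratio needed for {required_dv:.0f} m/s: {required_ratio:.2f}")

# --- Critical aspect 2: downlink time for a given data volume ---
data_volume_gbit = 50.0      # Gbit collected during one fly-by (assumption)
downlink_rate_kbps = 30.0    # kbit/s achievable from ice-giant distances (assumption)
downlink_days = data_volume_gbit * 1e6 / downlink_rate_kbps / 86400
print(f"days to downlink {data_volume_gbit} Gbit at {downlink_rate_kbps} kbit/s: "
      f"{downlink_days:.0f}")
```

With these illustrative numbers, returning a single fly-by's data volume takes on the order of weeks, which is why calibrating the collected data volume to the achievable downlink rate is listed among the mitigation strategies above.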
ESA's assessment on the scientific theme of the ice giants and conclusions
In the selection of the scientific themes for the L2 and L3 missions, the Senior Survey Committee appointed by ESA stated that "The SSC considered the study of the icy giants to be a theme of very high science quality and perfectly fitting the criteria for an L-class mission. However, in view of the competition with a range of other high quality science themes, and despite its undoubted quality, on balance and taking account of the wide array of themes, the SSC does not recommend this theme for L2 or L3. In view of its importance, however, the SSC recommends that every effort is made to pursue this theme through other means, such as cooperation on missions led by partner agencies." (Cesarsky et al. 2013). With New Horizons well on its path to Pluto and the trans-Neptunian region, the ice giants Uranus and Neptune indeed represent the next frontier in the exploration of the Solar System, and they potentially hold the key to unlocking its ancient past down to its earliest and most violent phases. Their study can reveal whether the Solar System is one of the possible results of a general scenario of planetary formation, common to all planetary systems, or whether the variety of orbital configurations of the extrasolar systems discovered so far is the outcome of a very different sequence of events than those that occurred in the Solar System. In this paper we focused on the scientific rationale for exploring both ice giants in the framework of a single mission, with the goal of performing a comparative study of Uranus, Neptune and their satellite systems. The alternative approach, i.e. the investigation of each of the ice giants with a dedicated space mission, is discussed in the papers by Arridge et al. (this issue) for the case of Uranus, and by Masters et al. (this issue) for the case of Neptune. As we discussed in Sect. 5, these two approaches have different strengths and weaknesses based on the chosen trade-off between depth of exploration and the time required to explore both ice giants. Nevertheless, all three mission scenarios designed around these two approaches were deemed feasible, in the framework of the technological development reasonably expected for the L3 launch window, during the preliminary feasibility assessment performed by ESA (Safa et al. 2013). The question we should therefore ask in order to plan the exploration of Uranus and Neptune is not what we can realistically do, but which of the mysteries to which the ice giants hold the answer we want to address first. | 2019-04-13T07:50:17.292Z | 2014-02-11T00:00:00.000 | {
"year": 2014,
"sha1": "084eecedd44eef26c1219eb4822f2ad4917ddd64",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1402.2650",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "9ac48ea129c3b209d36a480ba4cafbbcb1f90fc1",
"s2fieldsofstudy": [
"Physics",
"Geology"
],
"extfieldsofstudy": [
"Physics"
]
} |
260986509 | pes2o/s2orc | v3-fos-license | Behavioral Intention and The Influence of Demographic Factors in Purchasing Environmentally Sustainable Products among Residents in Petaling
The National Sustainable Consumption and Production (SCP) Blueprint 2016-2030 by the Malaysian government highlights the cultivation of green growth through consumption and production activities. The policy underlines how crucial it is to explore the determinants of consumers' behavioral intention to purchase eco-friendly products. This study therefore aims to determine the relationship between the Theory of Planned Behavior (TPB) components and the purchasing behavioral intention for environmentally sustainable products. Limited literature on the influence of demographic factors on purchasing intention motivates this study also to examine the influence of the moderating variables gender and level of education on the relationship between TPB components and the purchasing behavioral intention for environmentally sustainable products. This study employs Pearson correlation and Hayes PROCESS analysis for quantitative analysis involving 390 respondents living in the District of Petaling. The study showed that all the components of TPB, namely attitude (P=0.000), subjective norm (P=0.000) and perceived behavioral control (P=0.000), have a significant relationship with the purchasing behavioral intention for environmentally sustainable products. For the moderating effect, the study found that gender showed mixed results, moderating the relationship for attitude (P=0.0033) and subjective norm (P=0.0425) but not for perceived behavioral control (P=0.3070), while level of education exhibited an influence on all the components of TPB and purchasing behavioral intention (P<0.05).
Introduction
Developing countries like Malaysia face enormous challenges in sustainable development, as Malaysia has been pursuing rapid industrialization backed by foreign investment since the late 1960s. The consequence of population consumption for the environment relies mainly on the culture, habits, and behavior of the population. However, environmental behavior is mainly driven by the ethical principles of the population. On a world basis, these challenges include concerns such as climate change, scarce resources (primarily energy), and air, water, and soil pollution. Environmental problems, or green issues, are a global concern that becomes more significant every year. According to Wan (2019), Malaysia encounters enormous dilemmas in maintaining sustainable development. One of Malaysia's major environmental issues is domestic solid waste. Malaysia, for example, is still very much focused on landfills as the primary waste disposal system at the moment. Based on a study conducted by the World Wide Fund for Nature in 2019, Malaysia has recorded the highest annual per capita plastics consumption compared to China, Indonesia, the Philippines, Thailand, and Vietnam, at 16.78 kg per person. Malaysia ranks second in total plastic waste generation among the other Asian countries. The use of non-environmentally friendly products creates environmental issues such as plastic pollution, with approximately 4.8 to 12.7 million tonnes of plastics found in the ocean. For example, it is expected that marine life will be threatened by a fourfold increase in the volume of plastic waste between the years 2010 and 2050 (WWF, 2020). Recent statistics from the Department of Statistics Malaysia (2020) indicate that a total of 3,108.9 thousand tonnes of solid waste was produced in 2019, compared to 3,098.7 thousand tonnes in 2018. Failure to curb these issues would lead to environmental deterioration in Malaysia, as the waste primarily involves non-biodegradable materials that take a long time to decompose. Green consumption has become the priority of consumers and companies in addressing environmental problems (Goncalves et al., 2016). In evidence, according to a previous study by Nakashima (2012), drawing on the UN Intergovernmental Panel on Climate Change, reforms in daily consumption, food, and energy use could significantly reduce ecological damage. Despite customers' concerns about the global environment and the continued growth in green product sales, the market share of green products remains very limited, especially in Malaysia. Furthermore, according to Mei et al (2012), studies relating to green purchasing are relatively scarce among Asian countries compared to Western ones. Referring to Mamun et al (2018), various studies agree that research on consumers' behavior toward environmentally sustainable products largely comes from Western contexts and is only narrowly found in other parts of the world. Given the limited research on purchasing behavioral intention for environmental sustainability in Malaysia, and the constraints in the literature on green purchasing behavioral intentions, this study was established to provide a broader view of the subject. The Theory of Planned Behavior (TPB) is one of the most common models in the consumer behavior area utilized by many researchers. According to Kim et al (2013) and Hsu et al (2017), TPB is a widely utilized model for green product buying intention.
Therefore, the research considers the critical components of the TPB, which are attitude, subjective norm, and perceived behavioral control (PBC), and measures these in the context of purchasing environmentally sustainable products. However, numerous studies have suggested several improvements to TPB theory to address its shortcomings by identifying other possible moderating factors that could affect purchasing intention, such as gender and level of education. Thus, this study aims to determine the moderating influence of these demographic factors and ascertain the relationship of TPB with the purchasing intention of environmentally sustainable products. Petaling Jaya, Selangor, is selected as the location for the study as the District of Petaling has been designated as "A City with Soul Reward Sustainable Lifestyles," which aligns it with green sustainability practices (WWF, 2015).
Literature Review
Attitude
Attitude is an overall assessment of personal behavior (Ajzen, 1991) and holds a significant influence over customer purchasing behaviors. Studies in some cultures have established a significant positive relationship between attitude and behavioral intention in green product purchasing (Mostafa, 2009). Zhang et al (2019) identified that attitude has a positive and significant relationship with purchasing intention for environmentally sustainable products, for both hedonic and utilitarian green products. Many prior studies claimed that attitude is a critical antecedent variable of purchase intentions. The more positive consumer attitudes toward green products are, the stronger their purchase intentions will be. Thus, consumers with positive attitudes toward green products may be more willing to buy (Wang et al., 2016). This finding is consistent with previous research that found attitudes toward green products significantly affect purchase intentions (Kim & Han, 2010). Attitude is also tested as an independent factor for predicting behaviors. Studies on green consumption contend that customers are more likely to know about environmentally friendly goods when they have positive attitudes toward those goods (Paul et al., 2016).
Subjective Norms
Subjective norms are defined as feelings of social pressure from others that are meaningful to a person's performance in some way (Ajzen, 1991), and they encapsulate beliefs about certain behaviors arising from the social pressure on individuals. The subjective norm denotes the perceived social pressure to perform or not perform the behavior. It is an individual's opinion which influences an individual's decision-making (Maichum et al., 2016). Multiple studies have noted that subjective norm is a significant variable that positively affects purchase intentions and participation in environmentally friendly consumption (Sun & Wang, 2019). In determining the subjective norm, both descriptive and injunctive normative values are essential (Ajzen, 2015). Commonly, the actions or responses of family, colleagues, advisors, or other professionals are of the utmost importance when individuals make their own decisions; this is captured by descriptive normative values. In the TPB model, the subjective norm is regarded as the second independent construct. A few other researchers have listed subjective norm as an essential factor for behavioral intention in marketing and consumer research, such as green product buying behavior, halal food purchase intention, organic food purchase intention, and online buying intention (Hsu and Chan, 2015; Irianto, 2015). Moreover, Nguyen, Lobo and Nguyen (2018) found that subjective norms significantly impact environmentally sustainable consumption and serve as the foundation for various consumption models and theories. Besides, a study done by Zhuang et al (2021) revealed that subjective norms have a relationship with purchasing behavioral intention for environmentally sustainable products. All these studies found the relationship between subjective norm and behavioral intention to be significant and positive. Subjective norms are thus expected to indicate the desire to purchase green goods. Various studies investigated the effect of subjective norm and comparison groups on the intention to purchase and actual purchasing behavior. According to Joshi and Rahman (2015), out of 13 studies, 11 found a positive association between subjective or social norms and reference groups with purchasing intention and actual purchasing of green goods, while two studies found societal norms had a negative relationship with purchasing intention and actual purchasing activity.
Perceived Behavioral Control (PBC)
The concept "Perceived Behavioral Control" represents the sense of fulfilment to accomplish a behavior due to resources and opportunities that they have in terms of money, time, accessibility and the self-confidence to perform the actions. PBC assesses the ability and the capacity of a person to perform the behavior. A particular behavior may exist when an individual has both the capacity and motivation to conduct that behavior, not when the person has only one or no factors. Many scholars have found that confidence in the individual's ability to control their actions indicates a solid correlation to buy a specific product (Maichum et al., 2016). It defines his or her beliefs on the impact of both external and internal influences on behavioral success. For example, PBC research has indeed been correlated to the decision to buy green hotels (Rezai et al., 2011), healthy foods (Tarkiainen & Sundqvist, 2005), and green goods (Moser, 2015). The concept of PCB is when individuals can act or decide with a specified behavior. Recent study by Zhuang et al (2021) in their meta-analysis approach found that PBC has an influence on purchasing intention for green products. According to Ajzen (2015), PBC can impede individuals from performing a behavior or encourage individuals to perform an action when faced with challenges or obstacles. Perceived behavioral control in the TPB model is the third most significant determinant. Based on the previous study done by Wang et al (2014), Perceived Behavior Control was represented by various factors, such as perceived inconvenience, time costs, and resources. Moreover, Wang et al (2014) established a hypothesis and concluded that perceived behavior regulation has a positive and essential impact on sustainable consumption behaviors using data from a survey in China. Meanwhile, Olsen (2008), in his discussion, also indicated that self-efficacy, convenience, and availability under PBC play important control factors in influencing consumers' intention for purchasing environmentally sustainable goods.
Purchase Behavioral Intention of Environmentally Sustainable Products
Purchasing intention for environmentally sustainable products is conceptualized as a person's likelihood or willingness to prefer products with eco-friendly characteristics over other traditional products when making purchase considerations (Mei et al., 2012). Intention also emphasizes a conscious action plan that involves actions and the motivation to act on it (Maichum et al., 2016). Behavioral intentions drive actual behavior, and behavioral intentions are in turn influenced by three main components: behavioral attitudes, subjective norms, and perceived behavioral control. In the context of environmentally sustainable products, it is about the willingness and desire to have green products. In some of the previous literature, buying intention is considered helpful for understanding the likelihood that a customer will arrive at a purchasing decision. Higher buying intention implies a higher probability that the buyer will purchase a specific product or service (Kanuk & Schiffman, 2000). Prior studies have shown purchasing behavioral intention to be one of the best predictors of an individual's future behavior (Yadav & Pathak, 2017; Liobikiene et al., 2016).
Gender and Purchasing Behavioral Intention for Environmentally Sustainable Products
Gender factors are frequently assessed and often create conflicting findings. Prior studies have found an important effect of gender on purchasing behavioral intention for environmentally sustainable products. According to Scott and Casey (2006), Oerke & Bogner (2010) and Xiao and Hong (2018), females are more inclined to be involved in environmental conservation, making them more pro-environmental than males. Besides, a study done by Ko and Jin (2017) revealed that, among female college students in China and the United States, subjective norms had a positive relationship with the purchasing behavioral intention for environmentally sustainable products. Research by Witek and Kuzniar (2021) has found that male and female environmental attitudes differ significantly, with females in particular displaying a more positive attitude than males. Based on research done by Patel et al (2017), men show higher pro-environmental behavior compared to women. In contrast, Scott and Casey (2006) found that women and girls might be more sensitive to environmental issues because of the way they are socialized, making them more caring and protective. This would lead to a higher tendency for females to have the intention of buying environmentally sustainable products. Other related studies also shared similar findings and claimed that women have greater engagement in home-based environmental behaviours (Oerke & Bogner, 2010; Xiao & Hong, 2018).
Level of Education and Purchasing Behavioral Intention for Environmentally Sustainable Products
Previous research has shown that more pro-environmental views are correlated with education levels. Research done by Sinnappan & Rahman (2011) shows that demographic factors, including level of education, can act as antecedents that affect the behavioral trend in purchasing environmentally sustainable products. Shamsi & Siddiqui (2017) indicated that only educational level significantly influences consumers' purchasing behavioral intention for environmentally sustainable products, while Rowlands et al (2003) show that individuals with a higher level of education are willing to contribute more to green electricity. Shahnaei (2012) discovered that education levels significantly affect purchasing behavior for environmentally sustainable products among consumers in Malaysia. In addition, Xiao, Dunlap and Hong (2013) found that college-educated students are more likely to compromise financial wellbeing to promote environmental sustainability, and indicate that higher levels of environmental interest are expressed by more highly educated Chinese. Furthermore, there was a significant relationship between age and educational attainment and sustainable consumption. It showed that individuals aged above 18 years old who have at least a high school education tend to be more responsible in gathering information about new products and in making purchases. These individual categories might have a close relationship with the intention to purchase environmentally sustainable products (Maichum et al., 2016). On the contrary, some literature did not find a positive correlation between environmental behaviours and education. For example, Witek and Kuzniar (2021) found no significant result that justifies the influence of educational attainment on preference for environmentally sustainable products. Kristrom and Kiran (2014) and Millock and Nauges (2014) claimed that there is no indication that education affects energy use or the consumption of natural foods, respectively.
Methodology
A cross-sectional correlational design is adopted in this study to ascertain the relationship between the components of TPB and the purchasing intention for environmentally sustainable products. Using a stratified random sampling technique, the primary data came from a survey in which the respondents are people who reside in the District of Petaling, Selangor. The areas covered under the District of Petaling include Petaling Jaya City, Bukit Raja, Damansara, Petaling, and Sungai Buloh. The questionnaire consists of 4 sections of variables from the Theory of Planned Behavior (Ajzen, 1991), with 5-point Likert scale instruments adapted from various literature, including Ha and Janda (2012), as presented in Table 1. Based on Table 2, all variables' reliability was assumed since all the Cronbach's Alpha values were above 0.7. This threshold is supported by Nunnally and Bernstein (1994), who indicate that Cronbach's Alpha values need to be above 0.7 for the reliability of a study to be assumed.
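As a rough illustration of the reliability and correlation steps described above, the sketch below computes Cronbach's Alpha and a Pearson correlation in Python on simulated Likert responses. The item names, the number of items per construct, and the simulated data are all hypothetical assumptions for illustration; they do not reproduce the study's instrument or results.

```python
import numpy as np
import pandas as pd
from scipy import stats

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a set of Likert items measuring one construct."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical survey data: 390 respondents, 5-point Likert items per construct
rng = np.random.default_rng(0)
n = 390
attitude_items = pd.DataFrame(rng.integers(1, 6, size=(n, 4)),
                              columns=[f"ATT{i}" for i in range(1, 5)])
intention_items = pd.DataFrame(rng.integers(1, 6, size=(n, 4)),
                               columns=[f"INT{i}" for i in range(1, 5)])

print("alpha (attitude):", round(cronbach_alpha(attitude_items), 3))

# Construct scores = mean of the items; Pearson correlation between constructs
attitude = attitude_items.mean(axis=1)
intention = intention_items.mean(axis=1)
r, p = stats.pearsonr(attitude, intention)
print(f"Pearson r = {r:.3f}, p = {p:.4f}")
```

With real survey data, an alpha above 0.7 for each construct would be checked first, and only then would the construct scores be correlated and passed to the moderation analysis.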
Results and Discussion
Profile of Respondents
There are six (6) components of the respondents' profile, which are (1) Area; (2) Gender; (3) Age; (4) Marital Status; (5) Level of Education; and (6) Employment Status. Out of 390 respondents, the highest number of respondents was from Petaling, with 131 respondents (33.6 %). In terms of gender, the study received the highest response from females, with 206 respondents (52.8 %), while 184 (47.2 %) were male respondents. In terms of age, the study obtained the highest response from 177 individuals in the age range of 18-28 years old (57.5 %), and the fewest respondents were aged 60 years old (5.0 %). In terms of level of education, the highest percentage is 37.2 %, or 145 respondents, with a bachelor's degree, followed by Diploma holders with 145 respondents (37.2 %). As for employment status, this study obtained the highest number of respondents from the private sector, with 161 respondents (41.3 %). The second highest group of respondents were those from the government sector, with 71 respondents (18.2 %). Based on the findings shown in Table 4.2, there was a relationship between "Attitude" and "Behavioral purchasing intention for environmentally sustainable products" and it was positively significant (r=0.693, P=0.000). Based on this result, the study finds that one's attitude has a significant influence on purchasing intentions. It suggests that consumers with a more environmentally conscious mindset are more likely to purchase green and eco-friendly items. Attitude plays a significant role in shaping consumers' intentions toward sustainable products. Many earlier studies have explored the correlation between attitude and purchase intention for green products and found that attitude is essential in determining consumers' purchase intention. Sidique et al (2010) and Khare (2015) found a strong correlation between attitude and purchase intention in their studies. Moreover, a study conducted by Lestari et al (2020) found that attitude has a positive relationship with the purchasing behavioral intention for environmentally sustainable products.
Result 1: The relationship between the Theory of Planned Behavior and purchasing behavioural intention for environmentally sustainable products among the public in the District of Petaling.
It was also found that there was a relationship between "Subjective Norm" and "Behavioral purchasing intention for environmentally sustainable products" and it was statistically significant and positive (r=0.493, P=0.000). This research finding is supported by studies done by Zhuang et al (2021), Joshi and Rahman (2015) and Hsu and Chan (2015), where subjective norms have a significant positive impact on purchasing behavioral intention for environmentally sustainable products. Other people's views and opinions constitute pressure on an individual that motivates the person's green purchasing intention, because they feel that the people around them accept the behavior and that they should too. Thus, social pressure can be linked with an individual's purchasing behavioral intention. Moreover, for the third TPB factor, there was a significant positive relationship between "Perceived Behavioral Control (PBC)" and "Behavioral purchasing intention for environmentally sustainable products" (r=0.530, P=0.000). This finding is also consistent with research done by Chen & Deng (2016) and Muller et al (2021), and research by Paul et al (2016) in India, where perceived behavioral control (PBC), besides attitude and subjective norm, has a significant relationship with purchasing behavioral intention for environmentally sustainable products. Therefore, the result indicates that the public in the District of Petaling, Selangor, will be highly willing to purchase environmentally sustainable products when they believe they can manage external factors that would otherwise be beyond their control.
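The reported significance levels follow directly from the correlation coefficients and the sample size. The short sketch below applies the standard t-test for a Pearson correlation to the reported values (r = 0.693, 0.493 and 0.530 with n = 390) to show why the p-values are rounded to 0.000 in the tables; the test formula is the standard one, not a procedure taken from the paper itself.

```python
import math
from scipy import stats

# Reported Pearson correlations with purchasing behavioral intention (n = 390)
n = 390
reported = {"attitude": 0.693, "subjective norm": 0.493, "PBC": 0.530}

for construct, r in reported.items():
    # t-test for a Pearson correlation: t = r * sqrt(n - 2) / sqrt(1 - r^2)
    t = r * math.sqrt(n - 2) / math.sqrt(1 - r**2)
    p = 2 * stats.t.sf(abs(t), df=n - 2)   # two-tailed p-value
    print(f"{construct:>15}: r = {r:.3f}, t({n - 2}) = {t:.1f}, p = {p:.3g}")
```

Even the weakest reported correlation (r = 0.493) yields a t-statistic above 10 with 388 degrees of freedom, so a p-value displayed as 0.000 is what one would expect.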
Result 2: The moderating influence of gender on the relationship between the Theory of Planned Behavior and purchasing behavioural intention for environmentally sustainable products among the public in the District of Petaling.
i. Gender: Purchasing Behavioral Intention for Environmentally Sustainable Products (DV) and Attitude (IV1). Based on Table 4.3, there was an influence of gender on the relationship between Attitude (IV1) and Purchasing Behavioral Intention for Environmentally Sustainable Products (DV), since the P-value of the interaction term was less than 0.05 (P<0.05, P=0.0033). Based on the PROCESS analysis results, the gender variable has a moderating influence on the relationship between attitude and purchasing behavioral intention for environmentally sustainable products. This finding is supported by past research by Witek and Kuzniar (2021), who found a distinction between men's and women's attitudes toward the intention of consuming environmentally sustainable products. Besides, research by Bojanowska and Kulisz (2020) showed that gender is closely related to the visibility of pro-environmentalist activities. In their studies, Sun and Wang (2019) and Bhutto et al (2019) revealed that there is an influence of gender on attitude and purchasing behavioral intention for environmentally sustainable products. The influence of gender differences in the aspect of attitude can be seen in that women are more environmentally oriented compared to men. This statement is supported by Rezai et al (2011), where women are more prone to have the intention to exhibit green purchasing, as they believe that it is beneficial to protect the current environment for future wellbeing.
ii. Gender: Purchasing Behavioral Intention for Environmentally Sustainable Products (DV) and Subjective Norm (IV2). Based on Table 4.4, gender had an influence on the relationship between Subjective Norm (IV2) and Purchasing Behavioral Intention for Environmentally Sustainable Products (DV), since the P-value of the interaction term was less than 0.05 (P<0.05, P=0.0425). The finding in this study indicates that the gender variable does have a moderating influence on the relationship between subjective norms and the purchasing behavioral intention for environmentally sustainable products. This is in line with the research done by Ko and Jin (2017) and Bhutto et al (2019), who found an interaction of the gender variable with the relationship between subjective norm and purchasing behavioral intention for environmentally sustainable products. Their study was carried out in China and the United States, where female consumers recognized other people's opinions as an enabler in embracing the green purchasing intention habit.
iii. Gender: Purchasing Behavioral Intention for Environmentally Sustainable Products (DV) and Perceived Behavioral Control (IV3). Based on Table 4.5, there was no influence of gender on the relationship between Perceived Behavioral Control (IV3) and Purchasing Behavioral Intention for Environmentally Sustainable Products (DV), since the P-value of the interaction term was more than 0.05 (P>0.05, P=0.3070). The gender variable applied in the study was examined for its moderating influence on the relationship between perceived behavioral control and purchasing behavioral intention for environmentally sustainable products, and the analysis reveals no interaction of gender with this relationship. A simple interpretation of the result is that gender does not affect individuals' perspectives on the barriers or opportunities they face in purchasing environmentally sustainable products, such as the availability of sustainable products, time, and cost. For example, when individuals, whether male or female, face a barrier to buying green products due to high cost, both might have the same perception of whether or not to intend to consume the sustainable products. According to the World Economic Forum (2020), the purchasing gap between males and females has narrowed over the years, as women are increasingly empowered. Gender difference is no longer a sharp discrepancy in today's world, in which males and females might have the same viewpoint on a specific behavior. The results correspond with a previous study by Andhy et al (2018), where PBC factors failed to demonstrate significant mean differences between male and female millennial populations. Indirectly, this finding points to a new avenue for exploring the influence of gender on PBC and purchasing behavioral intention. According to research done by Arissa et al (2020) in Petaling Jaya, research conducted in different areas might produce different results. This might be one of the reasons why gender shows no influence on PBC and purchasing intention in this study, whereas most prior research, such as Oerke & Bogner (2010), Xiao & Hong (2018) and Patel et al (2017), found an influence of gender between these two variables.
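For readers unfamiliar with the "interaction P-value" reported in Tables 4.3-4.5, the sketch below fits an equivalent moderation model in Python: an OLS regression with an interaction term, mirroring what Hayes PROCESS Model 1 estimates. The construct scores, the 0/1 coding of gender, and all simulated values are hypothetical assumptions for illustration only, not the study's data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data standing in for the survey constructs (assumption):
# pbc       = perceived behavioral control score (mean of Likert items)
# gender    = 0 for male, 1 for female (coding is an assumption)
# intention = purchasing behavioral intention score
rng = np.random.default_rng(1)
n = 390
df = pd.DataFrame({
    "pbc": rng.normal(3.5, 0.7, n),
    "gender": rng.integers(0, 2, n),
})
df["intention"] = 1.0 + 0.5 * df["pbc"] + 0.1 * df["gender"] + rng.normal(0, 0.5, n)

# Mean-centring the focal predictor eases interpretation of the main effects;
# it does not change the test of the interaction term.
df["pbc_c"] = df["pbc"] - df["pbc"].mean()

# Moderation model: intention ~ PBC + gender + PBC x gender
model = smf.ols("intention ~ pbc_c * gender", data=df).fit()
print(model.summary().tables[1])

# The row 'pbc_c:gender' gives the interaction coefficient and its P-value;
# P < 0.05 would indicate that gender moderates the PBC-intention relationship.
```

The same model structure applies to the attitude and subjective norm analyses, with the respective construct score in place of PBC.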
Result 3: The moderating influence of level of education on the relationship between the Theory of Planned Behavior and purchasing behavioral intention for environmentally sustainable products among the public in the District of Petaling.
Level of Education: Purchasing Behavioral Intention for Environmentally Sustainable Products (DV) and Attitude (IV1), Subjective Norm (IV2), and Perceived Behavioral Control (IV3). The moderation analyses for all three TPB components (attitude, subjective norm, and perceived behavioral control) show the influence of level of education on the relationship between the Theory of Planned Behavior and purchasing behavioral intention for environmentally sustainable products, since the P-values of the interaction terms for attitude, subjective norm and perceived behavioral control are all less than 0.05. Therefore, Ha6 is accepted: level of education has a moderating influence on the relationship between TPB components and purchasing behavioral intention for environmentally sustainable products among the public in the District of Petaling. This study demonstrated that level of education has a moderating effect on the Theory of Planned Behavior and the purchasing behavioral intention for environmentally sustainable products. Conforming to the same finding shared by Maichum et al (2016), education level influences an individual's perception of the intention to purchase environmentally sustainable products. There were differences in results between the education groups, which shows that people with a higher level of education tend to intend to consume environmentally sustainable products, as they have a higher purchasing capacity than those in lower education groups. Chekima et al (2016) disclosed in their results that level of education is one of the important factors influencing an individual's purchasing trend. Individuals with higher education levels may be well informed on environmental knowledge and understand the importance of being wise consumers who purchase products that are safe for the environment. Therefore, level of education can be concluded to be one of the variables driving people in the District of Petaling to have the behavioral purchasing intention for environmentally sustainable products.
Conclusions
In conclusion, the study on purchasing behavioral intention for environmentally sustainable products is important for understanding consumer purchasing trends. The study of human consumption is needed, especially for sustainable products, since human consumption may disrupt ecological well-being, as various products could harm the environment around us. Exploring factors associated with behavioral intention and the moderating variables could provide an adequate overview for the government, businesses, and marketing partners to establish substantial efforts to ensure that environmentally sustainable products become part of community practices. This is because failure to cultivate green practices through consumer consumption patterns could continuously deteriorate the environment, since human consumption has been shown to be a cause of environmental degradation over the years. Thus, paying attention to the purchasing behavioral intention factors could shift public preference from consuming conventional products that are not environmentally friendly to environmentally sustainable products. As a result, environmental problems can be mitigated. Employing the factors from the TPB in the study through a quantitative method is relevant, as the components of the theory have been shown by numerous prior studies to be capable of producing a relationship with behavioral purchasing intention. Based on the analysis conducted among the public in the District of Petaling, the empirical findings show that attitude, subjective norm, and perceived behavioral control do have a significant relationship with the behavioral purchasing intention for environmentally sustainable products. In addition, the moderating influences of gender and level of education show mixed results on the relationship between the Theory of Planned Behavior and purchasing behavioral intention for environmentally sustainable products. The moderating variable of gender shows mixed results, with gender having a moderating influence on attitude and subjective norm but not on perceived behavioral control. On the other hand, the moderating variable of level of education does have moderating effects on all components of the Theory of Planned Behavior and the purchasing behavioral intention for environmentally sustainable products. Therefore, the study contributes to a deeper understanding of what factors could drive the purchasing behavioral intention for environmentally sustainable products for the current situation and the future niche of green practices. Measures should be taken by policymakers to improve the environmental interest of citizens through the implementation of appropriate environmental policies and regulations, increasing media intensity to foster environmental awareness, and improving public understanding of policies and regulations in the area of environmental protection. Constructing a well-informed and knowledgeable society will facilitate the establishment of improved purchasing behavioral intention for environmentally sustainable products. The adoption of environmentally sustainable products is still at an infant stage in Malaysia, and there are various other factors that should be taken into consideration. Thus, future studies are recommended to explore more factors which may be associated with the purchasing behavioral intention for environmentally sustainable products. | 2023-08-19T15:21:10.298Z | 2023-06-20T00:00:00.000 | {
"year": 2023,
"sha1": "8306fa57ddea65b7b5721c2d799e71bda38d26fa",
"oa_license": "CCBY",
"oa_url": "https://hrmars.com/papers_submitted/18024/behavioral-intention-and-the-influence-of-demographic-factors-in-purchasing-environmentally-sustainable-products-among-residents-in-petaling.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "8d4f1a356e213a2743085c0972be1a2dc82ebc62",
"s2fieldsofstudy": [
"Environmental Science",
"Business",
"Economics"
],
"extfieldsofstudy": []
} |
233919974 | pes2o/s2orc | v3-fos-license | Why Do Firms Fail to Engage Diversity? A Behavioral Strategy Perspective
Introduction
Although firms are supposed to evaluate employees based on merit, many studies show that well-qualified workers may not be hired or promoted for reasons irrelevant to merit (Galinsky et al. 2015, van Dijk et al. 2017, Eberhardt 2019). Some suboptimal evaluations result from explicit, taste-based discrimination (Becker 1971), whereas others derive from automatic, implicit biases, such as stereotyping or homophily (Fiske and Taylor 2013). Many scholars urge firms to overcome discrimination and engage diversity (i.e., to employ a diverse workforce and fully realize its potential), with justifications based either on a justice-centric view (e.g., including disadvantaged candidates is the right thing to do to address decades of prejudice) or a performance-centric view (e.g., recruiting team members with nonoverlapping cognitive diversity improves performance of complex tasks). Regardless of the mechanisms by which discrimination operates and the tactics used to counteract it, research shows that many firms still fail to engage diversity (Dobbin et al. 2015, McDonald et al. 2017), undervaluing qualified but atypical individuals while favoring those who fit positive stereotypes.
Less favorable treatment of counter-stereotypical but valuable human resources is puzzling from a strategy point of view because it implies an inefficient labor market in which money is being left on the table (Denrell et al. 2003). Firms that discriminate are likely to pay a performance penalty for failing to recruit the most qualified workers, whereas firms that overcome discrimination may gain advantages (Becker 1971) from recruiting atypical workers undervalued by rivals (Liu et al. 2017, Siegel et al. 2018). Over time, competition should select out biased firms, correcting for this labor-market inefficiency. Why do many firms nonetheless continue to fail to engage diversity? In other words, why are valuable but counter-stereotypical human resources, as untapped opportunities, not yet competed away? This paper addresses persistent failure to engage diversity from a behavioral strategy as arbitrage perspective, which posits that attractive strategic opportunities tend to be protected by strong behavioral and social limits to arbitrage. Building on prior works in behavioral strategy (Powell et al. 2011, Gavetti 2012, Denrell et al. 2019), I integrate various behavioral failures outlined in the literature using an analogy from behavioral finance (Barberis and Thaler 2003, Zuckerman 2012a), which states that price-value gaps of certain assets, as arbitrage opportunities, may persist when "limits to arbitrage" deter exploitation and hence preserve market inefficiencies (Shleifer and Vishny 1997). I propose four limits to arbitrage in strategic contexts: cognizing, searching, reconfiguring, and legitimizing (CSRL). The CSRL limits help explain the mechanisms that allow biases against valuable resources to persist and illuminate approaches to overcoming these limits in order to exploit the biases as opportunities.
I illustrate the application of the CSRL limits to arbitrage using a case from Major League Baseball (MLB) described in Moneyball (Lewis 2003). An MLB team's advantage is strongly associated with its ability to recruit superior players, yet most MLB teams historically judged players based on their look: whether they fit the stereotype of a successful player. In the late 1990s, the Oakland Athletics (the "A's") and their manager, Billy Beane, exploited this opportunity by acquiring undervalued players (e.g., counter-stereotypical players with more competence than implied by their salaries) from rivals. Consequently, between 1999 and 2003, the team achieved impressive winning percentages with one of the lowest payrolls in the MLB. Moneyball is often portrayed as a triumph of data analytics, yet this fails to fully explain the A's success; after all, data on MLB players and sabermetric analytic methods had been publicly available for decades. A greater puzzle is why such exploitation did not occur sooner.
As I will elaborate, data analytics is only one of the factors that helped the A's address the searching limit by identifying undervalued players, and particularly unconventional ones. Other CSRL limits deterred MLB teams from appreciating, imitating, and justifying Beane's approach, reducing ex post competition to such an extent that they allowed the A's to enjoy competitive advantage (Peteraf 1993) until Michael Lewis's (2003) book helped eliminate several of these limits. This case has important implications beyond professional sports, note Thaler and Sunstein (2003, p. 1390): "If Lewis is right about the blunders and the confusions of those who run baseball teams, then his tale has a lot to tell us about blunders and confusions in many other domains." In the MLB, the economic stakes of flawed recruitment are extremely high, and there is no obvious economic barrier to exploiting inefficiencies. If the labor market can be inefficient there, one might expect labor markets outside sports to entail larger mispricing, greater CSRL limits, and more untapped opportunities.
Applying a behavioral strategy as arbitrage perspective to the debate on diversity generates interesting theoretical and practical contributions. First, it complements the growing literature on diversity by providing a novel lens that views failure to engage diversity as being protected by various behavioral and social limits to arbitrage. Firms fail to engage diversity not necessarily because they disagree with the reasons for hiring a diverse workforce, such as those based on a normative, justice-centric view (e.g., including workers with disadvantaged identities) or a pragmatic, performance-centric view (e.g., complex tasks require diverse teams with nonoverlapping cognitive repertoires); rather, such failures may result from context-dependent factors that prevent firms from overcoming CSRL limits. For example, diverse candidates may be ruled out because they do not look qualified, or their contributions/outputs may be discounted by important stakeholders, such as the media, investors, and customers. More generally, this perspective complements normative and pragmatic mainstream views: Overcoming CSRL limits is essential for doing the right thing and for improving performance.
This paper also contributes to the strategy literature. Sustainable competitive advantage is usually attributed to firms' control over valuable, rare, nonimitable, and nonsubstitutable resources (Barney 1991). Instead of obtaining these resources via luck, endowment, or path-dependent accumulation processes (Dierickx and Cool 1989, Makadok and Barney 2001, Helfat and Lieberman 2002, Denrell et al. 2003, Andriani and Cattani 2016), firms can search a vast reservoir that includes latent resources, alternative uses of existing resources, and their combinations (Lippman and Rumelt 2003a, b). To simplify such a search, Felin and Zenger (2017) propose that strategists should start with a contrarian theory that guides problem formulation and key experimentation. A behavioral strategy as arbitrage perspective posits that not all contrarian theories are associated with attractive opportunities (Pontikes and Barnett 2017). Attractive opportunities tend to be protected by strong CSRL limits to arbitrage. This perspective guides strategists to identify the relevant behavioral and social problems to solve in order to locate and exploit valuable resources. Fortune favors strategists who apply this perspective to refine their contrarian theory and search for viable opportunities.
Finally, presenting failure to engage diversity as an attractive opportunity has interesting practical implications. Compared with prevalent but ineffective debiasing and training approaches to engaging diversity (Kalev et al. 2006), a behavioral strategy as arbitrage perspective (with Moneyball as an analogy) may nudge more strategists to evaluate diversity differently and engage in the arbitrage activities needed to eliminate market inefficiencies (Zuckerman 2012b). This by no means suggests that exploiting behavioral opportunities is easy; as Gavetti (2012, p. 14) writes, "what is strategically attractive is so precisely because it is extremely difficult to achieve." Understanding the four limits will help strategists assess their context-dependent constraints and develop feasible exploitation strategies more systematically. One ambition is for the idea of strategy as arbitrage to be diffused to such an extent that it will eliminate inefficiencies and allow merit to determine pay and career prospects in the long run, as demonstrated by the diffusion of the Moneyball strategy in many professional sports after 2003 (Lewis 2016). A behavioral strategy perspective may provide a surprisingly effective approach to help nonsports industries fix their persistent failure to engage diversity.
The structure of this paper is as follows. Section 2 reviews the theoretical foundation of behavioral strategy as arbitrage. Section 3 applies CSRL limits to the context of diversity and illustrates how they preserve labor-market inefficiencies using the case of Moneyball. Section 4 discusses how Billy Beane and the A's overcame CSRL limits and the scope conditions necessary to exploit behavioral arbitrage opportunities. The paper concludes by discussing the broader implications of the CSRL framework, including how hype surrounding artificial intelligence (AI) may strengthen rather than weaken various limits to engaging diversity.
The Theoretical Foundation of Behavioral Strategy as Arbitrage
The behavioral strategy as arbitrage perspective posits that an opportunity is more attractive when it is protected by stronger behavioral and social limits to arbitrage. This perspective is more applicable to resources that can be priced and traded. For strategic resources that are nontradeable or unpriced (such as human resources subject to noncompete clauses or firm-specific resource combination), this perspective can still be applied in the following sense: The difficulties of pricing and trading these resources preserve possible misevaluations as arbitrage opportunities for a strategist who can overcome these difficulties as limits to arbitrage (e.g., valuing these resources and making them transferable). Stated differently, the infeasibility of arbitrage does not exclude an arbitrage opportunity, according to this perspective, but moderates the extent to which this opportunity is fleeting or attractive. 3 Since the idea of behavioral strategy as arbitrage builds on, but also deviates from, the common understanding of arbitrage, I first review the idea of arbitrage in financial markets before extending this analogy to strategic contexts.
Market Efficiency and Arbitrage
The scope for arbitrage-defined as the exploitation of price-value differences (Barberis and Thaler 2003, Knorr Cetina and Preda 2012, Zuckerman 2012b)-in financial markets depends on the extent to which the market is efficient. The well-known efficient market hypothesis (EMH) posits that prices accurately reflect assets' intrinsic value. Whether the EMH is a realistic (or even useful) description of the market has been a central debate in economics and finance for decades (Thaler 2015). These arguments can be summarized by three competing perspectives (Zuckerman 2012b). The first perspective, which supports the EMH, posits that prices in financial markets are generally correct because of arbitrage activities (Fama 1970). Asset mispricing may occur temporarily but cannot persist because traders will identify mispricing as profit opportunities and arbitrage them away.
The second perspective, which rejects the EMH and posits that prices are socially constructed, is best captured by Keynes's (1936) beauty contest, in which the market and prices are entirely speculation-driven and price bears no relation to intrinsic value.
These two perspectives represent two ends of a spectrum of how price represents (or is decoupled from) intrinsic value. Interestingly, they share the same actionable implication: that arbitrage is futile. For social constructivists, price-value differences do not exist by definition. For true believers of the EMH, there is no incentive to arbitrage because any given price-value difference as an opportunity is too fleeting to pursue. Paradoxically, the EMH is self-defeating in the sense that the market cannot be efficient if all investors believe in the EMH and, in turn, dismiss arbitrage activities.
The third perspective provides a synthesis of these two extremes. Building on Benjamin Graham's (1959) value-investing paradigm, this perspective posits that price-value differences can occur but will converge because the market is like a voting machine in the short run (i.e., prices are unreliable because of investors' biases and sentiments), but a weighing machine in the long run (i.e., mispricing will be corrected because the market will figure it out). According to this perspective, arbitrage is not futile: Mispricing as arbitrage opportunities can exist and favor investors who have superior insight into valuation. Value investors should be contrarian: Instead of following or opposing the market (or the crowd), they should come up with an independent valuation of an asset and invest in the asset when its estimated value is higher than the market price (i.e., buy low) or short-sell the asset when the estimated value is lower than the market price (i.e., sell high). Profit will be realized when the estimated value is correct and the market price converges to this value.
The development of the limits to arbitrage concept (Shleifer and Vishny 1997) strengthens the third perspective in addressing the challenges from the other two perspectives. Arbitrage opportunities are not as fleeting as the EMH predicts because investors who identify a price-value difference are not always able to exploit the difference. This implies that the price-value difference as a profitable opportunity may persist until the limits to arbitrage disappear or are overcome. This creates incentives for value investors to search for arbitrage opportunities in the first place. On the other hand, social constructivists often cite dotcom and housing bubbles to emphasize the speculative nature of price and as existing proof that the market cannot figure it out (MacKenzie et al. 2007). An asymmetry in limits to arbitrage explains why arbitrages cannot always eliminate mispricing: Arbitrages are more effective at correcting underestimation (i.e., mispricing is eliminated when the demand for an underpriced asset increases) than at correcting overestimation (e.g., mispricing persists when the demand for borrowing an overpriced asset to short-sell it cannot be satisfied) (Massey and Thaler 2013, Turco and Zuckerman 2014). The implication is that price-value differences often occur and tend to converge, thanks to arbitrage activities, but the difference (as well as market inefficiency) can persist for a long time when limits to arbitrage are strong.
CSRL Limits to Arbitrage in Strategic Contexts
The idea of limits to arbitrage in financial markets can be extended to strategic contexts. Barney (1986) argues that abnormal returns would not exist if the strategic factor market were efficient because the price of acquiring a resource would reflect the value this resource could create. Since firms' traits and actions are enabled by various resources, which are ultimately acquired in the factor market, one has to assume the factor market's failure to allow the possibility of strategic opportunities (Denrell et al. 2003). Recent advances in behavioral strategy revisit this assumption and illustrate how behavioral failures may preserve factor market inefficiency (Gavetti 2012). For example, Fang and Liu (2018) highlight how cognitive biases, such as the status quo and homophily biases, can be translated into approaches that enable firms without resource advantages to disrupt industry incumbents. Denrell et al. (2019) argue that the way in which people are fooled by randomness creates an alternative source of opportunity, but highlight the sociocognitive complications of pursuing such opportunities.
Building on these prior works, this paper proposes a framework that integrates various behavioral failures under the idea of limits to arbitrage and applies it to the context of diversity in order to search for untapped opportunities in the labor market. To illustrate some of these behavioral failures, consider a thought experiment. Let us assume that resource X is valuable, as obtaining X will increase a firm's sales revenue, decrease its production cost, or both. Much of the strategy research focuses on how firms may develop the capacity to sense, seize, and integrate resource X as a profit opportunity (Teece et al. 1997). However, Barney's (1986) critique holds that resource X's expected profit-generation capacity will approach zero if many firms can sense, seize, and integrate resource X. For resource X to remain attractive, one must focus on failures-namely, why many firms fail to sense, fail to seize, or fail to integrate resource X to such an extent that resource X remains mispriced or underutilized relative to the value it can generate.
Because of various bounds on rationality, firms may fail to recognize resource X's value. I label this type of failure as resulting from cognizing limits. Boundedly rational individuals and firms may overlook resource X because they simplify the world during learning processes (Levinthal and March 1993) or through cognitive shortcuts, such as decision heuristics, simple rules, or mental representations (Davis et al. 2009, Gavetti 2012, Csaszar and Levinthal 2016). These simplifications serve as fast-and-frugal heuristics when decision makers modify them over time through immediate and reliable feedback (Gigerenzer and Goldstein 1996). Otherwise, they are likely to generate biases shared by many individuals and firms. For example, firms tend to cluster around a few strategic groups, and firms within such groups usually develop and share similar mental models, such as how to compete in their industry (Porac et al. 1995). If resource X is cognitively proximate to these firms, most of them will sense and compete for it, making its superior profit-generating capacity self-defeating. A necessary condition for resource X to remain valuable is that it is cognitively distant from these firms, so they will systematically overlook it owing to the bounds of their shared mental model. Importantly, "[o]rganizations and the individuals in them are notoriously reluctant to give up such mental models" (Levinthal and March 1993, p. 99). This predicts that many firms, particularly incumbents that take a mental model for granted, will make a similar mistake of ignoring resource X, preserving it as an untapped opportunity.
Even when firms sense that a valuable resource may exist, they may fail to seize it (i.e., the right resource X) because of various learning failures. I label this type of failure as resulting from searching limits. For example, firms may not profit from resource X if they cannot overcome information asymmetry and distinguish it from the lemons in the market (Akerlof 1970). Firms may learn from experience to undervalue resource X when its value cannot be accurately estimated without complementary resources (Cohen and Levinthal 1990, Mosakowski 1997) or substantial experience (Denrell and March 2001). Moreover, firms may develop a bias in favor of their own resource, resource Y, if it has led to prior successes (Audia et al. 2000). Salient success in an industry may also generate halo effects and fads, making some resources more popular than justified by their value (Rosenzweig 2007, Pontikes and Barnett 2017). These are just some of the traps documented in the literature on experiential and social learning. A shared feature of these failures is that they tend to lead firms to persistently seize less valuable resources, abandon the more valuable resource X prematurely, or both, preserving resource X as an undervalued opportunity.
Even when firms sense resource X and avoid seizing the wrong resource X, they may fail to integrate and realize its potential value or competence because of organizational dynamics. I label this type of failure as resulting from reconfiguring limits. Resources can be acquired, but competences-efficiency potentials that are leveraged from the firm's resources-need to be realized through effective organizational processes (Barney 1995, Teece 2007). Firms may own valuable resource X but fail to realize its competence for many reasons. Firms may not be motivated to integrate resource X when their current performance is coded as successful. Even when motivated to change, firms may underutilize resource X if it is competency-destroying (Henderson and Clark 1990). For example, it may create new products that cannibalize existing products' market share, or the innovation enabled by resource X may challenge a firm's existing power and status hierarchy. Strong resistance to integrating novel resources is to be expected from well-managed firms (Nelson and Winter 1982, Hannan and Freeman 1984). Even when resource X promises improvement in the long run (a positive content effect from adopting resource X), firms may not survive the cascading disruptions to routines in the short run (a negative process effect from adopting resource X). Underutilization, failures, or abandonments after seizing resource X may stigmatize it on the market, preserving it as an apparently unattractive opportunity.
Even when firms have the capacity to sense, seize, and integrate resource X, they may choose not to if doing so would be socially destructive (Benner and Zenger 2016, Correll et al. 2017). I label this type of failure as resulting from legitimizing limits. For example, firms may not profit from resource X if important stakeholders discount the output value owing to its uniqueness or incomprehensibility (Zuckerman 1999, Litov et al. 2012). Firms may distance themselves from resource X if using it implies deviation from taken-for-granted norms or institutional logic (Oliver 1997). Using resource X may be so detrimental to the reputation and status of a firm and its managers that they ignore what appear to be obvious opportunities (Jonsson and Regnér 2009). Interdependency may also create pluralistic ignorance around valuable resources, where many recognize resource X's value but no one is daring enough to break the "iron cage" (DiMaggio and Powell 1983). In this manner, low-hanging fruit can be protected like the emperor's new clothes.
The four limits introduced-cognizing, searching, reconfiguring, and legitimizing-deter firms from sensing, seizing, integrating, and justifying the valuable resource X. These limits operate like filters (see Figure 1): Some firms may fail to sense resource X because of cognitive distance; for those that sense it, some may fail to seize the truly valuable resource X owing to the difficulty of overcoming learning traps when experimenting with atypical resources; for those that sense and seize resource X, some may fail to integrate it because of internal resistance to changing or disrupted routines; for those that sense, seize, and integrate resource X, some may fail to justify to important stakeholders that using this resource is legitimate. These CSRL limits may be so strong that no firm can overcome them all, thereby protecting resource X as an untapped opportunity from being arbitraged away. This is bad news for factor market efficiencies, but good news for strategists who understand CSRL limits when searching for untapped strategic opportunities.
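A rough way to see why the filter logic can leave an opportunity untouched is to treat each limit as an independent screen. The sketch below is purely illustrative (the pass probabilities and firm count are assumptions, not estimates from the paper): when each of the four limits is passed by only a small fraction of firms, the expected number of firms that overcome all of them can fall below one.
def expected_exploiters(n_firms, p_cognize, p_search, p_reconfigure, p_legitimize):
    # Expected number of firms that pass all four CSRL filters,
    # assuming the filters screen firms independently.
    return n_firms * p_cognize * p_search * p_reconfigure * p_legitimize

# 1,000 firms, each filter passed by only 10-20% of them (assumed values).
print(expected_exploiters(1000, 0.10, 0.20, 0.15, 0.10))  # 0.3 firms on average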
The perspective of behavioral strategy as arbitrage aims to integrate existing behavioral science findings.
To use an analogy, Porter's Five Forces framework turned industrial economics on its head by showing how well-known economic forces that are detrimental to perfect competition can help predict an industry's profitability. Similarly, behavioral strategy as arbitrage uses knowledge developed in behavioral sciences to illuminate how well-known behavioral failures may help predict when noneconomic limits create and sustain strategic opportunities. Finally, the perspective of behavioral strategy as arbitrage is context independent, but applying the theory to search for opportunities arising from particular behavioral failures is context dependent. Given the prevalence of documented behavioral failures, inefficiencies might be predicted in many markets. However, context-dependent information and knowledge are required to identify how exactly these behavioral failures generate price-value gaps locally 4 and how to overcome these limits. In the next section, I will apply this perspective to the context of diversity using Moneyball as an illustrative example.
CSRL Limits to Arbitrage in the Context of Diversity: The Case of Moneyball
Here, I apply the behavioral strategy as arbitrage perspective and the CSRL limits to the context of diversity and explore why many firms fail to engage diversity. I follow the definition of diversity by Jackson et al. (2003, p. 802) as "the distribution of personal attributes among interdependent members of a work unit." Failing to engage diversity means that managers or firms, knowingly or unknowingly, fail to recruit atypical but qualified members when assembling a team to fulfil its goals. Note that the definition of qualified is often dependent on a team's composition and goals. For example, the performance bonus of engaging diversity is greatest when a team faces a complex task and its members have nonoverlapping cognitive diversity (Page 2017). Since measuring cognitive diversity and judging the interdependent merit of team members is challenging, these difficulties create precisely the limits that prevent firms from reliably sensing, seizing, and integrating sufficiently diverse team members. 5 As illustrated by the case of Moneyball, qualified but atypical individuals may be underestimated even when their merit is only weakly dependent on team composition and when the task is not complex. Greater failures can therefore be predicted when judging merit depends on more factors. These behavioral failures to engage diversity suggest the persistence of unrealized performance bonuses as untapped opportunities. I illustrate the application of the CSRL limits using Moneyball before discussing examples beyond sports.
The Cognizing Limit
The cognizing limit to arbitrage relates to how boundedly rational individuals and firms make systematic, suboptimal decisions when they simplify the complex world through decision shortcuts or mental representations. In the context of diversity, this limit focuses on the possibility of overlooking valuable but counter-stereotypical candidates (or overly favoring stereotypical candidates). A stereotype is an overgeneralized belief about the warmth and competence of a certain category of people that is usually based on easily observable traits, such as gender, race, age, build, or sexual orientation (Fiske and Taylor 2013). Which stereotypes are favored is context dependent, but the presence of a widely acknowledged stereotype suggests that many individuals and firms share a similar mental model in that context, which creates and preserves similar blind spots in their evaluations.
3.1.1. Moneyball. Identifying skilled players is one of the most important sources of competitive advantage in the MLB. The most reliable basis for predicting skills is track record. Thus, players should ideally be hired based on whether they have performed better and more reliably than their peers. However, an important limitation of this approach is that players with strong and reliable track records, such as incumbent MLB and college baseball players, are expensive. Most teams are unable to win bidding wars for these players when competing against richer teams, such as the New York Yankees. Thus, many are forced to search for talent among those with less reliable track records, such as high schoolers. MLB teams identify talents with limited track records by sending their scouts to observe high school games and report potential draft picks to the team manager. However, it is very difficult to judge better players simply by observing their performance: "One absolutely cannot tell, by watching, the difference between a .300 hitter and a .275 hitter. The difference is one extra hit every two weeks" (Lewis 2003, p. 68). As a result, scouts (largely retired baseball players) tend to use a representative heuristic based on their prior experience (Tversky and Kahneman 1974): Good players tend to have a certain look, the main feature being that they look like fit, powerful players. This suggests that some competent players, particularly those who are overweight, slower, or shorter than average baseball players, may be passed over by default. In contrast, young players who look similar to prototypical MLB players are judged to have greater potential to succeed in the MLB.
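Lewis's point about the .300 versus .275 hitter can be checked with a small simulation. The sketch below (my own back-of-the-envelope illustration, not taken from the book; the number of observed at-bats is an assumption) draws batting outcomes for both hitters over roughly the 50 at-bats a scout might watch and shows how often the genuinely better hitter looks no better.
import random

def observed_average(true_rate, at_bats, rng):
    hits = sum(rng.random() < true_rate for _ in range(at_bats))
    return hits / at_bats

rng = random.Random(42)
trials = 10_000
at_bats = 50  # roughly two weeks' worth of plate appearances
looks_no_better = sum(
    observed_average(0.300, at_bats, rng) <= observed_average(0.275, at_bats, rng)
    for _ in range(trials)
)
print(looks_no_better / trials)  # roughly 0.4 under these assumptions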
Judging talents using a representative heuristic is likely to be a fast-and-frugal decision shortcut. After all, many stereotypes emerge from a strong correlation between displaying such traits and superior performance. MLB scouts usually have to travel to hundreds of high schools per year and spend limited time at each school. The representative heuristic helps them screen hundreds of candidates at a glance.
However, this heuristic may become ineffective, particularly when it diffuses to become a dominant mental model for scouts predicting high school talents' future performance. Stereotypical predictions are based on correlations, and judgments based on imperfect correlations will inevitably lead to omission and commission errors (Christensen and Knudsen 2010, Csaszar and Levinthal 2016). A commission error occurs when players are drafted based on having the look but cannot perform as the stereotype predicts. This had been the case for the A's manager, Billy Beane, who had the look in high school but never lived up to expectations in the MLB. Detecting commission errors is not particularly difficult in the MLB, because self-fulfilling processes are relatively weak as compared with other professional sports, such as basketball (Mauboussin 2012). That is, an overrated baseball player is unlikely to meet performance expectations simply because his manager, teammates, and fans falsely believe he will meet them.
A stronger cognizing limit in the MLB is the detection of omission errors. Some promising players may be mistakenly dismissed because they are too counter-stereotypical. This may happen even to individuals with a strong track record, such as "submarine pitcher" Chad Bradford (Lewis 2003). Bradford played for the Charlotte Knights (Chicago White Sox's Triple-A affiliate) and was briefly promoted to the major league thanks to a pitcher's injury. Bradford's excellent performance continued, but the White Sox manager demoted him to the minor league when the injured teammate recovered. The White Sox manager attributed Bradford's wins to good luck, despite his track record. Rejecting competent players despite clear evidence of their competence may result from the widespread application of the representative heuristic. Over time, fewer competent but counter-stereotypical players will be available for observation in the MLB, making it increasingly difficult to correct omission errors. For example, a successful submarine pitcher like Bradford was probably a sample of one to the White Sox manager. He may have been right to dismiss this atypical case, but this sensible judgment was built on a larger sampling bias: Team managers could not see Bradford's merit because he was too "cognitively distant" (Gavetti 2012). Yet the distance was created because too many team managers adopted the same mental model, to such an extent that it reinforced a conventional, though flawed, wisdom that players without the stereotypical look cannot be good. Counter-stereotypical but competent players like Bradford remain undervalued because many experienced managers are blind to their merit, owing to their oversimplified representations of the world.
3.1.2. Beyond Sports. This discussion suggests that recognizing counter-stereotypical merit is likely to be more challenging for organizations outside the world of professional sports, because it is less about evaluating individuals' physical traits and more about their invisible cognitive diversity. Imagine that an executive wants to assemble a team to address a complex task: Whom should they recruit to join the team? According to the logic of generating a diversity bonus (Page 2017), they should first evaluate the nature of the task, in terms of the types of knowledge, tools, or experience required to address it. They should then recruit members with nonoverlapping cognitive resources that match the task requirements. This ideal scenario suggests the presence of cognizing limits that deter the executive from recruiting a sufficiently diverse team to address the complex task.
For example, the executive is likely to take a cognitive shortcut by predicting cognitive diversity based on identity diversity, just as MLB managers predict merit using the representative heuristic. Identity diversity may contribute a diversity bonus, but its influence is likely to have a mediating or moderating effect on cognitive diversity. For example, identity diversity in teams may positively moderate the expression of cognitive diversity: People are more likely to appreciate an opposing opinion if it comes from a person of a different social category (e.g., status or race) than a similar one (Dumas et al. 2013). On the other hand, people may have idiosyncratic experiences because of the social categories to which they belong. The resulting differences in experiences, rather than their differing social belonging, may be useful cognitive resources for generating diversity bonuses. This suggests that identity diversity in teams is, at best, an unreliable indicator of a team's cognitive repertoire.
However, people, organizations, and policymakers usually mistakenly equate identity diversity with cognitive diversity because the former is more easily recognizable and measurable than the latter. This occurs even though research shows that demographically diverse crowds (by gender and race) are typically not wiser than homogeneous crowds (de Oliveira and Nisbett 2018). A shared mechanism of many decision biases is a substitution effect (Kahneman 2011): Humans usually substitute a difficult question (e.g., Does this candidate have different cognitive resources from existing team members?) with an easier question (e.g., Does this candidate "look" different from existing team members?). This implies that the cognitive diversity of an identity-diverse team may be overrated unless the executive resists the temptation to apply oversimplified mental models when evaluating team members and their cognitive repertoire.
In summary, the cognizing limit to arbitrage may deter firms from engaging valuable human resources when qualified candidates deviate from what a stereotypical, competent employee should look like. The limit may be so strong that managers deny clear evidence contrary to their mental representations (e.g., the case of Bradford). Thus, valuable but atypical human resources remain untapped opportunities.
The Searching Limit
The searching limit to arbitrage concerns how individuals and firms systematically fail to identify and seize valuable but cognitively distant resources because of various learning failures. In the context of diversity, this limit focuses on the difficulty of identifying undervalued human resources among counter-stereotypical ones and overvalued ones among stereotypical ones. Even when firms manage to apply a different mental model and recognize the possibility of labor-market inefficiencies, identifying and seizing the right "hidden gems" (or dismissing the right "overrated stars") is nontrivial. For example, the data or metrics necessary to measure the value of atypical resources may not exist (Litov et al. 2012), suggesting that managers may fail to compute the correct values critical to evaluate opportunities. Moreover, to find the "hidden gems," one usually needs to experiment with many atypical candidates. This, in turn, creates variances in performance sufficient to deter many managers from continuing on the path of more distant search and exploration (Denrell and March 2001). Valuable human resources may thus remain mispriced, even when firms sense the presence of inefficiencies.
3.2.1. Moneyball. The case of Moneyball is usually portrayed as a triumph of data analytics. Yet the A's and Billy Beane were not the first team or manager to recognize the inefficiencies in MLB hiring and attempt to use data and statistical methods to search for valuable but mispriced players. Many MLB teams had evaluated players using available data since the 1980s. The challenge was not that data were difficult to acquire, but that many performance measures in the existing data were, in fact, misleading. Applying statistics to existing data may strengthen misevaluations because the results look scientific, but doing so enhances only confidence, not competence. Managers need to experiment with alternative measures to overcome the searching limit. However, such activities entice them into various learning traps that deter them from seizing the right hidden gems.
Take hitters' statistics, for example. Hitters are evaluated on both their offensive and defensive capacity, and two performance measures are widely used. On the offensive side, a good hitter is expected to have a high average runs-batted-in (RBI) score, which credits a hitter for making a play that allows runs to be scored. The problem with this measure is that it correlates not only with the hitter's offensive capacity but also with his teammates' capacity. To gain a higher RBI, a hitter needs more of his teammates to be on base in the first place. A good hitter may be undervalued if he happens to be on a less resourceful team with fewer competent teammates, whereas a mediocre hitter may be overvalued if he is fortunate enough to be on a stronger team. On the defensive side, a good player (as fielder) is expected to make fewer errors. According to the MLB official website, an error refers to a judgment by the official scorer that a fielder "fails to convert an out on a play that an average fielder should have made." 6 The problem with this measure is that it is vague and subjective. To make an error, a fielder needs to be close enough to where the ball falls to allow a miss or catch to be recorded in the first place, implying that a fielder with poorer judgment or slower movement may make fewer errors than a better fielder. Moreover, the record of errors is determined entirely by the official scorer, who receives no feedback on these judgments and is unlikely, in a few seconds, to have the cognitive capacity to compare an observed miss against all counterfactuals that an average fielder might have made. The implication is that searching for a valuable hitter based on existing metrics may create systematic over- and underestimations because many measures are confounding, imprecise, and subjective.
Now consider pitchers' statistics. Many sabermetricians agree that performance measures for pitchers are more reliable than those for hitters (Lewis 2003). An exception is for closing pitchers, or closers, who specialize in getting the final outs in a close game when their team is leading. When a closer is used and the team wins, it is framed as the closer saving the game; otherwise it is framed as a blown save. The problems with this measure are twofold: First, it is based on a small sample size; second, the framing makes the outcome sound more important than it actually is. Closers are used mainly in final innings, meaning that their performance is based on much smaller samples than those for starting or relief pitchers. However, their less reliable performances may be exaggerated by the phrase "save": They may receive too much credit for wins (or blame for losses), even though many outcomes may have been achieved largely without their intervention.
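The teammate-dependence of RBI can also be illustrated with a toy simulation. The sketch below is a deliberately crude model (the on-base rates, number of plate appearances, and the assumption that every runner on base scores on a hit are my own simplifications, not MLB data); it only shows that two hitters with identical hit rates accumulate very different RBI-like totals when their teammates differ.
import random

def simulated_rbi(hit_rate, teammate_on_base_rate, plate_appearances, rng):
    rbi = 0
    for _ in range(plate_appearances):
        # Crude: up to three teammates may be on base, independently of one another.
        runners_on = sum(rng.random() < teammate_on_base_rate for _ in range(3))
        if rng.random() < hit_rate:
            rbi += runners_on  # credit the hitter for every runner already on base
    return rbi

rng = random.Random(0)
print(simulated_rbi(0.300, 0.360, 600, rng))  # identical hitter, strong teammates
print(simulated_rbi(0.300, 0.280, 600, rng))  # identical hitter, weak teammates: far fewer "RBIs"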
Another learning trap occurs when these misevaluations lead to disappointing performance. That is, social learning and benchmarking may encourage inefficient metrics to persist in the MLB. When underperforming, most teams follow a standard search strategy of learning from the most successful teams (Haunschild and Miner 1997), such as the New York Yankees. However, rich teams such as the New York Yankees can afford to keep players who not only do well on existing measures (such as high RBIs, low errors, or more saves) but also perform reliably well.
Learning from these salient successes seems to confirm the robustness of existing performance metrics, but this strategy may only work for the richest teams that have no need to make trade-offs.
3.2.2. Beyond Sports.
A specific searching limit in the context of diversity is a misplaced belief in meritocracy. According to the "no test exists" rule for assembling a diverse team, "no test applied to individuals will be guaranteed to produce the most creative groups" (Page 2017, p. 95). Complex tasks require a cognitively diverse team; however, the team's cognitive diversity cannot be recognized in isolation or ex ante, but must be identified as the team is composed and expanded. A candidate's cognitive resource is useful only when it produces additional ideas or perspectives that differ from those of existing team members. Yet cognitive differences that are useful for filling the gap are only recognizable after an existing team has tackled the task and realized its own shortcomings.
Rather than appreciating the no test exists rule and hiring team members sequentially, organizations often believe they can solve complex problems by recruiting the "best individuals," according to objective criteria. This belief holds when addressing noncomplex tasks, as the most able and creative individuals are expected to master all the skills and ideas needed to solve the task (e.g., a difficult mathematical problem). However, this belief in meritocracy becomes a searching limit that deters the executive from recognizing that a better team could potentially have been assembled. Importantly, the no test exists rule does not undermine individual ability or creativity; it highlights that the common practice of recruiting the best candidates according to objective criteria can itself become a searching limit. Teams cannot discover their mistakes unless they experiment with candidates who are sufficiently different from existing members, or who even appear unqualified based on objective criteria.
Finally, even an executive who correctly identifies that the assembled team is insufficiently diverse may be trapped by the "hot stove effect" when searching nonlocally (Denrell and March 2001). Executives may be shocked by hiring errors, because attempts to hire a cognitively diverse member usually entail experimenting with many atypical hires. Such experiments may lead to long-term performance improvement, but specific hires may cause immediate disasters that prompt the premature termination of searching.
In summary, the searching limit to arbitrage may deter firms from seizing the right hidden gems, even when they sense labor-market inefficiencies. Existing data and measures may be systematically misleading, but various experiential and social learning traps may deter managers from discovering these flaws or from experimenting with alternative measures and candidates. As a result, valuable human resources may remain under the radar.
The Reconfiguring Limit
The reconfiguring limit to arbitrage describes firms' systematic failure to integrate valuable resources because of a resistance to change or failure to reorganize routines. In the context of diversity, this limit focuses on the difficulty of fully realizing the potential of atypical hires in teams. Even when firms manage to sense and seize the right hidden gems, this does not necessarily mean that other employees or team members will appreciate their value, particularly when the acquired resources are unconventional. This may lead to underutilization of these resources or even a self-fulfilling prophecy, whereby they fail to create value because many falsely believe that they cannot do so. Valuable human resources may remain underutilized or abandoned prematurely, even when firms sense and seize them.
3.3.1. Moneyball. Billy Beane and the A's were not the first MLB general manager and team to overcome the cognizing and searching limits. Many MLB fans, particularly Bill James (author of the famous Historical Baseball Abstract; James 1985), recognized the inefficiencies in the MLB and created alternative, more effective measures to evaluate players. Most MLB teams ignored these advances in sabermetrics. Some did follow them but, because of the reconfiguring limit, failed to overcome resistance from internal stakeholders.
Take, for example, John Henry, who was briefly the owner of the MLB's Miami Marlins. Having made a fortune by exploiting the inefficiencies of financial markets, Henry believed he could replicate his success in the MLB: People in both fields operate with beliefs and biases. To the extent you can eliminate both and replace them with data, you gain a clear advantage . . . Many people think they are smarter than others in baseball and that the game on the field is simply what they think it is through their set of images/beliefs. Actual data from the market means more than individual perception/belief. The same is true in baseball. (Lewis 2003, p. 56). Based on his belief that he could profit from inefficiencies in the MLB, Henry acquired the Marlins in 1999 and adopted more efficient metrics for evaluating, recruiting, and managing players. However, the Marlins had some of the worst performances in their history under Henry, and he sold his shares in the team in 2002. Henry's problem was social and political: His approach was so different from the conventional MLB playbook, and how he implemented it as an outsider was so radical, that the entire team (manager, coach, scouts, and players) resisted the changes through noncooperation. As past successful MLB players themselves, many of these internal stakeholders benefited from the existing value system, such as having the right look or high performance on popular (but misleading) measures. They hesitated to adopt an approach that might harm their self-identification, even though it would clearly help them identify the best resources in the business. These internal stakeholders defended their value system so strongly that it seemed they would rather lose games than sacrifice their identity.
Similar challenges had occurred at the A's before Beane became general manager. Beane's predecessor, Sandy Alderson, had also adopted sabermetrics to improve player recruitment. However, the A's coach instructed many acquired players to do the opposite of what they had been hired to do. Traditionally, base-on-balls was considered a pitcher's error and an irrelevant measure of wins. The sabermetrics approach suggests that this (as well as the on-base rate) is an important measure because it is more highly correlated with wins than other popular measures, such as batting averages. More importantly, high base-on-balls should be credited to hitters, who are likely to have unusual patience and superior judgment that exploits pitchers' weaknesses. But the hidden gems trained or acquired by Alderson lost their patience or judgment because their coach, Tony La Russa, told them to unleash their natural aggression and swing freely. Alderson never challenged La Russa for ruining the recruitment strategy, as quoted in Lewis (2003, p. 60): "There was no very good reason for this; it's just the way it was, because the guys who ran the front office typically had never played in the big leagues." La Russa overgeneralized from his experience as an MLB player and rejected insights from outsiders like Alderson who had never played in the league. The A's had embraced sabermetrics long before Beane became general manager, but they failed to overcome the reconfiguring limit because possible improvements were blocked by powerful gatekeepers who disallowed changes that contradicted their worldview.
3.3.2. Beyond Sports.
Hiring cognitively diverse team members does not necessarily imply that their cognitive diversity will be effectively expressed, communicated, assimilated, and integrated. Even when a sufficiently diverse team is assembled, unique perspectives and knowledge may be left unassimilated unless the team has a culture or norm that encourages people to challenge the status quo and value differences. Worse, existing team members may not understand the logic of generating diversity bonuses and may interpret atypical hiring that deviates from objective criteria as discrimination or favoritism, leading to hostility to the new recruit. This may generate a diversity penalty rather than a bonus (Leslie 2018). For example, recent studies show that when females or racial minorities are hired as executives or chief executive officers (CEOs), they may perform less well than expected because male or white executives may withdraw support owing to their perceived loss of identity (McDonald et al. 2017). This implies that simply including diverse team members is insufficient because of reconfiguring limits.
When a firm's goal is to achieve critical mass from scaling up, diversity is not always beneficial (Dierickx and Cool 1989). Instead, homogeneity of knowledge, experience, and connections may facilitate communication and create trust among team members. These qualities are essential when a team's main task is less about creating innovative ideas and more about selecting and developing the best among them (Reagans et al. 2005, Keum and See 2017). Diversity is important when the task requires substantial cognitive diversity rather than social cohesion and harmony among team members, and the firm needs to avoid possible mismatches.
In summary, the reconfiguring limit to arbitrage may deter firms from integrating atypical resources, even when they manage to sense and seize these valuable resources. Resistance from existing members may be so strong that the valuable resources may be set up to fail. As a result, unconventional but valuable human resources may be underutilized or even stigmatized in the labor market.
The Legitimizing Limit
The legitimizing limit to arbitrage relates to how firms fail to justify to external stakeholders that the output from unconventional resources is indeed valuable or the process of generating the output is legitimate. In the context of diversity, this limit focuses on how external stakeholders may dismiss the performance bonus from engaging diversity if they discount or refuse to acknowledge the process or output.
3.4.1. Moneyball. One might think that the number of wins is the most important performance measure to MLB teams. But whereas team wins are indeed important to their fans (who contribute to revenues via ticket sales), they are not necessarily the most relevant consideration for team owners and management. Instead, following the norm is considered paramount to many of them. Deviating from conventional wisdom about how an MLB team should be run may attract disapproval from the MLB "social club" (Lewis 2003). Problematically, according to sabermetrician Voros McCracken: [The MLB is] a self-populating institution. Knowledge is institutionalized. The people involved with baseball who aren't players are ex-players . . . They aren't equipped to evaluate their own systems. They don't have the mechanisms to get rid of the bad. They either keep everything or get rid of everything, and they rarely do the latter. (Lewis 2003, p. 239).
The implication is a separation of brains and capital, as highlighted in the limits to arbitrage in financial markets (Shleifer and Vishny 1997). Even if managers recognize efficient approaches to winning more games and making more money, they cannot convince their owners, who listen to those who appear to be more legitimate in the sport, even when their knowledge is outdated or flawed.
The social cost of adopting an unconventional approach may outweigh the economic benefits of doing so. Managers who adopt unconventional approaches may not get credit when they succeed. For example, the A's unusual success-winning many games with a limited budget-became so salient that the MLB organized a committee to study this aberration, but its conclusion was mainly that "they've been lucky" (Lewis 2003, p. 122). Many guards of the "MLB club" (such as ex-players as commentators) criticized Beane's approach and questioned why, if his approach was so effective, the A's didn't win the World Series. Such criticisms are not fact based but taste based. Many professional sport seasons are structured to mock rationality; success during the long regular season is much more reliable than success during the brief playoffs (Denrell et al. 2015). Yet teams and fans care much more about the playoffs, despite outcomes depending more on luck. MLB insiders didn't acknowledge the A's success because how it was produced was not to their taste. Pointing out flaws in their criticisms would be unlikely to change their evaluations, but rather would enhance "anti-intellectual resentment" (Lewis 2003, p. 99), which was based in the belief that MLB outsiders know nothing except how to produce numbers on computers and thus have no right to challenge the MLB's norms. Billy Beane was criticized precisely because his unconventional approach led to successes that humiliated insiders and because, as an ex-player himself, he "betrayed" this club. Other teams and managers may have hesitated to follow in Beane's footsteps for fear of a social backlash.
On the other hand, managers who adopt unconventional approaches may become scapegoats when they fail to meet expectations. This happened to Paul DePodesta, an A's assistant of Beane's who was good at analyzing players' value using sabermetric principles. He was hired as general manager of the Los Angeles Dodgers in 2004 but fired shortly after, following a terrible season. The reason for his termination was mainly bad luck: Several players whom DePodesta hired later proved valuable, but six of them were injured in 2005. The Dodgers' 2005 season resulted in the team's worst record since 1992, and its owner, partly influenced by two strong anti-Moneyball sports columnists at the Los Angeles Times, fired DePodesta as a result. The implication is a typical agency problem: Achieving mediocre performance by following convention is a more reliable survival strategy for MLB managers, even though some are aware of more efficient approaches. 7
3.4.2. Beyond Sports. Even when a team is able to sense, seize, and integrate unconventional resources, the legitimizing limit may still impede realization of the diversity bonus. The executive must convince relevant stakeholders that the diversity bonus is real. Research shows that if performance measurement is based on subjective evaluation or is socially constructed, evaluations are likely to reflect evaluators' biases (Becker 1971). For example, a diverse team may generate a novel artistic innovation that spans multiple categories in a surprising way. However, if there are no objective criteria for evaluating this artistic output, evaluators may use other cues, such as judgments based on creators' stereotypes, or may conform with high-status colleagues' evaluations. This suggests that diversity bonuses may be generated but discounted so heavily that they are no longer profitable. Venture capitalists (VCs), for example, may correctly identify the uniqueness of undervalued start-ups, such as having entrepreneurs from atypical backgrounds or developing an unconventional innovation. However, they may be unable to profit from this superior insight if they cannot convince other investors of its value. If VCs rightly foresee this legitimizing limit, they may forgo this start-up, failing to realize the diversity bonus despite recognizing it. Similarly, analysts may not understand a firm's atypical strategy and may discount it (Benner and Zenger 2016), limiting the acquisition of diverse assets (Zuckerman 1999).
In summary, the legitimizing limit to arbitrage may deter firms from engaging in valuable diversity even when they privately know that doing so might lead to superior performance. Self-interested managers may choose not to pursue obvious opportunities that may appear illegitimate to important stakeholders if their incentives are structured to punish unconventional successes and reward legitimized mediocrity or even failures.
Overcoming the CSRL Limits to Arbitrage Mispriced Diversity
The case of Moneyball illustrates how CSRL limits deter many MLB teams and managers from sensing, seizing, integrating, or justifying valuable but atypical players. These strong limits preserve behavioral failures and labor-market inefficiencies, such that undervalued players remain untapped opportunities. Teams that are able to supersede these limits more effectively than their rivals can monopolize the opportunity and earn contrarian profit. This was the case for the A's and Billy Beane from 1999 to 2003. They exploited an untapped opportunity in the MLB-recruiting and using valuable but atypical players to gain more wins-because they managed to overcome all the CSRL limits more effectively than their rivals. As discussed, some teams, such as the Miami Marlins under John Henry, overcame some CSRL limits, but remaining limits still effectively deterred them from allowing Moneyball to occur sooner. As will be elaborated, overcoming all the CSRL limits usually depends not only on becoming more rational or strategic, but also on being in the right place at the right time: If strategists happen to have "preferential access to the missing piece of the puzzle, identifying the opportunity might be easy" (Denrell et al. 2003, p. 985).
In terms of cognizing limits in the MLB, a shared mental model may have been so popular that many teams and their management could not see how atypical players (such as Chad Bradford) might actually be competent. What motivated Beane to pay attention to, and eventually adopt, a different mental model was largely his personal, idiosyncratic experience. Beane had been a promising high schooler, but his MLB career had been disappointing. He knew from experience that the conventional practice of drafting stereotypical players with the right look was flawed. In fact, Beane turned his experience on its head by using his antitheses as a guide. That is, he sought players unlike himself, such as young men "not looking good in a uniform . . . couldn't play anything but baseball . . . had gone to college" (Lewis 2003, p. 117). Hundreds of high schoolers were mistakenly drafted into the MLB because they, like Beane, had a stereotypical look, but only Beane took advantage of this blunder and turned it into a contrarian theory (Felin and Zenger 2017) that allowed him to see what his rivals failed to see.
The searching limit in the MLB is about identifying the right hidden gems among atypical candidates. This task is nontrivial, because most atypical players are not competent, as rightly predicted by the representative heuristic. Teams searching for the truly undervalued among atypical players face many learning traps. As discussed, the challenge is not only about analyzing data, but also about collecting and analyzing more reliable data. This limit was not particularly challenging to Beane. His predecessor at the A's, Sandy Alderson, had adopted sabermetrics principles in the 1990s (including collecting, purchasing, and analyzing unconventional but more reliable performance metrics), suggesting that Beane had already gone through part of the learning curve when he took over the team in late 1997. Hiring Paul DePodesta, a Harvard-trained economist, as his assistant improved the A's efficiency in identifying undervalued players, but was probably not essential for the A's success, since many fans would have loved to contribute equivalent skill and knowledge freely to any MLB team willing to listen to them.
What is more surprising is Beane's strategic exploitation of rivals being constrained by the searching limit. As discussed, many closing pitchers are overvalued because their performance is based on a small sample size and is sensitive to framing. Beane reassigned some of the A's above-average relief pitchers as closers, and many soon seemed more valuable than they actually were. Rivals that persisted in using the number of games saved were fooled and became overenthusiastic when Beane proposed deals to trade these closers. The A's benefited from this sell high strategy and winner's curse in trades. Good deals based on apparent but misleading superior performance were engineered to allow the A's to gain more resources to recruit undervalued players.
The reconfiguring limit deterred some teams (such as the Marlins) from exploiting inefficiencies in the MLB ahead of Beane. In fact, the A's had been deterred by the same limit before Beane took over because Coach Tony La Russa had refused to make use of the atypical players hired by Alderson. The solution was to replace him with a low-profile coach, Art Howe, who "was hired to implement the ideas of the front office" (Lewis 2003, p. 61). Beane also ensured that incentives were structured to reward players for delivering what they were hired for, such as high base-on-balls, and to punish them if they followed the conventional playbook, which actually harmed performances, such as stealing bases or sacrificing strikes. Importantly, unlike Alderson, Beane had the authority to implement this unconventional strategy: He was known as "the guy destined for the Hall of Fame who never panned out" (Lewis 2003, p. 57). That is, he was a living example of the inadequacy of the conventional MLB playbook for A's scouts and players. Beane also facilitated the integration of atypical players by reducing the influence of his own biases. Knowing that his own judgments might also be influenced by stereotypes, Beane tended to meet the players he hired infrequently. By reducing his exposure to salient but misleading cues, he set himself up to evaluate and use players based on their contributions to wins, rather than by their look.
Finally, the legitimizing limit was very strong in the MLB. The MLB playbook probably only worked for the richest teams, but other teams felt pressure to follow these rules, even though some may have privately known that they were not the most efficient (Correll et al. 2017). The fact that Beane managed one of the poorest teams in the MLB and could not afford to go after the same players as other teams probably enabled the A's to overcome the legitimizing limit more effectively than their rivals. Owing to the resource constraint, the A's owner ignored journalists' criticisms of Beane's approach and allowed him to experiment with different types of players to enhance performance, effectively relaxing this limit. Moreover, Beane used the A's underdog status to his advantage: He justified his acquisition of apparently flawed players by his lack of resources. A's management got excited when they realized that the flaw that caused rivals to discount some players in the deals was "something that just doesn't matter" (Lewis 2003, p. 116). The A's deal counterparts were fooled because they believed that pursuing flawed players was a legitimate move for resource-poor teams like the A's.
In summary, Billy Beane and the A's managed to overcome all the CSRL limits in the MLB and monopolize an untapped opportunity. Despite having one of the lowest payrolls, the A's thrived by systematically acquiring players from rivals at a lower price than implied by their contributions to winning. Beane's idiosyncratic experiences and the A's circumstances made them less blind and constrained, which allowed them to exploit the opportunity. How Beane strategized with his experience and circumstances also played an important role in integrating and justifying the atypical resources more effectively than their predecessors. Overall, planned and unplanned behavioral asymmetry between Beane, the A's, and other MLB teams and managers explains why it was Beane who successfully exploited this opportunity.
The case of Moneyball also highlights how overcoming the CSRL limits generates sustainable competitive advantage for a strategist (Peteraf 1993), as these limits can create effective isolation mechanisms (Rumelt 1984) that deter ex post competitions (e.g., many MLB teams attributed the A's success to luck and did not bother to study or imitate their approach) and mobility (e.g., rivals had limited interest in hiring the A's atypical players, no matter how good their performances were). In the case of Moneyball, the A's sustainable competitive advantage became fleeting when the CSRL limits were eliminated by Michael Lewis's bestseller. Still, this case highlights how strong CSRL limits can preserve attractive opportunities: Strategists who can overcome these limits can enjoy competitive advantage when rivals continue to be deterred by these limits.
In Search of Behavioral Arbitrage Opportunities
Searching for viable strategic opportunities is like searching for a needle in many haystacks (Lippman and Rumelt 2003a). Felin and Zenger (2017) propose that strategists can simplify the process by developing a contrarian theory to reduce the number of haystacks that need to be searched. This paper presents a perspective that can refine the search by positing that attractive opportunities tend to be protected by strong behavioral and social limits to exploiting them. This perspective thus helps strategists locate the most promising-and most overlooked-haystack. The proposed CSRL limits help a contrarian strategist formulate specific behavioral and social problems and experiments in order to identify and overcome these limits. More generally, this perspective provides a template for searching for persistent behavioral failures and, in turn, untapped opportunities. I illustrated the application of this perspective in the context of diversity and human resources. Future research might use this approach to outline the specific limits that preserve opportunities in other strategically relevant contexts. Importantly, untapped strategic opportunities are not necessarily tied to any particular approach (e.g., wisdom of the crowd trumps experts), method (e.g., data analytics trumps conventional evaluations), or presumption (e.g., diversity enhances performance). Astroball (Reiter 2018), an update on the evolution of the Moneyball strategy, illustrates how one of the worst-performing MLB teams, the Houston Astros, won the 2017 World Series by rediscovering the value of scouts' judgment. Untapped opportunities emerge when too many people share similar enthusiasms for a particular approach, method, or presumption (e.g., replacing scouts with data analytics) to such an extent that all alternatives become too cognitively distant to them. For example, when the Moneyball strategy became a fad after 2003, scouts' input into hiring decisions was severely marginalized and underestimated. Thus, opportunities existed for those willing to try (and capable of) becoming contrarian (Felin and Zenger 2017), as the Houston Astros did. Nevertheless, the Astros' success may trigger another cycle of diffusion, imitation, and socialization, and a new set of CSRL limits as well as strategic opportunities. To paraphrase Mark Twain, as a strategist, whenever you find yourself on the side of the majority, it is time to search for contrarian opportunities.
Behavioral strategy as arbitrage also contributes to the diversity literature by providing a distinct perspective that complements the two mainstream views of why firms should engage diversity. Much research and many practices address diversity from a normative, justice-centric perspective (Nkomo et al. 2019), stipulating that firms should encourage the inclusion of individuals with certain disadvantaged social identities, such as female, black, or immigrant. Others emphasize a pragmatic, performance-centric view (Page 2017), stating that firms that engage diversity-solving complex tasks by assembling cognitively diverse teams-are likely to earn a performance bonus. Behavioral strategy as arbitrage suggests that both perspectives are incomplete. Taking a normative but behaviorally naïve perspective on organizations has been shown to backfire: Doing the right thing, such as fixing historical social injustice through affirmative action, without considering the CSRL limits may reinforce rather than attenuate the disadvantages of certain identity groups (Dobbin et al. 2015). One challenge is that many who take a normative stance believe that "pragmatic logics carry less weight than normative arguments" (Page 2017, p. 6). This belief may also create greater CSRL limits, because those who have a perceived moral high ground are more likely to make biased judgments and discount viable alternatives when something does not fit their moral values-the so-called paradox of meritocracy (Castilla and Benard 2010). On the other hand, the pragmatic view does not yet address the behavioral and social limits associated with exploiting the performance bonus from engaging diversity. The logic of generating a diversity bonus may be clear theoretically, but behavioral failures prevent these bonuses from being exploited practically. The perspective of behavioral strategy as arbitrage illuminates the importance of studying the forces that generate behavioral failures when engaging diversity in order to do the right thing, improve performance, or both. Organizations should also clearly distinguish between normative and pragmatic perspectives, because being stuck in the middle creates greater CSRL limits.
Recent hype around AI serves as an interesting illustration of the relevance of CSRL limits. Many AI algorithms, similar to human cognitions, predict behaviors or categories based on simplifications of complex reality and generalizations of the inferences obtained. Although effective in many ways, this simplification may create overgeneralization and predictable blind spots. For example, algorithms can only optimize what can be quantified, but many subtle characteristics, such as cognitive diversity in teams, cannot yet be measured reliably, leading to systematic misevaluations by naïve AI users. Moreover, AI is only as smart as the data it is fed, but existing data may reflect decades of accumulated human biases and social injustice. This is why Amazon ditched its AI recruiting tool, which favored males for technical jobs. This incident also suggests that less salient biases than gender stereotypes may be utilized by algorithms, creating subtle iron cages that trap future generations. Even if strategists overcome the cognizing and searching limits in algorithms, it is still challenging for existing organizations to integrate and adapt to AI. For example, who should be held accountable when AI predictions go wrong, particularly when the algorithms are too sophisticated to be comprehended by managers and stakeholders, such as predictions based on deep-learning algorithms? Although some managers appreciate these concerns about AI, they may be forced to adopt it prematurely when investors or the media uncritically believe that AI, combined with big data, is the solution to every problem. Unfortunately, taken together, the hype surrounding AI may actually reinforce existing CSRL limits that deter firms from engaging diversity. However, the good news is that these limits also preserve attractive, untapped opportunities for firms that are able to predict results based on algorithms that allow enriched representations, to sanitize big but polluted data, to redesign organizational structures to adapt to AI, and to rebel against the myths of AI and their true believers.
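To illustrate the point that an algorithm is only as smart as the data it is fed, consider the following toy sketch (purely hypothetical; the groups, counts, and the frequency-based "scorer" are my own assumptions, not any real recruiting system). A naive score fitted to historically biased hiring decisions simply reproduces the lower hire rate for the disadvantaged group.
from collections import defaultdict

def fit_hire_score(history):
    # The "model" here is just the historical hire rate per group,
    # which is what a naive score trained on these labels converges to.
    counts = defaultdict(lambda: [0, 0])  # group -> [hires, applicants]
    for group, hired in history:
        counts[group][0] += hired
        counts[group][1] += 1
    return {group: hires / total for group, (hires, total) in counts.items()}

# Equally qualified candidates, but group "B" was historically hired less often.
history = [("A", 1)] * 60 + [("A", 0)] * 40 + [("B", 1)] * 30 + [("B", 0)] * 70
print(fit_hire_score(history))  # {'A': 0.6, 'B': 0.3}: the learned score inherits the bias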
Finally, the discussion of factors contributing to the success of the A's and Beane also potentially reconciles two competing views on the origin of great strategies and performance. Many strategy researchers consider great strategies to be "rooted in meaningful departures from a prevailing status quo-the cognitions, practices, routines, and institutions that stabilize a market or competitive order at any given point in time" (Gavetti and Porac 2018, p. 354). They suggest systematic pathways to greatness, such as by deepening, extending, or replacing the existing market or competitive order (p. 364). An alternative, more pessimistic view is that there is no such systematic pathway (Denrell et al. 2003, March 2006, Andriani and Cattani 2016). As Moneyball illustrates, Beane's and the A's departure from the status quo was a mixture of luck (happening to be in a poor team with an enlightened predecessor as mentor and a hands-off team owner) and strategy (e.g., Beane maximized returns from his (un)fortunate experiences and his team's limited resources). Exceptional performance is likely to occur in exceptional circumstances (Denrell and Liu 2012), implying that great strategies can improve performance, but are insufficient to achieve great performance. Great performance, such as radical innovation, exceptional growth/return, or unprecedented achievements, is more likely to occur in contexts where most firms are deterred by various limits in cognitions, practices, routines, and institutions, except for a few that happen to overcome these limits by being closer to the right time and right place.
This view should not discourage strategists, as it simply adds one more clue to solving the strategic paradox that attractive opportunities should not be easy to exploit. Popular strategy theories teach us that attractive industry opportunities are protected by strong limits that deter entry (Porter 1980), and that attractive resource opportunities are protected by strong limits that deter imitation and substitution (Barney 1991). The behavioral strategy as arbitrage perspective resembles this logic and suggests that attractive behavioral opportunities cannot be low-hanging fruit, but must be protected by strong limits that deter deliberation, learning, changes, and being contrarian. Whoever can overcome all these limits will monopolize the contrarian profit. By looking for sticky behavioral failures, one may be able to identify untapped strategic opportunities. Fortune favors the strategists prepared with an acute awareness of behavioral and social dynamics.
Endnotes 1 Arbitrage is about the exploitation of price-value gaps (Barberis and Thaler 2003, Zuckerman 2012b). In Section 2, I elaborate on how I extend this concept to strategic contexts. In financial markets, arbitrage describes how rational traders take advantage of less rational investors' biased evaluations. For example, suppose that Firm A's fundamental value is $10 per stock share. Imagine that a group of irrational traders, or "noise traders" (Delong et al. 1990), becomes overly pessimistic about Firm A's prospects, pushing its stock price down to $5. A rational trader, Trader X, can profit by acquiring the undervalued Stock A and can hedge the risk by shorting a substitute stock-for example, stock of Firm B operating in the same industry with a similar prospective cash flow as Firm A. If Firm A's stock price subsequently bounces back to its fundamental value of $10 (i.e., when the market recovers from the overreaction), the profit earned by Trader X is the temporary price difference ($10 − $5 = $5) times the volume of Stock A acquired, minus the cost of the hedge. If Firm A's stock price subsequently deviates further from its fundamental value of $10-for example, because of a piece of industry news that negatively impacts both Firm A and Firm B, and hence pushes Firm A's price, say, from $5 to $3 and Firm B's price from $10 to $8-then Trader X can attenuate the loss (i.e., the decrease of $2 in Firm A's share price times the acquired volume) through the hedge. That is, Trader X can sell Stock B at $10, with the acquisition cost equal to its current price of $8. 2 In financial markets, traders may identify a mispriced asset, but arbitraging the mispricing may be infeasible because of at least three types of limits to arbitrage. First, there is a hedging risk because the substitute stock is rarely perfect. Following the example from endnote 1, Stock B's price may not decrease enough (or at all) when negative industry news is announced, suggesting a failed or insufficient hedge. Second, there is a capital risk because traders rarely invest their own money. "A separation of brains and capital" (Shleifer and Vishny 1997) exposes traders to the risk that they may lose capital support if their investors are not immune to the misevaluations upon which the arbitrage opportunity is based, as illustrated by Michael Lewis's book, The Big Short (Lewis 2011). The third type of risk concerns implementation: Mispricing may occur, but it may not lend itself to a feasible arbitrage strategy, or the cost of implementation, such as the borrowing cost to implement sufficient short selling in a hedge, may be too high. As Keynes put it, "[t]he market can stay irrational longer than you and I can remain solvent" (Shilling 1993, p. 236). Overall, these limits suggest that although an arbitrage opportunity may exist, it may be too costly or risky to be feasible. A mispricing may be identified, but with no profitable investment strategy (i.e., no free lunch), allowing the mispricing and market inefficiencies to persist (i.e., prices are incorrect). 3 This argument is consistent with how Barney (1989) responded to the critique of Dierickx and Cool (1989) that important strategic resources are often cumulated within firms and not tradeable on the strategic factor market. Barney argues that tradability is a nonissue in his 1986 strategic factor market framework because the framework "applies in the analysis of the return potential of these assets" (1989, p. 1512). 
Similarly, misevaluation of assets creates potential arbitrage opportunities. They are more difficult to exploit if the assets under consideration are nontradeable or firm specific, but this does not mean that the potential does not exist. Of course, an arbitrage opportunity may only exist counterfactually and could never be realized because of limits that are impossible to overcome. This is a practical rather than theoretical constraint of this perspective. Also, its application may still offer useful guidance for strategists considering whether an opportunity is worth pursuing. 4 In contrast to financial arbitrage, which relies on a general equilibrium framework, a behavioral strategy as arbitrage perspective, consistent with the resource-based view of the firm (Peteraf 1993;Lippman and Rumelt 2003a, b), relies on a partial equilibrium framework. 5 This is consistent with the idea that many strategic resources are difficult to price, as they involve combinations of firm-specific resources. The fact that they are difficult to value suggests that their misevaluation, as well as the resulting opportunities, may be protected by strong limits to arbitrage. 6 See http://m.mlb.com/glossary/standard-stats/error (accessed February 15, 2021). 7 Or, as John Maynard Keynes (1936, p. 158) put it, "Worldly wisdom teaches that it is better for reputation to fail conventionally than to succeed unconventionally." | 2021-05-08T00:03:31.937Z | 2021-02-19T00:00:00.000 | {
"year": 2021,
"sha1": "b9e3c9f2c67af98dc9eaa47ff765940e6c900248",
"oa_license": "CCBYNC",
"oa_url": "http://wrap.warwick.ac.uk/140032/7/WRAP-why-do-firms-fail-engage-diversity-behavioral-strategy-Liu-2020..pdf",
"oa_status": "GREEN",
"pdf_src": "Anansi",
"pdf_hash": "1122c80a6b1f158e280fb27c0caadcd221a6ea1b",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Business"
]
} |
212940482 | pes2o/s2orc | v3-fos-license | Radiation-Induced Structural Changes of Miscanthus Biomass
Efficient pretreatment is a prerequisite for lignocellulosic biomass biorefinery due to the recalcitrant structure of lignocellulose. This study is a first-time investigation into the structural changes of Miscanthus biomass treated with 60Co γ-ray irradiation at different doses up to 1200 kGy. The structural properties of the treated samples have been systematically characterized by FTIR, thermogravimetric analysis (TGA), XRD, gel permeation chromatography (GPC), a laser particle size analyzer, SEM, an atomic force microscope (AFM), and NMR. The results show that irradiation treatment can partially destroy the intra- or inter-molecular hydrogen bonds of the biomass. Irradiation treatment can also reduce the particle size, narrow its distribution range, and increase the specific surface area of the biomass. Noticeably, the thermal stability of the treated biomass decreases with increasing absorbed dose. Owing to these structural changes, the treated biomass can be easily hydrolyzed by cellulases with a high yield of reducing sugars (557.58 mg/g biomass), much higher than that of the untreated sample. We conclude that irradiation treatment can damage the biomass structure, making it a promising strategy for biomass biorefinery in the future.
Introduction
As a C4 perennial grass, Miscanthus is an important potential energy crop in China and Europe. It is characterized by higher adaptability, bio-production, and fiber content, as well as lower ash content and input requirements in comparison to conventional crops [1,2]. The chemical composition of Miscanthus includes cellulose (370-501 g/kg dry solids), hemicellulose (283-g/kg dry solids), and lignin (68.7-127.6 g/kg dry solids) [3]. Biofuels and biochemicals from Miscanthus have been comprehensively studied in the past few years. However, the recalcitrant structure of biomass has become one of the main factors restricting biofuel production [1,4]. Cellulose and hemicellulose constitute the whole biomass and are firmly linked with lignin molecules through covalent and hydrogenic bonds, which make the biomass structure extremely strong and difficult to pretreat [5,6]. To enhance cellulose hydrolysis, the method of biomass pretreatment should efficiently damage the recalcitrant biomass structure [7][8][9]. In recent years, many researchers have reported a number of excellent pretreatment strategies, including milling, alkali, acid, hydrothermal and ammonia explosion methods, as well as biological degradation [1,3,[9][10][11][12].
Recently, more efforts have focused on irradiation pretreatment of biomass, which requires only mild temperatures, needs no water washing, and generates minimal amounts of undesirable inhibitory products [13]. Moreover, irradiation pretreatment can significantly damage the recalcitrant structure of lignocellulosic biomass, especially degrading the cellulose structure, which facilitates the downstream enzymatic saccharification [14]. In our previous studies, the effect of irradiation pretreatment on the structure of microcrystalline cellulose as well as hemicellulose was intensively investigated [15,16], and we demonstrated that irradiation pretreatment followed by enzymatic hydrolysis can result in high sugar yields [17]. Therefore, irradiation pretreatment may be one of the most promising methods to overcome this recalcitrance.
To date, little information has been reported on the degradation mechanism of real biomass treated by irradiation. In this work, irradiation pretreatment was employed to degrade the structure of Miscanthus to improve its downstream enzymatic hydrolysis. The effect of different absorbed doses up to 1200 kGy on recalcitrant structures is illustrated to better understand the degradation mechanism of irradiation treatment.
Materials
Miscanthus floridulus was gathered from a local experimental base at Hunan Agricultural University, China. After being dried in an oven at 60 °C for 8 h, the samples were ground and sieved through a 40-mesh screen (particle size of the sample approx. 425 μm). The feedstock was stored at room temperature for the following experiments.
60Co γ-Ray Irradiation Pretreatment of Miscanthus
The procedures of irradiation treatment and enzymatic hydrolysis were conducted according to the method reported by Liu et al. [14]. Briefly, the irradiation treatment experiments were carried out on a 60Co γ-ray irradiation device at 1.85 × 10^16 Bq in the Hunan Irradiation Center (Liuyang City, China). Approximately 200 g of ground dry Miscanthus biomass was put in a glass bottle and then irradiated at a 60Co γ-radiation source intensity of 9.99 × 10^15 Bq at a dose rate of 2.0 kGy/h. The specific levels of absorbed doses were fixed at 0 (untreated, as blank control), 400, 600, 800, 1000, and 1200 kGy.
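For orientation, the exposure times implied by these settings can be estimated by dividing each nominal absorbed dose by the dose rate. The short sketch below assumes a constant 2.0 kGy/h and ignores source decay, so the figures are only indicative.

```python
# Indicative exposure times for the nominal absorbed doses, assuming a
# constant dose rate of 2.0 kGy/h (source decay and geometry are ignored).
dose_rate_kgy_per_h = 2.0
doses_kgy = [400, 600, 800, 1000, 1200]

for dose in doses_kgy:
    hours = dose / dose_rate_kgy_per_h
    print(f"{dose} kGy -> {hours:.0f} h ({hours / 24:.1f} days)")
```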
Chemical Compositions Analyses
The chemical compositions of Miscanthus, including the content of moisture, cellulose, lignin, and reducing sugar before and after irradiation treatment were determined using the analytical methods provided in the GB/T 2677 standard. The content of hemicellulose was calculated by subtracting cellulose from holocellulose [18]. Each experimental result was repeated in triplicate.
Gel Permeation Chromatography (GPC)
The GPC method was employed to determine the molecular weight (MW) distribution of irradiated Miscanthus, and the detailed procedure of GPC was described in a previous study [16].
X-Ray Diffraction Analysis (XRD)
The crystalline phase of irradiated Miscanthus cellulose was determined with a D/max 2500 diffractometer (Rigaku Corporation, Tokyo, Japan). The conditions were: acceleration voltage 40 kV and current 30 mA. The Cu-Kα wavelength was 1.54 Å, and the scanning range 2θ was from 10° to 40° with a step size of 0.002°. The crystallinity index (CrI) of cellulose was calculated using Equation (1) [18]:

CrI (%) = (I002 − Iam) / I002 × 100 (1)

where CrI is the crystallinity index, I002 is the highest peak intensity at 2θ = 22.6°, and Iam is the diffraction intensity of amorphous cellulose at 2θ = 16.0°.
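As a worked illustration of Equation (1), the sketch below computes CrI from the two diffraction intensities. The intensity values are made-up numbers chosen only to show the calculation and are not data from this study.

```python
# Crystallinity index from the peak-height method of Equation (1):
# CrI (%) = (I002 - Iam) / I002 * 100, with I002 the (002) peak intensity near
# 2-theta = 22.6 deg and Iam the amorphous intensity near 16.0 deg.
# The intensities below are illustrative, not measured values from this work.

def crystallinity_index(i_002, i_am):
    return (i_002 - i_am) / i_002 * 100.0

print(crystallinity_index(i_002=1000.0, i_am=620.0))  # 38.0 %
print(crystallinity_index(i_002=1000.0, i_am=715.0))  # 28.5 %
```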
Solid-State 1H and 13C Nuclear Magnetic Resonance (NMR)
1H and 13C NMR measurements were conducted on a Bruker AVANCE III-HD 600 spectrometer (Bruker, Germany). Their resonance frequencies were 600.1 and 150.9 MHz, respectively. 1H and 13C NMR spectra were both recorded using a 4 mm magic angle spinning (MAS) probe at a spinning rate of 14 kHz. Other conditions of the 1H NMR spectra were a π/2 pulse length of 2.57 μs and a recycle delay of 5 s. Other conditions of the 13C NMR spectra were a contact time of 2 ms and a recycle delay of 5 s.
Fourier Transform Infrared spectroscopy (FTIR)
FTIR measurements of Miscanthus samples were carried out on a Prestige-21 FTIR instrument (Shimadzu, Japan). All spectra were recorded in the range of 4000 to 400 cm−1 with an accumulation of 128 scans at 4 cm−1 resolution. Each sample was mixed with potassium bromide (KBr) powder and pressed into a pellet using a hydraulic press for spectroscopic sample preparation.
Particle size distribution and specific surface Area (SSA)
SSA measurement was performed on a laser particle size analyzer (Mastersizer 2000, Malvern, Worcestershire, U.K.). The sample was uniformly dispersed in sodium hexametaphosphate solution, and then loaded into the analyzer for measurement.
Scanning electron microscope (SEM)
The morphology of Miscanthus was observed by a JSM-6360LV SEM (Japan Electron Optics Laboratory Co., Ltd., Tokyo, Japan). The samples were first sputter-coated with gold prior to observation by SEM [19].
Atomic force microscope (AFM)
The surface morphology of Miscanthus biomass after irradiation treatment was imaged using AFM (Bruker, Germany) in ScanAsyst mode with a scanning frequency of 1.49 Hz. All images of height, amplitude, and phase were simultaneously obtained in tapping mode with an MPP-11100 etched silicon probe, of which the nominal frequency and nominal spring constant were 300 kHz and 40 N/m, respectively [20].
Thermogravimetric analysis (TGA)
The thermal stability of the Miscanthus biomass was determined on a TGA Q50 analyzer (Waters Co., Milford, MA, USA). N2 with a flow rate of 50 mL/min was used as the carrier gas. The temperature ranged from room temperature to 900 °C with a heating rate of 30 °C/min.
Degree of Polymerization (DP)
The DP of the Miscanthus biomass was determined from the viscosity measured with an Ubbelohde viscometer at 25 ± 0.5 °C. An extrapolation method was used to calculate the intrinsic viscosity (ηt) of the sample. The values of ηt and DP were calculated according to Equations (2) and (3), where t0 and ti are the initial and end times (min) for the cupriethylenediamine solution to run through the capillary, respectively, and W is the mass (g) of the sample.
Enzymatic Hydrolysis of the Irradiated Miscanthus
The irradiated Miscanthus biomass was used for enzymatic hydrolysis according to the procedure described by Su et al. [17]. After hydrolysis, the reducing sugars were determined by HPLC. The conditions of HPLC were a temperature of 65 °C, and the mobile phase was 5 mM H2SO4 at a flow rate of 0.6 mL/min [14].
Statistical Analysis
The statistical analyses were performed using SPSS 22.0 (IBM, Armonk, NY, USA), and the final data are expressed as average ± SD.
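A minimal sketch of the reported summary statistic is shown below. It uses Python/NumPy rather than the SPSS 22.0 workflow actually used here, and the triplicate values are invented purely to show the average ± SD format.

```python
import numpy as np

# Illustrative triplicate measurements (not real data from this study),
# summarized as average +/- sample SD, the format reported in the paper.
reducing_sugar_mg_per_g = np.array([540.2, 551.8, 560.5])

mean = reducing_sugar_mg_per_g.mean()
sd = reducing_sugar_mg_per_g.std(ddof=1)  # sample standard deviation
print(f"{mean:.2f} +/- {sd:.2f} mg/g")
```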
Effect of Irradiation Treatment on Chemical Compositions of Miscanthus Biomass
The composition changes of Miscanthus biomass after different absorbed doses are presented in Table 1. As shown in Table 1, the main components of Miscanthus are cellulose, hemicellulose, and lignin, and their total content accounts for 89.7%. In comparison with the compositions of giant reed and Chinese silvergrass reported in the literature, Miscanthus has the same contents of holocellulose and lignin as giant reed, and the total content is higher than that of Chinese silvergrass [18]. After irradiation treatment, the contents of cellulose and hemicellulose of Miscanthus decreased with the increase in absorbed dose up to 1200 kGy, whereas irradiation treatment had little influence on lignin content. That is, the content of lignin before and after irradiation treatment was almost stable. These phenomena were confirmed by other studies [18,21]. The destroyed cellulose structure can improve enzymatic hydrolysis during downstream saccharification [13]. To illustrate why irradiation has a significant effect on cellulose and hemicellulose but little effect on lignin, the biomass structure was comprehensively elucidated by FTIR, TGA, XRD, GPC, a laser particle size analyzer, SEM, AFM, and NMR in this work.

Table 2 shows the changes in particle size, distribution, and specific surface area of Miscanthus biomass after irradiation treatment. As shown in Table 2, the Sauter mean diameter D[3,2] and the volume average particle diameter D[4,3] greatly decreased with the increase in absorbed dose. For the untreated sample, the values of D[3,2] and D[4,3] were 23.808 and 221.005 μm, respectively, higher than those of the irradiated samples. For instance, after irradiation treatment at 1200 kGy, the values of D[3,2] and D[4,3] were 7.357 and 20.099 μm, respectively. In addition, the values of d(0.1), d(0.5), and d(0.9) for the untreated sample were 18.026, 153.465, and 547.317 μm, respectively, whereas for the samples irradiated at 1200 kGy, these values were significantly reduced to 2.772, 18.423, and 39.082 μm, respectively. Therefore, an absorbed dose-dependent particle size of the irradiated biomass was observed in this work. In detail, with the increase in absorbed dose, the particle size distribution moved toward smaller particles. This shows that irradiation treatment can remarkably affect both the particle sizes and their distribution due to the destruction of the stubborn structure caused by irradiation. As shown in Table 2, the SSA of the untreated Miscanthus was 0.252 m2 g−1, whereas for the irradiated samples, the SSA showed an absorbed dose-dependent increase, with a maximum value of 0.815 m2 g−1 at 1200 kGy. These results are in good agreement with our previous works [19,21]. We demonstrated that the increase in SSA enhances the accessibility of enzymes to the cellulose substrate, resulting in improved cellulose digestibility. To further confirm the above-mentioned hypothesis, the irradiated biomass underwent enzymatic hydrolysis to produce reducing sugars, and the results are shown in Figure 1. In comparison with the untreated Miscanthus sample, the sample irradiated at over 600 kGy had a higher level of methylene blue adsorption (596.99 μg/g) and a higher reducing sugar yield (557.58 mg/g). The methylene blue adsorption, an indicator of enzyme accessibility to the substrate, slowly increased from 555.18 to 596.99 μg/g over the tested absorbed doses, while the reducing sugar yield increased from 118.27 to 557.58 mg/g.
A reasonable explanation is that the accessibility of enzymes increased after the biomass was treated by irradiation, indicating irradiation pretreatment is an effective method to improve the efficiency of enzymatic hydrolysis of a biomass. This phenomenon is in good agreement with that reported by Beardmore et al. [22], who demonstrated that SSA improvement of sulfite pulp increases the efficiency of enzymatic hydrolysis.
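For readers unfamiliar with the two mean diameters reported in Table 2, the sketch below shows how a Sauter mean D[3,2] and a volume-weighted mean D[4,3] are obtained from a particle size distribution. The size classes and counts are invented for illustration and do not correspond to the measured data or to the Mastersizer's actual algorithm.

```python
import numpy as np

# D[3,2] = sum(n*d^3)/sum(n*d^2)  (Sauter mean diameter)
# D[4,3] = sum(n*d^4)/sum(n*d^3)  (volume-weighted mean diameter)
# The size classes (um) and counts below are illustrative only.
d = np.array([5.0, 20.0, 80.0, 200.0])   # particle diameters, um
n = np.array([5000, 800, 50, 5])         # number of particles per class

d32 = np.sum(n * d**3) / np.sum(n * d**2)
d43 = np.sum(n * d**4) / np.sum(n * d**3)
print(f"D[3,2] = {d32:.1f} um, D[4,3] = {d43:.1f} um")
```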
DP, CrI, and Molecular Weight Distribution
The effect of irradiation treatment on molecular weight distribution, DP, and CrI of Miscanthus biomass was investigated, and the results are shown in Table 3. When the biomass was irradiated from 0 to 1200 kGy, the DP values decreased from 366,225 to 11,354, indicating that irradiation treatment can destroy the biomass structure. Moreover, the values for Mw(the weight average molecular mass) and Mn (the number average molecular mass) decreased with the increase in absorbed doses. Briefly, the Mw and Mn values of the untreated samples (0 kGy) were 542,342 and 45,544 Da, respectively, whereas for the irradiated samples at 1200 kGy, the values of Mw and Mn decreased to 150,821 and 30,237 Da, respectively. In comparison with the untreated sample, the value of Mw/Mn decreased after irradiation treatment. However, the value of Mw/Mn showed no absorbed dose dependence due to the similar decrease variance in Mw and Mn values. We found that the Miscanthus macromolecular structure was degraded after irradiation treatment [23]. These structural changes in biomass are helpful for enzymatic hydrolysis [24]. Table 3 shows that the CrI values of irradiated samples showed a dose-dependent decreasing tendency, which also improved the enzymatic hydrolysis efficiency. The change in CrI values demonstrated that irradiation treatment may damage the crystalline structure of cellulose, which was confirmed by TGA analysis in the following experiment. Note: Mw, the weight average molecular mass; Mn, the number average molecular mass; Mw/Mn, the polydispersity index; DP, the degree of polymerization; CrI , the crystallinity index.
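The molecular-mass averages in Table 3 relate to an underlying chain-length distribution: Mn is the number-average, Mw the weight-average, and Mw/Mn the polydispersity index. The sketch below illustrates the standard definitions with invented chain masses and counts, not data from the GPC measurements.

```python
import numpy as np

# Number-average and weight-average molecular masses from a discrete
# distribution of chains: Mn = sum(Ni*Mi)/sum(Ni), Mw = sum(Ni*Mi^2)/sum(Ni*Mi).
# The molar masses (Da) and chain counts below are illustrative only.
M = np.array([2.0e4, 1.0e5, 5.0e5, 1.0e6])   # molar mass of each fraction, Da
N = np.array([500, 300, 150, 50])            # number of chains in each fraction

Mn = np.sum(N * M) / np.sum(N)
Mw = np.sum(N * M**2) / np.sum(N * M)
print(f"Mn = {Mn:,.0f} Da, Mw = {Mw:,.0f} Da, Mw/Mn = {Mw / Mn:.2f}")
```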
TGA Measurement
The TGA thermal stability of biomass usually has two obvious narrow peaks: one is the degradation peak of hemicellulose, the other is the peak of cellulose. However, the TGA peak of lignin is very broad and partly overlaps hemicellulose and cellulose [25]. The TGA profiles of the irradiated Miscanthus biomass are depicted in Figure 2. There are two peaks (Tmax L and Tmax H) observed from TGA curves at doses of 400, 600, and 800 kGy, but only a single peak (Tmax L) at 1000 and 1200 kGy. The disappearance of the high-temperature peak (Tmax H) was caused by the side chains cleavage of biomass under the higher absorbed doses. In addition, a small shift in Tmax L value to a lower temperature (Table 3) occurred when absorbed dose increased up to 1200 kGy, indicating the backbone of the biomass was degraded by the irradiation treatment. The phenomenon does not align with the CrI values of cellulose in Table 3. Figure 2. Effect of absorbed doses on the thermal stability profiles of Miscanthus biomass.
XRD Analysis
XRD measurements were conducted to assess the crystalline phase of Miscanthus biomass under different absorbed doses. As observed in Figure 3, the diffraction peaks at the ~16° and 22° lattices remained stable, with no new lattice generated after irradiation treatment. As described in Table 3, the CrI values showed an absorbed dose-dependent decrease, indicating that the crystalline region of cellulose was destroyed to some extent. The CrI values of irradiated Miscanthus declined slightly from 37.86% to 28.43% as the absorbed dose increased from 0 to 1200 kGy (Table 3). The effect of irradiation treatment on the cellulose crystalline phase depends on the absorbed dose and the biomass species [16,26]. Huang et al. [26] demonstrated that the cleavage of the crystalline phase by irradiation treatment contributes to the enzymatic hydrolysis of the biomass.
FTIR Analysis
The FTIR spectra of Miscanthus biomass after irradiation treatment were investigated and are shown in Figure 4. Three characteristic peaks of holocellulose are observed in Figure 4: 1730 cm−1 is ascribed to the C=O carbonyl group, 1372 cm−1 to -C-C-, and 1237 cm−1 to the C-O- of the acetyl group. With the increase in absorbed dose, the intensity of these three peaks increased, indicating that the degree of structural damage of the Miscanthus fibers increases with the absorbed dose. The changes in the typical peaks of the guaiacyl and syringyl lignin vibrations at 1230-1515 cm−1 also showed absorbed-dose dependence. Simultaneously, the peak at 910 cm−1, ascribed to the C-O-C group in the epoxide guaiacyl lignin, was so small that the change in its height was not significant, which is in good agreement with the data of wheat straw treated by irradiation and diluted acid [27].

1H and 13C NMR Analysis
1H and 13C NMR analyses were carried out to evaluate the chemical structural changes of Miscanthus biomass after irradiation treatment, and the data are shown in Figure 5. For the untreated sample (0 kGy), the chemical shift at 5 ppm in the 1H NMR profiles was ascribed to the proton signal of hemicellulose, whereas in the cases of irradiated samples, the 1H proton signal peak was apparently enlarged and shifted to 5.2 ppm with the increase in absorbed dose from 400 to 1200 kGy. From the 13C NMR profiles, the peak intensities of the glucosyl unit at 21.3, 74.6, and 88.7 ppm were reduced with the increase in absorbed dose. However, the intensity of the peak at 56.6 ppm increased with the increase in the absorbed dose. These small variations in the carbon chemical shifts of irradiated samples may indicate intermolecular C-C bond cleavage in the backbone structure. Combining the FTIR and the 1H and 13C NMR profiles, we conclude that inter-molecular hydrogen bond and carbon-carbon bond cleavage of the biomass is caused by high absorbed doses [13,19].
SEM Analysis
The morphology of Miscanthus biomass after irradiation treatment was investigated by SEM, and the images are presented in Figure 6. The untreated Miscanthus (0 kGy) showed a smooth, compact, and ordered surface. After irradiation treatment, the surface morphology of the samples displayed many small fragments, and the structure was rough and irregular. The degree of structural damage grew stronger with the increase in absorbed dose up to 1200 kGy. This phenomenon shows that irradiation treatment can effectively destroy the tight structure of the Miscanthus biomass, which has been confirmed by many researchers in the literature [18,28,29]. The morphology change of cellulose by irradiation treatment will improve the accessibility of cellulase to the biomass, leading to high enzymatic digestibility [14,30].

AFM Analysis
The AFM images of Miscanthus biomass after irradiation treatment are presented in Figure 7. In the case of the untreated sample (0 kGy), the AFM image showed an intact and uniform surface, and the affinity of the AFM probe to the hydrophilic region was 46.53 nm (Figure 7a). For the irradiation-treated samples, the AFM images appeared non-uniform and a spherical surface was observed in the amplitude phase, probably attributed to the exposure of cellulose (Figures 7b-f). The affinity of the AFM probe to the hydrophilic regions increased up to 243.67 nm. The light color of the hydrophilic regions indicates a remarkable change in the phase image. A similar phenomenon was reported by other researchers in the literature [20,31]. Chundawat et al. [31] reported that the AFM probe adhered more keenly to hydrophilic areas, resulting in a greater change in the phase of the sample.
Conclusions
In summary, these findings demonstrate that γ-irradiation treatment can decrease crystalline cellulose, particle size, and DP while increasing the SSA of a biomass, resulting in the improvement in the cellulase accessibility to cellulose. Furthermore, irradiation treatment can substantially reduce structural and thermal stability by damaging the recalcitrant biomass structure. These structural changes in irradiated biomass will contribute to the further reduction of sugar release. The findings in this work provide an insight into the degradation mechanism of biomass by irradiation treatment, which is helpful for biomass biorefinery in the future. However, the practical application of irradiation treatment should be still discussed for its economic feasibility, which is under consideration in our laboratory. | 2020-02-13T09:20:08.524Z | 2020-02-07T00:00:00.000 | {
"year": 2020,
"sha1": "84b94d5df171a3c88bf912c57b22e81b3a1d1637",
"oa_license": "CCBY",
"oa_url": "https://res.mdpi.com/d_attachment/applsci/applsci-10-01130/article_deploy/applsci-10-01130-v2.pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "8b95a1b1a2b8512278ce5850fc7715b3b28efd9c",
"s2fieldsofstudy": [
"Environmental Science",
"Materials Science"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
220373743 | pes2o/s2orc | v3-fos-license | Brain Hepcidin Suppresses Major Pathologies in Experimental Parkinsonism
Summary Despite intensive research on Parkinson disease (PD) for decades, this common neurodegenerative disease remains incurable. We hypothesize that abnormal iron accumulation is a common thread underlying the emergence of the hallmarks of PD, namely mitochondrial dysfunction and α-synuclein accumulation. We investigated the powerful action of the main iron regulator hepcidin in the brain. In both the rotenone and 6-hydroxydopamine models of PD, overexpression of hepcidin by means of a virus-based strategy prevented dopamine neuronal loss and suppressed major pathologies of Parkinsonism as well as motor deficits. Hepcidin protected rotenone-induced mitochondrial deficits by reducing cellular and mitochondrial iron accumulation. In addition, hepcidin decreased α-synuclein accumulation and promoted clearance of α-synuclein through decreasing iron content that leads to activation of autophagy. Our results not only pinpoint a critical role of iron-overload in the pathogenesis of PD but also demonstrate that targeting brain iron levels through hepcidin is a promising therapeutic direction.
Hepcidin represses iron accumulation and mitochondrial deficits in Parkinsonism
Hepcidin promotes clearance of a-synuclein via autophagy activation in PD model
Manipulating brain hepcidin level has potential application in treating PD
INTRODUCTION
Parkinson disease (PD) is a progressive and incurable neurodegenerative disease, affecting 1%-2% of people older than 60 years (Hauser and Hastings, 2013). Patients suffering from PD exhibit characteristic motor dysfunctions, including rigidity, akinesia, tremor, and postural instability. The degeneration of dopaminergic neurons in the substantia nigra pars compacta (SNc) is the major cause of PD motor dysfunction. Currently, a-synuclein accumulation and mitochondrial dysfunction are the two most common features and proposed causes of PD pathogenesis (Haelterman et al., 2014; Grünewald et al., 2019; Charvin et al., 2018). a-Synuclein is the major component of Lewy bodies and Lewy neurites (Goedert, 2015). It is believed that aggregation of a-synuclein, which can severely impact proteasomes, lysosomes, mitochondria, the endoplasmic reticulum (ER), and the cell membrane, triggers the onset of PD (Dehay et al., 2016; Kalia et al., 2013). As for mitochondrial dysfunction, findings from mitochondrial toxins, genetic studies, and patients suggest that it can be one of the general and primary causes of PD (Haelterman et al., 2014). In fact, a-synuclein accumulation and mitochondrial dysfunction might be linked during PD pathogenesis (Haelterman et al., 2014).
Iron accumulation is another pathological hallmark of PD. It has been reported by different groups that with disease progression, iron is gradually accumulated in the substantia nigra (SN) in PD patients (Ahmadi et al., 2020;Dexter et al., 1987;Ghassaban et al., 2019;Hirsch et al., 1991;Langley et al., 2019;Pyatigorskaya et al., 2015;Thomas et al., 2020). Consistent with this observation, in 6-hydroxydopamine (6-OHDA)-and 1methyl-4-phenyl-1,2,3,6-tetrahydropyridine (MPTP)-induced PD animal models, iron levels are elevated in the SN concurrently with dopaminergic neuron loss, whereas iron chelator administration or iron-related gene rectification can ameliorate this impairment (Kaur et al., 2003;Shachar et al., 2004;Elkouzi et al., 2019;Crichton et al., 2019;Jiang et al., 2019;Nuñ ez and Chana-Cuevas, 2018). Iron accumulation and neurodegeneration are associated with altered expressions of iron-transport proteins, such as divalent metal transporter 1 (DMT1) with iron responsive element (DMT1+) Salazar et al., 2008;Urrutia et al., 2017a;Song et al., 2007;Chen et al., 2015). Increased free iron level causes neuronal damage and death not only via the Fenton reaction (Hadzhieva et al., 2014), which generates hydroxyl radicals that oxidize proteins, lipids, and DNA, but also affects the formation of a-synuclein oligomers in the presence of Fe 3+ through a direct interaction (Levin et al., 2011). We speculate that iron accumulation, especially in free iron form, is a common thread connecting both mitochondrial dysfunction and a-synuclein aggregation in PD and that strategies that manipulate the level of iron would be highly beneficial. However, there is currently no applicable iron-modifying treatment for PD. Iron chelators have been investigated for clinical applications for several decades without success, although several new iron chelators seem to be promising in clinical trials recently (Devos et al., 2014;Moreau et al., 2018;Singh et al., 2019).
Hepcidin is a conserved 25-amino acid peptide that plays a key role in regulating iron metabolism (Hentze et al., 2010). Hepcidin is mainly produced in the liver to maintain iron homeostasis by regulating iron absorption in the intestine, iron recycling in the spleen, iron storage in the liver, and its utilization in the bone marrow (Hentze et al., 2010). Intriguingly, hepcidin is also expressed in the brain. Our previous work has demonstrated that hepcidin decreases iron uptake and release via decreasing the levels of DMT1+, DMT1 without IRE (DMT1À), transferrin receptor (TfR), and Fpn, in various types of cultured cells including astrocytes (Du et al., 2011), macrophages (Du et al., 2012), and microvascular endothelial cells . In cultured neurons, hepcidin decreases DMT1, TfR, Fpn, ferritin-L, and ferritin-H contents (Zhou et al., 2017;. Recently, we found that hepcidin reduces brain iron in iron-overloaded rats and suppresses transport of transferrin-bound iron from the periphery into the brain . Moreover, iron accumulation and oxidative stress are suppressed in the SN in iron-overloaded rats by hepcidin (Gong et al., 2016). However, whether hepcidin plays any role in central nervous system disease including PD is still to be investigated. Given its prominent iron-regulatory effect on brain iron accumulation, we hypothesize that hepcidin overexpression could be a promising strategy for treating PD.
In this study, we determined whether overexpression of hepcidin could rectify Parkinsonian symptoms and pathogenic changes in 6-OHDA-and rotenone-induced models of PD. We then investigated whether the beneficial effects of hepcidin are a result of amelioration of iron dyshomeostasis and other cellular dysfunctions in neurons. We found that hepcidin protects against rotenone-induced mitochondrial deficits by suppressing cellular and mitochondrial iron accumulation. In addition, it was revealed that hepcidin mediates a-synuclein clearance through decreasing iron accumulation and subsequent autophagy activation.
Therapeutic Effects of Hepcidin in PD Animal Models
We studied the potential therapeutic effects of hepcidin in two well-established rat models of PD, namely chronic IP administration of rotenone and acute unilateral 6-OHDA injections into the medial forebrain bundle. Although the neurotoxicology of these two models that lead to Parkinsonian symptoms are different (Blesa et al., 2012), iron accumulation is a shared feature (Mastroberardino et al., 2009). To assess the effect of hepcidin, we upregulated the expression of endogenous hepcidin in the brain by construction and injection of adenovirus that carries the hepcidin gene (Ad-hepcidin) . In our previous study, we reported that after ICV injection of Ad-hepcidin, there were similar fold of increase of hepcidin mRNA levels in the cortex, hippocampus, and SN (Gong et al., 2016). The experimental paradigms of these sets of experiments are shown in Figures 1A and 1B. Overexpression of hepcidin in these two models by injection of Ad-hepcidin was confirmed by immunohistochemistry (Figures S1A and S1B) and ELISA (Figures S1C and S1D). Increased intensity of hepcidin was detected in neurons and glia in the SN ( Figure S1A). In the rotenone model, we tested the effects of Ad-hepcidin administration on motor ability of the rat using the catalepsy tests (grid test and bar test), stepping test, rotarod test, and ladder rung walking test 7 weeks after rotenone injection. Compared with the healthy control group, we found that rotenone treatment prolonged first-step latency in the grid test (p < 0.01) and latency to leave the bar in the bar test (p < 0.05). Injection of Ad-hepcidin but not the blank AAV (Ad-blank) reversed the prolonged latency in the grid test (p < 0.05, compared with rotenone group) and the bar test (p < 0.05, compared with rotenone group) ( Figures 1C and 1D). In the stepping test, the numbers of adjusting steps of both the left and right paws were significantly reduced in rotenone-injected rats (p < 0.01, compared with the respective control). Similarly, Ad-hepcidin, but not Ad-blank, reversed the reduction in adjusting steps caused by rotenone (p < 0.01, compared with rotenone group) ( Figures 1E and 1F). The beneficial effect of hepcidin overexpression in ameliorating Parkinsonian motor deficits was confirmed by the rotarod test and the ladder rung walking test ( Figures S2A and S2B). Ad-hepcidin was also administered to normal rats without rotenone injection, and the motor ability of these rats did not differ from controls (Figures S3A and S3B). Consistent with these behavioral findings, immunohistochemical staining for tyrosine hydroxylase (TH) at the termination of experiments showed that rotenone caused severe reduction in TH immunoreactivity in the striatum (Figure S2C) and the number of TH-positive neurons in the SN ( Figures 1G and 1H), which could be rescued by Ad-hepcidin, but not Ad-blank, treatment. 2 iScience 23, 101284, July 24, 2020 iScience Article In 6-OHDA-injected rats in which the motor function was affected unilaterally, we assessed the effects of hepcidin based on the forelimb cylinder test 2 weeks after 6-OHDA administration and the rotational behavior induced by R-(À)-apomorphine hydrochloride 4 weeks after 6-OHDA administration. Consistent with the rotenone model, upregulation of hepcidin rectified motor deficits in 6-OHDA-lesioned rats (Figures 1I and 1J). 
When compared with the unlesioned side, TH neurons were almost completely depleted in the SN of the lesioned side as revealed by TH-immunostaining (p < 0.001), whereas Ad-hepcidin rescued TH neurons significantly (p < 0.01) ( Figures S2D and S2E).
Hepcidin Rectifies Iron Accumulation in Models of PD
To investigate the relationship between iron accumulation and hepcidin in the PD models, we determined the level of iron in the SN in rotenone-and 6-OHDA-treated rats and examined the effects of hepcidin. In rotenonetreated rats, neuronal loss was associated with deposition of iron within the SNc, where TH-positive dopaminergic neurons are located ( Figure 2A). The morphology of the iron-stained spots is analogous to that of TH neurons, suggesting iron accumulation in the dopaminergic neurons in SNc. Concomitant with the effect in rescuing TH neuron loss, ICV injection of Ad-hepcidin, but not Ad-blank, suppressed iron accumulation in the SNc (Figure 2A). Consistently, the total iron level in the SN as measured by GFAAS was higher in rotenone-injected rats compared with control rats (p < 0.05), whereas Ad-hepcidin rather than Ad-blank rectified the abnormal iron levels (p < 0.01, compared with rotenone group, Figure 2B). Therefore, the protection conferred by hepcidin overexpression is likely related to reduced iron accumulation in the cell body of dopaminergic neurons in SNc.
As iron accumulation and neurodegeneration is associated with altered expressions of iron-transport proteins in several models of PD, we investigated whether the iron-suppressive effect of hepcidin is linked to . Injection of rotenone (IP) induced a significant increase of first-step and descent latencies in grid and bar test, respectively, which was reduced by ICV Ad-hepcidin injection (C and D). In the stepping test, the reduction of adjusting steps induced by rotenone was also improved by Ad-hepcidin, for both the left and right paw (E and F). Tyrosine hydroxylase-positive (TH + ) neurons in the SNc were significantly reduced in count in rotenone treatment. Ad-hepcidin injection ameliorated TH + neuron loss. Quantitative analysis showed that the effect of Adhepcidin was statistically significant. Data were presented as percentage of control; the scale bar represents 200 mm (G and H). Ad-hepcidin injection into SN suppressed apomorphine-induced contralateral rotation in 6-OHDA model of PD (I). In the cylinder test, Ad-hepcidin ameliorated 6-OHDA-induced forelimb use asymmetry (J). *p < 0.05, **p < 0.01, ***p < 0.001, one-way ANOVA for C-G, two-tailed t test for I, J (n = 5-15 in each group); error bars, S.E.M.
iScience 23, 101284, July 24, 2020 3 iScience Article changes in the levels of iron-transport proteins in the SN. First, by western blot analysis, we found that rotenone-injected rats expressed much higher levels of DMT1+ in the SN (p < 0.05). Ad-hepcidin injection drastically decreased the DMT1+ levels (p < 0.01, compared with rotenone group, Figure 2C), but rotenone and Ad-hepcidin did not affect levels of DMT1À ( Figure 2D). At the same time, the level of TfR was also higher in rotenone-injected rats (p < 0.05). Ad-hepcidin rather than Ad-blank downregulated TfR expression in rotenone-injected rats to a level that was comparable to that of the control (p < 0.05, Figure 2E). In addition, Adhepcidin, but not Ad-blank, led to a modest but significant decrease in the level of Fpn, the only iron export protein, in the SN (p < 0.05), although rotenone treatment did not affect Fpn level ( Figure 2F). Thus, hepcidin could rectify the abnormal expressions of iron-import proteins associated with rotenone-induced Parkinsonism, with a slight suppression of iron export protein, contributing to iron suppression in SN. Ad-hepcidin was also administered to normal rats without rotenone injection, and DMT1+, TfR, and Fpn were found to be decreased compared with controls ( Figures S3C-S3F).
Similarly, Ad-hepcidin effectively suppressed the iron deposition in the SN induced by 6-OHDA (Figure S4A). The iron repressive effect of hepcidin in 6-OHDA model was confirmed by measurement of iron content in the SN ( Figure S4B). As for iron-transport proteins, 6-OHDA induced DMT1+ overexpression in the SN (p < 0.01), whereas Ad-hepcidin suppressed this effect (p < 0.01) ( Figure S4C). Consistent with the rotenone model, 6-OHDA and hepcidin did not significantly affect DMT1À levels in the SN (Figure S4D). As for TfR, 6-OHDA induced its overexpression in the SN (p < 0.05), whereas Ad-hepcidin suppressed it (p < 0.05) ( Figure S4E). Fpn was not significantly altered in the SN by 6-OHDA lesion, whereas . Rotenone injection induced a significant iron content elevation in the SN, an effect that was blocked by Ad-hepcidin injection (B). DMT1+ and TfR were overexpressed in the SN in rotenone-injected rats. Ad-hepcidin suppressed rotenone-induced overexpression of DMT1+ and TfR (C and E). The levels of DMT1Àwere not affected by rotenone or Ad-hepcidin (D). Adhepcidin, but not rotenone, reduced the level of Fpn in the SN (F). *p < 0.05, **p < 0.01, one-way ANOVA (n = 5-10 for each group); error bars, S.E.M.
We provided further evidence to suggest that the site of these changes were the dopaminergic neurons in the SNc. Thus, double staining of TH and iron transport proteins revealed overexpression of DMT1+ and TfR in the neuronal somata of TH-positive neurons in the SNc in rotenone-treated rats, which was suppressed by Ad-hepcidin but not Ad-blank ( Figures S5A and S5B). On the other hand, in rotenone-treated rats, staining intensity of DMT1Àwas increased in the nucleus of TH + and other neurons, whereas Ad-hepcidin, but not Ad-blank, inhibited this change ( Figure S5C). Also, Ad-hepcidin slightly decreased the fluorescence intensity of Fpn in the SN and TH + neurons ( Figure S5D). All together, these results show that hepcidin specifically rectifies the anomalous expression of iron-transport proteins in dopaminergic neurons in the SN.
Hepcidin Ameliorates Mitochondrial Deficits in Rotenone-Induced Model of PD
Rotenone is well known to interfere with electron transport process in the mitochondria and mitochondrial function (Ahmed and Krishnamoorthy, 1992;Darrouzet et al., 1998;Gutman et al., 1970;Grivennikova et al., 1997;Ramsay and Singer, 1992). Hence, we examined the functional integrity of mitochondria in the PD models and the effects of hepcidin treatment. We first confirmed by electron microscopy that, under rotenone treatment, the mitochondria of neurons within the SNc appeared to be damaged in that the cristae were fragmented. Ad-hepcidin, but not Ad-blank, injection ameliorated such rotenone-induced mitochondrial malformation ( Figure 3A). Moreover, in this model, rotenone induced inhibition of complex I activity (p < 0.05) and depleted ATP (p < 0.05) in the SN. These parameters were also restored by Ad-hepcidin but not Ad-blank (p < 0.05, compared with rotenone-treated group; Figures 3B and 3C). . Ad-hepcidin rescued rotenone-induced mitochondrial complex I activity inhibition and ATP depletion in the SN. The data of Complex I activity were normalized to control, and the ATP levels were presented by arbitrary fluorescence intensity (B and C). Ad-hepcidin rescued rotenone-induced mitochondrial complex I activity inhibition, membrane potential decrease, and ROS overproduction in isolated mitochondria in rat brain. The data of complex I activity and membrane potential were normalized to controls (D-F). (G) Ad-hepcidin supressed rotenone-induced iron increase in isolated mitochondria in rat brain. *p < 0.05, **p < 0.01, one-way ANOVA (n = 5-10 in each group); error bars, S.E.M.
To further pinpoint the site of action of hepcidin with respect to its protective function on the mitochondria, we isolated mitochondria directly from the brain and assessed different parameters. We found that in rotenone-treated animals, mitochondrial complex I activity was severely suppressed and the inner membrane potential significantly decreased (p < 0.05). On the other hand, ROS production was elevated (p < 0.05). These effects were largely rectified by Ad-hepcidin, but not Ad-blank, administration (p < 0.05, compared with rotenone group; Figures 3D-3F). Interestingly, in rotenone-treated rats, iron content in the isolated mitochondria was increased (p < 0.05), which was suppressed by Ad-hepcidin but not Ad-blank (p < 0.001, compared with rotenone group; Figure 3G), implicating a relationship between mitochondrial iron level and the protective effect of hepcidin on mitochondrial functions.
Hepcidin Rectifies Mitochondrial Deficits via Prevention of Iron Accumulation
We hypothesized that cellular and mitochondrial free iron accumulation is a prerequisite of severe mitochondria deficiency in PD, and the iron-repressive function of hepcidin accounts for its mitochondria protective effect. We studied the correlation between intracellular free ferrous iron level and mitochondrial activity by co-staining with Calcein-AM and TMRM in rotenone-treated SH-SY5Y cells. Calcein-AM intensity (green fluorescence) was negatively correlated to divalent ion level (mainly iron), whereas TMRM (red fluorescence) was positively correlated to mitochondrial membrane potential. In our experiments, rotenone treatment led to diminished Calcein-AM and TMRM fluorescence intensity, indicating iron accumulation and mitochondrial functional deficiency in SH-SY5Y cells, whereas co-treatment with hepcidin peptide (100nM) protected rotenone-treated cells from iron accumulation and mitochondrial damage. Importantly, co-treatment with FeSO 4 (5mM) blocked the effect of hepcidin on both decreasing iron level and protecting mitochondrial activity in these cells ( Figure 4A). Thus, when the iron-suppressive effect of hepcidin was interfered by iron treatment, hepcidin had almost no effect on mitochondrial function.
In another set of experiments, we made use of RPA, a red fluorescent dye that can enter mitochondria, and the fluorescence intensity of which is negatively correlated with mitochondrial free iron level (Petrat et al., 2002;Rauen et al., 2003). We also made use of Rh-123, a green fluorescent dye that can enter mitochondria, and the fluorescent intensity of which is negatively correlated with mitochondrial membrane potential. In rotenone-treated cells, mitochondrial ferrous iron levels were raised, indicated by decreased red fluorescence intensity. Hepcidin peptide effectively prevented mitochondrial free iron accumulation, which was blocked by FeSO 4 ( Figure 4B). Under this condition, hepcidin also failed to rescue rotenone-induced mitochondrial membrane potential decrease. These findings further support that hepcidin leads to mitochondrial iron decrease and protects mitochondria.
The above observations were verified by quantitative assays. In addition to signal changes of Calcein-AM (Figure 4C) and RPA intensity (Figure 4D) for semi-quantifying free ferrous iron levels in the cells and mitochondria, respectively, JC-1 was used as an indicator of mitochondrial membrane potential (Figure 4E). Our results confirmed that hepcidin is effective in protecting mitochondria by decreasing cellular and mitochondrial free iron accumulation.
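To illustrate how such a ratiometric readout can be summarized, the short sketch below computes the JC-1 red/green ratio and expresses it relative to the control group. The intensity values are invented for illustration; this is not the analysis pipeline used in the study.

```python
# Illustrative JC-1 quantification: mitochondrial membrane potential is read out
# as the red/green fluorescence ratio and expressed relative to the control group.
# All intensity values below are invented for illustration.
groups = {
    "control":              (820.0, 410.0),   # (red, green) mean intensities
    "rotenone":             (450.0, 620.0),
    "rotenone + hepcidin":  (760.0, 430.0),
}

control_ratio = groups["control"][0] / groups["control"][1]
for name, (red, green) in groups.items():
    ratio = red / green
    print(f"{name}: red/green = {ratio:.2f}, normalized to control = {ratio / control_ratio:.2f}")
```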
Hepcidin Suppresses a-Synuclein Accumulation by Reducing Iron Accumulation
A distinct feature of the rotenone model of PD is the presence of a-synuclein accumulation in vulnerable areas of the brain, recapitulating a major pathological hallmark of human Parkinsonism. To test whether hepcidin can protect against a-synuclein accumulation in rotenone-induced rat model of PD, brain sections were co-stained for a-synuclein and TH. As shown in Figure 5A, in rotenone-treated rats, a-synuclein was expressed in the SNc. TH co-staining showed strong expression of a-synuclein in TH-positive neurons.
When examining a-synuclein in higher magnification, we found clear discrete spots of immunofluorescence, suggesting aggregation of this protein within the TH neurons. Treatment with Ad-hepcidin, but not Ad-blank, suppressed rotenone-induced a-synuclein accumulation in TH + neurons.
Different forms of a-synuclein possess divergent solubility. The Triton-soluble fraction represents normal a-synuclein. Some types of small a-synuclein oligomers are Triton-insoluble but SDS-soluble (Kostka et al., 2008), whereas many types of neurotoxic- and pathology-related a-synuclein dimers or oligomers are SDS-resistant (Levin et al., 2011; Kostka et al., 2008; Cappai et al., 2005; Grassi et al., 2018). An increased level of a-synuclein in the SDS-resistant urea-soluble fraction also suggests aggregation of a-synuclein, which is a highly disease-associated event in PD patients and transgenic models (Lee et al., 2002; Kahle et al., 2001). Thereafter, proteins in the SN were isolated by Triton lysate, followed by SDS and urea. The levels
To establish a relationship between a-synuclein and iron accumulation under the effect of rotenone, we exploited the fact that a-synuclein accumulation can be induced in SH-SY5Y cells after prolonged rotenone treatment and that the iron-regulatory effect of hepcidin can be interfered by manipulation of iron content. Thus, in normal SH-SY5Y cells in which treatment with 20 nM of rotenone for more than 3 days caused a-synuclein accumulation (p < 0.01) ( Figure 5F), a significant rise in cellular iron content was found in parallel (p < 0.05) ( Figure 5G). Co-treatment with hepcidin suppressed the increases in both a-synuclein and iron content (p < 0.05). Consistent with the experiments on mitochondria, supplementation of exogenous FeSO 4 A B C D E Figure 4. Hepcidin Ameliorated Rotenone-Induced Mitochondrial Damage via Suppressing Iron Accumulation SH-SY5Y cells were treated with rotenone with or without hepcidin peptide for 24 h. Cells were co-stained with calcein-AM (green) and TMRM (red). Hepcidin reversed both rotenone-induced iron increase and mitochondrial membrane potential decrease, whereas FeSO 4 (5mM) treatment blocked the effect of hepcidin on cellular iron level and mitochondrial activity. The scale bar represents 5 mm (A). The SH-SY5Y cells were co-stained with RPA (red) and Rh123 (green). Rotenone treatment caused mitochondrial free ferrous iron accumulation, indicated by decreased RPA signal, whereas hepcidin suppressed these effects. FeSO 4 blocked the effect of hepcidin on mitochondrial iron level and activity in rotenone-treated cells. The scale bar represents 5 mm (B). The effect of iron treatment on intracellular ferrous iron levels was quantified using the calcein-AM method. FeSO 4 treatment blocked the effect of hepcidin on iron levels (C). The effect of iron treatment on mitochondrial free ferrous iron levels was measured using RPA. FeSO 4 treatment blocked the effect of hepcidin on mitochondrial iron levels (D). The cells were incubated with JC-1 dye, and the mitochondrial membrane potential was indicated by the ratio of red/green fluorescence intensity. Hepcidin suppressed rotenone-induced mitochondrial membrane potential decrease, whereas FeSO 4 blocked the effect of hepcidin on mitochondria. The data were normalized to control (E). *p < 0.05, **p < 0.01, one-way ANOVA (n = 5-10 in each group); error bars, S.E.M.
Hepcidin Mediates a-Synuclein Clearance via Iron Accumulation Suppression and Subsequent Activation of Autophagy
Because the accumulation, oligomerization, and aggregation of a-synuclein is generally regarded as toxic and could play a key role in the neurodegenerative process occurring in PD, elucidating how hepcidin can promote the clearance of this protein is an important question to address. It is known that some forms of SDS-resistant a-synuclein derives from incomplete autophagic degradation (Grassi et al., 2018), and the oligomerized and aggregated forms of a-synuclein are mainly degraded by the autophagy-lysosome system (Ebrahimi-Fakhari et al., 2011. Therefore, we asked whether rotenone and hepcidin exert any effect on the autophagy process. The levels of two autophagy markers, P62 and LC3B, were quantified in both the in vivo and in vitro rotenone models. In rotenone-treated rats, P62 was increased and the ratio of LC3B II/I was decreased in the SN (p < 0.05, Figures 6A and 6B), indicating suppression of autophagy. Ad-hepcidin, rather than Ad-blank, reduced P62 and raised the ratio of LC3BII/I, implying that hepcidin could induce autophagy (p < 0.05, compared with rotenone group, Figures 6A and 6B). Consistently, in SH-SY5Y cells, rotenone treatment induced an increase in P62 and a decrease in the ratio of LC3BII/I iScience Article (p < 0.05). These changes were reversed by hepcidin peptide (100nM) (p < 0.05 compared with rotenone group, Figures 6C and 6D). The cells were treated with bafilomycin A1 (Baf A1) (0.2 mM), an inhibitor of the late phase of autophagy, for 8 h before harvest, with increased P62 levels and LC3BII/I ratio in all groups observed. At the same time, Baf A1 failed to block rotenone-induced decrease of LC3BII/I ratio ( Figures 6E and 6F), implying rotenone-induced autophagy initiation inhibition. In rotenone-treated cells, Baf A1 blocked hepcidin-mediated decrease of P62 but not the increase of LC3BII/I ratio (p < 0.05, compared with rotenone + hepcidin group), suggesting an activation of autophagy initiation by hepcidin ( Figures 6E and 6F). In addition, the fact that addition of FeSO4 could abolish the effect of hepcidin on rotenone-induced changes in these markers (p < 0.05, compared with rotenone + hepcidin group) suggests the involvement of hepcidin-mediated reduction in iron level in the process ( Figures 6C and 6D), that is, iron accumulation could be a cause of autophagy inhibition.
To elucidate whether hepcidin-mediated autophagy activation is responsible for a-synuclein clearance, we utilized a rotenone-induced in vitro model of PD. Both autophagy inhibitors, chloroquine (CQ) (10mM) and 3-Methyladenine (3MA) (5mM), blocked the effect of hepcidin in reversing rotenone-induced a-synuclein accumulation (p < 0.05, compared with rotenone + hepcidin group, Figure 6G). We also did parallel experiments on the proteasome system. As expected, hepcidin failed to ameliorate rotenone-induced inhibition of proteasome activity, and its mediation of a-synuclein clearance could not be blocked by proteasome inhibitor MG132 (2.5mM) in rotenone-treated cells (data not shown). Taken all together, we demonstrate Figure 6. Hepcidin Mediated A-syn Clearance via Autophagy Activation Rotenone treatment (IP) caused P62 accumulation and LC3BII/I ratio reduction in the SN, whereas Ad-hepcidin suppressed these effects (A and B). In SH-SY5Y cells, hepcidin suppressed rotenone-induced P62 accumulation and decrease of LC3BII/I ratio, an effect that was blocked by FeSO 4 (5mM) treatment (C and D). Baf A1 treatment caused P62 accumulation in all four groups, with a more significant rise of P62 in control and rotenone + hepcidin groups compared with the condition of no Baf A1 (C and E). Hepcidin failed to suppress P62 accumulation in rotenone-treated cells under Baf A1 treatment (E). Baf A1 treatment caused a general rise of LC3BII/I ratio in all of four groups (D and F). Hepcidin still increased the ratio of LC3BII/I when cells were treated with rotenone and Baf A1, an effect suppressed by FeSO 4 (F). Both autophagy inhibitor CQ and 3MA blocked hepcidin-mediated A-syn clearance in rotenonetreated SH-SY5Y cells. The data were normalized to control (G). *p < 0.05, **p < 0.01, one-way ANOVA (n = 5-10 in each group); error bars, S.E.M.
that hepcidin promotes a-synuclein clearance via specific autophagy activation in rotenone models, which is dependent on a reduction of free iron level.
DISCUSSION
In this study, we presented evidence supporting that iron accumulation could be a common thread of two major pathological hallmarks of PD, mitochondrial dysfunction and a-synuclein accumulation. More importantly, we found that suppression of iron accumulation in the brain by overexpression of hepcidin could rectify both mitochondrial dysfunction and a-synuclein accumulation in animal models of PD and achieve a therapeutic effect on motor deficits. The major experimental paradigm for the generation of Parkinsonism in this study is based on chronic administration of rotenone. Rotenone is known not only to cause highly selective nigrostriatal dopaminergic degeneration and motor deficits, including bradykinesia, postural instability, and rigidity, but also to result in fibrillar cytoplasmic inclusions that contain a-synuclein and ubiquitin in dopaminergic neurons of the SN (Betarbet et al., 2000(Betarbet et al., , 2006Feng et al., 2006;Cannon et al., 2009). Furthermore, chronic rotenone treatment on differentiated SH-SY5Y cells reproduces Lewy neurites with accumulated a-synuclein (Borland et al., 2008).
Although mounting evidence suggests that iron is accumulated in the SN together with mitochondrial deficits in PD, the mechanistic link, if any, between these two phenomena has not been established firmly. It has been proposed that proteins containing Fe-S clusters in mitochondria and IRP1 could be the causal link between mitochondrial damage and consequential mitochondrial and cytoplasmic iron increase (Muñ oz et al., 2016;Liddell and White, 2018;Cerri et al., 2019;Mena et al., 2011Mena et al., , 2015b. Meanwhile, it had been reported that mitochondrial health and iron homeostasis are co-regulated by Nrf2, a redox-sensitive transcription factor (Ammal Kaidery et al., 2019). However, whether mitochondrial and cytoplasmic labile iron increase is a prerequisite for mitochondria deficits and effective treatment targeting excess iron to rescue mitochondria have not been well explored. In a previous in vitro study on midbrain dopaminergic neurons, MPTP induced overexpression of DMT1 and iron influx together with a mitochondria membrane potential decrease, whereas DFO, the iron chelator, abolished all of these effects (Zhang et al., 2009). As DFO is unable to pass through the mitochondrial membrane and chelate mitochondrial iron, the protective effect of DFO implies that iron outside the mitochondria exacerbates mitochondrial damage. However, a group recently reported that chelators that mainly decrease mitochondrial iron pool was much more effective than those that decrease the cytoplasmic iron pool, indicating that the mitochondrial iron pool plays a more important role in mitochondrial intoxication (Mena et al., 2015a). These studies implicate that excessive free iron accumulation could account for mitochondria deficiency in PD. Our result that hepcidin ameliorated rotenone-induced mitochondrial malformation and deficiency in the SN supports this notion. Thus, natural iron-regulatory protein can be as effective as iron chelators in regulating mitochondrial iron dyshomeostasis. These findings are in line with previous studies showing that mitochondrial ferritin, an iron storage protein specifically located in mitochondria that possesses high homology to H-ferritin (Levi et al., 2001), could protect mitochondria and suppress ROS and dopaminergic neural loss in both 6-OHDA-and MPTP-induced PD models (Shi et al., 2010;You et al., 2016).
Our finding on the effects of hepcidin on α-synucleinopathy is of particular interest. Phosphorylated α-synuclein fibrils are the major constituent of Lewy neurites and Lewy bodies, which are the hallmarks of PD. Most α-synuclein in the SDS-insoluble, urea-soluble fraction is phosphorylated at Ser129, whereas normal α-synuclein in the Triton fraction is not (Fujiwara et al., 2002). Besides fibrils, the phosphorylated α-synuclein dimer that induces mitochondrial deficits is also SDS resistant (Grassi et al., 2018). Moreover, α-synuclein forms various SDS-resistant oligomers that are toxic to cells, such as species induced by Fe3+ or dopamine (Kostka et al., 2008; Cappai et al., 2005). In the present study, SDS-resistant α-synuclein was elevated in rotenone-treated rats, whereas hepcidin exerted a strong suppressive effect, suggesting repression of PD pathogenesis. The interaction between iron accumulation and α-synucleinopathy in PD has also been documented. Iron accumulation occurs in the SN, where Lewy bodies are also abundantly present (Spillantini et al., 1997), and ferrous and ferric iron are observed in Lewy bodies (Peng et al., 2010). In ex vivo experiments, iron directly interacts with α-synuclein and promotes its oligomerization and fibrillization (Joppe et al., 2019). Several in vitro experiments also imply post-transcriptional regulation of α-synuclein by iron via the IRP-IRE signaling pathway. As for α-synuclein degradation, iron may suppress the proteasomal degradation of α-synuclein by inactivating Parkin (Ganguly et al., 2020). In this study, we confirmed that iron accumulation is one of the major causes of α-synucleinopathy and found that hepcidin relieved the inhibition of autophagic flux by decreasing overloaded free iron, thereby promoting autophagic degradation of α-synuclein. The autophagy-lysosomal pathway is the major pathway for α-synuclein degradation during increased α-synuclein burden or when α-synuclein is aggregated. However, this pathway may be impaired in PD. In sporadic PD patients, the activity of α-galactosidase A, a lysosomal hydrolase, is significantly decreased. Moreover, it has been widely reported that boosting autophagy promotes α-synuclein clearance and protects neurons in PD models (Dehay et al., 2010; Lonskaya et al., 2013; Jang et al., 2016; Hou et al., 2015). In addition, the iron chelator DFO induced autophagy and exerted protective effects in rotenone-treated SH-SY5Y cells. Taken together, our results suggest that iron accumulation plays a role in autophagic flux inhibition and subsequent α-synuclein accumulation in PD, whereas rectifying iron homeostasis via hepcidin can reduce free iron, activate autophagy, and promote α-synuclein clearance.
The iron-suppressing effect of hepcidin in dopaminergic neurons is predominantly achieved through the suppression of DMT1+ and TfR, the two major iron import proteins responsible for non-transferrin-bound and transferrin-bound iron import, respectively. In fact, both DMT1+ and TfR have consistently been implicated in human PD as well as in animal models of PD (Aguirre et al., 2012; Jia et al., 2015; Salazar et al., 2008). The inhibitory effect of hepcidin on iron uptake in neurons has been reported by us previously (Zhou et al., 2017), and the mechanism may be the same as in astrocytes and macrophages, i.e. via the cAMP-PKA pathway (Du et al., 2011, 2012). Fpn is the only identified iron export protein, as well as the target of hepcidin located on the cell membrane. However, hepcidin-mediated Fpn reduction in dopaminergic neurons does not appear to contribute to the effect. First, despite some studies revealing a coincidence of changed Fpn levels in the SN with the fate of dopaminergic neurons in PD models (Xu et al., 2017; Lee et al., 2009; Finkelstein et al., 2017; Lv et al., 2011; Zhang et al., 2014) and the detrimental effect of Fpn silencing in 6-OHDA-treated cultured cells, other studies have shown that Fpn deletion has no apparent consequence for dopaminergic neurons in the SN in mice (Matak et al., 2016). Ferrous iron is exported by Fpn following oxidation by ceruloplasmin (CP), a protein that has been shown to be associated with PD. However, the link between CP and Fpn in dopaminergic degeneration is not clear and is worth investigating in the future. Secondly, hepcidin still suppresses iron uptake in iron-depleted cells, indicating that the inhibitory effect of hepcidin on iron uptake is not a feedback of hepcidin-mediated Fpn degradation and the subsequent iron increase (Du et al., 2011). Thirdly, cyclic adenosine monophosphate is normally a second messenger responding to hormone receptors located on the membrane of the target cell, which implies the existence of an unidentified receptor of hepcidin. Besides repressing iron uptake in dopaminergic neurons, given that hepcidin reduces iron transport across the blood-brain barrier (BBB) in normal and iron-overload conditions via regulating DMT1 and TfR (McCarthy and Kosman, 2014), as proposed in a ''bypass model'' (Qian and Ke, 2019), and that the properties of the BBB are altered in PD (Stolp and Dziegielewska, 2009), it is probable that hepcidin ameliorates rotenone-induced neuronal degeneration and iron accumulation partially via suppressing iron import into the brain. In summary, although Ad-hepcidin also decreased Fpn expression, we found that suppression of iron influx is the predominant effect of hepcidin in the SN. Nevertheless, to establish a clear causal relation between hepcidin-mediated iron reduction and the other beneficial effects of hepcidin, identifying the receptor of hepcidin or manipulating hepcidin-regulated iron proteins is warranted in future studies.
Hepcidin is an endogenous cationic hormone containing 25 amino acids, and it has been reported to cross the BBB (Raha-Chowdhury et al., 2015; Xiong et al., 2016; Vela, 2018), although its permeability has not yet been determined. To increase the penetration rate across the BBB, hepcidin can be conjugated with plasma membrane transducing domains (Murriel and Dowdy, 2006) or with ligands targeting receptors on the BBB, notably transferrin or TfR monoclonal antibodies (Angeli et al.; Pardridge, 2015). Alternative approaches that are potentially feasible include delivery by nanoparticles (Grabrucker et al., 2016; Angeli et al., 2019) and the use of hepcidin agonists (Casu et al., 2018).
It is noteworthy that hepcidin has a dual role in diseases associated with inflammation and iron overload (Vela, 2018). Hepcidin is an antimicrobial peptide that is overproduced under infection, inflammation, and stress as an innate defense mechanism. A consensus on whether hepcidin overexpression in different disease models is beneficial or detrimental has not been reached. For example, hepcidin could exacerbate brain damage in ischemic or hemorrhagic stroke (Tan et al., 2016; Ding et al., 2011; Xiong et al., 2016), whereas it has been reported to be protective in iron-overload conditions and in Alzheimer disease (AD) (Gong et al., 2016; Urrutia et al., 2017b). One possibility is that the timing of hepcidin administration is critical, being beneficial mainly as a pre-treatment (Vela, 2018). However, given that we did not pre-treat with hepcidin in our model, differences in the nature of the diseases may account for the observed differences. Specifically, stroke is a disease with acute and severe damage to the BBB and neurons, whereas in neurodegenerative diseases neurons are damaged chronically, usually over years of progression. Inflammation is far more severe in stroke than in neurodegenerative diseases (Wang et al., 2018; Mracsko and Veltkamp, 2014; Stephenson et al., 2018), with notable differences in the roles of immunocytes between ischemic stroke and neurodegenerative diseases (Rosset et al., 2015; Zhu et al., 2019; Solleiro-Villavicencio and Rivas-Arancibia, 2018; Gonzalez and Pacheco, 2014; Brochard et al., 2008; Kustrimovic et al., 2019; Chen et al., 2018). Given that AD and PD share some common mechanisms of iron metabolism and that hepcidin is also decreased in the brain in AD animal models and patients (Raha et al., 2013), it is reasonable to speculate a similar protective role of hepcidin in PD. Also, it is possible that microglia, astrocytes, and the peripheral immune system all contribute to neurotoxin-induced dopaminergic neuronal degeneration. Inflammation can be triggered by mitochondrial damage and α-synucleinopathy (Geto et al., 2020; Gelders et al., 2018; Troncoso-Escudero et al., 2018) and acts as an amplifier of degenerative events, including aggravating mitochondrial damage and iron accumulation in neurons via cytokines, ROS, and TLR signaling in PD (Urrutia et al., 2014; Trudler et al., 2015). Nonetheless, our in vitro experiments demonstrated the effectiveness of hepcidin on cultured neurons. Meanwhile, 6-OHDA and rotenone not only primarily and selectively damage dopaminergic neurons in vivo but also cause neurodegeneration and iron accumulation in the absence of glia in in vitro studies (Mouhape et al.; Workman et al., 2015; Mena et al., 2011; Jia et al., 2015). Moreover, a direct pathway from mitochondrial dysfunction to Fe-S protein-mediated iron accumulation in neurons, without involvement of inflammation in the early steps, has been reported (Urrutia et al., 2014; Muñoz et al., 2016; Mena et al., 2015b). Thus, a direct action on dopaminergic neurons rather than on glia is believed to be the more important protective mechanism of hepcidin in the present study.
In conclusion, our study demonstrates the beneficial effect of hepcidin in PD animal models and elucidates the mechanism in neurons, which is summarized in Figure 7. From a broader perspective, our study confirms the detrimental effect of iron overload in the pathogenic process of Parkinsonism. Rectifying iron dyshomeostasis via manipulation of the hepcidin level in the brain could suppress iron accumulation and other major pathologies of PD and therefore represents a promising therapeutic direction.
Limitations of the Study
As discussed earlier, a limitation of the present study is that it remains unclear whether the beneficial effect of hepcidin on dopaminergic neurons is also mediated by modulation of inflammation in PD. In addition, the contribution of glial cells to the protective process was not experimentally addressed.
Resource Availability
Lead Contact
Further information and requests for resources and reagents should be directed to and will be fulfilled by the Lead Contact, Ya Ke (yake@cuhk.edu.hk).
Materials Availability
Viruses generated in this study will be made available on request, but we may require a payment and/or a completed Materials Transfer Agreement if there is potential for commercial application.
Data and Code Availability
Raw data will be shared upon receipt of a reasonable request.
METHODS
All methods can be found in the accompanying Transparent Methods supplemental file.
the midline. After 6-OHDA was injected for 14 days, the cylinder test was performed to assess motor ability. Then, some rats were kept for 14 more days for testing apomorphine-induced rotation and TH staining; the others were sacrificed for all the other experiments.
Behavioral tests
The catalepsy test, stepping test, rotarod test and ladder rung walking test were performed 35 days after the start of rotenone injection to assess the motor ability of rats in the different groups. The catalepsy test consisted of the grid test and the bar test (Huang et al., 2006; Bashkatova et al., 2004). In the grid test, the rat was made to hang from a vertical grid with a distance of 1 cm between each wire. The time from holding onto the grid to the first paw movement was recorded as the first-step latency. In the bar test, the forepaws of the rat were placed on a bar parallel to and 5 cm above the base. The time from placing the front paws on the bar to the removal of one paw from the bar was recorded as the leave latency. In the stepping test (Olsson et al., 1995), the rat was held by the experimenter with one hand fixing the hindlimbs and slightly raising the hindquarters above the surface, while the experimenter's other hand fixed one forelimb. With the free paw touching the table, the rat was moved slowly sideways (0.9 m in 5 sec) by the experimenter, first in the forward and then in the backward direction. The number of adjusting steps was counted for both paws in the forward and backward directions of movement. In the rotarod test (Li et al., 2017), rats were placed on a spinning roller accelerating from 4 to 40 rpm at a rate of 20 rpm/min. The duration that each rat stayed on the rod was recorded, and the maximum testing time was 180 sec. Each rat was assessed 5 times and the average latency was calculated to represent its motor ability. The ladder rung walking test was used to assess skilled walking in rats (Metz and Whishaw, 2009). The apparatus consisted of 2 walls and metal rungs (3 mm in diameter) that could be inserted to create a floor with a minimum distance of 1 cm between rungs. The pattern of the rungs was irregular and varied between trials; the distance between rungs changed randomly from 1 to 3 cm.
Rats were trained to walk across the ladder 5 times and then tested 5 times.
The performance of limb placement was rated from 6 (best) to 0 (worst) as previously described. The cylinder test and apomorphine-induced rotation were also used to assess the motor deficits in the 6-OHDA-induced model of PD (Rumpel et al., 2015). In the cylinder test, rats were placed in a transparent cylinder for 3 min. The number of independent wall placements by the left forelimb, the right forelimb and both forelimbs simultaneously was counted, and the percentage of impaired forelimb use was calculated. After subcutaneous injection of apomorphine (0.5 mg/kg), contralateral rotation was recorded for 15 min.
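Purely as an illustration of the cylinder-test scoring (not the authors' actual script), the sketch below computes the percentage of impaired forelimb use from wall-placement counts. The exact formula is not given in the text, so the convention of crediting simultaneous placements half to each limb is an assumption.

```python
def impaired_forelimb_use(left: int, right: int, both: int, impaired: str = "left") -> float:
    """Percentage of wall placements made with the impaired forelimb.

    Assumption (not stated in the text): simultaneous placements count half
    toward each forelimb, a common convention for the cylinder test.
    """
    total = left + right + both
    if total == 0:
        raise ValueError("no wall placements recorded")
    impaired_count = (left if impaired == "left" else right) + 0.5 * both
    return 100.0 * impaired_count / total

# Hypothetical counts: 4 left-only, 14 right-only and 6 simultaneous placements
print(impaired_forelimb_use(4, 14, 6, impaired="left"))  # ~29.2%
```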
Virus construction
The hepcidin protein-encoding region (GenBank NM-053469) was cloned, digested with BglII and SalI (New England Biolabs, USA) and ligated into a green fluorescent protein (GFP)-tagged pAdTrack shuttle vector (Invitrogen, USA). The positive clone obtained (Hepc-shuttle) was transformed into BJ5183 E. coli cells containing the AdEasy-1 plasmid. The resulting positive clone (Hepc-Adeasy) was then linearized with PacI (New England Biolabs, USA) and transfected into HEK293 cells (Invitrogen, USA). One week later, recombinant viruses, named Ad-hepcidin, were collected and purified. A GFP-expressing adenovirus, named Ad-blank, was used as the negative control.
Cell treatment
For mitochondria-related experiments, cells were treated with 100 nM rotenone for 1 day. For the other experiments, a chronic cell model of PD was created by treatment with 20 nM rotenone for 3 days to cause α-synuclein accumulation.
Immunohistochemistry
The fixed brain sections or cells were blocked with 4% normal goat serum and incubated with diluted primary antibody. The sections were then washed and incubated with the appropriate secondary antibody. For immunohistochemistry, antibody-incubated sections were washed and then processed for visualization.
Iron staining
Iron staining was based on formation of the insoluble Prussian blue dye, a complex hydrated ferric ferrocyanide substance (Meguro et al., 2007; Perls, 1867). Briefly, brain sections were incubated in a freshly prepared solution of 7% potassium ferrocyanide (3% HCl). After incubation, the sections were washed and immersed in 99% methanol containing 1% hydrogen peroxide to quench endogenous peroxidase activity. Afterwards, the sections were washed and incubated in a DAB solution to enhance the signals.
Western blot
Proteins from brain tissues or cultured cells were extracted with RIPA buffer. Subsequently, the homogenates were centrifuged and the supernatants were harvested. For α-synuclein fractionation, brain tissues were lysed with Triton, and the pellets were further lysed with a stronger detergent. The supernatants were denatured. Lysates were loaded and run in a single track of SDS-PAGE under reducing conditions and subsequently transferred to a pure nitrocellulose (NC) membrane. The blots were blocked and then incubated with primary antibodies. After that, the NC membrane was washed and incubated with the appropriate secondary antibody. The primary antibody used was mouse anti-β-actin monoclonal antibody (Lot 052M4816V, Cat A2228, Sigma-Aldrich Inc). The primary antibodies used to detect TfR, DMT1+, DMT1-, Fpn and α-synuclein in immunohistochemistry were also used for Western blot. The secondary antibodies were goat anti-mouse and anti-rabbit IRDye 800CW IgG (Li-Cor, Lincoln, NE, USA).
Hepcidin measurements
Hepcidin content in the SN was determined with ELISA kits (Shanghai Yuanye Bio-Technology Co., Ltd, China) following the protocols provided by the manufacturer.
Mitochondria isolation
Rats were decapitated and the brains were dissected out and chopped for mitochondria isolation (Sims and Anderson, 2008). Rat midbrains were transferred to a Dounce homogenizer and isolation buffer was added to produce a 10% mixture. The tissue pieces were homogenized and centrifuged at 1,000 × g at 4°C for 5 min. The supernatant was collected and centrifuged at 20,000 × g at 4°C for 10 min. The pellet was resuspended and homogenized in cold 15% Percoll solution. The suspension was layered on top of a density gradient (consisting of 23% Percoll on top of 40% Percoll in centrifuge tubes) and centrifuged at 30,000 × g at 4°C for 5 min. The lowest band (the mitochondria-enriched fraction) was then collected. Isolation buffer was added to the mitochondrial fraction at a volume ratio of 4:1. The mixture was centrifuged at 16,000 × g at 4°C for 10 min. Fatty-acid-free BSA and isolation buffer were added to the pellet and mixed gently. The mixture was centrifuged at 7,000 × g at 4°C for 10 min. The final pellet was gently resuspended and homogenized in isolation buffer.
ATP measurement
Isolated mitochondria were immediately incubated for 5 min at 37°C with 2.5 mM ADP, 1 mM pyruvate, and 1 mM malate. Luciferin substrate and luciferase enzyme (Promega, USA) were then added to generate luminescence. The bioluminescence was assessed on a Perkin Elmer Envision spectrophotometer. ATP in the SN was extracted with trichloroacetic acid and measured using the same kit.
ROS production quantification
Isolated mitochondria were incubated for 30 min at 37°C in reaction buffer containing 10 μM H2-DCFDA. Fluorescence was read using a BMG Novo Star Galaxy spectrofluorimeter with 485 nm excitation and 520 nm emission filters.
Fluorimetric analysis of mitochondrial membrane potential (ΔΨm)
Isolated mitochondria or cultured cells were incubated with JC-1 staining buffer according to the manufacturer's instructions (Sigma, CS0760). The fluorescence intensity of JC-1 aggregate was detected with 520 nm excitation and 590 nm emission filters, whereas the JC-1 monomer was measured with 485 nm excitation and 520 nm emission filters using a BMG Novo Star Galaxy spectrofluorometer. The fluorescence intensity ratio of aggregates to monomers was calculated as an indicator of ΔΨm.
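For clarity, a minimal sketch of how the JC-1 readout could be turned into a ΔΨm indicator is given below; the 590 nm (aggregate) and 520 nm (monomer) channels follow the text, while the plate values and the normalization to a vehicle control are illustrative assumptions, not the authors' data or code.

```python
import numpy as np

def jc1_ratio(aggregate_590, monomer_520):
    """Ratio of JC-1 aggregate to monomer fluorescence per well, used as a ΔΨm index."""
    return np.asarray(aggregate_590, dtype=float) / np.asarray(monomer_520, dtype=float)

# Hypothetical plate readings (arbitrary fluorescence units)
control = jc1_ratio([820, 790, 805], [400, 410, 395])
rotenone = jc1_ratio([510, 530, 495], [520, 500, 515])

# Express the treated ΔΨm index as a percentage of the control mean
print(100 * rotenone.mean() / control.mean())
```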
Complex I activity
The measurement has been described in detail elsewhere (Estornell et al., 1993).
Extracted mitochondria or SN homogenate was reacted with a mixture containing CoQ and KCN at 37°C for 5 min. Then, NADH was added and the absorbance at 340 nm was read every 20 sec for 2 min to calculate the rate of change.
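To illustrate how such kinetic readings can be reduced to a rate, the sketch below fits a straight line to absorbance-versus-time data and reports the slope (ΔA340 per minute). Converting that slope into enzymatic units would additionally require the NADH extinction coefficient and the protein amount, which are not specified here, so the numbers are purely illustrative.

```python
import numpy as np

def a340_rate(times_sec, absorbances):
    """Least-squares slope of A340 vs time, returned as ΔA340 per minute."""
    t = np.asarray(times_sec, dtype=float) / 60.0   # seconds -> minutes
    a = np.asarray(absorbances, dtype=float)
    slope, _intercept = np.polyfit(t, a, 1)
    return slope

# Hypothetical readings taken every 20 s over 2 min after adding NADH
times = np.arange(0, 121, 20)                        # 0, 20, ..., 120 s
a340 = [1.20, 1.17, 1.14, 1.11, 1.08, 1.05, 1.02]
print(a340_rate(times, a340))                        # ≈ -0.09 ΔA340/min (NADH oxidation)
```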
Transmission electron microscopy
Transmission electron microscopy was conducted to assess the morphological changes of mitochondria. Rats were quickly perfused with saline followed by saline containing 0.5% glutaraldehyde and 4% paraformaldehyde. Samples of the SNc were further fixed in 2.5% glutaraldehyde for 4 hours followed by 1-2% osmium tetroxide for 45 min.
The samples were then dehydrated in graded alcohol. Afterwards, the pieces were washed with propylene oxide, kept in a propylene oxide/Epon-812 mixture, and then embedded in Epon-812 resin for 2 days at 60°C. The embedded samples were sectioned at a thickness of 80 nm using a diamond knife (Diatome, Switzerland) on an Ultracut E microtome (Leica, Deerfield, IL). The sections were mounted on copper mesh and stained with uranyl acetate and lead nitrate. Pictures of each section were taken using a transmission electron microscope (Hitachi H-7700, Japan).
Free ferrous iron measurement
Ferrous iron measurement in cells was performed using the calcein-AM method (Epsztejn et al., 1997). Treated SH-SY5Y cells were incubated with 0.
RPA staining
SH-SY5Y cells were loaded with the benzyl ester probe RPA (0.2 μM) in HBSS buffer for 20 min at 37°C, followed by an additional 30 min of incubation in dye-free buffer. The cells were then observed under a confocal microscope, or fluorescence was measured using a fluorescence spectrophotometer at 530 nm (excitation) and 590 nm (emission).
Quantification and statistical analysis
Two-group comparisons were performed with the two-tailed Student's t test. Comparisons of more than two groups were analyzed using one-way ANOVA followed by the Tukey-Kramer post-hoc test. For some paired experiments with more than two groups, paired two-way ANOVA was applied, followed by the Tukey-Kramer post-hoc test. Data are expressed as mean ± SEM. Results were regarded as significant at P < 0.05.
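As a minimal illustration of this analysis pipeline (not the authors' actual code), the sketch below runs a two-tailed t test for two groups and a one-way ANOVA with Tukey's post-hoc comparison for three groups; the group names and readings are hypothetical.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)

# Hypothetical readings (arbitrary units) for three groups
control = rng.normal(100, 8, size=8)
rotenone = rng.normal(70, 8, size=8)
rot_hepc = rng.normal(90, 8, size=8)

# Two-group comparison: two-tailed Student's t test
t, p = stats.ttest_ind(control, rotenone)
print(f"t = {t:.2f}, p = {p:.4f}")

# More than two groups: one-way ANOVA followed by Tukey's post-hoc test
f, p_anova = stats.f_oneway(control, rotenone, rot_hepc)
values = np.concatenate([control, rotenone, rot_hepc])
labels = ["control"] * 8 + ["rotenone"] * 8 + ["rotenone+hepcidin"] * 8
print(f"ANOVA F = {f:.2f}, p = {p_anova:.4f}")
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```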
"year": 2020,
"sha1": "98934edcffca05d8fca3a21cece543bcd6654c56",
"oa_license": "CCBYNCND",
"oa_url": "http://www.cell.com/article/S2589004220304715/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "7f411bcc6e34a01e544720f852c6564a7d9275b2",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Analysis of the innovation value chain in strategic projects of the Brazilian Army
Purpose – The purpose of this paper is to describe and compare seven case studies of strategic innovation projects of the Brazilian army; these projects present high transformational potential and high investments and are supported by technology and science policies.
Design/methodology/approach – The authors present multiple case studies in which they conduct a documentary analysis of the innovation processes in the Brazilian army, as well as semi-structured interviews with eight servicemen with more than 15 years of working experience.
Findings – The results obtained suggest that the innovation process occurs in four stages: creation, selection, development and diffusion of ideas.
Practical implications – The research is relevant because it presents how the interaction between the Brazilian army, companies and academia strengthens the innovation ecosystem, stimulating the development of best practices for the management of strategic projects.
Originality/value – The main contribution of this study is to present the strategic project management of innovation based on public policies and investment in projects of the Brazilian army, which are drivers for the development of ecosystems that promote the creation and expansion of companies, the diffusion of technological knowledge in universities, and suitable solutions for the military sector.
Introduction
Although innovation is increasingly associated with entrepreneurial competitiveness, it also affects other sectors, e.g. civil defense. Military activity, known for being labour- and troop-intensive, started to receive intensive capital and innovation investments from the mid-nineteenth century onwards (Markusen, 1986); there was, therefore, a shift from competition in weaponry toward scientific competition (Paarlberg, 2004; Schmidt, 2013). Whereas during the Second World War military supremacy rested on the industrial capacity to produce weaponry, as in countries like the USA (Paarlberg, 2004), by the end of the war scientific capacity had become the focus of military powers (Schmidt, 2013), producing successive technology generations and quick changes in the strategic military environment.
In recent approaches to innovation projects, the role played by ecosystems in the success of these enterprises has been gaining strategic relevance, especially for enterprises involved in long-term and highly complex activities. Studies point out the need for research on the management of innovation ecosystems under uncertainty, as well as on their use in radical innovation, new markets and emerging industries, in which value creation outweighs value capture (De Vasconcelos Gomes, Facin, Salerno, & Ikenami, 2018), e.g. the strategic projects of the Brazilian army.
Our paper analyzes how the innovation process occurs during the management of strategic projects of the Brazilian army. In this context, the purpose of our research is to describe and compare seven case studies that present innovation projects of the Brazilian army characterized by high investments and transformation potential. We also intend to analyze how these processes strengthen the ecosystems, deal with uncertainties in the sector and promote the interaction among players in an environment that presents several restrictions and singularities.
Theoretical framework
Organizational innovation process
Organized in sets of activities related to idea creation, problem solving, implementation and diffusion, the purpose of every innovation process is the generation of a significant economic impact (Salerno, de Vasconcelos Gomes, Silva, Bagno, & Freitas, 2015). In this process, not all ideas are used. Through an innovation funnel, the ideas that are more likely to meet market needs are selected to continue in the process until the implementation stage. The purpose of the innovation funnel is to dismiss ideas in order to pursue a continuous reduction of the uncertainties of a project or a set of projects (Silva, Bagno, & Salerno, 2014). In the first phase, also known as the front-end phase, ideas are created and then screened according to their relevance; they are then analyzed in a second filter (Phase 2) in order to be approved and used in projects; at last, there is the introduction into the market in Phase 3 (Salerno et al., 2015).
According to Clark and Wheelwright (1992), collaboration between internal and external players in the innovation process is considered a necessary source of technology and innovation and a way to widen the selection of new ideas. Chesbrough (2003), on the other hand, proposed a structured open innovation model, as well as the acquisition of knowledge from external sources. According to Chesbrough (2003), open innovation is a way to obtain knowledge through the participation of the ecosystem players.
This model is in accordance with the ideas by Tidd, Bessant, and Pavitt (2001), in which resources from other external organizations reduce the costs of technological development, as well as market entry risks and the development time of a new product. In this model, it is possible to observe the collaboration from the external environment toward the company; and knowledge can also flow out of the organization toward external players through licensing, technology and spin-offs (Bueno & Balestrin, 2012).
Ideas created within the organization and ideas that stem from external partnerships, collaborations and interactions have to go through the procedures of selection, development and implementation before reaching the market as new products, services, processes, business models or a combination of two or more (Goffin & Mitchell, 2005).
Through this innovation ecosystem, the different players (bonded by the common purpose of ensuring value generation) can work either in a dependent way, as suppliers and purchasers, or in a more independent way, only for development and commercialization (Adner & Kapoor, 2010). The common focus of these players is co-innovation and the adoption of the technology and innovation necessary to implement new technologies effectively. This collaboration goes beyond the traditional concept of the value chain, making it possible to benefit from the intensive exchange of knowledge and from adaptation to the environment in which the players operate (Lubik, Garnsey, Minshall, & Platts, 2013).
The benefits yielded to the economy by the innovation ecosystem through R&D investments in the military sector cover not only the creation of research and professional training centers, but also spin-off effects already in the initial phases of the research and valuable contracts established between the government and other companies that operate in the ecosystem (Mowery, 2010).
The analytical border, one of the characteristics of the innovation ecosystem, is not limited to national borders, regional clusters, contractual relations and/or complementary providers (Tsujimoto, Kajikawa, Tomita, & Matsumoto, 2018). This is an interesting aspect when analyzing innovation ecosystems in the defense sector because business players are not the only ones covered; other non-commercial players, e.g. society, are also included. According to the innovation ecosystem literature, the players involved in the ecosystem and the leadership of other organizations are usually associated with a specific company (Nambisan & Baron, 2013); in the military sector, however, the strategic projects are carried out by the Brazilian army.
Just like a collaborative network (Camarinha-Matos & Afsarmanesh, 2008), an innovation ecosystem is a long-term strategic collaborative network, guided by goals and aiming at specific business opportunities (Graça & Camarinha-Matos, 2017).
In the conceptual structure proposed by De Vasconcelos Gomes et al. (2018), the innovation ecosystem is characterized by joint value creation accomplished by interconnected and interdependent players (focal companies, suppliers, complementary innovators and regulators). In the life cycle of the ecosystem, these players cooperate and compete among themselves in a co-evolution process; i.e. the effect of the collaboration can be noticed in the evolution of the players, from the expansion of companies to a greater participation of universities and research centers.
A successful example of interaction in ecosystems for the defense sector is Route 128 in Massachusetts, USA, which aggregates technological interests, highly skilled human resources, infrastructure and the existence of venture capital in the region through the government, industries and academia (Massachusetts Institute of Technology and Harvard University) (Silva & Quandt, 2019).
Similarly, Almeida (2013) indicates that the form of action of the Defense Advanced Research Projects Agency (DARPA) in the USA shows how difficult it is to innovate without being inserted into an innovation ecosystem. The author gives the example of the close relationship between research centers, universities and private companies in the USA.
Regarding Canada, Nimmo (2013) presented the technological script of the initiative Soldier Systems Technology Roadmap, in which the role played by the government as a client seeks to engage industries, academia and other research organizations in order to modernize the Canadian army.
Among the innovation models, Goffin and Mitchell (2005) proposed a model with two extra elements, totaling five main areas or elements of innovation management. The Innovation Pentathlon Framework is composed of the elements: ideas, prioritization, implementation, innovation strategy, and people and organization. The innovation strategy element is the responsibility of top management, which develops and pursues strategic goals. Focus is a fundamental point in this phase, achieved through constant observation and monitoring of market trends and new technologies, with management being responsible for communicating the role of innovation within the company's areas. The people and organization element is related to people management and can occur through incentive policies, training and the creation of an organizational structure that stimulates innovation (Goffin & Mitchell, 2005; Oke, 2007).
For multi-project organizations, Cooper (1993) indicates a model known as stage-gate. Such model understands that technological innovation is a process focused on the development of new products (Silva et al., 2014). According to Cooper (1993), the development of new products must be fragmented in predetermined stages; each of them consists of a list of prescribed, cross-functional and parallel activities, explaining the construction of knowledge, which is materialized in a good or a service through the other stages.
The stage-gate model is composed of five stages and five gates. The process begins with the emergence of an idea that is developed as it goes through specific evaluations along the way. The gates represent the decision whether to continue or to interrupt the project. The process runs from Gate 1 (where ideas are evaluated according to their feasibility in order to be forwarded to the R&D area with information about potential and market entry) to Gate 5, where the global viability of the project is evaluated in terms of product, production process, consumer acceptance and economic issues.
In the structured high-performance innovation model (Jonash & Sommerlatte, 2001), innovation needs to be present in the entire value chain of the company; it cannot be restricted to R&D departments (Silva et al., 2014). This model has two fundamental principles: to provide the entire company with innovation, creating value; and to boost the technology and competencies necessary to accelerate sustainable innovation while providing competitive advantage. The first principle shows that significant innovations stem from an internal mobilization involving the entire value chain. The second principle occurs through technology platforms and competence management. However, these activities are only possible if the company directs its efforts toward five fundamental elements: processes, strategy, organization, resources and learning.
From the perspective of the innovation value chain by Birkinshaw (2017), the process of idea creation is the first stage for a company to improve its innovation outcomes. For this, idea creation is divided into three phases: internal, interaction and external. Conversion is divided into two phases (selection and development), while diffusion presents only one phase (dissemination).
In the idea creation phase, the internal environment, the external environment and the interaction between them are observed, and a critical analysis is made of the importance of the emergence of new ideas, as well as of the interaction among them, so that they can be sustained beyond the company's borders. In the conversion phase, the ideas created are selected according to their importance and relevance, so that they can be implemented in products. The third and last phase, dissemination, addresses the propagation of the idea; the diffusion of the idea is expressed in percentages.
Another approach to the innovation process is based on empathy, inclusive thinking, experimentation, optimism and collaboration. This is so-called design thinking, a field that uses the designer's sensibility and methods to meet people's needs based on what is technologically feasible; it is a viable business strategy that can be transformed into consumer value and market opportunity. Design thinking projects go through three stages: inspiration, according to the circumstances (problem, opportunity or both) that enable finding solutions; ideation, in the process of creating, developing and testing ideas; and implementation, to design a roadmap toward the market (Brown, 2008; Geissdoerfer, Bocken, & Hultink, 2016).
A critical dimension of the innovation process is related to organization and management, and the innovation perspective is central to the renovation process (Tidd et al., 2001). In this context, the idea needs to emerge from an analysis not only of the environment (internal and external), but also of sensitive signals of threats and opportunities. This is the so-called search action. The next stage of the model is known as selection, which is responsible for deciding which signals must be taken into consideration. The third stage is implementation, i.e. translating the potential of the initial idea into something new and launching the product in an internal or external market. The fourth and last step described in the model is related to value capture, which is accomplished through the development of innovation (in terms of sustainable adoption) and diffusion (related to learning and progression throughout the life cycle), in order to enable the company to develop its own knowledge basis and improve the ways through which the process is managed.
We present in Table I the main models related to the innovation process described herein. Based on the characteristics of the organizational models of innovation, we elaborated a conceptual model taking into account the innovation value chain approach by Birkinshaw (2017); the process was divided into four stages: idea creation, selection of the best ideas, development and (adoption and) diffusion. Table II presents the conceptual model.

Table I. Innovation process models (Model / Author / Focus / Process)
Innovation management – Tidd, Bessant and Pavitt (2001): innovation is a generic process associated with survival and growth, composed of three phases; effective innovation management assumes good performance in four aspects (strategy; organizational context; support and implementation mechanisms; external relationships). Innovation process: search, selection and implementation (permeated by learning).
Open innovation funnel – Chesbrough (2003): to add value to the organization through multiple ways of seeing opportunities in current or new businesses. Gathering ideas is open to inputs at any point: idea creation, internal development, acquisition of licenses, scale-up of products, etc.
Innovation value chain – Birkinshaw (2017): innovation as an integrated flow, from the creation of ideas to market entry; this approach enables the identification of challenges in the innovation process. Innovation process: idea creation, conversion and diffusion.
Design thinking – Brown (2008): based on empathy, integrative thinking, experimentation, optimism and collaboration. Process: immersion (understanding and observation); ideation; prototyping; development.
Pentathlon – Goffin and Mitchell (2005): to boost the organizational innovation strategy. Process: creation of ideas; prioritization and selection; implementation; innovation strategy; people and organization.
Sources: based on Silva et al. (2014) and Mazzola

Table II. Conceptual model of the innovation process (Stage / Characteristics)
Idea creation: internal and external cooperation; ideas are the inputs to develop the rest of the process (Goffin & Mitchell, 2005).
Idea selection: initial screening; the best ideas are detailed and analyzed; concepts and projects can either be rejected or become the final innovative product (Goffin & Mitchell, 2005); ideas that meet technical requirements and consumers' and market's needs; detailed investigation; evaluation of the importance and relevance of ideas (Hansen & Birkinshaw, 2007); decision, taking into account how the company can improve itself (Tidd & Bessant, 2015).
Development: fast and efficient development of the new product, service or process, or a combination of them (Goffin & Mitchell, 2005); the path from the emergence of the idea to the first result (Hansen & Birkinshaw, 2007); development of prototypes, tests and refinement (Brown, 2008); translating a potential idea into something new, launching the product in an internal or external market (Tidd & Bessant, 2015).
Adoption and diffusion: pre-commercialization (Cooper, 1993); in the Brazilian army, pre-commercialization is the pilot implementation in a determined area for practical tests carried out with the final user; production and launch in the market (Cooper, 1993); after testing, there is mass production and distribution of products to the places previously programmed to operate; disclosure in the entire organization (Hansen & Birkinshaw, 2007); after testing and during implementation, the disclosure is accomplished through army channels and the media in general.
Source: Authors
Innovation and defense in the Brazilian army
The technological evolution has been causing transformations in the armed forces, as well as in the defense sector, keeping track of changes in the innovation environment and its consequences in the sectors of telecommunication, energy, railways and aviation (Davies & Hobday, 2005). It occurs because contemporary wars depend on military strategies based on strategic and tactic advantages obtained through the intensive use of technology and knowledge (Martins-Mota, 2009).
In the first moment of the industrialization of the Brazilian defense sector (from the 1970s until the middle of the 1990s, with a peak during the 1980s), the sector was dominated by contracts with companies producing aircraft (Embraer), armored vehicles (Engesa) and missiles (Avibras). This phase was characterized by technologies that met local demands through innovation across multiple sectors and international cooperation in the aeronautic and naval sectors (Amarante & Franko, 2017).
After the 1990s, the Brazilian defense industry was affected by a decrease and recession in the domestic market, which resulted in a significant reduction in defense production (Amarante & Franko, 2017). In 1999, the Ministry of Defense was created in order to establish a strategy for the sector; however, only one part of the budget (considered one of the highest among the ministries) was applied in investments related to development and innovation (M. Mazzucato & C. C. R. Penna, 2016). Only in 2008, during the second phase, with the National Defense Strategy (NDS, or END in Portuguese), did the Brazilian defense industry restructure its guidelines along with innovation policies for the security of the country's borders, focusing on the natural resources of the then newly discovered pre-salt oil and gas reserves (M. Mazzucato & C. C. R. Penna, 2016). With the creation of the END, investments enabled the development of industrial policies that boosted development related to social and environmental aspects, while opening up to competition to update the infrastructure (aircraft, ships and vehicles) and establishing collaboration partnerships with national defense companies (Amarante & Franko, 2017).
These partnerships involved private technology companies, universities and research centers in three strategic sectors: aerospace, cybernetics and nuclear energy. The technology provided by these players is involved in several fields of the national industry, such as fighter jets, smart weaponry, submarines (nuclear and conventional), drones, communication technologies (M. Mazzucato & C. C. R. Penna, 2016) and health and agriculture solutions (Mowery, 2010).
These guidelines adopted by the Brazilian army, i.e. promoting the Brazilian defense industry through innovation, are in line with strategies developed by other countries traditionally involved with the war industry, like the USA, Russia, France and England, and are similar to the ones developed by other emerging countries like India, China and South Africa (Leske, 2018). In emerging economies, R&D expenditures on defense have a positive impact on innovation systems. Therefore, Brazil can learn some lessons from these emerging countries in order to analyze possible actions in the innovation ecosystem of the military sector: the high investment in R&D in India and South Africa; the positive impact of strategic and economic measures in the Chinese market; and Russia's recovery strategy in a scenario very similar to the one presented by Brazil (Leske, 2015).
Currently, the land force develops seven strategic projects focused on innovation. Created and developed in the Office for Army Projects (EPEx, in Portuguese), located in the city of Brasília, they are known as: Strategic Project ASTROS 2020, Cyber Defense Strategic Project, Anti-Aircraft Defense Strategic Project, PROTEGER Project, Guarani Project, Full Operational Capability Strategic Project (OCOP, in Portuguese) and Integrated Border Monitoring System (SISFRON, in Portuguese). The projects are described in Table III.
Methodology
Our research presents a qualitative approach and a multiple case study, whose focus is on the strategic projects of the Brazilian army. In the current study, we used a descriptive and exploratory approach to analyze the innovation process by means of primary data collection from interviews and secondary data from official documents.
Among the 824 projects developed under the Braço Forte Strategy (EBF, in Portuguese), we chose only seven because they involve high financial investment (of the 150bn reais invested in the EBF, the seven projects demand an investment of approximately 90bn reais) and are transformational mechanisms in the army; i.e. they encompass several transformation vectors: science and technology, doctrine, education and culture, engineering, management, logistics, budget and finances, training and employment, and human resources. To choose these specific projects, we considered their importance, coverage and impact on every system; the lack of evaluation of the common characteristics among them; and the possibility of proposing a systematization for future projects in order to enable the emergence of a more competitive and innovative model for the Brazilian army.
Of the data collection sources suggested by Yin (2005), we used documentation, interviews and direct observation. The data collection was accomplished through semi-structured interviews in order to understand the role played by some leaders in the decision-making process.
In this context, we analyzed the army's official website, as well as the manuals and regulations in order to prepare the presentation of the research, the application of the interview model and the mapping of innovation processes. Then, we planned and carried out the semi-structured interviews.
In the documentary analysis, the purpose was to understand the innovation process in the Brazilian army; in other words, to understand the stages in which it occurs, the responsibilities and the requirements demanded by the seven projects. We analyzed official documents obtained from the Military Institute of Engineering (IME, in Portuguese) and the Brazilian Army Command and General Staff School (ECEME, in Portuguese), as well as journals, reports, official documents and lectures. When searching for these documents, we identified the stages of idea creation, selection of ideas, development and diffusion of innovation, according to the model in Table II. The documents located were publications of the official journal of the federal government (DOU, in Portuguese), 42 army reports, 10 ECEME journals, 10 lectures on innovation and defense, data from a symposium on innovation, and investor relations reports provided by companies that took part in the projects. After gathering all the information, we elaborated an integrated document in order to facilitate access to the information.
Table III. Strategic projects focused on innovation (source: authors)
In order to increase the reliability and validity of the research, we analyzed the answers of eight servicemen in different periods through a semi-structured and open questionnaire, and through observation methods (interviews and documentary analysis).
In order to evaluate the opinion of the servicemen regarding the innovation process in the Brazilian army, the interviews were conducted with servicemen who had working experience in the army and knowledge about the seven chosen projects. The criteria used to choose the interviewees were: working time in the army, knowledge about the projects, operation in the military sphere, and participation in planning, development or implementation at the strategic and operational level in at least one of the projects. Based on these criteria, we interviewed eight active servicemen: four of them had been working in the army for more than 25 years; the remaining four, for more than 15 years. The purpose was to evaluate the innovation processes in the Brazilian army, verifying possible gaps for future adjustment. Table IV presents the open questions and the semi-structured interview. We got in touch with the interviewees over the telephone in order to explain the purpose of the study and to introduce the researcher and the data collection process (organizational policy related to innovation management, methodological guidance, training programs and funding).
Open questions
To what extent is the innovation process (technological and non-technological) of the defense sector integrated into the strategic projects of the Brazilian army?
Initially, four innovative projects were created (Guarani, SISFRON, Anti-Aircraft Defense and OCOP); based on Decree 134 (September 10, 2012), three other projects were incorporated (PROTEGER, ASTROS 2020 and Cyber Defense, with public-private partnership). What were these seven projects created for? Are these projects integrated? Is their purpose to develop an innovation system and a modern army?
Semi-structured questions
How does each one of the strategic projects help the innovation process of the army?
How is the interaction among project agents?
What are the main barriers to the interaction of projects (armed forces, universities and industry)? How can these barriers be overcome?
What are the main facilitators of project interaction?
What are the main interests of each agent when relating to other agents?
What happens to the project when the interaction increases?
What are the main results or benefits expected from this interaction?
How is it possible to motivate researchers to innovate?
How does supporting infrastructure influence the search for partnerships when developing technological (and non-technological) innovation?
How does the office establish a link among the innovations of a project in order to facilitate or benefit another project?
Is there collaboration with members of other forces in these projects? How do you think such interaction should occur?
What are the advantages and disadvantages of PPPs?
Is there interaction between the projects and civil educational institutions?
What are the main improvements developed by EPEx in the processes that facilitated project management?
Table IV. Open questions and semi-structured interview
In order to carry out the interview with each participant, we used a semi-structured interview script (open interview). Each interview lasted between 1 h and 1 h 30 min; interviews were conducted in person with five interviewees and by e-mail and telephone with the other three (two army generals and one colonel), and occurred in August 2016 (four interviews) and December 2016 (four interviews).
The data were analyzed based on the interpretation of the researcher and theoretical framework. The analysis categories were provided by the documentary analysis and bibliographic research.
We conducted a discourse analysis in order to identify, based on the texts and interviews, how the seven strategic army projects work and relate among themselves.
In order to carry out the analysis, we used the NVivo software, which enabled the indexing of the interview texts, the mapping of the most relevant statements onto the variables of the conceptual model as a basis for the analysis process, and the integration between written material and interviews.
Analysis
According to the interviewees, there is an integration among projects as it is possible to identify common objectives and synergy, whose purpose is to develop an innovation system and a more modern army.
According to the conceptual model and methodological procedures presented herein, the results will be presented according to the order of the innovation process in the Brazilian army: idea creation, selection, development and diffusion.
Idea creation
In the army, idea creation occurs through the following actions (Servicemen 5-8): "(i) information obtained via exchange programs, that is, during interactions between Brazilian and foreign servicemen, and (ii) operational reports (RIDOP, in Portuguese), a document that presents lessons learned during different military activities, sent from the different headquarters to the Land Operations Command (COTer, in Portuguese), describing the main problems and needs of the army in a general way." In COTer, ideas are discussed to solve specific problems in the force. The ideas are then sent to the high command of the army and forwarded to the selection phase; if approved, they turn into projects. According to all the servicemen interviewed herein, the creation of the seven projects occurred due to the "need to provide the army with new capabilities, seeking progress that can be used in the entire force." According to Servicemen 1-3, the strategic projects are responsible for providing the institution with new capabilities, which will enable the force to accomplish the planned transformation.
As a result, it will be possible to meet present and future demands with regard to the defense of the Brazilian territory: "the conception of transformation in the army is not just a modernization of already existing materials, but the acquisition of new capabilities that, in practice, means the achievement of the innovation required by the land forces." The idea creation in ASTROS 2020 came from the need of the Brazilian army to provide means capable of giving long-range fire support with high precision and lethality to the land forces (EPEx, 2016).
Until the development of the ASTROS 2020 Project, the Brazilian army had no long-range surface-to-surface missile. In this context, comparative analyses were carried out with solutions adopted by other armed forces with characteristics similar to the ones the Brazilian army needed in terms of long-range fire; the project was then developed.
In Cyber Defense, the idea emerged from the need to create an institution in charge of coordinating and integrating the efforts that compose cyber defense.
In the Anti-Aircraft Defense, the idea creation occurred in order to provide the land forces with the capability to meet the demands for land strategic structures in the country, defending the territory from possible air space attacks.
With regard to PROTEGER, the idea creation came up in order to expand the capacity of the Brazilian army to coordinate operations regarding society protection.
The Guarani Project emerged from the interest of turning the infantry military organizations into modernized cavalry organizations (EPEx, 2016).
In the OCOP Project, the idea came up to "provide the operational units with military materials" (Servicemen 5 and 7) in order to meet not only the requirements of territorial defense (according to Article 142 of the Brazilian Constitution), but also the Law and Order Guarantee operations and the several subsidiary missions attributed to the Ministry of Defense. In order to fulfill this goal, 17 integrating projects were elaborated.
The creation of ideas for the Project SISFRON came from the need to "monitor borders" (Servicemen 1-4) in order to fight cross-border crimes, to bring social benefits to border communities and to increase the presence of the government along the border.
In all seven strategic projects, the creation of ideas occurred through a process known as cross-pollination (Birkinshaw, 2017), i.e. through collaboration among units (military organizations). In this sense, internal and external cooperation is necessary at this stage; through such cooperation, it is possible to meet technical requirements and consumer and market needs (Goffin & Mitchell, 2005). However, in addition to internal and external cooperation, and taking into account that ideas are created from the inside out and vice versa, ideas can also be recycled. This is the particular case of ASTROS 2020, in which the missile system already existed but had to be modernized and suited to the current technological scenario (shooting system and target control) and to adjustments in the logistics process. Table V presents an overview of the idea creation stage.
Selection
In this stage, only the best ideas are chosen for the development of new products, processes and services, i.e. important and relevant ideas (Birkinshaw, 2017). Regarding the seven projects, the ideas "were selected by the military high command and forwarded to EPEx, which improves and develops them in order to meet the demands required by the force" (Serviceman 5). The best ideas are the ones that are technically viable, financially feasible and that meet the operational needs of the Brazilian army.
According to Servicemen 1-3: "all projects have one manager and one supervisor, who are responsible for leading the management team. The manager and the supervisor are either army generals or colonels with professional maturity." According to Serviceman 5, the three criteria adopted to select the best ideas are: "finances", subject to the finance board, which is responsible for financial analysis, resource availability, investment possibilities and costs; "operational needs", represented by COTER, the institution responsible for the operational area of the force, which identifies operational needs and categorizes priorities; and "technology", subject to the board of science, technology and innovation (DCTI, in Portuguese), whose purpose is to verify the feasibility of the projects according to current technology and to establish cost valuation. The intersection of the three spheres represents the best ideas, which will consequently turn into projects that meet the needs of the force. Table VI indicates the guidelines for idea selection.
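Viewed abstractly, this screening is a conjunctive filter: an idea only advances when it lies in the intersection of all three assessments. A minimal sketch of that logic in Python (the data structure and field names are hypothetical illustrations, not the army's actual tooling):

```python
# Hypothetical illustration of the three-sphere screening described above:
# an idea becomes a candidate project only if the finance board, COTER and
# DCTI assessments all approve it (the intersection of the three spheres).
def select_ideas(ideas):
    return [
        idea for idea in ideas
        if idea["financially_feasible"]       # finance board
        and idea["meets_operational_needs"]   # COTER
        and idea["technically_viable"]        # DCTI
    ]

candidates = select_ideas([
    {"name": "idea A", "financially_feasible": True,
     "meets_operational_needs": True, "technically_viable": True},
    {"name": "idea B", "financially_feasible": True,
     "meets_operational_needs": False, "technically_viable": True},
])
# Only "idea A" survives the intersection of the three spheres.
```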
Development
The projects are an answer to the attributions of the army demanded by the documents that regulate the defense of the Brazilian territory. Once this legal landmark (external to the force) is approved, the army develops strategic projects, whose scope is presented to government institutions that interfere with the execution regarding resources, such as the Ministry of Defense, the Ministry of Planning and the National Treasury Secretariat, among others. "The main challenge, in executing large projects, is the lack of regularity of budgetary resources" (Serviceman 2); i.e. the lack of long-term planning, which directly affects the development stage. One way to facilitate planning and improve the continuous flow of resources is the establishment of public-private partnerships (PPPs). However, "even though there are significant advantages and several successful cases abroad, Brazil has no consolidated experience regarding PPPs." There is "a need to clarify some legal issues in order to enable initiatives with legal security because, usually, these are long-term initiatives" (Serviceman 1).

Table V. Overview of the idea creation stage (project; need; creation of ideas):
- ASTROS 2020. Need: to provide means capable of bringing long-range fire support with high precision and lethality to the land forces. Ideas: several technologies used around the world, and understanding of the demands made by the force.
- Cyber Defense. Need: to ensure the defense (safeguard) of online digital (cyber) means, governmental or not. Ideas: emerged from the idea of developing a protection system to store data on the Brazilian army and institutional websites, especially for the World Cup and the Olympics.
- Anti-Aircraft Defense. Need: to provide the land force with the capacity to meet the defense demands of the country's land strategic structures. Ideas: stemming from the adjustment of anti-aircraft artillery units.
- PROTEGER. Need: to increase the capacity of the Brazilian army to coordinate operations to protect society. Ideas: stemming from society protection ideas, as well as strategic structures, considering the increasing need for protection.
- Guarani. Need: to turn the infantry military organizations into modernized cavalry organizations. Ideas: emerged from the idea of promoting greater mobility, armored protection and firepower.
- OCOP. Need: to provide operational units with military equipment. Ideas: emerged from the idea of adjusting the weaponry and equipment used by the Brazilian army, i.e. modernizing it according to current needs.
- SISFRON. Need: to monitor 22,000 km of borders. Ideas: emerged from the idea of improving control of access to the Brazilian border.
- Common features of the projects: ideas created through information obtained from exchange programs and informational reports (RIDOP); the ideation of the seven projects occurred due to the need to provide the army with new capabilities, looking for improvements usable by the entire force; the transformation conception of the army refers not only to the modernization of existing materials, but to the development of new capabilities, which indicates the accomplishment of the innovation required.

Table VI. Guidelines for idea selection (project; purpose in selecting the idea):
- ASTROS 2020: to increase artillery capacity by providing extended range, flexibility and lethality (Serviceman 2).
- Cyber Defense: to protect the cyber environment (Serviceman 4).
- Anti-Aircraft Defense: to suppress possible air threats in the world scenario.
- PROTEGER: selection of the best ideas and products that operate in cities affected by natural disasters; to protect more than 600 strategic structures; to provide support in cases of public calamity; and to create regional operation centers (Serviceman 8).
- Guarani: to increase the mobility of infantry military organizations (Serviceman 6).
- OCOP: to obtain the necessary material (defense products) to fulfill the force's land operations (Serviceman 7).
- SISFRON: to monitor borders, ensuring the continuous and safe flow of the land force (Servicemen 1-4).
- All projects: the ideas are selected by the high command and forwarded to EPEx, which improves them and enables their development.
- Opinion of the servicemen interviewed: all projects are strategic; one manager and one supervisor (army generals and/or colonels with professional maturity) lead a management team; the best ideas are technically viable, financially feasible, and meet the operational needs of the Brazilian army; the selection of ideas is based on finances, operational needs and technology, and the intersection among these spheres represents the best ideas, which will consequently become projects.
The development stage, although coordinated by EPEx, occurs in a decentralized way. Every project has a reference center responsible for its development, not only during the technical phase but also during the testing phase. In the case studied herein, each project is developed in a center that can establish a PPP. One example of an activity used during the development phase is the building of prototypes, which enables the manufacturing of products to be tested jointly in pilot projects. "The place where they will be tested is always planned, as well as the materials to be tested" (Serviceman 5). Table VII presents an overview of the stages that constitute the innovation value chain of strategic projects.
Diffusion
Diffusion occurs through the implementation of projects in selected places, where their effects are tested in loco. This way, it is possible to identify possible flaws and make all the necessary adjustments. After the projects are adapted to meet the needs of the Brazilian army, they are implemented in military organizations to be used and evaluated once more. When necessary, new adjustments are made. Only after all adjustments are calibrated is it possible to choose the regions where new tests will be carried out, this time involving not just one but several military organizations. This way, it is possible to identify whether the projects are fully integrated into operations and whether they meet the requirements for which they were developed, generating the necessary systemic capabilities.
Servicemen 1-3 mentioned that some of these products were "(i) the Saber Radar M60 (Anti-Aircraft strategic project), designed to monitor the airspace and already present in the army's anti-aircraft artillery; (ii) the Saber Radar M20 (ground surveillance), acquired and distributed to border units; and (iii) Guarani, used during the 2014 FIFA World Cup and the 2013 FIFA Confederations Cup." The adoption of implemented projects and products occurs in two steps. Initially, part of the project and/or material is analyzed and tested; after testing, safety and basic functionality are verified. Then a region is chosen and a group of servicemen tests the product for a determined period in real circumstances, i.e. the functioning of the product in practice. After several tests in real situations, there is a new evaluation and verification of strengths and improvement areas. Reports are analyzed and products and processes go through adjustments; they are then distributed.
The projects implemented so far have significantly improved troop mobility and armored protection, increased firing capacity, provided more secure communications (digital, radio frequency and telephony) and improved border protection (ground and airspace), besides increasing modularization capacity and the protection of critical structures.
Regarding the challenges found, it is hard to keep to the schedule due to delays in financial support, i.e. the slowness in releasing funds. The long-term, high-tech projects adapt to new technologies and to the development of new techniques that help adjust the projects to the Brazilian reality. Table VIII shows an overview of the diffusion stage.
Conclusion: implications and future research
Our study described the innovation process in the Brazilian army through the four stages outlined in the conceptual model: idea creation, selection of the best ideas, development, and adoption and diffusion. In order to meet the research goal, a qualitative study was carried out focusing on seven strategic projects, not only because they intend to change the land forces by modernizing and equipping them, but also because they address relevant issues, i.e. the protection of society, the maintenance of law and order and the defense of the country.
After the emergence of the National Defense Strategy (END, in Portuguese) and the National Defense Policy (PND, in Portuguese), issues related to the promotion of scientific research, technological development, the capacity to produce materials and services relevant to the defense sector, and the intensification of exchange between the armed forces and universities, research centers, industries and partner countries started to be considered relevant and became more and more frequent (ABDI, 2013).
In this sense, it is worth pointing out that the seven strategic projects are in line with the triple helix model (Juarez, 2016), in which the participation of players from the public, academic and industrial spheres is necessary in order to overcome a socially relevant challenge through technology. The prominence of the university is related not only to its training and research characteristics, but also to academic entrepreneurship, which enables the economic use of the knowledge produced. Industry, in order to ensure competitive advantage, has to be open to external sources of innovation. The government has to support and facilitate the synergy between universities and industries. In order to meet the requirements of the triple helix, the establishment of public policies in line with the defense sector is also necessary (Mazzucato & Penna, 2016).

Table VII. Characteristics of the development stage (project; characteristics):
- ASTROS 2020: main development center at the company AVIBRAS; more than 60 companies involved in the process. The project features a highly skilled labor force, intensive knowledge of missile navigation, a 300 km missile range and export potential; it was responsible for generating 7,700 jobs (Serviceman 3).
- Cyber Defense: no single main company, but 25 companies taking part in the project. The purpose is to provide network safety and training for the cyber field (Serviceman 4). Through this technology, it was possible to neutralize around 756 attacks during the 2014 FIFA World Cup.
- Anti-Aircraft Defense: no main company, but several companies involved in the development. The development of radar and command-and-control technologies ensures the protection of strategic structures (Serviceman 5). The ranges of the M60 and M200 are 60 km and 200 km, respectively. The project enabled the generation of 2,300 jobs.
- PROTEGER: 20 agents involved in the development. The main purposes are to provide support in cases of natural disasters and public calamity and to protect land strategic structures (Serviceman 8).
- Guarani: IVECO is the main company, with another 50 companies also involved. It was possible to create 2,890 jobs; at least 60 armored vehicles are manufactured annually. It has high export potential and aims to strengthen the national defense industry, as well as the mechanization of the infantry and cavalry brigades (Serviceman 6).
- OCOP: relies on 30 companies involved in the development. In this project, 3,150 IMBEL IA2 rifles, 6,500 vehicles and 26 naval vessels were acquired. Its purpose is to modernize several materials, revitalize armored vehicles and helicopters, and acquire defense products (Serviceman 7).
- SISFRON: accomplished through the TEPRO joint venture, which included the selection of the main suppliers of electromagnetic sensors, tactical communication, optronics and infrastructure. In total, 12,200 jobs were created; its purpose is to fight border crime, bring social benefits to border communities and promote the presence of the government along the border (Serviceman 3).
- All projects: the development phase is coordinated by EPEx in a decentralized way; reference centers are responsible for developing the projects during the testing and technical phases; public-private partnerships may be established; the main challenge is obtaining budgetary resources for long-term planning.
In this way, the results indicate that, just as in countries with a longer tradition in the defense sector such as the USA, the development of defense innovation exhibits cooperative behavior within global innovation chains in order to distribute costs and acquire technology (Amarante & Franko, 2017).
The findings of our study indicate the possibility of technology spillover, that is, the use of the technology by civil players in the innovation ecosystem and for global use, as observed in the US weaponry industry (Leske, 2013) and in Brazilian unmanned aerial vehicles, radar systems and satellites commercialized in international markets (COMDEFESA, 2011).
Based on the results presented herein, we consider that the innovation process in the Brazilian army occurs gradually in order to fulfill all constitutional obligations and the END guidelines, and that the efforts employed to develop the seven projects culminate in a modern and well-prepared army.
The characteristics found in each project at each stage are different. Such differences are observed, for instance, in the idea creation and selection stages, in which each project differs in terms of use and manufacturing. Although different, the projects are nevertheless compatible (e.g. in the development stage), because they aim at the participation of companies and universities and promote implementation in order to meet the demands of the force and the possible export of the product (diffusion).
Table VIII. Characteristics of the diffusion stage (project; characteristics):
- ASTROS 2020: AVIBRAS established a US$350m contract with the Indonesian government to develop 36 ASTROS 2020 missile platforms in exchange for technology and defense cooperation.
- Cyber Defense: blocking hacker attacks on official websites.
- Anti-Aircraft Defense: Radar Saber M60, developed to monitor the airspace, is already present in the army's anti-aircraft artillery; Radar Saber M20 (land surveillance) was acquired and distributed across border units.
- PROTEGER: the Cavalry Guard's Second Regiment carried out an operation in the city of Seropédica, Rio de Janeiro, consisting of a control station to ensure security and access control to sensitive areas, protecting the area during major events such as the Olympics and the 2016 Summer Paralympics.
- Guarani: used during the 2014 FIFA World Cup and the 2013 FIFA Confederations Cup.
- OCOP: all operational military organizations of the Brazilian army received new vehicles, e.g. 10-ton 6×6 trucks serving as prime movers towing the 155-mm gun. The modernization of the vehicles involves the acquisition of non-armored cars and the recovery of armored vehicles, as well as new weaponry and vessels, e.g. 5.56-mm IMBEL IA2 rifles, with a first batch of 1,500 acquired in 2013.
- SISFRON: in 2014, the first SISFRON unit was activated in the state of Mato Grosso do Sul (city of Dourados) to strengthen the presence and action capacity of the government along the border, besides helping police fight illicit acts such as drug and weapons trafficking and smuggling, and supporting health protection.
- All projects: diffusion in the army occurs through the implementation of projects in determined pilot areas in order to test the produced effects in loco; possible flaws are identified and the necessary adjustments are made; after the projects are adjusted to the needs of the Brazilian army, they are implemented in military organizations to be used and evaluated once more.

Due to the differences found in the idea creation and selection stages, which are a consequence of the different technologies inside the ecosystem, it is hard to predict relevant business aspects, that is, to deal with collective uncertainties (De Vasconcelos Gomes et al., 2018) such as resource management and long-term planning, as shown in Table VIII. These results indicate the need for management models suited to the different complexities of the projects and able to deal with the uncertainties that affect the performance of the ecosystem players.
The results contribute to the literature on innovation project management by emphasizing the importance of establishing interaction mechanisms and creating ecosystems. In other words, the closer and more collaborative the relationship among players, the higher the possibility of creating innovation ecosystems that promote the evolution of the agents involved and the diffusion of innovation. In the cases analyzed herein, some companies, e.g. IVECO and AVIBRAS, were engaged to operate in the strategic projects; they evolved through the development of innovation and are known as world references in certain knowledge areas and in the production of equipment.
The projects of the companies inserted in the ecosystems involved automobile manufacturers and military equipment integrators, besides equipment, service and distribution suppliers. The predictability of demand, one of the biggest problems and risks for companies operating in this sector, was mitigated by the financial amounts applied directly and indirectly, which provided the companies with financial stability and the conditions to enter the external market. The more involved the companies were in military project ecosystems supported by investments, the better the management of the strategic projects; and the greater the market stability, the greater the participation in the ecosystem.
This study supports public and private management by presenting the advantages of participating in ecosystems of strategic innovation project management in the public sector. With the establishment of public policies that promote innovation, ecosystems involving public players (such as the Brazilian army, public companies and universities) will open new markets for companies to enter and expand, foster the development of technological knowledge in universities, and yield adequate solutions to meet the needs of the army.
Based on the results obtained herein, we recommend that future studies conduct quantitative research to investigate other issues: the fulfillment of project goals; the influence of the schedule on the innovation process when it is centralized in only one company and when it is developed by several companies; the identification of the positive and negative impacts that political and economic changes can have on the projects; and confirmation of whether the seven projects are sufficient to bring the desired modernization to the Brazilian army. The analysis of these variables will further illuminate the innovation process of the Brazilian army, meeting the goals proposed by the force and ensuring the country's protection. | 2019-09-17T03:09:09.472Z | 2019-10-21T00:00:00.000 | {
"year": 2019,
"sha1": "5d31de55a5ce03e10867e831c0c6009ab617c1f9",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1108/rege-01-2019-0016",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "083cfbd05060d56f0ca6b8c5bcc58db4dfff0029",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Business"
]
} |
235380957 | pes2o/s2orc | v3-fos-license | Large variation in anti-SARS-CoV-2 antibody prevalence among essential workers in Geneva, Switzerland
Limited data exist on SARS-CoV-2 infection rates across sectors and occupations, hindering our ability to make rational policy, including vaccination prioritization, to protect workers and limit SARS-CoV-2 spread. Here, we present results from our SEROCoV-WORK+ study, a serosurvey of workers recruited after the first wave of the COVID-19 pandemic in Geneva, Switzerland. We tested workers (May 18-September 18, 2020) from 16 sectors and 32 occupations for anti-SARS-CoV-2 IgG antibodies. Of 10,513 participants, 1026 (9.8%) tested positive. The seropositivity rate ranged from 4.2% in the media sector to 14.3% in the nursing home sector. We found considerable within-sector variability: nursing home (0%-31.4%), homecare (3.9%-12.6%), healthcare (0%-23.5%), public administration (2.6%-24.6%), and public security (0%-16.7%). Seropositivity rates also varied across occupations, from 15.0% among kitchen staff and 14.4% among nurses, to 5.4% among domestic care workers and 2.8% among journalists. Our findings show that seropositivity rates varied widely across sectors, between facilities within sectors, and across occupations, reflecting a higher exposure in certain sectors and occupations.
As the first wave of the ongoing COVID-19 pandemic swept across the northern hemisphere in the spring of 2020, most countries adopted wide-ranging measures to limit the spread of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) 1. Most measures, however, imposed little or no restrictions on 'essential' workers, whose occupations are deemed indispensable for the provision of crucial services, including healthcare, transportation, food production and social work, among others [2-4]. Depending on the nature of their function, workers may be exposed to infectious, often asymptomatic 5, members of the public and/or colleagues, increasing their risk of infection compared with individuals working from home, and beyond the risk of household transmission 6,7. Accumulating evidence indicates that healthcare workers in hospitals face an increased risk of infection, although the risk may be department-specific or associated with social rather than professional activities, whilst some studies have found no increased risk [8-13]. Some evidence shows that healthcare workers in non-hospital settings, such as nursing homes, are also at increased risk [14-16]. Less is known about occupations outside healthcare settings, though evidence from RT-PCR testing indicates that public-facing workers such as waiters, social workers, and transport drivers may be at increased risk 17,18. As the second or third waves of the pandemic spread across the world and mass vaccination starts, there remains an urgent need to better characterize the risk of SARS-CoV-2 infection among workers who are mobilized during lockdowns in order to better guide public health policy to both limit the spread of the virus and protect exposed workers, including in vaccine prioritization.
The canton of Geneva in Switzerland reported its first confirmed COVID-19 case on February 26, 2020, and by April 26, 2021, 54,964 confirmed cases (110.0 per 1000 inhabitants) and 729 deaths had been reported 19. Lockdown measures, similar to those implemented regionally or nationally across Europe and North America, were imposed on March 16, 2020, from which only essential businesses remained operational and the population was encouraged to stay home; progressive lifting of restrictions took place from April 27, 2020 until June 6, 2020. To understand the SARS-CoV-2 infection risk among workers from key sectors that remained operational during the lockdown, we established the SEROCoV-WORK+ serology study, using the same testing procedure as in our SEROCoV-POP population-based seroprevalence study 20,21, whose representative sample can serve as comparison.
Seropositivity rate varied widely across sectors, from as high as 14.3% in the nursing home sector to as low as 4.2% in media. Seropositivity rate reached 12.1% in the homecare sector, 11.1% in healthcare, 11.0% in pharmacy and 10.1% in the food industry. Relative to healthcare sector participants, those in the public security sector (RR: 0.53; 95% CrI: 0.22-0.93) and early childhood education sector (RR: 0.37; 95% CrI: 0.09-0.84) were at lower risk of being seropositive. Participants who were fully confined during the lockdown were more likely to be seropositive (RR: 1.27; 95% CrI: 1.03-1.56), compared with mobilized participants. In line with this finding, participants with at least one out-of-work exposure to a confirmed COVID-19 case were more likely to be seropositive (RR: 2.29; 95% CrI: 1.93-2.74) compared with their counterparts without any exposure.
A large heterogeneity in seropositivity rate was observed across participating facilities within every sector (Fig. 1). For example, it ranged from 0 to 31.4% across 21 nursing homes, and from 2.6 to 24.6% across 8 public administration facilities. Furthermore, occupations within facilities and sectors also showed varying seropositivity rates. Across the 32 occupations, the seropositivity rate was as high as 15.0% among kitchen staff and 14.4% among nurses/nurse assistants, and as low as 5.4% among domestic care workers and 2.8% among journalists (Fig. 2).
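The 95% binomial confidence intervals shown alongside these rates can be approximated with a standard Wilson score interval; the sketch below is illustrative only (the authors do not state which binomial CI method the figures use), applied here to the overall counts reported above (1026 seropositive of 10,513):

```python
import math

def wilson_ci(k, n, z=1.96):
    """95% Wilson score interval for a binomial proportion of k successes in n trials."""
    p = k / n
    denom = 1 + z ** 2 / n
    centre = (p + z ** 2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2))
    return centre - half, centre + half

lo, hi = wilson_ci(1026, 10513)  # overall seropositivity, ~9.8%
print(f"9.8% (95% CI {lo:.2%}-{hi:.2%})")
```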
Sensitivity analyses excluding participants who were confined during the lockdown yielded similar results (Supplementary Figs. 1-2; Supplementary Table 3). Disaggregating seropositivity rates by out-of-work exposure showed that participants with at least one out-of-work exposure to a confirmed COVID-19 case generally had twice the seropositivity rate compared with their same-sector or same-occupation counterparts with no out-of-work exposure (Supplementary Tables 4-5). Further adjusting relative risk estimates for known out-of-work exposures, however, showed no appreciable change from the main results (Supplementary Table 6).
Discussion
In this large sample of workers from 16 sectors who were mobilized during the spring 2020 Swiss lockdown, we observed that the proportion of workers having developed anti-SARS-CoV-2 antibodies after the first COVID-19 wave varied widely across sectors, across facilities within sectors and across occupations within sectors. With few important exceptions, our findings do not show a pattern of increased risk of SARS-CoV-2 infection among sectors and occupations of workers who were mobilized during the lockdown, the overall seropositivity rate of this sample being only slightly higher than that of the general working-age population during spring 2020 21. Yet, there was considerable variability across sectors and occupations.
While nursing home workers exhibited the highest seropositivity rate relative to that of the working-age population, reflecting evidence from Spain 14 , Sweden 15 , and the UK 16 , it differed widely across nursing home facilities; this intra-sector variability, which was observed in almost all sectors, may reflect overdispersion, a well-known characteristic of SARS-CoV-2 transmission dynamics 22,23 . Whether the degree of adherence to preventive measures within facilities and in private life may also help explain this heterogeneity will be elucidated in further studies. It is also possible that most infections had occurred before strict measures were implemented. Healthcare sector workers showed a generally higher seropositivity rate than the general population, consistent with extensive evidence from many countries [8][9][10]14,18 .
Among occupations, the highest proportion of seropositive workers was observed among kitchen staff, who worked primarily in nursing homes; this may reflect the increased infection risk present when customers/staff fail to practice appropriate hand hygiene, social distancing, and mask wearing when indicated, measures that are difficult to follow while eating and drinking indoors [24-26]. Nurses also exhibited higher seropositivity than the general population, a pattern consistently reported elsewhere 8,9,11-13, and likely reflective of the extended cumulative exposure time to SARS-CoV-2 they have as front-line patient-facing workers 22,27; importantly, contact tracing data have shown that the majority of nurses working in the main healthcare institution in Geneva who were infected with SARS-CoV-2 became infected during social interactions or transportation to and from work 28. Still, as mass vaccination campaigns are being planned and implemented worldwide, our results strengthen the evidence to prioritize access to healthcare workers, particularly nurses, as well as employees of nursing homes, including kitchen staff.

[Fig. 1 caption (continued): vertical orange bar and yellow area indicate the general working-age population seropositivity rate and 95% binomial confidence interval, respectively, from the SEROCoV-POP study 20,21; small gray vertical bars show the proportion positive of all participants per sector; facilities with <10 participants are not shown as dots, but these participants are included in the sector average.]
[Fig. 2: Prevalence of anti-SARS-CoV-2 IgG antibodies by occupation, SEROCoV-WORK+ study, May-September 2020, Geneva, Switzerland. Sample size: 10,513 participants, 1026 of which were seropositive. Red dots indicate mean seropositivity rate for each occupation; horizontal gray lines represent 95% binomial confidence intervals; dot size indicates the number of employees with that occupation; the vertical orange bar and yellow area indicate the general working-age population seropositivity rate and 95% binomial confidence interval from the SEROCoV-POP study 20.]
While we did not find a clear trend in the association between educational level and risk of infection as reported in other countries [29-31], we found that those with a doctorate degree were at highest risk, reflecting our previous findings in the Geneva population 21; this risk is likely linked to the type of occupations held by these individuals, most of whom worked in the healthcare sector in our sample. The observed higher risk of being seropositive among fully confined participants may be due to the relatively high risk of transmission within the household when a single household member was infected during the first wave 6,7, or to having been infected before or after the mandatory homeworking period. Participants with out-of-work exposure to confirmed cases of COVID-19 had twice the risk of infection compared with their non-exposed counterparts in the same sector and occupation. This out-of-work exposure is likely determined by several socioeconomic and demographic factors, which in turn are also associated with occupation. While our results showed that out-of-work exposure was associated with higher seropositivity rates, it only partly explained the wide heterogeneity observed across sectors and occupations. The observed reduced risk of being seropositive in ex-smokers and current smokers reflected our previous population-based findings 21; this may be due to a potential interaction between the SARS-CoV-2 spike protein and nicotinic acetylcholine receptors 32, heightened preventive practices and risk aversion among smokers given their increased risk of viral and bacterial respiratory diseases 33,34, or residual confounding. Importantly, while contrasting findings have been reported about the association between smoking and SARS-CoV-2 infection risk, extensive evidence indicates that smoking is associated with an increased risk of severe COVID-19 35,36.
Our findings provide an important picture of SARS-CoV-2 infection in a large and diverse sample of workers considered to be at higher risk of exposure. However, we acknowledge some important limitations. Participants were selected from a list of potentially mobilized facilities in the canton of Geneva; our sample may thus not be representative of the overall population of mobilized workers. Despite the 4-month recruitment period, during which weekly reported infections remained low in the canton of Geneva (Supplementary Fig. 3), time of participation in the study was not related to seropositivity rates (Supplementary Fig. 4). Importantly, the time frame captured in our data included periods before and after population-wide government-mandated preventive measures were implemented, including heterogeneous sector-specific measures and a lockdown, followed by progressive lifting of lockdown measures. It is likely that the inter- and intra-sectoral variability in seropositivity rate could be partly explained by preventive measures. However, our data do not allow us to account for the potential impact of preventive measures. Finally, although we accounted for self-reported out-of-work exposure to confirmed cases of COVID-19, unknown exposure to SARS-CoV-2 outside the work setting, such as during transportation to and from work, may also have played a role.
In conclusion, we conducted a serosurvey among workers in 16 sectors deemed essential for the smooth functioning of society during lockdowns. With the exception of the nursing home, homecare and healthcare sectors, as well as nurses/nurse assistants and kitchen employees within facilities, we found little evidence to support that workers in sectors that were not confined during the initial COVID-19 lockdown faced a greater risk of contracting SARS-CoV-2 during the first wave than the general working-age population. Importantly, seropositivity rates differed widely across sectors, between facilities within sectors, and across occupations.
Methods
As there was no list of workers mobilized during the pandemic in the canton of Geneva (a state of Switzerland with about 500,000 inhabitants) from which we could draw our sample, we selected public and private companies and institutions (hereafter facilities) that were potentially mobilized (see Supplementary Note 1 for further selection and recruitment information). Facilities were eligible for participation if they were located in the canton of Geneva and had remained mostly operational with on-site staff activity during the lockdown. Participating facilities in turn invited their employees to participate on a voluntary basis. The mean participation rate per facility was 45% (median 41%). All participants gave written informed consent, completed a questionnaire and provided a venous blood sample. Samples in the SEROCoV-WORK+ study were collected from May 18 until September 18, 2020, while samples in the population-based survey, whose data are used here for comparison, were collected from April 6 until June 30, 2020 21. The recruitment phase of both studies was completed before the beginning of the second pandemic wave, and SARS-CoV-2 circulation was low in Geneva until the end of September 2020 19. The Cantonal Research Ethics Commission of Geneva, Switzerland, approved the study (project number 2020-00881).
We assessed anti-SARS-CoV-2 IgG antibodies using a commercially available ELISA (Euroimmun; Lübeck, Germany #EI 2606-9601G) targeting the S1 domain of the virus spike protein, using the manufacturer's recommended cut-off of ≥1.1 for seropositivity 37 . We standardized occupations into 16 sectors and 32 occupation groups (Supplementary Tables 1-2). To estimate the relative risk (posterior mean and 95% central credible interval) of being seropositive across different groups of participants, we fit Bayesian regression models that accounted for age, sex, and test performance including, when appropriate, random effects for facilities. We implemented these models in the Stan probabilistic programming language, using the rstan package 38 . We ran 5000 samples (four chains of 1500 iterations, each with 250 warmup iterations discarded), assessing convergence visually and using shinystan diagnostics checks 39 . In sensitivity analyses, we excluded participants who were fully confined during the lockdown, evaluated seropositivity rate according to out-of-work exposure to confirmed COVID-19 cases, and incorporated out-of-work exposure as a covariate in the main model. Full details of the model are in the Supplementary Information. For comparison, all figures include the seropositivity rate from the SEROCoV-POP representative sample of the Geneva working-age population 20,21 .
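The paper's full model is a Bayesian regression fit in Stan (via rstan) that jointly accounts for age, sex, facility random effects and test performance. As a much simpler non-Bayesian analogue of the test-performance correction alone, a Rogan-Gladen adjustment can be sketched as follows; the sensitivity/specificity values used here are illustrative placeholders, not the validated Euroimmun estimates used by the authors:

```python
def rogan_gladen(p_obs, sensitivity, specificity):
    """Adjust an observed positivity rate for imperfect test performance:
    p_true = (p_obs + specificity - 1) / (sensitivity + specificity - 1),
    truncated to the [0, 1] interval."""
    adj = (p_obs + specificity - 1.0) / (sensitivity + specificity - 1.0)
    return min(max(adj, 0.0), 1.0)

# Illustrative placeholder performance values only, not the assay's validated figures.
print(rogan_gladen(p_obs=0.098, sensitivity=0.93, specificity=0.99))
```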
Reporting summary. Further information on research design is available in the Nature Research Reporting Summary linked to this article.
Data availability
Participants' informed consent does not authorize the data, even coded, to be made immediately available. It does allow, however, for the data to be made available to the scientific community upon submission of a data request application to the investigators' board via the corresponding author. Virologically confirmed SARS-CoV-2 infection data from the canton of Geneva: https://infocovid.smc.unige.ch/. Biological material can be reused for further studies upon approval by the cantonal ethics commission. | 2021-06-10T06:16:32.625Z | 2021-06-08T00:00:00.000 | {
"year": 2021,
"sha1": "069efb015b7128222b1c2bfa2687e18ac40655ee",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41467-021-23796-4.pdf",
"oa_status": "GOLD",
"pdf_src": "SpringerNature",
"pdf_hash": "18ed51d1f7f3490b9b0d42adabdcdf0c6e824dd8",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
259146139 | pes2o/s2orc | v3-fos-license | The effect of COVID-19 vaccination on anticoagulation stability in adolescents and young adults using vitamin K antagonists
Introduction: The European Medicines Agency has authorized COVID-19 vaccination in adolescents and young adults (AYAs) from 12 years onwards. In elderly vitamin K antagonist (VKA) users, COVID-19 vaccination has been associated with an increased risk of supra- and subtherapeutic INRs. Whether this association is also observed in AYAs using VKAs is unknown. Our aim was to describe the stability of anticoagulation after COVID-19 vaccination in AYA VKA users. Materials and methods: A case-crossover study was performed in a cohort of AYAs (12-30 years) using VKAs. The most recent INR results before vaccination, the reference period, were compared with the most recent INR after the first and, if applicable, second vaccination. Several sensitivity analyses were performed in which we restricted our analysis to stable patients and patients without interacting events. Results: 101 AYAs were included, with a median age [IQR] of 25 [7] years, of whom 51.5 % were male and 68.3 % used acenocoumarol. We observed a decrease of 20.8 % in INRs within range after the first vaccination, due to an increase of 16.8 % in supratherapeutic INRs. These results were verified in our sensitivity analyses. No differences were observed after the second vaccination compared to before and after the first vaccination. Complications occurred less often after vaccination than before (3 vs 9 bleedings) and were non-severe. Conclusions: The stability of anticoagulation after COVID-19 vaccination was decreased in AYA VKA users. However, the decrease might not be clinically relevant, as no increase in complications nor significant dose adjustments were observed.
Introduction
Vaccination against severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), the virus causing coronavirus disease 2019 (COVID-19), has played a crucial role in controlling the COVID-19 pandemic [1,2]. However, there are concerns regarding the potential interactions and side effects of these vaccines, which have been associated with the development of coagulation disorders and thrombotic events [3-5].
Notably, the Comirnaty (BNT162b2) vaccine has been found to have a negative effect on anticoagulation stability in elderly VKA users, with an increased risk of supra- and subtherapeutic international normalized ratios (INRs) observed in both stable and unstable outpatients after the first vaccination [6]. Whether these effects also extend to younger VKA users is uncertain.
The use of VKAs in adolescents and young adults (AYAs) poses unique challenges [7-9]. For instance, VKA treatment in AYAs is rare and, therefore, there is a lack of prospective clinical trials of VKA use in AYAs [10-12]. The results of trials in adult and elderly VKA users are extrapolated to the AYA population, albeit with little high-level evidence [7,13-15]. In addition, AYAs who require anticoagulation tend to have severe medical conditions, comorbidities and thrombophilia [16], increasing the risk of (recurrent) venous thrombosis and bleeding or resulting in polypharmacy. Consequently, anticoagulation stability is usually lower in AYAs than in adult patients [17-20], making them potentially more prone to therapeutic instability after COVID-19 vaccination.
On top of the challenges of VKA treatment in AYAs, systemic side effects caused by the COVID-19 vaccine, such as fever, chills and nausea, might affect the stability of anticoagulation in AYAs [21]. In the randomized controlled trials, AYAs reported a similar percentage of systemic side effects as adults [22-25]. Considering these findings, we conducted a case-crossover pilot study in a cohort of AYAs from four anticoagulation clinics to investigate the stability of anticoagulation after COVID-19 vaccination in AYA VKA users.
Study design and setting
In this pilot study, we investigated the possible effect of COVID-19 vaccination on anticoagulation stability in AYA VKA users. The study had an observational case-crossover design and was performed in four anticoagulation clinics in the Netherlands (Atalmedial, Trombosedienst Leiden, Star-shl and Elkerliek Trombosedienst). We included all outpatient VKA users aged between 12 and 30 years who received at least one COVID-19 vaccine in 2021 and had a post-vaccination INR measurement before the 15th of December 2021. Patients were excluded if they did not have an INR measurement before COVID-19 vaccination or between the first and second vaccination, if applicable. Other exclusion criteria were a deviant INR range (e.g. 3.0-4.0 or 1.5-2.0) and a switch to another therapeutic INR range or another type of VKA during the three months before and after the first COVID-19 vaccination. The Erasmus University Medical Centre's ethics committee granted a waiver for informed consent (MEC-2021-0481).
Outcomes
Our main outcome was the percentage (%) of sub-and supratherapeutic INR results after both vaccinations. We used the most recent INR measured before vaccination and the first INR measured after both vaccinations. We also studied the mean INR and VKA dosages and the percentage of INR results followed by a significant dose adjustment after the first and second COVID-19 vaccination. A significant dose adjustment was defined as any dose adjustment of 10 % or more. The percentage of INR results below, within or above therapeutic range was determined before first vaccination and after both vaccinations. Furthermore, we measured the percentage (%) of INRs ≥5, as this has been associated with a higher risk of bleeding complications [26]. These outcomes were analysed both in the total group of AYA VKA users, also referred to as every vaccine recipient, and in the group who completed the vaccination programme. We defined completion as either receiving one Jcovden (Ad26.COV2.S) vaccine or receiving two vaccines in case of BNT162b2, Spikevax (mRNA-1273) and Vaxzevria (ChAdOx1-S).
In the total group, we also looked at the number of registered complications (thrombosis and bleeding) three months before vaccination and three months after the last vaccination. Severe bleeding was defined as every intracranial bleed, every intra-articular bleed and every bleeding leading to death, hospital admission or surgical intervention. Finally, we looked at the duration of instability, defined as the number of weeks between the first COVID-19 vaccination and the moment at which the percentages of INR results below, within and above therapeutic range were similar to those before vaccination.
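These outcome definitions translate directly into simple classification rules. A minimal sketch in Python (illustrative helper functions, not the clinics' actual software):

```python
def classify_inr(inr, low=2.0, high=3.0):
    """Classify an INR against a therapeutic target range
    (2.0-3.0 or 2.5-3.5 in this study)."""
    if inr < low:
        return "subtherapeutic"
    if inr > high:
        return "supratherapeutic"
    return "within range"

def is_significant_dose_adjustment(old_dose, new_dose, threshold=0.10):
    """A significant dose adjustment is any dose change of 10% or more."""
    return abs(new_dose - old_dose) >= threshold * old_dose

print(classify_inr(5.2))                           # 'supratherapeutic' (also counts as INR >= 5)
print(is_significant_dose_adjustment(14.0, 12.0))  # True: ~14% decrease
```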
Data collection and sources
At the anticoagulation clinics, all VKA users are strictly monitored at least once every six weeks. The frequency of INR monitoring depends on the INR value, individual monitoring time, scheduled interventions and changes in co-medication. In 2021, the Federation of Dutch Anticoagulation Clinics (FNT) encouraged the anticoagulation clinics to measure the INR within 2 weeks after COVID-19 vaccination. During each patient visit, changes in co-medication, bleeding events, scheduled interventions, hospital admissions and the onset of co-morbidities and illnesses are documented using a standardized short questionnaire. Along with this information, the anticoagulation clinics registered the date and type of the received vaccination. The date of COVID-19 vaccination was defined as the date registered in the Electronic Medical Record (EMR).
Baseline characteristics, year of VKA initiation, indication for VKA treatment, INR target range, INR results and VKA dosages were retrieved from the EMR. Other collected data included interventions, hospital admissions, registered complications, co-medication, type and date of COVID-19 vaccination. This information was automatically retrieved from the EMR on the 15th of December 2021.
Statistical analysis
Data for continuous variables were expressed as means with standard deviation (SD) or medians with interquartile range (IQR), depending on the normality of the distribution. Categorical data were expressed as numbers with percentages. In this study, VKA users were compared to themselves (crossover analysis). The reference categories in the main analysis were the most recent INR and VKA dosage before vaccination. We compared absolute differences in INR and VKA dosage using a paired t-test or a Wilcoxon signed-rank test in case of a normal or skewed distribution, respectively. Percentages were compared using McNemar's test. The median time between the first and second COVID-19 vaccination and INR measurement was calculated using the Kaplan-Meier estimator. We used pair-wise deletion in case of missing data. The percentage of missing data was described. No imputation of missing variables was performed. All statistical analyses were performed with IBM SPSS Statistics version 28.
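The paired comparisons described here (paired t-test or Wilcoxon signed-rank depending on the distribution of the differences, and McNemar's test for paired proportions) were run in SPSS; an equivalent sketch in Python, with hypothetical input arrays, could look like this:

```python
import numpy as np
from scipy.stats import shapiro, ttest_rel, wilcoxon
from statsmodels.stats.contingency_tables import mcnemar

def compare_paired_inrs(before, after, alpha=0.05):
    """Paired t-test if the paired differences look normally distributed
    (Shapiro-Wilk), otherwise the Wilcoxon signed-rank test."""
    diffs = np.asarray(after, float) - np.asarray(before, float)
    if shapiro(diffs).pvalue > alpha:
        return ttest_rel(before, after)
    return wilcoxon(before, after)

def compare_in_range(in_range_before, in_range_after):
    """McNemar's exact test on paired binary in-range status (True/False)."""
    table = np.zeros((2, 2), dtype=int)
    for b, a in zip(in_range_before, in_range_after):
        table[int(not b), int(not a)] += 1  # rows: before, columns: after
    return mcnemar(table, exact=True)
```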
Sensitivity analysis
Several sensitivity analyses were performed to verify our results. As the most recent INR before vaccination might be influenced by professionals awaiting the optimal INR to vaccinate, we replaced the most recent INR before vaccination with the second most recent INR and with the INR measured 1-2 months before vaccination. If the second INR before vaccination turned out to have been measured 1-2 months before vaccination, this INR and the one before it were used. Furthermore, a subgroup analysis was performed in patients with a Time in Therapeutic Range (TTR) of 75 % or higher. The TTR was based on the INR results of the 6 months preceding COVID-19 vaccination and calculated using the Rosendaal method [27].
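The Rosendaal method interpolates the INR linearly between consecutive measurements and counts the fraction of person-days spent inside the target range. A minimal sketch (illustrative, not the clinics' validated implementation):

```python
from datetime import date

def rosendaal_ttr(measurements, low=2.0, high=3.0):
    """Time in Therapeutic Range by Rosendaal linear interpolation.

    measurements: chronologically sorted list of (date, INR) tuples.
    Returns the fraction of days whose interpolated INR lies in [low, high].
    """
    days_in_range = total_days = 0
    for (d0, i0), (d1, i1) in zip(measurements, measurements[1:]):
        span = (d1 - d0).days
        if span <= 0:
            continue
        total_days += span
        for day in range(span):
            inr = i0 + (i1 - i0) * day / span  # linear interpolation
            if low <= inr <= high:
                days_in_range += 1
    return days_in_range / total_days if total_days else float("nan")

# Example: six weeks of INRs around a 2.0-3.0 target range.
ttr = rosendaal_ttr([(date(2021, 1, 1), 2.4),
                     (date(2021, 1, 15), 3.4),
                     (date(2021, 1, 29), 2.6),
                     (date(2021, 2, 12), 2.2)])
print(f"TTR = {ttr:.0%}")  # ~71% for this hypothetical series
```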
In addition, a similar analysis was done using two subsequent INRs four months before COVID-19 vaccination. Similarly, we stratified by type of VKA (acenocoumarol or phenprocoumon), because phenprocoumon is associated with better anticoagulation control, as it has a longer half-life than acenocoumarol [28]. Finally, a subgroup analysis was performed by excluding AYAs who had received a surgical intervention, had been hospitalized, or had started or ceased any medication interacting with VKAs in the three months before and after COVID-19 vaccination. Interacting medication was identified using the national list of interacting medication of the FNT [29].
Baseline characteristics
A total of 113 AYAs were vaccinated and had a post-vaccination INR measured. After applying the exclusion criteria, 101 AYAs were eligible, of whom 11 were younger than 18 years [Fig. 1]. Of the 101 AYAs, the median age [IQR] was 25 [7] years and 52/101 (51.5 %) were male (sex at birth). Acenocoumarol was the preferred VKA (68.3 %) [Table 1].
Anticoagulation control after the first vaccination
We observed a decrease of 20.8 % in INRs within range after the first vaccination compared to before vaccination [Fig. 2A; Table 2]. The mean INR level (SD) was higher, but not statistically significantly so, after the first vaccination compared to before vaccination [2.6 (0.83) vs 2.7 (0.92), p = 0.647] in all AYAs. We did not observe any difference in the median number of VKA tablets nor in significant dose adjustments before and after the first vaccination in the total group [Table 3].

[Table 1 footnote residue: a deviant range is defined as any therapeutic target range that differs from 2.0-3.0 or 2.5-3.5; an INR switch is defined as a switch of therapeutic range during the study period, and a VKA switch as a switch of VKA type during the study period.]
No increase in subtherapeutic INRs was observed after the first vaccination compared to before vaccination, either in the total group or in recipients who completed the vaccination programme [Tables 2 & 3]. Similarly, no differences were seen in the percentages of INR ≥ 5 before and after the first vaccination in both groups. Three weeks after vaccination, the instability of anticoagulation had resolved, as the percentages of INRs above, within and below range were similar to those before vaccination [i.e. 12/101 (11.9 %) ...].
Anticoagulation control after the second vaccination
In AYAs who completed the vaccination programme, the median number of VKA tablets was lower after the second vaccination compared to the first vaccination in both acenocoumarol and phenprocoumon users [Table 3]. This decrease did not result from an increase in significant dose adjustments. The percentage of INRs within range was similar after the second vaccination compared to before vaccination and compared to the first vaccination [Fig. 2B]. Similarly, we did not observe any differences in the percentage of sub- and supratherapeutic INRs nor in the percentage of INRs ≥5 after the second vaccination. There were no differences in INR results and VKA tablets after the first vaccination compared to the second vaccination.
AYAs below 18 years
We did not observe any differences in the outcomes after vaccination in the 11 included AYAs below 18 years. Most of them stayed in range after the first COVID-19 vaccination [6/11 (54.5 %) vs 5/11 (45.5 %), p = 1.00]. No differences were observed in the median INR level or in the number of VKA tablets after the first COVID-19 vaccination. None of them had an INR ≥ 5 before or after the first vaccination. No complications were reported in the three months before and after COVID-19 vaccination.
Sensitivity analyses
The percentage of INRs out of range was higher after the first vaccination, irrespective of the chosen baseline INR [Fig. 3]. Similarly, an increase in supratherapeutic INRs was seen after the first COVID-19 vaccination in both sensitivity analyses. When we restricted our analysis to the most stable patients (TTR ≥ 75 %) and to patients without interacting events, the results were similar.
Discussion
Our research aimed to describe the stability of vitamin K antagonist (VKA) anticoagulation after COVID-19 vaccination in adolescents and young adults (AYAs). As we observed a decrease of 20.8 % in INRs within range after the first vaccination, our results indicate that COVID-19 vaccination is correlated with a negative effect on anticoagulation stability. This negative effect remained after restricting the analysis to the most stable patients (TTR >75 %) and to patients without interacting events. The decrease in INRs within range did not result in increased complications in the three months after COVID-19 vaccination.
Stability of anticoagulation after vaccination has mainly been described in adult and elderly VKA users. In most studies, no effect on the INR after vaccination was observed [30-34]. Only two studies have looked at the percentage of INRs out of range after COVID-19 vaccination [6,35]. Lotti et al. [35] did not observe a correlation, but this might be because they compared the median of all INRs 3 months before and after COVID-19 vaccination, possibly missing short-lived effects of the vaccine. Bauman et al. [36] observed similar results in their study on the effect of vaccination on anticoagulation control in paediatric VKA users. They included 28 paediatric VKA users (range 10 months-17 years old) and did not observe a relevant increase in the median INR or in the percentages of INRs out of range after vaccination. In contrast, our research group observed an increase in INRs out of range when comparing the most recent INR before and after COVID-19 vaccination [6]. In the present study, we observed a similar increase in INRs out of range after COVID-19 vaccination in adolescents and young adults. These results might indicate an association between COVID-19 vaccination and anticoagulation stability in younger VKA users as well. In the subgroup of adolescents, we did not observe a difference in the percentages of INRs out of range after vaccination, probably because of a lack of power. Another possible contributing factor might be that Canadian adolescents and almost all Dutch adolescents (10/11) were home-monitored, which has been shown to result in better anticoagulation stability in paediatric patients [37].
The increase in INRs out of range was not associated with an increase in complications after COVID-19 vaccination. The limited number of complications after COVID-19 vaccination is in line with the literature on post-vaccination complications in both adult and paediatric VKA users [30-36]. However, Dutch anticoagulation clinics were encouraged to monitor the anticoagulation status shortly after COVID-19 vaccination, so that any necessary dose adjustments could rapidly be made. The increased intensity of monitoring of VKA users has probably affected the observed number of complications before, but mainly after, COVID-19 vaccination. Therefore, we can only conclude from our results that COVID-19 vaccination is not correlated with a large increase in bleeding complications. Whether the number of bleeding complications is indeed low after COVID-19 vaccination might be answered by prospective studies with similar monitoring frequencies before and after COVID-19 vaccination.
Several explanations can be proposed for the correlation between COVID-19 vaccination and instability of anticoagulation in VKA users. COVID-19 vaccination itself, or systemic reactions to the vaccine, could have resulted in an increase in INRs out of range. These systemic reactions occur in AYAs as frequently as in adults, most often after the second dose [22-25]. We observed the increase in INRs out of range mainly after the first vaccination. This observation could indicate that other factors, such as physicians and patients subconsciously altering VKA dosages, are responsible for the increase in INRs out of range after COVID-19 vaccination. Although we cannot rule out the possibility of physicians and patients decreasing VKA dosages during the observation period, we would then have expected a larger increase in subtherapeutic INRs and in significant dose adjustments after the first COVID-19 vaccination. In addition, our sensitivity analyses demonstrated results similar to the main analysis: an increase in supratherapeutic INRs was seen, irrespective of the chosen baseline INR. Finally, intramuscular vaccination is deemed safe in anticoagulated patients, so dose adjustments for vaccination alone should not have been necessary [38].
Our study has strengths and limitations. First, this study is the first to analyse the effect of any vaccine on anticoagulation control in adolescents and young adults. Second, we were able to include 101 AYAs by using the electronic patient files of four large anticoagulation clinics. Finally, we included both acenocoumarol and phenprocoumon users. Compared to warfarin, which is frequently used internationally, acenocoumarol has a relatively short half-life while phenprocoumon has a relatively long half-life. As we included users of both, our results might translate to warfarin-treated AYAs as well [39]. Our main limitation is that we cannot blindly generalize our results to adolescent VKA users, due to the small number of adolescents included. No effect of the COVID-19 vaccine was seen in the adolescents, probably because of a lack of power. Our second limitation is that not every AYA had two vaccines registered. We were unable to identify the individual reasons for a missing second registration. Reasons might be not receiving the vaccine before the 15th of December, experiencing side effects, a SARS-CoV-2 infection before the vaccination programme started, or the vaccine not being registered by the anticoagulation clinics. We tried to reduce the possible biases resulting from these reasons by analysing both the INR results of AYAs who completed the vaccination programme and those of AYAs who only received the first vaccination. Our third limitation is that we cannot exclude the possibility that the effect on anticoagulation control was due to dose adjustments by patients or physicians. However, there was no increase in significant dose adjustments during our observation period, and the results of our sensitivity analyses followed the main analysis.
To conclude, in this pilot study we observed a decrease in the percentage of INR results within range after COVID-19 vaccination in adolescent and young adult VKA users. It might be advisable to monitor the INR shortly after COVID-19 vaccination in AYA VKA users, to make necessary dose adjustments in those with an out-of-range INR to reduce time spent out of range and the risk of complications. Whether these complications are reduced by monitoring the INR shortly after vaccination is still arguable and should be investigated in well-designed studies. Kruip has received unrestricted grants paid to the department for research outside this work from Sobi and has received speaking fees paid to the department from Roche, Sobi and BMS. | 2023-06-14T05:07:43.689Z | 2023-06-12T00:00:00.000 | {
"year": 2023,
"sha1": "677ac9489b6c926a35ebe3fe76309bc7ae31031a",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "aed80cd4c86e955ca0712b9cc0101d28c69e70cb",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
247930521 | pes2o/s2orc | v3-fos-license | Nanocomposite hydrogels for biomedical applications
Abstract Nanomaterials' unique structures at the nanometer level determine their incredible functions, and based on this, they can be widely used in the field of nanomedicine. However, nanomaterials do possess disadvantages that cannot be ignored, such as burst release, rapid elimination, and poor bioadhesion. Hydrogels are scaffolds with three‐dimensional structures, and they exhibit good biocompatibility and drug release capacity. Hydrogels are also associated with disadvantages for biomedical applications such as poor anti‐tumor capability, weak bioimaging capability, limited responsiveness, and so on. Incorporating nanomaterials into the 3D hydrogel network through physical interactions or chemical covalent bonding may be an effective method to avoid these disadvantages. In nanocomposite hydrogel systems, multifunctional nanomaterials often work as the functional core, giving the hydrogels a variety of properties (such as photo‐thermal conversion, magnetothermal conversion, conductivity, tumor targeting, etc.). Meanwhile, hydrogels can effectively improve the retention of nanomaterials and give the nanoparticles good plasticity to adapt to various biomedical applications (such as various biosensors). Nanocomposite hydrogel systems have broad application prospects in biomedicine. In this review, we comprehensively summarize and discuss the most recent advances of nanocomposite hydrogels in biomedicine, including drug and cell delivery, cancer treatment, tissue regeneration, biosensing, and bioimaging, and we also briefly discuss the current state of their commercialization in biomedicine.
and so on; 2DM can be divided into single layer and multiple layers (Table 1). Nanomaterials possess many excellent functions due to their nanoscale size and rich structures. The 3D sizes of 0DM do not differ too significantly (length: width: height ≈ 1) (Figure 2a). 1 Over the past few decades, various 0DM have been successfully prepared, such as gold nanoparticles (Au NPs), 34 mesoporous silica nanoparticles (MSNs), 34 silver nanoparticles (Ag NPs), 36 and titanium dioxide (TiO 2 ) nanoparticles. 37 Au NPs, as a kind of precious metal nanoparticle, have rich properties due to their nanostructure: good biosafety, phototherapeutic properties, surface modification, bioimaging, and so on. For example, Au NPs have been used as alternative nanoprobes for live cell imaging by dark field microscopy (DFM). 38

Figure 1: Schematic diagram of the main content of this review. Nanomaterials can be divided into zero-dimensional materials (0DM), one-dimensional materials (1DM), and two-dimensional materials (2DM) according to their size, length-diameter ratio, and diameter-thickness ratio. Based on the polymer composition, hydrogels can be classified into natural polymer, synthetic polymer, and composite polymer hydrogels. According to the macroscopic phenotype of the composite systems, nanomaterials and hydrogels mainly combine into four composite systems: nanocomposite hydrogel microneedles (MNs), injectable nanocomposite hydrogels, self-healing nanocomposite hydrogels and bioimaging nanocomposite hydrogels. We review the recent advances and future challenges of these composite systems, which involve almost all areas of biomedicine, including drug and cell delivery, cancer treatment, tissue regeneration, biosensing, and bioimaging.

1DM refers to materials possessing a relatively large length-to-diameter ratio (Figure 2b), such as nanocrystalline cellulose (CNCs), 62 gold nanowires (Au NWs), 63 carbon nanotubes (CNTs), 3 and so on. 1DM often exhibit a high degree of anisotropy and, based on this, simultaneously exhibit many excellent properties such as extremely high tensile strength and specific responses to temperature and humidity. 64,65 For example, CNTs, as a new class of carbon-based nanomaterials with a hollow fibrous structure, have properties of high tensile strength, gas adsorption, high conductivity, and thermal conductivity. Thus, they are commonly used for the mechanical and conductive enhancement of macroscopic materials, particularly in the field of biosensing. 66 2DM refer to materials that possess a relatively large diameter-thickness ratio (Figure 2C) such as graphene, 61,67,68 reduced graphene oxide (rGO), 69 graphene oxide (GO), 39,40 phosphorene, 44,70,71 MXene, 48
| RECENT ADVANCES IN HYDROGELS
According to the polymer composition, hydrogels can be classified into natural polymer, synthetic polymer, and composite polymer hydrogels (Table 1). Natural polymer hydrogels exhibit good biocompatibility and biodegradability, and hence they possess very promising application potential in biomedicine. For example, researchers have used chitosan and β-glycerophosphate to prepare an injectable thermosensitive hydrogel (Figure 2d). 52 Dextran and cellulose, as typical nonionic natural polysaccharide hydrogels, usually need to be combined with cells or nano drugs to meet specific requirements. 45,97 Sodium alginate is sensitive to Ca 2+ and has good biocompatibility. It can carry cells or nano drugs into the human body. 53 Synthetic components also possess excellent properties and are easy to design and modify, and they also play an indispensable part in the construction of hydrogels. For example, researchers have used poly(caprolactone)-poly(ethylene glycol)-poly(caprolactone) to develop an injectable thermosensitive hydrogel (Figure 2e). 54 Poly(acrylamide-co-maleic anhydride) (P(AM-co-MAH)) hydrogel is self-healing because of rich hydrogen bonds, and poly(methyl vinyl ether-alt-maleic acid) hydrogel possesses large swelling capacity due to the hydrophilic group (-COOH). 55,56 For practical applications, hydrogels are often required to simultaneously possess many properties such as good biocompatibility, biodegradability, proper hydrophilicity, good mechanical properties, and special functional groups. However, many natural or synthetic polymers cannot meet the above requirements alone, so composite hydrogels combining different polymers have been designed (Figure 2f). 57 The designed composite hydrogel can simultaneously possess the excellent properties of natural or synthetic polymers to achieve synergistic complementarity. 98 Hydrogels that are able to intelligently respond to environmental changes (such as heat, pH, light, and ultrasound) can be designed by introducing relevant functional groups to achieve in situ gelation and controlled drug release. 99 For example, groups with double bonds can be grafted onto dextran by transesterification. When exposed to UV light, the double bonds can be polymerized to form photo-sensitive hydrogels. 100 It is also possible to design self-healing hydrogels by introducing dynamic cross-linking. 98 For example, the hydrogel based on 3-aminophenylboronic acid, aniline, and polyvinyl alcohol has strong self-healing ability because of the dynamic cross-linking (hydrogen bonding and π-π stacking). 101
| RECENT ADVANCES OF NANOCOMPOSITE HYDROGELS IN BIOMEDICAL ENGINEERING
Nanomaterials have successfully demonstrated their huge application potential in nanomedicine by virtue of their various distinctive physical and chemical characteristics. 15,42,45 For example, Au NPs have good catalytic activity, photothermal conversion performance, biological imaging performance, and electrical conductivity. 38 However, they do exhibit some shortcomings that inevitably hinder their application effect and practicality (such as burst release, rapid removal, poor biological adhesion, and irreversible deformation). 19
| Recent advances of nanocomposite hydrogel microneedle patches
MN technology has the advantages of painlessness, low invasiveness, and low infection risk, making it an ideal transdermal method. 102 The length of MNs is usually no more than 1 mm, which is much shorter than that of traditional metal injection needles (at least 4 mm). 103
| Recent advances of injectable nanocomposite hydrogels
Injectable nanocomposite hydrogels can be prepared by adding nanomaterials to injectable hydrogels. Nanomaterials can adjust the structures and properties of hydrogels at the nano level and can also expand the functions of the system (such as PTT and PDT). Injectable hydrogels can effectively improve the retention of nanomaterials and give the composite systems good plasticity to adapt to various biomedical applications. 113 One of the outstanding advantages of injectable nanocomposite hydrogels is that they can adapt to and fill almost any application site (such as various wounds).
In injectable systems, nanomaterials generally blend with hydrogels through physical action, and the resulting systems have been applied in treatments including gene therapy. 24,114 Tumor PTT based on nanomaterials has become a popular anti-tumor method in recent years. 42,47 According to the different light-matter interaction mechanisms under optical radiation, the photothermal conversion mechanisms of nanomaterials can be divided into two types: nano metallic materials based on local plasmon heating and nano semiconductor materials with nonradiative relaxation. 115 Both mechanisms can perform effective light-to-heat conversion. The photothermal mechanism of nano metal materials primarily involves the generation of hot electrons under light radiation and the final photothermal conversion. Such nanomaterials include classic precious metal nanomaterials (such as Au NPs and Ag NPs) and emerging two-dimensional plasmonic materials (such as MXenes). 115 The photothermal mechanism of nano semiconductor materials primarily involves electron diffusion under light radiation and carrier recombination. 116 The 2D BP is an example of these materials. The anti-tumor effect of PTT is related to the photothermal conversion efficiency of the nanomaterials. 117 For example, one reported injectable system can produce a high temperature of 42 °C and highly toxic reactive oxygen species in response to a magnetic field to achieve an anti-tumor effect (Figure 5c). 119 Tumor immunotherapy based on injectable nanocomposite hydrogels also exhibits very good potential. 122 Nanoparticles integrating the anticancer drug DOX and arginine-rich molecules can achieve both chemotherapy and immunotherapy. 124 128 Additionally, myocardial infarction can cause myocardial ischemia, so pro-vascularization is necessary during treatment (Figure 6d). 129
The 2D BP produces local heat (PTT) and ROS (PDT) under NIR, which can not only remove hyperplastic synovial tissue but also stimulate the regeneration of injured cartilage. Concurrently, the degradation products of BP can provide sufficient raw materials for osteogenic differentiation. 133
| Skin tissue engineering
Skin damage is often accompanied by bacterial infections that cause secondary wound deterioration. 40 The shape and depth of skin wounds are often irregular, and it is difficult to fully fit a wound even after cutting the gels. To address this, researchers have incorporated nanoparticles into a chitosan matrix to develop an injectable temperature-sensitive nanocomposite hydrogel. The gel can not only promote the regeneration of skin tissue but also mimic the effects of PTT and PDT to effectively inhibit skin tumors (Table 2). 37
| Recent advances of self-healing nanocomposite hydrogels
A self-healing hydrogel is a material that can restore its properties to their original state after being damaged, which can prolong the lifespan of the material and improve its reliability. 106 In terms of mechanism, the self-healing behavior is mainly induced by reversible interactions within the hydrogel itself (dynamic chemical bonds and noncovalent interactions). 101,145 There are many dynamic chemical bonds, such as C=N bonds (acyl hydrazone and imine bonds), B-O bonds (phenylboronate), C-C/C-S bonds (reversible radical reactions), Schiff base bonds and disulfide bonds. There are also many dynamic noncovalent interactions, such as multiple hydrogen bonding, π-π stacking, ionic interaction (metal coordination), host-guest interaction, and hydrophobic interaction. 58,101,106,145 In recent years, self-healing hydrogels have exhibited great potential as substitutes for brittle hydrogels based on their durability and long-term stability. 106 However, depending on their intended use, self-healing hydrogels must also provide a variety of properties such as electrical conductivity, photosensitivity, adhesion, and appropriate mechanical strength. Nanomaterials are small and possess various characteristics such as conductivity, light sensitivity, and pH response. 16,146 Therefore, a promising method is to incorporate nanomaterials into self-healing hydrogels to develop nanocomposite hydrogels possessing multiple functions and self-healing ability. Due to polymer-nanomaterial interactions (such as charge interaction, hydrogen bonding and hydrophobic interaction), nanomaterials can also enhance the self-healing ability of hydrogels. 106 Self-healing nanocomposite hydrogels exhibit huge application potential in controlled drug release, electronic skin, and biosensing. 147
| Controlled drugs release
Self-healing nanocomposite hydrogels can control the release of medicines by switching their dynamic networks on and off. 148 One reported example is based on a hydrophilic copolymer that can rapidly gel at physiological pH. 150 The hydrogel not only has self-healing ability but also shows multi-responsiveness to pH, glucose, H 2 O 2 , and temperature. Such hydrogel systems have huge potential for controlled drug release.
| Bionic electronic skin
The materials used to prepare the electronic skin must possess good toughness, self-healing properties, and electrical conductivity to allow reliable signal transmission during repeated deformation and long-term use.
| Glucose sensor
Blood glucose monitoring is essential for diabetic patients. In one reported sensor, CeO 2 /MnO 2 nanoparticles loaded with GOx were connected to the gel through covalent action to serve as an electrocatalytic medium (Figure 9a). 57 An additional covering agent was concurrently applied to the hydrogel to avoid CeO 2 /MnO 2 NP run-off. The gel was then covalently bonded to a flexible chip to form a flexible glucose sensor. Since the CeO 2 /MnO 2 NPs act as the electrocatalytic medium, the sensor exhibits a rapid and sensitive response to glucose (t < 3 s). Due to the reversible Schiff base bond between quaternized chitosan and oxidized dextran, the hydrogel exhibits strong self-repair properties that allow the sensor to adapt to various deformations and damage. The hydrophilic polymer network and self-healing function greatly improve the sensitivity and service life of the glucose sensor, and the sensor can even work continuously for more than 30 days in vitro. In addition, one study has successfully examined glucose in oral saliva using a nanocomposite hydrogel combined with GOx. 154 These studies show that nanocomposite hydrogels combined with GOx have great potential for the noninvasive detection of glucose.
| Strain sensor
Any hydrogel used as a skin strain sensor must possess good toughness, conductivity, and self-repair ability to allow for a long service life and good electrical signal transmission under external stress. 155 One reported nanocomposite hydrogel sensor can monitor various motions in real time. 153 Moreover, the hydrogel system can also promote angiogenesis, accelerate collagen deposition, and resist infection, thereby helping to repair diabetic wounds. The developed nanocomposite hydrogel combines the functions of tissue regeneration and biosensing, and it can likely achieve dynamic monitoring of human movement while repairing diabetic wounds.
| Recent advances in bioimaging nanocomposite hydrogels
Common imaging methods in the biomedical field include magnetic resonance (MRI), photoacoustic (PAI), and fluorescence imaging (FLI).
Combining nanomaterials with imaging functions and hydrogels with good biocompatibility yields bioimaging nanocomposite hydrogels. 14 The safety of nanomaterials is mainly related to the material itself (such as element composition, nano size, and surface charge).
Among them, the size of nanomaterials is a factor affecting their biosafety and properties. For example, 10 nm Au NPs absorb green light and thus appear red. The melting temperature decreases dramatically as the size goes down. Moreover, 2-3 nm Au NPs are excellent catalysts, which also exhibit considerable magnetism. At this size, they are still metallic, but smaller ones turn into insulators. Their equilibrium structure changes to icosahedral symmetry, or they are even hollow or planar, depending on size. 171 Therefore, in commercialization, it is also very important to select the appropriate size and ensure good size uniformity of nanomaterials. In addition, deep application in vivo usually requires higher safety than superficial application. The deeper a product is applied, the more likely it is to contact the blood circulation and central nervous systems, and the higher the safety requirements of the product. 172 Nanocomposite hydrogels also tend to be complex systems, which require high safety even when applied to shallow surfaces.
Biomedical products mainly comprise medical devices and new drugs. Medical devices can be divided into Class I, II, and III according to their risk of use, from low to high. 173 The difficulty of commercialization stems not only from the high cost of R&D, production, and approval, but also from the difficulties in the production and material design of nanomaterials and hydrogels themselves.
Compared with in vivo applications, in vitro biomedical applications (biosensors, artificial skin, and biological actuators) pay more attention to the functionality and stability of the composite systems.
This places higher requirements on the material design and manufacture of nanomaterials and hydrogels. For in vitro applications, nanomaterials act more as the sensing core, endowing the composite system with electrical conductivity and the ability to sense various small changes in the surrounding environment (temperature, stress, light, and substance concentration) and convert them into electrical signals. 57,156 Nanomaterials need to perform signal conversion repeatedly in this process, which places higher requirements on their functionality and stability. Therefore, in material design, good homogeneity, appropriate size, micromorphology, and surface chemical modification of the nanomaterials need to be considered.
In vitro applications also place extremely high requirements on the environmental tolerance of hydrogels. The hydrogels mainly determine the macroscopic phenotype of the composite systems. In vitro applications require hydrogels to withstand extreme temperatures, retain moisture, and exhibit wear resistance, strong flexibility, viscosity, degradation resistance, and an appropriate swelling rate. In addition, the hydrogels must also retain the nanomaterials well. 49,57 In hydrogel design, suitable polymer molecular weight, targeted chemical modification, multicomponent combination, and so on are often considered. Manufacturing is also very important for product translation. In the manufacturing process, it is necessary to control product costs, considering production capacity, input/output ratio, the proportion of qualified products, degree of automation, and so on. At present, a trend is to expand the in vivo functions of in vitro products, such as integrating skin repair functions into biosensors. 153 However, this places higher safety requirements on the composite system and brings difficulties to the design and preparation of materials.
DATA AVAILABILITY STATEMENT
Data sharing not applicable to this article as no datasets were generated or analysed during the current study. | 2022-04-04T15:28:47.483Z | 2022-04-02T00:00:00.000 | {
"year": 2022,
"sha1": "89afcb143a76e8474b055cc90f280a9e8a5e108d",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1002/btm2.10315",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "0ba64f820997e80f745a38d031e431c444668bd7",
"s2fieldsofstudy": [
"Materials Science",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
240264744 | pes2o/s2orc | v3-fos-license | Towards a digital syllable-based reading intervention: An interview study with second-graders
Reading is an essential ability and a cornerstone of education. However, learning to read can be challenging for children. To scaffold young learners, a number of reading interventions were developed, including a syllable-based approach in German, which has proven to be successful but is resource- and time-consuming because it relies on individual interaction by educators. To improve the reach of the reading intervention, we present the first step towards a digital intervention, following a human-centred design approach. In this contribution, we present the implementation of a digital prototype, developed with feedback from expert evaluations, as well as an interview study with second-graders. The results of the interviews with children showed that the app is suitable for the target age group, that children had fun using it and that they were motivated to continue using it. The study also provides design implications for transferring an analogue concept into a digital application.
INTRODUCTION
Around 6.2 million people in Germany cannot read properly (Grotlüschen & Riekmann, 2011). The origin of the inability or a deficiency in reading capabilities can often be traced back to the initial attempts to learn reading during primary school. This problem is reflected in around 15% of German fourth graders, who show deficits in extracting meaning from presented texts (Bos, Tarelli, Bremerich-Vos, & Schwippert, 2012). In overall reading performance, German children showed a lower mean performance score relative to most other European countries (Hußmann et al., 2017). This is due to a stagnating overall score for Germany, which has not improved significantly since 2001 in contrast to many other European countries. However, there is also high variability in reading abilities, with an increase in the proportion of children on the lowest level of competence from 3% in 2001 to 6%.
Given the high transparency of the German language that allows reading words on a letter-by-letter basis, even poor readers can read words accurately. However, this process is resource demanding and prone to errors (Coltheart, Rastle, Perry, Langdon, & Ziegler, 2001), and thus not suited for understanding the meaning of a text. To read words accurately and fast, readers need to learn to use units larger than single letters (i.e., syllables and morphemes), a step most poor readers in German experience difficulties with. Due to the relevance of reading abilities for knowledge acquisition (Nagler, Lindberg, & Hasselhorn, 2017) as well as social and cultural activities (Naumann, Artelt, Schneider, & Stanat, 2010), problems in reading acquisition should be addressed as early and as widely as possible.
A number of analogue interventions were designed for poor readers in primary school in different languages (e.g., Ecalle, Magnan, & Calmus, 2009;Heikkilä, Aro, Närhi, Westerholm, & Ahonen, 2013), including German (e.g., Müller, Richter, & Karageorgos, 2020). The latter is a syllable-based reading intervention to foster word reading skills for groups of second-graders. A digital version thereof could facilitate learning anytime and anywhere, allow personalized help and potentially provide a fun approach to additional training. In this contribution, we first present a theoretical overview on reading acquisition and the reading intervention. Second, we developed a high-fidelity prototype that digitalizes a fraction of the syllable-based reading intervention, together with an interview study with second-graders that aims to explore whether a digital version could be employed with the target age group. Our contribution is twofold: (1) We demonstrate a systematic approach to transferring established concepts into a digital format by adapting a validated analogous training for mobile devices; (2) We derive relevant design implications based on a target group oriented evaluation of the application.
Theoretical background on reading acquisition
In general, the acquisition of reading skills on the word-, sentence- and text-level of young children varies within the development process. Children quickly improve their reading abilities during the early grades, whereas the degree of improvement is lower in third-graders and subsequent grades (Foorman, Francis, Fletcher, Schatschneider, & Mehta, 1998;Logan et al., 2013). Due to the trajectory of reading skills development during grade one through four (Foorman et al., 1998), early intervention is essential to train deficient reading abilities with the aim to return children to the path of normal reading development (Francis, Shaywitz, Stuebing, Shaywitz, & Fletcher, 1996;Müller, Otterbein-Gutsche, & Richter, 2018). Without intervention, children who have poor reading skills in early grades are highly likely to remain poor readers (Landerl & Wimmer, 2008).
As children progress in reading development, they should reach the consolidated phase as they are able to memorize a larger number of sight words by extracting larger chunks of grapheme-phoneme connections, including syllables (Ehri, 1995, 2005). The more words that can be read via orthographic representations like syllables, the faster word recognition becomes. Because syllables are often used in multiple words, readers might profit from transfer effects. Syllable-based reading therefore qualifies as a training technique to gain knowledge which is transferable to unknown texts or words (Huemer, Aro, Landerl, & Lyytinen, 2010). However, poor readers in German often experience difficulties in using (sub)lexical units like syllables or morphemes to recognize words faster via orthographic decoding processes (Landerl & Wimmer, 2008). In consequence, their word recognition is slow and error prone, which leads to difficulties in sentence- and text-based reading comprehension. Their trajectory of reading skill development is less steep compared to readers with good word recognition skills (Pfost, Hattie, Dörfler, & Artelt, 2014).
Using the capacity of syllables, Müller et al. (2018;2020) developed a syllable-based reading intervention for second-graders. The intervention aims to enhance the word recognition processes by repeatedly reading and segmenting syllables using 24 different game-like training sessions. The word material within the manualized intervention was selected based on the 500 most frequent German syllables in texts typically read by 6-8 year old children (cf. database childLex, Schroeder, Würzner, Heister, Geyken, & Kliegl, 2015). The intervention aims to strengthen the mental representations of syllables and words consisting of these syllables to foster the accuracy and fluency of word recognition. Thus, the intervention should support poor readers to reach the consolidated phase instead of reading words letter-by-letter. Müller et al. (2018;2020) evaluated the reading intervention in the form of biweekly training sessions organized by specifically prepared trainers in addition to regular lessons. Participating children showed significant improvements regarding phonological and orthographic word recognition, hence being able to read words more quickly and accurately compared to same-skilled poor readers in the control condition. Participants also profited from enhanced reading comprehension due to transfer effects (Müller, Richter, & Karageorgos, 2020; for results in Grade 4 see Müller, Richter, Karageorgos, Krawietz, & Ennemoser, 2017). The results of the intervention highlight the value of early support in reading development to effectively improve reading skills (Klatte, Steinbrink, Bergström, & Lachmann, 2013;Tacke, 2005). However, conducting such an intervention in small groups is time and resource consuming. Additionally, even though group-based interventions have been shown to work, individual support for each learner would be beneficial (Groth, Hasko, Bruder, Kunze, & Schulte-Körne, 2013).
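To make the syllable-selection step described above concrete, the general idea of ranking syllables by their frequency in child-directed texts can be sketched in a few lines of Kotlin. The toy word list and the syllabify() helper below are hypothetical stand-ins for the childLex database and its syllabification; this is an illustration only, not the procedure actually used by Müller et al.

// Hypothetical syllabification helper; a real implementation would use a
// German hyphenation dictionary or rule set.
fun syllabify(word: String): List<String> = when (word) {
    "Blume" -> listOf("Blu", "me")
    "Sonne" -> listOf("Son", "ne")
    "Banane" -> listOf("Ba", "na", "ne")
    else -> listOf(word)
}

// Tally syllable frequencies, weighted by word frequency, and keep the top n.
fun topSyllables(wordFrequencies: Map<String, Int>, n: Int): List<String> {
    val counts = mutableMapOf<String, Int>()
    for ((word, freq) in wordFrequencies) {
        for (syllable in syllabify(word)) {
            counts[syllable] = (counts[syllable] ?: 0) + freq
        }
    }
    return counts.entries.sortedByDescending { it.value }.take(n).map { it.key }
}

fun main() {
    val toyCorpus = mapOf("Blume" to 120, "Sonne" to 200, "Banane" to 50)
    println(topSyllables(toyCorpus, 3))  // [ne, Son, Blu] for this toy data
}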
Related work on the development of app-based learning
Mobile apps have been introduced into the field of education for young learners for a wide range of topics ranging from science to language acquisition (e.g., Henderson & Yeow, 2012;Walter-Laager et al., 2017). Several studies have highlighted the positive effects of integrating tablet apps in educational contexts (e.g., Bastian & Aufenanger, 2017;Schoppek & Tulis, 2010), even compared to desktop applications (Sung, Chang, & Liu, 2016). There are also indications that mobile apps increase motivation (Su & Cheng, 2015;Tillmann & Bremer, 2017). Further advantages include access at any time, versatility, adaptability (Rossing, Miller, Cecil, & Stamper, 2012) and individual feedback to consolidate learning progress (Blok, Oostdam, Otter, & Overmaat, 2002).
Although a great number of educational apps exist for young learners, quantity does not ensure quality of content (Papadakis, Kalogiannakis, & Zaranis, 2018). Haßler, Major, and Hennessy (2016) criticize that only a small subset of tablet apps is empirically tested, and that only a few studies adhere to minimum quality criteria, stressing the need for further empirical research. The general notion that adequately tested mobile apps might benefit learning and that introducing technology into the classroom can additionally enhance interest and motivation appears to be well supported (Hochberg, Kuhn, & Müller, 2018).
There are several apps with the overarching goal to practice and improve reading. For example, there are apps with picture books for beginners in reading (e.g., Cahill & McGill-Franzen, 2013), or multimodal e-books to train children's reading skills (e.g., Morgan, 2013), indicating different possibilities to support reading acquisition. Focusing on spelling in German, there are similar issues with the apps in terms of quality (Fleischhauer, Schledjewski, & Grosche, 2017). To date, no evidence-based app using a syllable-based approach with focus on orthographic and phonological processes is available.
The contrast between the large amount of reading apps available and the lack of using established and evaluated approaches highlights a need for a scientifically driven development process (Hirsh-Pasek et al., 2015). In order to obtain the benefits of mobile learning, the app needs to be designed focusing on its usability and handling by children. Comfort and ease of use are also important aspects. When adapting an established reading intervention from an analogous course-based format to a mobile app, several aspects have to be considered. For example, motivational aspects and self-directed learning become more relevant, because users interact with the app by themselves, and potentially have no external guidance (Ericsson, Krampe, & Tesch-Römer, 1993). A mobile app should promote self-directed learning on different levels of difficulties in pupils, similar to the benefits reported for technology-rich learning environments (Fahnoe & Mishra, 2013;Rashid & Asghar, 2016;Rossing et al., 2012;Underwood, Luckin, & Winters, 2012).
Contribution
Acquiring recognition and understanding of written words is one of the main educational goals in primary school. For poor readers learning to read in German, a syllable-based intervention has been shown to positively influence word reading skills and, in consequence, text-based reading comprehension (Müller, Richter, & Karageorgos, 2020;Müller et al., 2017). However, the analogue application of this approach is time and resource consuming, as many of the conventional interventions highly rely on individual interaction by educators to convey the learning material. A digital version of this intervention for reading acquisition could potentially overcome these limitations. However, the transfer of such an evidence-based face-to-face intervention into a digital version should be conducted with a human-centred design approach. As noted by Görgen, Huemer, Schulte-Körne, and Moll (2020), until recently the focus for development for children with reading problems has been on analogue trainings with experts. To the best of our knowledge, even though steps towards digital reading trainings have been made, they are not set up as transferred versions of a tested analogue training, although steps in this direction have been taken by comparing pen-and-paper and digital content in math education (Maertens, Vandewaetere, Cornillie, & Desmet, 2014).
As a first step, this study addresses to which degree such analogue teaching material, which is originally presented by an instructor, can be transferred into a digital version. Perceived usability and enjoyment regarding the learning material might be crucial in determining the frequency of use of the digital version, both by children at home and teachers in their classes. Only if these potential motivational obstacles are overcome and the underlying principles of the intervention can be conveyed, a subsequent larger investigation on the actual learning outcome in the target group can be considered.
We therefore present the first step of a humancentred design approach to implement a digital version of the reading intervention by Müller, Richter, and Otterbein-Gutsche (2020), providing an operational high-fidelity prototype that integrates expert feedback from educational psychologists and primary school teachers as well as feedback from second-graders, the target group of the reading intervention. Our work demonstrates a systematic approach to transferring an analogue concept into a digital application and design implications derived from this process.
It is important to keep in mind that the current state of the app is not yet a complete digital version of the original intervention, for which similarly positive results on learning outcomes may be expected. However, the prototype is intended as an elaborated proof of concept on whether the intervention has the potential to be developed into a mobile, easy-to-use and motivating application, which is accepted by pupils and teachers alike and able to improve reading abilities, based on empirically tested scientific material. To this end, we addressed the following research questions:
RQ1: Can the central elements of the analogue syllable-based reading intervention be successfully transferred into a digital version?
RQ2: Are second-graders motivated to use the app again?
RQ3: Do the second-graders enjoy using the app?
PROTOTYPE OF THE READING APP
For the prototype, three educational games used in the reading intervention (Müller, Richter, & Otterbein-Gutsche, 2020) were chosen for implementation in Android Studio (Google, 2019). First, a selection of games was implemented as low-fidelity prototypes using Axure (Axure Software Solutions Inc., 2019). Second, three of these prototypes were chosen for implementation as an Android app, based on their ability to capture essential aspects of the reading intervention such as the ability to separate words into syllables or the acquisition of syllable rhythm and syllable classification.
Analysis of requirements
If the complete syllable-based intervention were digitalized, it could potentially benefit from the advantages of technology-based tools (Biancarosa & Griffiths, 2012;Sung et al., 2016). A digital version could also avoid the logistic drawbacks of the analogue intervention, which requires specifically prepared teachers. Furthermore, as the app is used by each child individually, the pace of the training is not affected by the group (Groth et al., 2013) and adaptive training at the child's individual reading level is possible.
We identify four main criteria for our app based on relevant requirements for mobile education applications for children as proposed by Mkpojiogu, Hussain, and Hassan (2018): (1) efficiency in terms of adequate and accurate feedback, (2) effectiveness with regard to the comprehensibility of content and navigation elements, (3) the self-descriptiveness of the individual games and the app overall (learnability) and (4) child-friendly design to assure user satisfaction and reduce cognitive load. Additionally, we focused on the correctness of content. These requirements have to be considered for the adaption of the analogue version to a digital version, which is closely based on the original training while being extended by meaningful modifications, such as an immediate feedback mechanism, to fully exploit the potential of a digital adaption. For subsequent development, further requirements such as adapting the app's overall difficulty level to the children's individual skills require further analysis of user data, e.g. by collecting performance data.
The correctness of the content is addressed by a direct transfer from the analogue reading intervention to the digital version, and a feedback loop via an expert evaluation. The four requirements for educational apps for children (Mkpojiogu et al., 2018) are dependent on design choices during the development process. The crucial idea behind these requirements is that second-graders, especially those with a reading deficiency, should be able to successfully interact with the app. Therefore, vivid explanatory videos are integrated that address both the navigation within the app and the content and procedures of each individual game. Additionally, visual and acoustic feedback are used throughout the interaction, using simple, consistent sounds with the main purpose of indicating whether a given solution to a task was right or wrong (Mejtoft, Lindberg, Söderström, & Mårell-Olsson, 2017). By implementing colorful and expressive graphics, the app is designed for the target age group of the reading intervention.
Finally, because scientifically developed technologies are often not integrated into everyday classrooms (Scherer, Tondeur, Siddiq, & Baran, 2018), the app to be developed had to be accepted not only by experts, but also by pupils and their teachers to be of practical use.
Main concept of the digital intervention
The main menu of the app is depicted as a map of an island structured in different areas represented by distinctive symbols corresponding to the three respective games (flowers, lighthouse, and school). After selecting one of the games, the user can choose between three difficulty levels. Completing a game in one of the difficulties returns the user to the difficulty selection screen.
Within each difficulty level, five tasks are presented sequentially, with a progress bar illustrating the number of completed tasks per level. Each task is set to be worked on in an answer-until-correct style (Blair, 2013;Epstein, Epstein, & Brosvic, 2001), allowing the learner to reach the correct solution eventually. Users are also allowed to abandon the currently played level and return to the difficulty selection screen.
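The answer-until-correct flow described above can be summarized in a short Kotlin sketch. The class and function names (Task, LevelSession, submitAnswer) are illustrative assumptions and do not reflect the app's actual code.

// Minimal sketch of an answer-until-correct level with five tasks.
data class Task(val prompt: String, val solution: String)

enum class Feedback { CORRECT, TRY_AGAIN }

class LevelSession(private val tasks: List<Task>) {
    private var index = 0

    // Progress between 0.0 and 1.0, used to fill the progress bar.
    val progress: Float
        get() = index.toFloat() / tasks.size

    val finished: Boolean
        get() = index >= tasks.size

    // The learner may abandon the level at any time; the caller then simply
    // stops submitting answers and returns to the difficulty selection screen.
    fun submitAnswer(answer: String): Feedback {
        val task = tasks[index]
        return if (answer == task.solution) {
            index++                // task solved, advance the progress bar
            Feedback.CORRECT
        } else {
            Feedback.TRY_AGAIN     // same task is shown again (answer-until-correct)
        }
    }
}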
The design of the buttons was kept simple and comprehensible. The difficulty selection screen for each respective game as well as the levels themselves use an image of the previous app layer in the upper left corner as a button to return to this layer. To ensure proper understanding regarding the functionality of the app, tutorial videos are displayed when starting the app for the first time and before each game. Each tutorial video can be repeated in each game by pressing a button labeled with a question mark.
Implementation of the selected games
The first implemented task is 'Reading with Arcs' in the analogue reading intervention. Children are instructed to mark the syllables of words with arcs and thereby read the words syllable-by-syllable. In the analogous intervention, syllables are marked with arcs while reading a word aloud with the whole class, as well as silently during individual work. The aim is to make syllables the salient unit of processing instead of reading words letter by letter (Müller, Richter, & Karageorgos, 2020).
The game 'Reading with Arcs' from the analogue intervention was adapted and called 'Flower Meadow' and is associated with the flower button in the selection screen. Similar to the analogue version of the game, users have to hyphenate words into syllables. Separation of the words is represented by the flight path of a bee. Every letter of the respective word is placed on a separate flower (Figure 1). Users separate the syllables by dragging the icon of a bee to the final letter of the current syllable within the word. The number of syllables per word varies according to the level difficulty of the game. Visual and audio feedback is used to indicate a right or wrong answer. Once users correctly hyphenate a word, their progress is acknowledged and they proceed with the next word in the level. Difficulty is adapted by inclusion of longer words in higher levels. Once the progress bar in the level is filled, the user is informed about passing the game in the respective difficulty and allowed to return to the difficulty selection screen. The second selected game is 'Vowel and Consonant', which serves primarily to teach the composition of a syllable and the distinction between vowels and consonants. The original version of the learning exercise describes each syllable as a boat, which has exactly one captain but several sailors. The vowel, or diphthong, is the captain of the boat, the consonants are the sailors that the children must recognize. In German each syllable contains a vowel or diphthong as its nucleus, therefore identifying the vowel or diphthong can help children to distinguish the syllables within a word. This general rule is introduced in the third session of the analogous intervention and is trained in every session afterwards. Recognizing individual syllables and their structure is a fundamental requirement for the differentiation of syllables and thus the reading of whole words.
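The captain-and-sailors rule lends itself to a compact check, illustrated by the following Kotlin sketch. The vowel set and the simplification that every vowel letter is treated as part of the captain are our assumptions for illustration, not the app's actual implementation.

// In German, every syllable contains a vowel, umlaut or diphthong as its
// nucleus (the "captain"); the consonants are the "sailors".
val vowelLetters = setOf('a', 'e', 'i', 'o', 'u', 'ä', 'ö', 'ü')

// Returns the letter positions of a syllable that belong to its captain.
fun captainPositions(syllable: String): Set<Int> =
    syllable.lowercase()
        .withIndex()
        .filter { it.value in vowelLetters }
        .map { it.index }
        .toSet()

fun main() {
    println(captainPositions("Haus"))    // [1, 2]: the diphthong "au" is the captain
    println(captainPositions("Schiff"))  // [3]: "i" is the captain, the rest are sailors
}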
In the digital version 'Vowel and Consonant' was adapted and renamed as 'Sailor Game' and is associated with the lighthouse button in the selection screen. The digital version visualizes the explanation used in the reading intervention of each syllable as a boat with a captain by dividing a word and placing its syllables in separate boats, initially representing each letter as a sailor ( Figure 2). Users have to recognize the vowels, vowel mutations or diphthongs as the captain of each syllable of the depicted word. The goal for each level is to find all captains by touching the vowels and diphthongs. Positive feedback is shown by highlighting the correctly selected figures with a green border and changing the sailor into a captain, negative feedback by a red shading, and a shake animation. Furthermore, the facial expression of the selected sailor changes accordingly as well as the auditory feedback. The difficulty of each level varies due to the inclusion of monosyllabic words in difficulty level one, progressively to three-syllable words in difficulty level three. Once the progress bar in the level is filled, the user is informed about passing the game in the respective difficulty and allowed to return to the difficulty selection screen. The third game 'Syllable Salad' occurs first in the sixth session of the reading intervention. In the original version of the intervention, the child receives an envelope, which contains cards with several separate syllables. The task is to bring these syllables in the correct order to merge them into words. In the digital version, 'Syllable Salad' is represented as a school building on the overview map. Users rearrange individual syllables to form a complete word. To this end, the syllables of a word are displayed randomly scattered on a blackboard (Figure 3). The task is to place the syllables in the right order by using a drag and drop mechanic. Positive and negative feedback is communicated via sound output. The individual levels vary in their degree of difficulty in terms of the number of syllables. In difficulty level one, two-syllable words are presented, while difficulty level two consists of three-syllable and difficulty level three of foursyllable words. Once the progress bar in the level is filled, the user is informed about passing the game in the respective difficulty and allowed to return to the difficulty selection screen.
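The correctness check behind 'Syllable Salad' essentially reduces to comparing the arranged syllable sequence with the target word, as the following Kotlin sketch illustrates; the function names and the use of shuffled() for the scattered starting layout are illustrative assumptions.

// The syllables are shown in random order on the blackboard.
fun scatter(syllables: List<String>): List<String> = syllables.shuffled()

// Do the syllables, in the order the learner arranged them, spell the target word?
fun isCorrectOrder(arranged: List<String>, targetWord: String): Boolean =
    arranged.joinToString(separator = "") == targetWord

fun main() {
    println(scatter(listOf("Ba", "na", "ne")))                   // e.g. [ne, Ba, na]
    println(isCorrectOrder(listOf("Ba", "na", "ne"), "Banane"))  // true
    println(isCorrectOrder(listOf("na", "Ba", "ne"), "Banane"))  // false
}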
Expert review
In order to examine whether the four requirements for the app (efficiency, effectiveness, learnability and child-friendly design to assure user satisfaction and reduce cognitive load) have been met by our implementation, and prior to conducting a study with second-graders, the prototype was evaluated by experts. Three researchers (one professor, two postdocs) at the chair of educational psychology separately used and commented on the app. In individual interviews, the experts first watched and commented on the instruction videos. Afterwards, they were invited to use the app without a time limit and to freely navigate within the app. During interaction with the app, the experts gave oral feedback, which was recorded. After they finished operating the app, additional half-structured interviews were conducted. The questions focused on possible difficulties that could arise during the children's app usage, issues the pupils could need assistance with, and the adequacy of the implemented feedback. They were also asked to provide feedback for the procedure of the study based on their expertise, and to comment on the interview guideline for half-structured interviews, planned to be conducted with the children in the subsequent study.
Regarding the instruction videos, the experts mainly reported potential problems with the wording of the conveyed information and the feedback. The experts provided recommendations for the graphical user interface, especially for the main menu's design, the font size, as well as the font type within the games. They further suggested to include an instruction video to explain the general handling of the app. Finally, the usability of the game 'Syllable Salad' was criticized. All expert feedback was taken into account and the app was adapted accordingly.
After implementation of the expert feedback, we presented the app to the class teacher of the pupils participating in the study, as well as an additional primary school teacher. Both explicitly confirmed the app's and interview's suitability for the evaluation.
USER TESTING
We conducted a study with a school class of second-graders who interacted with the app in a controlled setting and took part in a subsequent interview. The main aims were to test if the central aspects of the analogue syllable-based reading intervention can be successfully transferred into a digital version (RQ1), whether second-graders rate the app as motivating (RQ2) and whether secondgraders enjoy using the app (RQ3). For future adaptions, we were interested in the performance of the children using the app and therefore measured errors and time spent on specific tasks.
Participants
The study was conducted in a primary school in Germany. In total, 17 children with written consent by their legal guardians used the app. The age ranged from seven to nine years (M = 7.65, SD = 0.61). Twelve girls with a mean age of 7.75 years (SD = 0.62) and five boys with a mean age of 7.40 years (SD = 0.55) participated in the survey. According to the class teacher, the reading abilities of the participating children were heterogeneous.
Design and procedure
The study was separated into two parts for each participant, with two investigators assigned to one child. The sessions were held individually and were set to not exceed 40 minutes.
First, each pupil was invited to interact with the app presented on a tablet, playing each game for approximately five minutes, using the same sequence of the three games, 'Flower Meadow', 'Sailor Game' and ending with 'Syllable Salad'. Afterwards, each child could freely use the app for up to five more minutes. The screen capture software 'DU Recorder' (DU-Apps-Studio, 2018) was used to record the complete interaction with the app. Questions asked during the interaction and the children's behaviour were recorded manually in writing, including instances in which the child needed assistance.
Second, a half-structured qualitative interview was conducted based on the interview guidelines discussed with the experts. One of the investigators asked questions and interacted with the participant while the other researcher noted answers and behaviour. The interview consisted of 17 open-text questions with child-friendly wording, which concerned demography, including age and gender, experience with tablet usage, joy of use, perceived learning outcome, basic understanding of the app's handling, the app's design and the design of the individual games. The pupils were asked which of the games they perceived to be hardest and easiest respectively, which game they liked best and which game they wanted to play again in the future. There were also open questions for positive and negative feedback on each individual game. Finally, the pupils were asked for general feedback and were invited to give additional suggestions.
RESULTS
The interviews were evaluated using the data and text analysis software MAXQDA 2018 (VERBI Software, 2018). Statistical tests were conducted using SPSS Version 25.
Screen-capture data
Due to technical problems, only 16 screen-capture videos could be analysed. Each video was evaluated independently by two investigators. The following criteria were extracted from each screen capture: (1) number of accomplished levels within five minutes, (2) time per level, (3) number of content-related errors, (4) number of interface-related errors and (5) choice of games during free interaction. Errors were separated into two categories to differentiate between problems with the interaction (interface-related), such as trying to interact with a non-interactive object, and problems with the reading task (content-related), such as selecting a wrong answer (Table 1). Interface-related errors should be as low as possible while content-related errors are based on the ability of the learner and are not necessarily zero when confronted with more difficult tasks. Each child completed all levels of 'Syllable Salad' as well as all levels of 'Sailor Game', except for one participant who did not finish the third level before the five minutes had elapsed. In 'Flower Meadow', each participant finished the first level, but only nine children completed the third level within the five minutes.
The mean time spent on each level of 'Flower Meadow' was 01:29 minutes, for 'Sailor Game' it was 00:48 minutes per level and for 'Syllable Salad' the mean duration was 01:23 minutes per level. Only for the game 'Syllable Salad', the mean time increased with the difficulty of the levels. In 'Flower Meadow' the average time per level was nearly the same for all three levels, while in 'Sailor Game' the first level took longest (Figure 4). To assess content- and interface-related errors, we calculated the mean number of errors, only for completed levels, within each game, resulting in a decrease in participants in higher levels (see Figure 5).
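For illustration, the aggregation described here can be expressed in a few lines of Kotlin; the record layout and field names are assumptions, not the actual coding scheme used for the screen captures.

// Mean content-related and interface-related errors per game and level,
// computed only over children who completed that level.
data class LevelRecord(
    val game: String,
    val level: Int,
    val completed: Boolean,
    val contentErrors: Int,
    val interfaceErrors: Int
)

fun meanErrorsPerLevel(records: List<LevelRecord>): Map<Pair<String, Int>, Pair<Double, Double>> =
    records
        .filter { it.completed }                 // only completed levels enter the mean
        .groupBy { it.game to it.level }
        .mapValues { (_, recs) ->
            recs.map { it.contentErrors }.average() to
                recs.map { it.interfaceErrors }.average()
        }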
Descriptively, the 'Sailor Game' had the lowest number of content-related errors overall, followed by 'Flower Meadow' and finally 'Syllable Salad' with the highest number of content errors. It is noteworthy that 'Sailor Game' had the highest error rate in the first level whereas in 'Flower Meadow' and 'Syllable Salad' the content-related errors increased with the difficulty of the levels.
Concerning the interface-related errors, 'Sailor Game' had no interface-related errors at all. 'Syllable Salad' had fewer interface-related errors compared to 'Flower Meadow'.
When being allowed to play with the app freely, two of the children were not interested in more interaction with the app, and for three participants data was not recorded during their interaction. The remaining pupils (n = 13) played one or multiple games during their free interaction. 'Flower Meadow' was played by ten children, 'Sailor Game' by eight children, and 'Syllable Salad' was chosen by six children.
Figure 5: Mean content errors and mean interface-related errors per child and level. Error bars represent SEs.
Interview data
All interview data was separated into segments, which were assigned a category by a group of three raters based on majority. The same three raters assigned categories for all interviews.
Regarding the app in general, all children reported having fun while using the app and separately claimed that they liked the appearance of the app. Concerning the overall interaction, 13 participants stated that the handling of the app was clear and comprehensible to them.
When asked about the enjoyment of each game separately, all children affirmed that they enjoyed playing the 'Sailor Game'. The pupils gave different reasons why they enjoyed the game, such as the game being easy to solve, easy to use, funny, that they liked the sailor theme and the visual feedback. For 'Flower Meadow', 15 children stated that they enjoyed the game, with the main reason of having fun playing the game. For 'Syllable Salad' also 15 children affirmed that they enjoyed playing the game and stated having fun and handling the game easily. Additionally, four children also mentioned the learning effect and the challenge the game posed.
Regarding the perceived difficulty, six children chose 'Syllable Salad' as the most difficult game, while 'Flower Meadow' and 'Sailor Game' were mentioned four times each. As the easiest game, 'Sailor Game' was chosen seven times, 'Flower Meadow' six times and 'Syllable Salad' three times.
When the participants had to decide on their favourite game within the app, six of the children chose 'Flower Meadow', followed by five children selecting 'Syllable Salad' and four of the children choosing 'Sailor Game'. Two children had no favourite game. The children stated that their choice was based on the easy handling, easy solvability, fun and overall appearance.
When asked whether they would like to use the app again, all children agreed. However, ten children were only interested to replay certain games: 'Syllable Salad' as well as 'Sailor Game' were mentioned seven times, 'Flower Meadow' was named three times.
Interestingly, 14 children stated that they have learned something while playing with the app. Improving in splitting syllables was mentioned by eleven children, four stated that they improved their understanding of words and vowels, and two said to have improved in reading.
When asked for general feedback on the app, four of the children emphasized the graphics, three mentioned having fun and two commented on the perceived learning. Additionally, three pupils mentioned the simple handling and two enjoyed the challenge of the tasks. Regarding negative aspects, one of the children did not like the type of visual feedback and one noted that the difficulty might not be suited based on individual abilities.
Last, the pupils were asked for suggestions on how to improve the app. In general, improvement of graphics was mentioned four times. Additionally three of the children asked to implement more games and levels. Most suggestions for improvement occurred on 'Syllable Salad'. Individual suggestions included to use funny words, increase difficulty and use different graphics. In total, having fun while using the app was mentioned 31 times across all interviews. The easy interaction was named 15 times and the perceived knowledge gain ten times.
DISCUSSION
Concerning RQ1, the evaluation by psychological experts and teachers, together with the study, demonstrated that the selected games of the analogue training can be digitalized. The pupils were overall able to work independently with the app based on the integrated instructions. However, the results have to be considered in detail. The quantitative data showed substantial variability among the participating children in the abilities required by the content of the app. There was also high variability in the interface-related errors for 'Flower Meadow' and 'Syllable Salad'. However, content-related errors were more numerous than interface-related errors in all three games. Only in the 'Sailor Game' did no interface-related errors occur, indicating the need to improve the interaction in the other two games. For RQ2 and RQ3 we suggest that the results, especially the interviews, might have been influenced by the novelty of the games themselves, especially relative to day-to-day school activities.
The interaction with the app itself should be unaffected by this novelty in terms of content-related and interface-related errors, and we still consider the information gained in the interviews to be valuable insight into the target group. The overall reception of the app and the three implemented games was very positive. Given that the games are clearly focused on learning, the interest of all participating children in using the app again is an encouraging sign, supporting the notion that the app is perceived as motivating, even though this finding should be treated with caution. Furthermore, the heterogeneous choice of favourite game can be considered an indicator that the games are comparably enjoyable, which is supported by how often the children reported having fun; this supports RQ3.
Lessons learned
However, the performance of the children within the app in terms of time per level and content-related and interface-related errors indicates room for improvement. The time per level as well as the content-related errors increased in higher levels of 'Syllable Salad', with only a few interface-related errors in each level. This suggests that the planned increase in difficulty across levels worked and that there were no substantial problems in using the game.
For 'Sailor Game' the mean time per level is similar across levels, and the low mean number of content-related errors even decreases in higher levels. There were no interface-related errors at all for this game. 'Sailor Game' can therefore be considered a relatively easy game in terms of content that is also easy to use, as supported by the interview data.
Concerning 'Flower Meadow', the mean time per level is also constant across levels, and content-related errors increased slightly in higher levels.
Both measures can be taken as an indication that the planned increase in difficulty across levels was noticeable. However, the number of interface-related errors in 'Flower Meadow' is descriptively higher than in the other two games, especially in the first level. We suggest that, even though the game does not appear very difficult in terms of content, the current mode of interaction is not ideal, resulting in increased interface-related errors. The interaction should be improved, especially because the games were selected to be suitable as an introduction to the app and should therefore be easy to interact with. Furthermore, these interface-related errors might be perceived as content-related errors by the children, potentially resulting in confusion about the correct solution.
The difficulty of 'Syllable Salad' and 'Flower Meadow' as measured in errors matches the subjectively perceived difficulty of the games, as indicated by the choices for easiest and most difficult game. Interestingly, the larger number of input-related errors in 'Flower Meadow' did not appear to have influenced this assessment. Furthermore, considering the choice of favourite game, 'Flower Meadow' was chosen most often. The input errors might therefore not be regarded as a critical problem by the children, most likely because the errors were not costly and the correct input could be executed without delay.
When the children could freely interact with the app, half of the children chose the levels of the games they had not managed to finish before. This behaviour can be interpreted as an indication that the children were motivated to finish all levels. The gamified elements implemented in the learning environment, such as the progress bars within each level and the increasing difficulty across levels, might have incentivised the children to rise to these challenges. In summary, the degree of difficulty appeared appropriate and motivating for the children, and the overall interaction with the app worked well.
Considering the initial research aim of adapting an analogue reading intervention to a mobile app, several aspects can be noted. Foremost, based on the initial prototype and the reactions within the study, it can be assumed that central elements of the intervention can be implemented in an app (RQ1). Furthermore, the results of the interviews illustrate the children's motivation to use the app again (RQ2). The interviews also show an overall enjoyment of the children using the app (RQ3). The results should be considered in light of the relatively small group of participants and a situation that is not equivalent to the planned context of use. We still regard these preliminary indications for future development as highly promising.
The combined approach of capturing and analysing the interactions with the games and conducting the interviews allowed us to identify problems as well as potential improvements. This lets us address the problem of input errors in a more differentiated way: even though minimizing these errors is an obvious task in development, considering a design that minimizes their impact is also worthwhile for subsequent versions.
Design implications
Based on the results of the interviews and the analysis of the screen-capture data, we derived several concrete improvements for the app that might also serve as general design implications for app development in this area. Regarding the app's user interface, it is noticeable that the use of text requires a font that is easy to read and therefore suitable for beginning readers. Furthermore, the function of the buttons was not always clear.
Although the buttons were already designed to be noticeable in size and contrast, app developers should generally pay more attention to the layout of critical user interface elements such as buttons, including their self-explanatory presentation in the app and their intuitive use. This is in line with the fact that some children needed help throughout the study. The tutorials and help functions therefore have to be extended to improve understanding. These observations indicate that children in particular might profit from a more interactive way of explaining the core app functions, including active participation and exercises.
Furthermore, our results highlight the need for a stepwise and differentiated approach to transferring an analogue concept into a digital application. The selected requirements of (1) efficiency in terms of adequate and accurate feedback, (2) effectiveness with regard to the comprehensibility of content and navigation elements, (3) self-descriptiveness of the individual games and of the app overall (learnability) and (4) child-friendly design to assure user satisfaction and reduce cognitive load can only be addressed by an iterative process involving experts and the target group of children. Expert feedback alone, often used in the form of heuristic evaluations, was clearly not sufficient to finalize the app, as demonstrated by the study. However, the study with the second-graders by itself might have been similarly insufficient, because central problems such as the approach to the instructions and potential difficulties with the initial font had to be identified first to allow for meaningful interaction with the app.
CONCLUSION
For this contribution, we used a human-centred design approach to develop a prototype of a mobile learning app based on an analogue syllable-based reading intervention (Müller, Richter, & Otterbein-Gutsche, 2020). Such a digital version could help to address the need for a more accessible reading intervention for children with deficient reading abilities.
The preliminary prototype was subject to an evaluation by experts and was subsequently revised. A study was conducted with a class of second-graders, who played with the app and were interviewed about the interaction. In general, the participating children enjoyed using the app and were motivated for continued use. The study demonstrated that core aspects of the analogue reading intervention can be adapted into a digital version. Results of the study generated valuable insights to improve the current state of the prototype. The feedback from the target age group can help guide future development of the app, e.g. on how to improve the introduction tutorials, therefore ensuring the quality of the developed app (Papadakis et al., 2018). These insights also provide implications regarding the development and design of educational apps for children.
Engaging a human-centred design approach allowed us to incorporate the perspectives of theoretical experts (i.e., pedagogical psychologists), practical experts (i.e., primary school teachers) and users (i.e., second-graders). This enabled us to iteratively improve the prototype, establish a solid foundation for the subsequent development of the prototype into a complete app, and present this development approach as insight for similar endeavours.
The participating children also asked for an extension of the levels and of the app itself, which is less a needed improvement than an indication that there is demand for the adaptation of the reading intervention by Müller et al. (2018).
In the long term, the learning outcome of the app needs to be examined with pre- and post-tests for children with different reading abilities in a large sample. Because the underlying intervention has been shown to be effective in a long-term study (Müller, Richter, & Karageorgos, 2020; Müller et al., 2017), the effect of an interaction as short as the one in this study is only the first step in a comprehensive evaluation of the digital adaptation. The goals during development were focused on our research questions: the proof of concept, motivation in the target group and interest in using the app. Our goal is therefore to scientifically establish a usable and well thought out app, which adequately represents and meaningfully expands the analogue intervention, and also to test this improved version in terms of learning success on a large scale (Haßler et al., 2016).
ACKNOWLEDGEMENTS
This work was funded by the German Federal Ministry of Education and Research (BMBF) under the grant agreement MobiLe (03VP07080). | 2021-10-31T15:10:55.119Z | 2021-07-01T00:00:00.000 | {
"year": 2021,
"sha1": "b6a5794eeb559730511453b92674cf5cb597625f",
"oa_license": "CCBY",
"oa_url": "https://www.scienceopen.com/document_file/b9ff8c39-3b45-4f66-8c97-a66d16be912f/ScienceOpen/271_Schaper_HCI2021.pdf",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "6f45c84f9f1cdf342a5b199dbc587a5f2aadbfe8",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": []
} |
265044360 | pes2o/s2orc | v3-fos-license | reconcILS: A gene tree-species tree reconciliation algorithm that allows for incomplete lineage sorting
Reconciliation algorithms provide an accounting of the evolutionary history of individual gene trees given a species tree. Many reconciliation algorithms consider only duplication and loss events (and sometimes horizontal transfer), ignoring effects of the coalescent process, including incomplete lineage sorting (ILS). Here, we present a new algorithm for carrying out reconciliation that accurately accounts for incomplete lineage sorting by treating ILS as a series of nearest neighbor interchange (NNI) events. For discordant branches of the gene tree identified by last common ancestor (LCA) mapping, our algorithm recursively chooses the optimal history by comparing the cost of duplication and loss to the cost of NNI and loss. We demonstrate the accuracy of our new method, which we call reconcILS, using a new simulation engine (dupcoal) that can accurately generate gene trees produced by the interaction of duplication, ILS, and loss. We show that reconcILS is much more accurate than models that ignore ILS, and at least as accurate or better than the leading method that can model ILS, duplication, and loss. We discuss how our method can also be extended in the future for use as a reconciliation algorithm in additional scenarios. Availability: reconcILS is implemented in Python 3 and is available at https://github.com/smishra677/reconcILS. The dupcoal simulator is implemented in Python 3 and is available at https://github.com/meganlsmith/dupcoal
Introduction
The sequencing of a large number of genomes from many different species has highlighted the dynamic nature of genes: they are often duplicated, lost, or moved between species via introgression (no gain of an extra copy) or horizontal gene transfer (gain of an extra copy). One of the main goals of evolutionary and comparative genomics has been to quantify this dynamism, both in terms of the number of events of each type and their timing. A popular approach for quantifying both measures is gene tree-species tree reconciliation (reviewed in Boussau and Scornavacca 2020). Given a gene tree topology and a species tree topology, reconciliation algorithms attempt to infer the evolutionary events necessary to explain any discordance in the respective topologies. Many different reconciliation algorithms have been developed, most often considering only duplication and loss (so-called "DL" models; e.g. Goodman et al. 1979; Page 1994; Guigo et al. 1996; Page and Charleston 1997; Chen et al. 2000; Zmasek and Eddy 2001; Durand et al. 2006), but also sometimes considering duplication, transfer, and loss ("DTL" models; e.g. Bansal et al. 2012; Szöllősi et al. 2013; Jacox et al. 2016; Morel et al. 2020).
Despite the success-and widespread use-of DL and DTL reconciliation algorithms, they do not consider gene tree discordance due to the coalescent process.The coalescent is a model that describes how different sequences within a population are related (Kingman 1982;Hudson 1983;Tajima 1983), and is relevant even when considering sequences from different species because these must also coalesce in shared ancestral populations.Importantly, gene trees and species trees can be discordant solely due to coalescence, in a process often called incomplete lineage sorting (ILS) or deep coalescence (Maddison 1997).Standard DL and DTL algorithms will incorrectly infer one duplication and three losses for every discordance only due to ILS (Hahn 2007), resulting in misleading results when applying reconciliation methods to trees with ILS (e.g.Stull et al. 2021).Duplication, loss, and coalescence can also interact in complex and non-intuitive ways (Rasmussen and Kellis 2012;Mallo et al. 2014;Li et al. 2021), making it almost impossible to simply exclude gene trees discordant due to ILS from an analysis.
A few different approaches to reconciliation have been used to deal with discordance due to coalescence.An early approach allowed short branches in the species tree-which can lead to ILS-to be collapsed (Vernot et al. 2008;Stolzer et al. 2012).The use of non-binary species trees means that such algorithms will not mistake ILS alone for duplication and loss; however, they do not account for interactions between events.Chan et al. (2017) introduced a reconciliation algorithm that included duplication, loss, transfer, and ILS, but it assumed that new duplicates were completely linked to their parental locus (i.e.no recombination between them).Given that the majority of new gene duplicates are not completely linked (Schrider and Hahn 2010), such an assumption limits the number of genes that can be analyzed.Wu et al. (2014) developed a parsimony-based algorithm (DLCpar) that models duplication, loss, and ILS; this approach is similar to the probabilistic DLCoalRecon algorithm (Rasmussen and Kellis 2012), but runs much faster and without the need to specify duplication and loss rates ahead of time.
Here, we introduce a new algorithm for finding the most parsimonious reconciliation of a gene tree and a species tree in the presence of duplication, loss, and ILS.Our goal is to improve upon existing methods by increasing speed and accuracy, as well as by incorporating more coalescent processes (i.e.ILS is not the only cause of discordance due to the coalescent).In addition, although DLCpar has been shown to perform well (Wu et al. 2014;Du et al. 2021), it is not straightforward to extend its algorithm to other tools that also have reconciliation steps (e.g.Thomas et al. 2017;Zhang et al. 2020).We hope that the algorithm introduced here will be easily adaptable to such scenarios.We demonstrate the use of this new algorithm by applying it to 23 whole primate genomes, a dataset that has previously been shown to contain a large amount of discordance due to ILS (Vanderpool et al. 2020).
Multilocus multispecies coalescent simulator
We begin with a description of the simulation program introduced here, dupcoal, as it allows us to explain the biological model underlying both the simulations and inferences.
The model we use closely resembles the multilocus multispecies coalescent (MLMSC) model of Li et al. (2021). This model treats each individual locus as evolving via the multispecies coalescent (MSC) model, in which discordance between individual gene trees and the species tree can arise only via incomplete lineage sorting. The MLMSC further models relationships among these loci, in addition to losses at any locus.
Fig. 1. The multilocus multispecies coalescent model generates full gene trees. a) A gene tree at the parent locus (blue) is drawn from the species tree (outlined in black) via the multispecies coalescent (MSC) model. A duplication event (red star) is drawn from a birth-death model over the species tree, in this case occurring in the population ancestral to species B and C. b) A gene tree at the daughter locus is drawn from the MSC. The duplication is placed on this gene tree at the time at which it occurred in the species tree (shown as a dotted line), choosing one of the two possible branches at random. The inserted DNA now takes on the gene tree branch labeled in red. c)-g) Possible ways that the parent and daughter loci coalesce to form the full gene tree. In all panels the star and red branch below it come from the daughter gene tree, and the blue tree comes from the parent gene tree. The coalescent process joining the two trees is shown as a red branch above the star. The trees in the bottom row are simply unfolded views of the full gene trees.
To illustrate how duplications and losses occur, and how relationships among loci are generated in dupcoal, we use a simplified example (a full description of the simulation algorithm can be found at https://github.com/meganlsmith/dupcoal). As a first step, a gene tree is generated via the MSC at the "parent" locus (Figure 1a); this tree exists regardless of whether subsequent duplications occur.Duplications and losses are then generated via a birth-death process on the species tree, with each event occurring at a specific time on a specific branch of the species tree (one such duplication is represented as a star in Figure 1a).For each duplication event, a gene tree is generated via the MSC at a "daughter" locus (Figure 1b).This tree represents the history of the locus at which the duplicated DNA will be inserted, and exists with or without this duplication event; for now we assume that this locus is unlinked from the parent locus.The mutation representing the duplication event is then placed on the gene tree at the daughter locus at the same time that it occurred on the species tree-if multiple branches of the daughter gene tree exist at this point in time on the relevant branch, the mutation is assigned to one at random (star in Figure 1b).If the mutation occurs on a discordant branch of the daughter gene tree (that is, one that does not exist in the species tree topology), this is referred to as "hemiplasy" or "copy-number hemiplasy."Such events are not simulated by SimPhy (Mallo et al. 2016), a popular platform for carrying out gene tree simulations with duplications and losses.Regardless of where exactly the mutation occurs on the daughter gene tree, the duplication now takes on the topology and branch lengths of the tree below this point (red branch in Figure 1b).
The process described thus far has produced two gene trees-one at the parent locus and one at the daughter locus-either or both of which may be discordant due to ILS.Because gene duplication is fundamentally a process by which an existing haplotype in a population at one locus is inserted into another location, the inserted DNA at the new daughter locus carries with it a history of coalescence with all of the other haplotypes at the parent locus.This coalescent process joining the two trees can cause additional discordance, even among gene trees that individually cannot be discordant (e.g. when the species tree has only two species).We illustrate this coalescent process here with a simple example.
Figures 1c-g show the five outcomes possible when joining the parent and daughter trees shown in Figures 1a and 1b. Figure 1c is the simplest outcome, as the daughter tree joins the parent tree on the same branch that the mutation occurred on (i.e. the one leading to species C); this is the tree that is assumed to be produced by duplication under standard DL reconciliation.Figures 1d and 1e represent cases where the daughter tree joins the parent tree on a different branch, each of which would require one NNI move to recover the tree topology in Figure 1c.Finally, Figures 1f and 1g represent more extreme cases, in which coalescence between parent and daughter further up the tree requires two NNI moves to recover the topology in Figure 1c.
It should be emphasized here that all of the topologies presented in Figure 1 involve only a single duplication and no losses, and neither the parent tree nor the daughter tree is discordant due to ILS. In fact, the examples in Figures 1d and 1e are not technically due to ILS either, since coalescence is occurring in the same population of the species tree (the ancestor of species B and C); in addition, in this scenario these two trees are actually equiprobable with the topology in Figure 1c. These simple examples demonstrate that the vagaries of the coalescent process lead to a thicket of different tree topologies.
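As a rough illustration of the event-drawing step described above, the following minimal sketch draws duplication times along a single species-tree branch and attaches each event to a randomly chosen lineage of the daughter gene tree. This is not the actual dupcoal code; the function and lineage names are purely illustrative placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)

def draw_duplication_times(branch_length, dup_rate):
    """Duplication times on one species-tree branch, assuming events arrive
    as a Poisson process with rate `dup_rate` per coalescent unit."""
    n_events = rng.poisson(dup_rate * branch_length)
    return np.sort(rng.uniform(0.0, branch_length, size=n_events))

def assign_to_lineage(candidate_lineages):
    """Attach an event to one of the daughter gene-tree branches alive at the
    event time, chosen uniformly at random; if the chosen branch is discordant
    with the species tree, the result is copy-number hemiplasy."""
    return rng.choice(candidate_lineages)

# Branch ancestral to B and C: length 1.0 coalescent units, duplication rate 0.3
for t in draw_duplication_times(branch_length=1.0, dup_rate=0.3):
    # In a real simulation the candidate lineages would be read off the
    # daughter-locus gene tree at time t; here they are placeholder labels.
    print(t, assign_to_lineage(["lineage_B", "lineage_C", "lineage_BC"]))
```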
Losses are also simulated on the species tree and placed on each gene tree. Loss events are placed on branches of gene trees in the same manner as duplications, with all branches below such mutations removed. Together, these processes make up the steps by which our simulator can generate full gene trees (i.e. ones with the parent and all daughter trees joined together). We used functions from DendroPy (Sukumaran and Holder 2010) to generate trees under the MSC and to manipulate parent and daughter trees; we used functions from NumPy (Harris et al. 2020) to draw coalescent times, duplication times, and loss times from exponential distributions.
reconcILS
Inputs and outputs. Given a rooted, bifurcating species tree, S, and a rooted, bifurcating gene tree, G, as input, reconcILS outputs the minimum number of duplications, losses, and NNIs required to reconcile G and S given event costs. Inferred events are all also assigned to nodes in the gene tree.
Main algorithm. An overview of reconcILS is given in Algorithm 1 (see Supplementary Materials). Our approach combines multiple sub-approaches that are commonly used in DL reconciliation and other phylogenetic inferences. As a first step, we use last common ancestor (LCA) mapping (Zmasek and Eddy 2001) to efficiently map each node in G to a node in S (Figure 2a).
Unlike in standard DL methods, LCA mapping here is used to initialize each round of our algorithm (and is sometimes used iteratively within a round) without representing the final reconciliation. Instead, this step identifies cases in which multiple nodes in the gene tree map to an identical node of the species tree. For instance, in Figure 2a nodes G1 and G2 in the gene tree both map to node S1 in the species tree. We therefore call S1 a "multiply mapped" node. Additionally, we refer to the branch in the gene tree bracketed by nodes G1 and G2 as a "discordant" branch (labeled K1 in Figure 2a); such branches do not exist in the species tree. The focus of our method is in determining whether discordant branches should be resolved via NNI and loss (Figure 2b) or via duplication and loss (Figure 2c). Choosing among events (NNI, duplication, and loss) is carried out greedily for each branch, with no consideration of a globally optimal set of events among all discordant branches in the gene tree. Consequently, after choosing a set of events for one branch, the process repeats itself until there are no multiply mapped nodes and no discordant branches. Duplications implied by multiply mapped tip nodes in the species tree are inferred via standard methods; inferences of losses are explained below.
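To make the mapping step concrete, here is a minimal sketch of LCA mapping and the detection of multiply mapped species-tree nodes. It is illustrative only (not the reconcILS implementation) and assumes both trees are written as nested tuples, with gene-tree tips labelled by the species they come from.

```python
from collections import defaultdict

def leaves(node):
    """Set of species names below a node; trees are nested tuples of names."""
    return {node} if isinstance(node, str) else leaves(node[0]) | leaves(node[1])

def species_mrca(species_tree, taxa):
    """Smallest species-tree clade containing every species in `taxa`."""
    node = species_tree
    while not isinstance(node, str):
        for child in node:
            if taxa <= leaves(child):
                node = child          # descend into the child that holds all taxa
                break
        else:
            return node               # taxa span both children: MRCA found
    return node

def lca_map(gene_tree, species_tree, mapping=None):
    """Map every gene-tree node to a species-tree node, keyed by the clade
    (frozenset of tip names) of the species-tree node it maps to."""
    if mapping is None:
        mapping = defaultdict(list)
    target = species_mrca(species_tree, leaves(gene_tree))
    mapping[frozenset(leaves(target))].append(gene_tree)
    if not isinstance(gene_tree, str):
        for child in gene_tree:
            lca_map(child, species_tree, mapping)
    return mapping

species = ('A', ('B', 'C'))
gene = ('A', (('B', 'C'), 'C'))   # hypothetical gene tree with an extra copy of C
counts = {clade: len(nodes) for clade, nodes in lca_map(gene, species).items()}
print(counts)
# The clade {'B', 'C'} receives two mappings (a multiply mapped internal node),
# and the tip {'C'} receives two mappings (implying a duplication at that tip).
```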
Here it may also be important to point out why we are using NNI rearrangements to model ILS. NNI has been implemented in reconciliation algorithms before, but in these cases it is used to correct gene trees that may have been inferred incorrectly (e.g. Chen et al. 2000; Chaudhary et al. 2011; Nguyen et al. 2013). We use NNI because the effects of ILS on gene trees can be modeled as a series of NNI rearrangements. For three species undergoing ILS, there are two possible discordant topologies, each of which is one NNI move away from the concordant topology. For four lineages undergoing ILS, there are 14 possible discordant topologies, each of which is between one and three NNI moves away from the concordant topology. (Note that it is the number of lineages undergoing ILS at a specific "knot" on the tree that matters, not the total number of tips in the tree.) Our algorithm takes advantage of this relationship between NNI and ILS in order to reconcile gene trees; other tree rearrangements (e.g. subtree-prune-regraft) could of course be used as an alternative, but would be much less efficient.
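The three-species case can be written out directly. The tiny sketch below (illustrative only, using nested tuples rather than reconcILS data structures) applies the two possible NNI moves across the internal edge of a rooted triplet and recovers exactly the two discordant topologies that ILS can produce.

```python
def nni_moves(node):
    """Two topologies one NNI move away across the internal edge of a rooted
    triplet written as (sister, (left, right))."""
    sister, (left, right) = node
    return [(left, (sister, right)), (right, (left, sister))]

concordant = ('A', ('B', 'C'))
print(nni_moves(concordant))
# [('B', ('A', 'C')), ('C', ('B', 'A'))]: the two discordant rooted triplets,
# each reachable from the concordant topology by a single NNI move.
```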
Traversing trees. The high-level description given above does not provide details about several important steps in our algorithm. In this section and the next two we provide some more information.
After the initial LCA mapping, our algorithm performs a preorder traversal of the species tree.For each species tree node, the algorithm checks how many gene tree nodes are mapped to it.If there are fewer than 2 mappings, the algorithm invokes a function that asks whether speciation or loss can be inferred.Nodes with 1 mapping are speciation events-these do not incur a cost.Nodes that have 0 mappings represent losses.Our function asks whether the parent node of a node with 0 mappings itself has 0 mappings.If not, a loss is inferred; if so, no event is recorded, because the loss must have occurred further up the tree.If there are 2 or more mappings the species tree node is multiply mapped, and a function to carry out "local reconciliation" is called.Local reconciliation is described in more detail in the next section, but here can simply be described as determining the events that have generated the multiple mappings to a single species tree node.
If there are only two adjacent nodes in the gene tree mapped to a node in the species tree (i.e. one discordant branch), then the method proceeds as described below.However, when there are more than two nodes mapped to an internal species tree node-and therefore more than one discordant branch in the gene tree-we must decide which branch to reconcile with the multiply mapped node first.We have found some cases where the order in which the branches are resolved matters, with only one of the choices leading to an optimal reconciliation.Algorithm 2 (see Supplementary Materials) describes how we choose which discordant branch to reconcile first.This algorithm calculates the "bipartition cost" of each possible branch-the number of bipartitions and losses induced by NNI moves applied to each branch (Figure S1).Such a calculation can be carried out quickly, and by choosing the branch with the lowest bipartition cost, we can more accurately proceed with reconciliation.
Once reconciliation for a multiply mapped node is carried out, our algorithm begins a new round. In each round we repeat LCA mapping and continue traversing the species tree, counting mappings at each node.
Choosing among events.In order for our algorithm to more efficiently and optimally choose among different evolutionary events (e.g. Figure 2b vs 2c), we have introduced the concept of a "local reconciliation" for a multiply mapped node.Here, a species tree node is locally reconciled if the number of gene tree nodes mapped to it, m, is reduced; for internal nodes of the species tree, this by definition also reduces the number of discordant branches in the gene tree.
The use of local reconciliation has several advantages.Most importantly, it provides a criterion for moving forward: we can determine when reconciliation has been achieved within a single round of the algorithm.It also makes it clear that there are some scenarios that can only be reconciled using duplication and not NNI (e.g. a multiply mapped tip node), in which case choosing events is straightforward.In a single round of our algorithm, we attempt to locally reconcile an identified multiply mapped node using both NNI+loss and duplication+loss (Figure 2).
Note that, given our definition, a species tree node can be locally reconciled by NNI while still remaining multiply mapped; e.g., if m = 3 is reduced to m = 2, the node still requires more reconciliation.In this case our algorithm will carry out up to m − 1 NNI moves within a single round, as long as each one results in a local reconciliation.For example, starting with m = 3, the first NNI move will locally reconcile a node if one of the two NNI topologies reduces the number of mappings to the species tree (we have never observed a case where both NNI topologies reduce this number).In this case, m = 2, and we now carry out another NNI move on the updated gene tree.Only one move needs to be examined at this point, however, since the back-NNI rearrangement would return us to the starting state.The cost of all of these NNI moves is then compared to the cost of duplication and loss at the end of the round.Duplication+loss naturally allows for more than one of each type of event for a local reconciliation, and is therefore not handled differently when m > 2.
As with all parsimony-based reconciliations, reconcILS requires a cost for each event type: one for duplications (C_D), one for losses (C_L), and one for NNIs (C_I). Reconciliations can be very sensitive to costs, such that maximum parsimony solutions under one set of costs can be very different than under another set of costs. Although there are few ways to determine costs independently (but see Discussion), there are some approaches for finding the Pareto-optimal costs within a reconciliation context (e.g. Libeskind-Hadas et al. 2014; Mawhorter et al. 2019). Here, as in most commonly used reconciliation algorithms, we allow the user to set costs. The default costs within reconcILS are set to C_D = 1.1, C_L = 1, and C_I = 1. (The default costs within DLCpar are C_D = 2, C_L = 2, and C_I = 1.) It is also important to note that DLCpar is specifying a cost of ILS, while reconcILS is specifying a cost for each NNI induced by even a single ILS event. This distinction will become important when we test the accuracy of each method below, as we will be inferring slightly different events with the two methods (we use the same costs in both methods for these comparisons).
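A minimal sketch of this cost comparison for a single discordant branch is shown below, using the default costs. It is illustrative rather than the reconcILS implementation; the tie-breaking behaviour (preferring duplication and loss) follows the rule described in the next paragraph.

```python
def pick_local_reconciliation(n_nni, n_loss_nni, n_dup, n_loss_dup,
                              c_dup=1.1, c_loss=1.0, c_nni=1.0):
    """Compare the cost of an NNI+loss history with a duplication+loss history
    and return the cheaper one (ties go to duplication+loss)."""
    cost_nni = n_nni * c_nni + n_loss_nni * c_loss
    cost_dup = n_dup * c_dup + n_loss_dup * c_loss
    if cost_dup <= cost_nni:
        return 'duplication+loss', cost_dup
    return 'NNI+loss', cost_nni

# The example of Figure 2: 1 NNI and 0 losses vs. 1 duplication and 3 losses
print(pick_local_reconciliation(n_nni=1, n_loss_nni=0, n_dup=1, n_loss_dup=3))
# ('NNI+loss', 1.0)
```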
Finally, at the end of a round, reconcILS will sum the cost of all events in order to choose the optimal local reconciliation. If there are equally optimal reconciliations, reconcILS will choose the duplication+loss solution.
Labeling a reconciled gene tree. A full reconciliation of a gene tree, G, and species tree, S, will have locally reconciled all multiply mapped nodes. The total number of duplications, losses, and NNIs required for this reconciliation is the sum of each event type across all branches; this number is reported by reconcILS. We also aim to label the gene tree with these events, identifying where each event occurred in history, as this is an important output of reconciliation algorithms. As reconcILS traversed the species tree finding optimal local reconciliations, it also recorded the number and location of all events. Labeling therefore consists of placing these events on the gene tree.
Traditional DL reconciliation algorithms label gene tree nodes as either D (for duplication) or S (for speciation); they may also label the location of losses, L. Algorithms including ILS should be able to additionally label nodes as I.However, there is no standard way to do this labelling, largely because there is no exact map between gene tree nodes and species tree nodes under ILS.In this sense, the labeling problem for algorithms including ILS differs fundamentally from labeling under DL and DTL algorithms.Here, we have created our own labeling system for ILS, one that we think is both understandable and easy to work with.
Our solution for placing the I label on nodes of gene trees in reconcILS is to denote both a daughter branch affected and the number of NNI moves that were associated with this branch.A few examples are shown in Figure 3, which also helps to illustrate why these choices were made.Figure 3a labels the gene tree from Figure 1f-with more complete tip names-with output from reconcILS.As can be seen, this tree has one node (denoted with a small black dot for clarity) labeled with both duplication and ILS.Further, the labeling tells you which daughter branch was reconciled with NNI (C 1 ), and how many NNI moves were required to reconcile it (2).Without such information, simply labeling the node with I would not have distinguished histories involving coalescent events along the A branch from those along the C 1 branch.
There are two important caveats to this approach, one perhaps more obvious than the other.First, a specific labeling is associated with a specific reconciliation.That is, if we had found a different optimal reconciliation for the tree in Figure 3a (for instance, one duplication and two losses), we would have had different labeled events.So, as might be expected, a labeling is always reported in the context of a particular reconciliation.
Second, for some ILS events it is not always clear which of the two nodes flanking a discordant branch to label with I: this is why there is no exact map between gene tree node labels and species tree nodes under ILS.For a discordant branch, we have arbitrarily set reconcILS to always choose the node closest to the tips as I.In some cases this choice should not matter very much.Consider the toy reconciliation in Figure 2b-should we have placed the NNI event on node G 1 or on node G 2 ?A branch undergoes NNI, not a node, so it is not clear which node flanking such a branch should be chosen; in this case the choice does not seem to imply any difference in evolutionary histories.In contrast, consider the reconciliations shown in Figure 3b and 3c-these represent the output of reconcILS applied to the gene trees in Figure 1d and 1e, respectively.While we have left the daughter tree branch in red for clarity, it is important to note that these trees are identical topologically, which is why reconcILS has labelled them in exactly the same way.The algorithm has no way to "see" the red branches, and has consequently simply picked the bottom-most node in both cases as the one to label as having undergone ILS and duplication, even though this is not precisely correct in Figure 3c.At the moment this is a limitation of reconcILS (and all methods attempting to label gene trees with ILS events), but we mention other possible solutions in the Discussion.
reconcILS also labels duplications and losses in the absence of ILS. Figure 3d shows how both individual duplications and losses are labelled: observe that neither requires a label with a specified number of events (like the I label), nor does either need to specify which branch they act on.reconcILS places the label L on any node in the implied gene tree that is not represented by a gene copy; for example, generic tip nodes A, C, and B in Figure 3d do not exist in the input gene tree itself, but are labelled with an L. reconcILS is able to do this because every duplication event is represented internally by generating a new copy of the species tree below the duplicated node-if losses occur on these implied trees, we can track their timing as well.Note that internal nodes can also be labelled with an L, so that the timing of loss events must be inferred by comparing the state of parent and daughter nodes (i.e.losses on internal branches will lead the descendant internal node and all of its daughter nodes to be labelled with an L).Finally, any node without a label can be inferred to be a speciation node.
Assessing accuracy using simulations
To test the accuracy and speed of reconcILS compared to other methods, we used our simulator (dupcoal) to generate gene trees evolving under ILS, duplication, and loss. Data were simulated on a species tree with topology (A:1.5,(B:0.5,C:0.5):1); branch lengths are measured in coalescent units. Given this tree, we expect single-copy gene trees to be concordant approximately 75% of the time. Simulated rates of duplication and loss were λ = 0.3 and µ = 0.3, respectively, except where stated otherwise. The default costs of reconcILS were used for all analyses of simulated data for all methods (C_D = 1.1, C_L = 1, C_I = 1). Note that reconcILS, dupcoal, and ETE3 are written in Python 3, while DLCpar is written (and was run) in Python 2.
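The stated 75% expectation follows from the standard MSC result for a rooted triplet, where the probability of concordance depends only on the internal branch length in coalescent units; a quick check:

```python
import math

# Probability that a single-copy gene tree matches the species tree under the
# MSC for a rooted triplet: 1 - (2/3) * exp(-t), with t the internal branch
# length in coalescent units (t = 1.0 for the (B,C) ancestor used here).
t = 1.0
p_concordant = 1.0 - (2.0 / 3.0) * math.exp(-t)
print(round(p_concordant, 3))   # 0.755, i.e. roughly 75% concordant gene trees
```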
Accuracy of standard DL methods. As a simple test of DL reconciliation algorithms, which do not allow for ILS, we used the DL method implemented in ETE3 (Huerta-Cepas et al. 2016). We ran simulations with and without duplications, but always with the loss rate set to 0; this allows us to see incorrectly inferred losses more easily. As expected (Hahn 2007), in simulated trees with discordance due to ILS but no duplication or loss, ETE3 always inferred 1 extra duplication and 3 extra losses (Figure S2a). In cases with discordance due to both coalescence and duplication, but no loss, ETE3 again inferred a slight excess of duplications, but a large excess of losses (Figure S2b).
Accuracy of reconcILS and DLCpar. We ran two reconciliation methods that allow for ILS, reconcILS and DLCpar (Wu et al. 2014), on 1000 gene trees simulated as described above. Overall, the accuracy of both methods was high and similar for inferences of ILS (Figure 4a) and duplication (Figure 4b). The correlation (as measured by Spearman's ρ) between simulated and inferred ILS events was 0.719 for reconcILS and 0.580 for DLCpar (Figure 4a). It is important to note that these two methods are inferring slightly different kinds of events, which we account for in these results. reconcILS infers the number of NNI events needed to reconcile a gene tree; for example, the gene tree in Figure 3a requires two NNI events. The results reported for reconcILS therefore compare simulated NNI events vs. inferred NNI events. In contrast, DLCpar infers the number of ILS events needed, regardless of the number of NNI rearrangements that occur within such an event. The tree in Figure 3a only counts as one ILS event using this method. Results reported for DLCpar therefore compare simulated vs. inferred ILS events, to ensure that a fair comparison can be made. In both cases the methods are doing well in their respective inferences, but with a slight boost in accuracy for reconcILS.
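For readers who want to reproduce this style of comparison, the correlation between per-tree simulated and inferred event counts can be computed as in the sketch below (the counts shown are made-up placeholders, not values from the study):

```python
from scipy.stats import spearmanr

simulated_nni = [0, 2, 1, 0, 3, 1]   # one entry per simulated gene tree
inferred_nni = [0, 2, 1, 1, 2, 1]    # counts returned by the reconciliation method

rho, pvalue = spearmanr(simulated_nni, inferred_nni)
print(f"Spearman's rho = {rho:.3f}")
```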
Inferences about duplication events were also highly accurate and highly similar between reconcILS and DLCpar (correlations of 0.894 and 0.892, respectively).The slight undercounting in both methods is due to cases in which duplication events are immediately followed by losses (the same can happen with ILS events followed by loss).Such events cannot be inferred by any parsimony-based algorithm, and are likely not inferrable by probabilistic methods either (e.g.Rasmussen and Kellis 2012).There is quite a difference between reconcILS and DLCpar in the inference of losses (correlations of 0.729 and 0.334, respectively; Figure 4c).Upon closer inspection, the lower accuracy in DLCpar is due to many loss events of one particular type not being counted.Specifically, DLCpar does not count losses that occur on either of the branches attaching to the root node of the species tree, as these will have occurred above the root of the resulting gene tree.This means that, for instance, DLCpar does not count the loss that occurred along the A branch in gene trees with the topology (B,C).(Gene trees with only a single tip, i.e. the topology (A), do not run at all in DLCpar, but were not counted here.)We confirmed this behavior on larger trees in DLCpar, as well as in all trees in ETE3.The difference in counting of losses among algorithms arises because reconcILS explicitly traverses the species tree, while the other methods do not.This enables our method to track losses that occur above the root of the input gene tree, but below the root of the input species tree.However, there may be some cases in which the behavior of DLCpar is preferred.For instance, since gene families have to originate on some branch of the species tree (e.g.Richter et al. 2018), forcing reconciliation algorithms to infer losses on branches that pre-date this appearance would be incorrect.It is not clear which approach to counting losses one should prefer when analyzing real data.
Analysis of primate genomes
To further demonstrate the power and accuracy of reconcILS, we applied it to a dataset of gene trees inferred from 23 primate genomes (Vanderpool et al. 2020; Smith et al. 2022). These genomes all come from the suborder Haplorhini, which includes all primates except the lemurs and lorises (i.e. suborder Strepsirhini).
Among these genomes, we analyzed two sets of gene trees that were previously inferred. First, we considered 1,820 single-copy orthologs ("Single-copy clusters" dataset in Smith et al. 2022). This dataset is not expected to contain any duplicates, though it does contain losses. However, because there are high levels of ILS among the primates sampled here, the dataset serves as an ideal comparator for reconciliation approaches with and without ILS. Second, 11,555 trees containing both orthologs and paralogs were used to infer duplications, losses, and ILS among the primates ("All paralogs" dataset in Smith et al. 2022). When using reconcILS, gene trees with polytomies were randomly resolved, and costs were set to C_D = 2, C_L = 2, and C_I = 1. We used both reconcILS and ETE3 to analyze the two datasets, assigning inferred events to each branch of the primate tree.
Fig. 5. Results from primates. a) Analysis of 1,820 single-copy orthologs. Both reconcILS and ETE3 were used to analyze the data, with duplication, NNI, and loss events mapped to each branch of the species tree. On each branch, the bar plots show results from reconcILS on top, with results from ETE3 mirrored below. ETE3 cannot infer NNI events, so this bar is always set to 0. b) Analysis of 11,555 gene trees containing both orthologs and paralogs. Results are presented in the same manner as in panel a. However, note that the height of all bars is normalized within a panel, so that heights of bars within a tree can be compared among branches, but the heights of bars between the trees cannot be compared. Figure S3 shows the absolute counts of all events from both panels, and Figure S4 shows the species tree with branch lengths in substitution units rather than as a cladogram, as shown here.
Single-copy ortholog dataset.Results obtained from analysis of only the single-copy orthologs are shown graphically in Figure 5a (see Figure S3a for numerical values on each branch).The pattern that stands out from this graph is that ETE3 is vastly overestimating the number of duplications and, especially, the number of losses.Because ETE3 does not model ILS, all such events must be reconciled by one duplication (placed above the discordant branch) and three losses (placed below the discordant branch; Hahn 2007).We can see this effect in the data by looking at the correlation between the gene concordance factor (gCF) estimated for a branch with the number of duplicates inferred on its parent branch.Gene concordance factors quantify the fraction of loci that contain the same branch as the species tree; when single-copy genes are analyzed, low gCFs can indicate discordance due to ILS (Baum 2007;Lanfear and Hahn 2023).The correlation between gCFs for these data (estimated in Smith et al. 2022) and duplications inferred by ETE3 on parent branches is ρ = −0.69.Lower gCFs lead to higher numbers of duplicates, as expected when ILS is not accounted for-this problem is not unique to ETE3, but should be expected for all standard DL and DTL approaches.Such errors can lead to incorrect inferences of bursts of duplication and loss.
Using the same reasoning, we can ask about the accuracy of reconcILS by examining the correlation between NNI events inferred by our algorithm and gCF values for single-copy genes.High NNI counts indicate more ILS, and therefore this measure should be negatively correlated with gCF values for the same branch.Indeed, the correlation between NNI counts and gCF values is ρ = −0.86,demonstrating the high accuracy of reconcILS.However, we can also see that reconcILS does sometimes infer a small number of duplicates (Figure 5a; Figure S3a).While it is possible that some of the single-copy genes we analyzed represent so-called pseudoorthologs-and therefore do contain actual duplication events-these are expected to be extremely rare (Smith and Hahn 2022).Instead, these false duplicates are likely due to ILS among four or more lineages.Recall that, although our algorithm has been developed to account for rearrangements due to the coalescent process, ILS among four lineages can result in gene trees that are three NNI moves away from the species tree; ILS among more lineages will produce trees that are even further away.Therefore, multiple NNI events are sometimes simply more costly than duplication and loss, even when ILS is the true cause of discordance.
Full gene tree dataset.We also examined patterns of ILS, duplication, and loss for the entire primate dataset.The results from the full set of gene trees (Figure 5b; Figure S3b) largely mirror the results from the single-copy orthologs (which are a subset of the entire dataset).Part of the apparent similarity is simply due to normalizing the two datasets separately, but several non-graphical factors also contribute to the pattern.First, although there are many true duplicates in the full dataset, the vast majority of gene trees only have events on tip branches: 7,693 out of 11,555 gene trees have duplicates specific to one species (Smith et al. 2022).Another ∼1,000 trees have duplicates specific to two species.Combined with the 1,820 single-copy genes, it becomes clear that there are in fact relatively few trees with duplicates on internal branches even in the full dataset.
Second, errors in inferring duplications and losses due to the coalescent process still dominate the results from standard DL reconciliation (i.e.ETE3), possibly even more so than in the single-copy dataset.Because ILS (and introgression) can affect all gene trees, trees with duplicates can experience twice as much discordance when there are twice as many branches.One example of this can be seen among the Papionini (macaques, baboons, geladas, and mandrills): ETE3 has to infer >25,000 losses on each of the tip branches in this clade, and more than 10,000 losses on most of the internal branches (Figure 5b; Figure S3b).There are also many thousands of duplicates inferred on internal branches of this subclade, though these species have the same number of total protein-coding genes as all the others analyzed here (Vanderpool et al. 2020).Although the inferred duplications and losses in this group were also relatively higher in the single-copy dataset, it is clear that even a small number of true duplicates can have an outsized effect on methods that cannot account for ILS.
The analysis using reconcILS is also highly similar between the two datasets, again largely because most duplicates are confined to tip branches.reconcILS is able to deal with discordance due to the coalescent, even in the presence of duplication: the correlation between NNI events inferred on each branch of the species tree from the single-copy orthologs and the full dataset is ρ = 0.99.Although the numbers of events are of course larger with the larger dataset (Figure S3b), the effect of ILS on any given branch of the species tree is experienced by all gene trees.Given its ability to deal with ILS, duplication events across the primate tree will be more reliably inferred with reconcILS.
Discussion
We have introduced a new reconciliation algorithm and associated software package (reconcILS) that considers discordance due to duplication, loss, and coalescence.To better demonstrate the accuracy of our method, we have also developed a new simulation program (dupcoal) that generates gene trees while allowing these processes to interact.Used together, our results demonstrate that reconcILS is a fast and accurate approach to gene tree reconciliation, one that offers multiple advantages over alternative approaches, even those that also consider ILS (e.g.DLCpar; Wu et al. 2014).Nevertheless, there are still several caveats to consider, as well as future directions to explore.
Despite our demonstration of good performance in practice, the biggest caveat to our approach is the lack of a theoretical proof that our method finds most parsimonious reconciliations.While such proofs often require strict assumptions-which can be relaxed in a simulation setting-they are very helpful in providing confidence across untested parameter space.However, our approach also differs fundamentally from DL and DTL algorithms with such proofs, as there is not a one-to-one labeling between gene trees and species trees when ILS is considered.As mentioned above, there is often ambiguity in how to label nodes in gene trees affected by ILS and duplication (e.g.Figures 3b and 3c).This is due in part to how reconcILS uses NNI rearrangements-it is not clear which node to pick if a branch is rearranged-and in part to the intrinsic lack of information contained only within tree topologies when coalescence and duplication interact.The difficulty in labeling is not confined to this work: DLCpar does not label nodes, and RecPhyloXML, a format for representing reconciled trees, does not include a way to label ILS events at all (Duchemin et al. 2018).
Nevertheless, labeling gene trees serves multiple important functions, and is often interpreted as biological truth.For example, multiple methods use inferred gene tree labels to identify orthologous genes (genes related by speciation events; Fitch 1970).The general approach taken is to label nodes using reconciliation or similar approaches, treating any node not labeled with a D as a speciation node; genes whose common ancestor is a speciation node are subsequently defined as orthologs.Both standard DL reconciliation and so-called "overlap" methods (e.g.Huerta-Cepas et al. 2007;Emms and Kelly 2019) will consistently label the top-most node flanking a discordant branch with a duplication event.This means that, in cases like the one shown in Figure 3b, these methods will label the wrong node with a D, and subsequently identify the wrong pair of genes as orthologs.(Figure 3c would be labeled correctly; recall that this outcome has an equal probability to the one in Figure 3b in the scenario considered here.)Cases such as these have been referred to as "unsorted homologs" (Mallo et al. 2014), and will occur any time duplication and coalescence interact, either before or after a speciation event.They can also occur even if only two species are analyzed (e.g. if only species B and C were considered in Figures 3b and 3c), which means that approaches that simply collapse short branches to avoid ILS will also be affected.Although reconcILS currently only labels one node with a D, in the future we could allow for multiple possible labelings in order to emphasize the inherent ambiguity of some labels.
The algorithm used by reconcILS deals at least in part with an additional interaction between duplication, loss, and coalescence: hemiplasy.Hemiplasy is the phenomenon by which a single mutational event on a discordant gene tree will be interpreted as multiple events ("homoplasy") when reconciled to the species tree (Avise and Robinson 2008;Hahn and Nakhleh 2016).In analyses of nucleotide substitutions, generic binary traits, and even continuous traits, hemiplasy requires that mutations have occurred on a discordant branch of a gene tree (Mendes and Hahn 2016;Guerrero and Hahn 2018;Hibbins et al. 2023).In the context of either duplication or loss, such mutations would have to occur on a discordant branch of the daughter gene tree to lead to "copy-number hemiplasy."Unfortunately, such events are still not accurately inferred by reconcILS: we have found that the algorithm consistently proposes an extra loss when hemiplasy occurs (dupcoal can track and report simulated hemiplasy events, allowing us to easily make such comparisons).However, there is also a broader definition of copy-number hemiplasy in use: a duplication or loss that does not fix in all descendant lineages (Rasmussen and Kellis 2012;Wu et al. 2014).While this definition encompasses the stricter one given above, it also includes cases such as the one shown in Figure 1.Because the mutation in that figure occurs in the common ancestral population of species B and C, but was only inherited by lineage C, this scenario would be considered to be hemiplasy by the broader definition (although it technically does not involve ILS if the outcome resembles the gene trees in Figure 1c-e, since coalescence is happening in the same population).The number of each type of event in such cases is accurately inferred using reconcILS; DLCpar assumes that this type of hemiplasy does not occur.Note, however, that if the duplication event (the star in Figure 1a) occurred a few generations later-after the split between species B and C, along a tip lineage-the scenario would not be considered hemiplasy by this broader definition, but would still produce the same range of gene trees (i.e. Figure 1c-g).(It would now involve ILS, however.)Regardless of the exact definitions and terms used, it should be clear that duplication, loss, and coalescence can interact to produce a wide array of topologies not considered by most reconciliation approaches.
One relatively straightforward improvement that could be made to reconcILS is in the cost of different events. As discussed earlier, it is difficult to determine biologically realistic costs for duplication, loss, and ILS. One approach to determining duplication and loss costs would be to infer the number of such events using an alternative, non-reconciliation approach (e.g. Mendes et al. 2020), setting the cost in reconcILS to be inversely related to the inferred number of events of each type. Varying rates of duplication and loss can also be inferred along different branches of the species tree, and so the costs of these events could vary commensurately along the tree. Similarly, we can envision a simple way to scale the costs of NNI with the length of species tree branches: this is because the probability of discordance due to ILS is inversely proportional to internal branch lengths on a species tree (Hudson 1983). We could therefore make NNI events on longer branches of the species tree more costly, which would properly weight the relative probability of such events locally on the tree.
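One possible realization of this idea, sketched below, sets the NNI cost proportional to the negative log-probability of ILS-driven discordance on the relevant internal branch; this is an illustration of the proposal above and not a feature of the current reconcILS release.

```python
import math

def nni_cost(internal_branch_length, base_cost=1.0):
    """NNI cost that grows with species-tree branch length: discordance due to
    ILS has probability (2/3) * exp(-t) on a branch of t coalescent units, so
    rarer discordance (longer branches) is penalized more heavily."""
    p_discord = (2.0 / 3.0) * math.exp(-internal_branch_length)
    return base_cost * -math.log(p_discord)

for t in (0.1, 0.5, 1.0, 2.0):
    print(t, round(nni_cost(t), 2))
```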
There are also several extensions that could be made to dupcoal, our new simulation engine. Currently, dupcoal draws a daughter gene tree from the multispecies coalescent (MSC) at random: this implicitly assumes that the parent and daughter trees are completely unlinked. It should be straightforward, however, to draw the daughter tree at a given recombination distance from the parent tree, ranging from fully linked to fully unlinked. In addition, though the software currently draws gene trees from the MSC, we could also include introgression as a source of discordance by drawing trees from the multispecies network coalescent (Meng and Kubatko 2009; Yu et al. 2011). In this case, users would simply input a species network to simulate from, rather than a species tree. We should also note the differences between the current version of dupcoal and alternative simulators. As mentioned previously, SimPhy (Mallo et al. 2016) does not simulate hemiplasy of any kind, as it uses the same three-tree model of Rasmussen and Kellis (2012) and Wu et al. (2014). However, this method does include gene transfers, which are not simulated by dupcoal. The MLMSC simulator of Li et al. (2021) simulates both hemiplasy and transfers. dupcoal is therefore most similar to this method, with two small differences. First, dupcoal does not change the rate of duplication and loss above speciation events to account for ancestral polymorphism (cf. Gillespie and Langley 1979); instead, users can simply extend all nodes 2N generations into the past to do so (where N is the effective population size). Second, dupcoal does not account for the possibility of further events occurring while duplications and losses are still polymorphic in a population. In order to streamline the simulation process, we have chosen to ignore this squared term.
Finally, we envision multiple ways in which the reconcILS algorithm could be used in conjunction with other methods. For instance, ASTRAL-Pro (Zhang et al. 2020) infers a species tree from gene trees containing paralogs by first labeling nodes in each gene tree as either duplication or speciation. The authors of this software rightfully recognize the problems with labeling discussed here, but nevertheless their labeling step might benefit from methods that include ILS. Likewise, GRAMPA (Thomas et al. 2017) allows for reconciliation of polyploidy events, but does not include coalescence as a source of discordance; their algorithm could easily accommodate this extension. There are also multiple approaches that can be used to "correct" gene trees via reconciliation (e.g. El-Mabrouk and Ouangraoua 2017; Christensen et al. 2020). These and many other tasks might benefit from approaches that model duplication, loss, and coalescence in a single framework.
Fig. 2. Summary of reconcILS. a) The initial step carries out LCA mapping from the gene tree to the species tree. Here, nodes G1 and G2 in the gene tree both map to node S1 in the species tree; we therefore refer to S1 as a multiply mapped node. Additionally, G1 and G2 together define a discordant branch, K1, in the gene tree. For clarity, all mappings involving tip nodes have been excluded. b) The algorithm attempts to reconcile the discordant branch using NNI and loss. Here, reconciliation can be carried out with 1 NNI and 0 losses. c) The algorithm attempts to reconcile the discordant branch using duplication and loss. Here, reconciliation can be carried out with 1 duplication and 3 losses. reconcILS must decide between the reconciliations in b) and c) for every discordant branch.
Fig. 3. Labeling reconciled gene trees. a) One possible reconciliation of the gene tree from Figure 1f labeled by reconcILS. The single ILS node (I) has been labeled with a branch that has been rearranged by NNI (C1), the number of NNIs it has undergone (2), and the fact that it also represents a duplication (D). b) The gene tree from Figure 1d labeled by reconcILS. This tree is correctly labeled, but note that it is the same topology as in panel c (see main text for further explanation). c) The gene tree from Figure 1e labeled by reconcILS. Note that the program has labeled the bottom-most node flanking the discordant branch with D and I, although that is not the true history. d) A gene tree with only duplications and losses (L) labeled by reconcILS. Internal branches could also be labeled with L, though none are shown here.
Fig. 4. Accuracy of reconcILS and DLCpar. a) Accuracy of inferred ILS events. Orange circles represent results from reconcILS. Blue circles represent results from DLCpar. Circle size is proportional to the number of reconciliations with each value, with jitter applied for clarity. Lines represent best-fit regressions (as calculated in Seaborn; Waskom 2021), plus confidence intervals. b) Accuracy of inferred duplication events. c) Accuracy of inferred loss events. | 2023-11-08T14:15:05.276Z | 2024-05-16T00:00:00.000 | {
"year": 2024,
"sha1": "615388f621c6f078f0f51d365a6621622058e4c9",
"oa_license": "CCBYNC",
"oa_url": "https://www.biorxiv.org/content/biorxiv/early/2023/11/05/2023.11.03.565544.full.pdf",
"oa_status": "GREEN",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "2e576d66e5b7dea78ed7ca6efa89670147344c4c",
"s2fieldsofstudy": [
"Biology",
"Computer Science"
],
"extfieldsofstudy": [
"Biology"
]
} |
243499069 | pes2o/s2orc | v3-fos-license | A Coherent Accumulation Detection Method Based on SA-DPT for Highly Maneuvering Target
The emergence of highly maneuverable weak targets has led to serious degradation, or even failure, of traditional radar detection. In this paper, a coherent accumulation algorithm based on the combination of the scaling algorithm (SA) and the discrete polynomial-phase transform (DPT) is proposed with respect to both calculation burden and detection performance. The method first performs a small number of speed-parameter compensations of the transmitted signal based on SA and selects the effective delay range of the target signal; secondly, for the extracted echo signal, the DPT algorithm is used to estimate the target speed and acceleration. The paper analyzes the influence of the time delay range, the compensation speed and the delay unit on the detection performance, and gives the improvement in output SNR and the number of complex multiplications. Finally, experimental data verify the effectiveness of the proposed algorithm in terms of accumulation gain and parameter estimation. The method provides a sub-optimal estimate, requiring much less computation than joint search methods while performing better than cross-correlation methods.
With the development of aircraft technology, more and more aircraft have the characteristics of high speed, high maneuverability and stealth, which brings great challenges to conventional radar detection. For the same observation time, the echo accumulation energy of a highly maneuverable weak target is lower than that of a conventional target.
To improve the detection probability of targets, the common method is long time accumulation, which includes coherent accumulation and noncoherent accumulation. Among them, the accumulation gain of noncoherent accumulation method is low, this paper mainly studies the coherent accumulation method.At present, there are mainly two kinds of methods for long time coherent accumulation [1][2][3][4][5][6], one is joint search methods based on parameter, which first estimates the range of target speed, acceleration and other parameters , and then detects the target signal through two-dimensional or multi-dimensional parameter search. For example, [7] studied the parameter compensation methods for speed and acceleration based on SA,improving the probability of target detection. [8] studied the parameter estimation methods for distance, speed, and acceleration based on Maximum likelihood estimator (MLE), and analyzed the parameter errors using the Cramer-Rao Lower Bound (CRLB). [9]- [12] studied the compensation method for range migration (RM) based on the keystone (KT) method, but there were speed ambiguity numbers for the target with higher speed. [13]- [16] studied a highly maneuvering target detection methods based on Radon Fourier transform (RFT), which used Radon transform and FT to complete RM compensation and Doppler frequency migration compensation, respectively. [17] studied the detection method for highly maneuvering targets based on Three-dimensional scaled transform (TDST), whose output signal-to-noise ratio (SNR) had improved by more than 10 dB compared with that of the MTD method . [18] studied the RM compensation based on Radon transform and Doppler frequency migration compensation based on Lv's distribution (LVD), in which the detection performance is close to RFT. [19]- [22] studied the coherent accumulation method for high speed target based on FRFT. [23]- [24] studied the sparse Fourier transform (SFT) method to accelerate the speed compensation and spectrum computation of high speed targets.
The above methods have high accumulation gain and good detection performance for low-SNR signals, but they have a large calculation burden and poor real-time performance. In order to reduce the amount of computation, the other class comprises cross-correlation parameter estimation methods based on order-reduction processing. This kind of method first performs cross-correlation processing on the target echo signal to realize the decoupling operation between speed and accumulation time, as well as dimension-reduction processing. On this basis, it uses a two-dimensional FFT to detect the target. For example, [25]-[28] studied highly maneuvering target detection methods based on the discrete polynomial-phase transform (DPT), and [29]-[33] studied highly maneuvering target detection methods based on the adjacent cross-correlation function (ACCF). The second kind of method has the advantage of lower computational complexity than the first kind, but it requires a higher input SNR and is therefore not suitable for detecting weak target signals.
Considering both the improvement of detection performance and the reduction of calculation burden, this paper focuses on analyzing the reasons why the DPT method reduces the detection ability for weak target signals. In order to further improve the detection performance of the DPT method, a coherent accumulation detection method based on the combination of SA and DPT for highly maneuvering targets is proposed. The paper is structured as follows: the first part overviews the long-time coherent accumulation methods commonly used for weak signal detection and analyzes their detection performance and calculation burden; the second part gives the maneuvering target echo model based on LFM radar; the third part proposes a long-time coherent accumulation method based on the combination of SA and DPT; the fourth part gives the implementation process of the proposed method, analyzes the effects of the signal processing range, the speed compensation factor and the delay unit on the detection performance, and compares the complex multiplication operations of the different methods; the fifth part uses experimental data to verify the effectiveness of the proposed method; the sixth part summarizes and discusses the full text.
Highly maneuvering target echo model
Assuming that the detection system is an LFM pulse radar, the frequency spectra of the transmitted signal and of the received signal after down-conversion can be expressed as in Equations (1) and (2) [19]. Here $A_0$ is the amplitude of the signal, $T_0$ is the pulse width, $R_0$ is the initial range of the target from the radar, $v_0$ is the target radial speed, $a$ is the target radial acceleration, $T$ is the pulse repetition period, $c$ is the velocity of light, $f_c$ is the signal carrier frequency, and $\lambda = c/f_c$ is the transmitted signal wavelength; the frequency modulation rate and the first-order term of the Doppler frequency change also appear in these expressions.
Multiplying Equation (1) by Equation (2) gives the frequency-domain and time-domain signals after pulse compression of the echo signal, shown in Equations (3) and (4), in which the time-bandwidth product (the product of the pulse width and the bandwidth) appears and $B$ is the bandwidth of the input signal.
It can be seen from Equations (3) and (4) that exponential terms cause the target envelope to produce time-delay migration and Doppler frequency migration. Equations (7) and (8) show that, for the same accumulation time, the greater the target speed and acceleration, the more serious the diffusion of the target energy, and therefore the detection probability falls significantly. The time-delay migration caused by speed and the Doppler frequency migration caused by acceleration are the main effects considered in this paper.
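Because the intermediate expressions are not reproduced here, the standard LFM relations below are given only as an illustrative sketch of these two migration effects; they use the symbols defined above but are textbook forms, not necessarily the exact equations of this paper. For a chirp of bandwidth $B$ integrated coherently over $N$ pulses of repetition period $T$, the range walk stays within one range cell and the Doppler shift stays within one Doppler resolution cell when, approximately,

$$v_0\,N T < \frac{c}{2B} \qquad \text{and} \qquad \frac{2\,a\,N T}{\lambda} < \frac{1}{N T},$$

so larger speeds and accelerations, or longer accumulation times, spread the target energy across range and Doppler cells unless they are compensated.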
SA-DPT
In order to overcome the energy expansion caused by speed and acceleration, a coherent accumulation method based on SA-DPT is proposed in this paper. The SA method can store the speed compensation factor and complete the speed compensation processing of the transmitted signal in advance, but the compensation processing mentioned in [2,[5][6][7][8] can complete the speed compensation after receiving the echo signal. Therefore, the SA method can improve the operation efficiency.
In addition, the delay range of the target after SA compensation is extracted; on this basis, the speed and acceleration information of the target can be obtained by using DPT. Compared with the DPT processing in [27][28][29], the proposed method improves the detection probability of weak signals. At the same time, the operations of the proposed method are mainly a one-dimensional parameter search and two-dimensional FFT processing, so its amount of computation is much less than that in [5][6][7][8]. The proposed method achieves a compromise between computational complexity, detection performance, and parameter estimation accuracy.
Speed compensation method based on SA
(1) Speed compensation of the transmitted signal
In order to correct the range migration of the echo envelope in Equation (4) and improve the efficiency of the method, the SA method is used to compensate the speed of the transmitted signal. Equations (1)-(7) suggest that the factor causing the target echo envelope to migrate is an exponential term; therefore, after speed compensation is applied to Equation (1), its expression can be written as Equation (10). Substituting Equation (10) into Equation (1), Equation (4) can be rewritten as Equation (11). After MTD processing of Equation (11), it can be approximately expressed as Equation (12). In Equation (12), the closer $b_0$ approaches $b'_0$, the smaller the RM of the echo envelope is, which is more conducive to the detection of the target. Next, the selection method for the speed parameter $\tilde{v}$ in Equation (12) is given.
(2) Selection of $\tilde{v}$
If the envelope migration is to be less than one range cell, the relationship among the speed compensation factor, the target speed, and the accumulation time $NT$ in Equation (12) can be expressed as in Equation (13). From Equation (13), letting the maximum target speed be $v_{\max}$, the range of $\tilde{v}$ then follows, and Equation (15) gives the compensation speed range and step interval. In order to quickly acquire the compensation range of the speed, the speed is searched using the binary search technique; the $Q$ in Equation (15) can then be rewritten accordingly.
The target detection and parameter estimation based on DPT
Equation (12) only completes the speed compensation processing, and when the target acceleration becomes larger, the accumulated energy of the echo signal in Equation (12) diffuses seriously. When the first kind of method described in the introduction is used for processing, the amount of calculation is large and the real-time performance is poor. Therefore, considering the advantages of both the first and the second kind of method, this paper proposes to use Equation (12) first to obtain the time-delay estimation range of the target, then to apply DPT to the signal within this range to complete the target detection, and finally to obtain information such as speed and acceleration.
Estimation of delay range
After finding the maximum value in Equation (12), the target delay position can be estimated from the peak corresponding to the parameter; the resulting estimate is the time delay of the initial position of the target, and the corresponding range is given in Equation (18), where $b$ can be set according to Equation (7). After selecting the target signal in Equation (11) using Equation (18), Equation (11) can be modified as Equation (19).
The target detection based on DPT
(1) Speed and acceleration estimation based on DPT
If the IFFT transformation is performed on Equation (19), the frequency-domain expression of the signal can be obtained as Equation (20). Considering the influence of noise, cross-correlation processing of Equation (20) by column gives Equation (21), which contains Gaussian noise with zero mean and variance $\sigma^2$ and in which the delay unit is taken relative to $N$. Performing IFFT and FFT processing on the corresponding terms in Equation (21) then yields Equation (22). According to the results of Equations (10) and (22), the values of the target velocity and acceleration can be approximately expressed as in [28], where $m$ and $n$ represent the time-delay position and the Doppler frequency position in Equation (22), respectively, and $\tilde{v}$ is the compensation value of the speed in Equation (10).
According to the estimates $\hat{v}'$ and $\hat{a}'$, the compensation function is constructed and substituted into Equation (3), and parameters such as the distance and speed of the target can then be obtained by using the MTD method. The implementation process of the above method is shown in Figure 1. The output SNR can be expressed as in [26]; combining Equations (25) and (26), and comparing the output SNR in Equation (27) with the output SNR of DPT in [26,28], gives a relation in which the noise variance of DPT in [26] is connected with $\sigma^2$ in this paper through Equation (29). Substituting Equation (29) into Equation (28), Equation (30) is obtained. According to Equation (30), the smaller the selected time-delay range, the more obvious the improvement of the output SNR, which is more conducive to the detection of weak signals.
Implementation process of the proposed method
In order to clearly understand the execution process of the proposed method, the relevant processing results are given in Figure 2. Suppose that the carrier frequency of the radar is 3 GHz, the pulse repetition period is 3 ms, the number of accumulated pulses is 128, the signal bandwidth is 5 MHz, the baseband sampling rate is 10 MHz, the target is a point target, the maximum radial speed is 1000 m/s, the maximum radial acceleration is 10 m/s², the SNR is -30 dB, and the time-delay range parameter is set to 10. Figure 2(a) shows the MTD results, and Figure 2(b) shows the direct DPT results in [26], from which it can be seen that the target signal is difficult to find. Figure 2(c) shows the MTD results based on SA, from which it can be seen that the SNR of the echo signal is higher than that of Figure 2(a), but the energy diffusion is still serious, and it is difficult to accurately judge the target. Figure 2(d) is the target signal extracted by the proposed method, in which the delay range is the area between the two red lines in Figure 2(c). Figure 2(e) shows the DPT result of the signal in Figure 2(d), and its output SNR is greatly improved compared with Figure 2(b), so that the target can be clearly found.
on detection performance
In order to analyze the influence of ' on detection performance, the simulation parameters are set as in 4.1. Figure 3 shows the comparison of detection performance under different ' , from which it can be seen that the smaller ' is, the better the detection performance. When the detection probability is 80%, the requirement for the input SNR when ' is 20 is about 5dB lower than that when ' is 100, which is basically consistent with the theoretical value of the Equation (30), which is more conducive to the detection of weak signals.
The influence of N t on detection performance
Let the simulation parameters be the same as in 4.1 , ' 20 . Figure 4 shows the detection performance analysis of the proposed method with different N t , from which it can be seen that the larger N t is, the greater the required input SNR with the same detection probability, but the smaller the relative estimation error of speed and acceleration is. On the contrary, the smaller N t is, the smaller the required input SNR is with the same detection probability, but the larger the relative estimation error of speed and acceleration is. In order to facilitate the detection of weak signals, N t can be set as 0.3~0.5 .
The influence of v on detection performance
Let the simulation parameters be the same as in 4.1 , ' 20 . Figure 5 shows a comparative analysis on the detection performance of the proposed method with different ṽ . In addition, it can be seen from Figure5 (a) that there is little difference in detection performance when the speed compensation value is 500m / s and 1000m / s, which shows that the proposed method has great adaptability to the speed compensation value, which can improve the execution efficiency of the method. Under the same conditions, the speed search times of the proposed method are about 30% less than that of MLE. Figure5 (b) shows that when the detection probability is 80%, with the proposed method the input SNR is about 10dB higher than with the MLE and about 5dB lower than with the DPT , indicating that the detection performance of the proposed algorithm is between the two.
Computation load
In what follows, the computational complexities of MLE, KT, the direct DPT, and the proposed method are analyzed. Assume that M and N denote the number of range cells and the number of pulses, k represents the number of speed compensations in MLE and KT, and k' represents the number of speed compensations in the proposed method. With the proposed method, the phase compensation of the transmitted signal takes place before pulse compression, and therefore its calculated value can be stored in a register beforehand. The numbers of complex multiplications required by the pulse compression processing after one speed compensation, and by the different methods overall, are compared in Table 1 and Figure 6. Table 1 shows that, apart from the direct DPT, the proposed method has obvious advantages over the other methods in terms of complex multiplication burden. As can be seen from Figure 6, as the speed increases, the computational advantage of the proposed method becomes more obvious. Therefore, combined with the results in Table 1, Figure 6 and Figure 5(b), it can be seen that the proposed method offers a good balance between detection performance and real-time performance, which is conducive to the detection of weak target signals and to real-time realisation at the same time.
Experimental results and discussion
In order to further verify the performance of the method, simulation and measured data are analyzed.
Simulation experiment
(1) Single target detection performance Suppose the radar system parameters be the same as 4.1 ,the target speed is 1000m/s, the acceleration is 15m/s 2 , its distance from the radar is 100 km, the SNR of the input signal is -25 dB, the speed compensation value is 500m/s, and ' 20 . Figure 7(a) gives the MTD processing result of the echo signal, Figure 7(b) gives the processing result by the direct DPT , and Figure 7(c) shows the processing result by the proposed method . Compared with the results in Figure 7, the MTD method still has obvious energy diffusion after speed compensation. The direct DPT method and the proposed method can achieve target detection, but the output SNR of the proposed method is larger, which is more conducive to improving the target detection probability and parameter estimation accuracy. (2) Multi-target detection performance Two targets are assumed, with a speed of 1000m/s, 1050m/s, an acceleration of 5m/s 2 , 10m/s 2 respectively. The SNR of the input signals is -25 dB, and the speed compensation value is 500m/s, ' 100 ,and other parameters are the same as 4.1. Figure 8 shows that MTD has caused serious energy diffusion. The direct DPT method can detect two targets, but it suffers from Doppler spectrum spread. In contrast, the proposed method is capable of satisfactory detection of two targets and the target resolution is the highest among the several methods. The above experiments show that the proposed method can effectively detect weak signals without accurate speed compensation, and has a good compromise between detection performance and target resolution.
Verification of measured data 5.2.1 S-band radar data
In order to further verify the effectiveness of the method, the following measured data from aircraft were used for analysis, and the White Gaussian Noise that was added was -25 dB. The radar operated at the S band, the carrier frequency is 3GHz, the signal bandwidth is 2 MHz, the pulse repetition time is 600 μs, the pulse duration is 60 μs, the integration pulse number is 2048, and the sampling frequency is 4 MHz, the maximum speed of the target is 600m/s, and the acceleration is 0.1m/s 2 . Figure 9 shows the processing results of different methods, among which Figure 9(a) is the results of MTD , from which it can be seen that the accumulated energy of the target signal is smaller and the phenomenon of diffusion occurs; Figure 9(b) is the result of the direct DPT , from which the target can be observed, but the probability of false alarm is relatively high. Figure 9(c) indicate the processing results of the proposed method, with the speed compensation value to be 300m/s. Comparing the results of Figure 9, it can be concluded that the proposed method can detect weak target signals, which verifies the effectiveness of the proposed method.
X-band radar data
The radar operates in the X-band, the transmission waveform is LFM signal, the pulse repetition frequency is 10kHz, the bandwidth is 2GHz, and the accumulation time is 100ms. Figure 10 (a) shows the MTD results, Figure 10 (b) shows the Time-Doppler results of MTD, Figure 11 (a) shows the results of the proposed method with a compensation speed of 150m / s, and Figure 11 (b) shows the Time-Doppler results of the proposed method. Comparing Figure 10 and Figure 11, it can be seen that after the processing of the proposed method, the time-delay migration of the target is corrected, which is conducive to subsequent target detection and imaging processing.
CONCLUSION
Aiming at the problem of large amount of computation in high maneuvering target detection method, a hybrid coherent accumulation algorithm is proposed in this paper. This method combines the respective advantages of parameter compensation method and cross-correlation processing method. The specific performance is as follows: 1) The proposed method only needs to compensate the target speed for a few times, and then the DPT method can be used to obtain the target speed and acceleration information.
2) The proposed algorithm uses SA to obtain the time delay estimation range of the target, which can improve the output SNR of DPT , which is not only conducive to improving the accuracy of parameter estimation, but also conducive to reducing the amount of operation.
3) The proposed method is suitable for coherent accumulation detection of weak signals over long times. When the accumulation time increases further, the target may cross beams. The authors will consider integrating the TBD methods of [34,35] in future research. | 2021-10-15T15:30:14.321Z | 2021-10-11T00:00:00.000 | {
"year": 2021,
"sha1": "f461c4b3f23e9ffbd5397b7b2e6ce4afc3e65d93",
"oa_license": "CCBY",
"oa_url": "https://www.researchsquare.com/article/rs-952438/latest.pdf",
"oa_status": "GREEN",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "b375beaf0458e5a07dd59487c9834c06b8731419",
"s2fieldsofstudy": [
"Computer Science",
"Engineering"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
266219094 | pes2o/s2orc | v3-fos-license | Potential Impact of Using ChatGPT-3.5 in the Theoretical and Practical Multi-Level Approach to Open-Source Remote Sensing Archaeology, Preliminary Considerations
This study aimed to evaluate the impact of using an AI model, specifically ChatGPT-3.5, in remote sensing (RS) applied to archaeological research. It assessed the model's abilities in several aspects, in accordance with a multi-level analysis of its usefulness: providing answers to both general and specific questions related to archaeological research; identifying and referencing the sources of information it uses; recommending appropriate tools based on the user's desired outcome; assisting users in performing basic functions and processes in RS for archaeology (RSA); assisting users in carrying out complex processes for advanced RSA; and integrating with the tools and libraries commonly used in RSA. ChatGPT-3.5 was selected due to its availability as a free resource. The research also aimed to analyse the user's prior skills, competencies, and language proficiency required to effectively utilise the model for achieving their research goals. Additionally, the study involved generating JavaScript code for interacting with the free Google Earth Engine tool as part of its research objectives. Using these free tools, it was possible to demonstrate the impact that ChatGPT-3.5 can have when embedded in an archaeological RS flowchart at different levels. In particular, it was shown to be useful both for the theoretical part and for the generation of simple and complex processes and elaborations.
Introduction
The last three decades have been strongly marked by the impact of technologies on human life and, in particular, by their unprecedented and widespread vertical and horizontal penetration into everyday life. This impact has obviously occurred in all human activities, as well as in archaeology. In this respect, a massive technological revolution has dramatically influenced documentation techniques (i) before, (ii) during, and (iii) after excavation. Non-invasive archaeology has proven to be extremely useful in understanding or hypothesising the presence of possible remains of archaeological interest under the ground in the stages prior to archaeological excavation, by providing information on the nature of the surface and subsurface using remote sensing (RS) and earth observation (EO) techniques [1][2][3][4][5][6].
RS and EO applied to archaeology are not a recent discipline. Studies in this field can be found as early as the second half of the 19th century, such as (i) those by F. Stolze and F. C. Andreas in 1874 in Persepolis, Iran [5,7,8], continuing with (ii) G. Boni in the Roman Forum (1899) and later (approx. 1908) in Venice, Ostia, and Pompeii [7,9,10].
The technological development of satellites and sensors has produced a major change in remote sensing and opened up new perspectives for archaeological prospecting activities [34][35][36][37]. The launch of NASA's (National Aeronautics and Space Administration) Landsat missions represented a real change in the RS applied to the CH. In particular, in 1972, the US government distributed the data to scholars from all over the world and renamed the mission as Landsat [11,[38][39][40].
During the 1980s, archaeologists started to structure a methodology of RSA [41][42][43]. During the same years, there was the First International Conference on Remote Sensing and Cartography in Archaeology and the creation of the European Remote Sensing Centre in Strasbourg, now the European Space Agency (ESA). In 1984, the first NASA-sponsored conference on RSA was held, organised by Tom Sever and James Wiseman, entitled 'Remote Sensing and Archaeology: Potential for the Future' [11,44]. During the 1990s, RS applied to archaeology was particularly favoured by the development of several applied studies on available satellite and airborne sensors, the development of performing software and hardware and the combined use of existing technologies with the newly developed GIS (Geographical Information System). The integration of data with GIS systems led archaeologists to integrate satellite data into the concept of landscape-scale archaeology, opening up the possibility for large-scale analyses and the creation of previously unexplored spatial correlations [11,[45][46][47]. These developments generated a real change in the RS approach applied to archaeology. This change of mindset towards a more modern interpretation by archaeologists and scholars from other disciplines to the combination of RS and archaeology has created a great development of research, materialised over time in conferences [48,49], books [11,44,[50][51][52][53][54], reviews, and papers [46,47,[55][56][57][58]. It was revolutionary for archaeology itself and for the discovery of buried cultural heritage (CH), just as it profoundly changed the way these new technologies were conceived [52,[59][60][61][62][63].
The availability of open big-data and the development of increasingly high-performance computing and storage platforms has certainly contributed to boosting research in this field in the last few years [2,57,[64][65][66][67][68].
Considerable progress has been made in the ability to identify a wide range of proxy indicators of the presence of buried archaeological sites. As is well known, the identification of buried archaeological features by optical satellite data is based on the observation of changes in reflectance that are useful to highlight changes (i) in the health and phenological cycle of vegetation and (ii) in soil moisture retention [62,69,70]. These changes are particularly evident in the red, green, near-infrared (NIR), red edge and short-wave infrared (SWIR) bands [34,[71][72][73][74]. In recent years, RS studies in archaeology have focused on the use of different systems to improve the visibility of features of archaeological interest. The most common practices are (i) spectral enhancement via the creation of indices (mathematical combinations between bands), such as indices derived from the use of NIR, Red, and Green (e.g., NDVI, GNDVI, and SAVI) [63,[75][76][77][78][79] or indices based on SWIR (e.g., NDMI and MSI) [80-82], whose most common formulations are recalled below; (ii) radiometric enhancement obtained using linear and non-linear stretching or equalisation of the histogram to increase the contrast between pixel classes [83,84]; (iii) transformation, aggregation or reduction in data using various techniques such as TCT (Tasseled Cap Transformation) [85], PCA (Principal Component Analysis) and SPCA (Selective PCA) [86][87][88][89], local and global spatial autocorrelation indices (e.g., Anselin Local Moran's I, Getis-Ord's index and Geary's index); and (iv) classification (e.g., K-Means, Isodata, and machine- and deep-learning based classification) [90,91].
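For reference, the formulations of these indices most commonly adopted in the literature are the following (with the usual Sentinel-2 band assignments NIR = B8, Red = B4, Green = B3 and SWIR = B11 given only as a convention):

$$\mathrm{NDVI}=\frac{NIR-Red}{NIR+Red},\qquad \mathrm{GNDVI}=\frac{NIR-Green}{NIR+Green},\qquad \mathrm{SAVI}=\frac{(NIR-Red)\,(1+L)}{NIR+Red+L},$$

$$\mathrm{NDMI}=\frac{NIR-SWIR}{NIR+SWIR},\qquad \mathrm{MSI}=\frac{SWIR}{NIR},$$

where $L$ is a soil-adjustment factor, commonly set to 0.5.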
Last but not least, a huge contribution has been provided to the development of increasingly complex machine- and deep-learning systems associated with increasingly simple and, in many cases, completely free software and tools, such as QGIS and Google Earth Engine (GEE) [1,[92][93][94][95][96][97][98][99][100]. The entry of AI (Artificial Intelligence) into many fields, including the fields of RSA, has undoubtedly been a subject of discussion in recent years, when systems based on pre-trained language models have entered the world scene, becoming very popular even among non-specialists, with solutions suitable for operating in various fields (e.g., graphics, text, copywriting, marketing, data analysis) [101]. These systems provide different types of output (e.g., images and text) from text or voice input by the user. For this reason, they are extremely easy to use and affordable for everyone.
The aim of this study was to analyse some aspects of the impact that an AI model based on pre-trained language models, such as ChatGPT-3.5 (Generative Pre-trained Transformer), can have in RSA, in particular the model's ability to provide (i) answers to (general and specific) questions on the issue, (ii) information about the references from which it has taken information, and (iii) information about the tools to be used depending on the user's desired outcome, and to help the user to perform simple and complex processes for RSA investigations, interacting with the different tools and libraries. For the purposes of the research, ChatGPT was asked to generate codes mainly in JavaScript in order to interact with the free GEE tool. In addition, the aim of the research was also to cross-sectionally assess the prior skills, competences, and language proficiency the user must have in order to achieve the required goal with the model.
Materials and Methods
The study followed the flowchart illustrated in Figure 1. The following tools were used: (i) OpenAI ChatGPT-3.5 as a pre-trained language model and (ii) GEE. All conversations made between the authors and ChatGPT-3.5 are shown in the SIs (Supplementary Information).
OpenAI ChatGPT-3
ChatGPT is a Generative Pre-trained Transformer (GPT) based on natural language processing (NLP) [102-106], i.e., a large language model (LLM) that, using deep learning, understands a text or voice input and reproduces output based on what has been understood. It was released in 2020 by OpenAI [102,107,108]. OpenAI's main goal is to develop artificial intelligence that is safe, beneficial and accessible to all [109].
Starting in 2018, OpenAI created GPT-1, GPT-2, and released GPT-3.Version 3.5 of ChatGPT was used for this paper since it is free.It was released in 2022, and today, version 4 is already available.ChatGPT 3.5 is an upgraded version of ChatGPT 3, with several improvements in terms of accuracy, safety, and usability.ChatGPT 3.5 is generally considered to be more accurate than ChatGPT 3.This is due to a number of factors, including (i) a more sophisticated training process that uses reinforcement learning with human feedback, (ii) larger dataset of training data, (iii) improved algorithms for handling natural language.ChatGPT 3.5 is designed to be more usable than ChatGPT 3. ChatGPT-3.5 training data stopped in 2021, and many limitations are imposed on the system with regard to the language to be used in responses, the type of responses to be given, and some formal language conventions.Billions of BPE (byte-pair-encoded) tokens were used for training (Table 1).Since ChatGPT was released, the scientific community has been using it, and articles have been published on it [107] on several topics [106,[110][111][112][113][114][115][116][117][118][119][120], as well as ethical issues that have arisen very recently [121][122][123][124][125][126].To date, there are few studies that demonstrate the usefulness of GPT or derived tools (e.g., Visual ChatGPT) in the field of RS and satellite image classification [127][128][129][130].A useful tool made available by the world community via the web (e.g., GitHub or several Google extensions) is the possibility of being able to use prompts (i.e., texts explaining to ChatGPT what to do) that are already pre-compiled so as to (i) save time and (ii) prevent the system from being trained wrongly or giving wrong answers.
ChatGPT is used to obtain different types of output [106]: (i) Generated Text: It can generate coherent and relevant responses and text based on the given questions or instructions.It can be used to answer specific questions, provide explanations, generate creative content, or even play the role of a virtual character in a conversational interaction; (ii) Translations: By utilising ChatGPT's translation API, it is possible to provide text in a particular language and obtain a translation in another specified language.This can be useful in supporting multilingual communication and facilitating understanding between individuals who speak different languages; (iii) Speech Synthesis: It can be used to generate text-to-speech synthesis; (iv) Content Research and Generation: It can be used to conduct research on specific topics and generate content based on the results; (v) Interactive Assistance: ChatGPT's APIs enable the creation of interactive applications that leverage its conversational capabilities to engage in conversations and respond to user queries.This can be used to develop chatbots, virtual assistants, or interactive support tools.
GEE and Sentinel-2 L2A
GEE [96] is a powerful open-source tool provided by Google. It provides a web-based interface and interactive development environment (IDE) that allows users to access and work with a wide range of datasets spanning over forty years of global data. These datasets include satellite data from missions such as MODIS, ALOS, Landsat and Sentinels, as well as other useful data such as digital terrain models, shapefiles, meteorological data and land cover information [131].

GEE is known for its high-performance computing capabilities and its ability to handle large amounts of data, making it a valuable tool in the field of remote sensing and big data analysis [94][95][96][97][98]. It has gained popularity in various disciplines, with the number of scientific papers on GEE increasing significantly over the years. Researchers have used GEE in several fields, such as vegetation [132][133][134][135][136], land use and land cover [137,138], hydrology [139][140][141], climate [142,143], and cultural heritage analysis [92,93,144,145].
The availability of GEE has also led to the creation and sharing of many free tools, which can be found on the GEE website [146].
For the present study, GEE was chosen because it runs in the JavaScript programming language and its codes can be generated via ChatGPT-3.5, simultaneously overcoming the problems of (i) computing satellite data locally and (ii) writing code. The aim here is to assess the impact ChatGPT-3.5 can have on research in the field of archaeology and RS. The data used for this study were data from the ESA Sentinel-2 (S2) L2A satellites, now considered the best in terms of spatial, spectral, and temporal resolution among the free data for RS archaeology contained in GEE [147,148]. The dataset used in all analyses is the 'COPERNICUS/S2_SR_HARMONIZED' dataset, which covers a time span from March 2017 to the present (Table 2); a minimal example of how this dataset is typically accessed in GEE is sketched below.
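The following GEE JavaScript fragment is a minimal illustration of this kind of dataset access; the area of interest and the date range are hypothetical placeholders and are not taken from the analyses reported in the SIs.

```javascript
// Load and filter the harmonised Sentinel-2 L2A collection used in this study.
// The AOI rectangle and the dates below are illustrative placeholders only.
var aoi = ee.Geometry.Rectangle([16.60, 40.65, 16.70, 40.72]);

var s2 = ee.ImageCollection('COPERNICUS/S2_SR_HARMONIZED')
  .filterDate('2022-01-01', '2022-12-31')                 // period of interest
  .filterBounds(aoi)                                       // spatial filter on the AOI
  .filter(ee.Filter.lt('CLOUDY_PIXEL_PERCENTAGE', 10));    // cloud-cover threshold

print('Number of images selected:', s2.size());
Map.centerObject(aoi, 13);
```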
Approach to ChatGPT-3.5
In the literature, the approach usually followed relies on human-GPT interaction to generate a conversation on a given topic (e.g., RSA) and reports the conversation, drawing conclusions about the validity of the model's answers [110]. Instead, in this paper, the authors have tried to achieve a higher level of technicality and depth in order to estimate the impact of artificial intelligence (AI) in the context of RSA research. User and language model interactions were mainly based on a three-stage multi-level approach, aimed at demonstrating the potential of the use of AI for such research for users of all levels (e.g., archaeologists with little experience in RS, archaeologists experienced in RS, engineers and programmers experienced in RS), which can be defined as follows: (i) Entry level: (a) RS and archaeology (general theory and methods), (b) current research trends (up to 2021), (c) used references; (ii) Medium level: (a) provide information about the tools to be used depending on the user's desired outcome, (b) help the user to use simple functions and processes in JavaScript underlying RS applied to archaeology; (iii) Advanced level: (a) help the user to perform complex processes for advanced RS work applied to archaeology (e.g., classification and statistics) and to create a complete process by recreating the methodology described in a scientific paper on the topic step-by-step, (b) interoperability between different tools and libraries currently used for RS in archaeology.
The evaluation also took into account the level of previous competence and the type of language used by the user to assess how the user and machine balance each other in achieving a result [149][150][151].
The evaluation scale used was based on a method inspired by the Likert scale, based on a psychometric technique for measuring attitude [152]. It is a multi-point accuracy scale, generally rated from 1 to 5 or 1 to 7 points, where answers are graded from the lowest value, completely wrong answer, to the highest value, completely correct answer. For the present study, a scale of 0 to 4 (5 points) was used, as follows (Figure 2): (1) Not correct (0%); (2) Almost completely incorrect. The system starts by providing minimal information, but the essential parts are missing or incorrect (25%); (3) It is not quite correct, but it is not totally incorrect. The system provides correct information but mistakes or omits some important information (50%); (4) Almost completely correct (75%); (5) Correct. The system explains the required topic comprehensively (100%). In the case of open or descriptive answers, this scale was used in this way, where each point corresponds to a 25% increase in the goodness of the answer. In the case of questions involving a binary answer (e.g., yes and no), the values considered are 0 (incorrect) and 4 (correct).

The same rating scale was also used for the code tasks so that uniform results could be obtained for evaluation purposes. In this case, the scale was created as follows: (1) Not working (0%); (2) Partially functional or functional after intensive user interaction (25%); (3) Partially working or working after further explanation by the user to ChatGPT (50%); (4) Almost completely correct even without user interaction (75%); (5) Fully functional as generated by AI (100%). During the process, questions or tasks with scores as shown in Figure 1 were (i) resubmitted in a similar but not equal way, (ii) subject to a request for an in-depth examination or correction, or (iii) subject to a request for changes in the answer or in the generated code. This approach was repeated two to four times each time a response was unsatisfactory.

In order to ask and evaluate questions on several levels, the authors of this study have different levels of knowledge of remote sensing for archaeology. Respectively: (i) F.V. has no experience in the use of RS for archaeology and asked and evaluated the entry-level questions, joined by those with more experience in order to avoid giving positive marks in the case of wrong answers; (ii) A.M.A. and M.S. already have experience with RS for archaeology, but none with the data processing tools, and therefore assessed the mid-level answers, always under the supervision of the most experienced; (iii) N.A., M.D., R.L., and N.M. can be considered experts in RS for archaeology and assessed the answers for the advanced level.
The scores were then established by mutual agreement between the authors.
Entry Level
The first set of questions focused on the general use of RSA. The purpose of this approach was to assess the reliability of methodological discourse and how useful AI can be for the training of a researcher approaching the subject.

In this section, the questions posed to ChatGPT were about the theory, methods, and references of RSA, alternating with requests for more in-depth information based on the answers given by the AI itself. The topics covered are presented in full in SI A. The questions were: (i) provide an overview explaining the history of studies and the discipline from the late 19th century to 2021; (ii) deepen optical multispectral satellite remote sensing in archaeology; (iii) list the best free data for satellite remote sensing in archaeology; (iv) provide a real case study of a remote sensing study for archaeology, carried out from satellite data; (v) provide an example of a remote sensing study for archaeology, carried out with Sentinel-2 from satellite data; (vi) give information on the use of Sentinel-2 for the discovery of buried archaeological features by providing a step-by-step explanation, including tools and software to be used, where to start from and how to obtain at least the most commonly used vegetation indices, finally also adding bibliographical references. The relationship between the AI and references was then analysed separately and is not part of the overall statistics. The question asked in this case was, "Please report 10 important scientific references for each year, written in the year N, about 'Remote Sensing' and 'Archaeology' using the scheme author(s), year, title, journal", where N is a year between 2010 and 2020 (SI D).
Medium Level
The second level (SI B) of in-depth study was based on the assumption that the user is already familiar with the main issues concerning RSA. The questions were, therefore, of a theoretical-practical nature and were aimed at having the user create code strings for relatively simple operations. These operations mainly concern the use of satellite data to create outputs useful for RS studies for archaeology, such as (i) RGB images, (ii) infrared false colour images, and (iii) vegetation indices. GEE was used as a tool for satellite data analysis.
The questions were structured along the same lines as previously described, as follows: (i) describe the main tools and software used as part of RS for archaeology with Sentinel-2 data; (ii) illustrate the open source tools used as part of RS for archaeology with Sentinel-2 data; (iii) describe the most commonly used packages, libraries and modules used as part of RS for archaeology in (a) Python, (b) R and RStudio and (c) JavaScript; (iv) show the specifications of the Sentinel-2 satellite; (v) create a code base in JavaScript, compatible with GEE, to select, filter and crop the S2-L2A dataset on a geometry called an Area of Interest (AOI); (vi) display (Map.addLayer function) RGB and Infrared False Colour (R: NIR, G: Red, B: Green) annual averages on a map; (vii) finally, add to the collection the most commonly used vegetation indices for archaeological RS for a flat agricultural landscape. A sketch of tasks (v)-(vii) is given below.
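The following fragment illustrates, in GEE JavaScript, the kind of output expected from tasks (v)-(vii); it assumes the filtered collection s2 and the geometry aoi defined in the previous sketch, and the visualisation ranges are arbitrary choices rather than values prescribed by the study.

```javascript
// (v)-(vi) Annual average composite, displayed in true colour and infrared false colour
var annual = s2.mean().clip(aoi);
Map.addLayer(annual, {bands: ['B4', 'B3', 'B2'], min: 0, max: 3000}, 'RGB annual average');
Map.addLayer(annual, {bands: ['B8', 'B4', 'B3'], min: 0, max: 4000}, 'False colour IR');

// (vii) Add a vegetation index (here NDVI) to every image of the collection
var addNdvi = function (img) {
  return img.addBands(img.normalizedDifference(['B8', 'B4']).rename('NDVI'));
};
var s2Indices = s2.map(addNdvi);
Map.addLayer(s2Indices.select('NDVI').mean().clip(aoi),
             {min: 0, max: 1, palette: ['white', 'green']},
             'Mean NDVI');
```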
Advanced Level
The advanced level (SI C) was a much more technical and practical approach than the previous ones. At this stage, ChatGPT was asked to reproduce a methodological approach used in other studies of RSA. Reference was made in particular to [144]; the methodology used in [92,93,144], which is generally applicable to other contexts, was followed.
The methodological approach used in these papers involves the following steps, with the aim of improving the visibility of features of archaeological interest far beyond the possibilities offered by individual vegetation indices: (i) choice of dataset; (ii) choice of period of interest; (iii) dataset filtering (cloudiness and AOI); (iv) creation of vegetation indices throughout the collection considered; (v) selection, on the basis of already known or identifiable evidence, of areas of archaeological interest and areas of no archaeological interest; (vi) analysis of spectra and creation of the M statistic to evaluate the images in the collection where there is the greatest difference in signal, where M is described in [153]; (vii) selection of images with M > n, where n is a value close to 1; (viii) merging all images into one multi-band image; (ix) data normalisation (as suggested by ChatGPT-3.5); (x) creation of a Selective Principal Component Analysis (SPCA); (xi) calculation of statistics of image neighbourhoods; and (xii) production of an unsupervised and a supervised classification. In the latter case, ChatGPT-3.5 chose to use K-means as the unsupervised classification [154] and an SVM (Support Vector Machine) [155,156] as the supervised one. All operations were carried out with the aim of having ChatGPT create a single JavaScript code that could be used in GEE, so as to prove its usefulness in the creation of complex flowcharts; two of these steps are sketched below as an illustration.
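As an illustration of steps (vi) and (xii), the following GEE JavaScript fragment sketches the M separability statistic for a single index image and an unsupervised K-means clustering; archGeom, bgGeom, img and stack are hypothetical placeholders (polygons over areas of archaeological and non-archaeological interest, one NDVI image, and the final multi-band image, respectively), and the sketch is not the code generated by ChatGPT in the SIs.

```javascript
// (vi) M statistic: M = |mean1 - mean2| / (sd1 + sd2); values near or above 1
// indicate that the two classes of areas are well separated in this image.
var statsOf = function (image, geom) {
  return image.reduceRegion({
    reducer: ee.Reducer.mean().combine(ee.Reducer.stdDev(), '', true),
    geometry: geom, scale: 10, maxPixels: 1e9
  });
};
var sArch = statsOf(img, archGeom);
var sBg = statsOf(img, bgGeom);
var M = ee.Number(sArch.get('NDVI_mean')).subtract(sBg.get('NDVI_mean')).abs()
          .divide(ee.Number(sArch.get('NDVI_stdDev')).add(sBg.get('NDVI_stdDev')));
print('M statistic:', M);

// (xii) Unsupervised classification (K-means) of the multi-band image 'stack'
var training = stack.sample({region: aoi, scale: 10, numPixels: 5000});
var clusterer = ee.Clusterer.wekaKMeans(5).train(training);
Map.addLayer(stack.cluster(clusterer), {min: 0, max: 4}, 'K-means clusters');
```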
Results
The final results of ChatGPT-3.5's answers to the questions asked show interesting behaviour and are shown in Table 3. The complete transcripts are shown in SIs A, B, and C. The entry-level answers achieved a score of 43 out of a maximum of 76 and are described in Section 2.3.1 and SI A. The average result was 2.26. The system was able to answer the questions posed, although it made some errors.
On the general questions about the theory (SI A, 1-6), ChatGPT-3.5 provides acceptable answers, especially for students, researchers, and scholars who want to approach the subject. It is capable of generating a credible, structured text that could easily be used as a basis for developing further research. In 7 to 19 (SI A), ChatGPT-3.5 (i) correctly cited a study done by S. H. Parcak [11], although providing a slightly wrong year of publication; (ii) incorrectly quoted works by Drs. R. Lasaponara and N. Masini, giving plausible but not true titles, although very close to the originals; and (iii) gave its own interpretation of RS works for archaeology, simulating possibly real cases.
In particular, references turned out to be a problem already encountered in other papers, as ChatGPT-3.5 often provides information that is similar to the real, plausible, but not true [157][158][159].The conversation is shown in SI D and can be resumed as follows (Table 4).Table 4. Evaluation of references provided using ChatGPT-3.5,year by year, with keywords "Remote Sensing" and "Archaeology".For each year, 10 texts were asked to be cited.The results of an analysed sample of 100 texts produced by AI show a percentage of 99% of invented texts or texts similar to real ones but not correct.Only in one case did the AI correctly quote a text.SI D shows how GPT reworks authors, titles, years and magazines in a way that creates plausible, but not real, references [157].For the year 2020, GPT provides no information but invites the applicant to consult Google Scholar and other scientific reference repositories.
The result for the medium-level answers was 75 out of 92, with an average of 3.26. Details are given in Section 2.3.2 and SI B.
ChatGPT-3.5 proved capable of: (i) providing general information on software used in RS for archaeology (SI B, 1-3); (ii) indicating the required code in a console that can be copied and pasted directly into R, RStudio, Python, and GEE interfaces (SI B, 4-7); (iii) creating tables from scratch with the required data (e.g., Table 2 or SI B, 8); (iv) developing simple codes such as those related to dataset selection or the selection of masks and areas of interest (SI B, 9-12) (Figure 3); (v) creating functions to generate vegetation indices (SI B, 13-23); and (vi) displaying the produced data in the GEE map screen, such as true colour visualisation (Figure 4b), false colour infrared (Figure 4c), grey-scale indices (Figure 4d), and printed spectral index charts (SI B, 13-23) (Figure 5). In general, few major errors were found, and overall the system addressed all requests.
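As a small illustration of items (iv) and (vi), the following Earth Engine Python snippet builds the kind of annual Sentinel-2 composite and visualisation parameters described for Figures 3 and 4. The band names and value ranges are typical choices, not the ChatGPT-generated JavaScript of SI B.

import ee

ee.Initialize()

aoi = ee.Geometry.Rectangle([12.50, 41.70, 12.60, 41.80])   # illustrative AOI
mean_img = (ee.ImageCollection('COPERNICUS/S2_SR')
            .filterBounds(aoi)
            .filterDate('2017-01-01', '2023-01-01')
            .filter(ee.Filter.lt('CLOUDY_PIXEL_PERCENTAGE', 10))
            .mean())

# Visualisation parameters analogous to Figure 4: true colour, false colour infrared, NDVI
true_colour = {'bands': ['B4', 'B3', 'B2'], 'min': 0, 'max': 3000}
false_colour = {'bands': ['B8', 'B4', 'B3'], 'min': 0, 'max': 3000}
ndvi = mean_img.normalizedDifference(['B8', 'B4'])
# In the GEE Code Editor these layers would be added with Map.addLayer(); with the Python
# API a viewer such as geemap can display mean_img with true_colour / false_colour and ndvi.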
Finally, the advanced level achieved a result of 141 out of 216, with an average of 2.61. The whole conversation is reported in SI C and explained in Section 2.3.3. The system was able to generate complex codes, in some cases committing errors and forcing the operator to intervene. ChatGPT-3.5 proved able to respond to and generate codes, either from existing codes or based on textual user requests, and thus to generate codes from scratch. There were many mistakes made by the AI, mainly related to more complex and/or unclear user requests. In most cases, the errors were addressed either (i) after two or three requests, by changing the way the task was requested, or (ii) by resetting the conversation (e.g., SI C, 38 and 39). The system proved able to: (i) give general information about RS theory for archaeology or about sites, cities, and artefacts of archaeological interest (SI C, 1-4), albeit in some cases with errors and inaccuracies (e.g., SI C, 1-2); (ii) quickly and efficiently understand, generate, and edit complex JavaScript codes at the user's textual request, also from methodologies described in scientific papers (e.g., [144]), selecting datasets (e.g., Sentinel-2), applying masks (e.g., cloudiness), and creating functions for vegetation indices and charts (SI C, 5-19); (iii) quickly create advanced-level codes to apply statistical extraction functions (e.g., mean, variance, standard deviation, and M statistic) or functions on the entire data collection considered (SI C, 20-40); and (iv) successfully generate code for complex functions such as Selective Principal Component Analysis (SPCA) (Figure 6a), spatial analysis (Figure 6b), and classifications (Figure 7) that are usually not easy to write in JavaScript for a beginner or mid-level user (SI C, 41-54).
Discussion
The conversations with ChatGPT revealed that this system has both advantages and limitations for a practical application such as RS for archaeology. The numerical results (Table 3), obtained from the modified Likert scale according to the flowchart and the evaluations expressed in Figures 2 and 3, show appreciable differences depending on the use and type of questions asked.
The entry level was the one with the lowest average total score. This was mainly caused by the incorrect information provided in the requests for scientific papers and references. In the context of text generation, these topics were over-interpreted by the AI to the extent that they were credible but not usable for scientific or research work. This may be mainly due to the sources used for AI training, which probably do not include precise references on the topic of Remote Sensing Archaeology. In this case, GPT reworks, on the basis of the knowledge it has, elements that are consistent but not real. It is conceivable that the AI has read contributions on the subject (e.g., from Wikipedia pages or Google Books), as well as popular pages on the subject in which there are no clear references, and then reworked the topic. In fact, ChatGPT-3.5 is able to recall the names of the major authors on the subject of RSA, as well as the main journals and the main topics addressed in the literature, albeit with its own reinterpretation.
The next two levels, i.e., medium and advanced, also showed large differences in scores. The highest score was achieved at the medium level. This involved relatively simple code-writing requests, often aimed at obtaining a single result. ChatGPT-3.5 proved extremely useful, particularly in writing code related to (i) the selection and filtering of datasets, (ii) the calculation of vegetation indices, and (iii) the visualisation and export of data. The AI proved to be a very versatile tool for writing RS-related code in different languages and for converting code from one language to another. In particular, the creation of code for vegetation indices proved extremely useful, as it made it possible to create several different indices, already set with the bands of the Sentinel-2 satellite, without any particular expenditure of the user's time on operations such as searching for the satellite bands required by the chosen indices and writing the correct mathematical formulae in JavaScript. These operations, although not particularly complex, are often time-consuming. Another strength is that, at the end of each piece of code, ChatGPT-3.5 provides an explanation and hints related to that code. An example is given in SI C, questions 48 and 49, which contain the request to write code for an SPCA function. ChatGPT-3.5 suggests that the user also apply a data normalisation function in order to make the PCA itself work: "[...] Keep in mind that PCA is sensitive to the scaling of input data. Normalising the data before performing PCA, as you did earlier, is a good practice to ensure meaningful results". Furthermore, the results show that, by describing the methodology textually to the GPT system and requiring it to generate step-by-step code, it was possible to achieve results similar to those published using other tools [144].
A marked drop in performance was observed when writing complex codes with functions linked together throughout the chat. This is mainly due to errors in the writing of the code (e.g., Python functions that cannot be used in the GEE portal) or to user requests that are not always clear or not always understood by GPT itself. Another problem encountered (e.g., SI C, 38-39) was that of recursive error: within the same conversation, once a misunderstanding or an error has occurred, it is carried over throughout the series of answers. This phenomenon is emblematic of the nature of ChatGPT-3.5, which, in addition to being pre-trained, learns and works from the conversation with the user. When this point was reached, it was necessary to start a new conversation and provide GPT with the code produced up to that point so that new questions could be asked. ChatGPT-3.5 was able to read, understand, and explain the provided code to the user in plain language. This aspect also proved useful in cases where the user already possesses a starting code (e.g., tutorials made available on the internet) and wants to analyse and understand its features despite having limited knowledge of the programming language.
Conclusions
ChatGPT-3.5 proves to be a valid tool for beginners as well as intermediate or advanced users. It is able to provide useful information about software, tools, and techniques to be used when working with RS archaeology. As shown in the previous section and the SIs, however, the user must be cautious in using the information as it is provided, as the data may be distorted by arbitrary interpretations generated by the AI on the basis of the information databases it contains. In fact, the system always tries to provide an answer to the question asked, although it does not always return valid and scientifically reliable information. In particular, the biggest problems occurred when GPT was asked to illustrate a topic related to a scientific paper. In these cases, the system created a plausible argument, probably close to the truth, but reinterpreted it. However, ChatGPT-3.5 itself advises the user to pay attention since, as GPT itself admits, the exchange is a conversation with an artificial intelligence system. It is possible that the limitations encountered are due to privacy restrictions or to internal policies related to the laws of the different states where the system is used. As pointed out by other authors, ChatGPT-3.5 tends to give a high percentage of references that are similar in detail to real ones but confabulated [157].
Finally, ChatGPT-3.5 proved to be a very valuable tool for obtaining a fairly concise overview of certain topics, such as the general themes of RS and archaeological RS, the sites of interest to be analysed, and the tools that can be used. Similarly, it proved to be a useful and fast tool for generating tables on certain topics, such as Table 2, which shows data on the ESA Sentinel-2 satellites.
ChatGPT-3.5 proved able to respond to and generate simple and complex codes, either from existing codes or based on textual user requests, and thus to generate codes from scratch. It can be a valuable support for both beginners and advanced users. In particular, it proved useful mainly for operations of a medium level of difficulty. In this segment, ChatGPT-3.5 is at its best, managing to address requests in an optimal manner and saving the user effort and time. The case of an advanced or expert user of both the programming language and RS archaeology is different. These users may find it useful mainly for two operations: (i) converting code from one language to another and (ii) calling up particular functions that are difficult to write or remember (e.g., remembering bands when calculating vegetation indices). On the other hand, in the case of writing code from scratch, advanced users may encounter problems in using ChatGPT-3.5 and end up slowing down their work. In addition, ChatGPT-3.5 has shown that it can be used as an interpretation tool for an already-described methodology. This feature, a sort of methodological reverse engineering, could be particularly useful in the field of archaeological RS, as it can enable scholars to explain a methodology described in a paper to the AI system and generate the code to replicate it, as demonstrated with the replication of the methodology of one of the reference papers [144].
ChatGPT version 3.5 is not the best performing in terms of text comprehension and response, and better performance in RS for archaeology can be achieved using ChatGPT-4, which is not free. A further increase in performance could be achieved using GPT-4 with its API (Application Programming Interface) connected to other services, or using similar systems such as the connection between ChatGPT-4 and Bing or Google's Bard. Recently, several tools and plug-ins have been developed for geoscience, including RS, based precisely on the use of the ChatGPT-4 API. An example is the QGIS plug-in called QChatGPT, which allows the GIS environment to be connected to the AI. Certainly, given recent fast development trends, AI of this type can be implemented in the automatic identification of features of archaeological interest and in the classification, segmentation, and recognition of such features in remote sensing images. In addition, to facilitate such use, ready-to-use prompts for RS archaeology can be created and made available to users, in the same way as template prompts already exist for a wide variety of topics (e.g., communication, social media, automated responses to emails).
In conclusion, the paper demonstrates how this tool can be carefully incorporated into RSA workflows, especially for low- and mid-level users. In particular, ChatGPT-3.5, and also GPT-4, can be successfully used (i) to obtain an overview of certain issues, (ii) to generate lines of code, (iii) to convert codes between different programming languages, and (iv) to understand already written codes in order to rework or modify them. These are all activities that can be inscribed in RS workflows for archaeology by students and researchers. Although GPT has proven to be useful, some important considerations are needed as a warning for users to be cautious. It is important to emphasise that, especially for the entry (low) and medium levels, ChatGPT can also be a harmful tool. In fact, many of the theoretical or reference answers were given wrongly by the system, even though they were presented to the user as true or correct. It is only the side-by-side evaluation between the non-expert user and the expert user that made it possible to understand the problem in GPT's answer. This problem may depend on two
Figure 2 .
Figure 2. Response rating scheme according to modified Likert scale.
Figure 3 .
Figure 3. Example of code generated using ChatGPT-3.5. The system provides a console from which the code text can be copied. In this case, it is a JavaScript code that can be used in GEE to select the Sentinel-2 dataset at a precise time interval with a cloudiness threshold of 10%.
Figure 4 .
Figure 4. (a) Location of the area of interest shown in (b-d) related to the identification of sections of the Via Appia Antica, as discussed in Lasaponara et al. 2021 [144], used as a comparison study for the present work; (b) true colours visualisation (R: Red, G: Green, B: Blue) annual (2017-2023) average; (c) false colours visualisation (R: Nir, G: Red, B: Green) annual (2017-2023) average; (d) NDVI annual (2017-2023) average. All the images were obtained in GEE from the code generated using ChatGPT-3.5.
Figure 5 .
Figure 5. (a) Area of interest in true colour visualisation with positioning of points of interest relating to areas where features of archaeological interest have been identified and areas where there is presumably no archaeological significance; (b) graph containing the average trend over time of the NDVI index at the points indicated in (a). All data were produced using GEE and the codes provided by ChatGPT-3.5; the points were selected by hand by the user, based on [144]. This data was used as the basis for the statistical calculations of the advanced level.
Figure 6 .
Figure 6. Comparison between data obtained from the code produced using ChatGPT-3.5 (a,b) and those published in [144] (c,d): (a,c) Selective PCA; (b,d) Spatial statistics on SPCA.
Figure 7 .
Figure 7. Results of the code generated using ChatGPT-3.5: (a) unsupervised classification (K-means) applied to the SPCA; (b) supervised classification (Support Vector Machine) applied to the SPCA. | 2023-12-15T16:09:32.691Z | 2023-12-12T00:00:00.000 | {
"year": 2023,
"sha1": "03dc493b7b048b3eccbbe6284f4cb74d32d58ccb",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2571-9408/6/12/402/pdf?version=1702387749",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "93f1877cdf0f8461cfa4d8e8c0ed4b978e014ce3",
"s2fieldsofstudy": [
"Computer Science",
"Environmental Science"
],
"extfieldsofstudy": []
} |
264481912 | pes2o/s2orc | v3-fos-license | Research on Gearbox Fault Diagnosis Method Based on VMD and Optimized LSTM
: The reliability of gearboxes is extremely important for the normal operation of mechanical equipment. This paper proposes an optimized long short-term memory (LSTM) neural network fault diagnosis method. Additionally, a feature extraction method is employed, utilizing variational mode decomposition (VMD) and permutation entropy (PE). Firstly, the gear vibration signal is subjected to feature decomposition using VMD. Secondly, PE is calculated as a feature quantity output. Next, it is input into the improved LSTM fault diagnosis model, and the LSTM parameters are iteratively optimized using the chameleon search algorithm (CSA). Finally, the output of the fault diagnosis results is obtained. The experimental results show that the accuracy of the method exceeds 97.8%.
Introduction
With the development of modern industry, gears have become one of its most critical components. Gearboxes have been widely used in various machines due to their fixed transmission ratio, high transmission torque, and compact structure, and have become the variable-speed transmission component for all kinds of machines. According to Nippon Steel Corporation [1], gear failures account for about 10.3% of the total number of machine failures. Statistics on gearbox failure locations show that failure of the gear itself accounts for the largest proportion, about 60%; the gear drive is thus an important source of machine failure. Therefore, research on gearbox fault diagnosis is of considerable significance.
Gearboxes are mechanical systems in which several components operate in a lubricated environment, and lubrication analysis, acoustic emission analysis, and vibration analysis can all be applied to detect faults [2]. Among these, vibration signals have received the most attention, and many studies have proposed monitoring methods based on them. For example, one study proposed a comprehensive method for detecting and classifying rotating bearing faults by combining permutation entropy (PE) with the Flexible Analytic Wavelet Transform (FAWT): FAWT decomposes signals from healthy and faulty bearing systems under different operating conditions, and the PE value of each decomposed signal is then calculated and provided as a feature vector to a support vector machine (SVM) classifier for different fault types and sizes [3]. Another study proposed a novel approach to network intrusion detection using machine learning and deep learning, combining two MLP (Multi-Layer Perceptron) models, one with three layers and the other with six, with a Random Forest classifier; the models are ensembled so that the final accuracy is boosted and the testing time is reduced [4]. Scholars have conducted in-depth research on feature extraction and diagnosis methods for gearbox transmission faults based on vibration signals; the main analysis methods currently include the Fourier transform, empirical mode decomposition [5], ensemble empirical mode decomposition [6], wavelet packet decomposition [7], and variational mode decomposition. These methods achieve, to some extent, the preprocessing of gearbox vibration signals and the classification and identification of transmission faults. Empirical mode decomposition can adaptively decompose vibration signals; however, the problem of mode mixing is prominent, which makes the physical meaning of the decomposed modal components and the cause of the faults difficult to explain. Ensemble empirical mode decomposition improves on empirical mode decomposition by adding white-noise signals, which suppresses the endpoint effect and mode mixing to a certain extent, but its computational complexity increases. Wavelet packet decomposition solves the problem of insufficient extraction of the high-frequency components of the signal in the wavelet transform; however, its decomposition effect depends heavily on manual selection of the wavelet basis, which does not meet the needs of practical online diagnosis. Variational mode decomposition (VMD) solves the mode-mixing problem, has a solid theoretical foundation, and can support better diagnosis.
VMD can better restore the original signal and adaptively decompose it into relatively smooth time series with different frequencies. Therefore, in this paper, the vibration signal is analysed by variational mode decomposition to extract the modal components that contain information on the normal and wear-fault states of the gear transmission system. In many cases, the VMD method provides a solid solution to the mode-mixing problem of empirical mode decomposition (EMD), and as a result it has been used in many research areas such as mechanical diagnostics [8]. VMD methods have also performed well in planetary gearbox research, for example in the joint VMD-based amplitude and frequency demodulation analysis of Feng et al. and in the work of Yong Li et al. using VMD power spectral entropy and deep neural networks (DNNs) [9]. To detect faults in a two-stage planetary gearbox, Wu et al. demonstrated a new method based on Renyi entropy, two-dimensional VMD, full-vector spectral techniques, and compressive sensing [10]. To verify its correctness, an experimental study of fault test signals from a gearbox was carried out [11]. Zhang et al. proposed a VMD based on the Locust Optimization algorithm for the selection of the mode number and balance parameter [12]. These proposals provide a good solution for selecting the mode number [13].
The fault information contained in a time series of vibration signals decomposed by variational mode decomposition can be quantified using time-series complexity metrics, such as approximate entropy and permutation entropy. Among them, the permutation entropy (PE) method is based on the spatial characteristics of the time series and amplifies small changes in the signal, which makes it more effective than other indicators in diagnosing abnormal states of mechanical equipment. Therefore, this paper applies permutation entropy in the gear transmission feature-extraction process to reflect the dynamic changes in the time series under different transmission states.
With the rapid development of computer technology in recent years, LSTM models based on deep learning theory [14] have been heavily researched. Compared with other data-driven models, the LSTM neural network shows better prediction results, but model performance is still vulnerable to the influence of the data series and the initial values of the model parameters, and its convergence degrades considerably when the number of classes increases with the sample size. For this reason, optimization algorithms can be introduced to tune its parameters. Commonly used examples are the grid search method and the genetic algorithm, but these methods are computationally expensive and easily fall into local optima, among other problems. A new intelligent optimization algorithm is therefore introduced to perform parameter optimization and fault classification with LSTM models. There have been many studies on the analysis of runoff sequences and the optimization of the initial values of model parameters [15]. For example, Sun constructed the idea of "decomposition-prediction-reconstruction" and used a variational mode decomposition LSTM neural network model for runoff prediction of the Three Gorges reservoir, and the results showed that the model could effectively improve the accuracy of runoff prediction [15]. Wei Qin et al. developed a simulated annealing long short-term memory network (SA-LSTM) model, which can more accurately describe the dynamic lag-time relationship between hydropower stations [15]. However, most of these studies focus on single data-series decomposition or on optimizing the initial values of LSTM model parameters, and there are few results combining both. In this paper, we propose a coupled model comprising variational mode decomposition (VMD), the chameleon search algorithm (CSA), and a long short-term memory (LSTM) neural network, which is applied in the mechanical field to the fault diagnosis of a gearbox.
On this basis, an optimized long short-term memory fault-diagnosis method for gearbox transmissions, based on variational mode decomposition, permutation entropy, and the chameleon search algorithm, is proposed. The actual test data of a gearbox transmission are divided into a training set and a test set. The features of the training set are extracted using VMD-PE, and models are trained and fault diagnosis is carried out on the test set using CSA-LSTM. The results show that the method can accurately and quickly identify gearbox faults under 60% to 110% rated-pressure test conditions, outperforming existing methods.
The second part of this study describes the construction, inference, and optimization process of the gearbox fault-diagnosis model based on the VMD algorithm and the improved LSTM algorithm. The third part describes the setup of the experimental platform and the relevant parameters and validates the proposed method through examples and comparative experiments, which further verifies its superiority; fault classification and diagnosis are carried out with the improved long short-term memory neural network algorithm and the experimental results are analysed. The fourth part summarises the whole paper and discusses the outlook for future work.
Feature Extraction of Vibration Signals Based on Variational Mode Decomposition and Arrangement Entropy
As depicted in Figure 1, the proposed model operates in a series of steps to assess the condition of a gearbox. Initially, vibration signals are captured from the gearbox at a motor speed of 1420 rpm, with a sampling frequency of 10 kHz. These signals are recorded in various states of gear looseness and wear, as well as when the gearbox is in optimal condition. The second phase involves applying VMD to decompose the vibration signals. For a gearbox in pristine condition, VMD alone is adequate for extracting the pertinent data; however, when evaluating a gearbox with transmission defects, the combination of VMD and arrangement entropy proves instrumental in isolating the characteristic features of these imperfections. In the subsequent stage, the identification of patterns indicative of gear wear states is carried out by the chameleon search-optimized long short-term memory algorithm, an approach designed to delineate and recognise the intricate patterns associated with the various stages of gear deterioration. The final comparative analysis reveals the superior efficacy of the presented model over conventional methodologies: VMD outperforms empirical mode decomposition in signal-decomposition efficiency, and arrangement entropy exhibits enhanced precision in quantifying fault features compared with multi-scale arrangement entropy and approximate entropy, among other time-series complexity metrics. This assessment underscores the proposed model's enhanced accuracy and reliability in diagnosing and characterising gearbox anomalies.
Feature Extraction Methods
In this study, VMD and PE algorithms were employed to distill the fault characteristics inherent in the gearbox transmission.Figure 2 delineates the comprehensive flowchart outlining the intricate process of feature extraction, expounded in the subsequent steps.This systematic approach ensures a meticulous analysis, shedding light on the nuanced anomalies and operational inefficiencies within the gearbox transmission.The approach is outlined as follows: (1) VMD decomposition of the vibration signal of the gearbox transmission is performed to obtain K components.(2) The number of correlations between the original transmission vibration signal and the modal components decomposed by VMD is calculated.A value less than 0.1 indicates that it is a non-effective state component.A value less than 0.1 indicates that it is a non-valid state component, and the component is removed.
(3) The entropy of each modal component is calculated as a feature quantity.
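The paper does not publish its extraction code; as a minimal sketch of steps (1)-(3), the following Python fragment assumes the open-source vmdpy package for the VMD step and uses the K = 5, 0.1 correlation threshold, m = 6, and τ = 1 settings given in the text. The penalty value alpha and the placeholder signal are illustrative assumptions.

import numpy as np
from vmdpy import VMD   # assumed third-party VMD implementation (pip install vmdpy)

x = np.random.randn(10000)        # placeholder for one vibration record sampled at 10 kHz

# Step (1): VMD decomposition into K = 5 modes
K, alpha, tau0, DC, init, tol = 5, 2000, 0.0, 0, 1, 1e-7
u, u_hat, omega = VMD(x, alpha, tau0, K, DC, init, tol)   # u has shape (K, signal length)

# Step (2): keep only modes whose correlation with the raw signal exceeds 0.1
keep = [k for k in range(K)
        if abs(np.corrcoef(x[:u.shape[1]], u[k])[0, 1]) > 0.1]
valid_modes = u[keep]

# Step (3): permutation entropy (m = 6, tau = 1) of each retained mode, using a PE
# function such as the one sketched after the permutation-entropy definitions below
# features = [permutation_entropy(mode, m=6, tau=1) for mode in valid_modes]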
Variational Mode Decomposition (VMD)
This article tackles the issue of limited robustness encountered during the extraction of fault feature frequencies within gearbox systems. Utilizing the VMD method, signals indicative of wear or tooth breakage faults within the gearbox are decomposed into several Intrinsic Mode Function (IMF) components. Subsequently, a correlation coefficient analysis is employed to meticulously identify those modal components that are imbued with fault signals. These selected components then undergo envelope spectrum analysis, a process that efficiently extracts and illuminates the fault feature frequencies, offering insights into the intricate dynamics of the gearbox's operational deficiencies. This refined approach enhances the precision and reliability of detecting and analyzing faults, thereby contributing to the optimization of maintenance and repair strategies.
Variational mode decomposition (VMD) is a fully non-recursive decomposition model. Introduced by Dragomiretskiy et al. in 2014, the essence of VMD is anchored in the use of the central frequencies and bandwidths of the extracted modes [16]. The success of this decomposition technique is attributed to its approach of treating the solution as a constrained variational problem. Each step is carefully defined, ensuring that the extracted signals are both comprehensive and precise, thus bolstering the reliability and applicability of VMD in various analytical and diagnostic applications.
The constrained variational problem of VMD can be written as

$$\min_{\{u_k\},\{\omega_k\}} \left\{ \sum_{k} \left\| \partial_t \left[ \left( \delta(t) + \frac{j}{\pi t} \right) * u_k(t) \right] e^{-j\omega_k t} \right\|_2^2 \right\} \quad \text{s.t. } \sum_{k} u_k = f, \tag{1}$$

where $u_k$ denotes the decomposition mode, $\delta(t)$ is the Dirac distribution, $*$ denotes convolution, and $\omega_k$ is the corresponding central frequency. A quadratic penalty term and Lagrange multipliers are introduced to solve Equation (1). The augmented Lagrangian is

$$L(\{u_k\},\{\omega_k\},\lambda) = \alpha \sum_k \left\| \partial_t \left[ \left( \delta(t) + \frac{j}{\pi t} \right) * u_k(t) \right] e^{-j\omega_k t} \right\|_2^2 + \left\| f(t) - \sum_k u_k(t) \right\|_2^2 + \left\langle \lambda(t), f(t) - \sum_k u_k(t) \right\rangle, \tag{2}$$

where $\lambda$ is the Lagrangian multiplier and $\alpha$ is the balancing (penalty) parameter. The mode number K is provided in advance as a priori information. Equation (2) can be solved by the alternating direction method of multipliers (ADMM) [17]. All decomposed modes and the corresponding central frequencies are then updated according to Equations (3) and (4), respectively:

$$\hat{u}_k^{n+1}(\omega) = \frac{\hat{f}(\omega) - \sum_{i \neq k} \hat{u}_i(\omega) + \hat{\lambda}(\omega)/2}{1 + 2\alpha (\omega - \omega_k)^2}, \tag{3}$$

$$\omega_k^{n+1} = \frac{\int_0^{\infty} \omega \left| \hat{u}_k(\omega) \right|^2 d\omega}{\int_0^{\infty} \left| \hat{u}_k(\omega) \right|^2 d\omega}. \tag{4}$$

A detailed mathematical derivation and the full algorithm for VMD can be found in [17]. VMD has been reported to perform better in identifying fault features from noisy and complex vibration signals than local mean decomposition (LMD), ensemble empirical mode decomposition (EEMD), and conventional EMD [18].
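As a small numerical illustration of Equations (3) and (4), the NumPy fragment below performs one update of a single mode's spectrum and its centre frequency; a complete VMD loop (initialisation, multiplier updates, convergence check) follows [17], and all variable names here are illustrative.

import numpy as np

def vmd_mode_update(f_hat, u_hat, lam_hat, omega, alpha, freqs, k):
    # Wiener-filter-like update of the k-th mode spectrum, Equation (3)
    residual = f_hat - u_hat.sum(axis=0) + u_hat[k]
    u_hat_k = (residual + lam_hat / 2) / (1 + 2 * alpha * (freqs - omega[k]) ** 2)
    # Centre-frequency update as the spectral centroid of the mode, Equation (4)
    power = np.abs(u_hat_k) ** 2
    omega_k = np.sum(freqs * power) / (np.sum(power) + 1e-12)
    return u_hat_k, omega_k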
Permutation Entropy (PE)
Permutation entropy is a measure of the complexity of a time series that introduces the idea of ordinal patterns [19] and is commonly used for detecting dynamic changes, as follows. Let there be a time series $\{x(i), i = 1, 2, \cdots, N\}$ of length N, for which the phase-space reconstruction yields

$$X_i = [x(i), x(i+\tau), \cdots, x(i+(m-1)\tau)], \tag{5}$$

where m is the embedding dimension and τ is the time delay. Ranking the quantities in $X_i$ in ascending order gives

$$x(i+(j_1-1)\tau) \le x(i+(j_2-1)\tau) \le \cdots \le x(i+(j_m-1)\tau), \tag{6}$$

where $j_1, j_2, \dots, j_m$ denote the column indexes of the elements of $X_i$ after sorting.
If two neighbouring values are equal during the sorting process, they are ordered by their original subscripts. In this way, each $X_i$ is mapped to a symbol sequence $S(l) = (j_1, j_2, \dots, j_m)$, where $l = 1, 2, \dots, K$ and $K \le m!$; since an m-dimensional embedding has m! possible orderings, each m-dimensional subsequence $X_i$ is mapped to one of the m! permutations.
The probabilities of occurrence of the symbol sequences, $P_1, P_2, \dots, P_K$, satisfy $\sum_{l=1}^{K} P_l = 1$. The permutation entropy of the time series $\{x(i), i = 1, 2, \cdots, N\}$ is defined as $H_{PE}(m) = -\sum_{l=1}^{K} P_l \ln P_l$, and its normalised form is $H_{PE} = H_{PE}(m) / \ln(m!)$, with $0 \le H_{PE} \le 1$. The smaller the value of $H_{PE}$, the more regular the time series; conversely, the larger the value, the more complex it is. $H_{PE}$ amplifies the small, complex dynamics exhibited by the respective modal components of the normal and faulty states of the equipment.
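The following self-contained Python function is one straightforward way to compute the normalised permutation entropy defined above, with the m = 6 and τ = 1 values used later in the paper as defaults; it is an illustrative implementation, not the authors' code.

import math
import numpy as np

def permutation_entropy(x, m=6, tau=1, normalise=True):
    # Map every embedded vector to its ordinal pattern; a stable argsort breaks ties
    # by the original subscript, as described in the text
    x = np.asarray(x, dtype=float)
    n_vectors = len(x) - (m - 1) * tau
    patterns = np.array([np.argsort(x[i:i + m * tau:tau], kind='stable')
                         for i in range(n_vectors)])
    # Relative frequency of each observed pattern and the Shannon entropy over them
    _, counts = np.unique(patterns, axis=0, return_counts=True)
    p = counts / counts.sum()
    h = -np.sum(p * np.log(p))
    return h / math.log(math.factorial(m)) if normalise else h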
Chameleon Search Algorithm (CSA)
The CSA is a novel meta-heuristic optimization algorithm based on the foraging strategy of chameleons, proposed by Braik in 2021 [20]. This article optimizes the parameters of the LSTM using the CSA. The algorithm solves the optimization problem through a three-stage position update involving searching for prey, rotating the eyes to find prey, and capturing prey. The CSA can be described as follows. Initialization: the CSA begins by randomly initializing the individuals of the chameleon population, each of which is a candidate solution to the target problem. The initial position of each chameleon of a population of size n in the d-dimensional search space is set by a random number uniformly generated in the range (0,1), scaled to the search bounds. Searching for prey: during foraging, the chameleon group searches for and finds food primarily by updating positions. The new position in the j-th dimension at iteration t + 1 depends on the current position, the personal best position, the globally optimal position of the iteration, the chameleon perception probability, development-capacity control coefficients, random numbers uniformly generated in (0,1), and a search-ability control parameter that decays with the current iteration number t up to the maximum number of iterations T.
Eye rotation to spot prey: the chameleon's eyes can rotate 360° to search for prey, and the position is updated based on the position of the prey. The position at iteration t + 1 is obtained by applying a rotation matrix m to the current position about the centre of the chameleon's position.
Catching prey: when the prey is close to the chameleon, the chameleon uses its tongue to attack and capture it. The position is updated using the chameleon's current tongue velocity, the velocity of the previous iteration, and an acceleration term involving the constant 2590 that varies with the iteration number.
Long Short-Term Memory (LSTM) Neural Network
The architecture of LSTM neural networks is intricately designed, comprising an input layer, a hidden layer, a recurrent layer, and an output layer. Addressing the challenges of gradient vanishing and explosion inherent in recurrent neural networks (RNNs), LSTM neural networks incorporate memory unit states within the hidden layer. This modification fosters enhanced computational efficiency and learning precision. Within this hidden layer, control units are distinctly categorized into the input gate, forget gate, and output gate. The input gate is tasked with the selective recording of new information into the cell state, ensuring that only relevant data are assimilated. Conversely, the forget gate is instrumental in selectively discarding redundant or irrelevant information from the cell, optimizing the storage efficiency. The output gate then channels the retained information to the succeeding neuron. This selective retention and omission of information endow the LSTM neural network with the capacity for long-term memory, enabling it to adeptly extract temporal features. By curating and processing data in this way, the LSTM neural network stands as a robust model for managing complex sequential and time series data, ensuring precision and reliability in predictions and analyses.
Initially, vibration signals indicative of gearbox faults were meticulously collected. Based on the distinct characteristics of these signals, an LSTM model was thoughtfully designed and calibrated. Subsequently, an illustrative analysis was undertaken wherein the original vibration signals, obtained from the gearbox, were decomposed utilizing the VMD method. This allowed for an intricate examination and processing of the signals, ensuring that nuanced features were not overlooked. The sorted dataset, enriched with comprehensive insights, was then subjected to temporal information fusion. This process was facilitated by the rigorously established long short-term memory neural network model, ensuring that the resultant data were both holistic and precise, ready for further analysis and interpretation.
The forget gate determines which information is discarded from the cell state:

$$f(t) = \sigma(W_f h(t-1) + U_f x(t) + b_f), \tag{14}$$

where $\sigma$ is the Sigmoid activation function and $W_f$, $U_f$, and $b_f$ are the weights and threshold of the forget gate. Update the two-part output of the input gate:

$$i(t) = \sigma(W_i h(t-1) + U_i x(t) + b_i), \tag{15}$$
$$C'(t) = \tanh(W_c h(t-1) + U_c x(t) + b_c). \tag{16}$$

Update the cell state:

$$C(t) = f(t) \circ C(t-1) + i(t) \circ C'(t), \tag{17}$$

where $C(t-1)$ is the memory unit at moment t-1.
Update the output-gate output:

$$o(t) = \sigma(W_o h(t-1) + U_o x(t) + b_o), \tag{18}$$
$$h(t) = o(t) \circ \tanh(C(t)), \tag{19}$$

where $W_o$, $U_o$, and $b_o$ are the weights and threshold corresponding to the output gate, and $h(t)$ is the output vector of the hidden layer. Update the predicted output at the current moment:

$$y(t) = V h(t) + c, \tag{20}$$

where V and c are the weights and threshold of the hidden-to-output layer connections, respectively. Equations (14)-(20) comprise the forward propagation of the LSTM; the error between the predicted and actual values is then back-propagated to update the weights and thresholds until the maximum number of iterations is reached.
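The study's own network is not published; purely as an illustration of an LSTM classifier of the kind described by Equations (14)-(20), the Keras sketch below maps sequences of VMD-PE features to the three gearbox states. The shapes, layer width, optimiser, and the random placeholder data are assumptions.

import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

n_samples, n_steps, n_features, n_classes = 750, 5, 1, 3   # assumed shapes: 5 retained IMFs, 3 states
X = np.random.rand(n_samples, n_steps, n_features).astype('float32')  # placeholder PE features
y = np.random.randint(0, n_classes, size=n_samples)                   # placeholder state labels

model = tf.keras.Sequential([
    layers.Input(shape=(n_steps, n_features)),
    layers.LSTM(64),                                  # gated memory cells as in Eqs. (14)-(19)
    layers.Dense(n_classes, activation='softmax'),    # output layer, analogous to Eq. (20)
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
model.fit(X, y, epochs=50, batch_size=32, validation_split=0.2, verbose=0)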
Chameleon Search Algorithm for Optimizing Long Short-Term Memory Neural Networks
The classification efficacy of the long short-term memory neural network depends on specific parameters. To optimize the parameters c (penalty factor) and g (variance), we introduce the chameleon search optimization-long short-term memory neural network algorithm. This tailored approach refines these parameters to ensure enhanced classification performance and accuracy. The specific steps of the algorithm are as follows (a simplified code sketch of this tuning loop is given after the list): (1) import the training set and test set and normalise them; (2) initialize the parameters: initial penalty factor c0 and initial variance g0, within the range given in [21,22]; (3) set the eye-rotation-degree function, i.e., the function to be optimized, as the diagnostic-accuracy function f of the long short-term memory network with c and g as the relevant parameters, and use the chameleon search to find the optimum c and g; (4) when the eye-rotation positions are the same, i.e., the accuracy is the same, select the parameter combination with the smaller value of c to reduce computational complexity; (5) iterate the loop until the maximum number of iterations N is reached; (6) output the position of the chameleon, i.e., the optimal values of c and g, which are used as the given parameters to train the long short-term memory model; (7) use the trained model to identify the gear wear-level faults on the test set and output the diagnostic results.
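The sketch below illustrates only the overall shape of such a tuning loop; it is a deliberately simplified population search, not the full chameleon search algorithm, and the fitness function is a stub standing in for training and evaluating the LSTM with a given (c, g) pair. The population size and step rule are assumptions; the 1-5 range and 200 iterations follow Table 4.

import numpy as np

def lstm_accuracy(c, g):
    # Placeholder fitness: in the real workflow this would train the LSTM with the
    # hyperparameters (c, g) and return its diagnostic accuracy on validation data
    return np.random.rand()

rng = np.random.default_rng(0)
pop = rng.uniform(1, 5, size=(10, 2))            # 10 candidate (c, g) positions
best_pos, best_fit = None, -np.inf
for t in range(200):                             # 200 iterations, as in Table 4
    for i in range(len(pop)):
        fit = lstm_accuracy(*pop[i])
        better = fit > best_fit
        tie = best_pos is not None and fit == best_fit and pop[i][0] < best_pos[0]
        if better or tie:                        # on ties prefer the smaller c, as in step (4)
            best_fit, best_pos = fit, pop[i].copy()
    # Move every candidate towards the current best with a shrinking random step
    step = (1 - t / 200) * rng.uniform(-1, 1, size=pop.shape)
    pop = np.clip(best_pos + step * (pop - best_pos), 1, 5)
print('best (c, g):', best_pos, 'accuracy:', best_fit)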
Comparison of Application Cases and Methods
This study employs a publicly accessible dataset for gearbox fault diagnosis, originally compiled by Zamanian, A.H., and colleagues in 2014, serving as a foundational resource for experimental validation [23]. The test rig underwent evaluations under varied pinion conditions, with vibration signals captured by accelerometers operating at a sampling rate of 10 kHz over a 10-second duration. The compiled data are organized into three distinct packages, each representing a specific fault type. These encompass datasets characterizing the healthy state, gear breakage, and gear wear conditions, providing a comprehensive spectrum for in-depth analysis and evaluation.
The core components of the test stand's drive system primarily include a motor that functions as the drive input. This motor propels the gearbox via a belt and, in turn, the gearbox's output drives the brake system, also connected by a belt. Data acquisition is facilitated by an accelerometer positioned on the drive end of the induction motor [24]. The sampling frequency is 10 kHz and the sampling duration is 10 seconds. The motor operates at a speed of 1420 rpm. Given that the pinion gear has 15 teeth and the large gear 110 teeth, the calculated meshing frequency equates to 355 Hz, derived from (1420/60) × 15. However, spectrum analysis reveals an actual meshing frequency close to 365 Hz, offering nuanced insights into the dynamics of gear engagement and operation.
Comparison of Modal Decomposition Methods
To assess the efficacy of the integrated approach combining VMD, permutation entropy, and the CSA-optimized LSTM algorithm, a comprehensive validation was conducted. Experimental data were derived from the vibration signals of gearboxes in three distinct states: normal gearing, worn gears, and broken gears. These signals were analysed under the condition of 100% rated lubricating-oil pressure to ensure an exhaustive examination of the algorithm's performance across a spectrum of operational and wear conditions, thereby underscoring its versatility and robustness.
The VMD algorithm requires the number of decomposed modes, denoted K, to be set in advance. Taking the vibration signal of a gearbox in its normal state as an illustration, a Fourier transform is first applied. This yields a detailed spectrogram illustrating the vibration characteristics of the gearbox in its normal operating state. VMD decomposition is then executed, effectively distilling the intrinsic vibration modes and their respective characteristics.
The outcomes of this decomposition process are systematically illustrated in Figure 3, providing visual and analytical insight into the vibration dynamics of the gearbox. In this study, a random selection methodology was employed to curate the training set. Prior to training, the original gearbox data were decomposed via VMD. Through iterative experimentation, it was observed that the VMD decomposition, when optimized using the CSA, exhibited progressive convergence as the number of iterations increased. This methodological refinement ensures more accurate and consistent results, advancing our understanding of gearbox dynamics.
IMF1 is a trend component that reflects the overall trend of gear-speed changes in the gearbox dataset. IMF2~IMFn are the remaining random components, resulting in very small prediction errors. When a transmission failure arises, alterations are observed not only in the fundamental frequency components but also in the octave components, each exhibiting varying degrees of suppression or enhancement. If the value of K is set to 3, the distinction between the different modes and the frequency components of the transmission vibration signal becomes minimal, making comprehensive extraction of fault-state information challenging. On the other hand, a value exceeding 5 increases the computational complexity significantly and introduces the risk of over-decomposition; such a scenario is not optimal for prompt fault diagnosis due to the increased computational demands and the potential for muddled insights. To strike a balance, we opted for a value of K = 5. This allowed us to efficiently capture the centre frequency of the different modes inherent in the normal-state vibration signal of the gearbox transmission without compromising computational efficiency or clarity of insight. The time-domain plot derived from the VMD decomposition is depicted in Figure 3a, and the corresponding frequency spectrum is presented in Figure 3b, offering a visual and analytical exploration of the transmission dynamics.
As can be seen in Figure 3, the modal aliasing phenomenon is effectively avoided by VMD, and the centre frequency of each modal component is consistent with the overall frequency characteristics derived from the fast Fourier transform, realistically restoring the information contained in the original signal. For comparison with the existing empirical mode decomposition (EMD), the time-domain diagram of the transmission system's normal-state vibration signal decomposed by EMD is shown in Figure 4. Figure 4 illustrates that, as the number of decomposed modes increases, overlapping of modes commences from mode 6 onward. This overlap gives rise to invalid components, which are ineffective in representing information about the transmission's wear state. To mitigate the influence of these non-representative components, we introduced correlation-coefficient analysis. This technique calculates the correlation between each modal component and the chosen transmission vibration signal, ensuring that only valid and informative components are retained for subsequent analysis, enhancing the precision and reliability of the diagnostic insights.
The correlation between each modal component and the selected transmission vibration signal is calculated to distinguish valid from invalid components based on their ability to reflect the state of the transmission. Modal components with correlation coefficients exceeding 0.1 are deemed valid, as they effectively encapsulate the transmission-state information and are, therefore, selected for further analysis. Conversely, components with correlation coefficients below 0.1 are classified as invalid, lacking the capacity to serve as feature quantities indicative of the transmission's state. These low-correlation components are unable to accurately portray the variations in vibration signals distinguishing normal and fault conditions of the transmission and are consequently excluded from further consideration and analysis. This ensures that the subsequent evaluation and diagnostics are rooted in the most informative and representative data, enhancing the accuracy and reliability of the findings.
VMD-PE and EMD-PE are used to extract features from the vibration signals. The embedding dimension m and the delay time τ must be set in advance for the PE calculation; m = 6 and τ = 1 are generally chosen to better reflect the dynamics of the time series, so these values are used here. PE is calculated as the feature quantity of the modal components for the three transmission states, with a total of 750 sets of data, 250 sets for each state. We randomly select 200 sets of normal data, 200 sets of incomplete-loosening data, and 200 sets of complete-loosening data as the training set and use the rest as the test set, then train the model and classify the results with the CSA-optimized LSTM algorithm. The diagnostic results are shown in Tables 1 and 2, which clearly delineate the superior efficacy of the proposed method: it achieves a fault-identification rate of 100% for diagnosing transmission wear associated with varying degrees of loosening, surpassing the 96.67% rate achieved by the EMD-PE method. This contrast in diagnostic precision attests to the capability of the VMD method to accurately discern and isolate fault information related to the gearbox transmission.
Comparison of Time Series Complexity Metrics
The effectiveness of arrangement entropy in quantifying fault information for diagnosing gear-loosening faults during gearbox transmission is critically assessed. For a thorough evaluation, time-series complexity indices, including arrangement entropy, multiscale arrangement entropy, and approximate entropy, among others, are applied to the modal components decomposed by VMD.
The CSA-optimized LSTM algorithm is employed for fault diagnosis, facilitating an intricate analysis and classification of the data. This ensures that the assessment is both comprehensive and precise, illuminating the nuanced fault characteristics and their implications.
The diagnostic outcomes, which offer a detailed insight into the capability of arrangement entropy and comparable indices in capturing and quantifying fault information, are vividly displayed in Figure 5 and itemized in Table 3. This detailed presentation aids in the meticulous examination of the indices' performance, offering robust data to inform optimized fault diagnosis and maintenance strategies. Insights gleaned from Figure 5 and Table 3 reveal a nuanced pattern in the performance of arrangement entropy in fault diagnosis. As the scale of arrangement entropy expands, there is a noted decline in the diagnostic accuracy for incomplete wear and broken gears. Despite this, arrangement entropy outperforms multi-scale arrangement entropy in quantifying fault information.
A comparative analysis with other indices underscores the superior efficacy of arrangement entropy, boasting a correct diagnosis rate that peaks at an impressive 100%. This remarkable precision is consistent, underscoring the robustness of arrangement entropy in capturing and articulating fault nuances.
These findings corroborate the assertion that arrangement entropy excels in dynamic detection performance during the diagnosis of loosening faults in gearbox transmission, outpacing other indicators. The metric's adeptness in encapsulating intricate fault dynamics underscores its pivotal role in enhancing the precision and reliability of diagnostic protocols in gearbox maintenance and repair.
Comparison of Optimization Algorithms
To assess the efficacy of the CSA-optimized LSTM model in diagnosing faulty gearbox transmissions, a comparative analysis was conducted deploying various optimization algorithms. These included the CSA-optimized LSTM algorithm and a grid search-optimized LSTM algorithm, among others, each applied systematically for fault diagnosis. Each algorithm's parameter settings are detailed in Table 4. The evaluation of these optimization algorithms' performance is anchored on two pivotal indices: the time required for computation and the correct diagnosis rate. These criteria offer a balanced perspective, encapsulating both the efficiency and accuracy dimensions of the algorithms' performance. The comparative outcomes, which provide a comprehensive insight into the relative performance and efficacy of each optimization algorithm, are tabulated in Table 5. These data serve as a robust foundation for evaluating the nuanced capabilities of the CSA-optimized LSTM model in the context of faulty gearbox transmission diagnosis, offering clear benchmarks for performance optimization and enhancement. Table 4. Parameter setting of each optimization algorithm.
Optimization algorithm - parameter setting (Table 4):
CSA: the number of iterations is 200, and the initial c and g are random numbers from 1 to 5.
Grid search: initially, c and g are random numbers from 1 to 5, with an initial step size of 0.1.
Genetic algorithm: the number of iterations is 200, and c and g range from 1 to 5.
Particle swarm algorithm: the number of iterations is 200, and c and g range from 1 to 5.
As evidenced in Tables 4 and 5, the CSA optimization algorithm outperforms its counterparts, namely the genetic algorithm, grid search method, and particle swarm algorithm, in terms of convergence speed. Impressively, it attains a test accuracy of 100%, adeptly circumventing the common pitfall of overfitting. This underscores the CSA optimization algorithm's superior efficacy and optimization performance, marking it as a leading solution that combines speed, accuracy, and robustness in delivering optimal results.
Troubleshooting of Variable Pressure States
High-speed bearing and gear problems such as spalling and pitting are usually caused by insufficient local lubrication. Therefore, gear transmission and lubricant pressure are closely related.
Since the lubricant pressure is not always maintained at the rated value during gearbox operation, the effectiveness of the method in fault diagnosis under variable pressure conditions was verified. We selected 750 sets of gearbox transmission vibration data at 20%, 40%, 60%, 80%, 100%, and 110% of the rated pressure. Using the method proposed in this paper, gearbox-transmission wear-fault feature extraction and fault-type identification were carried out; the diagnostic results are shown in Figure 6. As depicted in Figure 6, there is a marked increase in the correct diagnosis rate as the pressure rises, eventually reaching 100%. At lower pressures, the vibration signals of the gearbox transmission are notably susceptible to various forms of interference, which obscures essential fault-state information. However, a shift is observed as the pressure increases: distinctive vibration characteristics, emblematic of the various transmission states, emerge, and the influence of interference factors on the overall fault picture is markedly subdued. This resilient performance amid changing pressures underscores the method's ability to extract wear-related failure characteristics attuned to the variations in the lubricant-pressure state of the transmission. Such robust diagnostic precision reaffirms the method's validity, positioning it as a reliable asset in intricate fault detection and characterization.
Comparison of Different K Values for Troubleshooting
The choice of the K value plays a crucial role in effectively extracting information related to the state of the gearbox transmission. It also significantly impacts the time taken for modal decomposition. The assertion that K = 5 is superior for fault diagnosis is examined in this context. To validate this claim, the K value is varied, and 750 sets of gearbox transmission vibration data are employed, all collected at 100% of the rated pressure. The proposed method is then applied to extract characteristics indicative of wear faults in the gearbox transmission and to identify the specific fault types present. The diagnostic outcomes are presented in Table 6, which shows how well different K values capture the fault characteristics and indicates the optimal value for diagnostic accuracy and efficiency. As illustrated in Table 6, a modal decomposition number of K = 4 proves inadequate, resulting in insufficient extraction of transmission vibration information and a nearly 10% decline in the diagnosis accuracy rate. Conversely, with K = 5, a balance is achieved: the modal decomposition is optimal, the decomposition time is moderate, and a 100% correct diagnosis rate is attained. When K is increased to 6, the decomposition time almost doubles compared with K = 5. This increase corresponds to an over-decomposition scenario in which the excess frequency components carry no effective transmission-state information, leading to a nearly 10% reduction in the correct diagnosis rate. These findings corroborate that K = 5 offers the best trade-off between decomposition time and diagnostic accuracy.
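The K comparison can be scripted along the lines sketched below. The vmd_decompose function is only a crude frequency-band placeholder standing in for a real variational mode decomposition (in practice an existing VMD implementation would be used), so the absolute timings are not meaningful; the loop structure, however, mirrors the K = 4, 5, 6 comparison behind Table 6.

import time
import numpy as np

def vmd_decompose(signal, K):
    # Placeholder decomposition: split the spectrum into K bands and invert
    # each band; a real VMD solver would be substituted here.
    spectrum = np.fft.rfft(signal)
    modes = []
    for band in np.array_split(np.arange(spectrum.size), K):
        masked = np.zeros_like(spectrum)
        masked[band] = spectrum[band]
        modes.append(np.fft.irfft(masked, n=signal.size))
    return modes

t_axis = np.linspace(0, 1, 4096)
signal = np.sin(2 * np.pi * 50 * t_axis) + 0.1 * np.random.randn(t_axis.size)

for K in (4, 5, 6):
    t0 = time.perf_counter()
    modes = vmd_decompose(signal, K)
    elapsed = time.perf_counter() - t0
    # The diagnosis accuracy for each K would then be obtained by feeding the
    # extracted features to the trained classifier, as in Table 6.
    print(f"K={K}: {len(modes)} modes, decomposition time {elapsed * 1e3:.2f} ms")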
Figure 1. Overall flowchart of the proposed model.
Figure 2. Flow chart of fault feature extraction.
Figure 3. Original data and VMD decomposition results of each sequence. (a) Time-domain diagram of the VMD decomposition. (b) Spectrogram of the VMD decomposition.
Figure 4. Original data and EMD decomposition results of each sequence. (a) Time-domain diagram of the EMD decomposition. (b) Spectrogram of the EMD decomposition.
Figure 6. Fault diagnosis results under different pressure conditions.
Table 3. Complexity index of different time series.
Table 6. Fault diagnosis results of different K values. | 2023-10-26T15:23:40.302Z | 2023-10-24T00:00:00.000 | {
"year": 2023,
"sha1": "5ccb66c3aab778f6fdc95bd0bd0ac8b94083a36e",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2076-3417/13/21/11637/pdf?version=1698199350",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "68efea0a1bc55863309f3253064a786f1a2ad3f0",
"s2fieldsofstudy": [
"Engineering",
"Computer Science"
],
"extfieldsofstudy": []
} |
261545443 | pes2o/s2orc | v3-fos-license | Patient Safety Culture, Infection Prevention, and Patient Safety in the Operating Room: Health Workers’ Perspective
Introduction A hospital's patient safety culture affects surgical outcomes. Operating room safety culture has been overlooked despite the importance of patient safety. The AHRQ's Hospital Survey on Patient Safety Culture (HSOPSC) has been used worldwide to assess and enhance patient safety culture. This study examined how patient safety culture and infection prevention affect patient safety in the Operating Room (OR). Methods This observational study used an online survey and included 143 OR workers. Descriptive statistics and multiple linear regression were used to examine how patient safety culture and infection prevention affect the level of patient safety. Results Most respondents worked in general hospitals with excellent accreditation. Most respondents were male, aged between 26 and 40 years old, and held bachelor's degrees. Most were hospital-experienced nurses. Less than half had worked in their units for over ten years. "Organizational Learning - Continuous Improvement", "Teamwork", and "Handoffs and Information Exchange" received the most positive responses in the OR. However, "Staffing and Work Pace" and the overall perception of patient safety ranked lowest. "Organizational Learning - Continuous Improvement" and "Hospital Management Support for Infection Prevention Efforts" were found to affect perceptions of the OR patient safety level. Conclusion According to the findings of our study, the overall patient safety culture in the operating room remains weak, which highlights the importance of continuing efforts to improve patient safety in the OR. Further study could be directed at identifying organizational learning in infection prevention to enhance patient safety in the OR.
Introduction
Maintaining patient safety in the operating room (OR) is crucial. 1 It is vital to avoid unnecessary problems, and the pressure to avoid these consequences during surgery is extremely intense. From 2012 to 2015, a study in a medium-sized hospital in a developed country identified 220 occurrences in the OR. 2 According to WHO data, surgical mistakes (18%) were one of the top three activities leading to hazardous occurrences in low-income countries. 3 Furthermore, a blaming culture, a lack of support, and a lack of a clear and transparent investigative process appeared to amplify the impact of patient safety issues in the OR, 4 even though patient safety is one of the indicators that must be considered in the OR. 5 Patient safety culture is characterized as employees' attitudes, views, and values around patient safety. 6 The Agency for Healthcare Research and Quality (AHRQ) released the Hospital Survey on Patient Safety Culture (HSOPSC) to examine patient safety culture in hospitals. 7 The perceptions of frontline employees about the OR's safety culture may thus provide a credible foundation for evaluating and enhancing surgical patient safety. 8 A hospital's patient safety culture can influence surgical patient outcomes. 9 Improving a hospital's safety culture can help support hospital-level surgical quality improvement efforts.
Infection control and prevention are crucial for guaranteeing the safety of patients undergoing surgical operations in the OR. 10 To limit the risk of problems in surgical procedures, the WHO established 10 fundamental objectives for safe surgery and a Surgical Safety Checklist (SSC). 11 The WHO SSC includes three steps: before anesthesia induction (sign-in), before skin incision (time-out), and before the patient leaves the OR (sign-out). As an example of SSC items, nursing staff may check proof of sterility and equipment availability. This checklist is generally applicable to reduce major surgical complications.
Infection is a significant cause of morbidity and mortality in the OR, possibly due to an increase in the number of elderly surgical patients or those with a variety of chronic and immunocompromising conditions, as well as the emergence of antibiotic-resistant microorganisms. 10,12 As a result, adequate interventions to limit the occurrence of related infections must be implemented in order to assure improved surgical outcomes. 10 Infection prevention should be integrated into comprehensive surgical quality improvement programs to improve patient safety. 13 Surgeons, anesthesiologists, nurses, and other OR personnel play an important role in improving patient safety. 14 The presence of errors, such as unintentional patient exposure in the OR, will have an impact on patient safety. 15 Doctors and nurses who worked longer hours in the OR and reported more mishaps had less positive opinions about patient safety. 16 A positive work atmosphere is critical because it can improve treatment outcomes and promote a safer care setting when combined with proper staffing. 16 However, insufficient attention has been paid to the OR safety culture. 8 Furthermore, a study conducted in one hospital in Indonesia discovered that the surgical department, including the OR, had the lowest patient safety culture score. 17 Therefore, the purpose of this study was to investigate the impact of patient safety culture and infection prevention on patient safety in the OR.
Data Analysis and Synthesis
IBM SPSS 29 was used for the statistical analysis, and the variables were reported using descriptive statistics. The hospital component variables and demographic data were computed first. The percentage of positive responses for each dimension was determined after inverting negatively phrased items, as advised in the guideline. Percentages greater than 75 indicated satisfactory results in terms of patient safety culture, while those less than 50 indicated weak dimensions. Second, we used multiple linear regression analysis to examine the effect of the SOPS variables on patient safety ratings, together with T-tests and an F-test, because more than one independent variable was tested for whether it was statistically predictive of patient safety in the OR. A p-value of less than or equal to 0.05 was deemed statistically significant, indicating an influence between the variables.
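For readers who prefer a scripted equivalent of this SPSS workflow, the sketch below performs the same three steps in Python: reverse-coding negatively phrased items, computing the percentage of positive responses, and fitting a multiple linear regression. The column names and responses are invented for illustration and are not the actual HSOPSC item codes or study data.

import pandas as pd
import statsmodels.api as sm

# Hypothetical 1-5 Likert responses; column names are illustrative only.
df = pd.DataFrame({
    "org_learning": [5, 4, 4, 5, 3, 4],
    "staffing_neg": [2, 1, 2, 3, 2, 1],   # negatively phrased item
    "patient_safety_rating": [4, 3, 4, 5, 3, 3],
})

# Reverse-code negatively phrased items so that higher always means more positive.
df["staffing_neg"] = 6 - df["staffing_neg"]

# Percent positive = share of responses of 4 ("agree") or 5 ("strongly agree").
pct_positive = (df[["org_learning", "staffing_neg"]] >= 4).mean() * 100
print(pct_positive.round(1))

# Multiple linear regression of the overall patient safety rating on the dimensions;
# the model F-test and per-coefficient t-tests appear in the summary output.
X = sm.add_constant(df[["org_learning", "staffing_neg"]])
print(sm.OLS(df["patient_safety_rating"], X).fit().summary())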
Ethics Approval
The study was approved by the ethics committee of the Faculty of Nursing, Universitas Airlangga (No: 2796-KEPK). Due to the use of a questionnaire in this study, participation indicated a confirmation of implied consent.
Hospital Factor Variables and Demographic Factors
This study had 143 participants, with 79% (113) working in government-owned hospitals, 97.2% (139) working in hospitals with excellent accreditation status, and 97.9% (140) working in general hospitals. The majority of respondents, 67.1% (96), were male, and 61.5% (88) were between the ages of 26 and 40. Half of the respondents (49.7%, 71) hold a bachelor's degree or higher. The majority of respondents (87.4%, 125) work as nurses, 61.5% (88) have worked in hospitals for more than ten years, and 48% (69) have worked in their units for more than ten years. 52.4% (75) of respondents work an average of 30-40 hours each week. The vast majority of respondents, 95.1% (136), work directly with patients. Table 1 displays the hospital factor variables as well as the demographic factors.
Patient Safety Culture at the Unit Level
The average percentage of positive answers to "Organizational Learning - Continuous Improvement" is 96%. The majority of respondents felt that their hospital has a constructive manner or activities to promote patient safety. Furthermore, "Teamwork" and "Handoffs and Information Exchange" receive 92% positive responses on average. Respondents believe that hospital employees across different units or departments are cooperating and coordinating better. It also implies that respondents believe critical patient care information is shared between hospital units and shifts.
The average percentage of positive replies to "Reporting Patient Safety Events" and "Responses to Error" is 75%. It was observed that hospital staff believe there is a low level of reporting of errors that are noticed and addressed before they reach the patient. Furthermore, errors that could have harmed the patient but did not are rarely reported. When hospital staff make mistakes, they are rarely treated fairly, and there is little emphasis on learning from mistakes and helping employees who are involved in errors. Furthermore, the average percentage of positive replies to the dimension "Staffing and Work Pace" is just 66%. This indicates that there are insufficient personnel to meet the demand, and that employees work irregular hours and are pressed for time. Finally, the general impression of patient safety is low (43%). The average proportion of positive responses for the SOPS dimensions is shown in Table 2.
Variables Predictive on Patient Safety
The T-test results revealed an influence of the dimensions "Organizational Learning - Continuous Improvement" (p value = 0.000) and "Hospital Management Support for Infection Prevention Efforts" (p value = 0.000) on the overall perception of patient safety in the OR (see Table 3). The F-test findings revealed a simultaneous influence of the SOPS dimensions on patient safety (p value = 0.000). Based on the coefficient of determination output, the R square value is 0.973, indicating that the SOPS variables explain 97.3% of the variance in the patient safety variable.
Discussion
This is Indonesia's first study to look into the impact of patient safety culture and infection prevention on patient safety in the OT.Patient safety culture in OT is essential because surgical services have a high risk of adverse patient safety outcomes such as surgical site infection or wrong operation.The higher risks were influenced by the patient, the procedure, the surgeon(s), and the hospital environment.Several concerns were highlighted in the research.First, some hospital culture dimensions in the OR had low scores, including "Staffing and Work Pace", "Reporting Patient Safety Events", and "Response to Errors".The AHRQ defines "Staffing and Work Pace" as employees working reasonable hours and without feeling pressured.According to the findings of this study, staff felt "Staffing and Work Pace" were at their lowest points, possibly because working in the OR can present a number of challenges that can lead to work-related stress or burnout, such as a lack of proper coordination, poor performance of the head nurse and hospital managers, and additional workload. 20urthermore, the findings of this study are consistent with other studies that found the lowest scores for SOPS dimensions "Reporting Patient Safety Events" and "Response to Error" in the OR. 8,21The low "Response to Error" score implies a common blame culture. 8According to the data, 20.3% of health workers believe that when a patient safety issue happens, the attention is on the individual rather than the problem.Other studies that revealed poor event reporting scores in OR found similar results. 22The findings of this study are consistent with prior findings, indicating that OR safety culture has to be improved. 8econd, the most predictive organizational culture determinants of patient safety were the "Organizational Learning -Continuous Improvement" dimensions, which received the highest scores."Organizational Learning -Continuous Improvement" is defined by the AHRQ as "work processes that are reviewed on a regular basis, changes are made to avoid repeating mistakes, and changes are evaluated". 7According to the findings of this study, the majority of healthcare employees believe that hospitals should examine work processes on a frequent basis.Also, healthcare workers agreed that hospitals analyze adjustments and make modifications to prevent mistakes from occurring again.
The findings of this study support prior findings that the organizational learning dimension receives the highest score in the OR. 23 In the OR, user behavioral intention is indirectly influenced by strong organizational learning abilities, particularly through social influence. 24 Furthermore, proper precautions must be implemented in the OR to limit the frequency of related infections in order to ensure patient safety. 10 Working with staff attitudes toward patient safety could be another technique for improving compliance, which should reduce the number of surgical site infections. 25 The average favorable answer for the dimension "Hospital Management Support for Infection Prevention Efforts" was 84%. The majority of health workers believe that the checklist, standard operating procedure, clinical pathway, surveillance program, hospital policy, surgical site infection coordinator, feedback, and computer-based decision-making support system in the OR are satisfactory. A good infection control program in the OR decreases wound infection rates in patients and speeds up their postoperative recovery. 26 This study has several limitations. First, the questionnaire was self-reported, which may cause potential boredom of respondents due to time, individual bias about one's own behavior, and potential concealment of true thoughts and perspectives. 27 Other problems were connected to the use of Likert scales, the answers to which could be influenced by preceding questions or favor one side. Furthermore, using an online form may reduce participant coverage while also influencing participation characteristics, as seen in this study where nurse participants outnumbered other occupations.
Conclusion
According to the findings of our study, the overall patient safety culture in the OR remains weak. This highlights the importance of continuing efforts to improve patient safety in the OR. Furthermore, our research found that "Organizational Learning - Continuous Improvement" and "Hospital Management Support for Infection Prevention Efforts" were among the significant dimensions affecting the overall perception of patient safety in the OR. Further study could be directed at identifying organizational learning in infection prevention to enhance patient safety in the OR.
Table 1
Hospital Factor and Demographics Frequency Distribution
Table 2
Average Positive Response Rate for the SOPS
Table 3
Regression Results | 2023-09-06T15:07:55.966Z | 2023-09-01T00:00:00.000 | {
"year": 2023,
"sha1": "e058c015231887ddfb83b9165a9b92c3f2730b3f",
"oa_license": "CCBYNC",
"oa_url": "https://www.dovepress.com/getfile.php?fileID=92470",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d121dc18771b3720ad93cf7b4a149c70e4369c0b",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
2110609 | pes2o/s2orc | v3-fos-license | Targeting the intratumor heterogeneity in PMBL
Primary mediastinal B cell lymphoma (PMBL) affects predominantly young females and belongs to the most curable lymphoma subtypes. Nevertheless, treatment-related troubles were recognized: (1) late complications due to chemotherapy and radiation; (2) treatment failure during salvage therapy of relapsed patients.1 Therefore, further investigations of molecular mechanisms of PMBL oncogenesis are warranted.
The oncogenic program of PMBL shares similarities with that of classical Hodgkin lymphoma as well as with that of activated B cell like diffuse large B cell lymphoma (ABC-DLBCL). In particular, proliferation and survival of PMBL depends on constitutive activation of NF-κB and JAK-STAT pathways and expression of their targets, like MYC and BCL-XL.2 Interestingly, components of the ABC oncogenic program (pSTAT6, IRF4, CD30) are expressed only in parts of the tumor cells population of PMBL.3,4 On the other hand, the master regulator of germinal center (GC) oncogenic program, proto-oncogene BCL6 is also present in a variable portion of PMBL tumor cells.5
We were the first to show the mutually exclusive character of pSTAT6 and BCL6 staining within PMBL tumors. Therefore, we hypothesized that within a PMBL tumor the cell populations driven by different oncogenic programs may co-exist. Further, we have shown that at least a part of PMBL cells were sensitive to BCL6 inhibition. Targeting the JAK2/STAT6 axis also induced a partial anti-tumor effect. Subsequently, we have shown that combined treatment targeting BCL6 and pSTAT6 using specific inhibitors induce cell death in additive manner. Generally, our data suggests the existence of at least two functionally diverse cell subpopulations, driven by different oncogenic programs.6 This implies that by searching for specific molecular targets one should consider intratumor heterogeneity of PMBL, which is a consequence of different genetic, epigenetic, and environmental processes and is recognized as major obstacle to effective cancer treatment.7
In an effort to clarify how one may use the intratumor heterogeneity to improve the existing immunochemotherapy we knocked down BCL6 and STAT6 in PMBL cell lines followed by treatment with doxorubicin, vincristine, and rituximab, the components of current immunochemotherapy program R-CHOP. In two of three PMBL cell lines the BCL6 or STAT6 knock down sensitized PMBL cells to the components of conventional immunochemotherapy. Interestingly, although the major cell fractions expressed BCL6, the knock down of STAT6 induced a stronger response to R-CHOP components than the BCL6 inhibition. Thus, by the cell sensitization process in PMBL the size of druggable subpopulations does not play a major role. The other factors like interaction between subpopulations, e.g., production of growth factors and other signaling molecules, might explain the observed phenomenon.
Our study addresses several aspects of cancer therapy. First, it challenges the rationale of use gene expression profiling for individualization of cancer therapy. This method does not consider the intratumor heterogeneity and, therefore, would not provide the adequate information on oncogenic programs of minor tumor cell subpopulations. Immunohistochemistry, however, is able to detect even small subpopulations within a tumor sample driven by alternative pathways. Second, our finding stresses the importance of monitoring the dynamic of tumor subpopulations in relapsed tumors. This analysis may provide the further perspective for sensitization of relapsed tumor to conventional salvage therapy by targeting the population that is responsible for tumor re-growth.
It is also of interest to analyze the plasticity of the BCL6+pSTAT6-, BCL6-pSTAT6+ and BCL6-pSTAT6- subpopulations. In our preliminary experiments we observed that single clonogenic cells are able to give rise to all types of subpopulations (unpublished data).
In sum, we draw attention to the coexistence of cell subpopulations driven by alternative oncogenic mechanisms within a tumor. In proof-of-principle experiments we have shown a rationale for combining inhibitors targeting these pathways with current immunochemotherapy. In perspective, the targeted pre-treatment may provide a new therapeutic option: (1) to diminish the R-CHOP dose escalation in mostly young PMBL patients; and (2) to sensitize relapsed tumors to second-line therapy. (Fig. 1)
Figure 1. Tumor cell sensitization may have a potential in the optimizing of immunochemotherapy in PMBL. Cell sensitization or pre-treatment (yellow triangle) of cell subpopulations driven by known alternative oncogenic programs (green, red and blue circles) using specific inhibitors (yellow lightning) followed by standard immunochemotherapy (gray triangle). The cell populations with unknown oncogenic pathways are represented as gray circles.
| 2018-04-03T02:13:49.651Z | 2014-07-15T00:00:00.000 | {
"year": 2014,
"sha1": "ce1ae89a148bfee14652866e7205ce5031c58c3f",
"oa_license": "CCBYNC",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.4161/cc.29828?needAccess=true",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "ce1ae89a148bfee14652866e7205ce5031c58c3f",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
32326471 | pes2o/s2orc | v3-fos-license | Trans-repressor BEF-1 phosphorylation. A potential control mechanism for human ApoE gene regulation.
Human apolipoprotein E is a plasma lipoprotein that appears to play an important protective role in the development of atherosclerosis. While little is known about the regulation of apoE, recent studies have shown that cytokines repress apoE synthesis both in vivo and in vitro. Furthermore, we have recently shown that the endogenous apoE gene is negatively regulated by the nuclear trans-repressor BEF-1 in the human HepG2 cell line. In this study we demonstrate that treatment of HepG2 cells with the cytokine interleukin-1 and interleukin-6 resulted in the induction of an isoform of BEF-1, designated B1. The induction of the B1 isoform could be blocked by the protein kinase inhibitor staurosporine, suggesting that B1 is a phosphorylated form of BEF-1. As further support, the B1 isoform could also be induced by phorbol ester, and subsequently inhibited by staurosporine, implicating a role for protein kinase C-mediated phosphorylation. Quantitation of the levels of the BEF-1 isoforms, and studies in the presence of cyclohexamide, provided evidence for the phosphorylation of an existing intracellular pool of BEF-1, with no change in the total intracellular level. Under conditions that generated increased levels of the B1 isoform, there was a concomitant and proportional decrease in the level of apoE mRNA. The effect did not appear to be the result of improved binding to the apoE regulatory region as the DNA binding affinity of B1 was identical to native BEF-1. Our data suggest that the regulation of apoE by BEF-1 is modulated by differential phosphorylation, possibly through the protein kinase C pathway.
Apolipoprotein E (apoE), 1 a primary constituent of several classes of mammalian lipoproteins, functions in transport and redistribution of cholesterol and other lipids among various cells in the body (1)(2)(3). There is mounting evidence that apoE plays a major protective role in the development of arteriosclerosis (4 -9). In addition, apoE has proposed roles in immunoregulation and in the genesis of Alzheimer's disease (reviewed in Refs. 10 and 11).
The factors that regulate apoE expression from hepatic tissue, the primary source of this circulating lipoprotein, are not well understood. Using the human hepatoma cell line HepG2 as a model system, several groups have identified regulatory elements required for efficient expression of apoE (12,13), and we previously determined that the nuclear repressor factor BEF-1 negatively regulated apoE gene expression in these cells (14). BEF-1 is a member of the NF-1 family of nuclear factors (15,16), and studies have demonstrated that both its nuclear level and DNA binding activity are regulated via intracellular signaling, as demonstrated by effects mediated through the viral oncogene E1a and through a tyrosine phosphorylation that is required for its DNA binding activity (15,17). In different cells, BEF-1 exists in two isoforms designated B1 and B2. While the functional role of each of these isoforms is unknown, phosphatase studies have provided evidence suggesting that they represent a difference in serine/threonine phosphorylation (17). In confluent cultures of HepG2 cells, where apoE is under BEF-1-mediated repression (14), only the B2 isoform is expressed.
In a recent study, hamster hepatic apoE mRNA expression has been demonstrated to be repressed by the cytokines IL-1 and tumor necrosis factor (18). This result is consistent with previous studies demonstrating a similar repressive effect of IL-1 on macrophage apoE mRNA synthesis in culture (19). Because cytokines, including IL-1, are known to induce phosphorylation of many proteins (including transcription factors) (reviewed in Refs. 20 and 21), we have sought to investigate phosphorylation of BEF-1 as a potential mechanism for cytokine action in the repression of the apoE gene in HepG2 cells.
In this paper, we show that treatment of HepG2 cells with IL-1, as well as IL-6, induces an isoform of BEF-1 (B1) that binds to the apoE regulatory region with equal affinity of the uninduced B2 isoform. This induction appeared to be due to increased phosphorylation, as B1 could also be induced by phorbol ester, and the induction could be blocked by the protein kinase inhibitor staurosporine. Furthermore we show that both IL-1 and IL-6 suppress the synthesis of apoE, and with increasing phosphorylation of BEF-1 we observed a proportional repression of apoE mRNA. Our data suggest that differential phosphorylation of trans-repressor BEF-1, possibly through the protein kinase C (PKC) pathway, plays a role in cytokine induced apoE gene repression.
EXPERIMENTAL PROCEDURES
Materials-Dulbecco's modified Eagle's medium/F-12 (3:1) and fetal bovine serum were purchased from Life Technologies, Inc. IL-1 and IL-6 were from R & D Systems. [α-32P]CTP and [γ-32P]ATP were from DuPont NEN. Staurosporine was purchased from LC Laboratories. The MiniPlus SepraGels were from Integrated Separation Systems. The Biotecx Ultraspec RNA isolation system was used for RNA isolation. The Ambion RPA II kit was used for RNA analysis. The Bio-Rad Prep-a-gene DNA purification kit was used for DNA isolations. The 3′ antisense apoE riboprobe was transcribed with SP6 RNA polymerase and [α-32P]CTP using the Promega Riboprobe Gemini II kit. All other reagents and chemicals were of the highest quality available.
Analysis of Total RNA-HepG2 cells were maintained as described previously (14), and total RNA was isolated as described by Chomczynski and Sacchi (22) using the Biotecx Ultraspec RNA isolation system.
Induction of B1 Isoform of BEF-1-Initial experiments were
performed to determine if the cytokine IL-1 had an effect on BEF-1. Nuclear extracts were prepared from confluent cultures of HepG2 cells treated with 10 ng/ml IL-1, and EMSAs were performed using the 5Ј end-labeled ABEF-1 probe (14). As shown in the example in Fig. 1A, EMSAs performed using nuclear extracts from untreated HepG2 cells (lane 1) resulted in a single major DNA/protein complex (B2). In contrast, EM-SAs performed using nuclear extracts prepared from IL-1treated HepG2 cells gave two major DNA/protein complexes, B1 and B2. The minor complex migrating below B2 in each lane was not identified, but previous studies have demonstrated that it represents minor interaction of the BEF-1 site with NF-1 (15). Previously we have shown that HepG2 cell BEF-1 (forming predominately B2 EMSA complex and thus designated the B2 isoform of BEF-1, 17) and HeLa Cell BEF-1 (forming predominately B1 complex and designated the B1 isoform) are the same nuclear binding factor (14). We confirmed by competition studies that the IL-1-induced B1 displayed the same binding properties as the B2 isoform. As shown in Fig. 1B, cold ABEF-1 binding site competed both the B1 and B2 protein-DNA complexes with equivalent activity. Furthermore, similar competition experiments were repeated with identical results using cold prototype BEF-1 site as a competitor (14) and with labeled prototype binding site BEF-1 and cold ABEF-1 as a competitor (data not shown). These data indicate that B2 and the IL-1-induced B1 nuclear proteins are both BEF-1.
IL-1 Induction of B1 Appears to Be Due to Phosphorylation of B2-While previous studies with phosphatase-treated HeLa cell BEF-1 have suggested that complexes migrating as the B1 and B2 isoforms are due to differences in Ser/Thr phosphorylation (17), we performed experiments to determine if in fact the IL-1 induction of B1 in HepG2 cells was due to phosphorylation. An EMSA was performed using nuclear extracts derived from HepG2 cells treated either with IL-1 or with IL-1 plus the protein kinase inhibitor staurosporine (STS) ( Fig. 2A). Compared with the untreated controls (lanes 2 and 3), IL-1 induced a significant level of B1. The IL-1-induced HepG2 B1 isoform comigrated with the HeLa cell B1 shown in lane 1. However, as shown in lanes 6 and 7, STS completely blocked the induction of B1 by IL-1. We saw no significant change in BEF-1 migration or level when cells were treated with STS alone.
In additional experiments, we quantitated the induction of B1 by IL-1 as well as by the PKC activator, phorbol 12-myristate 13-acetate (PMA). As shown in Fig. 2B, we consistently observed an induction of the B1 isoform by IL-1 that could be completely inhibited by STS. The treatment of HepG2 cells with PMA also resulted in an induction of the B1 isoform, although to a lesser extent than with IL-1 (Fig. 2B). Furthermore, the PMA-induced phosphorylation of BEF-1 was also
FIG. 1. BEF-1 B1 induction in IL-1-treated HepG2 cells. A, HepG2 cells were plated the day before treatment at a density of 10^6 cells/100-mm culture dish. Cells were then treated with 10 ng/ml IL-1, incubated for 16-20 h, and nuclear extracts were prepared. An EMSA was performed using 1.0 μg of the untreated or IL-1-treated extracts and 0.32 pmol of the 5′ end-labeled ABEF-1 oligonucleotide, which contains the human apoE regulatory sequences from −104 to −74. The BEF-1 isoforms leading to the two distinct protein-DNA complexes in EMSAs were labeled as B1 and B2. B, a gel mobility retardation assay was performed with 3 μg of a cytokine-treated HepG2 cell nuclear extract and 0.32 pmol of a 5′ end-labeled ABEF-1 oligonucleotide as the probe in a 10-μl binding reaction. The amount of B1 and B2 factor bound by the 32P-labeled ABEF-1 probe was determined in the presence of increasing concentrations of unlabeled ABEF-1. For the competition, 5×, 10×, 50×, and 100× molar excesses of unlabeled ABEF-1 oligonucleotide were added to the binding reactions. The counts per minute (cpm) bound to the 98-kDa protein was determined using a Betascope 603 blot analyzer and plotted as cpm bound in B1 and B2 versus molar excess of competitor oligomer.
FIG. 2. IL-1 induction of BEF-1 B1 is due to phosphorylation of BEF-1 B2.
A, HepG2 cells were plated the day before treatment, treated with 10 ng/ml IL-1 or 10 ng/ml IL-1 plus 100 nM STS for 16 -20 h, and nuclear extracts were prepared. An EMSA was performed using 0.5 and 1.0 g of the untreated or IL-1-treated extracts and 0.32 pmol (approximately 10 4 counts) of the 5Ј end-labeled ABEF-1 oligonucleotide. Lane 1, HeLa nuclear control extract; lanes 2-3, untreated; lanes 4 -5, IL-1 treated; lanes 6 and 7, IL-1 plus staurosporine-treated. B, comparison of the effect of IL-1 and PMA on B1 induction. Cells were treated with either IL-1 alone or IL-1 plus STS as above and by PMA alone or PMA plus STS (100 nM). The amount of B1 present in nuclear extracts 16 h later was determined as above.
inhibited by STS. While we cannot rule out the possibility of some other post-translational modification leading to induction of B1, the above data suggest that the cytokine induction of B1 in HepG2 cells is due to increased phosphorylation of B2, possibly acting through the PKC pathway. Furthermore, the relative lack of effect of STS on the low basal level of B1, compared with its block of induced phosphorylation, suggests that prephosphorylated BEF-1 is relatively long-lived.
IL-1 Induction Does Not Increase the Total Amount of BEF-1-In repeated experiments, the degree of B1 induction by IL-1 was determined. As shown in Fig. 3A, IL-1 treatment of HepG2 cells did not increase the total amount of BEF-1. While approximately 15% of the total BEF-1 in resting HepG2 cells was in the B1 isoform, IL-1 consistently resulted in an induction of the B1 isoform to approximately 45-50% of the total (Fig. 3B). We examined the amount of B1 induction in the presence and absence of CHX and, as shown in Fig. 3C, we observed no difference in the degree of B1 induction following protein synthesis inhibition. These data together suggest that IL-1 treatment results in the phosphorylation of an existing intracellular pool of BEF-1. Furthermore, in experiments where cells were treated with and without CHX for up to 16 h we observed no significant decrease in the level of BEF-1, suggesting that this transcription factor is relatively long lived.
Phosphorylation of BEF-1 Creates a More Potent Repressor-Having demonstrated that the BEF-1 nuclear protein becomes phosphorylated in response to IL-1, we were interested in determining if this change affected its apoE repressor activity. In initial experiments, HepG2 cells were treated with IL-1, and the level of apoE secreted was examined by immunoprecipitation analysis following procedures described in Ref. 19. In this study, we also analyzed the effect of another cytokine, IL-6, which preliminary studies had shown also increased phosphorylation of BEF-1. Both IL-1 and IL-6 treatments reduced secreted HepG2 cell apoE, although IL-6 reduced apoE expression to a greater degree (60% with IL-6 versus 37% with IL-1).
Further analysis was performed to compare the degree of BEF-1 phosphorylation and mRNA levels in cells treated with or without IL-1 or IL-6. As shown in Fig. 4A, we observed the expected increase in IL-1-induced phosphorylation; however, we observed an even greater degree of BEF-1 phosphorylation with IL-6 in this and repeated experiments. As indicated for IL-1 above, there was no change in the total level of BEF-1 following treatment of the cells with IL-6; only a shift in the degree of phosphorylation. As shown in Fig. 4B, we observed approximately a 50% decrease in apoE mRNA following IL-1 treatment but an 81% decrease following IL-6 treatment. The proportionally greater decrease in mRNA levels with increasing degree of phosphorylation, and no change in total BEF-1 binding activity, suggests that the B1 isoform is a more potent repressor than B2 at the BEF-1 repressor binding site in the apoE upstream regulatory region.
BEF-1 Phosphorylation Does Not Affect Its DNA Binding Affinity-We speculated that the apparent increase in BEF-1 repressor activity following phosphorylation might result from an increase in DNA binding affinity. Therefore, we determined the relative affinities of the B1 and B2 isoforms for the apoE DNA binding site. As shown in Fig. 5, both the B1 and B2 isoforms displayed saturable binding to the apoE BEF-1 binding site. Using nonlinear regression analysis, the binding affinities for the two isoforms were determined to be identical, with an apparent K d of 17 nM. Thus, the modulation in repressive activity following phosphorylation of BEF-1 does not appear to result from an increase in affinity for the DNA binding site, suggesting that the mechanism may instead be the result of more efficient protein-protein interactions that result in inhibition of the transcriptional complex. DISCUSSION Changes in cellular gene transcription patterns induced by extracellular signals are an important part of many biological processes (reviewed in Ref. 27), and protein phosphorylation clearly has evolved as the most versatile post-translational modification for situations where rapid modulation of transcription factor activity is required in response to signals from receptors on the cell surface (reviewed in Refs. 28 and 29). Extracellular signaling molecules such as cytokines have been well described as affecting gene expression by modulating the phosphorylation of proteins directly involved in transcriptional control (reviewed in Ref. 20). These effects have been well described for the induction of expression mediated by NF B, c-Jun, c-Fos, and NF-IL-6 (29,30). In this study, we have provided evidence that the nuclear repressor factor BEF-1 becomes phosphorylated in response to cytokine activation of HepG2 cells, with a resulting increase in its functional repressor activity.
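As a worked illustration of the nonlinear regression used above to estimate the apparent K d, the sketch below fits a one-site saturation binding model, bound = Bmax * [probe] / (K d + [probe]), with scipy. The probe concentrations and bound counts are invented for illustration and are not the study's raw data.

import numpy as np
from scipy.optimize import curve_fit

def saturation_binding(probe_nM, bmax, kd):
    # One-site saturable binding: bound = Bmax * [probe] / (Kd + [probe]).
    return bmax * probe_nM / (kd + probe_nM)

# Illustrative probe concentrations (nM) and cpm bound; not the paper's values.
probe = np.array([2.0, 5.0, 10.0, 20.0, 40.0, 80.0, 160.0])
bound = np.array([1100.0, 2300.0, 3700.0, 5400.0, 7000.0, 8200.0, 9000.0])

(bmax, kd), _ = curve_fit(saturation_binding, probe, bound, p0=(bound.max(), 20.0))
print(f"Bmax = {bmax:.0f} cpm, apparent Kd = {kd:.1f} nM")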
We were able to induce the phosphorylation of BEF-1 with IL-1, IL-6, and PMA. In preliminary studies, we also have HepG2 cells were treated with or without IL-1 or IL-6, incubated for 16 -20 h, and the level of nuclear BEF-1 B1 and B2 isoforms (A) was determined as above, and apoE mRNA levels (B) were quantitated with a RPA using an antisense probe synthesized to the 3Ј end of apoE as described under "Experimental Procedures." ApoE mRNA levels were normalized to human actin or human glyceraldehyde phosphate dehydrogenase mRNA levels. The data are presented as a percent of the control and are the result of the average Ϯ the S.E. of three independent experiments. For each experiment, RPA analysis was performed in triplicate.
shown that transforming growth factor- similarly induced the phosphorylation of BEF-1 generating the B1 isoform. Thus, extracellular signals from several agents appear to generate converging intracellular pathways that ultimately result in phosphorylation of nuclear repressor BEF-1. The PMA induction, and staurosporine inhibition of phosphorylation of B1, suggests PKC involvement in the differential phosphorylation of BEF-1. Researchers initially had attempted to implicate PKC in IL-1 signaling, since PMA, which directly activates PKC, mimics many of the actions of IL-1 (21). However, evidence has been presented questioning the importance of PKC in IL-1 action. Many studies have shown that inhibitors of protein kinase C such as staurosporine fail to block IL-1 responses in different cell types; and it has been suggested that staurosporine may even up-regulate IL-1 receptors and thereby increase the effect of IL-1 on some cells (reviewed in Ref. 21). In the case of BEF-1, however, we clearly show that STS blocked the induction by both cytokines and PMA. While our data would suggest that the mechanism for phosphorylation of B1 by these different agents is through a common pathway, it is possible that the induced phosphorylation by each agent is mediated through different kinases each inhibited by STS. It is not clear how increased phosphorylation might facilitate the repressive activity of BEF-1 against apoE. Until we understand what factors bind to previously mapped elements in the apoE upstream regulatory region (12,13,31), and how they interrelate with one another and BEF-1, we can only speculate on the molecular mechanism of apoE repression. As reviewed recently by Hill and Treisman (32), a number of transcription factors contain signal-regulated transcription activation domains, and it is presumed that regulated phosphorylation facilitates their interaction with the basal transcriptional machinery or co-activator proteins. We have shown that the phosphorylation of BEF-1 does not alter its binding affinity for the DNA, suggesting that the effect on activity is likely a post-DNA binding mechanism. The increased repressor activity of phosphorylated BEF-1 on apoE gene expression possibly is the result of enhanced interactions with auxiliary transcription factors at the apoE promoter region. Overall, the balance between and interaction of various positively and negatively acting factors plays a critical role in controlling the expression of genes, and with regard to apoE, the differential phosphorylation of the nuclear repressor BEF-1 appears at least in part to play a role in determining its expression and response to cell signals. Understanding how factors such as BEF-1 control apoE gene regulation may aid in the development of agents to modulate its expression and ultimately in controlling implicated disease processes. | 2018-04-03T02:10:33.659Z | 1996-03-01T00:00:00.000 | {
"year": 1996,
"sha1": "9280dbaf6e02db0b8aa847939bd37ded061cabd6",
"oa_license": "CCBY",
"oa_url": "http://www.jbc.org/content/271/9/4589.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "820204a1329569969f642aeb1314a79dfa94a106",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
268148752 | pes2o/s2orc | v3-fos-license | The TESS SPOC FFI Target Sample Explored with Gaia
The TESS mission has provided the community with high-precision times series photometry for $\sim$2.8 million stars across the entire sky via the Full Frame Image (FFI) light curves produced by the TESS Science Processing Operations Centre (SPOC). This set of light curves is an extremely valuable resource for the discovery of transiting exoplanets and other stellar science. However, due to the sample selection, this set of light curves does not constitute a magnitude limited sample. In order to understand the effects of this sample selection, we use Gaia DR2 and DR3 to study the properties of the stars in the TESS-SPOC FFI light curve set, with the aim of providing vital context for further research using the sample. We report on the properties of the TESS-SPOC FFI Targets in Sectors 1 - 55 (covering Cycles 1 - 4). We cross-match the TESS-SPOC FFI Targets with the Gaia DR2 and DR3 catalogues of all targets brighter than Gaia magnitude 14 to understand the effects of sample selection on the overall stellar properties. This includes Gaia magnitude, parallax, radius, temperature, non-single star flags, luminosity, radial velocity and stellar surface gravity. In total, there are $\sim$16.7 million Gaia targets brighter than G=14, which when cross-matched with the TESS-SPOC FFI Targets leaves $\sim$2.75 million. We investigate the binarity of each TESS-SPOC FFI Target and calculate the radius detection limit from two detected TESS transits which could be detected around each target. Finally, we create a comprehensive main sequence TESS-SPOC FFI Target sample which can be utilised in future studies.
INTRODUCTION
Since its launch in April of 2018, the Transiting Exoplanet Survey Satellite (TESS: Ricker et al. 2015) has transformed the field of stellar and exoplanetary physics.TESS has successfully discovered scores of transiting exoplanets, including planets transiting very bright host stars (e.g.Gandolfi et al. 2018;Huang et al. 2018), small radius planets (e.g.Gilbert et al. 2020;Oddo et al. 2023), multi-planet systems (e.g.Leleu et al. 2021) and planets around young stars (e.g.Newton et al. 2019;Battley et al. 2020;Mann et al. 2022).At the time of writing, TESS is currently in its second extended mission, observing Sectors 70 -83.This marks the sixth year of TESS observations observing some of the ecliptic and the Northern hemisphere for a third time 1 .The extended TESS mission allows for the detection of longerperiod transiting planets (e.g.Gill et al. 2020;Lendl et al. 2020) and also opens the door into the search for stellar activity cycles (e.g.Davenport et al. 2020;Doyle et al. 2022).
TESS is composed of four cameras, each of which maps on to an array of four CCDs. The combined field-of-view (FOV) is 24° × 94°, which makes up one Sector, where each is observed for ∼27 days. For a full description of the TESS mission, see Ricker et al. (2015). There are various data collection modes for TESS.
★ A copy of the full main sequence TESS-SPOC FFI target sample (see Table 2) is available at CDS.
For Sectors 1-26 the FFIs were acquired every 30 min, whereas for Sectors 27-56 the cadence of the FFIs was improved to every 10 min. From Sector 57 onwards, the cadence was improved even further to 200 s, making this mode comparable to the 2-minute postage stamp targeted sample. The TESS Science Processing Operations Centre (SPOC: Jenkins et al. 2016) is responsible for the TESS science pipeline, which is based on the very successful Kepler pipeline. From the second year of the TESS mission, the SPOC began processing FFI light curves for up to 160,000 targets per Sector (Caldwell et al. 2020). All pixel and light curve data for the TESS-SPOC FFI Target sample are contributed as High-Level Science Products (HLSP) on the Mikulski Archive for Space Telescopes (MAST 2 ), where they are publicly available. Because the TESS-SPOC FFI light curves constitute a large and homogeneously processed set, they have been an extremely valuable resource for the community, used for a wide range of studies including planet discoveries (e.g. Gilbert et al. 2020; Eisner et al. 2021; Yee et al. 2022), statistical studies (e.g. Bryant et al. 2023), and binary star studies (e.g. IJspeert et al. 2021; Prša et al. 2022).
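As a practical aside, these HLSP light curves can typically be retrieved programmatically from MAST with the lightkurve package, as sketched below; the TIC number is illustrative, and the "TESS-SPOC" author keyword assumes the current naming of this HLSP in the MAST/lightkurve search interface.

import lightkurve as lk

# Search MAST for TESS-SPOC FFI light curves of an illustrative target.
search = lk.search_lightcurve("TIC 261136679", author="TESS-SPOC")
print(search)  # lists the sectors with available TESS-SPOC products

lc = search[0].download()  # download the first available sector
lc.plot()                  # quick look at the light curve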
In this paper, we explore the physical parameters of the stars in the TESS-SPOC FFI Target sample using observations from the Gaia mission (Prusti et al. 2016), both from Gaia DR2 (Brown et al. 2018) and DR3 (Vallenari et al. 2023).In §2 we detail the full Gaia and SPOC samples and discuss how we cross-match them to achieve our final TESS-SPOC FFI Target sample.Here, we also look at the spread of observed TESS Sectors for each target.In §3 we determine the radius detection limit from two detected TESS transits which could be detected around each TESS-SPOC FFI target and also look at the distribution of TESS Objects of Interest (TOIs) in the colourmagnitude diagram of the TESS-SPOC FFI Target sample.For §4 we investigate the binarity of the sample using the non-single star and RUWE flags in the Gaia data.Finally in §5 we isolate main sequence stars in the TESS-SPOC FFI Target sample as a prime sample for further future statistical studies that wish to focus on dwarf stars.
TARGET SAMPLES
The precise method of sample selection for stars to be included in the TESS-SPOC FFI light curves results in a set of targets that is close to, but not quite, a magnitude limited sample.In this Section, we detail both the TESS-SPOC FFI Target sample, a Gaia magnitude limited sample, and investigate the difference between the two.
TESS-SPOC FFI Sample
The TESS-SPOC FFI targets are selected on a set of criteria designed to maximise scientific goals and minimise the impact on processing times in the SPOC context (Caldwell et al. 2020). The target selection is made in the following way: (i) All two-minute targets are selected (∼20,000 per Sector). (ii) Targets are selected with an H magnitude ≤10 or with a distance ≤100 pc, provided the crowding metric ≥0.5 and the TESS magnitude ≤16. (iii) Targets are selected with a TESS magnitude ≤13.5 and a log surface gravity ≤3.5, provided the crowding metric ≥0.8.
The crowding metric, as defined in the SPOC and Kepler pipelines (Smith et al. 2016), is the fraction of flux in the optimal aperture that is due to the target star. For example, a crowding metric of 0.8 means that 80% of the flux within the aperture is from the target star; therefore, in (ii) and (iii) above the crowding metric must be greater than 0.5 and 0.8 respectively for the target to be selected.
Targets are selected on a per CCD basis, following the priority order set out above. A limit of 10,000 targets per CCD is placed on the allocation to ensure that the processing time does not impact the operations of the SPOC. Given TESS contains a total of 16 CCDs, this results in a limit of 160,000 stars for the TESS-SPOC FFI light curves for a given sector. This limit is often reached, which in practice means that there will be stars in the third selection category above which are not included. In such cases, the targets are selected by apparent magnitude, with the brightest stars being prioritised. The split of TESS-SPOC FFI targets for each sector in Cycle 1 is shown in Figure 1. From this it is clear that the 2-min targets account for 10-15%, the M dwarf sample with d ≤100 pc accounts for 3-4%, and the vast majority are targets with TESS magnitude ≤13.5, log surface gravity ≤3.5, and crowding metric ≥0.8. It is also worth noting that there is a dip in the overall number of TESS-SPOC FFI targets between Sectors 8 and 12, which is a result of the galactic plane being within the TESS field of view, resulting in fewer targets meeting the crowding metric cut of 0.8.
In this study, we select all TESS-SPOC FFI targets which were observed in Sectors 1-55, covering Cycles 1-4. This is because TESS is currently observing in Cycles 5/6, for which not all data are yet available at the time of writing. Furthermore, from Cycle 5 onwards TESS will observe FFIs at 200-second cadence. By using observations from Cycles 1-4, we have observations covering both the northern and southern hemispheres twice, creating a complete sample which will allow for repeatability of findings between sectors and cycles.
For all of the TESS-SPOC FFI targets in Sectors 1-55, we gather their TIC ID, right ascension, declination and their Gaia DR2 identifier, which comprises a total of 2,891,782 individual targets. We also determine the number of TESS Sectors each individual target has been observed for and show the results in Figure 2. It can be seen that the majority of targets are observed for 2 to 3 Sectors, where there is a peak in the histogram. There is a further peak at around 23 Sectors, which represents targets near the ecliptic poles that lie within the TESS continuous viewing zone. Lastly, we also extract the photometric precision of each target from the header of the SPOC light curve and use this information in §3.
Gaia Sample
The Gaia mission (Prusti et al. 2016) is part of the science programme of the European Space Agency (ESA) and was launched in December 2013. The scientific goals of Gaia are extensive; the key aim is to measure the distances, positions, space motions and physical properties of one billion stars in our galaxy. Gaia provides a map of our galaxy that is, for the first time, complete for all stars brighter than G = 20. This provides detailed information on all stars in the TESS-SPOC FFI Target sample set out in Section 2.1. For this study, we use data released by Gaia in DR2 (Brown et al. 2018) and DR3 (Vallenari et al. 2023), accessing the data through a Table Access Protocol (TAP) query using ADQL in TOPCAT (Taylor 2005).
Gaia DR2 is based on data collected during the first 22 months of the Gaia mission. This release made the leap to high-precision parallax and proper motion measurements for over one billion stars, and is a major advancement on DR1 with regards to completeness and accuracy. Homogeneous multi-band photometry and large-scale radial velocity observations at the bright (G ≤ 13) end were also made available in the DR2 release. Gaia DR3 is based on EDR3 (an earlier release in 2021: Brown et al. 2021), and its astrometry and broad-band photometry were not updated from EDR3. The full Gaia DR3 was released in 2022 with updated radial velocities, spectra from the radial velocity spectrograph, astrophysical parameters for sources based on the blue and red prism photometer (BP and RP) spectra, amongst much more. This release was based on data gathered over the first 34 months of the mission and provides a significant improvement in both the precision and accuracy of measurements. At the time of writing Gaia DR3 is the most recent Gaia data release, and the next Gaia data release (DR4) will not be released before the end of 2025.
We use the Gaia DR3 parallaxes to estimate the absolute Gaia magnitude in the G band for individual stars, following the methods of Babusiaux et al. (2018). In addition to retrieving the DR3 sample, we also fetched the DR2 identifiers (source_id), which are very useful in cross-matching the TESS-SPOC FFI sample. This is because the TESS Input Catalogue (TIC v8.2: Stassun et al. 2019) IDs are associated with the Gaia DR2 identifiers. Between Gaia DR2 and DR3 there are changes to the identifiers, just as there are between Gaia DR1 and DR2. This is a result of the source lists becoming progressively more stable and of higher spatial resolution over the course of the Gaia mission.
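The absolute Gaia magnitude used throughout this work follows from the apparent G magnitude and the DR3 parallax via the distance modulus. The short Python sketch below illustrates that relation only; it is a minimal illustration with no extinction correction and hypothetical input values, not the exact implementation of Babusiaux et al. (2018).

import numpy as np

def absolute_g_magnitude(g_mag, parallax_mas):
    """Absolute Gaia G magnitude from apparent G and parallax in milliarcseconds.

    Uses d [pc] = 1000 / parallax [mas], so M_G = G + 5*log10(parallax) - 10.
    No extinction correction is applied in this sketch.
    """
    g_mag = np.asarray(g_mag, dtype=float)
    parallax_mas = np.asarray(parallax_mas, dtype=float)
    return g_mag + 5.0 * np.log10(parallax_mas) - 10.0

# Example: a star with G = 9.7 and a 10 mas parallax (100 pc) has M_G ~ 4.7
print(absolute_g_magnitude(9.7, 10.0))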
The stellar astrophysical parameters contained within the Gaia DR3 release comprise atmospheric properties, evolutionary parameters, metallicity, individual chemical element abundances, and extinction parameters, along with other characterisation such as equivalent widths of the Hα line and an activity index for cool active stars (see Vallenari et al. 2023, for full details). In Gaia DR2, median radial velocities for ∼7 million sources were presented, along with estimates of the stellar effective temperature, extinction, reddening, radius and luminosity for between 77 and 161 million sources (see Brown et al. 2018, for full details). Gaia DR3 contained newly determined radial velocities for about 33.8 million stars with G_RVS (the median of the single-transit radial velocity spectrometer measurements) ≤ 14 and with 3100 K ≤ T_eff ≤ 14,500 K. Therefore, we use the radial velocity, parallax and log g measurements from DR3 and the effective temperature, stellar radius and luminosity from DR2.
The Gaia DR2 effective temperatures are estimated from the two distance-independent colours G_BP − G and G − G_RP. Machine learning is used to construct a non-parametric model for the colour-temperature relation, where the training sample contains targets with T_eff = 3,000-10,000 K. Therefore, there is no Gaia target with a temperature listed outside of this range. The luminosities from Gaia DR2 are estimated using the FLAME software package (part of the Apsis data processing system, see Bailer-Jones et al. 2013, for further details), utilising relations between the effective temperature and Gaia absolute magnitude. Finally, the stellar radius is then estimated using the effective temperature and luminosity.
TESS-SPOC FFI and Gaia Cross-Matched Sample
In order to cross-match the TESS-SPOC FFI Target sample with Gaia, we first create a magnitude limited sample of Gaia stars in DR3 where the relative precision on the parallax is better than 20% (following Babusiaux et al. 2018) and G < 14. This brightness cut is used because the TESS-SPOC FFI Target sample predominantly comprises stars brighter than T ≈ 13.5 (see §2.1 for further details). Furthermore, the number of Gaia targets fainter than G = 14 increases significantly, making it more difficult and time consuming to access and download the Gaia sample. We cross-matched the TESS-SPOC FFI targets to the Gaia magnitude limited sample using the Gaia DR2 source identifiers. This produced a total of 2,744,013 cross-matches, leaving 147,769 TESS-SPOC FFI targets without a match. This deficit results from the two cuts on magnitude (94.7%) and parallax precision (5.3%) made when creating the Gaia magnitude limited sample. In order to form a more complete sample of the TESS-SPOC FFI targets, we cross-matched the remaining 147,769 with the Gaia DR3 catalogue directly, using their right ascension and declination from the TIC and a search radius of 8 arcsec. This produced a further 146,604 TESS-SPOC FFI targets with Gaia DR3 parameters, where the 1,165 missing targets were a result of high proper motion on the order of hundreds of milliarcseconds per year. There were a number of targets which were flagged as duplicates by TESS, and these have been removed. The original cross-match and the additional targets from the radius search make up the full final TESS-SPOC FFI Target sample, with a total count of 2,890,583 individual targets.
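A sketch of this two-stage matching logic is given below in Python with pandas. The file names and column names (gaia_dr2_id, dr2_source_id, phot_g_mean_mag, parallax_over_error) are placeholders rather than the exact tables used here, and the positional fall-back step is only indicated in a comment.

import pandas as pd

# Hypothetical local extracts; Gaia column names follow the archive's gaia_source schema.
gaia = pd.read_csv("gaia_dr3_extract.csv")        # dr2_source_id, phot_g_mean_mag, parallax_over_error, ...
spoc = pd.read_csv("tess_spoc_ffi_targets.csv")   # tic_id, ra, dec, gaia_dr2_id

# Magnitude-limited Gaia sample: G < 14 with parallax measured to better than 20 %
# (i.e. parallax_over_error > 5), as described in the text.
gaia_lim = gaia[(gaia["phot_g_mean_mag"] < 14.0) & (gaia["parallax_over_error"] > 5.0)]

# Stage 1: match on the Gaia DR2 identifier carried by the TIC.
matched = spoc.merge(gaia_lim, left_on="gaia_dr2_id", right_on="dr2_source_id", how="inner")

# Stage 2 (not shown): remaining targets are matched positionally against the full DR3
# catalogue with an 8 arcsec cone search, e.g. via astropy SkyCoord.match_to_catalog_sky.
unmatched = spoc[~spoc["gaia_dr2_id"].isin(matched["gaia_dr2_id"])]
print(f"{len(matched):,} matched by DR2 ID, {len(unmatched):,} left for the positional match")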
Figure 3 shows the spread of absolute and mean Gaia magnitudes in the TESS-SPOC FFI Target sample compared with the underlying Gaia sample for G < 14. All stars which fall into the TESS field of view and are brighter than T = 6 (but not so bright as to cause CCD saturation) are selected for 2-min cadence light curves (Stassun et al. 2018). Therefore, since all of these 2-min cadence targets are also chosen as TESS-SPOC FFI targets (see the selection criteria set out in Section 2.1), the TESS-SPOC FFI Target sample is complete for almost all stars with G < 6.
For stars fainter than G = 6, the TESS-SPOC FFI Target sample is incomplete, and the degree of incompleteness is shown in Figure 4. In this plot, the black line represents the percentage completeness of the TESS-SPOC FFI Target sample compared to the Gaia magnitude limited sample. The red line then shows the completeness of the TESS-SPOC FFI main sequence sample (defined in §5) compared to the Gaia magnitude limited main sequence sample, which is created in the same way as the SPOC sample with a cut on surface gravity of log g > 3.5. From this, it is clear that the full TESS-SPOC FFI Target sample is almost complete for stars with G < 6, beyond which it drops off suddenly. There is also a peak in completeness around G = 9, which could correspond to large TESS Guest Investigator (GI) programmes, as these targets are observed at 2-min cadence and so are automatically included in the TESS-SPOC FFI Target sample. For the main sequence sample there is a much higher completeness across the whole magnitude range, which drops off towards the end at G = 13.
Using the Gaia DR3 magnitudes, parallaxes and colours, we are able to plot the full Hertzsprung-Russell Diagram (HRD) for the TESS-SPOC FFI Target sample, which can be seen in Figure 5. This shows the diversity of stellar classes and types which are present within the TESS-SPOC FFI Target sample. The majority of the sample are on the main sequence. At 0.75 magnitudes above the main sequence, a subtle binary track can be seen, which is due to near-equal mass main sequence binary stars. Also evident on the HRD are the spread of sub-giants and giants, a clump of blue hot subdwarfs and the faint white dwarf sequence. All of these classes of stars are present in the TESS-SPOC FFI Target sample due to being selected as 2-minute cadence targets. This selection is due to the stars being very bright (T < 6) or through TESS GI programs that target specific classes of stars, such as white dwarfs in G04137 and G022028 by JJ Hermes, subgiants in G022099 by J. Tayar and hot subdwarfs in G022141 by B. Barlow.
Figure 6 shows the ratio of TESS-SPOC FFI targets to the magnitude limited Gaia sample in the format of an HRD with a grid resolution of 2,000 × 2,000. Yellow colouring indicates the TESS-SPOC FFI Target sample has the same number of targets as the underlying Gaia sample - i.e. the TESS-SPOC FFI sample is approximately complete. Dark blue colouring indicates there are very few TESS-SPOC FFI targets compared with the underlying Gaia sample - i.e. the TESS-SPOC FFI sample is very incomplete. It is evident that the TESS-SPOC FFI Target sample is most complete for fainter low-mass main sequence stars and also across the main sequence in general. The TESS-SPOC FFI Target sample is most incomplete for evolved/giant stars. This is a result of the TESS 2-minute targets being largely free of giant stars (see discussions in Stassun et al. 2018, on the assembly of the Candidate Target List). The reason for this is that TESS is an exoplanet transit survey mission, and transits around sub-giants or giants are much shallower compared to equivalent transits around main-sequence stars. Additionally, the criteria by which additional TESS-SPOC FFI targets are selected (see §2.1) include nearby stars (d < 100 pc) and exclude low surface gravity stars (by requiring log g > 3.5), both of which will naturally exclude horizontal and giant branch stars.
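The ratio map of Figure 6 can be reproduced conceptually by binning both samples on the same colour-magnitude grid and dividing the counts. The sketch below does this with NumPy for hypothetical input arrays, using the 2,000 × 2,000 grid mentioned above; it is an illustration of the bookkeeping, not the plotting code used for the paper.

import numpy as np

def completeness_map(colour_spoc, absmag_spoc, colour_gaia, absmag_gaia, bins=2000):
    """Ratio of TESS-SPOC FFI to Gaia counts on a shared colour-magnitude grid."""
    # Common bin edges spanning the underlying Gaia sample.
    c_edges = np.linspace(np.min(colour_gaia), np.max(colour_gaia), bins + 1)
    m_edges = np.linspace(np.min(absmag_gaia), np.max(absmag_gaia), bins + 1)

    n_gaia, _, _ = np.histogram2d(colour_gaia, absmag_gaia, bins=[c_edges, m_edges])
    n_spoc, _, _ = np.histogram2d(colour_spoc, absmag_spoc, bins=[c_edges, m_edges])

    # A ratio of 1 means the SPOC sample is complete in that cell (yellow in the figure);
    # cells with no Gaia stars are returned as NaN.
    safe_gaia = np.where(n_gaia > 0, n_gaia, 1)
    return np.where(n_gaia > 0, n_spoc / safe_gaia, np.nan)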
The distributions of Gaia stellar properties for the TESS-SPOC FFI Target sample are set out in Figure 7. The distribution of radial velocities is Gaussian with a peak at 0 km s−1, in line with the expectation for a large set of nearby stars selected from across the entire sky. The distribution of effective temperatures is sharply peaked at around 6,000 K, reflecting the fact that the sample is largely made up of solar-type main sequence stars. There is a small bump in the distribution at approximately 3,500 K, which comes about both from the selection of M dwarfs in the 2-min frames and the selection of additional M dwarfs in the TESS-SPOC FFI criteria for nearby stars (d < 100 pc). Similarly, the log g and stellar radius distributions largely reflect the fact that dwarf stars on the main sequence make up the bulk of the sample. Figure 8 shows the distribution of distances, which peaks at around 400 pc, with 90% of targets having a distance of 850 pc or less. Furthermore, we also plot the distribution of distances for ∼200,000 Kepler targets observed in Q1, where there is a peak at 1,000 pc. It is important to note here that TESS is observing many more stars at 1,000 pc compared to Kepler; however, from 1,500 pc onwards Kepler begins to take over, observing more stars at larger distances. A small peak can be seen at d < 100 pc, which is due to the selection of nearby stars in the 2-minute sample and in the TESS-SPOC FFI selection criteria.
EXOPLANET DETECTIONS WITH TESS DATA
The primary science goal of TESS is to discover planets smaller than Neptune which transit stars bright enough to enable follow-up spectroscopic observations. It is this combination of photometry and spectroscopy which can provide properties of planetary systems such as planet radii, planet masses and atmospheric compositions. Therefore, the SPOC data products were developed with exoplanet transit searches as a focus.
TESS Objects of Interest
Potential transiting planets are found by searching for periodic flux decreases, known as Threshold Crossing Events (TCEs), in both the SPOC 2-min light curves created from postage stamps and the 30-min cadence FFI light curves from the Quick Look Pipeline (QLP), which is another TESS data processing pipeline. These TCEs are then examined by the TESS Science Office (TSO) to identify planet candidates which would benefit from follow-up observations. Light curves are run through software to eliminate any obvious non-planetary signals, and the remaining candidates are manually vetted and listed as TOIs for follow-up observations. Any TCEs which fall under other categories, from eclipsing binaries to variable stars, are not included in the TOI Catalog (Guerrero et al. 2021), but are included in the comprehensive TCE Catalog available on MAST. At the time of writing there are currently 6,687 TOIs which have been followed up and confirmed, or are still awaiting further observations to confirm the planetary system.
In Guerrero et al. (2021, their figure 6), candidates from the TESS primary mission are over-plotted on a Gaia sample. In a similar manner, we took the list of all currently known TOIs and cross-matched it with the TESS-SPOC/Gaia FFI Target list. This produced a total of 3,950 known TOIs which have SPOC FFI data. For the remaining 2,737 targets, we cross-matched them directly with the Gaia DR3 catalogue to obtain the Gaia parameters for each. This was done using the right ascension and declination of each target and a cone search of 6 degrees. This then allowed us to plot the two TOI samples, those with SPOC FFI data (red) and those without (green), in Figure 9 on top of the SPOC HRD. While it is the 2-min TESS light curves that are used by the TSO to identify TCEs, each 2-min light curve has a corresponding TESS-SPOC FFI light curve (see §2.1). In Figure 9 it can be seen that the majority of TOIs lie within the main sequence, with outliers close to the hot subdwarfs and white dwarfs as well as a sample in the horizontal and giant branches. It also provides an indication, for TESS Cycles 1-4, of which TOIs have been identified using SPOC data. The majority of those with SPOC data lie on the main sequence, with the exception of a handful of outliers. This aligns with the strategy of selecting SPOC targets.
TESS-SPOC FFI sensitivity to Transit Detections
Since detecting transiting exoplanets is the main science goal of the TESS mission and each SPOC 2-min light curve is scanned for TCEs, it is worth exploring the sensitivity of the sample light curves to transit detections. Therefore, we explored the planet radius that could be detected from two TESS transits for each of the TESS-SPOC FFI targets. Hereafter, we refer to this as the two transit radius detection limit. Note that this is an estimate designed to study the sample, rather than a strict limit for individual targets. Furthermore, we do not consider the number of transits which might be observed in total by TESS; therefore, we provide only upper limits on the two transit detectable planet radius. In other words, smaller planets may be detected with many more transits.
To do this, we extracted the precision of each TESS-SPOC FFI target from the SPOC light curve header, where provided. The photometric precision metric within the SPOC light curves is based on the Kepler pipeline (Jenkins et al. 2010) and is called the combined differential photometric precision (CDPP). It is defined as the root mean square of the photometric noise on transit timescales and is used by the SPOC when searching for periodic transit signals. For our purposes, we use the CDPP of each target, the Multiple Event Statistic (MES) threshold of 7.1 from SPOC as a measure of the signal-to-noise used to claim a detection, and assume a transit duration of 2.7 hours, which has been taken as the median transit duration of all the known TOIs. While the MES is not identical to signal-to-noise, it is similar according to Jenkins (2002) and Twicken et al. (2018). We use this to be consistent with the SPOC pipeline; however, other studies such as Sullivan et al. (2015) and Kunimoto et al. (2022) have used a signal-to-noise threshold of 7.3. This is put together to estimate the two transit radius detection limit (R_p,min) from two TESS transits in Equation 1:

R_p,min = R_* × sqrt( SNR × σ_CDPP / sqrt(dur / 2) ),     (1)

where SNR is the signal-to-noise threshold of 7.1/√2 to account for a two transit detection, dur is the average TOI transit duration of 2.7 hrs (divided by 2 since we are using the 2-hour CDPP values), R_* is the stellar radius and σ_CDPP is the CDPP precision of the light curve.
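The following sketch implements this relation for a single light curve. The unit conventions (CDPP quoted in parts per million, stellar radius in solar radii, output in Earth radii) are assumptions for the illustration rather than a statement of how the SPOC headers store these values.

import numpy as np

R_SUN_IN_R_EARTH = 109.1     # approximate ratio of the solar to the Earth radius

def two_transit_radius_limit(cdpp_ppm, r_star_rsun, mes_threshold=7.1, duration_hr=2.7):
    """Minimum planet radius detectable from two transits, in Earth radii.

    cdpp_ppm    : 2-hour CDPP of the light curve, in parts per million (assumed unit)
    r_star_rsun : stellar radius in solar radii
    """
    snr = mes_threshold / np.sqrt(2.0)                 # 7.1 / sqrt(2) for two transits
    sigma = cdpp_ppm * 1e-6                            # fractional noise on the 2-hr timescale
    depth_min = snr * sigma / np.sqrt(duration_hr / 2.0)
    return r_star_rsun * R_SUN_IN_R_EARTH * np.sqrt(depth_min)

# Example: a Sun-like star with a 100 ppm 2-hr CDPP gives a limit of roughly 2 Earth radii.
print(two_transit_radius_limit(100.0, 1.0))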
In Figure 10 we plot individual HRDs colour-coded by the two transit radius detection limit to show the spread in the context of the stellar properties for each TESS-SPOC FFI target. Figure 11 then shows the same two transit radius detection limit data as a distribution, where the peak of the histogram lies at ∼6 R⊕. Both plots are split up according to planetary radius, corresponding to our solar system planets, as: Earths/Super-Earths in the range 0.5 R⊕ < R_p ≤ 2.0 R⊕, Mini-Neptunes in the range 2.0 R⊕ < R_p ≤ 4.0 R⊕, Gas Giants in the range 4.0 R⊕ < R_p ≤ 27.0 R⊕ and Non-Planetary Companions in the range 27.0 R⊕ < R_p ≤ 100.0 R⊕. As can be expected, the Earth/Super-Earth population is mostly found amongst the lower main sequence population, and Mini-Neptunes are found around stars across the whole of the main sequence. The larger gas giants tend to be spread across the main sequence but are also found around stars which have evolved onto the horizontal and giant branches. In §6, we compare our findings with those of other planet detection estimates, namely that of Kunimoto et al. (2022).
NON-SINGLE STARS / BINARIES
Binary stars can be observed from Earth as visual/resolved, spectroscopic, eclipsing and astrometric systems. These gravitationally bound star systems are considered to be important as they allow the masses of stars to be determined (see Söderhjelm 1999; Al-Wardat et al. 2021). It is expected that up to 50% of all stars are in binary systems, with some even in triple or higher-multiple systems. However, this fraction is higher for hotter stars, can decrease with higher metallicity, and is also related to age and distance (Tian et al. 2018). As a result of this, Gaia provides its own parameters to indicate the potential binary nature of a given star.
Firstly, to indicate if any of our stars are members of binary systems, we used the Renormalised Unit Weight Error (RUWE) from the Gaia DR3 catalogue (see Lindegren et al. 2018, for full details). The RUWE is a goodness-of-fit measurement of the single-star model to the target's astrometry, which is highly sensitive to the photocentre motion of binaries (see Belokurov et al. 2020). Overall, the RUWE is expected to be around 1.0 for all sources where the single-star model provides a good fit to the astrometric measurements, while a RUWE ≥ 1.4 could indicate that the star is non-single. In total, there are 540,038 TESS-SPOC FFI targets with a RUWE ≥ 1.4 which could be considered as potential binary systems, and these are all plotted in Figure 12. In Figure 12, the RUWE value of each TESS-SPOC FFI target is shown as the colour of each point on the HRD. Here, the more yellow the data point, the higher the RUWE value of the target and the more likely it is in a binary system. Overall, there is a spread of targets with RUWE ≥ 1.4 across the whole of the HRD, including the main sequence, red giant and horizontal branches, and even the hot subdwarfs and white dwarfs. However, the majority of stars with very high RUWE values are those along the main sequence. This follows what we know about binary systems, where approximately half of F, G and K stars are expected to be in a binary system (Raghavan et al. 2010; Moe & Di Stefano 2017).
In addition to the RUWE value, Gaia also provides a non-single star (NSS) flag as an indication of the binary nature of a target. This flag indicates the target has been identified as a NSS by the Gaia Data Processing and Analysis Consortium (DPAC). As such, the observations have been put through tests of various binary orbit models. The NSS solutions are then kept when significant and fitted with an acceptable quality. For those where binarity was detected in several instruments, a combined fit is considered to improve the precision of the orbital parameters. Therefore, the NSS flag is organised into three main flags which inform on the nature of the NSS model. They are as follows: Flag 1 is an astrometric binary, Flag 2 is a spectroscopic binary and Flag 4 is an eclipsing binary. Some of the NSS flags represent models which are combinations of these three; for example, Flag 5 would represent the combination of Flag 1 and Flag 4, which informs us that there is both an astrometric and an eclipsing binary solution (full details of all the flags are in Table 1).
In total, 196,391 TESS-SPOC FFI targets have a NSS flag, with the combinations within these shown in Table 1. The majority of targets have NSS Flag 1, which indicates an astrometric binary. In Figure 12, each of the individual NSS flags is over-plotted onto the HRD to show the spread. It can be seen that the majority lie on the main sequence and red giant/horizontal branches, with only two close to the white dwarf region. Furthermore, the higher NSS flags seem to cluster around the top of the main sequence in the region of the O, B and A stars. This seems reasonable, as more than 70% of OBA stars exist in binary systems (Sana et al. 2012). This could be a result of the average number of companions per OB primary being at least three times higher than that of low-mass stars, which have 0.5 companions on average (Zinnecker 2003; Grellmann et al. 2013).
Table 1. The number of TESS-SPOC FFI targets which fall into each of the non-single star flags, along with the descriptor for each. The final column shows the percentage of the TESS-SPOC FFI Target sample of 2,890,583 sources.
Overall, the two Gaia binarity parameters have flagged 18.68% of the TESS-SPOC FFI targets through RUWE and 6.79% through the NSS flag. In total, 157,206 TESS-SPOC FFI targets have both a RUWE ≥ 1.4 and a NSS flag, which equates to 5.43% of the sample. Therefore, a total of 579,223 TESS-SPOC FFI targets have either a RUWE ≥ 1.4 or a NSS flag, which equates to 20.04% of the sample. This is a considerable fraction of the target sample; therefore, for future detailed analysis of these potential binary targets, the Gaia information could be used to infer more details on the binary nature. This could be useful when considering searching for planets around these stars and/or looking at variability.
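The bookkeeping behind these fractions is simple set arithmetic over the two flags. The sketch below shows it with pandas, using hypothetical column names (ruwe, nss_flag) and a placeholder file name for the cross-matched table.

import pandas as pd

sample = pd.read_csv("tess_spoc_ffi_gaia.csv")    # hypothetical cross-matched table

high_ruwe = sample["ruwe"] >= 1.4
has_nss = sample["nss_flag"] > 0                   # 0 taken to mean no non-single-star solution

for label, mask in [("RUWE >= 1.4", high_ruwe),
                    ("NSS flag set", has_nss),
                    ("both", high_ruwe & has_nss),
                    ("either", high_ruwe | has_nss)]:
    n = int(mask.sum())
    print(f"{label:>12s}: {n:>9,d} targets ({100.0 * mask.mean():.2f}% of the sample)")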
ISOLATING THE SPOC MAIN SEQUENCE
One of the main goals of this paper was to create a comprehensive main sequence TESS-SPOC FFI Target sample which can be utilised in other studies. In the creation of the TESS-SPOC FFI Target sample (Caldwell et al. 2020), targets were added, up to 160,000 for each sector, using the criterion of surface gravity log g > 3.5. This added dwarfs and sub-giants to the sample, which matches our criterion for selecting the main sequence. Therefore, we apply this criterion to the whole of the TESS-SPOC FFI Target sample, which leaves us with 2,319,308 main sequence targets. The full HRD of this sample can be seen in Figure 13, where the main sequence is clearly identified and the horizontal and giant branches are no longer seen. The properties of each target, including the Gaia DR3 properties, are listed in Table 2, which is also available as a machine-readable table online.
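Reproducing this cut is a one-line filter; the sketch below assumes the cross-matched table and a logg column, both hypothetical names for illustration.

import pandas as pd

sample = pd.read_csv("tess_spoc_ffi_gaia.csv")             # hypothetical cross-matched table
main_sequence = sample[sample["logg"] > 3.5]                # keep dwarfs and sub-giants only
main_sequence.to_csv("tess_spoc_ffi_main_sequence.csv", index=False)
print(f"{len(main_sequence):,} of {len(sample):,} targets retained by the log g > 3.5 cut")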
DISCUSSION & CONCLUSIONS
In this paper we have explored the properties of the TESS-SPOC FFI Target sample by utilising the Gaia DR3 catalogue. From this we have identified a well-defined main sequence TESS-SPOC FFI Target sample which is available to the community for future studies (see Table 2). The aim behind this is to standardise a sample which can be used for large surveys based on the publicly available TESS-SPOC FFI data.
Firstly, we created our final TESS-SPOC FFI Target sample by cross-matching the TESS-SPOC FFI targets with Gaia DR3 using the Gaia DR2 source identifier. Any TESS-SPOC FFI targets which were missed were matched using a cone search with Gaia on the right ascension and declination to form a more complete sample; see §2.3 for the full details. The comparison between the Gaia sample and the TESS-SPOC FFI Target sample with regards to absolute magnitude shows a stronger spread amongst main sequence targets for SPOC, see Figure 3. The mean absolute Gaia magnitude for the Gaia sample is 2.6 and for the TESS-SPOC FFI Target sample is 3.9 (see Figure 3), which falls in line with the selection criteria for SPOC and is unsurprising.
For each of our TESS-SPOC FFI targets, we used the CDPP from the SPOC FFI light curves to estimate the two transit radius detection limit of an orbiting exoplanet. In order to do this, we also had to make a few assumptions: (i) the transit duration was set to 2.7 hours (the median transit duration of all known TOIs) and (ii) the signal-to-noise threshold was set to 7.1/√2. In Kunimoto et al. (2022), they use simulations of TESS data along with injection and recovery tests to determine the number of detectable planets around 8.5 million AFGKM stars from the TESS Candidate Target List (CTL: v8.01). Their stellar sample was taken from the CTL, keeping only targets which were classified as dwarfs and removing all giants. This is different to the TESS-SPOC FFI Target sample which, as seen from Figure 5, has targets from main sequence dwarfs to red giants and white dwarfs. In total, over seven years of TESS observations, Kunimoto et al. (2022) predict a planet yield of 12,519 ± 678, with 8,426 ± 525 planets detectable from the primary mission and first extended mission (i.e. years 1-4). It is important to note that they have an upper limit of 16 R⊕ for AFGK stars and 4 R⊕ for M stars within their injection and recovery tests. Overall, they find that G-type stars are the most common hosts and half of the TESS planet detections consist of gas giants with radii greater than 8 R⊕.
From our TESS-SPOC FFI main sequence Target sample, 41% of targets are G-type stars, which is in agreement with our median T_eff of 5,933 K; therefore, the majority of our two transit radius detection limits are for host stars which are G-type. Furthermore, we also find that 38% of our two transit radius detection limits correspond to giant planets with radii > 8 R⊕. We also have a peak in our distribution of detectable planetary radii which falls at ∼6 R⊕. While it is important to note we are not producing a yield estimate, it is reassuring that similarities are found amongst our two transit radius detection limits. We do not perform any injection and recovery tests as this is beyond the scope of this paper. Furthermore, Kunimoto et al. (2022) used a model planet distribution which is a function of planet size, orbital period, and host star spectral type to predict their yield. In this study we only present the detectability, but do not consider how it varies with increasing numbers of transits, which would lead to smaller detectable planet radii. Furthermore, when calculating our two transit radii we assume uncorrelated Gaussian noise. However, the TESS SPOC pipeline goes to considerable effort to decorrelate the data before any transit detections; therefore, the noise within the light curves is generally not uncorrelated Gaussian noise and does not strictly obey square-root statistics. There is no simple solution to correct for this, but we would just like to make a note here for the reader.
Table 2. A small sub-sample of the main sequence TESS-SPOC FFI Target sample. In total we show a selection of 13 columns out of the 21 available for each target. Amongst these we include the TIC and Gaia DR3 IDs, and also the estimated two transit radius detection limit for each target as the last column. This table is available in its entirety in a machine-readable format at CDS; a small sub-sample is shown here for guidance regarding its form and content.
With our cross-matched sample we explored the Gaia properties of the TESS-SPOC FFI Target sample, see Figure 7. The median effective temperature of the sample is 5,933 K; this lies in the range of G-type stars and is in agreement with our stellar spectral type distribution. Further to this, the median stellar radius is 1.36 R⊙, which is a typical radius for an FGK star. There also appears to be a second bump in radius at ∼0.5 R⊙, which is indicative of low-mass M dwarfs. This can also be seen in distance at ∼100 pc and in effective temperature at ∼3,000 K, which all correspond to low-mass M dwarf properties. This is not surprising since TESS has a redder band-pass than Kepler, making it easier to detect exoplanets around low-mass stars. Furthermore, the increase in M dwarfs with TESS can also be attributed to their emphasis in the selection process for SPOC 2-min and SPOC FFI targets, see §2.1. With regards to distance, the median was 438 pc, with 90% of stars having a distance of 850 pc or less. Finally, the median log g is 4.25, which centres around what we know for the Sun, with log g = 4.43 (Gray 2021). There is also a second peak in the log g distribution at 4.5, suggesting a group of high surface gravity stars; there is also a small peak in radius at ∼0.3 R⊙. Therefore, it is likely the peaks at log g of 4.5, T_eff of ∼3,000 K and radius of ∼0.3 R⊙ correspond to M dwarfs, which form part of the SPOC selection process.
Overall, this paper presents the first overview of the TESS-SPOC FFI Target sample, utilising data from Gaia DR3. We have produced the first HRD of the TESS-SPOC FFI Target sample, showing a wide distribution of targets from the main sequence, red giant branch, white dwarfs and hot subdwarfs. Furthermore, we have produced a main sequence TESS-SPOC FFI Target sample which is publicly available for the community to use in further studies. Given the success of the SPOC pipeline and the planet yield expected from TESS going forward, we hope this will encourage a standardised target list to be used where appropriate. Furthermore, we have also estimated the two transit radius detection limit for each target using TESS-SPOC FFI data to inform future planet searches. This may help when developing search strategies for new transiting exoplanets in the future.
Figure 1. A bar chart showing the split of TESS-SPOC FFI targets in each sector for Cycle 1. In orange are the targets which are also observed at 2-min cadence (10-15%), in pink the targets with d ≤ 100 pc (3-4%), and in purple the remaining targets, which represent those with T_mag ≤ 13.5, log g > 3.5 and crowding metric ≥ 0.8.
Figure 3. A histogram of the absolute Gaia magnitude and mean Gaia magnitude from DR3 for both the Gaia sample (yellow) and the TESS-SPOC FFI Target sample (navy).
Figure 4. The completeness of the TESS-SPOC FFI Target sample in comparison to the Gaia magnitude limited sample. The black line represents the full TESS-SPOC FFI sample compared to the full Gaia sample, and the red line is the TESS-SPOC FFI main sequence sample compared to the Gaia main sequence sample defined in §5.
Figure 5. The colour-magnitude diagram of all TESS-SPOC FFI targets from TESS Sectors 1-55. All colours, along with the parallax used to determine the absolute Gaia magnitude, were taken from the Gaia DR3 catalogue. The colour scale represents the log of the density of stars.
Figure 6. Colour-magnitude diagram where the colour scale shows the ratio between TESS-SPOC FFI and Gaia targets. Yellow represents both samples having the same number of targets and dark blue where targets are in the Gaia sample but not the TESS-SPOC FFI sample.
Figure 7. Stellar properties of the TESS-SPOC FFI Target sample. The radial velocity measurements are from Gaia DR3 and the remaining properties are taken from the Gaia DR2 catalogue. The range shown in each plot was cut to enable the distribution to be seen. The percentage of the sample which lies outside the axes is as follows: radial velocity 3.7%, temperature 1.8%, log g 2.5% and radius 1.9%.
Figure 8. The log distribution of distances for the TESS-SPOC FFI Target sample, taken from the parallaxes of Gaia DR3 (light blue), showing those with distances less than 2,000 pc. We also overplot the Kepler distances of ∼200,000 targets from Q1 (dark blue). The range shown has been cut to enable the distribution to be seen; therefore, 0.6% of TESS and 22% of Kepler targets lie outside the axes.
Figure 9. TESS Objects of Interest (TOIs) over-plotted as circles onto the colour-magnitude diagram of all TESS-SPOC FFI targets. Those with SPOC data are plotted in red (4,270), with the remaining TOIs with no SPOC data over-plotted in green (2,417).
Figure 10. Colour-magnitude diagrams showing the distribution of the calculated two transit radius detection limit of an exoplanet which could be detected around each TESS-SPOC FFI target. Each plot has been split up according to planetary radius, with Earths/Super-Earths in the range 0.5 R⊕ < R_p ≤ 2.0 R⊕, Mini-Neptunes in the range 2.0 R⊕ < R_p ≤ 4.0 R⊕, Gas Giants in the range 4.0 R⊕ < R_p ≤ 27.0 R⊕ and Non-Planetary Companions in the range 27.0 R⊕ < R_p ≤ 100.0 R⊕. The number of each category in each subplot is given in the top right-hand corner. The radius calculation was determined using the average TOI duration of 2.7 hrs as the transit duration, assuming a circular orbit and using a signal-to-noise of 7.1/√2.
Figure 11. A log histogram distribution of the calculated two transit radius detection limit of an exoplanet which could be detected around each TESS-SPOC FFI target. The radius calculation was determined using the median transit duration of all TOIs as the transit duration, assuming a circular orbit and using a signal-to-noise of 7.1/√2. The bars are colour coded according to the planetary radius, with Earths/Super-Earths in the range 0.5 R⊕ < R_p ≤ 2.0 R⊕ as dark blue, Mini-Neptunes in the range 2.0 R⊕ < R_p ≤ 4.0 R⊕ as blue/green, Gas Giants in the range 4.0 R⊕ < R_p ≤ 27.0 R⊕ as green and Non-Planetary Companions in the range 27.0 R⊕ < R_p ≤ 100.0 R⊕ as yellow.
Figure 12. The colour-magnitude diagram of all TESS-SPOC FFI targets from TESS Sectors 1-55. The left plot is colour coded according to the Gaia RUWE and the right plot has the Gaia non-single star (NSS) flags over-plotted. In each plot the higher RUWE values and NSS flags are plotted on top in order to be seen, given the large sample size. Full details of what each NSS flag represents are given in Table 1.
Figure 13. The Gaia colour-magnitude diagram of the TESS-SPOC FFI main sequence Target sample. These targets were isolated by filtering on log g > 3.5, which resulted in a final main sequence TESS-SPOC FFI Target sample of ∼2.3 million. | 2024-03-03T18:44:08.879Z | 2024-02-28T00:00:00.000 | {
"year": 2024,
"sha1": "34ea12ce7be087e30e708837ee6a50bb7f6ebd66",
"oa_license": "CCBY",
"oa_url": "https://academic.oup.com/mnras/advance-article-pdf/doi/10.1093/mnras/stae616/56794425/stae616.pdf",
"oa_status": "HYBRID",
"pdf_src": "ArXiv",
"pdf_hash": "7c9570455634565cf274f3007a36258e5226661c",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": []
} |
252234931 | pes2o/s2orc | v3-fos-license | PRODUCT PACKAGING DESIGN AS THE BASIS OF PRODUCT PURCHASE DECISION
Market homogenization leads to fierce competition in the tea market. In recent years, with the rise of “green health” and the deep-rooted concept of “innovation”, tea packaging design innovation has become a hot spot for research in the tea industry, and the continuous upgrading of tea market demand has shown vigorous vitality and energy. The study aims to examine the influence of Chinese tea packaging design on product purchasing decisions and to analyze its importance and influencing factors, and further to strengthen the sustainable development of tea packaging design in Yanhu District, Shanxi Province, China. A theoretical analysis was conducted of tea cognitive attitudes, tea culture, tea packaging design, and purchase decision behavior. The final online questionnaire was distributed to 321 respondents in Yanhu District. The empirical results indicate that all three dimensions of tea packaging design factors positively influence purchase decisions, but the degree of influence varies. Tea products can be enhanced through packaging design to influence purchasing decisions. Ultimately, this can explore the packaging effect of tea products, add value to the connotation of tea products, and meet consumers' multi-level demand for tea products.
INTRODUCTION
Tea packaging design is a soft-power publicity medium that tea companies possess, and it is the key to the implicit communication between a tea enterprise's products and consumers' favor.
When consumers buy tea in the market, they are more willing to choose tea products that they find easier to trust. In addition to the influence of tea brands, excellent tea packaging design is also needed as an auxiliary means of influencing consumers' purchasing decisions. Packaging is the first thing that comes into contact with the consumer and stimulates the desire to buy, which is of great help in promoting sales. These functions are increasingly crucial to the business.
The experiential economy is increasingly embedded in our daily production and life, and product packaging and design invariably influence the consumer experience. According to China Tea Marketing Association statistics, by continuously improving the green eco-technology of China's tea plantations, the total sales of traditional tea products in China were expected to exceed RMB 300 billion in 2021, an increase of 3%-4% from 2020. Reflecting the future trend of price stability and considerable sales volume, the Chinese tea market is a thriving and sustainable environment (CTMA, 2021). However, the impact of COVID-19 will result in a significant decline in exports from the Chinese tea industry. It is expected that, with the intensification of the epidemic worldwide, China's tea exports will see a significant reduction in volume in the short term. Tea is rich in tea polyphenols, tea pigments, and minerals with health benefits (Khan & Mukhtar, 2007); therefore, drinking tea often helps human health. In recent years, tea has become consumers' first choice for “green, high quality, healthy” consumption due to its high-quality product attributes, wide variety, and exquisite packaging. China's varied topography and climate also produce different varieties of tea, which can therefore meet the needs of different consumer groups.
In the face of the current abundant resource advantages of Chinese tea and the epidemic's impact, attracting consumers to buy tea products has become the primary demand of tea producers, processors, and sellers. As a result, people have realized the importance of tea packaging to attract consumers and enhance tea sales. The tea packaging design serves as a bridge of information communication between tea products and consumers. Consumers mainly obtain the basic information about tea, perceive the emotional output of the packaging, and communicate culture through tea packaging to
Tea Cognition
China is the largest producer and consumer of tea, with a long history of tea drinking, and tea has long been another embodiment of everyday life. China's tea culture has a long origin and is so profound that it has become one of the carriers showing the charm of traditional Chinese culture (Patil, Bachute, & Kotecha, 2021). Tea not only embodies a long-standing culture but also reflects a mood and renders an atmosphere, such as the beauty of the “tea ceremony” culture. There is great interest in the healthiness of tea, which has increased the popularity of tea. Tea has been considered to have medicinal value throughout its long history. Tea contains the antioxidant theophylline. Tea has many beneficial health properties, such as antioxidant, neuroprotective, and hypolipidemic benefits (Chacko, Thambi, Kuttan, & Nishigaki, 2010). Foreign perceptions of tea focus on its drinking characteristics; tea is primarily a casual drink, sipped when people are relaxed.
Tea Packaging Design
In the era of big data, factors such as internet intelligence and the epidemic influence the diversified development of the tea market in domestic and international exchange and communication. Consumers' demand for tea packaging design will also change in the direction of sustainable diversity. The most basic protective function of packaging can no longer meet the requirements of tea purchasing. Therefore, enterprises gradually focus on attracting consumers by analyzing the form, material, color, and other factors of product packaging design to address consumers' emotional needs, enhance product competitiveness, and improve product sales (López, Murillo, & Gónzalez, 2021). Product packaging mainly uses color, labels, and text to influence consumer purchase decisions, and product packaging is the most compelling factor for consumers, reaching in depth from a cultural to a brand impact level (C. C. Shen, 2014). From the perspective of green design, there should be environmental awareness in the selection and use of tea packaging materials (Lin & GUO, 2020).
Consumer Purchase Decisions
In recent years, the direction of consumer research has expanded from theory to the analysis of consumer preferences. In terms of product attributes, it was found through the analysis that the product's packaging design and product type significantly affect the customer's awareness of the brand's products (Fenko, Lotterman, & Galetzka, 2016).
Regarding consumer characteristics, some researchers have pointed out that the products people buy contain certain cultural connotations (Dewobroto, 2022). The products they buy reflect consumers' cultural attributes and norms, significantly influencing their perceptions and purchase behaviors (Ogden, Ogden, & Schau, 2004).
In terms of consumer characteristics, when consumers buy products that emphasize greenness, the greenness stimulates the desire to buy, which has been used to verify the influence of green and emotional value on consumer purchase and repurchase behavior (Ariffin, Yusof, Putit, & Shah, 2016). Some scholars have also used experimental field methods, combined with in-depth interviews, and used logistic regression analysis of sample and interview data to verify the effects of gender and age on behavioral changes (Sahay, Sharma, & Mehta, 2012).
Previous researchers have focused more on brand effect, regionality, culture, emotion, healthiness, packaging effect, and so on, which are somewhat disorganized and lack planned dimensions. This study extracts the key tea packaging design influences on consumers' purchasing decisions and divides them into dimensions to demonstrate more clearly the strength of the relationship between the two.
Consumer Purchase Decision Theory
Consumers are faced with a wide range of choices among the many packaged teas on the market, so much so that they may not fully understand the complete information about the product before purchase and have difficulty accurately judging the quality of the tea from an objective point of view. First, consumers often perceive tea products through external packaging cues such as brand information, origin, and design style, which affect their desire to purchase. Secondly, consumers' attention to and preference for tea products also affect their evaluation of the quality of packaged tea. Consumers' purchase decisions constantly take their consumption needs and other factors into consideration. The purchase decision can be described by a “Stimulus-Organism-Response” (S-O-R) process, which has been widely used in business studies and other fields to explore the interaction between different stimuli and consumers (Zhu, Li, Wang, He, & Tian, 2020).
RESEARCH METHOD
The study used a mixed method of qualitative and quantitative approaches. For the quantitative approach, primary data were gathered from general consumers in Yanhu District, who were taken as the research subjects; the sample size was determined using the Yamane formula with a margin of error of 5% (confidence level of 95%) for the target population of this study. In all, 321 usable questionnaires were obtained for the survey. The sample size met the requirements of a sample size between 100 and 400 and an optimal ratio of 10:1 between the sample size and the total number of questionnaire questions, as recommended in SEM and EFA research (Worthington & Whittaker, 2006). The sample was selected mainly by random sampling using the Questionnaire Star software, and Yanhu District was selected as the survey area. The study's statistical analyses were conducted using SPSS and AMOS software.
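For reference, Yamane's formula is n = N / (1 + N·e²). The sketch below evaluates it in Python; the population figure for Yanhu District is not stated in the text, so a placeholder value is used purely to illustrate the calculation.

def yamane_sample_size(population: int, margin_of_error: float = 0.05) -> float:
    """Taro Yamane's formula: n = N / (1 + N * e**2)."""
    return population / (1.0 + population * margin_of_error ** 2)

# For any large population the required sample approaches 1 / e**2 = 400 at e = 0.05;
# 700,000 below is a placeholder for the consumer population of Yanhu District.
print(round(yamane_sample_size(700_000)))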
Additional statistical analysis used Exploratory Factor Analysis to examine the correspondence between factors and potential dimensions. Using exploratory factor analysis, we can extract the common factors among variables and explain the factor information by summarizing the main dimensions. Categories can be extracted from much repetitive information, and dimensions can be classified to streamline the information.
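As an illustration of this step, the sketch below runs a three-factor extraction on a response matrix with scikit-learn's FactorAnalysis; the file and item names are placeholders, and the paper's own analysis (software and rotation choices) may differ — dedicated packages such as factor_analyzer additionally offer varimax rotation and KMO/Bartlett tests.

import pandas as pd
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

responses = pd.read_csv("pretest_responses.csv")     # hypothetical matrix of Likert items

# Standardise the items, then extract three common factors (the three design dimensions).
X = StandardScaler().fit_transform(responses.values)
fa = FactorAnalysis(n_components=3, random_state=0).fit(X)

# Loadings: rows are items, columns are factors; large absolute values show which
# latent dimension an item belongs to, which is how items are grouped into dimensions.
loadings = pd.DataFrame(fa.components_.T, index=responses.columns,
                        columns=["factor_1", "factor_2", "factor_3"])
print(loadings.round(2))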
The analysis in this study was conducted in an exploratory mode to determine the extent to which purchase decisions were associated with tea packaging design factors (Cudeck, 2000). This study is based on Exploratory Structural Equation Modeling (ESEM), reflecting the flexibility of factor analysis. Confirmatory Factor Analysis will be used to improve the rigor and scientific validity of this study.
To enhance the analysis, the study combines a large amount of literature at home and abroad. It collates the relevant research on tea cognition, tea packaging design, and consumer purchase decision in recent years to explore the cross-sectional relationship in the existing literature while closely following the current research trends (Snyder, 2019).
Pretest/Exploratory Factor Analysis
The reliability score of 0.853 in the preliminary study was higher than 0.8, thus indicating an authentic and reliable questionnaire and a high quality of reliability (162 valid pretest questionnaires).
Descriptive Statistical Analysis
By gender, most tea consumer respondents in Yanhu District are male, with 162 respondents (50.47%) in total. This means there are slightly more male tea consumers in Yanhu District, and the gender ratio is relatively even. Regarding the age distribution of tea consumers in Yanhu District, most of the sample is "17-28 years old", with a proportion of 39.88%. In addition, the proportion of the 29-50 years old sample is 38.94%. This indicates that consumers are relatively young, and that mass-market tea consumption shows an increasingly evident trend of youthfulness, which calls for building a solid relationship with consumers, strengthening the traditional culture of tea, and maintaining health. More than 50% of the tea consumers in Yanhu District have "Undergraduate" education. This indicates that the education level of consumers is high, and that consumers' level of education and literacy may also impact tea consumption.
More than 30% of the tea-drinking sample in Yanhu District chose "Packaging design", with 110 respondents. This means that consumers still care about the packaging design of tea, and it is a main reason for buying tea; more than anything else, the packaging design attracts consumers toward purchase decisions. For the price range of tea in Yanhu District, most of the sample chose "140-200 yuan/kg", with a proportion of 35.51%, a moderate tea price. It also means that 114 respondents still prefer to buy tea products at higher prices, so the price of tea may affect the frequency of tea purchases, the quality of tea packaging, etc. 20.87% of the sample chose "Public functionary", with 67 respondents, which means that the workplace and environment may impact the demand for tea. The reliability test analysis was carried out by importing the data of the 321 final questionnaires into SPSS; as Table 1 shows, the Cronbach alpha was 0.929, which is greater than 0.9 for each design dimension, indicating high reliability of the data.
Furthermore, based on the "CITC value", all the items are above 0.4, reflecting good correlations among the analyzed items. Meanwhile, the reliability level is also good.
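Both statistics quoted here have simple closed forms, sketched below in Python with pandas; "responses" is a hypothetical respondents-by-items matrix, and this is an illustration of the formulas rather than the SPSS output itself.

import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """alpha = (k / (k - 1)) * (1 - sum of item variances / variance of the total score)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_variances / total_variance)

def corrected_item_total_correlation(items: pd.DataFrame) -> pd.Series:
    """CITC: correlation of each item with the sum of all remaining items."""
    return pd.Series({col: items[col].corr(items.drop(columns=col).sum(axis=1))
                      for col in items.columns}, name="CITC")

# responses = pd.read_csv("final_questionnaire_items.csv")   # 321 rows of Likert items
# print(cronbach_alpha(responses))
# print(corrected_item_total_correlation(responses))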
Therefore, the final 321 questionnaires had great overall reliability test results and passed the test requirements.
Figure 1 SEM Model Coefficient Analysis Results
Source: Researchers' calculation based on the questionnaire using AMOS
The survey found that consumers in Yanhu District did not particularly care about the text design features on the outer packaging of tea or the portability function of the tea product packaging design. In terms of packaging form, they are more inclined toward creative tea packaging design products. The factors in the three dimensions of sustainability, experience, and purchase decision of tea packaging design positively impact the product purchase decision; however, there are differences in the degree of impact. The p-value of each factor hypothesis showed significance (p-value of 0.000*** < 0.01), while the standardized coefficients (β) are all greater than 0.4. Therefore, all hypotheses H1-H10 are valid and reasonable. This confirmed the objectives and questions of this study: tea packaging design positively impacts product purchase decisions.
CONCLUSION
The study explores and answers the research objectives and questions: tea packaging design influences product purchasing decisions. Previous studies have analyzed the importance of tea packaging design and consumer buying behavior. This study takes consumer respondents (N=321) in Yanhu District, Yuncheng City, Shanxi Province as the research scope and explores the stimulating influence of different tea packaging design factors on product purchasing decisions, focusing on tea packaging design and consumers' purchasing decisions. Regarding the size of the influence, the apparent shape of the tea packaging has the most significant influence on the product purchase decision; whether or not the consumer finally buys the product, consumers' first glance always takes in the appearance and shape of the tea product first, followed by other influencing factors such as the traditional Chinese cultural elements of the tea product, uniqueness, graphic elements, and color.
Consistent with previous studies, the price of tea may affect the frequency of tea consumption, the quality of tea packaging, and so on. Meanwhile, creative tea packaging design with expressive power can attract customers to consume and purchase, and independent innovation in tea packaging design forms a new development trend.
The sustainability of tea packaging design is essential to the current product purchase decision process. Whether it is the inheritance and innovation of traditional culture, the environmental recyclability of the materials used, the concern for the health benefits of tea to the human body, or the details of specific product information reflected in the tea packaging design, all of these directly affect the final purchase decision. This indicates that education, culture, and the work environment may influence consumer purchases, followed by factors such as the brand, origin, and price of the tea. With the changing times, consumers are now more receptive to fashionable and creative tea packaging design. Tea product packaging should focus on user habits, such as being easy to open and store. The unpacking method adds interest and reflects the uniqueness of each tea product to create a good user experience. Tea packaging design can build the brand image of tea enterprises and promote the development and communication of the tea culture industry. All of these positively influence the product purchase decision.
Overall, the study results are consistent with the existing theoretical support, and tea consumption shows a trend toward younger consumers. Therefore, it is confirmed that tea packaging design is a fundamental consideration in product purchase decisions, and it thus promotes the development needs and trends of the tea market, economy, culture, society, and environment at all levels.
"year": 2022,
"sha1": "c87824e3ff881794b519a64e736995168caba079",
"oa_license": "CCBYSA",
"oa_url": "https://e-journal.ikhac.ac.id/index.php/iijse/article/download/2453/984",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "d4fc3c50801a72c9a109647abda3a5c33398bc23",
"s2fieldsofstudy": [
"Business",
"Agricultural And Food Sciences"
],
"extfieldsofstudy": []
} |
260189279 | pes2o/s2orc | v3-fos-license | Instrumental Color Measurements Have Relationships to Fat Smearing in Fresh Sausage
Fat smearing, or poor fat particle definition, impacts the visual quality of sausage. However, objective methods of assessing fat smearing have not been identified. Therefore, the objective of this experiment was to determine the relationship between fat smearing and instrumental color analysis for fresh sausages to create a standard method for using instrumental color in fat smearing analysis. Meat blocks of pork (PK), beef (BF), and a mixture of pork and beef (P/B) were formed and processed at three different temperatures to create varying degrees of fat smearing. The average fat smearing score of each sausage was used to determine if a relationship existed with instrumental color measurements (CIE L*, a*, b*, and reflectance percentage at 580 nm and 630 nm) and color calculations. A correlation was observed for L* (R = −0.704) and the reflectance at 580 nm (R = −0.775) to PK fat smearing (p < 0.05). In P/B sausage, both reflectances at ratios between 630 nm and 580 nm were correlated to P/B fat smearing. No measurement or calculation was correlated with BF fat smearing (p > 0.05). Therefore, it is possible to use instrumental color analysis for the evaluation of fat smearing in pork and pork/beef blended sausage products, but not in beef sausage products.
Introduction
Sausage manufacturing processes involve grinding to reduce particle size, mixing to incorporate spices, and stuffing into casings. During manufacturing, fat smearing can occur, which gives the product a muddled appearance after stuffing and cooking. Smearing is when fat within the product loses its particle definition, and it is caused by a combination of unsaturated fatty acid composition and increasing sausage temperature [1]. Animal fats with high concentrations of unsaturated fatty acids have a lower melting point, causing a softer texture at colder temperatures than saturated fat [2,3]. Unsaturated fats, such as pork fat, are widely used because of the flavor they contribute to the finished sausage product. Consequently, the softness of unsaturated fat and the elevated temperatures caused by the mechanical action of sausage production can result in noticeable fat smearing that may decrease consumer acceptance and increase labor and processing costs [1].
Standard objective and subjective evaluations have the ability to measure product quality; however, there is no standardized analysis method available to evaluate fat smearing in fresh sausage products. Previous fat evaluations rely on human subjective analysis that can be highly variable or not available to all researchers [4]. However, several rapid, inexpensive technologies exist to evaluate meat color that could potentially replace subjective analysis and are nondestructive to the products being evaluated [5]. Hunter Lab values or CIE L, a, and b values, known as L*, a*, and b* are utilized by numerous researchers to measure meat color [6]. These measurements evaluate color as three components: L* = +light/−dark, a* = +red/−green, and b* = +yellow/−blue. These values can be reported as individual values or ratios of a*/b* to determine the difference in hue angle and saturation index [6].
In addition, reflectance ratios obtained from specific wavelengths are commonly used to determine surface color and changes in color over time [6]. However, the use of instrumental color analysis is largely focused on the lean meat color and has not been applied to the analysis of fat smearing.
Since fat smearing can drastically change the perception of a high-quality fresh sausage product, instant, noninvasive strategies may provide a detectable relationship between color and fat smearing. Our hypothesis was that instrumental color measurements have a relationship to observed fat smearing and can be used as a method for the analysis of fat smearing. The objective of this experiment was to determine the relationship between fat smearing and instrumental color analysis for fresh pork, beef, and blended sausage products to create a standard method for using instrumental color in fresh sausage analysis in order to quantify fat smearing.
Sample Preparation
Beef and pork trim were purchased and transported to the South Dakota State University Meat Laboratory, Brookings, SD, USA, and formulated into meat blocks consisting of 100% pork (PK), 100% beef (BF), and 50% pork + 50% beef blend (P/B) in triplicate batches (n = 9). Each batch was formulated to be 80% lean and 20% fat, and proximate analysis was conducted to verify the lean-to-fat ratio. Each batch was then divided into three equal sub-portions (n = 27). To create a distribution of fat smearing, each batch sub-portion was tempered to one of three temperature treatments: low temperature (−1.1 °C), medium temperature (4.4 °C), and high temperature (10 °C). Each batch was ground twice with a 4.76 mm grinding plate, mixed with 2 percent salt and 0.3 percent pepper, and stuffed into 5.1 cm diameter sausage casings. Each sausage was crust frozen and transversely bisected into slices, resulting in sausage patties (2.54 cm thick). Three patties were fabricated from the medial portion of each sausage. Two patties were packaged in PVC overwrap for visual and instrumental color analysis and one patty was vacuum packaged, frozen, and stored at −20 °C for proximate analysis.
Fat Smearing Measurements
Visual analysis was conducted by seven trained evaluators to determine the degree of fat smearing in each tray of patties. Evaluators were trained using the standard scale created by Varnold et al. [7] and were considered proficient when they achieved greater than 95% repeatability in scoring against the standard. One package from each sausage blend and temperature treatment (n = 27) was evaluated using a 15 cm anchored line scale to score each tray of patties. The anchors of the scale were 0 = no fat smearing and 15 = extreme fat smearing.
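For illustration only, the averaging of the seven evaluators' line-scale scores into a single fat smearing score per tray can be sketched as follows; the tray labels and score values are hypothetical and do not come from the study data.

```python
import statistics

# Hypothetical line-scale scores (cm along the 15 cm anchored line) from the
# seven trained evaluators for two example trays; labels and values are invented.
tray_scores = {
    "PK_low_temp":  [1.2, 0.8, 1.5, 1.0, 1.1, 0.9, 1.3],
    "PK_high_temp": [9.5, 10.2, 8.8, 9.9, 10.5, 9.1, 9.7],
}

for tray, scores in tray_scores.items():
    mean_score = statistics.mean(scores)  # average fat smearing score for the tray
    print(f"{tray}: mean fat smearing = {mean_score:.2f} on the 0-15 scale")
```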
Instrumental Color Analysis
The same packages evaluated for fat smearing were placed in a cooler (3.3 °C) under fluorescent lighting. The instrumental color analysis included CIE L* (lightness; 0 = black, 100 = white), a* (redness/greenness; positive values = red, negative values = green), and b* (yellowness/blueness; positive values = yellow, negative values = blue). Instrumental color measurements were recorded in duplicate on the exposed cut surface of each patty, using a Minolta Chroma Meter CR-310 (Minolta Corp., Ramsey, NJ, USA) with a 50 mm diameter measuring area and a D65 illuminant. Reflectance percentages were measured for each patty at 580 nm and 630 nm, using a Hunter Mini Scan XE (Hunter Associates Laboratory, Inc., Model 45/0-L; Reston, VA, USA). All calculations for color parameters are outlined in AMSA [6]. Reflectance ratios were calculated for 630 nm − 580 nm and 630 nm/580 nm. Redness and discoloration were calculated using a*/b*. Chroma (saturation index) was calculated using the following formula:

Chroma = [(a*)² + (b*)²]^(1/2) (1)

Finally, the hue angle was calculated using the following formula:

Hue angle = arctangent (b*/a*) (2)
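A minimal sketch of these color calculations, assuming duplicate-averaged instrument readings as inputs; the numeric values are invented for illustration and are not study data.

```python
import math

def color_calculations(a, b, refl_580, refl_630):
    """Derived color parameters described above (AMSA-style calculations).
    Inputs: CIE a*, b* readings and reflectance percentages at 580/630 nm."""
    return {
        "chroma": math.sqrt(a ** 2 + b ** 2),             # saturation index, Eq. (1)
        "hue_angle_deg": math.degrees(math.atan2(b, a)),  # arctangent(b*/a*), Eq. (2)
        "redness": a / b,                                 # a*/b* redness/discoloration
        "refl_difference": refl_630 - refl_580,           # 630 nm - 580 nm
        "refl_ratio": refl_630 / refl_580,                # 630 nm / 580 nm
    }

# Example with made-up duplicate-averaged readings for one patty:
print(color_calculations(a=18.1, b=14.7, refl_580=22.5, refl_630=35.8))
```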
Proximate Analysis
A chemical analysis of the sausage was conducted to determine the moisture and fat of each sample in duplicate. The samples were immersed in liquid nitrogen and subsequently powdered with a Waring commercial blender (Waring Products Division, New Hartford, CT, USA). Replicate 2 g samples were dried in tin foil pans at 100 °C (24 h), and percent moisture was calculated as the difference between the original weight and dried weight. Ashless, N-free filter paper was then wrapped over the samples and tin foil pan and extracted with petroleum ether in a side-arm Soxhlet (60 h) for ether extraction of lipid, followed by drying at 101 °C for 24 h [8]. Crude fat was calculated as the difference between dried sample weight and extracted sample weight.
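The weighing arithmetic described above can be summarised in a few lines; note that expressing both percentages relative to the original (wet) sample weight is an assumption, since the text does not state the basis of the calculation, and the example weights are hypothetical.

```python
def proximate_analysis(original_g, dried_g, extracted_g):
    """Percent moisture and crude fat from the sequential weighings described above.
    original_g  -- wet sample weight before drying (here ~2 g)
    dried_g     -- weight after drying at 100 C for 24 h
    extracted_g -- weight after petroleum-ether extraction and re-drying
    Wet-weight denominator is assumed; the text does not state the basis."""
    moisture_pct = (original_g - dried_g) / original_g * 100
    crude_fat_pct = (dried_g - extracted_g) / original_g * 100
    return moisture_pct, crude_fat_pct

# Hypothetical duplicate: 2.00 g wet sample, 0.72 g after drying, 0.32 g after extraction
moisture, fat = proximate_analysis(2.00, 0.72, 0.32)
print(f"moisture {moisture:.1f}%, crude fat {fat:.1f}%")
```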
Statistical Analysis
This experiment was designed for the correlation and regression analysis of instrumental color measurements and calculations in relation to the visual analysis of fat smearing. Data were analyzed using the correlation (Proc Corr) and regression (Proc Reg) procedures of the SAS software package (SAS version 9.4; SAS Institute Inc., Cary, NC, USA, 2012). To determine the possibility of a multiple linear regression model, Proc Reg was utilized with the selection criteria of the greatest adjusted R² and lowest Akaike information criterion (AIC) to determine the best-fit model. All correlations and regressions were considered significant at p < 0.05. To validate that fat content was consistent between replications, Proc GLM was used to confirm that no differences were found (p > 0.05).
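As a rough analogue of the SAS procedures named above (not the authors' code), the pairwise correlation and simple regression steps could be reproduced in Python as follows; the data frame contents are hypothetical placeholders.

```python
import pandas as pd
import statsmodels.api as sm
from scipy import stats

# Hypothetical per-sample data: one row per batch sub-portion (values invented).
df = pd.DataFrame({
    "smearing": [2.1, 4.5, 7.8, 3.0, 6.2, 9.1, 1.5, 5.0, 8.3],
    "L_star":   [55.0, 52.1, 48.7, 54.2, 50.9, 47.3, 56.1, 51.5, 48.0],
    "refl_580": [24.0, 21.5, 18.9, 23.1, 20.4, 18.2, 24.8, 20.9, 18.5],
})

# Proc Corr analogue: Pearson correlation of each measurement with fat smearing
for col in ("L_star", "refl_580"):
    r, p = stats.pearsonr(df[col], df["smearing"])
    print(f"{col}: R = {r:.3f}, p = {p:.3f}")

# Proc Reg analogue: simple linear regression of fat smearing on L*
fit = sm.OLS(df["smearing"], sm.add_constant(df[["L_star"]])).fit()
print(f"R^2 = {fit.rsquared:.3f}, AIC = {fit.aic:.1f}")
```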
Results and Discussion
Proximate analysis confirmed that each sausage blend had no significant differences between the replicates (p > 0.05); thus, fat concentration would not vary among the samples to alter the fat smearing scores. Correlation coefficients (R) and regression coefficients (R²) for individual instrumental color measurements and calculations in relation to visual fat smearing are displayed in Table 1.

Table 1. Correlation and regression analysis of instrumental color (CIE L* (lightness), CIE a* (redness), and CIE b* (yellowness)), reflectance values, reflectance ratios, chroma, and hue angle of pork, beef, and mixed sausages to visual fat smearing scores.

For PK sausage, correlations to fat smearing were observed for L* values (R = −0.704; p = 0.034) and the reflectance percentage at 580 nm (R = −0.775; p = 0.014), with the remaining measurements and calculations lacking significance (p > 0.05). Regressions for these individual measurements determined L* values to have an R² of 0.496 (Figure 1) and the reflectance percentage at 580 nm to have an R² of 0.601 (Figure 2) in relation to fat smearing. L* values are a measurement of lightness (+light/−dark), while reflectance at 580 nm is generally used in the calculation of the reduction of metmyoglobin (brown in color) to oxymyoglobin (red in color). The similarities indicated that a darker, redder color is indicative of greater fat smearing. For BF sausage, the correlation of instrumental color measurements and calculations lacked significance to fat smearing (Table 1; p > 0.05). However, a* values and the reflectance percentage at 580 nm were approaching significance (p = 0.078 and p = 0.080, respectively). This trend is similar to the findings in PK sausage that darker and/or redder values correlated to increased fat smearing. The smeared fat particles in the BF sausage may not have altered the color scores appreciably once smeared, owing to the greater proportion of red muscle fibers, and hence myoglobin, in beef versus pork [8]. In addition, beef contains a large amount of saturated fatty acid within the muscle, meaning the fat is more solid or firm than unsaturated fatty acids at similar temperatures [9][10][11], indicating that fat smearing may be less severe in fresh beef sausage.
For P/B sausage, correlations to fat smearing were observed for the reflectance ratios of 630 nm/580 nm (R = 0.817; p = 0.007) and 630 nm − 580 nm (R = 0.760, p = 0.017), with the remaining measurements and calculations lacking significance (p > 0.05). Regressions for these individual calculations determined 630 nm/580 nm to have an R² of 0.667 (Figure 3) and 630 nm − 580 nm to have an R² of 0.578 (Figure 4) in relation to fat smearing. These ratios can be used in fresh meat analysis as measurements of discoloration, as larger ratios indicate more redness due to either oxymyoglobin or deoxymyoglobin [6]. In this way, the findings in P/B sausage follow a similar logic as those found in the PK sausage, that redder colors relate to increased fat smearing. According to Weiss et al. [1], as fat smearing increases, the fat particles are no longer well-defined in the surrounding protein matrix. Thus, as fat smearing increases, the overall color of the sausage should become darker due to the lack of white fat particle definition. The data collected from this analysis show that, in general, color measurements that indicate darker, redder values correspond with increased fat smearing, particularly in PK and P/B sausages.
Further analysis examined the color measurements and calculations in a multiple linear regression, using a selection criterion in SAS of the greatest R² and lowest AIC to determine the best-fit model. The regression equations for each sausage type are displayed in Table 2. P/B had a significant prediction equation (p = 0.006), while BF had an equation approaching significance (p = 0.061), and PK was not significant (p = 0.118). Although some of these equations were significant and could generate R² values extremely close to 1.0, the number of parameters and calculations could be considered convoluted. This would require further analysis to determine a standardized equation; therefore, the individual measurements and/or calculations may offer a more practical application in the research setting to evaluate fat smearing.

Table 2. Analysis of multiple linear regression models for the prediction of fat smearing using instrumental color measurements and calculations with the selection criteria of the greatest adjusted R² and lowest Akaike information criterion (AIC) to determine the best-fit model.
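A hedged sketch of the best-fit model search implied by these selection criteria; the function, data frame and column names are hypothetical placeholders rather than the authors' SAS implementation, and ties are broken in a simplified way.

```python
from itertools import combinations

import pandas as pd
import statsmodels.api as sm

def best_fit_model(df: pd.DataFrame, response: str, predictors: list):
    """Exhaustively fit every predictor subset and return the one ranked best by
    highest adjusted R^2, breaking ties with the lowest AIC (a simplified version
    of the SAS selection criteria described above)."""
    fits = []
    for k in range(1, len(predictors) + 1):
        for subset in combinations(predictors, k):
            X = sm.add_constant(df[list(subset)])
            fit = sm.OLS(df[response], X).fit()
            fits.append((subset, fit))
    best_subset, best_fit = max(fits, key=lambda sf: (sf[1].rsquared_adj, -sf[1].aic))
    return best_subset, best_fit

# Hypothetical usage, e.g. with the data frame sketched in the Statistical Analysis section:
# subset, fit = best_fit_model(df, "smearing", ["L_star", "a_star", "b_star", "refl_580", "refl_630"])
```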
Conclusions
Previous research has reported correlations between numerous instrumental color measurements and visual lean color scores [6]. Thus, utilizing CIE L*, a*, and b* values and/or calculation ratios of these instrumental color values offers the potential to be successful in the detection of fat smearing differences.
Differences in fat smearing predictors were found between the different sausage types. These data indicate that the most effective measurements for determining the degree of fat smearing in PK are L* and the reflectance at 580 nm. There were no measurements that showed a correlation to fat smearing in BF, indicating a need for additional research in this area. Reflectance ratios were found to be significant predictors of fat smearing in P/B sausage. Overall, the use of instrumental color analysis to determine the amount of fat smearing in sausages is an effective method, with the trend throughout all sausage types being that redder, darker color measurements relate to increased fat smearing. However, the use of another methodology for color analysis, such as near-infrared color analysis, would be useful in conjunction with the instrumental use of the Minolta and Hunter Lab devices. | 2023-07-27T15:19:11.603Z | 2023-07-01T00:00:00.000 | {
"year": 2023,
"sha1": "b9cd4e159cab821199e88497aac805368e3a7489",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "d97aa520869cf48a8afa01d2453b05ea80f09f54",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Medicine"
]
} |
1673608 | pes2o/s2orc | v3-fos-license | Sulfur transfer and activation by ubiquitin-like modifier system Uba4•Urm1 link protein urmylation and tRNA thiolation in yeast
Urm1 is a unique dual-function member of the ubiquitin protein family and conserved from yeast to man. It acts both as a protein modifier in ubiquitin-like urmylation and as a sulfur donor for tRNA thiolation, which in concert with the Elongator pathway forms 5-methoxy-carbonyl-methyl-2-thio (mcm5s2) modified wobble uridines (U34) in anticodons. Using Saccharomyces cerevisiae as a model to study a relationship between these two functions, we examined whether cultivation temperature and sulfur supply previously implicated in the tRNA thiolation branch of the URM1 pathway also contribute to proper urmylation. Monitoring Urm1 conjugation, we found urmylation of the peroxiredoxin Ahp1 is suppressed either at elevated cultivation temperatures or under sulfur starvation. In line with this, mutants with sulfur transfer defects that are linked to enzymes (Tum1, Uba4) required for Urm1 activation by thiocarboxylation (Urm1-COSH) were found to maintain drastically reduced levels of Ahp1 urmylation and mcm5s2U34 modification. Moreover, as revealed by site specific mutagenesis, the S-transfer rhodanese domain (RHD) in the E1-like activator (Uba4) crucial for Urm1-COSH formation is critical but not essential for protein urmylation and tRNA thiolation. In sum, sulfur supply, transfer and activation chemically link protein urmylation and tRNA thiolation. These are features that distinguish the ubiquitin-like modifier system Uba4•Urm1 from canonical ubiquitin family members and will help elucidate whether, in addition to their mechanistic links, the protein and tRNA modification branches of the URM1 pathway may also relate in function to one another.
As for its urmylation role, data obtained under steady-state conditions suggest the major pool of non-conjugated Urm1 is in its thiocarboxylate form [6,37]. So Urm1-COSH formation by Uba4 (Fig. 1) per se seems not to be sufficient for conjugation. However, when exposed to the thiol oxidizer diamide, Urm1-COSH generated in vitro becomes engageable in urmylation [6]. Together with evidence that reactive oxygen species (ROS) and diamide induce urmylation in yeast and human cells, the S-carrier and protein modifier functions of Urm1 were proposed to be coupled to each other, linking both to oxidative stress [4,6,11,37]. In support of this is the evidence showing that ROS-detoxifying peroxiredoxins are urmylated in yeast (Ahp1) and fruit flies (Prx5) [4,6,11,17].
Because of its dual-functionality, Urm1 was coined a ubiquitin-like fossil at the crossroad of S-transfer and protein conjugation [8], thus deviating from canonical ubiquitination, which is not known to depend on sulfur supply, S-transferases or E1 enzymes with RHD domains. To better understand the functional diversification of an ancestral S-carrier into today's members of the ubiquitin family, we therefore studied whether the Urm1 dual-functions may be interlinked by comparing both tRNA thiolation and urmylation under URM1 pathway-inactivating conditions [36,38].
Here we show that, similar to heat-induced tRNA thiolation defects, Ahp1 urmylation in yeast is suppressed at 39°C. Moreover, as is the case with tRNA thiolation, Ahp1 urmylation is highly responsive to sulfur availability and requires the S-relay system that is dedicated to proper tRNA thiolation (via Urm1-COSH formation). In line with this, Urm1 functions in tRNA thiolation and urmylation depend on the rhodanese-type S-transfer region (RHD) in Uba4, which is crucial for Urm1-COSH formation. In sum, the two URM1 pathway branches, tRNA thiolation and protein urmylation, are chemically linked through sulfur supply, transfer and activation by the ubiquitin-like modifier system Uba4•Urm1.

FIGURE 1: Sulfur flow within the URM1 pathway. The scheme depicts sulfur (red) flow and URM1 pathway players required for mobilization (Nfs1), transfer (Tum1), activation (Uba4, Urm1) or consumption (Ncs2•Ncs6) of sulfur. The E1-like activator Uba4 (green) is key to Urm1 thiocarboxylation. Urm1-COSH formed this way donates sulfur to the tRNA thiolation branch, which cooperates with the Elongator pathway to form 5-methoxy-carbonyl-methyl-2-thio-uridine at wobble positions (mcm5s2U34) of the indicated tRNA anticodons. Possibly (?) Urm1-COSH also feeds into the urmylation branch of the URM1 pathway. As for the latter, E2/E3 enzymes are elusive (?) and the relevance of urmylation for target protein function is ill-defined (?). The model is updated from work in the labs of Hayashi [7] and Suzuki [9].
Protein urmylation and tRNA thiolation are both thermosensitive
Loss of tRNA thiolation causes heat-sensitive growth in URM1 pathway mutants [7][8][9] and recently, URM1 pathway inactivation at 37°C or 39°C was shown to trigger tRNA thiolation defects sufficient for growth inhibition [36,[39][40][41]. Hence, we studied the ability of Urm1 to engage in urmylation at temperatures restrictive for tRNA thiolation. To do so, we used a yeast strain that expresses a TAP-tagged Urm1 fusion (~35 kDa) previously shown to conjugate to proteins such as Ahp1 and Uba4 [11]. TAP-URM1 cells were grown to logarithmic growth phase at 30°C and split into two cultures. One was kept at 30°C, the other shifted to 39°C, and both were cultivated for three hours prior to protein urmylation analysis. Using electrophoretic mobility shift assays (EMSA) based on anti-TAP Western blots [11], we detected at 30°C non-conjugated TAP-Urm1 (~35 kDa) and a prominent up-shifted (~55 kDa) TAP signal (Fig. 2A). We confirmed this is an Ahp1•TAP-Urm1 conjugate [11] by showing that it did not form in ahp1∆ mutants (Fig. 2A) and that it was further up-shifted when tagged in AHP1-c-myc cells (Fig. S1). At 39°C, however, formation of Ahp1•TAP-Urm1 conjugates gradually declined over time and was almost absent after three hours (Fig. 2A). Similarly, but less pronounced, the abundance of free TAP-Urm1 decreased over time, which contrasts with stable forms of unconjugated Ahp1 at 39°C (Fig. 2A). Our data thus indicate that rather than correlating with an unstable target, loss of Ahp1 urmylation at 39°C is likely due to a fragile Urm1 modifier itself.
tRNA thiolation and translation defects in URM1 pathway mutants cause phenotypes that can be rescued by overexpressing tRNAs normally undergoing mcm5s2U34 modifications [8,15,25,42,43]. When higher-than-normal levels of these tRNAs, i.e. tRNA-Gln, tRNA-Lys and tRNA-Glu [tQKE], were produced from a multi-copy plasmid, they failed to suppress either the Ahp1 urmylation defects or the low Urm1 abundance at 39°C (Fig. 2B). This suggests it is not a translational defect suppressible by tRNAs which underlies heat-sensitive urmylation. In support of this, we observed that translation inhibition by cycloheximide had no effect on Urm1 levels (Fig. S2). With previous data showing proteins required for S-transfer (Tum1) and Urm1 activation (Uba4) are unstable, too [36,38,39], heat-sensitivity of the URM1 pathway may thus be multifactorial. In sum, Urm1 instability alone or combined with S-transfer defects at 39°C appears to inactivate urmylation and, as previously shown, tRNA thiolation.
Sulfur supply and activation link tRNA thiolation and urmylation
Under conditions of methionine starvation, sulfur-consuming pathways including mcm5s2U34 modifications, which require Urm1, Elongator and S-adenosyl-L-methionine, have been shown to dramatically decline in yeast [38]. This reinforces that sulfur supply and activation in the form of Urm1-COSH is critical for tRNA thiolation [7][8][9]. Hence, we compared sulfur dependency between the two URM1 pathway branches, i.e. protein urmylation and tRNA thiolation, by examining the effects of starvation for the sulfur amino acid methionine (Met). TAP-URM1 cells were shifted from Met-containing to Met-free media, and urmylation was analyzed by EMSA. In the presence of Met, we detected free TAP-Urm1 and the prominent Ahp1 conjugate (~55 kDa) (Fig. 3A). Interestingly, while free TAP-Urm1 and non-conjugated Ahp1 remained stable irrespective of sulfur supply, urmylation of Ahp1 by TAP-Urm1 dramatically declined during S-starvation (Fig. 3A). Thus sulfur depletion specifically suppresses Urm1 conjugation, indicating that, by analogy with tRNA thiolation, protein urmylation is also sensitive to sulfur supply.

FIGURE 2: Overexpression of tRNAs subject to Urm1-dependent U34 thiolation fails to suppress loss of Ahp1 urmylation at 39°C. (A) Urmylation of Ahp1 is suppressed at 39°C. An urm1∆ strain expressing TAP-URM1 was grown to logarithmic growth phase and split into two cultures prior to cultivation at 30°C or 39°C for three hours (h). Ahp1 urmylation analysis involved anti-TAP-based EMSA and immune blots with anti-Ahp1 to detect non-conjugated (free) Ahp1. Protein loading was controlled with anti-Cdc19 antibodies. (B) tRNA-Gln, tRNA-Lys and tRNA-Glu (tQKE) overexpression cannot rescue defective Ahp1 urmylation at 39°C. Except for tQKE overexpression from multi-copy vector pQKE (Table S3), cell growth at 30°C or 39°C and urmylation analysis were as described in (A). Arrows (A, B) indicate the positions of non-conjugated (free) forms of TAP-Urm1 and Ahp1, as well as TAP-Urm1 conjugated to Ahp1 and the loading control Cdc19.
EMSA exposed for longer times (Fig. S3) showed urmylation of proteins other than Ahp1 is also affected in tum1∆ cells stressing the importance of S-transfer for Urm1 conjugation. Recently, yeast Uba4 and its human homolog (hUBA4/MOCS3) have been reported to undergo urmylation themselves [6,11], so we studied Uba4 urmylation in a tum1∆ strain that co-expressed c-Myc-tagged Uba4 and HA-marked Urm1. EMSA based on immune blots with anti-HA-and anti-c-Myc-antibodies allowed detection of Uba4•Urm1 conjugates (~130 kDa doublet band) (Fig. S4). Intriguingly, Uba4 urmylation was not sensitive to TUM1 deletion (Fig. S4) and disruption of NCS2, NCS6 or AHP1 also had no effect on Urm1 conjugation to Uba4. To sum up, our data show that protein urmylation (except for Uba4) depends on sulfur supply and on the sulfur relay system (Nfs1, Tum1, Uba4) that contributes to the S-donor role for Urm1 in tRNA thiolation. Hence, S-transfer and Urm1 thiocarboxylation apparently link tRNA thiolation and urmylation to each other.
tRNA thiolation and urmylation are linked by catalytic Cys residues in Uba4
Uba4 maintains two domains (MoeBD; RHD) with catalytic cysteines (C225; C397) (Fig. 4A). This organisation is peculiar since no E1 or other E1-like enzyme carries rhodanese-like domains (RHD) typical of S-transfer proteins such as Tum1 [21,45]. Since sulfur transfer to Uba4 via Tum1 is critical for Urm1 activation, we revisited C225 and C397 and examined their contributions to tRNA thiolation and urmylation. Previously, loss-of-function phenotypes were ascribed to cysteine-ablative alanine (C225A or C397A) mutations and some of these were reported to lead to unstable proteins [3,6,8,9,18].
To prove these mcm 5 s 2 U34 defects are phenotypically relevant, we introduced the Cys substitutions into uba4∆ reporter cells with no functional Elongator (elp3∆) or Deg1 (deg1∆). In addition to loss of U34 thiolation, both reporter strains lack other tRNA modifications: Elp3 (as part of Elongator) cooperates with Uba4•Urm1 in mcm 5 s 2 U34 formation and Deg1 is necessary for pseudouridine synthesis at tRNA positions 38 and 39 [46,47]. The combined tRNA modifications defects in the two reporter strains result in thermosensitive growth that can be rescued by tRNA overexpression implying the phenotype is due to improper tRNA function [41,43,44]. Hence, suppression of thermosensitivity by Uba4 rescues the consequences of the tRNA thiolation defects (Fig. S5). Using this assay diagnostic for tRNA thiolation, we found that while C225S and C397S alone partially rescued thermosensitive growth of the elp3∆uba4∆ and deg1∆uba4∆ reporter strains, the double mutant C225S/C397S only weakly suppressed deg1∆uba4∆ cells (Fig. S5). So, C225S, C397S and C225S/C397S progressively reduce the capacity of Uba4 to restore growth and tRNA thiolation, a notion congruent with their differential U34 modification profiles (Fig. 4B).
Using EMSA we next compared urmylation in the Cys substitution mutants to wild-type and found the single ones (C225S, C397S) were clearly reduced in TAP-Urm1•Ahp1 conjugate formation (Fig. 4C). Again, in the C225S/C397S double mutant, defective urmylation was even further aggravated compared to each mutant alone but, importantly, not entirely abolished (Fig. 4C). Together, these urmylation assays go hand-in-hand with the tRNA thiolation profiles (Fig. 4B) and indicate that the Cys residues in the MoeBD (C225) and RHD (C397) regions of Uba4 overlap in function and collectively, contribute to full Uba4 functionality for proper tRNA thiolation and urmylation.
The rhodanese domain (RHD) in Uba4 links tRNA thiolation and urmylation
Having shown above that S-transfer is critical for Urm1 activation by Uba4 and that thiol-active cysteines are important to do so, we next focused on the role of the RHD, a rhodanese-like region in Uba4 with sulfur acceptor activity [18,23]. Upon RHD removal from Uba4 (Fig. 5A), we examined the ability of the resulting truncation (Uba4 1-328) to mediate tRNA thiolation and urmylation in uba4∆ cells. LC-MS/MS analysis showed that mcm5s2U34 modifications in tRNA anticodons from UBA4 1-328 cells dramatically declined down to ~4% of UBA4 wild-type levels (Fig. 5B). Markedly, these very low residual thiolation levels still partially suppressed the thermosensitive growth of the elp3∆uba4∆ and deg1∆uba4∆ reporter strains (Fig. S6). Thus, even without an RHD, Uba4 1-328 apparently engages in residual Urm1 activation sufficient for low-level tRNA thiolation. Indeed, this notion also correlates with EMSA showing that urmylation, including Urm1 conjugation to Ahp1, was drastically affected in UBA4 1-328 cells, with low levels of Ahp1•TAP-Urm1 conjugates detectable after longer exposure times (Fig. 5C). Thus, our results identify the RHD region on Uba4 as the main contributor to Urm1 dual functions in tRNA thiolation and urmylation.

When we tried to rescue UBA4 1-328 cells by coexpressing UBA4 329-440 (encoding the RHD alone), we were not able to improve tRNA thiolation, based on its failure to rescue deg1Δuba4Δ cell growth and suppress Urm1 conjugation defects (Fig. S7). Together with our observation that UBA4 329-440 alone could not rescue defective tRNA thiolation or urmylation (Fig. S7) in uba4∆ cells, our data suggest that in order to enable Urm1 activation and thiocarboxylation, both Uba4 domains (MoeBD & RHD) need to be maintained in close proximity or on the same polypeptide rather than being provided separately. In sum, our results show that although catalytically critical for Uba4, the RHD on its own is non-functional. Probably, this explains why the MoeBD alone in Uba4 1-328 has residual E1-like activity sufficient for very low-level tRNA thiolation and urmylation.

FIGURE 4 (B, C): Cys225 and/or Cys397 substitution mutants significantly interfere (red arrows) with formation of the mcm5s2U34 modification (B); Ahp1 urmylation is strongly affected (red arrows) by the C225S, C397S and C225S/C397S mutants (C). Urmylation assays (see Fig. 2) involved anti-TAP-based EMSA to detect non-conjugated (free) TAP-Urm1 and TAP-Urm1•Ahp1 conjugates. Protein loading was controlled with anti-Cdc19 antibodies.
Tum1 feeds into tRNA thiolation and urmylation through the RHD in Uba4
Our data above indicate tRNA thiolation and urmylation defects in tum1∆ cells correlate with reduced S-transfer to Uba4 and improper Urm1 thiocarboxylation. Hence, we asked whether loss of TUM1 would affect low-level tRNA thiolation and urmylation observed in the absence of the RHD (Uba4 1-328 ). As revealed by LC-MS/MS, UBA4 1-328 cells still allowed for residual tRNA thiolation in the absence of Tum1 (Fig. 5B). Importantly, the remaining low mcm 5 s 2 U34 levels (~4% in relation to UBA4 wild-type) were similar, if not identical in both TUM1 and tum1∆ cells (Fig. 5B). To support this result, which points towards residual tRNA thiolation independent of TUM1, we investigated whether an additional tum1∆ null-allele affects the partial suppression of elp3∆uba4∆ or deg1∆uba4∆ growth by UBA4 (Fig. S6). If Tum1 accounted for residual activity of Uba4 1-328 , partial suppression may be abolished by TUM1 gene deletion. However, the phenotypic assays clearly demonstrate the opposite: a tum1∆ null-allele has no effect on partial suppression of elp3∆uba4∆ or deg1∆uba4∆ cell growth by UBA4 (Fig. S6). Similarly, weak protein urmylation typical of UBA4 1-328 cells was not altered in tandem with tum1∆ (Fig. 5D) indicating no additive effect of reduced S-transfer onto residual performance of Uba4 1-328 . This shows low residual levels of tRNA thiolation and urmylation in the absence of RHD are insensitive to Tum1, suggesting that Tum1-dependent S-transfer for proper function of Uba4 operates mainly, if not entirely, through its RHD region.
DISCUSSION
Compared to ubiquitin and ubiquitin-like proteins, Urm1 activation differs by thiocarboxylation [18,19]. Urm1-COSH generated this way provides sulfur for tRNA thiolation (Fig. 1) and possibly protein urmylation [6,18,23]. Such shared need for Urm1-COSH suggests both Urm1 functions are linked. Hence, we reasoned that conditions inactivating tRNA thiolation [36,[38][39][40][41] may also impact on urmylation. We show herein that cells grown at 39°C or starved for sulfur drastically suppress urmylation. At 39°C, loss of urmylation correlated with decreased levels of Urm1 itself, while S-starvation had no such effect (Fig. 2). This goes along with other URM1 pathway proteins (Tum1, Uba4, Ncs2, Ncs6) also reported to be unstable at higher temperatures [36,39]. While instable thiolase (Ncs2•Ncs6) affects tRNA thiolation, lowering Tum1 and Uba4 activities will interfere with S-transfer and Urm1 activation. So, suppression of urmylation at 39°C may be due to fragile Urm1 alone or combined with improper Urm1 activation [18,19]. URM1 pathway inactivation in S-starved cells (Fig. 3) is readily explained by the need of sulfur for Urm1-COSH formation [6,18,23]. In sum, suppression of tRNA thiolation and urmylation at 39°C or upon S-depletion likely involves inappropriate Urm1 thiocarboxylation. Down-regulation of tRNA thiolation was suggested to reduce translational competence and cell growth under sulfur limiting conditions which may be important to spare sulfur for other physiologically important processes [38]. Since we show Urm1 conjugation depends on Urm1-COSH and hence qualifies itself as a sulfur consuming process, loss of urmylation upon S-depletion could help spare sulfur as well. Also, we cannot exclude that a decrease in urmylation affects the activity of protein(s) that Urm1 attaches to.
Our findings that it is the activated sulfur in Urm1-COSH, which links tRNA thiolation and urmylation, may provide support for a previous model [37] that sees urmylation as a means to restrict -rather than spare (see above) -sulfur flow by reducing the pool of free Urm1 available for S-transfer and tRNA thiolation. Consistent with this, Elongator mutants with URM1 pathway related mcm 5 s 2 U34 modification defects (Fig. 1) allow urmylation to occur [5] and our data show thiolase mutants (ncs2∆, ncs6∆) maintain wild-type levels of Ahp1 urmylation (Fig. 3).
To check if this relationship is reciprocal, we asked whether tRNA thiolation is affected by loss of Ahp1 urmylation and profiled yeast growth inhibition by a fungal tRNase (zymocin) (Fig. 6A, B) that requires mcm5s2-modified U34 for lethal anticodon cleavage [28,51,52]. While strains with defects in S-transfer (tum1Δ) and tRNA thiolation (urm1Δ, ncs6Δ) protected against zymocin (Fig. 6C, D), loss of Ahp1 urmylation (ahp1∆) could not. This indicates that lethal anticodon cleavage due to proper tRNA thiolation occurs in the absence of Ahp1 urmylation. Together with our findings above, that tRNA thiolation is not required for Ahp1 urmylation, the two URM1 pathway branches (albeit mechanistically linked through sulfur activation) seem to be functionally separated from (rather than dependent on) each other. So to our minds, the above scenario where sulfur flow for tRNA thiolation is kept in check by Ahp1 urmylation seems unlikely. We cannot, however, exclude targets other than Ahp1 whose urmylation still may affect tRNA thiolation. In this context, it is noteworthy that hURM1 and Urm1-like proteins (SAMP, TtuB) form conjugates with human and prokaryal orthologs of yeast thiolase (CTU2, ATPBD3, NcsA, TtuA) [6,12,13]. Whether this implies that S-transfer (via Urm1-COSH) for tRNA thiolation involves urmylation of components of the thiolase is unknown but may be an attractive twist to the above topic as it suggests the option of interdependence among the two URM1 pathway branches. Although the S-donor role for tRNA thiolation has been demonstrated in vitro using human URM1 pathway players including hURM1 and CTU2 [23], we are not aware of sulfur transfer during lysine-directed urmylation to targets that Urm1 attaches to in yeast or other organisms.

FIGURE 6: (A) The wobble uridine modifications thiouridine (s2U34) and 5-methoxycarbonylmethyl-uridine (mcm5U34); for simplicity, 'R' denotes ribose moieties. U34 thiolation (solid circle) requires S-transfer via Tum1, Urm1•Uba4 and the thiolase Ncs2•Ncs6; mcm5 side-chain (dotted circle) formation depends on Elongator [24,28]. (B) The mcm5s2U34 modification (asterisk) in tRNA-Glu(U*UC) is efficiently cleaved by zymocin, a fungal tRNase lethal to S. cerevisiae cells (see C) [28,56,57]. (C, D) U34 modification defects (elp3Δ, tum1Δ, urm1Δ, ncs6Δ) protect against zymocin, and loss of Ahp1 urmylation (ahp1Δ) confers wild-type (wt)-like sensitivity. Growth tests involved killer eclipse assays using the K. lactis zymocin producer and the indicated S. cerevisiae tester strains (see C) or toxin plate assays with ten-fold serial dilutions of the indicated tester strains in the absence (left panel) or presence (other panels) of different doses of zymocin purified from K. lactis (see D). 'S' and 'R' indicate toxin-sensitive and -resistant responses, respectively.
In support of our findings that tRNA thiolation and urmylation depend on sulfur (Fig. 3), both URM1 pathway branches are significantly impaired in the absence of sulfur transferase Tum1. Interestingly, Tum1 seems not to contribute to urmylation of Uba4 suggesting this conjugation does not rely on S-transfer via Tum1 for Urm1-COSH formation (Fig. S4). Instead, an alternative mechanism, more similar to E1-like activation of ubiquitin-like modifiers, may be envisaged involving thioester bond formation prior to lysine-directed urmylation [21]. Remarkably, SUMO, a ubiquitin-like modifier, SUMOylates the Uba2 subunit of its own E1 complex at Lys residue 236 and without any E2 assistance [53]. Since E2/E3 urmylation enzymes are elusive, Uba4•Urm1 auto-urmylation comparable to Uba2•SUMO auto-SUMOylation may be plausible. Indeed, using site-directed UBA4 mutagenesis, we identified Lys candidate residues (K122, K248) that differentially contribute to Uba4 or Ahp1 urmylation (Fig. S8).
In contrast to previous reports [3,9,18], we demonstrate here that Cys residues in the MoeBD (C225) and RHD (C397) domains of Uba4 are catalytically important for formation of Urm1-COSH yet not essential (Fig. 4). This is based on our findings that mimetic Ser substitutions alone or combined (C225S, C397S, C225S/C397S) cannot entirely abolish Uba4 functions but progressively reduce tRNA thiolation and urmylation to levels (14-25%) sufficient enough to form significant amounts of Urm1 conjugates and s 2 U34modifed tRNA anticodons (Fig. 4). While it was shown that C397 can be persulfurated, which results in acyl-disulfide bond formation with Urm1 and subsequently in the release of Urm1-COSH [8,9,18,23], the role of C225 is unclear. C225 is analogous to active-site Cys residues in ubiquitinlike E1 enzymes but we are not aware it forms a thioester bond with Urm1 in vivo. Moreover, it is not essential for in vitro adenylation or thiocarboxylation of human URM1 by MOCS3/hUBA4 [18]. Hence, a potential role for C225 in the reductive cleavage of the acyl-disulfide bond between MOCS3 and hURM1 was proposed [18]. Our data showing residual sulfur transfer with C225S alone or combined with C397S (Fig. 4) may indicate the presence of another Cys residue capable of reductive cleavage and Urm1-COSH release. An alternative cysteine present in Uba4 or provided by yet another protein might also explain why the C397S mutant allowed, albeit at significantly reduced efficiencies, S-transfer for tRNA thiolation and urmylation (Fig. 4). Residual low-level tRNA thiolation and urmylation are even more compromised with Uba4 1-328 , the truncation lacking the RHD (including C397) and importantly, uncoupled from Tum1 (Fig. 5). This suggests an alternative sulfur mode of transfer which either involves a Cys residue other than C225 in the MoeBD motif of Uba4 or an S-donor distinct from Tum1 (Fig. S9). The latter (if existent) may be identified among candidates with assigned or cryptic RHDs of the rhodanese protein family [54,55].
Yeast strains, general methods and plasmid constructions
Growth of yeast strains (Table S1) was in routine YPD or SC media [56] for 3 days, and thermosensitivity was assayed on YPD media at 34°C, 35°C or 39°C. Table S2 lists primers used for PCR-based protocols [57,58] to introduce site-specific UBA4 mutations or generate and diagnose gene deletions. Uba4 cysteine-to-serine substitutions C225S, C397S and C225S/C397S carried on plasmids pAJ64, pAJ65 and pAJ69, respectively (Table S3), were generated by PCR-based site-directed mutagenesis using a previously described protocol [59]. Correctness of each mutation was confirmed by Sanger-based DNA sequencing. In analogy, site-directed UBA4 mutagenesis of lysine residues (K122, K132, K156, K248) in the MoeBD of Uba4 resulted in arginine substitutions (Table S2). For UBA4 1-328 expression in yeast, the ORF coding for the N-terminal MoeB-like domain (MoeBD) of Uba4 alone was PCR-amplified from template plasmid pAJ16 [11]. Using flanking NotI and NdeI restriction sites, the UBA4 1-328 construct was then cloned into vector pAJ16, resulting in generation of pAJ82 (Table S3). Construction of plasmid pAJ113 (Table S3) for expression of the rhodanese-type domain (RHD) of Uba4 alone started with PCR-amplification of the UBA4 329-440 -T CYC1 fragment from template plasmid pAJ16 [11]. This fragment was subcloned into NdeI- and SacI-digested plasmid pAJ16, giving rise to a UBA4 329-440 -T CYC1 fragment, which was finally moved to yeast single copy vector pRS423 [60] restricted with BamHI and SacI. Transformation of yeast cells with PCR products or plasmids (Table S3) was done as previously described [61]. Qualitative assays to monitor sensitivity or resistance of S. cerevisiae cells to growth inhibition by the zymocin tRNase toxin complex involved previously described killer eclipse bioassays [33]. In more sensitive assays, growth performance of ten-fold serial dilutions of S. cerevisiae tester strains was monitored for 2-3 days at 30°C on YPD plates containing no toxin or 40-50% (v/v) partially purified zymocin [33]. Both assays used K. lactis killer strain AWJ137 (Table S1) as zymocin producer.
tRNA modification profiling
Total tRNA was isolated from yeast cultures and subjected to LC-MS/MS for tRNA anticodon modification analysis essentially as previously described [26,35,62]. Identification of mcm5U34 or mcm5s2U34 peaks was according to Jüdes et al. [11]. For inter-sample comparability of the detected modifications, the peak areas of the modified nucleosides, measured in triplicate, were normalized to the UV peak area of uridine.
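A minimal sketch of the normalization step described above, using hypothetical peak areas; the actual LC-MS/MS data processing followed the cited protocols rather than this code.

```python
# Hypothetical triplicate peak areas for one modified nucleoside (e.g. mcm5s2U34)
# and the corresponding UV peak areas of unmodified uridine from the same runs.
mod_peak_areas   = [1.8e6, 1.9e6, 1.7e6]
uridine_uv_areas = [4.2e7, 4.4e7, 4.1e7]

# Normalize each modified-nucleoside peak to the uridine UV peak of its run,
# then average the triplicates for inter-sample comparison.
normalized = [m / u for m, u in zip(mod_peak_areas, uridine_uv_areas)]
mean_level = sum(normalized) / len(normalized)
print(f"normalized modification level: {mean_level:.4f} (arbitrary units)")
```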
Urmylation studies using electrophoretic mobility shift assays (EMSA)
Urmylation studies were done essentially as described [11] with yeast grown in standard SC media at 30°C to an OD 600 of ~1.0 [56]. To analyze whether Urm1 conjugation is affected by elevated temperatures, logarithmically growing yeast cells were shifted from 30°C to 39°C and sampled after 1-3 hours of incubation at 39°C. To monitor sulfur dependency of urmylation, methionine auxotrophic (met15Δ) cells in the background of BY4741 (Table S1) were pregrown to logarithmic growth phase in standard SC minimal media [56] containing the sulfur amino acid methionine (2 mg/ml), washed and further suspended in SC media without methionine as a sulfur source. Finally, cells were harvested after 1 and 2 hours of additional cultivation and broken open with a bead beater and lysed in a buffer (10 mM K-HEPES pH 7.0, 10 mM KCl, 1.5 mM MgCl 2 , 0.5 mM PMSF, 2 mM benzamidine) containing complete protease inhibitors (Roche) and 10 mM N-ethylmaleimide (NEM) as previously described [6,11]. EMSA and Western blot analyses used PVDF membranes essentially as described [11]. Detection of unconjugated Ahp1 in NEM-free samples used anti-Ahp1 serum [63] provided by Dr Kuge (Tohoku Pharmaceutical University, Japan). Protein loading was checked using anti-Cdc19 antibodies donated by Dr Thorner (University of California, USA). | 2018-04-03T06:14:42.391Z | 2016-10-24T00:00:00.000 | {
"year": 2016,
"sha1": "ddf5006b121c5e3740c78ceb7b40559767afe906",
"oa_license": "CCBY",
"oa_url": "http://microbialcell.com/wordpress/wp-content/uploads/2016/10/2016A-Juedes-Microbial-Cell.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "557884770d183722852d3e267089b2820ff9ad1f",
"s2fieldsofstudy": [
"Biology",
"Chemistry"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
83590335 | pes2o/s2orc | v3-fos-license | Interplay between music, emotion and cognitive function in health and disease
Music is one of the oldest and most powerful means to afford communication and convey emotion. Here I review recent research on the inter-relations between music, emotion and cognitive function both in healthy individuals and neurological patients with brain damage. This topic is timely given the increasing amount of evidence on the cognitive enhancing effects triggered by music exposure. The focus of this paper will be on how the brain can be stimulated by music and how music, through the emotional reaction, can in turn modify brain function. There are clear associations between music listening, musical training and general cognitive performance. However, future studies are needed to enhance understanding of the precise nature of the cognitive and neural mechanisms by which music influences cognition.
Music processing stems from the initial coding of sound waves in the ears, followed by tonotopic sound processing in the cochlear system and subsequent processing by a network of subcortical structures including medial geniculate thalamic nuclei before reaching the primary auditory cortex. Music perception is a complex process that involves the coordination of different brain regions. There are interactions between auditory cortical systems coding sound pitch and pitch relations across time, pre-motor and motor areas for tempo and rhythm 1 and pre-frontal cortex for tonality 2 (reviewed in ref. 3). The specific feature relations between the different musical components such as pitch, tempo and tonality may determine the emotional component of the music; 4 however, since music preference is highly idiosyncratic, the emotional power of a given musical piece will be greatly influenced by the individual's musical tastes and preferences.
Emotional responses to music have been categorized according to valence and arousal dimensions. 5 Pleasant music listening activates brain areas involved in emotion and reward such as the dorsal and orbital parts of the prefrontal cortex 6-8 and triggers enhanced neuronal responses and increased connectivity in mesolimbic brain areas (i.e., nucleus accumbens and ventral tegmental area) 9 that also respond to rewards. The fact that these emotional and reward-related regions are part of a dopaminergic system 10 points to the suggestion that pleasurable music listening may lead to dopamine neurotransmitter release, although to the best of my knowledge this has not been tested directly and awaits future confirmation.
The effects of music listening on arousal have been demonstrated by recording the heart rate and the galvanic skin responses. Overall, the picture emerging from these studies suggests that music selected by the individual may be a powerful way of enhancing autonomic arousal (i.e., increasing the strength of the galvanic skin response 11 ). Sad music, in contrast with happy music, can reduce the level of arousal (i.e., decreasing skin conductance and slowing the heart rate 12 ). To the extent that arousal can facilitate the speed of our reactions to relevant behavioral targets and improve the orienting of attention, 13-16 music-induced alertness can be a powerful tool to modulate general cognitive performance. It is likely, however, that music effects on alertness are modulated by individual preferences for particular musical genres.
Music Influences Cognition in Health
This topic has not received much investigation to date; however, there is evidence that music listening can enhance several aspects of cognitive processing such as attention and creativity. 17-20 A well-known effect of music listening on cognitive performance is the so-called 'Mozart effect'. This is an improvement in spatial reasoning skills when participants are exposed to Mozart relative to other control conditions. 17 Further research is consonant with the suggestion that the 'Mozart effect' may be due to the emotional reaction induced by the interaction between music exposure and the observer's musical preferences. 19 In this regard, it should be noted there is a clear link between emotional state and general cognitive functioning. Positive affect can lead to a more flexible and creative way of approaching problem solving, 20,21 it can improve the scope of memory recall in word association tasks 22,23 and it can enhance the scope of visual spatial attention processes as well as improve the selection of visual targets across time. [24][25][26] From this it follows that pleasant music listening, through general positive affect induction, ought to trigger a similar facilitation of cognitive processing.
There is evidence for correlations between musical training and other human skills such as maths 27 and verbal abilities such as memory for words and reading processes. 28,29 Musical training also appears to be associated with more rapid linguistic development in healthy children and with improved spelling skills in children with dyslexia. 30 It is difficult, however, to establish a causal role of musical training from these investigations alone, since these associations between musical training and cognitive performance may arise due to other factors. For example, individuals that undergo musical training may possess generally enhanced cognitive capacities to start with. Also, musical training may lead to an improvement of general cognitive capacities related to attention and memory function, which can in turn influence performance on a wide range of cognitive tasks. 31 In fact, musical training may lead to brain plasticity changes 32-34 in a wide range of brain networks related to skilled motor processing, auditory and verbal processing, memory and attention, which in turn may transfer to benefit performance in many cognitive domains. It is also possible that individuals that undergo musical training benefit from the emotion-enhancing influence of the music experience and, through this impact on emotion, general cognitive processing can be enhanced.
Music-Based Restoration of Cognition in Disease
Nowadays, music is being used to improve brain function after brain insult in a wide range of neurological patient populations. The benefits of music-based therapy on cognitive recovery span emotion, attention, memory and motor processes. There are even data suggesting that joint musical and kinetic stimulation can help to improve the clinical condition in cases of vegetative states after brain injury. 35 Motor training paired with exposure to auditory rhythms appears to be an effective means of activating the motor system in stroke survivors, and this can lead to improvements in motor function in the paretic arm. 36,37 There is also interesting evidence that music-based training (i.e., learning to play a musical instrument) can be a powerful way of improving recovery of motor skills after stroke. 38 Auditory-motor feedback through music exposure can also lead to motor task improvements in the precision of arm and finger movements in patients with Parkinson disease. 39 In the memory and verbal domains, there is evidence that music exposure can influence verbal and autobiographical recall in patients with Alzheimer disease 40,41 and dementia. 42 There are also case reports of patients with aphasia that show improved speech when the patients sing familiar music lyrics relative to when the patients merely speak excerpts of familiar lyrics. 43 Thus, the musical component related to 'singing' the lyrics influenced speech production in these patients. In line with this, there is evidence that music therapy of speech based on melodic intonation (i.e., the incorporation of musical components such as melody and rhythm in the speech produced by the patient) can be effective in rehabilitating speech in aphasic patients. 44 Functional neuroimaging of the brain provided evidence of reactivation of Broca's area and the left prefrontal cortex when patients repeated words with melodic intonation relative to production without melodic intonation.
Two recent studies demonstrated the power of music to enhance awareness in stroke patients. 45,46 There is evidence that one hour of music listening a day over a period of two months can lead to higher cognitive recovery in a general stroke population compared to patient groups on standard therapy care or other auditory-stimulation control conditions. Significant improvements through music listening can be observed in verbal memory and the control of attentional focusing. Moreover, music listening is also associated with significant mood improvements in the post-stroke stage. 45 There is also striking evidence of pleasant music effects on the degree of awareness of chronic stroke patients who suffer from visual neglect. 46 Visual neglect is a debilitating condition that follows brain lesions usually in the right hemisphere, where patients appear unaware of visual stimuli presented on the side of space contralateral to the brain lesion, despite having intact perceptual pathways. Interestingly, the visual neglect syndrome can be overcome by having patients listen to music they find pleasant. A recent study 46 showed that awareness of neglect patients for stimuli in their impaired visual field can be markedly improved when neglect patients listen to pleasant music relative to silence or unpreferred music. This recovery induced by pleasant music correlated with enhanced functional activation in emotional regions of the orbitofrontal cortex and attentional brain regions in spared areas of the parietal cortex and early visual regions. These findings are consistent with the suggestion that music influences visual cognition through positive affect.
The specific neural mechanism of the music effect remains to be established. Musically induced arousal and positive mood are likely moderators of the influence of music on cognition. Music may lead to enhanced or optimal neurotransmitter release, either by activating noradrenergic transmission, which is critical for alertness and attention, 47 or by supporting dopaminergic activity in fronto-striatal networks 10 that support working memory processing. 48 Neurotransmitter release induced by music may boost the transmission of neural signals and the cognitive resources available for cognitive processing. It is too early, however, to describe precisely the nature of the neural mechanisms by which music influences cognition. Pleasant music listening and musical training may engage neuroplastic mechanisms both in the healthy and the injured brain. However, the nature of the mechanisms supporting the interplay between music exposure, neuroplasticity and cognitive enhancement remains to be established.
Music is universally enjoyed by cultures across the world and it is a rather rich source of sensory stimulation. The use of music for the treatment of cognitive disorders after brain insult is a benign, simple and inexpensive way of influencing brain function compared to invasive treatments and pharmacological interventions. This resource should be exploited to its full potential. | 2019-03-20T13:14:07.038Z | 2009-11-01T00:00:00.000 | {
"year": 2009,
"sha1": "ad019eb4013a5e7200ad7cf8f7a97d07ac93436c",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.4161/cib.2.6.9656",
"oa_status": "GOLD",
"pdf_src": "TaylorAndFrancis",
"pdf_hash": "96a6cbd60d320b0bf2763229ac06752f4346043c",
"s2fieldsofstudy": [
"Psychology",
"Medicine"
],
"extfieldsofstudy": [
"Biology"
]
} |
246636579 | pes2o/s2orc | v3-fos-license | Australian podiatrists scheduled medicine prescribing practices and barriers and facilitators to endorsement: a cross-sectional survey
Background Non-medical prescribing is one healthcare reform strategy that has the potential to create health system savings and offer equitable and timely access to scheduled medicines. Podiatrists are well positioned to create health system efficiencies through prescribing; however, only a small proportion of Australian podiatrists are endorsed to prescribe scheduled medicines. Since scheduled medicines prescribed by Australian podiatrists are not subsidised by the Government, there is a lack of data available on the prescribing practices of Australian podiatrists. The aim of this research was to investigate the prescribing practices among Australian podiatrists and to explore barriers and facilitators that influence participation in endorsement. Methods Participants in this quantitative, cross-sectional study were registered and practicing Australian podiatrists who were recruited through a combination of professional networks, social media, and personal contacts. Respondents were invited to complete a customised self-reported online survey, developed using previously published research and the research team's expertise, and piloted with podiatrists. The survey contained three sections: demographic data including clinical experience, questions pertaining to prescribing practices, and barriers and facilitators of the endorsement pathway. Results Respondents (n = 225) were predominantly female, aged 25–45, working in the private sector. Approximately one quarter were endorsed (15%) or in training to become endorsed (11%). Of the 168 non-endorsed respondents, 66% reported that they would like to undertake training to become an endorsed prescriber. The most common indications reported for prescribing or recommending medications include nail surgery (71%), foot infections (88%), post-operative pain (67%), and mycosis (95%). The most recommended Schedule 2 medications were ibuprofen, paracetamol, and topical terbinafine. The most prescribed Schedule 4 medicines among endorsed podiatrists included lignocaine (84%), cephalexin (68%), flucloxacillin (68%), and amoxicillin with clavulanic acid (61%). Conclusion Podiatrists predominantly prescribe scheduled medicines to assist pain, inflammatory, or infectious conditions. Only a small proportion of the scheduled medicines available for prescription by podiatrists with endorsed status were reportedly prescribed. Many barriers exist in the current endorsement process for podiatrists, particularly related to training processes, including mentor access and supervised practice opportunities. Suggestions to address these barriers require targeted enabling strategies. Supplementary Information The online version contains supplementary material available at 10.1186/s13047-022-00515-w.
Keywords: Podiatry, Prescribing, Endorsement for scheduled medicine
Background
In Australia there is a documented shortage of medical practitioners in many areas which impacts on the prescription of scheduled medicines amongst other concerns [1]. This shortage will only increase in the context of increases in life expectancy, rising chronic disease, and rural and remote workforce shortages [2][3][4]. Nonmedical prescribing is one healthcare reform strategy that has the potential to offer equitable and timely access to scheduled medicines [2,5], and allied health professionals have been identified internationally as of high value in this role [6]. The increasing use of non-medical prescribing has been shown to be cost effective and to improve patient satisfaction while not compromising care or safety [7][8][9]. Such strategies aim to build a flexible, responsive, and sustainable Australian healthcare workforce that fully utilises workforce resources and competencies [3,4,10].
Podiatrists frequently assess, diagnose, and intervene in painful musculoskeletal injuries, inflammatory conditions, skin and soft-tissue infections, high risk and diabetic foot disease, and fungal infections [11]. Therefore, endorsed podiatrists are well positioned to reduce inconveniences and prevent duplicate visits to general practitioners. Timely prescription of scheduled medicines, such as antibiotic therapy for infected foot ulcers, may prevent deterioration and subsequent hospitalisation and/or limb amputation. Similarly, the ability to prescribe appropriate analgesia and anti-inflammatory medications for acute musculoskeletal injuries and inflammatory conditions (such as gout) may reduce complications and emergency department presentations.
In Australia medicines and poisons are classified by schedules which correspond to the level of regulatory control over their availability, and which are published in the Standard for the Uniform Scheduling of Medicines and Poisons (SUSMP) [12]. The schedules relevant to the National Podiatry Scheduled Medicines List include Schedule 2 (S2), which are substances that require advice from a pharmacist or licenced person for safe use; Schedule 3 (S3), which require professional advice for safe use but do not require a script; Schedule 4 (S4), which are prescription-only medicines; and Schedule 8 (S8), which are controlled drugs [13]. Australian podiatrists with general registration may administer, obtain, possess, sell, supply, or use (recommend) a range of S2 and S3 substances for the treatment of podiatric conditions, and may be authorised to possess and administer local anaesthetic agents as per the state and territory legislation in which they practice [14].
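For ease of reference, the scheduling hierarchy described above can be summarised as a simple lookup structure. The sketch below is illustrative only: the descriptions are paraphrased from this section (not the official SUSMP wording), and the helper function is a hypothetical convenience, not part of the regulatory framework.

```python
# Illustrative summary of the medicine schedules referred to in this survey
# (paraphrased from the SUSMP categories described above; not an official list).
SCHEDULES = {
    "S2": "Requires advice from a pharmacist or licenced person for safe use",
    "S3": "Requires professional advice for safe use, but no script",
    "S4": "Prescription-only medicine",
    "S8": "Controlled drug",
}

def requires_endorsement(schedule: str) -> bool:
    """Hypothetical helper: podiatrists with general registration may recommend or
    supply S2 and S3 items, whereas S4 and S8 items on the National Podiatry
    Scheduled Medicines List require endorsement (subject to state/territory law)."""
    return schedule in {"S4", "S8"}

if __name__ == "__main__":
    for code, description in SCHEDULES.items():
        print(f"{code}: {description} (endorsement required: {requires_endorsement(code)})")
```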
The Health Practitioner Regulation National Law, enacted by all Australian States and Territories, enables the national health practitioner boards to endorse the registration of suitably qualified health practitioners to prescribe scheduled medicines (endorsed) [13]. The minimum requirements established by the Podiatry Board of Australia to gain endorsement include holding an approved qualification (Pathway A) or completing a portfolio of evidence that includes a supervised practice component (Pathway B). This registration allows endorsed podiatrists to administer, obtain, possess, prescribe, sell, supply or use (prescribe) a broader range of medicines, including S4 and S8 medicines from the National Podiatry Scheduled Medicines List, for the treatment of podiatric conditions [13]. Importantly, in addition to national regulation, podiatrists must comply with the legislation and regulations of the State and Territory in which they practice [13]. Therefore, the scheduled medicines podiatrists can administer, obtain, possess, prescribe, sell, supply, or use may vary across jurisdictions. Although the Podiatry Board of Australia has been able to endorse podiatrists to prescribe scheduled medicines since 2010, less than 3% of Australian registered podiatrists have gained endorsement [15].
Podiatrists are not covered under the Australian Pharmaceutical Benefits Scheme (PBS) [16], which provides medicines at a Government-subsidised price and collates prescription histories. Therefore, there is a lack of quantitative information about prescribing practices of podiatrists, making it difficult to assess if endorsement for podiatrists leads to health system efficiencies or improved patient access and outcomes.
Little is known about why the uptake of endorsement for scheduled medicines remains low among Australian podiatrists. Graham and colleagues [17], in a qualitative thematic analysis of 13 podiatrists with and without endorsement, identified competence and autonomy (i.e. the need/desire to broaden current scope of practice), social and workplace influences (i.e. access to mentors, supervised practice opportunities), and extrinsic motivators (i.e. the time and cost of becoming endorsed) as key barriers and facilitators for podiatrists gaining endorsement. These factors are yet to be explored within the broader podiatry population.
The aim of this research was therefore to investigate medicines prescribing and recommendation practices among Australian podiatrists and to explore barriers and facilitators that influence the uptake of endorsement.
Research design
This research was a quantitative, cross-sectional survey design, conducted between July 2020 and December 2020. The online survey was created and delivered via SurveyMonkey®. Potential participants were provided with written information on the study aims and directives, giving online informed consent prior to commencing the survey. Respondents were advised that they could withdraw from the survey at any time by closing the browser, with data collected to that point included in the results. Those completing the survey were offered the option to provide an email address if they would like a summary of results supplied, otherwise all survey respondents remained anonymous. Ethical approval was gained from the University of South Australia Human Research Ethics Committee (Approval number 202938).
Participants and settings
All registered and practicing Australian podiatrists were eligible to participate (n = 5759) [15]. Participants were alerted to, and invited to participate in, the research via the Australian Podiatry Association, Facebook™, Twitter™, special interest groups, and through the authors' networks. This approach was chosen to ensure maximum coverage of podiatrists, promote participation and achieve an adequate response rate.
Survey design
Data were collected during a single round, purpose built, self-reported survey. The survey was developed collaboratively by the author group and pilot tested by three podiatrists, who were then excluded from the results, to ensure clarity and appropriateness of question structure, as well as face validity. The survey contained three sections: demographic data including clinical experience, questions pertaining to medicines prescribing or recommendation practices, and perceptions of the endorsement pathway.
Demographic data included registration type (podiatrist or podiatric surgeon), gender, age, state or territory of most frequent practice, primary role (clinician, administrator, teacher or educator, researcher), primary work sector (private, public, or both private and public), employment status (self-employed, employed, etc.), and location. Participants then identified as non-endorsed, endorsed, or undertaking the requirements for Pathway B to become endorsed for prescribing scheduled medicines (in-training), the location of practice where they were most likely to prescribe medications, and the length of time since endorsement where relevant.
Questions pertaining to medicines prescribing and recommendation practices asked participants to identify how often (weekly, monthly, quarterly, annually, never) they prescribed (or recommended) from the medications listed in the Podiatry Board of Australia: Guidelines for endorsement for scheduled medicines [10]. The medicines were grouped into: antimycotics, antibacterial, actinic keratosis, drugs for gout, corticosteroid, nonsteroidal anti-inflammatories, analgesics, antihistamines, antidotes and antivenoms, local anaesthetics, emergency (anaphylactic reactions), and benzodiazepines. If participants had prescribed or recommended from a medication group, a drop-down list asked for the specific medicines prescribed from those approved for use by Australian podiatrists [13]. For a full list of responder choices, please see Supplementary Material Table 1.
Questions concerning barriers and facilitators to endorsement were developed based on the previous qualitative study conducted by Graham and colleagues [17]. Specifically, endorsed and in-training participants were asked to identify which facilitators, from a given list, most contributed to their decision to undertake endorsement. Example facilitators were 'It would enable me to offer complete patient care', 'I believe it is an essential skill for effective podiatry practice'. This same group then identified items that made it difficult to complete the requirements for endorsement. Examples include 'The time commitment involved impacted my private life' and 'Limited access to supervisors/mentors'.
Procedure
Participants were asked to indicate their endorsed status (non-endorsed, in-training, or endorsed), and the survey tool used skip-logic to skip to relevant questions. All responders were asked the same questions regarding demographics, prescription/recommendation practices and facilitators for endorsement, however, endorsed or in-training podiatrists were asked some additional questions specific to barriers surrounding the endorsement procedure.
Data management
Participants were categorised by endorsement status for descriptive purposes (e.g., Non-endorsed, Endorsed, and In-training). For questions relating to barriers and facilitators to training (i.e., survey section three), In-training and Endorsed outcomes are pooled as both groups have insight into undertaking the process to gain endorsement.
Data analysis
Data collected were de-identified and exported to Microsoft Excel (Microsoft Corporation (2018)) for descriptive analysis. All responses were presented as reported, except for the length of time respondents had held endorsement. For this question, responses were analysed in categories of duration (<1, 1–4, 5–9, 10–15 years) to align with the durations reported in the Australian Health Practitioner Regulation Agency (AHPRA) registrant summary report. Results are presented as frequencies.
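As a minimal illustration of the duration grouping described above, the snippet below bins a hypothetical list of "years since endorsement" values into the reported categories. The variable names and example values are invented for illustration and are not taken from the study dataset.

```python
# Minimal sketch of the duration binning described in the Data analysis section.
# The example data and bin labels are illustrative; they are not the study data.
def duration_category(years: float) -> str:
    if years < 1:
        return "<1"
    if years <= 4:
        return "1-4"
    if years <= 9:
        return "5-9"
    return "10-15"

years_endorsed = [0.5, 2, 3, 7, 11]  # hypothetical respondent values
counts = {}
for y in years_endorsed:
    label = duration_category(y)
    counts[label] = counts.get(label, 0) + 1

print(counts)  # frequencies, as reported in the Results section
```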
Results
Of the 229 participants who agreed to take part in the survey, four failed to report their profession and were excluded from all further analyses. A total of 225 registered Australian podiatrists were included in the results for this survey, four of whom were podiatric surgeons.
Demographic information & clinical experience of respondents
Descriptive data of registered podiatrists who took part in the survey are presented in Table 1. Respondents were predominantly female, aged 25-45, working in the private sector. Most endorsed prescribers practiced in Queensland (33%) or Victoria (36%), worked within the metropolitan regions of Australia (86%) in private practice (73%), with over 10 years of clinical experience (79%). Most endorsed prescribers had held endorsement for less than 10 years (86%), and none of the participants who reported that they were endorsed prescribers or undertaking the process to gain endorsement worked in remote or very remote parts of Australia. As presented in Table 1, the survey participants reflect the AHPRA registrant data published in March 2021 [15]. Of the 225 participants who held general registration, approximately one quarter (25.3%) were endorsed (n = 33, 14.7%) or undertaking the process to become endorsed (n = 24, 10.7%) (this included those who had completed training and were awaiting board approval). All podiatric surgeons (n = 4) were endorsed. Of the 168 non-endorsed respondents, 66.0% reported that they would like to gain endorsement.
Prescribing practices
The frequency of medications prescribed or recommended over the last 12-month period (weekly, monthly, quarterly, yearly, never) according to endorsement status is presented in Fig. 1. The groups of medications most frequently prescribed or recommended (on a weekly or monthly basis) by all respondents include local anaesthetics, antibacterial agents, analgesia, antimycotics, and non-steroidal anti-inflammatory drugs. The most common indications reported for prescribing/recommending these medications include nail surgery (71%), foot infections and ulcerations (88%), post-operative pain (67%), and mycosis (95%), respectively. The antidotes and antivenom class was not prescribed by any responders, and emergency (anaphylactic reactions) medications were only occasionally prescribed by Endorsed participants. As expected, some medications were not recommended by Non-endorsed and In-training participants because they require endorsement to prescribe, for example, drugs for gout. (Notes to Table 1: location was defined using the Australian Statistical Geography Standard (ASGS), which defines relative remoteness using the Accessibility and Remoteness Index of Australia (ARIA+); further detail has been reported elsewhere [18]. In-training = undertaking the process to gain endorsement.)
The recommendation of Schedule 2 and 3 medicines by participants over the last 12 months is presented in Fig. 2 according to endorsement status. A larger proportion of Endorsed participants, or those In-training, reported prescribing/recommending Schedule 2 or 3 medications over the last 12 months compared to Non-endorsed participants. The most prescribed/recommended Schedule 2 medications were ibuprofen, paracetamol, and topical terbinafine, irrespective of endorsement status. The most prescribed Schedule 4 medicines among Endorsed participants included lignocaine (84%), cephalexin (68%), flucloxacillin (68%), and amoxicillin with clavulanic acid (61%). Lignocaine, a Schedule 4 medicine (local anaesthetic) that podiatrists have long been able to administer without further endorsement, was the most frequently used medication by all participants (84% of Endorsed podiatrists, 75% of podiatrists In-training, 55% of Non-endorsed participants). Except for lignocaine, approximately 1 in 5 Endorsed participants reported not prescribing medications over the last 12 months. The frequency of prescribing/recommending specific medications (including Schedule 4 medicines) is provided in Supplementary Material Table 1.
Perceptions of the endorsement pathway
Facilitators of endorsement
The frequency of facilitators for endorsement is provided in Fig. 3. All of the Endorsed participants reported 'It would enable me to provide complete patient care', compared with 87.5% of In-training and 64.5% of Non-endorsed participants. Irrespective of endorsement status, the item 'Broadening my scope of practice to offer a higher level of patient care' was a highly reported facilitator, along with 'To offer streamlined processes of patient care' and 'I like to extend my knowledge'.
Compared to Endorsed participants (0.1%), Non-endorsed participants (30.8%) identified 'If endorsed podiatrists got a higher Medicare rebate' as an important factor in deciding to undertake the process to gain endorsement. In-training (44.4%) and Non-endorsed (33.6%) participants had similar response rates to 'Access to prescriptions from alternate sources', while this item had low response rates from Endorsed participants (8.3%). Compared to Endorsed (66.7%) and In-training participants (61.1%), fewer Non-endorsed participants (27.1%) reported 'I believe it is an essential skill for effective podiatry practice'.
Barriers of endorsement
The results for barriers to endorsement are listed in Table 2. For those that were Endorsed or In-training the most highly reported barriers were 'Prolonged approval / review process from the Board' (75.0%), 'The time commitment involved impacted my private life' (62.5%), and 'Time away from work' (45.8%). Of note, very few (12.5%) Endorsed participants identified that they had 'No difficulties' associated with completing the requirements for endorsement.
More than half of Non-endorsed participants reported barriers associated with the training process and time requirements. Some of the most highly reported responses were 'Limited access to supervisors/mentors' (60.8%), 'The cost of training is prohibitive - University or time away from work' (42.8%), 'Lack of structured clinical training' (45.8%), and 'I do not have the time needed to undertake training' (40.2%). Many Non-endorsed participants reported 'It is harder in private practice/non-hospital-based positions than within the public hospital sector' (33.6%). This perception may be because podiatrists in large hospitals have access to a broad range of supervised practice opportunities. 'Lack of PBS funding for podiatry-prescribed scripts' (42%), 'You can be a successful podiatrist without having endorsement' (32.7%), 'The easy availability of scripts from alternate sources' (29.0%), and 'Lack of professional role identity - Our patients are not aware we can prescribe' (20.6%) were also identified.
Discussion
This is the first known survey of Australian podiatrists' prescribing and recommendation practices, and of barriers and facilitators to endorsement for scheduled medicines. Findings from 225 registered podiatrists indicate that approximately one quarter of survey participants were endorsed or in training to become endorsed, well above the proportion of endorsed podiatrists (less than 3%) in the Australian podiatry population, which may reflect a greater interest in the survey topic among this group. The most common medications prescribed or recommended by all podiatrists were local anaesthetics, antimycotics, antibacterial agents, and analgesics. These findings are consistent with other studies that examined prescribing and medication use practices among non-medical prescribers [19,20]. Taken together, these findings suggest the management of pain and infection are areas where patients' patterns of receiving their medications are undergoing change.
Podiatry, with its known association with chronic disease, is well placed in the primary health care setting to reduce the burden on General Practitioners and the public health care system [4,10]. These benefits could be maximised if the rate of endorsement is increased in the podiatry profession. While a majority of Non-endorsed participants reported they would like to become an endorsed prescriber, this intention is not converted into behaviour. This intention-behaviour gap could be partly explained by the numerous barriers within the process to gain endorsement reported by Endorsed participants, with only 12.5% of Endorsed participants reporting no difficulties. Further, 75% indicated that the wait to receive the award after completing training was prolonged. The current endorsement Pathway B requires a high level of self-motivation. Individuals are tasked with costly enrolment into post-graduate courses (with associated non-Commonwealth supported fees), the commitment to undertake self-directed online case studies, identifying and approaching a mentor, identifying and organising supervised practice opportunities over a broad range of areas, and completing a portfolio of evidence. Hence, it is not unexpected that barriers related to the endorsement process were amongst the most highly reported barriers by Non-endorsed participants. In addition, Non-endorsed participants frequently identified a 'Lack of understanding of the endorsement training process'. While this could reflect a lack of intent or motivation to engage with the process to become endorsed, it may also reflect the onerous process of completing the requirements to gain endorsement.
The 'Lack of PBS funding' was reported as a considerable barrier for Non-endorsed participants to undertake the endorsement process. Eligibility to prescribe medications under the PBS has been viewed as an important factor in providing full episodes of patient care elsewhere [21,22]. Without PBS subsidies, podiatry patients may incur greater out-of-pocket expenses or choose to return to their General Practitioner for prescriptions, increasing public and healthcare costs.
Many pathology investigations, such as blood, urine, or tissue tests, required for the safe and responsible use of scheduled medicines and to provide full episodes of patient care are either not subsidised when ordered by podiatrists, or podiatrists are unable to request them. Topical antifungal therapies (which are known to have limited efficacy for the treatment of onychomycosis [23]) do not require pathology testing to prescribe and manage, and were recommended by up to 67% of Endorsed participants over the last twelve months, while oral terbinafine, which has high-quality evidence for efficacy in the treatment of onychomycosis but requires both pathology testing and liver function monitoring [24], was prescribed by only 25% of Endorsed participants over the last twelve months. Such testing can only be accessed by a referral back to a GP, reducing efficiencies. This example demonstrates how limited access to pathology testing may, in part, explain the prescribing patterns for antifungal medications observed in this study, as well as the relatively high proportion of Endorsed participants not prescribing any medications.
In 2018 the total cost to train a podiatrist through the endorsement process in the public hospital system was estimated at over $50,000, inclusive of costs and in-kind costs to the hospital and the podiatrist themselves [25]. Although this appears to be a significant investment, the researchers suggested breakeven could be achieved by averting 0.68 infected diabetic foot ulcers (DFU) from major amputations [25]. DFU is another area of podiatric practice where health system efficiencies may be realised. Identifying factors that motivate podiatrists to become endorsed is an important step in developing strategies to increase engagement and participation. The findings of this survey build on the qualitative interviews by Graham et al. [17], confirming that self-determination theory (SDT) [26,27] can provide insights into the drivers of motivation. This universal motivational and personality framework is commonly applied in education and health settings and is based on the concept that people naturally develop through acquiring the knowledge, skills, and habits they observe that support their basic psychological needs of autonomy, competence, and relatedness [28].
Self-determination theory posits that motivation to undertake learning can be represented on a continuum from extrinsic motivation (e.g. financial rewards) through to intrinsic motivation (e.g. curiosity or finding the content appealing and interesting) [28]. Endorsed participants indicated intrinsic motivators such as 'I like to extend my knowledge' for participating in endorsement training. This may suggest that such intrinsic motivation is more likely to move podiatrists to action than extrinsic factors such as 'Lack of PBS funding' and 'No tangible incentives to undertake training'. Interestingly, these were reported as barriers by Non-endorsed participants.
In SDT, internalisation occurs when a behaviour becomes a personally endorsed value such that the behaviour is in harmony with the broader values, commitments, and interests of the person [29,30]. In-training participants had the highest reporting of 'I believe it is an essential skill for effective podiatry practice', followed by Endorsed participants, suggesting that for them prescribing may have become an internalised behaviour that motivated these participants to overcome the onerous process involved in becoming endorsed.
Several studies have supported aspects of SDT on an organizational level [31]. The history of local anaesthetic (LA) use in podiatry could be one example of internalising prescribing as a behaviour at a profession-wide level. Local anaesthetic was introduced as part of undergraduate podiatry training in Australia in 1978 [32] and has seen a wide uptake, such that it is now seen by the profession as an essential skill. Findings from this survey support that LA has become an internalised behaviour in podiatry, as LAs were a medication group non-endorsed participants frequently used.
Self-determination theory hypothesises that the need to be connected to others and to be an effective member of the social environment supports the tendency to internalise the values and regulatory processes of one's surroundings [31]. In-training participants had the highest reporting of the social and workplace influences of 'Becoming a valued member of a team' and 'Support from employers'. Similarly, In-training participants reported higher levels of 'Working in an environment with other endorsed prescribers' than Non-endorsed participants, suggesting that observing prescribing in the workplace can motivate an individual to master this skill to enable them to better integrate into the larger social structure of the workplace [28]. The lack of internalization by Non-endorsed participants is supported by the reporting of 'You can be a successful podiatrist without having endorsement'.
Future directions
The second most frequently reported barrier by Endorsed participants was 'The time commitment involved impacted my private life'. Training systems that provide flexible learning environments to assist with balancing work and other commitments may overcome this barrier. Plans to incorporate the requirements for endorsement into accredited undergraduate programs in Australia are one such strategy to assist in increasing endorsement rates in the future podiatry workforce. However, this poses new challenges such as curriculum creep, staffing and resourcing requirements. Forward planning to provide the mentors required for prescribing practice in the new undergraduate pathways will maximise rates of participation in endorsement. Supporting podiatrists working in the academic environment to become endorsed could be considered a priority.
This research supports the premise of SDT that social context can motivate behaviour change. Combined with the finding that supervisor and mentor access is a major barrier, this suggests that supporting podiatrists in leadership roles in clinical settings may be a strategy that could have considerable short-term impact by improving access to mentors as well as increasing opportunities to internalise prescribing behaviour in the podiatry profession. One strategy could be to establish a formalised leadership and mentoring framework, as well as communities of practice where podiatrists support each other and increase capacity. One example of this style of supportive program being successfully established in podiatry is a mentoring program developed in a Victorian public health service [25].
Podiatry could explore strategies used by other allied health groups included in the legislative changes for non-medical prescribing. For example, in the Australian optometry profession, which is comparable in size to podiatry (N = 6207), 68% of optometrists hold an endorsement for scheduled medicines qualification [29]. In addition to incorporating therapeutic training into undergraduate degrees since 2002 [30], optometry has a streamlined, postgraduate academic pathway. A one-year postgraduate certificate in Ocular Therapeutics is offered by two accredited independent optometry education institutions. The cost to students is approximately $10,000 and includes support to meet required clinical hours, such that students graduate with a formalised qualification recognised by their registration body. To the authors' knowledge, there are currently two universities offering the postgraduate podiatric therapeutic component of the pathway. However, there are currently no Australian providers that offer courses incorporating all components of the pathway.
The Australian rural and remote population can have greater challenges accessing healthcare than those living in more urban areas. While 10 survey participants reported working in rural areas and one reported working in a remote area, none were endorsed. Further research to explore strategies to overcome the significant barriers for this group of podiatrists to undertake the process to become endorsed has great potential to offer improved, equitable and timely access to scheduled medicines in these communities.
Strengths and limitations
While this survey examines a sample of Australian podiatrists that shares demographic characteristics consistent with the Podiatry Board of Australia registrant data, there are limitations that must be addressed. Firstly, the final sample was relatively small, reflecting approximately 4% of the total Australian podiatry population. Caution is therefore needed when generalising study findings to the entire podiatry population. Secondly, data collected are self-reported and may be prone to reporting bias such as recall bias and social desirability bias [31]. Lastly, we did not differentiate the separate pathways available to become an endorsed prescriber. Since several endorsement pathways exist, it is possible that perceptions relating to factors that made endorsement training difficult to complete may vary according to pathway.
Conclusion
This is the first known research to examine the prescribing and medication recommendation practices of podiatrists within Australia, with a key outcome being that podiatrists predominantly prescribe or recommend medications to assist pain, inflammatory, or infectious conditions. However, lack of PBS funding and pathology testing access limit podiatrists' ability to provide full episodes of patient care. Therefore, the valuable benefits of streamlined care, improved patient access, and improved efficiencies in the health system may not be fully realised and can even be difficult to examine.
Approaches that improve access to mentors and a broad range of supervised practice, such as increased numbers of endorsed podiatrists or a formalised leadership and mentoring framework, are likely strategies that could improve rates of endorsement. It was also suggested that some barriers to endorsement could be addressed by internalising prescribing as a behaviour in the podiatry profession, through supporting staff within leadership roles and teaching institutions to engage with and model the prescription of scheduled medicines. Incorporating the requirements for endorsement into undergraduate education has been demonstrated in the past to be effective in podiatry when used with the prescription of LA. While this may be a long-term strategy, in the short term, improving access to mentors and incentives for the current workforce to become endorsed should be actively considered. | 2022-02-08T14:37:29.146Z | 2022-02-08T00:00:00.000 | {
"year": 2022,
"sha1": "30abf15701ad84e4f2cf23348d1ec7611ad7e0b0",
"oa_license": "CCBY",
"oa_url": "https://jfootankleres.biomedcentral.com/track/pdf/10.1186/s13047-022-00515-w",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "30abf15701ad84e4f2cf23348d1ec7611ad7e0b0",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
72990945 | pes2o/s2orc | v3-fos-license | Torsional Strength Prediction of RC Hybrid Deep T-Beam with an Opening using Softened Truss Model
This paper presents an analytical study to predict the torsional behavior of a reinforced concrete (RC) hybrid deep T-beam with an opening and to compare the predictions with experimental results. The RC hybrid deep T-beam was cast with a normal weight concrete web and light weight concrete flanges. Based on the Softened Truss Model and Bredt's theory, a new equation was derived and proposed to show the relationship between the compression struts of the light weight concrete and the normal weight concrete. Based on the experimental results for cracking angles and the curvature equations of light weight concrete and normal weight concrete, a new equation was also proposed to show the relationship between the strain diagrams of the light weight concrete and the normal weight concrete. The analytical results show that the prediction of the maximum torque capacity of the beams is close to the experimental results, except for the prediction of the twist angle, which is larger than the experimental results.
Introduction
In order to pass utilities (ducts and pipes) and also to save headroom, an opening is often required in a beam. A beam with an opening subjected to pure torsion is very rare; normally it is subjected to a combination of bending, shear and torsion. However, it is important to study the beam under pure torsion as the basis for understanding its complex behavior under combined loadings [1]. Research on the torsion of shallow beams has been carried out extensively [2][3][4][5][6][7][8]; however, research on the torsion of deep beams is rare and has attracted little attention. Beams with span/depth ratios of about four or less are categorized as deep beams [9]. Akhtaruzzaman and Hasnat [10] conducted an investigation of the torsional behavior of deep beams with and without openings, while Samman et al. [11] reported investigations of the torsional behavior of rectangular plain concrete deep beams without openings.
Lisantono et al. [12] reported a preliminary investigation of the effect of light weight concrete (LWC) flanges and a web opening on the torsional capacity of RC deep T-beams. Lisantono et al. [13,14] also investigated the effect of web opening locations, in the horizontal as well as the vertical direction, on the torsional behavior of RC deep T-beams. To obtain a clear understanding of the effect of web opening dimensions on the torsional behavior of RC hybrid deep T-beams, Lisantono et al. [15] conducted an experimental program. This study aims to develop a method for predicting the torsional capacity of RC hybrid deep T-beams, especially deep T-beams with web openings.
Softened Truss Model Theory
A beam subjected to pure torsion will behave as an analogous thin-wall tube/space truss [2][3][4][5][6]. After the beam cracks, the concrete strut will be subjected to a compression force and the reinforcement acts as a tie. Hsu [16] stated that the three equilibrium conditions of the Truss Model satisfy Mohr's stress circle and, by assuming that the steel bars can only resist axial stresses, superposition of the concrete stresses gives the equilibrium equations of the model. According to Bredt's theory, the shear flow should be constant along the centerline of the shear flow zone and can be related to the torque T. Hsu [4] added one more equilibrium equation, where T = torque, A0 = the area within the centerline of the shear flow, and td = the effective thickness of the shear flow zone. Hsu [16] showed that, from the compatibility condition of the truss model, the average strains satisfy Mohr's strain circle, giving the compatibility equations, where εl, εt = average strains in the l and t directions, respectively (positive for tension); γlt = average shear strain in the l-t coordinate (positive as shown in Figure 1); and εd, εr = average principal strains in the d and r directions, respectively (positive for tension). The stress and strain of concrete in the d direction follow the material law proposed by Vecchio and Collins [17] for softened concrete, where σd = compression stress in the concrete strut, ζ = softening coefficient, ν = Poisson ratio, and ε0 = strain at the maximum compressive stress of non-softened concrete, which can be taken as -0.002 [4].
Hsu [7] used the softening coefficient proposed by Belarbi and Hsu [18]. The stress-strain relationship in the r direction can be expressed in terms of Ec = modulus of elasticity of concrete and εcr = strain at cracking of the concrete, taken as εcr = fcr/Ec, where fcr = stress at torsional cracking of the concrete. Torsional cracking of the concrete is assumed to occur when the principal tensile stress reaches the tensile strength of the concrete in biaxial tension-compression. The stress-strain relationship for the longitudinal and transverse steel bars is assumed to be elastic-perfectly plastic.
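The softened compression law referred to above can be illustrated with a short numerical sketch. The parabolic ascending branch used below is the form commonly adopted in softened truss models; it is an assumption for illustration only, since the paper's own equations are not reproduced in this extract, and the parameter values are invented rather than taken from the tested beams.

```python
# Hedged numerical sketch of a softened concrete compression law of the form
# commonly used in softened truss models (assumed here; not the paper's own equation).
def softened_stress(eps_d, f_c, zeta, eps_0=-0.002):
    """Compression stress in the diagonal strut (negative = compression).

    eps_d : average compressive strain in the d direction (negative)
    f_c   : cylinder compressive strength of the concrete (positive, MPa)
    zeta  : softening coefficient (0 < zeta <= 1)
    eps_0 : strain at peak stress of non-softened concrete (taken as -0.002)
    """
    ratio = eps_d / (zeta * eps_0)  # both strains are negative, so ratio is positive
    if ratio <= 1.0:                # ascending (pre-peak) branch of the parabola
        return -zeta * f_c * (2.0 * ratio - ratio ** 2)
    # the post-peak branch is model-specific and omitted in this sketch
    return -zeta * f_c

# Illustrative values only (not taken from the tested beams).
print(softened_stress(eps_d=-0.001, f_c=30.0, zeta=0.6))
```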
Softened Truss Model for RC Hybrid Deep T-beam
According to the CEB-FIP Model Code 1990 [20], for a beam consisting of several rectangular sections the total torque can be assumed to be distributed over the rectangular sections, where T = total torque, Ti = torque assumed to be resisted by the i-th section, Xi = the smaller dimension of the i-th section, and Yi = the larger dimension of the i-th section. The RC hybrid deep T-beam in this research was cast with a normal weight concrete web and light weight concrete flanges, as shown in Figure 2a.
If the flange portion and the web portion are denoted as I and II, respectively, the proportion of torque carried by each portion can be written accordingly. Considering the web opening in the beam (Figure 2b), Equations 12 and 13 are modified, where T = total torque, TI = torque resisted by the flange (light weight concrete), TII = torque resisted by the web (normal weight concrete), bf = width of the flange, hf = thickness of the flange, bw = width of the web, hw = height of the web, and d0 = opening diameter. (Figure 2: (a) torque proportion without an opening; (b) torque proportion with an opening.) To derive the torque capacity of an RC hybrid deep T-beam after cracking, the beam is assumed to be a space truss/thin-wall tube analog, as shown in Figure 3. Figure 3 shows that the shear flow of the flange section (light weight concrete) is qI and the effective thickness of its shear flow zone is tdI, while the shear flow and the effective thickness of the web section (normal weight concrete) are qII and tdII, respectively. Taking a small element A at the flange section and an element B at the web section of the RC hybrid deep T-beam, as shown in Figure 3, the shear stress of element A can be expressed in terms of qI and tdI. According to Bredt's theory [4], the shear flow q must be constant along the shear flow centerline. Therefore, the relationship between the shear flows of the light weight and normal weight concrete can be obtained. Hsu [5] stated that as long as the tensile strength (σr) of the concrete is very small compared to the compressive strength (σd), its effect on the torque will be small too. Therefore, the tensile strength (σr) can be neglected in the analysis. Substituting σrI = σrII = 0 into Equations 22 and 23 gives an expression which illustrates that the compressive force in the strut of the light weight concrete must be in equilibrium with the compressive force in the strut of the normal weight concrete. Therefore, if the characteristics of the light weight and normal weight concrete are not the same, the effective thicknesses of the shear flow zones, td, must not be the same either.
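As a small numerical illustration of the Bredt thin-tube idea invoked above, the snippet below computes the constant shear flow from a given torque and enclosed area, and the corresponding shear stresses in the flange and web walls for assumed effective thicknesses. The relation q = T/(2 A0) is the standard Bredt formula rather than a reproduction of the paper's numbered equations, and the numerical values are invented, not the test beam dimensions.

```python
# Minimal sketch of Bredt's thin-tube relation used in softened truss models:
# the shear flow q is constant along the centerline of the shear flow zone.
def bredt_shear_flow(torque_Nmm, area_A0_mm2):
    """q = T / (2 * A0), in N/mm."""
    return torque_Nmm / (2.0 * area_A0_mm2)

def wall_shear_stress(q_N_per_mm, t_d_mm):
    """Average shear stress over an effective wall thickness t_d, in MPa."""
    return q_N_per_mm / t_d_mm

# Illustrative values only (not the dimensions of the tested T-beams).
T = 15.0e6          # applied torque in N*mm
A0 = 60_000.0       # area enclosed by the shear-flow centerline in mm^2
q = bredt_shear_flow(T, A0)
tau_flange = wall_shear_stress(q, t_d_mm=40.0)   # lightweight-concrete flange wall
tau_web = wall_shear_stress(q, t_d_mm=55.0)      # normal-weight-concrete web wall
print(q, tau_flange, tau_web)  # same shear flow, different stresses when t_d differs
```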
Lampert and Thurlimann [21,22] proposed that, due to warping, the concrete strut is subjected not only to a compressive force but also to flexure at the tube wall. This phenomenon can be stated in the form of a curvature equation, where ψ = curvature of the concrete strut. From this flexural phenomenon at the tube wall, the curvature of the concrete strut in each of the light weight and normal weight concrete portions can be obtained, and the corresponding strain distribution follows (see Figure 4). It is to be noted that the program was terminated when εdI reached the value of 0.00175 or εdsI = 0.0035, where εdsI is the compression strain of the concrete at the outer surface of the diagonal concrete strut, while εdI is the compression strain of the concrete at the mid-height of the effective thickness of the diagonal concrete strut (see Figure 4).
Experimental Program
To verify the theory, Lisantono et al. [15] conducted an experimental program on torsional RC hybrid deep T-beams. Four beams, namely B4HS, B4HOD1, B4HO and B4HOD3, were prepared. The first beam (B4HS) was a reinforced concrete hybrid deep T-beam without an opening, cast with a normal weight concrete web and light weight concrete flanges.
The second, third and fourth beams (B4HOD1, B4HO and B4HOD3) were reinforced concrete hybrid deep T-beams with web opening diameters of 100 mm, 200 mm and 300 mm, respectively. The circular openings were located at mid-span and mid-depth of the beam. All beams had the same span of 2000 mm and the same nominal cross-sectional dimensions. Details of the beams are shown in Figures 6, 7, 8, and 9 and in Table 1. The test setup of the specimen is illustrated in Figure 10. The beams were instrumented for measurements of prevailing deflections and rotations. The deflections and rotations due to the torque force were measured using LVDTs and inclinometers, respectively. Electrical resistance strain gauges were used to measure strains in the reinforcement and concrete.
Results and Discussion
Comparison between the experimental results and the softened truss model theory for the beams B4HS, B4HOD1, B4HO, and B4HOD3 can be seen in Figures 11, 12, 13, and 14, respectively.
The experimental results show that, in general, the strength characteristics of the tested beams up to first cracking are essentially linear. After first cracking, preceded by a small drop in torque, the curves increased non-linearly with increasing twist up to the ultimate torque. It was observed that the small drop in torque occurred when the crack propagated a short distance along the corner line of the web-flange interface. After reaching the ultimate torque, the curves decreased non-linearly with increasing twist and proceeded with a section of an approximately horizontal plateau, indicating a state of yielding prior to collapse [15]. Figures 11, 12, 13, and 14 show that the theory can predict not only the torsional strength of the RC hybrid deep T-beam, but also the angle of twist throughout the post-cracking loading history. Comparing with the experimental results, it can be seen that in general the tested beams are stiffer than the curves predicted by the theory. To evaluate the accuracy of the proposed theory, the torque capacity and twist angle predicted by the proposed theory were compared to the experimental results. A comparison of the maximum torque capacity based on the proposed theory with the experimental results can be seen in Table 2. The comparison of the twist angle at maximum torque obtained using the softened truss model with the experimental results can be seen in Table 3. Table 3 shows that the maximum twist angle predicted by the theory is higher than the experimental result, meaning that the theory gives a more excessive twist angle than the experimental results, especially for the beam with the large opening. It is suspected that this softer behavior is due to the fact that the proposed method was derived based on the assumption that the concrete of the beam had already cracked, while in reality the beam has two behavior regimes throughout the loading history, namely before first cracking and after cracking of the concrete.
Conclusion
Based on a comparison of the results between the analytical method and the experimental program, the following conclusions can be drawn. The softened truss model can predict not only the torsional strength of the RC hybrid deep T-beam, but also the angle of twist throughout the post-cracking loading history.
The prediction of the maximum torque capacity of the RC hybrid deep T-beams based on the softened truss model is close to the experimental results. However, the twist angle predicted by the softened truss model is higher than the experimental results, especially for the beam with a large opening. The excessive prediction of the twist angle is due to the fact that the softened truss model was derived based on the assumption that the concrete of the beam had already cracked, while in reality the beam has two behavior regimes throughout the loading history, namely before first cracking and after first cracking of the concrete. | 2019-03-11T13:08:28.035Z | 2013-04-03T00:00:00.000 | {
"year": 2013,
"sha1": "b3aa75638f14537e824c180b8dfc4754f4a1f48e",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.9744/ced.15.1.25-35",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "5d1058c2db2825dcd0abac23fcdb96ef1fa426fc",
"s2fieldsofstudy": [
"Engineering",
"Materials Science"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
215833340 | pes2o/s2orc | v3-fos-license | Intuitionistic fuzzy time series functions approach for time series forecasting
Fuzzy inference systems have been commonly used for time series forecasting in the literature. Adaptive network fuzzy inference systems, fuzzy time series approaches and fuzzy regression functions approaches are popular among fuzzy inference systems. In recent years, intuitionistic fuzzy sets have been preferred in fuzzy modeling and new fuzzy inference systems have been proposed based on intuitionistic fuzzy sets. In this paper, a new intuitionistic fuzzy regression functions approach based on intuitionistic fuzzy sets is proposed for forecasting purposes. This new inference system is called an intuitionistic fuzzy time series functions approach. The contribution of the paper is proposing a new intuitionistic fuzzy inference system. To evaluate the performance of intuitionistic fuzzy time series functions, twenty-three real-world time series data sets are analyzed. The results obtained from the intuitionistic fuzzy time series functions approach are compared with those of some other methods according to root mean square error and mean absolute percentage error criteria. The proposed method has superior forecasting performance among all methods.
Introduction
Forecasting is very important for future planning in many technological areas. Forecasting techniques attract managers and other decision-makers. Forecasting techniques can be based on probability theory, fuzzy set theory or computational techniques. Many forecasting techniques use fuzzy sets in their algorithms. Fuzzy sets were proposed by Zadeh (1965). Chen (1996a, b) proposed a fuzzy reasoning approach. Chen (1998) proposed a fuzzy system for group decision-making. Bai and Chen (2008a, b) proposed a method for automatically creating the membership functions of fuzzy rules. Bai and Chen (2008a, b) proposed an adaptive fuzzy system based on automatically determined concept maps. Fuzzy inference systems and fuzzy time series methods can be used for forecasting. The Takagi and Sugeno (1985) system, the adaptive network fuzzy inference system proposed by Jang (1993) and the fuzzy function approach proposed by Turksen (2008) are well-known fuzzy inference systems in the forecasting literature. Fuzzy time series methods are also popular in the forecasting literature. Song and Chissom (1993) first defined the fuzzy time series concept, and they proposed a fuzzy time series forecasting method. Chen and Wang (2010), Chen et al. (2012, 2013), Chen and Chen (2015), Chen and Phuong (2017) and Chen and Jian (2017) proposed forecasting methods based on fuzzy sets.
In recent years, many applications of classical fuzzy systems have been made in the literature. Zarandi et al. (2013) proposed a new fuzzy functions model tuned by hybridizing an imperialist competitive algorithm and simulated annealing. Bezdek (2013) used fuzzy objective function algorithms for pattern recognition. Baykasoğlu and Maral (2014) proposed a fuzzy functions approach via genetic programming. Baser and Apaydin (2015) proposed a hybrid fuzzy support vector regression analysis. Barak and Sadegh (2016) used an ensemble ARIMA-ANFIS hybrid algorithm for forecasting of energy consumption. Goudarzi et al. (2016) proposed an interactively recurrent fuzzy function with multi-objective learning. Aladag et al. (2016) proposed a type 1 fuzzy time series function method based on binary particle swarm optimization. Tan et al. (2017) proposed a new adaptive network-based fuzzy inference system for forecasting. Yang et al. (2017) used linear fuzzy information granules and a fuzzy inference system for long-term forecasting of time series. Son et al. (2017) proposed a new neuro-fuzzy inference system for insurance forecasting. Ranganayaki and Deepa (2017) proposed a support vector machine-based neuro-fuzzy model for short-term wind power forecasting. Tak et al. (2018) proposed a recurrent fuzzy function approach for forecasting. Pelka and Dudek (2018) proposed a neuro-fuzzy system for forecasting. Vanhoenshoven et al. (2018) proposed a fuzzy cognitive map employing ARIMA components for time series forecasting. Moreover, there are many fuzzy time series forecasting methods. The fuzzy time series concept was introduced by Song and Chissom (1993). Chen (1996a, b) proposed a fuzzy time series method based on fuzzy relation tables, and it constituted a base for many methods. In recent studies, Chen and Chang (2010), Chen and Chen (2011), Chen et al. (2012), Garg and Garg (2016), Singh (2016), Cagcag Yolcu et al. (2016), Kumar and Gangwar (2016), Kocak (2017), Bose and Mali (2018) and Chang and Yu (2019) proposed fuzzy time series methods. Wang (2018) used a fuzzy time series forecasting method for big data analysis. Bisht and Kumar (2019) used hesitant fuzzy sets in a computational method for financial time series forecasting. Egrioglu et al. (2019) proposed a forecasting method based on a single-variable high-order intuitionistic fuzzy time series model. Gupta and Kumar (2019a) proposed a novel high-order fuzzy time series forecasting method based on probabilistic fuzzy sets. Gupta and Kumar (2019b) proposed a hesitant probabilistic fuzzy set-based time series forecasting method.
In recent years, intuitionistic (hesitant) fuzzy sets have been commonly used in fuzzy techniques. In a fuzzy set, there are membership values for each member of the universal set. Non-membership values can be obtained from membership values by using a simple subtraction operation. Atanassov (1983) introduced the intuitionistic fuzzy set. In an intuitionistic fuzzy set, non-membership values carry different information than membership values. Besides, hesitation degrees are obtained from a simple mathematical operation on membership and non-membership values. Atanassov (1986) and Atanassov (1999) gave the details of the theory and some applications for intuitionistic fuzzy sets. Bustince et al. (1995), Cornelis and Deschrijver (2001), Szmidt and Kacprzyk (2001), Marinov and Atanassov (2005), Own (2009) and Davarzani and Khorheh (2013) applied intuitionistic fuzzy sets to different implementations. Moreover, Zheng et al. (2013), Kumar and Gangwar (2016), Wang et al. (2016), Bisht and Kumar (2016) and Fan et al. (2017) proposed intuitionistic fuzzy time series methods in their studies. Chen and Chang (2016), Chen et al. (2016a, b) and Liu et al. (2017) applied intuitionistic fuzzy sets in their proposed methods. Castillo et al. (2007) proposed an intuitionistic fuzzy system for time series analysis. Olej and Hájek (2010a) proposed an intuitionistic fuzzy inference system design for prediction of ozone time series. Olej and Hájek (2010b) showed the possibilities of air quality modeling based on intuitionistic fuzzy set theory. Olej and Hájek (2011) compared fuzzy operators for intuitionistic fuzzy inference systems of Takagi-Sugeno type. Hájek and Olej (2012) used an adaptive intuitionistic fuzzy inference system of Takagi-Sugeno type for regression problems. The parameters of the intuitionistic fuzzy inference system are determined by using particle swarm optimization in Angelov (2012), Maciel et al. (2012) and Henzgen et al. (2014). Bas et al. (2019) proposed a type 1 fuzzy function method based on ridge regression for forecasting. Kizilaslan et al. (2019) and Cagcag Yolcu et al. (2019) proposed intuitionistic fuzzy function approaches. Egrioglu et al. (2020) proposed a picture fuzzy regression functions method based on picture fuzzy clustering.
The motivation of this paper is explained in the following sentences. Fuzzy inference systems are efficient tools for forecasting purposes. It is possible to create new fuzzy inference systems to obtain more accurate forecasts. In particular, the intuitionistic fuzzy inference system needs to be improved by using different, updated techniques. Because intuitionistic fuzzy inference systems employ non-membership values, they can give more accurate forecast results than classical fuzzy inference systems.
The main contribution of this paper can be expressed as proposing a new intuitionistic fuzzy inference system. In this new system, membership values and non-membership values in intuitionistic fuzzy sets and their nonlinear transformations are used as inputs. Thus, the dimension of the input matrix in the type 1 fuzzy function approach is augmented by using non-membership values in intuitionistic fuzzy sets. In the new approach, the membership and non-membership values are obtained from intuitionistic fuzzy c-means as in Chaira (2011). The proposed intuitionistic system does not need to determine the combination parameter of a dual system which is separately designed according to membership and non-membership. In the second section, the proposed method is summarized. The applications for real data sets are given in the third section. In the last section, conclusions and discussions are given.
Intuitionistic fuzzy time series functions approach
In the literature, many fuzzy inference methods have been proposed. The fuzzy functions approach proposed by Turksen (2008) is fairly different from the others because it does not have a rule base and it can use linear regression models directly. Although the fuzzy functions approach uses fuzzy sets, it does not use intuitionistic fuzzy sets.
In the fuzzy functions approach, Turksen (2008) showed that augmenting the elements of the input matrix with nonlinear transformations of the membership values can drastically improve prediction performance. In this paper, an intuitionistic fuzzy time series functions approach is proposed. In this approach, the input matrix contains nonlinear transformations of non-membership values as well as membership values. The proposed method is based on ordinary least squares estimation instead of ridge regression as in Kizilaslan et al. (2019). The proposed method uses membership and non-membership values in the same input matrix, unlike Cagcag Yolcu et al. (2019).
The proposed approach has the following advantages:
• It employs intuitionistic fuzzy c-means clustering. Creating intuitionistic fuzzy sets is more realistic than creating ordinary fuzzy sets because of the use of the hesitation margin.
• The input matrix has a higher dimension, so the approach uses more information than other fuzzy functions approaches.
• The approach shows superior forecasting performance in many real-world time series applications.
• The proposed intuitionistic system does not need to determine the combination parameter of dual systems that are designed separately for membership and non-membership.
The step-by-step algorithm of the proposed intuitionistic fuzzy time series functions method is given below; its flowchart is shown in Fig. 1.
Algorithm 1. Intuitionistic Fuzzy Time Series Functions (IFTSF) Algorithm
Step 1 The parameters of the method are determined: the number of intuitionistic fuzzy clusters (cn), the number of lagged variables (p) that form the inputs of the system, the hesitation margin (π), the alpha-cut (α-cut) and the length of the test set (ntest).
Step 2 Clustering of the data. The inputs and targets constitute the IO matrix. The intuitionistic fuzzy c-means clustering algorithm proposed by Chaira (2011) is used to obtain the memberships and non-memberships.
The last element of each cluster center is excluded, and reduced cluster centers are obtained. The intuitionistic membership values (μ_A(x)) and non-membership values (ν_A(x)) are calculated according to the reduced cluster centers.
After applying the α-cut operation, the membership and non-membership values are normalized; the normalized values are denoted u_ij and v_ij.
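A minimal sketch of Step 2 is given below. It assumes the standard fuzzy c-means membership formula and a Sugeno-type intuitionistic fuzzy generator for the non-memberships; this generator is one common choice, and the exact generator, α-cut and normalization details in Chaira (2011) and in the paper may differ.

```python
import numpy as np

def fcm_memberships(X, centers, m=2.0):
    """Fuzzy c-means membership degrees of samples X for given centers.

    X: (n, d) data matrix; centers: (c, d). Returns an (n, c) matrix
    with u[i, j] = 1 / sum_k (d_ij / d_ik) ** (2 / (m - 1)).
    """
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    d = np.fmax(d, 1e-12)                     # guard against zero distance
    ratio = (d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0))
    return 1.0 / ratio.sum(axis=2)

def intuitionistic_degrees(mu, lam=2.0):
    """Non-membership and hesitation via a Sugeno-type generator
    (an assumed choice): nu = (1 - mu) / (1 + lam * mu) and
    pi = 1 - mu - nu, which keeps pi >= 0 for any lam > 0.
    """
    nu = (1.0 - mu) / (1.0 + lam * mu)
    pi = 1.0 - mu - nu
    return nu, pi
```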
Step 3 The fuzzy regression functions are obtained by the least squares method, and the parameters of the linear functions are estimated. Let n be the length of the training time series.
Step 4 Predictions are obtained for the training data by combining the outputs of the linear functions via Eq. (7).
The combined value is the output of the intuitionistic fuzzy time series functions (IFTSF) method for the jth observation, where u_ij and v_ij are the membership and non-membership values computed using the reduced cluster centers obtained in Step 2.
Step 5 Forecasts are obtained for the test set. The design matrix (Itest^(i)) is constituted for each intuitionistic fuzzy cluster and the test set, and the test-set forecasts (Ŷt^(i)) of the intuitionistic fuzzy regression functions are computed and combined in the same way as in Step 4.
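Steps 3 to 5 reduce to one ordinary least squares fit per cluster followed by a weighted combination. The sketch below assumes the final output is a normalized membership-weighted average of the cluster outputs, which is one plausible reading of Eq. (7); the exact weighting used in the paper may differ.

```python
def fit_cluster_functions(designs, y):
    """Step 3: fit one OLS linear function per intuitionistic fuzzy cluster.

    designs: list of (n, k) augmented design matrices; y: (n,) targets.
    """
    betas = []
    for Z in designs:
        Z1 = np.column_stack([np.ones(len(Z)), Z])  # intercept column
        beta, *_ = np.linalg.lstsq(Z1, y, rcond=None)
        betas.append(beta)
    return betas

def combine_outputs(designs, betas, weights):
    """Steps 4-5: membership-weighted combination of per-cluster
    predictions (an assumed reading of Eq. (7)); weights is (n, c).
    """
    preds = np.column_stack(
        [np.column_stack([np.ones(len(Z)), Z]) @ b
         for Z, b in zip(designs, betas)])
    w = weights / weights.sum(axis=1, keepdims=True)
    return (w * preds).sum(axis=1)
```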
Applications
The forecasting performance of the proposed method is investigated on several real-world time series data sets. The list of time series and their features is given in Table 1, together with the parameter settings used in the analysis of all data sets. First, the BIST100 data set is analyzed using ARIMA (Box and Jenkins 1976), ANFIS (Jang 1993), the modified ANFIS (MANFIS) proposed by Egrioglu et al. (2014), the fuzzy time series method (SC) proposed by Song and Chissom (1993), AR-ANFIS proposed by Sarica et al. (2018), the type 1 fuzzy functions approach (T1FF) proposed by Turksen (2008) and the method proposed in this paper (IFTSF). The root mean square error (RMSE) and mean absolute percentage error (MAPE) values for the BIST100 test sets are given in Tables 2 and 3, respectively.
$$\mathrm{RMSE}=\sqrt{\frac{1}{n_{test}}\sum_{t=1}^{n_{test}}\left(y_t-\hat{y}_t\right)^2} \qquad (12)$$
$$\mathrm{MAPE}=\frac{1}{n_{test}}\sum_{t=1}^{n_{test}}\frac{\left|y_t-\hat{y}_t\right|}{y_t} \qquad (13)$$
In Eqs. (12) and (13), y_t and ŷ_t are the real observations and the predicted values, respectively. According to the RMSE values, the proposed method produces the best forecasts in 50% of all results for the BIST100 data set, and similar results are obtained for the MAPE criterion. Moreover, the proposed method has the minimum mean RMSE and MAPE values in Tables 2 and 3 when compared with the other methods. When the length of the test set is 7, the success rate of the proposed method is 60%; for a test set length of 15, it is 40%. Thus, the IFTSF method produces better forecasts for small test sets of the BIST100 data and competitive results for longer test set lengths.
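Both criteria are direct to compute; a straightforward implementation of Eqs. (12) and (13) might look like this (MAPE is returned as a fraction; multiply by 100 for a percentage).

```python
def rmse(y, y_hat):
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    return float(np.sqrt(np.mean((y - y_hat) ** 2)))

def mape(y, y_hat):
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    return float(np.mean(np.abs(y - y_hat) / np.abs(y)))
```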
The analysis results for the TAIEX data set are given in Table 4, where it can be seen that the proposed method is the best in 83.33% of all results according to the RMSE values. Besides this superior performance, the proposed method is also the best for the mean RMSE statistic among all compared methods for the TAIEX data set. The Dow Jones data set is analyzed with ANFIS with grid partitioning (ANFISgrid), ANFIS with subtractive clustering (ANFISsub), MANFIS, T1FF and IFTSF; the results of this analysis are given in Tables 5 and 6. The success rate of IFTSF is 80% according to the RMSE criterion and 60% according to the MAPE criterion, and for the mean statistics the proposed method is the best under both criteria.
In the final application, the analysis results for the TEC data are given in Table 7. The TEC data are analyzed with a multilayer perceptron artificial neural network (MLP-ANN), a seasonal autoregressive integrated moving average model (SARIMA; Box and Jenkins 1976), the single multiplicative neuron model artificial neural network (SMNM-ANN) proposed by Yadav et al. (2007), the linear and nonlinear neural network introduced by Yolcu et al. (2013), and the T1FF and IFTSF methods.
It is clearly seen that the proposed method has superior forecasting performance for both RMSE and MAPE criteria.Besides, the graph of real observations and forecasted values is given in Fig. 2.
The best parameter sets of the IFTSF method for all analyzed time series are given in Table 8. An important remark from Table 8 is that, while the number of clusters is generally consistent, there is no single best value for the number of lags: for BIST100, the best results were obtained with different numbers of lags. Moreover, no fixed best parameter values emerge for the other time series either.
Conclusion
The main contribution of this paper is an intuitionistic fuzzy time series functions approach. The proposed method, inspired by T1FF, is an improved modification of T1FF that uses hesitation margins and non-membership values. Unlike in the T1FF approach, the non-memberships carry information about time series relations that differs from that of the membership values. The new inference system takes hesitation values into consideration, giving it an additional handle on uncertainty similar to type 2 fuzzy systems. Because it uses this second order of uncertainty, the system can capture the relations between lagged variables of the time series better. In IFTSF, the input matrix contains the non-memberships and their transformations as well as the lagged variables, the memberships and the transformations of the memberships. This augmentation of the input matrix provides extra information to the inference mechanism. The proposed method is compared with some well-known fuzzy inference and fuzzy time series methods, artificial neural networks and classical time series methods. According to the analysis results, IFTSF clearly produces better forecasting results than the others for almost all of the time series, and it can generally be said that IFTSF has better forecasting performance for short-term horizons or small test sets.
In future studies, the proposed method can be adapted to work in a dual structure for memberships and non-memberships, like other intuitionistic fuzzy inference systems. Input selection can be performed using artificial intelligence optimization techniques. Moreover, the fuzzy functions can be obtained using artificial neural networks instead of a linear model.
Table 1. Names and features of the time series and the parameter values of the proposed method: number of observations, number of lags (p), number of clusters (cn) and length of the test set (ntest). The BIST100 series were taken from the official Web site of the Turkish Central Bank. The second data set is the Taiwan Stock Exchange Capitalization Weighted Stock Index (TAIEX), observed daily between 1999 and 2004 and taken from Sarica et al. (2018). The third data set is the daily Dow Jones Industrial Average index between 2010 and 2014, five series in total. The first three data sets are stock exchange data. The last data set is the Turkey Electricity Consumption (TEC) data, observed monthly from the first month of 2002 to the last month of 2013 and taken from the Turkey Energy Ministry. The parameters of the proposed method (p, cn and ntest) are used as given in Table 1.
Table 4. RMSE values for the TAIEX test sets.
Table 7. RMSE and MAPE values and forecasts for the TEC data.
Table 8. Conditions for the best results of IFTSF. | 2020-03-19T10:21:07.757Z | 2020-03-17T00:00:00.000 | {
"year": 2020,
"sha1": "3e24819b9493ba0f1628a56e58dee6115577cf8b",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s41066-020-00220-8.pdf",
"oa_status": "HYBRID",
"pdf_src": "SpringerNature",
"pdf_hash": "f3ba00f0a6f8c118a6466d92dfa1dc439dc22fce",
"s2fieldsofstudy": [
"Computer Science",
"Mathematics"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
266061290 | pes2o/s2orc | v3-fos-license | Description of a PCR-based technique for DNA splicing and mutagenesis by producing 5' overhangs with run through stop DNA synthesis utilizing Ara-C
Background Splicing of DNA molecules is an important task in molecular biology that facilitates cloning, mutagenesis and creation of chimeric genes. Mutagenesis and DNA splicing techniques exist, some requiring restriction enzymes, and others utilize staggered reannealing approaches. Results A method for DNA splicing and mutagenesis without restriction enzymes is described. The method is based on mild template-dependent polymerization arrest with two molecules of cytosine arabinose (Ara-C) incorporated into PCR primers. Two rounds of PCR are employed: the first PCR produces 5' overhangs that are utilized for DNA splicing. The second PCR is based on polymerization running through the Ara-C molecules to produce the desired final product. To illustrate application of the run through stop mutagenesis and DNA splicing technique, we have carried out splicing of two segments of the human cofilin 1 gene and introduced a mutational deletion into the product. Conclusion We have demonstrated the utility of a new PCR-based method for carrying out DNA splicing and mutagenesis by incorporating Ara-C into the PCR primers.
Background
Splicing of DNA molecules is an important task in molecular biology that facilitates cloning, mutagenesis and the creation of chimeric genes. While the advent of restriction enzymes substantially advanced DNA splicing techniques, they cannot be applied universally, and their use is limited to enzyme-specific loci. Other techniques, such as site-directed mutagenesis by overlap extension [SOE; [1]], insertional mutagenesis with the megaprimer technique [2] and staggered reannealing [3,4], have further improved DNA mutagenesis and splicing. Each method offers advantages and has inherent drawbacks. Another cloning approach involving the formation of 5' overhangs utilizes the incorporation of nucleotide derivatives into PCR primers [5][6][7] to stall polymerization. These techniques depend on an established set of optimal conditions for strong polymerization arrest, including the correct choice of polymerase or the incorporation of three ribonucleotide derivatives in the primer [7]. Furthermore, chimeric DNA/RNA primers need to be removed and reverse-transcribed in order for the splicing to be completed.
Although in the past we have successfully used the SOE technique for mutagenesis and splicing, we encountered difficulties while constructing larger genes. That led us to develop the staggered reannealing method [3,4]. This method proved to be useful as well; however, its efficiency declined as the gene to be mutagenized exceeded 1000 bp. Although these techniques allow splicing of any two DNA fragments without the need for restriction enzymes, their efficiency is inversely related to the length of the DNA fragments involved, since they rely on the successful melting and reannealing of DNA to create matching overhangs. We sought to offer an alternative approach to facilitate the splicing of any two DNA segments for mutagenesis and the construction of chimeric genes. Our technique utilizes two rounds of PCR and is based on moderate template-dependent polymerization arrest using cytosine arabinose (Ara-C).
Ara-C is a nucleotide derivative (Fig 1) that is widely used in cancer therapeutics [8]. It is a competitive inhibitor of DNA polymerase and also affects polymerization initiation [9,10]. Ara-C exerts its therapeutic action on cellular DNA polymerase after phosphorylation by an endogenous kinase. Once phosphorylated, Ara-C facilitates inhibition of DNA replication in cancer cells. Sanger et al., in their search for polymerization-terminating agents for use in sequencing techniques, found that while dideoxynucleotides were strong polymerase terminators, Ara-C only weakly halted polymerization [11]. Therefore, even today, dideoxynucleotides remain the terminators of choice in sequencing reactions. Previous studies have shown that while Ara-C can serve as a substrate for mammalian polymerases, it terminates polymerization by some prokaryotic polymerases [11].
Here we used Ara-C both as a DNA polymerase inhibitor and as a template for DNA mutagenesis and splicing.
Results and discussion
We were searching for a mild template-dependent polymerization terminator. The rationale for mild termination is as follows: Termination must be strong enough to create 5' overhangs in the first PCR reaction, but weak enough to allow the polymerase to continue through the modified nucleotide during the second round of PCR (Fig 2). For the reasons mentioned above, Ara-C was chosen for use in the present study.
As proof of principle, a 20 bp deletion in the human cofilin 1 gene was created. We spliced together two segments of the gene: one 5' (237 bp) and one 3' (309 bp) segment (Fig 2). Primers were designed with one or two Ara-C molecules replacing native deoxycytidine nucleotides (Hospital for Sick Children, Toronto, Canada). When two Ara-C molecules were incorporated into a primer, template-dependent termination can potentially occur before, after one, or after two Ara-C molecules. Therefore, to determine the termination location, we designed one side of the overhang to accommodate termination after two Ara-C molecules, and the other side to accommodate termination after one Ara-C molecule (Fig 3).
There were a total of 8 PCR reactions that included two Ara-C primers for each of the two segments, and the two polymerases (Taq and Pfu) for each set of primers. PCR products were gel-isolated. At this stage, gel-isolation is essential in order to remove any of the original plasmid that might serve as a template in the second PCR reaction. Alternatively, the original plasmid may be eliminated by digestion with DpnI, although this option is less recommended, since traces of undigested plasmid could affect the outcome of the second PCR reaction. Corresponding segments to be spliced were combined (a total of four tubes) and ligated. As indicated above, the rationale for this technique is that Ara-C is a mild polymerization terminator, and therefore it will produce a mixture of cohesive and blunt ends. Hence, the reaction is expected to both terminate (producing sticky ends essential for the splicing phase) and run through the Ara-C (producing blunt ends; this feature is essential to the second PCR reaction). Therefore, a lowered concentration of ligase and a reduced ligation time were used to optimize conditions in favor of cohesive-end ligation. The products of the ligase reaction were amplified in the second PCR with Taq or Pfu polymerase using the sense primer A and the antisense primer B, which span the cofilin 1 gene. This PCR reaction produced the expected 552 bp product (blunt-end ligation is expected to produce an extra duplicated piece of DNA of 15 bp). The PCR products were either sequenced directly, or cloned into a plasmid and then sequenced. Based on the sequencing results, we observed that incorporation of two Ara-C nucleotides into the PCR primers yielded the expected product. This suggests that the two molecules of Ara-C provided the desired mild termination to produce a product with 5' overhangs, but also allowed the polymerase to run through during the 2nd PCR. Furthermore, based on the design of the primers, the polymerization stalled both after the first and after the second Ara-C molecule. Both 5 and 30 min incubations with DNA ligase were sufficient to preferentially ligate the cohesive ends. This further suggests that two adjacent molecules of Ara-C produce 5' overhangs. Even though both 5 and 30 min ligations were successful in producing the desired product, it is not recommended to allow the reaction to proceed for a prolonged time, nor to use high levels of ligase, since these conditions may facilitate blunt-end ligation, which may produce a mixture of the blunt- and cohesive-end products. Both Pfu and Taq polymerases were equally capable of producing termination products in the first PCR, while still running through the Ara-C in the second PCR. When one molecule of Ara-C was incorporated in the PCR primers, no termination could be observed, as seen by the addition of a 15 bp segment in the PCR product, indicative of blunt-end ligation. Even ligation for 5 min at a reduced concentration of ligase failed to produce cohesive-end ligation when only one Ara-C was employed.
Figure 1. Structure of cytidine and its derivatives. The derivatives featured in this figure vary in their sugar substituents. Note that in cytosine arabinose (Ara-C), the arabinose sugar contains hydroxyl groups in positions 3 and 5 in a similar orientation to native ribose, thus permitting reaction with other nucleotides in DNA synthesis. However, the hydroxyls in positions 2 and 3 are in the trans orientation. Comparing position 2 on the arabinose ring to that of 2-deoxyribose reveals that the hydrogen in 2-deoxyribose, which is in trans configuration to hydroxyl 3, is replaced by the hydroxyl group found on arabinose. It should be emphasized that there are two types of polymerization arrest: a. Chain termination: the nucleotide is incorporated into the nascent DNA strand and synthesis is stalled because no new nucleotide is added. Dideoxy derivatives stall elongation after incorporation into the nascent DNA strand because they lack a hydroxyl in position 3. Arabinose nucleotides also belong to this group, but they offer only partial stalling [11]. b. Template-dependent termination: nucleotides already incorporated in the DNA (e.g. in primers) are able to stall polymerization when the polymerase reads the template. It is believed that, due to steric constraints, the polymerase falls off the template; the frequency of this event determines the efficiency of the stalling. Arabinose derivatives belong to this group. The property of template-dependent termination of Ara-C was utilized in this study to create 5' overhangs in the first PCR. However, since the template-dependent termination by Ara-C is moderate, it also permits the amplification in the second PCR.
Figure 2. Schematic representation of the run through stop DNA mutagenesis and splicing technique with Ara-C. In this example, two pieces of DNA are to be spliced (5' and 3' DNA segments) and mutated with an insertion of additional DNA. The 5' segment is amplified using PCR primers A (sense) and Ara-C2-A (antisense). Primer Ara-C2-A is designed to contain hybrid DNA with two adjacent molecules of Ara-C to stall polymerization and produce a 5' overhang; the mutational addition is also incorporated into this primer. (Note that in this paper we created a mutational deletion in the human cofilin 1 gene, but a mutational addition is described here for illustration.) The 3' segment is amplified using PCR primers Ara-C2-B (sense) and B (antisense). Primer Ara-C2-B contains a sequence overlapping with primer Ara-C2-A, and 2 molecules of Ara-C are incorporated to stall polymerization and produce a 5' overhang that is complementary to the overhang in Ara-C2-A. Both Ara-C primers are phosphorylated for downstream ligation. Since two adjacent Ara-C molecules produce moderate termination, the PCR products contain a mixture of 5'-overhang and blunt-end DNA. Each PCR product is gel-isolated and subjected to a short ligation, in which cohesive-end ligation is predominant. A portion of the ligation reaction is then subjected to a second PCR reaction, using primers A and B, which span the entire mutated chimeric DNA. As mentioned above, 2 Ara-C molecules are moderate polymerization terminators. This ensures that in the first round of the second PCR, the polymerase will run through the Ara-C in the template and incorporate native dGMP, which in turn ensures proper polymerization in the next rounds and a product that contains the native dCMP. For cloning purposes of the final PCR product, primers A and B can include restriction sites (as used in this study). Alternatively, by using Taq polymerase in the second PCR reaction, the product can be cloned into TA cloning plasmids. Another alternative is to design primers A and B to contain at least 2 molecules of Ara-C to produce 5' overhangs that match the cloning plasmid.
The run through stop method utilizes a novel approach for DNA splicing and mutagenesis. While other mutagenesis techniques like SOE, megaprimer and staggered reannealing create matching overhangs after melting and reannealing, the run through stop method creates matching overhangs by polymerization arrest with Ara-C. We were motivated to design the Ara-C approach because we were not successful in creating gene mutations with the other techniques. Hence, the run through stop offers a good alternative to these techniques.
It has been previously demonstrated that utilizing abasic or RNA nucleotides, such as a tetrahydrofuran derivative or a 2'-O-methyl ribonucleotide, in PCR primers produces 5' overhangs that facilitate cloning of DNA fragments into plasmids [5][6][7]. These approaches were dependent on strong polymerization termination by the nucleotides. Our technique established the conditions for mild termination of DNA polymerization with two Ara-C molecules, which enables us to use the Ara-C-containing primers in two steps of PCR for DNA splicing and mutagenesis. Although in the present study we used relatively short segments of DNA for proof of principle (~500 bp of the human cofilin gene), this technique, unlike the staggered reannealing technique, is not limited to short DNA fragments. Since both rounds of PCR in the present study are based on conventional PCR, the length limit of the DNA fragments to be mutagenized is that of the PCR technique.
Conclusion
The run through stop method can be summarized in four steps: 1. Amplify the two DNA segments to be spliced by PCR, using phosphorylated primers that contain two adjacent arabinose nucleotides and overlapping sequences (a small sketch checking overhang complementarity follows this list).
2. Gel-isolate the two DNA products, combine and ligate.
3. Amplify the spliced product with flanking primers using PCR.
4. Clone the product into a plasmid.
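As an illustration of step 1, the short sketch below checks that two 5' overhangs (both read 5' to 3') can anneal as cohesive ends, i.e. that one is the reverse complement of the other. The 4-nt overhang sequences are hypothetical, chosen only for the example; they are not the overhangs used in this study.

```python
_COMPLEMENT = str.maketrans("ACGT", "TGCA")

def reverse_complement(seq: str) -> str:
    """Reverse complement of an uppercase A/C/G/T DNA sequence."""
    return seq.translate(_COMPLEMENT)[::-1]

def overhangs_anneal(overhang_a: str, overhang_b: str) -> bool:
    """Two 5' overhangs (each read 5'->3') form cohesive ends only if
    one is the reverse complement of the other."""
    return overhang_a.upper() == reverse_complement(overhang_b.upper())

# Hypothetical 4-nt overhangs, for illustration only:
print(overhangs_anneal("GCAT", "ATGC"))  # True
```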
First PCR
For the first PCR, 4 primers were designed: primers A and B flanking the human cofilin 1 gene (Fig 2) and two primers containing Ara-C molecules (Figs 2 and 3). Primer A: 5'-ATActgcagATGGCCGCTGGTGTGGCTGTCTGTG-3', the sense primer of human cofilin 1; lower case letters represent the Pst I sequence and bold letters represent the Ala to Ser mutagenesis for downstream usage. Primer Ara-C2-A: 5'-GGCATAGCGGCAGTCXXAAAGGTGGCGTAGGGATCG-3', an antisense primer that contains two Ara-C molecules (XX) and is designed to delete a 20 bp segment from the human cofilin 1 gene (Fig 3). Primer Ara-C2-B: 5'-ACTGCCCGTTATGCXXTCTATGATGCAACCTATGAG-3', a sense primer that contains two Ara-C molecules (Fig 3). Additionally, two primers containing only one Ara-C molecule were synthesized. Primer B: 5'-CAActcgagGGCTGCCAGATGCTCCAGGCAGG-3', the antisense primer of the 3' end of the human cofilin 1 gene; lower case letters represent the Xho I sequence. In the first PCR, primer A was used with primer Ara-C2-A, and primer Ara-C2-B was used with primer B. In the second PCR, primer A was used with primer B (see also Fig 2).
Figure 3. Ara-C primer assignment. Shown is the double-stranded DNA segment of the human cofilin 1 gene that was used for mutagenesis. Capital letters and arrows represent primers containing Ara-C molecules. Lower case letters represent deleted nucleotides, achieved with primer Ara-C2-A (broken line). Xs in the primers denote Ara-C molecules that replace the original deoxycytidine molecules. Note that the 5' end of primer Ara-C2-A was designed to produce an overhang that restricts ligation to the 3' segment of the PCR product (see also Fig 2) only if termination occurred after the first Ara-C molecule. The 5' segment of primer Ara-C2-B was designed to produce an overhang that restricts ligation to the 5' segment of the PCR product (see Fig 2) only if termination occurred after two Ara-C molecules. Additionally, two primers containing only one Ara-C molecule were synthesized (not shown).
Ara-C primers were phosphorylated for 30 min at room temperature using T4 polynucleotide kinase (Invitrogen, Burlington, ON), followed by inactivation at 65°C for 10 min, and were used for PCR with no further purification. PCR was performed with the corresponding primers (see above and Figs. 2 and 3).
Second PCR
Two µl of the ligase reaction were amplified by the second PCR with Taq or Pfu polymerase using the sense primer A, and the anti-sense primer B. Conditions for the second PCR were similar to those of the first PCR. The PCR products were purified (Qiagen). Alternatively, the PCR products were subjected to double digestion with PstI and XhoI followed by ligation into plasmid pcDNA3.1Zeo+ (Invitrogen). One µl of ligation reaction was used to transform 20 µl competent cells (DH5α; Invitrogen), using a short procedure: competent cells were incubated for 5 min on ice, and heat-shocked by immediate plating on pre-warmed (37°C) agar plates. Plasmids were prepared using Fast Plasmid Mini Kit (Eppendorf, Mississauga, ON), and sequenced using the T7 primer.
DNA sequencing
The products of PCR, as well as the products that were cloned into plasmid pcDNA3.1Zeo+ were sequenced in both directions, utilizing primers A and B, or primer T7, respectively (Hospital for Sick Children).
Authors' contributions
MA conceived and designed the study, performed the experiments and drafted the manuscript. NMG carried out some of the experiments, participated in critical evaluation and drafted the manuscript. MS provided general guidance, coordination and funding for the study, and drafted the manuscript. | 2014-10-01T00:00:00.000Z | 2005-09-01T00:00:00.000 | {
"year": 2005,
"sha1": "997cbba1a1c2c440fe7ea722e9dc5289b43c1759",
"oa_license": "CCBY",
"oa_url": "https://bmcbiotechnol.biomedcentral.com/counter/pdf/10.1186/1472-6750-5-23",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "997cbba1a1c2c440fe7ea722e9dc5289b43c1759",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
233639580 | pes2o/s2orc | v3-fos-license | Job Loss During the COVID-19 Pandemics and Its Psychological Consequences
Article history Received: 27 January 2021 Accepted: 26 February 2021 Published Online: 5 March 2021 The coronavirus pandemic, or COVID-19, came as an unwelcome guest that did not want to leave, and people to this day do not know for sure all the ways it affects their health and overall well-being. The year 2020 will be remembered as the one in which life almost stopped: a year full of losses that continue, from losing people dear to us, to losing jobs, opportunities, and freedom in almost every sense. This paper covers the consequences that the COVID-19 outbreak had on people, focusing on job loss and unemployment, healthcare opportunities and availability, gender discrimination in the process of losing jobs, and, most importantly, the psychological consequences people suffered due to isolation and the inability to work and to provide.
Introduction
The year 2020 will be remembered in history as the year when many lives were stopped, forever in this world, or just for a period of time. The COVID-19 pandemic, or the coronavirus pandemic, an ongoing coronavirus disease, was first identified in Wuhan, China at the end of 2019. The WHO (World Health Organization) declared it a pandemic in March 2020, and as of January 2021, the number of reported cases had risen to 93 million, with over 2 million deaths globally. [1] After the first cases were identified and shared with the world, people were confused and scared, as the virus is different from any other medicine has faced, even though COVID-19 (coronavirus disease 2019) comes from the coronavirus family of viruses, of which the most famous are MERS (Middle East Respiratory Syndrome) and SARS (Severe Acute Respiratory Syndrome).
As the virus started to spread to almost every continent and every country in an unexpectedly short time, countries had different ways of dealing with it. It is believed that the virus spreads mainly through droplets that are released when a person coughs or sneezes, which made keeping a safe distance, regularly washing and sanitizing hands, and wearing masks the main precautions. The symptoms, according to the WHO, are not very clear or universal, as people with a positive test for COVID-19 reported many different symptoms and side effects. The most common ones are fever, cough, tiredness, and loss of taste or smell; the less common symptoms involve sore throat, headache, aches and pains, diarrhea, a rash on the skin, and red or irritated eyes. All those symptoms are seen as mild and mostly require patients to isolate themselves and take care of their bodies with healthy nutrition and medicine if needed. The serious symptoms require immediate medical care, as they may lead to fatal outcomes; these involve shortness of breath or difficulty breathing, chest pain, loss of speech or mobility, and confusion. Easy transmission, sometimes even despite precautions, led to COVID-19 clusters and was one of the reasons why many countries started making decisions about lockdowns and curfews.
Shutting down cities due to the COVID-19 pandemic affected the economy of most countries, which was followed by people being dismissed, leading to recession. [2] People were losing their jobs and were stuck at home without many opportunities to find another employer who could afford new employees. People started losing family members and friends to a virus about which little was known; there was a lot of fear and anger, and losing jobs and careers on the other hand made people feel like they were losing control of their lives. Every aspect of life seemed to be crashing. People were becoming isolated, and only some were fortunate enough to have families with whom they could isolate and spend months in a closed and restricted area; many felt loneliness like never before, and the mental health of many people was affected. The number of new cases keeps going up even more than a year after the first recorded case, but in some ways people are adapting to new lifestyles, as working from home or starting their own business became an alternative. People are also getting used to avoiding social interaction with large groups in the same place, while trying to overcome feelings of loneliness. It is important to provide sufficient psychological help, counseling, and support to people during these hard times. The focus of this paper will be job loss caused by the COVID-19 pandemic and its psychological consequences, as well as the impacts it has on family relations, financial security, and well-being.
Job Loss and Gender Differences
In most countries, economic activities depend on in-person interactions. Different jobs come with different infection risks: some jobs have higher exposure, while others allow employees to work from home, which was the case for many firms. [3] Low-wage jobs come with higher exposure, as they involve more in-person interactions, so there are people with a high risk of being infected for a small amount of money and, on the other hand, people who lost their jobs. According to the OECD [4], the crisis is having the greatest impact in terms of joblessness and poverty on young people and women, as they hold less secure and lower-skilled jobs; millions of people have been given reduced hours or work from home, but millions have also lost their jobs completely.
A study by Dang and Nguyen [5] across six countries (China, South Korea, Italy, Japan, the United Kingdom, and four states in the United States) found that women are more likely than men to lose their jobs permanently, although no gender differences were noted for temporary job loss. The same study also shows that women are more afraid than men about the future of their incomes due to the COVID-19 pandemic, which is one of the reasons why they tend to increase their savings and reduce their consumption more. [4] Women are at greater risk of losing jobs as the economy crashes and remains unstable around the world, since around 58.6% of the workforce in the service sector are women. It is not strange to find gender differences, especially when the world is stopping, when shops, restaurants, and many services are closing or decreasing working hours and staff; but this is a problem that, I believe, country representatives and different organizations need to take seriously. Many women are single mothers, many have no other support, and governments need to provide policies that will support women, during and after the pandemic.
Women are not the only ones who became a high-risk category for job loss caused by the COVID-19 pandemic. Other categories involve older employees who suffer from chronic diseases and are at risk because of poor health; younger workers seeking employment, as employers have limited funds and prefer more experienced employees; self-employed workers, as their employment is always flexible and they are unprotected in terms of health care and social protection; and lastly economic migrants, as the pandemic has prevented them from reaching the destinations and countries where they were seeking employment. [6] Some firms in Bosnia offered families in which both spouses work for the same employer the chance to decide for themselves which spouse would leave the job, while the other kept a secure position for the time being. I believe this practice made it easier for many families to keep a safe income, choosing on their own the best option and which spouse would find another job in a shorter period of time. But even this model shows the presence of gender discrimination and inequality, as most women would stay at home, and men, being more preferred in business, would find another job more easily.
Job loss and Health Care
In March 2020, the International Labour Organization (ILO) [7] estimated that around 25 million jobs would be lost worldwide due to the coronavirus outbreak. Between February and April, around 22 million job losses were recorded, and from April to June there was a rebound of 7.5 million jobs. One survey collected data on people who lost their jobs or work-related income during the pandemic and the way this affected their health care and that of their families. The findings show that nonelderly adults who lost their jobs are more likely to have problems with the affordability of health care than families who did not face such economic effects. More than half of them, including those with family members who have chronic conditions and families with children under 19, avoided health care, first because of costs that could not be met, and also because of fear and concerns about exposure to the coronavirus in health institutions. [7] Losing a job directly affects the affordability of health care, especially in a pandemic, when there is a high risk of getting the virus and possibly needing proper medical care, which people cannot afford without a proper income.
Health care is being avoided out of fear of getting infected, and many doctors in many countries advised people not to come to hospitals unless there was a serious reason and need for it. But avoiding or delaying proper healthcare due to lack of income is risky for people who have ongoing medical problems, creating a high possibility of long-term consequences and, according to one study [8], contributing to the differences and already existing polarization in ethnic and socioeconomic terms. Another important factor is families with children, where parents would avoid or be unable to visit pediatricians regularly; some developmental risks may then go unnoticed without a doctor's involvement, and again this has a long-term effect on a child's overall health. Vaccinations are not being administered to many children, creating major risks of various preventable diseases [9]. Even though hospitals and doctors are preoccupied with the coronavirus outbreak and keeping people safe and well treated, other important doctor visits should not be set aside, especially for children and elderly people.
Psychological Consequences
As mentioned at the beginning, many countries imposed lockdowns, curfews, and isolation as important ways of fighting the COVID-19 pandemic. From the mental health perspective, the isolation process carries many risk factors and raises concerns about whether the isolation tactic can be a long-lasting solution. According to Safai [10], mental health professionals fear the consequences of self-isolation, such as an increased incidence of depression, increased anxiety, and domestic violence, often related to alcohol or substance abuse. Fighting the virus through self-isolation alone is a trigger for many individuals, and those who lost their jobs on top of that have one more fight to fight, most of the time by themselves. Research suggests that people who struggle to find or maintain employment often suffer substantial psychological distress as a result. [11] According to Merriam-Webster [12], work is "the labor, task, or duty that is one's accustomed means of livelihood". Work is a source of motivation and values; the way one chooses his or her career expresses one's beliefs, and people take significant meaning from the work they do. Being successful at work is what brings meaning to some people's lives; a big portion of one's life is spent working or building a career, and given such significance, we can easily conclude how important work is to the psychological well-being of a person.
A job loss represents an immense source of stress, [13] especially when it happens quickly and without much preparation, such as during a pandemic. Individuals who lose their jobs and are separated from their work start questioning their abilities in life, and if the unemployment lasts for a long period, risk factors easily follow, such as alcohol abuse, depression, anxiety, family problems, and even suicide. For some people, getting fired may have a positive side, for example an opportunity for a change, a new workplace and new colleagues, but even the job-search process has its own effect on a person's psychological well-being.
As in many other situations, unemployment during the COVID-19 pandemic is not the same for all, as some categories of people are having a harder time than others. One study covered the COVID-19 outbreak and its effects on the job security and emotional functioning of women with breast cancer; the results showed that women who were unemployed due to COVID-19 had a greater level of job insecurity than those who continued working. Another important finding is that women diagnosed with breast cancer are at higher risk of anxiety and depression disorders, as well as poorer cognitive function, and the emotional distress in these women is directly associated with a poorer quality of life in general; job insecurity only further increased the symptoms and risks mentioned above. [14] We are living in a time of extreme social changes for which not many people are prepared. The isolation and lockdowns were too much for some people, and their continuation, should the virus not be brought under control, only prolongs the pain and suffering of people who are afraid of the whole situation and of what tomorrow carries. The economic crisis, which led to job losses and increased unemployment rates, showed an increase in suicide rates, as in the case of 2008. According to one study in Bosnia, in which suicide cases during the pandemic were analyzed, the restrictive measures imposed due to the pandemic represent triggers for people fearing or facing job loss and existential issues, as well as for people with pre-existing traumas and PTSD, such as war veterans. [15] Job loss and unemployment are also directly related to family functioning; the WHO and other sources, such as mental health organizations, are trying to draw attention to the problem and the risk of child maltreatment and abuse as a consequence of the economic crash and, in particular, of losing a job during the COVID-19 pandemic. Investigations carried out during the pandemic in 2020 showed that the job loss of a parent was a predictor of psychological maltreatment and physical abuse of children, but the association between the two was highly dependent on how parents habitually coped with stressful situations and experiences in life in general [13]. Observing the collected data, researchers found that using positive cognitive framing, a technique in psychology used to challenge and change one's view of situations and thoughts, could help decrease the negative effect of parental job loss on child abuse and maltreatment in stressful times, during and after the pandemic. According to Lawson and Simon, [13] unemployment and losing a job are among the biggest life stressors, especially for people who have families; the literature mentions the "family stress model", which states that accumulated stress caused by economic adversity such as unemployment increases the risk of child maltreatment or abuse.
Conclusion
The COVID-19 pandemic brought us a feeling of powerlessness and dejection over some aspects of our lives. The stressful situations and experiences it brought will make new generations look at and live their lives differently. Restricted freedom, isolation, getting fired, and trying to survive hard times while unemployed come with particular psychological consequences. People who already suffered from depression, anxiety, and many other psychological disorders mostly saw their symptoms increase during the isolation period, and those who thought they had their lives under control started feeling like they were losing everything they value in life, such as their jobs.
The COVID-19 recession led to people being fired or having a hard time finding an employer who could benefit from new employees, and losing a secure source of income caused substantial psychological distress while making it difficult to obtain proper health care. Different categories of dismissed people were affected differently: those who were already marginalized or had pre-existing physical or mental health problems only saw their symptoms worsen.
Gender differences remain persistent even during hard times for humankind; the weakest in our society became even weaker, and people at health risk were pushed into the background as medical care focused mainly on COVID-19 patients and the treatment of the symptoms this new virus carries. As the weakest in society will always be the easiest victims of unfairness and hard times, children, as the weakest family members, also suffer and feel the consequences of their parents' career development or stagnation and of the socioeconomic changes brought about by dismissal.
Recommendations
It is only a matter of time before we all feel the true psychological consequences of the isolation and everything the coronavirus pandemic brought us. This paper described different ways the pandemic affected job loss and how every aspect of human life is part of that calculation. I believe things could have a better outcome and a better future if we act in time and give importance to problems that could become long-run ones. The psychological health of people needs to be a priority alongside physical health, especially during these hard times. I believe each country and city needs supportive policies for those in need: people who are losing their jobs, chronically ill people who are losing jobs, families of people who became unemployed, and many others. Now that meetings are restricted, online counseling, both individual and group, positive psychology, and financial and social support are all ways to deal with such obstacles. Mental health organizations have been warning us since the beginning of the pandemic: at a time when we cannot do much to change things around us, it is important to work on ourselves to maintain a healthy mentality and positivity, which can help us overcome the obstacles we face in our lives.
"year": 2021,
"sha1": "73e8ab6e7f9ccce99ce06856e6bee9c6c50c67ba",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.30564/jpr.v3i1.2855",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "e84258dcd04374911ef4253d8e4348fe2c79ae63",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Psychology"
]
} |
225528607 | pes2o/s2orc | v3-fos-license | THE INFLUENCE OF STREET TREES ON URBAN MICROCLIMATE
The microclimatic improvement is often cited as one of the benefits generated by urban trees; however, there are few studies that quantitatively address this effect. The aim of this paper was to compare the microclimate of streets with and without trees. Three samples, each containing a stretch of street with trees and a stretch without, were selected, with the treed streets composed of different species. Temperature, relative humidity and wind speed data were measured by automatic Kestrel meteorological ministations, one installed on the street with trees and another on the street without trees. The data collection period was from 9 AM to 3 PM, with a 1-minute monitoring interval, during all four seasons starting in the winter of 2011. The comparative analyses were done using Student's t-test (99%). The results indicated that the air temperature on a street with trees was on average 1.7 °C lower than on a street without trees, the relative humidity was 6.9% higher and the wind speed was 0.04 m/s lower. There was no statistical difference in temperature for the street with "ipê" in the winter and spring, and no difference in wind speed in the summer between the streets with and without trees in the samples with "ipê" and "tipuana". It was concluded that street trees provide a pleasant microclimate, and this influence was statistically significant.
INTRODUCTION
The development of urban spaces without proper planning attributes unhealthy characteristics to a city, mainly due to anthropic actions and their effects (MARTINS et al., 2011). In view of the impacts arising from the urbanization process, one of the ways to promote improvement in the quality of life of a population in cities is to carry out correct planning and planting of trees, whether in public or private areas (BOBROWSKI; BIONDI, 2012).
It is unquestionable that trees play a vital role in the well-being of urban communities. Their unique ability to control many of the adverse effects of the urban environment contributes to a significant improvement in the quality of life (VOLPE-FILIK et al., 2007). In addition to promoting quality of life, vegetation can favor the livability of cities, improving the landscape, environmental quality and people's own health (JIM et al., 2015).
The various functions that trees play in cities, their value, costs, benefits and the very diverse influence that they have in urban areas have been widely discussed (DONOVAN; BUTRY, 2010). Thus, cities around the world have promoted urban forests in recent decades, supported by scientific theory and empirical evidence, as a way of keeping their citizens healthy, as well as improving environmental and economic conditions (JIANG et al., 2015). Many studies point to the use of vegetation as a mitigating factor for climatic problems occurring in cities. For Chang and Li (2014), the urban forest can be used as a measure to cool portions of urban areas, reducing the intensity and magnitude of the negative impacts of heat islands. Yu and Hien (2006) emphasize that when vegetation is well distributed, the energy balance of the whole city can be modified: by adding more evaporative surfaces, more absorbed radiation can be dissipated in the form of latent heat and the urban temperature can be reduced. In this context, Huang et al. (2008) stated that studies on the urban climate should be stimulated to assist in the various decisions of environmental planning and rehabilitation of urban areas.
The positive advances in the microclimate are quite significant when there is a project for afforestation of an area, since in addition to performing the mechanism for filtering atmospheric gases, the presence of tree species can also be responsible for the social well-being of the population (BENATTI et al., 2012). Thus, assuming the hypothesis that afforestation provides better microclimate conditions for the city, the present study aimed to quantitatively compare the microclimate of streets with trees and the microclimate of streets without trees.
MATERIAL AND METHODS
The research was carried out in the city of Curitiba, located on the First Planalto of Paraná, at an average altitude of 934.6 m. The city's zero milestone is located at Praça Tiradentes, at latitude 25º25'40"S and longitude 49º16'23"W. The climate is humid subtropical mesothermal, Cfb according to the Köppen climate classification, without a dry season, with cool summers and winters with frequent frosts (INSTITUTO DE PESQUISA E PLANEJAMENTO URBANO DE CURITIBA (IPPUC), 2011).
The climate of Curitiba presents a mild summer and moderate winter with some more rigorous days. Average temperatures are 19.7ºC in summer and 13.4ºC in winter. Rainfall has an annual average of 1,419.91mm (IPPUC, 2011).
According to Bobrowski, Ferreira and Biondi (2011), the tree species most commonly found on public streets in Curitiba are: H. chrysotrichus, T. tipu, L. pacari, and P. rigida.
Three samples were established in Curitiba, named Alto da Rua XV, Hugo Lange and Bacacheri. Each sample presents a stretch of street with trees next to a stretch of street without trees (Figure 1).
The sample of Alto da Rua XV is formed by a section of Mal. Deodoro street without trees and Fernando Amaro street with Tipuana tipu. The urban configuration of both streets is similar. The stretch with trees features 32 large trees, distributed on both sides of the roadway with 9 m tree spacing, providing 100% shading to the street. The individuals have the following characteristics: average height of 20 m, circumference at breast height (CBH) of 200 cm, average bifurcation height of 8 m and average canopy area of 200 m².
The Hugo Lange sample is formed by a stretch of Augusto Stresser street without trees and a stretch of Dr. Goulin street, which is afforested with Handroanthus chrysotrichus. The urban configuration of both streets is also similar. The stretch with trees has 26 medium-sized trees, distributed on both sides of the roadway with 7 m tree spacing, providing 47% shading to the street. The individuals have the following characteristics: average height of 8.5 m, average CBH of 53 cm, average bifurcation height of 3 m, and average canopy area of 24 m².
The Bacacheri sample is formed by a section of Estados Unidos street without trees and another with Lafoensia pacari and Parapiptadenia rigida. The section with trees has 9 trees of Lafoensia pacari (medium size) on the odd side, with a spacing of 9 m, and 12 trees of Parapiptadenia rigida (large size) on the even side, with a spacing of 8 m, which provides 74% shading to the street. The Lafoensia pacari individuals have the following characteristics: average height of 8.5 m, CBH of 86 cm, average bifurcation height of 2 m and average canopy area of 20 m². The Parapiptadenia rigida individuals, in turn, have the following average characteristics: height of 14 m, CBH of 130 cm, bifurcation height of 3.5 m, and canopy area of 80 m².
Two automatic meteorological ministations of the Kestrel® brand were used to analyze the influence of street trees on the urban microclimate, each fixed on a tripod with the sensors at 1.50 m height. According to the manufacturer's description, the Kestrel® has a temperature accuracy of ±1°C over an operating range from -29°C to 70°C. The relative humidity values have an accuracy of ±3% over a detection range from 5 to 95% (non-condensing). The wind speed has an accuracy of 3% of the reading value, between 0.6 m/s and 40 m/s. The equipment was positioned in the central portion of the block, on the south sidewalk of the east-west streets (Mal. Deodoro, Fernando Amaro, Augusto Stresser and Dr. Goulin) and on the west sidewalk of the north-south street (Estados Unidos), in order to reduce the interference caused by the apparent movement of the sun. The collection of meteorological variables was carried out through monitoring campaigns in which a team of researchers remained close to the devices throughout the collection period, for security reasons and to prevent possible human interference.
The influence of street trees on the urban microclimate was analyzed using the variables air temperature (°C), relative humidity (%) and wind speed (m/s). The meteorological variables for each sample were monitored on different days; thus, on each collection day, one device remained on the street with trees and the other on the street without trees. The procedure was repeated in the four seasons. As a way to better characterize the seasons, the monitoring campaigns were scheduled in the central two weeks of each season (Table 1), but the daily weather condition (which cannot be controlled) varied between clear and cloudy skies. Although only three days per season (three samples) were needed, two weeks were stipulated as a safety margin against possible unforeseen events such as rainy days, equipment failures and an insufficient number of researchers to go into the field. Monitoring was carried out in the winter and spring of 2011 and in the summer and autumn of 2012, and the data collection period was from 9 am to 3 pm (Brasília time), corrected to 10 am and 4 pm during daylight saving time, with a 1-minute monitoring interval, which generated a set of 360 data points per day on each street.
For a general analysis, all values recorded every minute on the streets with trees (4320 data points) were compared with all values found on the streets without trees (4320 data points) using Student's t-test at 99% significance to assess possible differences in street weather conditions. Next, a more detailed analysis was carried out in which the values found on the streets with trees (360 data points) and without trees (360 data points) were evaluated separately for each day of collection using Student's t-test (99% significance level). An analysis of variance (ANOVA) was applied in both analyses before applying the means comparison tests.
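A minimal sketch of this comparison is given below, assuming two arrays of per-minute readings and an equal-variance two-sample Student's t-test at the 1% significance level; the exact software and variance assumptions used by the authors are not stated here.

```python
import numpy as np
from scipy import stats

def compare_streets(with_trees, without_trees, alpha=0.01):
    """Two-sample Student's t-test between per-minute readings of one
    variable (e.g., air temperature) on treed vs. untreed streets.
    Returns the t statistic, the p-value and a significance flag."""
    t, p = stats.ttest_ind(np.asarray(with_trees, float),
                           np.asarray(without_trees, float),
                           equal_var=True)
    return t, p, p < alpha  # True -> difference significant at 99%
```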
RESULTS
The streets with trees had a different microclimate from the streets without trees, demonstrating that street trees have a significant influence on the meteorological variables of the city's microclimate. The streets without trees generally had higher temperature and wind speed values and lower relative humidity values (Table 2). The statistical analysis applied to the temperature values showed a significant difference between the averages of the streets with trees (19.2°C) and without trees (20.9°C). The average relative humidity of the streets with trees (56.8%) also differed from that found in the streets without trees (50.9%). The same occurred for wind speed, where the average of the streets with trees was 0.66 m/s and that of the streets without trees was 0.70 m/s. Thus, it was possible to demonstrate statistically that street trees influence the local microclimate. The averages of the three samples analyzed in each season of the year revealed that the temperature of a street with trees is 1.7°C lower than that of a street without trees, the relative humidity is 6.9% higher and the wind speed is 0.04 m/s lower.
The effect that street trees have on the microclimate can be demonstrated in a general way (Table 2). However, this study was carried out across the four seasons and with different species, which requires a more detailed analysis. Some specificities are therefore observed when the values found in the streets with and without trees are analyzed separately for each collection day (Table 3).
Table 3. Average microclimatic variables and statistical analysis (t-test) between the streets with and without trees in each sample and season.
The statistical analysis showed a significant difference between the average temperature of the streets with and without trees in all seasons of the year in the Alto da Rua XV and Bacacheri samples. This occurred even when the difference was very small, such as in winter in the Alto da Rua XV sample (0.4°C) and in autumn in the Bacacheri sample (0.2°C). Only in the Hugo Lange sample, in the winter and spring seasons, was there no statistical difference between the average temperatures of the streets with and without trees: the average temperature of the two streets was equal in winter (13.8°C), and spring was the only occasion on which the average of the street with trees (21.7°C) was slightly higher than that of the street without trees (21.5°C).
The statistical analysis showed a significant difference between the averages of the streets with and without trees for the relative humidity values in all seasons and in all samples. For wind speed, there was no statistical difference between the averages of the streets with and without trees in the summer season in the Alto da Rua XV and Hugo Lange samples, where the values were equal between the streets.
DISCUSSION
The microclimate of the streets with trees differed from that of the streets without trees, with the latter showing higher temperature and lower relative humidity values. Similar results were found by other authors in related research assessing the influence of green areas on the urban microclimate.
In a park in Nagoya (Japan), Hamada and Ohta (2010) found temperature differences of 1.9°C in the summer and -0.3°C in the winter. In Manchester (England), it was found that although the shade of a tree reduces summer air temperatures by only 1 or 2°C, the perceived temperatures are significantly lower in the shade on hot and sunny days (ARMSON et al., 2012). In the city of Curitiba, in an area of Mixed Ombrophilous Forest, Soldera et al. (2014) found that the external temperature was on average 3.9°C higher and the relative humidity 19.5% lower than in the interior of the forest. In Bosque Gutierrez, another remnant of Mixed Ombrophilous Forest, the temperature was 4.4°C lower and the relative humidity 17.8 units higher (SILVA et al., 2014). In a planned green area, Praça Alfredo Andersen, the average temperature was 1.4°C lower under the shade of trees than in the surroundings, and the relative humidity was 5 units higher (VIEZZER et al., 2015).
The authors of all of these works quantitatively verified the influence that vegetation has on the microclimate. However, it is noteworthy that studies referring specifically to street trees in Brazil are non-existent, so the real influence that planting trees along public roads has on the microclimate of a city is still unknown. Bearing in mind that the studies mentioned refer to large green areas in cities, the values found in the present research are particularly important, since the influence of street trees on the microclimate was similar to that verified for green areas.
According to most of the works mentioned, the decrease in temperature provided by a green area ranged from 1 to 4°C, whereas the decrease caused by the street trees in this study was on average 1.7°C, reaching up to 3.7°C. With regard to relative humidity, those studies showed values between 5 and 15 units higher in green areas, while the present study found that street trees provide an average increase of 6.9% in relative humidity, reaching up to 11.4%.
A similar study was found in Dresden (Germany), where the air temperature in streets with trees was 0.9 to 2.6°C lower than in an environment without vegetation and the relative humidity was higher, ranging from 0.6 to 6.4 units (GILLNER et al., 2015).
The results found for wind speed do not allow a trend to be affirmed, although the average value indicates that streets with trees have slightly lower wind speeds (by 0.04 m/s), even though this difference is small. In addition, variations are observed when the values are analyzed separately for each collection day.
Wind speed is a little-used variable in studies addressing the influence of vegetation on the urban microclimate, as winds in cities are influenced by different urban structures and equipment, which makes their analysis difficult. However, the results found in the summer season stand out: the vegetation did not act as a barrier to the passage of winds on the studied streets, so in the season when temperatures are highest, the presence of wind created a more pleasant microclimate.
The microclimatic benefits provided by street trees found in this study were evident. Mahmoud (2011) stated that isolated trees distributed with wide spacing (as is typical of an urban street) do not have a significant effect on cooling the air; however, it was possible to verify that when trees are planted close together in a street alignment, this limitation does not hold. Many studies suggest that the use of small clusters of trees (parks, forests and squares) is more efficient for cities (SHASHUA-BAR et al., 2010). This study nevertheless showed that the linear planting of grouped trees along streets is a promising alternative for improving the microclimate of a city.
Even the analysis carried out separately for each collection day demonstrated the positive and significant influence that street trees have on the microclimate. The absorption of radiant energy by the chloroplasts is a prerequisite for photosynthesis (LARCHER, 2006), and it is estimated that between 60 and 75% of the solar energy reaching vegetation is consumed in physiological processes, as plants do not store heat in their cells (BERNATZKY, 1980). In this sense, the barrier provided by the tree canopy prevents the penetration of most of the solar radiation during the day. This lower incidence of solar radiation implies less soil heating and, consequently, less long-wave radiation emission and less heating of the air in the space between the soil and the treetops (HERNANDES et al., 2002). Thus, there was clearly a greater microclimatic benefit in the sample with a higher percentage of shading offered by the treetops and a smaller benefit in the sample with a lower proportion of shading.
Information regarding the entry of radiation into the wooded environment helps explain the unexpected results found in the Hugo Lange sample, where the street is afforested with Handroanthus chrysotrichus. In addition to this species having a thin crown, the individuals were planted with greater spacing between trees, leaving the crowns isolated. Thus, the street with trees placed little restriction on the radiation entering the environment, interfering little in the energy balance and thereby resembling the street without trees, especially in the seasons in which this species undergoes marked changes in its phenological stage.
Finally, to understand how vegetation influences the microclimate, Oke (1989) stated that vegetation cools built-up areas by two means: the shading generated by vegetation, which reduces the conversion of radiant energy into sensible heat and consequently reduces the surface temperatures of shaded objects; and evapotranspiration at the leaf surface, which cools the leaf and the surrounding air through latent heat exchange.
CONCLUSIONS
▪ It is concluded that the microclimate of a street with trees is milder than that of a street without trees, with the difference between the studied environments (with and without trees) being statistically proven. ▪ The streets with trees showed lower temperature (on average 1.7°C lower) and higher relative humidity (on average 6.9% higher) values. It was not possible to determine a trend regarding wind speed. ▪ These results emphasize the importance of street trees for cities, as the decrease in temperature and the increase in relative humidity provided by this vegetation are similar to the values found in green areas. In addition, it is worth highlighting the importance of a proper choice of species for each specific location, considering the seasonality of the plant and other particularities of the environment, in order to accentuate the benefits provided by urban forests.
ACKNOWLEDGMENTS
To the Fundação Araucária for Scientific Support and Technological Development of Paraná for financing the purchase of equipment. | 2020-07-16T09:07:29.753Z | 2020-07-10T00:00:00.000 | {
"year": 2020,
"sha1": "ee4a2946cb648f1f783997b406708b24d0bd203c",
"oa_license": "CCBY",
"oa_url": "https://revistas.ufpr.br/floresta/article/download/62194/41136",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "4449cf9c9e44edbef8de5033eb5f3f9038ba4e22",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
119350806 | pes2o/s2orc | v3-fos-license | The hidden role of coupled wave network topology on the dynamics of nonlinear lattices
In most systems, the division of the system into interacting constituent elements gives rise to a natural network structure. Analyzing the dynamics of these elements and the topology of these natural graphs gave rise to the fields of (nonlinear) dynamics and network science, respectively. However, just as an object in a potential well can be described as both a particle (real space representation) and a wave (reciprocal or Fourier space representation), the `natural' network structure of these interacting constituent elements is not unique. In particular, in this work we develop a formalism for Fourier Transforming these networks to create a new class of interacting constituent elements $-$ the coupled wave network $-$ and discuss the nontrivial experimental realizations of these structures. This perspective unifies many previously distinct structures, most prominently the set of local nonlinear lattice models, and reveals new forms of order in nonlinear media. Notably, by analyzing the topological characteristics of nonlinear scattering processes, we can control the system's dynamics and isolate the different dynamical regimes that arise from this reciprocal network structure, including the bounding scattering topologies.
INTRODUCTION
The study of transport within nonlinear lattices is a seminal problem in computational physics, thermal transport, and applied math [1][2][3][4][5][6][7][8][9][10][11][12][13]. Despite the variety of researchers in the field and the role of this topic as the origin of computational physics, many outstanding problems remain [3][4][5][6]. This problem is in part compounded by the discrepancies between the most commonly considered models. While a great many nonlinear lattices have been considered for various applications (including the φ⁴, Toda, FPUT (Fermi-Pasta-Ulam-Tsingou), FK (Frenkel-Kontorova), and discrete sine-Gordon lattices [1,[14][15][16][17][18][19][20][21][22]), they are each adapted to some special case. The absence of a unifying framework of nonlinear lattices means that it is not always simple to find comparable lattices to help explain the origin of the phenomena observed in these structures [6,7,[23][24][25][26][27][28][29][30]. This problem is compounded when the results of the nonlinear lattice toy models are applied to understand experimental data, as without a proper understanding of the origins of the nonlinear transport effects it is difficult to say how well any given analogy between a toy model and an experimental system holds. As such, widening the set of possible nonlinear lattices beyond these special cases is likely to extend the utility of this field to new classes of problems.
In this paper, we show how the previously considered nonlinear lattices contain an implicit network structure which constrains the potential anharmonic couplings to a narrow set of options. By modifying this implicit network we can construct a variety of novel nonlinear lattices and use them to elucidate the connection between the form of the network and wave dynamics in a nonlinear lattice. In particular, we show how these new topologies can give rise to greater control of the dynamics of individual modes and provide bounds on the possible dynamics of the lattice. Finally, we conclude by examining potential realizations of these modified networks.
COUPLED WAVE NETWORK FORMALISM
To begin, let us consider the implicit network structure of an arbitrary linear system divided into N interacting elements. Assuming that it can be represented through simple Hamiltonian dynamics, we parameterize the system by an equation of motion in which m_α is the effective mass of the α-th element of the system, u is an arbitrary variable defining the state of a given site (typically displacement from equilibrium for mechanical systems), F is a so-called onsite potential that measures the self-interaction force (typically zero for mechanical systems with translational symmetry), k is the effective inter-site interaction force, and A is the adjacency matrix. The adjacency matrix will play a special role in our formalism, so it is worth emphasizing that whenever two elements are coupled (interacting), A_αβ = 1, and whenever they are decoupled, A_αβ = 0. Moreover, for every adjacency matrix we can define a network. Each row or column corresponds to a vertex of a graph and each non-zero value of A corresponds to an edge (if A ≠ A^T the edges are directed and define a digraph). When k is nonuniform, the product k_αβ A_αβ defines a weighted graph.
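As a concrete illustration of this bookkeeping (not code from the original work), the following sketch builds the adjacency matrix A and the weighted graph k_αβ A_αβ for a nearest-neighbour chain with periodic boundaries; the site count and spring constants are placeholder values chosen only for demonstration.

```python
import numpy as np

N = 8                      # number of real-space sites (placeholder value)
k = np.ones((N, N))        # uniform inter-site force constants k_ab

# Adjacency matrix: A[a, b] = 1 when sites a and b are coupled, else 0.
# Here: nearest-neighbour coupling with periodic boundary conditions.
A = np.zeros((N, N), dtype=int)
for a in range(N):
    A[a, (a + 1) % N] = 1
    A[a, (a - 1) % N] = 1

weighted = k * A           # weighted graph with edge weights k_ab * A_ab
print(A)
print(weighted)
```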
Assuming that F is also linear, this system can be diagonalized by taking a Fourier Transform to give the spectrum of eigenmodes ũ(q) with eigenvalues ω_{q,p} (where q is the wave vector and p is the branch number of the eigenmode − to simplify some later notation we define ω(−q) ≡ −ω(q), which is valid because ω's sign is arbitrary). In particular, when these equations represent a mechanical system, the eigenmodes are mechanical waves (phonons) that propagate through the system without interacting, i.e. a phonon gas. If either F or k became nonlinear, though, this would introduce anharmonic couplings between the phonons and imply that these waves are no longer the eigenmodes of the system. Without loss of generality, we consider a nonlinear coupling whose renormalized Fourier Transform is W(q, q′), with the delta function for a system with periodic boundary conditions enforcing crystal momentum conservation (i.e. conservation to within a reciprocal lattice vector for each component of q). This equation of motion can then be rewritten into a more evocative reciprocal-space form. Comparing equations 2.2 and 2.5 reveals a fundamental correspondence between a set of real space elements with self-interaction F/m and nonlinear coupling (k + γ(∆u))/m and a set of reciprocal space elements with self-interaction |ω(−q)ω(q)| and nonlinear coupling W(q, q′). Most significantly, in each case the coupling term is proportional to an adjacency matrix A, meaning that in both real and reciprocal space we can define network structures. In the mechanical model, the real space graph defines a series of masses (vertices) coupled by nonlinear springs (edges), whereas the reciprocal space graph defines a series of linear waves (vertices) coupled by nonlinear scatterings (edges). It is this latter representation of the system that we term a Coupled Wave Network (CWN), which is formally equivalent to taking a Fourier Transform of a dynamical system's graph. Notably, any combination of nonlinear scatterings (i.e. any sum over Feynman diagrams) is equivalent to taking a directed walk within the CWN. As we shall now confine our attention to reciprocal space, tildes on ũ shall be suppressed to emphasize the similarity between the real and reciprocal space networks. The representation of nonlinear wave scattering as a network has been independently developed in continuous media [31][32][33][34][35]. However, due to the continuity of the systems they studied, the correspondence between real and reciprocal space network representations was not noted in these works. Similarly, note that this formalism is distinct from other uses of networks in physics, such as the representation of entanglement in tensor networks [36] or the representation of real space interactions, such as in Ref. [37]. In the former, networks are used as a graphical representation of entangled many-body states as a convenient computational tool. As in Feynman diagrams, every vertex in the network represents a tensor coupling multiple wave functions (edges). In the latter, the network serves as a generalization of a free-body diagram, showing how disparate points in real space interact with each other. In contrast to these two approaches, every vertex in our CWN is a wave and every edge is an interaction.
While this derivation reveals the existence of CWNs, we must still determine their form. From equation 2.4 we see that valid (crystal momentum conserving) forms of phonon scattering are governed by modular arithmetic. Since modular addition defines a group and every group is closed by definition, there must be a combination of ancillary modes that links any two given modes for an FPUT-α coupling of arbitrary strength. Thus an edge must exist between any pair of modes and the FPUT-α lattice is equivalent to a complete graph (CG) CWN (a network where every vertex shares an edge with every other vertex, see Fig. 1a).
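This closure argument can be checked directly by enumerating the momentum-conserving triads. The sketch below is a simplified illustration rather than the paper's code (the mode count and the inclusion of the q = 0 mode are choices made here): it builds the CWN adjacency for a rank-3 coupling on an N-mode ring and confirms that the resulting graph is complete.

```python
import numpy as np
from itertools import product

N = 24   # number of modes, one vertex per allowed wave vector (mod N)

# For a rank-3 coupling, a triad (q1, q2, q3) is allowed whenever
# q1 + q2 + q3 = 0 (mod N); each allowed triad contributes a triangle
# motif to the coupled wave network (CWN).
A = np.zeros((N, N), dtype=int)
for q1, q2 in product(range(N), repeat=2):
    q3 = (-q1 - q2) % N
    for a, b in [(q1, q2), (q2, q3), (q1, q3)]:
        if a != b:
            A[a, b] = A[b, a] = 1

# Every pair of distinct modes ends up connected: the CWN is a complete graph.
off_diagonal = A[~np.eye(N, dtype=bool)]
print("complete graph:", bool(off_diagonal.all()))
```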
To validate our claim that the anharmonicity employed in equation 2.2 does not affect the generality of our derivation, let us consider the following variations. A similar derivation holds for anharmonic on-site potentials (except for the elimination of factors of the frequency in the anharmonicity term). Moreover, as this relation holds true for every integer-rank anharmonicity and the addition/strengthening of links to a CG still yields a CG, any anharmonicity with a valid Taylor expansion still yields a CG. Nor does the spatial dimension affect this, since each basis vector in the Brillouin zone obeys the same modular addition constraint. Furthermore, adding a harmonic spatial modulation to the anharmonicity only shifts the modular addition constraint from equaling 0 to equaling the modulation's wavelength, which still defines a group and still gives a CG CWN. Modifying the harmonic lattice structure would shift the values of ω_q but leave the CWN constraint unchanged. Changing boundary conditions would affect the CWN constraint, but as the most common conditions are either periodic or isolated (which is still a modular arithmetic constraint, albeit a more complex one [40,41]), the CG CWN appears to be a common feature of the most popular nonlinear lattice models. Thus we see that the most common nonlinear lattice models all correspond to the same CWN, with their only difference being the strength of the couplings in this topology.
Given the universality of the CG CWN, we shall focus our attention on the FPUT lattice and further specialize to the one-dimensional FPUT-α lattice, where s ≡ 3. The motif (the CWN element corresponding to a single coupling term) associated with a nonlinearity of rank s is an s-vertex complete sub-graph, so the FPUT-α lattice is associated with triangle motifs. While the dynamics of such a lattice are quite well studied [3], they will serve as an important reference, and as such we reproduce the standard reciprocal space dynamics in Fig. 1b.
Note that the dynamics of this simulation appear significantly noisier than the standard simulation, as the reciprocal space representation of the equation of motion has different error terms than the real space representation [40] (indeed, real space simulations with lower accuracy show no such noise). Rigorously, we can be sure that this noise is a numerical artifact, as it produces fluctuations in the total energy at a characteristic frequency greater than the highest frequency phonon mode. By confining our attention to qualitative aspects of the system which operate at a longer time scale than these fluctuations, we can ensure that this noise does not affect the validity of our qualitative results. For quantitative results, we make use of the separation of time scales between the exact result and the numerical artifact noise and filter out the Fourier components corresponding to these fluctuations via a numerical low-pass filter (the modifications of our simulations to facilitate this filtering are discussed more fully in Sec. 4). This filtering removes almost all of the total energy fluctuation, ensuring that our results are both quantitatively and qualitatively rigorous.
We generate these (and all subsequent) numerical results using a 24-site FPUT-α lattice with m = k = a = 1, γ = 1/4, initial conditions in which E_coh denotes the energy of an initially coherent population, and numerical integration performed via the Dormand-Prince adaptive step-size Runge-Kutta 4(5) algorithm (which maintains fixed accuracy by modifying the size of the time steps). Integration was carried out for one recurrence time of the FPUT-α lattice. To match the initial conditions of the real space standard FPUT-α dynamics, E_coh = 2N, q_0 = 1, and k_B T = 0 unless otherwise noted. One important feature of the FPUT-α lattice is that energy flows through a specific series of modes when spreading out from the initial coherent excitation [2,41]. These initially excited modes on this relaxation pathway are particularly important, as they serve as the gateways that limit the flow of energy to the entire population [42,43]. When the thermal energy is negligible compared to the coherent excitation, this gateway mode is twice the frequency of the coherent mode, as second harmonic generation (SHG) dominates (for a generic FPUT lattice of rank s, the dominant process is s − 1 HG). For an exact picture of these dynamics, see the video of the FPUT-α CWN's dynamics in the online supplement.
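A minimal real-space integration along these lines can be set up with an adaptive Runge-Kutta solver, as sketched below. This is an illustrative reconstruction rather than the simulation code used here: it adopts the stated lattice size and coupling (N = 24, m = k = 1, γ = 1/4) with scipy's RK45 (Dormand-Prince) integrator, but the initial amplitude, integration time and the reciprocal-space formulation of the equations of motion are assumptions made for the example only.

```python
import numpy as np
from scipy.integrate import solve_ivp

N, gamma = 24, 0.25          # lattice sites and FPUT-alpha coupling strength

def fput_alpha(t, y):
    """Equations of motion for a 1D FPUT-alpha chain (m = k = 1, periodic)."""
    u, v = y[:N], y[N:]
    d_right = np.roll(u, -1) - u          # u_{i+1} - u_i
    d_left = u - np.roll(u, 1)            # u_i - u_{i-1}
    force = (d_right - d_left) + gamma * (d_right**2 - d_left**2)
    return np.concatenate([v, force])

# Excite the lowest-|q| mode; the amplitude here is an arbitrary placeholder.
u0 = np.sin(2 * np.pi * np.arange(N) / N)
y0 = np.concatenate([u0, np.zeros(N)])

# Dormand-Prince adaptive Runge-Kutta 4(5) integration.
sol = solve_ivp(fput_alpha, (0.0, 1000.0), y0, method="RK45",
                rtol=1e-9, atol=1e-9)
print(sol.y.shape)           # (2N, number of accepted time steps)
```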
TAILORED CWN TOPOLOGY
While the previously studied nonlinear lattices have been constrained to be CGs in the CWN framework, other topologies are possible by relaxing the constraint of short-range (local) interactions. In this section we show how the modification of the CWN away from a CG creates new opportunities to control the dynamics of specific modes.
FIG. 1: Color indicates mode number. A coherent phonon population is initially injected in the |q| = 1 modes and the system is left in isolation. As time progresses, anharmonic interactions cause the energy to flow and excite the |q| > 1 modes. This excitation shows a clear hierarchy, with higher energy modes only excited once the lower energy ones reach a critical amplitude. As in the standard FPUT paradox, this excitation of the higher energy modes is clearly oscillatory, with energy flowing out of the |q| > 1 modes and back into the |q| = 1 modes after a critical period (the recurrence time).
A. Gateway Control
We begin by returning to the observation that ended our previous section − that SHG is the dominant gateway mode for an initially coherent phonon population. To illustrate this effect we consider an FPUT-α lattice with a coherent excitation in the |q| = 1 modes and remove the SHG coupling that links them to the |q| = 2 modes (Fig. 2a). Simulations at k_B T = 0 (Fig. 2b) show no excitation of the other modes and the system appears entirely harmonic. This is to be expected, though, as while the SHG coupling scales like E_coh, the other coupling modes scale like √(E_coh k_B T), and should therefore be trivial at k_B T = 0. To detect the effect of modifying the CWN, then, we increase the temperature to k_B T = E_coh/32 and repeat the simulations for a modified FPUT-α (Fig. 2c) and regular FPUT-α (Fig. 1c). Comparing these two figures reveals a reduction of the excitation of higher order modes by at least a factor of two (and up to an order of magnitude) due to the suppression of SHG for the fundamental mode. (The |q| = 2 mode in particular never exceeds its initial thermal energy, as it only acts as a donor to the higher order modes.) As total energy is conserved, this also implies that significantly more energy is stored within the fundamental mode than in the higher order modes, especially when compared to the typical FPUT dynamics (energy loss decreased by over an order of magnitude). So by merely cutting a single edge of the CWN (suppressing a single scattering pathway), we have dramatically reduced the decay rate of the coherent excitation (even at non-zero temperature).
B. Pathway Engineering
The ability to control the dynamics of specific modes by eliminating certain scattering pathways can extend beyond the simple elimination of the first gateway mode of the previous subsection. Instead, by cutting specific subgraphs of the CG CWN, we can tailor the combination of activated modes. That is, by only allowing subgraphs that incorporate specific parts of the standard relaxation pathway, we can direct the ordered excitation of an arbitrary combination of modes. So, for example, the excitation of the |q| = 2 modes via SHG from |q| = 1 can lead to the excitation of the |q| = 3 modes. Via the modular arithmetic that governs mode conversion, similar rules exist for other combinations of modes or coherent excitations of modes other than the |q| = 1 ones. To illustrate this possibility, we engineer two different relaxation pathways and use them to tailor the excitation of specific phonon modes. In the first (Fig. 3a), we can excite |q| = 2, 3, 5 from the initial excitation (Fig. 3b), while in the second (Fig. 3c) we excite |q| = 2, 4, 6 (Fig. 3d). Note that, beyond the necessity of sharing the initial coherent population and SHG pathway (|q| = 1, 2) modes, these two pathways lead to the excitation of completely different phonon populations. Note also that this is distinct from the existence of the even-mode sub-manifold of the FPUT lattice [41], as the initial excitation of |q| = 1 should lead to the excitation of all other modes in the standard FPUT lattice (Fig. 1b). The sub-manifolds considered in Ref. [41] are much less stable than the engineered excitation pathways that we consider here, as the sensitivity to initial conditions that those sub-manifolds possess is eliminated through the tailoring of the allowed phonon couplings.
C. Effects of Topological Distance
As a final consideration of how the form of the CWN affects the dynamics of the excited phonons, we shall consider the effect of vertex-vertex distance in the CWN (or topological distance, to differentiate it from real space separation). By topological distance we refer to the minimum separation between vertices in our network, in particular their separation from the initial coherent excitation at |q| = 1. Since the further a mode is from the initial excitation, the more scattering will be required to excite it, this topological distance might appear to be equivalent to the number of scattering events. However, this is not the case, which we can show by contrasting the dynamics of the |q| = 4 mode when excited by two different scattering pathways (Fig. 4a). While the degree of nonlinearity is the same in both pathways, the first always involves the inclusion of a new |q| = 1 phonon and therefore is of topological distance one, while the second includes only the products of the initial SHG scattering in its final scattering event and is therefore of topological distance two. Contrasting the dynamics of these anharmonically generated |q| = 4 phonons in Fig. 4b reveals that their time dependence is qualitatively different too. The mean energy and recurrence time are greater in the topological distance one case and the two cases have distinct beat patterns. Thus, while these two pathways are equivalent in terms of strength of anharmonicity, they are distinct in terms of dynamics, and we can associate this distinction with their different topological distances. Similar situations should obtain for other scattering pairs, although as we shall see in Section 4 A the topological distance has very strong effects for higher topological distances that obscure the subtler effect that we observe here.
TOPOLOGICAL CHARACTERIZATION OF ENERGY TRANSFER IN RANDOM CWNS
One advantage of the CWN framework over that of Feynman diagrams or scattering is that networks possess a number of new characteristics to describe their properties. This allows us to find new correlations between the response of nonlinear lattices and the topological properties of the CWN. To do this, it is necessary to consider a great many networks with the same topological properties, rather than isolating particular topologies that reveal novel effects (as we did in the previous sections). We shall consider two specific topological characteristics of wave coupling, the topological distance discussed previously and the weighted vertex degree (or vertex strength). When calculating the mean of the harmonic modal energies of this section we shall filter out the high frequency fluctuations that violate energy conservation, as they are a numerical artifact. Since this filtering can introduce ringing at discontinuities, for each random network we select an integration domain of that network's recurrence time (whereas previously we used the FPUT-α's recurrence time).
A. Dependence on Topological Distance
To determine the correlations between topological distance and transfer of energy from the coherent excitation to other modes, we consider the time-averaged energy of each mode in thirty-six random lattices. These random lattices are constructed as reciprocal subgraphs of the CG CWN (i.e. the FPUT-α lattice), essentially constraining the subgraphs to complete sets of triangle motifs linking any six vertices related by reciprocity. Reciprocal motifs were added to the graph randomly until all vertices were connected to ensure that the distance was well defined (by powers of the adjacency matrix).
To ensure that non-trivial dynamics would exist at 0 K, the SHG scattering of the fundamental mode was always included in the CWN (see Sec. 3 A). Specific additional couplings would be included as well (since most relaxation pathways occupy only a small part of parameter space), together forming a seed CWN to which the random couplings would be added. The results of these seeded random graphs are shown in Fig. 5a, where each point is the mean energy of a mode in a given graph. Notice that mean amplitude decays quickly with distance, with only d_T ≤ 2 showing any appreciable amplitude. A semilogarithmic plot of this data (Fig. 5b) reveals that the energy decays approximately like E(q, d) = E(q, 0) exp(−ξ(q) d⁴). Note also that there is a significant fraction of the d_T = 1 domain that falls above the error bars. This is due to the FPUT-α relaxation pathway, as the presence or absence of couplings in the random lattices that fall along this pathway creates a bimodal distribution of amplitudes for d_T = 1. The comparatively smaller energy that lies above the error bars of d_T = 2 also corresponds to this pathway, but since it requires active combinations of the d_T = 1, 2 pathways, it is comparatively rarer and weaker. As for the coherent excitation itself (d_T = 0), note that there is considerable variance around the mean value, almost as much as the d_T = 1 relaxation pathway induced variance. This is principally due to the conservation of energy within the lattice, as decreased (increased) conduction to the higher modes results in greater (lesser) energy confined to the fundamental mode. This suggests a certain degree of tunability to the coherent population's thermal relaxation as a function of specific topologies, but it also reveals a very clear shielding of phonon energy transfer within the CWN structure. Unlike in real space, where the FPUT lattice displays a simple power law scaling with the separation between sites [44][45][46], the energy in reciprocal space remains highly localized. The FPUT-α lattice coupling, in fact, produces quite small energy flux for d_T > 2, an effect that has been masked in other studies of the nonlinear lattice as the CG CWN necessarily has d_T ≤ 1 for any pair of modes. Given this clear restriction in energy transfer across topological distances and the approximately linear increase in mean topological distance with system size, we find that the effects that we have observed in the N = 24 FPUT-α lattice do not appreciably change when extended to larger systems. This is particularly the case as random lattices are unlikely to contain more than a few elements of the relaxation pathway for large systems, so energy does not flow through more than the first handful of modes in the pathway.
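The topological distance used above can be computed directly from a CWN adjacency matrix, for example by breadth-first search (equivalently, by finding the lowest power of the adjacency matrix that connects two vertices). The sketch below is a generic illustration on a placeholder graph rather than on one of the seeded random CWNs studied here.

```python
import numpy as np
from collections import deque

def topological_distances(A, source):
    """Shortest path length (in edges) from `source` to every vertex of A."""
    n = len(A)
    dist = np.full(n, -1, dtype=int)   # -1 marks vertices not yet reached
    dist[source] = 0
    queue = deque([source])
    while queue:
        v = queue.popleft()
        for w in np.flatnonzero(A[v]):
            if dist[w] < 0:
                dist[w] = dist[v] + 1
                queue.append(w)
    return dist

# Placeholder example: a 6-vertex path graph, with vertex 0 as the source.
A = np.zeros((6, 6), dtype=int)
for i in range(5):
    A[i, i + 1] = A[i + 1, i] = 1
print(topological_distances(A, 0))     # [0 1 2 3 4 5]
```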
Examining the interaction between topological distance and the mean energy of each mode reveals an interesting result − the transition between normal and Umklapp scattering is reflected in this distribution. Specifically, while both normal and Umklapp scattering can be present for modes connected below a critical distance (d_Tc = ln q / ln 2 for the FPUT-α lattice with the fundamental mode excited; see Sec. 7 for details), Umklapp scattering dominates above this critical distance from the coherent source. This is clearly seen in Fig. 6, which focuses on the set of modes in the N = 24 lattice where d_Tc = 3 (data for d_Tc = 2 is similar, but the transition is obscured by the presence of only two data points in the normal regime). This also partially explains the quartic scaling observed previously, which averaged over all mode numbers. Since the normal-dominated regime decays linearly and the Umklapp-dominated regime decays quadratically, the quartic scaling is necessary to fit this discontinuity.
B. Dependence on Weighted Degree
Although we can use the topological distance to distinguish between normal- and Umklapp-dominated scattering, even at fixed q the mean energy is not a simple monotonic function of topological distance. This is in part due to topological distance being an unweighted measure of interaction strength, whereas paths in the CWN are not all equivalent. Some, such as the relaxation pathways, will carry more energy than others. As such, we turn to the weighted degree of each vertex as a more effective means of characterizing the interaction. The weighted degree is given by the time-averaged differential force between a vertex and all its neighbors. For a reciprocal CWN, this takes a simplified form in which the brackets denote a time average. Plotting the mean energy as a function of degree reveals the reason for this nonmonotonic dependence between mean energy and topological characteristics (see Fig. 7 for representative examples; others are shown in Sec. 7) − the presence or absence of specific paths in the CWN will shift these parameters. Specifically, the presence or absence of an SHG pathway for each mode will determine how mean energy scales with topological distance (in the FPUT-α lattice). For odd modes, it is the out-going SHG edge that is critical (as there is no in-coming edge for these modes), whereas for even modes it is the in-coming edge that is critical (except for |q| = 2, where the in-coming edge is present in all realizations of the random CWN, making the out-going edge critical). Using this rule, we identify three regimes: when the critical SHG coupling of a mode is present, the mean energy (not the log of mean energy) scales approximately linearly with weighted degree and both mean energy and mean degree are relatively high. When the critical SHG coupling of a mode is absent, the mean energy scales approximately linearly with weighted degree and mean energy is relatively high while mean degree is relatively low. When the coupling is Umklapp-dominated, this distinction is relatively unimportant as mean energy and mean degree are both low and so no scaling is observed.
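In generic network terms, the weighted degree (vertex strength) of a vertex is the sum of the weights of its incident edges; in the CWN those weights would be the time-averaged differential nonlinear forces described above. The sketch below shows only this generic bookkeeping with placeholder weights, since the specific expression used here is not reproduced.

```python
import numpy as np

# Placeholder weighted adjacency: w[i, j] stands in for the time-averaged
# differential force between modes i and j (zero where no coupling exists).
rng = np.random.default_rng(1)
w = np.triu(rng.random((6, 6)), 1)
w = w + w.T                  # symmetrize; the diagonal stays zero

strength = w.sum(axis=1)     # weighted degree (vertex strength) s_i = sum_j w_ij
print(strength)
```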
C. Bounding Topologies of the CWN
To conclude our analysis of the role of CWN topologies on seeded random couplings, we consider the role of mode number in this dependence. In doing so we make the following corrections to our data. First, for any mode where Umklapp scattering dominates (i.e. above the critical distance; see Sec. 4 A), we shift the wave vector to the second Brillouin zone, as the phonon is effectively acting like this higher order mode. We also exclude any data points where the mode is inactive (zero mean energy), since we are concerned with the |q| versus ln E dependence. Combining these corrections gives us Fig. 8, where we see that the mean energy typically falls between two bounds. On the upper bound is the FPUT-α graph, i.e. the maximum degree CWN. Conversely, the lower bound corresponds to the |q| = 1 star graph, where the only valid triads are those that contain the |q| = 1 vertices, which forms the minimum non-trivial (unweighted) degree for the higher harmonics. Between these two bounds we see a clear scaling, with increasing weighted degree leading to increased mean energy, although this scaling is non-monotonic (see Sec. 4 B). Note, though, that these bounds are not strict; we often observe data points above or below them. The points below these bounds correspond to topologies where the modes are Umklapp-dominated. While we cannot reconstruct the exact mode number corresponding to the scattering of a random graph, it is likely that this Umklapp scattering takes these modes out of the second Brillouin zone and into higher order zones. Given that the FPUT-α graph scales linearly with q and the star graph scales quadratically, there will necessarily be some correction for which any such data point would fall within their bounds. For points above these bounds, conversely, this effect is real and rigorous. Interestingly, the points far above the FPUT-α bound tend to have lower weighted degree than those just below it. In fact, these data points correspond to topologies where many modes are inactive and most of the energy is confined to a small number of active modes. Since the total energy is conserved, this confinement necessarily results in modes with greater energy than the FPUT-α bound.
POTENTIAL EXPERIMENTAL REALIZATIONS
While the non-trivial CWNs introduced in this work reveal a great many novel effects in nonlinear lattice dynamics, the complexity of the interactions that they require will likely impede immediate experimental confirmation in nonlinear lattices. In particular, each triad motif (q_1 − q_2 − q_3 = 0 mod N) corresponds to a highly delocalized, complex real space coupling between lattice sites. While this can be simplified in special cases, e.g. the SHG-only coupling (which at least has the advantage of being purely real and symmetric), the result is still a highly delocalized force. This suggests that realizations of this system would require a rather precise ability to tailor long-range nonlocal interactions between lattice sites. Such a requirement would likely be easiest to accomplish in atomic systems, particularly atoms in a lattice of optical traps, where there is a greater degree of control over inter-atomic couplings [47][48][49]. While these systems work best for very small numbers of atoms, the limited topological distance of interactions in random CWNs (Sec. 4 A) suggests that this is an acceptable constraint. For larger systems, however, there would need to be some form of additional coupling to create the long-range delocalized interactions. This could be accomplished by using internal stresses, as in the Kroner-Eringen nonlocal elasticity model [50][51][52], which would allow for couplings governed by a nonlocality parameter λ. This suggests that the CWN could arise in other systems possessing effective fields, as in composite structures (fibers, laminates, etc.) [53][54][55][56], as the integration over internal structure can introduce these effective nonlocal interactions. A particularly important class of materials where nonlocal mechanical effects are known is piezoelectrics [57][58][59][60], as the charge-induced long range interactions are easily tailored and the material itself is not particularly exotic. Still, as nonlocal effects are not presently known for phonons (aside from the Kroner-Eringen model), it is likely that additional research into effective phonon-phonon couplings will be necessary before these numerical predictions are testable.
On the other hand, many of these effects are not confined to solid mechanics. CWNs are already known to exist for nonlinear resonances in continuous media [31][32][33][34][35], making fluid dynamics a promising field for realizing these results. The principal challenge there would be tuning the wave interactions in the fluid to realize specific CWNs. Furthermore, effective nonlocal interactions can exist in complex networks, which encompass a broader set of systems than simple mechanical systems. In particular, non-trivial CWNs are likely to exist within social networks, as the nonlocal forces that produce such graphs would correspond to interpersonal interactions mediated by (complex) social pressure dynamics. The possibility of two agents having their interactions influenced by the state of a third agent is fairly exotic in mechanics (i.e. three-body forces are rare) but a relatively mundane state of affairs in social interactions.
CONCLUSIONS
In this work we have shown how the anharmonic interactions that define scattering in nonlinear media implicitly create a reciprocal network structure that defines the wave mixing (i.e. a coupled wave network). By modifying this coupled wave network structure to create nonlocal nonlinear lattices, we have shown how it becomes possible to precisely control the dynamics of the phonons within the nonlinear lattice. In particular, we have demonstrated how simple modifications of the CWN structure could be used to change the relaxation rate and reduce energy flux between coherent phonon populations and a thermal background. Moreover, directing the energy down specifically tailored relaxation pathways or controlling the topological distance between modes can be used to control the specific combination of modes excited by the coherent population and their resulting time dynamics. And most significantly, we have found graph topology signatures of different transport regimes within nonlinear lattices, including CWNs that effectively bound arbitrary anharmonic scattering mechanisms. The variety of new effects seen within CWNs suggests that they can become a powerful tool for understanding the origin of nonlinear transport within real systems with long-range interactions, provide an invaluable comparison when seeking to understand the nature of transport within the more conventional nonlinear structures, and open alternative avenues for understanding dynamics on complex networks more generally.
A. Normal-Umklapp Transition
Assuming no Umklapp scattering, the lowest order mode that can be generated at a given topological distance from a coherent source is equal to s − 1 times the lowest order mode of the previous distance (for an anharmonicity of rank s > 2). Thus the maximum distance that a mode can be from a coherent source at |q| = |q_0| is determined by the relation |q| ≤ |q_0|(s − 1)^{d_T}, giving the critical threshold for q_0 = 1: d_Tc = ln|q| / ln(s − 1). For s = 3, this gives a scaling as shown in Fig. S1a. This behavior is reflected in Fig. 6 and Fig. S1b for d_Tc = 3, 2 respectively.
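As a worked illustration of this threshold (treating the critical distance as the smallest integer satisfying the bound, an interpretation adopted here since d_T is integer-valued), the critical distances for the low-order modes of a rank-3 lattice excited at |q_0| = 1 can be tabulated directly:

```python
s, q0 = 3, 1     # rank of the anharmonicity and the coherently excited mode
for q in range(2, 13):
    # Smallest integer d_T satisfying |q| <= |q0| * (s - 1) ** d_T,
    # i.e. the critical topological distance without Umklapp scattering.
    d = 0
    while q0 * (s - 1) ** d < q:
        d += 1
    print(q, d)
```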
B. SHG Dependence of Higher Harmonics
In Fig. S2 we continue plotting the degree dependence of different modes for random CWNs. While the basic pattern is maintained from Fig. 7, certain features are worth emphasizing in the higher harmonics. First, note that we have switched from plotting the absolute value of s_i to its signed value. While this somewhat obscures comparisons between the magnitude of s_i and the different pathways explored in Sec. 4 B, it reveals that the sign (i.e. the direction of the mean force) is often different in these two cases. Surprisingly, the sign for the FPUT-α lattice is often the same as the sign of the SHG-absent path and opposite to the sign of the SHG-present path, which in turn shares a sign with the star graph. This is opposite to the trend of their magnitudes. However, as the sign of s_i only correlates with the imaginary component of the average differential nonlinear force, the role of s_i's sign in CWN conduction remains somewhat obscure.
Additionally, the separation between the two pathways begins to break down at higher orders, particularly when the absolute value of s_i is used. This is in part due to the lower energy carried by these higher order modes, which causes the fluctuations from the random CWN topologies to have a stronger effect, but is also due to the number of intermediate pathways that must be activated before the higher order modes can be generated. This is likely why the odd modes typically display a weaker dependence on the presence of SHG pathways than the even modes (although the fact that the odd modes require an SHG out-going edge while the even modes require an SHG in-coming edge is another significant contributing factor). FIG. S2: Signed degree dependence of different modes (|q| ∈ [5,12]) for random seeded CWNs, following the same conventions as Fig. 7. | 2018-09-11T00:29:18.000Z | 2018-03-05T00:00:00.000 | {
"year": 2018,
"sha1": "020a400d5e62dab7f4cf746c9b2e665ff85b91ef",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "020a400d5e62dab7f4cf746c9b2e665ff85b91ef",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Physics"
]
} |
219633448 | pes2o/s2orc | v3-fos-license | Adaptation Measures to Combat Climate Change Impacts on Agriculture: An Empirical Investigation in the Chambal Basin
This study is based on the empirical investigation of the climate change adaptation measures adopted by the farmers in the Chambal basin. The adaptation measures were analysed after investigating the nature and impact of climate change in the region. Four representative districts were selected using control sampling. A representative sample of farmers was selected through stratified snowball sampling technique. Descriptive statistics and case study methods were used for results and analysis. Detailed irrigation profiles of the farmers were traced. The moisture index was calculated based on secondary data. A sampling survey method of investigation was used in the study. This paper also presents the context of maladaptation of monoculture in the region and severe groundwater depletion associated with this practice. The study directs policy to strengthen water-harvesting measures in the region to facilitate the adaptation measures for coping with the effects of climate change on agriculture.
INTRODUCTION
A consistent shift in the weather of a region over a long period is termed climate change. It includes many variables like temperature, rainfall, rate of evaporation, wet day frequency, etc. The Brundtland Report states that climate change was identified long ago as a crucial problem bearing on our survival (WCED 1987). According to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change (IPCC 2007), large-scale variations in average temperatures and precipitation in the coming decades will have a significant impact on ecosystems, related livelihood options, and overall human well-being. Agriculture, as a managed ecosystem, is affected by climate change most significantly. Productivity, crop duration and even the selection of crops to be grown in a region depend upon temperature coupled with the duration and spatial distribution of rainfall. Hence, changes in average climatic conditions along with the occurrence of extreme climatic events will have a significant impact on the agricultural sector, which, in turn, may have critical implications for food security. However, the effects in different regions around the globe will differ significantly. Consequently, region-based research on the interactions between climate change and agricultural performance has gained momentum. Climate change adaptation and mitigation, therefore, is now an important area of research in the social sciences as well as the physical sciences.
The real challenge of climate change is to minimize its risks through adaptations, which is a process of adjustment to actual or expected climate change and its effects. In human systems, adaptation seeks to moderate or avoid harmful activities and exploit beneficial opportunities. These adaptations have to take place at all levels from changes in global systems to changes at national and regional levels through adaptations made by local communities and individuals. The development of adaptation strategies needs to recognise the appropriate mix of actions at different levels. Agriculture is one of the most important sectors to be severely impacted by climate change and thus an inquiry into the adaptation measures in this sector in relation to climate change is a must. It is all the more significant to be carried out on a regional basis as the regional climate has peculiarities that govern crop selection and irrigation management at the most basic level. This study is an attempt to fulfil this objective in the Chambal basin, which has faced significant changes in climatic and cropping patterns in the decades following the construction of Gandhi Sagar Dam on the Chambal river.
An Assessment of Risk and Vulnerability due to Climate Change
In some natural systems, human intervention may facilitate adjustment to the expected climate and its effects (IPCC 1996). In human systems, adaptation seeks to moderate or avoid harmful activities and exploit beneficial opportunities. Adaptations take place at all levels from changes in global systems to changes at national and regional levels through changed practices of local communities and individuals. The development of adaptation strategies needs to recognise the appropriate mix of actions at different levels. Agriculture is inherently sensitive to climatic conditions and is among the most vulnerable sectors to the risks and impacts of global climate change (Parry and Carter 1989;Reilly and Schimmelpfenning 1999). Studies show that without adaptation, climate change is generally problematic for agricultural production and agricultural economies and communities; but with adaptation, the vulnerability can be reduced and there are numerous opportunities to be realised (Rosenzweig and Hillel 1995;Mendelsohn 1998).
Studies on climate change trends have already shown that climate variation is a reality for India, but its impact on society as well as its social and economic consequences are yet to be fully understood. Also, there is neither a consensus on the definition of vulnerability to climate change nor a full, regionally nuanced mapping of the impacts of climate variables available. It is only when we have a better understanding of what constitutes vulnerability to climate change and what its region-specific impacts are that we can determine proper adaptation strategies. In this context, one study has found that the states of Bihar, Rajasthan, Gujarat, Punjab, Haryana, Madhya Pradesh, Maharashtra, Andhra Pradesh, and Karnataka have the lowest adaptive capacity (O'Brien et al. 2004). Based on current climatological data, the areas of greatest climate sensitivity are Rajasthan, Madhya Pradesh and Uttar Pradesh. To identify and assess crop adaptation, there is a pressing requirement for more observational field studies to achieve detailed knowledge about how crops respond to climate change. In another study, it was found that a 2°C temperature rise and a 7 per cent increase in rainfall would lead to an almost 8 per cent loss in farm net revenue (Kumar and Parikh 2001). The regional differences are significantly large, with northern and central Indian districts along with coastal districts bearing a relatively large impact. It has been observed that during the past 25 years, significant changes in climate have occurred over different regions of the country (Sinha, Singh and Rai 1998). For example, many parts of northern India show an increase in minimum temperature by about 1°C in the rabi cropping season. However, mean temperatures can be misleading, as some individual regions could exhibit a large variation with a larger impact on rabi production.
Exploring Perceptions about Climate Change in Agricultural Systems
As the impacts of climate change on agriculture are severe, it is important to take appropriate actions to minimise the losses. The foremost requirement for taking an action is to accurately assess the nature of change in the climatic events. In this context, existing research suggests that the formation of environmental perceptions is most of the time a local phenomenon rather than a global phenomenon (Magistro and Roncoli 2001). It is usually associated with personal experiences about changes in temperature, precipitation and observation of crop-responses to the environment. In developing countries 'farm surveys' and 'focus group discussions' are the preferred mode of research in identifying farmers' perceptions of climate change and the factors that shape them. A study on climate change in the Western Himalayas of India (Vedwan and Rhoades 2001) compared farmers' perceptions with 'locally idealised traditional weather cycles'. Several studies indicate that socio-economic and demographic factors are most important in determining farmers' perceptions. In a survey-based study of the Sekyedumase district in the Ashanti region in Ghana, 180 farmers were queried about their perceptions of changing climate in terms of changes in temperature, rainfall and area covered by vegetation in the past 20 years (Fosu-Mensah, Vlek and MacCarthy 2012). They were also queried about their major adaptations to climate change and the barriers they faced. Household characteristics, years of farming experience, size of landholdings and their access to extension services along with credit services were major explanatory variables. Age of the head of the farming household, which is usually a proxy for the farmer's experience, was found to be one of the most important factors in shaping the perceptions about climate change (Diggs 1991). Extensive field-based studies of African small-holding farming systems have shown that the level of formal education of farmers is positively associated with their ability to perceive correctly climate-related changes (Mustapha, Sanda and Shehu 2012). Access to banking services and information about climate change through extension-services plays an important role in enhancing farmers' understanding of climate change and appropriate adaptation measures (Maddison 2007). Farmers with a higher level of income were also found to be more perceptive of changes in climate (Semenza et al. 2008). Finally, a cross-sectional analysis of farmers in Kyuso district in Kenya, Africa, found that joint family households were less perceptive of climate change as such families are more inclined to engage in non-farm activities as well (Ndambiri et al. 2012). A comprehensive strategy that seeks to improve food security in the context of climate change may include a set of coordinated measures related to agricultural extension, crop diversification, integrated water and pest management, and agricultural information services. Some of these measures may have to do with climatic changes and others with economic development. Indeed, studies indicate that farmers perceive that the climate is changing and also adapt to reduce the negative impacts of climate change (Thomas et al. 2007;Ishaya and Abaje 2008;Mertz et al. 2009). Studies further show that the perception or awareness of climate change (Semenza et al. 2008;Akter and Bennett 2011) and taking adaptive measures (Maddison 2007;Hassan and Nhemachena 2008) are influenced by different socio-economic and environmental factors.
Adaptation to climate change is a two-step process; the first step requires farmers to perceive a change in climate and the second requires them to act through adaptation (Maddison 2007). Studies of perceptions of climate change in both developing (Vedwan and Rhoades 2001; Hegeback et al. 2005; Thomas et al. 2007; Ishaya and Abaje 2008; Gbetibouo 2009; Mertz et al. 2009) and developed (Diggs 1991; Leiserowitz 2006; Semenza et al. 2008; Akter and Bennett 2011) nations show that the majority of the population has already perceived climate change and is adapting to it in various ways (Falco, Veronesi and Yesuf 2011). There are different ways of adapting to climate change in agriculture (Bradshaw, Dolan and Smit 2004; Kurukulasuriya et al. 2004; Mertz et al. 2009), and different factors affect the use of any of these adaptation methods (Deressa et al. 2009). For instance, it has been shown that better access to markets, extension and credit services, technology, farm assets (labour, land and capital) and information about adaptation to climate change, including technological and institutional methods, affects adaptation to climate change (Hassan and Nhemachena 2008). Changing cropping calendars and patterns with the currently available crop varieties is the most immediate option for adapting to climate change impacts (Rathore and Stigler 2007). Options such as introducing new cropping sequences, using late- or early-maturing crop varieties depending on the available growing season, conserving soil moisture through appropriate tillage practices and employing efficient water harvesting techniques are also important. Developing heat- and drought-tolerant crop varieties, by utilizing genetic resources that may be better adapted to new climatic and atmospheric conditions, should be the long-term strategy. Genetic manipulation may also help to exploit the beneficial effects of increased CO2 on crop growth and water use (Rosenzweig and Hillel 1995). One of the promising approaches would be gene pyramiding to enhance the adaptive capacity of plants to climate change impacts (Mangala 2007).
Adaptation Strategies
Adaptations to climate change impacts are not new phenomena. Natural and socio-economic systems have been continuously and autonomously adapting to a changing environment throughout history. Adaptation to climate change and variability (including extreme events) at national and local levels is regarded as a pragmatic strategy to strengthen capacity to lessen the magnitude of climate change impacts that are already occurring and could increase gradually (or suddenly) and may be irreversible. Adaptation can be anticipatory, where systems adjust before the initial impacts take place, or it can be reactive, where change is introduced in response to the onset of the impacts. Climate change adaptations in agricultural practices often have synergy with sustainable development policies and may explicitly influence social, economic and environmental aspects of sustainability. Many adaptations have co-benefits (improved efficiency, reduced costs, environmental co-benefits) as well as trade-offs (e.g. increasing other forms of pollution) and balancing these effects will be necessary for successful implementation of climate change adaptation and mitigation in the agricultural sector (IPCC 2014).
Farmers generally adapt swiftly to avert losses in their agricultural production. In India, adaptations in farm practices (changing sowing dates, adopting different crop varieties and improving water supply) have been seen to reduce the adverse impacts of climate change (Kumar and Parikh 2001). Adaptation measures could be simple ones like shifting planting calendars or changing crops, or more costly ones like investing in protective infrastructure such as damming rivers to provide an assured water supply for irrigation. Farm-level resource management innovations such as the development of irrigated drainage systems, land contouring, reservoirs and recharge areas, and alternative tillage systems are also used to minimise the impact of climate change on agriculture (Easterling 1996). A comprehensive strategy that seeks to improve food security in the context of climate change may include a set of coordinated measures related to agricultural extension, crop diversification, integrated water and pest management, and agricultural information services. Some of these measures may have to do with climatic changes and others with economic development. Studies have indeed indicated that farmers perceive that the climate is changing and also adapt to reduce the negative impacts of climate change (Thomas et al. 2007; Ishaya and Abaje 2008; Mertz et al. 2009). From the United Nations Framework Convention on Climate Change (UNFCCC 1992) to India's National Communications (MoEF 2004), river-basin-specific impacts of various climate change scenarios and vulnerability to droughts and floods have been estimated at the catchment, sub-catchment and watershed levels, as well as for administrative units such as districts.
While such exercises are useful given the multiple pressures that act on water resources, integrated watershed modelling might be more appropriate. A pathbreaking study examined the current adaptation strategies of stakeholders in the Cauvery delta of Tamil Nadu and argued that the responses to climatic and non-climatic pressures have largely been ad hoc and hence could be inadequate and unsustainable in the long term (Janakarajan 2010). Finally, in the context of efficient natural resource management, conservation agriculture offers resource-poor farmers a set of possible options to cope with and adapt to climate change (Thomas et al. 2007). Improved water management will be the key adaptation strategy in both irrigated and dryland agriculture. Emphasis will also be given to crop production systems located in delta regions to sustain high production potential under sea-level rise (Wassmann and Dobermann 2007). Based on fieldwork in Andhra Pradesh and Rajasthan, effective ways to make farmers more adaptive to climate change have been suggested (MSSRF 2008). The recommendations include specific changes in traditional water management practices such as harren in Rajasthan, establishing small farm networks that enable farmers to share knowledge on farm management practices, utilising weather data from simple meteorological stations operated by farmers, and the use of new farming techniques such as the system of rice intensification.
Costs and Limits of Adaptation
There is an array of factors that limit adaptation by ecosystems, communities and individuals. There are cost considerations and threshold limits that may primarily be categorised into four sets: ecological, physical, economic and technological (Adger et al. 2009). A farmer may, for example, practically abandon farming due to limits to adaptation with respect to water resources. It is especially important to understand the social limits to adaptation because this places the responsibility on governance to work proactively on mitigation strategies; if the capacity to adapt is considered unlimited, a key rationale for reducing greenhouse gases is weakened (Dow et al. 2013). A linked consideration, where adaptation is well within the limit, is the 'willingness to adapt', which is influenced by individual characteristics and perceptions about climate change impacts (Pannell et al. 2006). Finally, barriers to adaptation are obstacles that can be overcome by concerted effort, creative management or changed thinking (Moser and Ekstrom 2010). However, adaptation is not an easy process. An adaptation decision that fails with respect to its objective results in 'maladaptation', that is, an action taken for adaptation that ends up increasing vulnerability (Barnett and O'Neill 2010). Maladaptation also occurs when the negative impacts caused by adaptation are as serious as the climate change impacts being avoided (Scheraga and Grambsch 1998). This may put whole systems at risk and may lead to their breakdown, and thus needs to be analysed in every adaptation situation.
ABOUT THE STUDY AREA
The study area considered here is the catchment area of the Chambal river in the state of Madhya Pradesh: the entire geographical area drained by the river and its tributaries, characterised by all run-off being conveyed to the same outlet. It is also known as a catchment basin, drainage area or drainage basin. The Chambal river, a principal tributary of the Yamuna, originates in the Vindhyan ranges near Mhow in Indore district of Madhya Pradesh. The river flows through the states of Madhya Pradesh, Rajasthan and Uttar Pradesh. The basin is roughly rectangular, with a maximum length of 560 km in the northeast-southwest direction. Broadly, its catchment area is termed the Malwa region. It is located in the south-western part of Madhya Pradesh and generally slopes towards the north. It is spread across 45,628 square km. The catchment mainly covers the districts of Indore, Dewas, Ujjain, Dhar, Mandsaur, Ratlam, Neemuch and Shajapur. Rainfed farming of grains, pulses (moong, black gram and pigeon pea) and groundnut is a traditional practice. In the rabi season, wheat and gram are cultivated mostly under irrigated conditions. The natural vegetation comprises tropical dry and moist deciduous forests. However, rich farmers grow rice, wheat and gram, and sometimes cotton, using irrigation facilities.
The catchment area of the Chambal river shows severe effects of climate change. This area was once known for its good climate and abundant food, water and employment opportunities (in the folk idiom it is described as pag roti dag neer). It is now facing severe water shortages and extreme weather conditions (Gupta and Kawadia 2003). Agriculture is primarily rainfed and the region does not have adequate mechanisms to use surface water for agriculture. As a result, farmers are forced to exploit groundwater for domestic as well as agricultural purposes. No proper facilities to recharge groundwater have been developed. As water withdrawal from the ground is much greater than the recharge (Gupta, Kawadia and Attari 2007), conditions of deforestation and desertification have been created in the area. The area thus presents a good case study for climate change adaptation practices.
OBJECTIVES OF THE STUDY
1) To understand the nature of climate change and its impact on agriculture in the Chambal river catchment area.
2) To present an overview of the adaptation measures in the area.
3) To discuss maladaptations and their implications for the region.
4) To direct policy for strengthening specific adaptations.
RESEARCH METHODOLOGY
The study has followed the sample survey method of investigation. Of the eight districts in the Chambal catchment, Indore, Dewas, Mandsaur and Neemuch districts were selected through controlled sampling following expert advice. These four districts provided adequate representation of the different agro-climatic and farming systems in the study region. A representative sample of 470 farmers was finally selected from 28 villages of these districts through stratified snowball sampling techniques in the agricultural year 2014-2015. The farm household survey was conducted in two steps, a field pre-test and the actual data collection. As indicated above, the study made use of controlled sampling: only those agricultural households were surveyed that had received a government subsidy for rainwater harvesting, specifically to overcome the shortage of water attributed to climate change. Enumerators conversant with the local language and traditions in the study area were engaged to conduct the field survey. Each survey schedule had 70 questions. A farm household was the unit of analysis. A moisture index was calculated based on centurial data of precipitation and potential evapotranspiration (India Water Portal 2016) to determine the 'aridity' status of all the districts in the study area. A seven-year moving average was used to smoothen the fluctuations, and linear regression was used to find the trend equation and trend line. The study also made use of descriptive statistics and the case study method for the analysis and presentation of results.
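To make the computation described above concrete, the short Python sketch below derives an annual moisture index from precipitation and potential evapotranspiration, smooths it with a seven-year moving average and fits a least-squares trend line. It is only an illustrative sketch: the annual formula used is the commonly cited simplification of the Thornthwaite-Mather index, and the year range and climate series in the example are hypothetical placeholders rather than the study's actual district data.

import numpy as np

def moisture_index(precip_mm, pet_mm):
    # Simplified annual moisture index: MI = 100 * (P - PE) / PE.
    precip_mm = np.asarray(precip_mm, dtype=float)
    pet_mm = np.asarray(pet_mm, dtype=float)
    return 100.0 * (precip_mm - pet_mm) / pet_mm

def moving_average(series, window=7):
    # Seven-year moving average used to smooth year-to-year fluctuations.
    kernel = np.ones(window) / window
    return np.convolve(series, kernel, mode="valid")

def linear_trend(x, y):
    # Least-squares trend line; returns (slope, intercept).
    slope, intercept = np.polyfit(x, y, deg=1)
    return slope, intercept

# Hypothetical annual series for one district (placeholder values only).
years = np.arange(1901, 2011)
precip = np.random.default_rng(0).normal(900.0, 150.0, years.size)  # mm/year
pet = np.full(years.size, 1400.0)                                   # mm/year

mi = moisture_index(precip, pet)
smoothed = moving_average(mi, window=7)
slope, intercept = linear_trend(years[6:], smoothed)
print(f"Mean MI = {mi.mean():.1f}, trend slope = {slope:.4f} per year")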
Precipitation and Moisture Index
Agriculture in Madhya Pradesh has remained rainfed and will continue to be so for the next few decades. The state is dependent on rainfall for its water requirements. The total rainfall in the state varies from 60 cm over the extreme north and western parts to 120 cm over the central, eastern and southern parts of the state. Therefore, significant climatic aberrations or changes will have a certain impact on the agricultural output of the state. Global warming and a shift in precipitation zones would cause drought, exposing the vulnerability of the countries affected. Monitoring the occurrence of droughts is helpful in various disciplines like administration, planning, agriculture and hydrology for taking remedial measures. Drought is a period of drier than normal conditions that results in water-related problems. Agricultural drought occurs when soil moisture and rainfall are inadequate during the growing season to support healthy crop maturity, causing extreme crop stress and wilting. The drylands of the world are increasingly subject to desertification due to climate change and recurrent droughts. It is thus extremely important to analyse the trend of climate change in the Malwa region of Madhya Pradesh and to know whether it is being significantly encroached upon by the desert from the neighbouring state of Rajasthan. For this, the study makes use of the Moisture Index/Drought Index/Aridity Index (Thornthwaite and Mather 1955). The aim is to analyse the phenomenon of drought occurrence, or gradual desertification, in the catchment area of the Chambal river basin, that is, the eight districts of Indore, Dewas, Dhar, Shajapur, Ujjain, Mandsaur, Ratlam and Neemuch. The study further attempts to empirically investigate whether these districts have experienced climate change over a century. An attempt was then made to establish a time-series-based linkage between climate change patterns and drought occurrence. For the climate change analysis, the moisture index was calculated based on the centurial data of precipitation and potential evapotranspiration (India Water Portal 2016). Computation of the Moisture Index (MI) (Thornthwaite and Mather 1955) was simplified using annual average data (Krishnan 1992).

[Table 1: Moisture index ranges and corresponding climate zones (per-humid: 100 and more). Source: Thornthwaite and Mather (1955)]

The values of the index correspond to the humidity or aridity of an area. If the value of the index is positive, it indicates humid atmospheric conditions; a negative index value represents dry climatic conditions. Table 1 depicts the moisture index ranges and the corresponding climate zones used to classify the eight districts.
Moisture is thus most inadequate in arid zones, followed by semi-arid and dry sub-humid regions. From the moist and sub-humid zones onwards, moisture is adequate for normal crop production. The eight districts of the Chambal basin have been categorised into their prevailing climate zones on the basis of the average moisture index obtained from climate data spanning almost a century (table 2). The trend is also identified with the help of a regression equation and trend line (figure 1). As per the Thornthwaite moisture index calculation, six of the eight districts fall in the arid zone, and the remaining two are semi-arid. There has been no significant change in the moisture index trend for the districts as per the centurial climate data; only Dhar district depicts a significant increasing trend in the moisture index. This means that the district is currently in the 'arid' zone but will gradually move into the 'semi-arid' zone. Apart from Dhar, there are two districts in the semi-arid zone, namely Mandsaur and Shajapur. One can conclude that no efforts have been made to shift the area from the arid to the semi-arid or humid zones. The increasing trend in Dhar can be well understood against the backdrop of the special focus Dhar has received in the past as a drought-prone district. The Integrated Mission for Sustainable Development (IMSD) study was initiated in the year 1987 (Rao et al. 1995) with specific reference to finding scientific and lasting solutions to mitigate droughts. Droughts have been a recurring feature in Indian agriculture from 1991 to 2000, and also earlier, and some districts were therefore selected for systematic investigation. A specific study was carried out in the districts of Jhabua and Dhar, in Madhya Pradesh, using the Composite Land Development Sites (CLDS) approach for forest and wasteland development and soil and water conservation in 1995 (IMSD 1995). This was followed by specific suggestions and treatments. Further monitoring was done by the Space Applications Centre, ISRO, Ahmedabad (Dasgupta, Dhinwa and Rajawat 2015) through visual interpretation and analysis of temporal images of the region from 1991 to 2013. The study revealed a substantial increase in the area of irrigated agricultural land along with an increase in the number of check dams along the stream channels. This has helped Dhar district's transition from the arid zone towards the semi-arid zone. It thus becomes clear that, for non-arable land, soil conservation, rainwater harvesting and the management of lands for fodder, fruit and fuel-wood production in a watershed perspective are the core strategies for fighting drought in the arid zones of India. As various water harvesting measures were adopted in Dhar district, the result came out in the form of increased agricultural productivity. Watershed development programmes were thus seen to have a positive impact in combating desertification. We therefore need to employ more such techniques in the remaining arid zones to prevent them from being gradually converted into deserts and to ensure food security.
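As a companion to the index calculation, the sketch below maps an average moisture index value to a climate zone label. The numeric class limits are the commonly cited Thornthwaite and Mather (1955) boundaries and should be treated as an assumption to be checked against the paper's own Table 1; the district values in the example are hypothetical and serve only to illustrate the arid/semi-arid distinction discussed above.

def classify_climate_zone(mi):
    # Commonly cited Thornthwaite-Mather (1955) class limits (assumed here).
    if mi >= 100:
        return "Per-humid"
    if mi >= 20:
        return "Humid"
    if mi >= 0:
        return "Moist sub-humid"
    if mi >= -33.3:
        return "Dry sub-humid"
    if mi >= -66.7:
        return "Semi-arid"
    return "Arid"

# Hypothetical district-level average moisture index values (illustration only).
district_mi = {"Dhar": -68.0, "Mandsaur": -60.0, "Shajapur": -58.0, "Indore": -72.0}
for district, value in district_mi.items():
    print(district, classify_climate_zone(value))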
Temperature and Pattern of Precipitation
If sufficient water is available, then temperature is the most important factor determining farm productivity in a region. Higher temperatures eventually reduce crop yields while encouraging weed and pest proliferation. Farmers' responses about a general change in temperature over time were traced, reflecting changes in the seasonality, distribution and intensity of temperature over time. As can be seen in table 3, around 78 per cent of farmers in the survey reported an increase in temperature in the study region. Changes in precipitation patterns increase the likelihood of crop failures in the short term and production declines in the long term. Agriculture will be adversely affected not only by an increase or decrease in the overall amount of rainfall but also by shifts in the timing of rainfall. It is thus extremely important that farmers' reports about the changing trends of precipitation are analysed. In the sample, close to 70 per cent of farmers did not see a major change in precipitation; however, 11 per cent observed a clear decrease (table 4).
Extreme Events
With climate change, extreme weather occurrences have become more common and frequent. Longer and hotter heat waves, a greater incidence of droughts, intense precipitation, heavy rains and floods have now become usual occurrences. It is important to know how farmers perceive the occurrence of such events in their regions. They were asked to give their observations on whether the occurrence of a particular climate event has increased, decreased or remained constant in terms of its frequency and intensity in their region. The events on which their responses were gathered were drought, flood, hailstorm, heat waves, cold waves and frost.
As can be seen in table 5, more than 70 per cent of the surveyed farmers observed that the occurrence of heat waves, frost and cold waves has increased. About 56 per cent of the surveyed farmers observed an increase in hailstorms, and about 47 per cent observed an increase in droughts. About 70 per cent of farmers reported that there had been no significant change in the incidence of floods. Factors affecting farmers' perceptions were also explored. Farmers with a higher educational level, a higher income level and a joint family mode of living were able to perceive climatic changes more correctly (Kawadia and Tiwari 2017).
Impact of Climate Change on the Agricultural System of the Chambal Basin
Crop growth simulation assessments in dryland or rainfed agriculture in tropical stations indicate yield reduction of some crops even with a minimal increase in temperature. If there is also a significant decrease in rainfall, tropical crop yields would be even more adversely affected. Some studies indicate that climate change would lower incomes of the vulnerable populations and increase the absolute number of people at risk of hunger. Climate change, mainly through increased extremes and temporal/spatial shifts, would worsen food security in some parts of the globe. This study attempts to analyse how farmers of the Malwa region respond to the change in their crop yield due to change in climatic conditions. Our survey found that 70 per cent of farmers reported a significant decrease in farm yield (table 6).
The greatest impact of climate change was observed in the case of water availability, which affects the entire farming community: irrigation systems are affected, as are the crops dependent on irrigation, while global warming simultaneously increases the demand for irrigation water. As it is important to trace whether farmers have perceived the change in climate correctly, farmers were queried on the change in the frequency of irrigation required for their crops. As can be seen in table 7, around 60 per cent of farmers reported a greater than marginal increase in irrigation frequency, and close to 40 per cent of farmers in the sample reported a 100 per cent increase in irrigation frequency over previous values. Climate change also encourages the spread of pests and invasive species and has already increased the geographical range of some diseases. In essence, it is altering the distribution pattern of animal and plant pests and diseases. The change in temperature, moisture and atmospheric gases accelerates the growth rates of plants, fungi and insects, which alters the interaction between pests, their natural predators and hosts. In this regard, it is important to trace the farmers' responses on whether there has been an increase in pest attacks and disease outbreaks in crops in recent years. The survey found that 73 per cent of farmers confirmed an increase in pest attacks and the occurrence of crop diseases due to climate change (table 8).
Adaptation Strategies in the Chambal Basin
The Chambal basin primarily has rain-fed agriculture, and the groundwater level in the region has been in continual decline. As a result, the pressure of climate-change-induced increases in irrigation requirements on the available water resources has grown manifold. Improved water management is thus one of the most important long-term adaptation and protection options that the region must pursue. A wide range of adaptation measures have been highlighted in this regard, such as improving water distribution strategies, changing crop and irrigation schedules, using rainwater more effectively, water recycling and the conjunctive use of groundwater. In this respect, some major strategies were identified from the literature: (i) planting trees, (ii) soil conservation, (iii) different crop varieties, (iv) early and late planting, and (v) water harvesting and improved irrigation management. The farmers were thus queried about their chosen adaptation strategies to protect crops against climate change. Table 9 explains the various adaptation practices used by the farmers of the region. They are not mutually exclusive, as farmers practise multiple adaptation techniques simultaneously as per their need and suitability.
Water Harvesting / Improved Water Management
Water harvesting was found to be the most popular adaptation strategy followed by the farmers of the Chambal basin. It is adopted by 84 per cent of the sampled farmers. It has specifically become popular since the launch of ambitious schemes like Khet Talab Yojana and Balram Taal Yojana. Water harvesting can be defined as a range of techniques for collecting rainwater.
Water harvesting is economically beneficial for local farmers as it is the only feasible method of farming on degraded land devoid of other means of water for irrigation. It is also significant as a sustained source of irrigation for Rabi crops. Furthermore, it helps significantly in the recharge of groundwater resources of the region, adds greenery and in this way acts as a positive externality towards the overall ecology.
Irrigation Management
Improving the use of irrigation is generally perceived as an effective means of smoothing out yield volatility in rainfed systems. It has the potential to improve agricultural productivity by supplementing rainwater during dry spells and lengthening the growing season (Orindi and Eriksen 2005). Overall, improving the use of irrigation helps avert crop losses in areas subject to recurrent cycles of drought.
Around 58 per cent of the sample farmers used this method to fight climate change (table 9). The farmers use plastic pipes for transporting water from the reserve to the farm. They also use sprinklers for efficient use of the available water. The government subsidy for proper water management has played a major role in the adoption of water harvesting and conservation measures (Orindi and Eriksen 2005).
Early and Late Planting / Changing Plant Dates
Altering the length of the growing period and varying planting and harvesting dates are among the crop management practices used in agriculture (Orindi and Eriksen 2005). This includes early and late planting options as a strategy to fight the harmful effects of a changing climate. The strategy helps to protect sensitive growth stages of crops by ensuring that these critical stages do not coincide with very harsh climatic conditions such as mid-season droughts. Early and late planting comes third in the sequence of importance among the major adaptation strategies and is followed by 44 per cent of the farmers surveyed (table 9). The Malwa region now strictly follows a soybean-wheat annual crop cycle. As soybean is a Kharif crop and its growth cycle is strictly regulated by rainfall, changes in the precipitation cycle certainly change its sowing and harvesting dates for the farmers. For example, many farmers have now started opting for the 95-60 soybean varieties instead of the regular variety of soybean planted earlier.
Wheat can be sown only after the harvesting of soybean, in the Rabi season; therefore, wheat planting dates also change accordingly. Farmers are practising early sowing dates and quicker-maturing varieties of soybean so that they can use the soil moisture remaining after the rainy season for the next crop, such as wheat, gram, mustard and other Rabi-season crops. The monsoon season in the region normally extends up to the end of September, or sometimes to early October, which provides enough moisture for the cultivation of the next crop. This has not only increased the cropping intensity but also made the Malwa region the bowl of wheat and soybean.
Plantation
Planting trees or afforestation, in general, provides a particular example of a set of adaptation practices that are intended to enhance productivity in a way that often contributes to climate change mitigation through enhanced carbon sequestration. It also has a role to play in strengthening the system's ability to cope with adverse impacts of changing climate conditions. It also contributes to temperature stabilization in the region. The farmers of the region thus follow tree plantation, particularly along the water harvesting structures. Almost 25 per cent of the sampled farmers undertake tree plantation as a method to avert climate change impact (table 9). This has increased the vegetation cover in the region.
Crop Diversification / Different Crop Varieties
By switching over to varieties that are early maturing, drought tolerant and/or resistant to temperature stresses, farmers save their crops from rainfall fluctuations as well as add variety (Orindi and Eriksen 2005). There is evidence that growing different crop varieties on the same plot or on different plots reduces the risk of complete crop failure, as different crops are affected differently by climate events, and this in turn gives some minimum assured returns for livelihood security. The pattern of crop diversification and its emerging trends in the Malwa region have already been discussed in detail in a previous chapter. In the survey, approximately 24 per cent of the farmers favoured the adoption of different crop varieties and 25 per cent supported the planting of trees on their fields as an essential strategy to ward off the negative impacts of climate change (table 9). Nihaal Singh Tomar from Harnawada village in Dewas district succinctly mentioned that the only way to ensure sustained production in the wake of climate change was to make a pond in the field to capture rainwater and to plant trees in the field.
Soil Conservation
The adoption of practices and technologies that enhance vegetative soil cover and control soil erosion is crucial to ensuring greater resilience of production systems to increased rainfall events, extended intervals between rainfall events, and potential soil loss from extreme climate events. Improving soil management and conservation techniques assists in restoring the soil while also capturing soil carbon and limiting the oxidation of organic matter in the soil. Soil conservation is automatically ensured by following all the above-mentioned strategies; however, the issue of soil conservation was highlighted by only around 13 per cent of the sampled farmers (table 9). Only a minuscule 3 per cent of the farmers said that they were not adopting any specific adaptation strategy (table 9). This makes it clear that almost all the farmers of the Chambal basin are aware of the negative impact climate change has on production trends and are taking appropriate mitigative steps.
Irrigation Profile of the Farmers
The beneficial adaptation in the fight against these problems is to work on optimum irrigation and better rainwater harvesting facilities. In this study, emphasis was laid on knowing the irrigation profile of the surveyed farmers, that is, the sources used for irrigation, for example, tube-well, pond, well, etc. This is shown in table 10.

[Table 10: Irrigation profile of the surveyed farmers; average water withdrawal per hour per farmer by source: 13.74, 6.81 and 6.48. Source: Authors]
A majority of the farmers in the survey sample use ponds as their major source of irrigation (65 per cent), followed by wells (47 per cent) and tube-wells (25 per cent). This shows that the importance and usage of ponds have greatly increased in recent times, reducing farmers' dependence on groundwater resources. Thus, rainwater harvesting as an adaptation has lived up to the expectations of the farmers. Farmers from Harnawada village in Dewas district emphasise that since ponds have been constructed on the farmers' fields in the village, it has meant the symbolic death of the tube-well. Villagers testify to a decline in the use of tube-wells since the adoption of rainwater harvesting techniques, which, according to the farmers, has helped them significantly in retaining the soil moisture after the rains. This indicates that rainwater harvesting is not only ecologically beneficial but also cost-effective in terms of per unit water consumption.
Tracing Farmers' Responses on Effectiveness of Varied Adaptations (Case Studies)
The farmers of Indore district are the main beneficiaries of the recently launched Balram Taal Yojana. Semaliya Raimal and Kampel villages are good examples of excellent work in water harvesting. Yashwant Patel from Semaliya Raimal underlined the importance of the Yojana and its benefits to people when he said that it has helped the villagers in maintaining the stock of water in their fields, enhanced profits significantly, and fulfilled their irrigation needs. Krishnapal Singh Daangi from the same village added that rainwater harvesting has made him self-reliant as it improved his farm production by leaps and bounds. Vishnu Daangi, another farmer, said that as the area's sub-soil is full of stones, tube-well recharge is poor even when the region has abundant rains; in such situations, rainwater harvesting is a blessing. Dilip Patel states that because of water harvesting he has stopped borrowing for agricultural needs, as it has made taking two to three crops a year possible and is thus increasing his total income. Kansingh Daangi, also a farmer, said water harvesting brought him an overall better life, as it made it possible for him to build a pakka house and send his children to good schools.
In Shadadev, a village adjacent to Semaliya Raimal, farmer Pawan Singh describes the advantages of water harvesting. He says that before they began water harvesting, they were compelled to irrigate by drawing water directly from the Shipra. As this was an illegal practice, farmers were fined Rs. 20,000 to Rs. 25,000. But after the farmers started rainwater harvesting, their irrigation difficulties were sorted out. The farmers' experiences from Kampel village have been on similar lines. Sunil Nimadia states that rainwater harvesting ensures that the available water is conserved and also helps recharge the water table. Other villages of Indore district where water harvesting has been carried out substantially are Paaliya, Faraspur, Rawad, Balodatakun, Atawada, Nevary, Matabarodi and Kadwaali Bujurg. Farmers' responses from these villages have been on similar lines. They have also reported increased water levels, tube-well recharge, less dependence on rainfall, a greater area under crop production, sustained irrigation facilities for Rabi crops and, last but not least, enhanced socio-economic status with better educational facilities for their children.
Arjun Singh from Pedmi village, Indore district, explains that Kumbi and Beed are big naalas in his area, but there is no dam on them; if stop dams are made on them, the wastage of water can be minimised. Mahendra Singh Chouhan from Mhowgoan village gives an overview of different adaptation measures, saying that adaptation, in essence, is a long-term process with many benefits. It includes a wide range of measures such as plantation, construction of ponds, soil conservation, soil testing, save-water campaigns, etc. These contribute to farming as well as to the environment.
Dewas district is a pioneer in water harvesting activities in the Chambal basin. Tonk Khurd Tehsil is world-famous for the ponds being constructed here under the ambitious Khet Talab or Rewa Sagar Yojana. Jujhaar Singh Tomar from Harnawada village says that there has been a great increase in the yield of wheat and gram in the area, along with a substantial increase in green cover, since the practice of rainwater harvesting began. He suggests more investment in water harvesting and tree plantations. Forak Singh Tomar from the same village urges the Government to increase the subsidy on the construction of a pond in the field from Rs. 80,000 to Rs. 200,000. Mansingh Tomar says that there has been a 200 per cent increase in production from his field due to rainwater harvesting. All the farmers say that tree plantation in their fields was the next best adaptation measure after rainwater harvesting. Sheshnarayan Patel from Gorwa village also stated that his production had doubled. Water harvesting is extremely important for water conservation and ecology; varied types of animals and plants are now noted in the village, and deer are now easily visible in the area. Vishnu from the same village drew attention to soil conservation as a result of water harvesting activities. Uday Singh Khiswi, also from Gorwa, said the improved situation encourages him to work hard, as water harvesting has made it possible to expect assured returns from farming. He further says that the Government should ban deep tube-wells in the area and encourage the construction of ponds instead.
The districts of Mandsaur and Neemuch are in the vicinity of the Gandhi Sagar Dam and the Retam Barrage. These two districts have seen substantial work in water harvesting and well-recharge activities. The villages of Kachnara, Borkhedi and Haripura were covered in Mandsaur district. Gobar Singh from Kachnara says that the rainwater harvested is also used to recharge wells. Kishan Singh says that well recharge has helped him get additional income from the production of fruits like mangoes, papaya and pomegranate in his fields. Madho Singh Borona from the same village emphasises improved crop yield due to water harvesting. He suggested that water can be transferred from one dam to another by linking them with canals. Earlier the region was continuously under drought; now the farmers are prosperous, whereas earlier they used to work as daily wage labourers. The farmers from Borkhedi told a similar story. Kamal Singh Shamsawat from the village says his farm production has increased to a great extent as he now gets three crops in a year. Under the Kapildhara scheme, 28 wells have been constructed and all farmers have been provided with Kisan Credit Cards. The construction of the Retam Barrage in the year 2000 has benefitted the farmers; the water supply is now ensured for a fee charged on the basis of irrigated land in hectares. He also emphasised soil conservation as a major adaptation measure in saving agriculture from the harmful impacts of climate change. Hiralal Ojha and Deepsingh Sattawat also cited the advantage of building the dam; they said they have started sugarcane farming because of it, and have also started cultivating coriander. They also supported soil conservation and the plantation of trees. Ramcharan Rewari from Haripura said that water harvesting has considerably increased his basket of production, which now includes wheat, coriander, gram, isabgol, flaxseed, mustard, fenugreek, etc. He supported soil conservation and proper soil testing as the major methods of adaptation, apart from water harvesting and the implementation of new and improved methods of irrigation.
Finally, concerning the efficacy of various adaptation measures, this study examines the farmers' responses in Neemuch district. The villages covered here included Barlai, Hatunia and Pipliya Ghota. Rahul Patidar from Barlai says that he now has an orange orchard of his own due to water harvesting. He also favoured the plantation of trees as an adaptation measure. Vishnu Prasad Patidar says that he could grow a variety of crops like orange, garlic, wheat, coriander and fenugreek only because of water harvesting. Shambhulal Patidaar said that water harvesting is giving him an annual return of at least four lakh rupees through improved farm productivity. He emphasised organic farming and the plantation of trees as adaptation measures.
In the village of Hatunia, there are around 280 to 300 ponds. Here, tube-wells and hand-pumps are not successful, and farmers are engaged in agricultural activities only because of water harvesting. Satyanarayan from this village supports the construction of more ponds as well as the plantation of trees as the main adaptation measures for sustaining farming in the face of climate change. Villagers from Pipliya Ghota also mainly follow water harvesting, and seek an enhanced subsidy for it, along with the plantation of trees and soil conservation, as adaptation measures for the changing climate.
Maladaptation: Soybean and Wheat based Monoculture
A 'maladaptation' is a trait that is (or has become) more harmful than helpful, in contrast to an adaptation, which is more helpful than harmful. Farming practices that have increased farmers' production and income in the short run but become a severe danger in the long run, if continued unabated, can thus effectively be called maladaptations. One such maladaptation in the Malwa region is 'monoculture'. Monoculture is the practice of producing a single crop over a long period in a certain area. The practice of monoculture is usually stimulated by political and economic incentives. Specialisation brings obvious benefits through economies of scale in terms of higher yields and easier mechanisation; however, there are disadvantages associated with monocultures. Monocultures lead to an easier spread of diseases and pests, thereby decreasing resilience to climate variability, which often induces additional stress on plants.
Additionally, when the produced crop is negatively affected by changing weather or biophysical conditions, farm income may be severely affected. For these reasons, moving towards diversification reduces the risks of maladaptation (Lin 2011). The Malwa region is a classic textbook example of such monoculture. Since the 1980s, the area has become a specialised zone of soybean-wheat annual cycle-based production. Soybean plants usually grow at ambient temperatures between 15°C and 27°C, although temperatures below 21°C and above 32°C may reduce flowering. Temperatures exceeding 40°C (104°F) are detrimental to seed production.
Soybean is adapted to grow in a wide range of soils and climates but requires adequate soil moisture for germination and seedling establishment. Soybean has flourished well in the Malwa region as many of its growth conditions are satisfied simultaneously.
The soybean success story caught headlines not only regionally but also at the national level. Malwa has practically given up the production of crops like maize, sugarcane and especially cotton after the soybean success. However, this specialisation has reduced crop diversification in the region. This monoculture has also been sustained by continuous groundwater exploitation. Since the 1980s, the Malwa region has become increasingly tube-well dependent to sustain its crop cycle. During the survey, it was found that villages like Jalodiya-Panth in Depalpur Tehsil of Indore district had as many as 500-600 tube-wells with depths ranging from 250 to 500 feet. The whole region is sustained on irrigation from groundwater resources and has in recent times been hit by severe water shortages, not only for irrigation but also for drinking purposes, in the wake of their fast depletion.
Hence, such a crop cycle faces a serious threat. The maladaptation thus needs to be balanced by a suitable adaptation that can ensure a sustained water supply for irrigation. Besides, the soybean-wheat crop cycle carries high risks of infestation by widespread pests, and many farmers in the survey corroborated such incidences. A farmer from Dhaturiya village in Dewas district said that the soybean crop in the district has in recent times suffered from severe caterpillar and fungal attacks. The soybean crop also suffered severely due to the acute shortage of rainfall during the growing stage, coupled with a rise in temperatures beyond 32°C, many a time crossing 40°C, which severely affected the crop. Warm temperatures and high humidity are conducive to the fungus that leads to the development of soybean rust. Soybean gets totally destroyed in the case of untimely torrential rains; this is known as jal jaana in the local language. Thus, both extreme drought conditions with high temperatures and torrential excessive rains are harmful to the crop.
CONCLUSIONS AND POLICY IMPLICATIONS
As per the moisture index, six out of the eight districts in the study region lie in the arid zone, clearly indicating a movement towards desertification of the region. The nature of climate change in the Chambal basin was also explored through farmers' observations about changes in temperature and precipitation and the occurrence of extreme events. Farmers reported an increase in temperature with a clear majority of around 73 per cent. They reported an increase in the occurrence of heat waves, cold waves, frost and droughts in the region. A decrease in precipitation was, however, noted by only a few farmers. There were thus indications of increasing aridity in the study region. The impact of climatic change was analysed through farmers' responses about changes in crop yields, the extent of change in irrigation frequency, as well as the spread of pest infestation and disease occurrence in plants. Around 70 per cent of farmers reported a decrease in crop yield, while close to 60 per cent of farmers reported a greater than marginal increase in irrigation frequency. As much as 40 per cent of the total sampled farmers reported a 100 per cent increase in irrigation frequency. About three-quarters of the total sampled farmers reported an increase in pest attacks and disease occurrence in crops.
From the survey responses, the study considers crop diversification, changing planting dates, soil conservation and soil testing, increasing rainwater capture, construction of stop dams on naalas, and tree plantations as the major adaptation strategies farmers perceive as appropriate for rain-fed agriculture. Water harvesting was found to be the most important adaptation measure, followed by crop diversification. The case for water harvesting was established by the transition of Dhar district from the arid towards the semi-arid zone as per the moisture index-based analysis. It also became clear from the survey of the farmers that adaptation measures to climate change cannot be considered in isolation, but relative to the impacts of other exogenous sectoral changes. The issue of the 'maladaptation' of soybean-wheat monoculture has accentuated the crisis in the region. This has resulted in severe groundwater depletion in the region and thus overall damage to the ecosystem. Therefore, there are social costs as well as ecological limits to crop-based adaptations in the region. Hence, gross market and institutional failures that make farmers very vulnerable come to the forefront. In short, the key lesson to emerge is that the prioritisation of appropriate adaptation measures needs to be contextual and fit the capacity of local institutional and legal frameworks. Water harvesting measures should be especially strengthened by policy in the study region to cope with the changing climate and its effects on the agricultural sector. Mainstreaming adaptation strategies is thus to be considered the most important policy intervention.
Venous thromboembolism prophylaxis: Solutions are in our hands
Fahad M. Al-Hameed

Venous thromboembolism (VTE) is a spectrum of diseases involving deep venous thrombosis (DVT) and/or pulmonary embolism (PE). Without prophylaxis, the probable incidence of confirmed hospital-acquired DVT is approximately 10 to 40% of medical or surgical patients and 40 to 60% following major orthopedic surgery. [1] Venous thromboembolism carries significant hospital morbidities and mortalities. It has been estimated that 10% of hospital deaths are due to pulmonary embolism (PE). [1] Therefore, VTE is considered the number one cause of preventable death among hospitalized patients.
Unfortunately, VTE prophylaxis for high-risk hospitalized patients is extremely underutilized. Implementation of preventive strategies has been proven to be feasible and effective, and adverse outcomes can be significantly minimized. In addition, effective pharmacological and mechanical VTE prophylaxis is available for medical and surgical patients. Furthermore, VTE prophylaxis has been found to be cost-effective in many clinical trials. [2] For many years, physicians have continued to underutilize VTE prophylaxis for their patients due to historical misconceptions and reasons that remain unproven. Some of these misconceptions include fear of bleeding from anticoagulants; the belief that the overall incidence of VTE among hospitalized and postoperative patients is too low to consider prophylaxis; concern over heparin-induced thrombocytopenia (HIT); unawareness that broad application of prophylaxis may be cost-effective; and perceptions that VTE is not a significant problem in their practice. [1][3] The wide gap between guidelines and implementation still exists. In a multinational cross-sectional study looking at VTE risk and prophylaxis in the acute hospital care setting (the ENDORSE study), Cohen et al. found that only about half of hospitalized patients actually received ACCP-recommended VTE thromboprophylaxis (54.7% of surgical patients and 32.5% of medical patients). [4] In another local study, Aboelnazr and his colleagues found that only 21.7% of medical patients received ACCP-recommended VTE thromboprophylaxis. [5] The study was conducted at the Saudi Aramco Medical Services organization and is reported in the (April-June) issue of the Annals of Thoracic Medicine (ATM). The authors reported on a quality improvement project regarding adherence to ACCP-recommended VTE prophylaxis guidelines in their hospitalized medical patients. They achieved a notable rate of 91% for in-hospital VTE prophylaxis. Although they started at a rate of 63%, after using multiple strategies, which included education, daily e-mail reminders and, finally, considering VTE prophylaxis as part of their weekly round, they managed to attain a rate of 100%, making the overall rate of VTE prophylaxis 91%. Upon review of the literature, I found no similar study that has accomplished these outstanding results. From a quality management point of view, this success should be commended. However, it would be helpful if the investigators had examined and clarified whether VTE prophylaxis was appropriately administered in terms of the right drug, dose, timing and duration of prophylaxis. More important is developing mechanisms ensuring the sustainability of these outstanding results. As a result of this high rate of VTE prophylaxis, the authors report a significant increase in the VTE-free period, with an 11-month period without a single VTE.
Several strategies that can enhance VTE prophylaxis compliance rates have been reported in the literature. First, increasing health care provider awareness of VTE prophylaxis through periodic educational sessions. [6] Second, using medical admission order sets that incorporate VTE prophylaxis. [7] Third, using electronic alerts to physicians whose patients are at risk for VTE but not receiving VTE prophylaxis, known as computer-based clinical decision support systems. [8] Fourth, developing national nonprofit organizations such as the Saudi Arabian Venous Thrombo Embolism (SAVTE) advisory group (www.savte.com) and/or using the expertise of international organizations in the same field, such as the American College of Chest Physicians (www.accp.org), which devote their time to increasing awareness and knowledge of venous thromboembolism (VTE), facilitating advances in the treatment of affected people, and promoting the routine implementation of venous thromboembolism prophylaxis measures among health care providers (HCP). They also adopt the notion of a "think tank" to provide expertise to advise physicians, scientists, health authorities, and the healthcare industry regarding medical technologies and pharmaceuticals relevant to VTE prophylaxis.
In conclusion, prevention of VTE in hospitalized patients at moderate and high risk of VTE is considered a patient safety issue with medico-legal implications. Therefore, each hospital has to establish its own VTE prophylaxis policy in line with international standards, monitor its implementation, and audit and update it to control this "epidemic." Finally, VTE prophylaxis is a long-term journey of medical quality management; we know the beginning of this journey but not the end.
In August 2004, for the first time, the French Public Health Programme for 2004-2008 included a tangible objective to reduce the incidence of DVTs by 15% by 2008. In April 2007, the United Kingdom (UK) Government announced a national VTE strategy, which required mandatory risk assessment and prophylaxis for hospitalized patients in all UK hospitals. In December 2004, in the United States (US), public health leaders established the Coalition to Prevent DVT and PE, and marked March 2004 as the first US DVT Awareness Month. In 2008, Steven Galson issued The Surgeon General's Call to Action to Prevent Deep Vein Thrombosis and Pulmonary Embolism.
An effective biomedical data migration tool from resource description framework to JSON
Abstract Resource Description Framework (RDF) is widely used for representing biomedical data in practical applications. With the increase in RDF-based applications, there is an emerging requirement for novel architectures to provide effective support for the future RDF data explosion. Inspired by the success of the new designs in the National Center for Biotechnology Information dbSNP (The Single Nucleotide Polymorphism Database) for managing the increasing data volumes using JSON (JavaScript Object Notation), in this paper we present an effective mapping tool that allows data migration from RDF to JSON for supporting future massive data explosions and releases. We first introduce a set of mapping rules, which transform the RDF format into the JSON format, and then present the corresponding transformation algorithm. On this basis, we develop an effective and user-friendly tool called RDF2JSON, which enables automating the process of RDF data extraction and the corresponding JSON data generation.
For instance, the RDF model is chosen in GlycoRDF (32) for glycomics-based data resource integration and representation. In (31), by using the RDF model, the DisGeNET platform interconnects multiple gene-disease associations and pharmacological data sources obtained from several drug discovery applications to help study the molecular mechanisms underpinning human diseases. To effectively publish cross-reference information about diseases and abnormal states extracted from a disease ontology and an abnormality ontology, Disease Compass (17) links the causal chains of diseases by using the RDF model.
The emergence of numerous RDF-based applications leads to the generation of massive RDF data resources, which naturally attracts interest in seeking novel architectures that provide support for the future RDF data explosion (2). In recent years, JSON (JavaScript Object Notation) has become a popular format for representing and publishing massive data resources in web applications (8,16,18,35,38). JSON documents can be used to store records in the MongoDB database (3), which is cross-platform and supports distributed processing of large data sets. Previous attempts (4,14) partition the RDF data graph into several subgraphs by duplicating non-literal nodes, and then organize these partitioned subgraphs in MongoDB. This partitioning approach duplicates non-literal nodes twice and costs some extra storage space.
JSON has been accepted as a major format for the future data explosion in the National Center for Biotechnology Information dbSNP (34). The architecture of dbSNP has now been redesigned to provide products as JSON files, which suit programmatic approaches well and can effectively support the increasing volume of data. Orphanet (26) chooses JSON as a new data format for its mission of providing the scientific community with freely available data sets related to rare diseases and orphan drugs in a reusable format. As a completely language-independent format, JSON has a higher compression ratio in coding, and this property makes it take up less space (30). This property also makes JSON a popular format for data interchange on the web (15,20,21,39). Therefore, many web-based services, such as Semantic-JSON (16) and Open Chemistry (12), choose JSON as their data representation format. In order to provide access to linked life science databases, Semantic-JSON (16) develops an interface using a representational state transfer (REST) web service for the retrieval of Semantic Web data in JSON formats. Open Chemistry (12) develops a web application providing access to open source chemical science data in JSON formats; this facilitates data exchange across different languages and makes it easy to send chemical science data to web clients developed in JavaScript. Moreover, Ensembl (39) provides a web service that retrieves its data through a REST interface, and JSON is chosen as its main endpoint data exchange format.
The adoption of a new data format triggers the requirement of data migration from the historical format to the new one (33). There have been some mapping tools that allow such data migrations in previous works, e.g. Bio2RDF (6) and VCF2RDF (29). Bio2RDF creates a knowledge space based on RDF documents by converting public bioinformatics databases into RDF documents and linking them together with normalized URIs (Uniform Resource Identifiers) in a standardized way. VCF2RDF presents an isomorphic mapping between VCF and RDF to provide portability and interoperability of the self-contained description of genomic variation in next-generation sequencing results. Unfortunately, although JSON has been employed to model the coming data explosion, and the available size of RDF data sources is rapidly increasing, relatively little work focuses on data migration from RDF to JSON (1,37). In particular, in the new era of big data, studies on the mapping rules and on an effective mapping tool from RDF to JSON for biomedical users without programming skills obviously lag behind.
Currently, developing an effective mapping technique to handle data migration from the RDF model to the JSON model in a uniform way is still an open problem. To address this problem, in this paper we present an effective mapping tool that allows data migration from RDF to JSON to support the coming massive data explosion. After giving a set of mapping rules that transform the RDF format into the JSON format, we present the corresponding transformation algorithm from RDF to JSON. On this basis, we develop a user-friendly tool called RDF2JSON, which automates the process of RDF data extraction and the corresponding JSON data generation. Finally, experimental evaluations on real-world data sets are carried out to verify the advantages of the proposed tool.
Materials and methods
In this section, we first describe the RDF graph model parsing process and then introduce the mapping rules. Based on these mapping rules, the corresponding data migration algorithm is developed. The implementation details of the mapping tool RDF2JSON are given at the end of this section.
RDF graph parsing
RDF is based on triples consisting of resources, properties and values. A resource is an entity accessible by a URI, a property defines a binary relation between resources or literals, and a value is a literal or a resource. RDF Schema (RDFS) (27) provides a data modeling vocabulary and syntax for RDF descriptions. RDFS allows the definition of classes and properties, which have global effects, and it is flexible enough to add new properties to existing classes. In the following, a fragment of an RDF schema about computational models of biological processes is given in Figure 1.
In Figure 1, there are classes such as 'SBMLElement', 'SpeciesReference' and 'KineticLaw', and properties such as 'sbmlElement' and 'kineticLaw'. Since classes and properties can be refined into subclasses and subproperties, RDFS defines both a hierarchy of classes and a hierarchy of properties. For example, the tag 'rdfs:subClassOf' states that the class 'KineticLaw' is a subclass of the class 'SBMLElement', and the tag 'rdfs:subPropertyOf' states that the property 'kineticLaw' is a subproperty of the property 'sbmlElement'. In the domain and range mechanism of RDFS, a property is defined with a domain and an associated range, which can be a literal or a class; these constraints build the relations among classes and properties. For example, the property 'kineticLaw' is constrained by 'domain' and 'range', which means that its values are instances of the class 'KineticLaw' and that the class to which the property applies is 'Reaction'.
An RDF statement consists of a subject, a predicate and an object, in which the subject is a class, the object is a class or a literal, and the predicate is a property. Since RDFS builds a hierarchy of classes and a hierarchy of properties from the vocabulary used in RDF statements, we can take advantage of this hierarchy to construct an RDF graph model for data extraction. Figure 2 shows the graph model constructed from the RDF schema above based on the hierarchical relationships of classes and properties. In this graph model, vertices depict classes and literals, and edges represent properties.
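To make the parsing step concrete, the following minimal Python sketch (not the authors' published code; the input file name is hypothetical) uses rdflib, the library named later in the implementation section, to load an RDF/XML schema and list the subclass and subproperty relations together with each property's declared domain and range, i.e. the edges of the graph model illustrated in Figure 2.

# Minimal sketch, assuming a hypothetical input file "sbml_schema.rdf":
# parse an RDF/XML schema with rdflib and walk the RDFS class/property hierarchies.
from rdflib import Graph, RDFS

g = Graph()
g.parse("sbml_schema.rdf", format="xml")  # RDF/XML input

# Subclass relations: edges of the class hierarchy
class_edges = [(str(s), str(o)) for s, o in g.subject_objects(RDFS.subClassOf)]

# Subproperty relations plus declared domain/range for each property
prop_edges = [(str(s), str(o)) for s, o in g.subject_objects(RDFS.subPropertyOf)]
domains = {str(s): str(o) for s, o in g.subject_objects(RDFS.domain)}
ranges = {str(s): str(o) for s, o in g.subject_objects(RDFS.range)}

for child, parent in class_edges:
    print(f"{child} rdfs:subClassOf {parent}")
for prop, parent in prop_edges:
    print(f"{prop} rdfs:subPropertyOf {parent} "
          f"(domain={domains.get(prop)}, range={ranges.get(prop)})")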
Mapping rules from RDF to JSON
This section introduces the mapping rules from RDF to JSON. The details of the mapping rules are as follows. RULE 1 For the root element (depicted as rdf:RDF) in the RDF document, the namespaces are transformed into entries of the root JSON object, where each prefix refers to an RDF namespace and the namespace itself is identified by its URI. RULE 2 For a basic RDF description, the key is the property name. The value of the property is described in one of two categories: (a) Literal type: the value object includes the value and its datatype; if the language of the property value is specified, the language attribute (lang) is also included. (b) URIRef type: the value object includes 'rdf:resource', whose value is the URI of the referenced resource.
RULE 3 For a property whose value is a blank node identifier, all the triples in which that identifier is the resource are extracted from the model, and these triples are processed recursively according to the basic RDF description.
In the case where the value of a property is a set of values, the W3C recommendation defines containers such as bags, sequences and alternatives to hold such values. The parsing rule is as follows: RULE 4 For a set of values, the elements under the child relation are mapped into key-value pairs, in which the key is the ordinal attribute and the value is the value of the element. The type of the container is retained.
For describing a group containing only the specified members, and in compliance with the grammar rules for triples, the elements are nested according to their order. RULE 5 For a group of specified elements, the nested structure is parsed to generate a list of key-value pairs, and the type information is retained. RULE 6 For the hierarchy relation of classes, a key-value pair, in which the key is the subclass property name and the value is the URI of the superclass, is put under the resource JSON object. Figure 3 shows a schematic diagram of the processing of the class hierarchy relation, in which C1 and C2 are class instances. The labelled solid line represents that C1 is a subclass of C2. The relation is extracted from the triple set of the description of C1, transformed into a key-value pair, and put under the JSON object of C1. RULE 7 For the hierarchy relation of properties, a key-value pair, in which the key is the subproperty name and the value is the URI of the superproperty, is put under the resource JSON object. Figure 4 shows a schematic diagram of the processing of the property hierarchy relation. The labelled solid line represents that P1 is a subproperty of P2, where P1 and P2 are property instances. The relation is extracted from the triple set of the description of P1, transformed into a key-value pair, and put under the JSON object of P1. RULE 8 For the domain and range of a property, two key-value pairs, in which the keys are the names of these properties and the values are their values, are put under the property JSON object. Figure 5 shows a schematic diagram of the processing of the domain and range of a property. C1 and C2 are class instances defining the resources denoted by the subjects and objects of the triples whose predicate is P. The relations are extracted from the triple set of the description of P, transformed into two key-value pairs, and put under the JSON object of P. RULE 9 For repetitive properties of the same resource, the values are merged into an array. RULE 10 For a property that does not contain a specific value, a blank string is used as the value.
Figure 6 shows an example that illustrates the mapping between RDF and JSON. For simplicity, only two representative instances are shown in the figure. The property 'sbmlRdf:kineticLaw' is defined with the domain 'sbmlRdf:Reaction' and the range 'sbmlRdf:KineticLaw' and has the superproperty 'sbmlRdf:sbmlElement'. According to Rules 7 and 8, these properties are mapped into the JSON object of the property 'sbmlRdf:kineticLaw', which has already been defined by the basic description of 'sbmlRdf:kineticLaw' according to Rules 2 and 3. Similarly, the class 'sbmlRdf:SBMLElement' has the superclass 'sbmlRdf:Element', and this hierarchical relationship is mapped into the JSON object of the class 'sbmlRdf:SBMLElement' according to Rule 6.
Figure 7 gives the data migration process based on the above mapping rules (Algorithm 1). First, we use RDFLib to parse the RDF file into a graph model; RDFLib is a Python library for working with RDF, a simple yet powerful language for representing information. Then, the namespaces used to abbreviate URIs are extracted from the model into the root object according to Rule 1. In the main loop, all the resources contained in the model are processed step by step.
The properties associated with each resource are extracted to form the inner layer of the loop (lines 9-28). In the inner loop, we map the properties according to Rules 2 to 7. Finally, the root object is written to the JSON file. Since the algorithm mainly consists of two nested loops, its time complexity is O(n²).
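The core of the migration can be sketched as follows. This is a simplification for illustration, not the published RDF2JSON implementation: it covers only the namespace extraction of Rule 1, the literal/URIRef value objects of Rule 2, and the array merging of Rule 9, and the exact JSON key names used by the released tool may differ.

# Illustrative sketch of the migration loop (a simplification of Algorithm 1,
# not the published tool): namespaces go into the root object (Rule 1), literal
# and URIRef values are mapped per Rule 2, and repeated properties of the same
# resource are merged into an array per Rule 9.
import json
from rdflib import Graph, Literal, URIRef

def rdf_to_json(in_path: str, out_path: str) -> None:
    g = Graph()
    g.parse(in_path, format="xml")

    root = {"namespaces": {prefix: str(uri) for prefix, uri in g.namespaces()}}

    for subject in set(g.subjects()):
        resource = {}
        for predicate, value in g.predicate_objects(subject):
            key = str(predicate)
            if isinstance(value, Literal):
                entry = {"value": str(value), "datatype": str(value.datatype or "")}
                if value.language:
                    entry["lang"] = value.language
            elif isinstance(value, URIRef):
                entry = {"rdf:resource": str(value)}
            else:  # blank node: would be expanded recursively per Rule 3
                entry = {"rdf:nodeID": str(value)}
            if key in resource:                      # Rule 9: repeated property
                if not isinstance(resource[key], list):
                    resource[key] = [resource[key]]
                resource[key].append(entry)
            else:
                resource[key] = entry
        root[str(subject)] = resource

    with open(out_path, "w") as fh:
        json.dump(root, fh, indent=2)

Run on the schema fragment of Figure 1, this sketch would produce a root object whose 'namespaces' entry holds the prefix-to-URI map and whose remaining entries are one JSON object per resource.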
RDF2JSON framework and implementation
Figure 8 shows that the RDF2JSON application is designed and implemented on the basis of a client-server architecture, with the intention of separating each module into independent pieces. The server-side component is implemented in Python (version 3.6) and relies on several libraries such as zerorpc and rdflib. Zerorpc (version 0.9.7) is a flexible remote procedure call implementation, which serves as a known remote server to execute a specified procedure with supplied parameters. The preprocessing step of the data conversion uses the rdflib library to parse the input file and obtain RDF statements before applying the mapping rules. To write the transformed data into the output file we use the json module in Python. The returned data are saved under the same path as the source file.
The client-side component handles interaction with the user and is mainly implemented in Electron (version 6.0.0), an open-source framework for creating native applications with web technologies such as JavaScript, HTML5 and CSS. The page architecture, design and functionality use the Bootstrap framework together with several jQuery (version 3.x) plugins to enhance user interactivity. To improve user friendliness, a common display layout is adopted and maintained across application functions. The file upload area is located on the right side of the page. As a desktop application, it provides cross-platform compatibility among the most widely used operating systems (Windows 7 or above) and Linux (version 16.04).
Load JSON file into MongoDB
MongoDB can use the transformed JSON files to store the massive number of records obtained from RDF files. The mongoimport tool is provided for importing a JSON file into MongoDB; the command line is as follows: mongoimport -d <database> -c <collection> --file <filename>, where "-d" specifies the name of the database, "-c" specifies the collection to import into and "--file" specifies the location and name of the file containing the data to import. Note that the field names of a document cannot contain "." or the null character, and cannot start with "$", which is reserved for system references. Thus, if these characters appear in the document fields, they need to be replaced with other specific characters, such as equivalent Unicode characters.
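Because of these naming restrictions, it can be convenient to sanitize field names before running mongoimport. The helper below is a hedged sketch; the choice of full-width Unicode replacement characters is one possible convention, not one prescribed by MongoDB or by the paper.

# Hedged helper sketch: recursively rewrite JSON field names that MongoDB may
# reject ('.' inside a key, keys starting with '$'), replacing them with
# full-width Unicode look-alikes. The replacement characters are an assumption.
def sanitize_keys(doc):
    if isinstance(doc, dict):
        fixed = {}
        for key, value in doc.items():
            new_key = key.replace(".", "\uff0e")       # '.' -> full-width dot
            if new_key.startswith("$"):
                new_key = "\uff04" + new_key[1:]       # '$' -> full-width dollar
            fixed[new_key] = sanitize_keys(value)
        return fixed
    if isinstance(doc, list):
        return [sanitize_keys(item) for item in doc]
    return doc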
After storing the RDF records in MongoDB, the standard MongoDB query mechanisms can be used to search for the desired results. In particular, the 'find' function, which receives two parameters as JSON documents, 'query' and 'projection', and returns a cursor over the matching documents, can be used for searches. The 'query' parameter specifies the conditions that determine which documents to select, in which <field>:<value> expressions are used to specify equality conditions and query operator expressions. The 'projection' parameter specifies the fields to return in the matching documents. MongoDB also provides the aggregation pipeline method 'aggregate', which processes a series of documents through stages (such as COUNT, GROUP, SORT, LIMIT, SKIP, etc.) to provide aggregate computations. In addition, expression operators, such as arithmetic and boolean expression operators, can be used to construct query expressions.
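For illustration, the same kinds of queries can be issued from Python with pymongo; the database and collection names below, and the 'rdf:type' field, are hypothetical examples rather than the schema produced by RDF2JSON.

# Illustrative pymongo queries against the imported collection (names are hypothetical).
# 'find' takes a query and a projection; 'aggregate' runs a pipeline of stages.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
collection = client["biomodels"]["rdf_records"]

# Equality/operator condition plus a projection restricting the returned fields
cursor = collection.find(
    {"rdf:type": {"$exists": True}},
    {"_id": 0, "rdf:type": 1},
)
for doc in cursor.limit(5):
    print(doc)

# Aggregation: count documents per rdf:type and keep the five largest groups
pipeline = [
    {"$group": {"_id": "$rdf:type", "count": {"$sum": 1}}},
    {"$sort": {"count": -1}},
    {"$limit": 5},
]
for row in collection.aggregate(pipeline):
    print(row)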
Results
In order to test the performance of RDF2JSON, two real-world data sets containing RDF data sources, UniProtKB and BioModels, were chosen for our experiments. UniProtKB is a comprehensive resource for protein sequence and annotation data. BioModels is a repository of computational models of biological processes. All experiments were run on the Ubuntu 18.04 LTS 64-bit operating platform with the following system features: Intel Core i5-8500 CPU @ 3.00 GHz (6 cores) and 32 GB main memory. The approaches were programmed in Python 3.6. Table 1 gives the results of the experiment on the UniProtKB data set. From Table 1, we observe that, for this data set, JSON provides compressed storage and the compression rate for the tested data is about 40%. The best compression occurs in R1, with a rate of 46% (the RDF size is 8.75 MB and the JSON size is 4.64 MB). As the RDF/XML serialization is based on XML, a complete markup language, it uses redundant tags for content descriptions, which results in redundant storage; JSON organizes data as an ordered list of 'name/value' pairs, which reduces these redundant tags. Figure 9 illustrates the results of the experiments on the BioModels data set. In Figure 9, similar to the results obtained on the UniProtKB data set, a consistent pattern is observed and the average compression rate is about 14%. We also investigated the scalability of RDF2JSON by varying the number of input RDF files. Figure 10 reports the running times as the number of input RDF files increases. From the figure, we can see that the running time increases as the input grows, and the growth is approximately linear.
Discussion
With the rapidly increasing size of RDF data sources, there is an urgent need for a novel infrastructure that provides effective support for the coming data explosion.
In this paper, we address the RDF data explosion issue by introducing the JSON model and show the benefits of using JSON. After parsing the RDF schema, we present a set of mapping rules that transform an RDF schema into a JSON schema, and then propose an effective, dedicated algorithm to complete the data migration from RDF to JSON. We complement this work with a user-friendly and cross-platform tool, RDF2JSON, to help users without programming skills. Through the experimental results, we also demonstrate the performance and advantages of RDF2JSON using the real-world UniProtKB and BioModels data sets. In future work, we plan to integrate efficient query processing and optimization approaches using Spark/MapReduce, which offer promising processing performance over massive data. | 2019-07-26T13:05:20.545Z | 2019-01-01T00:00:00.000 | {
"year": 2019,
"sha1": "bf9217a9e688023e2fb711148f5cef268bd3455c",
"oa_license": "CCBY",
"oa_url": "https://academic.oup.com/database/article-pdf/doi/10.1093/database/baz088/29001039/baz088.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "bf9217a9e688023e2fb711148f5cef268bd3455c",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Medicine"
]
} |
252301873 | pes2o/s2orc | v3-fos-license | Research on the Teaching Reform of Business English Listening and Speaking Based on the Blackboard Platform from OBE Perspective
The “One Belt, One Road” initiative places greater demands on the cultivation of business English talents; compound talents who can communicate effectively in business contexts are in high demand. Based on the OBE teaching concept, the course is designed backward with the help of the Blackboard platform and other information tools. Teachers integrate the requirements of professional standardized examinations into the course teaching and evaluate learners' output through diversified formative assessment.
Introduction
Talents with a global perspective who are able to take part in international affairs and compete internationally are in high demand for the advancement of "One Belt, One Road." As an application-oriented university, the institution should base the training direction of its undergraduate education on the concept of "pre-employment vocational education," which aims to cultivate application-oriented, high-quality compound talents serving local needs. Business English specialists, with their expertise in language, economics, and trade, have become essential application-oriented talents in the "One Belt, One Road" boom. Their language proficiency, knowledge, and practical communication abilities are the essential prerequisites of talent training.
This study introduces the OBE teaching concept on the basis of a clearer understanding of the connotation and level of applied talents. It uses the Blackboard platform to carry out the teaching reform of the course "Business English Listening and Speaking." Teachers of this course further optimize the syllabus, enrich the teaching content, and refine the teaching points in line with the National Business English Teaching Standards; they adopt the Blackboard platform, Rain Classroom, and other modern educational technologies to carry out diversified course teaching. They also shift the evaluation system toward formative evaluation and align the teaching content and assessment forms with international and domestic standardized business English examinations, such as BEC and BTEM.
OBE
OBE (Outcome-Based Education) is also called outcome-oriented education, an educational philosophy based on learning outcomes or oriented toward results. Spady (1981) first put forward the term "outcome-based education," whose four principles are "clarity of focus," "expanded opportunities," "high expectations," and "designing back." [1] This concept focuses on the outcomes of the educational process, as opposed to the input-based approach. [2] The graduation goals and the curriculum goals should reflect, at the macro and micro levels respectively, society's specific needs for talents and students' ultimate expectations of learning. Therefore, systematic curriculum settings and courses should be carried out in course design, teaching methods, evaluation methods, and so on. Since then, Harden (1999), inspired by Spady, argued that OBE "is a performance-based approach at the forefront of curriculum development, leading the way in reforming the way medicine is educated and administered" [3], opening up the practice of talent development in the medical field. Scholars have since carried out OBE teaching practice in different areas based on these initial ideas and principles, and the research perspective has expanded from education and teaching concepts limited to the macro level of talent training to the micro level of curriculum theory. [4] The concept of OBE was introduced into China from Europe and America; after slow development in the early stage, it is still in its infancy. Since 2016, research results have shown a trend of accelerated development centering on themes such as the introduction of basic concepts and systems [5], practical exploration of engineering education [6][7], and talent cultivation [8]. The author carried out a fuzzy search on CNKI with "OBE" and "Business English" as keywords, which returned 81 published articles; however, there are few articles on the teaching reform of business English courses based on the Blackboard platform. Therefore, the teaching reform of business English courses guided by the OBE concept has specific research value.
Designing Back
"Business English Listening and Speaking" is a compulsory course for business English majors that provides essential practical knowledge of spoken business English and comprehensive English language application skills. To accomplish this goal, students must learn specific business English knowledge and skills and know how to engage in simple oral communication in foreign-related business activities. They can improve their abilities by taking part in several training projects and by socializing and cooperating with others. This integrated arrangement strengthens their comprehensive English ability and lays a solid foundation for them to use English proficiently in real work situations in the future.
Teaching Tasks Are Designed to Achieve Effective Foreign Business Communication
The purpose of this course is to familiarize students with the basic process of international trade and international business communication, strengthen their general industry knowledge, and develop their communication sensitivity as future business English talents. The particular circumstances of the epidemic pose more challenges to teaching and, at the same time, give it more room for progress. Through this course, students use real-life businesses and products to cultivate practical business communication skills. For example, the 2020 Canton Fair was held online, with exhibitors' basic information and exhibits on display. Therefore, instead of the self-created companies, brands, and products used previously, teachers guided students to choose one Canton Fair exhibitor and several of its exhibits, and trained them in language and pragmatic skills around the basic trade process. Students are divided into groups and play the roles of company managers, marketing and promotion personnel, factory personnel, negotiators, and so on. By mastering basic sentence patterns and basic business background knowledge, they learn how to use business skills to communicate effectively.
Integrating Domestic and International Standardized Examinations Such as TEM, BTEM, and BEC into the Task Designing
Teachers can integrate the business-related texts from TEM4 or TEM8 listening with the related topics of this course as practice materials in or after class. For example, the subject of Dialogue 1 in the 2017 TEM4 listening paper is a telephone conversation about after-sales service: the customer's computer was bought only a month ago, files were lost during transfer, and the customer is anxious: "Well, I was transferring my files to it from my flash drive, and they got lost. Everything!" Beyond solving the multiple-choice questions, teachers can focus on cultivating students' pragmatic ability in business English, namely how to deal with customers' complaints reasonably and solve problems effectively. "Okay. Now, don't worry. I'm sure we can sort something out." Here the customer service agent uses an empathic strategy to reassure the customer, as do "Oh. My goodness." and "Okay. I can understand how upset you must be." TBEM 4 and TBEM 8 are exams for business English majors that have gradually emerged in recent years. The Cambridge Business English Certificate examination (hereinafter referred to as BEC) is one of many exams that certify professional qualification and can, to a certain extent, reflect the certificate holder's ability to communicate in English in the workplace. While the focus of these two exams may differ, the results of both can serve as a reflection of a candidate's mastery of business English and the ability to use it. Topics such as "telephone communication" and "business messaging" in BEC intermediate listening can be appropriately used in backward design to check students' communication abilities.
Using the Informational Platform
This course reform utilizes Blackboard as a teaching platform.
Setting Tasks
According to the syllabus, teachers can divide the basic process of international trade into stages, integrate the themes of each unit in the textbook, and announce the staged teaching goals on the Blackboard platform before class. They can also use the system's built-in reminder function to guide students in clarifying the learning tasks. The new words and phrases covered in the textbook, especially business-related content, are the default preview content. In addition, theme-related content (such as text and video resources) from public course resources on various online course platforms can be attached to the BB platform to broaden students' horizons.
The class discussion forum is also an effective place to guide students' independent learning and thinking. Teachers can post thought-provoking questions related to the unit's topic in advance to gauge students' current stock of business language knowledge and skills, and they can hold off on responding to students' queries. By browsing classmates' responses and questions, peer learners can briefly discuss a topic or gain additional information from other students' replies. Teachers should take note of common issues that appear in their students' responses and address them during classroom teaching.
Analyzing Tasks
The OBE teaching philosophy stresses the importance of classroom teaching, which includes analyzing the final outcomes, dividing tasks, and teaching students how to learn. Taking the theme of "exhibition" in international trade as an example, teachers can subdivide this theme into multi-level sub-themes and present the main content of the unit on the BB platform in advance. The secondary themes may involve "exhibition-related business background introduction," "pre-show preparation," "exhibition," "post-show closing," and so on. This breakdown describes the content involved in the "exhibition" theme to students at a macro level, so that students first understand "what to do" in business activities and then learn the specific "how to do it."
Implementing Tasks
In addition to classroom audio-visual practice, students are required to carry out independent practice and cooperative group work after class. Our department has introduced a foreign language 3D training room that helps teachers provide virtual scene selection and real-time recording and vividly presents students' group role play with more realistic scene simulation. The recordings can be copied so that students can easily review how they completed the activity. Beyond the after-class simulation practice, second-classroom learning competitions also yield clearly practical output: students participate in various business English practice competitions to fully demonstrate their business English listening and speaking skills.
Diversification of Formative Evaluation of Output
Formative evaluation reflects the effect of students' periodic learning better than regular exams. First, this course conducts periodic assessments for each topic: in teaching, teachers use Tencent Classroom to record students' active participation in class, and use the BB platform and Rain Classroom interfaces to conduct real-time classroom listening quizzes. Second, this course also introduces diversified evaluation methods such as teacher evaluation, self-evaluation, and peer review. The BB platform discussion forum is used to display students' business scenario simulation ideas, audio, and video, and students are motivated to make adequate evaluations and give specific comments through embedded questionnaire tools or direct reply-and-evaluate postings. Third, the outcomes take various forms. The closed-book end-of-term examination of students' listening and speaking ability has been abandoned; instead, this course encourages students to demonstrate active learning and use their business language skills to complete cross-cultural business communication projects. For example, in the "exhibition" theme mentioned above, students thoroughly explore the concept of "exhibition" while investigating the business background; the introduction of representative exhibitions at home and abroad and the collection and classification of materials is one evaluable result. Exhibition preparation and personnel arrangements can be discussed in a simulated company meeting; following the textbook example, students' simulated discussion videos on the actual exhibitors and product display arrangements mentioned above are another assessable achievement. Filming profiles of the target company and product and conducting video or interactive marketing activities during the exhibition itself are also assessable results; similarly, the post-exhibition summary can be developed to measure students' practical ability.
Conclusion
Business English teaching still relies largely on traditional English teaching methods, which are incompatible with modern society. It is therefore of great practical significance to introduce the OBE concept into the construction of the business English curriculum and to carry out teaching reform using modern educational information technology. The transformation of college business English majors into application-oriented majors is inevitable. The results-oriented approach advocates learning-centered teaching, emphasizes knowledge integration, and encourages students to learn cooperatively. It embodies the "unity of knowledge and action," essentially integrating input learning and output application. Teaching reform that introduces the OBE concept can enhance classroom vitality, promote a good classroom teaching ecology, and give full play to the central position of students. The Blackboard platform and other related "Internet+" multimedia support diverse learning and group collaboration activities both online and offline; they also help teachers implement multi-modal and varied teaching, improving students' independent learning potential and teamwork skills. | 2022-01-16T07:29:01.149Z | 2021-01-01T00:00:00.000 | {
"year": 2021,
"sha1": "a84dd4e08d623c1205074795a59fc964edd8fff6",
"oa_license": null,
"oa_url": "https://francis-press.com/uploads/papers/6RmyJPPsvAiBZlQCmYEpuA6WpcHdxvhRE0yd74D7.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "a84dd4e08d623c1205074795a59fc964edd8fff6",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": []
} |
249684530 | pes2o/s2orc | v3-fos-license | Effects of agavins in high fat-high sucrose diet-fed mice: an exploratory study
ABSTRACT The aim of the present study was to investigate whether agavins supplementation might reduce obesity in mice fed a high fat-high sucrose (HF-HS) diet. Mice were fed a HF-HS diet with (HF-HS+A) or without (HF-HS) agavins supplementation. Body weight, white adipose tissue (WAT), inflammation biomarkers, gastrointestinal hormones, microbiota, and excreted metabolites were evaluated. HF-HS+A mice showed significantly reduced body weight, WAT, and leptin levels compared to the HF-HS group. Furthermore, pro-inflammatory cytokine and insulin levels tended to be lower in the HF-HS+A group. Moreover, the genera Allobaculum, Akkermansia, and Sutterella, linked with positive effects on host health, were identified as possible biomarkers of the agavins treatment, while ethyl oleate, thymine, hypoxanthine, uracil, and some fatty acids were substantially enriched with agavins and negatively associated with pro-inflammatory biomarkers. Collectively, these results demonstrate that agavins can ameliorate many of the harmful effects induced by intake of a diet with a high fat and sucrose content.
Introduction
The increasing prevalence of obesity and its comorbidities has become a serious public health and economic burden. Commonly, an obesogenic environment and inadequate physical activity are considered the main causes of the epidemic of obesity and related metabolic disorders (Ogden et al., 2007). In modern life, however, high fat diet intake combined with consumption of sugar-sweetened beverages is very common. In general, these foods are particularly energy-dense and promote rapid weight gain compared to balanced diets (Christ et al., 2019). In animal obesity models, it is clear that the combination of fat and sugar has a more detrimental effect, especially on insulin resistance, than saturated fat or sugar alone (Masi et al., 2017), augmenting the risk of developing type 2 diabetes as well as cardiovascular and neurodegenerative diseases (de Mello NP et al., 2019;Ormazabal et al., 2018).
To date, the main approaches to target obesity and its associated comorbidities include restriction of caloric intake and increased physical activity as well as the use of medications and bariatric surgery, among others (Ogden et al., 2007); however, some of these options imply high costs and may cause adverse secondary effects. In recent years, dietary strategies employing prebiotics have arisen as a good alternative for preventing or treating obesity and its associated comorbidities (Choque Delgado & Wmdsc, 2018). In this regard, inulin is perhaps the best-known prebiotic, and its addition to a HF-HS diet substantially decreases insulin and leptin levels and increases adiponectin concentrations in rats (Sugatani et al., 2008). Recently, Igarashi et al. (2020) reported that inulin supplementation shifts the fecal microbiota composition of HF-HS diet-fed mice, decreasing the F/B ratio as well as the relative abundance of Proteobacteria.
On the other hand, agavins are natural, highly branched neo-fructans extracted from the stems of Agave plants (Mancilla-Margalli & López, 2006). We previously reported that agavins supplementation can modulate the gut microbiota of high fat diet-fed mice, with the enrichment of bacteria with great probiotic potential such as Akkermansia (Huazano-García et al., 2020). Besides, agavins induce not only changes in cecal microbiota composition but also shifts in microbial activity, increasing SCFA levels (Huazano-García et al., 2017) and other metabolites (postbiotics) that could exert various positive metabolic effects on the host, including effects on glucose and lipid metabolism (Huazano-García et al., 2020). Since the combination of fat and sugar is an important inducer of metabolic alterations (Guimarães et al., 2020;Masi et al., 2017), in the present study we fed mice a high fat diet plus a high sucrose beverage (resembling the human diet) to investigate the capacity of agavins to ameliorate obesity under this dietary regimen.
Animals
Ten male C57BL/6 mice (8 weeks old) were purchased from the Universidad Autonoma Metropolitana (Mexico City, Mexico). All mice were housed individually in a temperature- and humidity-controlled room with a 12-h light-dark cycle. Mice were allowed to adapt to the environment for 1 week and were subsequently divided randomly into two groups and fed a high fat diet (58Y1 Test Diet, Richmond, IN, USA) plus 30% (wt/vol) sucrose (Sigma-Aldrich, Saint Louis, MO, USA) without (HF-HS; n = 5) or with agavins supplementation (HF-HS+A; n = 5) for ten weeks. Sucrose and agavins were introduced as an extra caloric source through the drinking water of the mice.
The high-fat diet (58Y1 Test Diet) contained 61.6% of calories from fat (31.7% lard and 3.2% soybean oil), 20.3% from carbohydrates (16.15% maltodextrin, 8.85% sucrose, and 6.46% powdered cellulose), and 18.1% from proteins. Agavins (BIOAGAVE TM powder; code: 11200001) from Agave tequilana were purchased from Ingredion Mexico. According to the information provided by the supplier, each 100 g of BIOAGAVE TM contains 91.6 g of agavins (soluble fiber), 2.8 g of sugars, and 5.6% moisture. Agavins were added to the bottled water at a dose of 0.38 g/mouse/day. The sucrose solution, with or without agavins supplementation, was freshly prepared daily. Clean drinking vessels were filled with an equal volume of the corresponding solution. Water intake was monitored daily to ensure that all animals in the same group drank an equivalent volume of fluid. Because excess calorie intake is considered an important contributor to the development of metabolic syndrome and obesity, food and water were provided ad libitum throughout the experiment. Total energy intake was obtained as the sum of the energy coming from the 58Y1 diet and from the sucrose and agavins added to the drinking water.
At the end of the experimental period, mice in the postprandial state were anesthetized with an intraperitoneal dose of 60 mg/kg sodium pentobarbital to collect portal vein blood. Subsequently, the mice were sacrificed by cervical dislocation. The cecum and adipose tissue were removed, rinsed with physiological saline and weighed. Cecal content was snap-frozen in liquid nitrogen and stored at −70°C until use. The use of animals for this research was conducted according to the Mexican Norm (NOM-062-ZOO-1999) and approved by the Institutional Care and Use of Laboratory Animals Committee of Cinvestav-Mexico (CICUAL), protocol number 0236-17.
Body weight gain, glucose, triglyceride, and cholesterol analysis
Body weight evolution was recorded every week throughout the experimental period. At the end of the experimental period, blood samples were taken from the tails of mice in the postprandial state to measure glucose, triglycerides, and cholesterol. Blood glucose concentrations were obtained using a blood glucose meter (SD Check Gold, Mexico). Triglyceride and cholesterol analyses were carried out on serum samples using enzymatic assay kits (BioVision, Milpitas, CA, USA).
Hormones, cytokines, and lipopolysaccharides determinations
Upon sacrifice, portal blood was collected in tubes containing a dipeptidyl peptidase IV inhibitor (0.01 mL per mL of blood; Millipore, St. Louis, MO, USA) and centrifuged at 1600×g for 15 min at 4°C. Serum was stored at −80°C until analysis. GLP-1 (active), insulin, and leptin concentrations were analysed using a Mouse Diabetes Standard Bio-Plex Pro kit (Bio-Plex Pro Assay, Bio-Rad, CA, USA), while interleukin (IL-1α, IL-1β, IL-6, IL-10) and TNF-α concentrations were quantified using a Mouse Cytokine Bio-Plex Pro kit (Bio-Plex Pro Assay, Bio-Rad, CA, USA) and a Bio-Plex 200 instrument according to the manufacturer's specifications. LPS determinations were carried out using a mouse LPS kit (MyBioSource, San Diego, CA, USA) following the manufacturer's instructions.
DNA extraction and next generation sequencing
Cecal genomic DNA was extracted using a QIAamp PowerFecal DNA Kit (Qiagen, Hilden, Germany) according to the manufacturer's instructions. Construction of a high-throughput sequencing library and Illumina-based sequencing on a MiSeq instrument were carried out by Genewiz (South Plainfield, NJ, USA). Amplicons were generated using a MetaVx TM Library Preparation kit (Genewiz, South Plainfield, NJ, USA). The V3 and V4 regions of the bacterial 16S rDNA gene were amplified using forward primers containing the sequence "CCTACGGRRBGCASCAGKVRVGAAT" and reverse primers containing the sequence "GGACTACNVGGGTWTCTAATCC". Sequencing was performed using a 2 × 150 paired-end (PE) configuration. The raw sequences generated in this study are available via the National Center for Biotechnology Information Sequence Read Archive (accession number: PRJNA772689).
Cecal microbiota analysis
Sequence data were processed and analysed with the QIIME2 software (v.2020.8) (Bolyen et al., 2019). The DADA2 plugin within QIIME2 (Callahan et al., 2016) was used for quality filtering, dereplication, and chimera removal. The first 18 nucleotides of the forward and reverse reads were trimmed to remove primers with ambiguous bases. Forward reads were truncated at position 249 nt, and reverse reads at position 241 nt, based on the quality scores. We obtained a total of 1,990,271 high-quality sequences, with an average of 199,027 reads per sample, from 10 cecal samples (n = 5/group). Samples were rarefied at 91,885 sequences for alpha and beta diversity analyses. Alpha diversity was evaluated using the Chao1 index (number of unique ASVs) and the Shannon diversity index (a quantitative measure of community richness) (Gwinn et al., 2016). Differences in richness and diversity between the non-supplemented control and the agavins prebiotic group were assessed by the Kruskal-Wallis test. Beta diversity was assessed by calculating weighted UniFrac distance matrices. The distance matrices were visualized in a two-dimensional principal coordinates analysis (PCoA) plot, and the results were statistically tested by permutational multivariate analysis of variance (PERMANOVA) with 999 permutations. The ASVs generated were assigned taxonomy using the qiime feature-classifier classify-sklearn plugin with a naïve Bayes classifier (Pedregosa et al., 2011) trained on the Greengenes v13.8 database at 99% similarity. Linear discriminant analysis (LDA) effect size (LEfSe) was used to identify taxa differentially represented between the non-supplemented control and the agavins prebiotic group (Segata et al., 2011). The threshold for the logarithmic LDA score was 3.0, with p < 0.05 for the factorial Kruskal-Wallis test.
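For readers unfamiliar with these summaries, the short Python sketch below illustrates the quantities involved (Shannon index, observed richness and the F/B ratio) on fabricated counts; the actual analyses in this study were performed inside QIIME2 as described above, and QIIME2's own implementations should be preferred in practice.

# Toy sketch of the diversity summaries named above, on fabricated counts.
import numpy as np

counts = np.array([120, 80, 40, 10, 5, 0])          # ASV counts for one sample
present = counts[counts > 0]
p = present / present.sum()

shannon = -(p * np.log(p)).sum()                     # Shannon diversity (natural log)
observed = present.size                              # observed ASV richness

phylum_counts = {"Firmicutes": 150, "Bacteroidetes": 75, "Proteobacteria": 30}
fb_ratio = phylum_counts["Firmicutes"] / phylum_counts["Bacteroidetes"]

print(f"Shannon={shannon:.3f}, observed ASVs={observed}, F/B={fb_ratio:.2f}")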
Fecal metabolites analysis
At week ten of the experiment, feces from each mouse were collected, lyophilized, triturated, and homogenized to determine fecal metabolite profiles by gas chromatography-mass spectrometry (GC/MS) as previously reported (Huazano-García et al., 2020). Fecal metabolites were extracted from 100 mg of feces with 1 mL of chloroform/methanol (2:1); the extraction was performed three times. The extracts were then combined and the solvent was evaporated. The residue was re-suspended in 1 mL of chloroform/methanol (2:1). An aliquot of 50 µL was taken, evaporated under a nitrogen flux, and derivatized using N,O-bis(trimethylsilyl)trifluoroacetamide with 1% trimethylchlorosilane (80 µL) and pyridine (20 µL) at 80°C for 25 min. After the mixture reached room temperature, isooctane was added to a final volume of 200 µL. Heptadecanoic acid was employed as an internal standard at a concentration of 3 mg/mL. One µL of the isooctane phase was injected, in pulsed-splitless mode, onto an HP-5-MS capillary column, using He as the carrier gas at a constant flow of 1 mL/min. The injector temperature was set to 260°C. The oven temperature began at 40°C (held for 5 min), followed by an initial ramp of 6°C/min to 170°C and then a second ramp of 12°C/min to 290°C. The transfer line was maintained at 260°C. The mass spectrometer operated at 70 eV electron energy; the quadrupole and ion-source temperatures were 150 and 230°C, respectively. All data were obtained scanning from 40 to 550 m/z. MassHunter Workstation software version B.0.0.6 (Agilent Technologies, Inc.) was used to collect all generated data. Component mass spectra and retention times were obtained using the AMDIS (Automated Mass Spectral Deconvolution and Identification System) software. Fecal metabolite analysis was carried out in the R 4.1.1 environment. Raw data were normalized and transformed. A principal component analysis (PCA) was applied to the pre-processed data set using the ade4 package (Dray & Doufour, 2007); differences between cluster separations in the PCA were confirmed by means of the Mahalanobis distance (Md), with statistical significance assessed by Hotelling's T² and F-tests (Goodpaster & Kennedy, 2011). A hierarchical clustering analysis (HCA) was carried out on the PCA patterns using the FactoMineR package (Le et al., 2008). Peaks with the lowest p-values (p < 0.05) on PCA1 and in the HCA were selected and annotated by comparing their extracted mass spectra with the mass spectra of the NIST (National Institute of Standards and Technology, USA) library and software. A heatmap was produced with all the relevant information obtained from the metabolite analysis.
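The pattern-recognition step (normalization, PCA, then hierarchical clustering on the PCA scores) was run in R with ade4 and FactoMineR; the sketch below shows an equivalent minimal workflow in Python on a fabricated metabolite matrix, purely to illustrate the sequence of operations, not the exact settings used by the authors.

# Python sketch of the PCA + HCA workflow described above; the data are fabricated.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(10, 109))                 # 10 mice x 109 detected metabolites
X_scaled = StandardScaler().fit_transform(X)   # normalization / autoscaling

pca = PCA(n_components=2)
scores = pca.fit_transform(X_scaled)           # PC1/PC2 coordinates per mouse
print("explained variance:", pca.explained_variance_ratio_)

# Hierarchical clustering (Ward linkage) on the PCA scores, as in the HCA step
tree = linkage(scores, method="ward")
clusters = fcluster(tree, t=2, criterion="maxclust")
print("cluster assignment per mouse:", clusters)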
Statistical analysis
All statistical analyses, except those for microbiota and metabolites, were performed using GraphPad Prism 9.0 (GraphPad Software, La Jolla, CA, USA). Results are presented as mean ± SEM. Differences between groups were assessed by Student's t-test and were considered significant at p < 0.05. Pearson correlation was applied in the R environment to assess the relationships of the differential microbial taxa (at the genus level) and metabolites with hormones, inflammation biomarkers, and systemic effects.
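The correlation step can be illustrated with the following Python sketch; the taxa and outcome values are fabricated, and only the procedure (pairwise Pearson r with its p-value) mirrors the analysis described above, which was carried out in R.

# Sketch of pairwise Pearson correlations between differential genera and host outcomes.
import numpy as np
import pandas as pd
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
taxa = pd.DataFrame(rng.random((10, 3)), columns=["Akkermansia", "Allobaculum", "Sutterella"])
outcomes = pd.DataFrame(rng.random((10, 3)), columns=["body_weight", "WAT", "leptin"])

rows = []
for t in taxa.columns:
    for o in outcomes.columns:
        r, p = pearsonr(taxa[t], outcomes[o])
        rows.append({"taxon": t, "outcome": o, "r": r, "p": p})

print(pd.DataFrame(rows).round(3))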
Agavins prebiotic significantly reduces body weight gain and white adipose tissue
At the end of the dietary intervention, body weight gain was notably lower in mice that received the agavins supplementation than in the HF-HS group (p < 0.05; Figure 1a). In addition, upon sacrifice, weighing of WAT demonstrated that the agavins prebiotic led to significantly lower fat accumulation (p < 0.05; Figure 1b). Moreover, energy intake (the sum of the energy from the solid food and the drinking water) did not vary significantly between the HF-HS and HF-HS+A groups (65.63 ± 1.62 kJ and 67.14 ± 1.67 kJ, respectively), suggesting that the reduction in body weight gain and WAT was not dependent on energy intake. On the other hand, glucose and triglyceride levels tended to be lower (by approximately 10% and 53%, respectively) in mice receiving the agavins supplement than in the non-supplemented control group (0.05 ≤ p value < 0.10; Figure 1(c,d)), while cholesterol levels were not significantly different between the HF-HS and HF-HS+A treatments (4.31 ± 0.24 mM and 4.30 ± 0.11 mM, respectively).
Agavins intake remarkably increased GLP-1 and decreased leptin as well as pro-inflammatory cytokine levels
Portal GLP-1 levels showed a notable increment of about 47% in the HF-HS+A group compared to the HF-HS group (p < 0.05; Figure 2a), whereas insulin levels in mice that received the agavins prebiotic tended to be lower than in the non-supplemented control group (0.05 ≤ p value < 0.10; Figure 2b). Moreover, HF-HS+A mice exhibited a noticeable reduction of about 45% in leptin concentration with respect to the HF-HS group (p < 0.05; Figure 2c). In addition, a lower portal LPS concentration was observed in mice that received the agavins supplementation compared to the HF-HS group, but the difference did not reach statistical significance (Figure 2d). On the other hand, no significant difference was found for IL-1α levels between the two treatments (Figure 2e); nevertheless, the concentration of the pro-inflammatory cytokine IL-1β exhibited an important reduction in mice that received the agavins treatment with respect to the non-supplemented control (p < 0.05; Figure 2f). Additionally, IL-6 levels showed a trend towards a decrease in the HF-HS+A group in comparison with HF-HS mice, although the values were not statistically significant (Figure 2g). Noteworthy, agavins intake led to a significant decrease of about 64% in TNF-α concentration in obese mice compared to the HF-HS group (p < 0.05; Figure 2h), while levels of the anti-inflammatory cytokine IL-10 tended to be higher in the HF-HS+A group than in the HF-HS group, although the values were not statistically significant (Figure 2i).
Figure 1 (caption, translated from Spanish): Influence of agavins consumption on body weight, white adipose tissue (WAT), and glucose and triglyceride levels in mice fed a high fat-high sucrose diet. (A) Body weight evolution, (B) WAT, (C) Glucose, (D) Triglycerides. Data are shown as mean ± SEM (n = 5/group); circles and triangles in the bar graphs represent individual rodents. Exact p values indicate a trend towards a decrease (0.05 ≤ p value < 0.10; unpaired t-test); significant differences are indicated with * (p < 0.05; unpaired t-test). HF-HS, non-supplemented control mice; HF-HS+A, mice fed an agavins supplement.
Agavins supplementation modulates the cecal microbiota composition
Cecal microbial richness and diversity were reduced significantly in the HF-HS+A group compared to the HF-HS group (Appendix Figure A1), because agavins promote the enrichment of specific microbial taxa (mostly probiotics). Moreover, the principal coordinate analysis (PCoA) plot showed that the HF-HS+A group had a distinct bacterial community structure, since it clustered separately from the non-supplemented control group, indicating that agavins consumption strongly affected gut microbial composition (Appendix Figure A1c). Overall, at the phylum level, the mouse cecal microbiota was dominated by Firmicutes and Bacteroidetes, followed by Proteobacteria. In addition, six other minor phyla were also present: Verrucomicrobia, Cyanobacteria, TM7, Actinobacteria, Tenericutes, and Deferribacteres (Figure 3a). HF-HS intake led to a markedly increased Firmicutes/Bacteroidetes (F/B) ratio and a higher relative abundance of Proteobacteria. In contrast, the supplementation of agavins significantly reduced the F/B ratio and the relative abundance of Proteobacteria (Desulfovibrionaceae and Helicobacteraceae families, including the Helicobacter genus; Figure 3(b,c)). Furthermore, in relation to the HF-HS group, agavins supplementation increased the abundance of the genera Bacteroides, Allobaculum, Akkermansia, Coprococcus, Clostridium, Dehalobacterium, and Sutterella and decreased Oscillospira, Helicobacter, Odoribacter, Mucispirillum, AF12, Ruminococcus, and Roseburia (LDA > 3.0; Figure 3d). Of note, Allobaculum, Akkermansia, and Sutterella were detected only in the cecal microbiota of the HF-HS+A group; thereby, these genera could be used as biomarkers of the agavins prebiotic.
Figure 3 (caption, translated from Spanish): Modulation of cecal microbiota composition by agavins supplementation in mice fed a high fat-high sucrose diet. (A) Bacterial taxa at the phylum level, (B) Effect of diet on the Firmicutes/Bacteroidetes (F/B) ratio, (C) Bacterial taxa at the genus level; each taxon with > 1% mean relative abundance in the groups is shown with a different color, and taxa appear at the lowest identifiable level, indicated by the letter preceding the underscore (f, family; g, genus). (D) Linear discriminant analysis showing the genera differentially overrepresented between non-supplemented control mice (HF-HS) and mice fed an agavins supplement (HF-HS+A); n = 5 per experimental group.
Agavins consumption induced changes in the fecal metabolite profile
The fecal metabolome profile of the agavins-supplemented group was significantly different from that of the non-supplemented mice (Figure 4). The principal component analysis (PCA) plot showed that the HF-HS and HF-HS+A groups separated clearly along PC1 (Figure 4a). A total of 109 metabolites were detected in the mice feces. Metabolites contributing significantly to the discrimination between the HF-HS and HF-HS+A groups were selected based on the PCA1 and HCA analyses; thus, 35 metabolites showed the greatest differences (p < 0.05), although only 24 metabolites could be annotated. Besides, through the mass spectral data we were able to assign the chemical family of four additional compounds (Figure 4b). In relation to the HF-HS diet, agavins supplementation noticeably increased the excretion of L-leucine, ethyl oleate, hypoxanthine, pentadecanoic acid, ethyl 13-methyl-tetradecanoate, and uracil, as well as the ethyl esters of hexadecenoic and octadecanoic acids, whereas the levels of L-isoleucine, the fatty acids palmitelaidic, cis-11-eicosenoic, 9,12-octadecadienoic, cis-9-hexadecenoic, and oleic acid, ribose, and sterols (24-ethylcoprostanol, campesterol, stigmasterol, β-sitosterol) were decreased. Of note, we detected the presence of D-glucose only in the feces of mice fed with the prebiotic supplement. Therefore, some of these metabolites could be used as potential biomarkers of agavins administration.
Correlation of differential bacterial genera and fecal metabolites with host metabolic outcomes
In general, the microbial taxa notably increased by the HF-HS diet were correlated positively with body weight, WAT, triglycerides, LPS, inflammatory biomarkers, insulin, and leptin levels (Figure 5). In contrast, the genera substantially enriched by the agavins supplement, such as Bacteroides, Allobaculum, and Akkermansia, exhibited a significant negative correlation with body weight, WAT, and leptin concentration as well as a strong positive association with GLP-1 levels. Moreover, Coprococcus displayed a negative correlation with WAT and a positive correlation with IL-10 levels. In addition, Dehalobacterium was negatively correlated with body weight, whereas Sutterella exhibited an inverse association with WAT, LPS, and TNF-α levels. Interestingly, both Dehalobacterium and Sutterella showed positive correlations with IL-10 and GLP-1 levels. On the other hand, several of the metabolites significantly enriched in the feces of the HF-HS+A group exhibited a negative correlation with TNF-α and IL-1β levels (Figure 5). In addition, ethyl oleate as well as hexadecenoic acid ethyl ester displayed an inverse association with LPS concentration. Remarkably, octadecanoic acid ethyl ester was negatively correlated with WAT, while pentadecanoic acid showed an inverse association with IL-6 concentration. On the contrary, most of the metabolites that were decreased in the feces of the HF-HS+A group but significantly increased in the HF-HS group, such as cis-11-eicosenoic, 9,12-octadecadienoic, cis-9-hexadecenoic, and oleic acids, showed a significant positive correlation with insulin and leptin levels.
Figure 4 (caption fragment): ... Hotelling's T² test was calculated to measure and confirm the significant difference between the two clusters (Md = 108.809, p = 0.00019). (B) Heat map of differential metabolites. NI (not identified); abundance values are shown as Z-scores per row (purple, increase relative to the row mean; yellow, decrease; white, absent metabolite); for each experimental group (n = 5).
Discussion
Previous investigations have reported that dietary fat plus sucrose intake is associated with higher body weight gain and WAT as well as increased leptin, LPS, and pro-inflammatory cytokine levels and deteriorated insulin function (Gao et al., 2020;Huang et al., 2020;Li et al., 2021;Yang et al., 2012), which is in line with the results of the present study. Remarkably, supplementation of the HF-HS diet with the agavins prebiotic led to a substantial reduction in body weight, WAT, TNF-α, IL-1β, and leptin levels. An earlier work reported that body weight loss contributes to decreased TNF-α levels in obese individuals (Kern et al., 1995), which is concordant with our results. In addition, WAT is known to secrete various bioactive substances that help to regulate metabolic homeostasis, such as leptin, TNF-α, IL-1β, and IL-6, among others (Shoelson et al., 2007). Thus, the significant decrease of leptin and pro-inflammatory cytokine levels observed in the HF-HS+A group could be due, in part, to the lower WAT weight. Moreover, the HF-HS+A group exhibited a tendency toward diminished insulin and glucose concentrations in relation to the HF-HS group, which might be associated with the lower pro-inflammatory cytokine levels, because their increase has been associated with insulin resistance and impaired glucose homeostasis (Maedler et al., 2011;Olson et al., 2012). Interestingly, we did not find a strong effect of agavins on the significant decrease of some inflammatory biomarkers such as LPS, IL-1α, and IL-6, nor particularly on the reduction of insulin levels, as in our previous study using only a HF diet (Huazano-García et al., 2020), suggesting that the combination of fat and sugar exacerbates inflammation and insulin resistance (Masi et al., 2017) in a way that is not so easy to reverse.
Figure 5 (caption, translated from Spanish): Significant Pearson correlations between the differential bacterial genera and metabolites and the systemic effects, gastrointestinal hormones, and inflammatory biomarkers. The tree at the top and on the left illustrates a cluster dendrogram (Ward's method).
On the other hand, accumulating evidence indicates that perturbations in the gut microbiota composition may play an important role in the development of diseases associated with altered metabolism (Gao et al., 2020;Li et al., 2021;Rodríguez-Daza et al., 2020). In this regard, the HF-HS diet notably increased the F/B ratio, which is considered a marker of obesity (Ley et al., 2005). In addition, a remarkable increment in Proteobacteria abundance (Desulfovibrionaceae family) was also observed. The enrichment of the Desulfovibrionaceae family by a HF-HS diet has been previously reported (Li et al., 2021). Interestingly, some members of the Desulfovibrionaceae family can produce genotoxic hydrogen sulfide (H2S) gas, leading to enhanced intestinal permeability (Rohr et al., 2020) and facilitating the passage of toxic metabolites into the periphery, thereby triggering the induction of pro-inflammatory cytokines.
Overall, our results strongly support that agavins supplementation had positive effects on gut microbiota modulation. Agavins intake dramatically reduced the F/B ratio and the Proteobacteria proportion, which is consistent with a previous report using inulin as a prebiotic in HF-HS diet-fed mice (Igarashi et al., 2020). Moreover, Proteobacteria abundance shows a positive correlation with LPS levels (Mi-Young et al., 2019); thereby, the reduction of this bacterial phylum could be associated with the decreased LPS levels in the HF-HS+A group. In addition, LPS is a strong stimulator of the release of several cytokines (Cani et al., 2007); thereby, the reduction of LPS concentration in the HF-HS+A mice could be contributing to the decrease in IL-1β, IL-6, and TNF-α levels in these animals.
Moreover, we found that the agavins supplement mostly enriched the Bacteroides genus. Interestingly, some species of this genus, such as B. thetaiotaomicron, B. ovatus, and B. fragilis, are emerging as novel probiotics (Chang et al., 2019; Liu et al., 2017; Tan et al., 2019). Remarkably, these Bacteroides species possess a broad ability to break down complex polysaccharides (Flint et al., 2012). In addition to the Bacteroides genus, agavins increased specific taxa such as Allobaculum, Sutterella, Akkermansia, Coprococcus, Clostridium, and Dehalobacterium. Interestingly, the substantial enrichment of all these bacterial genera agrees with an earlier study using inulin-type fructans as a prebiotic (Everard et al., 2014). Of note, the significant increase of Bacteroides, Allobaculum, and Akkermansia is consistent with our previous work using mice fed a HF diet supplemented with agavins (Huazano-García et al., 2020). Moreover, in the present study, Allobaculum, Sutterella, and Akkermansia were found only in the cecal microbiota of the HF-HS+A group; thereby, these bacterial genera could be used as biomarkers for agavins treatment. Intriguingly, previous works evidenced a positive relationship between the abundance of Allobaculum and body weight reduction (Ravussin et al., 2012) and GLP-1 levels in obese mice (Huazano-García et al., 2020), whereas the Sutterella genus has been reported as an important contributor to the remission of type 2 diabetes after bariatric surgery (Wang et al., 2020). Furthermore, Akkermansia is considered a significant biomarker of gut homeostasis and host physiology, since its abundance dramatically decreases in many diseases such as obesity and type 2 diabetes (Chang et al., 2019; Cheng & Xie, 2020). In addition, Akkermansia has been shown to reduce fat-mass gain, LPS levels, and insulin resistance in obese mice (Everard et al., 2013). Recently, it was reported that Akkermansia increases GLP-1 secretion, improving glucose homeostasis and ameliorating metabolic disease in high-fat diet-fed mice (Yoon et al., 2021). Thus, the presence of Allobaculum, Sutterella and Akkermansia in the cecal microbiota of the HF-HS+A group could be contributing to the notable increase of GLP-1 as well as to the improvement of metabolic parameters. Consistently, correlation analysis revealed that Bacteroides, Allobaculum, and Akkermansia were negatively associated with body weight, WAT, and leptin concentration and positively correlated with GLP-1 levels. Moreover, Sutterella was inversely associated with WAT and LPS concentration and correlated positively with IL-10 and GLP-1 levels. Collectively, these results suggest that the agavins effects on obesity and inflammation could be mediated by gut microbiota modulation.
On the other hand, we show a significant difference in the fecal metabolomic profiles between the HF-HS and HF-HS+A groups. Intriguingly, glucose was detected only in the feces of the HF-HS+A group, and since this compound, when derived from the diet, is mostly absorbed in the small intestine (Chen et al., 2016), this metabolite could possibly be derived from the gut microbes' breakdown of agavins. Besides, we observed an increase of hypoxanthine and uracil as well as a notable reduction of isoleucine in the feces of mice that received the agavins supplement. Similar results were reported in a previous study using inulin in HF-HS diet-fed rats (Guerville et al., 2019). Moreover, L-leucine, thymine, pentadecanoic acid, and octadecanoic acid, ethyl ester, were substantially enriched in the feces of the HF-HS+A group and detected as possible biomarkers for prebiotic supplementation; additionally, some of these metabolites show a strong negative correlation with pro-inflammatory cytokines (Figure 5). Remarkably, this result is consistent with our previous work, in which uracil, L-leucine, thymine, octadecanoic acid, ethyl ester, and pentadecanoic acid were identified as biomarkers for agavins consumption and were negatively correlated with metabolic endotoxemia, low-grade inflammation and metabolic parameters in HF diet-fed mice (Huazano-García et al., 2020). However, since the present study was mostly exploratory, whether these metabolites mediate beneficial effects on host health requires further evaluation. A further limitation of this study is that we did not include a standard-diet group (healthy mice) as a reference against which to compare the improvement in mouse health derived from agavins intake.
In summary, agavins supplementation can ameliorate several of the detrimental effects induced by consumption of a diet rich in fat and sugar, such as increased body weight, WAT, and leptin levels. Furthermore, agavins showed a trend toward decreasing glucose, triglycerides, some inflammatory biomarkers (LPS, IL-1α, and IL-6), and insulin levels relative to the non-supplemented control group, but without reaching statistical significance, which could be due to the exacerbated inflammation and insulin resistance typical of the HF-HS diet. On the other hand, agavins notably enriched the genera Bacteroides, Coprococcus, Clostridium, and Dehalobacterium. In addition, Allobaculum, Akkermansia, and Sutterella were identified as possible biomarkers for the agavins prebiotic under this dietary regimen. Moreover, the microbial taxa substantially enriched by the agavins supplement showed significant negative correlations with body weight, WAT, LPS, and leptin levels as well as positive associations with levels of the anti-inflammatory interleukin IL-10 and the hormone GLP-1. Besides, some of the metabolites significantly increased in the feces of the HF-HS+A group, and identified as possible biomarkers for the agavins prebiotic, showed a negative correlation with WAT, LPS and pro-inflammatory cytokines. Altogether, these results suggest that the agavins effects on obesity could be mediated by gut microbiota modulation and its metabolites. Thus, this study provides new insights into the impact of agavins supplementation on the cecal microbiota composition and host responses to high fat-high sucrose diet-induced obesity, providing a useful focus for further studies in humans.
Disclosure statement
No potential conflict of interest was reported by the author(s). | 2022-06-16T15:21:28.760Z | 2022-06-14T00:00:00.000 | {
"year": 2022,
"sha1": "6d0f8fda6a6894d25206a3bb36153fae578a2f1d",
"oa_license": "CCBY",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/19476337.2022.2082536?needAccess=true",
"oa_status": "GOLD",
"pdf_src": "TaylorAndFrancis",
"pdf_hash": "09a4c603dd2a73877fdde18d1f5636e7c9c57971",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": []
} |
257099998 | pes2o/s2orc | v3-fos-license | Gender-specific differences in patients with psoriatic arthritis receiving ustekinumab or tumour necrosis factor inhibitor: real-world data
Abstract Objective Investigate effects of gender on disease characteristics and treatment impact in patients with PsA. Methods PsABio is a non-interventional European study in patients with PsA starting a biological DMARD [bDMARD; ustekinumab or TNF inhibitor (TNFi)]. This post-hoc analysis compared persistence, disease activity, patient-reported outcomes and safety between male and female patients at baseline and 6 and 12 months of treatment. Results At baseline, disease duration was 6.7 and 6.9 years for 512 females and 417 males respectively. Mean (95% CI) scores for females vs males were: clinical Disease Activity Index for Psoriatic Arthritis (cDAPSA), 32.3 (30.3, 34.2) vs 26.8 (24.8, 28.9); HAQ-Disability Index (HAQ-DI), 1.3 (1.2, 1.4) vs 0.93 (0.86, 0.99); total PsA Impact of Disease-12 (PsAID-12) score, 6.0 (5.8, 6.2) vs 5.1 (4.9, 5.3), respectively. Improvements in scores were smaller in female than male patients. At 12 months, 175/303 (57.8%) female and 212/264 (80.3%) male patients achieved cDAPSA low disease activity, 96/285 (33.7%) and 137/247 (55.5%), achieved minimal disease activity (MDA), respectively. HAQ-DI scores were 0.85 (0.77, 0.92) vs 0.50 (0.43, 0.56), PsAID-12 scores 3.5 (3.3, 3.8) vs 2.4 (2.2, 2.6), respectively. Treatment persistence was lower in females than males (P ≤ 0.001). Lack of effectiveness was the predominant reason to stop, irrespective of gender and bDMARD. Conclusions Before starting bDMARDs, females had more severe disease than males and a lower percentage reached favourable disease states, with lower persistence of treatment after 12 months. A better understanding of the mechanisms underlying these differences may improve therapeutic management in females with PsA. Trial registration ClinicalTrials.gov, https://clinicaltrials.gov, NCT02627768
Introduction
Epidemiological evidence suggests that the prevalence of PsA is similar across genders [1][2][3], but gender-related differences have not been thoroughly explored in PsA; a number of studies have analysed various aspects of the disease, from baseline characteristics and disease perception to treatment response [mostly to TNF inhibitors (TNFi)] and patient outcomes, in men and women separately. However, data are emerging of differences in the clinical expression of PsA, with men tending to develop more severe axial disease and women developing polyarticular disease [1,[4][5][6][7].
Furthermore, some studies have suggested gender-related differences in patient-reported measures of disease, in particular those related to pain [8]. PsA in women was found to lead to more severe limitation of their daily function than in men and to result in a higher level of work disability [1].
Men and women with PsA have shown different magnitudes of response to and retention of biologic DMARDs (bDMARDs), such as TNFi [9][10][11][12][13], indicating that women with PsA initiating TNFi are less likely to achieve remission or minimal disease activity/very low disease activity (MDA/VLDA) [14,15]. The Danish registry DANBIO and the British BSRBR registry reported that women receiving TNFi more frequently develop side effects than men, possibly leading to an earlier discontinuation of these drugs in women [9,11]. Results from the DANBIO registry showed that a higher proportion of female patients than male patients switched to another TNFi or stopped the first TNFi without starting a new TNFi [10].
Similar observations have been made across other rheumatic conditions: women with RA and AS have been shown to have shorter TNFi treatment retention than men [16][17][18][19][20]. In these studies, female gender was an independent predictor of shorter drug survival (regarded as a surrogate marker for efficacy) across different TNFi [16,17]. Women received treatment for a significantly shorter time, and the main reason for switching treatment was inefficacy [18,20]. Accumulating evidence in multiple rheumatic diseases indicates that gender may influence the likelihood of achieving the desired outcome with treatment.
The objective of this analysis of PsABio data was to establish whether there are gender-related differences at baseline as well as in response to and retention of biological treatment in patients with PsA treated with ustekinumab or TNFi in routine clinical practice and to analyse these differences in the context of previous research.
Methods
PsABio (NCT02627768) was a multinational, prospective, real-world, observational cohort study of patients with PsA who started ustekinumab (an IL-12/IL-23 inhibitor) or a TNFi as first-, second- or third-line bDMARD treatment. The study was designed to evaluate the persistence, effectiveness and tolerability of ustekinumab and TNFi. The study design, patient population and evaluations have been described elsewhere [21,22].
Data were collected at baseline, then every 6 months up to 3 years, with a window of ±3 months to align with standard clinical practice. In addition to the main statistical analysis, exploratory analyses were performed on various patient subgroups. In the analysis presented here, male and female patients were compared for disease activity, patient-reported outcomes and treatment persistence.
The Baseline set included all eligible patients with baseline data and without major protocol deviations. The Safety set included all patients with baseline data and an additional three female and two male patients excluded from the Baseline set [no valid baseline assessment (within 62 days prior to bDMARD start)] and any available follow-up data included in the 36-month data analysis. The Effectiveness set-1 included all eligible patients from the Baseline set with any effectiveness follow-up data up to 12[+3] months (hereafter referred to as 12 months). The 'remainer' data analysis is based on a previously obtained effectiveness set that included two patients fewer, referred to as Effectiveness set-2. Data on these two patients were not available at the time of previous analyses [21,22]. 'Remainer' patient groups included all patients who remained on the initial treatment (ustekinumab or TNFi) and had a selected Month 12 visit (defined as a visit that took place within the Month 12 ± 3 visit window).
Data were also collected for the following variables: the presence of dactylitis, enthesitis, nail psoriasis and psoriasis.

Rheumatology key messages
• Baseline PsA disease activity, impact and function were poorer in females compared with males.
• Males with PsA had better 12-month responses and persistence with ustekinumab and TNFi than females.
• Disease course and treatment response seem to differ between female and male PsA patients.
Partially missing dates were imputed for analysis. These included start and stop days of previous treatments or of treatments within the study, laboratory sample dates and other dates (if incompletely known, day and/or month were imputed); missing item scores of the BASDAI and the PsAID-12 were imputed according to the recommendations of the developers of the scales. We defined the risk window (the time between treatment initiation and 91 days after treatment stop) on the basis of which adverse events (AEs) were assigned to treatments. If information on the relationship of an AE to treatment was missing, the AE was imputed as related to the bDMARD. The analysis included data from the baseline assessment, at 6(±3) months and at 12(±3) months.
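As an illustration only, the risk-window rule and the "missing relationship is imputed as related" convention described above could be expressed as follows; the field names and example dates are assumptions for demonstration, not the study's code.

```python
from datetime import date, timedelta
from typing import Optional

RISK_WINDOW_DAYS = 91

def in_risk_window(ae_onset: date, treat_start: date, treat_stop: Optional[date]) -> bool:
    """True if the AE onset falls between treatment start and 91 days after treatment stop."""
    if ae_onset < treat_start:
        return False
    if treat_stop is None:  # treatment still ongoing
        return True
    return ae_onset <= treat_stop + timedelta(days=RISK_WINDOW_DAYS)

def is_treatment_related(related_flag: Optional[bool]) -> bool:
    """A missing relationship flag is imputed as treatment-related."""
    return True if related_flag is None else related_flag

# Example: AE 60 days after stopping the bDMARD, relationship not recorded.
start, stop, ae_onset = date(2016, 1, 10), date(2016, 9, 1), date(2016, 10, 31)
print(in_risk_window(ae_onset, start, stop) and is_treatment_related(None))  # True
```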
Data were analysed by descriptive statistics including 95% CIs. All inter-gender comparisons were descriptive. Intra-gender comparisons between the ustekinumab and TNFi cohorts were done by logistic regression analysis, with propensity score adjustment for imbalanced baseline covariates and non-response imputation for stopping/switching biologic drugs.
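A hedged sketch of this kind of within-gender comparison is given below: a propensity score for receiving ustekinumab versus TNFi is estimated from baseline covariates and then carried into a logistic regression on a binary outcome. The data are synthetic and the covariate and outcome names are illustrative assumptions, not the PsABio analysis itself.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 200
df = pd.DataFrame({
    "ustekinumab": rng.integers(0, 2, n),        # 1 = ustekinumab, 0 = TNFi
    "age": rng.normal(50, 10, n),
    "cdapsa_baseline": rng.normal(30, 8, n),
    "bdmard_line": rng.integers(1, 4, n),
})
df["mda_12m"] = rng.integers(0, 2, n)             # placeholder binary outcome

# Step 1: propensity score for treatment assignment from baseline covariates.
X_ps = sm.add_constant(df[["age", "cdapsa_baseline", "bdmard_line"]])
ps_model = sm.Logit(df["ustekinumab"], X_ps).fit(disp=0)
df["pscore"] = ps_model.predict(X_ps)

# Step 2: outcome model adjusted for the propensity score; the 'ustekinumab'
# coefficient is the adjusted log-odds ratio versus TNFi.
X_out = sm.add_constant(df[["ustekinumab", "pscore"]])
out_model = sm.Logit(df["mda_12m"], X_out).fit(disp=0)
print(out_model.summary2().tables[1])
```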
Ethics approval
This study complied with ethics requirements as specified by the Independent Ethics Committee/Institutional Review Board of each participating site (as detailed in [22]) and by local regulations in each country. Each participant signed a participation agreement/informed consent form in line with local regulations and trial sponsor policy, before data collection.
Results
Female patients were slightly older than male patients (mean age was 50.2 years for females vs 48.7 years for males); however, both genders had similar mean disease duration at baseline (6.7 years for females vs 6.9 years for males) and similar mean BMI (28.4 kg/m² for females vs 27.7 kg/m² for males) (Table 1). A higher proportion of female patients (42.7%) than male patients (24.4%) had a FiRST score ≥5, suggestive of chronic widespread pain. Similarly, females were proportionally more likely to have polyarticular disease and enthesitis, whereas males were proportionally more likely to have oligoarticular disease, dactylitis and psoriasis affecting >10% of body surface. In addition, a higher proportion of female patients than male patients had comorbidities and physician-confirmed axial involvement combined with peripheral joint disease (Table 1).
There were similarities and differences in baseline medication. Males were more likely to start ustekinumab or TNFi as the first line of bDMARD (54.7% males vs 46.9% females). They were slightly more likely to be receiving NSAIDs (63.1% males vs 59.4% females) and similarly likely to be receiving steroids (32.6% males vs 34.0% females). A numerically higher proportion of female patients were receiving antidepressants at baseline (8.0% females vs 2.6% males). The use of other analgesics was also slightly higher in females (29.7% females vs 25.9% males), whereas the proportions of patients receiving opioids were similar (5.1% females vs 4.8% males).
The proportions of females and males receiving any concomitant conventional DMARDs (cDMARDs) at baseline were similar (48.8% females vs 46.0% males); in particular for methotrexate at baseline (37.1% females vs 35.3% males).
Patients of both genders demonstrated improvement of clinical outcomes at 6 months and at 12 months compared with baseline; however, females experienced a less pronounced improvement of their disease than males (Table 2). The proportion of patients who reached MDA including VLDA was 21.0% at 6 months and 33.7% at 12 months for females, and 43.1% at 6 months and 55.5% at 12 months for males (Table 2; Fig. 2). The proportion of patients achieving cDAPSA LDA (including remission) was 43.8% for females vs 66.0% for males at 6 months, and 57.8% for females vs 80.3% for males at 12 months (Fig. 2).
Although at baseline males had a higher rate of dactylitis and nail psoriasis than females, and an only slightly lower rate of enthesitis, they had lower rates of enthesitis, dactylitis and nail psoriasis than females at 6 months; this difference became more pronounced at 12 months (Table 2; Fig. 2). Males had a lower HAQ-DI score at baseline and a greater improvement in HAQ-DI score at 6 months and at 12 months than females; the 95% CIs of the HAQ-DI change at 12 months did not overlap (Table 2). Females had a greater improvement of EQ5D VAS score over 12 months; however, their mean EQ5D VAS score at 12 months remained lower than that of males (61.5 for females vs 69.7 for males) (Table 2). The change from baseline in final PsAID-12 score was greater for males, as shown by non-overlapping 95% CIs (Table 2).
Male patients demonstrated higher treatment persistence than females (P ≤ 0.001, log-rank test; Fig. 3).
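For illustration, a persistence comparison of this kind (time on the initial bDMARD by gender, compared with a log-rank test) could be sketched as below with the lifelines package; the durations and censoring indicators here are synthetic placeholders, not PsABio data.

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(2)
# Months on the initial drug; event = 1 means stopped/switched, 0 means censored.
t_f, e_f = rng.exponential(20, 100).clip(max=12), rng.integers(0, 2, 100)
t_m, e_m = rng.exponential(30, 100).clip(max=12), rng.integers(0, 2, 100)

kmf = KaplanMeierFitter()
kmf.fit(t_f, event_observed=e_f, label="female")
print(kmf.median_survival_time_)           # median persistence, female cohort

result = logrank_test(t_f, t_m, event_observed_A=e_f, event_observed_B=e_m)
print(result.p_value)                       # test for a gender difference in persistence
```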
After 12 months, 730 patients (81.7%) remained on their initial bDMARD. When considering the ustekinumab and TNFi treatment groups separately, a higher proportion of females in the ustekinumab group switched or stopped their initial treatment compared with males. The same pattern was seen for the TNFi group (Supplementary Fig. S1, available at Rheumatology online). The proportions of males and females achieving MDA/VLDA and cDAPSA low disease activity/remission at 12 months were similar across treatment groups, i.e. males receiving ustekinumab showed a similar level of improvement as males receiving TNFi, and the same pattern was seen for females (Supplementary Table S1 and Fig. S2A, available at Rheumatology online).
Previously published overall analysis reported that patients receiving ustekinumab were on average receiving a later line of bDMARD treatment, had more severe skin involvement and more chronic widespread pain than patients receiving TNFi [21,22]. Separately for females and males, achievement of effectiveness endpoints was compared between treatment groups including propensity score adjustment for baseline covariates. No significant differences in effectiveness of ustekinumab vs TNFi were detected within genders (Supplementary Fig. S2B, available at Rheumatology online).
Overall safety data at 12 months have been reported previously [22]. The safety data reported at 36 months are in line with previous reports. The proportions of females with any AEs, treatment-related AEs and bDMARD discontinuation due to a treatment-related AE were slightly higher than those of males. The proportion of males with malignancies was slightly higher than that of females (Table 3).
Females were more likely to stop treatment compared with males [n = 109/494 (22.1%) and n = 50/399 (12.5%), respectively]. Lack of efficacy, rather than safety, was the most common reason to stop the initial treatment in both males [n = 42/50 (84.0%) and n = 7/50 (14.0%), respectively] and females.
a Pure axial PsA is defined as having only axial involvement (evaluation by the investigator rheumatologist without imaging), whereas combined axial PsA includes axial involvement and ≥1 of distal interphalangeal joint involvement, monoarticular or oligoarticular PsA, and arthritis mutilans.
Discussion
The analysis of gender subgroup results of the PsABio study has expanded previously published observations that men and women with PsA have different experiences with disease activity, clinical manifestations, impact on health-related quality of life, response to bDMARDs and drug persistence. Broadly, the differences we observed may be classified into those related to the disease and those related to the patient response to bDMARD treatment. At baseline, polyarticular PsA and enthesitis were more prevalent in female patients, who were also almost twice as likely to have a FiRST score ≥5, indicative of chronic widespread pain. The observation that they received NSAIDs slightly less frequently than male patients may indicate that patients and/or their prescribing physicians perceive PsA-related pain differently in females and males. This may also be supported by the less favourable outcomes among female patients, reflected by more frequent switching and greater use of antidepressants. To our knowledge, this is the first report of gender differences in non-DMARD medication. In the context of clinical and patient-reported outcomes following ustekinumab or TNFi treatment, although both gender subgroups showed improvement at 6 and 12 months, female patients remained in a worse disease state than male patients (i.e. lower rates of MDA/VLDA and cDAPSA low disease activity/remission among females compared with males). Although male patients started with lower (i.e. better) HAQ-DI and total PsAID-12 scores at baseline, they showed a greater improvement in both scores at 12 months than females, increasing the gap between the genders rather than closing it. Finally, female patients in our study entered on later lines of bDMARD treatment than males, suggesting that their previous biologic treatment(s) may have been unsuccessful. In addition, they stopped or switched the bDMARD earlier than male patients in the study, which may represent the recognized phenomenon of decreasing effectiveness in subsequent bDMARD lines. Females were more likely than males to stop treatment; this was due to effectiveness reasons, but also to some degree safety signals, in line with observations in the DANBIO registry [13]. Lack of effectiveness was the most common reason to stop treatment, irrespective of gender. These results add to the accumulating evidence of gender-specific differences in PsA [1,4,10,12] and other rheumatic conditions [28][29][30]. Females are more likely to have polyarticular disease [4,30] and experience more chronic pain than males. A recent study that observed differences in reporting of pain [8] hypothesized that women may have a different perception of disease. Our observations on prior and baseline medications (e.g. use of analgesics) may also point towards a different interpretation of the genesis of disease expression by health care providers/rheumatologists.
The chronic widespread pain identified with the FiRST tool could rather be an epiphenomenon, typically occurring in females, and should not preclude potent anti-inflammatory treatment for female patients. The results of this study suggest that treatment approaches for females are not fully successful, and broader/more comprehensive therapeutic strategies including sufficient and lasting anti-inflammatory DMARDs are needed for female patients, probably earlier in the disease course.
Poor prognostic factors, including dactylitis, enthesitis, polyarticular disease and progressive disability, have been reported to be more common in females, suggesting a need for more intensive treatment management [1,13]. Results from the DANBIO registry indicated that females had worse physical function compared with males after 12 months of treatment. In addition, inflammatory markers (e.g. CRP and SJC) were less affected in females compared with males [13]. Female sex hormones may contribute to enhanced immunogenicity and pro-inflammatory disease and could in part contribute to gender differences in the clinical characteristics of PsA [1]. Although dactylitis was not more common in females in our study, greater prevalence of enthesitis, polyarticular disease and higher HAQ scores in females are all in agreement with previous studies.
It may be that enthesitis and polyarticular disease are a patient's way of expressing pain, and the higher levels at baseline in females compared with males may be linked to the higher levels of fibromyalgia and chronic pain (FiRST questionnaire) seen in females. However, to fully understand if pain is linked to a higher enthesitis score, a mediation analysis would need to be undertaken, which could be difficult to interpret when outcomes are highly correlated [as is the case for patient-reported outcomes (PROs)].
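To make the idea concrete, a product-of-coefficients (Baron-Kenny-style) mediation sketch is shown below, asking whether a chronic-pain score mediates the gender-enthesitis association. The data are synthetic and the variable names and model form are assumptions for illustration; no such analysis was performed in PsABio.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 300
female = rng.integers(0, 2, n)                       # exposure
pain = 2.0 + 1.5 * female + rng.normal(0, 1, n)      # candidate mediator (FiRST-like score)
enthesitis = 1.0 + 0.4 * pain + 0.3 * female + rng.normal(0, 1, n)  # outcome

# Path a: exposure -> mediator.
a = sm.OLS(pain, sm.add_constant(female)).fit().params[1]
# Path b: mediator -> outcome, adjusting for exposure.
Xb = sm.add_constant(np.column_stack([female, pain]))
b = sm.OLS(enthesitis, Xb).fit().params[2]

print("indirect (mediated) effect a*b =", round(a * b, 3))
```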
This study benefited from a number of strengths, as it was a large prospective real-world cohort study (conducted across 91 sites in eight European countries) and, as such, patients were selected in a less rigid way than those enrolling in randomized controlled trials. The responses to treatment by gender were not limited to just one type of medication (i.e. they included two different modes of action of biologic therapy, ustekinumab or TNFi) and are thus more widely applicable. Limitations include the fact that the results shown here were generated from a post-hoc analysis, as investigating gender-related differences was not a primary study objective. In addition, consistent with a routine care setting, there was no strict medication protocol and the choice of bDMARD was made independently before enrolment by each patient's rheumatologist.
a The Safety set included all patients who had baseline data and an additional three female and two male patients excluded from the Baseline set [no valid baseline assessment (within 62 days prior to bDMARD start)]. b AEs do not include neoplasms unless stated. AE: adverse event; bDMARD: biologic DMARD; SAE: serious adverse event.
More studies are needed to illuminate further the disappointing treatment results for female patients compared with their male counterparts. This study can make rheumatologists aware that women with PsA have a substantially worse experience with their disease, both in terms of disease activity (i.e. increased disease duration and severity) and patient-reported outcomes, at the start of treatment with bDMARDs. The between-gender differences have consequences for treatment response and treatment persistence, and physicians should consider changing the current practice of treatment, particularly for female patients with PsA.
Conclusion
These real-world data from PsABio on gender differences suggest that, at the start of biologic treatment, females have a worse clinical picture of PsA than males. Although treatment improvements were seen in both genders, a lower percentage of women reached a favourable disease state of low or minimal disease activity at one year, and more women stopped/switched their biologic due to both lower effectiveness of the treatment and AEs. A better understanding of the mechanisms underlying these differences may improve therapeutic management in females with PsA.
Figure 1. Summary of patients in analysis - all patients. a Note that Effectiveness set-1 comprises all patients in the latest data run who had baseline data and a post-baseline assessment. See Methods and Supplementary Fig. S1, available at Rheumatology online. b Note that one patient may report more than one eligibility criterion. bDMARD: biological DMARD; TNFi: TNF inhibitor
Figure 2. Observed proportion of male and female patients achieving treatment targets (A, B) and single item resolution (C-F) at 6 and at 12 months - Effectiveness set-1. a Solid bar represents MDA and hashed bar represents VLDA. b Solid bar represents cDAPSA LDA/remission (≤13) and hashed bar represents cDAPSA remission (≤4). cDAPSA: clinical Disease Activity Index for Psoriatic Arthritis; LDA: low disease activity; MDA: minimal disease activity; mo: months; TNFi: TNF inhibitor; VLDA: very low disease activity
Table 1. Patient demographics and disease characteristics at baseline - Baseline set
Table 3. Adverse events at 36 months - Safety set a | 2023-02-24T06:18:25.967Z | 2023-02-22T00:00:00.000 | {
"year": 2023,
"sha1": "c50059eeacf4b388645ee27a2c23f47df6047962",
"oa_license": "CCBYNC",
"oa_url": "https://academic.oup.com/rheumatology/advance-article-pdf/doi/10.1093/rheumatology/kead089/49295245/kead089.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "b597811147568d0585a7054f445c91e975dacb4f",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
239720981 | pes2o/s2orc | v3-fos-license | A Vascular Variation in Radial Forearm Free Flap Harvesting: Distal Branching of the Radial Artery—A Case Report
Study Design: Case report. Objective: By reporting such a rare vascular variation of the radial artery, we aim to make other surgeons aware of comparable vascular variations in radial forearm free flap harvesting. Methods: In this case report, we present an 84-year-old male patient, with a rare distal branching of the radial artery into the deep palmar branch, approximately 7 cm from the wrist. In order to visualize the vascular variation, intraoperative photo documentation took place. Results: The radial free flap harvesting was successful and no postoperative complications were noted. Conclusions: Distal branching of the radial artery into the deep palmar branch may occur in radial forearm free flap harvesting. Since no restrictions in flap perfusion and/or hand perfusion were observed in our case, we recommend radial forearm free flap raising in the traditional way. No changes concerning the design and the positioning of the skin paddle need to be made.
Introduction
The fasciocutaneous radial forearm flap (RFF) was first described by Yang et al in 1981 1 and became known as the "Chinese flap." 2 Soon after, in 1982, Mühlbauer et al described the technique of flap raising and its advantages to the European audience. 3 Over the years, the RFF became the workhorse flap in head and neck surgery 2 due to its many indications, reliable anatomy, low level of long-term donor-site morbidity, 4 long and high-caliber vascular pedicle 3 and ease of flap raising. 4 Additionally, flap survival has been reported to be approximately 97% for the RFF, [4][5][6] with relatively few complications. 4 Anatomically, the brachial artery usually divides into the ulnar and radial arteries. Both arteries form the superficial and deep palmar arches, which are responsible for the blood supply to the digits. 4 Deviations from the course of the radial artery have been described in only a few cases. 7 The incidence of anatomical variations in the radial artery ranges from 4.3% to 9%. 8,9 In these cases, a high origin of the radial artery from the brachial artery occurred most frequently. Accessory branches of the radial artery are the rarest anatomical variant, with an estimated average prevalence of 0.5%. 10 The most frequently observed branches of the radial artery in the forearm are the radial recurrent artery and various muscular branches. 11 However, artery loops, stenosis, hypoplasia, and abnormal origin are more prevalent than branches. 10 If an RFF is planned, a preoperative Allen's test for screening of the hand circulation is mandatory. 4 Because of the high reliability of the arterial system of the forearm, preoperative imaging is only taken into consideration if there is a pathological Allen's test result and the other side cannot be considered as an alternative. 12 In such cases, alternative flaps may be necessary. In the rare event of complications, total hand ischemia, digital ischemia, chronic vascular insufficiency, hypothenar hammer syndrome and/or cold intolerance in the digits have been reported, but have not been observed by the authors themselves yet. 13 Because complications are possible, it is important to be aware of potential anatomical variations of the radial artery during the raising of an RFF. In this paper, we present a patient with a rare branching pattern of the radial artery into the deep palmar branch, approximately 7 cm proximal to the first wrist crease.
The Case
An 84-year-old male patient presented with a pre-auricular/temporal soft tissue sarcoma on the left side, pT1 pN0 L0 V0 Pn1 G2 R0 (TNM 8th Edition). A 2-step surgical procedure was planned and realized. First, the radical resection with temporary coverage was performed. Since the final histopathological result of the initial surgery revealed a close-margin situation, reconstructive surgery using an RFF was combined with a circumferential resection to increase the histological safety margins.
Prior to the surgical procedure, an Allen's test was performed on both arms. The result showed an adequate ulnar supply of the deep palmar arch for both sides. During the Allen's test, no deviations were noted. Since the Allen's test demonstrated normal blood circulation for both hands, the left (non-dominant) hand was chosen for flap harvesting.
A 6 cm × 4 cm skin paddle was marked in the usual manner and raised subfascially in an ulnar-to-radial direction. The radial artery could be palpated, dissected, and ligated. The skin incision was expanded to approximately 1 cm radial to the artery; the superficial branch of the radial nerve was identified and looped. While keeping a safe distance from the radial artery, the fascia was incised and the raising of the flap was continued. At this stage, anomalous distal branching of the radial artery was observed (Figure 1A). The lumen of this artery was noticeably larger than the lumen of the radial artery, and it seemed to be the deep palmar branch (DPB) of the radial artery (Figure 1B and C). The DPB could be palpated and followed to the dorsal side of the forearm and the radial fossa. The incision to the antecubital fossa followed in order to dissect the pedicle from distal to proximal between the brachioradialis and the flexor carpi radialis muscle (Figure 1A). The large distal branch was temporarily clamped to ensure a sufficient blood supply to the hand (Figure 1A). Since there were no signs of inadequate perfusion, the distal branch was ligated and the flap was raised in the conventional way. Small subcutaneous bleedings could be seen during the raising of the flap, which proved adequate flap perfusion at all times. No further anatomic variations occurred. The flap was anastomosed to the superior thyroid artery and a direct branch of the internal jugular vein. The patient made an uneventful recovery; the flap perfusion showed no limitations postoperatively.
Discussion
Distal branching of the radial artery has rarely been described in the literature and can potentially interfere with the raising of the RFF. On the other hand, anatomical variations of the arterial system of the forearm occur more frequently. The typical distal branching of the brachial artery can only be found in 70% of the cases, where the bifurcation is usually located 1 cm distal to the antecubital fossa. 12 Since most variations occur at the origin of the radial artery, it is important to follow the radial artery to the brachial artery before any branches are ligated. 12 Here, a high origin of the radial artery is the most frequent variation, 8,9 which should not compromise the flap raising. Besides that, the most commonly described variations in the literature are a superficial dorsal antebrachial artery and duplication of the radial artery, 8,12,14 which also do not interfere with the flap raising.
The unusual branching of the radial artery in the present case was first discovered intraoperatively, and made the extension of the skin paddle difficult to determine. The radial artery could be palpated regularly at the wrist level, and a preoperatively performed Allen's test showed no irregularities. The deep palmar branch (DPB) was discovered after the radial incision had been made and preparation of the skin paddle had been performed. The branch ran from the dorsal side of the forearm crossing the superficial radial nerve (Figure 1C). At this stage, an extension of the skin paddle and an inclusion of the DPB was impossible. The DPB was clamped to ensure a sufficient blood supply to the hand and to the skin paddle. After an adequate perfusion of the hand and the skin paddle could be confirmed, the deep palmar branch was ligated. The case demonstrates the importance of caution with large arterial branches during flap raising in patients with vascular variations before the dissection is completed. Otherwise, there is a risk of inadequate blood circulation, which can have severe negative consequences for the patient. 13 Due to the high reliability of the arterial system, a negative Allen's test result is considered sufficient preoperatively. 2,12,15 The Allen's test is a good and valid screening test for the arterial blood flow of the hand, but it cannot provide reliable predictions concerning the vascular anatomy of the forearm/hand. 10,16 In case of a pathological Allen's test, preoperative plethysmography and Doppler ultrasonography scanning are recommended. 16 In the present case, it might have been possible to palpate the DPB preoperatively while carrying out the Allen's test. In this case, further diagnostics could have been performed, as recommended in other studies 12,16 to verify the potential anatomic variation preoperatively and adjust the flap design or consider an alternative flap. Breik et al presented a similar case with anomalous distal branching of the radial artery into the deep palmar branch. 12 Along with the challenge of determining the skin paddle, the securing of the skin paddle perfusion was highlighted. These 2 factors were also our main concerns during the flap raising. Since no restrictions in flap perfusion and/or hand perfusion were observed in our case, we recommend RFF raising in the traditional way. No changes concerning the design and the positioning of the skin paddle should be made.
In conclusion, prior to microsurgical reconstruction with an RFF, surgeons should be aware of possible anatomical variations of the radial artery and the resulting flap design. Additionally, in case of an anatomical variation, the dissection of the pedicle should be completed, and the arterial branch should be clamped to prevent poor blood circulation to the hand/skin paddle before dividing any large branches of the radial artery.
Acknowledgement
We thank Franz Hafner for his photographic documentation.
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
Patient's Consent
Written consent from the patient was obtained in order to include the photographs in the publication. | 2021-10-26T00:08:51.137Z | 2021-08-31T00:00:00.000 | {
"year": 2021,
"sha1": "0088638b74f4c6026a6877a0777a7c5ab4ab0b29",
"oa_license": "CCBYNC",
"oa_url": "https://journals.sagepub.com/doi/pdf/10.1177/24727512211041415",
"oa_status": "GOLD",
"pdf_src": "Sage",
"pdf_hash": "d9cdd2ad72e28e9fc6051cc9b325eed80715d48a",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Physics"
]
} |
153341472 | pes2o/s2orc | v3-fos-license | Influence of Globalization on Development of the Russian Economy D . Kh .
This article contains a theoretical and practical analysis of the development of Russia's international cooperation within the globalization process. An important element of globalization for the Russian economy is economic integration within Russia's foreign trade cooperation. Consideration of positive and negative scenarios for the situation in the foreign trade segment of the Russian economy is the basis for identifying the basic parameters that support or hinder stable economic development. The consequences of the strengthening of globalization processes for the Russian economy in the period since 2012 make it possible to assess the features and directions of Russia's deepening integration into the world economy and to reveal the indicators requiring special control by the state.
Introduction
Modern globalization can be defined as the process of formation of a single universal information and financial space, driven by the development of electronic means of communication and of data transfer, storage and processing. As M. Kastels [1] noted, the information economy is global by its nature, as it allows operation at any point in time on a worldwide scale. The collapse of the USSR and the Soviet bloc completed globalization territorially and opened possibilities for the spread of the institutional norms of Western civilization across the whole planet.
Whereas in the middle of the XX century states, and the structures mediated by them, were the main subjects of international relations, their place is now taken by multinational corporations and international financial centers. This tendency, which shows itself in international cooperation in all spheres of economic and political life, can be defined as a transition from classical foreign policy to world domestic policy [2].
The rapid development of multinational corporations promoted high volumes of foreign investment. In 2000 the total amount of transnational direct investment around the world exceeded 1.3 trillion dollars, which meant a doubling in comparison with 1998. Further, in 2007 the indicator grew by 29.9%, reaching 1.83 trillion dollars. Due to the world financial and economic crisis, in 2008, by UNCTAD estimates, the inflow of foreign direct investment fell by 21% (to 1.45 trillion dollars). Against a background of macroeconomic instability and political uncertainty, a reduction of the total amount of foreign direct investment was observed in the following years. Thus, in 2012 this indicator decreased by 18% in comparison with the previous year and amounted to $1.35 trillion.
Theory
The variety of theoretical views on globalization and their detailed analysis are contained in the works of Guillermo De la Dehesa [2-3], Stanley Fischer [4], and other authors [5][6]. Supporters of globalization point to its being conditioned by the natural movement of capital and the increasing dependence of national economies on the global financial markets and multinational corporations, and offer effective mechanisms of international integration in the financial and economic spheres [7][8]. In their opinion, globalization in the long-term perspective allows the advantages of the international division of labor to be used, a global system of cooperation links (supply chains) to be created [9], and the efficiency of the functioning of the world economy to be increased.
Opponents of globalization dispute the claim that it is a natural process. Following Jacques Derrida [10], critics of globalization try to prove that behind the opening of economic borders there are forms of hegemony [11]. In line with this approach, economic subjects are initially unequal and are connected with each other by hierarchical relations, which are formed as companies grow in size. Globalization of the economy brings this process to the world level.
In its practical implementation, globalization has to solve a number of interconnected tasks.
The first is ensuring greater transparency, which is served by the process of expansion of the WTO and the regional economic unions, as well as by the activity of organizations such as the IMF and the World Bank.
The second is providing uniform legal rules for the conduct of international economic relations and the creation of international institutions that ensure their enforcement, including an increased role for international arbitration courts.
The third is the universalization of standards and regulations of economic activity, which also influences the unification of other aspects of social life, upbringing and education.
The fourth is the need to organize a universalized system of settlements and payments, which has to correspond to and financially support the functioning of the global economy.
Within the present work, attention is paid to the sphere of foreign trade relations of the Russian economy. In this context, the WTO can be considered an imported institution that is passing through a stage of intensive adaptation in Russia.
To determine the consequences of globalization for the economy of Russia, we consider the potential opportunities and threats that the accelerated adoption of the institutional requirements imposed by the country's entry into the WTO can represent for its economy.
Results
If the positive development scenario is implemented, the advantages of the Russian economy's entry into the WTO will be:
- Expansion of the sales market for products of Russian production and improvement of the structure of exports. "Actually, accession to the WTO is the first powerful measure for the advancement of Russian exports," noted Natalya Volchkova, professor of the Russian Economic School (RES). Positive expectations concern mainly the export of Russian metals (ferrous and non-ferrous metallurgy), on whose import the USA has restrictive quotas, and petrochemical production.
- Inflow of foreign investment, preferably direct, owing to the greater security of investors under Russia's international obligations and the inadmissibility of providing direct preferences to residents. "Stabilization of the trade policy will be able to make Russia more attractive to foreign investors," an expert of the New Economic School estimated the possible prospects.
- A decrease in internal prices as a result of the reduction of import customs duties on imported production and the toughening of conditions of competition for domestic producers.
- Improvement of the dynamics of the main macroeconomic indicators owing to the growth of business activity of exporters and of foreign and domestic investors.
- An increase in the level of legal and business culture connected with more active cooperation with leading foreign firms, the inflow of new technologies, the development of progressive modern management methods, and a decrease in corruption.
- An increase in the role of the innovative component in the development of the Russian economy.
Concerning the last position, the following should be noted. The correlation analysis of statistical data of the Republic of Tatarstan carried out by us showed that the rates of development of extractive branches depend directly on the rates of change in investment in fixed capital (correlation 0.66), while branches of manufacturing industry show an inverse correlation with the size of investment in fixed capital (correlation -0.71). Branches of the extractive sector of the economy also correlate with the rates of change in the volume of shipped innovative goods. As the export sector of Russia consists mainly of the export of raw materials, innovations in Russia tend to concentrate in export branches and pursue the aims of increasing efficiency and productivity in raw materials production. Thereby, primary development of the innovative process takes place in the raw-materials sector of the economy (which exposes the weaknesses of manufacturing industry) [13,16].
The expected advantages of Russia's entry into the World Trade Organization for agriculture are a possible improvement of the conditions of access of Russian agricultural production (mainly grains) to the world market and an inflow of foreign capital. At Russia's entry into the World Trade Organization, an agreement was reached on an admissible level of state support for national agrarian production of 9 billion dollars, with its reduction by 50% by 2018. The problem is that, in the current situation, state subsidizing of agriculture for 2013-2014 will amount to 130 billion rubles, which is much less than the limits of subsidizing provided by the agreement with the WTO. Growth of subsidizing is assumed for the subsequent period, when, according to the accepted obligations, there will have to be a reduction of its size.
Already at the stage of negotiations on Russia's entry into the World Trade Organization, it was envisaged that Russia recognizes that other members of the WTO can apply tariff regulation more widely (Table 1).
Table 1. Extra-quota rates of the customs tariff, % (World Tariff Profiles, WTO, 2010).
Considering the general specifics of the development of the Russian economy in the period before entry into the WTO, we determined the basic parameters influencing the development of the national economy, and a model was constructed showing the dependence of the rate of change of Russia's gross domestic product on the growth rate of receipts from exports (X1), the unemployment rate (X2) and investment in fixed capital (X3).
These parameters define the rate of change of gross domestic product (they show high importance within the model). According to experts, Russia's entry into the World Trade Organization will affect these parameters (directly for export receipts and investment in fixed capital, and indirectly for the unemployment rate).
The closeness of the overall influence of all parameters on the rate of change of gross domestic product can be estimated as high: 68% of the variation of the rate of change of gross domestic product is explained by changes in these parameters.
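A minimal sketch of such a model, under the assumption that the specification is multiplicative (as the reconstructed formula further below suggests), is to fit a log-linear OLS regression; the data, coefficients and variable names here are illustrative placeholders, not the authors' dataset.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 40  # quarterly observations (illustrative)
x1 = rng.uniform(0.9, 1.3, n)   # growth rate of export receipts
x2 = rng.uniform(4.0, 8.0, n)   # unemployment rate, %
x3 = rng.uniform(0.9, 1.2, n)   # growth rate of fixed-capital investment
y = 1.04 * x1**0.18 * x3**0.15 * x2**0.33 * rng.lognormal(0, 0.05, n)  # synthetic GDP growth

# Power-law model y = a * x1^b1 * x2^b2 * x3^b3 becomes linear after taking logs.
X = sm.add_constant(np.column_stack([np.log(x1), np.log(x2), np.log(x3)]))
fit = sm.OLS(np.log(y), X).fit()
print(fit.params)     # exp(intercept) ~ a; slopes are the elasticities b1, b2, b3
print(fit.rsquared)   # analogue of the ~68% explained variation mentioned above
```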
The negative scenario for the development of the situation in the Russian economy under the intensification of the globalization process assumes the following threats:
1. Absorption or reduction of domestic financial institutions. The probability of this is somewhat decreased owing to the agreements on their protection reached within the negotiations on Russia's entry into the World Trade Organization (foreign banks are forbidden to open branches in Russia; only the opening of subsidiaries is allowed).
2. A significant increase in the outflow of capital from the Russian economy in the absence of significant direct investment in Russia. Formally, according to UNCTAD experts, the inflow of capital into the Russian economy for 2013 grew by 83% and amounted to 94 billion dollars (the third place in the world). However, only 5-6% of these funds represent real capital investments; 92-95% are credits and loans which mainly go to the repayment of earlier credits and to financial speculation. According to the Bank of Russia, net export of capital by banks and enterprises from Russia grew from 54.6 billion dollars in 2012 to 63.7 billion in 2013 [11, 14].
3. A decrease in the competitiveness of domestic enterprises due to the reduction in the cost of imported goods and, as a result, a deterioration of the conditions of activity of a number of branches of Russian industry, agriculture and the services sector. The Russian economy is not in a phase of steady growth and is therefore not completely ready to withstand foreign competition [15].
4. An increase in the outflow of qualified labor. This scenario is rather realistic, as labor, especially highly skilled labor, like capital, is capable of being mobile and can therefore dodge taxes and move to other countries, which is confirmed by the practice of the East European countries that are new members of the European Union.
5. Cardinal differences in the level of compensation in different countries under the conditions of globalization have led to the outflow of capital and the relocation of production to countries with a traditionally lower level of salary.
6. Considering the dynamics of the quarterly growth rate of gross domestic product of the Russian economy, it is possible to track a deterioration of the situation since the third quarter of 2012 (Fig. 1).
y = 1.04 · x1^0.18 · x3^0.15 · x2^0.33
The expected parameters were not reached, and the dynamics of the quarterly growth rate of gross domestic product show their decline since 2012 (Fig. 2).
Fig. 2. Dynamics of the quarterly growth rate of gross domestic product (A: gain relative to the respective quarter of the current year, %; B: gain over the previous quarter, %).
Conclusions
Following the results of the first year of Russia's membership in the WTO, the following changes have taken place in the sphere of Russia's foreign economic activity:
1. Within the signed agreements on entry into the WTO, the import duties on a number of foodstuffs decreased: on imported pork, within quotas, from 15 to 0%, and over quota from 75 to 65%; on dairy products from 25 to 15%. As a result, in the fall of 2012 imports of pork, milk, cheeses (116%) and butter (136%) grew, and imports of powdered milk increased (216%). According to the Institute of Conjuncture of the Agrarian Market, imports of vegetable oil (50%) and tobacco products (33%) also increased. In Russia there was a reduction of prices in the market for animal husbandry production (and its prime cost for the majority of Russian producers is lower). To smooth the situation in the transition period, a ban on the import of chilled and frozen meat from a number of countries (because of hygienic requirements) was imposed. As a result, at the beginning of 2013 deliveries of pork to Russia from Germany fell.
2. Wage arrears increased by 25% from July 2012 to August 2013. During the summer of 2013, unemployment grew by 8%. In industrial production, zero growth was observed for 2013.
3. The introduction of a utilization fee on imported cars after entry into the WTO led, on September 1, 2012, to the submission of the first claim against Russia within the WTO. The fee is directed against cars of foreign production and is used for the same purpose as import duties. In compliance with the new requirements, planned
Fig. 1. Dynamics of growth rates of gross domestic product in the Russian economy (quarterly, as a percentage, in 2008 prices, billion rubles), with the seasonal factor excluded.
the dominating force in relation to France of the middle of the XX century François Perroux wrote allocating centers of growth and periphery[12]. | 2018-11-19T04:52:13.496Z | 2014-08-27T00:00:00.000 | {
"year": 2014,
"sha1": "9cbba939000104ee6b9ebf88b7321b934df4da1d",
"oa_license": "CCBYNC",
"oa_url": "https://www.richtmann.org/journal/index.php/mjss/article/download/3657/3582",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "9cbba939000104ee6b9ebf88b7321b934df4da1d",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": [
"Economics"
]
} |
234743978 | pes2o/s2orc | v3-fos-license | An Appliance Scheduling System for Residential Energy Management
In this work, an Appliance Scheduling-based Residential Energy Management System (AS-REMS) for reducing electricity cost and avoiding peak demand while keeping user comfort is presented. In AS-REMS, based on the effects of the starting times of appliances on user comfort and on user attendance during their operation, appliances are divided into two classes in terms of controllability: MC-controllable (allowed to be scheduled by the Main Controller) and user-controllable (allowed to be scheduled only by a user). The use of all appliances in the considered home is monitored for a period of time to record users' appliance usage preferences and habits on each day of the week. Then, for each MC-controllable appliance, preferred starting times are determined and prioritized according to the recorded user preferences on similar days. When scheduling, the assigned priorities of the starting times of these appliances are considered for maintaining user comfort, while the tariff rate is considered for reducing electricity cost. Moreover, the expected power consumption of user-controllable appliances corresponding to the recorded user habits and the power consumption of MC-controllable appliances corresponding to the assigned starting times are considered for avoiding peak demand. The corresponding scheduling problem is solved by the Brute-Force Closest Pair method. AS-REMS reduces peak demand levels by 45% and electricity costs by 39.6%, while providing the highest level of user comfort, at 88%. Thus, users' appliance usage preferences are sustained at a lower cost while their comfort is kept impressively high.
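To give a concrete flavour of the scheduling idea summarized above, the toy sketch below exhaustively scores every candidate starting hour of one MC-controllable appliance by a weighted sum of tariff cost and a comfort penalty derived from the priority of that slot, discarding slots that would exceed an assumed peak-demand limit. The tariff, load profile, weights and priorities are invented placeholders, and this is not the paper's Brute-Force Closest Pair routine.

```python
TARIFF = [0.08] * 7 + [0.12] * 11 + [0.20] * 6       # $/kWh per hour of day (assumed)
BASELINE_LOAD = [0.5] * 17 + [2.5] * 5 + [1.0] * 2    # kW expected from user-controllable appliances (assumed)
PEAK_LIMIT = 3.5                                       # kW (assumed limit)

def best_start(duration_h, power_kw, slot_priority, w_cost=1.0, w_comfort=1.0):
    """Exhaustively score every feasible start hour and return (score, start hour)."""
    best = None
    for start in range(24 - duration_h + 1):
        hours = range(start, start + duration_h)
        if any(BASELINE_LOAD[h] + power_kw > PEAK_LIMIT for h in hours):
            continue  # this start would violate the peak-demand limit
        cost = sum(TARIFF[h] * power_kw for h in hours)
        comfort_penalty = 1.0 - slot_priority.get(start, 0.0)  # priority 1.0 = most preferred
        score = w_cost * cost + w_comfort * comfort_penalty
        if best is None or score < best[0]:
            best = (score, start)
    return best

# Washing machine: 2 h, 1.5 kW; the user historically prefers starting at 18:00 or 20:00.
print(best_start(2, 1.5, {18: 1.0, 20: 0.8}))
```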
Introduction
At present, the growing population and increasing use of technological devices in cities yield an increase in energy demand. This high energy demand causes heavy depletion of natural resources and pollution of the environment, as well as high costs for both users and energy providers. Hence, efficient and conscious use of energy is essential for people, the environment and the future. Since residential energy consumption constitutes 38% of the total energy consumption in the US [1], studies on Residential Energy Management (REM) have gained importance.
Residential users have various energy use habits according to their lifestyles and want to keep their comfort in today's life, while reducing electricity cost is the common goal of all users. Hence, keeping user comfort and reducing electricity cost are two parameters that should be under consideration in REM studies. On the other hand, the total electricity consumption of independent homes may exceed the power limit provided by the grid, so that peak demand occurs at certain times of the day, e.g., in the evenings when all occupants are at home. This leads to expensive failures in the grid and the requirement for more grid infrastructure to prevent these failures. Grid malfunctions may also pose serious problems affecting the public's social life, such as disruption of health and transportation activities in the city. Consequently, for both residential users and energy providers, avoiding peak demand should also be considered in REM studies.
Many studies have been done on REM systems in the literature considering electricity cost, peak demand and user comfort.
Some REM studies in the literature dealt with cost reduction and keeping user comfort simultaneously. Through these studies, the work in [2] proposed a pre-emptive prioritybased load scheduling approach at residential premises, while a REM algorithm using reinforcement learning and an artificial neural network was presented in [3]. The work in [4] demonstrated that comfort and energy consumption can be partially decoupled by an adaptive indoor comfort management approach. An automated switching off system with load balancing and appliance planning algorithm was proposed in [5]. In that work, all appliances are scheduled to manage the cumulative energy consumption below a defined power level with less interaction to users. Authors in [6] presented a multi-objective optimization model to reduce the electricity cost as well as the inconvenience level of the home user. They evaluated the performance of the proposed method by using the energy consumption patterns of several different social-economic Brazilian families. The work in [7] presented a consensual negotiation-based decision model for eliminating the overload by using appliances with the IoT concept. In that model, all connected appliances make their individual decisions based on the consensus algorithm. In [8], a REM approach was presented by a mixed-integer nonlinear programming problem with time or energy-based task classification. In [9] authors presented an improved multi-objective optimization algorithm to minimize the electricity cost with considering the user comfort. A new binary particle swarm optimization with quadratic transfer function was proposed in [10] for scheduling shiftable appliances in smarthomes. Authors in [11] presented a mathematical model to assist aggregator that is able to match a flexibility request from distributor system operator while reducing the cost and rescheduling shiftable appliances. In [12], a level billing approach was proposed with the aim of providing user comfort and cost reduction while a probabilistic scenario-based method [13] and an intensive quadratic programming approach [14] were presented with the same aim. Performance of different types of Demand Side Managements (DSMs) are compared in [15]; such as, deterministic and stochastic DSMs, and day-ahead and real-time DSMs. The authors in [16] propose robust energy management for grid-connected and islanding microgrids by considering stochasticity over the active power injections from photovoltaic units, wind turbine units, and conventional demands. Authors presented a multilayer control mechanism in [17] and they proposed to use Tabu search for scheduling HVAC (heating, ventilation and air conditioning) system. Some studies in the literature aimed at reducing the electricity cost and avoiding the peak demand as well as keeping users comfortable. For example, the aim of the work [18] is to minimize the energy cost and dissatisfaction of the customer by using different electricity tariffs (time of use (TOU), inclining block rate (IBR) and real-time pricing (RTP)). The work in [19] proposes an automatic control approach that reduces the peak demand of buildings as compared to manual control. An incentive-based energy optimization method is proposed [20] for scheduling a number of residential electric appliances of a residential community. Authors propose a crow search optimization algorithm in [21] for appliance scheduling with RTP tariff rate. 
In [22], mixed-integer quadratic programming problem is proposed to find the optimal energy scheduling of controllable loads as well as charging/discharging strategies of the energy storage systems and plug electric vehicles by considering renewable energy resources (RESs). Authors take the forecast uncertainty caused by the RESs energy profiles into account, as well as the users' energy demand.
The main drawback of these works is the use of the average powers of appliances instead of their real power profiles. That is, the power consumption of appliances is assumed to be constant during a time period, e.g., 1 h. This drawback was eliminated in the authors' previous works by using real power profiles of appliances: in [23], a real-time residential power management scheme based on the prioritization of power units according to their current status and tariff rates was presented, while in [24] an appliance-based residential power management system that manages the home's power consumption based on the operational characteristics of smart appliances was introduced. Although user comfort was also taken into consideration besides reducing the electricity cost and avoiding peak demand in these works, ignoring user preferences and allowing comprehensive intervention to some appliances kept the user comfort at a limited level.
Remarkable REM studies in the literature are summarized with their methods, objectives and descriptions in Table 1.

Table 1. Synthesis of remarkable REM studies in the literature.

Reference | Method | Objective | Description
[5] | Multiobjective optimization programming | Minimizing cost | Schedule some selected appliances using TOU and rated power
[8] | Mixed-integer nonlinear programming | Minimizing cost | Schedule the time- and energy-based appliances
[10] | Binary particle swarm optimization | Minimizing the daily electricity bill without affecting comfort | Schedule the shiftable appliances by using TOU and rated power
[12] | Level billing approach | Minimizing the daily bill | Schedule the time-shiftable loads with a mathematical model
[14] | Intensive quadratic programming | Minimizing cost and peak | Flatten the power consumption by using PV power
[20] | Incentive-based energy optimization | Minimizing the electricity bill | Schedule the shiftable appliances by using rated power
[24] | Rolling wave planning | Minimizing cost and peak | Control controllable appliances by using the real power consumption

In this study, an Appliance Scheduling-based Residential Energy Management System (AS-REMS), which avoids peak demand and keeps user comfort while reducing electricity cost, is proposed. In AS-REMS, based on the effects of starting times of appliances on user comfort and the user attendance during their operations, appliances are classified as MC-controllable appliances, which are allowed to be scheduled by the Main Controller, and user-controllable appliances, which are allowed to be scheduled only by users. The use of all appliances is monitored in the considered home for a while to obtain the users' appliance usage preferences and habits for each day of the week. Then, for each MC-controllable appliance, preferred starting times are determined and prioritized according to the recorded user preferences on similar days. When scheduling MC-controllable appliances, the assigned priorities of starting times are considered for maintaining user comfort. On the other hand, the sum of the expected power consumption of user-controllable appliances corresponding to the recorded user habits and the power consumption of MC-controllable appliances corresponding to the assigned starting times is obtained as the total power consumption of the considered home, which is taken into account for avoiding peak demand, while the tariff rate is considered for reducing the electricity cost. The corresponding scheduling problem is solved by the Brute-Force Closest Pair method.
AS-REMS provides important advantages over similar REM studies. The main contributions of AS-REMS to the literature can be summarized as follows:
• AS-REMS is a multi-objective REMS structure that considers avoiding peak demand, reducing electricity cost and keeping user comfort simultaneously.
• AS-REMS provides realistic and high-level user comfort, because it is based on the users' appliance usage preferences and habits, which are obtained by monitoring the considered home for a while.
• AS-REMS assures the detection of short-term peak demand, and consequently procures smooth and continuous energy from the grid, since it uses real power consumptions of appliances instead of their rated (average) powers.
This paper is organized as follows: The proposed AS-REMS is introduced in Section 2 in detail. Case studies and their results are presented and interpreted in Section 3. Finally, conclusions and future work directions are given in Section 4.
Appliance Scheduling System for Residential Energy Management
In this work, AS-REMS is proposed for scheduling allowed appliances with the aim of avoiding peak demand and reducing electricity cost while keeping user comfort.
AS-REMS consists of the Main Controller (MC), a database, communication units, electrical appliances, power measurement units (smart plugs) mounted on appliances, control units (Wifi-RS232 converters or Wifi-relay modules) and a smart meter. The configuration of AS-REMS is presented in Figure 1. In AS-REMS, one execution period (e.g., one day, 24 hours) is discretized into a prescribed T number of uniform time slots, i.e., t ∈ T = {1, 2, . . . , T}; hence, the total number of time slots (shortly, ts) in a day is T = 24/∆ t . Here, ∆ t represents the length of each ts.
Appliances
In AS-REMS, appliance scheduling is strictly based on users' appliance usage preferences. Within this scope, the home is monitored for a while to constitute usage and power consumption information of appliances. During the monitoring, at the beginning of each ts of one execution period T (e.g., one day), MC communicates with appliances to gather usage information of each appliance a ∈ L, where L represents the set of appliances. This appliance usage information is stored in a database in a matrix form, namely utilization matrix. In AS-REMS, for each appliance a ∈ L distinct utilization matrices are composed for each set of similar days of observed weeks; thus seven different utilization matrices are constructed for each appliance. The set of similar days of the observed weeks is represented by the set D.
The utilization matrix of an appliance a ∈ L for the set of the similar days D is represented by U_D^a ∈ {0, 1}^(|D|×T); U_D^a(d, t) is constructed as in Equation (1). In AS-REMS, in order to get power consumption profiles of appliances, power consumptions of appliances are measured via power measurement units during proper durations and stored in a database in vector form, namely the power profile vector. The power profile vector of an appliance a ∈ L is represented by P̃_a ∈ R^(1×|T_m^a|), where T_m^a ∈ T is the measurement duration of a and P̃_a(t̃) refers to the power consumption of a at its t̃-th internal ts. Note that, in this study, the internal ts of an appliance during its operation is indicated by t̃ (with Δ_t̃ = Δ_t), such that t̃ = 0 at the time the appliance is turned on, t̃ increases as long as the appliance is running, and t̃ is reset when the appliance is switched off.
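The bookkeeping behind the utilization matrix and the power profile vector can be sketched as below. This is only an illustration under my own assumptions (it is not the authors' code): observed usage is assumed to be available as the set of slots in which the appliance was running on each of the |D| similar days, and the wattage values are invented.

```python
import numpy as np

def build_utilization_matrix(observed_usage, num_days, num_slots):
    """Binary |D| x T matrix: entry (d, t) is 1 if the appliance was in use
    at slot t of observed day d, and 0 otherwise."""
    U = np.zeros((num_days, num_slots), dtype=np.uint8)
    for day, slots in observed_usage.items():
        U[day, list(slots)] = 1
    return U

# Example: an appliance observed on 2 similar days of a 10-slot toy horizon.
U = build_utilization_matrix({0: {3, 4, 5}, 1: {6, 7}}, num_days=2, num_slots=10)

# A power profile vector is simply the measured consumption (W) at each internal
# slot of one appliance cycle, e.g. for a short washing-machine cycle:
washing_machine_profile = np.array([2000.0, 1800.0, 150.0, 150.0, 300.0])
```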
In AS-REMS, power consumption of an appliance a ∈ L at a ts t ∈ T of a day d ∈ D is defined in Equation (2).
Here, t_s^a ∈ T is the starting time of the appliance a, and t − t_s^a refers to the internal ts t̃ of a. In appliance scheduling-based REM studies, it is generally necessary to interfere with appliances externally. This is not appropriate for some appliances because of their intended use and technical features. Therefore, most REM studies dealing with appliance scheduling in the literature have considered the classification of appliances. The main basis of these classifications is the suitability of appliances for external interference. Therefore, classification types are consistent with each other, although the assigned class names differ, such as controllable (C)/uncontrollable (UnC), shiftable (Sh)/unshiftable (USh), schedulable (Sc)/unschedulable (USc), normally operated (NO), fixed and task-based (FTB), comfort-based elastic (CBE), energy-based elastic (EBE), etc. (see Table 2 for types of classification in the literature). For example, the refrigerator is considered in categories such as unshiftable, uncontrollable or task-based, since its operation time and duration are not suitable for any external interference.
In AS-REMS, the suitability of appliances for external interference is determined based on the effects of starting times of appliances on user comfort and user attendance during their operations. Accordingly, appliances are divided into two classes in terms of controllability: MC-controllable (MCC) appliances which are allowed to be scheduled by a main controller and user-controllable (USC) appliances which are allowed to be scheduled only by a user. The set of appliances is represented by L = L MC ∪ L U C , where L MC is the set of MC-controllable appliances, while L U C is the set of user-controllable appliances. These classes will be explained in detail in the following subsections. Appliances whose starting times directly affect user comfort are classified as usercontrollable appliances. Their starting times are set by users and are not negotiable. Interfering with starting times of these appliances against the demand of users undoubtedly deteriorates user comfort. User-controllable appliances can be two types: non-delayable (ndUSC) and delayable (dUSC). Non-delayable user-controllable appliances are generally appliances that must be turned on immediately upon users' request, and they are basically operated by an attending user (e.g., TV, hairdryer, toaster, rice cooker, microwave oven, vacuum cleaner, iron, lights and etc.). Appliances whose operations are fixed (e.g., refrigerator) are also considered in this type. On the other hand, delayable user-controllable appliances can be scheduled for a specific time due to users' requests (e.g., kettle, coffee machine, water heater, air-conditioner). For example, when a user wants coffee to be ready at 8:00 a.m., he/she can schedule the starting time of the coffee machine correspondingly. For both delayable and non-delayable user-controllable appliances, only users can decide when and how long these appliances will operate. Hence, any user-controllable appliances are not allowed to be scheduled by MC in AS-REMS.
The power consumption of a user-controllable appliance a ∈ L UC is given in Equation (3).
Here, ∀t ∈ [t_s^a − T_m^a]. In AS-REMS, the power measurement duration T_m^a of any user-controllable appliance a ∈ L_UC is one execution period, that is, T_m^a = T. Power consumption profiles of a kettle and an air-conditioner are given as examples of power consumption profiles of user-controllable appliances in Figure
MC-Controllable Appliances
Appliances whose starting times can be interfered with without deteriorating user comfort are classified as MC-controllable appliances. These are unattended appliances that operate with little supervision (e.g., washing machine, dishwasher, tumble dryer, battery-powered appliances). For example, dirty laundry can wait in the washing machine for a while (until the assigned starting time) without deteriorating user comfort. Hence, MC-controllable appliances are allowed to be scheduled by MC in AS-REMS.
Any MC-controllable appliance a ∈ L MC operates during a certain time T a o ∈ T after it is switched on. The power consumption of a MC-controllable appliance a ∈ L MC is given in Equation (4).
In AS-REMS, power measurement duration T a m of any MC-controllable appliance a ∈ L MC is its operation period, that is T a m = T a o . The power consumption profiles of a washing machine is given in Figure 4.
Brute Force Closest Pair Method
The Brute-Force Closest Pair (BFCP) method finds the closest point to a reference point within a set of candidate points by considering the Euclidean distance. For example, let us consider Figure 5, where (x_r, y_r) is the reference point and the other points (x_i, y_i), i ∈ {1, 2, ..., 5}, are candidate points making up the set P_candidates.
BFCP computes all Euclidean distances of the reference point P_r = (x_r, y_r) from the five points P_ci = (x_i, y_i), i ∈ {1, 2, ..., 5}, which amounts to 5 distance computations {P_r P_c1, P_r P_c2, P_r P_c3, P_r P_c4, P_r P_c5}, and determines the green point as the closest point to the reference point according to the following Euclidean distance equation:

d(P_r, P_ci) = √((x_r − x_i)² + (y_r − y_i)²)   (5)
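A minimal sketch of this search is given below; it is illustrative only (the coordinates are invented), and it simply evaluates the Euclidean distance from the reference point to every candidate and keeps the smallest one.

```python
import math

def closest_to_reference(reference, candidates):
    """Brute-force search: return (closest point, its Euclidean distance)."""
    best_point, best_distance = None, math.inf
    for point in candidates:
        distance = math.dist(reference, point)  # sqrt((xr-xi)^2 + (yr-yi)^2)
        if distance < best_distance:
            best_point, best_distance = point, distance
    return best_point, best_distance

reference = (0.0, 0.0)
candidates = [(1.0, 2.0), (0.5, 0.5), (3.0, 1.0), (2.0, 2.0), (0.8, 1.5)]
print(closest_to_reference(reference, candidates))  # ((0.5, 0.5), ~0.707)
```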
Scheduling Parameters
AS-REMS schedules MC-controllable appliances at the beginning of each day with the aims of avoiding peak demand and reducing electricity cost while keeping user comfort. Hence the scheduling parameters are electricity cost, peak demand and user comfort.
User Comfort
For scheduling MC-controllable appliances without deteriorating user comfort, AS-REMS considers the users' appliance usage preferences stored in the database. For each MC-controllable appliance a ∈ L_MC, the starting times t_s^a on the similar days in D are obtained from the corresponding utilization matrix U_D^a and listed in the set of starting times, i.e., TS_D^a, as in Equation (6).
For each t_s^a ∈ TS_D^a, the number of times it is chosen as starting time, i.e., NC_D^a(t_s^a), and its probability, i.e., Pr_D^a(t_s^a), on any day in D are calculated in Equation (7) and Equation (8). Then, for each appliance a ∈ L_MC, each starting time t_s^a is labeled with the corresponding priority level for the considered day d, i.e., PrL_d^a(t_s^a), such that the priority level of the starting time with the highest probability is 1, that with the second-highest probability is 2, and so on. Note that the priority level of the starting time with the lowest probability is |TS_D^a|. The total priority level induced by starting the operation of MC-controllable appliances a_i ∈ L_MC at times t_s^(a_i) ∈ TS_D^(a_i) on a day d ∈ D is defined as the square root of the sum of the squared priority levels of the MC-controllable appliances, as in Equation (9).
Here, T_s^(L_MC) stands for a combination of starting times of all MC-controllable appliances. Unlike previous studies in the literature, for a more realistic approach, AS-REMS obtains the users' appliance usage preferences by monitoring their power consumption in the considered home. The total priority level is a determining parameter that shows the preference of a starting time combination of MC-controllable appliances. As operating an appliance at its most preferred starting time increases user comfort, AS-REMS considers operating the MC-controllable appliances at the most preferred starting times by minimizing the total priority level.
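The priority-level bookkeeping described above can be sketched as follows. The code is an illustration under my own naming (it is not the paper's implementation), the example starting times are invented, and ties between equally probable starting times are broken arbitrarily.

```python
import math
from collections import Counter

def priority_levels(starting_times_on_similar_days):
    """Rank the observed starting times of one MC-controllable appliance:
    the most frequently chosen time gets priority level 1, the next gets 2, ..."""
    counts = Counter(starting_times_on_similar_days)            # NC
    total = sum(counts.values())
    probabilities = {t: c / total for t, c in counts.items()}   # Pr
    ranked = sorted(probabilities, key=probabilities.get, reverse=True)
    return {t: rank + 1 for rank, t in enumerate(ranked)}       # PrL

def total_priority_level(chosen_levels):
    """Square root of the sum of squared priority levels over all
    MC-controllable appliances (as in Equation (9))."""
    return math.sqrt(sum(level ** 2 for level in chosen_levels))

print(priority_levels(["05:12", "05:12", "06:40", "05:12", "07:00"]))
print(total_priority_level([1, 2, 1]))   # sqrt(6) ≈ 2.45
```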
Users can prefer to operate MC-controllable appliances at times different from the recorded user habits, which may yield uncertainties at the preferred starting times. In order to eliminate the effects of these uncertainties on user comfort, users are also allowed to select a specific starting time interval T a i s interval ⊂ T for each MC-controllable appliance a i ∈ L MC . In this case, the user's present preference is considered instead of stored historical usage preferences, and the priority level of a i is set to 0 (i.e., Pr L a i d (t) = 0 ∀t ∈ T ). Therefore, the priority level of a i does not add up to the total priority level value.
Electricity Cost
Since the electricity tariff rate is generally time-dependent, different starting times of appliances yield different electricity costs. For scheduling MC-controllable appliances with reducing the electricity cost, AS-REMS takes the tariff rate into consideration.
The total electricity cost of MC-controllable appliances a i ∈ L MC induced by starting their operation at times t a i s ∈ TS a i D in a day d ∈ D is calculated in Equation (10).
Here, Tariff(t) is the unit price of electricity per kWh at a ts t.
For reducing electricity cost, AS-REMS considers minimizing the total electricity cost as much as possible.
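Under the assumption of 2-minute slots and a caller-supplied tariff function, the cost computation of Equation (10) can be sketched as below; the profile values and the flat placeholder tariff are invented for illustration and are not taken from the paper.

```python
def appliance_cost(profile_w, start_slot, tariff, slot_hours):
    """Price the shifted power profile of one MC-controllable appliance slot by slot."""
    cost = 0.0
    for i, power_w in enumerate(profile_w):
        cost += (power_w / 1000.0) * slot_hours * tariff(start_slot + i)  # kWh * price
    return cost

def total_cost(profiles_w, start_slots, tariff, slot_hours=2 / 60):
    """Sum the cost over all MC-controllable appliances for one candidate schedule."""
    return sum(appliance_cost(p, s, tariff, slot_hours)
               for p, s in zip(profiles_w, start_slots))

flat_tariff = lambda slot: 0.10            # placeholder price per kWh
print(total_cost([[2000, 1800, 150]], [100], flat_tariff))
```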
Peak Demand
For scheduling MC-controllable appliances by avoiding peak demand, AS-REMS intends total power consumption of appliances in the considered home not to exceed previously specified grid power limit, P lim , at any time of the day. The total power consumption at each time is the sum of the expected power consumption of user-controllable appliances corresponding to the recorded user habits on similar days, and the power consumption of MC-controllable appliances corresponding to the assigned starting times.
Expected power consumption of a user-controllable appliance a ∈ L UC at a ts t of a day d ∈ D, i.e., P a d exp (t), is calculated in Equation (11) by regarding all similar days, that is, all days in D: Expected power consumption of all user-controllable appliances at a ts t of a day d ∈ D, i.e., P L UC d exp (t), is calculated in Equation (12).
Power consumption of MC-controllable appliances a i ∈ L MC with starting times t a i s ∈ TS a i D in a day d ∈ D at a ts t ∈ T , i.e., P L MC d (t, T L MC s ), is calculated in Equation (13).
Corresponding total power consumption of all appliances in a day d ∈ D at a ts t ∈ T is calculated as in Equation (14).
According to the total power consumption, whether the predefined power limit, P lim , is exceeded at any ts t ∈ T of the day d ∈ D is represented by power limit indicator, i.e., I d (T L MC s ) is obtained as in Equation (15).
In order to provide a realistic approach, AS-REMS uses the recorded appliance usage habits to determine the expected power consumptions of user-controllable appliances and, consequently, the power limit indicator, which is a determining parameter that shows whether the power limit is violated. To avoid peak demand, AS-REMS aims to keep the power limit indicator value at 0.
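A sketch of this peak-demand check is given below, under my own simplifying assumptions: the expected user-controllable load is taken as the per-slot mean over the similar days, the random load and the single MC-controllable profile are placeholders, and each appliance cycle is assumed to fit within the day.

```python
import numpy as np

def expected_uc_load(per_day_load_w):
    """Expected user-controllable load per slot: mean over the |D| similar days."""
    return np.asarray(per_day_load_w, dtype=float).mean(axis=0)

def scheduled_mc_load(profiles_w, start_slots, num_slots):
    """Total MC-controllable load per slot for a candidate combination of starting times."""
    load = np.zeros(num_slots)
    for profile, start in zip(profiles_w, start_slots):
        profile = np.asarray(profile, dtype=float)
        load[start:start + len(profile)] += profile    # assumes the cycle fits in the day
    return load

def power_limit_indicator(total_load_w, p_lim_w=4500.0):
    """1 if the grid power limit is violated in any slot, 0 otherwise."""
    return int(np.any(total_load_w > p_lim_w))

T = 720
uc = expected_uc_load(np.random.default_rng(0).uniform(0, 800, size=(12, T)))
mc = scheduled_mc_load([[2000, 1800, 150, 150]], [156], T)
print(power_limit_indicator(uc + mc))
```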
Users can operate user-controllable appliances whenever they want, which may yield uncertainties in the total power consumption. In order to eliminate the effects of these uncertainties on the electricity cost and power limit indicator, these parameters are calculated by considering the expected power consumptions, which are determined from the usage of appliances over several days under several environmental conditions.
Scheduling Procedure
AS-REMS aims to schedule MC-controllable appliances by minimizing power limit indicator, total electricity cost and total priority level parameters. The corresponding scheduling procedure is given in AS-REMS Algorithm (namely, Algorithm 1).
At step 1 of the AS-REMS Algorithm, the set of starting times TS_D^(a_i) is found for each appliance a_i ∈ L_MC. If the user does not select a specific starting time interval (T_s interval^(a_i) = ∅) for an appliance a_i ∈ L_MC, TS_D^(a_i) is obtained as given in Equation (6); otherwise, the user-selected interval is used. Then, for each possible combination of starting times, the corresponding parameter values (total electricity cost, power limit indicator and total priority level) are calculated. In order to find the optimal practical solution among the candidate solution set, the BFCP method, which uses the Euclidean distance approach to find the optimal solution within a practical solution set, is used. Note that, since the number of MC-controllable appliances and the number of their preferred starting times are limited, the number of possible combinations of starting times of MC-controllable appliances, and thus the size of the practical solution set, is also limited in this problem. Hence, applying the BFCP method (calculating the value of the objective function for each possible practical solution) is computationally feasible, which is also verified by the analysis results given in the case study section.
Algorithm 1 AS-REMS algorithm.
Input: L_MC, L_UC, d, T_s interval^(a_i), U_D^a, P̃_a, P_lim, Tariff.
Step 1. Find the set of starting times TS_D^(a_i) for each MC-controllable appliance a_i ∈ L_MC (Equation (6) or the user-selected interval).
Step 2. Form the practical solution set of all possible starting time combinations T_s^(L_MC) (Equation (16)).
Step 3. Determine the corresponding parameter values (total electricity cost, power limit indicator (Equation (15)) and total priority level) for each T_s^(L_MC).
Step 4. Normalize the corresponding parameter values for each T_s^(L_MC).
Step 5. Apply the BFCP method to select the optimal combination of starting times.
Since the BFCP method uses the Euclidean distance approach and takes the magnitudes of the parameters while neglecting their units, parameters with high magnitude ranges will dominate the parameters with low magnitude ranges. In order to suppress this effect and allow each parameter to contribute to the result equally, all parameters are brought to the same scale of magnitudes at step 4 of the AS-REMS Algorithm; the total electricity cost of MC-controllable appliances and the power limit indicator are scaled into the range of the priority level values. With the weighted formulation of Equation (17), it is possible to find the optimal solution and the corresponding starting times of MC-controllable appliances for different parameter weights. Thus, in the case that any parameter is desired to be more effective, this is achieved by increasing the weight of the corresponding parameter. For example, if the primary goal is reducing the electricity cost, the weight of the relevant parameter (i.e., w_C) is chosen bigger than the other weights (i.e., w_C > w_I, w_C > w_Pr). If the primary preference is keeping the user comfortable and reducing the electricity cost simultaneously, the weights of these two parameters (i.e., w_C and w_Pr) are chosen higher than the weight of the power limit indicator (i.e., w_C > w_I, w_Pr > w_I). If the weights of all parameters are selected equal (i.e., w_C = w_I = w_Pr, as in the scenarios of the case study), the optimal starting times of MC-controllable appliances for equal precedence of the three parameters are obtained.
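My reading of steps 4 and 5 is sketched below: the cost and indicator values of every candidate combination are min-max scaled into the range of the priority levels, and the candidate closest to the ideal point in a weighted Euclidean sense is returned. This is an illustration only, with equal default weights as in the case-study scenarios; it is not the authors' implementation, and the example numbers are placeholders.

```python
import math

def min_max_scale(values, new_min, new_max):
    lo, hi = min(values), max(values)
    if hi == lo:
        return [new_min for _ in values]
    return [new_min + (v - lo) * (new_max - new_min) / (hi - lo) for v in values]

def select_schedule(costs, indicators, priorities, w_c=1.0, w_i=1.0, w_pr=1.0):
    """Return the index of the candidate starting-time combination closest to the
    ideal point (minimum scaled cost, minimum scaled indicator, minimum priority)."""
    lo, hi = min(priorities), max(priorities)
    costs_s = min_max_scale(costs, lo, hi)
    inds_s = min_max_scale(indicators, lo, hi)
    ideal = (min(costs_s), min(inds_s), min(priorities))
    best, best_d = None, math.inf
    for k, point in enumerate(zip(costs_s, inds_s, priorities)):
        d = math.sqrt(w_c * (point[0] - ideal[0]) ** 2
                      + w_i * (point[1] - ideal[1]) ** 2
                      + w_pr * (point[2] - ideal[2]) ** 2)
        if d < best_d:
            best, best_d = k, d
    return best

print(select_schedule(costs=[9.45, 13.44, 12.00],
                      indicators=[0, 1, 1],
                      priorities=[7.28, 1.73, 4.00]))
```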
Since the practical solution set TS
Case Studies and Discussion
In this section, in order to demonstrate AS-REMS's performance on avoiding peak demand and reducing electricity cost while keeping user comfort, several scenarios are designed and simulations of these scenarios are carried out.
The scenarios are for a residence monitored for 12 weeks. The residence is a 120 m² flat with four occupants, equipped with a kettle (1000 W), hair dryer (1600 W), toaster (700 W), rice cookers (400 W), microwave ovens (800 W), vacuum cleaner (700 W), water heater (1000 W), iron (1700 W), coffee machine (500 W), TV (116 cm), lamps (25 W), refrigerator (no-frost, 540 lt) and air conditioner (6.74 kW cooling and 7.03 kW heating capacity) as user-controllable appliances, and a washing machine (wm) (7 kg, front-load), dishwasher (dw) (60 cm, free-standing) and battery-powered appliances (bp) (for example, e-scooter, e-bike, etc.) as MC-controllable appliances. The power consumptions of all appliances are measured via an Itech IT9121 power meter and Fibaro smart wall plugs, and the measured real power consumption profiles of the appliances are used in the experiments. Some appliances (e.g., microwave ovens, dw, wm) can draw very high power in a very short time (<3 min). In order to catch these short-term high power variations, the time slot duration is taken as 2 min, i.e., Δ_t = 2 min and T = 720. Besides, the same days of the week are defined as similar days; that is, for each appliance a ∈ L, seven different utilization matrices U_D^a ∈ {0, 1}^(12×720) are constructed. For the grid power, the time of use (TOU) pricing tariff set by the Turkish Electricity Distribution Company (TEDAS) is used [25]. This is a three-level TOU tariff with on-peak, mid-peak, and off-peak periods. As is clear in Table 3, electricity prices are lower when the demand is low (off-peak) and higher when the demand is high (on-peak) to encourage the user. Besides, the grid power limit is chosen as P_lim = 4500 W according to the agreement between the home residents and TEDAS. Moreover, for the considered day d, for each possible combination of preferred starting times of these appliances, that is, for each possible triplet (t_s^wm, t_s^dw, t_s^bp), the corresponding total electricity cost, total priority level and power limit indicator values are computed and shown in Figure 6. For some numerical samples, see Table 7.
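For orientation, the case-study configuration can be written down as in the sketch below. The slot length, slot count and power limit follow the values quoted above, whereas the period boundaries and the prices of the three-level TOU tariff are placeholders of my own, since the actual TEDAS rates of Table 3 are not reproduced in this text.

```python
SLOT_MINUTES = 2
T = (24 * 60) // SLOT_MINUTES          # 720 slots per day
P_LIM_W = 4500                          # agreed grid power limit (W)

# Three-level TOU structure with placeholder hours and prices per kWh.
TOU_PERIODS = [
    (22, 6, 0.05),    # off-peak (wraps past midnight)
    (6, 17, 0.09),    # mid-peak
    (17, 22, 0.14),   # on-peak
]

def tariff(slot):
    """Price per kWh for the given slot index under the placeholder TOU table."""
    hour = (slot * SLOT_MINUTES) // 60
    for start, end, price in TOU_PERIODS:
        in_period = start <= hour < end if start < end else (hour >= start or hour < end)
        if in_period:
            return price
    raise ValueError("hour not covered by the tariff table")

print(tariff(0), tariff(300), tariff(540))   # off-peak, mid-peak, on-peak examples
```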
On the considered day, among the possible triplets of starting times, the triplets with the cheapest cost (9.45 cent), together with their priority level values and power limit indicator values, are given in Table 8 and indicated by blue diamonds on the chart in Figure 6. For some of these triplets, the power limit is exceeded. For one of these triplets, the corresponding daily total power consumption graph is given in Figure 7.
On the other hand, through the possible triplets of starting times, the triplet with the minimum total priority level value (i.e., 1.73) is (05:12, 16:40, 06:50) (indicated by a red square on the chart in Figure 6). The induced cost of this triplet is 13.44 cent, while the power limit is exceeded. The daily power consumption graph of this triplet is given in Figure 8.
In order to obtain the optimal starting times of wm, dw and bp from the viewpoint of cost, peak demand and user comfort, for each possible triplet of starting times, AS-REMS scales the corresponding total electricity cost values and power limit indicator values into the range of the priority level values via min-max normalization, yielding the scaled total electricity cost, i.e., C_dscaled^(L_MC)(t_s^wm, t_s^dw, t_s^bp), and scaled power limit indicator values, i.e., I_dscaled(t_s^wm, t_s^dw, t_s^bp), respectively (see Table 7 for examples), and schedules the starting time of wm at 05:12, i.e., t_s^wm = 05:12, that of dw at 22:32, i.e., t_s^dw = 22:32, and that of bp at 21:52, i.e., t_s^bp = 21:52, according to the BFCP Algorithm. For the corresponding triplet of starting times, i.e., (05:12, 22:32, 21:52) (marked as a black circle on the chart in Figure 6), the cost is 9.45 cent and the total priority level value is 7.28, while the power limit is not exceeded, i.e., I_d(t_s^wm, t_s^dw, t_s^bp) = 0. The daily power consumption graph is given in Figure 9. As is clear from Figure 6, this is the closest triplet to the theoretically ideal point.
Table 8. For the triplets of starting times with the minimum cost: priority level, electricity cost, power limit indicator, scaled priority level, scaled power limit indicator values and the corresponding values of the objective function of BFCP Algorithm 2 for Scenario 1.
In Scenario 2, for each possible combination of starting times of these appliances, that is, for all possible triplets of (t_s^wm, t_s^dw, t_s^bp), the corresponding parameter values are computed. On the considered day, among the possible triplets of starting times, the triplets with the cheapest cost (16.40 cent), together with their priority level values and power limit indicator values, are given in Table 12 and indicated by blue diamonds on the chart in Figure 10. For some of these triplets, the power limit is exceeded. For one of these triplets, the corresponding daily total power consumption graph is given in Figure 11. On the other hand, among the possible triplets of starting times, the one with the minimum total priority level value (i.e., 1.73) is (05:28, 16:30, 06:18) (indicated by a red square on the chart in Figure 10). However, the induced cost of this triplet is 23.01 cent, while the maximum power consumption reaches 4427.85 W.
For this scenario, AS-REMS schedules the starting time of wm at 05:28, that of dw at 22:30 and that of bp at 06:18. For the corresponding triplet of starting times, i.e., (05:28, 22:30, 06:18) (marked as a black circle on the chart in Figure 10), the cost is 16.48 cent and the total priority level value is 3.32, while the power limit is not exceeded (the maximum power consumption is 4427.85 W). As is clear from Figure 10, this is the closest triplet to the theoretically ideal point. The total power consumption graph of the optimal solution is given in Figure 11.
Let us consider scheduling dw, wm and bp on a Sunday in September as Scenario 3. Unlike the previous scenarios, in this case the user sets specific starting time intervals for wm, dw and bp, such that T_s interval^wm = [08: . The priority level of each of these appliances is set to 0, and the optimal triplet of starting times (t_s^wm, t_s^dw, t_s^bp) must be determined within these intervals, i.e., t_s^wm ∈ T_s interval^wm, t_s^dw ∈ T_s interval^dw, t_s^bp ∈ T_s interval^bp, by considering the cost and power limit parameters. On the considered day, AS-REMS obtains the optimal triplet of starting times as (09:26, 21:58, 23:00) with a cost of 14.15 cent, while the power limit is not exceeded. The daily power consumption graph is given in Figure 12. Note that, without AS-REMS, wm and dw start to operate at the beginning of their specified starting time intervals, such that t_s^wm = 08:44, t_s^dw = 19:38 and t_s^bp = 20:30. In that case, the cost is 23.45 cent while the power limit is exceeded (see Figure 12). In order to demonstrate AS-REMS's performance, numerous scenarios (≥500) are designed and the corresponding simulations are carried out. According to the results of these simulations, AS-REMS completely avoids all peak demands exceeding the specified grid power limit, reducing the peak demand levels by approximately 45%. Consequently, smooth and continuous energy from the grid is ensured for the user, while the possible maintenance cost of the energy provider is reduced. Furthermore, in the simulations, the first preferences of the users are realized 88% of the time, while the electricity costs could be reduced by 39.6%. Thus, users' appliance usage preferences are sustained at a lower cost while their comfort is kept impressively.
In the simulations of the scenarios, a sensitivity analysis of computational times was also carried out on a PC with a 2.8 GHz CPU (i7 core) and 16 GB RAM, and the results are given in Table 13. The comparison of AS-REMS with recent studies in the literature, in terms of the considered parameters and simulation results, is given in Table 14, which demonstrates the reasonability and effectiveness of the proposed AS-REMS. Unlike most studies in the literature, in AS-REMS, avoiding peak demand, reducing electricity cost and keeping user comfort are considered simultaneously. Despite this complexity, the simulation results of the case studies are very satisfactory and also much better than those of recent works in the literature. This is not surprising, because AS-REMS is based on the users' appliance usage preferences and habits, providing realistic and high-level user comfort; and real power consumption profiles of appliances are used instead of their average powers, assuring the detection of even short-term peak demands. In this way, smooth and continuous energy from the grid is also procured.

Table 14. Comparison of AS-REMS with the recent REM studies in the literature.

Method | Cost Minimization | Peak Reduction | User Comfort
Incentive-based energy optimization method [20] | 6.2% | 21% | no value
Intensive quadratic programming approach [14] | 10% | 44% | no value
Level billing approach [12] | 13-25% | not considered | only financial satisfaction
Appliance-based Rolling Wave Planning algorithm [24] | 13-24% | 38-53% | no value
Binary particle swarm optimization [10] | 32.8% | - | 66%
AS-REMS | 39.6% | 45% | 88%

On the other hand, the hardware configuration of AS-REMS was also constructed to verify the results of the simulations, and the scenarios were realized on this configuration. In this configuration, a PC with a 2.8 GHz CPU (i7 core) and 16 GB RAM stands for the MC of AS-REMS. Power consumptions of all appliances are measured via Fibaro smart wall plugs connected to the appliances and verified by the Itech IT9121 power meter. The PC collects power consumption information of all appliances from the smart plugs via a USB Z-Wave stick controller. MC-controllable appliances are equipped with a Wifi-RS232 converter (or Wifi-relay module) for starting the operation of the appliances. The AS-REMS algorithm is implemented via developed C++ software, which is also used for the simulations of the scenarios. Simulation and real application results of the scenarios are found to be compatible with each other.
Conclusions
Due to the increase in the population and in the use of technological devices in cities, electricity demand is increasing day by day, leading to heavy depletion of natural resources and pollution of the environment. Besides, peak demand may occur at certain times of the day, leading to expensive failures in the grid. This circumstance may also pose serious problems that affect the public's social life, such as disruption of health, education and transportation activities in cities. Consequently, for both residential users and energy providers, avoiding peak demand should also be considered in REM studies.
In this work, a new REM system, namely AS-REMS, is proposed. AS-REMS avoids peak demand and keeps user comfort while simultaneously reducing electricity costs, responding to the expectations of both residential users and energy providers. In AS-REMS, based on the effects of the starting times of appliances on user comfort and on user attendance during their operations, appliances are divided into two classes: MC-controllable appliances, whose starting times can be set by MC, and user-controllable appliances, whose starting times are strictly set by the user even if they are delayable. The use of all appliances is monitored in the considered home for a while in order to record the users' appliance usage preferences and habits for each day of the week and for each appliance. Then, for each appliance, preferred starting times are determined and prioritized according to the recorded user preferences on similar days. When scheduling, the assigned priorities of the starting times of MC-controllable appliances are considered for maintaining user comfort, while the tariff rate is considered for reducing the electricity cost. Moreover, the expected power consumptions of user-controllable appliances according to the users' usage habits and the power consumptions of MC-controllable appliances according to the assigned starting times are considered for avoiding peak demand. The practical solution set of the corresponding scheduling problem consists of the possible preferred starting time combinations of MC-controllable appliances. Since the numbers of MC-controllable appliances and preferred starting times are limited, the size of the practical solution set of the problem is limited. The BFCP method, whose computational complexity is proportional to the number of candidate solutions and which is therefore very suitable for this problem, is used to solve it. Besides, the BFCP method is simple to implement, and one can easily add different starting time combinations to the practical solution set, as well as remove some from this set.
One future work direction of this work would be to investigate the effects of AS-REMS by integrating it into homes in a neighborhood system. Besides, monitoring the power consumption of household appliances and identifying users' appliance usage preferences will contribute to future works in research areas, such as improving user comfort and home safety in smart cities.
Funding: This research received no external funding.
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement:
The data that support the findings of this study are available from the corresponding author, Hanife Apaydin-Özkan, upon reasonable request.
Conflicts of Interest:
The authors declare no conflict of interest.
Abbreviations
The following abbreviations are used in this manuscript:
ndUSC : non-delayable user controllable
U_D^a : utilization matrix of an appliance a ∈ L for the set of the similar days D
P̃_a : power profile vector of an appliance a ∈ L (W)
P_d^a(t) : power consumption of an appliance a ∈ L at a ts t of a day d ∈ D (W)
T_m^a : measurement duration of an appliance a ∈ L
t_s^a : starting time of an appliance a ∈ L
L_UC : set of user-controllable appliances
L_MC : set of MC-controllable appliances
P_UC^a(t) : power consumption of a user-controllable appliance a ∈ L_UC at a ts t (W)
T_e^a : one execution period of an appliance a ∈ L_UC
P_MC^a(t) : power consumption of an MC-controllable appliance a ∈ L_MC at a ts t (W)
T_o^a : operation period of an appliance a ∈ L_MC
TS_D^a : set of starting times of an appliance a on the similar days D
NC_D^a(t_s^a) : number of times t_s^a is chosen as the starting time of an appliance a ∈ L_MC
Pr_D^a(t_s^a) : probability of t_s^a being chosen as the starting time of an appliance a ∈ L_MC
PrL_d^a(t_s^a) : priority level of t_s^a as the starting time of an appliance a ∈ L_MC
T_s interval^(a_j) : specific starting time interval for an MC-controllable appliance a_j ∈ L_MC
ideal total electricity cost
I_d : ideal power limit indicator
w_C : weight of the total electricity cost
w_I : weight of the power limit indicator
w_Pr : weight of the total priority level
d_i : output of the distance function for the ith possible combination of starting times of MC-controllable appliances | 2021-05-18T05:17:15.823Z | 2021-05-01T00:00:00.000 | {
"year": 2021,
"sha1": "411c5312dc7a945825c7db685af7c4f45744b871",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1424-8220/21/9/3287/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "411c5312dc7a945825c7db685af7c4f45744b871",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
} |
38611313 | pes2o/s2orc | v3-fos-license | Coagulation syndrome: Delayed perforation after colorectal endoscopic treatments
Various procedure-related adverse events related to colonoscopic treatment have been reported. Previous studies on the complications of colonoscopic treatment have focused primarily on perforation or bleeding. Coagulation syndrome (CS), which is synonymous with transmural burn syndrome following endoscopic treatment, is another typical adverse event. CS is the result of electrocoagulation injury to the bowel wall that induces a transmural burn and localized peritonitis resulting in serosal inflammation. CS occurs after polypectomy, endoscopic mucosal resection (EMR), and even endoscopic submucosal dissection (ESD). The occurrence of CS after polypectomy or EMR varies according previous reports; most report an occurrence rate around 1%. However, artificial ulcers after ESD are largely theoretical, and CS following ESD was reported in about 9% of cases, which is higher than that for CS after polypectomy or EMR. Most cases of post-polypectomy syndrome (PPS) have an excellent prognosis, and they are managed conservatively with medical therapy. PPS rarely develops into delayed perforation. Delayed perforation is a severe adverse event that often requires emergency surgery. Since few studies have reported on CS and delayed perforation associated with CS, we focused on CS after colonoscopic treatments in this review. Clinicians should consider delayed perforation in CS patients. colonoscopic treatments. CS is found in around 1% of cases after polypectomy and endoscopic mucosal resection and in 7%-8% of cases after endoscopic submucosal dissection. The prognosis for CS is excellent. However, clinicians should be mindful of delayed perforation in CS patients.
Recently, ESD is another procedure used to remove large colorectal lesions according to the EMR curative criteria. This procedure is frequently used for removing large lesions by en bloc fashion, which includes lesions that would require piecemeal EMR for removal [11][12][13][14] .
Most previous studies have investigated CS after polypectomy and EMR. Thus, in this review, we defined PPS as CS associated with only polypectomy and EMR, while CS included ESD.
DEFINITION OF CS
CS is the result of an electrocoagulation injury to the bowel wall that induces a transmural burn and localized peritonitis resulting in serosal inflammation [17-33]. Patients with CS are diagnosed when they present with abdominal pain (sometimes tenderness with rebound); fever; leukocytosis; an elevated C-reactive protein level; or peritoneal irritation symptoms and signs that occur after colonoscopic treatment (polypectomy, EMR, and ESD) with electrocoagulation, in the absence of visualized perforation by abdominal radiography and/or computed tomography (CT) [26,34]. It is important to recognize that CS can be misleading, as it can resemble a true rupture of the colon and present with pain, a low fever, and mild leukocytosis. Typically, patients with CS present within a few hours to 7 d after colonoscopic treatment with fever, localized abdominal pain, and localized peritoneal signs [19,20,26,30]. It is important to recognize this condition, because it does not require surgical treatment in most cases [19,21,23,26,30,31]. The severity of PPS ranges widely, from cases that can be followed on an outpatient basis after discharge to those requiring admission to the intensive care unit, and it can lead to shock, additional surgery, or death.
Risk factors
Some previous reports have investigated the risk factors of PPS. Nivatvongs [18] showed that 83% of PPS patients had polyps in the right side of the colon, and all were sessile polyps. Choo et al [24] also showed that right-colon polypectomies had a statistically significantly higher tendency for developing PPS. Lee et al [20] reported that a polyp size > 2 cm (OR = 1.08) and hypertension (OR = 14.40) were associated with a significantly increased risk of PPS. The most recent report showed that hypertension, a large lesion size, and nonpolypoid configuration of the lesion were independently associated with PPS according to multivariate analysis [19]. PPS develops when the electrical current applied during colonoscopic polypectomy extends past the mucosa into the muscularis propria and serosa, resulting in a transmural burn without perforation [17-33]. Therefore, larger lesions and non-polypoid configuration are logical risk factors, as they usually require a large amount of thermal energy for a longer duration. However, the mechanism by which hypertension promotes PPS is unclear. Patients with hypertension are more likely to have endothelial dysfunction [43] and atherosclerosis [44,45], which may be contributing factors.
However, with thinness of the wall, there is also concern regarding the frequency of PPS. The right colon wall is thin, and a large study that addressed major post-polypectomy complications reported barotraumatic perforations, and all of them were caused by cecal blow-out [34,[46][47][48] . Regarding colonic perforation, it has been suggested that air insufflation during colonoscopy generates a higher pressure in the cecum than in the rest of the colon, increasing vulnerability to injury. In addition, Rutter et al [49] hypothesized that a more perpendicular approach to polypectomy in the cecum may increase the risk of complications. However, scientific evidence in support of these theories is lacking. Loffeld et al [47] also reported that barotrauma caused by insufflated air occurs more often than therapeutic perforation due to polypectomy or coagulation.
Prevention of PPS
Theoretically, submucosal saline injections of large, nonpolypoid lesions prior to EMR may reduce the risk of PPS. The rationale for this is that a submucosal saline injection may increase the thickness of the submucosal layer and consequently reduce the risk of PPS [32] .
However, no studies have supported this assumption. Sethi et al [17] hypothesized that submucosal injection itself leads to serosal irritation and localized peritonitis, and then patients present with PPS symptoms. Therefore, the protective role of the saline "cushion" for PPS should be considered in future studies. The improvement of devices would likely reduce PPS. Galloro et al [50] reported that steel snares induced significantly deeper tissue injury than tungsten snares in the pure cut mode; therefore, tungsten snares may reduce the risk of PPS [32] . Another way to reduce the risk of PPS is dependent on skill. Using lower risk procedures when clinically appropriate or referring patients to high-volume endoscopists can reduce the complication rates [51] . PPS is considered different from infection from a local mucosal defect. Min et al [52] reported that blood cultures at baseline and 5 min after the procedure were all negative, and a blood culture at 30 min after the procedure showed a positive result in only 1 of 40 patients (2.5%). However, this one positive sample was considered contamination. None of the 40 patients showed any signs or symptoms associated with infection. Therefore, the prior administration of antibiotics is considered controversial for preventing PPS.
Treatment and prognosis
Most cases of PPS have an excellent prognosis, and they are managed conservatively with medical therapy. In some reports, all patients were admitted to the hospital, while in other reports, some cases underwent outpatient observation [19,21,23,26,30,31] . Treatment of PPS requires bowel rest and the administration of intravenous fluids and broad-spectrum parenteral antibiotics to cover the colonic bacterial flora. Nothing is taken by mouth until the symptoms subside. Patients with mild symptoms and adequate outpatient follow-up can be managed with oral antibiotics and a clear liquid diet for 1-2 d.
In contrast, diffuse peritoneal signs are an indication for immediate surgical intervention. Within the spectrum of post-polypectomy cautery injury, "miniperforation" falls between a "serosal burn" and frank perforation (with diffuse peritonitis). It is a minimal defect that can be quickly covered by peri-intestinal fat and omentum [16]. Its clinical features include pneumoperitoneum without signs and symptoms of diffuse or spreading peritonitis, and with local tenderness that is characteristic of a full-thickness burn. The patient usually improves within 24 h, and the symptoms should resolve within 96 h with conservative treatment. The dilemma as to whether the conservative or surgical approach is more appropriate for managing this kind of perforation still exists [19-22,26,30,31].
Although conservative treatment can generally be performed in most patients, it is important to adopt careful measures such as prolonging the fasting period and considering the possibility of delayed perforation [26,[35][36][37][38][39] .
DELAYED PERFORATION
Immediate perforation is diagnosed by endoscopy during resection and by the presence of free air on plain abdominal film or abdominal CT scan [15-17,35,51,53]. This is very rare; however, delayed perforation, which is considered to be caused by an electrical or thermal injury after electrocoagulation, was reported in these cases. Delayed perforation after colonoscopic resection can begin as PPS, which can evolve into a perforation, or as a free perforation with air and fluid leakage, resulting in pneumoperitoneum and peritonitis [35-39]. Japan Gastroenterological Endoscopy Society guidelines for colorectal ESD/EMR defined delayed perforation as an intestinal perforation that develops over a certain period postoperatively (i.e., intestinal perforation that is detected after the scope has been withdrawn following completion of ESD/EMR during which perforation did not occur). This is diagnosed based on abdominal pain, abdominal findings, the presence of a fever, and an inflammatory response that is consistent with PPS. Most cases of delayed perforation occur within 14 h after endoscopic resection. However, approximately one-third of delayed perforation cases are confirmed within 24 h after treatment. Free air, which cannot be detected by simple radiographic imaging, is sometimes found on abdominal CT. Therefore, in cases where delayed perforation is suspected, abdominal CT should be performed. Surgeons must be called for emergency surgery, because it is essential in cases of delayed perforation [26]. Theoretically, the influence of the procedure time and of the exposure of the ulcer bed to energization, which largely characterize ESD procedures, is evident. Delayed perforation in ESD is also a great concern. The indications for ESD are markedly different from those for conventional EMR, and the overall perforation rate is higher than that of conventional EMR [55]. Delayed perforation after ESD reportedly ranges from about 0.1% to 0.4%; however, this may be because of the small number of reports [26,40-42,55]. Saito et al [41] reported that delayed perforations occurred in 4 patients (0.4%) after ESD. Two of the 4 patients with delayed perforations were successfully treated conservatively, because the abdominal findings and inflammatory changes based on laboratory data were slight. However, the other patients with delayed perforation required emergency surgery because of the risk of peritonitis. Saito et al [41] also reported that 0.11% (1/900) showed delayed perforation that required emergency surgery. Previous studies have cautioned that clinicians must carefully follow patients with delayed perforation, and continuous close communication with consulting surgeons is essential, since the number of such cases has been quite limited to date. Few studies have reported on delayed perforation after ESD. While previous reports have shown the success of endoscopic clip closure with an over-tube [42], the treatment and prognosis often require emergent surgery. A 44-year-old woman underwent colonoscopy for surveillance of ulcerative colitis, and a 30 mm cecal sessile polyp was revealed (Figure 1). We diagnosed this tumor as a sessile serrated adenoma/polyp using the pit and narrow-band imaging patterns. Because of the size of the tumor and the tumor morphology, we chose ESD in order to perform en bloc resection. ESD was performed safely without any perioperative complications (Figures 2 and 3), and she reported no symptoms.
However, 24 h after ESD, she had a high fever (38.6 ℃) with slight abdominal pain and leukocytosis. Subsequently, she was diagnosed with CS after ESD. She fasted and received antibiotics.
In 1994, Lo et al [54] reported that 43.8% of therapeutic perforations were managed conservatively, with a mortality rate of 4.1%. This means that perforation is still a severe condition that reduces patients' quality of life [25,35-39]. Thus, prevention of PPS and its potential sequelae is most important, and clinicians must always consider the potential for delayed perforation due to PPS.
Case presentation
Only two studies have reported on the incidence of delayed perforation. Taku et al [39] reported delayed perforation in 7 of 15070 cases, while Waye et al [25] reported it in 1 of 777 cases. This is still not sufficient evidence. For ESD, the incidence of delayed perforation ranges from 0.1% to 0.4% [26,40-42].
CS after ESD
ESD has for years been a reliable method for en bloc resection of colorectal tumors regardless of the lesion size. Although colorectal ESD has been established as a procedure with reproducible safety and efficacy, complications such as intestinal perforation and delayed bleeding remain problematic. Similarly, few studies have reported CS after ESD [11-14,26]. Hong et al [48] reported that 8.6% of patients showed CS after colorectal ESD. There were no differences in the demographic and endoscopic characteristics (age, sex, underlying disease, procedure time, tumor size, macroscopic type, location, and pathologic findings) between patients with CS and those without CS. The mean hospitalization stay was statistically significantly longer in the CS group than in the non-CS group. All patients with CS were treated with conservative (non-surgical) management (e.g., fasting and intravenous antibiotics). CS showed a favorable progression even after ESD, and delayed perforation was not reported.
Delayed perforation after ESD
CS is reported even after ESD, and its frequency is clearly higher than after polypectomy or EMR [26,34,48]. In the patient described above, the antibiotic administered was cefmetazole. CT was obtained immediately, but no findings were suggestive of perforation (i.e., free air and ascites were not present) (Figure 4). Thirty hours after ESD, severe abdominal pain developed, and 36 h after ESD, free air appeared on radiography and CT (Figures 5 and 6). At this point, we diagnosed the patient with delayed perforation that developed after CS. Emergent laparoscopic surgery was performed, and a perforation site was found in the ESD ulcer at the bottom of the cecum (Figure 7). Partial cecum resection was performed, and the patient's condition improved rapidly.
CONCLUSION
CS is found in around 1% of cases after polypectomy and EMR and in 7%-8% of cases after ESD. Although the prognosis is excellent, clinicians should consider delayed perforation in CS patients. | 2018-04-03T04:19:45.093Z | 2015-09-10T00:00:00.000 | {
"year": 2015,
"sha1": "fd4f322ba7cdf940160243213885c42354db2f25",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.4253/wjge.v7.i12.1055",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "305feba4bff99c2f85d47311a3b26809c483d312",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
218649321 | pes2o/s2orc | v3-fos-license | Looking for solutions to lung dysfunction in type 2 diabetes
Endocrinology and Nutrition Department, Hospital Universitari Vall d’Hebron, Diabetes and Metabolism Research Unit, Vall d’Hebron Institut de Recerca (VHIR), Universitat Autònoma de Barcelona, Barcelona, Spain; Centro de Investigación Biomédica en Red de Diabetes y Enfermedades Metabólicas Asociadas (CIBERDEM), Instituto de Salud Carlos III (ISCIII), Madrid, Spain; Endocrinology and Nutrition Department, Hospital Universitari Arnau de Vilanova, Obesity, Diabetes and Metabolism Research Group (ODIM), Institut de Recerca Biomèdica de Lleida (IRBLleida), Universitat de Lleida, Lleida, Spain Correspondence to: Rafael Simó, MD, PhD. Endocrinology and Nutrition Department, Hospital Universitari Vall d’Hebron, Diabetes and Metabolism Research Unit, Vall d’Hebron Institut de Recerca, Pg. Vall d’Hebron 119-129. 08024-Barcelona, Spain. Email: rafael.simo@vhir.org. Provenance and Peer Review: This article was commissioned and reviewed by the Academic Editor Dr. Jiewen Jin (Department of Endocrinology, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, China). Comment on: Kim JM, Kim MK, Joung KH, et al. Association between glycemic state and pulmonary function and effect of walking as a protective factor in subjects with diabetes mellitus. Ann Transl Med 2019;7:530.
Diabetes-induced pulmonary dysfunction is an emerging topic among the "new" complications of diabetes, particularly in type 2 diabetes (1). The lungs are highly vascularized and rich in proteins with an elevated turnover, such as elastin and collagen, pointing to this organ as a potential target of the harmful effects of chronic hyperglycaemia (1). In fact, an inverse association between metabolic control and spirometric values has been observed (2). In addition, positive changes in spirometric maneuvers after 3 months of improved glycemic control have also been reported (3). The mechanisms explaining the cluster of pulmonary dysfunctions associated with type 2 diabetes are not yet fully understood. However, it has been argued that the most plausible explanations relate to insulin resistance, nonenzymatic glycosylation of lung proteins, a low-grade chronic inflammatory state, microvascular damage, autonomic neuropathy and defects in the bronchiolar surfactant layer (1) (Figure 1). The manuscript of Kim et al. sheds light on this issue, exploring the connection between prediabetes, type 2 diabetes and pulmonary function in the Korea National Health and Nutrition Examination Survey (KNHANES) (4). Their data confirm not only that patients with type 2 diabetes exhibit lower values of forced expiratory volume in one second (FEV1) and forced vital capacity (FVC) than control subjects, but also that impaired pulmonary function already exists in the prediabetes stage. Overall, the data from Kim et al. confirm previous findings in Caucasian populations and reinforce the notion that lung dysfunction is a progressive defect across glucose abnormalities, beginning in the prediabetes stage and expanding when type 2 diabetes appears (5). Notably, after adjustment for potential confounding factors (age, sex, body mass index, waist circumference and smoking status), an increase of 1% in the HbA1c level was associated with a −1.20% difference in FVC and a −0.77% difference in FEV1, with similar results when a 10 mg/dL increase in fasting plasma glucose was considered. As the prevalence of prediabetes and undiagnosed diabetes are both increasing, a closer relationship between endocrinologists, pneumologists and primary care physicians is needed, not only to better understand the deleterious effects of type 2 diabetes on the lung parenchyma but also to successfully target pulmonary dysfunction (6).
The potential beneficial role of physical activity on respiratory muscle strength and function has been reported in patients with lung diseases (7). Our group has also assessed the benefit of heart-healthy lifestyle behaviors on lung mechanics in a cross-sectional study conducted in 3,020 Spanish middle-aged subjects free of lung disease (8). In this population, low physical activity was significantly and independently associated with the presence of pulmonary impairment, assessed by FEV1 <80%, only in men (8). Kim et al. go one step further and explore the therapeutic implications of walking exercise in preventing decreased pulmonary function in subjects with diabetes (4). Their results show that walking more than 300 minutes per week has a significant effect in avoiding FVC and FEV1 decline. However, this beneficial effect disappeared after correction for smoking status, pointing to smoking cessation as a fundamental pillar among the strategies aimed at improving lung function in patients with diabetes.
Other possibilities that can be added to physical activity to treat or prevent lung dysfunction in patients with type 2 diabetes should be considered. The first option to consider would be the improvement of glycemic control. In this regard, Gutiérrez-Carrasquilla et al. have recently reported results from a prospective, interventional study designed to determine whether improving metabolic control over a three-month period produces significant changes in respiratory function in patients with type 2 diabetes without known pulmonary disease (3). In the Sweet Breath Study, a favorable change in the spirometric parameters was observed only in the subgroup of participants who achieved a reduction in HbA1c greater than 0.5%, and this result was not related to weight reduction (3). More interestingly, the spirometric parameters that appeared most sensitive to this rapid improvement in metabolic control were peak expiratory flow (PEF) and FEV1, the former related to neuromuscular integrity. This relation between muscle strength and pulmonary function also reinforces the relevance of prescribing physical activity to patients with type 2 diabetes who are more vulnerable to developing lung involvement. It is also another reason to insist to patients with diabetes on the need to achieve good glycemic control and thus prevent the development of late complications.
The glucagon-like peptide 1 (GLP-1) receptor is expressed by alveolar type 2 cells, and its activation has been shown to stimulate the production of pulmonary surfactant in experimental studies (9). In fact, serum surfactant protein D (SP-D) has been proposed as a serum biomarker useful for identifying patients with type 2 diabetes with defects in their bronchiolar surfactant layer (10). In this regard, SP-D serum concentrations were inversely correlated with FEV1, and stepwise multivariate regression analysis showed that a serum SP-D value equal to or higher than 32.3 ng/mL was independently associated with a FEV1 <80% of predicted (10). Therefore, the underlying deficit of GLP-1 in type 2 diabetes could also be involved in the impairment of airway caliber. This hypothesis is now being tested in the LIRALUNG study (ClinicalTrials.gov Identifier: NCT02889510), a randomized, double-blind, crossover, placebo-controlled clinical trial evaluating the effect of liraglutide, a GLP-1 analogue, on lung function in patients with type 2 diabetes. On the other hand, whether preventing the inactivation of endogenous GLP-1 through pharmacological inhibition of dipeptidyl peptidase-4 has any effect on pulmonary function is also worthy of additional attention (1). On a similar level remains the possibility of enhancing pulmonary function by raising insulin sensitivity in patients with type 2 diabetes. Although there are no prospective studies addressing this option, a small retrospective study, also in Korean patients, showed that treatment with insulin sensitizers was independently associated with improvements in FVC compared with insulin therapy (11). Similarly, after adjustment for glycemic control and the known duration of type 2 diabetes, Colombian patients under treatment with metformin showed significantly smaller differences from the expected FVC values in comparison with patients receiving secretagogue therapy (12).
To sum up, we hope that health professionals who take care of patients with type 2 diabetes begin to consider them a vulnerable group for pulmonary dysfunction. Although the clinical relevance of such changes has to date been limited, we cannot forget that a 10% decrease in FEV1 is an independent predictor of all-cause mortality in type 2 diabetes (13). In this context, and until we find therapeutic targets capable of reversing this situation, investment in a healthy lifestyle that includes high physical activity, smoking cessation and selected antidiabetic therapies to improve metabolic control should be recommended to patients with diabetes.
Acknowledgments
Funding: This research was supported by grants from the Instituto de Salud Carlos III (Fondo de Investigación sanitaria, PI 15/00260 and PI 18/00964) and the European Union (European Regional Development Fund). Ethical Statement: The authors are accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.
Conflicts of Interest
Open Access Statement: This is an Open Access article distributed in accordance with the Creative Commons Attribution-NonCommercial-NoDerivs 4.0 International License (CC BY-NC-ND 4.0), which permits the noncommercial replication and distribution of the article with the strict proviso that no changes or edits are made and the original work is properly cited (including links to both the formal publication through the relevant DOI and the license). See: https://creativecommons.org/licenses/by-nc-nd/4.0/. | 2020-04-30T09:11:31.489Z | 2020-04-01T00:00:00.000 | {
"year": 2020,
"sha1": "e9ba655f45d4b5e0450f7466ca2a3edd8fc4f683",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.21037/atm.2020.03.225",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "24bd5237ee24ad7fa395b1dbd9fd0e960a303798",
"s2fieldsofstudy": [
"Environmental Science",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
233654910 | pes2o/s2orc | v3-fos-license | Disinfectant Efficacy Against Dry Surface Biofilms of Staphylococcus Aureus and Pseudomonas Aeruginosa Is Product, Time Point and Strain Dependent
Background: Globally, healthcare associated infections (HAI) are the most frequent adverse outcome in healthcare delivery. Although bacterial biofilms contribute significantly to the incidence of HAI, few studies have investigated the efficacy of common disinfectants against dry surface biofilms (DSB). The objective of this study was to evaluate the bactericidal efficacy of seven disinfectants against DSB of Staphylococcus aureus and Pseudomonas aeruginosa. We hypothesized that overall, hydrogen peroxides, sodium dichloro-s-triazinetrione and quaternary ammonium compounds plus alcohol disinfectants would be more bactericidal against DSB than quaternary ammonium. We also hypothesized that regardless of differences in product chemistries, higher bactericidal efficacies against DSB would be exhibited after 24 h of dehydration compared to 72 h. Methods: Wet surface biofilms of S. aureus and P. aeruginosa were grown following EPA-MLB-SOP-MB-19 and dehydrated for 24 h and 72 h to establish DSB. Seven EPA-registered disinfectants were tested against dehydrated DSB following EPA-MLB-SOP-MB-20. Results: Overall, quaternary ammonium plus alcohol, sodium dichloro-s-triazinetrione, and hydrogen peroxide products were more efficacious against DSB than quaternary ammoniums for both tested strains. While there was no significant difference in biofilm killing efficacies between 24 h and 72 h S. aureus biofilms, significantly higher log 10 reductions were observed when products were challenged with 24 h P. aeruginosa DSB compared to 72 h P. aeruginosa DSB. Conclusion: Strain type, active ingredient class, and dry time significantly impact disinfectant efficacy against DSB of S. aureus or P. aeruginosa.
Background
Healthcare associated infections (HAI), a result of diverse interactions among modern healthcare practices, hospital environments, and growing antibiotic resistance, among other factors, pose a crucial threat to human well-being [1]. Globally, the acquisition of HAI is the most frequent adverse outcome in healthcare delivery [2]. In the United States, approximately 633,300 patients are affected by 687,200 HAI [3] with more than 72,000 deaths every year [4]. In Europe, about 4.5 million HAI occur yearly in acute care hospitals [5] with approximately 135,000 deaths [6]. In low and middle income countries (LMIC), the density of HAI in adult intensive care units is estimated at 47.9 per 1,000 patient days, which is higher than rates in the US and Europe [7]. Comparing HAI incidence rates in developed countries and LMIC, the incidence rate is seven out of 100 patients in developed economies and ten out of 100 in LMIC [8].
The prevalence of HAI has been associated with biofilm formation by bacteria [9]. Bacterial biofilms are ubiquitous and represent approximately 99% of the world's known bacterial population [10]. The National Institutes of Health (NIH) of the US estimates that about 80% of all chronic infections are due to biofilm formation [9]. Biofilms are comprised of microbial cells adhered to a surface and to each other, forming a microcolony encased in a polysaccharide-dominant matrix [11]. In addition to the cells that inhabit a biofilm, DNA, proteins and biosurfactants are prevalent [12]. Bacterial biofilms are persistent on environmental surfaces due to their ability to adhere to common surfaces and the extracellular polymeric substances (EPS) they produce [13]. The EPS forms a matrix that presents a major barrier to removal from surfaces in healthcare facilities as it is "resistant" to physical stress [14] and shields underlying bacterial cells from direct contact with disinfectants [15]. As a result of the EPS matrix [11], the presence of efflux pumps and persister cells [16], bacterial biofilms are about 1,000 times less susceptible to disinfectants than their planktonic counterparts [17]. Additionally, the ability of disinfectants to penetrate the biofilm matrix is affected by the water-binding characteristic of the EPS matrix [11], and by pH differences among various layers of biofilms [18]. These features may result in the aggregation of organic acids, leading to the deactivation of less potent disinfectants that may then be non-lethal [18].
Bacterial biofilms have the ability to develop and persist for up to 12 months [19] on wet and dry surfaces in the hospital environment despite repeated cleaning [20]. Dry surface biofilms (DSB) are particularly widespread on surfaces in healthcare facilities [20,21]. In a recent 2018 study by Ledwoch et al., DSB were detected in 95% of 61 samples collected from hospitals in Wales [22]. Such surfaces included commodes, clipboards and sanitizing bottles [22]. DSB have also been detected on indwelling catheters [23]. Although multi-species dry surface biofilms have been detected on a range of surfaces in healthcare facilities, major HAI pathogens such as S. aureus [22] and P. aeruginosa are predominant [24,25].
While DSB are widespread on surfaces in healthcare facilities, they are also harder to kill than wet surface biofilms [26]. This is the case as, overall, DSB are characterized by a denser EPS matrix than wet surface biofilms [26,27]. Moreover, with prolonged desiccation and starvation of bacteria in DSB, there is an increase in the overall percentage of protein content and a slightly decreased carbohydrate content compared to wet surface biofilms [26,28]. Being a principal component of biofilms, this increase in the proportion of proteins may further contribute towards the reduced bactericidal efficacy of disinfectants against DSB [29]. Biofilms may survive longer due to metabolic changes that may result from cell-cell signaling and from the presence of a biofilm matrix that facilitates nutrient recycling and transformation from lysed cells [30].
In the current protocol used by the Environmental Protection Agency (EPA) for biofilm claims on disinfectants, wet surface biofilms of S. aureus and P. aeruginosa are the required test pathogens [31]. Under real-world conditions such as in healthcare facilities, disinfectants are relied on to inactivate bacteria on dry surfaces [32], which are usually in the form of DSB [26]. Despite widespread evidence that bacteria in healthcare environments are more likely to be encased in DSB, the standard tests for disinfectant efficacy testing and registration with the EPA are conducted using planktonic bacteria or bacteria in wet biofilms. To the best of our knowledge, no studies have evaluated the bactericidal efficacy of disinfectants against dry surface biofilms of S. aureus and P. aeruginosa established at different dehydration time points consistent with routine cleaning and disinfection schedules recommended by the CDC [33]. In a previous study, our group developed a rapid model for establishing DSB of S. aureus and P. aeruginosa at different time points and at mean log 10 densities sufficient for disinfectant efficacy testing [34]. In this study, we evaluated the bactericidal efficacy of seven liquid disinfectants against DSB of S. aureus and P. aeruginosa after 24 h and 72 h of dehydration. We hypothesized that overall, hydrogen peroxide, sodium dichloro-s-triazinetrione, and quaternary ammonium compounds plus alcohol disinfectants would be more bactericidal against DSB than quaternary ammonium disinfectants, based on our prior work. We also hypothesized that regardless of differences in product chemistries, higher bactericidal efficacies against DSB would be exhibited after 24 h of biofilm dehydration compared to 72 h of dehydration.
Methods
Bacteria strains and disinfectants tested in this study
DSB of S. aureus ATCC-6538 and P. aeruginosa ATCC-15442 were established on borosilicate glass coupons (1.27 ± 0.013 cm; Biosurface Tech, Inc.) following Nkemngong et al., 2020 [34]. These strains were selected as they are the standard strains of choice for disinfectant efficacy testing [31]. They are also the standard EPA strains for registering disinfectants with claims against wet surface bacterial biofilms [35]. Wet surface biofilms were established following EPA-MLB-SOP-MB-19 through batch and continuous stir tank reactor (CSTR) phases [35]. The batch medium was 3.0 g/L TSB for S. aureus and 300 mg/L TSB for P. aeruginosa. A 500 ml batch medium held in a CDC biofilm reactor (Biosurfaces Technologies, Inc., Bozeman, MT) was inoculated with one ml of an overnight culture of S. aureus or P. aeruginosa. The batch phase lasted 24 ± 2 h with the CDC biofilm reactor (Biosurfaces Technologies, Inc., Bozeman, MT) mounted on a magnetic hot plate stirrer (Talbays, Thorofare, NJ) set at 60 ± 5 rpm at 36 ± 1°C for S. aureus or 125 ± 5 rpm at 21 ± 2°C for P. aeruginosa. CSTR medium in 20 L of sterile distilled water had a final concentration of 1.0 g/L TSB for S. aureus and 100 mg/L TSB for P. aeruginosa. CSTR medium was continuously pumped through the CDC biofilm reactor for 24 ± 2 h for both strains.
After wet surface biofilms were established through the batch and CSTR phases, rods from the CDC biofilm reactor, each holding three borosilicate glass coupons, were dehydrated for 24 h and 72 h at 25°C or 21°C for S. aureus and P. aeruginosa, respectively. Dry times and dehydration temperatures were informed by Nkemngong et al., 2020 [34]. Post-treatment with disinfectants, DSB of S. aureus or P. aeruginosa were vacuum-filtered onto filter membranes following EPA-MLB-SOP-MB-20 [35]. Negative controls were spread plated following EPA-MLB-SOP-MB-20 [35]. Eight biological replicates were completed for QA and QT products and five biological replicates for CL, SH and HP products, as informed by Lineback et al., 2018 [36].
Statistical analysis
Log 10 reductions resulting from the treatment of coupons with DSB were calculated and used for statistical analyses. Specifically, mean bacterial log 10 densities per coupon were calculated for disinfectant- and PBS-treated coupons. Mean log 10 densities per disinfectant-treated coupon were normalized against the mean log 10 densities of control coupons to determine log 10 reductions. The least squares method of the PROC GLIMMIX procedure was used to analyze and compare mean log 10 reductions (n=70 per strain; N=140; α=0.05) among the seven tested disinfectant products. The same test was used to statistically compare mean log 10 reductions at 24 h and 72 h. Pair-wise comparisons among products, strains, and dry times were completed with Tukey adjustments. All statistical procedures were completed using SAS version 9.4 (SAS Institute, Cary, NC).
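To make the log 10 reduction computation concrete, the following is a minimal illustrative sketch in Python; it is not the authors' SAS/PROC GLIMMIX analysis, and the CFU counts, function names and use of NumPy are assumptions made purely for illustration.

```python
import numpy as np

def log10_densities(cfu_counts):
    """log10 density per coupon from viable counts (CFU per coupon)."""
    return np.log10(np.asarray(cfu_counts, dtype=float))

def log10_reduction(control_cfu, treated_cfu):
    """Mean log10 density of PBS-treated control coupons minus that of disinfectant-treated coupons."""
    return log10_densities(control_cfu).mean() - log10_densities(treated_cfu).mean()

# Hypothetical counts for one product, one strain and one dry time
control_counts = [2.4e7, 1.8e7, 3.1e7]   # PBS-treated control coupons
treated_counts = [5.0e1, 2.0e2, 8.0e1]   # disinfectant-treated coupons
print(f"log10 reduction: {log10_reduction(control_counts, treated_counts):.2f}")
```

Log 10 reductions computed in this way (one value per treated coupon or per replicate) are the quantities that enter the mixed-model comparisons described above.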
Results
The average log 10 densities of P. aeruginosa DSB per coupon pre-treatment were 7.40 ± 0.75 and 6.77 ± 0.61 after 24 h and 72 h dry times, respectively. There were no significant differences between the average log 10 density per coupon after 24 h and 72 h of dehydration (P ≥ 0.005).
On average, the mean log 10 reduction per coupon for all tested disinfectants after 24 h and 72 h were 5.50 ± 1.45 and 4.65 ± 1.63, respectively.
Overall and regardless of the product type or active ingredient class, significantly higher bactericidal efficacies against DSB of P. aeruginosa were recorded after 24 h compared to 72 h of dehydration (P<0.05; Figure 1).
Mean log 10 reductions for P. aeruginosa DSB were higher for oxidizing agents compared to quaternary ammonium products
The mean log 10 density of P. aeruginosa DSB per coupon was 7.08 ± 0.75 after dehydration (24 h and 72 h combined) and prior to treatment. Overall, product type and active ingredient class were significant (P<0.0001; Figure 2). CL (5.79 ± 1.40) and QA2 (5.85 ± 0.87) had significantly higher log 10 reductions against P. aeruginosa DSB than QA1, QT and SH (P<0.05; Figure 2). However, there were no statistically significant differences among QA1, QT and SH (P ≥ 0.05; Figure 2). There were also no statistically significant differences in the bactericidal efficacies of HP1, HP2, CL and QA2 (P ≥ 0.05; Figure 2).
There were statistically significant differences among active ingredient classes (CL, HP, SH, QA and QT) (P<0.0001; Figure 3). Overall, HP products resulted in significantly higher bactericidal efficacy than QT and SH products (P<0.05; Figure 3). Similarly, CL and QA products had significantly higher mean log 10 reductions than QT and SH products (P<0.05; Figure 3). However, there were no differences between QT and SH, CL and HP, HP and QA, or QA and SH products (P ≥ 0.05; Figure 3).
Higher bactericidal efficacy against S. aureus DSB than P. aeruginosa DSB
Overall, and regardless of the product type, the bacterial strain was statistically significant (P<0.05). The overall mean log 10 reductions for S. aureus and P. aeruginosa were 6.096 ± 1.251 and 4.941 ± 1.505, respectively. Significantly higher log 10 reductions were observed when the tested disinfectants were challenged with S. aureus compared to P. aeruginosa (P<0.05).
Discussion
In this study, we employed a rapid DSB model previously developed by our group for disinfectant efficacy testing and evaluated the bactericidal efficacy of seven EPA-registered disinfectants against 24 h and 72 h old DSB of S. aureus and P. aeruginosa. Specifically, we established DSB of S. aureus and P. aeruginosa at 25°C and 21°C, respectively, to mimic environmental conditions for the formation of DSB on dry contaminated hard non-porous surfaces in healthcare facilities.
We found that mean log 10 densities per coupon from this study were comparable to the ranges previously reported by Nkemngong et al., 2020 [34]. We found that overall and irrespective of dry time, CL, SH, HP and QA disinfectants were significantly more bactericidal against DSB of S. aureus than QT disinfectants. We also found that when DSB of P. aeruginosa were challenged with disinfectants, CL and HP were significantly more bactericidal than SH and QT disinfectants. Overall, we demonstrated that prolonged dehydration had varied effects on the bactericidal efficacy of disinfectants against DSB of S. aureus or P. aeruginosa. Specifically, we found that there were no significant differences in the bactericidal efficacies of disinfectants against 24 h and 72 h DSB of S. aureus. There was, however, a significantly lower log 10 reduction against 72 h DSB of P. aeruginosa compared to 24 h DSB of the same strain.
Bactericidal efficacy varies by strain after prolonged dehydration
Our study found differences in the overall bactericidal efficacy of disinfectants against DSB of S. aureus and P. aeruginosa after prolonged dehydration for 24 h and 72 h. While there was no significant difference in log 10 reductions between 24 h and 72 h DSB of S. aureus, the reverse was true for DSB of P. aeruginosa, as 72 h DSB of P. aeruginosa were harder to kill than their 24 h counterparts. In a previous study by our group, we found that 100% of P. aeruginosa DSB established at a dehydration temperature of 21°C were encased in EPS, while this was true for only 92% of S. aureus DSB established at 25°C [34]. The consistent presence of EPS on DSB of P. aeruginosa at dehydration time points from 24 h to 120 h, as previously demonstrated by our group, suggested that older DSB of P. aeruginosa developed using our model may be encased in more EPS, making them harder to kill [34]. This is consistent with previous studies that have demonstrated the presence of a thick EPS matrix as a major factor for reduced bactericidal efficacy in biofilms compared to planktonic bacteria [29]. Moreover, previous studies [26,37] have also suggested that unfavorable conditions such as dehydration may trigger bacterial biofilms to produce more EPS. While this may be true for P. aeruginosa DSB as evidenced in our previous study, the same may not be the case for S. aureus DSB, as we found that older S. aureus DSB (72 h) were overall encased in less EPS matrix than 24 h biofilms [34]. More EPS production translates into a thicker barrier for disinfectants to bypass before contact with underlying bacteria. Additionally, a thicker EPS matrix may also result in a range of pH values, which can impact bactericidal efficacy [18]. These factors could account for the reduced bactericidal efficacy against 72 h DSB of P. aeruginosa compared to 72 h DSB of S. aureus.
Product type and class significantly impact disinfectant efficacy against S. aureus DSB
There were significant differences among products, with QA1, QA2, CL, SH and HP1 being more bactericidal than QT. In a related study against S. aureus wet surface biofilms, Lineback et al. demonstrated that one sodium hypochlorite and five hydrogen peroxide disinfectants were significantly more bactericidal than two quaternary ammonium compounds [36]. This could be explained by the production of reactive oxygen species (ROS) by hydrogen peroxide disinfectants. The production of ROS results in more necrotic death compared to quaternary ammonium compounds, as ROS cause DNA damage [38]. Comparatively, quaternary ammonium compounds mainly rely on a positively charged N-atom to bind to cell membranes, creating "pores" for n-alkyl side chains to traverse the cell membrane, resulting in lysis and leakage of cytoplasmic contents [39,40]. Considering the denser EPS produced by DSB compared to wet surface biofilms, this may present a significant barrier for quaternary ammonium products compared to sodium dichloro-s-triazinetrione, sodium hypochlorite and hydrogen peroxides. Moreover, oxidizing agents such as sodium dichloro-s-triazinetrione, sodium hypochlorite and hydrogen peroxides have low molecular weight active ingredients that, compared to larger molecules such as quaternary ammonium, can more easily bypass the cell membrane to damage internal cellular components [38]. This could further explain the observation that sodium dichloro-s-triazinetrione, sodium hypochlorite and hydrogen peroxide products were overall more bactericidal against DSB of S. aureus than quaternary ammonium. Quaternary alcohol products may have resulted in significantly higher bactericidal efficacies owing to the "rapid" bactericidal mode of action of alcohol [41].
We also found that the mean log 10 reductions between HP1 and HP2, and between QA1 and QA2, were comparable when disinfectants were challenged with S. aureus DSB. This finding is consistent with the findings of Lineback et al., 2018, who reported no significant differences among the bactericidal efficacies of five hydrogen peroxide products tested against S. aureus wet surface biofilms [36]. Similarly, in a recent study that evaluated the bactericidal efficacies of six disinfectant wipes against S. aureus ATCC-6538 inoculated on hard non-porous surfaces, Voorn et al. reported no significant differences in the bactericidal efficacies among three hydrogen peroxide products or three quaternary alcohol products [42]. However, we found that quaternary alcohol products were overall more bactericidal than quaternary ammonium products without alcohol. This suggests that the defined percentage of alcohol added to quaternary ammonium compounds influences bactericidal efficacy; alcohol confers a rapid and more potent (tuberculocidal) action against bacteria [41].
HP and CL products are more bactericidal against P. aeruginosa DSB than SH, QT and QA products
Overall, CL, QA2, HP1 and HP2 had significantly higher log 10 reductions against P. aeruginosa DSB than QA1, QT and SH. Our findings are similar to those of West et al., who demonstrated that hydrogen peroxide-based disinfectants are overall more bactericidal against P. aeruginosa allowed to dry on a Formica disc than quaternary ammonium disinfectants [43]. In another study, Tote et al. found that hydrogen peroxides had a stronger antibiofilm activity against one-day-old P. aeruginosa biofilms, as they were biologically active against both viable P. aeruginosa cells and their EPS matrix, unlike isopropanol disinfectants [44]. The high efficacy of HP1 and HP2 compared to SH against DSB could be explained by the relatively low concentration (0.39%) of sodium hypochlorite in SH, as in a 2018 study, Lineback et al. compared the bactericidal efficacies of 0.5% hydrogen peroxide and 1.312% sodium hypochlorite disinfectants against wet surface biofilms of P. aeruginosa and found no difference in their efficacies [36]. The same intrinsic factor of a relatively low sodium hypochlorite concentration in SH may also account for the higher bactericidal efficacy of CL compared to SH, as in a study by Tiwari et al., 0.60% sodium hypochlorite resulted in superior bactericidal efficacy against clinical isolates of S. aureus biofilms [45]. These reports suggest that although sodium hypochlorite is generally more bactericidal than quaternary ammoniums owing to its mode of action, the degree of disinfection is largely concentration dependent.
Although QA2 had a higher quaternary ammonium and lower alcohol content (0.76% quat + 22.5% alcohol) than QA1 (0.5% quat + 55% alcohol) (Table 1), QA2 demonstrated a significantly higher kill against P. aeruginosa DSB than QA1. This suggests that the synergistic effect of quaternary ammonium compounds and alcohol in QA1 may not be sufficient. Moreover, in a 2018 study by Wesgate et al., the authors reported that quaternary ammonium formulations with side alkyl chains in the C 12-16 range, as is the case for QA1, were more adsorbed to different wipe material types than other formulations [46]. Consequently, and considering that wipes were "wrung" to dispense disinfectant liquid from QA1, the quaternary ammonium compound in QA1 may have been more adsorbed to the wipe material than that in QA2, resulting in a lower final disinfectant liquid concentration for QA1 than for QA2 [46].
P. aeruginosa DSB are harder to inactivate than S. aureus DSB
Our data delineate statistically significant higher average log 10 reductions when disinfectants were tested against S. aureus DSB compared to P. aeruginosa DSB. Overall, the low bactericidal efficacy of disinfectants against biofilms is often linked to the EPS matrix [47]. The reduced efficacy of disinfectants, regardless of the product type, observed with Gram-negative P. aeruginosa can be partially explained by the presence of alginate, Psl, Pel [48], and extracellular DNA (eDNA) [49] as important components of the biofilm matrix characteristic of P. aeruginosa. Specifically, the overproduction of alginates by P. aeruginosa mutants results in the formation of larger microcolonies than in wildtype strains [50]. This suggests a role for alginates in decreased susceptibility to antimicrobials [51] compared to non-alginate-producing bacteria such as S. aureus [48]. Pel, on the other hand, plays a vital role in cell-to-cell interactions within these biofilms [52] and in biofilm maturation [49]. A spike in alginate and carbohydrate production during biofilm formation and maturation confers an overall increase in the net negative charge of the EPS matrix, enhancing the electrostatic attractions between the EPS matrix and positively charged antimicrobials such as quaternary ammonium compounds [47]. This limits the diffusion of cationic antimicrobials through the EPS matrix, thus shielding the underlying bacteria from direct antimicrobial contact [47]. In contrast, the cell wall of Gram-positive bacteria such as S. aureus is essentially composed of peptidoglycan and teichoic acid, and substances with high molecular weight can traverse the cell wall [53]. This may explain the higher log 10 reductions observed against S. aureus DSB compared to P. aeruginosa DSB exposed to quaternary alcohol and quaternary ammonium products.
Our results suggest that comparatively higher mean log 10 reductions were achieved when sodium hypochlorite was challenged with S. aureus compared to P. aeruginosa DSB. This could be due to the fact that negatively charged disinfectants such as sodium hypochlorite destroy the cellular activity of bacterial proteins [54] and are capable of increased penetration of outer cell layers even in the unionized state [53]. Similarly, hydroxyl free radicals from HP-based products specifically target sulfhydryl groups and double bonds [55] and destroy bacterial lipids, proteins, and DNA. Our data are in accordance with Lineback et al., 2018, who suggested that sodium hypochlorite products are overall more effective against P. aeruginosa and S. aureus WSB compared to quaternary ammonium products [36].
Our results support previous findings that DSB are harder to kill than planktonic bacteria; all the products tested in this study are EPA registered, indicating high levels of efficacy against planktonic bacteria of S. aureus and P. aeruginosa. To reduce patient safety risks in healthcare facilities, it is critical to conduct baseline disinfectant efficacy testing for product registration using bacterial biofilms representative of healthcare environments.
We acknowledge that the scope of our study is limited, as we did not investigate the bactericidal efficacy of the tested products against mixed-culture bacterial biofilms common on dry contaminated hard non-porous surfaces in healthcare facilities. We also acknowledge that our study did not specifically investigate disinfectant efficacy against DSB of S. aureus and P. aeruginosa subjected to longer hours of dehydration, as this could impact the efficacy levels of commonly used disinfectants. A wider range of disinfectant active ingredients could also have been investigated. However, this study has set the foundation for future investigations of DSB of S. aureus and P. aeruginosa.
Conclusion
Although it is generally agreed that DSB pose a severe challenge for the disinfection of hard non-porous surfaces in healthcare facilities and are a significant contributor to the incidence of HAI, the success of any disinfection regime is dependent on multiple intrinsic and extrinsic factors. Our study definitively demonstrated that significant kill levels of the DSB of major healthcare pathogens that cause HAI can be achieved, although this is highly dependent on the choice of disinfectant, active ingredient class, DSB "age" and bacterial strain. It is therefore critical for healthcare stakeholders to consider these factors in efforts to reduce HAI rates. | 2021-05-05T00:08:04.734Z | 2021-03-26T00:00:00.000 | {
"year": 2021,
"sha1": "4c1f55a4f24f4fae893f9970471f08eec8157b3b",
"oa_license": "CCBY",
"oa_url": "https://www.researchsquare.com/article/rs-315705/v1.pdf?c=1621845529000",
"oa_status": "GREEN",
"pdf_src": "Adhoc",
"pdf_hash": "c26ce5d6e227d442809f68447c4a2fb8e94bf5db",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
248476231 | pes2o/s2orc | v3-fos-license | Learned Gradient of a Regularizer for Plug-and-Play Gradient Descent
The Plug-and-Play (PnP) framework allows integrating advanced image denoising priors into optimization algorithms to efficiently solve a variety of image restoration tasks. The Plug-and-Play alternating direction method of multipliers (ADMM) and the Regularization by Denoising (RED) algorithms are two examples of such methods that made a breakthrough in image restoration. However, while the former method only applies to proximal algorithms, it has recently been shown that there exists no regularization that explains the RED algorithm when the denoisers lack Jacobian symmetry, which happens to be the case for most practical denoisers. To the best of our knowledge, there exists no method for training a network that directly represents the gradient of a regularizer, which can be directly used in Plug-and-Play gradient-based algorithms. We show that it is possible to train a denoiser along with a network that corresponds to the gradient of its regularizer. We use this gradient of the regularizer in gradient-based optimization methods and obtain better results compared to other generic Plug-and-Play approaches. We also show that the regularizer can be used as a pre-trained network for unrolled gradient descent. Lastly, we show that the resulting denoiser allows for a quick convergence of the Plug-and-Play ADMM.
1. Introduction. This paper proposes a new approach for solving linear inverse problems in imaging. Inverse problems represent the task of reconstructing an unknown signal from a set of corrupted observations. Examples of inverse problems are denoising, super-resolution, deblurring and inpainting, which all are restoration problems. Supposing that the degradation process is known, we can formulate the restoration task as the minimization of a data fidelity term. However, inverse problems are ill-posed. Hence they do not have a unique solution. A common approach for dealing with this issue consists in introducing prior knowledge on images, in the form of an extra regularization term which penalizes unlikely solutions in the optimization problem. Early methods used handcrafted priors such as total variation (TV) [4,5,25,34] and wavelet regularizers [13], as well as low-rank regularizers such as weighted Schatten [23,42] and nuclear [16,17] norms of local image features.
With the advancement of deep learning, problem-specific deep-neural networks yielded a significant performance improvement in the field of image restoration. In fact, using a reasonable amount of training data, we can train a neural network that can learn a mapping between the space of measurements (i.e. degraded images) and the corresponding solution space (i.e. ground-truth images). These networks interpreted as deep regression models are efficient for solving different applications such as sparse signal recovery [28], deconvolution and deblurring [11,12,38,39,43], super-resolution [10,22,29], and demosaicing [19]. Nevertheless, they have to be designed specifically for each application, hence they lack genericity and interpretability.
Recent methods have been introduced with the goal of coupling classical optimization with deep learning techniques. Most of these methods are derived from the idea of the Plug-and-Play prior (PnP) popularized by Venkatakrishnan et al. [41]. The method in [41] uses the ADMM algorithm [2] which iteratively and alternately solves two sub-problems, each associated to either the regularization term or the data fidelity term. The authors have noted that the regularization sub-problem (formally defined as the proximal operator of the regularization term) can be advantageously replaced by applying a state-of-the-art denoiser, hence removing the need for explicitly defining a regularization term. While early works [6,36,41] used traditional denoising methods such as BM3D [8] or non-local means [3], the approach makes it possible to leverage the high performances of deep neural networks for solving various tasks with a single deep denoiser [31,46,47].
However, these methods remain limited to proximal algorithms which make use of the proximal operator of the regularization term, but they do not apply to the simple gradient descent algorithm which instead requires the gradient of the regularization term with respect to the current estimate. For this reason, despite the growing interest for Plug-and-Play algorithms, as well as the extensive use of gradient descent in machine learning, the Plug-and-Play gradient descent has seldom been studied in the literature.
To the best of our knowledge, the only method which intends to model the gradient of a regularizer for solving inverse problems in a similar Plug-and-Play context, is the Regularization by Denoising (RED) proposed by Romano et al. [32]. Using an off-the-shelf denoiser, the RED method explicitly defines a regularization term which is proportional to the inner product between the unknown image and the residual of its denoising. They also show that, under certain conditions, the gradient of this regularization term is the denoising residual itself. However, Reehorst and Schniter [30] proved later that the gradient expression proposed in RED is not justified with denoisers that lack Jacobian symmetry, which excludes most practical denoisers such as BM3D [8] or state-of-the-art deep neural networks. Moreover, although the regularization term is explicitly defined in RED, it depends on the noise level parameter of the denoiser used. This involves an additional hyper-parameter that must be tuned per-application, whereas theoretically, a regularizer (and thus its gradient) should fully determine the image prior, regardless of the task to be solved.
It must also be noted that a network playing the role of the gradient of a regularizer can be trained in the context of the so-called "unrolled algorithms" [9,14,15,21,27,29,35,37,44]. Extending on the Plug-and-Play idea, this approach consists in training the regularizing network in an end-to-end fashion so that applying a given number of iterations of the algorithm yields the best results for a specific inverse problem. In particular, the Total Deep Variation (TDV) method [21] consists of an unrolled gradient based algorithm where the network represents the regularization function. Due to the end-to-end training, high quality results can be obtained for the targeted application. However, the approach loses the genericity of Plug-and-Play algorithms. Furthermore, it is not clear what interpretation can be given to the trained network which does not only learn prior knowledge on images, but also task-specific features.
The aim of this paper is then to train a network that mathematically represents the gradient of a regularizer, without relying on task-specific training. Hence, our regularizing network can be used for solving inverse problems using a simple gradient descent algorithm, unlike existing Plug-and-Play methods that are suitable only for proximal algorithms. Our method makes use of a second network pre-trained for the denoising task. Based on the assumption that the denoiser represents the proximal operator of an underlying differentiable regularizer (defining the image prior), we derive a loss function that links the denoiser and the regularizer's gradient networks. However, since there is no guarantee that this assumption is mathematically valid for a denoising neural network, we propose an approach where the pretrained denoiser is modified jointly with the training of our regularizer's gradient network. This approach encourages the denoiser to be consistent with the definition of a proximal operator of a differentiable regularizer, and significantly improves our results in comparison to keeping the denoiser fixed.
We use our network to solve different inverse problems such as super-resolution, deblurring and pixel-wise inpainting in a simple gradient based algorithm, and obtain better results when comparing to other generic methods. We also show that our training method can advantageously serve as a pre-training stage, later facilitating a per-application tuning of the regularization network in the framework of unrolled gradient descent.
2. Notations and problem statement. We consider the linear inverse problem which consists in recovering an image x ∈ R^n from its degraded measurements y ∈ R^m obtained with the degradation model:

(2.1)   y = Ax + ε,

where A ∈ R^{m×n} represents the degradation operator depending on the inverse problem and ε ∈ R^m typically represents Additive White Gaussian Noise (AWGN). The restoration of these degraded images is an ill-posed problem, therefore a prior is used to restrict the set of solutions. The reconstruction can be treated using Bayesian estimation that uses the posterior conditional probability p(x|y). Maximum a posteriori probability (MAP) is the most popular estimator in this scheme, where we choose x that maximizes p(x|y). The estimation task is hence modeled as the optimization problem

x_MAP = argmax_x p(x|y) = argmax_x p(y|x) p(x),

where p(y|x) and p(x) are respectively the likelihood and the prior distributions. For the linear degradation model in Equation 2.1 with Additive White Gaussian Noise of standard deviation σ, we get

(2.4)   x_MAP = argmin_x f(x) + σ² φ(x),

where the data fidelity term f(x) = ½‖y − Ax‖²₂ enforces the similarity with the degraded measurements, whereas the regularization term φ(x) reflects prior knowledge and a property to be satisfied by the searched solution. The non-negative weighting parameter σ² balances the trade-off between the two terms. The problem in Equation 2.4 does not have a closed-form solution in general. Therefore it must be solved using different optimization algorithms. The Plug-and-Play framework typically considers proximal splitting algorithms which decompose the problem in two sub-problems (one for each term in Equation 2.4) and solve them alternately. In these algorithms, the regularization sub-problem consists in evaluating the proximal operator of the regularization term defined as:

(2.5)   prox_{σ²φ}(z) = argmin_x ½‖x − z‖²₂ + σ² φ(x).

This can be seen as a particular case of inverse problem where the degradation operator A is the identity matrix, and the degradation only consists in the addition of White Gaussian Noise of standard deviation σ. The proximal operator in Equation 2.5 can thus be interpreted as a MAP Gaussian denoiser. Hence, this sub-problem can be conveniently replaced by a state-of-the-art Gaussian denoiser in a Plug-and-Play proximal algorithm. However, this approach does not directly generalize to the gradient descent algorithm, where the update formula for the minimization in Equation 2.4 is expressed as:

x_{k+1} = x_k − μ (∇f(x_k) + σ² ∇φ(x_k)).

Here, instead of the proximal operator of the regularizer φ, we need its gradient ∇φ, which cannot be replaced by a denoiser. In this paper, we propose to train a network that can serve as the gradient of the regularization term in Plug-and-Play gradient descent algorithms.
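As an illustration of the update rule above, the following is a minimal sketch of a Plug-and-Play gradient descent loop, assuming a trained network `grad_phi` that outputs the gradient of the regularizer, and problem-specific callables `A` and `A_T` for the degradation operator and its adjoint; all names, the initialization and the fixed step size are assumptions for illustration (in the experiments reported later, the ADAM optimizer is used rather than plain gradient descent).

```python
import torch

@torch.no_grad()
def pnp_gradient_descent(y, A, A_T, grad_phi, sigma, mu=0.1, n_iters=200, x0=None):
    """Plain PnP gradient descent; A / A_T are callables for the degradation and its adjoint."""
    x = A_T(y).clone() if x0 is None else x0.clone()
    for _ in range(n_iters):
        grad_f = A_T(A(x) - y)                              # gradient of the data term 1/2 ||y - A x||^2
        x = x - mu * (grad_f + (sigma ** 2) * grad_phi(x))  # gradient step on f + sigma^2 * phi
    return x
```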
3. Training of the gradient of a regularizer.
3.1. Mathematical derivations.
We show in the following that it is mathematically possible to train a network that corresponds to the gradient of a regularizer by using a deep denoiser. Let us consider a denoiser D_σ defined as the proximal operator in Equation 2.5:

D_σ(z) = prox_{σ²φ}(z) = argmin_x F_φ(x, z, σ),   with   F_φ(x, z, σ) = ½‖x − z‖²₂ + σ² φ(x).

Hence, for σ and z fixed, the denoised image x = D_σ(z) minimizes F_φ(x, z, σ). Therefore, we have:

(3.2)   ∂F_φ/∂x (D_σ(z), z, σ) = 0.

The partial derivative ∂F_φ/∂x can be computed as:

∂F_φ/∂x (x, z, σ) = x − z + σ² ∇φ(x).

Evaluating at the denoised image x = D_σ(z) thus gives:

(3.5)   ∂F_φ/∂x (D_σ(z), z, σ) = D_σ(z) − z + σ² ∇φ(D_σ(z)).

Using Equation 3.2 and Equation 3.5, we obtain:

(3.8)   σ² ∇φ(D_σ(z)) = z − D_σ(z).

Using Equation 3.8, we can train a network that corresponds to the gradient of the regularizer ∇φ with respect to its input, using the loss function

(3.9)   L_∇φ = ‖ σ² ∇φ(D_σ(z)) − (z − D_σ(z)) ‖.

This requires the knowledge of the corresponding denoiser D_σ. Note that Equation 3.8 is valid for any value of σ regardless of the degradation in z. Hence, σ can be seen as a free parameter of our loss L_∇φ. For small values of σ, the input D_σ(z) of the regularizing network ∇φ will be close to the degraded image z. Hence ∇φ will be trained to fit the artifacts in the degraded images (e.g. noise). On the other hand, for high values of σ, the input of ∇φ will be a strongly denoised image, with reduced artifacts but less details. Hence, ∇φ will be trained to recover the missing details. During the training, we vary the value of this parameter so that the regularizing network can recover details while also removing artifacts (see details in Subsection 3.2). Also note that in practice, since φ is meant to be used in gradient-based algorithms, we only need the gradient ∇φ rather than an explicit definition of φ. Hence, we propose in what follows a framework for end-to-end training of ∇φ along with the denoiser D.
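A possible implementation sketch of the loss L_∇φ is given below, assuming PyTorch tensors and networks; the choice of an l1 penalty on the residual mismatch, as well as every name used, is an assumption made for illustration only.

```python
import torch

def loss_grad_phi(grad_phi, denoised, z, sigma):
    """Penalize the mismatch  sigma^2 * grad_phi(D_sigma(z)) - (z - D_sigma(z))  from Equation 3.8.
    An l1 penalty is used here purely as an illustrative choice."""
    residual = z - denoised                          # denoising residual z - D_sigma(z)
    predicted = (sigma ** 2) * grad_phi(denoised)    # network output scaled by sigma^2
    return torch.mean(torch.abs(predicted - residual))
```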
3.2. Training framework for the regularizer's gradient. The training framework is depicted in Figure 1. Let η ∼ N(0, σ₀) be a white Gaussian noise of mean 0 and standard deviation σ₀ that we use to corrupt the ground truth images x₀ of the training dataset to produce degraded images z = x₀ + η. Let σ be a standard deviation value used as a parameter of our loss L_∇φ, as defined in Subsection 3.1. In order to handle different values of σ in Equation 3.9, D_σ is modelled as a non-blind deep denoiser that takes as input a noise level map (i.e. each pixel of the noise level map being equal to σ) concatenated with the noisy image z. For the denoiser, we use a simple l1 loss defined as:

(3.10)   L_{D_σ} = ‖ D_σ(z) − x₀ ‖₁.

Note that Equation 3.10 is a suitable loss for the denoiser only when σ = σ₀, since the non-blind denoiser must be parameterized with the true noise level σ₀ of the noisy input. The denoised output D_σ(z) is then inputted to the network modelling the gradient of the regularizer in order to train it using the loss L_∇φ defined in Equation 3.9. Hence, our goal is to minimize the global loss defined as:

(3.11)   L = δ L_{D_σ} + λ L_∇φ.

For training the deep denoiser network, we should set the noise level σ inputted to the network equal to the actual noise level σ₀ used for generating η. However, as explained in Subsection 3.1, for the loss L_∇φ, it is preferable to select σ independently of σ₀. Hence, the input D_σ(z) of our regularizer gradient network ∇φ can cover a wide range of alterations, including images with remaining noise (i.e. σ < σ₀) or with too strong denoising, and thus less details (i.e. σ > σ₀). We therefore choose to alternate during the training between either selecting σ and σ₀ independently, or setting σ = σ₀ in order to keep D faithful to the data. Furthermore, since the denoiser loss L_{D_σ} is only valid when σ = σ₀, we omit this loss when σ ≠ σ₀ by setting δ = 0.
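The following schematic PyTorch training step illustrates the alternation between σ = σ₀ (with δ = 1) and an independently drawn σ (with δ = 0). The λ = 0.004 default follows the training details given later; the noise range expressed on a [0, 1] image scale, the l1 penalty for L_∇φ, and all names are assumptions made for illustration only.

```python
import torch

def training_step(x0, denoiser, grad_phi, optimizer, lam=0.004, sigma_max=50.0 / 255.0, tie_sigmas=True):
    """One schematic step minimizing  L = delta * L_D + lambda * L_grad_phi  (Equation 3.11).
    `denoiser` is non-blind: it takes the noisy image concatenated with a noise level map."""
    sigma0 = torch.rand(1).item() * sigma_max                 # true noise level of the degradation
    sigma = sigma0 if tie_sigmas else torch.rand(1).item() * sigma_max
    delta = 1.0 if tie_sigmas else 0.0                        # denoiser loss only used when sigma == sigma0

    z = x0 + sigma0 * torch.randn_like(x0)                    # degraded image z = x0 + eta
    level_map = torch.full_like(x0[:, :1], sigma)             # constant noise level map (one channel)
    denoised = denoiser(torch.cat([z, level_map], dim=1))     # D_sigma(z)

    loss_d = torch.mean(torch.abs(denoised - x0))                                        # Eq. 3.10
    loss_g = torch.mean(torch.abs((sigma ** 2) * grad_phi(denoised) - (z - denoised)))   # Eq. 3.9 (l1 here)
    loss = delta * loss_d + lam * loss_g                                                 # Eq. 3.11

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return float(loss)
```

Because the gradient of loss_g also flows back through the denoiser output, this single step updates both networks jointly, which is the behavior described above.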
Note that an alternative training strategy would consist in first training the denoiser separately (only with the loss L Dσ ), and then training the regularizer (only with the loss L ∇φ ) without jointly updating the denoiser. However, separately training the denoiser only ensures good denoising performance, but it does not guarantee to match the formal definition of a MAP Gaussian denoiser for some differentiable prior, i.e. a proximal operator of a differentiable scalar function φ. We further analyse the advantages of jointly training the denoiser and the regularizer in Subsection 4.3.
3.3. Training details.
For our training, we use a state-of-the-art deep denoiser architecture in order to train our network modelling the gradient of the regularizer. We choose to work with the DRUNet proposed in [46], which is a combination of U-Net [33] and ResNet [18]. Since it takes as input the noisy image concatenated in the channel dimension with a noise level map, it can suitably represent the non-blind denoiser D_σ.
The architecture of the regularizing network ∇φ is shown in Figure 2. It is the same architecture as the DRUNet denoiser, with the only difference that it does not take a noise level map as additional input.
We initialize D σ using the pre-trained DRUNet denoiser (which we reproduced based on the work in [46]). Then, we train our network ∇φ while jointly updating D σ , following the proposed framework in Subsection 3.2. The weight λ of the loss L ∇φ in Equation 3.11 is set equal to 0.004. The selection of the parameters σ and σ 0 follows the alternating strategy described in Subsection 3.2: for half of the training iterations, we use σ = σ 0 with a value chosen randomly with uniform distribution in [0, 50]; otherwise, σ and σ 0 are chosen independently with the same uniform distribution.
The remaining training details are similar to the ones presented in [46] for the DRUNet pre-training: the same large dataset of 8694 images composed of images from the Waterloo Exploration Database [26], the Berkeley Segmentation Database [7], the DIV2K dataset [1] and the Flick2K dataset [24] is used. 16 patches of 128x128 are randomly sampled from the training dataset for each iteration. We use the ADAM optimizer [20] to minimize the loss L defined in Equation 3.11. The learning rate is initially set to 1e-4, and decreased by half every 100,000 iterations until reaching 5e-7, where the training stops.
4. Experimental results.
4.1. ∇φ for Plug-and-Play gradient descent. In this section, we evaluate the performance of our approach. First, we propose to use our network to solve different inverse problems in a simple Plug-and-Play gradient descent algorithm. For experimental results, we use the ADAM optimizer to solve Equation 2.4 with the Plug-and-Play gradient descent.
As the main goal of this approach is to solve inverse problems using simple gradient-based algorithms with a generic regularizer, we compare ourselves to algorithms that are designed to solve different inverse problems using a single regularization network in a Plug-and-Play framework. Hence, we compare the performance of our network to the PnP-ADMM with the DRUNet [46], the RED [32] in gradient descent with the DRUNet used for regularization, and Chang's projection operator [31] used in an ADMM framework. We also compare with the Deep Image Prior (DIP) [40], which is a generative model trained for each test image in an unsupervised way. The network takes a random input vector and adjusts its weights to minimize the mean square error (MSE) between its output after applying the degradation and the true degraded observation. This method can be seen as a generic approach as well.
For fair comparisons, we reproduced all the results under the same conditions, i.e. using the same initialization and the same degradation operator A for each application as described in the following subsections. To reproduce the results of [31] we used the model trained by the authors, which takes input images of size 64x64. Hence we applied the network on quarter-overlapping sample patches in order to enhance the results by avoiding block artifacts.
We tuned the parameters of each of these methods for each application in order to obtain the best results. Table 1 shows the parameters used during testing for the Plug-and-Play gradient descent with our regularizer. In theory, the parameter σ in Equation 2.4 should be equal to the true standard deviation σ n of the Gaussian noise added to the degraded image. However, when σ n = 0 (e.g. super-resolution, pixel-wise inpainting), choosing σ = 0 would completely remove the regularization term. In these cases, we choose a small non-zero value of σ depending on the application.
Table 1 Parameters used for the Plug-and-Play gradient descent with our regularizer. µ: gradient step size, σ: weight of the regularization, σn: standard deviation of the AWGN added to the degraded image.
Super-resolution.
Super-resolution consists of reconstructing a high-resolution image from a low-resolution (i.e. downsampled) measurement. Low-resolution images are generated by applying a convolution kernel followed by a downsampling by a factor t. We evaluate our method with bicubic and Gaussian convolution kernels, with both 2x and 3x downsampling scales. The Gaussian kernel has a standard deviation σ b = 0.5 · t (i.e. σ b = 1 for x2 and σ b = 1.5 for x3). In all cases, the gradient descent is initialized with a high-resolution image obtained by bicubic upsampling of the degraded image. Tables 2 and 3 show a numerical comparison of our method with the aforementioned generic approaches for super-resolution of factor 2 and 3 respectively, for both bicubic and Gaussian kernels.
Figure 3. Visual comparison of super-resolution results obtained with DIP [40], the projection operator (One-Net) [31], RED [32], PnP-ADMM [46] and our regularizer used in a PnP-GD framework. Low-resolution images generated with a bicubic kernel followed by a downsampling by a factor of 2.
A bicubic interpolation of the degraded image was used for initialization. The PSNR (peak signal-to-noise ratio) measures presented in this paper are computed on the RGB channels. The numerical comparison gives higher values for our regularizer compared to the existing generic Plug-and-Play approaches, with a slight improvement with respect to the PnP-ADMM. Figure 3 shows a visual comparison of the results for a degradation with a bicubic kernel and a downsampling by a factor of 2. We observe sharper images with fewer aliasing artifacts produced by our approach.
Table 2 Super-resolution results obtained with our regularizer used in Plug-and-Play gradient descent (corrupted with bicubic and Gaussian kernels and downsampled by a factor of 2), measured in terms of PSNR [dB]. Comparison with DIP [40], the projection operator (One-Net) [31], RED [32] and the PnP-ADMM [46].
Deblurring.
For image deblurring, the degradation consists of a convolution performed with circular boundary conditions. Hence, the degradation matrix can be written as A = F* DF, where F and F* represent respectively the discrete Fourier transform and its inverse, and D is a diagonal matrix representing the filter in the Fourier domain. We degrade our images with two 25x25 isotropic Gaussian blur kernels of standard deviations 1.6 and 2.0 (Figure 4) that are used in [45], and add white Gaussian noise of standard deviation σ n = √2/255. The blurred image is directly used as the initialization of the Plug-and-Play gradient descent. Table 4 shows the PSNR results [dB] of the evaluation of our method for deblurring.
Figure 5. Visual comparison of deblurring results obtained with DIP [40], the projection operator (One-Net) [31], RED [32], PnP-ADMM [46] and our regularizer used in a PnP-GD framework. The blurred images are generated by an isotropic Gaussian kernel of standard deviation 1.6.
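Since A = F*DF, both the forward operator and its adjoint reduce to pointwise multiplications in the Fourier domain. The following small sketch builds such operator handles to plug into the gradient descent sketch above; the helper names are illustrative and not taken from the paper.

import torch

def deblurring_operators(kernel, image_shape):
    # kernel: 2D blur kernel (e.g. a 25x25 isotropic Gaussian); image_shape: (H, W)
    H, W = image_shape
    pad = torch.zeros(H, W)
    kh, kw = kernel.shape
    pad[:kh, :kw] = kernel
    # Centre the kernel so that the circular convolution does not shift the image
    pad = torch.roll(pad, shifts=(-(kh // 2), -(kw // 2)), dims=(0, 1))
    D = torch.fft.fft2(pad)  # diagonal of A in the Fourier domain

    A = lambda x: torch.real(torch.fft.ifft2(D * torch.fft.fft2(x)))
    At = lambda r: torch.real(torch.fft.ifft2(torch.conj(D) * torch.fft.fft2(r)))
    return A, At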
Similarly to super-resolution, we observe higher PSNR values with respect to the other generic methods for both Gaussian kernels of standard deviation 1.6 and 2.0. Visual comparison in Figure 5 shows that our approach successfully recovers the details without increasing the noise.
Pixel-wise inpainting.
Pixel-wise inpainting consists of restoring the pixel values in an image where a number of the pixels were randomly dropped. The degradation consists of multiplying the ground-truth image by a binary mask. For the initialization image x 0 , we set the color of the unknown pixels to grey. We test our results with both 20% and 10% known-pixel rates. Table 5 shows a numerical evaluation of our method for the application of pixel-wise inpainting in terms of PSNR [dB]. We observe significant performance gains of our regularizer compared to the other methods, especially in the most challenging case where the known-pixel rate is only 10%. The visual improvements can also be seen in Figure 6.
Table 3 Super-resolution results obtained with our regularizer used in Plug-and-Play gradient descent (corrupted with bicubic and Gaussian kernels and downsampled by a factor of 3), measured in terms of PSNR [dB]. Comparison with DIP [40], the projection operator (One-Net) [31], RED [32] and the PnP-ADMM [46].
Table 4 Deblurring results obtained with our regularizer used in Plug-and-Play gradient descent (blurred images are generated by isotropic Gaussian kernels of standard deviation 1.6 and 2.0), measured in terms of PSNR [dB]. Comparison with DIP [40], the projection operator (One-Net) [31], RED [32] and the PnP-ADMM [46].
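For pixel-wise inpainting, the degradation operator is a binary mask, which is its own adjoint. A small sketch of the corresponding operator handles and grey initialization, compatible with the gradient descent sketch above, is given below; the helper names are illustrative only.

import torch

def inpainting_operators(mask):
    # mask: binary tensor, 1 at known pixels (10% or 20% of the pixels here)
    A = lambda x: mask * x      # keep only the known pixels
    At = lambda r: mask * r     # the masking operator is self-adjoint
    return A, At

def grey_init(y, mask):
    # Initialization x0: observed values at known pixels, grey elsewhere (images assumed in [0, 1])
    x0 = y.clone()
    x0[mask == 0] = 0.5
    return x0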
Unrolled gradient descent with ∇φ.
Aside from the Plug-and-Play gradient descent, our approach for training ∇φ can also serve as a pre-training strategy for unrolled gradient descent. In unrolled optimization methods, the regularization network is trained for each inverse problem such that applying a fixed number of iterations of the algorithm (e.g. Equation 2.6 for gradient descent) best approximates the ground-truth image. We describe the unrolled training approach in Algorithm 4.1 for a gradient descent optimization. While this end-to-end training strategy loses the genericity of the Plug-and-Play approach, it typically improves the performance.
Table 5 Pixel-wise inpainting results obtained with our regularizer used in Plug-and-Play gradient descent (corrupted images are generated by keeping 20% and 10% of the pixels), measured in terms of PSNR [dB]. Comparison with DIP [40], the projection operator (One-Net) [31], RED [32] and the PnP-ADMM [46].
Figure 6. Visual comparison of pixel-wise inpainting results with a known-pixel rate of p = 20%, obtained with DIP [40], the projection operator (One-Net) [31], RED [32], PnP-ADMM [46] and our regularizer used in a PnP-GD framework.
However, to facilitate the training, it is generally required to initialize the network weights with a generically pre-trained version. When unrolling proximal algorithms such as ADMM, a pre-trained deep denoiser can be used, since it can be interpreted as the proximal operator of a generic regularization function. On the other hand, gradient descent instead requires the gradient of a regularizer. Hence, a denoiser cannot be directly used as a pre-trained network. By transferring the image prior implicitly represented by the denoiser to our regularizer's gradient ∇φ, our method thus provides a suitable pre-trained network for unrolled gradient descent.
We tested this approach for super-resolution of factor 2 and 3, as well as deblurring with 2 isotropic Gaussian kernels (with standard deviations of 1.6 and 2.0), and compared our results with a network learned end-to-end in an unrolled environment without the pre-training (i.e. random weight initialization).
Table 6 Parameters used for unrolled gradient descent optimization for both pre-trained and not pre-trained versions: (i) super-resolution of factor 2 and 3 for bicubic corruption; (ii) deblurring with an isotropic Gaussian kernel of standard deviation 1.6 and 2.0. σn: standard deviation of the white Gaussian noise added to the corrupted image, σ: weight of the regularization term, µ: gradient step size, N: number of unrolled iterations. The training is performed over the DIV2K dataset [1].
Algorithm 4.1 Unrolled gradient descent training:
for each epoch do
  for each batch do
    η ← N(0, σ n)
    x gt ← ground-truth batch
    y ← A x gt + η
    x 0 ← y
    for k ← 0 to N − 1 do
      x k+1 ← x k − µ (A*(A x k − y) + σ² ∇φ(x k))
    end for
    loss ← ‖x N − x gt‖²
    update the weights of ∇φ with back-propagation
  end for
end for
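For concreteness, a minimal PyTorch-style rendering of one training step of Algorithm 4.1 could look as follows; the operator handles and the σ² weighting are the same assumptions as in the sketches above, and for super-resolution the initialization would in practice be an upsampled version of y rather than y itself.

import torch

def unrolled_step(grad_reg, optimizer, x_gt, A, At, sigma, sigma_n, mu, N):
    Ax = A(x_gt)
    y = Ax + sigma_n * torch.randn_like(Ax)   # simulate the degraded measurement
    x = y                                     # x0 <- y (see note above for super-resolution)
    for _ in range(N):                        # N unrolled, differentiable iterations
        x = x - mu * (At(A(x) - y) + (sigma ** 2) * grad_reg(x))
    loss = torch.sum((x - x_gt) ** 2)         # ||x_N - x_gt||^2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()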
Both versions (pre-trained and not pre-trained) were unrolled in the same conditions. Table 6 shows the training parameters for each of the different tasks. For all 4 cases, we trained over the DIV2K dataset [1] of 800 images, over 600 epochs, by randomly taking 48x48 patches from the dataset. In addition, we include a comparison with the Total Deep Variation (TDV) method [21], which also performs unrolled optimisation where the network represents the gradient of the regularization function. In [21], the network is not pre-trained. Instead, the training starts with a small number of unrolled iterations N = 2, and N is incremented every 700 epochs. Note that the authors originally trained the TDV network for N = 10 iterations of an unrolled proximal gradient descent algorithm. However, for a fair comparison with our approach, we re-trained it with N = 6 iterations of simple unrolled gradient descent. Tables 7 and 8 show the PSNR results [dB] for both networks (pre-trained and not pre-trained) as well as the TDV, for super-resolution and deblurring respectively, on Set5, Set14 and BSDS100. As expected, using ∇φ for weight initialization improves the results of the unrolled gradient descent for all of the 4 tested cases (by up to 0.9 dB). Some visual comparisons of super-resolution with a magnifying factor of 3 and of deblurring of images corrupted by a Gaussian kernel of standard deviation 2.0 are respectively shown in Figures 7 and 8. The visual results confirm that a better reconstruction of the details is obtained when our pre-training is used. While the TDV results display even sharper edges, the PSNR remains lower because of exaggerated sharpness in comparison to the ground truth.
Table 7 Super-resolution results obtained with unrolled gradient descent (corrupted with a bicubic kernel and downsampled by a factor of 2 and 3), measured in terms of PSNR [dB] and evaluated on Set5, Set14 and BSDS100. Restoration obtained with unrolled gradient descent initialized with our pre-trained network ∇φ and without weight initialization, as well as TDV [21].
Table 8 Deblurring results obtained with unrolled gradient descent (corrupted with isotropic Gaussian kernels with standard deviation 1.6 and 2.0), measured in terms of PSNR [dB] and evaluated on Set5, Set14 and BSDS100. Restoration obtained with unrolled gradient descent initialized with our pre-trained network ∇φ and without weight initialization, as well as TDV [21].
4.3. Advantages of jointly training the denoiser and the regularizer.
First, we compare the performance of our regularizer's gradient network trained in both scenarios (jointly with the denoiser, or with the denoiser kept fixed). An example of deblurring results with the Plug-and-Play gradient descent is shown in Figure 9. It is clear that leaving the denoiser fixed to its pre-trained state degrades the performance of our network ∇φ: the reconstructed image in Figure 9 (a) remains more blurry than in Figure 9 (b) and it also presents colored fringe artifacts. On the other hand, when the denoiser is updated while training ∇φ, the convergence of the Plug-and-Play gradient descent is significantly improved, as well as the visual result, as shown in Figure 9 (b,c). A possible explanation for the worse results when fixing the denoiser in our training is that the pre-training of D σ does not guarantee that there exists a differentiable regularizer φ for which prox σ 2 φ = D σ for every value of σ. In other words, the assumption that D σ is a MAP Gaussian denoiser for a differentiable prior may not be satisfied.
However by jointly updating the denoiser with our network representing ∇φ, the modified denoiser better represents such a MAP Gaussian denoiser for the corresponding regularizer.
In a second experiment, we compare the performance of the updated denoiser and the original DRUNet when used in the Plug-and-Play ADMM algorithm. Figure 10 shows, for each ADMM iteration, the PSNR of the reconstructed images and the MSE of the difference between two consecutive iterations for both versions of the denoiser. We can observe that although the original DRUNet can obtain better PSNR performance when stopping the ADMM after a few iterations (see subfigure (a)), the algorithm does not converge, and may even strongly diverge after a sufficiently large number of iterations. On the other hand, our modified denoiser allows for a quick convergence of the Plug-and-Play ADMM.
Conclusion.
In this paper, we have proposed a novel framework for solving linear inverse problems. Our approach makes it possible to solve Plug-and-Play algorithms using gradient descent, where the gradient of the regularizer is required rather than its proximal operator. We have proved that it is mathematically possible to train a network that represents the gradient of a regularizer, jointly with a denoising neural network.
The results have demonstrated that the joint training of a regularizer's gradient network with the DRUNet has several advantages. First, the regularizing network can be used in a Plug-and-Play gradient descent algorithm and outperforms other generic approaches on different inverse problems such as super-resolution, deblurring and pixel-wise inpainting. Second, our network can also serve as a pre-training strategy for unrolled gradient descent and yield a significant improvement. Lastly, the joint training of the denoiser with the regularizing network makes the former better match the definition of a proximal operator compared to the original pre-trained DRUNet. | 2022-05-02T01:15:44.097Z | 2022-01-01T00:00:00.000 | {
"year": 2022,
"sha1": "eae97160b8116e63fb1ca1129a56b7716d177c84",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "eae97160b8116e63fb1ca1129a56b7716d177c84",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Engineering"
]
} |
37259516 | pes2o/s2orc | v3-fos-license | Self-Oscillating Structural Polymer Gels
Self-oscillating polymer gels have become a distinguished class of smart soft materials. Here we fabricated and demonstrated a self-oscillating structural gel network incorporating the Belousov-Zhabotinsky (BZ) reaction. The structural polymer gel oscillates at a macroscopic level with remarkably faster kinetics compared to a normal gel of similar chemical composition. The structural polymer gel also displays a larger oscillation amplitude compared to the normal gel because of the increased diffusion of fluids surrounding the gel particles. This type of structural polymer gel can be harnessed to provide novel and feasible applications in a wide variety of fields, such as drug delivery, nanopatterning, chemical and biosensing, and photonic crystals.
Introduction
The Belousov-Zhabotinsky (BZ) reaction, which is a well-known nonlinear dynamic chemical system, has attracted much attention due to their fascinating phenomena [1,2].The reaction includes a series of metalion-catalyzed oxidative reactions of organic substrates, such as malonic acid, by an acidified bromate solution.Recently, the BZ reaction has been used to induce mechanical oscillations in polymeric systems to mimic biological materials [3,4].In particular, Yoshida and his co-workers have developed a self-oscillating cross-linked gel composed of stimuli-responsive poly-(N-isopropylacrylamide) (PNIPAM) polymer with grafted Ru(bipy) 3 moiety as the BZ catalyst [5,6].The BZ reaction generates autonomous and rhythmical redox oscillations from the oxidized Ru(III) state to the reduced Ru(II) state, which induced a periodic volume oscillation in the PNIPAM gel when the gels are immersed in an aqueous acidic solution containing the substrates for the BZ reaction [7].Due to the periodic mechanical oscillations, a novel biomimetic self-walking gel was developed by Yoshida et al. [8].Directional movement of gel is produced by asymmetrical swelling-deswelling of the PNI-PAM-co-Ru(bipy) 3 gels.
An approach was also developed recently for simulating chemo-responsive gels that exhibit not only large variations in volume but also alterations in shape [9]. Through this approach, oscillating gels undergoing the BZ reaction were simulated, which showed that the formation of the wave pattern depends on the aspect ratios of the sample. Such self-oscillating polymer gels have become a distinguished class of soft materials, which are anticipated for new-concept applications such as self-beating pacemakers and drug delivery systems synchronized with human circadian rhythms [10][11][12]. Such advanced materials, especially stimuli-responsive polymers and gels, have been actively investigated due to the unique nature of their volume change in response to external stimuli [13]. These self-oscillating gel particles have various potential industrial applications, such as drug delivery [14], chemical and biosensing [15], fabrication of photonic crystals [16] and absorbents [17].
In our previous studies, we have prepared PNIPAM gel particles of uniform size using a two-step synthesis with a covalently bound Ru(bipy) 3 catalyst, modifying the process used by Yoshida and his co-workers [18,19]. The volume responses of PNIPAM particles coupled to the BZ reaction as a function of substrate concentrations and temperature have been characterized [19]. It was shown that the automatic mechanical oscillation of the gel particles was induced by the covalently coupled BZ reaction. The chemical oscillations of the BZ reaction induce the periodic charge oscillation of the Ruthenium ions, which leads to the periodic change of the hydrophilicity of the PNIPAM polymer network, and hence the volume oscillation of the microgel particles. In this swell/shrink process, water diffuses either from the surface towards the centre of the gel, or vice versa.
However, the slow response and small amplitude of the responsive PNIPAM polymer gels limit their further use [20], because the swell/shrink rate is inversely proportional to the square of the distance that the water molecules have to travel [21]. Therefore, gels with a fast response and large amplitude are desired for feasible applications. If we can improve the frequency and amplitude of the self-oscillation, the autonomous gel systems will have a much broader range of applications. One idea to solve this problem is to raise the temperature in the system so that the BZ reaction rate increases. However, the conventional-type self-oscillating gel usually shrinks at temperatures above its LCST (lower critical solution temperature) [4]. Recently, Cho et al. fabricated a fast-responsive structural gel scaffold, where microgel particles assembled either through bridging or depletion interactions to yield a structure that responds to the external stimulus at a macroscopic level in much shorter times compared to a bulk polymer gel of similar characteristics [22]. In this work, we further develop this technique to prepare a self-oscillating structural PNIPAM gel that contains a catalyst for the BZ reaction to achieve an autonomous oscillation with both high frequency and large amplitude, without the temperature limitation.
Synthesis of PNIPAM-co-Ru(bipy) 3 Gels
Surfactant-free PNIPAM microgels have been prepared by a precipitation polymerization of N-isopropylacrylamide (NIPAM).In a typical procedure, NIPAM (4 g), allylamine (250 μL), Ru(vmbipy)(bipy) 2 PF 6 (0.200 g), N, N-methylenebisacrylamide (0.180 g), 2,2'-azobisisobutyronitrile (AIBN) (0.166 g) and water (80 mL) were added in a 200 ml round-bottomed flask reactor equipped with a stirrer, a condenser, a nitrogen inlet, and a thermometer.The contents were heated to 70˚C under a nitrogen atmosphere and at a stir rate of 300 rpm.The polymerization was continued for 6 hours.The microgel dispersion was then filtered through a filter (~100 μm mesh size) to remove some aggregates generated during polymerization and then, cooled down to room temperature.
Synthesis of Clusters of PNIPAM-co-Ru(bipy) 3 Gels
Clusters of microgels were made by linking microgel particles covalently using glutaraldehyde. To the microgel suspension prepared above, 0.002 M negatively charged polyacrylic acid solution (Mol. Wt. 400,000) was added to induce ionic attractions with the positively charged Ruthenium bipyridine and allylamine. The dispersion was heated to 60˚C, leading to a phase separation of the microgel from the continuous aqueous phase. Then, approximately 1 volume % of glutaraldehyde aqueous solution (50% w/v, Aldrich) was added to chemically link amide groups on the surface of the microgel particles in contact with each other. After a 1 hour reaction at 60˚C, the resulting gel scaffolds were washed with DI water. This cluster of microgel particles with interconnected PNIPAM gels was named the "structural gel" and its self-oscillating behavior was compared with that of a "normal gel" with no interconnected gel network. The "normal gel" was prepared following the preparation method by Maeda et al. [23]. For comparative studies, both the structural gel and the normal gel were cut into the same dimensions (5 × 1 × 1 mm) and the swell/shrink response was measured at both the oxidized and reduced states of the Ruthenium ion. The synthetic scheme of the structural gel using PNIPAM particles is shown in Scheme 1.
Synthesis of Suspensions of PNIPAM-co-Ru(bipy) 3 Gel Particles
The suspensions of normal gel particles and structural gel particles were prepared as below.The gel particles of PNIPAM with Ru(bipy) 3 were synthesized by emulsion polymerization as follows: purified NIPAM (3.80 g), Ru(bipy) 3 (0.422 g), allylamine (0.200 g), sodium dodecylbenzene sulfonate (0.700 g), N, N-methylenebisacrylamide (0.700 g), and AIBN (0.169 g) were added in 200 mL of H 2 O.The suspension was stirred at 60˚C for 8 hours under the N 2 -flow condition.The resulting mixture was purified through dialysis against pure water for 14 days.To the 100 mL of the above suspension, 0.002 M of polyacrylamide solution and 5 volume % of glutaraldehyde were added at room temperature.The mixture was stirred at 250 rpm for 48 hours under N 2 -flow condition to form the suspension of structural gel particles.
Results and Discussion
For the self-oscillation study, the structural gels were dispersed in aqueous solution containing the reactants of the BZ reaction at a fixed concentration: Malonic acid (MA) 0.30 M, sodium bromate (NaBrO 3 ) 0.75 M, and nitric acid (HNO 3 ) 1.0 M. It was observed that in the oxidized state of the metal catalyst Ruthenium, the equilibrium volume of the gel was larger than that in the reduced state. Initially, the metal catalyst Ru(II) is in the reduced state, and the structural gels remain in the shrunken state. With the introduction of BZ substrates, after some time, and during the rest of the induction time, Ru(II) switches to its oxidized form, Ru(III). In this state, the structural gels become swollen. This oscillation behavior is caused by the significantly different solubility of the Ru(bipy) 3 moiety in its oxidized and reduced states. The reduced Ru(bipy) 3 moiety in the gel is strongly hydrophobic, while the oxidized Ru(bipy) 3 part in the gel is strongly hydrophilic [4].
For the suspension of PNIPAM structural gel particles, under constant temperature and stirring conditions together with BZ substrates, the time course of transmission was monitored by use of UV-vis spectroscopy.The transmittance of the mixed solutions in the 190 -856 nm wavelength range was measured continuously in an episodic data capture mode.The transmittance is correlated with the volume change (i.e., gel particles shrink when transmittances decrease).The oscillations in transmittance at 460 nm and 685 nm reflect the chemical oscillation of the Ru(bipy) 3 catalyst between the Ru(II) and Ru(III) states, whereas the oscillations at 570 nm are attributable to the conformation change of the PNIPAM polymers [19].Figure 2 shows the normalized transmittance of the structural gel as a function of temperature which is measured under different oxidization states of the Ruthenium.Compared to the suspension of normal gel particles, the suspension of structural gel particles displays much higher swelling and shrinkage.The interconnected structured gel particles shrink to ~40% of their original size above their LCST (lower critical solution temperature) whereas the normal gel particles shrink to ~55%.It should be noted that the particle size of the suspension is 160 nm for the normal gel particles and about 700 nm for the structural gel particles.Since several normal gel particles are clustered together in the latter, oscillations are expected to be higher in structural gel suspensions.
The response of the prepared structural gel is compared with that of the normal gel at room temperature and shown in Figure 3. The structural gel exhibits a remarkable improvement in response dynamics. As mentioned above, both structural and normal gels were cut into the dimensions of 5 × 1 × 1 mm. The shrinkage of the gels in length with time was monitored by placing them in a solution of 1.0 M nitric acid containing either Ce(IV) or Ce(III), which correspond to the oxidized or reduced states of the Ruthenium. The degree of swelling in the oxidized state is higher than the swelling in the reduced state for both gels due to the increased Donnan osmotic pressure. It is observed that the structural gel responds 10 times faster than the normal gel. This remarkably faster kinetics may arise from the smaller dimensions of the gel particles that form the three-dimensional gel networks. The smaller size speeds up the diffusion of the fluid through the gel networks. In the case of normal gels, the cross-linked network reduces the diffusion of surrounding fluids and hence the gel exhibits a slow response.
The oscillation frequency and amplitude changes of the PNIPAM gel particles in the structural gel and normal gel suspensions are compared under the same BZ reaction conditions as mentioned above and shown in Figure 4. The Ru(bipy) 3 complex has different absorption spectra in the reduced Ru(II) state and the oxidized Ru(III) state as an inherent property. The upper two oscillations (at wavelengths of 460 nm and 685 nm) reflect the chemical oscillation of the Ru(bipy) 3 catalyst between the Ru(II) and Ru(III) states, whereas the bottom oscillations (at the wavelength 570 nm) are attributable to the conformation change of the PNIPAM polymers [19]. It should be noted that the upper two oscillations in Figures 4(a) and (b) are mostly out of phase. However, the shape of the oscillation for structural gel particles is more pulse-like (non-sinusoidal) compared to normal gel particles. As Figure 4 shows, particles in the structural gel oscillate at a rate twice as fast as that of the gel particles prepared by the normal polymerization method. The oscillation amplitude of particles in the structural gel is also much larger than that of normal gel particles.
Based on these special properties obtained from the experiments (fast response kinetics with large amplitudes), more possible applications by the use of these fast responsive and large amplitude polymer gels could be identified, such as reduced detection time of chemical and bio sensors, micro-fluidic valves that operate with faster open/close mechanisms, mechanical switches, and expansion or contraction of bio-related mechanisms such as rapid artificial muscle movements, faster heart beat in the case of artificial heart in which expansions or contractions will occur quickly to drive a mechanism.However, much more refined systematic work should be performed to explain the phenomena in this complex system in the future.First, the different mechanical oscillations may be related to the space distribution of the catalyst even at the macroscopic level and the presence of other moieties in structural polymer gels (i.e., some amine moieties).Second, the strongly confined chemical systems such as the gel network in this work may significantly affect the effective kinetics.Last but not least, the gel suspensions are similar to segregated reactive fluids where micro-mixing problems may apply, which suggest that the chemical dynamics could be affected by differences in the segregated fluids [24].
Conclusion
In summary, we have succeeded in fabricating a self-oscillating structural polymer gel coupled to the BZ reaction, which exhibits fast response kinetics with large amplitudes. There are two keys to our approach: one is the incorporation of the BZ chemistry into responsive polymer gels to power a mechanical action; the other is the use of microgel particles that can be assembled through bridging interactions to yield a structure that oscillates at a macroscopic level. Particles in the structural gel oscillate twice as fast as the gel particles prepared by the normal polymerization method. The approach we have presented in this study offers a new way of fabricating controllable and functional self-oscillating polymer gels with excellent responsive capabilities over a wide range.
Figure 2. Large volume oscillations of PNIPAM-co-Ru(bipy) 3 structural gels in comparison to normal gels as a function of temperature.
Figure 3. Response time of PNIPAM-co-Ru(bipy) 3 structural gels in both oxidized and reduced states of the Ruthenium metal in comparison with the response time of normal gels.
Figure 4. Comparison of the oscillation frequency and amplitude between self-oscillating BZ active gels: (a) structural gels and (b) normal gels. Chemical oscillations were observed at 460 nm and 685 nm as shown in the green and blue lines, and mechanical oscillations were observed at 570 nm as shown in the red line. | 2017-11-07T15:12:21.131Z | 2013-05-21T00:00:00.000 | {
"year": 2013,
"sha1": "5e80127433eccd624e2f0365146cab6a11733550",
"oa_license": "CCBY",
"oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=31469",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "5e80127433eccd624e2f0365146cab6a11733550",
"s2fieldsofstudy": [
"Materials Science",
"Physics"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
267443097 | pes2o/s2orc | v3-fos-license | Insights into refractory chronic inflammatory demyelinating polyneuropathy: a comprehensive real-world study
Background Refractory chronic inflammatory demyelinating polyneuropathy (CIDP) is a challenging subset of CIDP. It does not respond well to immune therapy and causes substantial disability. A comprehensive understanding of its clinical profile, electrophysiological characteristics and potential risk factors associated with refractoriness remains to be further elucidated. Methods Data in this cross-sectional study was collected and reviewed from the Huashan Peripheral Neuropathy Database (HSPN). Included patients were categorized into refractory CIDP and non-refractory CIDP groups based on treatment response. The clinical and electrophysiological characteristics were compared between refractory and non-refractory CIDP groups. Potential risk factors associated with refractory CIDP were explored with a multivariate logistic regression model. Results Fifty-eight patients with CIDP were included. Four disease course patterns of refractory CIDP are described: a relapsing–remitting form, a stable form, a secondary progressive form and a primary progressive form. Compared to non-refractory CIDP patients, refractory CIDP exhibited a longer disease duration (48.96 ± 33.72 vs. 28.33 ± 13.72 months, p = 0.038) and worse functional impairment (MRC sum score, 46.08 ± 12.69 vs. 52.81 ± 7.34, p = 0.018; mRS, 2.76 ± 0.93 vs. 2.33 ± 0.99, p = 0.082; INCAT, 3.68 ± 1.76 vs. 3.03 ± 2.28, p = 0.056, respectively). Electrophysiological studies further revealed greater axonal impairment (4.15 ± 2.0 vs. 5.94 ± 2.77 mv, p = 0.011, ulnar CMAP) and more severe demyelination (5.56 ± 2.86 vs. 4.18 ± 3.71 ms, p = 0.008, ulnar distal latency, 7.94 ± 5.62 vs. 6.52 ± 6.64 ms, p = 0.035, median distal latency; 30.21 ± 12.59 vs. 37.48 ± 12.44 m/s, p = 0.035, median conduction velocity; 58.66 ± 25.73 vs. 42.30 ± 13.77 ms, p = 0.033, median F-wave latency), compared to non-refractory CIDP. Disease duration was shown to be an independent risk factor for refractory CIDP (p < 0.05, 95%CI [0.007, 0.076]). Conclusion This study provided a comprehensive description of refractory CIDP, addressing its clinical features, classification of clinical course, electrophysiological characteristics, and prognostic factors, effectively elucidating its various aspects. These findings contribute to a better understanding of this challenging subset of CIDP and might be informative for management and treatment strategies.
Introduction
CIDP is an immune-mediated radiculoneuropathy, characterized by proximal and distal limb weakness and numbness, and absent or reduced tendon reflexes in all four limbs (1,2). Although most patients respond well to first-line immune treatment, including immunoglobulin therapy [intravenous (IVIg) or subcutaneous Ig], corticosteroids, or therapeutic plasma exchange (TPE), 20-30% of CIDP patients do not adequately respond to these therapies, and around 6 to 15% of patients remain refractory to all treatment (3)(4)(5).
The existing literature lacks a comprehensive description of the clinical features, electrophysiological findings and overall prognosis of this subset of CIDP patients (6)(7)(8)(9). Moreover, the risk factors for patients being refractory to treatment are not completely clear. Traditionally, CIDP variants (such as multifocal CIDP), insidious onset, a progressive course, central nervous system involvement, and irreversible axonal degeneration have been considered as factors contributing to refractoriness in CIDP (6,10,11). Previous studies on refractory CIDP had included patients with chronic immune sensory polyradiculopathy (CISP) and/or IgG4 antibody related autoimmune nodopathy. Recent studies have revealed that autoimmune nodopathy, formerly considered a subset of CIDP and accounting for approximately 10% to 20% of the total cases, clinically presents as refractory CIDP (12,13). In the 2021 European Academy of Neurology/Peripheral Nerve Society (EAN/PNS) guideline (14), autoimmune nodopathy and CISP were not classified as CIDP. Hence, the risk factors as well as a complete clinical profile for refractory CIDP under the new guideline are completely unknown.
In this study, we strictly applied the 2021 EAN/PNS clinical criteria for CIDP to a cohort of neuropathy patients sourced from a national rare disease center database.Our primary objectives were to describe the clinical presentation, disease course form, as well as electrophysiological characteristics of refractory CIDP.Additionally, we aimed to investigate potential risk factors associated with refractory CIDP.Through this research, we aimed to expand our understanding of this challenging subset of CIDP and contribute to improving management and treatment strategies.
Huashan peripheral neuropathy database
The data for the present study were from the HSPN database of the National Rare Disease Center, Huashan Hospital, Shanghai, China. In the HSPN database, patients with "suspected CIDP" were defined as: (1) subjects that fulfilled the required clinical features of CIDP, including the typical form or any clinical variant; (2) subjects that demonstrated demyelination features based on electrophysiological evaluation, although strict adherence to the criteria outlined in the EFNS/PNS Guidelines (15) (prior to July 2021) or the updated EAN/PNS Guidelines (after July 2021) was not mandatory (14); and (3) subjects in whom other etiologies that could cause CIDP were excluded at the time of enrollment into the HSPN database. The inclusion of all such clinical cases may, therefore, lead to an erroneously high sensitivity estimate for the disease overall. Ethical approval was obtained from the Ethics Committee of Huashan Hospital, Fudan University, and the study was performed in accordance with the ethical standards laid down in the 1964 Declaration of Helsinki and its later amendments.
Study population
Data from patients with "suspected CIDP" was retrospectively retrieved from the HSPN database.All patients with "suspected CIDP" that had complete medical data underwent a detailed clinical history including time of onset, disease duration, distribution and progression of signs and symptoms including weakness, sensory symptoms, gait disturbance, ataxia, pain, tremor, cranial nerve involvement, autonomic dysfunction and treatment response.The results of examinations, including cerebrospinal fluid (CSF) analysis, nerve ultrasound or brachial/ lumbosacral plexus MR examination, nerve conduction studies performed at baseline or during the course of the disease, somatosensory evoked potentials (SSEP) and sural nerve biopsy, were reported when available.Albuminocytological dissociation in the CSF analysis was defined as an increased protein level (>0.60 g/L) in the absence of elevated white cell count (<8 cells/ μL) (16).
Neurological functional impairment and subjective assessment before and after each treatment were carefully reviewed. In our study, patients were routinely followed up every 3-6 months. Response to treatment was defined as an improvement that was objectively confirmed by the following clinical scales: (1) an increase of at least 4 points on the Medical Research Council sum score (MRC sum score, range 0-60); or (2) a decrease of at least 1 point on the Inflammatory Neuropathy Cause and Treatment disability score (INCAT, range 0-10); or (3) a decrease of at least 1 point on the modified Rankin Scale (mRS, range 0-5).
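For clarity, the response definition above can be expressed as a simple rule; a sketch in Python follows, with variable names chosen here purely for illustration.

def responded(mrc_before, mrc_after, incat_before, incat_after, mrs_before, mrs_after):
    # Treatment response as defined above: MRC sum score up by >= 4,
    # or INCAT down by >= 1, or mRS down by >= 1.
    return (mrc_after - mrc_before >= 4
            or incat_before - incat_after >= 1
            or mrs_before - mrs_after >= 1)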
Two senior neuromuscular specialists carefully reviewed the patients' medical history and nerve conduction studies. Firstly, patients with "suspected CIDP" who met the 2021 EAN/PNS Guidelines and had a disease duration of more than 6 months were included in this study. Within this included population, patients with CIDP were further divided into two groups, the refractory CIDP group and the non-refractory CIDP group. Refractory CIDP was defined as follows (17, 18): (1) no response to at least two of three first-line treatments (corticosteroids, IVIg, or TPE) or relapse during drug tapering off; or (2) dependence on at least two of three first-line treatments simultaneously for maintenance treatment; or (3) no response to at least one of three first-line treatments combined with one of the immunosuppressive drugs (rituximab, azathioprine, mycophenolate mofetil, methotrexate, fingolimod or cyclophosphamide). CIDP patients not fulfilling this definition were considered non-refractory CIDP and were included for comparison.
Furthermore, within the refractory CIDP group, we specifically focused on patients who had a clinical follow-up duration of over 1 year and had more than three follow-up visits throughout their disease course. Through this stringent filtering process, we identified a subgroup of patients for whom we thoroughly reviewed and described the different disease course patterns. We define the relapsing-remitting form as a condition where patients experience symptomatic improvement with the initiation of treatment, followed by a subsequent exacerbation of symptoms upon cessation of therapy. This pattern of response and deterioration occurs periodically, leading to fluctuating clinical symptoms over time. The criteria for defining improvement and exacerbation are based on changes in clinical scores, as detailed previously. Further, we delineate a stable course as one where the patient's condition neither improves nor deteriorates, maintaining a consistent plateau post-treatment. In contrast, a progressive course is defined by a continuous decline in clinical symptoms despite therapeutic interventions. This includes the 'primary progressive form', where deterioration is persistent from onset, and the 'secondary progressive form', where clinical symptoms exacerbate following an initial phase of improvement.
At the time of our study inclusion, patients with an alternative diagnosis for the neuropathy or patients with concomitant hematological disorders associated with monoclonal gammopathy were excluded.Patients with antibodies against nodal/paranodal cell adhesion molecules (contactin-1 [CNTN1], neurofascin-155 [NF155], contactin-associated protein 1 [Caspr1], and neurofascin isoforms NF140/186) and patients with CISP were excluded.We employed a cell-based assay method for the initial screening of node/paranodal antibodies, followed by the rat teased fiber immunofluorescence assay for confirmation, as detailed in our previous publications (19,20).Additionally, patients with central combined with peripheral demyelination (CCPD) were also excluded.
Statistical analysis
Categorical variables are described using frequencies and percentages, while continuous variables are described using mean and standard deviation (SD). Comparisons between the refractory CIDP and non-refractory CIDP groups were performed using the Chi-square test or Fisher's exact test, and the t-test or Wilcoxon rank sum test, as appropriate. To assess the relationship between refractory status and various clinical indicators, we initiated our analysis with univariate analyses, incorporating those variables with p-values less than 0.05 into the binary regression analysis. In order to evaluate multicollinearity, we calculated the Variance Inflation Factor (VIF) for each variable. A VIF value exceeding 5 is indicative of the presence of multicollinearity. Ultimately, we performed a logistic regression analysis, excluding variables with VIF greater than 5. During this process, we handled missing values by dropping the corresponding observations. We calculated the coefficients, standard errors, 95% confidence intervals (95% CI) and p-values of the independent variables. Analyses were performed and figures were generated with R (version 4.2.2) and Python 3.10. All tests are two-tailed, and the significance level was set to 0.05.
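To illustrate the workflow described above (univariate screening, VIF check, then binary logistic regression), a minimal Python sketch is given below; the file and column names are hypothetical placeholders, and the packages shown are not necessarily the ones used in the original analysis.

import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

# Hypothetical dataset: one row per patient, 'refractory' coded as 1/0
df = pd.read_csv("cidp_cohort.csv").dropna()   # missing values dropped, as described above
predictors = ["disease_duration", "mrc_sum_score", "ulnar_cmap",
              "median_dml", "median_mcv"]      # placeholder names

# Variance inflation factors; a VIF above 5 indicates multicollinearity
X = sm.add_constant(df[predictors])
vif = pd.Series([variance_inflation_factor(X.values, i) for i in range(X.shape[1])],
                index=X.columns)
keep = [p for p in predictors if vif[p] <= 5]

# Binary logistic regression on the retained predictors
model = sm.Logit(df["refractory"], sm.add_constant(df[keep])).fit()
print(model.summary())              # coefficients, standard errors and p-values
print(model.conf_int(alpha=0.05))   # 95% confidence intervals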
Study population selection
Among the 182 patients labeled as "suspected CIDP" in the HSPN database from April 2017 to March 2023, 142 patients were included, all of whom had available nerve conduction study data and met the EAN/PNS electrophysiological criteria.Of these confirmed CIDP population, 41 patients were excluded, including 30 patients with autoimmune nodopathy (18 with anti-NF155, 8 with anti-NF186, 3 with anti-CNTN1, and 2 with anti-Caspr1), 2 patients with CISP, 3 patients with CCPD and 6 patients with concomitant hematological disorders associated with monoclonal gammopathy.Among the 101 patients with CIDP, 36 patients were further excluded because of incomplete clinical data or loss to follow-up, or not fulfilling study inclusion criteria.Furthermore, 7 patients were excluded because of not fulfilling our inclusion criteria.Fifty-eight patients were included in the final study population (Figure 1).
Clinical characteristics of refractory CIDP
In our study, the demographic and clinical features of 25 refractory CIDP patients at their initial consultation in our hospital were summarized and compared with those of patients with non-refractory CIDP.This comparison includes both patients who had not previously received any treatment and those who had undergone treatment at other institutions (Table 1).There were 20 males (80.0%) in the refractory CIDP group, with a mean age at symptom onset of 44.15 ± 18.29 years.According to the 2021 EAN/PNS guideline, 14 (56.0%)patients were typical CIDP and 11 patients were CIDP variants (7 distal CIDP, 1 multifocal CIDP, 1 focal CIDP, 1 motor CIDP and 1 sensory CIDP).Most of the refractory CIDP patients (72.0%) had a chronic onset.The refractory CIDP group had a disease duration of 48.96 ± 33.72 months, significantly longer than that in non-refractory CIDP (28.33 ± 13.72 months, p = 0.038).Refractory CIDP patients exhibited a more severe functional impairment compared with non-refractory CIDP patients (MRC sum score, 46.08 ± 12.69 vs. 52.81± 7.34, p = 0.018; mRS, 2.76 ± 0.93 vs. 2.33 ± 0.99, p = 0.082; INCAT, 3.68 ± 1.76 vs. 3.03 ± 2.28, p = 0.056, respectively).There was no difference in treatment response to IVIg between these two groups.However, non-refractory CIDP patients had a better response to glucocorticoid and TPE (Table 1).Other demographic and clinical features did not demonstrate statistically significant differences.In our analysis, we specifically examined the prevalence of comorbidities such as diabetes and kidney disease, which are known to contribute to peripheral neuropathy.Our data indicated no statistically significant difference in the prevalence of these comorbidities between refractory and non-refractory CIDP patients, as detailed in the table provided (Supplementary Table S1).
Electrophysiological characteristics of refractory CIDP
Electrophysiological studies were performed at the patients' initial consultation. Nerve conduction characteristics of refractory CIDP patients are summarized and compared with those of non-refractory CIDP patients in Table 2. In motor nerve studies, the refractory CIDP group showed a significantly lower ulnar compound muscle action potential (CMAP) (4.15 ± 2.0 vs. 5.94 ± 2.77 mV, p = 0.011), longer ulnar and median distal latencies (5.56 ± 2.86 vs. 4.18 ± 3.71 ms, p = 0.008; 7.94 ± 5.62 vs. 6.52 ± 6.64 ms, p = 0.035, respectively), and a decreased median conduction velocity and a longer F-wave latency (30.21 ± 12.59 vs. 37.48 ± 12.44 m/s, p = 0.035; 58.66 ± 25.73 vs. 42.30 ± 13.77 ms, p = 0.033) compared to the non-refractory group. In the sensory nerve conduction study, refractory CIDP showed a lower conduction velocity in the ulnar nerve compared to the non-refractory group (41.91 ± 9.14 vs. 49.21 ± 10.57, p = 0.037). No other statistically significant differences were found in the other parameters and nerves.
Prognostic factors for evolving to refractory CIDP
For the multivariate logistic regression analysis, the independent variables included disease duration, MRC sum score, ulnar nerve CMAP, median motor nerve distal latency and median motor nerve conduction velocity. In assessing the impact of these independent variables on the risk of becoming refractory CIDP, disease duration had a coefficient of 0.0411 (p = 0.020), suggesting a significant influence on being refractory. The ulnar nerve CMAP had a regression coefficient of −0.2963 (p = 0.056), suggesting a borderline significant influence on evolving into refractory CIDP. However, MRC sum score, median motor nerve distal latency and median motor nerve conduction velocity did not significantly affect the outcome, as summarized in Table 3.
Discussion
Refractory CIDP is a challenging subset of CIDP and a comprehensive understanding of its clinical profile remains to be further elucidated.Our study describes the clinical and electrophysiological features of patients with refractory CIDP.Compared to non-refractory CIDP patients, refractory CIDP patients present with more severe clinical neurological functional impairment and peripheral nerve damage demonstrated by electrophysiological studies.Additionally, disease duration can be considered as an independent prognostic risk factor for progressing to refractory CIDP.Importantly, four disease course patterns of refractory CIDP are described: a relapsing-remitting form, a stable form, a secondary progressive form and a primary progressive form.The concept of refractory CIDP has been discussed for several years, but its definition remains inconsistent.In previous studies, three primary definitions have been discussed: (1) patients with poor treatment outcomes based on neurologists' personal experiences and perspectives regarding treatment outcomes (6,11), (2) patients with CIDP who do not respond to one of the three first-line therapies or are unable to continue these treatments due to adverse effects (10,21,22) or (3) patients with CIDP who do not respond to two of the three first-line or fail to respond to a combination of first-line and second-line therapies (8,17,18,23,24).To comprehensively describe the clinical profile of refractory CIDP, we adopt the third definition, which is more concise and objective.Moreover, to identify the specific characteristics of refractory CIDP, we excluded patients with autoimmune nodopathy, CISP, CCPD and monoclonal gammopathy related neuropathy from our study.Our findings showed that under the new background of the 2021 EAN/PNS guideline and our definition of refractory CIDP, 43.1% of CIDP patients presented as refractory CIDP, a significantly higher proportion compared to previously reported (3).
Refractory CIDP patients more often had a longer disease duration from symptom onset to diagnosis, namely diagnostic delay.In particular, longer disease duration has been demonstrated as an independent risk factor for CIDP patients transitioning into a refractory state.Diagnostic delay is a common issue in CIDP.Studies have shown that there is an average delay of 12 to 40 months between the onset of symptoms and diagnosis (7, 25).This delay often results in inappropriate treatment being administered too late.A delay in diagnosis can cause axonal injury to accumulate, which can lead to increased disability that may be irreversible even with treatment.Additionally, compared with non-refractory CIDP, refractory CIDP had more severe functional impairment at the inclusion entrance, as reflected by the lower MRC sum score.This could potentially be linked to a delay in diagnosis.Therefore, it is crucial to diagnose the condition quickly and start the treatment early to avoid irreversible disability.
In this research, electrophysiological studies provided further confirmation of a correlation between the severity of peripheral nerve impairment, characterized by more extensive demyelination and pronounced axonal loss, and the refractory nature of CIDP.It has also been established that axon loss is a significant long-term adverse prognostic factor in CIDP (7, 11), as evidenced by a greater decrease in CMAP demonstrated by nerve conduction study and the presence of axon loss in nerve biopsy specimens (6,26,27).Furthermore, our study has identified that severe demyelinating lesions serve as significant prognostic risk factors for adverse outcomes.It is widely acknowledged that demyelinating lesions could cause secondary axonal damage.As the disease progresses, if disease progression is not adequately controlled, such secondary damage may lead to irreversible axonal impairment.
This study aims to establish a more comprehensive foundation for precision treatment by identifying distinct disease course patterns within refractory CIDP.These include the relapsingremitting, primary progressive, secondary progressive, and stable patterns.The relapsing-remitting form accounted for approximately half of the patients with refractory CIDP.The most striking characteristic in this group is that patients' functional disability can fluctuate between normal and reduced levels, resembling the disease course pattern observed in relapsing-remitting multiple sclerosis (28).However, we observed that the level of disability during the last follow-up in the remission stage was more severe than that in the initial remission stage.This suggested that frequent relapses may lead to accumulating injuries, eventually resulting in irreversible impairment.
The stable group poses a significant challenge in clinical practice, as it becomes difficult to determine the true effectiveness of the ongoing treatment.Although it has a relatively stable condition, the effectiveness of current treatment or the possibility of responding to further attempted treatment could not be certainly identified.This uncertainty makes it challenging to decide whether to suspend the current treatment regimen and explore alternative therapies or to continue with the present medication until the desired effectiveness is observed.
Three patients presented with a primary progressive disease pattern and the diagnosis was carefully verified and confirmed.Previous studies have also reported that the progressive course pattern accounted for 6.7% of CIDP patients (6).Given the continued progression experienced by patients with a primary progressive or secondary progressive course, it is imperative that these individuals receive highly effective treatment in the early stage.This proactive approach is aimed at mitigating the potential for further axonal damage.
Our study has certain limitations. Firstly, the sample size was relatively small. As a retrospective study, there may be inherent biases in the clinical data. In the HSPN database, patients with a long-term and effective response might not attend regular follow-up and could be lost, while patients with a poor treatment outcome have high compliance and are more likely to attend regular follow-up. Thus, the high proportion of refractory CIDP in our study may result from such a selection bias. Furthermore, it should be noted that in China IVIg has limited availability and high cost, making it difficult for many patients to access or afford adequate treatment courses. Consequently, CIDP patients receiving IVIg as therapy often cannot afford to undergo a sufficient treatment course. This limitation often leads to rapid relapse and worsening of symptoms, contributing to the refractory nature of the disease. Additionally, this study did not explore the dynamic evolution of the clinical course and the associated conversion relationships. Nevertheless, this research provides insights and presents a relatively comprehensive clinical profile of refractory CIDP. It expands our understanding of the disease's clinical manifestations within the context of the 2021 EAN/PNS guideline. Despite the limitations, our study provides a more accurate reflection of the refractory characteristics of CIDP.
Conclusion
This study provided a comprehensive description of refractory CIDP, addressing its clinical features, classification of clinical course, electrophysiological characteristics, and prognostic factors, effectively elucidating its various aspects.These findings contribute to a better understanding of this challenging subset of CIDP and might be informative for management and treatment strategies.
FIGURE 1 Flowchart of patient cohort enrolment and exclusion.
FIGURE 2 Schematic diagram of clinical course pattern of refractory CIDP.
TABLE 1 Clinical and laboratory characteristics of refractory CIDP patients.
TABLE 2 Electrophysiological characteristics of refractory CIDP patients. CI, confidence interval; CMAP, compound muscle action potential; CV, conduction velocity; DML, distal motor latency; MRC, Medical Research Council Sum Score. The value '0.020' means the p value was smaller than 0.05. | 2024-02-06T17:47:01.015Z | 2024-01-31T00:00:00.000 | {
"year": 2024,
"sha1": "0a9ac4e1df11bba9612e69f7186154b331ff3b2d",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fneur.2024.1326874/pdf?isPublishedV2=False",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b71ec713f8c42de1dc7f28ddc047931b0551da58",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
118925228 | pes2o/s2orc | v3-fos-license | Stability of a rotating asteroid housing a space station
Today there are numerous studies on asteroid mining. They elaborate on selecting the right objects, prospecting missions, potential asteroid redirection, and the mining process itself. For economic reasons, most studies focus on mining candidates in the 100-500m size-range. Also, suggestions regarding the design and implementation of space stations or even colonies inside the caverns of mined asteroids exist. Caverns provide the advantages of confined material in near-zero gravity during mining and later the hull will shield the inside from radiation. Existing studies focus on creating the necessary artificial gravity by rotating structures that are built inside the asteroid. Here, we assume the entire mined asteroid to rotate at a sufficient rate for artificial gravity and investigate its use for housing a habitat inside. In this study we present how to estimate the necessary spin rate assuming a cylindrical space station inside a mined asteroid and discuss the implications arising from substantial material stress given the required rotation rate. We estimate the required material strength using two relatively simple analytical models and apply them to fictitious, yet realistic rocky near-Earth asteroids.
INTRODUCTION
Sustaining human life on a station built inside a mined asteroid is a task which will require expertise in many fields. There needs to be air to breathe, water to drink, and the appropriate recycling systems, as well as food and light. Nevertheless, one of the most important prerequisites for a human body to stay healthy is gravity.
Taking plants into space and zero gravity is not as big a problem as taking a human body into this hostile environment. Plants adapt to zero gravity relatively easily, as shown by many experiments performed both on the ISS (International Space Station) and on the ground under artificial microgravity (for example, see Kitaya et al., 2000, 2001; Kiss et al., 2009).
A study on how much gravity is needed to keep the human body upright was performed by Harris et al. (2014). They found that the threshold level of gravity needed to influence a person's orientation judgment is about 15 % of the gravity on Earth's surface, which is approximately the gravity acting on the lunar surface. Martian gravity, 38 % of Earth's gravity, should be enough for astronauts to orient themselves and maintain balance.
As a consequence of the lack of experiments on the influence of reduced gravity on the human body, we adopt the value of 38 % of Earth's gravity (g_E) as the starting point for our theoretical approach. We assume that the rotation of the asteroid has to provide an artificial gravity of at least 0.38 g_E in order to sustain long-term healthy conditions for humans on the station. Present suggestions to tackle the challenge of providing sufficient gravity rely on habitats in rotating wheels or tori that create gravity: Grandl and Bazso (2013) suggest self-sustained colonies for up to 2000 people. Other studies somewhat vaguely mention augmenting the natural rotation with additional artificial rotation (Taylor et al., 2008). We elaborate on the latter and explore the feasibility and viability of creating "artificial" gravity for a habitat by setting the entire asteroid into rotation at a rate sufficient to generate the desired gravity.
We start with initial considerations regarding the required spin rate of a space station with sufficient artificial gravity (Sect. 2). Section 3 elaborates on the stress acting on the asteroidal hull and formulates two analytical models for the tensile and shear stresses as a function of the asteroid size, the dimensions of the space station, the required artificial gravity level, and the bulk density of the asteroid material. Section 4 applies our formulation to a near-Earth asteroid maximizing the usable area of the space station while observing maximum stress constraints. Finally, conclusions and an outlook on further related research is given in Sect. 5.
INITIAL CONSIDERATIONS
Let's assume a cylindrical space station with height h_c and radius r_c as depicted in Fig. 1. Letting the cylinder rotate about its symmetry axis y with angular velocity ω will create a centripetal acceleration of a_c = ω² r_c acting on objects on the lateral surface. For a desired artificial gravity level g_c, the required rotation rate ω and rotation period T are then given by ω = √(g_c / r_c) and T = 2π/ω = 2π √(r_c / g_c). Considering a size range of r_c = 50 … 250 m, for example, the rotation rates would need to be between 1.17 and 2.6 rpm (rotations per minute) to create the artificial gravity necessary for sustaining extended stays on the station (assuming 0.38 g_E as discussed above). Figure 2 gives an overview of rotation rates for a range of radii and gravity levels. The area usable for a space station subject to artificial gravity is the lateral surface of the cylinder, S = 2π r_c h_c.
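To make the numbers above easy to reproduce, the short Python sketch below evaluates the rotation rate and period for a given station radius. The function name and the default of 0.38 g_E are ours, and g_E = 9.81 m/s² is assumed.

```python
import math

G_E = 9.81  # Earth's surface gravity in m/s^2

def rotation_rate(r_c, g_c=0.38 * G_E):
    """Angular velocity (rad/s), period (s) and rpm needed so that the
    lateral surface of a cylinder of radius r_c experiences gravity g_c."""
    omega = math.sqrt(g_c / r_c)       # from g_c = omega^2 * r_c
    period = 2.0 * math.pi / omega
    rpm = 60.0 / period
    return omega, period, rpm

for r_c in (50.0, 100.0, 250.0):
    omega, period, rpm = rotation_rate(r_c)
    print(f"r_c = {r_c:6.1f} m -> omega = {omega:.3f} rad/s, "
          f"T = {period:5.1f} s, {rpm:.2f} rpm")
# With g_c = 0.38 g_E this reproduces the 1.17-2.6 rpm range quoted above.
```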
ESTIMATING TENSILE AND SHEAR STRESS
The station is to be built inside a mined asteroid. Imposing a spin rate sufficient for providing artificial gravity on the lateral surface of the cylinder will create a substantial load on the asteroid material due to centrifugal forces. While little is known about the material properties of the small asteroids subject to our study, we rely on an assumed material strength. Here, we assume that the asteroid is made of homogeneous, solid material such as basaltic silicate rock, for instance. We estimate the load on the asteroid material in simplified models: the tensile stress acting on the asteroid cross section is related to the assumed tensile strength of solid silicate rock. Figure 3 shows our geometrical model of a spheroidal asteroid with semi-axes a and b, respectively. The cavern is centered and cylindrical with respective radius r_c and height h_c. Due to the elliptical cross-section, the lateral distance d between the cavern and the asteroid's surface (evaluated at the cavern's top edge, y = ±h_c/2) is given by d = (b/a) √(a² − (h_c/2)²) − r_c. For a feasible solution, the cavern needs to be inside the asteroid in its entirety, which translates to the condition d > 0, i.e. r_c < (b/a) √(a² − (h_c/2)²). The whole body will rotate about its symmetry axis y at a rate ω providing sufficient artificial gravity on the lateral surface of the space station with radius r_c. We assume rigid body rotation.
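As a quick numerical illustration of the feasibility condition, the sketch below evaluates the surface gap d for a few cavern geometries using the gap formula reconstructed above. The example semi-axes correspond to the 500 × 390 m asteroid of Sect. 4; assigning the 250 m semi-axis to the spin axis a is an assumption made only for this illustration.

```python
import math

def surface_gap(a, b, r_c, h_c):
    """Radial gap d between the cavern wall and the asteroid surface at the
    cavern's top edge (y = h_c/2), using r(y) = (b/a)*sqrt(a^2 - y^2)."""
    if h_c / 2.0 >= a:
        return float("-inf")          # cavern taller than the body itself
    return (b / a) * math.sqrt(a**2 - (h_c / 2.0)**2) - r_c

# Semi-axes a = 250 m (along the spin axis, assumed) and b = 195 m:
for r_c, h_c in ((100.0, 200.0), (180.0, 100.0), (195.0, 50.0)):
    d = surface_gap(250.0, 195.0, r_c, h_c)
    print(f"r_c={r_c:5.1f} m, h_c={h_c:5.1f} m -> d = {d:7.1f} m "
          f"({'feasible' if d > 0 else 'infeasible'})")
```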
Model 1
In this model centrifugal forces that are acting on the asteroid material exert a load on an arbitrary symmetry plane. We determine the tensile stress that results from this load. The total centrifugal force F 1 pulling two halves of the asteroid apart is given by Here, we use the mass m of a volume element, the uniform asteroid density ρ, and the distance r of a volume element from the rotation axis. Transforming into cylindrical coordinates symmetrical w.r.t. the y-axis (dV = r dr dy dϕ) and considering that there will be no contribution from the void of height h c and radius r c (see Fig. 3) gives Note that r(y) = b a a 2 − y 2 holds for the elliptical cross section. Integrating (details given in Appendix A) yields This load acts on the asteroid's cross section A = π a b − 2 r c h c (cf. Fig. 3) resulting in tensile stress As we are interested in the stress resulting from a desired artificial gravity g c we substitute ω 2 from (2) and get By introducing the dimensionless quantities we can separate the parameters specific to individual asteroids and show that σ scales linearly with each of material density ρ, desired artificial gravity g c , and asteroid semi-minor axis b. Inserting h c and r c into (10) yields Hence, it is sufficient to study f (r c , h c ) to get estimates for the stresses in asteroids of arbitrary density and semi-minor axis rotating at a rate providing the desired artificial gravity. Figure 4 shows σ 1 contours assuming parameters ρ = 1 g cm −3 , g c = 1 g E , and b = 1 m. For different parameter values the numbers scale linearly with ρ, g c , and b. The white area in Fig. 4 corresponds to illegal combinations of r c and h c that violate condition (5) demanding the space station has to be inside the asteroid in its entirety.
The required rotation rate as a function of space station radius r c and desired artificial gravity g c is given in (2), scales according to and is indicated on the upper x-axis. As we will be interested in the usable surface area of the station S, we indicate its dimensionless variant as red, dotted contour lines in Fig. 4. For any assumed maximum material strength, the solution for maximum S suggests a radius of the cylinder r c that extends all the way to the surface (d = 0, cf. Fig. 3). This is however, the edge case of our model 1 that estimates material load by assuming two halves of the asteroid driven apart by centrifugal forces. As soon as d gets very small the "two halves" assumption fails. Therefore, we estimate material load by another model which will be more accurate at the edge case, i.e. very small values of the distance to the surface d.
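Because Fig. 4 is plotted for unit parameters, a contour value can be rescaled to physical parameters using the linear scaling in ρ, g_c, and b stated above. The sketch below does exactly that; the contour value fed into it is an illustrative placeholder rather than a value read from the actual figure, and the function name is ours.

```python
def scale_model1_stress(contour_value_pa, rho_g_cm3, g_c_in_gE, b_m):
    """Scale a sigma_1 contour value from Fig. 4 (computed for rho = 1 g/cm^3,
    g_c = 1 g_E, b = 1 m) to physical parameters, using the linear scaling
    sigma_1 ~ rho * g_c * b stated in the text."""
    return contour_value_pa * rho_g_cm3 * g_c_in_gE * b_m

# Illustrative placeholder: a 20 kPa contour value scaled to basaltic rock
# (2.7 g/cm^3), 0.38 g_E, and a semi-minor axis b = 195 m:
sigma1 = scale_model1_stress(20e3, 2.7, 0.38, 195.0)
print(f"sigma_1 ≈ {sigma1 / 1e6:.2f} MPa")   # ≈ 4 MPa for these inputs
```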
Model 2
Rather than focusing on two entire halves of the hollowed-out asteroid, this model studies the "mantle" outside the space station. This is the solid torus created by sweeping the right part of the hashed surface between the red lines at y = ± h c /2 in Fig. 5 around the y-axis. As the radius of the space station approaches the asteroid's hull, the centrifugal forces attempting to shear away this torus may get significant. This shearing load acts on the two annuli resulting from rotating the red lines in Fig. 5 about the y-axis. In addition to overcoming the shear strength of the asteroid material however, the tensile strength of the cross section (the hashed area in Fig. 5) has to be exceeded by the load exerted by the centrifugal force.
Similar to calculating F 1 in Sect. 3.1, the centrifugal force F 2 can be derived by transforming to cylindrical coordinates as follows: Integrating (Appendix B gives the detailed steps) yields a rather lengthy expression for the centrifugal force: This force exerts a tensile load on the asteroid's cross section A t between y = −h c /2 and y = h c /2 and a shear load on the two annuli of area A s given by rotating the red lines in Fig. 5 about the y-axis. The surface area A t is given by (cf. Fig. 5) and using the identity arcsin x = arctan(x/ 1 − x 2 ) we get Each of the annuli has a surface of A s , Combining equations (16), (17), and (19), we obtain the average stress σ 2 in this model, for the complete -rather unwieldy -formulation of the stress please refer to Appendix C. Unlike it is the case in model 1, we cannot formulate σ 2 in a scaling way using the dimensionless quantities r c and h c . For the asteroids in scope of this study though, σ 2 is usually smaller than σ 1 , but increases if the space station radius gets closer to the asteroid's surface. In the following Sect. 4 we will demonstrate this by comparing the two models for a fictitious, yet realistic asteroid.
APPLICATION TO A REALISTIC ASTEROID
We will apply the analytical models 1 and 2 to a rocky asteroid with dimensions 500 × 390 m. There is a number of similar-sized rocky near-Earth asteroids, e.g. 3757 Anagolay, 99942 Apophis, 3361 Orpheus, 308635 (2005 YU55), 419624 (SO16), etc. (cf. JPL, 2018). As little is known about the composition and material properties of these objects, we assume they are composed of basaltic rock with a bulk density of ρ = 2.7 g cm⁻³. Tensile strength values for basalt are in the range of approx. 12…14 MPa (Stowe, 1969) and shear strengths are approx. 8…36 MPa (Karaman et al., 2015), which provides an order-of-magnitude framework for the expected material strength. Finally, we will assume a desired artificial gravity level of g_c = 0.38 g_E as discussed in Sect. 1. Figure 6 shows the resulting tensile stress along with the usable space station surface and required rotation rates predicted by model 1. The stress levels are mostly of the same order of magnitude as the assumed material strength (∼10 MPa) or even smaller. However, the solution resulting in the maximum area S ≈ 0.3 km² would have the cylindrical station extend to the asteroid's surface, which seems unrealistic and would lead to the asteroid becoming unstable. Also, for realistic scenarios a material stress (≈4 MPa in this case) very close to the poorly constrained material strength will be unacceptable, so that a cavern with radius-height values more towards the lower right of the diagram will be desirable.
Investigating the combined tensile and shear stresses according to our model 2 results in a different stress-pattern, given in Fig. 7. While the material loads are systematically lower for space stations deeper inside the asteroid in their entirety, stresses for "thinner" tori (i.e., larger r c ) are of the same order of magnitude as predicted by model 1.
In summary, both models predict stresses that are comparable to anticipated material strength for asteroids made of competent rock. As the assumed material parameters are based on the unknown composition, thorough studies of candidate asteroids will be necessary before considering to set them to rotation to house a space station with artificial gravity inside.
CONCLUSIONS AND FURTHER RESEARCH
We established two simple analytical models for estimating whether a candidate for asteroid mining may be suitable for hosting a space station with artificial gravity. The novelty in our approach is to investigate whether the asteroidal hull -once set to rotation as a whole -can sustain the material loads resulting from a sufficiently high rotation rate. We find that loads resulting from centrifugal forces are in the order of magnitude of material strength of solid rock, which makes a space station in the cavern of a mined asteroid feasible if its dimensions are chosen right and if the material composition and material strength of the asteroid is known to a satisfactory level of accuracy. Practical applications will crucially depend on knowing not only the composition but also the internal structure of candidate bodies. As missions to these asteroids seem inevitable for such studies, decisions on inhabiting such asteroids may only be possible after mining operations have started. Also, the methods of actually initiating the rotation at the required rate is subject to further investigations. Hypothetically, starts and landings of spacecraft during the mining process might contribute to building up angular momentum of the asteroid.
Currently, we are working on a more realistic analytic approach for determining the detailed shape of the cavern housing the space station taking into account the internal density profile.
In the past, we successfully conducted smooth particle hydrodynamics (SPH) simulations of asteroids (e.g. Maindl et al., 2013;Haghighipour et al., 2018;Maindl et al., 2018). As our analytical study is approximative in nature we plan to conduct a series of SPH simulations with different material models and varying porosity. This will allow to numerically verify the predictions of the simplified analytical models presented here as well as future models and to further investigate the behavior of rotating bodies with substantial internal caverns.
CONFLICT OF INTEREST STATEMENT
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
AUTHOR CONTRIBUTIONS
TIM developed the analytical models and performed most of the calculations. He wrote about 70 % of the paper and created most of the figures. RM provided various aspects of the analytical approximations, helped in the calculations, and contributed to the representation of the individual equations. He wrote about 15 % of the paper. BL contributed to the quality of the analytical models, researched the details of required artificial gravity levels, and created the pictorials of the asteroid with the cylindrical cavern. She wrote about 15 % of the paper.
FUNDING
This project received seed funding from the Dubai Future Foundation through Guaana.com open research platform. The authors also acknowledge support by the FWF Austrian Science Fund project S11603-N16.
A MODEL 1 CENTRIFUGAL FORCE CALCULATION
Starting at (7), we get Considering this simplifies to This is the same expression as (8).
B MODEL 2 CENTRIFUGAL FORCE CALCULATION
Starting at (15), With the identity arcsin x = arctan(x/ 1 − x 2 ) and arcsin(−x) = − arcsin x the integral evaluates to This is the same expression as (16).
C MODEL 2 STRESS
Combining equations (16), (17), and (19), we get the total stress σ_2.
Figure 4. The blue contour lines give the ratio σ/(ρ g_c b) for ρ given in g cm⁻³, g_c measured in units of g_E, and b measured in meters, respectively. The dotted red lines give contours of the ratio S/(a b) obtained via (14); the upper x-axis gives the scaled rotation rate.
Figure 6. Model 1 results for artificial gravity of 0.38 g_E in a space station of radius r_c and height h_c; the color code and the blue contour lines give the tensile stress σ_1 resulting from the required rotation rate obtained via (9). The red dotted lines give the usable surface area S of the space station; the upper x-axis denotes the required rotation rate.
Figure 7. Model 2 results for artificial gravity of 0.38 g_E in a space station of radius r_c and height h_c; the color code and the blue contour lines give the combined tensile and shear stress resulting from the required rotation rate obtained via (9). The red dotted lines give the usable surface area S of the space station; the upper x-axis denotes the required rotation rate. | 2018-12-26T18:10:14.000Z | 2018-12-26T00:00:00.000 | {
"year": 2018,
"sha1": "f83c6a8b023caac4d0240d5c6b5fa5b5e79e3bbd",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fspas.2019.00037/pdf",
"oa_status": "GOLD",
"pdf_src": "Arxiv",
"pdf_hash": "7f8e7ea59e4829aa816506bccce52e8d45aace7e",
"s2fieldsofstudy": [
"Physics",
"Geology"
],
"extfieldsofstudy": [
"Geology",
"Physics"
]
} |
214644075 | pes2o/s2orc | v3-fos-license | Mechanical Properties, Microstructure, and Chloride Content of Alkali-Activated Fly Ash Paste Made with Sea Water
The aim of the present study is to investigate the potential of sea water as a feasible alternative for producing alkali-activated fly ash material. The alkali-activated fly ash binder was fabricated by employing conventional pure water, tap water, and sea water based alkali-activating solutions. The characteristics of the alkali-activated materials were examined by compressive strength testing, mercury intrusion porosimetry, XRD, FT-IR, and 29Si NMR, along with ion chromatography for chloride immobilization. The results provide new insights demonstrating that sea water can be effectively used to produce alkali-activated fly ash material. The presence of chloride in sea water contributed to increased compressive strength and a refined microstructure without adversely affecting the mineralogical characteristics. Furthermore, a higher degree of polymerization was observed in the sea water-based sample by FT-IR and 29Si NMR analysis. However, the higher amount of free chloride ion remaining in the sea water-based alkali-activated material, even after immobilization, should be considered before application in reinforced structural elements.
Introduction
Fresh water is the most consumed natural resource on the planet. Its diverse use in sustaining human life and lifestyles has made it one of the most precious resources. On the other hand, cement concrete remains the most consumed man-made material. Two major environmental concerns raised about the use of cement are its huge carbon and water footprints [1,2]. As a countermeasure against the high carbon footprint of Portland cement, the development of alkali-activated materials (AAM) is considered a cementless binder solution. However, the higher production cost of AAM has been a limiting factor in its widespread use. A major contributor to this higher cost is the use of alkali-activating solutions made from highly refined chemicals. Furthermore, the use of pure/distilled water is a concern due to dwindling fresh water resources. Thus, exploring suitable replacements for pure/distilled water to produce AAM is an important aspect of research.
Sea water is one of the largest untapped available sources of water. Its use in manufacturing traditional cement concrete is very limited due to the presence of aggressive compounds and higher chloride based salt compounds [3]. It was reported in various studies that use of supplementary cementitious materials along with sea water as mixing water, lowered the probability of chloride induced corrosion in steel rebar [4][5][6][7]. Furthermore, recent studies have been carried out to observe the influence of sea water on characteristics of AAM fabrications by utilizing ground granulated blast furnace slag. Li et al. [8] produced slag-based alkali-activated concrete containing sea water and studied its mechanical properties after thermal exposure. They reported no obvious influence of sea water on thermal properties [8]. On the other hand, few studies have reported slightly negative influence of sea water on the properties of AAM made with sea water. Shi et al. [9] developed calcium silicate slag-based AAM by utilizing artificial sea water. They reported that no negative influence of sea water was observed on the formation of amorphous calcium aluminosilicate hydrate (C-A-S-H) gel. However, the magnesium ions present in the artificial sea water also form magnesium silicate hydrate (M-S-H) which lowered compressive strength [9]. Furthermore, hindrance in alkali activation process was reported due to the coating effect on particles [9]. Yang et al. [10] developed slag-based AAM produced by artificial sea water. The compressive strength was slightly low for sea water based samples. However, improved durability against chloride ion was observed [10]. It should be noted that in studies by Shi et al. [9] and Yang et al. [10] the sea water used for investigations was prepared in the laboratory as per the guidelines of ASTM D1141 [11]. Nevertheless, as discussed in the previous studies sea water has almost negligible effects on the polymerization products of slag-based AAM [8][9][10]. However, an important aspect of high amount of chloride ions present in the sea water samples was not given due consideration in previous studies.
The objective of this study is to investigate the properties of alkali-activated fly ash materials produced by pure water, tap water, and sea water. The feasibility of sea water as alkali-activating solution and its effect on properties of alkali-activated fly ash material were assessed by a suitable experimental program. The compressive strength, microstructural characteristic and mineralogy of alkali-activated fly ash were investigated to assess the influence of various types of water sources. Specific attention was given to investigate the free chloride content of the alkali-activated fly ash.
Materials and Sample Preparation
The class F fly ash (as per ASTM C618 [12]), a byproduct of the Dangjin thermal power plant in South Korea, was used as a precursor for AAM. The chemical composition of fly ash as obtained by X-ray fluorescence (Model: Philips PW2404, Philips, Amsterdam, Netherlands) is presented in Table 1. Figure 1 presents the X-ray diffraction spectra of fly ash. The presence of quartz, mullite, magnetite, and hematite was observed. 9M NaOH solution and Na 2 SiO 3 (SiO 2 = 29 wt.%, Na 2 O = 10 wt.%, and H 2 O = 61 wt.%) were mixed to produce alkali activating solution, and the mass ratio of 9M NaOH and Na 2 SiO 3 was maintained at 1:1 throughout the study. The silica modulus of alkali activating solution was 2.9. For making 9M NaOH solution, three different types of water solvents were used namely, pure water (also known as distilled water), tap water and sea water. The sea water was collected from the West Sea of Korea and the composition of all three types of water is shown in Table 2. Ion chromatography was employed to observe the composition of all three types of water. It can be observed that chloride has a major presence in the sea water sample. In addition, the cation concentration of all three water types was obtained by employing inductively coupled plasma atomic emission spectroscopy (ICP-AES; Model: Jobin Yvon Ultima 2, Horiba, Kyoto, Japan) and is shown in Table 3. The total organic carbon of all three water measured by employing total organic carbon analyzer (Model: vario TOC cube, by ELEMENT AR, Langenselbold, Germany) was found to be 1.08 ppm for PW, 2.82 ppm for TW, and 2.22 ppm for SW. The 24 h cool-off period was considered for the alkali-activating solution before casting.
A constant activator-to-fly ash ratio of 0.5 was maintained throughout the study to fabricate the AAM. To produce 1 L of paste, 1197 g of fly ash and 598 g of alkali-activating solution were mixed. A similar table flow value of 200-210 mm was observed for all the alkali-activated paste mixes. The samples were cast into 50 mm cubic molds, sealed in plastic covers to prevent evaporation, and heat cured for 24 h at 60 °C in a hot-air oven. After the required heat curing, the samples were kept in the open laboratory environment until the specific testing dates. The nomenclature followed in the study is PW for pure water based samples, TW for tap water based samples, and SW for sea water based samples.
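To make the batching explicit, the sketch below computes the component masses implied by the ratios above. The 1:1 split of the activator into 9M NaOH and Na2SiO3 solutions follows the stated mass ratio, while the per-cube estimate and the function name are our own additions.

```python
def batch_quantities(paste_volume_l=1.0):
    """Batch masses implied by the mix design described above:
    activator / fly ash = 0.5 by mass, 9M NaOH : Na2SiO3 solution = 1 : 1."""
    fly_ash = 1197.0 * paste_volume_l      # g of class F fly ash
    activator = 598.0 * paste_volume_l     # g of alkali-activating solution
    naoh_9m = activator / 2.0              # g of 9M NaOH solution
    sodium_silicate = activator / 2.0      # g of Na2SiO3 solution
    return fly_ash, activator, naoh_9m, sodium_silicate

fa, act, naoh, sil = batch_quantities(1.0)
print(f"fly ash: {fa:.0f} g, activator: {act:.0f} g "
      f"({naoh:.0f} g 9M NaOH + {sil:.0f} g Na2SiO3 solution)")
print(f"activator / fly ash = {act / fa:.2f}")
# One 50 mm cube holds 0.125 L of paste, i.e. roughly 150 g of fly ash and
# 75 g of activator per specimen (ignoring waste; an assumption).
```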
Testing and Characterization Protocols
The compressive strength test of hardened AAM was carried out at 3, 7, 14, 21, and 28 days on a 3000 kN universal testing machine (DUT-D100, Daehan, Korea) at a loading rate of 1000 N/s as per the guidelines of ASTM C109 [13]. The average of three test results is reported along with the standard deviation. The microstructural characteristics of AAM were studied by employing mercury intrusion porosimetry (MIP; Model: micromeritics, Micromeritics Instrument Corporation, Norcross, GA, USA). The pressure range from 30 to 60,000 psi was selected for pore size detection. The mercury surface tension was kept at 485 dynes/cm. The crystalline phases present in the samples were identified by X-ray diffractometry (XRD). XRD was conducted on a PanAlytical device (Malvern Panalytical Ltd, Malvern, UK) with a scan range of 5° to 60° (scan speed of 0.2°/min). The Fourier transform infrared (FT-IR) spectra were collected from 400 to 4000 cm⁻¹ at a resolution of 4 cm⁻¹. The FT-IR spectra of AAM were obtained by using a Vertex 80v (Bruker, Billerica, MA, USA). The 29Si MAS-NMR (Bruker, Billerica, MA, USA) analysis was conducted at 79.51 MHz with a spinning speed of 11.0 kHz, a pulse length of 30° (1.6 µs), and a relaxation delay of 20 s for the quantitative study. An external sample of trimethylsilyl silane at -135.5 ppm with respect to trimethylaluminium (TMA) at 0 ppm was used to reference chemical shifts. The amount of free chloride ion was measured by ion chromatography carried out on an ICS-1600 (Model: Dionex, Sunnyvale, CA, USA). All samples at the age of 28 days were treated with acetone and then dried in a vacuum desiccator. The samples for XRD, FT-IR, NMR, and ion chromatography were ground to pass 150 µm, whereas bulk samples were used for MIP measurements. For ion chromatography measurements, 20 g of ground paste sample was mixed with 100 mL of deionized water and the filtered leachate was then examined for quantification of the chloride content.
Compressive Strength
Figure 2 shows the compressive strength of AAM samples measured at different testing ages. It can be observed that at the age of 28 days the AAM samples with sea water displayed the highest compressive strength. However, AAM samples made with all three different water types have quite similar compressive strengths at 28 days, i.e., PW - 51.16 MPa, TW - 49.62 MPa, and SW - 54.22 MPa. It should be noted that in the case of PW the strength development peaks at 7 days of curing with just a marginal increase (2.7%) at 28 days. However, in the case of TW and specifically SW, increases of 6.7% and 12.7%, respectively, were observed between the testing ages of 7 and 28 days. Similar results of higher compressive strength for sea water based alkali-activated slag were reported in studies by Rashad and Ezzat [14] and Yang et al. [10]. On the other hand, Shi et al. [9] reported that the use of sea water resulted in lower compressive strength of calcium silicate slag-based AAM.
In the case of SW mixes, the higher amount of sulfate and sodium ions can accelerate the polymerization process and ensure closure of pores [14]. It should be noted that in the present study the amount of calcium in fly ash is only 5%, thus the formation of CaCl 2 might have been restricted. Nevertheless, the limited formation of CaCl 2 can also accelerate an early age polymerization [10]. Furthermore, SW registered a higher pH value than TW and PW (see Table 2). The polymerization of AAM could be influenced by the pH value and a higher pH could promote the polymerization process. In addition, a higher pH value can reduce the probability of corrosion by the formation of passive layer around steel rebar. Another possible aspect of SW effect is the higher amount of total dissolved solids which can improve the compressive strength by reducing the porosity between the crystalline polymers.
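As a small arithmetic aside, the 7-day strengths implied by the figures above can be back-computed from the 28-day values, assuming the quoted 2.7%, 6.7%, and 12.7% gains are expressed relative to the 7-day strength (an assumption; the 7-day values themselves are only reported graphically in Figure 2).

```python
def seven_day_strength(strength_28d_mpa, gain_7_to_28d_percent):
    """Back-compute the 7-day strength from the 28-day strength and the
    relative gain between 7 and 28 days."""
    return strength_28d_mpa / (1.0 + gain_7_to_28d_percent / 100.0)

mixes = {"PW": (51.16, 2.7), "TW": (49.62, 6.7), "SW": (54.22, 12.7)}
for name, (s28, gain) in mixes.items():
    print(f"{name}: 28 d = {s28:.2f} MPa, estimated 7 d ≈ "
          f"{seven_day_strength(s28, gain):.1f} MPa")
```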
Porosity
Pore size distributions of AAM measured at 28 days are shown in Figure 3. Table 4 presents the total pore area, average pore diameter, and porosity of AAM samples. The results obtained from MIP show that sea water is beneficial in refining the average pore diameter and porosity of fly ash based AAM. However, an increase of 11.77 m 2 /g in total pore area was observed. It should be noted that in previous studies, no relation between total pore area and strength development was observed [15,16]. The primary cause of such inconsistencies can be induced by the ink bottle effect of MIP method. Furthermore, it is inferred that the complex microstructures of AAM can cause a much higher ink bottle effect. Another aspect of increased porosity in SW samples can be due to the increased amount of gel pores (see Figure 3b). However, it should be noted that these gel pores have almost no role in strength characteristics [17,18]. Nevertheless, the average pore diameter and porosity of AAM samples were reduced by incorporating TW and SW. In addition, it can be observed from Figure 3b that PW-based AAM samples have a larger critical pore diameter along with increased medium capillary pores. Previous studies have reported that higher amount of medium capillary pores resulted in the decreased compressive strength [17,19].
It should be noted that the higher amount of total dissolved solids in SW can play a significant role in lowering the average pore diameter and porosity of AAM samples. Previous studies focused on the use of SW in producing normal Portland based cement systems argued about the densification of cement matrix due to the formation of Friedel's salt supported by excess amount of sulfate and chloride ions [6,20,21]. It can be stated from the present study that the higher pH in SW could positively influence the development of sodium aluminate silicate hydration (N-A-S-H) which results in the densification of microstructure.
X-Ray Diffraction
The XRD spectra of AAM samples are presented in Figure 4. It can be seen that the XRD spectra of AAM made with different types of mixing water are nearly the same with no major differences. The majority of quartz, mullite, magnetite, and hematite observed are from the unreacted crystalline portion of raw fly ash. These phases usually go unaltered due to unreactive nature towards alkali activation. Besides, the hump visible in all XRD spectra at 29 • -30 • depicts the presence of amorphous unreacted phases of fly ash. As illustrated, two major polymerization phases namely calcium aluminate silicate hydrate (C-A-S-H) and sodium aluminate silicate hydrates (N-A-S-H) were observed in all the samples. These polymeric networks are resulted from dissolution, coagulation and restructuring of the glass phase in fly ash. The formation of C-A-S-H and N-A-S-H are responsible for strength gaining mechanism and can also provide suitable sites for the immobilization of chloride ions [22]. It can be seen that the addition of sea water did not hinder the formation of C-A-S-H and N-A-S-H phases.
The notable difference is the presence of chloride in SW sample that can provide excess chloride ions in the alkali activation process. In a previous study by He et al. [23], the addition of NaCl in slag-based geopolymer resulted in an increased mechanical strength due to the binding of chloride ion to form calcium aluminum chloride sulfate hydrate. In the present study, however, no such phases of chloride-based polymerization products were observed in the XRD spectra. On the other hand, it can be stated that the presence of chloride ions might help in the formation of CaCl 2 in the matrix. As discussed in Section 3.1, the formation of CaCl 2 is beneficial in accelerating the early polymerization reaction of AAM which could result in the additional formation of N-A-S-H (as evident in SW spectra from Figure 4). Nevertheless, the important observation can be drawn that the presence of chloride and sulphate ions in SW do not have any negative influence on the mineralogical composition of AAM.
FT-IR
The FT-IR spectra of 28 days AAM samples are presented in Figure 5. [26].
The primary band of T-O-T is indicative of the polymerization products of alkali-activated fly ash. In all samples, a new low intensity band is observed at 1082 cm −1 that specifies the formation of SiQ n (n = 3 or 4). Two reaction mechanisms are primarily responsible for these bands. The more reactive glass phases result in the formation of less polymerized structure at lower wavenumber (1010-1015 cm −1 ), whereas the undissolved quartz phase also undergoes structural changes due to micro-stress [26] leading to a polymerized structure represented by a minor intensity band at 1082 cm −1 . This splitting of bond bands is in good agreement with previous studies [26,27]. Moreover, the increase in intensity of the T-O-T peak on SW-based sample suggests the increase in the amount (per unit volume) of the functional group associated with the molecular bond. This can lead to improved mechanical property for SW-based AAM.
It can be observed that the addition of sea water did not result in the formation of new bond band, but had minor effect on bond bands. The narrowing of bond bands on the SW-based alkali activation depicts the formation of more condensed reaction products. Furthermore, the slight shift in SW samples to lower wave number of 1010 cm −1 (as compared to PW) can be accredited to the formation of N-A-S-H with a higher crosslinking. The presence of the C=O bond for AAM indicates the carbonation reaction between polymerization products and atmospheric carbon dioxide. Figure 6 depicts the 29 Si NMR spectra of AAM prepared with PW, TW, and SW. In the present study, the Origin software package was used to deconvolute the NMR spectra. The notation Q n (mAl) is used to depict the chemical bonds of the resonating Si nuclei where 'n' represents the number of adjacent tetrahedral SiO 4 linked to a specific SiO 4 tetrahedron, and m denotes the number of Al substitution to the corresponding Si tetrahedra [28][29][30][31][32][33][34]. The ratio of Si/Al (only Q 4 (mAl)) for different AAM is provided in Table 5, and the ratio was calculated by referring to previous studies [32,35]. As seen in Figure 6, the AAM produced from various water sources has quite similar spectra signifying that the use of TW and SW did not significantly affect the polymerization process. Moreover, the use of SW leads to the higher Si/Al ratio as compared to TW and PW mixes as shown in Table 5. This confirms the findings of a previous study by Kovalchuk et al. [25] that the Si/Al ratio is directly proportional to a mechanical strength. It should be noted that presence of alkali cation such as Na + influences the alkali activation process of AAM. In the present study, the SW has a higher amount of Na + than PW and TW. This promotes the dissolution of Si and Al which results in higher Si/Al in the SW based AAM. Similar observations were drawn by Peng et al. on investigating the influence of alkali cation on microstructure of AAM [36]. Additionally, from Figure 6, it can be observed that the intensity of peaks after deconvolution depicts a slightly better structured spectra for SW-based AAM as compared to PW-and TW-based mixes. The broad and poorly defined peaks in PW and TW reflect a disorder gel structure affecting the mechanical performance. However, the presence of Q 1 peak intensity for TW and SW mixes indicates the presence of hydrolyzed material that is yet to undergo polymerization into Q 4 (mAl) structures. This can be attributed by the difference of water molecules present in TW and SW due to dissolved salts. These dissolved salts are responsible for disbalance of Si/Na ratio, as excess content usually leads to faster condensation which can slightly delay further polymerization. In summary, the use of SW does not have a detrimental effect on the polymerization process of AAM mixes.
Free Chloride Content
Free chloride contents in the AAM mixes are presented in Table 6. Although the SW-based AAM sample had the highest content of free chlorides, a high chloride binding capacity of AAM is observed. The chloride binding capacity of AAM is governed by the encapsulation process within the matrix [37]. In the case of the SW-based AAM, nearly 86.8% of chloride ion was bound by alkali activation in this study. The exact binding mechanism of chloride is still not clear due to its complex nature. However, it is interesting to note that the increase of free chloride contents in PW and TW samples were observed. It can be theorized that the unconsumed Na + from the alkali-activating solution can form NaCl by reacting with chloride ions, and then deposited on the N-A-S-H gel surface as precipitation. These NaCl precipitates formed on surface can be measured during the free chloride measurements. The chloride present in SW-based AAM influences the polymerization process by forming CaCl 2 and NaCl precipitates. The pore solution of N-A-S-H gels then absorbs or encapsulates these chloride precipitates within the binder matrix, thereby immobilizing them which can influence the free chloride content. A similar observation was also drawn by Shi et al. on incorporating sea water to produce alkali activated calcium silicate slag [9]. In addition, the authors would like to point out that as compared to PW and TW, the SW also has a high amount of sodium (Table 3), which can enhance the amount of unconsumed Na + in the alkali-activating solution.
The mechanism of chloride binding needs to be further investigated. Previous studies also reported that the absence of Friedel's salt in AAM can be due to preferred formation of zeolitic phases and N-A-S-H gel [22]. The absence of AFm phase is responsible for the non-existence of Friedel's salt. It should also be noted that the present study focused on the binding behavior of chloride ions already present in the alkali-activating solution rather than the ingress of chloride ions in hardened AAM paste. The finding of the free chloride content in SW-based AAM suggests that caution is necessary before incorporation in steel reinforced structural elements.
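For clarity, the binding capacity quoted above follows from a simple mass balance between the chloride introduced with the mix water and the free chloride measured by ion chromatography. The sketch below illustrates the calculation; the numerical inputs are hypothetical and merely chosen so that the result reproduces the roughly 86.8% binding reported for the SW mix.

```python
def bound_chloride_percent(total_cl_mg_per_g, free_cl_mg_per_g):
    """Percentage of chloride immobilized by the binder:
    bound% = (total - free) / total * 100."""
    return (total_cl_mg_per_g - free_cl_mg_per_g) / total_cl_mg_per_g * 100.0

# Hypothetical values (illustration only, not the measured data):
total_cl = 5.0   # mg chloride per g of paste introduced via the mix water
free_cl = 0.66   # mg/g measured as free chloride by ion chromatography
print(f"bound chloride ≈ {bound_chloride_percent(total_cl, free_cl):.1f} %")
# A free/total ratio of about 0.132 corresponds to the ~86.8 % binding
# reported above for the SW-based mix.
```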
Conclusions
The present study investigated the compressive strength, microstructure, mineralogy, and free chloride content of alkali-activated fly ash material produced from pure water, tap water, and sea water. The main findings of the study are summarized below.
1.
The difference in compressive strength was marginal on utilizing the three different types of water for alkali activating solution. Moreover, the presence of chloride ions and higher pH of sea water were instrumental for slightly greater gain in compressive strength.
2.
The use of sea water resulted in the refined pore structure along with reduced average pore diameter. The primary cause was the higher amount of polymerization products that densified the matrix. 3.
The XRD results showed that the use of sea water has negligible effects on the mineralogical phases of alkali-activated fly ash material. Moreover, the absence of any chloride and sulphate based crystalline minerals is an evidence of the immobilization potential of the alkali activation process.
4.
The FT-IR spectra of the alkali-activated samples showed no negative influence of sea water on the bond band of polymerization products. The results suggest that the use of sea water leads to higher crosslinking of sodium aluminosilicates hydrates in alkali-activated fly ash material. 5.
The ordering structure and higher Si/Al ratio observed from 29Si NMR spectra showed that the sea water-based alkali-activated fly ash material has higher content of Q4 groups. Furthermore, the sea water-based alkali-activated fly ash material has the higher formation of zeolitic Si-O-Al linkages which is indicative of more matured paste matrix. | 2020-03-26T10:08:03.489Z | 2020-03-01T00:00:00.000 | {
"year": 2020,
"sha1": "5c7190f2ad587145c65a636acde4d8882f025c58",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1996-1944/13/6/1467/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "378229f4d8fceaeec08fe828111853a1146ab45b",
"s2fieldsofstudy": [
"Environmental Science",
"Materials Science"
],
"extfieldsofstudy": [
"Medicine",
"Materials Science"
]
} |
255631847 | pes2o/s2orc | v3-fos-license | A 63-Year-Old Postmenopausal Woman with Uterine Inversion Associated with a Submucosal Geburt Fibroid Successfully Treated by Surgical Reversal Using the Spinelli Procedure
Patient: Female, 65-year-old Final Diagnosis: Chronic uterine inversion due to submucous leiomyoma Symptoms: Complaints of lower abdominal pain, difficulty in defecating, and a mass in the genitals when straining accompanied by clots of bleeding Medication: — Clinical Procedure: — Specialty: Obstetrics and Gynecology Objective: Rare disease Background: Reproductive health affects long-term quality of life, including in the elderly. Uterine inversion is common in postpartum women in developing countries and menopausal women are also at risk. Case Reports: A 65-year-old menopausal woman had 3 children and a history of uterine tumors and curettage. She had received a different diagnosis – a cervical tumor – exactly 3 years ago. She was admitted to a referral hospital for lower abdominal pain, difficulty in defecating, and a mass in the genitals when straining, accompanied by blood clots. There was a 20×20 cm mass protruding from the vagina, and the uterine fundus of the uterus was not palpable. The patient was diagnosed with chronic uterine inversion due to submucous leiomyoma. Management requires the collaboration of multidisciplinary professionals in hospitals. These patients receive therapy to improve their general condition, transfusions, antibiotics, and a hysterectomy plan. The results of the Urogynecology Division showed that a 20×15 cm mass came out of the vagina, with a large necrotic area. The patient was first managed by Spinelli procedure to correct the uterine inversion, followed by an abdominal hysterectomy. Histopathology revealed the final diagnosis as a benign mesenchymal lesion, leiomyoma with myxoid degeneration. Conclusions: Timely diagnosis and management by a multidisciplinary team can help reduce morbidity and mortality in patients with submucosal uterine leiomyoma leading to chronic uterine inversion.
Background
As a result of increased life expectancy, the elderly population is growing rapidly. Older women outnumber older men and the health problems of an aging population are largely related to women's health problems [1]. Understanding reproductive health in the elderly, especially women, is very important because there are so many changes that occur when women enter old age. Women who enter menopause will experience hormonal changes followed by various problems, such as lack of support by the pelvic muscles [2].
Uterine inversion is a condition in which the uterus comes out with the prolapse of the fundus through the cervix [3]. The causes of uterine inversion can be broadly classified as puerperal or non-puerperal. Puerperal uterine inversion is more common than non-puerperal uterine inversion. The acute inversion that occurs immediately or within 24 hours postpartum is the most common type. The differential diagnosis in such patients includes prolapsed fibroids and endometrial polyps [4]. The provisional diagnosis of chronic uterine inversion is made based on vaginal findings of a globular mass protruding from the cervix. This mass approaches the vagina so that cervical effacement occurs around the mass, forming a tight constriction ring, and ultrasound findings can be useful [5]. If these cases are not identified immediately, large and underestimated blood loss can lead to hypovolemic shock. Therefore, early diagnosis and treatment of this condition are crucial [6] Prevalence studies have found that 5.4% to 77% of women have myomas, but this depends on the study population and the diagnostic technique applied [7]. According to the location, myomas can be divided into the following 3 types: intramural myomas, submucosal myomas, and subserous myomas, among which intramural myomas are the commonest. Surveys show that the incidence of uterine submucosal myomas is about 20% to 40%, and this disease often occurs in women aged 30 to 50 years. However, based on recent research, the incidence of submucosal myomas in the uterus is higher in younger women [8]. Treatment depends on the location and size of the myoma [9]. Because the growth of leiomyomas is estrogen-dependent, they usually regress in postmenopausal women. Our case is interesting in that it was a submucous myoma in a postmenopausal woman that led to chronic uterine inversion and severe anemia, causing severe morbidity.
Case Report
Our patient was a 65-year-old postmenopausal woman who had 3 children. She had a history of a benign uterine tumor treated by curettage 3 years earlier. She presented with vaginal bleeding at 2 different first-level hospitals. She received a diagnosis of a cervical tumor and was referred to a tertiary health facility, but did not continue therapy and instead took traditional herbal medicine. She later reported being unable to urinate, as well as discharging blood and clots from the vagina, so she went to a different hospital within 1 week of experiencing this problem. At that hospital, she was catheterized and transfused with 2 units of blood. She was diagnosed as having a cervical mass (Figure 1).
The patient finally decided to get treatment at a referral hospital after her general condition improved. On arrival, the patient had lower abdominal pain and difficulty defecating. When she strained during defecation, a lump came out of the genitals the size of a baby's head, accompanied by blood clots. She tried to push the mass back in but could not. The patient showed weakness and anemia. There was a 20x20 cm mass protruding from the vagina and the uterine fundus of the uterus was not palpable. She was initially diagnosed with cervical myoma and anemia due to a hemoglobin level of 7.2 g/dl. On gynecological examination, a 20×20 cm mass seen protruding from the vagina. The hemoglobin level fell to 6.1 g/dl, detected 3 days after the first examination. Figure 2 shows an ultrasound view of the abdomen, (a) sagittal slice, and (b) transverse slice. The ultrasound examination of the uterus showed normal size and shape, and no visible mass protruding. There was no visible intensity of free fluid echo in the abdominal cavity and right left pleural cavity. There was no visible lymph node enlargement in the para-aortic region. The patient was diagnosed with chronic uterine inversion, uterine myoma, anemia, hypoalbuminemia, and hyponatremia. Her general condition improved after transfusion and administration of antibiotics, and hysterectomy was suggested. She was examined at the Oncology Department and received a diagnosis of Geburt myoma and uterine prolapse. She was discharged after her general condition started to improve.
She received a follow-up examination at the Urogynecology Division, which showed a 20x15 cm mass protruding from the vagina, with a large necrotic area, and the uterine body was not palpable. She was diagnosed with chronic uterine inversion and had a differential diagnosis of pedunculated submucosal myoma. Follow-up treatment consisted of the Spinelli procedure and hysterectomy. The Spinelli procedure was done by an incision made on the anterior aspect of the cervix and then the uterus was reinverted. It was not possible to remove the fibroid vaginally because it was pedunculated, so she underwent vaginal hysterectomy because of uterine inversion.
The technique of abdominal hysterectomy was laparotomy, retraction, restoring normal anatomy, uterine elevation, division of the round ligament and accessing the retroperitoneal space, bladder reflection, exposure of the iliac arteries, division of the ovarian vessels, skeletonization of the uterine artery and vein, dividing the uterine vessels, dissection of the rectum, dividing the broad ligament, dividing the vagina, closing the vaginal cuff, and final examination and closure. Exploration revealed complete uterine inversion, right and left adnexa, tubes within normal limits, and ovarian impression atrophy. The results of the anatomical pathology examination showed a mass in the uterus, which was a benign mesenchymal lesion. This mass also led to leiomyoma with myxoid degeneration. The results of the microscopic immunohistochemistry examination showed positive results in 90% of tumor cells, with strong intensity. Smooth muscle actin (SMA) showed positive results, indicating leiomyoma. At the time of recovery, she had urinary incontinence and was unable to hold urine after the catheter was removed.
The treatment process is carried out with a vaginal approach and an abdominal approach. In the vaginal approach, the anterior uterine wall incision is presented in Figure 3, the uterus after removal of the submucosal myoma is presented in Figure 4, and the suture to the incision in the uterine wall is presented in Figure 5. In the abdominal approach, the uterus after repositioning the abdominal cavity is presented in Figure 6 and the condition of the uterus after hysterectomy is presented in Figure 7. The patient also underwent biofeedback exercise, which helped to reduce stress urinary incontinence. She also completed a quality-of-life questionnaire, muscle strength testing, and kept a urination diary. She had follow-ups at 6 weeks, 3 months, and 6 months. The results showed a reduction in stress urinary incontinence.
Discussion
This is a rare case, so timely diagnosis and management by a multidisciplinary team can help reduce patient morbidity and mortality. The patient received therapy to improve her general condition, as well as transfusions and antibiotics, and a hysterectomy plan was carried out. The patient first underwent a Spinelli procedure to correct the uterine inversion, followed by an abdominal hysterectomy. This treatment varies depending on the type and condition of the patient. Management of a 32-year-old nulliparous woman with 17 years of unexplained infertility and a diagnosis of a large vaginal prolapsed non-pedunculated leiomyoma was performed by the Haultain procedure; this procedure is used to reposition the inverted uterus and remove the leiomyoma through a posterior incision using a laparotomy [10]. The difference in treatment refers to the type of approach -abdominal approaches and vaginal approaches. The Huntington and Haultain procedures are commonly used abdominal approaches and the Kustner and Spinelli procedures are commonly used vaginal approaches. In the present case, the Spinell procedure was used for chronic uterine inversion. This technique involves dissection of the bladder from the inverted uterus. A midline split is made at the cervix and carefully separated from the bladder. The anterior wall of the everted uterus is split. With pressure from the surgeon's index finger and thumb, the uterus is turned outward. The myometrium is re-approached with 2 layers of sutures, and the serous surface with 1 layer. The vaginal skin is re-approximated with interrupted sutures, as is the entire thickness of the cervix. Vaginal restoration and removal are difficult [11]. Treatment is directed at hysterectomy. In another report, a woman underwent a total abdominal hysterectomy with an anterior longitudinal incision to release a tight ring around the fibroid and fundus, followed by excision of the myoma [12].
The present patient had several changes in diagnosis. For healthcare providers, diagnosis is one of the many components necessary during the clinical decision-making process and involves differentiation of the structure of the underlying condition. The diagnostic process involves identification of the etiology, identification of the condition through evaluation of the patient's history, physical examination, and review of laboratory data or diagnostic imaging and provisional diagnosis. In theory, diagnosis is useful for increasing the use of classification tools, improving clarity and communication, providing treatment trajectories, increasing understanding of prognosis, and, in some cases, may be useful for preventive care [13]. Accurate and timely diagnosis with the smallest probability of missed diagnosis or delayed diagnosis is essential in the management of any disease. Misdiagnosis can lead to unnecessary treatment or failure to treat and harms both the patient and the health care system [14]. Chronic inversion should be kept as a differential diagnosis in a patient with a history of irregular bleeding associated with lower abdominal dragging pain and a feeling of a mass protruding from the introitus. Before surgery, it should be differentiated from fibroid polyps, uterine prolapse, and prolapsed hypertrophic ulceration of the cervix [15].
The mechanism of uterine inversion and myoma in this case was likely that the myometrium was swollen due to the tumor in the cavity and the myometrium became irritable and initiated expulsive contractions, which dilated the cervix and aided in the expulsion of the tumor, dragging its fundal attachment. Tumor weight, manual traction of the tumor, or increased intraabdominal pressure from coughing, straining, and sneezing can also contribute. The area of the uterine wall that is weakened due to growth will enter the cavity so that it is under the influence of the active uterine muscles. Leiomyoma is the most common cause of uterine inversion, and most are caused by malignancy [16]. Myomas are associated with the capacity of the myoma to dilate the endometrial cavity (with an increase in size and location), whereas this process triggers an inflammatory reaction in the uterine wall, which causes contraction when attempting to remove the tumor [17].
Chronic uterine inversion requires careful management. The chronic nature of this inversion makes restoring the normal uterus per vagina difficult, in contrast to acute inversion, which can be corrected more easily [4]. The treatment, in this case, was the Spinelli technique. The Spinelli and Kustner techniques use a transvaginal approach that involves replacement of the uterine fundus through the anterior and posterior transaction of the cervix. In the Spinelli procedure, an incision is made on the anterior aspect of the cervix and then the uterus is repositioned [18]. This technique is rarely used today, as several newer methods have recently been described [19]. After repositioning, the uterine incision can be repaired or a vaginal hysterectomy can be performed with the uterus in its anatomical position [16]. Kustner's vaginal approach is usually used to treat cases of chronic puerperal inversion. The Spinelli procedure is similar to the Kustner procedure, but the uterine incision is made on the anterior aspect of the uterus after the bladder is dissected upwards [20]. Division of the constricting cervical ring anteriorly through the vagina is used in the Spinelli procedure, which requires careful handling of the bladder and ureter, and it is associated with more urinary and future pregnancy complications than the posterior approach [21]. Our patient was elderly, so there were no concerns about pregnancy. In addition, robotic and laparoscopic surgery have recently been used for chronic uterine inversion. Abdominal cerclage surgery has also been performed to prevent the nucleus from inversion of the uterus [18].
Management tends to be slow and the diagnosis needs to be made carefully, as the condition can be affected by patient behavior. In our patient, taking traditional herbal medicine was part of the reason why she did not undergo further examination. Traditional, complementary, and alternative medicine (TCAM) is a method of treatment used extensively to treat a variety of diseases, especially for patients with 2 or more chronic diseases [22]. It is used in health care as well as in the prevention, diagnosis, and treatment of physical and mental illnesses [23]. The prevalence of the use of traditional and complementary medicine in the general population worldwide ranges from 9.8% to 76%, and it is used due to perceived poor health, to achieve a sense of well-being, and as integrative medicine [24].
Finally, our patient's condition improved. Biofeedback exercise for stress urinary incontinence is an increasing popular method of urinary incontinence treatment, while also teaching women self-awareness of their bodies and the physiological processes taking place [25]. | 2023-01-12T17:51:46.543Z | 2022-12-28T00:00:00.000 | {
"year": 2023,
"sha1": "bd03a25204aa1ddc1c40ef17bd6c282c51f66c95",
"oa_license": "CCBYNCND",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "e01d54daade10118b7563909f74bea6a25bc9f8e",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
247107048 | pes2o/s2orc | v3-fos-license | Proposing a novel deep network for detecting COVID-19 based on chest images
The rapid outbreak of coronavirus threatens human lives around the world. Given insufficient diagnostic infrastructure, developing an accurate, efficient, inexpensive, and quick diagnostic tool is of great importance. To date, researchers have proposed several detection models based on chest imaging analysis, primarily built on deep neural networks; however, none has yet achieved reliable and highly sensitive performance. This study is therefore primary epidemiological research that aims to overcome these limitations, first by proposing a large-scale publicly available dataset of chest computed tomography scan (CT-scan) images consisting of more than 13k samples. Second, we propose a more sensitive deep neural network model for CT-scan images of the lungs, providing a pixel-wise attention layer on top of the high-level features extracted from the network. Moreover, the proposed model is extended through a transfer learning approach to be applicable to chest X-ray (CXR) images. The proposed model and its extension have been trained and evaluated through several experiments. The inclusion criteria were patients with suspected PE and positive real-time reverse-transcription polymerase chain reaction (RT-PCR) for SARS-CoV-2. The exclusion criteria were negative or inconclusive RT-PCR and other chest CT indications. Our model achieves an AUC score of 0.886, significantly better than its closest competitor, whose AUC is 0.843. Moreover, the obtained results on another commonly used benchmark show an AUC of 0.899, outperforming related models. Additionally, the sensitivity of our model is 0.858, while that of its closest competitor is 0.81, demonstrating the efficiency of the pixel-wise attention strategy in detecting coronavirus. Our promising results and the efficiency of the models imply that the proposed models can be considered reliable tools for assisting doctors in detecting coronavirus.
COVNET consumes several lung images at the same time and feeds each into a separate ResNet-50 16 backbone whose weights are shared across backbones; a pooling layer finally aggregates their outputs. The main limitation of COVNET, however, is its high computational cost combined with low sensitivity. Hybrid-3D 17 is another deep screening network that builds the 3D shape of the lungs and then feeds both 2D and 3D inputs into two DenseNet backbones 18 and combines their predictions. However, its prediction depends on the accuracy of the estimated 3D shape of the lungs. Apart from that, the 3D estimation imposes heavy computation, resulting in a lower screening speed. Unet++ 19 , on the other hand, is computationally better than the others. However, it does not satisfy the sensitivity requirements, i.e., the chance of a false negative is high. Table 3 summarizes the performances of these models on CT-COV19, which is a public benchmark. Detailed information about datasets and other related methods can be explored in [20][21][22][23][24] . One of the main limitations of current deep learning models is the size of the datasets used for learning, i.e., the number of publicly available training samples used for optimizing parameters. To the best of our knowledge, current models have been trained on small-sized datasets, mostly due to privacy concerns and the unavailability of COVID-19 CXR/CT images 25,26 . Consequently, this leads to a lower generalization ability of the trained models 27 . Age and regional diversity are two other essential factors [28][29][30] , playing a vital role in the generalization of the learned models and preventing them from the danger of overfitting. As elderly people are at higher risk of being infected by the coronavirus, current public datasets of COVID-19 are often biased towards older people, resulting in a lower chance of generalization for younger patients. Additionally, people of different regional backgrounds are not necessarily similar in their lung functioning, respiration abilities, and several other breathing factors 31 . Hence, if a dataset is built from samples provided by one hospital/city, models learned on it are very likely to become biased toward that specific region. The lack of sensitivity in predictions is another critical limitation of current deep neural models, causing less reliable detection. In other words, it is currently very likely to predict a positive sample as negative, particularly during the incubation period of the disease when there are no clear patterns of infection in the lungs.
This study aims to overcome the limitations above, resulting in more accurate and reliable deep neural models that can serve as a helpful side-tool in screening COVID-19. Accordingly, this study, firstly, builds a publicly available CT-scan dataset of COVID-19, consisting of 13k CT-images captured from more than 1000 individuals. The images are collected from four regions with entirely different climate conditions. A wide range of ages has also been included, varying from 19 to 73 years. Additionally, images are saved at a high level of quality. Overall, the proposed dataset, named CT-COV19, provides a reliable set of CT-scan images for researchers to develop more accurate and general models. Secondly, the present study suggests a novel deep neural model, i.e., Deep-CT-Net, trained on the proposed CT-COV19 dataset, and provides baseline results. Deep-CT-Net benefits from a simple but accurate architecture, enabling early screening of COVID-19 infection patterns from CT-images of the lungs. More precisely, the proposed model takes advantage of pyramidal attention layers 32 , providing pixel-wise attention on top of the extracted high-level features and, consequently, enabling the whole model to accurately detect COVID-19 even when there are few signs of the disease in the lungs. The pixel-wise attention empowers the final model to detect more positive cases whose primary swabs are negative. Furthermore, having no heavy pre-processing steps, such as lung segmentation 17,33 , is another virtue of the proposed network, enabling the model to detect COVID-19 in less computational time. This property is particularly desirable during waves of COVID-19, as the detection process becomes much faster than in models that contain such pre-processing steps.
Extensive experiments on several benchmarks of COVID-19 are conducted, and the results are compared with several state-of-the-art methods 15,17,19,[34][35][36][37] . Moreover, a transfer-learning version of Deep-CT-Net, i.e., Deep-CXR-Net, is further developed to detect COVID-19 based on CXR images. The choice of transfer learning enables the network to exploit features learned from other CXR datasets and, therefore, to adjust its weights using only a small set of labelled CXR images. Additionally, the results of Deep-CXR-Net are compared with several related methods, e.g., Refs 11,13,38 .
Results
This section provides a detailed explanation of CT-COV19 and reports the performance results of the proposed models over several popular benchmarks, including CT-COV19. The performance criteria are described in Ref. 39 . Additionally, Li's procedure 40 is applied as a post hoc statistical test over the results in terms of AUC.
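For readers reimplementing this evaluation, the two headline criteria used throughout the comparisons, AUC and sensitivity, can be computed as in the short sketch below. This is an illustrative snippet only: the label and score arrays and the 0.5 decision threshold are placeholders, not values from the study.

from sklearn.metrics import roc_auc_score, recall_score

# Placeholder ground-truth labels (1 = COVID) and model scores standing in for real test predictions.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_score = [0.91, 0.20, 0.75, 0.40, 0.35, 0.10, 0.88, 0.55]

auc = roc_auc_score(y_true, y_score)              # area under the ROC curve
y_pred = [1 if s > 0.5 else 0 for s in y_score]   # illustrative decision threshold
sensitivity = recall_score(y_true, y_pred)        # recall of the positive (COVID) class
print(auc, sensitivity)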
CT-COV19: a public CT-scan dataset for COVID-19. With approval from the institutional review board (IRB), this section describes the details of our publicly available CT-scan dataset, named CT-COV19, for screening COVID-19. CT-COV19 consists of 13k CT-images of the lungs obtained by non-contrast chest CT, in which the volume reconstructions are set at 0.3 to 1 mm slice thickness. The images are taken from more than 1000 randomly selected male and female individuals (male: 59%, female: 41%). Among the patients, 500 cases were infected with COVID-19; an RT-PCR test was performed to confirm their infection. Moreover, the age of individuals ranges from 19 to 73. Therefore, CT-COV19 is diverse in terms of both gender and age. Regional diversity has also been included, as the data were collected from four different regions with diverse climates. It is worth noting that the collected data are anonymized and privacy concerns are satisfied.
One of the main advantages of CT-COV19 is its number of COVID-19 samples, by far the largest among the publicly available datasets of COVID-19. Another aspect of CT-COV19 is the presence of samples of other pneumonia, giving learning algorithms an opportunity to distinguish between infections caused by COVID-19 and other lung diseases. Table 1 provides a brief comparison between CT-COV19 and several similar datasets, demonstrating the quantitative superiority of CT-COV19. CT-COV19 consists of CT-scan images with three labels, COVID-19, other pneumonia, and normal, with ratios of 61.5%, 5.8%, and 32.7%, respectively. Although the number of samples in the other-pneumonia class is small, plenty of such samples can easily be found via the Internet. In this study, we deliberately merge the other-pneumonia labels with all samples of the normal class and consider the merged set as the normal class. Therefore, CT-COV19 is effectively a two-class dataset. This dataset, now with two classes, i.e., COVID and normal, is further randomly divided into train, validation, and test parts with ratios of 70%, 10%, and 20%, respectively. Table 2 summarizes this division. The minimum and maximum heights/widths of the images are 484 × 484 and 1024 × 1024, respectively. Additionally, the minimum resolution of the images is 150 dpi, and the bit depth is 24. Figure 1 provides several samples of this dataset.
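To make the label merging and the 70/10/20 split concrete, a minimal sketch is given below. The (path, label) representation, the label names, and the random seed are assumptions made for illustration; they are not part of the released CT-COV19 files.

import random

def merge_and_split(samples, seed=0):
    # samples: list of (image_path, label) with label in {"covid", "other_pneumonia", "normal"}.
    # Other-pneumonia samples are merged into the normal class, as described above.
    merged = [(path, "covid" if label == "covid" else "normal") for path, label in samples]
    rng = random.Random(seed)
    rng.shuffle(merged)
    n = len(merged)
    n_train, n_val = int(0.7 * n), int(0.1 * n)
    return (merged[:n_train],                    # 70% train
            merged[n_train:n_train + n_val],     # 10% validation
            merged[n_train + n_val:])            # 20% test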
The results of Deep-CT-Net. This subsection reports the empirical results of the proposed deep neural architecture (Deep-CT-Net), which can classify CT-scan images into two classes: positive-COVID and negative-COVID. Figure 2 depicts the workflow of Deep-CT-Net.
Deep-CT-Net is assessed through several experiments. In the first experiment, we evaluate the performance of Deep-CT-Net on two datasets, i.e., CT-COV19 (proposed in this study) and COVID-CT 36 , and compare its performance against several related deep network models 15,17,19,35 . In brief, COVID-CT has a small number of samples with a lower image quality compared to CT-COV19. Figure 6 depicts several images of this dataset.
Tables 3 and 5 report these comparative results. The next experiment assesses the generalization ability of Deep-CT-Net. Accordingly, Deep-CT-Net is trained on CT-COV19 but tested on the COVID-CT dataset 36 without applying any fine-tuning or post-processing step. Table 4 reports the obtained results of Deep-CT-Net on the COVID-CT dataset and provides a comparison with the baseline methods reported in Ref. 36 . As the table reports, Deep-CT-Net achieves AUC = 0.92 and F-measure = 0.801 (Table 5).
The final experiment is conducted to provide a better view of the functionality of Deep-CT-Net. Figure 5 depicts the class activation mapping (CAM) 45 for a test sample taken from a COVID-19 case, visualizing the attention regions inside the lungs. As shown in this figure, the attention regions detected by our proposed Deep-CT-Net are precisely related to COVID-19 symptoms, i.e., the lung regions highlighted in red.
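For readers who wish to reproduce this kind of visualization, the sketch below uses Grad-CAM, a widely used gradient-based relative of the CAM technique cited above, built with standard PyTorch hooks. The choice of Grad-CAM, the target layer, and the normalization are illustrative assumptions rather than the authors' exact procedure.

import torch

def grad_cam(model, feature_layer, image, class_index=0):
    # Collect the activations of feature_layer and the gradients flowing back into it.
    activations, gradients = [], []
    h1 = feature_layer.register_forward_hook(lambda m, i, o: activations.append(o))
    h2 = feature_layer.register_full_backward_hook(lambda m, gi, go: gradients.append(go[0]))
    score = model(image.unsqueeze(0))[0, class_index]
    score.backward()
    h1.remove(); h2.remove()
    act, grad = activations[0], gradients[0]
    weights = grad.mean(dim=(2, 3), keepdim=True)             # channel weights from pooled gradients
    cam = torch.relu((weights * act).sum(dim=1)).squeeze(0)   # weighted sum of feature maps
    return cam / (cam.max() + 1e-8)                           # normalized map to overlay on the CT slice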
The results of Deep-CXR-Net. This subsection reports the results of Deep-CXR-Net, the CXR extension of Deep-CT-Net, whose workflow is depicted in Fig. 3. To allow a reliable comparison, we randomly divided ieee8023 46 into train and test sets with proportions of 70% and 30%, respectively. Then, we added 100 more CXR images of other pneumonia to the test set. This additional CXR data helps to evaluate the performance of each method in terms of false-positive rates. Table 6 compares the screening results of Deep-CXR-Net with other related methods 11,13,38 . Additionally, although the results show lower precision for Deep-CT-Net, its overall score (F-measure) is better, enabling the proposed models to be used in clinical diagnostics. Finally, the main implications of this study can be summarized as follows: • the proposed deep learning models have practical implications and can be used as an assistant in diagnosing coronavirus; • applying the pyramidal attention layers plays a significant role in detecting coronavirus accurately, and future deep neural models can take advantage of this layer to increase overall performance; • Deep-CXR-Net offers a transfer learning approach for training deep neural models when little training data is available.
To discuss the related methods and compare their main ideas, COVNET 15 combines several ResNets 16 with shared weights along a series of CT images, in which each ResNet consumes one CT image and a pooling layer finally aggregates their outputs. Although the weights are shared, the computational time/cost is high in practice. COVNET reports a sensitivity of 0.81 and an AUC of 0.842 over CT-COV19. Similarly, DL-system 35 is computationally complex. It uses three main stages, lung segmentation, segment suppression, and prediction, consuming a high level of computation. Moreover, it uses 3D convolutions, which need even more computational power. Another limitation is that the prediction accuracy becomes dependent on the segmentation accuracy, making this model impractical to operate. Overall, DL-system reports the lowest prediction accuracy, i.e., a sensitivity of 0.75 and an AUC of 0.804 over CT-COV19. Hybrid-3D 17 works in a similar way to DL-system: it first segments the lungs and then applies DenseNet-121 for classification. Although its performance is better than DL-system, it still suffers from the same limitations. Hybrid-3D reports a sensitivity of 0.797 and an AUC of 0.843 over CT-COV19. CAAD 11 , which is a CXR model, proposes an anomaly detection loss. This work is significant from three main perspectives. First, we built a publicly available dataset of CT-images of COVID-19 that is large and diverse enough to train reliable models and can therefore be considered for training and evaluation in future studies. Second, the proposed deep neural networks can extract pixel-wise information accurately and thus detect COVID-19 with higher accuracy; that is why Deep-CT-Net and Deep-CXR-Net achieve higher sensitivities than other related methods. For instance, the closest method to Deep-CT-Net in terms of sensitivity on CT-COV19 is COVNET, with a value of 0.81, while that of Deep-CT-Net is 0.858. The same observation can be made from the obtained results over COVID-CT, in which the sensitivity achieved by Deep-CT-Net is 0.905, while that of the closest related method, i.e., xDNN 37 , is 0.886. Third, the proposed Deep-CXR-Net, which is the CXR extension of Deep-CT-Net, can be trained on small-sized CXR datasets and efficiently compensates for the lack of sufficient CXR data for COVID-19. Compared with other CXR-based deep models, we found that the choice of using additional features results in much better performance on unseen CXR data and significantly reduces the false-positive rate. In contrast, other methods show higher false-positive rates in their predictions, i.e., they wrongly tend to predict samples of other pneumonia as COVID-19.
The class activation map (CAM) results for each model are also depicted in Figs. 5 and 8, derived from Deep-CT-Net and Deep-CXR-Net, respectively. The highlighted regions (red colour inside the lungs) in the CAM results depict those parts of the lungs where COVID pneumonia appears. Interestingly, the figures show how well the models detect the infected regions.
One of the main advantages of the proposed models is their efficiency in terms of computational complexity, allowing them to be used on the ordinary computing systems of hospitals. More specifically, the models are not large, particularly Deep-CT-Net, and there are no complex pre-processing steps. For instance, the models in Refs. 17,33 are based on complex lung segmentation steps, imposing more computational cost in practice and limiting the final screening results to the segmentation accuracy. Being efficient and accurate, the proposed models have the potential to be used in hospitals' emergency rooms. Accordingly, a regular computer equipped with an ordinary Nvidia graphics card can be connected to the imaging system, either CT-scan or CXR, making the prediction in a fraction of a second. In contrast, deep neural models with complex networks or processing overhead need advanced graphics cards and computers to predict online, which is often not feasible in all hospitals due to the lack of computing hardware.
On the other hand, this study has faced several limitations. First, more experiments should be conducted to examine the generalization ability of Deep-CT-Net on the datasets of different medical centers; second, although the attention layers increase the sensitivity rate, the precision rate decreases. Finally, the proposed models only detect positive and normal classes and are not able to quantify other pneumonia, e.g., bacterial pneumonia. Future studies may address such limitations.
In conclusion, this study proposed a publicly accessible benchmark of COVID-19 CT images of the lungs, allowing further studies to build more general models. We further proposed a baseline model called Deep-CT-Net, which benefits from a pyramidal attention layer that helps to extract discriminative pixel-wise features. Moreover, we extended our model to the case of CXR images of the lungs using a transfer learning strategy, enabling it to be trained on a small number of samples. The experimental results show that: (1) the choice of pyramidal attention layers can significantly increase the sensitivity rate, improving the overall prediction metrics, i.e., AUCs, and (2) the proposed Deep-CT-Net is likely to have more false positives in favour of a lower rate of false negatives. Overall, we found that the pixel-wise features extracted by pyramidal attention layers can significantly enhance the prediction performance of deep neural models.
As for future work, we plan to design a deep neural decoder to extend the current models to accurately segment the parts of the lungs infected with COVID-19, which could lead to the discovery of COVID-19 biomarkers. This result could further be used to categorize the infection patterns of COVID-19, providing valuable data sources for revealing the unknown aspects of the virus and eventually being helpful in medical prescriptions. Additionally, Deep-CT-Net provides a baseline result over the proposed CT-COV19 dataset, and there is still room for developing more accurate deep models.
Methods
Data pre-processing. The data pre-processing stage consists of five consecutive steps. After resizing all the images to a common resolution, e.g., 512 × 512, the first step is applying random data augmentation techniques, including a random rotation within [−15, 15] degrees and a random translation within [−0.05, 0.05]. The second step is performing histogram equalization, which adjusts the contrast of a CT-scan image by modifying the intensity distribution of its histogram. The third step is fixing the aspect ratio of the images (image resizing) to a size of 256 × 256 × 3. Afterward, Gaussian blur filtering with a window size of 3 is used for image smoothing. Finally, images are normalized by subtracting their mean and dividing by their standard deviation.
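A compact sketch of this five-step pipeline, written with torchvision transforms, is shown below. The exact augmentation probabilities and the per-image normalization helper are assumptions made for illustration and may differ from the authors' implementation.

import torchvision.transforms as T

preprocess = T.Compose([
    T.Resize((512, 512)),                                # bring all images to a common resolution
    T.RandomRotation(degrees=15),                        # random rotation in [-15, 15] degrees
    T.RandomAffine(degrees=0, translate=(0.05, 0.05)),   # random translation in [-0.05, 0.05]
    T.RandomEqualize(p=1.0),                             # histogram equalization (PIL/uint8 input)
    T.Resize((256, 256)),                                # final network input size
    T.GaussianBlur(kernel_size=3),                       # smoothing with a 3x3 window
    T.ToTensor(),
    T.Lambda(lambda x: (x - x.mean()) / (x.std() + 1e-8)),  # per-image mean/std normalization
])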
Network architecture. This subsection introduces the proposed deep neural architectures, i.e., Deep-CT-Net and Deep-CXR-Net, which can accurately classify CT-scan images into two classes: positive-COVID and negative-COVID. The following paragraphs explain their architectures and other implementation details to make them reproducible. Figure 2 depicts the workflow of Deep-CT-Net. As shown, Deep-CT-Net consists of three main parts. The first part applies DenseNet-121 18 as the backbone, extracting high-level features from the raw input CT-images. The second part performs a pyramid attention layer 32 over the extracted high-level features to maximize pixel-wise feature extraction, allowing the model to detect COVID-19 even during the first days of infection. A batch normalization layer, which standardizes the input to the next layer, is then used to avoid internal covariate shift 47 and obtain a smoother objective function 48 . Finally, the last part flattens the output of the previous part and feeds it into fully connected layers for prediction. The backward pass then updates the weight parameters of all three components using the Adam optimizer 49 with a learning rate of 1e-5 to minimize a binary cross-entropy loss.
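The following PyTorch sketch mirrors this three-part design with a DenseNet-121 backbone, a simplified pixel-wise attention block standing in for the pyramidal attention layer of Ref. 32 (whose multi-scale details are not reproduced here), batch normalization, and a fully connected head; global pooling is used before the final layer to keep the sketch small. The attention block and layer sizes are illustrative assumptions, not the authors' exact implementation.

import torch
import torch.nn as nn
from torchvision.models import densenet121

class PixelwiseAttention(nn.Module):
    # Simplified stand-in for the pyramidal attention layer: a 1x1 convolution
    # produces a per-pixel weight that re-scales the backbone feature maps.
    def __init__(self, channels):
        super().__init__()
        self.score = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, x):
        return x * torch.sigmoid(self.score(x))

class DeepCTNetSketch(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = densenet121().features        # randomly initialized DenseNet-121 feature extractor
        self.attention = PixelwiseAttention(1024)      # DenseNet-121 outputs 1024 feature channels
        self.bn = nn.BatchNorm2d(1024)
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(1024, 1)                   # single logit: COVID vs non-COVID

    def forward(self, x):
        x = self.bn(self.attention(self.backbone(x)))
        return self.fc(self.pool(x).flatten(1))

model = DeepCTNetSketch()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)   # learning rate stated in the paper
criterion = nn.BCEWithLogitsLoss()                          # binary cross-entropy on the logit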
The architecture of Deep-CXR-Net, which is a transfer learning extension of Deep-CT-Net for screening COVID-19 based on CXR images of the lungs, is depicted in Fig. 3; its design compensates for the scarcity of labelled COVID-19 CXR data and thereby improves generalization ability. More precisely, the proposed Deep-CXR-Net consists of three main parts, where the first two parts are independent models pre-trained respectively on two large-sized datasets of non-COVID lung diseases, i.e., CheXpert 50 and Kaggle-Pneumonia 51 . We consider these parts as two black-box functions whose inputs are CXR images and whose outputs are vectors of high-level features. More precisely, for a given CXR image, the output of the first part is a six-dimensional vector whose entries are the likelihoods of six lung diseases, documented in Ref. 50 . Besides, the output of the second part is a two-dimensional vector, representing the likelihood of having pneumonia or not. These additional features compensate for the scarcity of CXR data for COVID-19, increasing the generalization ability of Deep-CXR-Net. The last part, i.e., Part 3, is another deep network that concatenates all the extracted features, i.e., its own extracted features and those from parts one and two. This concatenation is the point where the idea of transfer learning comes into play: the additional concatenated features provided by Parts 1 and 2 compensate for the lack of sufficient CXR training samples of COVID-19, resulting in a high level of generalization at test time. We use ieee8023 46 as a COVID-labelled dataset to train the parameters of the third part. Additionally, a number of image augmentation techniques, such as the rotation and translation explained above, have also been applied in the learning phase. As parts 1 and 2 are pre-trained models, the backward pass only updates the weight parameters of the third part, using the Adam optimizer with a learning rate of 1e-5 to minimize the binary cross-entropy loss. As Fig. 3 shows, the backbone applied in all three parts is DenseNet-121. Similar to Deep-CT-Net, the proposed Deep-CXR-Net uses a pyramidal attention layer 32 to provide pixel-wise attention on high-level features, enabling the whole model to effectively detect COVID-19 cases even when there are only small clues of the disease in the lungs.
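A minimal sketch of this three-part transfer-learning arrangement is given below. The two frozen models (chexpert_model and pneumonia_model) are placeholders for networks pre-trained elsewhere on CheXpert and the Kaggle pneumonia data; their architectures, and the omission of the attention layer here, are assumptions made to keep the example short.

import torch
import torch.nn as nn
from torchvision.models import densenet121

class DeepCXRNetSketch(nn.Module):
    def __init__(self, chexpert_model, pneumonia_model):
        super().__init__()
        # Parts 1 and 2: frozen feature providers (6 lung-disease likelihoods and 2 pneumonia likelihoods).
        self.part1 = chexpert_model.eval()
        self.part2 = pneumonia_model.eval()
        for p in list(self.part1.parameters()) + list(self.part2.parameters()):
            p.requires_grad = False
        # Part 3: trainable DenseNet-121 branch exposing its 1024-d pooled feature vector.
        self.part3 = densenet121()
        self.part3.classifier = nn.Identity()
        self.head = nn.Linear(1024 + 6 + 2, 1)   # concatenated features -> single COVID logit

    def forward(self, x):
        with torch.no_grad():
            f1 = self.part1(x)                   # (batch, 6)
            f2 = self.part2(x)                   # (batch, 2)
        f3 = self.part3(x)                       # (batch, 1024)
        return self.head(torch.cat([f3, f1, f2], dim=1))

In use, only the parameters of part3 and head would be passed to the optimizer, matching the statement above that the backward pass updates the third part alone.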
Statement.
All the experiments, as well as methods, were carried out under relevant guidelines and regulations. All protocols used in the experiments were approved by Shiraz University. The process of collecting the CT-data, i.e., CT-COV19 was approved by the ethics committee of the Shiraz University of Medical Sciences. Informed consent was obtained from all subjects. | 2022-02-26T06:23:39.896Z | 2022-02-24T00:00:00.000 | {
"year": 2022,
"sha1": "cf127415b1c47ba426c23edfbaaea6effd632a17",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "ba4fc58ba8090d16231a6818c6db3a9ad555dba6",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
263203052 | pes2o/s2orc | v3-fos-license | Evaluation of a New MRI-Based Classification of Graft Status After Superior Capsule Reconstruction
Background: A classification system for the graft state after superior capsule reconstruction (SCR) using magnetic resonance imaging (MRI) has not been described previously. Purpose: To introduce a new, MRI-based classification system for graft integrity after SCR and to evaluate the system according to postoperative outcomes. Study Design: Cohort study (diagnosis); Level of evidence, 3. Method: Included were 62 consecutive patients who underwent SCR using autologous fascia lata graft between January 2013 and April 2021. Postoperative outcomes were assessed (American Shoulder and Elbow Surgeons [ASES] score, Constant score, pain visual analog scale [pVAS], range of motion [ROM], acromiohumeral distance [AHD], Hamada grade). Graft status was classified by 2 orthopaedic surgeons on postoperative MRI in accordance with the signal intensity and the presence or extent of the tear, as follows: type 1 (hypointense signal without tear), type 2 (hyperintense signal without tear), type 3 (partial-thickness tear), type 4 (full-thickness tear with partial continuity), and type 5 (full-thickness tear with complete discontinuity). Intra- and interobserver agreement were assessed using Cohen kappa. The correlation between postoperative outcomes (ASES score, Constant score, pVAS, ROM, AHD, and Hamada grade) and the SCR graft classification system was assessed with the Pearson correlation coefficient, and the outcomes were compared according to classification type. Results: Patients were classified according to the new system as follows: type 1 (n = 15), type 2 (n = 20), type 3 (n = 7), type 4 (n = 8), and type 5 (n = 12). There was excellent interobserver agreement (κ = 0.819) and intraobserver agreement (κ = 0.937 and 0.919). The classification system showed a moderate to high correlation with the ASES score (r = –0.451; P = .001), pVAS (r = 0.359; P = .005), AHD (r = –0.642; P < .001), and Hamada grade (r = 0.414; P < .001). Patients classified as having types 1 and 2 showed better outcomes in terms of ASES score, pVAS, ROM, and AHD compared with type 5 patients (P ≤ .021 for all). Conclusion: The new classification system was highly reproducible and showed clinical utility for both radiological and clinical evaluation after SCR.
Studies regarding superior capsule reconstruction (SCR) as an alternative treatment for massive irreparable rotator cuff tears have been increasing. 2,13,17,21,39 SCR, which utilizes the autologous fascia lata as a humeral head depressor, was introduced by Mihata et al in 2012. 27,29 SCR reinforces superior static stability and prevents superior migration of the humeral head caused by the massive rotator cuff tear. 25,40 The established Sugaya classification 36 uses postoperative magnetic resonance imaging (MRI) to evaluate the quality of tendon healing using 5 grades after rotator cuff repair surgery and is useful for assessing the condition of the tendons. 24,32,41 However, healing from an SCR using autologous fascia lata is referred to as "graft healing," which is not the same as the "tendon healing" that occurs after rotator cuff repair.
The primary aim of this study was to classify patients after SCR based on graft integrity status using an MRI-based system similar to the Sugaya classification system for rotator cuff tendon healing. The secondary aim was to evaluate clinical outcomes according to this classification. We postulated that comparisons of clinical outcomes in accordance with the new SCR graft classification system would be applicable in clinical practice.
METHODS
The protocol for this study was approved by our institutional review board. Between January 2013 and April 2021, a total of 131 consecutive patients underwent SCR by a single surgeon (I.H.J.) at our hospital. All patients in this population who underwent postoperative MRI after 1 year were identified (n = 98). Among them, we excluded patients with a poor-quality MRI that was not suitable for graft evaluation (n = 10) and those who received SCR using an allograft (n = 26). Ultimately, a total of 62 patients were included in the analyses (Figure 1).
SCR Surgery
After an irreparable rotator cuff tear was diagnosed via diagnostic arthroscopy, we performed acromioplasty to reduce the friction between the subacromial undersurface and the graft. Subscapularis (SSC) repair was also performed in cases with concomitant SSC tears. The defect was measured in the medial-lateral and anterior-posterior directions.
The fascia lata was harvested from the ipsilateral thigh and prepared as a double-folded 2-layer graft. A running suture with Ethibond (Ethicon) 2-0 in the graft margin was used. A graft of at least 6 mm in thickness was obtained in the final preparation, which was recorded in the operation record. After debridement of the superior margin of the glenoid, 2 or 3 suture anchors (1.7-mm Suture fix Anchor; Smith & Nephew) were inserted from the 10-o'clock to 2-o'clock position according to the defect size. A sliding locking knot suture was used on each anchor. A double-row suture bridge method was used for humeral-side fixation. Two threaded anchors (4.5-mm Healicoil; Smith & Nephew) were inserted anteriorly and posteriorly to the medial row of the footprint, respectively, and the graft was fixed with a mattress suture. Remnant rotator cuff tissue or subacromial bursa were sutured using the remaining strings from the medial-row anchors and fixed with 2 knotless anchors (4.5-mm Footprint Anchor; Smith & Nephew) inserted into the lateral row of the footprint. All graft fixations were performed in 30° of shoulder abduction and neutral rotation (Figure 2). In 38 of 62 patients (61.29%), mesh was placed between the grafts to enhance stiffness. 16
Postoperative Rehabilitation
Immobilization for 6 to 8 weeks with an abduction brace was performed in all patients. After immobilization, a passive range of motion (ROM) exercise program was started. After a full ROM was achieved, strengthening exercise programs including elastic band exercises and periscapular muscle strengthening exercises were commenced. Patients were recommended to return to daily activities within a tolerable range at 3 months after surgery, while leisure sports activities were allowed 1 year after surgery.
Patient Data, Radiological Assessment, and Shoulder MRI
Patient information (age, sex, underlying disease, and history of previous surgeries) was collected through a review of electronic medical records. Clinical outcomes (ROM, pain visual analog scale [pVAS], American Shoulder and Elbow Surgeons [ASES] score, and Constant score) measured preoperatively and at 1 year postoperatively were retrieved from the medical records. The Hamada classification system was used to evaluate the stage of cuff tear arthropathy on plain radiographs taken preoperatively and 1 year postoperatively. The acromiohumeral distance (AHD) was measured using preoperative and 1-year postoperative plain radiographs obtained with the shoulder in a neutral position.
Pre- and postoperative shoulder MRIs were performed using a 3-T scanner (Ingenia; Philips). The following parameters were used: axial and coronal T2 fat-saturated (repetition time [TR] = 4700 ms; echo time [TE] = 65 ms), coronal T1 (TR = 640 ms; TE = 21 ms), and coronal and sagittal T2 (TR = 2880 ms; TE = 80 ms) weighted images were acquired. The slice thickness was 2 mm with an interslice gap of 0.5 mm (field of view, 150 mm; image matrix, 512 × 512). The imaging data were jointly reviewed and evaluated by 2 orthopaedic surgeons (J.-B.L. and J.W.Y.). To evaluate preoperative characteristics of the shoulder pathology, fatty infiltration (FI) of the rotator cuff muscle and tendon retraction were assessed using preoperative MRI. The preoperative FI of the rotator cuff muscle was assessed using the Goutallier classification: grade 0, normal muscle; grade 1, muscle with some fatty streaks; grade 2, more muscle than fat; grade 3, equal amounts of muscle and fat; and grade 4, less muscle than fat. 7,10,35 The degree of tendon retraction was evaluated using the Patte classification on coronal and axial views of the shoulder 16,33 : grade 1, in which the tear stump of the tendon is retracted and located before the lateral articular margin; grade 2, in which the stump is at the level of the humeral head; grade 3, in which the stump is at the glenoid level; and grade 4, in which the stump is located medially to the glenoid level. 33
MRI Assessment of Graft Integrity
Postoperative MRI was performed at 1 year postoperatively to evaluate the graft integrity. We identified the graft tissue between the inserted anchors to reduce the possibility of misreading a torn graft when the MRI cut direction was not parallel to the graft direction. To determine the integrity of the graft, several consecutive cuts of the images were checked in the coronal and sagittal views (Figure 3).
The thickness of the graft was checked in both the sagittal and coronal views and was categorized as normal, partial tear, or complete tear according to the depth of the tear in the graft. Intrasubstance hypersignal intensity of the graft was determined when there was only a signal change without graft tear. The proposed graft classifications after SCR were as follows (Figure 4): type 1, graft with no tear and with homogeneously low intensity on each image; type 2, graft with no tear but with intrasubstance hypersignal intensity; type 3, graft with a partial-thickness tear; type 4, graft with a full-thickness tear but with partial integrity; and type 5, graft with a full-thickness tear and complete discontinuity.
Intra- and Interobserver Reliability of SCR Graft Classifications
Two orthopaedic shoulder specialists (J.-B.L. and J.W.Y.) participated in the reproducibility assessment of the new classification system for graft healing after SCR. Each observer independently classified the graft status twice in accordance with this new system, with an interval of at least 4 weeks between assessments.
Clinical and Radiological Outcomes
An independent examiner (J.W.Y.), who was not involved in any of the surgeries, conducted the clinical assessments of the study patients. The preoperative and 1-year postoperative clinical outcomes (ASES score, Constant score, pVAS, and ROM) and radiological outcomes (AHD and Hamada classification 12 ) were compared according to SCR graft type under the new classification. Due to the small number of included study patients, types 1 and 2 were combined into group A (without tear), types 3 and 4 were combined into group B (tear but with continuity), and type 5 was regarded as group C (without continuity) for statistical analysis.
Statistical Analysis
Quantitative data are described as mean ± standard deviation and qualitative data as number and percentage. Data sets for measured parameters were compared using the Mann-Whitney U test for continuous data and the Fisher exact test for categorical data. For intergroup comparisons (group A versus group B versus group C), analysis of variance with the Tukey post hoc test was used. The intra- and interobserver reliability of the MRI assessments were calculated using the Cohen kappa coefficient (κ), 4 with κ values interpreted as described by Landis and Koch 18 : <0 (no agreement), 0 to 0.20 (slight agreement), 0.21 to 0.40 (fair agreement), 0.41 to 0.60 (moderate agreement), 0.61 to 0.80 (substantial agreement), and 0.81 to 1.00 (almost perfect agreement). The Pearson correlation coefficient (r) was used to evaluate the correlations between the clinical outcomes and the new SCR classification system.
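As an illustration of how these tests can be run, the snippet below reproduces the named procedures (Cohen kappa, Pearson correlation, one-way ANOVA with a Tukey post hoc test) on small placeholder arrays; the numbers are invented for demonstration and are not the study data.

import numpy as np
from sklearn.metrics import cohen_kappa_score
from scipy.stats import pearsonr, f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Placeholder readings from two observers and per-patient outcomes.
obs1 = [1, 2, 2, 3, 5, 4, 1, 2]
obs2 = [1, 2, 2, 3, 5, 5, 1, 2]
graft_type = np.array([1, 1, 2, 2, 3, 4, 5, 5])
ases = np.array([85.0, 88.0, 80.0, 82.0, 76.0, 74.0, 70.0, 66.0])
group = np.array(["A", "A", "A", "A", "B", "B", "C", "C"])

print(cohen_kappa_score(obs1, obs2))                                          # observer agreement
print(pearsonr(graft_type, ases))                                             # correlation with an outcome
print(f_oneway(ases[group == "A"], ases[group == "B"], ases[group == "C"]))   # one-way ANOVA
print(pairwise_tukeyhsd(ases, group))                                         # Tukey post hoc comparisons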
Patients
Among the 62 study patients (mean age, 65.2 ± 8.5 years), 21 (33.9%) were men. A concomitant SSC repair was performed in 9 (14.5%) patients. In most of the patients, the tear margin was retracted to the glenoid level or more medially (Patte grade 3 in 48 [77.4%] and grade 4 in 8 [12.9%]). Preoperative mean AHD was 5.06 ± 2.11 mm. The mean follow-up duration was 28.5 ± 17.7 months. Table 1 lists the patient demographics and preoperative clinical and radiological findings.
SCR Graft Classifications and Intra- and Interobserver Agreement
According to the new SCR classification of the postoperative graft status, 15 patients were classified as type 1 (24.2%), 20 as type 2 (32.3%), 7 as type 3 (11.3%), 8 as type 4 (12.9%), and 12 as type 5 (19.4%). The intraobserver agreement was almost perfect, with mean κ coefficients of 0.937 and 0.919 for the two observers. The interobserver agreement was almost perfect, with a mean coefficient of 0.819.
Clinical and Radiological Outcomes According to SCR Graft Type
After surgery, patients in groups A, B, and C showed significant improvements in their ASES (P < .001, P < .001, and P = .003, respectively) and pVAS (P < .001, P < .001, and P = .014, respectively) values compared with the preoperative levels. The Constant score was significantly elevated only in groups A and B (P < .001 for both). Improvement in forward flexion (FF) after SCR was noted only in group A (P = .016). External rotation showed no significant difference among the 3 groups. AHD increased significantly in groups A and B (P < .001 and P = .004, respectively).
Postoperative ASES was significantly higher in group A (82.5 ± 7.4) than in group C (69.9 ± 10.8; P < .001). Postoperative pVAS was significantly lower in group A (1.19 ± 0.78) than in group C (2.08 ± 0.67; P = .021). Postoperative AHD was the highest in group A (P < .001). In terms of postoperative Hamada classification, group A showed a higher degree of improvement than group B (P = .030). Group C showed no improvement after SCR, and 1 patient in this group showed progression of cuff tear arthropathy (Table 2).
DISCUSSION
Our SCR graft classification method showed almost perfect inter- and intraobserver reliability. Furthermore, this classification system showed moderate to high correlations with clinical (ASES and pVAS) and radiological (AHD and Hamada classification) outcomes. Type 5 classification in this system, which denotes a complete discontinuity of the graft, was significantly associated with poor clinical and radiological outcomes that were indicative of a failed SCR.
In the histological evaluation of SCR grafts, second-look arthroscopy and biopsy graft specimens are considered to be the gold standard; however, they are invasive and thus not ideal for clinical follow-up. 38 Graft healing in the orthopaedic field is therefore mostly evaluated using MRI. 9,11,22,34,37,42,43 A detailed description of the graft status using MRI is important to distinguish whether the graft is healing or if a pathologic condition (eg, partial tear, total rupture) has emerged. Our new classification for the graft status after SCR uses 5 different grading scores. This system will therefore help orthopaedic surgeons or radiologists to describe the graft state in clinical practice. It will also have utility in describing graft changes in future studies.
In our study patients, image analysis was performed using an MRI reading protocol that clearly defined the location and graft status with high reproducibility. The distinction between the graft and the surrounding tissue is an important factor in accurately determining the graft state on an MRI scan. 16,20 In some cases, distinguishing the graft from the surrounding tissue is difficult due to graft remodeling. In addition, if the direction of the graft and that of the MRI are not parallel, the possibility of misdiagnosing a graft tear should also be considered. Importantly, our imaging analysis with an MRI reading protocol showed almost perfect intra- and interobserver agreements.
The SCR graft classifications from our new system showed a moderate correlation with the clinical outcomes. Previous studies have reported that graft healing is the key to achieving favorable outcomes after SCR regardless of the graft materials used. 5,25,26 Pain from subacromial impingement, muscle weakness, and restricted active shoulder ROM are common symptoms of irreparable rotator cuff tear. 6,8,23,31 Defects in the superior capsule and posterosuperior rotator cuff tendons cause the loss of superior stability. 1,14,28,29 More severe symptoms may be caused by loss of stability of the glenohumeral joint. 3 In our present study, the graft healing group showed better clinical outcomes in terms of the ASES and pVAS values.
Our SCR classification system showed a moderate to high correlation with the AHD and Hamada classifications. Mihata and colleagues have reported that a healed graft provides superior stability and leads to significant increases in AHD. 27 In cases of failed graft healing, another study reported that a loss of superior stability leads to humeral head superior migration and subsequent progression of cuff tear arthropathy. 25 In our present study, the SCR graft classification showed significant correlations with AHD (r = −0.642; P < .001) and with the Hamada classification (r = 0.414; P < .001), suggesting that graft healing improves AHD and Hamada classification. However, Denard et al reported that preoperatively severely decreased AHD and advanced Hamada grade could be correlated with postoperative graft failure. 5 The causal relationship between AHD and postoperative graft healing is thus still unclear, and further research is needed to investigate this.
In the present study, group A (without tear) was associated with the best clinical outcomes, followed by group B (tear with graft continuity) with intermediate clinical outcomes. Group C (tear with graft discontinuity) patients had relatively poor clinical outcomes. In previous MRI-based studies, comparative analyses were conducted in populations with graft tears and those with graft healing. 19,20,25,27,40 It is notable that, in relation to the postoperative graft status of the autologous fascia lata, the spectrum varies from no tear with hypointense signal to a full-thickness tear with complete discontinuity. To the best of the authors' knowledge, there has been no consensus or detailed description regarding the definition of graft failure. In our current investigation of the association between clinical results and different graft types, the clinical outcomes (ASES, pVAS) and radiological outcomes (AHD and Hamada grade) were the best among patients with a type 1 graft status and the worst among patients with a type 5 graft status, whereas types 2, 3, and 4 did not show significant differences from type 1. Considering these results, type 5 (complete discontinuity) could be considered to indicate a failure of the SCR, both clinically and radiologically.
A previous study on graft classifications was conducted on patients in whom SCR was performed using alloderm. 30 In that report, the authors also used 5 categories to stratify the graft condition according to the presence and location of the tear, as follows: intact, tear from the glenoid, midsubstance tear, tear from the tuberosity, and absent graft. However, there was no further description of the graft state in that study, which limited its capacity to explain and understand the changes that occurred due to autograft remodeling. Therefore, the classification system for SCR using alloderm may lead to some limitations in studies on SCR using autografts.
Limitations
One limitation of our study was the small number of cases with impaired integrity of the graft. The numbers of patients with each designated graft state, especially types 3, 4, and 5, were too small for detailed analyses. Future studies with a larger sample size are necessary to perform analyses of the 5 subgroups of the classification. Another limitation of our study was the relatively short follow-up duration, which hindered us from analyzing the association of the proposed classification with long-term clinical outcomes. Future studies with a larger sample size and longer follow-up durations are needed to confirm the usefulness and suitability of our proposed grading system. Lastly, as all cases were treated with SCR using autologous fascia lata, there is a lack of evidence for the generalizability of this classification system to other graft types.
CONCLUSION
The new SCR graft classification system introduced in this study was highly reproducible and showed clinical utility for both radiological and clinical evaluation following SCR. This system may support future studies regarding SCR with consistent reporting of MRI-based outcomes.
Figure 2. Graft preparation and superior capsule reconstruction procedure. (A) Polypropylene mesh augmentation. (B) Marginal running suturing. (C) The mean thickness of the graft was at least 6 mm. (D) Suture anchors were inserted in the 10-o'clock to 2-o'clock direction of the superior surface of the glenoid. (E) After fixation of the graft at each side of the glenoid and the humerus. (F) Remnant tissues including rotator cuff tendon and bursa tissues were repaired on the graft (over-the-top technique) and fixed.
Figure 3. Comparisons between (A) arthroscopic view and (B-G) postoperative MRI. (A) Arthroscopic view from a standard lateral portal. The autologous fascia lata graft was placed between the glenoid and humerus and fixed with anchors at each side. The yellow arrows indicate the locations of the glenoid anchors, the large white arrows indicate the locations of the humerus anchor, the horizontal white line represents the width of the graft, the red line represents the virtual line of the coronal section MRI, and the red asterisk represents a pseudotear shown in (G). (B-D) T2-weighted sagittal view postoperative MRI scans. (B) Anchors were inserted into the glenoid (white arrows). (C) Midpoint of the graft. The horizontal white line represents the graft width. (D) Anchors were inserted into the humerus (white arrows), and the graft was placed between the anchors. (E-G) T2-weighted fat-suppressed coronal view postoperative MRI scans. The graft was placed between the anchors (white arrows). (G) A graft tear-like finding (red asterisk), but a pseudolesion due to the direction of MRI acquisition (red line), which was not parallel to the graft. MRI, magnetic resonance imaging.
Figure 4. Schematic images and postoperative MRIs of SCR grafts classified into 5 categories. (A) Type 1, graft with no tear and with homogeneously low intensity on each image. (B) Type 2, graft with no tear with intrasubstance hypersignal intensity. (C) Type 3, graft with a partial-thickness tear. (D) Type 4, graft with a full-thickness tear but with partial integrity. (E) Type 5, graft with a full-thickness tear and complete discontinuity. MRI, magnetic resonance images; SCR, superior capsule reconstruction.
TABLE 1. Demographic, Clinical, and Radiological Characteristics of the Study Patients (N = 62)
TABLE 2. Clinical and Radiological Outcomes According to SCR Graft Type
TABLE 3. Correlation Between SCR Classification System and Clinical/Radiological Outcomes | 2023-09-28T15:24:50.235Z | 2023-09-01T00:00:00.000 | {
"year": 2023,
"sha1": "194b75d009a02c37cf99fc527c027177fcf8142e",
"oa_license": "CCBYNCND",
"oa_url": "https://journals.sagepub.com/doi/pdf/10.1177/23259671231193315",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "76c77f7bf077610052074e625efe889d714d83c4",
"s2fieldsofstudy": [
"Medicine",
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
} |
17482029 | pes2o/s2orc | v3-fos-license | The Cellular Prion Protein Interacts with the Tissue Non-Specific Alkaline Phosphatase in Membrane Microdomains of Bioaminergic Neuronal Cells
Background The cellular prion protein, PrPC, is GPI anchored and abundant in lipid rafts. The absolute requirement of PrPC in neurodegeneration associated with prion diseases is well established. However, the function of this ubiquitous protein is still puzzling. Our previous work using the 1C11 neuronal model provided evidence that PrPC acts as a cell surface receptor. Besides a ubiquitous signaling function of PrPC, we have described a neuronal specificity pointing to a role of PrPC in neuronal homeostasis. 1C11 cells, upon appropriate induction, engage into neuronal differentiation programs, giving rise either to serotonergic (1C11 5-HT) or noradrenergic (1C11 NE) derivatives. Methodology/Principal Findings The neuronal specificity of PrPC signaling prompted us to search for PrPC partners in 1C11-derived bioaminergic neuronal cells. We show here by immunoprecipitation an association of PrPC with an 80 kDa protein identified by mass spectrometry as the tissue non-specific alkaline phosphatase (TNAP). This interaction occurs in lipid rafts and is restricted to 1C11-derived neuronal progenies. Our data indicate that TNAP is implemented during the differentiation programs of 1C11 5-HT and 1C11 NE cells and is active at their cell surface. Notably, TNAP may contribute to the regulation of serotonin or catecholamine synthesis in 1C11 5-HT and 1C11 NE bioaminergic cells by controlling pyridoxal phosphate levels. Finally, TNAP activity is shown to modulate the phosphorylation status of laminin and thereby its interaction with PrP. Conclusion/Significance The identification of a novel PrPC partner in lipid rafts of neuronal cells favors the idea of a role of PrP in multiple functions. Because PrPC and laminin functionally interact to support neuronal differentiation and memory consolidation, our findings introduce TNAP as a functional protagonist in the PrPC-laminin interplay. The partnership between TNAP and PrPC in neuronal cells may provide new clues as to the neurospecificity of PrPC function.
Introduction
The cellular prion protein PrP C is a ubiquitous glycoprotein anchored at the plasma membrane through a glycosylphosphatidylinositol (GPI) lipid moiety. It is abundantly expressed in neurons of the central nervous system (CNS), which are the main target of transmissible spongiform encephalopathies (TSE). The conversion of PrP C into an abnormal conformer, PrP Sc , prone to aggregation, is a hallmark of prion diseases. In addition to having a genetic or sporadic origin like other neurodegenerative disorders, prion diseases have the unique peculiarity to be transmissible, the PrP Sc conformer being the main if not the only component of the pathogenic agent [1].
The absolute requirement of PrP C for the development of prion diseases is well established. However, the precise role of this protein is yet to be fully determined. Its identification should help to understand how the pathogenic isoforms interfere with the cellular function of normal PrP C [2]. Recent data have shown that PrP C plays a role in cell signaling and cell adhesion and may act as a membrane receptor or co-receptor [3][4][5], consistent with its extra-cellular orientation. Interestingly, PrP C is expressed at the plasma membrane in sub-domains enriched in cholesterol and sphingolipid [6] described as rafts and known to play a role in cellular events such as sorting of membrane constituents and signal transduction [7]. While the location of PrP C in lipid rafts is suspected to be required for its conversion into PrP Sc [8,9], it could also have implications as to PrP C function.
Attempts to identify physiological ligands or partners that could bring light on PrP C function have relied on different approaches (two hybrid techniques, immunoprecipitation of cellular PrP C complexes, complementary hydropathy analyses…). Only some of the interactions have been confirmed and/or shown to have functional relevance at a cellular level [10]. PrP C associates with molecular chaperones such as BiP, grp94, protein disulfide isomerase or calnexin, required for the proper folding of glycoproteins [11]. Another PrP C -interacting molecule is the stress inducible protein I (STI-I) chaperone, described as having a neuroprotective action [12]. PrP C partners also include proteins involved in signal transduction such as synapsin 1, important for synapse formation and neurotransmitter release, the adaptor Grb2 molecule [13] and the protein casein kinase 2, CK2 [14]. Also, adhesion molecules such as laminin and the 37/67 kDa laminin receptor have been shown to interact with PrP C [15][16][17], with heparan sulphated molecules acting as intermediates [18]. Graner et al. have notably reported on the impact of the PrP C -laminin interaction on neurite outgrowth [16]. Chemical cross-linking analyses have identified the neuronal adhesion molecule, NCAM, as another PrP C interacting protein [19]. This interaction appears to sustain the recruitment of NCAM into lipid rafts, the activation of the Fyn tyrosine kinase and N-CAM-mediated neurite outgrowth [20]. The latter observation recalls our demonstration that antibody mediated PrP C cross-linking triggers Fyn activation in 1C11-derived neuronal cells via the lipid raft protein caveolin [3].
In order to search for PrP C partners, we took advantage of the 1C11 neuronal differentiation model [21], which previously allowed us to substantiate a role for PrP C in signal transduction. Upon appropriate induction, the 1C11 neuroepithelial cell line engages into a neuronal differentiation program. Nearly 100% of cells acquire the overall functions of serotonergic (1C11 5-HT ) or noradrenergic (1C11 NE ) neurons, within 4 or 12 days, respectively. By unraveling some signal transduction events instructed by PrP C , our previous work has pointed to the implication of the cellular prion protein in cell homeostasis [3,[22][23][24]. In 1C11 5-HT and 1C11 NE differentiated cells, the implementation of a PrP C -caveolin-Fyn platform on neuritic extensions controls multiple pathways converging on the MAP kinases, ERK1/2. Furthermore, in addition to its proper signaling activity, PrP C modulates the agonist-induced response of the three serotonin receptors coupled to G-proteins present on 1C11 5-HT cells, themselves regulating the overall serotonergic functions [25]. Interestingly, this modulatory role of PrP C is also restricted to fully differentiated cells and is caveolin-dependent.
The neuronal specificity of PrP C signaling function may rely on some of the numerous isoforms and/or glycoforms of this protein resulting from proteolytic cleavage and heterogenous glycosylation [26,27]. It could also depend on PrP C partners induced during the bioaminergic programs and/or recruited into lipid rafts. The purpose of the present study was to search for PrP C partners in lipid microdomains of differentiated neuronal cells. By an approach combining immunoprecipitation of PrP C from lipid rafts and mass spectrometry analysis, we identify the tissue nonspecific alkaline phosphatase (TNAP) as interacting with PrP C in membrane microdomains of both 1C11 5-HT and 1C11 NE cells.
TNAP is a GPI membrane-bound alkaline phosphatase (AP) expressed as three distinct isoforms found respectively in liver, kidney and at a high level in bone where it plays an essential role in osteogenesis [28]. Recent data identified TNAP in different cell types of the brain [29,30]. While its role is still elusive, it has been proposed to participate to neurotransmission [29].
Here, we show that TNAP is induced along either the serotonergic or noradrenergic differentiation program of 1C11 cells. This ectoenzyme is active under physiological conditions and may participate in bioamine synthesis. Besides, we provide evidence that the PrP C -interacting protein laminin is a substrate for TNAP in 1C11-derived neuronal cells, and that, by modulating the phosphorylation level of laminin, TNAP impacts on the interaction between PrP and laminin.
Results
PrP C partitions in lipid rafts irrespective of the differentiation state of 1C11 cells
The presence of PrP C in lipid rafts of 1C11 precursor, 1C11 5-HT or 1C11 NE fully differentiated cells was assessed using Triton X-100-insoluble glycosphingolipid (GSL)-rich microdomains isolated by flotation on a sucrose gradient and solubilized in 6% SDS. As revealed by Western blot, PrP C segregated mainly into the GSL fraction of 1C11 cells (Fig. 1A) and its neuronal progenies (not shown). A similar result was obtained for caveolin 1, a marker of caveolae, which are subtypes of lipid rafts (Fig. 1B).
To evaluate the degree of PrP C enrichment in lipid rafts, comparative analyses were performed by Western blot using proteins of raft preparations (1 mg) and total extracts (15 mg) from 1C11 precursor, 1C11 5-HT (day 4) and 1C11 NE (day 12) cells. Irrespective of the differentiation state, a 100-to 200-fold increase in the amount of PrP C was observed in GSL fractions ( Fig. 2A). A similar enrichment was observed for other proteins specific of lipid rafts such as flotillin (Fig. 2B), caveolin 1 ( Fig. 1B and not shown) and the GPI-anchored 120 kDa isoform of NCAM (Fig. 2C), the latter two described as interacting with PrP C [3,19]. While NCAM120 was enriched in lipid rafts, it is noteworthy that the 140 kDa transmembrane form of NCAM was predominant in total extracts.
These data indicate that the enrichment of PrP C in lipid rafts is independent from the differentiation state, precursor vs neuronal, of 1C11 cells.
PrP C interacts in microdomains of 1C11 5-HT and 1C11 NE neuronal cells with an 80 kDa protein identified as the tissue non-specific alkaline phosphatase, TNAP
In order to search for potential PrP C partners in such specialized microdomains, plasma membrane proteins of 1C11, 1C11 5-HT and 1C11 NE cells were labelled with biotin before raft preparation. GSL fractions were dissolved in non-ionic detergent (1% Triton X-100) to maintain some protein interactions and heated for 1 hour at 37°C to allow extraction of proteins from membrane cholesterol. Antibodies recognizing either N-ter (SAF34) or C-ter (Bar221) epitopes of PrP C were covalently linked to sepharose beads and used to immunoprecipitate PrP C . The immunoprecipitated complexes were resolved on a 12% SDS-PAGE (Fig. 3A). The biotinylated full-length mono- or bi-glycosylated PrP C species were immunoprecipitated with both antibodies in 1C11 5-HT differentiated cells as well as in 1C11 precursor cells. The glycoforms corresponding to the N-terminally truncated fragments of PrP C were recovered with the Bar221 antibody only. A few other biotinylated proteins appeared to co-precipitate with PrP C both in 1C11 precursor cells and in bioaminergic neuronal cells. These include proteins with an apparent molecular mass between 45 and 65 kDa (Fig. 3A and B) as well as proteins of high molecular weight (around 200 kDa). Interestingly, using either anti-N-ter or anti-C-ter PrP C antibodies, an 80 kDa biotinylated protein was co-precipitated with PrP C in lipid rafts of 1C11 5-HT and 1C11 NE cells. The presence of this 80 kDa protein within PrP C complexes appears to depend on neuronal differentiation, since we failed to detect this protein co-precipitating with PrP C in lipid rafts of the 1C11 neuroepithelial precursor (Fig. 3 and data not shown).
Mass spectrometry analysis was then carried out to define the identity of this 80 kDa PrP C partner. Lipid rafts were prepared from 1C11 5-HT and 1C11 NE cells as well as from the 1C11 precursor. PrP C complexes were immunoprecipitated as above and separated on an 8% SDS-PAGE, allowing a better resolution in the 50-100 kDa range of proteins, as exemplified in Figure 3B. Proteins of 80 kDa apparent molecular mass were trypsin-digested and analyzed with a LC/MS/MS instrument. The experimental peptide fragments were confronted to the NCBI non-redundant mouse database. Five peptides (aa 53-71, aa 204-213, aa 248-260, aa 274-…) matched the sequence of the tissue non-specific alkaline phosphatase, TNAP. We took advantage of an anti-TNAP antibody [31] to further study the TNAP-PrP C interaction. Performing the reverse immunoprecipitation with the anti-TNAP antibody did not allow a clear detection of associated proteins (data not shown). The anti-TNAP polyclonal antibody may promote a destabilisation of TNAP-PrP C complexes. It is also worth noting that, under conditions where biotinylated TNAP was easily revealed in PrP C immune complexes by streptavidin, the anti-TNAP antibody failed to yield a signal at 80 kDa. This suggests that sensitive techniques (biotinylation, mass spectrometry analysis) are required to reveal TNAP co-precipitating with PrP C .
We next evaluated the distribution of TNAP at the cell surface of 1C11 5-HT serotonergic cells by immunofluorescence. TNAP antibodies yielded a punctate staining of the membrane, both on cell bodies and on neurites (Fig. 5, B and E). Such a labeling was reminiscent of the PrP C staining (Fig. 5, A and D). The superimposition of the two stainings showed a partial colocalization of TNAP and PrP C at the surface of 1C11 5-HT neuronal cells (Fig. 5C) that is confirmed by scanning confocal analysis ( fig. 5F).
As a whole, these data introduce TNAP as a neurospecific PrP C partner, in lipid rafts of either 1C11 serotonergic or noradrenergic progenies. They also indicate that membrane-bound TNAP and PrP C are located in close vicinity within raft domains.
Western blot analysis of lipid raft preparations with the anti-TNAP antibody detected TNAP in differentiated 1C11 5-HT and 1C11 NE cells but not in the 1C11 precursor (Fig. 6A). The absence of the TNAP protein in 1C11 precursor cells correlated with a lack of TNAP gene expression, as assessed by RT-PCR analysis. As shown in Figure 6B, TNAP transcripts were below detectable levels in the 1C11 precursor and were abundant in 1C11 5-HT and 1C11 NE neuronal cells.
As a whole, these results show that 1C11 precursor cells lack TNAP and that the expression of this ectoenzyme is restricted to 1C11 5-HT and 1C11 NE neuronal cells.
A functional TNAP is induced during the differentiation of 1C11 5-HT and 1C11 NE cells
We next investigated whether the TNAP interacting with PrP C at the cell surface of 1C11 5-HT and 1C11 NE cells was functional. To preserve at best the natural microenvironment of the TNAP ectoenzyme, we developed a chemiluminescence assay using the CSPD probe. It allowed us to measure, under physiological conditions, TNAP and other phosphatase activities present at the cell surface of adherent live cells. As shown in Figure 6C (white bars), phosphatase activities monitored in 1C11 precursor cells were much lower (3- to 4-fold) than in fully differentiated 1C11 5-HT neuronal cells. We sought to specify whether the increase in phosphatase activity associated with neuronal differentiation could be attributed to TNAP. To this purpose, the chemiluminescent assay was performed in the presence of orthovanadate (1 mM), a phosphatase inhibitor with broad specificity, or tetramisol (5 mM), a specific inhibitor of the TNAP enzyme. Interestingly, TNAP has the particularity of being inhibited by tetramisol but not by orthovanadate [32]. While not affected by tetramisol (grey bar), exposure of 1C11 undifferentiated cells to orthovanadate (black bar) fully switched off the phosphatase activity, indicating that a set of phosphatases distinct from TNAP is present at the neuroectodermal precursor stage. By contrast, in 1C11 5-HT serotonergic cells, tetramisol inhibited around 65% of phosphatase activities. The remaining activity corresponded roughly to the level already present in undifferentiated 1C11 cells (Fig. 6C). Noticeably, in 1C11 5-HT cells, the roughly 65% of phosphatase activity that is resistant to orthovanadate fully relates to TNAP. Similar phosphatase profiles were obtained with 1C11 NE cells (see Fig. 7B).
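The percentages quoted above follow directly from comparing inhibitor-treated and untreated luminescence readings. The short Python sketch below illustrates the calculation; the RLU values are hypothetical placeholders chosen only to mimic the pattern described, not measured data.

```python
# Hypothetical relative luminescence units (RLU) for a differentiated 1C11 5-HT culture.
rlu_total = 12000.0          # no inhibitor (total cell-surface phosphatase activity)
rlu_tetramisol = 4200.0      # + 5 mM tetramisol (TNAP inhibited)
rlu_orthovanadate = 11500.0  # + 1 mM orthovanadate (TNAP is insensitive to it)

# The tetramisol-sensitive share of the activity is attributed to TNAP.
tnap_fraction = (rlu_total - rlu_tetramisol) / rlu_total
vanadate_resistant_fraction = rlu_orthovanadate / rlu_total

print(f"TNAP-attributable activity:   {tnap_fraction:.0%}")          # ~65%
print(f"Orthovanadate-resistant part: {vanadate_resistant_fraction:.0%}")
```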
The time of onset of a functional TNAP among total phosphatase enzymatic activity was then monitored during the kinetics of serotonergic and noradrenergic differentiation of 1C11 cells (Fig. 7). This is rendered possible by the synchronicity and the homogeneity of differentiation of 1C11 cells. In 1C11 5-HT serotonergic and 1C11 NE noradrenergic differentiating cells, phosphatase activity levels kept increasing during the time course of both neuronal differentiation programs, till completion (day 4 for 1C11 5-HT and day 12 for 1C11 NE cells). Such an increase in cell surface phosphatase activities was majorly attributable to an induction of TNAP as demonstrated by sensitivity to tetramisol ( Fig. 7A and B). This TNAP activity accounted for 60-70% of total phosphatase activities in differentiated cells. Of note, the induction of a TNAP enzymatic activity during 1C11 bioaminergic differentiation fully matches the kinetics of expression of TNAP specific mRNA (Fig. 7C).
These results demonstrate that a functional TNAP is induced as early as day 3 of both the serotonergic and noradrenergic neuronal pathways and reaches a maximal activity upon implementation of a complete bioaminergic phenotype. The onset of TNAP activity at the surface of bioaminergic cells, which precedes the implementation of a complete phenotype, may confer to this phosphatase a role in the modulation of neuron-or neurotransmitter-associated specialized functions.
TNAP is involved in the control of serotonin and catecholamine synthesis
The specific role of TNAP in the CNS is still elusive. TNAP is known to function as an ectoenzyme to convert pyridoxal phosphate (PLP) into pyridoxal (PL), ensuring the passive uptake of this non-phosphorylated form of vitamin B6 into the cells, where PL is converted back to PLP by intracellular kinases. In neuronal cells, PLP is an essential cofactor of the decarboxylases required for neurotransmitter synthesis, i.e. glutamate decarboxylase (GAD) for GABA and amino acid decarboxylase (AADC) for bioamines. To date, an involvement of TNAP has been inferred in GABAergic neurotransmission only [33]. A potential link between a TNAP-dependent control of vitamin B6 metabolism and serotonin (5-HT) or catecholamine (CA) levels has not been established. We evaluated the impact of TNAP inhibition on 5-HT and CA synthesis in 1C11 5-HT and 1C11 NE cells. Cells having implemented a complete phenotype (day 4 for 1C11 5-HT and day 12 for 1C11 NE ) were exposed to tetramisol (2.5 mM) for up to 6 hours and cell extracts were collected at various time-points to measure the levels of bioamines and their precursors.
Figure 6. The expression of a functional TNAP is restricted to differentiated serotonergic and noradrenergic 1C11-derived cells. In (A), the presence of TNAP in 1 mg of lipid rafts prepared from 1C11 cells induced or not to differentiate was revealed by western blot analysis using an anti-TNAP specific antibody. In (B), the expression of TNAP mRNAs was evaluated by PCR analysis. TNAP (upper panel) or GAPDH (lower panel) specific fragments were obtained after amplification by PCR of cDNA synthesized from mRNA isolated from the 1C11 precursor and the differentiated 1C11 5-HT and 1C11 NE cells. In (C), phosphatase activity at the surface of 1C11 and 1C11 5-HT cells was measured by luminescence using the CSPD substrate and expressed as relative luminescent units (RLU). White bars correspond to total phosphatase activities, black bars to the activity measured in the presence of 1 mM orthovanadate and grey bars in the presence of 5 mM tetramisol. doi:10.1371/journal.pone.0006497.g006
As shown in Figure 8, tetramisol promoted a significant decrease in 5-HT (2-fold) or dopamine (DA) (1.8-fold), i.e. the AADC products, concomitant with an increase of their respective precursors 5-hydroxytryptophan (5-HTP) and dihydroxyphenylalanine (DOPA). This effect was observed as early as 1 h, peaked after 2 h, remained stable over 6 h (Fig. 8A and B) and vanished after an overnight treatment (data not shown).
These data provide direct evidence that TNAP activity may act on 5-HT and CA synthesis in 1C11 5-HT and in 1C11 NE cells and define TNAP as a player in neurotransmitter metabolism.
TNAP modulates the phosphorylation state of laminin and its binding to PrP C in both 1C11 5-HT and 1C11 NE cells
While TNAP activity on phospho-monoesters is well established, there are only a few reports suggesting that TNAP could act on phospho-proteins. TNAP might in fact exert an action opposite to that of ecto-kinases on extracellular matrix (ECM) substrates. Based on this assumption, we probed the impact of TNAP inactivation on the phosphorylation of laminin, selected as a read-out because it is both a target of ecto-kinases [34] and a PrP C partner [16]. As shown in Figure 9A, laminin was barely phosphorylated in 1C11, 1C11 5-HT and 1C11 NE control cells. As anticipated from the lack of TNAP expression in 1C11 precursor cells, the level of laminin phosphorylation was insensitive to tetramisol in these cells. In contrast, exposure of 1C11 5-HT and 1C11 NE bioaminergic neuronal cells to 2.5 mM tetramisol promoted a rise in laminin phosphorylation. A five-fold increase in the amount of phospho-laminin was quantified at 24 h, which persisted over 48 h in treated vs untreated 1C11 5-HT and 1C11 NE cells (Fig. 9B).
Immunoprecipitation experiments were further carried out to evaluate the possible impact of laminin phosphorylation on its interaction with PrP C . In agreement with the work of Graner et al. [16], PrP C was found to associate with laminin in 1C11 5-HT and 1C11 NE cells (Fig. 9C, left panel). Upon exposure of 1C11 5-HT and 1C11 NE cells to 2.5 mM tetramisol for 24 h, the interaction between laminin and PrP C was nearly lost (Fig. 9C, right panel). As a whole, these results identify laminin as a target of the PrP C -interacting partner TNAP in neuronal cells. We may also conclude that, by modulating the phosphorylation level of laminin, TNAP impacts the interaction between PrP C and laminin.
Discussion
In the present work, we identify the tissue non-specific alkaline phosphatase, TNAP, as a partner of PrP C in lipid microdomains of 1C11-derived bioaminergic neuronal cells. This was established through co-immunoprecipitation and mass spectrometry analyses. Three major observations relate to this partnership: (i) the PrP C -TNAP interaction is restricted to the 1C11 5-HT and 1C11 NE neuronal progenies, (ii) it occurs in lipid rafts where both protagonists, which are GPI-anchored, preferentially reside, and, (iii) inhibition of TNAP activity alters the phosphorylation state of the PrP C -binding protein laminin, suggesting that PrP and TNAP could functionally interact.
The 1C11 neuronal differentiation model used in the present study has already allowed us to gain information on PrP C function. Besides a ubiquitous intracellular signaling coupled to PrP C involved in redox equilibrium and cell homeostasis [22], our previous findings have uncovered some neuron-specific functions of PrP C . The first relates to the selective implementation of a PrP C -caveolin-Fyn platform governing several signaling pathways converging on ERK1/2 in the differentiated 1C11 5-HT and 1C11 NE neuronal cells [3,22]. A second neurospecific role of PrP C is to modulate serotonin receptor intracellular coupling and crosstalks [25]. Remarkably, both the proper instruction of signal transduction events by PrP C and its interference with serotonin receptor responses involve caveolin. These observations illustrate the functional implication of PrP C location in a subtype of lipid rafts, the caveolae, involved in cell signaling and capable of internalizing membrane receptors. However, the cellular and molecular basis accounting for the neurospecific function of PrP C still has to be characterized. It could rely on the recruitment of a selective subset of PrP C isoforms in lipid rafts. An alternative explanation would be the involvement of additional molecules whose expression and/or interaction with PrP C is restricted to mature neuronal cells. In this context, the present identification of TNAP as a neurospecific PrP C partner posits TNAP as one such candidate.
Besides, our results support the notion that the onset of a functional TNAP accompanies the serotonergic and noradrenergic differentiation of 1C11 cells. This is substantiated by (i) the expression of TNAP mRNAs in the differentiated progenies of the 1C11 cell line and the lack of transcripts in 1C11 precursor cells, (ii) the selective implementation of a tetramisol-sensitive TNAP activity during the kinetics of differentiation, coinciding with TNAP protein expression, and (iii) the participation of this ectophosphatase in neurotransmitter metabolism. This latter observation is in line with the well-established TNAP-mediated regulation of pyridoxal phosphate (PLP), a cofactor of the decarboxylases contributing to the last step of the synthesis of some neurotransmitters (serotonin, norepinephrine, GABA…). This TNAP-associated phosphomonoesterase activity may confer an important role to this protein in the nervous system, as discussed below.
Noteworthy, our experimental design based on lipid raft isolation shows that PrP C and TNAP interact within these specialized microdomains in which they segregate. The location of PrP C in lipid rafts, or its interaction with molecules in such microdomains, has been described using other approaches. For instance, Schmitt-Ulms et al. have investigated PrP C partners in total brain samples. Their analysis confirms that PrP C resides in a membrane environment containing proteins specific of lipid rafts and, in particular, a subset of molecules that, like PrP C , use a GPI anchor [35]. PrP C interacts with GM3 gangliosides, present in high amount in lymphocyte and neuronal lipid rafts [36,37], and with other glycoproteins or glycolipids [38][39][40], which co-localize or are enriched with PrP C in rafts of neuronal cells. Whether these partners participate in PrP C function is however unknown. Noticeably, different intracellular signaling molecules such as kinases and adaptors, recruited through lipid rafts, have been implicated in PrP C functional interactions [3,13,20,37,41,42]. Although the functional relevance of PrP C compartmentalization within rafts has been poorly addressed, it has recently been established that PrP C does recruit NCAM into lipid rafts where it instructs Fyn activation and subsequent neurite outgrowth and neuronal polarization [20]. Our present identification of TNAP as a novel raft-specific PrP C -interacting molecule adds further weight to the idea that the location of PrP C in rafts relates to its neuronal function. It is now well established that lipid rafts constitute dynamic sub-membrane structures allowing the concentration of specific lipids, glycolipids and glycoproteins serving particular functions [43]. In view of the increasing set of molecules described as interacting with PrP C in membrane microdomains, it is tempting to speculate that PrP C takes part in multi-molecular complexes whose onset is favored by the specific local lipid composition and which may sustain signal transduction events. Further investigation will be required to determine whether the functional interaction of TNAP with PrP C occurs directly or indirectly via the intermediate of other proteins related to neuronal differentiation programs.
An interaction of PrP C with TNAP may have different implications in neuronal cells in relation to the various roles envisioned for this ectoenzyme (see Fig. 10). TNAP is a homodimeric metalloenzyme that hydrolyses specific phospho-monoester substrates: phosphoethanolamine (PEA), inorganic pyrophosphate (PPi), an important player in bone mineralization, and pyridoxal phosphate (PLP), a cofactor of decarboxylases contributing to neurotransmitter synthesis. However, little is known about the role of TNAP under physiological conditions and it is only recently that this ecto-phosphatase has been recognized to be important in the nervous system [29,44]. A role of TNAP in neurotransmission is well illustrated by the observation that TNAP knock-out mice develop epilepsy due to GABA deficiency [33]. These defects recall the occurrence of seizures in patients with mutations in the ALPL gene, who suffer from severe hypophosphatasia. Moreover, recent data show that TNAP activity is regulated by sensory experience [29]. Since serotonin-containing fibers are present at high density in sensory regions of the brain, the authors suggest that TNAP could also regulate serotonin or dopamine synthesis and participate in cortical function and neuronal plasticity by regulating neurotransmitter synthesis. Our data indeed establish a link between TNAP activity and bioamine synthesis in 1C11 5-HT and 1C11 NE cells. Hence the interaction of PrP C with TNAP may confer to the prion protein a role in neurotransmitter homeostasis and neuronal transmission. In this regard, it is worth noting that TSE-associated neurodegeneration is accompanied by alterations in neuronal transmission, notably involving the serotonergic system [45].
Besides, TNAP could contribute to ectonucleotidase activity in the brain [30,46,47]. Indeed, TNAP has the capacity to dephosphorylate ATP to adenosine in a stepwise manner [48]. Nucleotide signaling exerts important neuronal functions in the development of the nervous system and in synaptic transmission in the adult brain [44]. Interestingly, a change in nucleotidase activity has been detected in PrP C -/- mice, which exhibit a slower rate of ADP hydrolysis possibly leading to a lower level of adenosine [49]. Adenosine has an anticonvulsant effect, and this has to be put together with the recent observation that such PrP C -deficient mice are more prone to develop seizures in response to convulsant compounds [50]. This susceptibility to seizures and epilepsy recalls the phenotype of TNAP knockout mice. Possibly, defects in TNAP activity could account for some of the changes in brain ectonucleotidase activities reported in hippocampal and cortical synaptosomes of mice lacking PrP C [49]. Further investigation into TNAP activity in a PrP C null context should help clarify this issue.
Figure 10. Diagram depicting possible implications of a PrP C -TNAP association in membrane microdomains of neuronal cells. PrP C and TNAP are GPI-anchored membrane proteins, which mainly reside in rafts. Both have been described to interact with ECM proteins [16,53,54] and to participate in cell signaling events. PrP C can instruct downstream signaling events, including ERK and CREB activation, by mobilizing a Cav/Fyn complex on neurites [3,[22][23][24]. In addition, it modulates the coupling of 5-HT receptors, with a specific impact according to the G protein-dependent pathway [25]. The TNAP ectophosphatase may have different substrates. (i) By promoting PLP hydrolysis it contributes to the regulation of neurotransmitter synthesis [33]. (ii) Its nucleotidase activity may have implications for purinergic signaling [30,[46][47][48]. (iii) TNAP may be active on phosphoproteins, notably of the cell surface [51,52]. The identification of phospho-laminin as a TNAP substrate uncovers a novel role of this ectoenzyme in the regulation of ECM molecules. Laminin and the laminin receptor are important components of the perineural net (PN) and are known partners of PrP C . The interplay between PrP C , laminin and TNAP within multiprotein complexes may have implications for neuronal functions (survival, homeostasis, plasticity). doi:10.1371/journal.pone.0006497.g010
Beyond its phosphomonoesterase and ectonucleotidase activities, TNAP may also exert a phosphatase activity on proteins [51]. This is notably supported by the demonstration by Becq et al. that TNAP inhibition enhances the phosphorylation and concomitant activation of the cystic fibrosis transmembrane conductance regulator (CFTR) [52]. Interestingly, this ectoenzyme could also have a role on extracellular matrix proteins, as supported by its collagen-binding domain [53,54]. In line with this, our data define phospho-laminin as a TNAP substrate in both 1C11 5-HT and 1C11 NE neuronal cells. To our knowledge, this is the first evidence that TNAP may contribute to regulating the phosphorylation state of an ECM protein in neuronal cells. By comparison, the partnership between PrP C and laminin has attracted much attention over the past few years. The interaction of PrP C with laminin has been shown to sustain neurite outgrowth [16], neuronal differentiation of PC12 cells [55] and memory consolidation [56]. Whether these processes are modulated according to the phosphorylation state of laminin remains to be investigated. Our data support the notion that the phosphorylation level of laminin influences its ability to interact with PrP C and define TNAP as a novel protagonist in the PrP C -laminin interplay. They add to the current notion that PrP C may be part of large multi-molecular complexes, depending on the cellular context and environment, and thereby contribute to diverse cellular functions [57]. Resolving the complexity of PrP C partners and functional interactions in neuronal cells should lead to a better understanding of the neurospecificity of PrP C function.
Materials and Methods
Cell culture and reagents
1C11 cells were grown in DMEM medium (Gibco) supplemented with 10% foetal calf serum (Seromed), and differentiation into serotonergic (1C11 5-HT ) or noradrenergic (1C11 NE ) neuronal cells was induced respectively by addition of dibutyryl cyclic AMP (dbcAMP) or addition of dbcAMP in the presence of 2% DMSO, as previously described [21]. Unless stated otherwise, 1C11 5-HT cells correspond to day 4 of serotonergic differentiation and 1C11 NE cells correspond to day 12 of noradrenergic differentiation. The BW5147 mouse myeloma cell line was grown in RPMI containing 7.5% foetal calf serum (Gibco). For inhibition of TNAP, tetramisol was added to the culture medium as indicated. Unless indicated otherwise, the reagents were purchased from Sigma.
Antibodies
Mouse monoclonal antibodies specific for the prion protein were from SPI-BIO. The SAF32 and SAF34 antibodies recognize an N-ter epitope (a.a. 79-92) while Bar221 is specific for the C-ter region of PrP C (a.a. 140-160). NCAM was revealed using an anti-pan NCAM mouse monoclonal antibody (BD Bioscience). We also used mouse monoclonal anti-caveolin and anti-flotillin antibodies (Transduction Laboratory) and a rabbit polyclonal antibody to Lck (Upstate). Preparation of the anti-TNAP antibody has been previously described [31]. Antibody MAB2549 against laminin-1 was from R&D Systems.
The secondary reagents used for immunoblot detection were either goat anti-mouse or goat anti-rabbit antibodies coupled to horseradish peroxidase (HRP), according to the primary antibody, or streptavidin-HRP to detect biotinylated proteins in immune complexes; all were purchased from Southern Biotechnology. The secondary antibodies (Molecular Probes) used in immunofluorescence were goat anti-mouse and goat anti-rabbit antibodies coupled to Alexa Fluor 488 (green) and Alexa Fluor 594 (red), respectively.
Preparation of lipid rafts (GSL) on sucrose gradient and cell surface biotinylation
Purification of the glycosphingolipid (GSL)-rich complexes was performed as described for lymphoblastoid cells [58]. 1C11 adherent cells were washed twice in PBS then scraped on ice in a small volume of PBS containing a cocktail of protease inhibitors (Complete TM from Roche) and 1 mM sodium orthovanadate (Na 3 VO 4 ) phosphatase inhibitor. Around 10 8 cells were disrupted and homogenized at 4°C in 3 ml of MBS (Mes-buffered saline: 25 mM Mes pH 6.5, 150 mM NaCl) containing 1% Triton X-100 (Tx100), phosphatase and protease inhibitors. The homogenate (HT) was clarified by 1 min centrifugation at 1000 rpm and brought to a volume of 4 ml at 40% sucrose in MBS-Tx100. The homogenate in 40% sucrose was transferred to a SW41 tube (Beckman), overlaid with 4.5 ml of a 30% sucrose solution in MBS (without Triton) and then with 2.7 ml of a third layer containing MBS without sucrose. The step gradient was centrifuged for 20 h at 180,000 g and 4°C in a SW41 rotor (Beckman). The lipid rafts containing GSL complexes appear as an opaque band 5 mm beneath the 0%-30% layer interface. They were harvested and diluted to a volume of 3 ml in MBS. GSL complexes were then pelleted by centrifugation for 1 h at 300,000 g in a TL100.3 rotor (Beckman). Such raft preparations were dissolved in 6% SDS-RIPA buffer (150 mM NaCl, 25 mM Tris-HCl pH 7.4, 5 mM EDTA, 0.5% Na-DOC and 0.5% NP40). The different soluble fractions at 40% (F40) and 30% (F30) sucrose as well as the Triton-insoluble high-speed pellet (HSP) were collected from the gradient and their protein concentration was determined using the BCA kit (Pierce).
For analysis of PrP C partners in lipid rafts, membrane proteins were biotinylated prior to GSL preparation. Cells in monolayer were washed twice with PBS Ca 2+ /Mg 2+ then incubated with EZ-Link TM sulfo-NHS-LC-biotin (Pierce) at a concentration of 0.5 mg/ml in PBS for 30 min at 4°C to limit endocytosis of membrane receptors. Adherent biotinylated cells were washed and GSL were isolated as above, except that they were diluted in NET buffer (150 mM NaCl, 50 mM Tris-HCl pH 7.4, 5 mM EDTA) containing 1% Tx-100 (Calbiochem) and heated 1 h at 37°C in order to improve solubilisation of proteins embedded in cholesterol and to allow further immunoprecipitation of PrP C complexes.
Immunoprecipitation and western blot analysis
Specific immunoprecipitations were performed using protein A or protein G sepharose beads covalently linked to anti-PrP C IgG2a (SAF34) or IgG1 (Bar221), respectively. This procedure avoids recovery of IgG in the complexes, which is of importance for MS analysis. We used the Seize TM -X protein A (or G) immunoprecipitation kit (Pierce) to prepare the immunoadsorbent according to the manufacturer's recommendations. Anti-PrP-coupled beads were then incubated overnight at 4°C with biotinylated rafts in lysis buffer containing Tx-100. Beads were washed 4 times in high-salt buffer (NET, 1% Tx100 in 0.5 M NaCl), then twice in 40 mM Hepes, before elution of the immune complexes in a reducing sample buffer containing SDS. For western blot analyses, 2.5 mg of raft proteins were immunoprecipitated, while for further purification of PrP C partners for mass spectrometric analysis a higher amount of rafts was used (equivalent to 20-30 mg). Denatured complexes were run on SDS-PAGE (Bio-Rad). After transfer of proteins from the gel onto a nitrocellulose membrane (Amersham), the membrane was blocked with 1% gelatin in PBS 0.1% Tween 20 (PBST). Detection of PrP C and associated proteins was performed using streptavidin-HRP (Southern Biotechnology) at 1/100,000 and the ECL chemiluminescent procedure (Amersham).
To probe an interaction of PrP C with laminin, 1C11 5-HT and 1C11 NE cells were incubated with antibodies against laminin-1 (10 mg/ml) in PBS containing 0.5% BSA for 1 h at 4°C. Cells were washed twice with PBS Ca 2+ /Mg 2+ , scraped and collected by centrifugation (10,000 g, 3 min, 4°C). Pellets were resuspended in NET lysis buffer containing 1% Tx-100. Lysates were transferred onto protein A-sepharose beads and the last steps of immunoprecipitation were carried out as described above. SAF32 antibodies were used to detect PrP.
Mass spectrometry
Peptides were generated for mass spectrometry analysis by in-gel trypsin digestion of proteins. Since the gel was not stained, pre-stained molecular weight standards were used as a reference to evaluate the 80 kDa position. Gel slices of 1 mm, excised from the 8% SDS-PAGE and including proteins of interest with an apparent molecular mass of 80 kDa, were reduced with DTT and alkylated by iodoacetamide treatment. The enzyme digestion was carried out overnight at 37°C with modified sequencing-grade trypsin (Promega, Madison, WI). Peptides were then extracted from the gel by treatment with a solvent solution containing 5% formic acid and 50% acetonitrile. The extracts were dried under vacuum and re-suspended in a minimum volume (10 µl) of a solution of 0.1% formic acid and 5% acetonitrile, and 4 µl of peptide extract were analysed.
Mass spectrometric analyses were performed by LC-ESI-MS/MS, where a nanoflow liquid chromatography system (LC-Packings nanoflow LC system, Dionex Inc.) is coupled to a nano-electrospray ionisation (ESI) source and a tandem mass spectrometry (MS/MS) analyser (Deca XP LCQ ion-trap mass spectrometer, Thermo Electron, Waltham, MA). The system allows peptide extracts to be desalted and concentrated on a capillary peptide trap (300 µm ID × 1 mm) prior to injection on a C18-resin column (15 cm × 75 µm ID PepMap column, LC-Packings, Netherlands). Peptides were eluted at a constant flow rate of 170 nl/min by applying a discontinuous acetonitrile gradient (5%-95%). The column exit is directly connected to the nano-electrospray ion source and the instrument is operated in data-dependent acquisition mode to automatically switch from MS to MS/MS analysis. MS/MS spectra were obtained by fragmenting peptide ions by collision-induced dissociation (CID) using a normalized collision energy of 30% in the ion trap.
The data files generated by LC-MS/MS were converted to Sequest generic format files and searched against the Mus musculus NCBI non-redundant database using the Bioworks 3.1 search engine (ThermoFinnigan). Search parameters for determination of peptide sequences included carbamidomethylation as a fixed modification and oxidized methionine as a variable modification.
Enzymatic activity of alkaline phosphatase
Phosphatase activity was determined at the surface of intact cells by performing the enzymatic test on cells cultured in 96-well microplates. Cell layers were washed twice with PBS Ca 2+ /Mg 2+ then incubated with the CSPD chemiluminescent substrate (Roche) at a concentration of 0.25 mM in 200 µl of a physiologic buffer (135 mM NaCl, 4 mM KCl, 1 mM CaCl 2 , 20 mM Hepes pH 7.5, 5 mM glucose and 1 mM MgCl 2 ) as described [59]. In order to discriminate between different phosphatase activities, the substrate was reacted with cells with or without 5 mM tetramisol, which inhibits TNAP, and with or without 1 mM Na 3 VO 4 , which exhibits a broader spectrum of phosphatase inhibition but is not active on TNAP. Each condition was tested in 6 replicates. Chemiluminescence amplification resulting from phosphohydrolysis of the CSPD substrate was monitored in a Perkin Elmer plate reader. The data are given as relative luminescent units (RLU/mg prot/h).
Membrane immunofluorescence
1C11 cells were cultured on glass cover slips at the bottom of 24-well microplates and induced to differentiate into 1C11 5-HT cells.
Membrane immunofluorescence was carried out on intact cells reacted for 1 h at room temperature with SAF32 anti-PrP (10 mg/ml) and anti-TNAP (1/100) antibodies diluted in PBS Ca 2+ /Mg 2+ , 2% FCS and 0.1% sodium azide to avoid internalization of membrane receptors. After 3 washes in PBS/azide, secondary fluorescent antibodies were added for 1 h. After washing, cells were fixed with 3.7% formaldehyde then mounted in Fluoromount (Southern Biotechnology). Examination was carried out on an Axiophot microscope (Zeiss) equipped with a UV lamp and appropriate emission filters for epifluorescence, and with a camera (Nikon) and video system (Packard Bell). In addition, sequential acquisition was performed on a scanning confocal microscope (Leica confocal SP5) at 405, 488 and 561 nm.
Determination of cellular content of bioamines and bioaminergic precursors
1C11 5-HT or 1C11 NE cells grown in DMEM supplemented with 10% 5-HT-depleted FCS were exposed to 2.5 mM tetramisol for up to 24 hours. This tetramisol concentration fully abrogates TNAP activity (Fig. 7) and lacks any cell toxicity (data not shown). Cells were washed twice with PBS, scraped and collected by centrifugation (10,000 g, 3 min, 4°C). The levels of serotonin (5-HT), dopamine (DA) and their respective precursors 5-hydroxytryptophan (5-HTP) and dihydroxyphenylalanine (DOPA) were measured by HPLC with a coulometric electrode array (ESA Coultronics, ESA Laboratories, Chelmsford, MA), as in [60]. Quantifications were made by reference to calibration curves obtained with internal standards.
Phosphorylation of laminin
The phosphorylation state of endogenous laminin was assessed by measuring [γ-32 P]-ATP incorporation (specific activity 18.5 GBq/mmol, Amersham Pharmacia Biotech). Briefly, 1C11, 1C11 5-HT or 1C11 NE cells were grown in roller bottles in serum-free conditions. [γ-32 P]-ATP (1.2 GBq per 10 6 cells) was added to the culture medium 1 hour prior to tetramisol (2.5 mM) addition. Spent medium was collected at various time points following tetramisol treatment, concentrated by ammonium sulfate at 80% saturation and dialyzed against 20 mM Tris-HCl pH 7.5, 0.5 M NaCl, 0.005% Brij-35 (TNB buffer). Laminin-1 was purified from the concentrated conditioned medium through affinity chromatography using a protein A-Sepharose column (Biorad) chemically conjugated with an anti-laminin-1 antibody. Following elution, samples were run on a 7% SDS-PAGE and incorporation of radiolabeled phosphate was quantified using a PhosphorImager (Molecular Dynamics).
"year": 2009,
"sha1": "c69ffa86e2cf4bb259bea6c96ea1478d67c86bd5",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0006497&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c69ffa86e2cf4bb259bea6c96ea1478d67c86bd5",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
Reputation based Proposed Scheme to Ensure Reliable Decision by Fusion Centre
Introduction
WSN is the most pre-eminent field of networks. Technology of wireless communication is developing expeditiously. This emerging trend has led to the improvement and development of the particular technology. The major problem faced during the development and improvement of this concept is the restriction of scanty spectrum resources. According to reports demonstrated by the FCC (Federal Communication Commission), meagreness of the spectrum is basically due to inefficacious use of spectrum resources 1 . To solve this problem of spectrum scarcity a new concept came into the limelight, which is popularly known as CR. The concept of CR is proposed to resolve the hurdle of deficiency of unlicensed bands (5 GHz and 2.5 GHz) 2 . There are two types of spectrum bands: licensed bands and unlicensed bands, so according to the reports of FCC, these unlicensed bands are overpopulated and cram-full and the licensed bands are not properly utilized. A survey conducted by FCC concludes that designated spectrum is not utilized smoothly by the licensed users, and thus it has permitted unlicensed users or cognitive users to fill in the gaps 3 . CR is one of the most germane concepts of WSN. It provides a very trustworthy and dedicated communication and also enhances the efficiency of the spectrum resources. There are two types of users: licensed (primary) users, who have the license to communicate in a predefined range of spectrum, and unlicensed (cognitive) users, who "substitute the slots" by exploiting unexploited (unused) spectrum bands. In CRN, the spectrum is approached by cognitive users in an overlay (opportunistic) or underlay manner, with the aim to curtail the interference to the licensed users 4 . These two categories of users augment each other with an aim to issue paramount exploitation of spectrum. The authoritative difference between WSN and CRN is that the nodes present in CRN switch their reception and transmission parameters with respect to the radio environment.
Spectrum sensing is one of the dominant functions performed by cognitive users. During spectrum sensing, each cognitive user senses a particular licensed band with the aim to detect the presence of the licensed user. Once it has detected the existence of the licensed user, it will immediately evacuate the band to circumvent interference with authorized users. There are multifarious techniques of spectrum sensing such as Non-Cooperative Spectrum Sensing, Cooperative Spectrum Sensing, Interference based Spectrum Sensing and MIMO based Spectrum Sensing. Out of all these four techniques, Cooperative Spectrum Sensing is considered to be robust. When spectrum sensing is performed by an individual entity, it usually suffers from shadowing and multipath fading effects; in order to mitigate these effects, cooperative spectrum sensing is considered to be a prominent option 3 . In the case of cooperative spectrum sensing, the FC or base station is the judge. The FC is the one that performs integration of all sensing reports and delivers the final judgement regarding the occupancy or vacation of licensed users. Just as a coin has two sides, head and tail, Cooperative Spectrum Sensing has both advantages and disadvantages. Where on one side it provides a reliable judgement, on the other side it invites time delay, auxiliary energy consumption and, most important, security threats 5 .
The most vigorous and open facet of CRN is that CRs are pregnable to multifarious malevolent attacks. Securing CRN is considered to be most commanding and onerous task. The reason behind this is that while dealing with these types of attacks, attacks of traditional WSN are also taken into consideration 2 . Conventional attacks include spoofing, denial of service, eavesdropping etc. Whereas threats specific to CRN incorporates SSDF attack, Primary User Emulation (PUE) attack, hardware attacks, CR software attack, Spectrum Sensing Data attacks, Cryptographic based attacks, Sybil attack, Newbie attack etc. Next section deals with the study of various attacks that are specific to CRN.
Security Attacks and Counter Measures
CRNs hold some exclusive features; as a result, they are pregnable to multifarious security threats in addition to the conventional attacks of WSN. This particular section deals with the study of various security threats in CRN and their countermeasures. The first part elaborates the security threats in CRN and the second part elaborates the countermeasures.
There are many types of security threats particular to cognitive network environment. Figure 1 shows the classification of attacks in CRN.
Attacks against privacy
In CRN, resources are shared to initiate the communication between two parties and to stay well informed about the environment 7 . Malevolent users would utilize this access to shared resources with the aim to steal nodes' information. Basically two types of attacks come under this category: the eavesdropping attack and impersonating attacks. The eavesdropping attack is the one in which the malevolent user quietly listens to the communication between two parties with the aim to steal some meaningful information and launch a particular attack. In the impersonating attack, the malevolent user tries to mimic an admissible cognitive user, with the aim to initiate communication with other admissible nodes.
Node Generated Attacks
Node-generated attacks are of utmost importance in the CR environment because distribution of information is the dominant factor in the accurate working of CRN 8 . As the name indicates, in this particular attack nodes are targeted by the malevolent users. In this attack the malevolent user may crash a cognitive node; as a result, not only is the node destroyed but the entire network is affected. Sometimes a node is abducted and a reverse-engineering approach is applied by the adversary, which may give an invitation to various security threats. In simple words, this node will then act as a device that invites various security threats.
Policy Attack
Since privacy and security policies are based on the principles of how the network works, policy attacks in CRN can be categorised as the excuse attack and the newbie-picking attack 5 . In the case of the excuse attack, if the network policy is magnanimous towards the recovery of wrecked nodes and, at the same time, does not require them to prove that they are preserving their quota, then the malevolent user will exploit this by continually professing to be wrecked and vandalized. Next comes the newbie-picking attack: if a newly created node has the desire to share resources, then according to the policy it will have to pay the charges in terms of information for a particular time span; only then will that particular node graduate from newbie status, while the exploiting node leeches the information without granting any information in return.
PUE Attack
Primary User Emulation Attack is most commonly known as the PUE attack. Basically this attack is a kind of masquerading attack in which the malevolent user tries to behave like an authenticated and legal entity by emanating a signal which is analogous to the signal emanated by the licensed user 9 .
Counter measures
As we have seen in the previous section, there are multifarious security threats in CRN in addition to the attacks inherited from conventional WSN, so it is of utmost importance to find countermeasures to these attacks. Many countermeasures have been introduced to mitigate the effect of these killer attacks. Countermeasures can be listed as: based on behaviour, data mining approaches, based on geolocation, and based on trust and reputation of the node. In this particular section we will discuss in brief the countermeasures based on behaviour, geolocation, and trust and reputation.
Based on Geolocation
As we know, the primary function of CR is to operate in parts of the radio spectrum where base stations are not being utilized properly. Accordingly, the first simulated and real scenarios were considered to be static in nature, in which base stations play the role of licensed user devices towards cognitive users 10 . In this case, when a malevolent user mimics the licensed user, geolocation is taken as an appropriate method. However, this approach works only under certain assumptions and is not well utilized in CRN. As in WSN, nodes and adversaries can switch their position at will; as a result, adversaries may not be detected by this scheme. The major disadvantage of node mobility from the viewpoint of security is that if we want to locate the position of the licensed user, we have to continuously perform spectrum sensing with the aim to trace new locations 5 . This continuous spectrum sensing leads to very high battery consumption of the nodes. Also, if the licensed user is located in a spatial location, its location is taken as irrelevant from the security point of view.
Based on Behaviour
As the name indicates, this countermeasure is used to analyse the behaviour of each individual node. Based on this analysis, adversaries are distinguished from the normal or legitimate users 14 . Algorithms that are used to analyse the behaviour of each individual node are self-organizing or genetic algorithms, whose main objective is to analyse the patterns of node behaviour. The two main factors that should be taken into consideration while discussing these algorithms are the battery of each individual node and the computational cost. It can be concluded that this countermeasure is a good option to mitigate the attack.
Reputation and Trust Based Approach
Basically, reputation is a characteristic of each and every node. The advantage of reputation is derived from two traits of WSN, i.e., adaptation and redundancy. Redundancy is the particular characteristic that is used to identify malevolent users. These reputations are basically used to indicate whether licensed and cognitive users are behaving as expected. Versatility is considered to be the big advantage of this particular process 5 . This countermeasure is explained in detail in the next section.
Spectrum Sensing Data Falsification Attack
From the previous section, we have seen that there are many killer security threats that aim to destroy the functionality of CRN. In this particular section, we will discuss in detail the SSDF attack, or Byzantine attack, and the reputation-based approach against it.
The Spectrum Sensing Data Falsification attack is commonly known as the SSDF attack or Byzantine failure attack. In this type of attack, the target is the fusion process, in which the FC is the leading entity. The main objective of adversaries in this type of attack is to corrupt the judgement of the FC. In simple words, malevolent users deliver false sensing reports to the FC with the aim to invalidate the decision delivered by the FC. Thus, due to the introduction of false reports at the FC, a false judgement is delivered by the FC about the existence or vacation of the licensed user 11 . This particular attack is considered to be the most dangerous attack in CRN. Here the adversary has two objectives: firstly, to produce a very serious and dangerous attack, and secondly, to protect itself from being detected.
Next, we will discuss modelling of SSDF attack. Basically, modelling of SSDF attack can be clustered in two categories, hard SSDF attack and soft SSDF attack 12 .
In the case of a hard SSDF attack, malevolent users vitiate their local binary decisions, whereas in the case of a soft SSDF attack they vitiate their received energy values. The soft SSDF attack is considered to be more dangerous and harmful than the hard SSDF attack, because adversaries prefer to falsify the energy values owing to their larger value space compared with binary decisions. Three common attack models are listed below; a small simulation sketch of these strategies follows the list.
• Always Yes SSDF attack - The adversary always delivers the same result. Local observations are increased by introducing a positive offset in each sensing slot. In simple words, the adversary always reports the existence of the primary signal, so the status of the channel is indicated as busy.
• Always No SSDF attack - Local observations are decreased by introducing a negative offset in each slot. In simple words, the adversary always reports that the licensed user is absent and the channel is free to use. As a result, interference is caused between the licensed and unlicensed users.
• Always Adverse SSDF attack - The adversary performs binary hypothesis testing, where H0 indicates the absence and H1 the presence of the licensed user. Observations are then increased when the hypothesis is H0 and decreased when it is H1.
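As an illustration only, the following Python sketch simulates how a malevolent user could falsify its reports under these three strategies. The energy values, offset and detection threshold are hypothetical placeholders and are not taken from this paper.

```python
import random

def falsify(energy, strategy, offset=5.0, threshold=10.0):
    """Return the energy value a malevolent user reports under a given SSDF strategy."""
    if strategy == "always_yes":      # inflate: the channel always appears busy
        return energy + offset
    if strategy == "always_no":       # deflate: the channel always appears free
        return max(energy - offset, 0.0)
    if strategy == "always_adverse":  # report the opposite of the honest observation
        return energy + offset if energy < threshold else max(energy - offset, 0.0)
    return energy                     # honest user: report the energy as sensed

# Hypothetical sensed energies for one sensing slot
sensed = [random.gauss(10.0, 2.0) for _ in range(5)]
for strategy in ("honest", "always_yes", "always_no", "always_adverse"):
    print(strategy, [round(falsify(e, strategy), 2) for e in sensed])
```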
As the SSDF attack is taken as the most dangerous attack in CRN, it is very important to alleviate its influence so that a reliable decision is given by the FC. Basically, there are various approaches that are used to alleviate the influence of the SSDF attack. The reputation-based approach, artificial intelligence approaches and data mining approaches are some of the approaches used to mitigate the effect of malevolent users in the fusion process 13 . In this paper, we will discuss only the reputation and trust based approach.
Reputation and Trust Based Approach
The reputation and trust based approach is considered to be a genuine and trustworthy approach against the SSDF attack. Each user is assigned a particular reputation value, on the grounds of which malevolent users are identified by the FC. The FC maintains the reputation database of each and every node. A threshold is taken as the comparison factor: nodes having a lower reputation than a pre-initialized threshold are tagged as malevolent users (adversaries).
The basic architecture of the reputation and trust based approach comprises three cardinal steps, which are executed sequentially in each sensing round at the fusion centre 14 . Figure 2 shows the basic architecture of the reputation and trust based approach.
Filtering
In this particular step, malevolent users are identified by comparing their reputation with a pre-initialized threshold. Nodes with low reputations are filtered out from the decision process, i.e., their reports are not incorporated into the decision process and only the reports of legitimate users are incorporated.
Data Fusion
In this particular step, fusion rules are executed on the sensing reports of the legitimate users (selected in the previous step). Basically, there are three types of fusion rules: the AND rule, the OR rule and the MAJORITY rule. Out of these, the majority rule is considered to be the most robust and reliable. After the execution of these rules, the final judgement is delivered about the existence or vacation of the licensed user.
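A minimal Python sketch of these three fusion rules is given below; the sample reports are hypothetical and serve only to show how each rule combines binary local decisions (1 = primary user present).

```python
def fuse(decisions, rule="majority"):
    """Combine binary local decisions (1 = primary user present) at the fusion centre."""
    if rule == "and":
        return int(all(decisions))
    if rule == "or":
        return int(any(decisions))
    if rule == "majority":
        return int(sum(decisions) > len(decisions) / 2)
    raise ValueError(f"unknown fusion rule: {rule}")

reports = [1, 0, 1, 1, 0]   # local decisions from legitimate users only
for rule in ("and", "or", "majority"):
    print(rule, "->", fuse(reports, rule))
```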
Update
As the name indicates, in this particular phase the reputation or trust value of each user is updated. The update is performed by comparing the final judgement with each individual decision: if the individual decision matches the final one, the node gets a positive score, otherwise it gets a negative score.
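The scoring logic described above can be summarised in a few lines of Python; the step size of ±1 and the sample values are assumptions made for illustration, not parameters specified in the paper.

```python
def update_reputations(reputations, local_decisions, final_decision, step=1):
    """Reward nodes that agreed with the fusion centre's final judgement, penalise the rest."""
    for node, decision in local_decisions.items():
        reputations[node] += step if decision == final_decision else -step
    return reputations

reputations = {"cu1": 5, "cu2": 5, "cu3": 5}
local = {"cu1": 1, "cu2": 0, "cu3": 1}
print(update_reputations(reputations, local, final_decision=1))
# {'cu1': 6, 'cu2': 4, 'cu3': 6}
```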
Proposed Scheme
This section discusses a scheme whose primary objective is to mitigate the SSDF (Byzantine) attack. As we know, the SSDF attack is the attack in which malevolent users manoeuvre the judgement of the FC, hence giving a wrong impression about the status of the primary user. The proposed scheme follows a decision-based approach to mitigate the effect of adversaries on the judgement delivered by the FC. Here, the FC does not possess any knowledge about the number of adversaries or the strategy of attack. In order to minimize the influence of adversaries, the implementation of clusters is considered to be a good idea 1,13 . Cluster formation takes place according to specified criteria. The benefit of cluster formation is that nodes with certain similar attributes end up in the same cluster. Each individual cluster gives a single vote, and the final judgement is given by the FC on the basis of majority voting. The intention behind cluster formation is that adversaries and normal users will fall into different clusters because of the variation in the attributes of adversaries and normal users taken into consideration.
Here, the concept of reputation is used. Each node has a reputation value that is inversely proportional to the distance between the node and the median of its cluster: the larger the distance, the lower the reputation, and vice versa. In the same way, the voting weight of each node present in a cluster is inversely proportional to the distance between the node and the median.
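One straightforward way to realise this inverse relation is sketched below in Python; the normalisation step and the small epsilon term are our own assumptions, since the paper does not fix a particular formula.

```python
import numpy as np

def inverse_distance_weights(reports, eps=1e-6):
    """Weight (and reputation) of each node falls as its report moves away from the cluster median."""
    reports = np.asarray(reports, dtype=float)
    median = np.median(reports)
    distances = np.abs(reports - median)
    weights = 1.0 / (distances + eps)   # larger distance -> lower weight
    return weights / weights.sum()      # normalised voting weights

cluster_reports = [9.8, 10.1, 10.3, 17.5]   # the outlying report receives a small weight
print(np.round(inverse_distance_weights(cluster_reports), 3))
```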
The proposed scheme basically consists of six phases that are performed in each sensing round. First is the report collection phase, second is clustering phase, third is voting phase which further depends on intra cluster voting and inter-cluster voting, fourth is encryption phase, fifth is Decryption and Final judgement phase and sixth phase is Reputation refinement phase. Figure 3 shows the flowchart of proposed scheme. Details of these phases are given below:-
Report Collection Phase
This is the phase during which the FC gathers sensing reports from all the CRUs. This phase acts as the ground for the subsequent phases, as all other phases start only after it completes.
Clustering Phase
As the name indicates, during this phase cluster formation takes place. Clustering is considered a tremendous method for the identification of adversaries. Two very popular techniques that are used for clustering are K-medoid and K-means 13 . In the case of K-medoid, cluster formation takes place using a medoid, the most representative node of the group; in simple words, the medoid is the node in a cluster that possesses minimum dissimilarity with the remaining nodes. In K-means, a cluster is formed around a centroid, and nodes are clustered so as to reduce the sum of squared Euclidean distances. The proposed scheme executes clustering using both the K-means and K-medoid techniques. The attributes considered for clustering are the distance between the nodes and the sensing history of the nodes.
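For illustration, the sketch below clusters a handful of cognitive users with scikit-learn's K-means on two assumed attributes (position and a sensing-history score); the feature choice and values are hypothetical, and a K-medoid implementation could be substituted at the same point.

```python
import numpy as np
from sklearn.cluster import KMeans

# Each row: [x position, y position, fraction of past reports matching the FC decision]
features = np.array([
    [1.0, 1.2, 0.95],
    [1.1, 0.9, 0.92],
    [0.9, 1.0, 0.97],
    [5.0, 5.2, 0.30],   # far away and historically unreliable
    [5.1, 4.9, 0.25],
])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
print(labels)   # nodes with similar attributes end up in the same cluster
```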
Voting Phase
This phase is further split into intra-cluster voting and inter-cluster voting. In intra-cluster voting, each cluster casts a vote and delivers its decision to the FC. The response of each node is weighted by an influence factor, which depends on two quantities: the distance between the node and the cluster median, and the energy of the node. The cluster decision is then evaluated from the influence factors and the sensing reports of all nodes in that cluster. In inter-cluster voting, the validity of the clusters is scrutinized: if the average reputation of all CUs in a cluster is below a threshold, the cluster is considered invalid and its members are labelled as adversaries.
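A rough sketch of both voting stages is given below; the exact form of the influence factor and the reputation threshold are assumptions chosen for illustration, since only their dependencies are specified.

```python
# Hedged sketch of intra-cluster weighted voting and the inter-cluster validity check.
import numpy as np

def influence_factor(distance, energy):
    # Assumed form: closer to the median and higher residual energy -> more influence.
    return energy / (1.0 + distance)

def intra_cluster_vote(reports, distances, energies):
    w = np.array([influence_factor(d, e) for d, e in zip(distances, energies)])
    weighted = np.dot(w, reports) / w.sum()
    return 1 if weighted >= 0.5 else 0          # cluster-level decision

def valid_cluster(reputations, threshold=0.0):
    # Clusters whose mean reputation falls below the threshold are treated as adversarial.
    return np.mean(reputations) >= threshold

reports   = np.array([1, 1, 0])     # hard decisions of the nodes in one cluster (assumed)
distances = np.array([0.2, 0.4, 3.0])
energies  = np.array([0.9, 0.8, 0.7])
print(intra_cluster_vote(reports, distances, energies))   # 1 (the outlier is down-weighted)
print(valid_cluster([2, 3, -1]))                          # True
```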
Encryption
After the valid clusters have been selected, the next step is to encrypt their sensing reports, with the aim of preventing attacks such as eavesdropping.
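Since no particular cipher is specified, the sketch below uses symmetric encryption from the third-party Python cryptography package purely to illustrate protecting a cluster report on its way to the FC.

```python
# Hedged illustration only: the scheme does not name an encryption algorithm.
import json
from cryptography.fernet import Fernet

key = Fernet.generate_key()       # in practice, a key shared between the CUs and the FC
cipher = Fernet(key)

cluster_report = {"cluster_id": 0, "decision": 1, "mean_reputation": 2.3}  # assumed fields
token = cipher.encrypt(json.dumps(cluster_report).encode())

# The FC later decrypts the token before applying the fusion rule.
recovered = json.loads(cipher.decrypt(token).decode())
print(recovered)
```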
Decryption and Final Judgement
After receiving the encrypted reports, the FC decrypts them, applies the fusion rules to the recovered reports and finally delivers the final judgement regarding the presence or absence of the primary user.
Reputation Refinement Phase
This phase is also known as the update phase. In it, the reputation value of each individual node is refined, and new clusters are formed in the next sensing round on the basis of these refined values. After the final judgement has been declared, it is broadcast to all CUs. The final judgement is then compared with the cluster decisions and the individual node decisions. This comparison is performed in two stages: first, the FC's final judgement is compared with each cluster decision, and a cluster gains a positive score if they match and a negative score otherwise; second, each cluster decision is compared with the decision of each individual node in that cluster, and a node gains a positive score if they match and a negative score otherwise.
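A sketch of the two-stage comparison is shown below; as before, the +1/-1 score increments are assumptions for illustration.

```python
# Hedged sketch of the two-stage reputation refinement.
def refine_reputations(final_judgement, cluster_decisions, node_decisions,
                       cluster_scores, node_reputations):
    # Stage 1: compare the FC's final judgement with each cluster decision.
    for c, decision in cluster_decisions.items():
        cluster_scores[c] += 1 if decision == final_judgement else -1
    # Stage 2: compare each cluster decision with its members' decisions.
    for c, members in node_decisions.items():
        for node, decision in members.items():
            node_reputations[node] += 1 if decision == cluster_decisions[c] else -1
    return cluster_scores, node_reputations

clusters = {"A": 1, "B": 0}
nodes = {"A": {"CU1": 1, "CU2": 1}, "B": {"CU3": 0, "CU4": 1}}
print(refine_reputations(1, clusters, nodes, {"A": 0, "B": 0},
                         {"CU1": 0, "CU2": 0, "CU3": 0, "CU4": 0}))
```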
Conclusion
CR is one of the most relevant concepts in WSNs. It provides dependable, dedicated communication and enhances the efficiency of spectrum resource use. Spectrum sensing is one of the dominant functions performed by cognitive users, and cooperative spectrum sensing is considered a robust and trustworthy sensing technique. Because CRNs hold some exclusive features, they are vulnerable to a variety of security threats in addition to the conventional attacks on WSNs. The SSDF attack is considered a hazardous attack whose foremost objective is to manipulate the judgement of the FC.
Among all the countermeasures, the reputation- and trust-based approach is believed to be robust in providing a reliable judgement. The reputation values of individual nodes are updated and then used as the basis for cluster formation in the next sensing round. The performance of the proposed scheme can be analysed in terms of the number of CUs and the probabilities of false alarm and detection. | 2019-01-23T22:21:04.456Z | 2016-11-30T00:00:00.000 | {
"year": 2016,
"sha1": "3a86fe2fb7cff9a1725a760178fe8a3b5ec4ab15",
"oa_license": null,
"oa_url": "https://doi.org/10.17485/ijst/2016/v9i44/105072",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "778a547e33fe2d300b11a79acadf598d7c218b41",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
253315301 | pes2o/s2orc | v3-fos-license | Therapy – the Problematic Word in Music Therapy with Adolescents in the Child Welfare Services
The word ‘therapy’ is known to be a challenging one in music therapy. This discourse-oriented study asks: how do a group of adolescents and their music therapists in the child welfare services relate to the word ‘therapy,’ and how can music therapy as a profession get round problems connected to the use of it? The data consists of case study material from collaborative interviews of six Norwegian adolescents in out-of-home care and their music therapists in the first author’s ongoing PhD study. Systematic text condensation is used to collect relevant meaning-bearing citations for further discussion and in-depth reflection. The findings show that the word ‘therapy’ creates profoundly negative associations among the informants. In fact, it creates so many difficulties that we actually question if ‘music therapy’ is a fitting label at all. However, because it seems unlikely and even unwise to develop new labels of the well-established ‘music therapy,’ we suggest starting the process of redefining it within the field of child welfare services by engaging in an active and systematic dialogue among all involved.
Introduction
Music therapists working in the child welfare services find that adolescents express scepticism towards the notion of therapy. They want 'normality' and offering them 'therapy' becomes another way of stigmatizing them, the adolescents say (Fuhr, 2022). Their music therapists, perhaps as a consequence, hesitate to call themselves 'therapists' and to call what they do 'therapy.' This, in turn, creates unclear professional identities for the music therapists. Because they know that the adolescents can be reluctant to participate voluntarily in services labelled as 'therapy,' the music therapists use terms like 'music workshops' and 'band' rather than 'music therapy' (see Stensaeth, Krüger & Fuglestad, 2016). This creates a discourse of balances, in which the sessions are described as therapy in some settings, while the same term is avoided when talking with adolescents. Yet, their title is 'music therapist,' and for reasons related to the development of the profession and maintaining proper working conditions, it can be important that the practitioners inform their colleagues and leaders that their service is a form of therapy.
Interview studies on adolescents' experiences of living in out-of-home care in the Norwegian child welfare services show that, even though the adolescents want to bond with adults, they can find it difficult to trust adult caregivers. They feel that the adults do not 'really' care about them, that they care because it is their job to do so (Barneombudet, 2020a;Paulsen et al., 2017). Such experiences arise from the particularity of their living situation as they are asked to bond with uninvited caregivers. They feel surrounded by adults who take care of them, while still being strangers. At the same time, they have few long-lasting relationships with adults who have been with them over time.
Along with this reluctance towards bonding with adult helpers, the adolescents express scepticism towards 'therapy' as a concept. The context of therapy, of meeting a therapist in an office, is often experienced by adolescents as feeling forced or non-natural. In a report by Barneombudet (a public organization advocating children's rights) adolescents explain that the therapist's office is not a place in which they feel comfortable with opening up about their life (Barneombudet, 2020b). Also, Schechtman and colleagues (2018) find that adolescents experience a self-stigma: a fear of diminishing one's sense of self-worth by seeking help from others, while Crenshaw and Cranelli (2020) note that adolescents in residential care can be more accepting to various forms of treatment if it is not openly called 'therapy,' and if they are able to meet the therapist in more everyday settings.
All of this can be understood in relation to a general longing for normality in the adolescents' lives. Normality is seen as an important ideal for adolescents--they do not want to be victimized or thought to be different (Pokempner et al., 2015). Therefore, the Norwegian child welfare services states that it is a goal that adolescents living in out-of-home care are able to experience a 'normal' childhood (Backe-Hansen et al., 2017; Haug, 2018; Langsrud et al., 2019). Storø (2016) finds that adolescents in out-of-home care view the 'normal' life as free from problems, and that they often relate the idea of normality to having a stable and secure family life. Backe-Hansen and colleagues (2017) note the importance of routines in promoting normality, in that having a predictable schedule consisting of both school and spare time activities can contribute to the adolescents' feeling of normality.
The wish for normality ties into the ways in which the adolescents perceive their helpers as well, as they talk about the importance of not being viewed as a 'case'--a problem that needs to be fixed. This point is also true in therapeutic settings, as adolescents do not like to be objectified or talked to in 'professional terms.' The words that are used by professionals when discussing the adolescents and their situation play an important role as well. In a paper discussing the use of the term 'at risk' in the Norwegian child welfare services, Follesø (2015) describes how adolescents object to being labelled as 'at risk,' and reflects on how this specific term distinguishes between the majority of the 'normal' and the minority of those who are different. This language problem is well known in many research fields, including music therapy. Studies exploring literature on homelessness and family violence question the use of the label 'at risk,' showing how such descriptions emphasize the assumed individual deficits of the child (see also Fairchild & Bibb, 2016; te Riele, 2006). Bowman and Lim (2021) point to a range of ageist, often-used terms in studies on older people, and Rolvsjord (2010) argues against a symptom-oriented discourse in mental health care. What these studies have in common is that they point to disadvantages with discourses surrounding therapy that emphasize the weaknesses of the 'client.'
Mapping the Landscape of Music Therapy in the Norwegian Child Welfare Services
The adolescents who attend music therapy in the context of the Norwegian child welfare services are usually under out-of-home care, meaning that they live in residential care or foster homes, and not with their birthparents. The term 'adolescent' is used broadly in both practice and literature, covering youths from ages 10 to 23. Music therapists working in the services offer both individual and group sessions, and they usually visit the adolescent at their place of living, if that is what the adolescent wishes. The adolescents can be referred to music therapy in different ways. For instance, if they show an interest in music, an adult at the institution might ask them if they want to participate in music therapy or 'music workshops.' The music therapists might also keep track of new adolescents who are arriving at the institutions and visit them. There are no requirements for participating, though the music therapist might prioritize adolescents who are not doing many other activities or are struggling with other forms of therapy. If the adolescents agree to participate, they and the music therapists decide together what the activities in their sessions should be. The usual activities include learning to play instruments, listening to music, playing in bands, songwriting and performing. The adolescents are usually offered therapy from the municipal mental health services, with the adult caregivers in the child welfare services mostly being social workers (and not therapists). Therefore, broadly speaking, the adolescents meet social workers in the welfare services, and therapists in the mental health services. At the time of writing, only a few music therapists work in the Norwegian child welfare services. Their position is vaguely defined, as they are not a part of the mental health services. Yet, they are still therapists. This creates an unclear position with both advantages and disadvantages for the music therapists in their approach towards the adolescents. A disadvantage is that music therapy is not implemented and established within the same systems as other forms of therapy, making it a service that is limited to a few major cities and vulnerable to economic cuts and downsizing. An advantage is that the music therapists are able to meet the adolescents outside of traditional therapeutic settings. This creates, for example, potential for community-oriented practices where the adolescents are encouraged to take part in musical activities such as concerts and public performances.
Despite the challenges connected with stable national music therapy practices, research and literature on music therapy in the child welfare services is a continuously developing field in Norway (see Krüger, 2011, 2018; Krüger et al., 2018; Krüger & Stige, 2014; Nebelung & Stensaeth, 2018; Stensaeth et al., 2016). Stensaeth (2018) and Wilhelmsen and Fuhr (2018), to mention a few examples, emphasize the importance of open and trusting relationships between the adolescents and the music therapists in participatory-oriented settings. Their perspectives, in line with the community-oriented approach of Krüger (2011, 2020), are resource-oriented--both in the sense that they focus on the adolescents' strengths and feelings of mastery, and that they emphasize the ways in which music can be a resource for building relationships with other people and communities. Their studies show that the adolescents appreciate this type of joyful bonding with others and their surroundings.
While the above-cited literature occasionally touches upon the challenges connected to the problems of using the word 'therapy,' the texts do not explore the differences in how adolescents and music therapists describe and understand the concept. We might ask if they have a common understanding of it. In a paper reviewing the literature on music therapy with adolescents, McFerran (2020) recommends that both (music therapy) practitioners and researchers avoid assuming that they know what adolescents experience, and that a common understanding should continuously be sought after. Other studies highlight the potential differences between how adolescents and music therapists understand certain aspects of the practice--aspects like chaos (Oosthuizen, 2018; Oosthuizen & McFerran, 2021), collaborative processes (Bolger, 2013; Bolger et al., 2018), and acts of aggression (dos Santos, 2018, 2020). By exploring the meaning behind the adolescents' actions and behaviours, the researchers in these studies show how new understandings of the music therapy practices might emerge. In our paper, we go further as we have asked the adolescents and their music therapists to reflect upon the challenging 'therapy' word directly.
Method
The empirical material in this paper stems from the first author's doctoral study on music therapy in the Norwegian child welfare services. 1 One of the research goals in the study is to explore the similarities and differences between how adolescents and music therapists describe their relationship. Six adolescents and three music therapists who had worked together for at least one year were interviewed in pairs. Two other music therapists (not the authors of this paper) performed the interviews. The dyads were asked questions about their perception of each other and their work together in addition to their thoughts on the term 'therapy.' It is this data we have extracted and used as the source of our study in the present paper.
As music therapists and researchers, we engage with the empirical material with preunderstandings shaped by practical and theoretical experience. Fuhr has worked with adolescents in child welfare and has himself experienced challenges regarding the use of the term 'therapy' in practice. These challenges were part of the inspiration and background for his doctoral study. Stensaeth has worked as a music therapist in a special education school for over 20 years. Many of the children and adolescents there are resourceful young people, and the music therapy in the school aims to, in different ways, support and empower their skills and strengths. It is also a way to deal with life challenges through aesthetic means, when that is needed (Stensaeth et al., 2012). As a researcher, Stensaeth has published articles and edited a book on music therapy with adolescents in child welfare. As music therapist insiders, we have encouraged each other as co-authors throughout the article work to be conscious of a tendency to sympathize with the perspectives of the music therapists more than those of the adolescents. As researchers in the field of music therapy we have also been conscious of the danger of wanting to adjust our findings to our different schools of thought.
The writing of the article is a crucial step in a research process (van Manen, 2014). Also, the discourse-orientation of this article requires that language is an object of study in itself (Potter & Wetherell, 1987). As authors we have therefore constantly tried to remind ourselves that a research article of this kind is not a report on the findings but our reconstruction and systematization of the informants' experiences, as we hear them. Our representations will not describe each informant's experience in an accurate way, just as our understanding is probably not shared by all. Rather the article summarizes, as we have anticipated before, one of many potential descriptions of what could be in play when exploring the present focus of attention. As such we hope that we contribute with interesting aspects to ongoing dialogues on power and language in music therapy.
The data sample is rich, and we have used text condensation to extract relevant material. Text condensation is a method used by authors to create artificial citations in an attempt to summarize important messages stated by the interviewees. We have used Malterud's systematic text condensation to find the citations of interest for our paper. This method, which consists of four steps, offers a pragmatic process of "intersubjectivity, reflexivity, and feasibility, while maintaining a responsible level of methodological rigour," to borrow Malterud's own words (2012, p. 1). Here, reflexivity refers for example to considering the assumptions that surround the development of knowledge that shape our findings.
In the first step, we went over all of the empirical material to get an overall impression of it. Then, in step two, we identified meaning-bearing units. In step three, we abstracted the content of the individual meaning units, and in step four, we summarized the significance of it all into condensed citations (CC). Then, we collected the relevant citations for further discussion and reflection (i.e., Malterud, 2012).
Below, we illustrate the process of condensation by presenting two extracts from two different interviews: 2
Interview 1
Interviewer: Can you talk a bit more about, like, in what ways you notice that this [music therapy] has helped you?
Adolescent: So, I find that it has been easier to talk about stuff. It's easier to express how I feel, and I feel like weight has been lifted off my shoulders in every way because, there I have that person, who can help me turn things to something good, and I have someone that I can always share it with. You know. And, I think that it has helped me a lot, as a person. Makes me feel better.
Interview 2
Interviewer: What was it that made you want to return [to music therapy] even when you, like, even though it was a bad or good day, you showed up either way?
Adolescent: Because I know that, I can talk to [the name of the music therapist] about stuff, and I can, use episodes, thoughts and feelings in the music. Eh, and no matter how I felt when I arrived, I've always felt better when leaving.
We then condensed the two extracts into the following citation:
CC 9:
Interviewer: Can you talk a bit more about, like, in what ways you notice that this, whatever we call it, has helped you?
Adolescent: I find that it has been easier to talk about stuff. It's easier to express how I feel, and I feel like weight has been lifted off my shoulders in every way because, there I have that person, who can help me turn things to something good, and I have someone that I can always share it with. You know. And, I think that it has helped me a lot, as a person. Makes me feel better. Because I know that, I can talk to [the name of the music therapist] about stuff, and I can, use episodes, thoughts and feelings in the music. Eh, and no matter how I felt when I arrived, I've always felt better when leaving.
As shown, we combined the experiences of two adolescents into one citation. We also modified the interviewer's question slightly, by adding 'whatever we call it'. This is done to illustrate the ways in which the interviewers, in the empirical material, avoid using the term 'music therapy', as they are aware that the adolescents and music therapists may use other words to describe their sessions. We keep the square brackets around [the name of the music therapist] to illustrate that the adolescent does not use the term 'music therapist' in this extract.
The condensed citations summarise our constructions of the data that respond to this paper's research question. Some of the extracts represent multiple citations; other extracts represent fewer. We also sometimes refer to a few singular citations, which are not included in the condensed citations. These are referred to by the use of double quotation marks.
We are aware that the condensation of the citations is our--the authors'--construction of the data, a step that involves power dimensions. As authors we have maintained an exclusive privilege not only to condense what the interviewees say, but also to report what they have meant. This does not necessarily mean that every informant would agree with our representation of what they said in the interview. Neither does it mean that our text condensations are truer or more representative than what other authors might have found if they carried through the same process with the same data. Anyan (2013, p. 1) reminds us, "power asymmetry seems to be an exasperating circumstance in the interview methodology" and therefore an equal relationship in the prospects of the qualitative research interview seems unrealistic. To control for the power imbalances and to practice reflexivity, which is an ideal in qualitative research, he suggests that we as authors systematically study the interview process to uncover the manoeuvrings of power. In our case, which was during the analysis and the condensation steps, we tried to look again and again at the interview situation from several perspectives to reflect on the dynamisms within the circumstances of the interview. This was done to heighten our own awareness of how knowledge was being created, and how our own thoughts and ideas were embedded in the process.
Findings
We find that the data surrounding the term 'therapy' is intertwined with discussions of the word (and title) 'therapist,' as well as the ways in which music and musical activities can be 'therapeutic.' For instance, if the interviewer asks the adolescent about their experiences with 'therapy,' the adolescent could reply by expressing a negative impression of the 'therapists.' We therefore find it meaningful to structure the condensed extracts into three themes: conceptualizing 'therapy,' conceptualizing 'therapist,' and conceptualizing the 'therapeutic.'
Conceptualizing 'Therapy'
When talking about their first meetings with the adolescents, the music therapists say that it varied whether or not they described the service as 'therapy.' The music therapists emphasize that they generally do not want to deceive or trick the adolescent by labelling their service as something else. Yet, they know that some of the adolescents can be sceptical towards therapy, as shown in the following citation:
CC 1:
Adolescent: … because when I heard about it, I thought like, music therapy you know, it's a bit, are you like pressing piano keys, and yeah *ironic tone of voice* what do you feel now?
The majority of the adolescents talk about having earlier experiences with therapy through the municipal health services. These experiences, they say, have shaped how they think of therapy as a concept. They also say that their sessions with their music therapists do not necessarily fit with their understanding of what therapy is (or is supposed to be):
CC 2:
Adolescent: I don't know anyone in the child welfare services who hasn't gone to some kind of therapy. Not everyone thinks of it as a positive experience. You don't go to a psychologist to have a good time, you know. So, I don't know, I don't consider this [music therapy] as therapy.
When elaborating upon the differences between 'therapy' and 'music therapy,' the adolescents describe a feeling of being forced to talk about their problems or to focus on negativity in the former. Music therapy, in comparison, is experienced as more 'normal' and fun. In one of the interviews, the adolescent and music therapist discuss what word they could use to describe what they do together, as they both agree that 'therapy' is an ill-fitting term. The music therapist suggests that perhaps 'music' is more suitable, as music is the 'cornerstone' of their work and their relationship:
CC 4:
Music therapist: … it might be that in a way, music is like kind of a cornerstone, it was what we both started with in a way and we are building on that. So, we always have music and you get to know each other better like, with it as a cornerstone you know.
Adolescent: Yeah.
Music therapist: So, it kind of started with music and then we experienced a lot together, which in a way makes it so that we know each other better and, kind of like brings us closer you know.
Conceptualizing 'Therapist'
In the previous section, we saw that the adolescents express scepticism towards therapy as a concept. Similarly, the adolescents speak negatively about therapists as a group, explaining that, to them, their music therapists are not 'therapists':
CC 5:
Adolescent: (Addressing the music therapist) I don't think of you as a therapist you know, that would be weird.
Interviewer: You don't think of the [name of the music therapist] as a music therapist?
Adolescent: No.
Interviewer: What would make it so that you did? What would be different?
Adolescent: I mean like, if I had thought of her as a music therapist for me, then it would be in a negative way. Because then it wouldn't be natural, but a bit uptight.
Interviewer: What do you think would be different? If she was a music therapist?
Adolescent: I don't know, I mean she is.
Interviewer: She is.
Adolescent: Yeah, it is just what I associate with, therapists. Because, usually with therapists, often they. Like. They know so much about you. But you know so little about them. You don't even get to know if they have a cat!
Although all of the adult informants are trained music therapists, they too object to the idea that they are 'therapists' when working with the adolescents. They also distance themselves from other types of therapists:
CC 6:
Interviewer: How about you, do you think of yourself as a therapist here, in the sessions?
Music therapist: … For me, this is not music therapy with a music therapist, I mean, to me, this is music with [addressing herself in third person], right? And I know that many adolescents, like, we've often talked about the psychologists [name of the adolescent] has been seeing. I think she has used me as a way to like vent about those psychologists and those, eh dumb adults she has to meet, right?
*the adolescent and the interviewer laugh*
One of the reasons that the music therapists are not thought of as therapists by the adolescents is that therapists are associated with being overly professional, to the extent that they seem 'uptight' (see CC5). The music therapists, in contrast, are described as more 'human' than other therapists are. In addition, the adolescents describe the music therapists' strengths and qualities by referring to the music therapists' personalities, rather than their profession or training.
CC 7:
Music therapist: The way I see it, this isn't a, therapist-client relationship, you know.
Music therapist:
We're musicians who write songs together and talk about life and, and my like, intention is that, that's how I try to be as a music therapist.
Adolescent: Yeah. You don't act like this is a job.
Music therapist: So, I act unprofessionally, is that what you are saying? *laughter*
Adolescent: Not unprofessionally. You're a human being!
Despite questioning professionalism and using terms that hint towards a more egalitarian understanding of the adolescent-therapist relationship, we also find situations in which the adolescents and music therapists describe a hierarchy between them. For instance, the music therapists sometimes talk from the position of a leader or adult whose job it is to take care of the adolescents. In addition, the informants describe situations in which the hierarchical structure of the child welfare services imposes certain roles upon them, as seen in discussions on billing:
CC 8:
Music therapist: Because it's actually a bit weird now, eh, with the billing and such, that suddenly one becomes aware of the fact that there is a system, and suddenly we have this role in which, which is, it's inconvenient, and it shouldn't be like that.
Conceptualizing the 'Therapeutic'
Despite objecting towards the use of the word 'therapy,' the adolescents still describe the music therapy sessions to be helpful, in the sense that they find that the sessions strengthen and comfort them:
CC 9:
Interviewer: Can you talk a bit more about, like, in what ways you notice that this, whatever we call it, has helped you?
Adolescent: I find that it has been easier to talk about stuff. It's easier to express how I feel, and I feel like weight has been lifted off my shoulders in every way because, there I have that person, who can help me turn things to something good, and I have someone that I can always share it with. You know. And, I think that it has helped me a lot, as a person. Makes me feel better. Because I know that, I can talk to the music therapist about stuff, and I can, use episodes, thoughts and feelings in the music. Eh, and no matter how I felt when I arrived, I've always felt better when leaving.
The adolescents describe music as a helpful tool for emotional regulation outside of music therapy, while also highlighting the important value of listening to or playing music together with the music therapist:
CC 10:
Adolescent: When I'm with [name of the music therapist] we can, discuss the music, and we can make something of our own, but when I'm alone then it's just, I can listen to music, but, it's not the same.
Interviewer: No? Can you like, pinpoint, what's missing? *laughs* If you get what I mean?
Adolescent: I guess it's more personal.
Interviewer: When you're together?
Adolescent: Yeah.
Music therapist: Might be good to talk and… like be mirrored, on how things are going, maybe?
That there's someone there who can help with expressing things, maybe?
The extract above shows how the adolescents and music therapists occasionally refer to (what we consider) a more traditional understanding of the therapeutic relationship, here described as an asymmetric relationship in which the adolescent is the client and the music therapist is a helper.
Discussion
In this discussion, we will first explore and summarize how the adolescents and music therapists of the study relate to the word 'therapy,' before discussing how the music therapy profession can come around problems connected to the use of the term.
'Therapy' -the Problematic Use of the Word
The findings confirm what we know from the existing literature: adolescents in out-of-home care are sceptical of 'therapy,' 'therapists' and any services that are perceived as different from the norm. Their music therapists therefore often avoid describing themselves as therapists or their services as therapy. The condensed citations include many interesting, nuanced in-depth descriptions that reveal how complex the problems with the therapy-word are: All of the informants present and share many positive perceptions of music (see CC4/CC10) and in the same breath they talk of 'music therapy' as something joyful and fun. The adolescents also express a warm trust in their 'music therapists,' and the music therapists talk positively about the adolescents, maintaining that what they do together is meaningful collaborative work. We could say that music therapy, or whatever they happen to call it, is something all of them treasure. They share positive thoughts about what they do and how they do things (see CC4/CC7/CC10). Even the time (when) and place (where) in which music therapy takes place is described in a positive manner. It is when the informants are asked to elaborate upon if and why the word 'therapy' could be used that they hesitate. In some way or another, the whys seem to take them out of and away from the music and the fun and instead bring in old stigma and victimisation. The problems connected to the use of the therapy-word in this sense become ambiguous: therapy is a word that both parties are reluctant to use, but at the same time, they use the terms 'music therapy' and 'music therapist' without the same type of hesitation. We will discuss the complexity connected to these aspects in the following sections.
Interestingly, but perhaps not surprisingly, the point at which the adolescents show the most scepticism towards the concept of therapy is in the initial stages of the music therapy. We see this in the ways the adolescents describe negative preconceptions towards therapy (see CC1). They also describe feeling frustrated when they have to meet a new therapist--yet another adult whom they know nothing about (see CC2/CC5). This shows the importance of the first meeting between the adolescent and the music therapist, as the music therapist might have only one chance at explaining what music therapy is about to a sceptical adolescent. We see, however, that the adolescents' attitude changes along with them getting to know and to like the musical activities--the adolescents' acceptance and understanding of the notion of music therapy develop in time as they become more familiar with the activities. Still, and paradoxically enough, the adolescents continue to argue that 'therapy' is not fitting for their idea of music therapy. Similarly, their perception of the music therapist changes, but their idea of what a 'therapist' is does not change. They do not consider their music therapist to be one. So, alongside their reaction to the positive development in their perception of the notions of music therapy, they also seem to develop a clearer perception of what music therapy is not. According to the adolescents, music therapy as therapy cannot be compared to psychotherapy or other types of therapy that deal with their problems or involve what the adolescents refer to as 'just talking.' At the same time, they describe music therapy to be helpful, even therapeutic (see CC9/CC10)--it works as a container of change and self-support (CC9). Their perception of music therapy simply does not fit into their picture of therapy; for them, music therapy contrasts with traditional therapy by being enjoyable and fun. Here is an ambiguity: both adolescents and music therapists hesitate to name what they do together as therapy while still claiming that music therapy has therapeutic value. The tendency is that while the adolescents' idea of 'music therapy' and the 'music therapist' changes and becomes more positive during experiences over time, their idea of 'therapy' and 'therapists' remains negative.
We also find a change in the way the dyads talk about music therapy, moving from being normal to becoming more unique. At first, they talk of music therapy as valuable because of its 'normal' activities. We note that normal activities for the adolescent refers to musical activities as those recognized from outside of music therapy, that is to say activities they do with friends (talk about music), alone (listen to music), or that they see others do in various media (perform and record music). The degree to which these activities take place in normal places, plays a role as well: the music therapists see the adolescents in their homes or outside it, in the community, back-stage with other adolescents in group music projects, or in concert halls. This creates a contrast to other therapists whom the adolescents meet in the municipal health services in a particular room. Then, after getting to know each other better, the dyads start to talk about their relationship as valuable-because of its uniqueness. The adolescents describe for example their music therapist as a special individual in their life--often in the meaning of a friend but also sometimes an adult (CC4/CC5). Eventually, these aspects of normality and uniqueness in music therapy seem to erase some of the negative connotations associated with traditional therapy.
The music therapist informants use a range of different terms to refer to themselves. They use their own names; they refer to themselves as leaders and musicians. They also refer to themselves as 'adults' and as a part of the paid workforce of the child welfare services. We wonder if the music therapists might find the therapist-label to be too limited; too singular to encapsulate the plurality of the roles they identify with? Then again, although they object to the therapist-title, the music therapists use their job title at several points in the interviews. Here too is another ambiguity: they present themselves as both music therapists and as different from music therapists. Surprisingly, perhaps, when compared to what the adolescents say, we find it is difficult to explain how, when and why the music therapists use which words about themselves. Perhaps each music therapist is being continuously sensitive within each situation of when it is ok and not ok to call themselves therapists? If so, we assume that this creates a challenging mind-set for them.
Recall the child welfare literature in which adolescents speak negatively about adults who act in what they describe (negatively) as a 'professional' manner. Here, the care of the professional adult is considered by the adolescents to be inauthentic: the adults care because it is their job to do so and their approaches are experienced as technical and methodological rather than genuine (Barneombudet, 2020a; BarnevernsProffene, 2017). The adults who 'really' care show their care through empathy, their warm smiles and kind faces, not in their technical skills, according to the adolescents' reports (Forandringsfabrikken, 2019, 2020a, 2020b). The adolescents in this study talk about the care of the music therapists as arising from their personality, rather than their expertise as trained professionals. So, perhaps the biggest challenge among the music therapists is this: music therapists need to use various descriptions to find a balance in the ambiguity of being both 1) a responsible therapist who makes sure the adolescents are safe and happy and 2) an individual the adolescents say they need: a friend-like adult who really cares and is personal and authentic while also being a co-musician and in charge of musical activities. But these redefinitions can come at a price. The music therapists need to be attentive to the pitfalls of adopting new roles in an uncritical way. As 'friends,' for example, they get a new type of power. We can imagine that it can be difficult for some adolescents to disagree with the music therapist as a friend. This could give the music therapist a dominating voice. Additionally, friends tend to share intimate details of each other's lives, and we do see a danger of an expectation to disclose details that can be unnatural and even unethical to share between an adolescent and a professional therapist.
We argue nevertheless that the ambiguity of the role 'music therapist' as shown above can tell us something about the power dynamics of the relationship between the adolescents and music therapists. The interviews, in that they allow the adolescents and music therapists to collaborate in the process of re-defining their roles, show that it is possible to facilitate conversations that can start a deconstruction of the traditional roles of therapist and client. We find that negotiating societal roles like this aligns with the forms of conversations that feminist theorists like Simone de Beauvoir (2010) and bell hooks (1981) are encouraging--conversations that recognize that both personal and societal change is a collaborative project. Drawing on Rolvsjord's (2010) resource-oriented model, such negotiations can be viewed as a sign of mutuality and authenticity in the therapeutic relationship. For the music therapists, however, active and critical reflection is needed in order to avoid accepting identities that do not easily harmonize with the professional mandate.
Music Therapy or not Music Therapy
The second part of the research question in this paper relates to how music therapy as a profession can come around the challenges described above in the future. First, we need to ask: does music therapy as a profession and a discipline recognize that music therapy offers adolescents in the child welfare system a type of support that is unique as well as apt? This paper's findings suggest that music therapy is a form of combined musical and personal support that the adolescents long for and is one that they do not get elsewhere. Knowing the strong connections between adolescents, music and friends/close others, we think this is a very normal desire for any youth today. The next question then is: are the skills of a music therapist--including that of being able to act authentically--required to fulfil such needs of the adolescents? And if the answer to these questions is yes: should we call all of this music therapy? Or should we ask: when would it be right to call it music therapy and when would it not? Or: do we not want to do anything and instead accept that music therapy in this field is turning into a type of anti-therapy with therapeutic potential? What type of double and impossible duality is this? And, ethically, is it not a challenge to have to be careful of mentioning one's professional title while still acting as professional as one can? Are we developing a lurking philosophy here?
The problems surrounding the term 'therapy' may be particularly articulated in the specific setting of adolescents in out-of-home care, as they express a special need for normality. Yet, studies on music therapy in adult mental health care have shown similar results, as the clients challenge the idea that music therapy is a form of therapy. Both Seberg (2020) and Solli and Rolvsjord (2015) find that music therapy, rather than being considered a form of treatment, is viewed as a 'break' or a 'breathing room' among other therapies. Thus, we think that the challenges brought up above are not only problematic for the dyads involved in the present study. The picture is bigger, and the challenges are bigger, for music therapy and the development of therapeutic services within child welfare, both in Norway and in other countries. If we cannot find a suitable language and have a common opinion of the role of music therapy in the field in question, we continue to worsen the uncertainties, paradoxes and confusions. This, in turn, could potentially mislead future adolescents and music therapists and we might also lose sight of great potentials of what music therapy can offer. We suppose music therapy is not well served by that.
Eventually, as long as the term 'therapy' remains predominantly negative for adolescents in the child welfare system, as our findings show it is, it will remain difficult for them and their music therapists to use it in practice. A result could be that 1) young people in vulnerable situations will choose not to attend music therapy and are lost to activities they could benefit from, and 2) music therapists' professional identities are challenged, and possibly hurt. Against this background, we question whether the term 'music therapy' is fitting at all. Yet, to erase it and find a new label seems too drastic, as music therapy is a well-established label both in Norway and in international discourses.
The words we use matter. This paper shows the power of a word (therapy) and how heavy the weight of its negative connotations can be. Yet, it also shows that words are elusive and that their use is closely attached to times and situations. Interestingly, this study shows that when the word 'music' is put in front of the word 'therapy,' the therapy word seems to be 'liberated' from the traditional understanding of it among the adolescents. Then, music therapy stands out as something that actually can be positive, promising, and less negative compared to other therapies, although the term can also be met with some initial scepticism (see CC1). Still, we heard during the interviews that some of the adolescents, occasionally, with ease and without questioning its problematic content, referred to the notions of music therapy and music therapist. In the safe arena between the adolescents and the music therapists, it sometimes seemed natural to use these words.
Further, if we return to the discussions on the terms 'therapy' and 'therapist,' we find that the adolescents and music therapists in our study mostly agree with each other, and portray somewhat similar understandings of the concepts; their ways of talking about music therapy are intertwined. This is surprising after having learned from the research literature that there are many differences in the understandings between adolescents and music therapists. We wonder if the similarities arise as a consequence of the adolescents and music therapists, through working together over longer periods of time, having picked up terms and viewpoints from each other and consequently started to share discourses? If so, this is promising; it shows that dialogues on language and discourse can be fruitful in the sense that they fill old words with new and unified meaning. Again, feminist theory emphasizes the importance of redefining terms as part of a process of de- and reconstructing traditional hierarchies of power. This finding also shows how fleeting the phenomenon of language is, and spoken language in particular--it is continuously shaped and reshaped in active use. Therefore, instead of looking for differences between adolescents' and music therapists' discourses, we see the need to look for the shared discourse between them. We then need to ask: is there one way of talking about music therapy that--based on negotiations over time--could result in a language that does not offend or push away either the adolescent or the music therapist?
We need to emphasize that a shared discourse is not the same as a shared understanding. Using the same language does not automatically mean that the adolescents and the music therapists have the same understanding of what they talk about. Words are powerful and their meanings are inextricably linked to the ones uttering them. Mikhail Bakhtin, whose dialogism is often referred to as the origin behind dialogic discourse analysis (Skaftun, 2019), claims that the words we use have always been used by others before, and thus they carry with them meaning ascribed to them by others. Bakhtin (1986) says language as such (i.e., as it is used to form utterances) is populated by alien voices. Words, and their uses, are therefore complex matters. The adolescents and the music therapists, we assume, will often carry with them different language histories and cultures. And if the music therapists use the words of the adolescents, to identify as a friend for example, there is a risk that the message becomes distorted. Mistrust may arise, especially if the content of and intention behind the words are different. We will not go further into this complexity here. A shared discourse, in our context, is above all thought of as fruitful for developing the needed shared focus and an intersubjective point of departure for engaging in a dialogue that feels essential for both parties.
We find that the collaborative interview showed a possible solution to how we can come around some of the challenges connected to the use of the problematic therapy-word: by bringing the involved parties together to talk about and reflect upon the words they use, the process of building and developing a much-needed shared discourse can be kicked off. This might allow for comparisons and discussions between the informants, which in turn can make it possible for the participants to react directly to each other's descriptions. Active dialoguing between many parties is therefore needed, not avoidance and escapism. Words and labels need to be dealt with and faced actively. This might even be helpful in terms of therapeutic outcome. Also, systematic participatory research and meaningful theory building would be useful in order to develop a broad and unified language. To develop and maintain a unified understanding of music therapy as therapy among adolescents in the child welfare services globally, one that goes across cultures and the field itself, such measures are not just needed in our Norwegian context. This calls for dialogues between therapy disciplines. Maybe the music therapy community, with its therapists and adolescents, is ready to take the first initiative in doing so?
Conclusion
This paper asked: How do a group of adolescents and their music therapists in the child welfare services relate to the word 'therapy,' and how can music therapy as a profession get around problems connected to the use of it? Our reflections can only represent our voices and not the whole of this complex matter. The findings confirm our assumptions about the problems connected to the use of the word 'therapy.' They in fact reveal that many doubts and contradictions exist around the practice and profession of music therapy in the child welfare system in Norway. An active collaborative re-definition is needed. Through working together, the adolescents and music therapists develop their own ways of speaking about and understanding the value of music therapy. Their language and understandings must, however, not remain isolated. Rather, they need to be shared with and opposed by others, including those outside music therapy. The difficulties with the word 'therapy' also deserve to be a topic for participatory research. If we are able to fill the label with content that both the adolescents and their music therapists find suitable in the future, this could avoid further use of stigmatizing connotations. Instead, we can accommodate our perceptions of music therapy as something that promotes normality to some degree. This paper implies that a redefining is not only possible; it is already beginning to happen in practice. As this paper suggests (again): it needs to be a collaborative project. The voices of the adolescents are especially important and create a basis for further development. As professional music therapists and researchers, we cannot negotiate our words and theories alone if we want to develop a language that is understood by all, not just the involved individuals, but also other professions and society at large. | 2022-11-05T15:30:38.863Z | 2022-11-01T00:00:00.000 | {
"year": 2022,
"sha1": "e5903dd5cd6506c2396efa01133d06999557d8ab",
"oa_license": "CCBY",
"oa_url": "https://voices.no/index.php/voices/article/download/3380/3550",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "5e190a60f9d296258b870dd5b05c0b35ad84c79e",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": []
} |
267764765 | pes2o/s2orc | v3-fos-license | Power-to-Noise Optimization in the Design of Neural Recording Amplifier Based on Current Scaling, Source Degeneration Resistor, and Current Reuse
This article presents the design of a low-power, low-noise neural signal amplifier for neural recording. The structure reduces the current consumption of the amplifier through current scaling technology and lowers the input-referred noise of the amplifier by combining a source degeneration resistor and current reuse technologies. The amplifier was fabricated using a 0.18 μm CMOS MS RF G process. The results show the front-end amplifier exhibits a measured mid-band gain of 40 dB/46 dB and a bandwidth ranging from 0.54 Hz to 6.1 kHz; the amplifier’s input-referred noise was measured to be 3.1 μVrms, consuming a current of 3.8 μA at a supply voltage of 1.8 V, with a Noise Efficiency Factor (NEF) of 2.97. The single amplifier’s active silicon area is 0.082 mm2.
Introduction
The emerging field of Brain-Machine Interface (BMI) technology utilizes microelectrodes, microelectronics, and computational technologies and has extensive applications in neural research and neuroscience [1]. Advanced microelectromechanical systems (MEMS) technology allows for the integration of multiple neural microelectrode systems onto a single silicon chip [2], which can then be implanted into the cerebral cortex. Such systems can simultaneously capture full-spectrum neural signals from multiple neurons. The subsequent analysis of these neural signals allows for the establishment of a connection between neural responses and real bodily activities, thereby facilitating brain-machine control [3]. Consequently, neural recording amplifiers play a crucial role in the development of BMI technology and are considered an indispensable component.
The electrochemical effects at the electrode-tissue interface often lead to a DC offset of 1-2 V in differential recording electrodes [4]. Therefore, the electrodes need to be AC coupled to the amplifiers to eliminate this offset. Local Field Potentials (LFPs), which are neural signals, typically exhibit amplitudes ranging from 20 µV to 1 mV, covering a frequency range of 1 Hz to 200 Hz. In contrast, Action Potentials (APs) generally have an amplitude of around 50 µV, but they can reach as high as 5 mV in cases of abnormal multi-unit activity; these signals can have a frequency content of up to 5 kHz [5], and occasionally, even higher.
Because neural signals have a low amplitude, noise and interference can significantly affect the recorded signals. Maintaining a low input-referred noise in the amplifier is crucial for obtaining clean neural signal recordings. Technologies commonly used to reduce the input-referred noise in amplifiers include source degeneration resistors [6], current reuse [7,8], and gm-boost [9]. In fact, during the process of signal acquisition, the thermal
In this first stage of the schematic, the input signals are AC coupled through a pair of input capacitors (Cin), and a negative feedback network formed by a feedback capacitor (Cf) is applied around the OTA for operation. Hence, the closed-loop gain of the amplifier is defined by the ratio Cin/Cf. The lower cutoff frequency (fL) is given by 1/(2πRpseuCf), while the higher cutoff frequency (fH) is given by gm/(2πCL), where gm represents the transconductance of the OTA, and Rpseu is the pseudo-resistor formed by the PMOS transistors. One advantage of this design is its ability to occupy a small area while exhibiting resistance characteristics of over 100 GΩ within a voltage difference of less than ±0.2 V [17]. Additionally, the resistance value of the pseudo-resistor can be adjusted by an external voltage Vtune, allowing for tunable cutoff frequencies.
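As a quick numerical illustration of these design equations, the sketch below evaluates the closed-loop gain and the two cutoff frequencies; the component values are round-number assumptions chosen only to land near the reported specifications, not values taken from the fabricated design.

```python
# Assumed example values (not from the paper): Cin = 10 pF, Cf = 100 fF,
# Rpseu = 3 TOhm, gm = 1 uS, CL = 25 pF.
import math

Cin, Cf = 10e-12, 100e-15
Rpseu, gm, CL = 3e12, 1e-6, 25e-12

gain = Cin / Cf                       # closed-loop mid-band gain
fL = 1 / (2 * math.pi * Rpseu * Cf)   # lower cutoff set by the pseudo-resistor
fH = gm / (2 * math.pi * CL)          # upper cutoff set by gm and the load capacitance

print(f"gain = {gain:.0f} (~{20*math.log10(gain):.0f} dB)")
print(f"fL ~ {fL:.2f} Hz, fH ~ {fH/1e3:.1f} kHz")
```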
The calculation formula for the input-referred noise of the amplifier is as follows:
V²AMP = ((Cin + Cf + Cp)/Cin)² · V²OTA (1)
where V²AMP is the input-referred noise of the amplifier, V²OTA is the input-referred noise of the OTA, and Cp is the parasitic capacitance within the OTA.
According to Equation (1), to achieve a low-noise amplifier, it is essential to ensure that the input capacitance Cin >> Cf, Cp.
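The following sketch evaluates Equation (1) numerically to show why a large Cin is needed; the capacitance values are assumptions for illustration.

```python
# Input-referred noise multiplication factor from Equation (1):
# V2_AMP = ((Cin + Cf + Cp) / Cin)^2 * V2_OTA
def noise_gain(Cin, Cf, Cp):
    return ((Cin + Cf + Cp) / Cin) ** 2

Cf, Cp = 100e-15, 1e-12            # assumed feedback and parasitic capacitance
for Cin in (1e-12, 10e-12, 20e-12):
    print(f"Cin = {Cin*1e12:4.0f} pF -> OTA noise multiplied by {noise_gain(Cin, Cf, Cp):.2f}")
# A large Cin (Cin >> Cf, Cp) keeps the multiplication factor close to 1.
```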
The second stage of the schematic is a Variable Gain Amplifier (VGA). The VGA is based on a CCIA topology as well, and offers two different programmable gains which are set via a programmable capacitor array. Therefore, the total gain of the amplifier can be set to ×200 and ×100.
In addition, due to the significantly lower gain of the VGA in comparison to the gain of the first stage, the influence of the VGA on the overall amplifier's input-referred noise is correspondingly negligible. Hence, to achieve low-noise performance, it is important to design the first-stage OTA to have low input-referred noise. Section 3 describes the low-noise, low-power design technologies used in the OTA.
Proposed OTA
In the OTA depicted in Figure 2, to achieve a 1:10 current scaling and reduce circuit power consumption, we apply a bias voltage V b to M 15 and M 16 . This bias voltage sets the current flowing through M 15 and M 16 at 9/10 I B . Consequently, the current of the branch transistors M 5 -M 8 is configured to be 1/10 I B . This approach enables current scaling in the circuit without requiring additional bias current consumption. The self-biased structure eliminates any additional current consumption from the individual branches that provide bias and removes the necessity for complex circuits to supply the bias voltage to the amplifier. As a result, the operating conditions of the amplifier are simplified. Furthermore, to optimize the noise of the amplifier, we employed source degeneration resistors with identical resistance values; the current mirror transistors M 5 -M 8 are identical, and M 15 -M 16 are also sized identically, to mitigate the matching errors that can occur when using source degeneration current mirrors with different sizes. Previous approaches to current scaling utilized source degeneration current mirrors with different sizes at the bottom [6,13] to regulate the current replication ratio. However, in the actual manufacturing process, variations and process errors can introduce matching errors when different sizes of source degeneration current mirrors are used, resulting in inaccurate current replication ratios and an increased risk of device mismatch. Employing source degeneration current mirrors of the same size therefore helps reduce matching errors and enhances the overall performance and reliability of the circuit.
To minimize the input-referred noise of the amplifier, our focus lies in reducing the contribution of transistor noise. In a conventional OTA without source degeneration resistors, the transistor produces significant noise due to its substantial channel current. In contrast, our design utilizes the source-degenerated NMOS transistor, comprising a transistor and a source degeneration resistor, as illustrated in Figure 2. The noise generated by a source-degenerated NMOS transistor primarily arises from the resistor, resulting in a significantly lower noise contribution compared to an MOS transistor operating at the same current level. Another benefit of employing source-degenerated NMOS transistors is that the noise induced by resistors is predominantly thermal noise, while NMOS transistors tend to produce a notable amount of 1/f noise unless they are sized with a considerably large area. In our neural amplifier, the input differential pair is composed of a pair of stacked large-area PMOS transistors, which is the major noise contributor of the amplifier.
The PMOS transistors are chosen due to the fact that the 1/f noise of a PMOS transistor is one to two orders of magnitude lower than the 1/f noise of an NMOS transistor of the same size, as long as it does not significantly exceed the threshold voltage [17,18].
Maximizing G m Analysis and Noise Analysis
To achieve low input-referred noise, it is crucial to maximize the transconductance (G m ) of the OTA under a given total current. The maximum achievable G m for an OTA is typically the transconductance of the PMOS transistor in the input differential pair, which we can refer to as g m1 . Therefore, G m ≈ g m1 . Consequently, it is advantageous to operate the input transistors in the subthreshold region to maximize the g m at a given current level. This implies that the input transistors need to have a larger W/L ratio. Based on this consideration, combined with Figure 3c,d, enhancing the input differential pair through the use of current reuse technology can increase the transconductance of the input differential pair without consuming additional current.
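As a rough numerical illustration of why subthreshold operation and current reuse help, the sketch below (Python) compares the transconductance of a single subthreshold transistor with that of a current-reused (stacked) input pair carrying the same branch current. The branch current is assumed purely for illustration, and κ = 0.7 is the typical value quoted later in the NEF discussion; neither is the exact bias point of the fabricated OTA.
# Subthreshold transconductance: g_m = kappa * I_D / U_T.
# With current reuse, the stacked input devices share one branch current, so their
# transconductances add: g_m1 = g_mos1 + g_mos3.
U_T = 0.0259      # thermal voltage near 300 K, volts
kappa = 0.7       # subthreshold slope factor (typical value used in this paper)
I_D = 1.0e-6      # assumed branch current for illustration, amperes
g_m_single = kappa * I_D / U_T       # one device carrying I_D
g_m_reuse = 2 * g_m_single           # two stacked devices reusing the same I_D
print(f"single device : g_m = {g_m_single * 1e6:.1f} uS")
print(f"current reuse : g_m = {g_m_reuse * 1e6:.1f} uS (same current, roughly 2x g_m)")
# Since the input pair's input-referred thermal noise power scales as 1/g_m, doubling
# g_m at the same current roughly halves that contribution.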
The total input-referred thermal noise, V 2 in,thermal, can be approximately calculated by (2), where k is the Boltzmann constant, T is the absolute temperature, and g m is the transconductance of the corresponding transistor. To reduce the total input-referred thermal noise, g m5 , g m7 , g m13 , and g m15 must be significantly less than g m1 to minimize the noise contribution of the devices M 5 -M 8 and M 13 -M 16 . After designing M 5 -M 8 and M 13 -M 16 , g m5 -g m8 and g m13 -g m16 become the minimum. We can analyze M 5 -M 8 , M 15 , and M 16 in combination with Figure 4.
Figure 4 illustrates the schematic diagram of the circuit used to determine the equivalent transconductance of a source-degenerated NMOS transistor. In Figure 4b, the open-circuit voltage (V oc ), short-circuit current (i sc ), and equivalent resistance (R eq ) are defined. Assuming a small-signal current of zero enters the drain of the transistor, the resulting voltage on R s is reduced to zero. This condition renders R s independent of V gs and V oc . Furthermore, the transistor's equivalent resistance is increased by a factor of (1 + g me R s ), where g me represents the effective transconductance of the transistor (accounting for the body effect). Because i sc = V oc /R eq , and V oc is not influenced by R s , i sc decreases by the same factor as the output resistance increases. Considering the aforementioned properties, we can construct an equivalent transistor for an NMOS transistor with source degeneration, as depicted in Figure 4c. Including R s in the circuit has the overall effect of increasing the output impedance (R o ) and decreasing the equivalent transconductance (G m ). By defining G m and R o and utilizing Equations (3), (5), and (6), we can ensure that the open-circuit voltage of the equivalent transistor remains unaffected by R s . Using this method, we can determine the equivalent transconductance of a source-degenerated NMOS transistor, as demonstrated in Equation (7), where R o is equal to R eq .
According to Formula (7), the source degeneration transistor results in a higher equivalent resistance (R o ) and a lower transconductance (G m ). This has significance in optimizing the input-referred noise of the amplifier.
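The trade described around Equations (3)-(7) can be illustrated numerically. The sketch below (Python) uses the standard source-degeneration relations G m ≈ g me /(1 + g me R s ) and R o ≈ r o (1 + g me R s ); these generic textbook expressions are assumed here because Equation (7) itself is not reproduced in this text, and the g me and r o values are illustrative rather than taken from the fabricated devices. Only the 186 kΩ resistor value matches the implementation section.
# Illustrative effect of source degeneration on a mirror device: G_m falls and R_o rises
# by the same factor (1 + g_me * R_s), which suppresses the noise contribution of
# M5-M8 and M13-M16 relative to the input pair.
g_me = 20e-6     # assumed effective transconductance of the mirror device, siemens
r_o = 5e6        # assumed intrinsic output resistance, ohms
R_s = 186e3      # source degeneration resistor used in this design, ohms
factor = 1 + g_me * R_s
G_m_eq = g_me / factor     # equivalent transconductance of the degenerated device
R_o_eq = r_o * factor      # equivalent output resistance
print(f"degeneration factor (1 + g_me*R_s) = {factor:.2f}")
print(f"equivalent G_m : {G_m_eq * 1e6:.2f} uS (down from {g_me * 1e6:.1f} uS)")
print(f"equivalent R_o : {R_o_eq / 1e6:.1f} MOhm (up from {r_o / 1e6:.1f} MOhm)")
# A smaller equivalent G_m means the degenerated device injects less of its own noise
# into the signal path for the same bias current.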
Table 1 illustrates the operating points for the transistors in the OTA. As shown in Table 1, by operating M 1 -M 4 in the subthreshold region, we achieved a high g m /I D ratio such that g m1 is much greater than g m5 -g m8 and g m13 -g m16 . In addition, as shown in Figure 3, current reuse technology is used to enhance the transconductance of the input transistors, with g m1 = g mos1 + g mos3 (g mos1 is the transconductance of M 1 and g mos3 is the transconductance of M 3 ).
As mentioned in Section 3.1, the 1/f noise (flicker noise) is also a key noise contributor in low-noise, low-frequency circuits. We mitigate the impact of flicker noise by using PMOS transistors as input devices and employing devices with large gate-source areas. The flicker noise is inversely proportional to the gate-source area, so all transistors should be made as large as possible to minimize the 1/f noise. However, as devices M 5 -M 8 and M 13 -M 16 are made larger, the total capacitance seen by their gates increases and, according to (1), C p increases, so the total input-referred noise of the OTA also increases. To ensure noise minimization, there is an optimal size for M 5 -M 8 and M 13 -M 16 . In our design, we decreased the size of M 5 -M 8 and M 13 -M 16 as much as possible, trading off the input-referred noise.
Noise Efficiency Factor
As mentioned in Section 1, the NEF proposed in [4] is adopted: NEF = V ni,rms × √(2I tot /(π · U T · 4kT · BW)), where V ni,rms is the total input-referred rms noise voltage, I tot is the total supply current, U T is the thermal voltage, and BW is the −3 dB bandwidth of the amplifier in hertz. The NEF limitation for MOSFET-based amplifiers stems from their current noise and maximum g m /I D [19]. The input-referred noise of the ideal MOS transistor is set by the noise coefficient γ and the transconductance g m of the MOS transistor.
When the transistor operates in the subthreshold region, we obtain g m = κI D /U T , which determines the input-referred rms noise of the ideal MOS transistor [19]. The theoretical limit of the NEF of an OTA that uses a differential pair as an input stage is reached when the two differential-pair transistors are the only noise sources in the circuit.
The input-referred noise of the OTA is then V 2 ni,rms = 2 × V 2 mos,rms . Assuming a first-order roll-off of the frequency response and combining (8) and (11), we obtain the theoretical limit for the NEF of any OTA that uses a subthreshold MOS differential pair. Assuming a typical value of κ = 0.7 and, as mentioned in Section 3.1, a 1:10 current scaling ratio to lower the power consumption of the amplifier, the total current consumption of the first-stage amplifier is equivalent to 2.2 times I B ; therefore, I tot = 2.2 I D . We can conclude that the theoretical limit value of the NEF is 2.12.
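A quick arithmetic check of the quoted limit is given below (Python). It assumes the limit takes the form NEF_limit = √(I tot /(κ² I D )), which is what combining the NEF definition with shot-noise-limited subthreshold input transistors implies; this closed form is an inference, not an equation reproduced from the original, but it does return the 2.12 value stated above.
import math
# Theoretical NEF limit, assuming NEF_limit = sqrt(I_tot / (kappa^2 * I_D)).
kappa = 0.7          # subthreshold slope factor assumed in the text
current_ratio = 2.2  # I_tot = 2.2 * I_D after the 1:10 current scaling
nef_limit = math.sqrt(current_ratio) / kappa
print(f"theoretical NEF limit ~= {nef_limit:.2f}")   # ~2.12, matching the text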
Detailed Circuit Implementation
The amplifier was fabricated in the TSMC 0.18 µm CMOS 1P6M process. All the source degeneration resistors are constructed using high-resistance polysilicon, with a resistance value of 186 kΩ. Metal-Insulator-Metal (MIM) capacitors are used for C in and C f , which offer high-precision capacitance for accurately defining the closed-loop gain of the amplifier. By setting the value of C in to 20 pF and C f to 200 fF, the first stage is designed to provide a gain of approximately 100 (40 dB). The second stage offers a controllable gain of ×2 and ×1, thus making the total gain of the amplifier adjustable between ×200 and ×100. Each amplifier occupies an active silicon area of 0.082 mm 2 . An on-chip bandgap reference circuit generates all the reference currents and voltages for the entire chip to minimize the use of off-chip components. A chip microphotograph of the amplifier is shown in Figure 5 (the chip measures 2 mm × 4.2 mm and contains 64 channels of the low-noise, low-power neural amplifier, a 64-to-1 MUX, a bandgap reference, and an ADC buffer).
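These component values can be sanity-checked against the closed-loop expressions quoted earlier (gain = C in /C f , f L = 1/(2πR pseu C f )). The short sketch below (Python) does so; rather than assuming a pseudo-resistor value, it solves for the resistance needed to reach the 0.54 Hz lower corner reported in the measurement section, since only a lower bound (>100 GΩ) is quoted for R pseu .
import math
C_in = 20e-12    # farads
C_f = 200e-15    # farads
gain = C_in / C_f
print(f"first-stage gain = {gain:.0f} ({20 * math.log10(gain):.0f} dB)")   # ~100 (40 dB)
# Pseudo-resistance required for the measured 0.54 Hz high-pass corner, from f_L = 1/(2*pi*R*C_f):
f_L = 0.54   # hertz
R_pseu = 1 / (2 * math.pi * f_L * C_f)
print(f"required R_pseu ~= {R_pseu / 1e12:.2f} TOhm")   # ~1.5 TOhm, i.e. well above 100 GOhm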
Measurement Results
Each channel of the amplifier consumes 3.8 µA from a 1.8 V supply, which can be broken down as follows. The first-stage OTA consumes 3.6 µA, and the second-stage VGA consumes 0.2 µA. We do not include the bias current (1 µA), since it can be shared by many amplifiers in the array.
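The corresponding power numbers follow directly from these currents; the short sketch below (Python) works them out per channel and for the 64-channel array described earlier. The array total excludes the shared 1 µA bias and any peripheral blocks, so it is a lower-bound estimate.
VDD = 1.8            # supply voltage, volts
I_channel = 3.8e-6   # per-channel supply current, amperes
channels = 64        # amplifier channels on the fabricated chip
p_channel = VDD * I_channel
print(f"per-channel power ~= {p_channel * 1e6:.2f} uW")                        # ~6.8 uW
print(f"64-channel amplifier power ~= {p_channel * channels * 1e3:.3f} mW")    # ~0.44 mW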
Figure 6 displays the equipment used for the measurements, including the test board, along with the observed waveforms. Figure 6b-d show the outputs when 1 mVpp, 1 kHz ramp, sine, and artificial cardiac signals generated by the Keysight 33600A true waveform generator are applied; the DC measurement of the output waveform is performed using a Tektronix MSO54 Mixed Signal Oscilloscope. As mentioned in Section 1, the DC offset is an issue to be considered in a neural signal amplifier. Since the reference voltage of the amplifier is 0.9 V, it is expected that the output waveform of the amplifier will exhibit fluctuations above and below 0.9 V. Therefore, conducting DC measurements can serve as a means to verify this behavior.
As mentioned in Section 1, taking into account the characteristics of the LFPs and APs, the −3 dB bandwidth of the amplifier should be designed to capture a wide range of neural signals. To achieve this, the high-pass corner frequency of the amplifier can be adjusted down to 0.54 Hz, allowing for the recording of low-frequency signals. Additionally, a load capacitor of 8 pF was chosen to establish the low-pass corner frequency of the amplifier at 6.1 kHz, enabling the inclusion of high-frequency signals within the bandwidth. Figure 7 shows the AC frequency response of one channel of the overall amplifier. The amplifier has a measured low-pass cut-off frequency of 6.1 kHz, and its high-pass cut-off frequency is tunable from 0.54 Hz to 182 Hz by V tune ; the voltage of V tune is regulated by a potentiometer.
The measured CMRR and PSRR are shown in Figure 8. The CMRR is calculated as the ratio of the differential-mode gain to the common-mode gain. The PSRR is calculated as the ratio of the differential-mode gain to the gain from the power supply to the output. The measured CMRR and PSRR exceed 66 and 84 dB at 1 kHz, respectively.
The measured input-referred noise spectrum of the amplifier is shown in Figure 9, which is obtained by dividing the output noise spectrum by the mid-band gain of the amplifier (at a gain of 100). The 1/f noise corner of the design was found to be roughly 22 Hz. The measured transient input-referred noise waveform is shown in Figure 10.
Figure 10a records the input-referred peak-to-peak noise voltage in the frequency range 1 Hz to 6.1 kHz; the total input-referred rms noise is 3.1 µVrms integrated from 1 Hz to 6.1 kHz. The measured integrated noise is 0.96 and 2.95 µVrms in the frequency bands of 1-200 Hz and 0.2 k-6.1 kHz, respectively. Input-referred peak-to-peak voltage noise levels of 5.9 µVpp (1-200 Hz) and 18 µVpp (0.2 k-6.1 kHz) are measured, as shown in Figure 10b,c, respectively. By using (9), the NEF of the amplifier is calculated to be 2.97 from the measurement results. The power efficiency factor (PEF), which additionally takes the supply voltage VDD into account, is also an important parameter for evaluating the power efficiency of biomedical amplifiers; the PEF of the amplifier is calculated to be 10.17. Figure 11 [6,7,9,13,[15][16][17][20][21][22][23][24][25][26][27][28][29][30][31] shows the input-referred noise versus the supply current of the amplifier. The proposed work features a low input-referred noise while achieving a competitive NEF. Table 2 compares the proposed work with state-of-the-art designs in the literature. Three different topologies of AFEs are compared.
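As a cross-check on these figures, the sketch below (Python) re-evaluates the NEF from the measured quantities quoted above (3.1 µVrms integrated noise, 3.8 µA per-channel supply current, 6.1 kHz bandwidth) using the NEF definition adopted earlier; the physical constants are standard room-temperature values, and the result lands close to the stated 2.97.
import math
# NEF = V_rms * sqrt(2 * I_tot / (pi * U_T * 4kT * BW)), evaluated with measured values.
k_B = 1.380649e-23                       # Boltzmann constant, J/K
T = 300.0                                # assumed room temperature, K
U_T = k_B * T / 1.602176634e-19          # thermal voltage, ~25.9 mV
V_rms = 3.1e-6    # measured input-referred rms noise (1 Hz - 6.1 kHz), volts
I_tot = 3.8e-6    # measured per-channel supply current, amperes
BW = 6.1e3        # measured -3 dB bandwidth, hertz
nef = V_rms * math.sqrt(2 * I_tot / (math.pi * U_T * 4 * k_B * T * BW))
print(f"NEF from measured values ~= {nef:.2f}")   # ~2.98, close to the reported 2.97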
Although [20] and [32] achieved impressive NEF values of 1.07 and 0.86, respectively (in [20], an NEF of 1.07 was obtained by stacking three g m cells, while [32] utilized five differential pairs with AC-coupled inputs to reach 0.86), such aggressive stacking of g m cells results in limited headroom for each transistor. Typical amplifier designs currently used in industry include the CCIA [17] and chopper [33] structures, as well as existing applications in the field of BMI aiming for high-resolution, high-density neural probes such as Neuropixels [34,35]. The proposed design offers several advantages. Firstly, it occupies a smaller area compared to other designs, allowing for the efficient use of limited chip real estate. Additionally, the design achieves a smaller input-referred noise, leading to improved signal quality. Moreover, it provides a larger −3 dB bandwidth range, enabling the recording of a wider range of signals. Furthermore, the design exhibits relatively low power consumption, making it energy-efficient. Lastly, the NEF and PEF of the design are also superior under the 0.18 µm CMOS process.
Conclusions
In this paper, a low-noise and low-power amplifier with a CCIA topology is proposed for neural signal acquisition. The amplifier reduces input-referred noise by stacking two PMOS transistors in combination with source degeneration resistor technology, rather than stacking multiple g m cells that consume headroom for each transistor. Current scaling technology is used to reduce the power consumption of the amplifier. Different from the traditional current scaling technology, this design uses two separate NMOS transistors to divide the current, so as to achieve current scaling. In contrast to the traditional approach, which requires additional bias current branches, this design method is more energy-efficient. The design was fabricated using the TSMC 0.18 µm MS RF G process. The measurement results demonstrate the amplifier's favorable power and noise performance. The measured −3 dB bandwidth of 0.54 Hz-6.1 kHz indicates its capability to record LFPs and APs. This architecture is well suited as a front-end amplifier for power-constrained or energy-sensitive applications, particularly in the field of biomedical implants.
Figure 1. Overall schematic of the neural amplifier.
Figure 2. Circuit diagram of the low-power, low-noise OTA used in this design.
Figure 3. (a) Small-signal model of a PMOS transistor. (b) Small-signal model of an NMOS transistor. (c) Small-signal model of a PMOS transistor based on current reuse. (d) Small-signal model of an NMOS transistor based on current reuse.
Figure 4. (a) An NMOS transistor with source degeneration. (b) An equivalent circuit used to analyze an NMOS transistor with source degeneration. (c) An NMOS transistor with source degeneration is equivalent to a single transistor with a smaller transconductance (G m ) and larger output impedance (R o ).
v oc = −g m r o v in (3)
Figure 5. Die microphotograph of the proposed neural recording amplifier ASIC.
Figure 6. (a) Test equipment and test board. (b) DC measurement when inputting a 1 mVpp, 1 kHz ramping signal. (c) DC measurement when inputting a 1 mVpp, 1 kHz sine signal. (d) DC measurement when inputting a 1 mVpp, 1 kHz artificial cardiac signal. The blue part is a long period of the waveform, and the red part is a segment captured from it for display. When a sine/ramp/artificial cardiac signal is input, the output signal of the amplifier is the corresponding signal amplified according to the scale.
Figure 7. Measured frequency response of the neural recording amplifier with tunable high-pass corner frequency.
Figure 8. CMRR and PSRR measurements of the neural recording amplifier.
Figure 9. Measured output noise and input-referred noise spectrum of the proposed amplifier (at a gain of 100).
Figure 11. Comparison of the proposed amplifier with existing amplifier designs in terms of input-referred noise versus supply current (references: [6,7,9,13,[15][16][17][20][21][22][23][24][25][26][27][28][29][30][31]). The colored slashes represent lines of constant NEF; for example, along the pink line NEF = 1, the area below the slash corresponds to NEF < 1, and the area above the slash to NEF > 1. The green and brown lines work the same way. For example, the point for the work of Tang, T., 2019 lies below the NEF = 2 (green) line and above the NEF = 1 (pink) line, showing that its NEF value is between 1 and 2. The position of each work's point is determined by the current consumed by its design, the resulting −3 dB bandwidth, and the input-referred noise.
Table 1. Operating points for transistors in the OTA.
Table 2. Performance and comparison of the proposed neural amplifier. | 2024-02-21T16:04:46.269Z | 2024-02-01T00:00:00.000 | {
"year": 2024,
"sha1": "4f2bf38be41b015971ac630949a82e64c5252974",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2079-6374/14/2/111/pdf?version=1708350382",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "bb1d2a12b9575dd5c2cb84ae08dc6f81b1156cdc",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
} |
254921438 | pes2o/s2orc | v3-fos-license | Effect of Different Solvents on Morphology and Gas-Sensitive Properties of Grinding-Assisted Liquid-Phase-Exfoliated MoS2 Nanosheets
Grinding-assisted liquid-phase exfoliation is a widely used method for the preparation of two-dimensional nanomaterials. In this study, N-methylpyrrolidone and acetonitrile, two common grinding solvents, were used during the liquid-phase exfoliation for the preparation of MoS2 nanosheets. The morphology and structure of MoS2 nanosheets were analyzed via scanning electron microscopy, X-ray diffraction, and Raman spectroscopy. The effects of grinding solvents on the gas-sensing performance of the MoS2 nanosheets were investigated for the first time. The results show that the sensitivities of MoS2 nanosheet exfoliation with N-methylpyrrolidone were 2.4-, 1.4-, 1.9-, and 2.7-fold higher than exfoliation with acetonitrile in the presence of formaldehyde, acetone, and ethanol and 98% relative humidity, respectively. MoS2 nanosheet exfoliation with N-methylpyrrolidone also has fast response and recovery characteristics to 50–1000 ppm of CH2O. Accordingly, although N-methylpyrrolidone cannot be removed completely from the surface of MoS2, it has good gas sensitivity compared with other samples. Therefore, N-methylpyrrolidone is preferred for the preparation of gas-sensitive MoS2 nanosheets in grinding-assisted liquid-phase exfoliation. The results provide an experimental basis for the preparation of two-dimensional materials and their application in gas sensors.
Introduction
Given their special structure and potential applications, two-dimensional (2D) materials such as graphene, boron nitride, and molybdenum disulfide (MoS 2 ) have attracted considerable attention. Among them, MoS 2 , as the frontrunner among transition metal dichalcogenide (TMDC) materials, has gained the most attention [1][2][3][4] and is used in a wide variety of applications [5][6][7][8][9][10][11] due to its unique properties [12][13][14]. MoS 2 is at the forefront in the race for an ideal gas-sensing material because of its large surface-to-volume ratio, enormous number of active sites, and favorable adsorption sites [15,16]. MoS 2 manifests two possible crystal phases, including trigonal and hexagonal structures, with metallic and semiconducting properties, respectively [17]. The presence of weak Van der Waals forces facilitates the isolation of layers from bulk MoS 2 . The indirect bandgap of 1.2 eV in bulk MoS 2 is converted to a direct bandgap of 1.8 eV for monolayer MoS 2 [3,14,18]. The absence of dangling bonds provides stability to pristine MoS 2 flakes in liquid and gaseous media in the presence of oxygen, thereby facilitating its gas-sensing application [19,20]. Therefore, a reliable and low-cost technique is needed to produce 2D MoS 2 for gas-sensing applications. Currently, several methods, including vapor deposition [21], mechanical exfoliation [22], lithium-ion intercalation [23], liquid-phase exfoliation [24,25], and RF sputtering [26], have been employed to prepare 2D MoS 2 nanosheets.
Preparation of Materials
MoS 2 , with a purity of 99% and particle size less than 2 µm, was purchased from Sigma-Aldrich. ACN, NMP, and absolute ethanol (C 2 H 6 O) were purchased from Tianjin Zhiyuan Chemical Reagent Co. Ltd. as analytically pure reagents. The preparation of MoS 2 nanosheets via grinding-assisted liquid-phase exfoliation is described as follows: MoS 2 powder (100 mg) was manually ground in a mortar for 2 h, and 0.5 mL of the chosen solvent was added during the grinding. The sample was then dried in a vacuum oven at 60 °C for 12 h. The dried sample was dispersed in 40 mL of 45 vol% absolute ethanol and sonicated for 1 h at 120 W with stirring. The dispersion was centrifuged for another 20 min (1500 r/min) to obtain the MoS 2 nanosheets, and the supernatant was dried in air for further use. For convenience, the MoS 2 nanosheets obtained by grinding with ACN were designated as S1, and those ground with NMP were called S2.
Characterizations
The morphology of MoS 2 nanosheets was observed with a field emission scanning electron microscope (SEM, JSM-7610F Plus). The crystal structure of MoS 2 nanosheets was characterized by X-ray diffraction (XRD, Bruker D8 Advance, with Cu-Kα radiation). Raman spectroscopy (Renishaw inVia, Gloucester, Britain) was used to characterize the defects and functional groups of samples. The I-t and I-V curves of the sensing chip were measured by Keithley 2636B at room temperature.
Device Fabrication and Testing
The MoS 2 nanosheets were dispersed in absolute ethanol at 10 mg/mL. Dispersions (2 µL) were uniformly coated to fabricate a MoS 2 -based sensing chip with Ag-Pd fork-finger electrodes. The minimum width and spacing of the electrodes was 0.2 mm. The interdigital electrode was dried at 25 °C and aged for 24 h at a voltage of 4 V to obtain a sensing chip with good stability. The target vapor was produced by thermal evaporation, according to our previous work [35]: a calculated amount of target liquid was dropped onto a hot plate in a 1 L container to generate the target vapor in the container. Next, 98% relative humidity was obtained with a saturated salt solution (potassium sulphate, K 2 SO 4 ). Then, by transferring the sensing chip from air to the target gas at room temperature, the Keithley 2636B recorded the change of the current signal of the sensing chip (Figure S1).
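For readers unfamiliar with this kind of static vapor generation, the sketch below (Python) estimates how much liquid analyte must be fully evaporated to reach a target ppm level in a sealed 1 L chamber, using the ideal gas law. The 25 °C, 1 atm conditions, complete evaporation, and the handbook molar masses and densities are all assumptions made for illustration; formaldehyde is omitted because it is normally dosed from aqueous formalin, and the authors' actual dosing procedure follows their previous work [35] rather than this calculation.
# Liquid volume needed to reach a target vapor concentration in a sealed chamber,
# assuming complete evaporation and ideal-gas behavior at 25 C and 1 atm.
R = 8.314          # J/(mol*K)
T = 298.15         # K
P = 101325.0       # Pa
V_chamber = 1e-3   # m^3 (1 L container, as in the text)
analytes = {
    # name: (molar mass in g/mol, liquid density in g/mL) - handbook values
    "ethanol": (46.07, 0.789),
    "acetone": (58.08, 0.791),
}
target_ppm = 1000
for name, (M, rho) in analytes.items():
    n_vapor = P * V_chamber * target_ppm * 1e-6 / (R * T)   # moles of analyte vapor
    v_liquid_uL = n_vapor * M / rho * 1000                  # grams / (g/mL) = mL, then to uL
    print(f"{name}: ~{v_liquid_uL:.1f} uL of liquid for {target_ppm} ppm in 1 L")
# Both analytes work out to only a few microliters, which is why microliter-scale droplets suffice.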
The response was defined as (I G − I R )/I R × 100%, where I R and I G are the currents of the sensor in the reference gas and the target gas, respectively. The response time and recovery time were defined as the times required to reach 90% of the response upon exposure to the target gas and to return to 10% of it upon removal, respectively.
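A small sketch (Python) of how these definitions can be applied to a recorded I-t trace is given below; the trace is synthetic, the variable names are illustrative, and the simple threshold search implements the 90% level described above (the recovery time would be found the same way on the falling part of the trace).
# Compute the response and the 90%-level response time from a current-vs-time trace,
# following Response = (I_G - I_R) / I_R * 100%.
def response_percent(i_ref, i_gas):
    return (i_gas - i_ref) / i_ref * 100.0

def time_to_reach(times, currents, level):
    """Return the first time at which the current reaches the given level."""
    for t, i in zip(times, currents):
        if i >= level:
            return t
    return None

# Synthetic example trace: baseline 1.00 uA in air, rising toward 1.50 uA in the target gas.
times = [0, 5, 10, 15, 20, 25, 30, 35, 40]                         # seconds
currents = [1.00, 1.00, 1.20, 1.35, 1.44, 1.48, 1.50, 1.50, 1.50]  # microamperes
i_ref, i_gas = currents[0], max(currents)
print(f"response = {response_percent(i_ref, i_gas):.0f}%")
t90 = time_to_reach(times, currents, i_ref + 0.9 * (i_gas - i_ref))
print(f"response time (90% level) ~= {t90} s")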
Results and Discussion
The XRD patterns of the two types of MoS 2 prepared with different grinding solvents are shown in Figure 1. A broadening (Figure S2) of the peaks appeared after liquid-phase exfoliation, indicating that the MoS 2 nanosheets were exfoliated and that the size of the MoS 2 decreases [36][37][38][39].
Raman spectroscopy is effective in distinguishing bulk from exfoliated 2D materials. Figure 2 shows the Raman spectra of bulk MoS 2 , S1, and S2. The two Raman peaks correspond to the high-energy A 1g mode and the lower-energy E 1 2g mode. As shown in Figure 2a, all the samples displayed the E 1 2g and A 1g peaks of MoS 2 . Compared with the peaks of bulk MoS 2 , a red shift of the E 1 2g peak and a blue shift of the A 1g peak were observed for both S1 and S2. These shifts are associated with the nanosheets obtained with NMP and ACN [40,41]. Figure 2b presents two very broad and intense Raman peaks (1360 and 1580 cm −1 ) of S2, which may be assigned to NMP [31,36] that was not completely removed from the surface of the MoS 2 nanosheets although it was heated at 60 °C under reduced pressure for several hours. In contrast, S1 showed no broad peaks, indicating that ACN was almost completely removed.
We next investigated the effect of grinding solvents on the morphology of the MoS 2 nanosheets. The SEM image shown in Figure 3a,b reveals the morphology of the starting MoS 2 powder as a thick layer with dimensions ranging from about 1 to 6.4 µm. The SEM images presented in Figure 3c,d clearly indicate that the lateral sizes and thicknesses of layered MoS 2 were reduced by combined grinding and sonication.
The MoS 2 nanosheets were obtained by grinding with ACN (S1), as shown in Figure 3c,d, and the nanosheets were uniform in size and well-dispersed, with the majority measuring between 0.1 and 0.5 µm. As shown in Figure 3e,f, exfoliation with NMP (S2) also produced nanosheets with good dispersion with lateral dimensions of 0.4-1.6 µm. The MoS 2 nanosheets obtained by grinding with ACN were smaller than NMP-ground MoS 2 nanosheets, which is consistent with the results reported in the literature [34] and the results of XRD patterns (Figures 1 and S2).
The gas-sensitive properties of MoS 2 nanosheets loaded on ceramic substrates were tested at room temperature. The results shown in Figure 4a,c indicate the gas-sensitive properties and response times (Figure 4b,d) of S1 and S2 at 98% relative humidity (RH) and 1000 ppm of formaldehyde (CH 2 O), acetone (C 3 H 6 O), and ethanol (C 2 H 6 O). The MoS 2 layers exfoliated with both grinding solvents showed good stability in three continuous response-recovery cycles at room temperature. Both of them completed a response-recovery cycle within 40 s and returned completely each time with almost no drift.
Figure 4. Sensing curves in the presence of different target gases of S1 and S2. (a) and (c) Gas-sensitive properties of S1 and S2 at 98% RH and 1000 ppm of CH 2 O, C 3 H 6 O, and C 2 H 6 O, respectively. (b) and (d) Response time of S1 and S2 at 98% RH and 1000 ppm of CH 2 O, C 3 H 6 O, and C 2 H 6 O, respectively.
Figure 5 shows the average response, response time, and recovery time of S1 and S2 for the target analytes.
As can be seen from Figure 5a, the sensitivities of the MoS 2 nanosheets exfoliated with NMP (S2) were 2.4, 1.4, 1.9, and 2.7 times higher than those exfoliated with ACN (S1) toward CH 2 O, C 3 H 6 O, C 2 H 6 O, and 98% RH, respectively. These results prove that the MoS 2 nanosheets obtained by grinding with NMP have higher gas-responsive properties than the MoS 2 nanosheets obtained with ACN, although NMP was not removed completely. At the same time, it can be seen from Figure 5b,c that both samples respond quickly to the four analytes: the response time did not exceed 35 s, and the recovery time did not exceed 4 s.
In order to further evaluate the real-time monitoring capability of the MoS 2 nanosheets obtained by grinding with NMP (S2), the responses of the S2-based sensor under different concentrations (50-2000 ppm) of CH 2 O vapor were evaluated (Figure 6a). The response of S2 increased with increasing CH 2 O concentration. Figure 5b shows a linear response to changing CH 2 O concentration, and the correlation coefficient R 2 was 0.99, which facilitates gas-sensing application. Figure 6a,b show that the response time and recovery time of S2 were only 18 s and 0.5 s for 50 ppm CH 2 O, respectively, and only 11 s and 0.6 s for 100 ppm CH 2 O.
In order to comprehensively evaluate the gas-sensing performance of the MoS 2 nanosheets obtained by grinding with NMP, the performances of MoS 2 -nanosheet-based sensors were compared (Table 1). As shown in Table 1, the response time and recovery time of the MoS 2 nanosheets obtained by grinding with NMP for 50 ppm CH 2 O were 18 s and 0.51 s, respectively, which were close to the shortest response time (11 s) and recovery time (8 s) shown by ZnS and In 2 O 3 /MoS 2 [42,43]. Nevertheless, compared with the operating temperature (295 °C) of ZnS, the operating temperature of the MoS 2 nanosheets was room temperature (25 °C). Therefore, the MoS 2 nanosheets exhibited a robust sensing performance at a low working temperature, with rapid response and recovery. However, the sensitivity and limit of detection (LoD) of the sensor based on pure MoS 2 nanosheets need to be improved.
The sensing mechanisms of MoS 2 nanosheets toward CH 2 O, C 3 H 6 O, C 2 H 6 O, and 98% RH have been well studied and described elsewhere [52][53][54][55]. According to these references, the MoS 2 -nanosheet-based gas sensors exhibit n-type characteristics in our work. The possible sensing mechanism is as follows: the transfer of electrons from the conduction band to chemisorbed oxygen decreases the carrier density and widens the depletion layer, thereby increasing the resistance of the MoS 2 nanosheets. At room temperature, when the MoS 2 -nanosheet-based sensor is exposed to the target gas, for example CH 2 O, the gas is adsorbed on the surface of the MoS 2 nanosheets. These chemisorbed molecules react with O 2 − (ads) to form H 2 O and CO 2 . Therefore, the trapped electrons are released back into the MoS 2 nanosheets, which increases the number of conductive channels, leading to a decrease in sensor resistance (Figure 8).
Funding: This research was funded by the National Natural Science Foundation of China (grant numbers 62061046 and 51403180) and by the Third "Tianshan Talents" Training Project of Xinjiang Uygur Autonomous Region.
Data Availability Statement:
The data presented in this study are available on request from the corresponding author.
Conflicts of Interest:
The authors have no conflict of interest to declare. | 2022-12-21T16:08:14.533Z | 2022-12-01T00:00:00.000 | {
"year": 2022,
"sha1": "875f7ca6cd137e4e0b5bb2a40c6cf6a8ed24bc22",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "ad9c6ad86aacbd5dfcc17374314f3db0e7c3def3",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": []
} |
245355402 | pes2o/s2orc | v3-fos-license | Healthy Lifestyle Factors, Cancer Family History, and Gastric Cancer Risk: A Population-Based Case-Control Study in China
Background: We aimed to explore the relationship between lifestyle factors, cancer family history, and gastric cancer risk. Methods: We examined the association between lifestyle factors, cancer family history, and gastric cancer risk based on a population-based case-control study in Taixing, China, with 870 cases and 1928 controls. A lifestyle score was constructed considering body shape, smoking, alcohol drinking, tooth brushing habit, and food storage method. Unconditional logistic regression models were used to calculate odds ratios (ORs) and 95% confidence intervals (CIs). Results: Compared with participants with a lifestyle score of 0, subjects with a lifestyle score of 1 (OR 0.59, 95%CI 0.43–0.83), 2 (OR 0.42, 95%CI 0.30–0.59), 3 (OR 0.29, 95%CI 0.20–0.41), 4 (OR 0.20, 95%CI 0.13–0.32), or 5 (OR 0.10, 95%CI 0.04–0.22) had a lower risk of gastric cancer (P for trend < 0.001). Overall, 34% of gastric cancer cases (95%CI 27–41%) could be attributed to non-adherence to ≥3 healthy lifestyle factors (i.e., adherence to ≤2 factors). A family history of early-onset cancer was closely related to the occurrence of gastric cancer, with ORs ranging from 1.77 to 3.27. Regardless of family history, a good lifestyle was associated with a reduced risk of gastric cancer, with OR values between 0.38 and 0.70. Conclusions: A family history of early-onset cancer is closely related to the occurrence of gastric cancer, and a good lifestyle is associated with a reduced risk of gastric cancer regardless of family history. Our results provide a basis for identifying high-risk groups for gastric cancer and providing them with behavioral guidance.
INTRODUCTION
Gastric cancer (GC) was the fifth most common cancer in the world in 2020 (1), and more than 40% of new cases and deaths from GC occurred in China (2). Family history is closely related to disease risk and is widely used to identify high-risk groups for many diseases, including GC (3)(4)(5). However, not all people with a family history of malignancy will develop GC, which suggests that environmental and lifestyle factors play an important role in the occurrence of GC.
A number of large-scale cohort studies and meta-analyses have examined the relationship between smoking (6), alcohol drinking (7), poor oral hygiene habits (8), BMI and body composition (9,10), diet (11)(12)(13)(14)(15), physical activity (16)(17)(18), and the risk of GC. The reduced GC risk observed in association with refrigerator use also suggests that food storage method may be one of the GC-risk-related lifestyle factors (19). Because these lifestyle factors often occur together, it is important to explore their combined effects on GC risk. We assumed that people with a healthier lifestyle have a lower GC risk than those with no or fewer healthy lifestyle factors, and that the beneficial effect increases with the number of healthy lifestyle factors. Exploring the relationship between combined lifestyle factors and GC risk among people with different types of family history of malignancy will provide information for developing behavioral guidance for GC high-risk groups.
However, there were few studies on the relationship between combined lifestyle factors and the risk of GC (20)(21)(22). In a prospective cohort study, Jin et al. found that participants with a better lifestyle had a lower risk of GC. Compared with participants with high genetic risk and poor lifestyle, participants with high genetic risk and good lifestyle have a lower risk of GC (21). Family history reflects the common genetic background and common environmental exposure, so the genetic risk represented by the polygenic risk score does not fully represent the family history. Therefore, we still need to explore the relationship between a variety of lifestyle factors and the risk of GC among people with different types of family history of malignancy.
Taixing is one of the areas with high incidence of GC in China. Since 2010, a population-based case-control study aiming at exploring the etiology of upper gastrointestinal cancers has been carried out in Taixing, China. Based on the data collected in this project, we explored the relationship between combined lifestyle factors, family history of cancer, and GC risk, in order to provide bases for the prevention and control of GC.
Study Design and Setting
We have previously reported the study design and participant recruitment process in detail (23). In short, we conducted a population-based case-control study in Taixing from October 2010 to September 2013. In order to collect the newly diagnosed GC cases, we recruited cases from the endoscopy units in the four largest local hospitals. Potential missing cases were additionally identified by comparing our case list with the records of the local Cancer Registry. The GC cases included in our study were approximately 75% of the estimated new GC cases during the study period and were a good representative of the local GC patients. The potential controls were randomly selected from the local Population Registry, which provides the basic information of people living in Taixing. Trained staff used a structured electronic questionnaire to conduct face-to-face interviews with participants in the hospitals (for cases) or community (for controls and for cases identified from the local Cancer Registry). In order to reduce potential recall bias, all GC cases were interviewed before they knew their diagnoses.
Participants
The process of inclusion and exclusion of cases and controls is shown in Figure 1. The basic inclusion criteria for participants were age 40-85 years and residence in Taixing for more than 5 years before the date of diagnosis or interview. The additional inclusion criteria for cases were: first, the diagnosis was confirmed by pathology or endoscopy; second, it had been independently verified by pathologists. The additional inclusion criteria for controls were: first, they had not been diagnosed with GC; second, they were randomly selected by frequency matching on gender and 5-year age group. Based on the response rate of the controls in the pilot study (75%), we selected controls for the upper gastrointestinal cancer cases (esophageal cancer and GC) at a ratio of 1.3:1. Because the gender and age distributions of patients with esophageal cancer and GC were similar, all qualified controls were included in this analysis. We excluded participants who were uncooperative or unable to participate in the investigation for various reasons, such as mental illness. After excluding 138 subjects with missing lifestyle or cancer family history information, this study finally included 870 GC cases and 1928 controls.
Data Collection
The structured electronic questionnaire includes information on demographics (age, sex, marriage, education, family size), family wealth score, smoking and drinking status, body shape, tooth brushing, food storage method, and family history of malignancy. We defined smoking and drinking as smoking at least one cigarette every 1-3 days for 6 months and consuming alcoholic beverages at least once a week for 6 months, respectively (24). The participants were divided into never smoking/drinking and current smoker/drinker or former smoker/drinker according to their smoking or drinking history. Due to the low prevalence of obesity in Taixing city, we revised the Stunkard's Figure Rating Scale (Supplementary Figure S1), which shows the different body shapes of males and females from extremely thin (body shape 1) to extremely fat (body shape 7 for males and body shape 9 for females). The trained staff showed and introduced the revised Stunkard Graphic Rating Scale briefly, and then asked the participants to choose the closest body type 10 years before the interview. Healthy body shapes were defined as body shape 3 and body shape 4 based on previous studies (23,25). Participants were asked how many times a day they brushed their teeth and were divided into ≥1/day and ≥2/day groups according to the frequency of tooth brushing. We collected information on food storage method by asking participants to choose the option that was closest to their food storage method 10 years before interview from six options. The six options were using airtight box inside the refrigerator, open box inside the refrigerator, airtight box outside the refrigerator, open box outside the refrigerator, plastic bag, or cloth wrap. We defined using open boxes outside the refrigerator, plastic bag, and cloth wrap as a bad food storage method, and using airtight boxes inside the refrigerator, open boxes inside the refrigerator, and airtight boxes outside the refrigerator as a good food storage method. For family history of cancer, the information we collected includes the number of siblings and children, and the cancer status of their parents, siblings, and children. For those relatives who had cancer, we further collected the information about cancer type and the age of diagnosis. A positive family history of a certain type of cancer is defined as having at least one first-degree relative suffering from the cancer. A positive family history of early onset cancer was defined as having at least one first-degree relative who was diagnosed with the cancer at or before the age of 45 (26). The status of H. pylori infection was obtained by quantitative detection of H. pylori immunoglobulin G antibody using immunoblotting assay (H. pylori IgG Antibody Detection Kit; Syno Gene Digital Technology, Taizhou, China).
Lifestyle Score
After reviewing the literature, we identified several modifiable lifestyle factors related to GC risk, such as smoking, alcohol drinking, oral hygiene habits, food storage method, BMI and body composition, diet, and physical activity. Because the food storage method already reflects the carcinogens (such as nitrites) that may be produced by food, and because physical activity variables were not collected in our study, diet and physical activity were not included; the lifestyle factors finally included in our analysis were smoking, alcohol drinking, toothbrushing, food storage method, and body shape. One point was assigned to participants for each of the following low-risk lifestyle factors: appropriate body type (shape 3/4), never smoking, never drinking, brushing teeth twice a day or more, and a good food storage method. We summed the points for the five lifestyle factors to obtain a healthy lifestyle score that ranges from 0 (least healthy) to 5 (most healthy). Because the median lifestyle score of the participants was 2, we divided the study subjects into two groups (≤2 vs. ≥3) according to their lifestyle scores.
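The scoring rule described above is simple enough to express directly. The sketch below is a hypothetical illustration of that rule; the function and field names are invented for the example and are not taken from the study's questionnaire.

```python
def lifestyle_score(body_shape, smoker, drinker, brushing_per_day, good_food_storage):
    """Healthy lifestyle score from 0 (least healthy) to 5 (most healthy).

    One point for each low-risk factor, following the rule described in the text:
    appropriate body shape (3 or 4), never smoking, never drinking,
    brushing teeth at least twice a day, and good food storage.
    """
    points = 0
    points += body_shape in (3, 4)          # appropriate body shape
    points += not smoker                    # never smoking
    points += not drinker                   # never drinking
    points += brushing_per_day >= 2         # brushes teeth >= 2 times/day
    points += bool(good_food_storage)       # airtight and/or refrigerated storage
    return points

# Example participant (made-up values):
score = lifestyle_score(body_shape=4, smoker=False, drinker=True,
                        brushing_per_day=1, good_food_storage=True)
group = "high adherence (>=3)" if score >= 3 else "low adherence (<=2)"
print(score, group)   # -> 3 high adherence (>=3)
```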
Statistical Analysis
The Pearson chi-squared test and Wilcoxon rank-sum test were used to evaluate differences in the distribution of demographic characteristics, lifestyle factors, and family history of cancer between the case and control groups. The family wealth score was calculated based on the ownership of household appliances by multiple correspondence analysis and classified according to the quintiles among controls (27). We used unconditional logistic regression models to calculate odds ratios (ORs) and 95% confidence intervals (CIs) to evaluate the association between family history of cancer, lifestyle factors, and the risk of GC. We adjusted for age (continuous) and sex in model 1 and further adjusted for education (illiteracy/primary school/primary high school or above), marriage (unmarried/married/divorced or widowed), family size (≤1 / 2-3 / >3), family wealth score (Q1-Q5), and H. pylori infection in model 2. In model 3, we further adjusted for GC family history, smoking, drinking, toothbrushing, food storage method, and body shape, where appropriate. In addition, we also calculated adjusted population attributable fractions (PAFs) and 95% CIs to estimate the proportion of cases attributable to individual lifestyle factors and to lack of adherence to a healthy lifestyle. PAFs were estimated based on the method proposed by Bruzzi (28). This method assumes that the cases and controls are random samples from the study population and that the exposure and confounding information are unbiased. In a case-control study, the number of cases is x and the number of controls is n−x. For exposure k, with two levels (exposed and unexposed), the number of cases exposed to k is x1. After adjusting for other confounding factors, the effect of k is OR. The formula for PAF is PAF = 1 − [(x − x1)/x + x1/(x × OR)] = (x1/x) × (1 − 1/OR), which is equivalent to the formula proposed by Miettinen (29).
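As an illustration of the PAF calculation described above, and of the percentile bootstrap used for its confidence interval in the next paragraph, the following sketch applies the binary-exposure formula to made-up case-control counts. The counts and the crude (unadjusted) OR are placeholders; the study itself used ORs adjusted by logistic regression.

```python
import numpy as np

rng = np.random.default_rng(0)

def paf_binary(cases_exposed, cases_total, odds_ratio):
    """PAF for a binary exposure: 1 - [(x - x1)/x + x1/(x*OR)] = (x1/x) * (1 - 1/OR)."""
    p_case = cases_exposed / cases_total
    return 1.0 - ((1.0 - p_case) + p_case / odds_ratio)

def crude_or(a, b, c, d):
    """Crude odds ratio from a 2x2 table: exposed/unexposed cases (a, b), controls (c, d)."""
    return (a * d) / (b * c)

# Hypothetical 2x2 table (exposure = non-adherence, i.e. <=2 healthy lifestyle factors).
a, b = 600, 270    # exposed cases, unexposed cases
c, d = 1100, 828   # exposed controls, unexposed controls

point = paf_binary(a, a + b, crude_or(a, b, c, d))

# Percentile bootstrap: resample cases and controls separately, with replacement.
cases = np.array([1] * a + [0] * b)       # 1 = exposed
controls = np.array([1] * c + [0] * d)
boot = []
for _ in range(1000):
    ca = rng.choice(cases, size=cases.size, replace=True)
    co = rng.choice(controls, size=controls.size, replace=True)
    a_, b_ = ca.sum(), ca.size - ca.sum()
    c_, d_ = co.sum(), co.size - co.sum()
    boot.append(paf_binary(a_, a_ + b_, crude_or(a_, b_, c_, d_)))

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"PAF = {point:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```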
The 95% CIs were estimated by the bootstrap method. We sampled 1000 bootstrap samples using replacement sampling, and estimated the PAF of each bootstrap sample. The 2.5th and 97.5th percentiles of PAFs of the bootstrap samples form a good approximation of the 95% confidence interval (30). Sensitivity analysis was conducted by excluding cases from the local Cancer Registry. All analyses were performed using Stata software (version 16.0). Two-sided P-values less than 0.05 were considered statistically significant. Table 1 presents the basic information of cases and controls. The mean age of cases was slightly higher than that of controls, with 67.8 years for GC cases and 66.1 years for controls. Compared with controls, cases had lower education level, lower family wealth score, and were more likely to have H. pylori infection. There were no significant differences between cases and controls in terms of sex, marital status and family size. Table 2 shows the association between lifestyle factors, lifestyle score and the risk of GC. We observed that most individual lifestyle factors were associated with a reduced GC risk in the multivariate analysis: appropriate body type (OR 0.73, 95%CI 0.61-0.86), never drinking (OR 0.79, 95%CI 0.65-0.97), brushing teeth twice a day or more (OR 0.44, 95%CI 0.35-0.54), and good food preservation methods (OR 0.58, 95%CI 0.47-0.72). A good lifestyle was associated with a reduced GC risk in a dose-response pattern (P for trend < 0.001). Participants with five good lifestyle factors had only one-tenth GC risk (95%CI 0.04-0.22) compared with participants without any good lifestyle factor. However, we did not observe an association between smoking and GC risk. We further explored the relationship between food storage method and the risk of GC and found that storing food in airtight containers and low temperatures were associated with reduced GC risks, with ORs of 0.53 (95%CI 0.42-0.67) and 0.70 (95%CI 0.56-0.87), respectively (Supplementary Table S1). We also found that more than 95% of females did not smoke, and more than 70% of males smoked in both cases and controls (Supplementary Table S2). Table 3 shows the PAFs according to individual and combined lifestyle factors and the risk of GC. We calculated PAFs after converting OR to > 1. The estimated PAFs attributable to nonadherence to healthy lifestyle factors were 12% (95%CI 6-16%) for body shape, 10% (95%CI 1-16%) for alcohol drinking, 47% (95%CI 38-54%) for tooth brushing, and 34% (95%CI 22-43%) for food storage method. In combination, 34% (95%CI 27-41%) of GC cases were attributable to non-adherence to healthy lifestyles (<=2 healthy lifestyle factors). The relationship between family history of cancer and the risk of GC is showed in Table 4. Compared with people with no family history of cancer, people with family history of GC and other digestive system cancers showed an increased risk of GC, with OR of 2.25 (95%CI 1.76-2.88) and 1.45 (95%CI 1.17-1.79) respectively. People with family history of early-onset GC and early-onset other digestive system cancers had even higher risk of GC, with OR of 3.27 (95%CI 1.82-5.87) and 2.03 (95%CI 1.36-3.04) respectively. We did not observe a relationship between GC risk and family history of non-digestive cancers, but we observed that people with an early-onset family history of non-digestive system cancers had an increased risk of GC, with OR of 1.77 (95%CI 1.16-2.70).
RESULT
We conducted a stratified analysis for the association between lifestyle and the risk of GC. We found that participants with more healthy lifestyle factors (lifestyle score≥3) had a lower risk of GC irrespective of family history of GC (no family history of cancer: OR 0.42, 95%CI 0.32-0.57; with family history of GC: OR 0.54, 95%CI 0.33-0.91; with family history of other digestive system cancers: OR 0.70, 95%CI 0.47-1.06; with family history of non-digestive cancers: OR 0.38, 95%CI 0.19-0.76, respectively) ( Table 5). Formal test for heterogeneity of results across strata also revealed no significant interaction between family history of cancer and lifestyle factors (P for interaction: 0.396).
We also conducted a sensitivity analysis by excluding cases identified only from the local Cancer Registry, and the results showed no substantial changes (Supplementary Tables S3-S5).
DISCUSSION
Based on a population-based case-control study conducted in Taixing, China, we explored the association between family history of cancer, combined lifestyle factors, and the risk of GC. We found that a family history of cancer was associated with an increased GC risk. The magnitude of the association was strongest for the family history of early-onset GC, followed by the family history of GC, the family history of early-onset other digestive system cancers, the family history of early-onset nondigestive system cancers, and the family history of other digestive system cancers. Regardless of the family history of cancer, a good lifestyle is associated with a lower GC risk. Our findings provide a solid foundation to guide the prevention and control of GC.
We found that a family history of GC was significantly associated with an increased risk of GC, which was consistent with the conclusions of previous studies (31,32). In addition, we also found that family history of other digestive system cancers and family history of early-onset other cancers were associated with an increased risk of GC. Family history reflects common environmental factors, common lifestyle, and common genetic background. One important risk factor associated with GC and affecting each other among family members is H. pylori infection. The H. pylori infection rate of family members of GC patients is higher than that of the general population, and the precancerous histological changes of the gastric mucosa are more serious in these people (33,34). In patients with H. pylori infection and with a family history of GC, H. pylori eradication therapy can reduce the risk of GC (35). Other risk factors can also explain part of the association between family history and increased risk of GC. Previous studies have also found an association between genetic background factors such as IL-17 polymorphisms (36) and cell proliferation-related genetic polymorphisms (37) and an increased GC risk.
Most of the interviewees in our study have a relatively low education level, and the memory of weight may not be as accurate as the memory of body shape. Body shape not only reflects the weight information, but also reflects the information of body composition (23). In order to reduce recall bias and incorporate body composition information into the analysis, our study used Stunkard body shape as a component of lifestyle variables (9,38). Consistent with previous studies, our study also found that a suitable body size is associated with a lower GC risk (10,39). Inappropriate body shapes include overweight and underweight, which are associated with high and low BMI respectively. High BMI is associated with an increased risk of gastroesophageal reflux, which is a risk factor for gastric cardia cancer (40,41). In addition, overweight may increase the incidence of cancer, including GC, through insulin resistance, abnormalities of the IGF-I system and signaling, and other pathways (42). Some previous studies also showed that low BMI may be associated with an increased risk of GC (10,23). Low BMI may be associated with malnutrition and low socioeconomic status, which have been linked with a higher risk of GC (41).
Previous studies have shown that toothbrushing (43) and abstinence from alcohol can reduce the risk of GC (44), and our study supported this conclusion. Poor oral hygiene habits may lead to chronic inflammation, which is related to the occurrence of GC (43). Acetaldehyde, the metabolite of alcohol in the body, is internationally recognized as a Group 1 carcinogen to humans. In addition, alcohol also promotes the occurrence of cancer by changing the absorption and metabolism of carcinogens (45). Previous studies have shown that smoking is a risk factor for GC (41). However, no association between smoking and the GC risk was found in our study, which may be related to the homogeneous smoking habits by sex (more than 75% in males and less than 5% in females).
Our results showed that not putting food in low temperature or airtight containers may be associated with an increased risk of GC, with a high PAF of 34%. Food refrigeration and vacuum preservation can delay the spoilage process of food by preventing or inhibiting the growth of microorganisms (46). In addition, lower food storage temperature will also slow down the accumulation of carcinogens and preserve the beneficial substances in food. Studies have shown that for infant plantbased canned foods, after 24 h of refrigeration and storage at room temperature, the nitrate content increased by an average of 7 and 13%, and after 48 h of storage, the nitrate content increased by 15 and 29%, respectively (47). Higher temperatures will accelerate the aging process of fruits and vegetables and reduce the content of beneficial substances, such as carotenoids (48). These pieces of evidence strongly support our view.
In our study, compared with the people with the unhealthiest lifestyle (0 points), the risk of GC in the people with the healthiest lifestyle (5 points) was only one tenth, and this risk reduction was more eminent than that reported in previous studies (20)(21)(22)49). The difference in the composition of lifestyle scores and the population may explain this observed heterogeneity. We found that a healthier lifestyle was associated with a lower risk of GC regardless of the family history of various cancer types, which was consistent with the finding of Jin and his colleagues (21). Compared with the polygenic risk score, which was used in Jin's study (21), family history was widely used in large-scale population screening programs for its convenience and cheapness. Our results will provide a solid foundation for the development of behavior guidance for GC high-risk groups in practical work. There were some limitations in our study. First, all the information was collected through questionnaire interviews, and the recall bias was unavoidable. For majority of cases, we conducted interviews before cases knew their diagnoses, which might reduce the recall bias to a certain extent. In addition, we also conducted sensitivity analysis after excluding cases from the local Cancer Registry (interviews were conducted after they knew their diagnoses), and the results remained almost unchanged. This alleviated the concern of recall bias to some extent. Second, based on the case-control study design, the causal relationship between exposure and disease cannot be determined. Third, because the food storage method reflected the carcinogens that might be produced by the food (such as nitrites), and physical activity variables have not been collected in our study, our study did not include these two variables. Fourth, there may be potential biases in the selection of cases and controls. However, in our study, the response rates of both the case and the control were high, and there was no statistical difference in age and gender distribution between nonresponders and responders, which can dispel this doubt to a certain extent.
In summary, based on this population-based case-control study conducted in Taixing, we confirmed the close relationship between the family history of cancer, especially the family history of early-onset cancer and the risk of GC. We also found that regardless of cancer family history, a good lifestyle was associated with reduced GC risk. The results of our study provide a solid foundation for the development of the behavior guidance for GC high-risk groups.
DATA AVAILABILITY STATEMENT
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by the
AUTHOR CONTRIBUTIONS
Study supervision was performed by ML. The first draft of the manuscript was written by JM and YN. All authors contributed to the study conception and design, acquisition of data, analysis and interpretation of data, critical revision of the manuscript for important intellectual content, read, approved the final manuscript, and commented on previous versions of the manuscript. | 2021-12-22T14:21:05.263Z | 2021-12-22T00:00:00.000 | {
"year": 2021,
"sha1": "fac8563be83f2f3b888b4082bacb71d2610f9520",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "fac8563be83f2f3b888b4082bacb71d2610f9520",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
259322854 | pes2o/s2orc | v3-fos-license | Cushing disease in pediatrics: an update
Cushing disease (CD) is the main cause of endogenous Cushing syndrome (CS) and is produced by an adrenocorticotropic hormone (ACTH)-producing pituitary adenoma. Its relevance in pediatrics is due to the retardation of both growth and developmental processes because of hypercortisolism. In childhood, the main features of CS are facial changes, rapid or exaggerated weight gain, hirsutism, virilization, and acne. Endogenous hypercortisolism should be established after exogenous CS has been ruled out based on 24-hour urinary free cortisol, midnight serum or salivary cortisol, and dexamethasone suppression test; after that, ACTH dependence should be established. The diagnosis should be confirmed by pathology. The goal of treatment is to normalize cortisol level and reverse the signs and symptoms. Treatment options include surgery, medication, radiotherapy, or combined therapy. CD represents a challenge for physicians owing to its multiple associated conditions involving growth and pubertal development; thus, it is important to achieve an early diagnosis and treatment in order to control hypercortisolism and improve the prognosis. Its rarity in pediatric patients has led physicians to have limited experience in its management. The objective of this narrative review is to summarize the current knowledge about the pathophysiology, diagnosis, and treatment of CD in the pediatric population.
Introduction
Cushing syndrome (CS) is an endocrine disorder that occurs due to prolonged exposure to excess glucocorticoids and can be acquired exogenously or endogenously. 1,2) The overall incidence of endogenous CS is 0.7-2.4 per million people per year, and only 10% of new cases occur in children each year. 1) Depending on its origin, it can be dependent on or independent of adrenocorticotropic hormone (ACTH). 3,4) Cushing disease (CD), which is caused by an ACTH-producing pituitary adenoma, is responsible for 75%-80% of cases of endogenous CS 5) and is more common in children older than 5 years and more frequent in boys, with a prevalence of 63%. 6,7) Its rarity in childhood and adolescence (annual incidence of 0.89-1 per million pediatric patients) 8) has led pediatricians and pediatric endocrinologists to have limited experience in its diagnosis and treatment. 3,9) Growth and development are fundamental processes during childhood. Since hypercortisolism can negatively affect both, its timely diagnosis and treatment are crucial to improve the prognosis. 10) Here, we present an updated narrative review of the pathophysiology, diagnosis, and treatment of CD in the pediatric population. The appropriateness of this review article was assessed using the SANRA scale. 11)
Pathophysiology
The hypothalamus produces corticotropin-releasing hormone (CRH), which is transported by the portal system to the anterior pituitary, where it stimulates the release of ACTH, which in turn regulates cortisol secretion by the adrenal cortex. Cortisol exerts negative feedback by inhibiting the secretion of CRH and ACTH. This is called the hypothalamic-pituitaryadrenal (HPA) axis. 12) In CS, normal regulation of the HPA axis is impaired because of the loss of negative feedback caused by excessive secretion of ACTH or cortisol (Fig. 1). 13) Subtypes depend on the cause of the increase in cortisol secretion; thus, subtypes can be ACTH-dependent or ACTH-independent. In ACTH-dependent CS, the cause of increased cortisol secretion is hyperstimulation of the adrenal cortex by excess ACTH, which might originate from an ACTH-producing pituitary adenoma (CD) or from ectopic, nonpituitary ACTH secretion (ectopic ACTH syndrome), the latter usually secondary to a neuroendocrine tumor, such as small cell lung carcinoma or carcinoid lung tumors. 1,[12][13][14] Regarding ACTH-independent CS, a benign adrenocortical adenoma, carcinoma, or rare forms of bilateral adrenal disease leads to excess cortisol secretion by the adrenal glands without ACTH stimulation. Exogenous CS represents the predominant etiology of CS in pediatric patients, which usually involves treatment with oral glucocorticoids in supraphysiological doses; however, it is also associated with intra-articular, intrathecal, epidural, inhaled, topical, or ocular administration route. 1,10,12,13) Several genetic mutations are responsible for syndromes associated with pituitary adenomas, including MEN1, which causes multiple endocrine neoplasia type 1; cyclin-dependent kinase inhibitor 1B; Guanine nucleotide-binding protein, α stimulating, which causes McCune-Albright syndrome; Aryl hydrocarbon receptor-interacting protein, which causes familial isolated pituitary adenoma; and ubiquitin-specific protease 8 (USP8), which is the most common mutation found in patients with CD and prevents lysosomal degradation of the epidermal growth factor receptor, increasing its deubiquitination, proopiomelanocortin transcription, and ACTH secretion. 6) In pediatric CD patients, the USP8 gene is mutated in 31%-63% of corticotroph adenomas, and these patients are more likely to experience recurrence after surgery. 6) However, the major driver mutations in USP8 wild-type tumors remain unknown.
There are other genetic mutations associated with CD. The RASD1 gene mutation can contribute to cell proliferation and ACTH secretion in a small subpopulation of cells. 15) TP53-inactivating mutations have been found in 3 CD cases. 15) NR3C1 mutations make the corticotroph cells unresponsive to negative feedback from the adrenal gland and resistant to the effects of glucocorticoids. 15) Activating mutations in the RET gene are associated with multiple endocrine neoplasia type 2, with pituitary adenomas having been reported in rare cases. 15,16) The "Three P Association" (3PAs) is the association of a pituitary adenoma with a pheochromocytoma or paraganglioma and could represent variants of multiple endocrine neoplasia or CD. 15) Loss-of-function CABLES1 missense mutations have been identified in 4 CD patients. 15) Loss-of-function mutations in the DICER1 gene are associated with various tumors, including pituitary blastoma, which presents clinically as CD early in infancy. 15,16)
Clinical features
Early recognition of clinical signs and symptoms is essential for timely diagnosis and treatment. 3) The onset of the disease is usually insidious, delaying the diagnosis significantly. The mean duration of symptoms until diagnosis has been reported between 1.7 and 2.5 years. 3,5) Facial changes described as "Cushingoid facies" are very common in children and can be detected by comparison with previous photographs. 3,17) Growth retardation is another manifestation of chronic hypercortisolism and is usually associated with rapid or exaggerated weight gain and delayed bone age. 5) Increased adrenal androgens lead to abnormal virilization, such as acceleration of pubic hair and genital growth in boys with prepubertal testicular volumes or early pubic hair growth with prepubertal breast development in girls. Hirsutism, acne, and purple striae are also common in adolescents. 3,5,17) In children, the most suggestive features of CS are facial changes, rapid or exaggerated weight gain, hirsutism, virilization, and acne (Fig. 2). Mood disturbances, muscle weakness, osteopenia, and headache are less frequent symptoms. 18)
Diagnosis
During diagnosis, ingestion or administration of exogenous corticosteroids should be excluded (oral, nasal, inhalation, topical, intramuscular, etc.) since exogenous CS is clearly more common than the endogenous form. At the initial assessment, the stage of sexual development based on the Tanner scale, bone age, and the presence of abnormal virilization should be determined. 3) Then, hypercortisolism should be documented by at least one of the following (Table 1) 7,[19][20][21][22] : -24-hour urinary free cortisol (UFC): There is an intrapatient variability of approximately 50%; therefore, 3 consecutive samples (i.e., over 3 days) are recommended. In addition, a physiological increase in UFC excretion may occur in girls during the perimenarche phase, and false-negative results can occur in cases of kidney failure with creatinine clearance <60 mL/min. 1,3,8) The diagnostic cutoff is 24-hour UFC excretion > 70 μg/m 2 (193 nmol/24 hr). 19) -Midnight serum or salivary cortisol: Midnight serum cortisol should be collected with an indwelling IV, and a value greater than 4.4 μg/dL has high sensitivity and specificity for CS. 1,17) Late-night salivary cortisol has been shown to have superior diagnostic performance compared to UFC. However, there is great inter-laboratory variability. 1) Normal salivary cortisol level is <1.45 ng/mL (sensitivity 100%, specificity 95%). 23) -Dexamethasone suppression test (DST): Characterized by failure of serum cortisol to decrease to <1.8 μg/dL. 3) This measurement is carried out by 2 techniques: (1) Overnight DST: Oral administration of 25 μg/kg (maximum dose 1 mg) dexamethasone at 23:00 PM or midnight, after which a serum cortisol sample is collected at 9:00 AM the next morning.
(2) Low-dose DST: Oral administration of 20-30 μg/kg/day The next step is distinguishing ACTH-dependent from ACTH-independent syndrome ( Table 2) 7,18,19,21,22,24) once the diagnosis of CS is confirmed: -Basal ACTH: The ACTH cutoff points to establish whether CS is ACTH-dependent or -independent vary depending on the researcher. In this manuscript, our proposal is that a morning plasma ACTH >29 pg/mL is indicative of CD, whereas ACTH <5 pg/mL confirms primary adrenal disease. 8,21) -CRH or desmopressin stimulation test: Intravenous administration of 1-μg/kg CRH (maximum dose 100 μg) is recommended. An exaggerated (>20% from baseline) response of serum cortisol (at 30 and 45 minutes) and an at least 35% increase over basal values of ACTH (at 15 and 30 minutes) supported the diagnosis of pituitary-dependent hypercortisolism. Desmopressin testing can be used in patients with extremely difficult venous access or when CRH is not available. 3) -High-dose DST: The standard protocol either involves a single dose of 80-120 μg/kg (maximum dose 8 mg) dexamethasone administered orally at 23:00 PM or dexamethasone administration divided in 4 doses of 2 mg each for 48 hours. A decrease ≥20% in morning serum cortisol from baseline distinguishes CD from ectopic causes of ACTH production, with a sensitivity of 97.5% and specificity of 100%. 1,3) -Pituitary magnetic resonance imaging (MRI): Pediatric CD is mainly associated with corticotrophic microadenoma, usually <6 mm in diameter, generally hypodense on MRI, and frequently nonenhanced by gadolinium contrast. Therefore, it is necessary to perform thin-slice high-resolution MRI at CD specialist tertiary referral centers. Nevertheless, MRI detects CD pituitary adenomas in only 16%-71% of cases. 8) -Bilateral inferior petrosal sinus sampling (BIPSS) is generally reserved for pediatric patients who have confirmed ACTHdependent CS and a negative pituitary MRI. Pituitary ACTH secretion is supported if there is a central-to-peripheral ACTH ratio >2 in basal conditions and >3 post-CRH stimulation, and samples should be obtained at 3, 6, and 10 minutes after administration of CRH or desmopressin (10 µg IV). 3,4) The sensitivity of BIPSS in pediatric patients is lower than in adults but increases after desmopressin stimulation, proving that desmopressin is an efficient alternative to the CRH test.4) Lateralization of the adenoma to one side of the pituitary gland can be predicted if the interpetrosal sinus gradient is ≥1.4, guiding the surgeon during transsphenoidal surgery (TSS). 3,4) The diagnosis of CD is confirmed by the presence of an ACTH-secreting pituitary adenoma, demonstrated by a pathology study. 25) A diagnostic algorithm for CD is presented in Fig. 3.
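The first- and second-step cutoffs quoted above lend themselves to a simple screening summary. The sketch below merely encodes the numerical thresholds mentioned in this section (24-hour UFC > 70 μg/m², midnight serum cortisol > 4.4 μg/dL, late-night salivary cortisol ≥ 1.45 ng/mL, post-DST cortisol ≥ 1.8 μg/dL, and morning ACTH > 29 vs. < 5 pg/mL); it is an illustrative summary of the text, not a clinical decision tool, and any missing value is simply skipped.

```python
def hypercortisolism_flags(ufc_ug_m2=None, midnight_cortisol_ug_dl=None,
                           salivary_cortisol_ng_ml=None, post_dst_cortisol_ug_dl=None):
    """Return which first-step tests are abnormal, using the cutoffs quoted in the text."""
    flags = {}
    if ufc_ug_m2 is not None:
        flags["24h UFC > 70 ug/m2"] = ufc_ug_m2 > 70
    if midnight_cortisol_ug_dl is not None:
        flags["midnight serum cortisol > 4.4 ug/dL"] = midnight_cortisol_ug_dl > 4.4
    if salivary_cortisol_ng_ml is not None:
        flags["late-night salivary cortisol >= 1.45 ng/mL"] = salivary_cortisol_ng_ml >= 1.45
    if post_dst_cortisol_ug_dl is not None:
        flags["post-DST cortisol >= 1.8 ug/dL"] = post_dst_cortisol_ug_dl >= 1.8
    return flags

def acth_dependence(acth_pg_ml):
    """Second step: classify ACTH dependence from morning plasma ACTH."""
    if acth_pg_ml > 29:
        return "ACTH-dependent (consistent with CD; confirm with CRH/HDDST, MRI, +/- BIPSS)"
    if acth_pg_ml < 5:
        return "ACTH-independent (primary adrenal disease)"
    return "indeterminate (further dynamic testing required)"

# Hypothetical patient values:
print(hypercortisolism_flags(ufc_ug_m2=210, midnight_cortisol_ug_dl=6.1, post_dst_cortisol_ug_dl=5.0))
print(acth_dependence(48))
```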
Treatment
The goal of treatment is normalization of cortisol level and reversal of the signs and symptoms previously described. 26) Additionally, adequate growth and development must be preserved. 27) Different treatment options have been described for CD, and the choice depends on etiology, age, form of presentation, and availability of resources. 28) Management includes medication, surgical approaches, radiotherapy, or combination therapy. 5) Despite the advent of new treatments, achieving remission is still a challenge. Two approaches to transsphenoidal surgery (TSS) are available: microscopic and endoscopic, both allowing adequate visualization of the intrasellar content. [29][30][31] Microscopic TSS has been performed for a long time. TSS alone achieved a remission rate of 76%; in combination with adrenalectomy and/or radiotherapy, remission reached 91% in a series of 33 pediatric patients. 31) Early remission was achieved in 94.8% of patients in another series of 96 pediatric patients with CD who underwent microscopic TSS; however, 21.9% of cases recurred over time. 32) In addition, young age, high preoperative cortisol level, large tumor size, and cavernous sinus extension were positively correlated with recurrence. 30) Endoscopic TSS has been used for more than 10 years; however, there is little described experience in the pediatric population. One of the first studies was performed in 27 children with pituitary adenomas, in whom total tumor resection was achieved in 81.5% of patients, and no postoperative complications, such as mortality, neurological morbidity, or late rhinological problems, were reported. 34) Clinical and biochemical recovery was achieved in 83% of patients in a study that included pediatric patients with CD, and there was no tumor recurrence after an average of 4.7 years. 35) Several studies have compared the 2 techniques. Remission of hypercortisolism was achieved in 88.23% of those who underwent endoscopic TSS compared to 56.6% of those who underwent microscopic TSS in a retrospective analysis of 104 patients. 36) In a recent systematic review and meta-analysis including 6,695 patients, 80% achieved remission of hypercortisolism, with no difference between techniques. However, the percentage of macroadenoma patients in remission was higher and the percentage of recurrence was lower after endoscopic TSS. 37) In the same study, cerebrospinal fluid leak was more frequent with endoscopic TSS, whereas transient diabetes insipidus was reported less often. However, these studies included pediatric and adult patients, and they did not differentiate the results based on age.
Bilateral adrenalectomy is becoming less common as a primary therapy. It is indicated when CD persists despite pituitary surgery or when rapid normalization of hypercortisolism is desired. 38,39) The reported complications are adrenal insufficiency (AI) and Nelson syndrome, which may develop in up to 25% of the pediatric population after follow-up based on studies from more than 30 years ago. 40,41) Currently, there are no efficacy data due to the very limited use of this procedure.
Radiotherapy
Pituitary radiotherapy (RT) was formerly used as a first-line therapy. Currently, it is indicated only as a second-line therapy, in CD cases where surgery is not feasible or not possible. 26,42) It is also performed in patients with tumors exhibiting progressive growth and/or local invasion to the cavernous sinus, patients with Nelson syndrome, or prophylactically just after bilateral adrenalectomy to avoid tumor growth. 38,43) RT delivery techniques vary depending on whether the delivery is stereotactic or not and whether the radiation is administered in small doses over several sessions or in a single dose. Radiation doses vary from 8 Gy to 45-50 Gy, delivered based on the size of the lesion, number of sessions, and type of RT. 44) Storr et al. 45) reported their experience with pituitary RT (45 Gy in 25 fractions) in 7 pediatric CD patients following unsuccessful TSS. RT was 100% effective, with no recurrences over time. Growth hormone deficit was reported as the main complication. Another similar study included 8 patients who underwent the same second-line pituitary RT regimen, and 4 patients (50%) were cured in a minimum follow-up of 2 years; 5 patients developed hypogonadism. In addition, all patients failed to reach their target height at the time of the last follow-up. 46)
Medical treatment
If a tumor is identified, surgery is the first-line treatment, as discussed above. However, if surgery is not possible, pharmacological therapy is indicated.
Ketoconazole is an imidazole derivative primarily used in fungal infections. 47) Its action in steroidogenesis suppression (CYP17 and CYP11B1 enzyme inhibition) at the adrenal and gonadal glands has been described for almost 4 decades. 48,49) The safety and efficacy of ketoconazole have not been established for pediatric patients younger than 12 years of age; thus, the main regulatory agencies, such as the U.S. Food and Drug Administration (FDA) and European Medicines Agency (EMA), have not proposed recommendations regarding its use and dosage in this population. 50,51) In case reports describing doses from 200 to 800 mg/day, serum and urinary cortisol normalization was achieved, as well as improvement of Cushingoid signs and growth velocity during follow-up. 52,53) Hepatotoxicity is the main safety concern and can occur in 0.5% to 4.2% of children and adolescents. Therefore, hepatic enzyme testing after 3 or 4 weeks of treatment is suggested. 54) Ketoconazole dosage reduction or withdrawal is required if hepatotoxicity occurs, and hepatic enzyme levels are expected to return to normal in an average of 3 weeks. 54) Metyrapone is a pyridine derivative with an antisteroidogenic mechanism that results from enzymatic blockade of 11β-hydroxylase and 17α-hydroxylase, decreasing the production of glucocorticoids and mineralocorticoids. 55) In pediatrics, metyrapone is approved for the diagnosis of AI and for the treatment of endogenous hypercortisolism; however, the recommended therapeutic dose has not been established. Some independent studies have used doses from 720 to 1,500 mg/day. [56][57][58] The most frequent adverse effects are dizziness and mild gastrointestinal symptoms, usually within 2 weeks of treatment onset, and they typically resolve after drug discontinuation. 59) The safety and efficacy of this drug have been demonstrated in an 8-month-old infant with ectopic CS 56); however, there are no reported cases in patients with CD.
Mitotane is a bimodal agent since it has an adrenolytic effect due to the cytotoxicity induced by mitochondrial damage. It also inhibits several enzymes, such as 11β-hydroxylase, 18-hydroxylase, and 3β-hydroxysteroid dehydrogenase, leading to a decrease in steroidogenesis. 47) Thus, it is not only restricted to hypercortisolism treatment, but is also used in adrenal carcinoma as a chemotherapeutic agent. 60,61) Growth velocity and pubertal development were restored in 9 pediatric patients with CD receiving mitotane at an initial dose of 1 g/ day, regardless of hypercortisolism resolution. 62) The most frequent adverse effects are gastrointestinal issues. Additionally, since mitotane is also considered an endocrine disruptor, hypothyroidism and gynecomastia have been described. 60) Mifepristone is a glucocorticoid receptor blocker that reduces the peripheral effects of hypercortisolism; therefore, it does not decrease serum cortisol level. 63) Furthermore, serum cortisol level may increase and, if excessive, can induce a high mineralocorticoid response, which produces hypertension, hypokalemia, and peripheral edema. 64) Mifepristone efficacy is measured according to clinical features instead of laboratory improvement due to its mechanism of action. Although the recommendation for the general population is a maximum dose of 20 mg/kg/day, a standardized dose has not been established in pediatrics. 63) Its safety and efficacy were demonstrated in a 14-year-old adolescent with ectopic CS, 65) but there are no reported cases in patients with CD.
Osilodrostat is a steroidogenesis inhibitor that was recently approved by the FDA in 2020 for adult patients in whom pituitary surgery is not possible and has been recommended by the EMA for treatment of endogenous CS of any cause. 66,67) There are no established recommendations for pediatric patients to date. Osilodrostat inhibits aldosterone synthase and adrenal 11β-hydroxylase enzymes, decreasing the production of both glucocorticoids and mineralocorticoids. 68) The main adverse effects are nausea, headache, and fatigue. 69) A multicenter clinical trial including 12 patients under 18 years old with CD is currently in phase 2. In the study, the pharmacokinetics, pharmacodynamics, and tolerability of osilodrostat are being evaluated. 70) The use of somatostatin analogs such as pasireotide or a dopamine agonist such as cabergolide has been described as a valid option in conjunction with TSS, RT, or in cases where there is a postsurgical remnant. 1,71) Some case reports of adolescents with CD (12-, 15-, and 17-year-old patients) achieved successful outcomes after the use of somatostatin analogs [72][73][74] ; however, there is no strong evidence of the use of these drugs in pediatric patients.
Prognosis and follow-up
There is no consensus on CD remission criteria. 75) Postoperative morning serum cortisol is the most important marker in defining remission. Two or more measurements within 2 weeks after surgery (on the 5th and 14th postoperative day) are recommended. 76,77) A cortisol value <1.8 μg/dL is highly indicative of remission in the pediatric population. 78,79) There are insufficient data regarding the predictive value of postsurgical plasma ACTH, UFC, or low-dose DST. 75) After successful surgery for CD, the rapid onset of AI usually indicates a good prognosis. 80) However, rapid reduction in cortisol exposure often results in glucocorticoid withdrawal syndrome (GWS). 81) To date, there is no scientific literature on GWS in pediatric patients with CD. GWS occurs following withdrawal of supraphysiologic exposure to either endogenous or exogenous glucocorticoids over a duration of several months. 82) The clinical manifestations are indicative of cortisol deficiency, as evidenced by HPA axis suppression, even with supraphysiologic glucocorticoid replacement therapy. GWS symptoms typically occur 3-10 days postoperatively. 81) After CD surgery, once remission is achieved, exogenous glucocorticoid replacement should be initiated and maintained during the months required for HPA axis recovery. 83) The most important measure is periodic assessment of the HPA axis until its recovery. 84) Recurrence of CS was variable in case series, ranging from 7% to 25% at 5-, 10-, and 15-year postsurgical follow-up. 85,86) The following prognostic factors of pediatric CD recurrence have been described: older age at onset of symptoms, petrosal or dural sinus invasion, unsuccessful adenoma localization during surgery, larger tumor diameter, corticotroph adenomas with USP8 gene mutation, and corticosteroid replacement for less than 6 months after surgery. 25,75,79) Posttreatment challenges include optimizing pubertal growth and development, normalizing body composition, and promoting mental health and cognitive maturation. 87) Clinical recovery is slower and/or incomplete, and it is not rare to find persisting medical problems such as hypopituitarism, loss of final height, obesity, hypertension, osteoporosis/osteopenia, and cognitive impairment despite successful treatment of hypercortisolism (biochemical recovery). 88) In a prospective study (up to 7-year follow-up) of 13 children and adolescents with CD who were successfully treated, a -0.8 standard deviation (SD) loss of final adult height and a 0.9 SD increase in weight and body mass index were found despite no recurrence of CS. 89) Bone mineral density was more severely affected in the vertebra and was partially recovered after definitive treatment in another study of 35 children with CD. 90) Abdominal obesity and insulin resistance may persist after the treatment of CD. 91) Hypertension, diastolic blood pressure, and mean arterial pressure usually normalize within 3 months after surgery, whereas systolic blood pressure may take up to 1 year to normalize. 92) However, hypertension may persist in up to 20% of patients after cortisol normalization. 93) CD has been associated with multiple psychological and psychiatric disturbances, with emotional lability, depression, and anxiety being the most common. 1,3) These disturbances may persist after remission of hypercortisolism and even after HPA axis recovery. 94)
Table 3 lists the main long-term consequences of prolonged exposure to corticosteroids, as well as treatment-related complications.
Late diagnosis is associated with significant increase in morbidity and mortality (2.5%) 95,96) ; for this reason, early recognition of CD in children is essential.
Conclusions
CD is rare in the pediatric population and represents a challenge for physicians owing to its multiple associated conditions involving growth and pubertal development, body composition, cardiovascular status, bone mineral density, and psychological health; thus, it is important to achieve an early diagnosis, timely treatment, and to follow a multidisciplinary approach to control hypercortisolism and improve the prognosis and quality of life in pediatric patients.
Conflicts of interest:
No potential conflict of interest relevant to this article was reported.
Funding: This study received no specific grant from any funding agency in the public, commercial, or not-for-profit sectors. Author | 2023-07-05T06:17:08.483Z | 2023-06-01T00:00:00.000 | {
"year": 2023,
"sha1": "298882bfd81d1f3fa6c3a3ca07e32895d09f6a10",
"oa_license": "CCBYNC",
"oa_url": "https://e-apem.org/upload/pdf/apem-2346074-037.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "4750bf7a8a032ad54f5dd0b6979141361983751e",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
244039053 | pes2o/s2orc | v3-fos-license | Linguistic emergence from a networks approach: The case of modern Chinese two-character words
The models of linguistic networks and their analytical tools constitute a potential methodology for investigating the formation of structural patterns in actual language use. Research with this methodology has just started, which can hopefully shed light on the emergent nature of linguistic structure. This study attempts to employ linguistic networks to investigate the formation of modern Chinese two-character words (as structural units based on the chunking of their component characters) in the actual use of modern Chinese, which manifests itself as continuous streams of Chinese characters. Network models were constructed based on authentic Chinese language data, with Chinese characters as nodes, their co-occurrence relations as directed links, and the co-occurrence frequencies as link weights. Quantitative analysis of the network models has shown that a Chinese two-character word can highlight itself as a two-node island, i.e., a cohesive sub-network with its two component characters co-occurring more frequently than they co-occur with the other characters. This highlighting mechanism may play a vital role in the formation and acquisition of two-character words in actual language use. Moreover, this mechanism may also throw some light on the emergence of other structural phenomena (with the chunking of specific linguistic units as their basis).
Introduction
The actual use of language has attracted increasing research interest in recent decades, especially regarding its role in the formation, acquisition, evolution, etc., of linguistic structure. Viewed from a usage-based perspective, linguistic structure emerges from language experience based on exemplars of language use to which language users are exposed [1,2].
Although the structural patterns generally lack natural boundaries in actual language use, they can establish themselves and can be extracted by language users by standing out through particular relational patterns (e.g., strong associations between their component units). This has been supported, directly or indirectly, by various studies [3][4][5], which suggest that the quantitative patterns of actual language use may shed light on the emergent nature of linguistic structure. In order for the quantitative patterns of actual language use to shed light on linguistic emergence, two prerequisites need to be considered, namely, appropriate modeling and quantitative measures of actual language use.
The notion of networks often finds its way into the modeling of language experience. Hudson [6], for instance, held that language is represented as part of the conceptual network of human knowledge and employed the notion of networks in structural analysis at various language levels. Bybee [7] argued that language experience can be conceived of as networks composed of linguistic units (e.g., words) and their relations in actual language use, such as associations (e.g., phonetic and semantic similarities) and co-occurrence relations. Furthermore, Bybee [7] also sought a network-based explanation of how particular structural patterns (e.g., idioms and morphological patterns) emerge from language experience. The notion of networks constitutes a relational approach to language in that anything (be it a linguistic unit, a pattern of use, or a linguistic feature) is considered against a larger context of linguistic units and their relations. This marks an intrinsic advantage of networks thinking in usage-based inquiries into linguistic structure, for both linguistic structure and the context of actual language use manifest themselves as linguistic units and their relations.
Linguistic structure is held to emerge from the co-occurrence relations of linguistic units in actual language use [2,7]. Linguists have available to them a wide range of measures for co-occurrence strength [8], with varying computational costs. Simple measures (e.g., frequency and probability) are adopted from time to time in usage-based linguistics [7], largely due to their close bearing on language experience and thus the mechanisms of linguistic emergence. However, inspecting these simple measures alone might still be insufficient for a systematic understanding of the formation of structural patterns in actual language use. For instance, a high co-occurrence frequency of two units does not necessarily indicate a meaningful unit, and vice versa. However, the advantages of simple measures are worth considering. The specific roles of these measures in linguistic emergence might be further understood with appropriate models and analytical tools of actual language use. A networks approach may contribute to this understanding by taking into account the larger relational background of any co-occurrence relation and its strength.
To recap, network models plus simple measures for co-occurrence strength (such as frequency) may constitute an excellent approach to the emergence of linguistic structure. However, such networks thinking needs to be converted into substantial studies.
Rooted in a multi-disciplinary background, the models and quantitative tools of linguistic networks [9,10] constitute an operational methodology for networks thinking in linguistics. The basic form of a linguistic network model N can be generally represented as N = (V, E), with V being the set of nodes (vertices) and E the set of links (edges). Various language levels as language sub-systems can be modeled and analyzed quantitatively as linguistic networks, with the corresponding linguistic units (e.g., words) as nodes and their relations (e.g., structural and semantic relations between the words) of a particular type as links. The models can also include extra information concerning the features of linguistic units and their relations (such as co-occurrence frequencies of linguistic units) if necessary. Structural analysis of linguistic networks is supported by a wide range of quantitative tools, which rely on relational data [11] in that they always focus on the relations between linguistic units and features. These tools can hopefully shed light on structural patterns (as relational phenomena) of language at different scales of granularity.
Earlier inquiries into linguistic networks have helped to appreciate the system-level complexity of particular language sub-systems [12,13]. It has been shown later that such an understanding of the system-level complexity of language can also shed light on language development [14], special education [15], language typology [16], and so forth. In practical terms, such an understanding can contribute to various NLP tasks [17][18][19]. More recent studies have extended the scope of research. New ways of modeling have been adopted. For instance, the analysis of linguistic networks of different sub-systems of the same language has deepened the understanding of language as a multi-level system [20,21]. The use of multilayer sentence networks (distinguishing links between sentences from the same document and those connecting sentences from different documents) has facilitated NLP undertakings such as multi-document summarization [22]. Moreover, increasing interest has been devoted to the rather microscopic features of linguistic networks, which are examined with measures including those concerning node centrality [22][23][24].
Inquiries into the less macroscopic features (e.g., the various cohesive sub-networks) of linguistic networks can hopefully shed light on how structural linguistic units are formed by their component units, which in turn may contribute to a usage-based understanding of linguistic emergentism. This line of research has just started. For instance, Goh et al. [25] have shown that the motifs (cohesive sub-networks) in word co-occurrence networks based on authentic English texts function as network shortcuts, and the motif densities are drastically lowered when the texts are shuffled. More importantly, their results have revealed a link between these network motifs and linguistic constructions, which points to the mechanism of these constructions' emergence from the patterns of actual language use.
The formation of Chinese words (as structural units based on the chunking of their component characters) in actual language use constitutes an interesting case study in this line of research. This study explores the formation of Chinese two-character words (henceforth, CTCWs) in the actual use of modern Chinese, which manifests itself as continuous streams of Chinese characters and which was modeled as character co-occurrence networks. The network models consist of Chinese characters as nodes, their co-occurrence relations as directed links, and the co-occurrence frequencies as link weights. Quantitative analysis of the network models has shown that a Chinese two-character word can highlight itself as a two-node island, i.e., a cohesive sub-network with its two component characters co-occurring more frequently than they co-occur with the other characters. This highlighting mechanism may play a vital role in the formation and acquisition of CTCWs in actual language use. Moreover, this mechanism may also throw some light on the emergence of other structural phenomena (with the chunking of specific linguistic units as their basis).
Chinese words as structural units
Unlike other languages such as English, Chinese words lack natural boundaries in actual language use. Morphological markers such as affixes, which can serve as word boundaries, are generally non-existent in Chinese [26: 46]. Moreover, even in written Chinese, the words are not explicitly separated. In other words, they are not separated by spaces as in the case of English (see Example 1 for illustration). The actual use of Chinese language manifests itself as continuous streams of characters, which are generally understood as morphemes of the language [27], the elements of word formation. Each character is a linguistic unit corresponding to one syllable, one orthographic symbol, and usually a particular amount of meaning. To recap, spoken Chinese and written Chinese can be seen as continuous streams of syllables and orthographic symbols, respectively. Each Chinese word is a structural unit, consisting of one, two, or sometimes even more characters chunked together. CTCWs constitute a dominant portion of modern Chinese vocabulary.
Example 1 below illustrates what the actual use of modern Chinese looks like, especially the absence of spaces between words. It is an excerpt from the Chinese version of Bloomfield's Language [28: 21], which is followed by its English original.
Example 1
至于语言变化问题, 我们已有足够的事实可以证明, 所有的语言都同样有变化过程, 而且 都倾向于同一方向。甚至很特殊的变化类型, 在差别最大语言里也可以发生非常相同的 变化, 只不过是独立地进行而已。 As to change in language, we have enough data to show that the general processes of change are the same in all languages and tend in the same direction. Even very specific types of change occur in much the same way, but independently, in the most diverse languages.
Example 1 shows a stream of 85 Chinese characters (tokens), separated only by 7 punctuation marks. Example 2 below shows one clause in Example 1 with all the words segmented by spaces (1 st line), its phonetic transcription using the official romanization system for Standard Chinese (2 nd line), word-by-word gloss in English (3 rd line, REL = relative marker), and the English original (4 th line).
Example 2
所有 的 语言 都 同样 有 变化 过程
suǒyǒu de yǔyán dōu tóngyàng yǒu biànhuà guòchéng
all REL language all same have change process
[T]he general processes of change are the same in all languages. . .

In actual language use, any two co-occurring (i.e., adjacent) Chinese characters may fall into either of two cases: (1) they are within the same word (e.g., any two characters which are not separated by a space in Example 2, such as 语 (yǔ) and 言 (yán)); (2) they constitute the boundary between two adjacent words (e.g., any two characters separated by a space in Example 2, such as 的 (de) and 语 (yǔ)), i.e., the two characters belong to two different words but happen to be adjacent.
CTCWs constitute an interesting subject of inquiries into the formation of structural patterns from a usage-based perspective. Although it has proved possible to segment Chinese words largely on the basis of statistical features of the character streams [29,30], the statistics adopted by these engineering attempts generally involve complicated computation, which may be rather removed from how the patterns of actual language use affect language experience. In this study, priority is given to rather simple measures of actual language use (especially frequency), which are examined from a networks approach.
A networks approach to frequency
Frequency is held to play a fundamental role in the formation, evolution, learning, and mental representation of structural patterns at various language levels [5,31]. For instance, the frequencies of linguistic units and their combinations have a profound impact on the way language is chunked in memory, how such chunks are connected, and how easily they are accessed [1].
Viewed from a usage-based perspective, Chinese words emerge from the co-occurrence relations of Chinese characters in actual language use. Unless otherwise specified, co-occurrence in this paper means that two linguistic units (e.g., Chinese characters) are immediate neighbors of each other and thus form a bigram as an ordered pair. Given two linguistic units u and v, the two ordered pairs they form, uv and vu, are different co-occurrence relations or bigrams. The three co-occurrence relations of characters (or character bigrams) at the beginning of Example 1, for instance, are 至于 (zhìyú), 于语 (yúyǔ), and 语言 (yǔyán). In actual language use, any CTCW can be seen as a reusable character bigram denoting one or more meanings. This repetition of co-occurrence helps to entrench the character bigram as a structural whole. Co-occurrence frequencies of characters, therefore, may play a crucial role in the formation and acquisition of CTCWs in actual language use. For instance, the top 10 most frequent character bigrams in the Lancaster Corpus of Modern Chinese [32] are all CTCWs.
However, a more in-depth understanding of the role of frequency (e.g., the specific scope of its effects) is needed. A general law of human language is that word frequency varies drastically [33]: only a few of the words are highly frequent while the majority of them are rather infrequent. The same is true of CTCWs. A large number of character bigrams with low frequencies may still be CTCWs. What is worse is that many non-word character bigrams exhibit rather high frequency levels. The frequency level alone, therefore, may not be a reliable indicator of whether a character bigram forms a word or not. However, the actual language use to which language users are exposed is ever-growing and ever-changing [7]. Frequency may always play a role in the formation of structural patterns, for even the least frequently-used units or patterns according to corpus statistics can occur repeatedly in actual language use. In order for a word uv to stand out as a cohesive unit in actual language use, the frequency of uv may not have to compete with all the other character bigrams in the entire context. Instead, it may be more plausible to examine the frequency of uv against its immediate context, i.e., the bigrams which u and v respectively form with other characters.
In authentic Chinese language use, any two characters u and v forming a bigram uv also co-occur with other characters at the same time, and these co-occurrence relations constitute the immediate context of bigram uv. In Example 1, 语 (yǔ) and 言 (yán), in addition to co-occurring with each other, also co-occur respectively with other characters, hence the immediate context of 语言. The bigram 语言 (a CTCW meaning 'language') and its immediate context can be modeled as a network in Fig 1, whereby any two characters forming a bigram in the text are connected by a directed link (an arrowed line indicating the order of characters), and the frequency of each bigram is represented by a value attached to the corresponding link and the thickness of the link.
As discussed earlier, a character bigram constitutes either a word (or part of it) or the boundary between two adjacent words. In the former case, the two characters may form a CTCW (e.g., 语言 in Example 1) and thus can be used repeatedly. A character bigram forming the boundary between two words, on the other hand, results from the co-occurrence of the two words due to the syntactic arrangement of the utterance. Considering the flexibility of syntax, such a bigram (e.g., those formed by 语 or 言 with the other six characters as illustrated by Fig 1) may not exhibit the same degree of reusability as a CTCW. It follows that if two characters form a CTCW, they may tend to co-occur with each other more frequently than they co-occur with other characters. Given a character bigram uv found in a corpus, its own frequency can be seen as its internal tightness, and the frequencies with which u and v respectively co-occur with their other immediate neighbors can be seen as the external tightness of uv. A CTCW, therefore, is likely to exhibit itself as a two-character chunk with stronger internal than external tightness. In this way, a CTCW as a cohesive structural whole can stand out in its immediate context, which may contribute substantially to its emergence from actual language use. In Example 1, for instance, 语言 as a CTCW stands out as such a chunk (see Fig 1).
In sum, a networks approach to co-occurrence frequency may be more plausible in accounting for the emergence of a CTCW, which focuses on the frequency level of the corresponding character bigram relative to those of the other bigrams in its immediate context, instead of its frequency level alone. Such an approach can be operationalized using network analysis of authentic Chinese language data.
Materials and methods
In order to understand the emergence of CTCWs, this study attempts to examine (1) the two-character chunks with stronger internal tightness as previously defined, and (2) how such chunks relate to CTCWs. Network analysis of authentic Chinese language data can be employed to detect such chunks with stronger internal tightness.
Materials
The authentic language data adopted in this study are from two corpora, namely, the Lancaster Modern Chinese Corpus (LCMC) [32] and the Leiden Weibo Corpus (LWC) [34], which cover the formal and informal use of modern Chinese, respectively.
LCMC is a balanced corpus of modern written Chinese with about 1.6 million Chinese character tokens (roughly 1 million word tokens). It consists of 500 samples, 3,200 Chinese character tokens apiece, and covers 15 text categories. Two sub-corpora of LCMC were adopted for this study, namely, those of press reportage (LCMC_A, about 120,000 character tokens) and science (academic prose) (LCMC_J, about 230,000 character tokens). These correspond to two representative genres of formal Chinese language use.
LWC consists of over 5,100,000 messages posted on Weibo, China's leading social media platform. Compared with the language data in LCMC, those in LWC are rather close to daily spoken interactions. In this study, the first 8,000 messages with substantial linguistic content (i.e., consisting of at least one word) in LWC were selected as a sub-corpus (about 250,000 character tokens) concerning the informal use of Chinese.
Network modeling
Considering the advantages of linguistic networks in dealing with relational phenomena (such as linguistic structure), it is necessary to construct and analyze linguistic network models based on authentic language data. These networks have linguistic units as nodes and their relations (e.g., co-occurrence, syntactic dependency, or semantic dependency relations) observed in actual language use as links. Network models constructed in this way, therefore, are potentially capable of modeling language users' exemplar-based language experience. Quantitative analysis of such networks can help to detect various cohesive clusters formed by linguistic units in actual language use, which can hopefully shed empirical light on the emergent mechanisms of particular structural patterns, such as CTCWs.
As previously noted, the actual use of Chinese language can be seen as continuous streams of characters (as the immediately-observable linguistic units). In network analysis, these can be modeled by directed and weighted character co-occurrence networks. Such a network can be represented as N = (V, E, w), with V being the set of all the characters (types) in the corpus, E the set of co-occurrence relations of characters, and w the set of link weights representing frequency values of the corresponding co-occurrence relations. Only Chinese characters were counted as network nodes. Other symbols, such as Arabic numerals and punctuation marks, were all ruled out. Any two co-occurring characters u and v forming a bigram uv were connected by a directed link pointing from u to v in the corresponding network. In rare cases, a character u might co-occur with itself to form a bigram uu. For instance, some CTCWs are formed by reduplicating the same character, such as 叔叔 (shūshu, 'uncle'), 狒狒 (fèifèi, 'baboon'), and 天天 (tiāntiān, 'every day'). A co-occurrence relation of this type was represented by a link pointing from one character to itself, which is termed a self-loop. Note that two characters separated by a punctuation mark representing a pause (such as the comma ， and the period 。) did not count as co-occurring. The frequency of any co-occurrence relation of characters is the weight of the corresponding link. In Fig 2, the number attached to each directed link represents its weight, that is, the frequency of the corresponding bigram. For instance, as the character bigram (in fact a CTCW) 语言 (yǔyán, 'language') occurred 3 times in the text, the weight of the corresponding link is 3 (see also Fig 1).
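The construction scheme just described can be illustrated with a minimal Python sketch (not the code used in this study). The paper only specifies that pause-marking punctuation breaks adjacency; letting every skipped non-character symbol break adjacency, as done below, is an illustrative assumption.

```python
from collections import Counter

def is_chinese_char(ch: str) -> bool:
    """Only CJK unified ideographs count as nodes; other symbols are ruled out."""
    return "\u4e00" <= ch <= "\u9fff"

def build_cooccurrence_network(text: str) -> Counter:
    """Map directed character bigrams (u, v) to their co-occurrence frequencies.

    Assumption: any skipped symbol (punctuation, digits, Latin letters) breaks
    adjacency, so the characters on either side of it do not count as co-occurring.
    """
    weights = Counter()
    prev = None
    for ch in text:
        if not is_chinese_char(ch):
            prev = None                    # pause punctuation and other symbols break the stream
            continue
        if prev is not None:
            weights[(prev, ch)] += 1       # directed link prev -> ch; self-loops are allowed
        prev = ch
    return weights

network = build_cooccurrence_network("至于语言变化问题，我们已有足够的事实可以证明。")
print(network[("语", "言")])               # frequency (link weight) of the bigram 语言
```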
With the scheme introduced above, the three sub-corpora were converted into three network models (henceforth, Networks LCMC_A, LCMC_J, and LWC), respectively.
Data analysis
As previously discussed, in a body of authentic language data, if two characters co-occur more frequently than they co-occur with the other characters, they form a chunk with stronger internal than external tightness in its immediate context. A chunk of this type is in line with the notion of islands [35] in network analysis. An island is defined as a sub-network whose inner-links, those connecting its member nodes, all have greater weight values than its outer-links, those connecting its member nodes with nodes outside it. A two-character chunk with stronger internal tightness due to its repeated use (e.g., 语言 (yǔyán, 'language') as illustrated by Fig 1), therefore, corresponds to a two-node island (henceforth, TNI) in a linguistic network constructed as previously described.
The classical definition of islands is based on networks without self-loops. In the actual use of Chinese, the co-occurring characters do form self-loops, which are rare yet worth considering. Some of these self-loops are CTCWs, e.g., 叔叔 (shūshu, 'uncle'). For the purpose of this study, an adjusted definition of TNIs is formulated, so that the self-loops formed by co-occurring characters are considered. Let the directed link between two nodes u and v be e (allowing u = v, and hence a self-loop), and let any other directed link involving u or v be e'. If the weight of e is greater than the maximal weight of the latter, i.e., w(e) > max(w(e')), then <u, v> forms a TNI.
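The adjusted definition can be checked mechanically. The sketch below assumes the weights Counter produced by the construction sketch above; ruling out bigrams with weight 1 reflects that such bigrams are not formed through repeated use.

```python
from collections import Counter

def is_two_node_island(weights: Counter, u: str, v: str) -> bool:
    """Return True if link u->v has greater weight than every other link involving u or v."""
    w_uv = weights[(u, v)]
    if w_uv == 0:
        return False
    other = [
        w for (a, b), w in weights.items()
        if (a in (u, v) or b in (u, v)) and (a, b) != (u, v)
    ]
    return w_uv > max(other) if other else True

def extract_tnis(weights: Counter, min_weight: int = 2):
    """List all bigrams that form two-node islands, excluding weight-1 bigrams."""
    return [
        (u, v) for (u, v), w in weights.items()
        if w >= min_weight and is_two_node_island(weights, u, v)
    ]
```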
Based on the above definition of TNIs, data analysis of this study took two major steps: (1) extraction of TNIs from the linguistic networks, and (2) determination of the relationship between the TNIs and CTCWs.
Chinese words are rather fluid units in that it is sometimes difficult to draw the boundary between a word and a phrase. Dictionaries, lexicons, and lexical databases published officially or academically constitute an operational (though not perfect) tool for wordhood judgment. The sources adopted by this study are: (1) Contemporary Chinese Dictionary (7th Edition) [36] (henceforth, CCD), the most authoritative reference work on modern Chinese words; (2) Lexicon of Common Words in Contemporary Chinese [37] (henceforth, LCWCC), released by the State Language Commission of China; (3) the Chinese Lexical Database [38] (henceforth, CLD), a large-scale lexical database for simplified modern Chinese which well reflects the linguistic experience of modern Chinese users; (4) Wiktionary (https://www.wiktionary.org); and (5) Baidu Encyclopedia (https://baike.baidu.com/). Both (4) and (5) are powerful online dictionaries with a good coverage of Chinese neologisms, slang words, and dialect words. Technical terms which are not listed in the above sources were judged on the basis of (1) the terms published by the China National Committee for Terms in Sciences and Technologies (partially searched on http://www.termonline.cn/index.htm), and (2) Dacihai [39], a large-scale dictionary and encyclopedia of modern Chinese.
In order to ensure the accuracy of wordhood judgment, the TNIs extracted were examined manually based on the above sources and their contexts. A TNI has to meet one major criterion and two minor criteria to be counted as a CTCW. The major criterion is that the TNI should be listed in at least one of the above sources as an entry. The two minor criteria are that (1) not all occurrences of the TNI are boundaries between two adjacent words (i.e., co-occurring but belonging to two adjacent words), and (2) not all occurrences of the TNI are parts of longer words. The major criterion is generally sufficient for wordhood judgment, while the two minor criteria help to rule out two very rare cases, which further improves the accuracy of judgment.
Results
The TNIs extracted from each network were classified into three types: (1) CTCWs, (2) wordlike chunks, and (3) non-word chunks. Table 1 displays the number and percentage of TNIs of each type (see Appendices A-C in S1 File). The few TNIs with weight 1 (altogether 3, 1, and 3 in LCMC_A, LCMC_J, and LWC, respectively) have been ruled out, for they were not formed through repeated use of the corresponding character bigrams. Those (altogether 2) in the three sub-networks of Network LCMC_J were treated in the same way.
The Type-1 TNIs are those identified as words according to the abovementioned criteria for wordhood judgement. As can be seen from Table 1, CTCWs constitute an overwhelming majority of the TNIs in each network. The CTCWs extracted as TNIs from Network LWC cover some slang words widely used in daily conversations and on social media platforms, such as 粉丝 (fěnsī, 'fans'), 童鞋 (tóngxié, 'schoolmate' or 'classmate'), and 坑爹 (kēngdiē, 'cheating' or 'deceiving'). These words reflect the informal use of modern Chinese.
The Type-2 TNIs do not count as words according to the criteria of this study. However, they constitute either a two-character phrase (e.g., 万元 (wànyuán, 'ten thousand Yuan RMB')) or part of a word/phrase of three or more characters (e.g., 铃虫 (língchóng) as part of 棉铃虫 (miánlíngchóng, 'bollworm') and 红铃虫 (hónglíngchóng, 'red bollworm')). As previously noted, the boundary between a two-character phrase and a CTCW is not always clear-cut and even LCWCC and CLD list a number of CTCWs which are sometimes treated as two-character phrases. A two-character phrase behaves very much like a CTCW in that they are both used as a structural and meaningful whole. For instance, 万元 (with 万 (wàn) meaning 'ten thousand' and 元 (yuán) 'Yuan RMB') can also be seen as a CTCW for a unit of currency.
A Type-3 TNI consists of two characters which respectively belong to two adjacent words and cannot be treated as a phrase. As can be seen from Table 1, the proportion of non-word islands in each network is negligible.
In sum, the above results have preliminarily shown that the two-character chunks with stronger internal than external tightness due to their repeated use are generally CTCWs.
In network analysis, the minimal weight value of the links between the component nodes of an island is termed the height of the island [35]. In this study, the height of a TNI is the frequency of the corresponding chunk. In Fig 1, for instance, 语言 (yǔyán, 'language') as a TNI exhibits a height of 3. The distribution of word frequency in natural language can be captured by power law [33], which is known as Zipf's Law. In other words, when the words in a language are ranked in descending order of frequency, word frequency (f) and rank (r) generally follow a power-law relationship f ~ r^(-γ). The rank-frequency distribution obtained is extremely uneven: a rather limited number of words have extremely high levels of frequency while the majority of words have rather low levels of frequency. Most CTCWs have very low levels of frequency, which however does not deny their status as words. Fig 3 displays the rank-frequency distributions of Type-1 TNIs, i.e., the CTCWs, in the three networks (see Appendices A-C in S1 File). All the plots in Fig 3 exhibit Zipf-like distributions, with word frequency dropping abruptly to a low level and then decreasing rather slowly as word rank increases. It might be unnecessary to test statistically how well the distributions fit power law. The most important message from the distributions, however, is that the TNIs extracted covered CTCWs with vastly different frequency levels. In other words, there is no necessary connection between the formation of a TNI and the frequency level of the corresponding CTCW.
The definition of islands constitutes a rather rigorous criterion for detection of two-character chunks with stronger internal than external tightness, for an island is a local sub-network with the greatest link weight values. The same character (e.g., 语 (yǔ)) may occur in more than one word (e.g., 语言 (yǔyán, 'language') and 语法 (yǔfǎ, 'grammar')). However, a TNI uv in a given network will prevent the other character bigrams containing u or v in the same network from forming islands, even though some of them might also be CTCWs. In addition, a CTCW which occurs in particular immediate contexts (e.g., one that happens to occur always between the same neighboring characters in a particular text) may also fail to form a TNI. It follows that island extraction as conducted in this study will inevitably miss a number of CTCWs in a network. The CTCWs that are missed by island extraction are worthy of investigation, especially whether they can form TNIs with the change of their immediate contexts. Actual language use is rich and flexible. The same is true of the immediate context of a CTCW. A body of authentic language use, regardless of its size, only constitutes a fragment of language experience. Given a particular fragment of language use (e.g., the sub-corpora LCMC_A and LCMC_J), a CTCW may fail to form a TNI in its immediate context. However, other fragments of language use may provide appropriate immediate contexts where the CTCW may well form a TNI. This is due to the nature of CTCWs as reusable character bigrams. This reusability gives rise to a frequency effect, which makes a CTCW a structural whole with stronger internal than external tightness in appropriate immediate contexts. In order to illustrate this point, the TNIs extracted from three sub-networks of Network LCMC_J were examined. These three sub-networks were network models constructed respectively on the basis of three text samples selected at random from sub-corpus LCMC_J (with 583, 1,495, and 3,353 character tokens, respectively). The three sub-networks are labeled as Networks LCMC_J_1, LCMC_J_2, and LCMC_J_3 according to their sizes (with Network LCMC_J_1 being the smallest). Table 2 displays statistics of the TNIs in the three sub-networks (see Appendix D in S1 File). The TNIs are generally CTCWs in the three networks, especially Networks LCMC_J_2 and LCMC_J_3. The relatively low proportion of CTCWs in Network LCMC_J_1 is probably
due to the small size of the text sample on the basis of which it was constructed. In each of the three sub-networks, there were CTCWs which failed to form TNIs in Network LCMC_J (see Table 2 for their numbers). For instance, 数量 (shùliàng, 'quantity') failed to form a TNI in Network LCMC_J due to the formation of 数据 (shùjù, 'data') as a TNI. However, 数量 (shùliàng, 'quantity') formed a TNI due to the change of its immediate context in Network LCMC_J_1. If text samples of various sizes can be selected repeatedly from a large Chinese corpus, then the TNIs extracted from these samples will cover the overwhelming majority, if not all, of the CTCWs (with a frequency of at least 2) in the corpus. Therefore, even with the rather rigorous criterion for island extraction, a CTCW still has chances to form a TNI thanks to its repeated use in appropriate immediate contexts.
To recap, CTCWs have been examined, in their immediate contexts, through network analysis based on the notion of islands. There are three major findings: (1) the TNIs in the linguistic networks based on Chinese characters and their co-occurrence relations are generally CTCWs; (2) a CTCW of any frequency level (usually at least 2) may form a TNI; and (3) any CTCW (usually with a frequency of at least 2) has the potential to form a TNI in appropriate immediate contexts.
Discussion
In a wide range of research fields, the notion of islands has helped researchers to identify, in systems of various types, cohesive groups which are meaningful in one way or another [40][41][42]. In this study, linguistic networks as models of actual language use of Chinese have been constructed and analyzed, with Chinese characters as nodes, their co-occurrence relations as links, and the co-occurrence frequencies as link weights. It has been found that the TNIs extracted from these networks are generally CTCWs. Furthermore, it has been shown that a CTCW always has the potential to form a TNI in appropriate immediate contexts. The findings of this study, therefore, suggest a highlighting mechanism of CTCWs in their immediate contexts. While this highlighting mechanism is by itself a statistical (frequency-based) effect, it may have a bearing on the emergence of CTCWs, especially when viewed from the perspective of attention/activation. An island as a cohesive whole is 'a local summit in the network', which is 'raised above its immediate surroundings' [35: 129]. A CTCW, by forming an island in appropriate immediate contexts, can highlight itself as a structural whole with stronger internal tightness (as can be seen from the case of 语言 (yǔyán, 'language') in Fig 1). In this way, it can form a local peak in terms of attention/activation level in language experience and can be readily perceived and represented as a whole. The ever-changing active areas of language experience [6: 74] give rise to great flexibility of immediate contexts of the CTCWs, so that any CTCW can have the opportunity to form a local peak of attention/activation level in language experience. The repeated highlighting of a CTCW in appropriate immediate contexts can entrench a CTCW as a local peak in language experience, so that it can establish and maintain its status as a structural whole. Similarly, this highlighting mechanism may also contribute to the holistic representation of a two-character phrase (e.g., those extracted from the network models).
Considering that two-character phrases constitute an important source of new CTCWs in Chinese [43], this mechanism may also play a role in the future lexicalization of some of these phrases. Moreover, the highlighting mechanism points to a contextual/networks approach to co-occurrence frequency. The findings indicate that for a CTCW to be highlighted as a cohesive whole in its immediate context, its component characters' co-occurrence frequency does not need to reach a particular threshold. Such a non-threshold effect of frequency is also found in other studies [44][45][46]. What matters most to the highlighting of a CTCW has been found to be the advantage of its component characters' co-occurrence frequency in the immediate context. Such an advantage suggests that a CTCW can be used repeatedly in diversified contexts, highlighting it as a cohesive whole in its immediate surroundings. In other words, co-occurrence frequency studied in isolation is insufficient to account for the chunking of Chinese characters into CTCWs; rather, it needs to be examined in its immediate contexts. Such importance of contextual information has been appreciated by a growing body of usage-based research. For instance, empirical inquiries [47,48] have shown that contextual diversity facilitates the processing and acquisition of lexical and sub-lexical units better than their frequencies alone.
This contextual/networks approach may also apply to other structural patterns (based on the chunking of lower-level units), with other quantitative measures of co-occurrence strength possibly involved. The above discussion is generally speculative. In future research, it is necessary to investigate the cognitive reality of the highlighting mechanism. For instance, will the two-character sequences that form TNIs differ in the shape of P300 from those that do not, considering that P300 is related to attention and activation of working memory [49,50]?
Conclusions
With appropriate models and analytical techniques of linguistic networks, this study focuses on the emergence of CTCWs (as structural units based on the chunking of their component characters) in the actual use of modern Chinese, which manifests itself as continuous streams of Chinese characters. Based on a relational view of co-occurrence frequency, two-character chunks with their component characters co-occurring more frequently than they do with the other characters (i.e., two-node islands in terms of network analysis) were extracted from linguistic network models of authentic language use of modern Chinese, and the relationship between the TNIs extracted and CTCWs was determined. It has been shown that the TNIs extracted are generally CTCWs and a CTCW, regardless of its frequency level (usually at least 2), always has the potential to form a TNI in appropriate immediate contexts.
The findings of this study have shed some light on the emergent mechanism of CTCWs in language use. This mechanism helps to highlight a CTCW as a structural whole in its immediate context, and thus may play a vital role in the formation and acquisition of CTCWs in actual language use. This mechanism may also throw some light on the emergence of other structural patterns (with the chunking of specific linguistic units as their basis). Moreover, the findings may help to further understand the role of frequency (and probably other types of co-occurrence strength) in linguistic emergence, especially the specific scope of the role.
Methodologically speaking, this study has further shown the value of linguistic networks in usage-based research into the structural patterns of language. To be specific, linguistic network models based on authentic language data may provide an effective means of researching the emergence of structural patterns of human language. The findings of this study have also confirmed the advantages of linguistic networks in handling relational linguistic phenomena in authentic language use. This study is still preliminary and further research is needed. For instance, the highlighting mechanism is still at the computational level [51] in that it is based on statistics of actual language use, and its cognitive realism needs to be determined by research concerning the psychological and neural levels. In addition, considering the advantages of linguistic networks in dealing with relational linguistic phenomena, network-based investigations can be conducted to unveil the formation of other structural patterns of human language.
"year": 2021,
"sha1": "669a1abd2abe7c107c41463c8a20235d94c08df1",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0259818&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "669a1abd2abe7c107c41463c8a20235d94c08df1",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Unsupervised Conversation Disentanglement through Co-Training
Conversation disentanglement aims to separate intermingled messages into detached sessions, which is a fundamental task in understanding multi-party conversations. Existing work on conversation disentanglement relies heavily upon human-annotated datasets, which are expensive to obtain in practice. In this work, we explore training a conversation disentanglement model without referencing any human annotations. Our method is built upon the deep co-training algorithm, which consists of two neural networks: a message-pair classifier and a session classifier. The former is responsible for retrieving local relations between two messages while the latter categorizes a message to a session by capturing context-aware information. The two networks are respectively initialized with pseudo data built from the unannotated corpus. During the deep co-training process, we use the session classifier as a reinforcement learning component to learn a session assigning policy by maximizing the local rewards given by the message-pair classifier. For the message-pair classifier, we enrich its training data by retrieving message pairs with high confidence from the disentangled sessions predicted by the session classifier. Experimental results on the large Movie Dialogue Dataset demonstrate that our proposed approach achieves competitive performance compared to previous supervised methods. Further experiments show that the predicted disentangled conversations can promote the performance on the downstream task of multi-party response selection.
Introduction
With the continuing growth of the Internet and social media, online group chat channels, e.g., Slack (https://slack.com/) and WhatsApp (https://www.whatsapp.com/), among many others, have become increasingly popular and played a significant social and economic role.

[Figure 1: An entangled group chat in which messages from two sessions are interleaved. "Anyone finished the assignment?", "Not yet. Working on it", "When is the due date?", and "Hmm… tonight?" belong to session S1, while "I'd like to get a new keyboard. Any suggestions?" and "The one I have is pretty good." belong to session S2.]

Along with the convenience of instant communication brought by these applications, the inherent property that multiple topics are often discussed in one channel hinders efficient access to the conversational content. In the example shown in Figure 1, people or intelligent systems have to selectively read the messages related to the topics they are interested in from hundreds of messages in the chat channel.
With the goal of automatically grouping messages with the same topic into one session, conversation disentanglement has proved to be a prerequisite for understanding multi-party conversations and solving the corresponding downstream tasks such as response selection (Elsner and Charniak, 2008;Lowe et al., 2017;Wang et al., 2020). Previous research on conversation disentanglement can be roughly divided into two categories: (1) two-step methods, and (2) end-to-end methods. In the two-step methods (Elsner and Charniak, 2008, 2011;Jiang et al., 2018), a model first retrieves the "local" relations between two messages by utilizing either feature engineering approaches or deep learning methods, and then a clustering algorithm is employed to divide an entire conversation into separate sessions based on the message pair relations. In contrast, end-to-end methods (Tan et al., 2019;Yu and Joty, 2020) capture the "global" information contained in the context of detached sessions and calculate the matching degree between a session and a message in an end-to-end manner.
Though end-to-end methods have been proved to be more flexible and can achieve better performance, these two types of methods are interconnected and complementary since a global optimal clustering solution on the local relations will produce the optimal disentanglement scheme (McCallum and Wellner, 2004). Although the previous research efforts have achieved impressive progress on conversation disentanglement, they all highly rely on human-annotated corpora, which are expensive and scarce to obtain in practice (Kummerfeld et al., 2019). The heavy dependence on human annotations limits the extensions of related study on conversation disentanglement as well as the applications on downstream tasks, given a wide variety of occasions where multi-party conversations can happen. In this work, we explore the possibility of training an end-to-end conversation disentanglement model without referencing any human annotations and propose a completely unsupervised disentanglement model.
Our method builds upon the co-training approach (Blum and Mitchell, 1998;Nigam and Ghani, 2000) but extends it to a deep learning framework. By viewing the disentanglement task from the local perspective and the global perspective, our method consists of a message-pair classifier and a session classifier. The message-pair classifier aims to retrieve the message pair relations, which serves a similar purpose to the model used in a two-step method that retrieves the local relations between two messages. The session classifier is a global context-aware model that can directly categorize a message into a session in an end-to-end fashion. The two classifiers view the task of conversation disentanglement from the perspectives of a local two-step method and a global end-to-end model, and they will be separately initialized with pseudo data built from the unannotated corpus and updated with each other during co-training. More concretely, during the co-training procedure, we adopt reinforcement learning to learn a session assigning policy for the session classifier by maximizing the accumulated rewards between a message and a session which are given by the message-pair classifier. After updating the parameters of the session classifier, a new set of data with high confidence will be retrieved from the predicted disentanglement results of the session classifier and used for updating the message-pair classifier. As shown in Figure 2, the above process is iteratively performed by updating one classifier with the other until the performance of the session classifier stops increasing.
We conduct experiments on the large public Movie Dialogue Dataset. Experimental results demonstrate that our proposed method outperforms strong baselines based on BERT (Devlin et al., 2019) in two-step settings, and achieves competitive results compared to those of the state-of-the-art supervised end-to-end methods. Moreover, we apply the disentangled conversations predicted by our method to the downstream task of multi-party response selection and get significant improvements compared to a baseline system. In summary, our main contributions are three-fold:
• To the best of our knowledge, this is the first work to investigate unsupervised conversation disentanglement with deep neural models.
• We propose a novel approach based on cotraining which can perform unsupervised conversation disentanglement in an end-to-end fashion.
• We show that our method can achieve performance competitive with supervised methods on the large public Movie Dialogue Dataset. Further experiments show that our method can be easily adapted to downstream tasks and achieve significant improvements.
Related Work
Conversation Disentanglement Conversation disentanglement has long been regarded as a fundamental task for understanding multi-party conversations (Elsner and Charniak, 2008, 2010) and can be combined with downstream tasks to boost their performance (Wang et al., 2020). Previous methods on conversation disentanglement are mostly performed in a supervised fashion, which can be classified into two categories: (1) two-step approaches and (2) end-to-end methods. The two-step methods (Elsner and Charniak, 2008, 2010, 2011;Chen et al., 2017;Jiang et al., 2018;Kummerfeld et al., 2019) firstly retrieve the relations between two messages, e.g., "reply-to" relations (Guo et al., 2018), and then adopt a clustering algorithm to construct individual sessions. The end-to-end models (Tan et al., 2019;Yu and Joty, 2020), instead, perform the disentanglement operation in an end-to-end manner, where the context information of detached sessions will be exploited to classify a message to a session. End-to-end models tend to achieve better performance than two-step models, but both often need large annotated data to get fully trained, which is expensive to obtain and thus encourages the demand for unsupervised algorithms. A few preliminary studies perform unsupervised thread detection in email systems based on two-step methods (Wu and Oard, 2005;Erera and Carmel, 2008;Domeniconi et al., 2016), but these methods use handcrafted features which cannot be extended to various datasets. Compared with previous works, our method can conduct end-to-end conversation disentanglement in a completely unsupervised fashion, which can be easily adapted to downstream tasks and used in a wide variety of applications.
Dialogue Structure Learning One problem that may be related to conversation disentanglement is dialogue structure learning (Zhai and Williams, 2014;Shi et al., 2019). Both are related to understanding multi-party conversation structures but they are different tasks. Dialogue structure learning aims to discover latent dialogue topics and construct an implicit utterance dependency tree to represent a multi-party dialogue's turn taking (Qiu et al., 2020), while the goal of conversation disentanglement is to learn an explicit dividing scheme that separates intermingled messages into sessions.
Co-training Co-training (Blum and Mitchell, 1998;Nigam and Ghani, 2000) has been widely used as a low-resource learning algorithm in natural language processing (Wu et al., 2018), which assumes that the data has two complementary views and utilizes two models to iteratively provide pseudo training signals to each other. Our method consists of a message-pair classifier and a session classifier, which respectively view the unannotated dataset from the perspective of the local relations between two messages and that of the context-aware relations between a session and a message. To the best of our knowledge, this is the first work that utilizes co-training in the research of conversation disentanglement. We will extend the co-training idea to the deep learning paradigm to construct novel models for disentanglement.
Formulation and Notations
Given an entangled conversation C = [m 1 , m 2 , · · · , m N ] consisting of N messages, both the number of underlying sessions K and the session each message belongs to are unknown to the model. Our goal is to learn a dividing scheme that indicates which session a message m i belongs to. We solve this task in an end-to-end fashion where we formulate unsupervised conversation disentanglement as an unsupervised sequence labeling task. For a given message m i , there exists a session set T = {T 1 , · · · , T z(i) } where z(i) indicates the number of detached sessions when m i is being processed. The model needs to decide if m i belongs to any session in T. If m i ∈ T k , then m i will be appended to T k ; otherwise a new session T z(i)+1 will be built and initialized by m i , and the new session will be added to T.
Method
In this section, we describe our co-training based framework in detail, which contains the following components:
1. A message-pair classifier which can retrieve the relations between two messages. The relation scores will be used as rewards for updating the session classifier during co-training.
2. A session classifier which can perform end-to-end conversation disentanglement by retrieving the relations between a message and a session. The predicted results will be used to build new pseudo data to train the message-pair classifier during co-training.
3. A co-training algorithm involving the message-pair classifier and the session classifier. The two classifiers will help to update each other until the performance of the session classifier stops growing.
We will introduce the details of the three components in following sections.
Message-pair Classifier
The message-pair classifier is a binary classifier which we denote as F m in the remainder of this paper. Due to the lack of annotated data in unsupervised settings, the goal of F m is to predict if two messages are in the same session; i.e., whether they talk about the same topic, which is different from most previous work that predicts the "reply-to" relation. In our experiment, we adopt a pretrained BERT (Devlin et al., 2019) in the base version as our message encoder.
Model
Given two messages m i and m j , we separately obtain their sentence embeddings v m i and v m j with the BERT encoder. The probability of m i and m j belonging to the same session is then computed with the dot product between v m i and v m j . We abbreviate this process as p ij = F m (m i , m j ). F m is trained to minimize the cross-entropy loss. The predicted probabilities between message pairs will be used as rewards during the co-training process to update the session classifier.
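A minimal PyTorch sketch of the message-pair classifier is given below. The use of the [CLS] vector as the sentence embedding, the sigmoid over the dot product, and the checkpoint name are illustrative assumptions where the text does not specify details.

```python
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModel

class MessagePairClassifier(nn.Module):
    """Sketch of F_m: BERT sentence embeddings scored by a dot product."""
    def __init__(self, model_name: str = "bert-base-uncased"):
        super().__init__()
        self.tokenizer = AutoTokenizer.from_pretrained(model_name)
        self.encoder = AutoModel.from_pretrained(model_name)

    def encode(self, messages):
        batch = self.tokenizer(messages, padding=True, truncation=True,
                               return_tensors="pt")
        outputs = self.encoder(**batch)
        return outputs.last_hidden_state[:, 0]        # [CLS] vectors as sentence embeddings

    def forward(self, msgs_i, msgs_j):
        v_i, v_j = self.encode(msgs_i), self.encode(msgs_j)
        return torch.sigmoid((v_i * v_j).sum(dim=-1))  # same-session probability per pair

# Training step against the pseudo same-session labels (binary cross-entropy):
# loss = nn.BCELoss()(model(msgs_i, msgs_j), labels.float())
```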
Initialization
One important step in the standard co-training algorithm is to initialize the classifiers with a small amount of annotated data. Since our dataset is completely unlabeled, we create a pseudo dataset to initialize the message-pair classifier. The assumption we use in our experiments is that one speaker mostly participates in only one session. (We verify this assumption on two natural multi-party conversation datasets, the Reddit dataset (Tan et al., 2019) and the Ubuntu IRC dataset (Kummerfeld et al., 2019): statistics show that only 6% of speakers will join multiple sessions on the Reddit dataset and 20% on the IRC dataset.)
To construct the pseudo data D m , we use the message pairs from the same speaker in one conversation as the positive cases, while randomly sampling messages from different conversations as the negative pairs. In this way we obtain a retrieved dataset D ret m containing 937K positive cases and 2,184K negative cases. However, we observe that the positive cases constructed from the above process are very noisy because: (1) there are still some speakers who will appear in multiple sessions, and (2) even message pairs from the same speaker in the same session can be very semantically different since they are not contiguous messages. These noisy training cases will result in low confidences for the predicted probabilities of F m , which will be used later in co-training. Thus we randomly select some messages from the unlabeled dataset and use a pretrained DialoGPT (Zhang et al., 2020) to generate direct responses in order to form new positive cases, which we denote as D gen m . In this way, we finally obtain the pseudo data D m = D ret m ∪ D gen m , which contains 1,212K positive cases and 2,184K negative cases, to initialize F m .
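The retrieved part of the pseudo data could be assembled as in the sketch below; the number of negatives per conversation is an arbitrary illustrative parameter, and the DialoGPT-generated positives (D gen m) are omitted for brevity.

```python
import random

def build_pseudo_pairs(conversations, n_negatives_per_conv=5, seed=0):
    """conversations: list of lists of (speaker, message) tuples; needs >= 2 conversations.

    Positives pair messages from the same speaker within one conversation;
    negatives pair messages sampled from two different conversations.
    """
    rng = random.Random(seed)
    positives, negatives = [], []
    for conv in conversations:
        by_speaker = {}
        for speaker, msg in conv:
            by_speaker.setdefault(speaker, []).append(msg)
        for msgs in by_speaker.values():               # same speaker -> assumed same session
            for i in range(len(msgs)):
                for j in range(i + 1, len(msgs)):
                    positives.append((msgs[i], msgs[j], 1))
    for _ in range(n_negatives_per_conv * len(conversations)):
        c1, c2 = rng.sample(conversations, 2)          # two different conversations
        negatives.append((rng.choice(c1)[1], rng.choice(c2)[1], 0))
    return positives + negatives
```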
Two-step Disentanglement
After being trained on the pseudo data D m , the message-pair classifier F m can be exploited for two-step conversation disentanglement. Given an unlabeled conversation C = [m 1 , m 2 , · · · , m N ], we first use F m to predict the probability between each message pair in C. Then we perform the greedy search algorithm widely used in previous work (Elsner and Charniak, 2008) to segment C into detached sessions.
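One common form of this greedy segmentation can be sketched as follows; the threshold value and the use of the maximum pairwise probability as the message-session score are illustrative choices rather than the exact settings of the cited algorithm.

```python
def greedy_disentangle(messages, pair_prob, threshold=0.5):
    """pair_prob(m_i, m_j) -> probability that the two messages share a session."""
    sessions = []                       # each session is a list of message indices
    for i, msg in enumerate(messages):
        best_score, best_session = threshold, None
        for s_id, session in enumerate(sessions):
            # score a session by its best-matching earlier message
            score = max(pair_prob(messages[j], msg) for j in session)
            if score > best_score:
                best_score, best_session = score, s_id
        if best_session is None:
            sessions.append([i])        # no session matches well enough: start a new one
        else:
            sessions[best_session].append(i)
    return sessions
```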
Session Classifier
The session classifier, denoted as F t , aims to calculate the relations between a session and a message that indicates if the message belongs to the session or not. Given the current context of a session as T = [m 1 , · · · , m |T | ] and a message m, the goal of F t is to decide if m can be appended to T or not.
Model
For each message m j ∈ T , we obtain its sentence embedding v m j with a Bidirectional LSTM network (Hochreiter and Schmidhuber, 1997) followed by a multilayer perceptron (MLP). After obtaining the sentence embeddings of all the messages in T as [v m 1 , · · · , v m |T | ], we adopt a self-attention mechanism (Yang et al., 2016) to calculate the session embedding v T by aggregating the information from different messages, where the attention weights are computed with trainable parameters w and b. For the message m, we use the same Bidirectional LSTM network and MLP to obtain its sentence embedding v m . Then the probability of m belonging to T is calculated with the dot product between v m and v T , and we abbreviate the above process as p = F t (T, m). F t is trained to minimize the cross-entropy loss.
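A compact PyTorch sketch of the session classifier is shown below; the embedding dimensions, the mean-pooling over BiLSTM states, and the exact form of the attention scorer are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SessionClassifier(nn.Module):
    """Sketch of F_t: BiLSTM + MLP message encoder, attention-pooled session embedding."""
    def __init__(self, vocab_size, emb_dim=128, hidden_dim=128):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.bilstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.mlp = nn.Sequential(nn.Linear(2 * hidden_dim, hidden_dim), nn.Tanh())
        self.attn_w = nn.Linear(hidden_dim, 1)           # scores for self-attention

    def encode_message(self, token_ids):                 # token_ids: (batch, seq_len)
        h, _ = self.bilstm(self.embedding(token_ids))
        return self.mlp(h.mean(dim=1))                   # pooled sentence embedding

    def forward(self, session_token_ids, message_token_ids):
        # session_token_ids: (|T|, seq_len) -- messages already in session T
        v_msgs = self.encode_message(session_token_ids)          # (|T|, hidden)
        alpha = torch.softmax(self.attn_w(v_msgs), dim=0)        # attention weights
        v_T = (alpha * v_msgs).sum(dim=0)                        # session embedding
        v_m = self.encode_message(message_token_ids).squeeze(0)  # candidate message
        return torch.sigmoid(torch.dot(v_T, v_m))                # membership probability
```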
Initialization
Similar to the message-pair classifier, we build a pseudo dataset D t to initialize the session classifier F t so that it can decide if a message is semantically consistent with a sequence of messages. We construct D t based on the same assumption that one speaker is involved in just one session for most of the time.
Given a conversation C = [m 1 , m 2 , · · · , m N ] from the unlabeled corpus, we retrieve the messages from a speaker S as C S = [m S 1 , m S 2 , · · · ] where C S ⊂ C. Based on the assumption, the messages in C S are in the same session, so any message m S i ∈ C S where i ≠ 1, together with its preceding context, can be regarded as a positive input of F t . Consider a positive case with m S 2 as the message: the message m is m S 2 and the session T is its entire preceding context [m 1 , · · · , m S 2 −1 ]. The reason is that m S 1 ∈ [m 1 , · · · , m S 2 −1 ], so [m 1 , · · · , m S 2 −1 ] and m S 2 should be semantically consistent according to the assumption.
For the negative instances of D t , we randomly sample a conversation as T from the corpus, and a message from another conversation as m. As such we obtain a pseudo dataset D t consisting of 460K positive instances and 1,158K negative cases.
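The construction of D t can be sketched as follows, under the same one-speaker-one-session assumption; the helper assumes at least two conversations are available for negative sampling.

```python
import random

def build_session_pseudo_data(conversations, seed=0):
    """conversations: list of lists of (speaker, message) tuples; needs >= 2 conversations.

    Positives: a later message by a repeated speaker paired with its full preceding
    context. Negatives: a whole conversation paired with a message from another one.
    """
    rng = random.Random(seed)
    positives, negatives = [], []
    for conv in conversations:
        seen_speakers = set()
        for idx, (speaker, msg) in enumerate(conv):
            if speaker in seen_speakers:
                context = [m for _, m in conv[:idx]]   # preceding context acts as the session
                positives.append((context, msg, 1))
            seen_speakers.add(speaker)
    for conv in conversations:
        other = rng.choice([c for c in conversations if c is not conv])
        negatives.append(([m for _, m in conv], rng.choice(other)[1], 0))
    return positives + negatives
```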
Algorithm 1 An end-to-end method for conversation disentanglement with the session classifier. Input: an unlabeled conversation C and the initialized session classifier F t . Output: a set of sessions T.
End-to-end Disentanglement
Note that after being initialized with the pseudo data D t , the session classifier F t can be directly applied to perform end-to-end conversation disentanglement. Suppose message m i is being processed, where m i ∈ C and C = [m 1 , m 2 , · · · , m N ]. We first calculate the probability of m i belonging to its preceding context C i = [m 1 , · · · , m i−1 ], which we denote as F t (C i , m i ). If F t predicts that m i does not belong to C i , m i will be used to initialize a new session T z(i)+1 , where z is a function indicating the number of disentangled sessions in C i ; otherwise m i will be used to calculate the matching probability with each session in T, and then be classified to the session which has the greatest matching probability. The overall end-to-end algorithm is shown in Algorithm 1.
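The procedure of Algorithm 1 can be sketched as below, assuming that session_prob wraps the trained F t and that a probability under 0.5 signals a new session (the exact threshold is not stated here and is an assumption).

```python
def end_to_end_disentangle(messages, session_prob, new_session_threshold=0.5):
    """session_prob(context_messages, message) -> probability the message belongs there."""
    sessions = []                                        # list of lists of messages
    for i, msg in enumerate(messages):
        preceding = messages[:i]
        if not preceding or session_prob(preceding, msg) < new_session_threshold:
            sessions.append([msg])                       # m_i initializes a new session
            continue
        # otherwise attach m_i to the best-matching existing session
        scores = [session_prob(session, msg) for session in sessions]
        best = max(range(len(sessions)), key=lambda k: scores[k])
        sessions[best].append(msg)
    return sessions
```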
Co-Training
The confidence of F m and F t is not high because they are initialized with noisy pseudo data. We propose to adapt the idea of co-training to the disentanglement task, which is leveraged to iteratively update the two classifiers with the help of each other. The session classifier will utilize the local probability provided by the message-pair classifier with reinforcement learning, while more training data, built from the outcomes of the session classifier, will be fed to the message-pair classifier. We will introduce the details in this subsection.
Updating Session Classifier
Since no labeled data is provided to train F t , we formulate the disentanglement task as a deterministic Markov Decision Process and adopt the Policy Gradient algorithm (Sutton et al., 1999) for the optimization. For each co-training iteration, F t will be initialized with the pseudo data D t and then updated by reinforcement learning.
State. The state s_i of the i-th disentanglement step consists of three components: the current message m_i, its preceding context C_i, and the detached session set T, which contains z(i) sessions.
Action. The action space at the i-th disentanglement step consists of two types of actions: (1) deciding whether m_i opens a new session, denoted a^new_i ∈ {0, 1}; if a^new_i is 0, m_i is used to initialize a new session T_{z(i)+1}, otherwise m_i is categorized into an existing session; (2) categorizing m_i to an existing session in T, denoted a^t_i ∈ {1, ..., z(i)}.
Policy network. We parameterize the actions with a hierarchical policy network π. The first-layer policy π_new(a^new_i | s_i; θ_1) decides whether message m_i belongs to C_i, and the first-layer action a^new_i ∈ {0, 1} is sampled from it. If a^new_i is 1, meaning that m_i belongs to a session in T, the second-layer policy π_t(a^t_i | s_i; θ_2) decides which of the existing sessions m_i should be categorized to, where θ_1 and θ_2 are trainable parameters.
Reward. The rewards are provided by the message-pair classifier F_m. For a^new_i = 0, we want m_i to be different from all the messages in C_i, so the reward r^m_i is defined as the negative average of the pairwise probabilities between m_i and the messages in C_i. For a^new_i = 1 and a^t_i = k, we want m_i to be similar to the messages in T_k, and r^m_i is defined as the average of the pairwise probabilities between m_i and the messages in T_k. An issue with r^m_i is that its confidence might be low because F_m is trained on noisy pseudo data. We hence design an additional speaker reward r^S_i based on our assumptions: a message m_i initializing a new session T_{z(i)+1} is rewarded if its speaker S_i does not appear in C_i, while a message m_i categorized to an existing session T_k receives a positive reward if its speaker S_i appears in T_k. The final reward r_i for an action combines the two terms, weighted by a parameter γ ∈ [0, 1] that balances r^m_i and r^S_i and which we set to 0.6 in the experiments. The policy-network parameters θ_1 and θ_2 are learned by optimizing the expected reward with the Policy Gradient algorithm.
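A compact sketch of how the two reward terms could be combined at a single step is shown below; the ±1 values of the speaker reward and the exact γ-weighted combination are assumptions consistent with, but not necessarily identical to, the paper's (unreproduced) equations.

```python
def step_reward(action_new, msg, spk, context, sessions, k, pair_prob, gamma=0.6):
    """Reward for one disentanglement step.

    context / sessions[k] are lists of (speaker, message) pairs;
    pair_prob(a, b) is the matching probability from F_m.
    """
    if action_new == 0:                    # msg opens a new session
        r_m = -sum(pair_prob(m, msg) for _, m in context) / max(len(context), 1)
        r_s = 1.0 if spk not in {s for s, _ in context} else -1.0
    else:                                  # msg joins existing session T_k
        t_k = sessions[k]
        r_m = sum(pair_prob(m, msg) for _, m in t_k) / len(t_k)
        r_s = 1.0 if spk in {s for s, _ in t_k} else -1.0
    return gamma * r_m + (1.0 - gamma) * r_s
```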
Updating Message-pair Classifier
As mentioned in Section 4.1.2, the pseudo data D m for initializing the message-pair classifier F m is noisy. Thus we enrich D m with new training instances D new m retrieved from the predicted disentanglement results of F t .
Given a conversation C, F_t predicts the disentangled sessions T = {T_k}, k = 1, ..., K. Taking session T_k = [m_{k_1}, ..., m_{k_{|T_k|}}] as an example, for a message m_{k_i} we retrieve its M preceding messages in T_k and form M message pairs with m_{k_i} as new positive pseudo pairs. To raise the confidence of the newly added data, we filter out pairs in which the two messages share fewer than 2 overlapping tokens after removing stopwords. In each co-training iteration, F_m is retrained on D_m ∪ D^new_m.
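A sketch of the pair retrieval and token-overlap filter might look as follows; the whitespace tokenization and the empty default stopword set are simplifying assumptions.

```python
def new_positive_pairs(sessions, window=2, stopwords=frozenset()):
    """Retrieve new pseudo message pairs from predicted sessions, keeping
    only pairs that share at least 2 non-stopword tokens."""
    pairs = []
    for t in sessions:
        for i, msg in enumerate(t):
            for prev in t[max(0, i - window):i]:   # preceding M messages
                a = {w for w in prev.lower().split() if w not in stopwords}
                b = {w for w in msg.lower().split() if w not in stopwords}
                if len(a & b) >= 2:
                    pairs.append((prev, msg, 1))
    return pairs
```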
Dataset
A large corpus is often required for end-to-end conversation disentanglement. In this work, we conduct experiments on the publicly available Movie Dialogue Dataset, which is built from online movie scripts. It contains 29,669/2,036/2,010 instances for the train/dev/test split with a total of 827,193 messages, and the number of sessions in one instance can be 2, 3 or 4. Since we explore unsupervised settings, no labels are used in our training.
Implementation Details
We adopt BERT (Devlin et al., 2019) (the uncased base version) as the message-pair classifier. For the session classifier, we set the hidden dimension to 300, and the word embeddings are initialized with 300-d GloVe vectors (Pennington et al., 2014). For training, we use Adam (Kingma and Ba, 2015) for optimization; the learning rate is set to 1e-5 for the message-pair classifier, 1e-4 for initializing the session classifier, and 1e-5 for updating the session classifier with reinforcement learning. We run 3 co-training iterations, at which point the best performance on the development set is achieved.
Evaluation Metrics
Four clustering metrics widely used in previous work (Elsner and Charniak, 2008; Kummerfeld et al., 2019; Tan et al., 2019) are adopted: Normalized Mutual Information (NMI), One-to-One Overlap (1-1), Loc_3 and Shen F score (Shen-F). More explanations of the metrics can be found in Appendix A.1. Following previous work, we also report the mean squared error (MSE) between the predicted session numbers and the golden session numbers; this metric measures whether the model can disentangle a given dialogue into the correct number of sessions.
Disentanglement Performance
Table 1 shows the results of unsupervised conversation disentanglement for the different methods. We can observe that: (1) among the two-step methods, BERT performs very poorly without finetuning, while after finetuning on our pseudo dataset its performance improves by a relatively large margin.
(2) Utilizing the pseudo pairs generated by a pretrained DialoGPT further improves the performance of BERT based on D^ret_m. We attribute this to the fact that messages from one speaker are usually not contiguous in a conversation, whereas DialoGPT can directly produce a response to a message, which helps BERT capture the differences between two messages. (3) Through the co-training process, the pseudo pairs retrieved from the predictions of the session classifier help BERT achieve a performance close to that of a supervised BERT, which demonstrates the effectiveness of our proposed co-training framework.
Table 2: The performance of the session classifier and the message-pair classifier in each co-training iteration. Columns NMI, 1-1, Loc_3 and Shen-F are for the session classifier and column F1 is for the message-pair classifier. "Base" denotes the session classifier trained on D_t and the message-pair classifier finetuned on D_m.
(4) BERT finetuned with golden message pairs has only a marginal performance advantage over BERT finetuned on the pseudo data D_m. This is caused by a weakness of two-step methods, in which the clustering algorithm is a performance bottleneck.
In general, end-to-end methods perform much better than two-step methods, as shown in the table, which is in accordance with the conclusions of previous work under supervised settings (Yu and Joty, 2020). The session classifier trained on the pseudo data D_t achieves a Shen F score of 59.61, a +5.29 improvement over the supervised BERT in the two-step setting. This shows that the model structure and the approach to building D_t are effective for unsupervised conversation disentanglement. Meanwhile, our proposed co-training framework further improves the performance of the session classifier and achieves results competitive with the current state-of-the-art supervised method. With further updating during the co-training process, the session classifier raises the NMI score from 24.96 to 29.72 and 1-1 from 54.26 to 56.38. Such a performance gain shows that our co-training framework is an important component in handling unsupervised conversation disentanglement.
Moreover, as we can see in the table, two-step methods have a high MSE on the predicted session numbers, but with the pseudo data D_m, BERT achieves performance that is much better than without finetuning and even comparable to finetuning on the golden pairs.
Table 3: The performance on multi-party response selection with disentangled conversations. The first column respectively stands for no disentanglement, the disentangled conversations predicted by our method, and the golden disentangled conversations.
The end-to-end session classifier achieves a significant improvement in MSE, reducing it from 1.4602 to 0.8059, while our proposed co-training framework further improves it to 0.6871, which is close to the performance of the supervised model. This demonstrates that the co-training method helps the session classifier better understand the semantics of the conversation and thus disentangle it into sessions more accurately.
Analysis of Co-training
In this section we analyze the iteration process of co-training. Table 2 shows the performance of the session classifier in different iterations, and the last column reports the performance of the message-pair classifier on the task of pair-relation prediction. As can be seen, model performance improves with each iteration. In the first iteration, the reward r^m_i is received from the base message-pair classifier, whose F1 score on relation prediction is 68.26. After the first iteration, new pseudo pairs are retrieved from the disentanglement results and used to improve the performance of the message-pair classifier to 68.44, so a better reward r^m_i is then provided to update the session classifier. As shown in the table, with such a co-training procedure, the performance of both the session classifier and the message-pair classifier is significantly enhanced.
Performance on Response Selection
Conversation disentanglement is a prerequisite for understanding multi-party conversations. In this section we apply our predicted sessions to the downstream task: multi-party response selection.
We create a response selection dataset based on the Movie Dialogue Dataset. We adopt an LSTM-based network to encode the conversations/sessions and use an attention mechanism to aggregate the information from different sessions, following prior work. More details of the model and implementation can be found in Appendix A.2.
The results are shown in Table 3. Note that the three experiments are performed with models having the same number of parameters. With the disentangled conversations predicted by our method, there is a significant performance gain compared with the baseline model. Although golden disentanglement brings the best performance, such annotations are usually expensive to acquire. With our method, a disentanglement scheme that helps better understand multi-party conversations can be obtained at no annotation cost.
Conclusion
This is the first work to investigate unsupervised conversation disentanglement with deep neural models. We propose a novel approach based on co-training which consists of a message-pair classifier and a session classifier. By iteratively updating the two classifiers with the help of each other, the proposed model attains a performance comparable to that of state-of-the-art supervised disentanglement methods. Experiments on a downstream task prove that our method can help better understand multi-party conversations. Our method can easily be adapted to different assumptions, and it can be extended to other low-resource scenarios such as semi-supervised settings, which we leave as future work.
A.1 Metric Explanation
We use four metrics in our experiments: Normalized Mutual Information (NMI), One-to-One Overlap (1-1), Loc_3 and Shen F score (Shen-F). NMI is a normalization of the mutual information, used to evaluate the agreement of two clusterings in the presence of class labels. 1-1 describes how well whole conversations can be extracted intact. Loc_3 counts agreements and disagreements within a context window of size 3. Shen-F calculates the F-score for each gold-system conversation pair, finds the maximum for each gold conversation, and averages these weighted by the size of the gold conversation.
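For NMI and the session-count MSE, off-the-shelf implementations suffice; a sketch is given below (1-1, Loc_3 and Shen-F require dedicated matching code and are omitted here).

```python
import numpy as np
from sklearn.metrics import normalized_mutual_info_score

def evaluate(pred_labels, gold_labels, pred_num_sessions, gold_num_sessions):
    """NMI over per-message session assignments plus MSE over the predicted
    number of sessions per conversation."""
    nmi = normalized_mutual_info_score(gold_labels, pred_labels)
    mse = float(np.mean((np.asarray(pred_num_sessions)
                         - np.asarray(gold_num_sessions)) ** 2))
    return nmi, mse
```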
A.2 Multi-party Response Selection
Given a conversation C = [m_1, ..., m_N] and a candidate message m, the goal of response selection is to decide whether m is a correct response to the conversation C.
We obtain the disentanglement scheme of C as T = {T_1, T_2, ..., T_K}, where session T_k = [m_{k_1}, ..., m_{k_{|T_k|}}]. For each session T_k, we encode each message m_{k_i} with a bidirectional LSTM network and a multilayer perceptron (MLP) (Equations 22 and 23). After obtaining the sentence embeddings of all messages in T_k as [v_{m_{k_1}}, ..., v_{m_{k_{|T_k|}}}], we adopt a self-attention mechanism (Yang et al., 2016) to compute the session embedding v_{T_k} by aggregating the information from the different messages, with trainable parameters w and b. In this way we acquire all the session representations {v_{T_1}, v_{T_2}, ..., v_{T_K}}. Meanwhile, we obtain the candidate message representation v_m with the same LSTM and MLP as in Equations 22-23. We follow prior work to aggregate the information from the different sessions with an attention mechanism, and the final matching score between the conversation and the message is then computed from the aggregated session representations and v_m.
Figure 3: The model structure incorporating disentangled sessions for the task of response selection.
The overall structure of the method incorporating the disentangled sessions is shown in Figure 3. For the vanilla model using conversation C without any disentanglement, we use the LSTM, MLP and self-attention of Equations 22-26 to obtain its vector representation v_C, and the matching score is then calculated by the dot product between v_C and v_m.
The whole model is trained to minimize the cross-entropy loss of both positive instances and negative instances. | 2021-09-08T01:15:54.741Z | 2021-09-07T00:00:00.000 | {
"year": 2021,
"sha1": "890a9c6345cd2d0ca95c76d870b7d3f754671bb5",
"oa_license": "CCBY",
"oa_url": "https://aclanthology.org/2021.emnlp-main.181.pdf",
"oa_status": "HYBRID",
"pdf_src": "ACL",
"pdf_hash": "9fbe7857191cfb04c6411cd335fb25ad48a4cbe6",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
119269834 | pes2o/s2orc | v3-fos-license | Interface exchange processes in LaAlO$_3$/SrTiO$_3$ induced by oxygen vacancies
Understanding the role of defects in oxide heterostructures is crucial for future materials control and functionalization. We hence study the impact of oxygen vacancies (OVs) at variable concentrations on orbital- and spin exchange in the LaAlO$_3$/SrTiO$_3$ interface by first principles many-body theory and real-space model-Hamiltonian techniques. Intricate interplay between Hubbard $U$ and Hund's coupling $J_{\rm H}$ for OV-induced correlated states is demonstrated. Orbital polarization towards an effective $e_g$ state with predominant local antiferromagnetic alignment on Ti sites near OVs is contrasted with $t_{2g}(xy)$ states with ferromagnetic tendencies in the defect-free regions. Different magnetic phases are identified, giving rise to distinct net-moment behavior at low and high OV concentrations. This provides a theoretical basis for prospective tailored magnetism by defect manipulation in oxide interfaces.
A deeper comprehension of the defect influence on the interface phenomenology is motivated not just by basic research [27]. Since the physical properties of oxide heterostructures, ranging from insulating and/or conducting to magnetic and/or superconducting, may be subtly tuned by the presence of impurities, promising engineering aspects emerge. Due to increasing control in detailed oxide-interface fabrication, new opportunities in high-response materials design are within reach [28]. In view of future spintronics devices, selective magnetic activation on the nano scale by versatile ways of defect creation [29] may soon become available.
Theoretical accounts of realistic LAO/STO heterostructures are challenging because of the unique combination of complexity from the basic interacting quantum perspective and the structural bulk-to-interface setting. Calculations based on density functional theory (DFT) using hybrid functionals or employing static correlation effects from a Hartree-Fock-like treated Hubbard Hamiltonian ('+U') can reveal some relevant aspects of the intriguing electronic structure [15][16][17][30][31][32][33][34]. But there are two serious drawbacks to such extended Kohn-Sham schemes. Broken-symmetry states, i.e. long-range magnetic and/or charge orders, have to be stabilized often right from the start to address correlation effects. Second, several many-body hallmarks such as paramagnetic local-moment behavior, low-energy quasiparticle (QP) formation, band narrowing, and interplay between QPs and Hubbard bands are not incorporated due to a lack of frequency dependence in the local electronic self-energy Σ. Invoking DFT+dynamical mean-field theory (DMFT) overcomes these deficiencies, but, especially for defect environments treated by larger cells [19,35], it remains numerically expensive. It is important to note that many-body physics beyond standard DFT-based approaches is here not only a detail, but essential for illuminating mechanisms of future technological use. As a further relevant aspect, first-principles supercell computations to reveal defect influences are generally not perfectly suited to the problem at hand. They are restricted by the choice of the (often too small) cell size and can introduce artifacts because of the introduced short-range defect ordering.
In this theoretical work we want to focus on the peculiar problem of OVs at the LAO/STO interface over a larger concentration range. Though a deeper relevance of these defects for the formation of the original quasi-two-dimensional electron liquid is still under debate, several first-principles calculations have shown that even the stoichiometric heterostructure is metallic from partly filled Ti(t_2g)-dominated bands at the Fermi level. Although there are other theoretical suggestions [32,[36][37][38][39][40], OVs provide a natural way to explain the occurrence of interface ferromagnetism [16,19,30,34]. In conventional DFT, an OV induces crystal-field lowered e_g-like impurity states on the neighboring Ti ions. Thinking intuitively, two limiting scenarios could apply depending on the concentration of vacancies in LAO/STO. Lin and Demkov [18,41] studied the dilute-defect limit with only a few oxygen defects, where e_g-like local moments on assumed Anderson/Kondo impurities may form. The latter can couple ferromagnetically via Ruderman-Kittel-Kasuya-Yosida (RKKY) interaction mediated by the itinerant t_2g electrons. On the other hand, in a dense-defect limit, the physics is closer to a minimal two-orbital (e_g, t_2g) Hubbard model near quarter filling [19,42]. In a recent DFT+DMFT work [19] it was shown that in this limit, emerging FM order in the interface TiO2 layer can indeed be explained by effective (Zener) double-exchange [43,44] between a vacancy-induced ẽ_g orbital and an in-plane t_2g(xy) orbital. Michaeli et al. [36] also proposed Zener exchange between localized and itinerant states, but without referring to OVs as the source for localization.
Although experimentally the influence of OVs in STO-based materials is documented by monitoring physical response with varying oxygen partial pressure [4,8,21], a good quantitative understanding of the OV concentration as well as definite information on the location with respect to the interface [2,4] is still lacking. To cope with the uncertainties in the number of OVs, we here perform investigations in a broad concentration range. In order to achieve this task, the correlated electronic structure is treated in a real-space framework allowing for in principle arbitrary vacancy configurations in number and arrangement. Depending on the concentration of OVs, we encounter different orbital- and spin-exchange regimes that shed light on the emerging FM order at the LAO/STO interface. Local and non-local processes subject to a subtle interplay between the Hubbard U and Hund's J_H govern the OV-induced magnetism. Throughout the paper all energies are given in electron volts.
The paper is organized as follows. To set the stage, we touch base with previous DFT+DMFT calculations [19] and start in Sec. II with a brief view on the impact of the Hund's coupling J H on the magnetic order in a n-type [45] LAO/STO interface in the limit of high OV concentration. In Sec. III A the correlated real-space modeling to describe different defect concentrations is introduced. Results in the dilute limit of a single oxygen vacancy as well as of two OVs in the interface are presented in Sec. III B. Section III C deals with the evolution of electron correlation and magnetism upon increasing the number of OVs from the dilute-to a dense-defect limit. The work closes with a discussion and summary in Sec. IV.
II. INFLUENCE OF JH IN THE DENSE-DEFECT LIMIT OF OXYGEN VACANCIES IN LAO/STO
A charge self-consistent DFT+DMFT study in a dense-defect limit of 25% OVs in the TiO2 interface layer was performed in Ref. 19. We define that limit by OVs exclusively located in the TiO2 interface layer, with each Ti ion having one OV in bonding distance (see Fig. 1). Here and throughout this paper Ti neighborhoods with more than one nearby OV are not considered.
Figure 1: Ions Ti1 and Ti2 form the basis in the √2×√2 primitive cell [19]; (b) minimal relevant Ti orbitals, with |ẽ_g⟩ ∼ 0.55|z²⟩ ± 0.84|x²−y²⟩ [19].
For the DFT part a mixed-basis pseudopotential framework is used and the DMFT impurity problems are solved by continuous-time quantum Monte Carlo (CT-QMC) [46][47][48][49]. A minimal correlated subspace was derived to consist of a two-orbital [ẽ_g, t_2g(xy)] manifold located at the interface Ti ions. Remaining t_2g orbital degrees of freedom are included in more distant layers from the interface. The local Coulomb interactions in the interacting Hamiltonian with Slater-Kanamori parametrization, i.e., the Hubbard U and Hund's exchange J_H, were set to U = 2.5 and J_H = 0.5, in line with other works [16]. A double-exchange-like (DE) mechanism is effective in stabilizing FM order within the interface. Thereby the spin polarization is triggered by inter-orbital scattering between the nearly-localized ẽ_g state and the less-localized xy state. Because of this exchange mechanism, the fewer and more-itinerant xy electrons exhibit stronger spin polarization. The self-consistently adapting Ti occupation results in a quarter filling (n_Ti1,Ti2 ∼ 1) of the two-orbital correlated subspace within the interface layer. In total, each OV releases 2 electrons, which add to the 0.5 electrons per Ti ion due to the polar-catastrophe avoidance within LAO/STO heterostructures. In the supercell treatment of the dense-defect limit, which notably includes several TiO2 layers, these two contributions adjust such that 1 electron per Ti settles in the interface TiO2 layer. Hence the latter layer formally consists of Ti 3+ (3d 1) ions in the given defect limit.
Since the size of the Hund's coupling is key to a double-exchange-like mechanism for the onset of ferromagnetism, we here present results from varying J_H, while keeping U = 2.5 fixed. For more details on the calculational setup see Ref. 19. Usually the value of J_H is much less modified by screening processes than the Hubbard U and therefore remains close to the magnitude in the free atom. Hence although for LAO/STO a Hund's coupling J_H = 0.5−0.6 is expected, we take the liberty of changing this value in order to assess its relevance for the underlying physics. Figure 2(a) shows the orbital- and spin-dependent occupations as a function of J_H in the magnetically ordered phase of the dense-defect limit. For J_H ≳ 0.4, ferromagnetism with local Ti moment m ∼ 0.2-0.4µ_B occurs. But when J_H becomes smaller and eventually tends to zero, antiferromagnetic (AFM) order between the nearest-neighbor (NN) titanium ions sets in. In the latter regime the orbital polarization in favor of the more localized ẽ_g level strongly increases towards nearly full polarization at J_H = 0. Figure 2(b) shows the k-integrated spectral function A_σ(ω) = Σ_k A_σ(k, ω) in the limiting cases J_H = 0, 0.7 for the whole supercell as well as for the correlated subspace of the Ti [ẽ_g, t_2g(xy)] states. Concerning the correlation strength, the Hund's coupling has a known model impact within a two-orbital system near quarter filling [50]. In the case of vanishing J_H stronger correlations occur, giving rise to a prominent lower Hubbard peak at ∼−1.3. This incoherent excitation is exclusively associated with the vacancy-induced ẽ_g state and resembles a similar feature in photoemission data [7,8,26]. Locally, hopping is nearly blocked on the interface Ti ions and residual metallicity is mainly provided by sites far from the interface. For rather large J_H the interface is well conductive, with a prominent QP peak at the Fermi level but an absent lower Hubbard peak. Coexistence of general metallicity with the latter incoherent excitation holds for reasonable intermediate values of J_H [19]. The AFM order for J_H = 0 is completely carried by the ẽ_g orbital, while the FM order for J_H = 0.7 is dominantly carried by the xy orbital. These findings underline the importance of DE processes in the formation of ferromagnetism at the LAO/STO interface in the dense-defect limit.
III. REAL-SPACE APPROACH TO OXYGEN VACANCIES IN LAO/STO
The scope is now broadened by addressing lower OV concentrations in the TiO 2 interface layer. Performing large-scale incommensurate studies from first principles of the interplay between defects, disorder and correlations for multi-orbital lattice is nowadays still too expensive. We thus need a minimal setting that is geared to carry the key physics of the OV-doped LAO/STO interface. Therefore a model-Hamiltonian approach is used, inspired by the results of the DFT+DMFT investigation in the previous section.
A. Model and methodology
A two-orbital Hubbard Hamiltonian based on the vacancy-induced effective ẽ_g state and the in-plane t_2g(xy) state is employed on a 10×10 square lattice with N_Ti = 100 titanium ions, mimicking the interface TiO2 layer (see Fig. 3). Only the Ti sublattice is treated explicitly and the oxygen degrees of freedom are integrated out within the chosen Hamiltonian form. Since the creation of OVs amounts to electron doping, explicit involvement of the remaining oxygen orbitals can be neglected to first approximation. Periodic boundary conditions are applied. Only intra-orbital NN hoppings are retained in the model.
Let us focus first on the lattice scenario in the dense-defect limit, where each Ti site is affected by a nearby OV, to build up the model characteristics. In line with Ref. 19, we choose t_ẽg = t_xy = 0.2 for the NN hoppings. In contrast to a different modeling by Pavlenko et al. [42], our hopping amplitudes from the projected-local-orbitals method [51] for higher OV concentrations are not strongly orbital dependent. From a noninteracting point of view, the crystal-field splitting ∆ between the xy level and the vacancy-induced low-energy ẽ_g level is the key model parameter. Note that ∆ is different from the usual octahedral crystal-field splitting that is already vital in the stoichiometric compound; the latter energy splitting does not occur in the present defect model. The value of ∆ is again taken from Ref. 19. The two-orbital Hubbard Hamiltonian of form (1) is written in terms of creation (annihilation) operators c (c†), with n = c†c, where i, j are site indices, α, α′ = β, xy label the orbitals, and σ = ↑, ↓ marks the spin projection. For the same Hubbard U, the strength of electronic correlations is usually weaker within slave-boson theory than within CT-QMC. If not otherwise stated, the Hubbard U is thus set to U = 3 in all calculations. The Hund's coupling is set to J_H = 0.55, again with variations to smaller/larger values to trace its relevance. For the dense-defect limit, ∆_i = ∆ and β = ẽ_g, i.e. the vacancy-induced ẽ_g crystal-field state is active on each Ti site.
In the latter limit every interface Ti ion has one neighboring OV and the given Hamiltonian is coherently applicable on each Ti site. However, when the defect number is reduced, titanium sites without a nearby OV appear, and at those Ti sites there are no low-energy ẽ_g states. In the stoichiometric case the e_g orbitals are strongly bound to O(2p) and contribute neither to states at the Fermi level nor to any possible local-moment formation. In order to keep the modeling simple, we make the following approximations when treating general defect cases in real space: (i) the Hamiltonian of form (1) is used throughout the lattice; (ii) the parametrization of ∆_i is ∆_i = ∆ with β = ẽ_g if an OV is nearby, and ∆_i = 0 with β = xz/yz if no OV is nearby (Eq. 2); and (iii) multiple OVs around a Ti site are prohibited. The interpretation of (ii) is as follows: without a nearby OV, the local low-energy Ti electrons are mainly of t_2g kind, and thus the former ẽ_g degree of freedom takes over the role of an additional effective t_2g orbital. This can be justified by a notable hybridization between ẽ_g and xz, yz in the dense-defect case [19]. We neither change hoppings for Ti sites with or without nearby OVs nor employ a concentration-dependent hopping modification. Such a more detailed parametrization is hard to fix, and the aim here is to work in a canonical two-orbital setting. The final modeling step concerns the concentration-dependent electron filling of a lattice with vacancy concentration c = N_vac/N_O, where N_vac is the number of OVs and N_O = 2N_Ti denotes the number of oxygen sites. Our electron count considers only the single TiO2 interface layer; explicit charge fluctuations to more distant layers across the interface are neglected. In the dense-defect limit of the supercell treatment, DFT+DMFT yields a filling of one electron at each Ti site in the interface TiO2 layer. Supercell DFT calculations for the defect-free interface result in an itinerant two-dimensional electron system with a count of 0.5 electrons per interface Ti [19], i.e. 50 electrons for the 100 Ti sites, in line with the polar-catastrophe avoidance. Putting these numbers together, we choose the linear-interpolation scheme n_tot = N_Ti/2 + N_vac for the total lattice electron count n_tot at intermediate defect levels.
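A schematic of how such a defect lattice, the ∆_i assignment of Eq. (2), and the electron count n_tot = N_Ti/2 + N_vac could be set up is sketched below in Python; the bond bookkeeping and the numerical value of ∆ (taken from Ref. 19 but not reproduced here) are illustrative assumptions.

```python
import random

def build_interface(n_side=10, n_vac=10, delta=1.0, seed=0):
    """Sketch of the 10x10 TiO2-interface model setup.

    Oxygen sites sit on the bonds between nearest-neighbour Ti sites (two per
    Ti).  A Ti site with a vacant neighbouring O gets the lowered e~_g
    crystal-field level, Delta_i = Delta; otherwise Delta_i = 0 (Eq. 2).
    Ti sites with more than one nearby OV are prohibited, and the total
    electron count follows n_tot = N_Ti/2 + N_vac.
    """
    rng = random.Random(seed)
    n_ti = n_side * n_side
    bonds = [(i, d) for i in range(n_ti) for d in ("east", "north")]

    def ti_neighbours(bond):
        i, d = bond
        x, y = i % n_side, i // n_side
        j = ((x + 1) % n_side + y * n_side) if d == "east" \
            else (x + ((y + 1) % n_side) * n_side)
        return i, j

    vacancies, near_ov = [], set()
    for bond in rng.sample(bonds, len(bonds)):
        a, b = ti_neighbours(bond)
        if a in near_ov or b in near_ov:
            continue                        # no Ti with more than one nearby OV
        vacancies.append(bond)
        near_ov.update((a, b))
        if len(vacancies) == n_vac:
            break

    delta_i = [delta if i in near_ov else 0.0 for i in range(n_ti)]
    n_tot = n_ti / 2 + len(vacancies)
    return vacancies, delta_i, n_tot
```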
The interacting multi-orbital problem is solved by a real-space formulation of the rotational-invariant slave-boson (RISB) mean-field method [50,[52][53][54][55]. It corresponds to single-site DMFT close to zero temperature with a simpler impurity solver than the CT-QMC, allowing for local self-energies Σ with a linear frequency dependence and static terms. Renormalized QPs as well as local multiplets can be monitored in the interacting regime. Explicit intersite self-energy terms are neglected, but due to the coupling of all sites in the RISB self-consistency cycle, effects of incoherency due to the distribution of defects are included. Our real-space approach is reminiscent of a single-orbital variant put into practice by Andrade et al. [56]. However, instead of simplified Kotliar-Ruckenstein slave bosons [57] we here use the full rotational-invariant extension and elaborate on a multi-orbital framework. Because of the model/method complexity no disorder averages are performed in this work. The calculations utilize 60 slave bosons and 17 Lagrange multipliers per site, 7700 variational parameters in total, with dimension 400×400 for the kinetic part of the Hamiltonian.
In the following, the ordered magnetic moment m, the orbital moment τ, the paramagnetic local spin moment m_PM and the orbital polarization ζ are defined in the standard way from the site-resolved orbital occupations and the local spin operator S, with Ō = ⟨O⟩ denoting the expectation value. Lattice-averaged values Q_lat of these quantities Q are computed as Q_lat = Q/N_Ti.
B. Dilute-defect limit
Since our parametrization is adjusted to the DFT+DMFT results within the dense-defect limit, the model performance in the opposite limit of one or two OVs is of primary interest. For the case of two vacancies, a long-distance accommodation is chosen. Furthermore, they are placed on differently directed bars on the effective square lattice to minimize coherency effects. Figure 4 summarizes the obtained real-space results for key quantities in the PM phase as well as with magnetic order. Each small square represents one Ti site, and the oxygen sites are located in the middle of the enclosing bars. As expected, the orbital polarization towards the ẽ_g level for the Ti sites near vacancies is substantial. Away from the defect, especially in the two-OV case, there are, however, also oscillations in the orbital moment. The local PM spin moment is largest close to the OVs, understood from the well-localized correlated ẽ_g electrons, but attains a minimum just at the second-nearest Ti sites.
When initializing the RISB calculations with FM order, at saddle-point the solutions indeed reveal a small net FM lattice moment. But throughout the lattice the site-resolved ordered magnetic moment m alternates. Interestingly, while m for the single-OV case is still rather small at the defect, already the two-OV case displays a strong increase in the absolute value of the near-defect Ti magnetic moment. Furthermore, whereas for the single vacancy the spins on both NN Ti sites align in the same direction, an AFM alignment emerges with two vacancies. This is surprising since the nearby Ti sites in both scenarios show similar strong filling of the localized e g state. Due to the strongly correlated regime, a significant kinetic exchange ∼t 2 /U of AFM type is expected. Therefore the distance between OVs has to matter for the near-defect spin coupling. Structures with defects at closer range favor the local AFM alignment. The nonlocal exchange with the alternating m throughout the lattice may also play a role in the local spin alignment. Thus there appears to be an intricate coupling between short-and longer-range exchange on the lattice.
Note that in general the overall system is rather susceptible to net magnetic order even with only a small number of defects and small resulting moments. The electron count per site reads n̄ = 0.51 (0.52) for one (two) OV(s), i.e. in principle the two-orbital Hubbard modeling is close to one-eighth filling. Thus though the stoichiometric interface in DFT+DMFT does not show magnetic order [19], it apparently only needs a low concentration of OVs to create the first magnetic instability in the correlated regime.
In order to investigate the distance dependence of the magnetic exchange in more detail, the averaged ordered magnetic moment at spacing r from the OV in the single-defect case is plotted in Fig. 5. The NN Ti site is located at a distance r_NN = 0.5 in units of the lattice constant a. Though the results are expected to depend on the chosen lattice size, there are significant variations within the shorter- and longer-range regions. For r ∼ 2.5 the moments predominantly change their sign and the spin coupling switches from FM- to AFM-like. Let us assume here a Fermi-wave-vector-modulated RKKY exchange with J(r) ∼ cos(2k_F r) in the low-density limit for the sea of conduction electrons [36]. This would correspond to k_F ∼ π/10 in reciprocal units, more or less commensurable with the noted one-eighth filling. But one has to keep in mind that the local ordered moment in our single-OV case is too small for a serious application of the standard RKKY picture. Still, for the record, there are indications of RKKY-like exchange taking place in the dilute-defect limit. Concerning the influence of Hund's coupling, not surprisingly, a larger J_H increases the spin moments since it tends to locally align the contributions from the different orbital degrees of freedom.
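The quoted correspondence can be checked directly: with k_F ≈ π/10, cos(2k_F r) changes sign at r = π/(4k_F) = 2.5 lattice constants, as the following snippet illustrates.

```python
import math

k_f = math.pi / 10          # in units of the inverse lattice constant
for r in [0.5, 1.5, 2.0, 2.4, 2.6, 3.5]:
    sign = math.copysign(1, math.cos(2 * k_f * r))   # sign of J(r) ~ cos(2 k_F r)
    print(f"r = {r:3.1f} a   sign(J) ~ {sign:+.0f}")
```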
C. Electron correlation and magnetism from the dilute- to the dense-defect limit
We now investigate the regime between the dilute-defect limit (c = 0.005, 0.01) and the dense-defect limit (c = 0.25). Additional configurations with randomly distributed OVs for intermediate concentrations are constructed, prohibiting local Ti neighborhoods with more than one vacancy in NN distance. In the following, site-resolved and site-averaged data is presented. It proves instructive not only to perform full lattice averages, but also to differentiate between the two groups of Ti sites, i.e., those with and those without nearby OVs. Figure 6 provides a measure of the correlation strength by displaying the paramagnetic QP weight Z = [1 − ∂ReΣ(ω)/∂ω]⁻¹ evaluated at ω = 0. Because of the low electron count at low OV concentrations, the lattice QP weight starts off with values close to the noninteracting limit Z_lat = 1. With more vacancies and increased electron doping, general electronic correlations become stronger, down to Z_lat ∼ 0.4 in the dense-defect case. For the considered electron-doping regimes the Hund's coupling J_H on average weakens correlations, in line with the DFT+DMFT results from Sec. II. For any doping, orbital- and/or site-selective Mott transitions remain absent. Not surprisingly, electrons in the ẽ_g orbitals are nonetheless more strongly correlated since they are more localized due to the lowered crystal field. While for the xy state an obvious site discrimination occurs only at higher doping, the ẽ_g electrons near OVs are already much heavier at the lowest doping. Interestingly, the xy electrons eventually become more strongly correlated away from near-OV titanium. Thus the vacancy influence on correlations is twofold: it locally strengthens the ẽ_g correlation and nonlocally fosters the xy correlations.
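For reference, the quasiparticle weight used here follows the standard relation Z = [1 − ∂ReΣ(ω)/∂ω|_{ω=0}]⁻¹; a small numerical sketch with a purely illustrative self-energy is given below.

```python
import numpy as np

def qp_weight(omega, re_sigma):
    """Quasiparticle weight Z = 1 / (1 - dReSigma/domega) at omega = 0,
    estimated from a finite-difference slope around zero frequency."""
    slope = np.gradient(re_sigma, omega)[np.argmin(np.abs(omega))]
    return 1.0 / (1.0 - slope)

# illustrative self-energy with a low-frequency slope of -1.5  ->  Z = 0.4
w = np.linspace(-0.5, 0.5, 101)
print(round(qp_weight(w, -1.5 * w), 2))
```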
As mentioned, the site-dependent correlation strength is of course related to the local orbital filling, shown in Fig. 7. For Ti sites with a nearby OV, the orbital polarization towards ẽ_g already at low dopings is obvious, even without local Coulomb interactions. Rather independent of the number of vacancies, a polarization ζ ∼ 2.8 holds for these sites in the noninteracting problem. With interactions this orbital polarization is substantially increased for all doping levels because of the crystal-field renormalization via the electronic self-energy. It still gradually decreases from ζ ∼ 6 in the dilute limit to ζ ∼ 4 in the dense limit. Due to the absence of local crystal fields, a subtle competition between both orbital degrees of freedom occurs at the remaining Ti sites. However, remember that in this region the 'ẽ_g' orbital inherits the role of an additional t_2g orbital in our modeling. The noninteracting case does not reveal any finite orbital moment for any OV concentration. By including Coulomb interactions, low doping from the dilute limit slightly disfavors the xy orbital. But interestingly, at c_p ∼ 0.08 the xy orbital takes over the lead in occupation. Thus also here a nonlocal impact of the OV-induced correlations shows up, breaking the orbital degeneracy at Ti sites away from the defects. Since in the dense-defect limit every Ti ion has one nearby OV, the class of defect-free Ti sites disappears and its nominal filling then, of course, vanishes. Intuitively, the doping-dependent change of orbital-filling hierarchy for this Ti class can be understood as follows. At very low dopings, the interacting system tries to put more electrons into the ẽ_g level, since there they can occasionally enjoy the lower crystal field close to an OV. Yet at some concentration with increased electron filling, it is more beneficial to put the electrons which like to visit defect-free regions into the overall less occupied xy levels, to minimize the Coulomb interaction and gain kinetic energy.
The real-space variation of the site-dependent orbital moment τ underlines these findings [see Fig. 7(b)]. Beyond the critical doping level c_p there is a qualitative change in the polarization of the 'interstitial' region towards xy. A shoulder in the lattice-averaged orbital moment τ_lat is located around c_p. Of course, every increase in OVs, with strong orbital polarization towards ẽ_g nearby, keeps τ_lat growing monotonically with c. By and large, the averaged influence of J_H on the orbital moment is as expected from multi-orbital Hubbard models, i.e. it works against the crystal field and tries to wash out orbital polarization [58].
The orbital- and site-resolved magnetic moment exhibits an even more intricate structure with respect to the vacancy concentration (cf. Fig. 8). Starting from the dilute-defect limit, the Ti sites with nearby OVs can be divided into two subclasses: one compensates its net moment by AFM alignment, the other displays FM alignment with smaller moments. This results in a still sizable net FM moment near c ∼ 0.05, for which the RKKY-like exchange noted in Sec. III B can be held responsible. Intriguingly, close to c_p this net FM moment vanishes, accompanied by the disappearance of the Ti-site subclass with FM alignment near OVs. In addition, the nearly exclusive AFM alignment at OVs above c_p comes along with zero spin polarization on the remaining Ti sites in the concentration range 0.08 < c < 0.13. Thus the RKKY-like driven FM phase is followed by a phase region of separated AFM pairs with zero net moment. Though, as discussed before, the xy orbital polarization in the 'interstitial' is already active, in this concentration range there is no exchange mechanism yet strong enough to spin polarize the Ti sites away from the defects. This changes for c ≳ c_DE ∼ 0.13, when eventually the latter sites build up a considerable xy-dominated magnetic moment. Effective non-local double-exchange between the more-localized ẽ_g electrons near OVs and the more-itinerant xy electrons away from OVs yields net ferromagnetism. Also, part of the AFM alignment of the near-defect Ti sites breaks up and these sites join in the contribution to the FM order. Interestingly, there is a minimum in the averaged ẽ_g magnetic moment on the way to the dense-defect limit. This may be explained by the competition between FM double-exchange and AFM kinetic exchange on Ti near OVs. The increasing electron filling re-strengthens the kinetic exchange close to the defect in the now FM-polarized environment, i.e. a re-formation of AFM pairs occurs. A larger Hund's coupling shifts that minimum to higher electron dopings [see Fig. 8(a)], i.e. the local ẽ_g occupation has to be closer to the kinetic-exchange-favored half-filling [cf. Fig. 7(a)] to overcome the J_H-supported DE processes. We also checked the generic influence of a smaller U = 2.5 and encountered an overall somewhat reduced magnetic moment and a weakening of the xy magnetism based on the nonlocal double-exchange. For the FM phases, our revealed lattice magnetic moment m_lat ∼ 0.1-0.2µ_B is in very good agreement with experimental findings [14]. So for moderate J_H the nonlocal polarization effect of OVs fosters a sizable xy-dominated magnetic moment on defect-free Ti sites above a concentration c_DE ∼ 0.13. This nonlocal double-exchange process extends the local DE mechanism from coherent systems, here active in the dense-defect limit. Note that the obtained magnetic moment m ∼ 0.2µ_B for J_H = 0.55 in the latter limit is in excellent agreement with the former DFT+DMFT results (cf. Fig. 2), highlighting the consistency of the model. However, there is a difference in the orbital contributions, since in the real-space RISB model the ẽ_g level is more strongly spin-polarized than xy. This may be explained by the fact that fluctuations and their correlations, relevant for assessing the DE processes in detail, are underestimated in simplified RISB compared to DMFT with a CT-QMC solver. It could also be that scattering in additional TiO2 layers, which is not included in the real-space modeling, supports the xy spin polarization.
Finally, we address spectral features at low energy, e.g., to clarify the different site and orbital contributions to the resulting metallicity. For selected dopings, Fig. 9 shows the total QP density of states (DOS) as well as the site- and orbital-resolved QP spectral weight within a small energy window around the Fermi level, both in the PM regime. The real-space/orbital resolution is naturally derived from analyzing the low-energy lattice eigenvectors of the renormalized kinetic Hamiltonian.
At the concentration just above c_p the total DOS displays a pseudogap-like feature at the Fermi level. From a Stoner argumentation, this reduced spectral weight at ε_F is in line with a vanishing of ferromagnetism in this concentration regime. In the case of higher OV numbers, i.e. higher electron dopings, the low-energy density of states rises again within the DE-FM region. As generally expected, the xy weight is mainly located in the defect-free regions, and the ẽ_g weight stems dominantly from defect-near regions. Close to the dilute-defect limit, the overall xy low-energy weight is lower, but it becomes dominant just above c_p. There the ẽ_g electrons appear most localized, eventually giving rise to the pair-AFM phase. For c ≳ c_DE ∼ 0.13 the ẽ_g low-energy contribution eventually again overcomes the xy one, marking the intriguing scattering regime of the effective double-exchange region.
IV. SUMMARY AND DISCUSSION
This work examined the key effects of the electronic structure reconstruction in the LAO/STO interface due to the presence of OVs. Different orbital and spin exchange processes are identified for varying OV concentrations. From the revealed and expected magnitudes of hoppings, crystal fields and Coulomb interactions, a straightforward picture of fully localized (Kondo-like) electrons near OVs is not evident. Charge self-consistent DFT+DMFT for LAO/STO supercells in a dense-defect limit yields a dichotomy between OV-induced heavier ẽ_g states and xy states with a higher QP weight. But itinerancy remains a common feature when including a finite Hund's coupling J_H. The latter not only triggers the correlation strength but fosters ferromagnetism above a threshold doping through effective nonlocal and local double-exchange processes. Additionally, the strongly correlated dense-defect limit is spectrally marked by a lower Hubbard peak of ẽ_g kind. For generic OV numbers in a 10×10 TiO2 model interface, our correlated real-space RISB approach elucidates further mechanisms due to the system separation into Ti sites near and away from OVs. Coulomb interactions are relevant to trigger intricate (nonlocal) orbital polarization processes, especially at the defect-free Ti sites.
Concerning magnetic order, the schematic phase diagram in Fig. 10 summarizes the main findings. Already in the dilute limit of very few vacancies, oscillations in the sign of the magnetic moments with distance from the OV can be detected. Near OVs two subclasses of Ti sites appear: one favors local AFM and the other local FM alignment. An effective RKKY-like exchange mechanism weakly spin polarizes the system, giving rise to a finite FM net moment. We note that the present RKKY ordering is not conventional in the sense that the involved local moments are comparatively small. Due to the delicate exchange mechanism, the expected Curie temperature T_c associated with this phase is rather low. Above an interface vacancy concentration c_p ∼ 0.08, the pairs of AFM-aligned Ti sites dominate the lattice and the spin polarization in the regions without OVs disappears, until at c_DE ∼ 0.13 the double-exchange becomes strong enough to polarize the 'interstitial', now with dominant xy character. In addition, the DE mechanism is effective in switching local AFM pairs to FM alignment. A more robust ferromagnetic order sets in, with a supposedly much larger T_c, and continues to be stable up to the dense-defect limit. The value of J_H influences the competition between the ẽ_g filling-controlled, re-strengthened AFM-like kinetic exchange and the DE processes near OVs in this novel DE-FM phase. We did not delve into the possible phase transitions among the three phases. Because of the overall defect system without a coherent local order parameter and straightforward symmetry distinction, first-order(-like) transitions with coexistence regions are expected (cf. Fig. 10). Our obtained behavior with increasing OV concentration shares several features with experimental results. It is generally in accordance with the found key dependence of magnetism on electron doping in STO-based materials. An interplay of AFM and FM tendencies has been recently identified by Bi et al. [28]. The different experimental results concerning the range of stability for LAO/STO ferromagnetism may be related to substantial differences in the number of vacancies in the respective samples. Whereas nearly stoichiometric interfaces are susceptible to the low-T_c RKKY-like FM phase [59,60], OV-rich samples can stabilize the DE-FM phase with the surprisingly high T_c near room temperature [13,28]. Of course, such different ferromagnetism may also emerge in very inhomogeneous samples. A theoretical resolution of phase separation on a larger lattice scale based on the present modeling is, however, numerically hard to achieve. Concerning the crucial concentration dependence, unique behavior connected to critical electron densities has been revealed in magnetotransport measurements [23]. In that respect it would be very interesting to trace in detail the ferromagnetism in applied magnetic field B within a group of samples with different OV concentrations, or to perform in-situ monitoring with oxygen pressure. For instance, the pair-AFM phase could be transformed to FM order by a larger field B.
We have shown that itinerancy, polarized orbital degrees of freedom and magnetic order naturally go together in LAO/STO interfaces with OVs. The realistic many-body physics remains challenging and needs further work. From theory, the treatment of the full Ti(3d) shell for the demanding supercell and real-space computations would allow for a more detailed orbital resolution. Including the impact of spin-orbit coupling and the competition of the revealed phases with superconductivity is a natural further modeling step. In addition, with even larger numerical effort by further extension to a cluster-RISB framework [50], one could introduce a two-site cluster in real space for each pair of Ti sites linked by an OV. Then possible singlet formation via intersite self-energies, in contrast to the here-described local pair-AFM state, would become describable. Note that we have limited our work to single-OV defects. An investigation of multi-OV configurations up to vacancy-clustered regions on very large real-space lattices will surely be appreciated future work. In this respect, experimental information on defect concentrations, locations, and arrangements beyond currently available data is greatly needed.
Generally, controlling the defect structure within interfaces of oxide heterostructures will be pivotal to eventually engineering technological applications with designated response behavior. Recent findings of room-temperature ferromagnetism and enhanced photocatalytic performance in Ti-defected TiO2 Anatase [61] are a further example for the relevance and potential of defect-induced oxide physics beyond the high-T_c cuprate paradigm. The here-documented sensitivity of the (magnetic) electronic structure to the OV concentration could be useful not just for designing LAO/STO-based magnetic order at elevated temperatures. Charge and magnetic writing, flexible spintronic switches, sensor technology, and possible multiferroic response are only a few further optional engineering directions. The idea of creating atom-resolved orbital and spin polarization by controlled defect manipulation within a well-defined interface region has so far not been translated into practicable device physics. Defect control of emerging interface phases could be complementary to the technological potential of adatom-driven surface phenomena. The real-space approach presented here is especially suited to simulate and direct such design and control of challenging correlated materials on a nano scale. | 2015-10-11T07:33:18.000Z | 2015-06-23T00:00:00.000 | {
"year": 2015,
"sha1": "fe691102752de38d0c35540f59bd5004f0a7825d",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1506.07066",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "fe691102752de38d0c35540f59bd5004f0a7825d",
"s2fieldsofstudy": [
"Materials Science",
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
3859352 | pes2o/s2orc | v3-fos-license | Pros and cons of different therapeutic antibody formats for recombinant antivenom development
Antibody technologies are being increasingly applied in the field of toxinology. Fuelled by the many advances in immunology, synthetic biology, and antibody research, different approaches and antibody formats are being investigated for the ability to neutralize animal toxins. These different molecular formats each have their own therapeutic characteristics. In this review, we provide an overview of the advances made in the development of toxin-targeting antibodies, and discuss the benefits and drawbacks of different antibody formats in relation to their ability to neutralize toxins, pharmacokinetic features, propensity to cause adverse reactions, formulation, and expression for research and development (R&D) purposes and large-scale manufacturing. A research trend seems to be emerging towards the use of human antibody formats as well as camelid heavy-domain antibody fragments due to their compatibility with the human immune system, beneficial therapeutic properties, and the ability to manufacture these molecules cost-effectively. Highlights: Comprehensive overview of reported antibodies against animal toxins. Pros and cons of antibody formats are discussed. Pharmacokinetics and pharmacodynamics of antibodies and their fragments. Trends in recombinant antivenom development are presented.
Introduction
The world fauna presents a vast variety of venomous animals including snakes, scorpions, spiders, bees, wasps, caterpillars, sea anemones, jellyfishes, lizards, fishes, and cone snails as examples.
Many of these animals can cause severe envenomings by their sting or bite, inflicting pain, tissue damage, and systemic pathologies, and may in some cases cause fatalities. The true number of these accidents is unknown, as even the World Health Organization (WHO) does not report epidemiological data for envenomings by all classes of venomous animals. However, it has been estimated that snakes alone cause 1.8 to 2.7 million envenomings each year, resulting in 81,000 to 138,000 deaths (Gutiérrez et al., 2017a), while scorpion stings result in 1.2 million envenomings per year, leading to around 3000 deaths (Chippaux and Goyffon, 2008). In particular, snakebite envenoming is classified by the WHO as a Neglected Tropical Disease (NTD), a group of diseases that prevail in tropical and subtropical parts of the world and mainly affect populations living in poverty with very limited access to healthcare.
The specific medical treatment for envenomings caused by animals is the use of antivenoms. Heterologous antivenom serotherapy is a century-old treatment described simultaneously by Césaire Auguste Phisalix, Gabriel Bertrand, and Albert Calmette in France in 1894 (Calmette, 1894; Phisalix and Bertrand, 1894). Later (1901), in Brazil, Vital Brazil Mineiro da Campanha demonstrated that antivenom specificity is essential for treating envenomings from particular species (Hawgood, 1992). Since that time, the use of antivenoms has saved countless lives. Nowadays, different heterologous antivenoms are manufactured in many countries with the aim of neutralizing venoms from diverse venomous animal species (Laustsen et al., 2016a). Supplies of these life-saving medicines are, however, still critically scarce in many regions (Brown and Landon, 2010), and efforts are being carried out to improve their availability and accessibility (Gutiérrez, 2012).
Although heterologous antivenoms are, to this date, the only effective treatment for snakebite envenomings, these therapeutic agents present some documented undesirable problems (Fig. 1): (i) Antivenoms can cause anaphylactic reactions, which can be either IgE-mediated or, more commonly, non-IgE-mediated (due to complement activation); both types are known as early adverse reactions (up to 24 h) (de Silva et al., 2011; Isbister et al., 2008a,b; León et al., 2013). (ii) Antivenoms are composed of whole immunoglobulins (IgGs) or antigen-binding fragments (F(ab')2s or Fabs) raised against whole venom(s) via immunization of a host animal (Laustsen et al., 2016a, 2016c; Rodríguez-Rodríguez et al., 2016). However, the majority of these antibodies are not directed towards medically relevant venom toxins (Laustsen et al., 2015), but are instead directed against antigens that the immunized animal has encountered during its life (environmental antigens, microorganisms, and parasites). As a consequence, most antivenoms carry a large portion of immunoglobulins that are not directed against venom components (about 70%) (Laustsen et al., 2016a; Segura et al., 2013). (iii) The large amount of antivenom antibodies combined with the elicited human anti-horse antibodies (IgGs and IgMs) may result in the generation of immune complexes (ICs) that have a long elimination half-life. This can trigger IC deposition in target tissues (such as blood vessels, glomeruli, and joints), mediating inflammation and promoting serum sickness, a late adverse reaction associated with type III hypersensitivity (1-2 weeks after antivenom therapy) (Cunningham et al., 1987; Descotes and Choquet-Kastylevsky, 2001).
Taken together with the high cost of antivenom production, which depends on both animal immune systems and procurement of venoms, a need for innovation within envenoming therapies exists. Several approaches, including immunization with DNA, synthetic epitope strings, or recombinant toxins, have been pursued (Alvarenga et al., 2002; Araujo et al., 2003; Harrison, 2004; Laustsen et al., 2016a, 2016c). However, despite a promising potential for eliminating the need for keeping venomous animals in captivity and "milking" them to obtain their venoms, these novel immunization techniques all retain the drawback of creating heterologous antivenoms with compromised compatibility with the human immune system. A more recent avenue is the development of recombinant antibodies and antibody fragments of camelid and/or human origin (Harrison et al., 2011; Laustsen et al., 2016a, 2016c; Pucca et al., 2011a,b, 2012; Richard et al., 2013). These molecules have very low immunogenicity and are easy to engineer using standard approaches that are well investigated in other fields. This allows for the design of more optimized envenoming therapies with better safety profiles and potentially higher efficacy, as such recombinant antibodies would be completely compatible with the human immune system. Furthermore, only therapeutically active antibodies targeting medically relevant toxins would be included in a novel recombinant antivenom (Laustsen et al., 2015). Additionally, it is projected that the future production of recombinant antivenoms based on mixtures of such antibodies may be cost-effective compared to traditional antivenom manufacturing methods (Laustsen et al., 2016b, 2017). However, although several antibody formats have been investigated for use in recombinant antivenoms (Fig. 2), no clear indication exists of which format represents the optimal molecular scaffold. In this review, we therefore aim to present all available data on the different antibody formats that have been investigated for neutralization of animal toxins, and to discuss their pros and cons in relation to toxin targeting in clinical scenarios.
Pharmacodynamics: ability to neutralize venom toxins
Pharmacodynamics (PD) plays a key role in the successful outcome of antivenom immunotherapy. Within the antivenom field, PD refers to the ability of therapeutic molecules to neutralize in vivo the specific venom toxins present in a given venom, which is one of the key determinants of antivenom efficacy. Independent of their antibody format, antivenoms derive their PD efficacy from high-affinity interactions between each antibody-toxin pair, although antibody stability is also considered important for neutralization capacity. In the simple situation involving only a single antibody and a single toxin, affinity is often reported using the dissociation constant, Kd. However, several factors complicate such measurements when comparing classical polyclonal antivenoms: (i) several different antivenom antibodies (with different specificities) may recognize the same or various epitopes in a single toxin; (ii) each individual antivenom antibody may recognize similar (homologous) toxins with different affinities; (iii) the concentration of each antibody that recognizes a given toxin is unknown. For these reasons, it is only feasible to measure the avidity (a measure of the combined strength of binding between a venom and multiple antibodies), also interpreted as functional affinity (Casewell et al., 2010; Vauquelin and Charlton, 2013). To our knowledge, no studies have systematically investigated the effect on avidity of enzymatic digestion of polyclonal IgGs into Fabs (or F(ab')2s). However, one may expect higher avidity for an IgG- or F(ab')2-based antivenom than for a Fab-based antivenom, due to the bivalent nature of the IgG and F(ab')2 formats. The two independent binding sites of these antibody formats provide a larger probability that a released toxin will be rebound by the antibody due to molecular proximity effects. Additionally, cross-linking to other toxin-antibody complexes can take place, making it less likely that a toxin escapes during transient dissociation of the complex (Rudnick and Adams, 2009). This cross-linking effect may potentially lend high therapeutic relevance to weaker interactions. Nevertheless, at least one Fab-based antivenom has proven to be at least as effective in the clinical setting as an IgG-based antivenom (Dart and McNally, 2001).
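To make the role of the dissociation constant concrete, the following minimal sketch (Python; the toxin and antibody concentrations are hypothetical illustration values, not data from the cited studies) solves the 1:1 binding equilibrium and shows why Kd values in the low nanomolar range or below are generally required to keep most of a circulating toxin bound at realistic antibody doses.

import math

def fraction_toxin_bound(toxin_total_nM: float, antibody_total_nM: float, kd_nM: float) -> float:
    """Equilibrium fraction of toxin bound for a 1:1 antibody-toxin interaction.

    Solves the antibody-toxin complex [AT] from the quadratic form of
        Kd = ([A]_free * [T]_free) / [AT]
    with mass balance [A]_total = [A]_free + [AT] and [T]_total = [T]_free + [AT].
    """
    a, t, kd = antibody_total_nM, toxin_total_nM, kd_nM
    s = a + t + kd
    complex_nM = (s - math.sqrt(s * s - 4.0 * a * t)) / 2.0
    return complex_nM / t

# Hypothetical scenario: 100 nM circulating toxin, 1 µM antibody binding sites.
for kd in (0.028, 1.0, 100.0, 10_000.0):   # 28 pM, 1 nM, 100 nM, 10 µM
    print(f"Kd = {kd:>9.3f} nM -> fraction of toxin bound = {fraction_toxin_bound(100, 1000, kd):.3f}")

With the assumed 100 nM toxin and 1 µM of antibody binding sites, a 28 pM interaction captures essentially all of the toxin, whereas a 10 µM interaction leaves roughly 90% of the toxin free.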
Another of the most commonly investigated antibody formats is the Fab format. Two different studies produced Fabs against snake toxins: in one study, a Fab targeting cardiotoxin from Naja nigricollis venom was developed (Guillon et al., 1986), and in another, a Fab was developed against β1-bungarotoxin from Bungarus multicinctus venom (Yang and Chan, 1999). Both Fabs were shown to neutralize in vitro and in vivo effects of the toxins, respectively. Four studies have developed monoclonal Fabs against spider and scorpion toxins. Of these, three exhibited neutralizing effects in vivo against spider toxins (Bugli et al., 2008) and scorpion toxins (Licea et al., 1996; Selisko et al., 2004), whereas the last study did not obtain neutralizing Fab antibodies (Aubrey et al., 2004). As previously mentioned, the scFv antibody format has also been widely studied. scFvs showing neutralization of lethality in vivo have been reported for both snake toxins (Cardoso et al., 2000; Castro et al., 2014; Kulkeaw et al., 2009; Lee et al., 2015; Meng et al., 1995; Oliveira et al., 2009; Roncolato et al., 2013) and scorpion toxins (Amaro et al., 2011; Devaux et al., 2001a; Hmila et al., 2012; Mousli et al., 1999; Riaño-Umbarila et al., 2011, 2016; Rodríguez-Rodríguez et al., 2016). To obtain more biochemical detail on scorpion toxin neutralizing capacity, electrophysiological studies using the two-electrode voltage clamp technique on Xenopus laevis frog oocytes showed that activation of sodium channels by the Tityus serrulatus venom toxins Ts1, Ts2, and Ts5 could be neutralized by human scFvs (Pucca et al., 2014).

Fig. 1. Drawbacks of heterologous antivenom therapy. (A1) Early adverse reactions (within 24 h) may result from de novo complement activation (non-IgE reactions) or (A2), in cases of previous exposure to animal antibodies, from IgE-mediated anaphylactic reactions. (B) Around 70% or more of the antivenom antibodies are not directed towards medically relevant venom toxins, so envenomed victims receive a larger than necessary dose of equine antibodies that have no therapeutic value but may cause adverse reactions. (C) The large amount of antivenom antibodies combined with elicited human anti-horse antibodies (IgGs and IgMs) may result in overproduction of immune complexes, which may be deposited in blood vessels, glomeruli, and joints, mediating inflammation and promoting serum sickness 1–2 weeks after antivenom therapy. For the sake of simplicity, the examples refer to equine antivenoms, but the same principles apply to antivenoms derived from other animal species.

Fig. 2. Schematic overview of the different antibody formats used in existing plasma-derived antivenoms and experimental recombinant antivenoms. IgG: whole IgG antibody. F(ab')2: pepsin-digested IgG antigen-specific region. Fab: papain-digested antigen-specific region. Diabody: non-covalent dimer of scFv fragments. scFv: single-chain variable fragment. VHH: single-domain antigen-specific fragment.
Also, scFvs capable of neutralizing myonecrosis have been reported for snake venom toxins (Oliveira et al., 2009; Roncolato et al., 2013; Tamarozzi et al., 2006). Other scFvs have been discovered that can neutralize melittin and phospholipase A2 (PLA2) from Africanized bees in vitro and prolong survival in vivo (see Table 4). However, scFvs that lack neutralizing abilities have also been reported (Juárez-González et al., 2005). In addition to assessing their neutralization potential, a few studies of scFv antibodies developed against snake venom toxins also include structural and sequence analyses to determine the regions involved in toxin binding (Kulkeaw et al., 2009; Lafaye et al., 1997; Meng et al., 1995). Several studies have involved two other small antibody formats, variable fragments of heavy-chain antibodies (VHHs) and dimers of scFvs (diabodies), used against snake and scorpion toxins. Of these, one VHH has shown neutralization of lethality against snake toxins (Richard et al., 2013), whereas both VHHs (Abderrazek et al., 2009; Hmila et al., 2008, 2012) and diabodies (di Tommaso et al., 2012; Rodríguez-Rodríguez et al., 2012) have shown neutralization of lethality against scorpion toxins. For IgGs (Bahraoui et al., 1988; Boulain et al., 1982; Charpentier et al., 1990; Fernandes et al., 2010; Iddon et al., 1988; Jia et al., 2000; Schneider et al., 2014; Trémeau et al., 1986), Fabs (Aubrey et al., 2004), scFvs (Juárez-González et al., 2005; Lafaye et al., 1997; Lee et al., 2015; Meng et al., 1995; Riaño-Umbarila et al., 2011, 2016; Rodríguez-Rodríguez et al., 2016), and VHHs (Abderrazek et al., 2009; Hmila et al., 2008; Richard et al., 2013; Stewart et al., 2007), some studies have determined the Kd between the antibodies and their respective toxins. The reported Kds range from 10 µM, the highest reported, for an scFv against crotoxin from the venom of the South American rattlesnake (Lafaye et al., 1997), down to 28 pM for an IgG developed against BmK AS-1 from the Chinese scorpion Buthus martensii Karsch (Jia et al., 2000). The reported Kds seem to corroborate the notion that high affinity frequently correlates with better neutralization ability, with neutralizing antibodies having Kds in the lower nanomolar range, as shown in Tables 1 and 3.
All reported monoclonal antibody formats that have been developed against snake, scorpion, spider, and bee venom toxins seem to neutralize toxins equally well (see Tables 1–4). No conclusion can thus be drawn on which format binds and neutralizes animal toxins best. However, one major challenge when comparing different antibody formats is that studies have employed very different approaches for assessing toxin neutralization. For better comparison of the neutralization potentials of different antibodies, it would be beneficial if a common approach could be employed, such as that recommended by the WHO for assessing the preclinical efficacy of antivenoms. Following this approach, in vivo neutralization is assessed by pre-incubation of toxin and antibody prior to injection into rodents, as this has been shown to yield the best reproducibility of results and allow for better comparability between antivenoms (Gutiérrez et al., 2017b). This protocol does not, however, mimic a real-life envenoming and subsequent treatment scenario, and antibodies showing neutralization potential when pre-incubated with the toxin prior to injection may not show efficacy when administered after venom injection (Charpentier et al., 1990). It would therefore be more relevant to evaluate antivenom neutralizing capacity in experiments involving independent administration of venoms and antibodies, i.e. 'rescue experiments'. Overall and unsurprisingly, no final conclusion can be drawn based purely on pharmacodynamics regarding which antibody format is optimal for toxin neutralization. To allow for better comparison between different antibody formats, it would be beneficial to test a single monoclonal antibody and its derived formats against the same toxin target; to date, no such studies have been performed within the field of toxinology.
Modes of neutralization
Understanding the modes of neutralization of antibodies may guide the design of novel antivenom components. Nonetheless, only limited efforts have been invested in this area, and it is therefore not possible to determine any general trend in how different antibody formats neutralize various animal toxins. However, studies of single antibodies targeting mainly snake venom toxins have proposed five different mechanisms to explain the mode of neutralization. Firstly, in direct inhibition, antibodies interfere with the site of interaction between the toxin and its target by competitive inhibition (Fig. 3A). This mechanism has been demonstrated for an anti-long-chain neurotoxin monoclonal antibody (Charpentier et al., 1990) and has been suggested as a general mode of neutralization of small neurotoxins by polyvalent antivenoms (Engmark et al., 2016, 2017a). Secondly, for enzymatic toxins, direct inhibition may be equivalent to blocking the catalytic site (Fig. 3B). Similar to direct inhibition, binding of a relatively large antibody (or fragment) to a region near the site of interaction may result in a steric hindrance effect (Fig. 3C); however, to the best of our knowledge, no record of such a situation is available, although it is structurally feasible. A third mechanism is allosteric inhibition (Fig. 4), where binding of the antibody induces a conformational change that makes a toxic site inaccessible or locks the toxin in a much less toxic, or even inactive, conformation. As an example, a polyvalent Crotalinae antivenom has been reported to recognize linear peptides mimicking a known allosteric site of snake venom serine proteases (Engmark et al., 2017b). Fourthly, antibodies can prevent the dissociation of toxin complexes responsible for forming the active toxins (Lafaye et al., 1997) (Fig. 5). Fifthly, even if an antibody blocks neither the active site of the toxin nor an allosteric site, the formation of toxin-antibody complexes may preclude the toxin from interacting with its target and may facilitate its elimination by the mononuclear phagocytic system (Gutiérrez and León, 2009).
On the more general level of venom toxicity, neutralization of single toxins by antibodies may reduce the clinical manifestations dramatically. This may be explained by the high individual toxicity and/or high concentration of a single toxin in a venom (Laustsen et al., 2015): when this toxin is neutralized, only weakly toxic or non-toxic components remain. However, abrogation of venom toxicity by a single antibody can also be caused by an interruption of synergistic effects between toxins, if a key toxin (or key component) is neutralized (Fig. 6). Toxin synergism is a well-known feature of certain snake venoms (Laustsen, 2016). Each venom toxin may exhibit low toxicity on its own, but when the individual toxins are combined in a whole venom, they amplify each other's effects, resulting in actions such as destabilization of oxidative phosphorylation and increased tissue necrosis (Gasanov et al., 2014). Consequently, understanding the toxicity and interplay of individual toxins, as well as the possible mechanisms of neutralization, is key to the rational design of future recombinant antivenoms. Therefore, despite the great biochemical complexity of snake venoms (Calvete, 2017) and other animal venoms, it is likely that, in some cases, the neutralization of a few key toxins by antibodies may result in a drastic reduction in overall venom-induced toxicity.
Pharmacokinetics: distribution and elimination of antibodies and antibody fragments
The efficacy of a therapeutic antibody is strongly influenced by the speed at which it reaches the site of action and the concentration it attains there, as well as by its residence time in the body and consequent elimination. Upon injection, the pharmacological effect of the antibody will vary according to its absorption, distribution, metabolism, and excretion (ADME), pharmacokinetic (PK) processes that depend largely on the structural and biophysical properties of the molecule (Deng et al., 2012; Liu, 2017; Mould and Green, 2010). The combination of these processes provides an antibody with a PK profile, generally described by parameters such as volume of distribution (Vd), bioavailability (F), clearance (CL), maximum plasma concentration (Cmax), and elimination half-life (t1/2), among others, which are calculated from the plasma concentration of the antibody measured over a period of time after its administration (Fan and de Lannoy, 2014).
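As a rough illustration of how these parameters interrelate, the minimal one-compartment sketch below (Python; the dose, volumes of distribution, and clearances are invented placeholder values chosen only to contrast an IgG-like profile with a Fab-like profile, not measurements from the cited PK studies) computes the plasma concentration after an intravenous bolus and the corresponding elimination half-life.

import math

def plasma_concentration(dose_mg: float, vd_L: float, cl_L_per_h: float, t_h: float) -> float:
    """One-compartment IV bolus model: C(t) = (dose / Vd) * exp(-(CL / Vd) * t)."""
    c0 = dose_mg / vd_L                # initial plasma concentration (mg/L)
    ke = cl_L_per_h / vd_L             # first-order elimination rate constant (1/h)
    return c0 * math.exp(-ke * t_h)

def half_life_h(vd_L: float, cl_L_per_h: float) -> float:
    """Elimination half-life: t1/2 = ln(2) * Vd / CL."""
    return math.log(2) * vd_L / cl_L_per_h

# Placeholder profiles: a large IgG-like molecule (small Vd, slow clearance)
# versus a small Fab-like fragment (large Vd, fast renal clearance).
profiles = {
    "IgG-like": {"vd_L": 5.0, "cl_L_per_h": 0.01},
    "Fab-like": {"vd_L": 20.0, "cl_L_per_h": 1.5},
}
for name, p in profiles.items():
    print(f"{name}: t1/2 ≈ {half_life_h(**p):6.1f} h, "
          f"C at 24 h after a 500 mg dose ≈ {plasma_concentration(500, p['vd_L'], p['cl_L_per_h'], 24):.3f} mg/L")

Under these assumed values, the IgG-like profile yields a half-life of roughly two weeks, whereas the Fab-like profile is cleared within hours, mirroring the qualitative differences described below.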
Generally, for antibodies and their fragments, there is a strong relationship between the molecular mass of the molecule and its distribution and elimination characteristics. The PK profiles of recombinant monoclonal IgG antibodies used for therapeutic purposes (isotypes IgG1, IgG2, and IgG4) are characterized by limited tissue distribution and long elimination half-lives (Fig. 7A–B), displaying either linear or non-linear (dose-dependent) profiles (Kamath, 2016; Keizer et al., 2010; Tabrizi et al., 2006). Distribution of IgGs, which involves extravasation to the interstitial space and elimination from tissue, occurs mainly by convection, as diffusion across endothelial cells is very slow due to the large size and hydrophilicity of the molecule (Lobo et al., 2004). Their large size also prevents IgGs from being enzymatically metabolized by cytochrome P450 (Mould and Green, 2010) and from being cleared by the kidneys (glomerular filtration cut-off ~50 kDa) (Wang et al., 2008). Instead, the main route for their elimination is intracellular catabolism in the lysosomes, upon fluid-phase endocytosis (pinocytosis) or receptor-mediated endocytosis, including binding to Fcγ receptors (FcγR) expressed by phagocytic cells (Keizer et al., 2010; Tabrizi et al., 2006; Wang et al., 2008). However, a major fraction of the internalized IgGs is rescued from rapid degradation through binding to the neonatal Fc receptor (FcRn) of cells in the mononuclear phagocytic system (Fig. 7C), which transports the IgGs back to the cell surface and facilitates their release into the extracellular fluid (Brambell et al., 1964; Junghans, 1997); this saturable, pH-dependent recycling mechanism confers a long half-life (21–28 days) on human IgGs (Keizer et al., 2010; Raghavan et al., 1995; Tabrizi et al., 2006; Wang et al., 2008). Of note, the affinity of IgGs for FcRn is species dependent (Ober et al., 2001). Human IgGs have a higher affinity for human FcRn than chimeric IgGs and murine IgGs, which explains the shorter elimination half-lives of the latter in humans (8–10 days and 1–3 days, respectively) (Deng et al., 2012; Tabrizi et al., 2006). In contrast to whole IgGs, the smaller sizes of antibody fragments, such as Fabs, scFvs (monomers and dimers), VHHs, and minibodies, account for a larger volume of distribution and a faster rate of tissue penetration (Harmsen and De Haard, 2007; Keizer et al., 2010; Wu et al., 1996, 1999). Due to the lack of an Fc region, these antibody fragments are unable to bind FcRn. Also associated with their small size, the main route for their clearance is glomerular filtration by the kidneys (Lobo et al., 2004; Tabrizi et al., 2006). Owing to these features, these antibody fragments possess considerably shorter half-lives (0.5–30 h) (Tabrizi et al., 2006). F(ab')2 fragments, also devoid of an Fc region, possess a shorter half-life than IgGs, since recycling by the FcRn rescue mechanism is not possible (Tabrizi et al., 2006). However, their distribution profile resembles that of IgGs, and similarly, their elimination occurs mainly by non-renal mechanisms, as their size exceeds the cut-off for renal filtration (Seifert and Boyer, 2001; Tabrizi et al., 2006).
In addition to the structural and biophysical properties of the antibody molecule, the PK of IgGs and their fragments can be influenced by specific patient conditions, such as age, gender, health status (renal and hepatic function), or concomitant administration of other drugs (Deng et al., 2012; Tabrizi et al., 2006). The interaction between the antibody and the antigen may also affect PK (Bauer et al., 1999; Meijer et al., 2002), as may immune responses raised against the administered antibody.

Fig. 4. Allosteric inhibition. (1) A toxin binds to its target, resulting in a toxic effect. (2) The antibody binds to a distal site of the toxin, which induces conformational changes, resulting in a less functional or non-functional toxin.
In agreement with the PK parameters displayed by whole recombinant IgGs and their fragments used as therapeutics, kinetic studies of plasma-derived antivenoms have shown the same strong relationship between the molecular mass of the molecules and their PK profiles (Gutiérrez et al., 2003). Antivenoms based on IgGs have low volumes of distribution, long elimination half-lives, and a high number of cycles through the interstitial spaces (Ho et al., 1990; Ismail et al., 1998; Ismail and Abd-Elsalam, 1996). Conversely, antivenoms based on Fab fragments, which are much smaller than IgGs, have larger volumes of distribution, diffuse faster into extravascular compartments, and have shorter elimination half-lives (Ariaratnam et al., 1999, 2001; Brvar et al., 2017; Meyer et al., 1997; Rivière et al., 1997; Vázquez et al., 2005, 2010a,b). A negative consequence of the short elimination half-life of Fab fragments is a higher incidence of recurrent peaks in plasma venom levels, and therefore of recurrent envenoming symptoms, compared to IgG and F(ab')2 antivenoms. This is most probably due to the rapid clearance of Fab fragments from the circulation, which impedes the neutralization of venom toxins released from the bite site at later stages of the envenoming (Boyer et al., 2013; Gutiérrez et al., 2003; Seifert and Boyer, 2001). In accordance with their intermediate molecular mass, the PK profile of F(ab')2-based antivenoms constitutes a middle point between those of IgGs and Fab fragments (Boyer et al., 2013; Gutiérrez et al., 2003; Ho et al., 1990; Isbister et al., 2015; Kurtović et al., 2016; Maung-Maung-Thwin et al., 1988; Pépin-Covatta et al., 1996; Sevcik et al., 2004). In general, however, due to the heterologous nature of antivenoms derived from horse or sheep plasma, these antibodies are eliminated faster than would be expected for a homologous human antibody (Scherrmann, 1994). The molecular mechanisms behind this observation are not fully understood, but could be the result of impeded binding to FcRn and/or the development of anti-antibodies by the patient's immune system (Tabrizi et al., 2006; Wang et al., 2008).
While some information on the PK of animal plasma-derived antivenoms is available, only two studies have reported the PK profiles of recombinant antibody fragments targeting animal toxins. Aubrey et al. investigated the in vivo kinetics of a homodimeric diabody (50 kDa) derived from the anti-AahI murine 9C2 antibody after intraperitoneal injection into mice (Aubrey et al., 2003). The diabody displayed rapid diffusion, being detected in plasma only 5 min after its administration. Consequently, the maximum concentration (Cmax) was reached shortly thereafter (30–60 min post-injection). High concentrations (>50% of Cmax) were detected for at least 6 h, and complete clearance of the diabody took approximately 24–32 h (Aubrey et al., 2003). In the other study, Hmila et al. compared the distribution and kinetics of two nanobodies (NbAahIF12 and NbAahII10, 14 kDa each) and a bispecific nanobody construct (NbF12-10, 29 kDa) to those of an F(ab')2-based (110 kDa) scorpion antivenom after intravenous administration in mice and rats (Hmila et al., 2012). In vivo monitoring of radiolabeled nanobodies and F(ab')2 fragments revealed that the nanobody-based molecules were cleared from blood faster than the F(ab')2 antivenom, most likely due to the lower molecular mass of the nanobodies. Additionally, a major difference was observed in the organ accumulation of the antitoxins: monovalent nanobodies and the bispecific construct accumulated mainly in the kidneys, whereas F(ab')2 fragments were predominantly retained in the liver (Hmila et al., 2012).
PD has implications for the PK profiles of antibodies, and this in turn has implications for efficacy, which highlights the importance of choosing the right antibody format for the rational development of novel antivenoms. Often, venoms consist of complex mixtures containing both low and high molecular mass toxins, acting locally and/or systemically (Fig. 7D). On one hand, antivenoms should ideally provide antitoxins able to rapidly reach locally acting toxins and toxins that quickly reach systemic extravascular targets, such as low molecular mass neurotoxins. On the other hand, antivenoms should also provide antitoxins with extended half-lives that remain in circulation for prolonged periods of time (many hours to days). This will allow the antitoxins with long half-lives to intercept and neutralize systemically acting toxins in the circulatory system before these toxins reach their target sites (Gutiérrez et al., 2003). Thus, an antivenom comprising a mixture of different antibody formats could be necessary to target all medically relevant toxins present in complex venoms (Gutiérrez et al., 2003). Regarding the route of administration, notable differences have been found when comparing intravenous with intramuscular administration. Intravenous injection delivers the antibodies directly to the bloodstream, avoiding the absorption step and providing complete bioavailability (Liu, 2017). Hence, it is considered the preferred route of administration for antivenoms in a hospital setting. In contrast, intramuscularly injected antivenoms have shown poor efficacy due to slow absorption and reduced bioavailability of the antibodies or their fragments (Isbister et al., 2008a,b; Pépin-Covatta et al., 1995, 1996; Vázquez et al., 2010a,b). Nevertheless, the intramuscular route could still be considered an option, as antivenoms occasionally need to be administered in the field (Warrell, 1995). Although the PK of a specific antibody format may be predicted from the general distribution and elimination characteristics typical for its molecular mass, more PK studies are required to increase the current knowledge and to guide the development of recombinant antivenoms based on an in-depth understanding of the PK-PD relationship of each antibody format on an individual case basis. Additionally, favorable PK-PD for a given antibody format may very well depend on the toxicokinetics of the target toxin(s).

Fig. 6 (legend, in part). (2) Antibody binding to one of the toxins results in milder toxic (or no) effects due to disruption of synergism.
Propensity for adverse reactions of different antibody formats
Adverse reactions to animal plasma-derived antivenoms are relatively common, with 6–59% of patients experiencing early-onset reactions, depending on the particular antivenom being used. In rare cases, administration of animal plasma-derived antivenoms may result in severe, life-threatening anaphylaxis (Schaeffer et al., 2012; Stone et al., 2013). Further, 5–23% of treated patients experience delayed-onset serum sickness (typically observed 1–2 weeks after exposure), with symptoms such as high fever, rash, urticaria, and arthralgia (LoVecchio et al., 2003). The propensity of an animal-derived antivenom to generate early and late adverse reactions depends on the microbiological and physicochemical quality of the product, its format (i.e. Fab, F(ab')2, or IgG), and the total amount of protein injected in a treatment (León et al., 2013). A relatively low rate of early adverse reactions (5–7%) has been reported for a highly purified Fab antivenom in use in the USA, which includes an affinity chromatography purification step in its manufacture (Cannon et al., 2008; Farrar et al., 2012). In comparison, F(ab')2 and IgG antivenoms of good physicochemical quality induce early adverse reactions in 13–26% of treated patients (see the review by León et al., 2013). In these cases, the majority of such reactions are mild, consisting mostly of cutaneous manifestations. In contrast, other antivenoms of poor physicochemical quality, or containing pyrogens, are known to induce rates of adverse reactions as high as 80%, with some of these reactions being severe (León et al., 2013). Administration of animal-derived antivenoms also induces late adverse reactions, a type III hypersensitivity phenomenon associated with serum sickness. This occurs approximately 1–2 weeks after antivenom infusion as a consequence of the generation of human antibodies against animal IgGs and the consequent formation of antigen-antibody complexes, which exert effects in the microvasculature and the joints, causing arthralgia, fever, and urticaria (Gutiérrez et al., 2017a). The incidence of serum sickness after antivenom administration has not been analyzed in depth, although it seems to depend on the total load of foreign protein administered (LoVecchio et al., 2003) and on the format of the antivenom preparation. In particular, Fab antivenoms have been shown to induce a much lower incidence of serum sickness than IgG and F(ab')2 antivenoms (Lavonas et al., 2013; León et al., 2013).

Fig. 7. The influence of the antibody format on pharmacokinetics in relation to toxicokinetics. The distribution of the larger IgG format is largely restricted to the intravascular compartment, where it is effective in neutralizing systemically acting toxins over many days due to its long elimination half-life. Smaller antibody fragments may neutralize toxins in circulation, toxins present in or around the bite wound, and toxins that have reached systemic targets in tissues (e.g. neuromuscular junctions), owing to their larger volumes of distribution, which allow them to penetrate tissue compartments more effectively; however, antibody fragments have shorter elimination half-lives. Systemically acting toxins are represented by scorpion stings and elapid snakebites, whereas viper snakebites represent locally and systemically acting toxins, although all three types of bite/sting involve venoms containing both locally and systemically acting toxins.
A detailed account of the studies reporting incidences of adverse reactions to animal-derived antivenoms can be found elsewhere (see reviews by Descotes (2009) and Hansel et al. (2010)).
There are currently no antivenoms in clinical use that consist of monoclonal antibodies or of any other type of recombinant product. Information on the safety of other biotherapeutics based on monoclonal antibodies may instead be used to shed light on the potential challenges that recombinant antivenoms may face when they become available in the future. Murine monoclonal antibodies have been shown to induce early and late adverse reactions in humans (see reviews by Descotes (2009) and Hansel et al. (2010)), owing to their heterologous nature, including anaphylactic reactions in a few cases, as well as serum sickness. As a result, biotherapeutics based on murine antibodies are no longer put into development or entered into clinical trials. The propensity to generate adverse reactions has, however, been greatly reduced by the generation of chimeric, humanized, and fully human monoclonal antibodies, although it is still possible to generate anti-idiotype antibodies against such products (Hansel et al., 2010). For example, a humanized monoclonal antibody against an integrin has been reported to induce early adverse reactions (urticaria) in 4% of patients (Ransohoff, 2007). Despite these observations, the introduction of humanized or fully human monoclonal antibodies in the development of new antivenoms is likely to greatly reduce the incidence of the early and late adverse reactions currently observed for animal plasma-derived antivenoms, owing to the greater compatibility of these products with the human immune system (Laustsen, n.d.). Likewise, the fact that antivenoms are usually used only once in a single individual further reduces the likelihood of adverse reactions developing. From a theoretical viewpoint, it is also probable that recombinant antivenom antibodies of low molecular mass formats, such as Fabs, scFvs, VHHs, diabodies, bivalent constructs, and other binding protein formats, will be less prone to induce adverse reactions than whole IgG preparations. However, this should be carefully balanced against other aspects such as the PK profile and the possible role of the Fc part of the immunoglobulin in its biological action. Finally, optimization of antibody glycosylation to better resemble human patterns may lead to recombinant antivenom formats with even better compatibility with the human immune system. All these issues demand renewed research vis-à-vis the current upsurge in the development of recombinant antivenoms.
Formulation
Owing to the proteinaceous nature of antibodies, antivenoms face many of the generic issues commonly related to high-protein-concentration solutions. Antivenom antibodies are especially susceptible to degradation when exposed to heat, freezing, light, pH extremes, shear stress, and agitation, as well as to some metals and organic solvents. Heat stability is particularly important for long-term storage in tropical regions, where most envenomings occur (Gutiérrez et al., 2006; Warrell, 2007). Liquid antivenom should generally be stored at 2–8 °C, but this requirement is not always possible to fulfil in rural areas, where the cold chain is often interrupted or non-existent. When stored at room temperature, liquid formulations develop turbidity over time, indicating physical instability and a decrease in biological activity (Segura et al., 2009). To overcome this issue, many antivenom manufacturers lyophilize their antivenoms, although this adds to the cost of manufacture (Segura et al., 2009). As an example, two studies on the EchiTAb-Plus-ICP antivenom used to treat snakebite victims in rural sub-Saharan Africa attempted to determine the optimal state for antivenom stability. These studies indicated that freeze-drying offered the best thermal stability compared to a liquid formulation without stabilizer and a liquid formulation stabilized with sorbitol (Herrera et al., 2014, 2017). Most current research efforts are, however, focused on finding a stable liquid formulation that can be stored at room temperature. As an example, Solano et al. (2012) described an acetate-buffered (pH 4.0) formulation that stabilized antivenoms for at least six months at room temperature without the presence of a protective carbohydrate excipient (Solano et al., 2012).
Some antivenom formulation additives have been reported to have varying levels of effect depending on the combination of additive molecules used and on whether the additives are added to liquid or lyophilized formulations. In a study comparing the stabilizing effects of sorbitol, sucrose, and mannitol in lyophilized antivenom, Herrera et al. (2014) showed that antivenoms lyophilized with mannitol lost efficacy against the lethal effects of B. asper venom (Herrera et al., 2014). Furthermore, it was shown that a 5% (w:v) sucrose formulation exhibited the best stability, indicating that sucrose could perform better as a stabilizer than mannitol and sorbitol in lyophilized antivenoms. Of the additives used in antivenom formulation, the most commonly used are phenol, cresol, and sodium chloride (see Table 5). These additives stabilize and preserve the antivenom by preventing aggregation of IgGs and/or antibody fragments, by providing an isotonic solution, and by having antifungal and bacteriostatic effects (Rodrigues-Silva et al., 1999; Segura et al., 2009). Preventing aggregation of therapeutic antibodies is crucial, as aggregation may significantly contribute to their immunogenicity (Rosenberg, 2006; van Beers et al., 2010).
Other, less conventional formulations explored at the experimental level focus on enhancing neutralization ability through conjugation to protein nanoparticles and/or facilitating administration through encapsulation. Renu et al. (2014) used soy protein nanoparticles conjugated to F(ab')2 fragments to optimize the neutralizing effects of a Bungarus caeruleus antivenom (Renu et al., 2014). They produced the smallest self-stabilized soy protein nanoparticles reported within antivenom research, which displayed improved neutralization capacity against toxins from B. caeruleus venom at a much lower concentration compared to the non-conjugated antivenom. The conjugated antivenom particles also showed enhanced thermal stability (Renu et al., 2014).
Certain formulations could also allow for alternative routes of antivenom administration. These formulations are being explored to allow non-physicians to aid snakebite victims before the victim reaches a clinic or hospital. Currently, all antivenoms are administered by intravenous bolus injection and/or intravenous infusion (Ahmed et al., 2008). Compared to other common routes of administration (e.g. the intramuscular route), intravenous injection offers the fastest route to maximum antivenom concentration in the circulatory system (Gutiérrez et al., 2003), although rapid infusion of foreign antivenom proteins may result in the adverse reactions often experienced by patients upon antivenom administration. An approach to minimizing the adverse effects of antivenom that has only been explored once experimentally involves oral administration of alginate-encapsulated antivenom (Bhattacharya et al., 2014). However, even if antibodies can be properly formulated for oral administration, oral delivery of an emergency medicine will come at a cost to bioavailability, and the delayed arrival of antibodies may not be optimal for efficient toxin neutralization. Thus, even if such formulations may one day be useful in the field, they will have to be supplemented with intravenously administered antivenom once the snakebite victim reaches a clinic or hospital.
In conclusion, the majority of antivenoms currently on the market are formulated with one or more of the excipients phenol, cresol, and sodium chloride, with glycine in some products and, in the case of freeze-dried antivenoms, sucrose. Most of the available data on antivenom formulation are based on plasma-derived equine or ovine polyclonal F(ab')2s, possibly due to the early stage of development of recombinant antivenoms based on monoclonal antibodies. It seems likely that antivenom research will increasingly focus on more modern approaches involving the use of recombinant human antibodies (Laustsen, n.d.; Laustsen et al., 2017). With such a shift, more research is needed to develop and optimize formulations of mixtures of monoclonal antibodies. These future efforts will fortunately not start from scratch: in other fields, (mixtures of) human monoclonal antibodies have been used extensively, and existing formulation solutions from these fields are likely to also be applicable to recombinant antivenoms (Heijtink et al., 1999; Robak et al., 2012).
Expression of different antibody formats
To enable large-scale production of novel antivenoms consisting of recombinant antibodies or antibody fragments, a suitable expression system is essential. To the best of our knowledge, no monoclonal antibody or antibody fragment targeting an animal toxin has so far been produced at larger scale. Several research efforts have, however, employed different expression hosts, which are reviewed in the following for their suitability for research and development (R&D) purposes and for scale-up.
Key differences between eukaryotes and prokaryotes in antibody expression
Antibodies and antibody fragments can be expressed in either prokaryotic or eukaryotic cells, depending on the structure of the protein product and the application of the desired antibody fragment. These cell types are inherently different and thus offer different advantages and disadvantages in relation to antibody expression (Berlec and Strukelj, 2013).
Advantages of prokaryotic expression of antibodies include low media costs and ease of handling. For these reasons, E. coli has been a much-used organism for expression of several different antibody formats within antivenom research. However, the inability of prokaryotes to glycosylate antibodies limits the range of antibody formats that can be expressed with these systems. Therefore, E. coli has mainly been used to produce diabodies, scFvs, Fabs, and VHHs (see Table 6). Furthermore, the tendency to form incorrectly folded proteins and insoluble aggregates in the reducing environment of the bacterial cytoplasm decreases expression yields. Other prokaryotes that could be more promising than E. coli for production of biotherapeutics are strains of the genus Bacillus, which have a long track record of successful use for expression of both heterologous and homologous proteins (Lakowitz et al., 2017). These have, however, not yet been employed within the field of antivenom.
In contrast to prokaryotic cells, mammalian cells are capable of performing more advanced post-translational modifications, such as glycosylation, and possess more complex cellular machinery for folding and secretion (Chadd and Chamow, 2001; Frenzel et al., 2013). Mammalian cells are capable of yielding more diverse antibody formats with lower immunogenicity (Chadd and Chamow, 2001; Frenzel et al., 2013) and are the primary production system for full IgG molecules (Walsh, 2014). Also, mammalian cells typically deliver close to 100% fully functional protein, in contrast to prokaryotic expression systems, where the yield of active protein may be significantly lower than the overall protein yield. However, drawbacks of mammalian cells include the high cost of media and consumables, difficulty of handling, and (arguably) slow growth rate. Productivity has, however, been increased significantly in recent decades by optimization of protein expression levels for many of the mammalian cell lines employed in industrial processes, which compensates for the slower growth of mammalian cells compared to prokaryotes. Previously, pathogenic contamination of cell cultures also posed a threat, but modern protocols for avoiding such contamination limit this issue.
scFvs are typically expressed in E. coli
The use of E. coli as an expression host appears to be the most common within antivenom research, not only for scFvs, but also for other antibody fragments (see Table 6). In 1999, Mousli et al. expressed an scFv in E. coli capable of neutralizing the AahII toxin of the desert scorpion Androctonus australis hector (Mousli et al., 1999). More recently, scFv expression in E. coli cells has been optimized, leading to improved expression yields. As an example, signal peptides that localize antibody fragments to the oxidative environment of the periplasm are often added to the expression plasmid (Amaro et al., 2011; Juárez-González et al., 2005; Juste et al., 2007; Pucca et al., 2012; Roncolato et al., 2013). The oxidative environment allows for the formation of disulphide bonds, which is normally unattainable in the reducing cytoplasmic environment of E. coli, wherein expression tends to lead to non-functional aggregates. Research groups outside the field of antivenom have attempted different strategies, as alternatives to localizing antibodies to the periplasm, to achieve a higher degree of correct folding. These strategies include: (i) denaturation and refolding of cytoplasmic, aggregated antibodies, (ii) increased expression of cytoplasmic chaperones in addition to altering the cytoplasmic environment by mutating reductases, (iii) creating cysteine-free antibodies, and (iv) expression of cytoplasmic oxidases (Gaciarz et al., 2016; Veggiani and De Marco, 2011). These methods have been employed with varying degrees of success. Denaturation and refolding often does not prove efficient, whereas increasing the expression of chaperones and of cytoplasmic oxidases has successfully increased yields for Fab and VHH fragments, respectively (Gaciarz et al., 2016).
Engineering of expression vectors, such as optimization of codons, promoter, Shine-Dalgarno sequence, leader sequence, and transcript stability, can further improve scFv expression in E. coli. Furthermore, cultivation of E. coli in bioreactors instead of shake flasks has in some cases significantly increased scFv yields. As an example of shake flask cultivation, Kipriyanov et al. obtained a yield of 16.5 mg/L for an scFv against the T cell surface antigen CD3 expressed in E. coli.

Table 6. Expression of antibody formats targeting spider, scorpion, and snake toxins.
Fabs are typically expressed in E. coli
Within antivenom research, Fabs have primarily been produced in E. coli strains (Table 6). Many of these strains have been engineered to circumvent problems inherent to the expression of mammalian proteins in prokaryotic cells. As an example, E. coli strains have been modified to compensate for the limited availability of tRNAs corresponding to codons infrequently used in prokaryotes but frequently used in eukaryotes. Bugli et al. tested such an E. coli strain and found that increasing the intracellular availability of tRNAs with anticodons for AGG, AGA, AUA, CUA, CCC, and GGA increased the yield of their Fab directed against alpha-latrotoxin from the venom of L. tredecimguttatus (Mediterranean black widow) from 0.5 mg/L to 1.5 mg/L (Bugli et al., 2008).
Optimization of growth media and additives, timing and duration of induction, concentration of the reactants used for induction, and other parameters may dramatically increase antibody expression yields (Kipriyanov et al., 1997; Selisko et al., 2004; Ukkonen et al., 2013). Although still in the lower range of yields, this is demonstrated by a study of a Fab capable of neutralizing whole venom antigens of the scorpion C. noxius, in which Fab yields were increased by a factor of 20 (from 0.05 mg/L to 1 mg/L) through optimization of the addition of sucrose to the medium, the temperature and timing of induction, and the concentration of the induction agent (Selisko et al., 2004). In the same study, Selisko and colleagues also found that lowering the induction temperature had a profound positive impact on the yield of biologically active protein, as this reduced the number of insoluble cytoplasmic aggregates (Selisko et al., 2004). Conversely, Aubrey et al. found that inducing expression at low temperatures resulted in extensive cytoplasmic aggregation and low Fab yields (Aubrey et al., 2004). This demonstrates that the induction temperature is of paramount importance for correct folding, but that the optimal temperature may differ from case to case.
Similar to scFvs, Fabs are often localized to the periplasm to promote disulphide bond formation and reduce aggregation (Aubrey et al., 2004; Bugli et al., 2008; Selisko et al., 2004). An alternative to periplasmic expression from outside the field of antivenom is the introduction of enzymes (e.g. protein disulphide isomerase) that facilitate disulphide bond formation in the cytoplasm, as used by Gaciarz and colleagues for Fab expression (Gaciarz et al., 2016). Thus, it is important to consider in which cellular compartment the Fab fragment should be localized to achieve the highest possible yield.
IgGs are expressed in mammalian hybridoma cell lines within antivenom R&D
Although aglycosylated IgGs have been produced in E. coli cells, a much more commonly employed expression system for IgGs for research use is hybridoma cells. Hybridomas are generated by fusion of antibody-producing mammalian B lymphocytes (typically murine cells) from immunized animals with an immortalized cell line of choice. Hybridomas thus present advantages and disadvantages that make them suited for R&D purposes, but less suited for large-scale production. As their most relevant feature, they are immortalized and capable of antibody production. Antibody expression in hybridoma cells has been used extensively within the field of antivenoms, as illustrated by Table 6, especially for the IgG format, partly due to the difficulty of expressing functional versions of the IgG format in prokaryotes. In 2008, Morine and colleagues produced two IgGs capable of neutralizing both the haemorrhagic and proteolytic activities of the snake venom metalloproteinase Hr1a (Morine et al., 2008). These IgGs were produced by hybridomas cultivated in vitro and harvested from the culture supernatant (Morine et al., 2008). Others have followed similar procedures for expression of toxin-neutralizing IgGs (Bahraoui et al., 1988; Jia et al., 2000). Another approach entails in vivo production and harvest of IgGs from ascitic fluid (Alvarenga et al., 2005; Boulain et al., 1982; Clot-Faybesse et al., 1999; Frauches et al., 2013; Licea et al., 1996; Li et al., 1993; Lomonte et al., 1992; Lomonte and Kahan, 1988; Masathien et al., 1994; Perez et al., 1984; Stiles et al., 1994; Trémeau et al., 1986; Zamudio et al., 1992). Several reasons for favouring this approach exist for research purposes. Some hybridoma cell lines do not grow well in vitro, and purification of IgAs, IgMs, and IgG3s from in vitro cultures may result in denaturation and consequent loss of activity (Ward et al., 1999). Thus, if high antibody concentrations and activity levels are needed for preliminary studies and a small degree of impurity is permissible, growing hybridomas inside the peritoneal cavity of mice may be preferable to cultivation in conventional medium for research applications (Ward et al., 1999). Hybridomas cultured in vitro have in some cases been shown to produce alternatively glycosylated IgGs relative to those produced by hybridomas in vivo, affecting their antigen-binding capacities (Ward et al., 1999). Thus, it may be important to investigate glycosylation patterns when moving between in vitro and in vivo production.
Although hybridomas have historically been used extensively for antibody expression within many fields, these cell lines have several constraints with respect to upscaling. These include the poorly defined nutrient needs of these cell types, accumulation of toxic metabolites, high oxygen demand, and fragility of the cells (Randerson, 1985). The problem of chromosomal instability is also inherent to long-term expression in many cell lines, such as hybridomas, non-secreting murine myeloma (NS0) cells, and human embryonic kidney (HEK) cells, and overgrowth by non-producing cells constitutes another potential problem (Randerson, 1985).
Organisms well suited for large-scale production of antibodies and antibody fragments

Antibodies and antibody fragments are the fastest growing class of biopharmaceuticals (Pucca et al., 2011a,b). Most of the organisms described above are suited for R&D purposes, but have their limitations when it comes to large-scale production. These limitations include the propensity for producing endotoxins and the restricted number of formats that can be produced in E. coli, and the low cost-efficiency and difficulty of upscaling for hybridoma cell lines.
From a quantitative perspective, microbial cell lines, and E. coli lines in particular, are responsible for the production of the majority of approved biotherapeutics (Walsh, 2014). However, they are not responsible for the production of the majority of approved therapeutic antibodies, which may be due to the inability of microbial cell lines to provide correct human glycosylation of antibodies (Ecker et al., 2015; Walsh, 2014). Furthermore, microbial cell lines often attain low yields due to incorrect folding and formation of aggregates (Chadd and Chamow, 2001). Another disadvantage of E. coli and other gram-negative bacteria is that they produce endotoxins, which may compromise safety if they are not properly removed. While efforts have been made to produce endotoxin-free E. coli strains for recombinant protein production (Mamat et al., 2015), no antibodies produced in E. coli have been approved by the Food and Drug Administration (FDA) or the European Medicines Agency (EMA) since 2009 (ACTIP, 2017). For production of therapeutic antibodies, mammalian cell lines are often chosen as the expression organism (Berlec and Strukelj, 2013; Wurm, 2004). Mammalian cell lines were responsible for the production of 95% of approved therapeutic antibodies in 2013 (Jäger et al., 2013) and for the production of 29 out of 30 (96.7%) therapeutic antibodies approved in 2014 (Walsh, 2014). By comparison, E. coli was responsible for the production of only one of these antibodies in 2014 (Walsh, 2014). One of the popular mammalian expression hosts for therapeutic proteins is the Chinese Hamster Ovary (CHO) cell. In 2014, CHO cells alone were responsible for the production of 35.5% of all approved biotherapeutics (Walsh, 2014). Although CHO cells are the most commonly used mammalian cell line for IgG production, other cell lines (e.g. NS0, HEK, and hybridoma lines) are also used (Chadd and Chamow, 2001; Frenzel et al., 2013). Fig. 8 shows a schematic representation of mammalian (CHO) cell production of IgGs.
Finally, antibodies have also been expressed in other gram-negative bacteria (in addition to E. coli), gram-positive bacteria, various yeast strains, fungi, protozoa, insect cells, additional mammalian cell lines, transgenic plants, and even transgenic animals (Chadd and Chamow, 2001; Frenzel et al., 2013). Recently, a recombinant antivenom made in transgenic plants expressing various camelid antibodies against toxins of the venom of Bothrops asper was described (Julve Parreño et al., 2017). Several of the aforementioned production hosts are in use for large-scale production of biotherapeutics (Walsh, 2014), while others are still undergoing process optimization for future large-scale production. Given their regulatory success and the efforts put into strain development and genetic engineering in other fields, it seems likely, though, that the CHO cell will be the main expression organism for antibodies in most therapeutic areas, particularly for full IgGs.
Practical considerations for production of recombinant antivenoms
In addition to production cost, factors to consider when choosing a manufacturing strategy for (mixtures of) antibodies and antibody fragments for recombinant antivenoms include: (i) the therapeutic benefits of the specific antibody format (different formats have different PK-PD properties and are suitable for different purposes), (ii) the importance of (proper) glycosylation, (iii) ease of purification, (iv) history of regulatory approval, and (v) availability of genetic tools for development of production strains, such as CRISPR/Cas systems (clustered regularly interspaced short palindromic repeats).

Fig. 8. Schematic representation of three different CHO cells expressing three different glycosylated IgGs. The mammalian cell line contains the necessary cellular components to produce properly folded and glycosylated IgGs. It has been proposed that co-culturing such cell lines could be used for the production of recombinant antivenom based on oligoclonal mixtures of (human) IgGs (Laustsen et al., 2017).

Considering these factors, CHO cells or other mammalian cells may be the best choice for large-scale production of recombinant antivenoms based on more complex antibody formats, such as IgG (Walsh, 2014; Wright and Morrison, 1997). With regard to the cost of treatment, it has been suggested that using CHO cells for oligoclonal expression of mixtures of recombinant human IgGs could provide an entire treatment against a typical snakebite envenoming for as little as USD 30–350 (Laustsen et al., 2016b, 2017). This compares favourably with prices described by Harrison et al., who report a current market price for an antivenom vial in Kenya ranging from USD 47.9 to USD 315 (depending on the product), considering that the treatment of a snakebite case usually requires several vials.
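As a back-of-the-envelope comparison (Python; the number of vials per treatment is an assumption made purely for illustration, not a figure from the cited studies), the projected cost range for an oligoclonal recombinant treatment can be set against a conventional multi-vial course at the reported Kenyan vial prices.

# Hypothetical cost comparison; vial counts per treatment are illustrative assumptions.
recombinant_treatment_usd = (30, 350)    # projected full-treatment cost (Laustsen et al., 2017)
vial_price_usd = (47.9, 315)             # reported market price per vial in Kenya
vials_per_treatment = (3, 10)            # assumed vials needed per snakebite case

conventional_low = vial_price_usd[0] * vials_per_treatment[0]
conventional_high = vial_price_usd[1] * vials_per_treatment[1]

print(f"Recombinant (projected): USD {recombinant_treatment_usd[0]}-{recombinant_treatment_usd[1]} per treatment")
print(f"Conventional (assumed {vials_per_treatment[0]}-{vials_per_treatment[1]} vials): "
      f"USD {conventional_low:.0f}-{conventional_high:.0f} per treatment")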
Targeting toxins of different toxicokinetic profiles and sites of action
Animal venoms contain cocktails of toxins with a wide range of biological activities and variable toxicokinetic profiles. Some toxins, like elapid and scorpion neurotoxins, are low molecular mass proteins with a large volume of distribution, which allows them to rapidly reach systemic distribution and access extravascular targets in the peripheral nervous system (Fig. 7D). Other toxins, such as high molecular mass metalloproteinases and serine proteinases, have a lower volume of distribution, and many of them act systemically within the vasculature, generating hemorrhage and coagulation disorders. Still, some toxins, particularly PLA2s and metalloproteinases, generate local tissue damage at the site of injection before reaching a systemic distribution. Other venomous animals that cause local tissue damage include brown spiders (Loxosceles spp.), whose venom can induce dermonecrotic lesions, although systemic manifestations are also observed, including acute kidney injury (Chaim et al., 2006). Thus, these different toxicokinetic scenarios and the consequent profile of toxicity associated with the various types of toxins demand detailed consideration when designing the most effective antibody format for neutralization. Locally acting toxins are possibly better neutralized by Fabs, scFvs, or VHHs, as these fragments better reach and neutralize toxins in deep tissue compartments compared to IgGs (Fig. 7D), which largely remain within blood vessels. Unfortunately, biodistribution studies involving these fragments and their use as antivenoms are scarce. However, other studies involving anti-tumor antibodies have already demonstrated their rapid and efficient tissue penetration, in which scFvs exhibited fast and high penetration into the tumor mass, while Fabs demonstrated intermediate tissue penetration in comparison to IgGs (Yokota et al., 1992). In contrast, an in vivo study using mice envenomed with B. asper venom demonstrated that IgG and F(ab')2 were in fact capable of reaching muscle tissue, although the researchers pointed out that the observed antibody accumulation could be a result of venom-induced microvascular alterations, which could increase the extravasation of antibodies (León et al., 2001). Interestingly, no differences in the ability to neutralize local tissue damage between IgG, F(ab')2, and Fab antivenoms were observed, probably owing to the effects of tissue damage on antivenom PK (León et al., 2000, 1997). Thus, antivenom PK is affected by the pathological changes induced by venoms in the tissues, and this must be considered when discussing the best antibody format for a given type of envenoming.
Systemically acting toxins are known to induce systemic toxic effects, including neuromuscular blockade, bleeding, coagulopathies, acute kidney injury, and cardiovascular shock, among others (Gutiérrez et al., 2017a). Neurotoxins represent a relevant example, since they need to reach extravascular targets in the peripheral nervous system to exert their actions. Venoms from scorpions, spiders and elapid snakes are rich sources of such neurotoxins (Del Brutto, 2013; Escoubas et al., 2000; Kini and Doley, 2010; Laustsen et al., 2016a, 2016c). The best antibody format to treat systemically acting toxins may be one that enables rapid diffusion to the tissues to bind and neutralize toxins that have reached systemic tissue targets (see section 3). On the other hand, the long half-life of the IgG format provides prolonged protection from toxins remaining in the circulatory system, such as high molecular mass metalloproteinases and serine proteinases, or toxins escaping the bite site at late stages of envenoming, which is beneficial in cases where toxins leak from the bite wound over the course of days. In these circumstances, the prolonged half-life of IgGs ensures that toxins remaining in the circulatory system, or gaining access to the circulatory system at later time periods, would be bound and neutralized. Thus, the optimal antibody format has to be analyzed on a case by case basis, and it is likely that formulations combining high and low molecular mass formats may be the optimal solution in many cases (Gutiérrez et al., 2003). Toxin neutralization has generally been considered to take place when a toxin is bound by the variable region of an antibody. Therefore, antivenoms used in passive immunotherapy are frequently prepared using Fab/F(ab')2 formats to limit immunogenicity and the risk of serum sickness. However, with the possibility of using monoclonal human antibodies, the Fc region has gained renewed interest (Laustsen et al., 2017; Richard et al., 2013), as it dramatically increases antibody plasma half-life. The attached Fc domain also enables interaction with Fc receptors found on immune cells, a feature that is particularly important for clearance mechanisms. Additionally, from a biophysical perspective, the Fc domain folds independently and can improve the solubility and stability of the antibody molecule (Kontermann, 2011; Nimmerjahn and Ravetch, 2008). Use of the human Fc domain of novel monoclonal toxin-targeting antibodies thus deserves further investigation, particularly for targeting systemically acting toxins.
Conclusions and predictions
With the renewed focus on snakebite as a neglected tropical disease by the WHO (Gutiérrez et al., 2017a), a hope emerges that research efforts within novel envenoming therapies will be intensified. This may not only contribute to the development of a new generation of antivenoms for treating envenomed snakebite victims, but it may also pave the way for novel antivenoms against envenomings by other animals. In the field of antivenom, antibody technologies were introduced several decades ago, although with very limited efforts compared to the fields of oncology, autoimmune diseases, and infectious diseases. Despite its nascent state, research within monoclonal antibodies against animal toxins is thus well-positioned to harness the developments from these other fields, which have made major progress in antibody discovery technologies, antibody engineering approaches, and antibody manufacturing.
Based on what is known from the field of antivenom research itself and general knowledge on monoclonal antibodies, it seems likely that different antibody formats may be applicable for different types of envenomings. An urgent need exists for targeting locally acting toxins with better efficacy within snakebite envenomings (Gutiérrez et al., 2017a). However, improvements in monoclonal human IgG discovery and development also open a door for improved therapies targeting systemically acting toxins.
Generally, a trend in antivenom research seems to present itself as a move away from the use of immunization, hybridoma technology, and murine antibodies towards phage display technology and human and camelid antibodies instead (Laustsen, n.d.; Roncolato et al., 2015). One possible prediction may be that combinatorial approaches merging (novel) immunization techniques and phage display may be introduced into the field of antivenom R&D, as transgenic animals engineered to contain the human antibody repertoire become more widely available to academia. This would allow researchers to obtain human antibody mRNA from immunized transgenic animals and use this mRNA to construct affinity-matured fully human antibody phage display libraries. In turn, such libraries could be employed in a high-throughput fashion for discovery of a multitude of novel toxin-targeting human antibodies. As auxiliary tools for guiding antivenom development, novel approaches within determination of antibody cross-reactivity may accelerate development of novel antivenoms. Particularly promising technologies include antivenomics, which may provide a holistic view of the toxin-capturing abilities of antibodies, and high-density peptide microarray technology, which can provide amino acid level resolution of epitope-paratope interactions between toxins and antibodies (Engmark et al., 2017b, 2016). Finally, it is possible that other display technologies (e.g. mammalian display (Bowers et al., 2014; Ho and Pastan, 2009)) and novel binding protein formats, such as DARPins (designed ankyrin repeat proteins) (Rasool et al., 2016; Stumpp et al., 2008), Armadillo repeat proteins (Varadamsetty et al., 2012), Affitins (Béhar et al., 2016; Correa et al., 2014; Pacheco et al., 2014), Adhirons (Tiede et al., 2014), Anticalins (Schiefner and Skerra, 2015), and various other protein scaffolds (Simeon and Chen, 2017) may find their way into the field of antivenom development.
Conflicts of interest
The authors declare no conflict of interest.
"year": 2018,
"sha1": "e835ccd03b76adb1466bafe4fc1548fe4fb4e0c6",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1016/j.toxicon.2018.03.004",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "9da671413482661cd36f7683062da60ebfee48b3",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Internet of Vehicles and Cost-Effective Traffic Signal Control
The Internet of Vehicles (IoV) is attracting many researchers with the emergence of autonomous or smart vehicles. Vehicles on the road are becoming smart objects equipped with lots of sensors and powerful computing and communication capabilities. In the IoV environment, the efficiency of road transportation can be enhanced with the help of cost-effective traffic signal control. Traffic signal controllers control traffic lights based on the number of vehicles waiting for the green light (in short, vehicle queue length). So far, the utilization of video cameras or sensors has been extensively studied as the intelligent means of the vehicle queue length estimation. However, it has the deficiencies like high computing overhead, high installation and maintenance cost, high susceptibility to the surrounding environment, etc. Therefore, in this paper, we propose the vehicular communication-based approach for intelligent traffic signal control in a cost-effective way with low computing overhead and high resilience to environmental obstacles. In the vehicular communication-based approach, traffic signals are efficiently controlled at no extra cost by using the pre-equipped vehicular communication capabilities of IoV. Vehicular communications allow vehicles to send messages to traffic signal controllers (i.e., vehicle-to-infrastructure (V2I) communications) so that they can estimate vehicle queue length based on the collected messages. In our previous work, we have proposed a mechanism that can accomplish the efficiency of vehicular communications without losing the accuracy of traffic signal control. This mechanism gives transmission preference to the vehicles farther away from the traffic signal controller, so that the other vehicles closer to the stop line give up transmissions. In this paper, we propose a new mechanism enhancing the previous mechanism by selecting the vehicles performing V2I communications based on the concept of road sectorization. In the mechanism, only the vehicles within specific areas, called sectors, perform V2I communications to reduce the message transmission overhead. For the performance comparison of our mechanisms, we carry out simulations by using the Veins vehicular network simulation framework and measure the message transmission overhead and the accuracy of the estimated vehicle queue length. Simulation results verify that our vehicular communication-based approach significantly reduces the message transmission overhead without losing the accuracy of the vehicle queue length estimation.
In the distance-based mechanism of our previous work, the vehicles waiting for the green light may send messages to the traffic signal controller sequentially even if higher preferences are given to vehicles farther away from the upcoming intersection.
In this paper, to overcome this sequential transmission characteristic of the distance-based mechanism, we propose a new mechanism that can reduce the number of messages transmitted to the traffic signal controller. This newly proposed mechanism is called the sector-based mechanism. The sector-based mechanism further reduces the number of vehicles sending messages to the traffic signal controller by adopting the concept of sectors. There can be a number of sectors in a road segment between two consecutive intersections. A sector of a road segment is a subarea of the road segment. Instead of all the vehicles waiting for the green light having the right to perform V2I communications, only the vehicles within the sectors are allowed to transmit messages to the approaching traffic signal controller. That is, the set of candidate vehicles for sending messages to the traffic signal controller in the sector-based mechanism is smaller than that in the distance-based mechanism, resulting in fewer transmissions to the traffic signal controller. For the performance evaluation, intensive simulations are carried out by utilizing the vehicle network simulation framework Veins [23] based on SUMO [24] and OMNet++ [25], considering various performance-affecting factors such as sector length, inter-sector distance and vehicle density of the road segment. In the performance evaluation section, we can observe that the sector-based mechanism, with a sector length of 10 m and an inter-sector distance of 10 m, performs almost the same as the distance-based mechanism in terms of the estimation accuracy of the vehicle queue length with significantly fewer V2I message transmissions, almost a third of the distance-based mechanism (i.e., a sixth of the Naïve mechanism). Because parameters like the sector length and the inter-sector distance are easily adjustable, the sector-based mechanism can be a good candidate for estimating the vehicle queue length for intelligent traffic signal control in the IoV environment.
The rest of the paper is organized as follows: in Section 2, we will describe the related work on traffic pattern monitoring mechanisms. Section 3 describes the detailed operation of our V2I communication-based traffic pattern monitoring and vehicle queue estimation mechanisms. In Section 4, we evaluate the performance of our mechanisms from the intensive simulation results. Finally, Section 5 concludes this paper.
Related Work
In this section, we first go over the definition of the vehicle queue. In the Highway Capacity Manual [19], the vehicle queue is defined as a line of vehicles stopping at the red light together with the vehicles approaching the stopping vehicles at speeds slower than the given stopping speed. In [20], the vehicle queue is composed of the standing queue and the moving queue. The standing queue consists of the vehicles stopping at the red signal, and the moving queue of the vehicles moving slower than the stopping speed because of the standing queue. The equivalent standing queue is defined as the vehicle queue including both the standing queue and the moving queue. In this paper, we adopt the equivalent standing queue of [20] as the vehicle queue.
For the estimation of the vehicle queue length, first of all, the vehicles waiting for the green signal have to be recognized, which can be accomplished by utilizing devices like video cameras mounted on fixed roadside structures such as traffic signal controllers or like sensors installed under the pavement. The time-stringent control of traffic signals requires real-time processing of video frames and the accurate measurement of vehicle queue length requires sophisticated deployment of sensors.
In the video-based approach, the first thing to do for the vehicle queue length estimation is detecting vehicles from video frames in real time. After the vehicle detection process, vehicles are tracked and counted in real time. Thanks to various computer vision techniques and hardware capabilities, real-time processing of vehicle detection, tracking and counting becomes possible [4][5][6][7][8].
The mechanisms that can be used for real-time vehicle detection from video images include background subtraction, blob analysis, thresholding, hole filling, morphological operations, etc. Once vehicles are detected, vehicle tracking and counting are performed using various schemes such as similarity measurement, patch analysis, virtual detection lines, virtual detection zones, shadow detection and removal, etc. Such a sequence of complex processing of video images induces very high computing overhead and requires specialized hardware to expedite the processing. In [7], an ARM/FPGA processor-based vehicle counting system is proposed to expedite video processing. As an example, the video processing procedure of vehicle detection and counting proposed in [4] consists of preprocessing, background update, background subtraction, image segmentation, lamplight or shadow suppression, contour extraction and filling, vehicle detection and vehicle counting using a virtual coil or detection line depending on the traffic congestion situation. Even with various video processing techniques, adverse road surroundings, such as bad weather (e.g., rain drops and snowflakes), dim light and curved roads, may significantly degrade the quality of video images. The authors of [4] aimed to provide robustness to video processing for vehicle detection, tracking and counting in various weather and light conditions. [9] and [10] improve robustness and accuracy even under bad road situations by adopting a feature-based detection method and a machine learning-based method, respectively, but they consume abundant resources and may not guarantee real-time processing of video frames due to processing complexity. Recently, mechanisms based on video images from unmanned aerial vehicles (UAVs) have been studied for traffic monitoring; this UAV-based approach is appropriate for large-area monitoring and overcomes obstacles thanks to wider top-view video images. For instance, in [8], a UAV-based framework is proposed for moving-vehicle detection, multi-vehicle tracking and vehicle counting. As we have described, most of the work on the video-based approach tackles the previously mentioned environmental hurdles, which may not be completely overcome by means of various video processing methods.
In the sensor-based approach, various types of sensors, like inductive loop detectors, ultrasonic sensors, magnetometers, radar/lidar based sensors, etc., are installed near to intersections for vehicle detection, tracking and counting [11][12][13][14][15][16][17]. Each sensor is equipped with devices like a microphone to collect acoustic, seismic or any signals to classify vehicles. From the collected sensing signals, sensors and base stations detect, track and count vehicles. However, in the harsh road environment, sensing signals are affected by ambient noise, resulting in resource-intensive signal processing. Typical road sensors are deployed under the road surface at specific points and monitor the presence of vehicles at fixed locations, separately in each lane. Each sensor transmits a sequence of binary values indicating the presence of vehicles which is used for estimating vehicle flow, vehicle speed, vehicle classification, etc. For instance, inductive loop detectors are deployed at pre-specified points for traffic signal control as illustrated in [17]. In [17], we can find various deployment strategies of inductive loop detectors for various applications. For the accurate estimation of vehicle queue length, road sensors are to be deployed at sophisticatedly arranged points, which requires high installation and maintenance cost. Also, in order to supply power and allow communications, long cables are required to be installed along with sensors. Even with excluding the cabling cost, the high sensor installation and maintenance cost makes sensor deployment in all intersection areas infeasible. Wireless sensors can avoid cabling, but they have the drawback of short lifetime due to their power-constrained batteries. The lifetime of wireless sensors can be lengthened by implementing energy harvesting capability in wireless sensors which converts the vibrations induced by vehicles into energy.
Instead of using video cameras or sensors, the mechanisms utilizing GPS-mounted probe vehicles have been proposed for the estimation of the vehicle queue length [26][27][28][29][30][31][32][33]. Probe vehicles are special purpose vehicles designed for monitoring road traffic situations and collecting trajectory data. The performance of the probe vehicle-based approach is affected by the number of probe vehicles deployed on the road. The ratio of the number of probe vehicles to the total number of vehicles is called the penetration ratio of probe vehicles. Larger penetration ratio is better for achieving higher accuracy in terms of the vehicle queue length estimation. In the probe vehicle-based mechanisms, due to low penetration ratio of probe vehicles, one of the major issues is to enhance the accuracy based on the insufficient information from probe vehicles. Another issue is how to efficiently estimate vehicle queue length or traffic volume from the substantial data collected by probe vehicles. The main purpose of using probe vehicles is to collect traffic-related data throughout their journey and, then, to do the off-line analysis or estimation of traffic situations based on the collected data. Therefore, the probe vehicle-based approach is not for the real-time control of traffic signals.
The aim of our mechanisms differs from that of the probe vehicle-based mechanism in that our mechanisms use V2I communications for the real-time traffic signal control. That is, we consider the environment where the traffic signal controller detects vehicles through V2I communications and estimates the length of the vehicle queue and, then, controls the traffic signals. As the age of IoV is approaching [34,35], all the vehicles performing V2I communications (i.e., the penetration ratio of probe vehicles is 1) will be realized in the near future. In this case, V2I communication attempts from all the vehicles may cause collisions, so our objective is to limit the number of vehicles sending messages to traffic signal controllers without deteriorating the accuracy of the estimated vehicle queue length.
Communication Environment
The road is composed of road segments each of which has vehicles heading to an intersection with a traffic controller. In this paper, we consider a road segment starting from R start to R end with the length of R len (see Figure 1). The vehicle queue is the queue of vehicles waiting for the green light and a vehicle decides that it is in the vehicle queue if its speed is lower than the specific speed at the red light. A vehicle in the vehicle queue is called an in-vehicle-queue (IVQ) vehicle and may send a Vehicle Information (VI) message to its upcoming traffic signal controller. The VI message has the information of the IVQ vehicle such as identifier, location, speed and moving direction. A traffic signal controller can estimate the vehicle queue length to control the traffic signal based on the received VI messages. We assume that the transmission range of a vehicle, V range , is large enough to cover the upcoming traffic signal controller; that is, V range ≥ R len . Vehicles move at a speed faster than the stopping speed when they are not in the vehicle queue, and know the information of traffic signal controllers and all the information related to the road segment, such as R start , R end , R len , etc.
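As an illustration only, the following Python sketch shows one way the VI message and the IVQ decision just described could be represented; the field names and the stopping-speed check are assumptions, since the paper does not prescribe a concrete message encoding.

```python
from dataclasses import dataclass

# Minimal sketch of the Vehicle Information (VI) message described above.
# Field names are hypothetical; the paper only states that the message carries
# the vehicle identifier, location, speed and moving direction (and, for the
# sector-based mechanism, a sector identifier).
@dataclass
class VIMessage:
    vehicle_id: str
    distance_from_r_start: float  # V_dist, position along the road segment (m)
    speed: float                  # current speed (m/s)
    heading: float                # moving direction (degrees)
    sector_id: int = -1           # used only by the sector-based mechanism (-1 = none)


def is_in_vehicle_queue(speed: float, stopping_speed: float) -> bool:
    """A vehicle regards itself as an IVQ vehicle when, at the red light,
    its speed falls below the given stopping speed."""
    return speed <= stopping_speed
```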
Distance-Based Transmission of Vehicle Information Messages
If we allow any vehicles in the vehicle queue to transmit VI messages (we call this the naïve mechanism), the number of VI message transmissions will be the same as the number of the vehicles in the vehicle queue. From the perspective of the accuracy in estimating the vehicle queue length, this mechanism is the best. However, this will lead to a higher possibility of collisions. Therefore, we need to figure out a way of reducing the number of VI messages. The optimal way of achieving this is to allow only the last vehicle in the vehicle queue to transmit a VI message. However, a vehicle has no means of knowing that it is the last vehicle in the vehicle queue because it cannot know whether there are any vehicles following itself. If V2V communication is adopted for that purpose, a vehicle can know whether there are any following vehicles or not. Even with V2V communication, in the situation of contiguous vehicles running on the road, if a vehicle does not send a VI message because of any following vehicles, the transmission of a VI message may be delayed, resulting in non-reactive traffic signal control. Therefore, in our prior work [17], we proposed a mechanism, the distance-based mechanism, in which an IVQ vehicle sends a VI message according to its distance from the intersection. In the mechanism, we allow an IVQ vehicle farther from the upcoming intersection to send a VI message earlier, and any IVQ vehicles closer to the intersection not to transmit any VI messages if they overhear a VI message from behind themselves. For that, we introduced a timer that is used for an IVQ vehicle to defer a VI message transmission according to the distance from the intersection: an IVQ vehicle can transmit a VI message M at time T M , where T current is the current time, τ is the unit time and V dist is the distance of the vehicle from R start . In Equation (1), the second term gives randomness to T M according to the distance from R start so that the collisions caused by simultaneous VI message transmissions can be avoided. Once an IVQ vehicle closer to the traffic signal controller overhears a VI message from a farther IVQ vehicle, the closer IVQ vehicle gives up its transmission, resulting in fewer VI message transmissions. Figure 2 illustrates the operation of the distance-based mechanism from the perspective of VI message transmissions.
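The exact form of Equation (1) is not reproduced in this text, so the sketch below should be read as an assumption-laden illustration of the deferral logic rather than the authors' formula: it assumes the stop line lies on the R start side (so a larger V dist means a position farther back in the queue), lets the deferral shrink as V dist grows so that vehicles farther from the intersection transmit earlier, and adds a random component to separate otherwise simultaneous transmissions.

```python
import random

TAU = 0.001  # unit time in seconds; the actual value is a simulation parameter


def schedule_vi_transmission(t_current: float, v_dist: float, r_len: float) -> float:
    """Return an assumed transmission time T_M for an IVQ vehicle.

    Assumption (Equation (1) is not reproduced in the extracted text): the
    deferral shrinks as the vehicle's distance V_dist from R_start grows, so
    vehicles farther back in the queue transmit earlier, and a random factor
    spreads out transmissions that would otherwise coincide.
    """
    deferral = TAU * (r_len - v_dist) * (1.0 + random.random())
    return t_current + deferral


def should_cancel_own_transmission(own_v_dist: float, heard_v_dist: float) -> bool:
    """An IVQ vehicle gives up its pending VI transmission when it overhears a
    VI message from a vehicle behind it (i.e., farther from the intersection)."""
    return heard_v_dist > own_v_dist
```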
From the collected VI messages, the traffic signal controller estimates how many vehicles are in the vehicle queue. The estimated length, Q len , of the vehicle queue is computed as follows:
Here, k is the number of lanes, M i is the i-th VI message and V i dist is the distance of the vehicle sending M i . V len is the length of a vehicle and V inter_dist is the distance between two back-to-back vehicles. For simplicity, we assume that V len and V inter_dist are constant.
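The estimator itself (Equation (2)) is likewise not reproduced here. The following sketch shows one plausible reading, under the same geometric assumption as in the timer sketch above: the controller takes the reporting vehicle farthest from the intersection and converts the road length occupied by the queue, together with V len and V inter_dist, into a vehicle count across the k lanes. The authors' exact formula may differ.

```python
def estimate_queue_length(vi_distances, k, v_len, v_inter_dist):
    """Assumed distance-based estimate of the number of queued vehicles.

    vi_distances: list of V_dist values taken from the collected VI messages
    k:            number of lanes
    v_len:        length of a vehicle (assumed constant)
    v_inter_dist: gap between two back-to-back vehicles (assumed constant)
    """
    if not vi_distances:
        return 0
    # The farthest reporting vehicle marks how far back the queue extends.
    queue_extent = max(vi_distances) + v_len
    vehicles_per_lane = queue_extent / (v_len + v_inter_dist)
    return round(k * vehicles_per_lane)
```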
However, at the red light, vehicles tend to line up one after another while slowing down, with some time gap, and they may send VI messages to the traffic signal controller sequentially because the time gap between the stopping times of two back-to-back vehicles is larger than the random delay gap between them determined according to the distance from the upcoming intersection. This may result in non-optimal VI message transmissions because of the VI message transmissions of the vehicles in the middle of the vehicle queue. Therefore, in the following subsection, we propose a new mechanism that can further reduce the VI message transmission overhead by limiting the areas in which vehicles are allowed to transmit VI messages.
Sector-Based Transmission of Vehicle Information Messages
The objective of the sector-based mechanism is to reduce the number of candidate IVQ vehicles to transmit VI messages. Compared with the distance-based mechanism in Section 3.2, where all the IVQ vehicles have the chance to transmit VI messages, the sector-based mechanism allows only the IVQ vehicles located in the sectors to transmit VI messages. Sectors are designated areas on a road segment. This is a reasonable approach because a vehicle tends to stop right behind a stopped vehicle and the fact that a vehicle V sends a VI message implies that the vehicles ahead of V have already stopped and belong to the vehicle queue. Figure 3 shows the operation of the sector-based VI message transmission mechanism. In the figure, each sector is represented as a square-shape area filled with slashes and the vehicles sending VI messages are filled with small dots. In order to reduce the number of VI message transmissions from a sector, if an IVQ vehicle overhears the VI message transmission from another IVQ vehicle in the same sector, it gives up its transmission. The sector identifier is included in the VI message so that multiple VI messages with the same sector identifier cannot be transmitted.
The starting positions of sectors are determined by the sector length S len and the inter-sector distance S inter_dist . The starting location of the first sector, S 1 , is S start meters away from R start , the start position of the road segment. Then, the distance in meters from R start of the starting location of the ith sector S i , S i start , is computed as S i start = S start + (i − 1) × (S len + S inter_dist ). Then, the number of sectors is the largest i that satisfies the condition S i start + S len ≤ R len . Figure 4 shows how sectors are determined.
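A small sketch of this sector layout, and of how a vehicle could decide whether it currently lies inside a sector (and which one), is given below; it simply mirrors the definitions above, and the helper names are illustrative.

```python
def sector_starts(s_start, s_len, s_inter_dist, r_len):
    """Return the starting positions (meters from R_start) of all sectors.

    The i-th sector starts at S_start + (i - 1) * (S_len + S_inter_dist), and
    sectors are generated as long as the whole sector fits within the road
    segment (start + S_len <= R_len).
    """
    starts, pos = [], s_start
    while pos + s_len <= r_len:
        starts.append(pos)
        pos += s_len + s_inter_dist
    return starts


def sector_of(v_dist, starts, s_len):
    """Return the 1-based index of the sector containing a vehicle at V_dist,
    or None if the vehicle is in a non-sector area (and therefore stays silent)."""
    for i, start in enumerate(starts, start=1):
        if start <= v_dist <= start + s_len:
            return i
    return None
```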
The starting positions of sectors are determined by the sector length and the inter-sector distance _ . The starting location of the first sector, , is meters away from , the start position of the road segment. Then, the distance in meters from of the starting location of the th sector , , is computed as follows: Then, the number of sectors is the largest that satisfies the condition ( + ) ≤ . Figure 4 shows how sectors are determined. In the sector-based mechanism, the traffic signal controller collects VI messages and computes the length of the vehicle queue based on the collected VI messages, like in the distance-based mechanism. However, because there exists a gap (i.e., a non-sector area) between two consecutive sectors, the last vehicle in the vehicle queue may be located at the last sector, , or in the non- In the sector-based mechanism, the traffic signal controller collects VI messages and computes the length of the vehicle queue based on the collected VI messages, like in the distance-based mechanism. However, because there exists a gap (i.e., a non-sector area) between two consecutive sectors, the last vehicle in the vehicle queue may be located at the last sector, S last , or in the non-sector area right before S last+1 . Therefore, we take the estimated vehicle queue length Q len to be the average of the minimum vehicle queue length Q min len and the maximum vehicle queue length Q max len : Here, k is the number of lanes, V last is the IVQ vehicle in S last and V last dist is the distance of V last from R start . Q min len is the vehicle queue length when V last is the last vehicle in the vehicle queue and Q max len is the vehicle queue length when V last is located right before S last+1 . Figure 5 shows Q min len and Q max len of the case when the vehicle (with a dot pattern) on the first lane of the second sector sends a VI message to the traffic signal controller. Here, is the number of lanes, is the IVQ vehicle in and is the distance of from . is the vehicle queue length when is the last vehicle in the vehicle queue and is the vehicle queue length when is located right before . Figure 5 shows and of the case when the vehicle (with a dot pattern) on the first lane of the second sector sends a VI message to the traffic signal controller.
Simulation Environment and Performance Factors
Simulations were performed with the vehicular network simulation framework Veins [23] based on SUMO [24] and OMNet++ [25]. The IEEE 802.11p [21] is used as the MAC protocol for vehicular wireless communications and no background data traffic is generated except for VI message transmissions. The simulation network is a 4-way intersection with three lanes per road segment and the vehicle queue length is measured for a specific road segment. Three types of vehicles with different maximum speed and acceleration values are deployed for a realistic road traffic environment. The simulation parameters are listed in Table 1.
We evaluate and compare our distance-based and sector-based mechanisms with the Naïve mechanism for various simulation scenarios. The Naïve mechanism measures the vehicle queue length by making every IVQ vehicle send a VI message to the traffic signal controller. The degree of saturation (%) is taken as one of the performance-affecting factors; it is the ratio of demand (the number of vehicles moving on the road segment) to capacity (the maximum possible number of vehicles on the road segment), expressed in percentage. We evaluate the performance of the proposed mechanisms for two degree of saturation scenarios, 30% and 50%. For the sector-based mechanism, the sector length S len and the inter-sector distance S inter_dist are used as the performance-affecting factors. For both S len and S inter_dist , the values 10 m, 20 m and 30 m are taken. For the performance analysis, we measure three performance factors:
• The accuracy of the vehicle queue length estimation: The accuracy of the estimated vehicle queue length is measured in terms of the arithmetic mean (AM), the mean absolute deviation (MAD) and the mean absolute percentage error (MAPE). With a given data set, AM is obtained by dividing the sum of the given data set by the set size. MAD is the average of the absolute deviations from the mean (AM) of the given data set, and MAPE is a measure of prediction accuracy in percentage. If A i and F i are the ith measured and estimated values, respectively, and n is the total number of measured values, the AM of |A 1 − F 1 |, . . . , |A n − F n | is the average of these absolute differences, the MAD of |A 1 − F 1 |, . . . , |A n − F n | is the average of the absolute deviations of |A 1 − F 1 |, . . . , |A n − F n | from their AM, and the MAPE of the A i 's and F i 's is the average of the ratios |A i − F i |/A i , expressed as a percentage (a small computational sketch of these measures is given after this list).
• The number of VI message transmissions: This performance factor is used for measuring the message transmission overhead from the IVQ vehicles to the traffic signal controller. The number of VI message transmissions is measured by counting both the original transmissions and the retransmissions of VI messages during the simulation time.
• The VI message transmission delay: The transmission delay of a VI message is the time taken for a VI message to be delivered to the traffic signal controller successfully.
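For concreteness, the three accuracy measures defined in the first item above can be computed as in the following short Python sketch, where the A i are the actual queue lengths (from the Naïve mechanism) and the F i the estimated ones; the numeric values in the usage example are purely illustrative.

```python
def am(actual, estimated):
    """Arithmetic mean (AM) of the absolute differences |A_i - F_i|."""
    return sum(abs(a - f) for a, f in zip(actual, estimated)) / len(actual)


def mad(actual, estimated):
    """Mean absolute deviation (MAD) of the |A_i - F_i| values from their AM."""
    diffs = [abs(a - f) for a, f in zip(actual, estimated)]
    mean = sum(diffs) / len(diffs)
    return sum(abs(d - mean) for d in diffs) / len(diffs)


def mape(actual, estimated):
    """Mean absolute percentage error (MAPE), in percent."""
    return 100.0 * sum(abs(a - f) / a for a, f in zip(actual, estimated)) / len(actual)


# Illustrative usage with made-up queue lengths:
actual = [20, 22, 19, 21, 20]
estimated = [19, 21, 20, 22, 19]
print(am(actual, estimated), mad(actual, estimated), mape(actual, estimated))
```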
For the performance evaluation, we have executed five simulation runs for each mechanism and, in the case of the sector-based mechanism, for each S len and S inter_dist value pair. We use 10 m, 20 m and 30 m as the values of S len and S inter_dist . Table 2 lists the simulation results in terms of the vehicle queue length with the degree of saturation 30%. The number in the 'Round' column of the table indicates the execution order of the corresponding simulation run. The actual vehicle queue length is obtained from the Naïve mechanism and the estimated vehicle queue lengths from the distance-based and the sector-based mechanisms. The distance-based mechanism performs almost the same as the Naïve mechanism with fewer VI message transmissions (will be described later in this subsection). On the other hand, the sector-based mechanism shows 1.3~1.94 vehicle differences from the actual vehicle queue length, except for S len or S inter_dist of 30 m. However, for the 3-lane road segment, a 1~2 vehicle difference may not significantly affect the performance of the traffic signal controller. Considering that the sector-based mechanism generates only about 4 to 7.4 message transmissions (will be described later in this subsection), the sector-based mechanism is a good candidate for the vehicle queue length estimation because it significantly decreases the VI message transmission overhead. In order to measure the accuracy of each of the proposed vehicle queue length estimation mechanisms, Table 3 shows the simulation results in terms of the AM, the MAD and the MAPE of the estimated vehicle queue length for the degree of saturation 30%. AM is the average of the absolute differences between the actual vehicle queue length and the estimated vehicle queue length. So, lower AM values mean better performance in estimating the vehicle queue length. From the MAD values in Table 3, we can anticipate the degree of stability of the mechanisms in estimating the vehicle queue length from the perspective of accuracy. As for MAD, smaller MAD values indicate better stability, so it can be asserted that the distance-based mechanism gives more stable estimated values than the sector-based mechanism. The sector-based mechanism performs well enough, except for S len = 30 m or S inter_dist = 30 m. Setting S len or S inter_dist to a larger value has a tendency to over- or under-estimate the vehicle queue length because of the large variance in Q min len and Q max len . MAPE is the average of the ratios of the difference between the actual and the estimated vehicle queue lengths to the actual vehicle queue length, represented in percentage. MAPE indicates the relative significance of the difference between the actual and the estimated vehicle queue lengths. Even though the MAPE values of the sector-based mechanism are larger than those of the distance-based mechanism, this is acceptable because, for example, a MAPE value of 6.59% corresponds to only a small fraction of the actual vehicle queue length. Table 4 shows the number of VI message transmissions and the average transmission delay of a VI message for the degree of saturation 30%. In the distance-based mechanism, the IVQ vehicles generate 21 VI messages in total and, in the naïve mechanism, 44. Thus, we can say that the distance-based mechanism is significantly better than the naïve mechanism in terms of the VI message transmission overhead. The transmission delay of a VI message is almost the same in all the mechanisms.
The reason for this is that there is not sufficient traffic generated to hinder the transmissions of VI messages, because there exist only the VI messages generated by the vehicles moving in one direction on a single road segment with no background traffic. Table 5 shows the estimated vehicle queue length for the degree of saturation 50%. The distance-based mechanism works almost the same as the naïve mechanism in terms of the vehicle queue length estimation. Similar to the case of the degree of saturation 30%, the sector-based mechanism performs worse than the distance-based mechanism, and larger S len or S inter_dist values give worse performance. The sector-based mechanism shows 2.5~6.86 vehicle differences from the actual vehicle queue length. For the 3-lane road segment, a 2~7 vehicle difference is acceptable considering the actual queue length of around 50 vehicles. Table 6 gives the accuracy values computed from the estimated vehicle queue lengths in Table 5. As for AM, MAD and MAPE, the distance-based mechanism performs better than the sector-based mechanism. The distance-based mechanism works almost the same as the naïve mechanism from the perspective of the accuracy in estimating the vehicle queue length. Even though the MAD and MAPE values of the sector-based mechanism are higher than those of the distance-based mechanism, for the 3-lane road segment the MAD and the MAPE values are acceptable for the same reasons as for Table 3. Similar to the case of the degree of saturation 30%, the sector-based mechanism performs worse than the distance-based mechanism and larger S len or S inter_dist values give worse performance. Table 6. The accuracy of the estimated vehicle queue length (with degree of saturation = 50%).
Table 7 shows the number of VI message transmissions and the average transmission delay of a VI message for the degree of saturation 50%. The distance-based mechanism outperforms the naïve mechanism because it generates 37.4 VI message transmissions compared with 80 VI transmissions of the naïve mechanism, which is less than a half of the VI message transmission overhead of the naïve mechanism. Compared with the 80 and 37.4 message transmissions of the naïve and the distance-based mechanisms, respectively, the 8~12.8 message transmissions of the sector-based mechanism are desirable in a real-world environment with heavy data traffic. The message transmission overhead of the sector-based mechanism, with S len = 10 m and S inter_dist = 10 m, is almost a third of that of the distance-based mechanism. Considering that the message transmission overhead of the distance-based mechanism is less than a half of that of the naïve mechanism, the sector-based mechanism significantly outperforms the naïve mechanism. The transmission delay of a VI message is almost the same in all the mechanisms, and the reason is the same as that for Table 4. Table 7. The number of transmitted VI messages and the average transmission delay of a VI message for various inter-sector distances and sector lengths (with degree of saturation = 50%).
So far, we have analyzed the simulation results presented in tables for two cases of the degree of saturation. The common objective of the distance-based and the sector-based mechanisms is to estimate the vehicle queue length accurately with less message transmission overhead compared with the Naïve mechanism. Thus, we depict the number of VI message transmissions and the accuracy in terms of AM of the distance-based and the sector-based mechanisms in Figures 6 and 7, respectively. In each graph of the figures, both the 30% and the 50% degree of saturation are plotted together in order to see how the degree of saturation has affected the performance.
In Figure 6, the V2I message transmission overhead is compared for the three mechanisms in terms of the number of VI messages transmitted. The sector-based mechanism performs the best for both degree of saturation cases and the performance improvement of our V2X communication-based mechanisms is not affected by the degree of saturation. Figure 7 shows the estimation accuracy of our V2X communication-based mechanisms in terms of AM for the two degree of saturation cases. The sector-based mechanism performs worse than the distance-based mechanism in all cases and the degree of saturation affects the performance of the sector-based mechanism more than that of the distance-based mechanism. According to the simulation results, as the degree of saturation increases, the number of vehicles in the vehicle queue increases and the average difference between the actual and the estimated vehicle queue lengths increases, too. From this, we can deduce that the number of vehicles in the vehicle queue affects the estimation accuracy of the sector-based mechanism. The reasoning behind this is that, in a large vehicle queue, the uncertainty of a vehicle being included in the vehicle queue increases because the action of a rear vehicle is influenced by the action of the vehicles ahead of it (and there are more vehicles ahead in a longer vehicle queue). That is, a longer vehicle queue causes higher uncertainty for a rear vehicle (i.e., lower accuracy), especially in the sector-based mechanism, because of the additional uncertainty caused by the large variance in Q min len and Q max len of the sector-based mechanism.
Discussion
In the previous subsection, we have observed the microscopic performance of our V2X communication-based mechanisms in estimating the vehicle queue length for traffic signal control. In this subsection, we will discuss the pros and cons of our mechanisms in a broad sense, including comparisons with other relevant mechanisms. The aspects taken for the discussion are performance, required capabilities, operational cost, robustness, and extensibility:
• [Performance] As we have observed from the simulation results and analysis in Section 4.2, the distance-based mechanism achieves very high accuracy with very low (less than a half of the Naïve mechanism) message transmission overhead. Compared with the distance-based mechanism, the sector-based mechanism performs slightly worse in terms of estimation accuracy, but the discrepancy between the actual and the estimated values is acceptable, as we pointed out in Section 4.2 regarding Tables 3 and 6. The major merit of the sector-based mechanism is the vast reduction of the message transmission overhead, which is almost a third of the distance-based mechanism and a sixth of the Naïve mechanism. We can say that our V2X communication-based mechanisms are well suited for road traffic situations. On the other hand, the sensor-based approach can hardly measure the right number of IVQ vehicles with sensors installed under the pavement at specific points of the road segment; this approach can only count the vehicles passing by the sensors. As for the video-based approach, it has to cope with weather conditions, lighting and obstacles, such as bent roads, trees and buildings, in order to count vehicles correctly.
• [Robustness] Our V2X communication-based mechanisms are resistant to harsh outside-world environments because they are not affected by weather, lighting or road pattern. The only obstacle is any object hindering communications between vehicles and traffic signal controllers, but wireless communications are more resilient to obstacles than the line-of-sight nature of video cameras. With regard to robustness, the sensor-based approach is the weakest because sensors are susceptible to harsh road conditions such as noise and vibrations. On the other hand, the V2X communication-based and the video-based approaches are not affected by road conditions. As for weather conditions, the video-based approach is the weakest because rain or snow will blur video images. Therefore, we can assert that our V2X communication-based approach is the best from the perspective of robustness.
• [Extensibility] Based on the vehicular information, such as moving direction and speed, obtained from V2I communications, traffic signal controllers can cooperate for more enhanced control of traffic signals. On the other hand, the mechanisms using sensors are not capable of supporting cooperation among traffic signal controllers because there is no way of acquiring or deducing the direction of a vehicle. The video-based approach can guess the next road segment that a vehicle is heading towards from the lane on which the vehicle is driving; however, it has no way of knowing the route of a vehicle. On the contrary, our mechanisms may let traffic signal controllers know the routes of vehicles via VI messages so that they can collaborate on the optimization of traffic signal control in a wide area. By means of V2X communications, traffic signal controllers can collect various kinds of information from vehicles, enabling many valuable functionalities.
From the above discussion, we can conclude that the V2X communication-based approach is the best for estimating the vehicle queue length. The only limitation we face is the penetration ratio of vehicles equipped with V2X communication modules on the road. An encouraging point is that the immense efforts to realize the cooperative intelligent transport system (C-ITS) in Europe (known as connected vehicle technology in the United States) [34,35] will help us in setting up the environment for our mechanisms. More specifically, the European Commission announced its plan for the coordinated deployment of C-ITS in Europe in order to start the full-scale deployment of C-ITS services and C-ITS enabled vehicles in 2019.
Conclusions
The Internet of Vehicles (IoV) allows vehicles to communicate with anything on the Internet, including vehicles themselves and traffic signal controllers on the road. In this paper, we focused on a cost-effective ITS application: controlling traffic signals in real time in the IoV environment. Vehicular communication capabilities enable vehicles to communicate directly with traffic signal controllers (i.e., RSUs) located at intersections, so that traffic signal controllers can estimate how many vehicles are waiting for the green light (i.e., the vehicle queue length). Thus, with V2I communications, we can avoid using conventional computation-intensive video cameras or high-operational-cost sensors for the real-time estimation of the vehicle queue length. Furthermore, V2I communications are robust compared with video capture and in-road sensing in harsh road environments full of obstacles.
In the V2X communication-based approach, both the V2I communication overhead and the accuracy of the vehicle queue length estimation must be considered. To reduce the V2I communication overhead, we proposed the distance-based mechanism in [17] and newly proposed the sector-based mechanism in this paper. In the distance-based mechanism, vehicles farther from the traffic signal controller have a higher probability of sending messages to the traffic signal controller. In the sector-based mechanism, sectors are specified along a road segment and only the first vehicle stopped in each sector is allowed to perform V2I communications; a minimal sketch of both transmission rules is given below.
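The following sketch illustrates the two transmission rules as just described. It is an illustrative approximation, not the authors' implementation: the linear distance-dependent probability is an assumption (the paper only states that farther vehicles send with higher probability), and every name below is hypothetical.

```python
import random

def distance_based_should_send(distance_to_controller, max_range):
    """Distance-based rule: the farther a stopped vehicle is from the controller,
    the more likely it is to transmit a VI message.
    The linear probability used here is an assumed placeholder."""
    p_send = min(distance_to_controller / max_range, 1.0)
    return random.random() < p_send

def sector_based_should_send(stop_position, sector_length, inter_sector_distance, sector_occupied):
    """Sector-based rule: the road segment is divided into sectors; only the first
    vehicle that stops inside a sector transmits, so later arrivals stay silent."""
    period = sector_length + inter_sector_distance
    offset = stop_position % period
    if offset >= sector_length:          # vehicle stopped in the gap between sectors
        return False
    sector_id = int(stop_position // period)
    if sector_occupied.get(sector_id):   # another vehicle already reported this sector
        return False
    sector_occupied[sector_id] = True    # this vehicle is the first stopper in the sector
    return True
```

In an actual deployment the `sector_occupied` bookkeeping would live at the RSU and be reset at every red phase; the sector length and inter-sector distance are the tunable parameters discussed in the conclusions.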
For the performance comparison of our mechanisms, we carried out simulations based on the Veins vehicular network simulation framework for various performance-determining factors and analyzed the performance in terms of the message transmission overhead and the accuracy of the vehicle queue length estimation. The message transmission overhead indicates the utilization of the constrained wireless link resource, so fewer message transmissions are preferred. From the simulation results, we showed that our mechanisms significantly reduce the number of message transmissions without losing accuracy in the estimated vehicle queue length, compared with the Naïve mechanism. The sector-based mechanism decreases the message transmission overhead to about a sixth of that of the Naïve mechanism and a third of that of the distance-based mechanism. This indicates that our V2X communication-based mechanisms are especially well suited to road situations with many vehicles. Also, we verified that the proper selection of sector length and inter-sector distance is important for achieving acceptable performance of the sector-based mechanism. According to the simulation results, a sector length of 10–20 m and an inter-sector distance of 10–20 m are good enough for the case of a vehicle length of 5 m and an inter-vehicle distance of 2.5 m. Because sector length and inter-sector distance are easily tunable parameters, our sector-based mechanism can be applied to any road environment at no extra cost. Also, based on the discussion in Section 4.3, we come to the conclusion that our vehicular communication-based approach can be the ultimate enabler of intelligent traffic signal control in the age of IoV. | 2019-03-14T04:07:24.184Z | 2019-03-01T00:00:00.000 | {
"year": 2019,
"sha1": "0f2286a9ea22fb39cf78e906fe1ba5764c08c60b",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1424-8220/19/6/1275/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "0f2286a9ea22fb39cf78e906fe1ba5764c08c60b",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Medicine",
"Computer Science"
]
} |
254636424 | pes2o/s2orc | v3-fos-license | Entanglement resolution of free Dirac fermions on a torus
Whenever a system possesses a conserved charge, the density matrix splits into eigenspaces associated with each symmetry sector, and we can access the entanglement entropy in a given subspace, known as symmetry resolved entanglement (SRE). Here, we first evaluate the SRE for massless Dirac fermions in a system at finite temperature and size, i.e. on a torus. Then we add a massive term to the Dirac action and treat it as a perturbation of the massless theory. The charge-dependent entropies turn out to be equally distributed among all the symmetry sectors at leading order. However, we find subleading corrections which depend both on the mass and on the boundary conditions along the torus. We also study the resolution of the fermionic negativity in terms of the charge imbalance between two subsystems. We show that, for this quantity too, the presence of the mass alters the equipartition among the different imbalance sectors at subleading order.
Introduction
Entanglement is one of the most peculiar properties of quantum physics. It is due to quantum correlations between different parts of a system, meaning that it is an intrinsically nonlocal quantity without a classical analogue. It is a fundamental tool in theoretical physics, with applications in high-energy physics [1][2][3][4], in the context of black holes [5][6][7], and in low-energy physics, where it can be useful to study extended quantum systems [8][9][10][11][12]. Given the relevance of the subject, several different ways to quantify entanglement have been studied over the years. Here, we consider two of them: the first one is the von Neumann entropy, which characterises the entanglement of a bipartite system in a pure quantum state |ψ⟩ [11]. Starting from a bipartition of a system A∪B, and the corresponding reduced density matrix (RDM) ρ_A = Tr_B(|ψ⟩⟨ψ|), the entanglement entropy is defined as

S_1 = -Tr(ρ_A log ρ_A).   (1.1)

A related family of functions, known as Rényi entropies, is given by

S_n = 1/(1-n) log Tr(ρ_A^n).   (1.2)

The latter are particularly convenient because, in the path-integral formalism and for integer n, Tr[ρ_A^n] corresponds to a partition function on an n-sheeted Riemann surface R_n, obtained by joining cyclically the n sheets along the region A [13]. In quantum field theory, this object can be computed as a correlation function involving a particular type of twist fields, which are related to branch points in the Riemann surface R_n [14,15]. Then, the limit n → 1 allows one to compute the von Neumann entropy.
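As a concrete illustration of Eqs. (1.1)-(1.2) (added here, not part of the original paper), the sketch below computes the von Neumann and Rényi entropies directly from the eigenvalues of a reduced density matrix; it assumes only standard numpy.

```python
import numpy as np

def renyi_entropy(rho_A, n):
    """Rényi entropy S_n = (1/(1-n)) * log Tr(rho_A^n); S_1 is the von Neumann entropy."""
    lam = np.linalg.eigvalsh(rho_A)            # eigenvalues of the reduced density matrix
    lam = lam[lam > 1e-12]                     # drop numerically zero eigenvalues
    if np.isclose(n, 1.0):
        return float(-np.sum(lam * np.log(lam)))        # limit n -> 1
    return float(np.log(np.sum(lam ** n)) / (1.0 - n))

# Example: a Bell pair has rho_A = diag(1/2, 1/2), so S_n = log 2 for every n.
rho_A = np.diag([0.5, 0.5])
print(renyi_entropy(rho_A, 1), renyi_entropy(rho_A, 2))   # both equal log(2)
```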
Despite the success of Rényi entropies in describing bipartite pure systems, entanglement entropies are no longer good measures of entanglement in mixed states, since they mix quantum and classical correlations (e.g. in a high-temperature state, S_1 gives the extensive result for the thermal entropy, which has nothing to do with entanglement). If we consider the mixed state described by the reduced density matrix ρ_A, where A = A_1 ∪ A_2, and we want to quantify the entanglement between A_1 and A_2, we can use the entanglement negativity [16,17], defined as

E^b := log Tr|ρ_A^{T_1}|,   (1.3)

where Tr|O| := Tr √(O†O) denotes the trace norm and ρ_A^{T_1} is the partially-transposed RDM of A, which is defined as follows. Let us write the RDM as

ρ_A = Σ_{i,j,k,l} ⟨e^1_i, e^2_j| ρ_A |e^1_k, e^2_l⟩ |e^1_i, e^2_j⟩⟨e^1_k, e^2_l|,   (1.4)

where e^1_j and e^2_k are orthonormal bases for the Hilbert spaces H_1 and H_2 describing the system in A_1, A_2, respectively. The partial transpose ρ_A^{T_1} is defined by exchanging the matrix elements in the subsystem A_1, i.e.

⟨e^1_i, e^2_j| ρ_A^{T_1} |e^1_k, e^2_l⟩ = ⟨e^1_k, e^2_j| ρ_A |e^1_i, e^2_l⟩.   (1.5)

In terms of the eigenvalues λ_i of ρ_A^{T_1}, and recalling the normalisation Σ_i λ_i = 1, Tr|ρ_A^{T_1}| can be written as

Tr|ρ_A^{T_1}| = Σ_i |λ_i| = 1 + 2 Σ_{λ_i<0} |λ_i|.

This expression shows that the negativity measures "how much" the eigenvalues of ρ_A^{T_1} are negative, from which the name negativity comes. Finally, it is worth mentioning that the superscript b in Eq. (1.3) refers to the fact that this expression is sometimes referred to as the bosonic negativity: indeed, it turned out that this definition is not well suited to investigating entanglement properties in the context of (free-)fermionic systems. To circumvent this issue, a slightly different definition for the partial transpose, based on the partial time-reversal (TR) transformation of the RDM, can be adopted, as we are going to review in Section 2.
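To make the definition concrete (an illustration added here, not taken from the paper), the following sketch builds the partial transpose of a two-qubit density matrix and evaluates Tr|ρ^{T_1}| and the logarithmic negativity; it only assumes numpy, and the function names are ours.

```python
import numpy as np

def partial_transpose_first(rho, d1, d2):
    """Partial transpose over the first factor of a (d1*d2) x (d1*d2) density matrix."""
    r = rho.reshape(d1, d2, d1, d2)          # indices: (i, j, k, l) = <i,j|rho|k,l>
    return r.transpose(2, 1, 0, 3).reshape(d1 * d2, d1 * d2)   # swap i <-> k

def log_negativity(rho, d1, d2):
    lam = np.linalg.eigvalsh(partial_transpose_first(rho, d1, d2))
    trace_norm = np.sum(np.abs(lam))          # = 1 + 2 * (sum of |negative eigenvalues|)
    return np.log(trace_norm)

# Example: a two-qubit Bell state has Tr|rho^{T_1}| = 2, hence log-negativity log(2).
psi = np.array([1, 0, 0, 1]) / np.sqrt(2)
rho = np.outer(psi, psi.conj())
print(log_negativity(rho, 2, 2))              # ~0.693
```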
The paper is organised as follows. In Section 2 we provide all the definitions concerning the measures of symmetry resolved entanglement. We also introduce the concept of negativity for fermionic systems and we discuss the charge imbalance resolved negativity. This setup allows us to calculate the symmetry resolved entanglement entropy of massless Dirac fermions for a subsystem with multiple intervals at finite size and temperature in Section 3. By adding a massive perturbative term to the Dirac action, we generalise the previous study in Section 4. The approach we adopt is suitable also to study the charge imbalance negativity, extending the results found in [82] to the massive case in Section 5. Throughout the manuscript, we check the agreement of the analytic expressions with the numerical lattice computations, both for the charged moments and their Fourier transforms. We draw our conclusions in Section 6. Five appendices provide the details of our computations, especially for the analytic continuations of the Rényi entropy.
Entanglement resolution in pure and mixed states
In this section, we provide the general definitions of the main quantities of interest, namely the symmetry resolved entanglement entropies and the charge imbalance negativity: they quantify the entanglement in each charge sector when the system is invariant under a global U (1) symmetry. A charge sector is defined as a U (1) invariant subspace of the Hilbert space [22].
Symmetry resolved entanglement
Let us briefly review the study of critical systems with an additional internal global U(1) symmetry. The generator of the symmetry, which we call Q, commutes with the Hamiltonian and thus with the density matrix, which for a system at nonzero temperature 1/β reads ρ = e^{-βH}/Z, with Z = Tr e^{-βH}. If the generator Q can be written as the sum of the local charges in the two subsystems A ∪ B, Q = Q_A + Q_B, the charge restricted to A commutes with the reduced density matrix, and the RDM takes a block form in the basis of the eigenstates of Q_A, with eigenvalues q:

ρ_A = ⊕_q p(q) ρ_A(q).

The factors p(q) have been introduced in order to normalise each block, so that Tr ρ_A(q) = 1 for each q (and Σ_q p(q) = 1). They describe the probability of finding q as an outcome of a measurement of Q_A. The density matrix of each block can be found by projecting ρ_A onto the corresponding eigenspace and normalising the result,

ρ_A(q) = Π_q ρ_A Π_q / p(q),   p(q) = Tr(Π_q ρ_A),

where Π_q is the projector onto the q-charge sector. Since each ρ_A(q) is a density matrix (meaning it is positive semidefinite and normalised), the corresponding entanglement entropy, which we call the symmetry resolved entanglement entropy, is given by

S(q) = -Tr[ρ_A(q) log ρ_A(q)],   (2.5)

and is related to the total entanglement entropy as

S = Σ_q p(q) S(q) - Σ_q p(q) log p(q).   (2.6)

From this expression, we can decompose the total entanglement as a sum of two quantities. The first one is the weighted sum of the entanglement in each symmetry sector, which has been called the configurational entanglement entropy S_c [18]; the second one, instead, arises from the fact that the system can be found in each symmetry sector with a certain probability p(q). This uncertainty, which can be viewed as "missing" information, is called the number entanglement, S_num. One can also define the symmetry resolved Rényi entropy as

S_n(q) = 1/(1-n) log Tr[ρ_A(q)^n].

To compute these quantities using CFT, we will follow the approach described in [22]. Exploiting the property that the eigenvalues of Q_A only take discrete integer values (for the U(1) symmetry), we can conveniently express the projector as a Fourier transform,

Π_q = ∫_{-π}^{π} dα/(2π) e^{-iqα} e^{iαQ_A}.

By introducing the charged moments Z_n(α) = Tr(ρ_A^n e^{iαQ_A}), the moments of the projected density matrix in a charge sector are obtained via the Fourier transform

Z_n(q) = ∫_{-π}^{π} dα/(2π) e^{-iqα} Z_n(α),   (2.9)

and the symmetry resolved Rényi entropies read

S_n(q) = 1/(1-n) log[ Z_n(q) / Z_1(q)^n ].

Luckily, the charged moments can be computed via the replica trick; the factor e^{iαQ_A} translates into a phase-difference condition in the path-integral formulation [22]. This means that, when sewing all the Riemann sheets together, we get a phase e^{iα} whenever we complete a cycle through the n copies.
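A small numerical illustration of this procedure (added here; the names and the toy system are ours, not the paper's): starting from charged moments Z_n(α) sampled on a grid of α ∈ [-π, π], the symmetry resolved moments follow from the Fourier transform in Eq. (2.9) and combine into the sector entropies.

```python
import numpy as np

def symmetry_resolved_moment(alphas, Zn_alpha, q):
    """Discrete approximation of Z_n(q) = int_{-pi}^{pi} dalpha/(2*pi) e^{-i q alpha} Z_n(alpha)."""
    dalpha = alphas[1] - alphas[0]
    return np.sum(np.exp(-1j * q * alphas) * Zn_alpha) * dalpha / (2 * np.pi)

def sr_renyi_entropy(alphas, Zn_alpha, Z1_alpha, q, n):
    """Symmetry resolved Rényi entropy S_n(q) = (1/(1-n)) log[ Z_n(q) / Z_1(q)^n ]."""
    Zn_q = symmetry_resolved_moment(alphas, Zn_alpha, q).real
    Z1_q = symmetry_resolved_moment(alphas, Z1_alpha, q).real
    return np.log(Zn_q / Z1_q**n) / (1.0 - n)

# Toy check: a single fermionic mode occupied with probability p, so that
# Z_n(alpha) = (1-p)^n + p^n e^{i alpha}; each charge sector (q = 0, 1) is then pure.
p, n = 0.3, 2
alphas = np.linspace(-np.pi, np.pi, 2001, endpoint=False)
Zn = (1 - p)**n + p**n * np.exp(1j * alphas)
Z1 = (1 - p) + p * np.exp(1j * alphas)
print(sr_renyi_entropy(alphas, Zn, Z1, q=0, n=n))   # ~0: the q = 0 sector is pure
```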
Fermionic negativity and charge imbalance
As mentioned in the introduction, the limit of applicability of the entanglement entropy lies in the requirement that the system is in a pure state, thus excluding the case of a finite temperature system or a multipartite geometry. In this subsection, we review the definition of fermionic negativity presented in [83], which covers such cases. Consider a one-dimensional subsystem A, described by a RDM ρ_A, partitioned into two parts A_1 and A_2. The fermionic negativity can be defined by looking at the action of a partial time-reversal transformation on the density matrix. This operation, in bosonic systems, is equivalent to a partial transposition, while it differs by a phase in the case of fermions. This property can be conveniently shown by looking at the time-reversal operation on fermionic coherent states [84], which is: where ξ, ξ̄ are Grassmann variables and |ξ⟩ = e^{-ξc†}|0⟩, ⟨ξ̄| = ⟨0| e^{-cξ̄} are the related fermionic coherent states, while c, c† are the fermionic operators satisfying {c†_i, c_j} = δ_ij. The factor i is necessary because of the anticommuting nature of the Grassmann variables, and it is required to leave the trace invariant under this operation. The superscript R is used to distinguish the fermionic from the standard partial transpose operation defined in Eq. (1.5). This transformation rule can be rewritten in the occupation number basis as It shows that the partial time-reversal acts as the composition of the partial transpose plus a unitary operator, i.e. a phase [84]: In this basis, the phase contribution is 0 along the diagonal, since φ({n_i}, {n_i}) = 0. Under the partial time-reversal R_1, the density matrix is, in general, no longer Hermitian.
To restore this property, we can consider the matrix ρ^{R_1†} ρ^{R_1}, which has only real eigenvalues, and the fermionic negativity is defined by Analogously to the Rényi entropy, we can define the fermionic Rényi negativity as (2.16) From Eq. (2.16), we can recover the fermionic negativity by the analytical continuation of the even Rényi negativities, evaluated at n_e = 1 as To further investigate the internal symmetry structure of this quantity in the presence of a conserved charge, we can follow Refs. [50,74]: let us call Q_A the conserved charge restricted to A, which in turn can be expressed as the sum of the charges Q_1 and Q_2 localised in the two partitions of the system, A_1 and A_2, respectively. The density matrix commutes with the total charge and we act on the commutation relation with a partial time-reversal operation, as in Eq. (2.14): In the occupation number basis, which diagonalises the charge operator, we can see explicitly that Q^{R_1} = Q_1, since Q is diagonal in this basis, so the conserved charge of the partial time-reversed operator, Q_imb = Q_2 - Q_1, has the form of a charge imbalance between A_1 and A_2.
Exploiting the presence of a conserved charge, we can decompose the total negativity of the system as done for the entropy: (2.19) where the normalising factor p(q) is now defined as p(q) = Tr(Π_q ρ_A), (2.20) and Π_q is the projector onto the imbalance sector of charge q. Let us remark that if our initial density matrix ρ_A is in a pure state, there exist some degenerate values of q for which p(q) = 0; this is due to the fact that the value of Q_A = Q_1 + Q_2 is fixed in this case, thus allowing only certain values of the imbalance Q_imb = Q_2 - Q_1 to be different from zero. This is not the case for mixed states. Finally, we can define the charged moments of the partial time-reversed density matrix Tr ρ A (ρ A ) † n−1 ρ A e iαQ imb , n = 2m + 1 (2.21) and the charge imbalance resolved moments as their Fourier transform From the moments, we can recover the normalised Rényi negativity The analytical continuation performed on the even values provides the charge imbalance negativity according to the limit n_e → 1.
Warmup: Symmetry resolution - massless case
As a warmup, we consider the massless Dirac field theory, described by the Lorentz-invariant Lagrangian L = ψ̄ γ^µ ∂_µ ψ in a 1 + 1 dimensional spacetime. The γ_µ matrices can be represented in terms of the Pauli matrices as γ^0 = σ_1 and γ^1 = σ_2. This action has a global U(1) symmetry, ψ → e^{iα} ψ and ψ̄ → e^{-iα} ψ̄, which is related to the conservation of the charge Q = ∫ dx^1 ψ†ψ. Here, we focus first on the computation of the charged moments through a diagonalisation in the space of replicas. Using the bosonisation technique, the problem is mapped to the calculation of a correlation function of vertex operators. The analytical expressions are compared with the lattice computations, finding a good agreement. Finally, we perform a Fourier transform of the obtained quantities. We think it is instructive to report all the steps of the derivation of the known results about the massless case, since they will be the building blocks for the following sections about the massive Dirac fermions, whose computations are more cumbersome.
Replica diagonalisation
Let us consider a system of free one-dimensional Dirac fermions at finite temperature and size, whose corresponding field is defined on a torus spacetime, given the periodicity in both imaginary time τ and space x. The partition function corresponding to Tr [ρ n A ] has a single fermionic field defined on a Riemann surface made of n different sheets, and can be mapped to an equivalent one in which one deals with a n-component field, which is instead defined on a single sheet: where ψ j is the Dirac field of the j-th copy of the system. These copies interact through a composite twist field operator T n,α which imposes the appropriate boundary conditions that, in the Riemann surface picture, connects the various sheets. Given a component of the field ψ j , the insertion of a twist field at a point u implies that a winding around such point (z − u) → e 2πi (z − u) maps it to the next component, with an additional phase e iα/n . The same applies to the composite anti-twist fieldT n,α , which takes a field from the copy j to j − 1 adding a phase e −iα/n . This transformation can be encoded in the twist matrix: The action of this matrix is diagonalised by the appropriate change of basis in the replica space:ψ for k = − n−1 2 , . . . n−1 2 , so that the twist fields can be decomposed into n different twists acting independently on each replica T n,α = k T k,n,α ,T n,α = kT k,n,α .
Since theψ k fields are decoupled, the total partition function is just the product of n independent partition functions, Z k . In this formalism, the addition of a global flux α is straightforward by equally splitting it among all the n replicas. There is an ambiguity in the choice of the phase as it can generically take the form 2πk+α n + 2πm. Different choices of the winding number m correspond to different configurations of the field, and in principle we should take a linear combination of the partition functions computed with all the possible choices of m. However, the leading order contribution to the partition function Z k corresponds to the choice m = 0, as explained in Appendix B.
We now review the procedure outlined in [85] to compute the decoupled partition functions Z k . The most generic subsystem A is composed of p intervals the charged moments can then be expressed as a correlator where we indicate as u i , v i the points corresponding to u i = (u i , 0), v i = (v i , 0) in the Euclidean coordinates of the path integral. Our fields are multivalued: they gain a different phase around the branch points u i and v i , respectively. In order to obtain single-valued fields, we can let the phase be absorbed by a gauge transformation [85]. Calling ψ G the new, gauged, fields, they are related with the multivalued ones by the gauge transformatioñ Requiring a null phase condition around the branch points for ψ G k , Stokes theorem allows to find the appropriate gauge transformation: where µν is the antisymmetric tensor in 2 dimensions. Similarly, the points v i give an opposite phase and putting all together we get The Dirac Euclidean lagrangian in a gauge field is We can use the fermionic current j µ =ψ G k γ µ ψ G k to isolate the gauge field term in the action, rewriting the partition function as where · CF T denotes that the expectation value is taken on the massless, ungauged theory, which is conformal invariant. We can apply the bosonisation technique, with the substitution and compute Eq. (3.11) in the dual theory of the Dirac fermion, which is that of a free scalar φ k , described by the action: Because of Eq. (3.9), Z k (α) is nothing but a correlation function of vertex operators: (3.14) The correlation function of vertex operators on a torus is given by [86]: Using Eqs. (3.15) and (3.14), we find: where we have added an ultraviolet cutoff ≈ a L necessary for the numerical comparison with a lattice theory with spacing a → 0. The spacial length of the torus is set equal to L, and τ = iβ L is the ratio of the two periods of the torus. θ ν are the Jacobi theta functions, with the pedix ν specifying one of the four possible boundary conditions along the torus, the spin sectors ν = 1, 2, 3, 4. Their meaning is reported in the following table using the standard notation of N S and R for Neveu-Schwarz or Ramond periodic condition, respectively: Spin sector Periodicity in the x direction Periodicity in the τ direction Before concluding this section, we notice that Eq. (3.16) can be put in another form by applying a modular transformation on the parameter τ → − 1 τ . Under such transformation, the Jacobi theta functions obey the identity [86]: where ν = ν for ν = 1, 3, while ν = 2, 4 interchange. Applying this transformation to (3.16), we obtain the expression:
Charged moments on the torus
We can start from Eq. (3.6) to compute the charged moments. To this end, we divide it into two parts: the first factor, Z^0_k(α), does not depend on the spin sector ν, so we dub it the spin independent part; the second factor will instead be named the spin dependent part, Z^ν_k(α). Explicit analytic expressions are provided in Eqs. (3.22), (3.25), (3.26).
Spin independent part
For convenience, we take the logarithm of both terms in Eq. (3.6) to transform the product over replicas into a sum. We focus on the physical sectors ν = 2, 3, which are antiperiodic in the imaginary time with period β. The sum over the replica index in the spin independent part can be easily calculated (the elementary sum is worked out below), and the resulting expression, in the limit L, β → ∞, agrees with the well-known CFT results on a flat space-time [22].
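For completeness (a short derivation added here, not in the original text), the elementary replica sum behind the spin independent part is, assuming as stated earlier that k runs over k = -(n-1)/2, ..., (n-1)/2 in integer steps:

```latex
\sum_{k=-\frac{n-1}{2}}^{\frac{n-1}{2}} \left(\frac{k+\frac{\alpha}{2\pi}}{n}\right)^{2}
 = \frac{1}{n^{2}}\left[\sum_{k} k^{2} + \frac{\alpha}{\pi}\sum_{k} k + n\,\frac{\alpha^{2}}{4\pi^{2}}\right]
 = \frac{n^{2}-1}{12\,n} + \frac{\alpha^{2}}{4\pi^{2}\,n},
```

where we used that, over the symmetric range, Σ_k k = 0 and Σ_k k² = n(n²-1)/12. This reproduces the familiar (n²-1)/(12n) prefactor of the Rényi entropies at α = 0 and the α² term responsible for the leading charged contribution.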
Spin dependent part
The spin dependent part of the partition function is the most interesting, since it shows deviations from standard CFT on a plane. As we will find out, these terms, which come purely from the boundary conditions, are responsible for small violations of the equipartition of entanglement. Physically, the influence of the boundary on entanglement shows the non-local nature of the latter. The goal is the computation of where r, from now on, will be defined as the total length of the subsystem: Let us focus on the ν = 2 spin sector. Substituting the infinite product expansion of theta functions [86], Taylor-expanding the logarithm and using the results for the geometric series, we find This series converges exponentially fast for all real values of n.
A similar formula can be found for the spin sector ν = 3
Comparison with the lattice theory
In this section we benchmark the results for the charged moments log Z_n ≡ log Z^0_n(α) + log Z^ν_n(α), ν = 2, 3. For the lattice computations, we use the techniques reported in Appendix A, setting the number of lattice sites in the numerics to N = 300. We also choose the lattice spacing a = 1/N in such a way that L = Na = 1. As can be seen from Eq. (3.22), there is an unknown lattice parameter (the ultraviolet cutoff) necessary to compare the CFT results with the numerics. To avoid this problem, a possibility is to consider quantities such as the difference of the logarithms of two charged moments, each computed for the same number of intervals but with different lengths, so that the dependence on the cutoff cancels out. This has been done in Figure 1, showing a good agreement between the lattice computations and the field theory prediction in Eqs. (3.22) and (3.23). Figure 1: Difference between the logarithms of the charged moments, ∆log Z_n(α), evaluated for a single interval at two different lengths. It is also possible to fix the cutoff using its analytical expression found in [50] by exploiting the generalised Fisher-Hartwig conjecture. In short, this technique allows one to obtain an explicit formula for the charged moments computed on a lattice theory at T = 0 and N → ∞. The comparison with the analytical formula derived from field theory provides an expression for the lattice constant, which we assume to depend only weakly on τ = iβ/L. Numerical evidence seems to validate this assumption, as shown in Fig. 2, where the lattice data and the field theory prediction perfectly overlap. We have set the number of lattice sites to N = 300.
We report explicitly the expression we used for the lattice constant [50], which is (3.27)
Symmetry resolved moments via Fourier transform
Applying the inverse Fourier transform (2.22) we obtain the symmetry resolved Rényi entropies. In order to explicitly carry out this integral, we consider the physically relevant limit ∼ a L = 1 N 1. The function log Z n (α) is well described by a quadratic function in α in this limit, since the spin independent part is dominant, thanks to the term ∝ α 2 log( ) in Eq. (3.22): In this limit, we can approximate the charged moments at the lowest nontrivial order in α as where the explicit expression for the parameters A n , b n are reported in Appendix C to lighten the notations. The Fourier transform can be explicitly carried out by extending the integration domain of Eq. (2.9) to the whole real axis, instead of just [−π, π]. This is a good approximation provided again bn 8 1, which holds true in the thermodynamic limit ∼ a L → 0. Within this assumption, we get the symmetry resolved moments 30) and the symmetry resolved entanglement entropy It is interesting to study whether the equipartition of entanglement is broken. In particular, looking at Eq. (3.31), the charge dependent contribution cancels if b n ∝ 1 n . Since this is always true for the spin independent part of the partition function (ignoring the n and α dependence of the lattice constant ), any breaking of equipartition predicted by pure CFT is due to the spin dependent part of the charged moments, thus to the boundary conditions. Moreover, if we consider the explicit expression for b ν=3 n in Eq. (C.2), we observe that the spin dependent part is suppressed for β L 1, meaning the equipartition is preserved at high temperatures. In Figure 3, we show the temperature dependence of the symmetry resolved entanglement entropy for one interval and different charges q, in both relevant spin sectors. We can clearly see that the equipartition is recovered in the high temperature regime, while this is spoiled by the boundary terms at low temperature, in particular in the spin sector ν = 2. In Figure 4 (top panels), we plot the symmetry resolved moments obtained with the saddle point approximation in Eq. (3.30), testing the agreement with the ones obtained from a numerical transform of the field theory formula in Eq. (3.20) and with the lattice theory. Finally, in Figure 4 (bottom panels) we do the same for the symmetry resolved entropy in Eq. (3.31). We see that the agreement is good around q = 0 and it worsens for higher values of q since, while the lattice entropy seems to saturate, the saddle point predicts a parabolic behavior for all q, which only mimics the lattice behavior in a neighborhood of q = 0.
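The saddle-point step above amounts to a Gaussian Fourier transform. The snippet below (added here as an illustration) checks numerically that, for a generic quadratic ansatz log Z_n(α) = log Z_n(0) - b_n α²/2, the Fourier transform over [-π, π] is well approximated by the closed-form Gaussian integral over the whole real axis; the value of b_n used is arbitrary, since the paper's parameters A_n, b_n are only given in its Appendix C.

```python
import numpy as np

b_n, q = 4.0, 1
alphas = np.linspace(-np.pi, np.pi, 4001, endpoint=False)
Zn_alpha = np.exp(-b_n * alphas**2 / 2)        # quadratic (Gaussian) ansatz, Z_n(0) set to 1

numeric = np.sum(np.exp(-1j * q * alphas) * Zn_alpha).real * (alphas[1] - alphas[0]) / (2 * np.pi)
closed_form = np.exp(-q**2 / (2 * b_n)) / np.sqrt(2 * np.pi * b_n)   # Gaussian integral on the real axis
print(numeric, closed_form)   # close, provided b_n is large enough that the tails beyond |alpha| = pi are negligible
```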
Symmetry resolution -massive case
In this section we derive the leading correction to the symmetry resolved entanglement entropy on the torus due to a mass term in the Dirac Lagrangian, i.e.
L = ψ̄ γ^µ ∂_µ ψ + m ψ̄ψ. (4.1) The procedure to compute the partition function is the same as the one described in Section 2.1, with the only difference that the average (3.11) is now computed on a massive theory and not on a CFT. Given the complexity of the resulting expressions, we refer to Appendix D for the technical details, while we report the steps of their derivation in the main text. We show that the presence of a small mass in the Dirac action does not spoil equipartition at leading order in the thermodynamic limit, but gives a subleading, spin-sector-dependent correction, which is proportional to the mass.
Here, we treat the mass perturbatively, in such a way that the extra term in the 0 2 4 6 8 10 We use the superscript m in order to distinguish the massive charged moments from the massless ones. Using the bosonisation technique, Eq. (4.1) can be mapped into a sine-Gordon model, whose action reads where the coupling λ is proportional to the mass with a dimensional factor dependent on a cutoff scale (this constant will be fitted when comparing the field theory and the numerical results). Eq. (4.2) can be rewritten in terms of expectation values on a bosonic theory replacing the massive term with the cosine interaction, and using Eq. where for brevity we omit the subscript CFT on the average from now on, assuming all path integrals are performed on the massless action. The first nontrivial order for the mass term is quadratic in λ, due to the neutrality condition of vertex operators: We can then express the first order mass correction to the charged moments as: (4.6) We use the following compact notation to label our main quantities of interest: allowing us to write the correction to the full partition function as: where, we stress again, λ ∝ m up to a renormalisation constant. A k,n is a correlation function of vertex operators, so Eq. (3.15) yields: (4.10) The second addendum ensures that there is no mass correction when the size of the subsystem goes to 0, and its presence is crucial to regularise the integral. Indeed, we can check that the integral in Eq. (4.10) does not diverge, so that a regularisation for the mass is not required to obtain a meaningful result. Assuming to work at finite L, β, a divergent contribution can only come from poles of the integrand. Since θ 1 (z) ∝ z as z → 0, x = y is a singular point. Expanding around x = y at the lowest order, we have and θ ν which means that the two singular addends in Eq. (4.10) cancel out at the leading order, and the divergence can behave at most as ∼ 1 |x−y| . Since x, y are 2−dimensional variables, when divided in real and imaginary part, a pole must diverge at least like ∼ 1 |x−y| 2 to give an infinite contribution, which is not the case. Points where x = u a , v a or y = u a , v a (and vice-versa) behave as |x − u a | − 2(k+α/(2π)) n . In our case, |k| ≤ n−1 2 and |α|/(2π) ≤ 1 2 , so this exponent can be 2 at most, where the value 2 occurs only for the most extreme values of α = ±π. Except for these α = ±π, such poles behave like 1/|x − x 0 | s , s < 2 which means they do not give a divergent contribution. The analytical resummation of k A k,n , can be done using identities for the Jacobi theta functions and we report the result of these computations in Appendix D.
Numerical implementation
While the total entanglement entropy at finite temperature and sistem size has already been studied in Ref. [87] for the massless case, the massive correction was only found for integer values of n ≥ 2, so it is important to test the analytic continuation for n = 1 and α = 0 against lattice computations. As already stressed, the mass is defined up to a renormalisation factor; to match the field theory result with the lattice model we fit a multiplicative constant for the renormalised mass and use it in the plots. We assume that the renormalised mass does depend on both the number of lattice sites and the ratio β L , so a different fit is performed for every choice of these parameters. The Rényi entropies get a small mass contribution equal to and taking the limit n → 1 of this expression (as explained in Appendix D), we get the entanglement entropy mass correction, which has been plotted in Fig. 5. As noted in [87], since the ground state in the spin sector ν = 2 is degenerate, the limit e −βH | β→∞ does not produce a pure state. The presence of the mass should remove this degeneracy, but our perturbative assumption requires m T, L −1 , meaning that the degeneracy is not lifted and the limits m → 0, T → 0 do not commute. This is shown in Figure 5, where, within the spin sector ν = 2, the property S n (r) = S n (L − r), (4.14) which holds for a pure state, is not respected. Similarly, we compare the leading order mass correction of the charged moments in Fig. 6. The integration of Eqs. (4.9) and (4.13) has been carried out with the Mathematica Montecarlo algorithm, averaged on many different realisations, to reduce the integration error, which was not negligible in a single run of the algorithm. (blue points) for a subsystem of one single interval, at β L = 5, in the spin sectors ν = 2 (left) and ν = 3 (right). We plot the Rényi entropy S n corresponding to n = 1 for the ν = 2 spin sector, while we choose n = 1.5 for the ν = 3 spin sector. Notice that in this case we used the symbols for the field theory prediction, since the evaluation of the integrals defining this correction makes it hard to sample many points; for visual clarity, we used instead a continuous line for the lattice data. The overlap between the field theory and the lattice theory is obtained by a fit of a multiplicative constant for the data, due to a renormalisation of the mass. Figure 6: Mass correction to the logarithm of the charged moment log(Z m n (α)) computed at β = 5 for a subsystem of size r = 0.5L in a system of N = 300 sites. This quantity is evaluated for n = 1.5, in the spin sector ν = 2 (left), and for n = 1 in the spin sector ν = 3 (right). The plots have been overlapped thanks to a multiplicative constant obtained by a subsystem-size fit like the one shown in Figure 5. In particular, the blue points correspond to the analytical prediction in Eq. (4.15).
Fourier transform and number entropy
Starting from the charged moments in Eq. (4.9), the leading order massive correction to the symmetry resolved moments and entropy are given by and ∂S m n (q) respectively. The presence of a mass gives, within this perturbative approach, a term which violates the equipartition, and which is subleading in the small m limit we are considering. The correction to the symmetry resolved entropy has been plotted in Fig. 7 against the exact lattice computations; we can observe a deviation between the lattice data and the field theory results for q > 1, which are due to finite size effect. Before concluding the section, we want to show how the mass correction to the Dirac action affects the result for the number entropy. Using a quadratic approximation for the logarithm of the charged moments around α = 0, we can get an expression for the number entropy using the saddle point approximation: by defining we can rewrite Eq. (4.9) as log(Z m 1 (α)) ≈ log(Z m 1 (0)) + and, using Eq. (2.9), the probability distribution p(q) reads .
(4.19)
Estimating the sum with an integral, a substitution justified in the same limit of the saddle entropy approximation, the number entropy defined in Eq. (2.6) is where for small m, we can find the O(m 2 ) corrections to the massless case.
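To illustrate the last step (a sketch added here, not from the paper): once the probability distribution over the charge sectors is approximately Gaussian, the number entropy obtained by summing over sectors is well reproduced by the differential entropy of the Gaussian, which is the content of the sum-to-integral substitution above. The variance value below is a toy choice.

```python
import numpy as np

def number_entropy(p):
    """S_num = -sum_q p(q) log p(q), cf. Eq. (2.6)."""
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

# Discrete Gaussian p(q) over the charge sectors, with toy mean and variance.
sigma, q_mean = 1.5, 0.0
q = np.arange(-50, 51)
p = np.exp(-(q - q_mean)**2 / (2 * sigma**2))
p /= p.sum()

print(number_entropy(p))                              # discrete sum over sectors
print(0.5 * np.log(2 * np.pi * np.e * sigma**2))      # continuum (integral) approximation
```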
Charge imbalance resolution - massive case
We now turn to the computation of the fermionic negativity. We need to evaluate N_1(α) for the normalisation constant p(q) in Eq. (2.23) and the even charged moments, which will be analytically continued to n_e → 1. This quantity has been studied in Ref. [25] for massless free fermions at finite size and temperature, and we will briefly review it as a starting point to compute the leading order massive correction. As customary, we denote by A_1 and A_2 the subpartitions of A, and we want to evaluate the charge imbalance negativity between them. The odd and even charged moments N_n(α) can be found via the replica trick; in this case we need to sew the n copies of the fermionic field taking into account also the antiperiodic boundary connection of each field due to the anticommuting nature of fermions, i.e.
where T α is the twist matrix already defined in Eq. (3.3), while T R α takes into account the time-reversal operation and reads The twist matrices T α and T R 1 α , defined in Eqs. (3.3) and (5.3), respectively, can be simultaneously diagonalised for n even or, trivially, for n = 1, by choosing the following basis for the fields on the replicas for k = 1−ne 2 , . . . ne−1 2 . In this basis, our problem is split into the computation of n decoupled partition functions in which the fields have different twist phases, e 2πi k+α/2π ne and −e −2πi k+α/2π ne passing through A 2 and A 1 , respectively. The total partition function is the product of each of these As a concrete example, we consider the following geometry: our subsystems A 1 , A 2 are adjacent segments of lengths 1 , 2 , respectively, such that . Moreover, we work at finite size and temperature, so that each partition function is defined on a torus. Using again the bosonisation technique, Z R 1 ,k (α) is expressed as a correlation function of vertex operators. As explained in Appendix B we need to fix the phase ambiguity of the vertex operators by choosing the leading order contribution in the scaling limit. This has been done in Appendix B.2, and we report here only the final result which is readily evaluated for massless fermions as [25] Z m=0 Here we want to study the leading order massive correction using again a perturbative expansion, which mirrors the one in Section 4. This expansion is meaningful only if the mass is taken to be smaller than all the energy scales involved, i.e. m T, L −1 . In this case we define the quantityà k,n as , which is again a correlator of vertex operator that can be explicitly worked out. Thus, the massive correction can be written as (5.10) The integral in Eq. (5.9), similarly to the one in Eq. (4.10), does not have any UV divergences. This expression can be analytically continued tõ for all values of n e ≥ 1. In Appendix E, we report its explicit expression, which has been used to obtain the numerical plots. The case n = 1, where n is considered odd and not an analytic continuation from the even values, can be treated explicitly: , δ((Z m R 1 ,n=1 )) = m 2 d 2 xd 2 y (I(α, x, y) − I(0, x, y)) , (5.12) where the phase ambiguity is fixed by the prescription as derived in Appendix B.2, and the correlation function is evaluated using Eq. (3.15). In Figure 8, we plot the analytic continuation of In the first three plots, we fix the value of α to α = 0, 0.5, 1, the size of A 1 is 0.4L and the size of A 2 corresponds to the x axis; in the last panel, instead, we have fixed the sizes of A 1 , A 2 and we let the value of α vary in the range [−π, π]. In every case, we compare the field theory result with the ones obtained from the lattice, fitting a mass renormalisation constant. Moreover, the evaluation of the integral in Eq. (5.14), performed with the Mathematica Montecarlo algorithm, is particularly challenging given the oscillations of the integrand, which may lead to small, systematic errors. We also made the assumption, for the cutoff constant in Eq. (5.10), to be independent on α, which allows it to be absorbed in the definition of λ, and that may explain a small mismatch between the lattice theory and the field theory in the last plot of Figure 8. Finally, following the steps outlined in Section 2.2, one can link the charged quantities to the charge-imbalance negativity. 
The massive correction to the Fourier transforms can be expressed as while the charge-imbalance resolved negativities read Since we are working in the regime in which the mass correction is smaller than the other inverse lengths, the correction is subleading with respect to the results found in [82] for massless fermions, so we find again equipartition at leading order.
Conclusions
In this manuscript, we have studied the behaviour of entanglement in systems with a conserved local charge, explicitly computing both the symmetry resolved entropies and the charge imbalance resolved negativity. We first considered the 1+1 dimensional CFT of free Dirac fermions at finite temperature T and finite size L, distinguishing the boundary conditions imposed on the fermionic field in the spatial and temporal directions. We focused on the sectors with antiperiodic conditions in the imaginary time and periodic (ν = 2) or antiperiodic (ν = 3) conditions along the spatial direction. The other two spin sectors (i.e. ν = 1, 4) have not been considered here, since they correspond to periodic conditions in the imaginary time, while physical fermions are required to be antiperiodic. We found that while the spin-independent part of the symmetry resolved entropies satisfies entanglement equipartition, the spin dependent term, i.e. the one related to the boundary conditions, causes a small subleading violation of equipartition. The comparisons between the analytical formulae and the exact lattice computations showed an overall very good agreement.
We then moved to the massive Dirac fermions, treating the mass as a perturbation of the CFT. We found that the correction due to the small mass violates equipartition, showing that, when moving away from criticality, systems tend to depart from this behavior. This result generalises the findings of Ref. [25], obtained in the planar limit, to the torus. As a byproduct, we have also managed to analytically continue the expression of the leading massive correction to the total Rényi entropies. We also studied how the presence of the mass affects the charge imbalance resolved negativity, again in conformal perturbation theory. For the negativity too, the massive term breaks the equipartition.
There are some aspects that our manuscript leaves open for further investigation. For example, the sum over all the possible spin sectors on the torus would lead to the symmetry resolved entanglement for the modular invariant Dirac fermion. Furthermore, our formalism strictly relies on the decoupling of the fermionic modes in the replica space, which is true only for free theories. It is natural to wonder what happens for interacting fermions (i.e. a compact boson with a given compactification radius). To this aim, we could employ some of the methods developed in the literature, e.g., in Ref. [79].
A Numerical methods
In this first appendix, we report how to numerically calculate the charged moments of the reduced density matrix ρ A . In free lattice models, ρ A can be written as [88][89][90] where H is a free effective entanglement (or modular) Hamiltonian. Using the relation between the entanglement Hamiltonian and the correlation matrix restricted to the subsystem A, C, the charged moments on the lattice Z n (α) = Tr ρ n A e iQ A α are [22,50] Z n (α) = log Tr C n e 2πiαN where the charge (or number) operator N distinguishes between particles and antiparticles with g † l the creation operator for antiparticles, and f † l for particles. The entanglement Hamiltonian is also the sum of two parts which contain particles and antiparticles, respectively: The correlation matrix is similarly divided in two blocks since g † f = 0, and P , A are the particles/antiparticles restricted correlation matrices, respectively. Restricting to particles, N = 1, so that log(Z p n (α)) = Tr log e 2πiα1 P n + (1 − P ) n , (A.6) while for antiparticles, N = −1 and log(Z a n (α)) = Tr log e −2πiα1 A n + (1 − A) n . (A.7) Assuming we have the same number of states for particles and antiparticles (i.e. the total state is neutral): log(Z n (α)) = Tr log e −2πiα1 A n + (1 − A) n + Tr log e 2πiα1 P n + (1 − P ) n = Tr log e −iπα1 A n + e iπα1 (1 − A) n + Tr log e iπα1 P n + e −iπα1 (1 − P ) n . (A.8) We now need a proper lattice discretisation of the field theory. The free (massive) fermion Lagrangian in Minkowsky space is where / ∂ = ∂ µ γ µ ,ψ = ψ † γ 0 and the matrices γ µ obey the Clifford algebra A representation of such matrices in 2D is If the total length of the lattice is L (in particular we set L = 1) and we have N sites, the discretisation rules are: It is also useful to rescale the fields, obtaining the canonical anticommutation relations Then, the discretised Lagrangian is In order to get the Hamiltonian, we perform a Legendre transformation, which yields Being the Hamiltonian quadratic, it can be explicitly diagonalised. In particular, we can notice it is invariant under translations j → j + 1, so the eigenvectors are of the form (with ν 1 = 0, 1 2 corresponding to periodic or antiperiodic spacial fermions, respectively, and with Greek letters for spinor indices): This relation can be inverted Plugging it into the Hamiltonian (A.15), the diagonalisation reads The energy eigenvalues are [87] ±,k = ± m 2 + sin 2π(k−ν 1 ) where the negative eigenvalues physically represent the annihilation of antiparticles. As noted in [87], in Eq. (A.19), the eigenvalues ±,k are equal to the ±,N −k (this is true for ν 1 = 0, in the antiperiodic case ν 1 = 1/2 the correspondence is k ↔ N − k + 1) and so a doubling problem is present. We simply fix the doubling problem by dividing the final result by two. The correlator of a Dirac field includes both particles and antiparticles: (A.20) and the implicit form of the correlation matrix of the Dirac field is This shows that the spinor indices in the correlation matrix ψ † ψ contain the particle correlation function P lm , and the matrix g m g † l = (1 − A) ml . Using Eq. (A.8), we get Z n (α) = 1 2 Tr log e iπα1 C n + e −iπα1 (1 − C) n , (A. 22) where the 1 2 factor has been inserted to cure of the doubling problem. Using Eq. (A.21), we can now compute C explicitly. The correlators in the eigenstate basis are By plugging the explicit form of M (A.18) into Eq. 
(A.21), we get the explicit form of the correlation matrix. In the massless case we have instead [87] a simpler expression. Moreover, there is an additional subtlety in the choice of N: as noted in [87], for N odd we end up with two copies of the system (due to the doubling problem) with opposite spatial periodicity. In order to have only one fixed boundary condition, the simplest solution is to restrict ourselves to N even.
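The lattice evaluation of the charged moments then reduces to a spectral computation on the subsystem correlation matrix. The sketch below (added here, names ours) implements the structure of Eq. (A.22) literally, including the overall factor 1/2 that the text introduces to cure the fermion doubling of this particular discretisation; conventions for the flux α follow the formula as printed.

```python
import numpy as np

def log_charged_moment(C, n, alpha):
    """log Z_n(alpha) from the eigenvalues of the subsystem correlation matrix C,
    following the structure of Eq. (A.22); the factor 1/2 cures the doubling problem."""
    lam = np.linalg.eigvalsh(C)
    terms = np.exp(1j * np.pi * alpha) * lam**n + np.exp(-1j * np.pi * alpha) * (1 - lam)**n
    return 0.5 * np.sum(np.log(terms))

# Toy usage with a synthetic correlation matrix (Hermitian, eigenvalues in [0, 1]).
rng = np.random.default_rng(0)
vecs = np.linalg.qr(rng.normal(size=(6, 6)))[0]
C = vecs @ np.diag(rng.uniform(0, 1, 6)) @ vecs.T
print(log_charged_moment(C, n=2, alpha=0.3))
```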
A.1 Fermionic negativity
We want now to compute the entanglement negativity for the same system of lattice free fermions. We switch to the Majorana fermion operators c i,α : Any operator can be expressed as a sum of string of Majoranas, in the form We took the string length to be even, which is a constraint for theories conserving the parity fermion number operator. Following [82], we consider the covariance matrix Γ, that can be expressed as σ 2 ⊗ (1 − 2C) ≡ σ 2 ⊗ Γ, with C being the correlation matrix (A.24).
Our total system is bipartite into A, B, and we want to compute the entanglement between two parts of A, which are A 1 , A 2 . The reduced covariance matrix on A takes the block form Under partial transposition, we find the correlation matrices associated to ρ R 1 , ρ R 1 † , which we call Γ ± We also introduce Finally, with the eigenvalues of Γ x and C, which we call ν i and ζ i , respectively we can express the even charged moments of the partial transposed density matrix, as We also need the charged normalisation N 1 , which can be written in terms of the eigenvalues of Γ + , ν + j , as [82] log(N 1 ) = −iα (A.32)
B Flux strength ambiguity
As observed in Ref. [91], Eq. (3.8) is actually ambiguous, since we can add an integer number of 2π flux caused by the gauge field A µ , such that without affecting the monodromy of the fermion fields. The partition function Z k must preserve this symmetry, so the correct form of the partition function is a combination of all the possible inequivalent representations each corresponding to a different value of m. Without losing generality, we suppose to compute the correlation functions of p vertex operators with phases h i , respecting the neutrality condition We call m i the integer addition to the flux, which must respect the neutrality i m i = 0, too. The most general partition function is a combination of all the choices of .
(B.3)
In the end, we want to test our field theoretical predictions against the numerical results for the theory defined on a lattice. This implies that the cutoff behaves as ∝ a, where a = L/N is the lattice spacing, given by the ratio between the total spatial length of the torus, which we set to L = 1 in Appendix A, and the number of lattice sites. This can be seen either by a dimensional argument, or by noting that, if we compute on the lattice the correlation function of two vertex operators, we find ⟨e^{iαφ(0)} e^{-iαφ(a)}⟩ ∼ 1 (B.4) because the two opposite vertex operators almost overlap on the lattice. Field theory would predict, instead where we have done an expansion at the lowest order in a. Comparing the two expressions we get a cutoff ∼ a. Going back to Eq. (B.3), we focus on the cutoff-dependent factor, where we used the neutrality condition Σ_i (h_i + m_i) = 0. In the limit of the cutoff ∼ a → 0, the leading term in (B.3) will be the one minimising the exponent of the cutoff, which is Σ_i (h_i + m_i)².
B.1 Entanglement entropy
In the entanglement entropy calculations, rewriting the partition function in Eq. (3.14) as the neutrality condition of vertex operators imposes a m a +m a = 0.
This factor is maximised by the following choices: Since n ≥ 1, |k| ≤ n−1 2 , |α| ≤ π, we have that | k+α/(2π) n | < 1 2 , thus all the dominant m a 's have to be 0. This justifies retrospectively our calculations, and also predicts a worse agreement with the lattice computations for |α| ≈ π, where the power law suppression with . Thus, when plotting the α dependence of the charged moments, we expect a worse agreement at the edges of the figure (as can be well seen in Figs. 1-2). Similar conclusions were also drawn in [50], where the subleading corrections were found to be comparable to the dominant ones around the edges α = ±π.
B.2 Fermionic negativity
For the negativity in the tripartite geometry, the partition function has, in general, the form for n = n e even. As before, we need to minimise the sum where we already imposed the neutrality condition for the extra fluxes m a . When k+ α 2π ≥ 0 we need to require m 1 = −1, i.e. m 2 = 1, while, in the remaining case, m 1 = 0 (m 2 = 0). The lowest positive value of k we can take is k = ne−1 . This implies sgn(k + α 2π ) = sgn(k), leading to the correct expression: Additionally, in the n = 1 case, we need to consider The quantity is minimised, if |α| ≤ 2π 3 , for the value m 1 = m 2 = 0, while, in the remaining cases, for either (m 1 , m 2 ) = (−sgn(α), 0), (0, −sgn(α)).
C Saddle point approximation in the massless case
In the limit of the cutoff going to 0, the leading order for the alpha dependence of the logarithm of the charged moments is log(Z n (α)) ≈ log(Z n (α = 0)) +
D Analytical continuation of the symmetry resolved entanglement entropy
Here we report the details for the analytical continuation of the symmetry resolved entanglement in the massive fermionic theory for any real values of n. This is the main step to take the replica limit n → 1.
D.1 ν = 2 spin sector
A modular transformation of Eq. (4.10) leads to with ν = 4, since ν = 2. The resummation can be carried out rewriting A k,n with the following identity, [92] e 2mniπτ sin(2π(mx + ny))), (D.2) combined with the quasiperiodicity relation A k,n is rewritten as When Eq. (D.1) is squared, we get 6 terms that we can index schematically as (D.8) We sum in k each addendum in this equation separately, labelling the results as γ ij : Finally, we follow the prescription in Eq. (4.8), which amounts to change the γs as so that the final result is written as The analytic continuation in n of expressions (D.9) can be found since every term in Eqs. (D.6), (D.7), (D.8) can be expressed as a convergent sum of exponentials, for the range of parameters we are interested in. In particular, we can rewrite the coth function with the identity: Using this fact, we can reduce the sum over k in (D.9) to a geometric series with a trivial analytic continuation. We now report the final expressions found using this approach: (D.14) (D.16) We used the following auxiliary expressions: We use again expression (D.4) for Φ, while we need to modify those for γ ij in this spin sector, as following:
E Analytical continuation of moments for the charge imbalance negativity
Here we give explicit analytical continuations for A k in Eq. (5.10), in the spin sectors of interest, ν = 2. Similar results can be obtained also for ν = 3, as done for the entanglement entropy. The techniques used for the resummation are similar to the ones described in Appendix D, the main difference being the fact we need to divide the case k > 0 from k < 0, to handle the sign function; we only report the final expressions for simplicity, after some premises. The expression to resum, after a modular transformation, is: We set ν = 2 =⇒ ν = 4. Using property D.2 and definining r = 2 − 1 , it becomes: Note that this series is well defined and convergent for all values of the parameters we are interesed in, i.e. α ∈ [−π, π], k ∈ [− n−1 2n , n−1 2n ]. Moreover, if we assume a cutoff independent of n and α, then we can reabsorb in the definition of the mass, so we will ignore it from now on. We define auxiliary quantities similar to what has been done in Appendix D: Again, all the terms K i K * j can be written as a sum of exponentials, and this allows to find an analytic continuation for the sum in k. The final expressions are . (E.14) The auxiliary expressions read | 2022-12-15T06:42:30.489Z | 2022-12-14T00:00:00.000 | {
"year": 2022,
"sha1": "a8415409cad79f67a1d9b66e05baf4739e15ab3d",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "a8415409cad79f67a1d9b66e05baf4739e15ab3d",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
267679361 | pes2o/s2orc | v3-fos-license | Signatures of selection in Mulinia lateralis underpinning its rapid adaptation to laboratory conditions
Abstract The dwarf surf clam, Mulinia lateralis, is considered as a model species for bivalves because of its rapid growth and short generation time. Recently, successful breeding of this species for multiple generations in our laboratory revealed its acquisition of adaptive advantages during artificial breeding. In this study, 310 individuals from five different generations were genotyped with 22,196 single nucleotide polymorphisms (SNPs) with the aim of uncovering the genetic basis of their adaptation to laboratory conditions. Results revealed that M. lateralis consistently maintained high genetic diversity across generations, characterized by high observed heterozygosity (H o: 0.2733–0.2934) and low levels of inbreeding (F is: −0.0244–0.0261). Population analysis indicated low levels of genetic differentiation among generations of M. lateralis during artificial breeding (F st <0.05). In total, 316 genomic regions exhibited divergent selection, with 168 regions under positive selection. Furthermore, 227 candidate genes were identified in the positive selection regions, which have functions including growth, stress resistance, and reproduction. Notably, certain selection signatures with significantly higher F st value were detected in genes associated with male reproduction, such as GAL3ST1, IFT88, and TSSK2, which were significantly upregulated during artificial breeding. This suggests a potential role of sperm‐associated genes in the rapid evolutionary response of M. lateralis to selection in laboratory conditions. Overall, our findings highlight the phenotypic and genetic changes, as well as selection signatures, in M. lateralis during artificial breeding. This contributes to understanding their adaptation to laboratory conditions and underscores the potential for using this species to explore the adaptive evolution of bivalves.
| INTRODUCTION
Unraveling the genetic basis of organisms' adaptation is essential for understanding how they acclimate to their environments (McKay & Latta, 2002;Storz, 2005).Experimental evolution, in particular laboratory selection, provides a method for studying adaptive responses in a controlled environment (Sandberg et al., 2017;Swamy & Zhou, 2019).Advances in our understanding of adaptive evolution have been driven by research using organisms amenable to laboratory conditions (LaCroix et al., 2017;Sandberg et al., 2017;Sterken et al., 2015).Typically, these studies focus on organisms with short generation times, such as Saccharomyces cerevisiae and Drosophila melanogaster (Claire et al., 2021;Sterken et al., 2015).For example, polymorphic populations or mixtures of various ancestral genotypes from one or more populations are utilized to establish experimental animal populations (Declerck et al., 2015;Graham et al., 2014).A key aspect of these studies is the application of selection pressure on organisms to enhance overall population adaptation (Sandberg et al., 2014;Vasi et al., 1994).In the case of Escherichia coli, Sandberg reported distinct genetically adaptive strategies in response to frequently switching growth substrates in the laboratory environment, allowing for continuous reproduction while maintaining exponential growth (Sandberg et al., 2017).Many approaches have been pursued to study adaptation (Coop et al., 2010;Tusso et al., 2021).In the rotifer Brachionus plicatilis, undergoing seven reproductive cycles under laboratory conditions, BayeScan (BS) analysis revealed 76 SNPs showing strong signals of selection, while genome-wide association analysis (GWAS) further identified five SNPs associated with two key life-history traits (Tarazona et al., 2019).This life history evolution, facilitating organisms' success in their environment, was linked to a variety of genes involved in diverse biological processes (Sandberg et al., 2017;Tarazona et al., 2019).
Currently, more than 30 mollusk species undergo varying degrees of domestication and are cultivated in hatcheries, encompassing numerous oysters, scallops, mussels, clams, and abalone species (Guo, 2021).The closure of the life cycle in hatchery production inevitably leads to genetic changes, including alterations in genes, genetic variation, and trait inheritance, whether through intentional selection or not (Gjedrem & Rye, 2018;Lowell, 2021).The genomes and transcriptomes of numerous mollusk species have been sequenced, providing rich information on the genetic and molecular mechanisms underlying mollusk development, function, immunity, and physiology, which has improved our understanding of mollusk genetics and adaptation (Abdelrahman et al., 2017;Guo, 2021).
The dwarf surf clam (Mulinia lateralis, 1822) is a small bivalve that is native to the western Atlantic Ocean from the Gulf of St. Lawrence to the Gulf of Mexico (Calabrese, 1969;Walker & Tenore, 1984).M. lateralis usually inhabits sandy and muddy sediments in estuarine and intertidal zones (Calabrese, 1969;Walker & Tenore, 1984).This species is a typical short-generation species, reaching sexual maturity within two months (Calabrese, 1969).Sexually mature individuals are highly fecund, capable of laying up to 3 × 10 5 eggs at a time, and can release gametes multiple times in a short period (Calabrese, 1970;Guo & Allen, 1994).Furthermore, induction spawning and artificial culture of M. lateralis can be easily achieved in the laboratory (Calabrese, 1969;Rhodes et al., 1975).These characteristics make M. lateralis a great potential model species for the study of bivalve biology and evolution.Recently, M. lateralis has been introduced into China, and we have systematically developed optimized culture conditions for this species in our laboratory (Yang et al., 2021(Yang et al., , 2022)).At present, after several generations of successful breeding, M. lateralis has clearly demonstrated its adaptability to laboratory conditions.Nonetheless, the genetic basis of M. lateralis' adaptation to laboratory conditions during artificial breeding is still unknown.
To this end, the present study aimed to investigate the phenotypic and genetic changes in M. lateralis during artificial breeding.
Furthermore, the genomic selection signatures were scanned to identify regions associated with variations in adaptation to laboratory conditions during artificial breeding.Overall, our findings provide insights into the genetic basis of M. lateralis' adaptation to laboratory conditions during artificial breeding and reveal the potential of using this species to explore the adaptive evolution of bivalves.
| Clam culture, breeding, and collection
In 2017, a total of 200 M. lateralis individuals were introduced from the USA and then preserved in the MOE Key Laboratory of Marine Genetics and Breeding, Ocean University of China (Qingdao, China).
The initial samples introduced were considered as the germplasm resource's first generation (G0). Adults from the G0 population were cultured to maturity in a recirculating system (temperature 21–22°C and salinity 27–28 psu). The seawater was pretreated with sand, activated carbon, and absorbent cotton, and then used as filtered seawater (FSW). A habitat substrate for M. lateralis consisted of a 3 cm layer of sand with particle sizes ranging from 0.25 to 0.5 mm. Some microalgal species, including Isochrysis galbana, Nitzschia closterium, Chaetoceros muelleri, Platymonas helgolandica, and Chlorella pyrenoidesa, were selected as the diet for M. lateralis, purchased from Jianyang Biological Technology Ltd. (Dalian, China). Feeding occurred three times a day, providing microalgae at a concentration of 5–10 × 10⁴ cells/mL, with a stocking density ranging from 600 to 800 individuals/m².
According to Figure 1a,b, mature individuals were selected as the foundation of the offspring population (G1), and each individual was independently subjected to thermal stimulation for spawning induction. After collecting eggs from 87 females and sperm from 93 males, they were thoroughly mixed at a 3:1 sperm-to-egg ratio to maximize the number of mature individuals contributing to offspring production. The culture of the offspring followed the methodology described in our previous studies (Yang et al., 2021, 2022). In brief, zygotes were transferred to 15-L aquariums (30 cm × 25 cm × 20 cm) at a density of <50 eggs/mL and a temperature of 21–22°C for embryo incubation. After 14 h, zygotes rapidly developed into D-shaped larvae, which were then collected and transferred to clean 15-L aquariums filled with FSW.
Thereafter, D-shaped larvae progressed through stages including larval (1-10 days post-fertilization (dpf)), metamorphosis (11-20 dpf), and grow-out (21-50 dpf).In the meanwhile, adequate aeration of the aquariums ensured saturated oxygen concentration, and seawater was fully renewed every day with FSW at a temperature of 21-22°C.Aquariums were replaced every two days and sterilized with a 0.2% potassium permanganate solution.During this process, larvae were fed microalgae three times per day.At the larval stage, we initiated feeding I. galbana and gradually incorporated P. helgolandica at a concentration of 2.4 × 10 3 cells/mL −1 during the D-shaped stage, increasing to 12 × 10 3 cells/mL −1 before the metamorphic stage.Larval stocking density remained less than 7 larvae/ mL −1 .At the metamorphic stage, the diet transitioned to a mixture of I. galbana, P. helgolandica, and Nitzschia Closterium (1:1:1) at a concentration of 3-4 × 10 4 cells/mL −1 , and the larvae were cultured at a stocking density of less than 5 larvae/mL −1 .At the growth-out stage, due to space constraints, some individuals were randomly selected and transferred to larger aquariums (1 m × 1 m × 0.5 m) with a 3 cm sandy substrate for benthic culture.They were fed a mixture of C. muelleri and C. pyrenoidesa (1:1) at a concentration of 5-10 × 10 4 cells/mL −1 , with a stocking density of 600-800 individuals/m 2 .Three aquarium replicates were conducted using a completely randomized design.Subsequently, adults from the G1 population were cultured to maturity for the next generation.This cycle allowed the successful establishment of offspring from different generations in our laboratory (Figure 1b).In the meanwhile, mature individuals from different generations were collected, and various tissues, including the gill, mantle, muscle, gonad, and foot, were quickly dissected and stored at −80°C for subsequent experiments.
| Phenotypic trait statistics
Five traits, including fertilization percentage, hatching percentage, survival percentage, metamorphosis percentage, and growth rate, were calculated to evaluate the performance of different generations of M. lateralis throughout the life cycle (Figure 1c).For the embryonic stage, fertilization percentage was calculated as (the number of zygotes)/(the number of eggs) × 100%, and hatching percentage as (the number of D-shaped larvae)/(the number of zygotes) × 100%.
During the larval stage, survival percentage was determined by (the number of larvae at 10 dpf)/(the number of larvae at 1 dpf) × 100%, and the growth rate (μm/d) by (shell length of larvae at 10 dpf − shell length of larvae at 1 dpf)/10 d.At the metamorphosis stage, the metamorphosis percentage was calculated using (the number of juveniles at 20 dpf)/(the number of creeping larvae at 11 dpf) × 100%, and the growth rate (μm/d) as (shell length of juveniles at 20 dpf − shell length of creeping larvae at 11 dpf)/10 d.For the grow-out stage, survival percentage was assessed by (the number of adults at 50 dpf)/(the number of juveniles at 21 dpf) × 100%, and the growth rate (mm/d) = (shell length of adults at 50 dpf − shell length of juveniles at 21 dpf)/30 d.Additionally, 30 larvae/juveniles/adults were measured to calculate the growth rate for each stage.These traits were calculated based on three replicates and presented as mean ± standard deviation (S.D.).Data were analyzed using IBM SPSS Statistics23 and depicted graphically using GraphPad8.0.A one-way analysis of variance (ANOVA) followed by a post hoc comparison of means based on Tukey's test was performed to determine significant differences among generations (p < 0.05).Percentages were log 10 -transformed to obtain normality and homogeneity of variance before ANOVA.
The normality of data was confirmed by Kolmogorov-Smirnov's test and the homogeneity of variances by Levene's test.
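For illustration only, the following Python sketch mirrors the trait statistics described above; it is not the authors' SPSS/GraphPad workflow, and the survival values, replicate labels, and shell lengths are hypothetical. It log10-transforms percentage data, checks normality and variance homogeneity, and runs a one-way ANOVA with Tukey's post hoc test.

import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical larval survival percentages (3 replicates per generation).
survival = {"G0": [38.1, 40.2, 36.5], "G2": [41.0, 43.8, 42.1], "G6": [45.9, 47.2, 52.6]}

# Growth rate as defined in the text: (shell length at 10 dpf - shell length at 1 dpf) / 10 d.
def growth_rate(len_10dpf_um, len_1dpf_um):
    return (len_10dpf_um - len_1dpf_um) / 10.0   # micrometres per day

# log10-transform percentages before ANOVA, as in the text.
groups = {g: np.log10(v) for g, v in survival.items()}
values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])

# Normality (Kolmogorov-Smirnov against a standard normal) and homogeneity of variance (Levene).
print(stats.kstest((values - values.mean()) / values.std(ddof=1), "norm"))
print(stats.levene(*groups.values()))

# One-way ANOVA followed by Tukey's post hoc pairwise comparison (alpha = 0.05).
print(stats.f_oneway(*groups.values()))
print(pairwise_tukeyhsd(values, labels, alpha=0.05))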
| DNA extraction and sequencing
Genomic DNA was extracted from the gills of M. lateralis by the TIANamp Marine Animals DNA Kit (No. DP324, TIANGEN, China) following the manufacturer's protocol.Subsequently, DNA was purified using the Column DNA purification kit (No. B610367, Sangon Biotech, China).The quality, purity, and concentration of DNA were assessed through electrophoresis on 1.5% agarose gels and NanoDrop Lite (Thermo Fisher, Waltham, MA, USA).IsoRAD sequencing libraries were then constructed using the method developed by Wang et al. (2016).Sequencing was performed on the Illumina Hiseq2000 platform (Novogene, China), generating 150 bp paired-end reads for subsequent analyses.
| Genotyping and SNP annotation
Applying three filtering steps, raw reads were processed by FastQC (https://github.com/s-andrews/FastQC) to obtain high-quality, clean reads. The filtering steps included: (i) removing reads with ambiguous base calls (N); (ii) removing reads with long homopolymer regions (>10 bp); and (iii) removing reads with excessively low-quality bases (>20% of bases with a quality score <10) (Andrews, 2010). RADtyping (https://github.com/jinzhuangdou/RADtyping), renowned for its general applicability to various forms of RAD-seq data processing, was used to discover and genotype SNPs from the clean reads (Fu et al., 2013). In short, RAD tags were extracted from the genome of M. lateralis, serving as a reference, based on the recognition sites of the BsaXI enzyme.
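A minimal sketch of the three read-filtering criteria listed above (ambiguous bases, homopolymer runs longer than 10 bp, and more than 20% of bases below Q10); it assumes standard FASTQ with Phred+33 quality encoding and is not the FastQC/RADtyping code itself. File names in the example are hypothetical.

import re

def passes_filters(seq: str, qual: str) -> bool:
    """Return True if a read survives the three filters described in the text."""
    if "N" in seq:                                     # (i) ambiguous base calls
        return False
    if re.search(r"(.)\1{10,}", seq):                  # (ii) homopolymer runs > 10 bp
        return False
    low_q = sum(1 for c in qual if ord(c) - 33 < 10)   # (iii) bases with Phred quality < 10
    return low_q <= 0.20 * len(qual)

def filter_fastq(path_in, path_out):
    """Stream a FASTQ file and keep only reads passing all filters."""
    with open(path_in) as fin, open(path_out, "w") as fout:
        while True:
            record = [fin.readline() for _ in range(4)]
            if not record[0]:
                break
            seq, qual = record[1].strip(), record[3].strip()
            if passes_filters(seq, qual):
                fout.writelines(record)

# Example (hypothetical file names):
# filter_fastq("raw_reads.fastq", "clean_reads.fastq")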
| Genetic diversity, population structure, and linkage disequilibrium (LD)
Firstly, we evaluated the changes in genetic diversity of M. lateralis across different generations. Minor allele frequency (MAF), observed heterozygosity (H o ), expected heterozygosity (H e ), and inbreeding coefficient (F is ) were calculated using Plink v1.9. Nucleotide diversity (π), fixation index (F st ), and Tajima's D were evaluated through VCFtools (https://vcftools.github.io/) and averaged in 40 kb windows (Danecek et al., 2011). Secondly, the genetic structure of M. lateralis was assessed through principal component analysis (PCA) and a phylogenetic tree. PCA was conducted using GCTA (Yang et al., 2011), and the first two-dimensional coordinates were plotted using the ggplot2 package (Villanueva & Chen, 2019). A neighbor-joining (NJ) tree was constructed using IQ-TREE (Minh et al., 2020) with a bootstrap value of 1000, and the visualization was performed using iTOL (https://itol.embl.de/). In addition, linkage disequilibrium (LD) decay was analyzed by PopLDdecay with a maximum distance of 50 kb, and the average r² of pairwise markers was calculated (Zhang et al., 2019).
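For readers unfamiliar with these summary statistics, a small sketch shows how H o, H e, F is, and MAF can be computed from a biallelic genotype matrix (0/1/2 = count of one allele); the matrix below is made up for illustration, and the study itself relied on Plink and VCFtools rather than custom code.

import numpy as np

# Rows = individuals, columns = SNPs; entries are allele counts (0, 1, 2).
geno = np.array([[0, 1, 2, 1],
                 [1, 1, 0, 2],
                 [0, 2, 1, 1],
                 [1, 0, 1, 0]])

p = geno.mean(axis=0) / 2.0                 # frequency of the counted allele per SNP
maf = np.minimum(p, 1 - p)                  # minor allele frequency
ho = (geno == 1).mean(axis=0)               # observed heterozygosity per SNP
he = 2 * p * (1 - p)                        # expected heterozygosity under Hardy-Weinberg
with np.errstate(divide="ignore", invalid="ignore"):
    fis = np.where(he > 0, 1 - ho / he, 0)  # inbreeding coefficient per SNP

print("mean MAF:", maf.mean())
print("mean Ho :", ho.mean())
print("mean He :", he.mean())
print("mean Fis:", fis.mean())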
| Selection signature analysis
To detect selection signatures in M. lateralis during artificial breeding, we scanned the genome using the ratio of θ π and the fixation index (F st ) between the initial population (G0) and offspring of advanced generation (G6).The estimation of θπ and F st was conducted with sliding windows of 40 kb that had 20 kb overlap between adjacent windows.Subsequently, the ratio of θπ values at the same chromosomal location was computed.Regions under selection were identified using an empirical procedure (Li et al., 2018).Briefly, candidate genomic regions with significantly biased θπ ratios (top 5% largest and smallest) and significantly high F st values (top 5% largest) of the empirical distribution were regarded as regions with strong selection signatures across the genome.Among them, regions with significantly biased θπ ratios (top 5% largest) and significantly high F st values (top 5% largest) were categorized as exhibiting strong positive selection signatures, whereas regions with significantly biased θπ ratios (top 5% smallest) and significantly high F st values (top 5% largest) were defined as regions with strong negative selection signals.The statistical significance was determined using permutation tests through random replacement of samples in population pairs (p < 0.05).
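The window-based scan can be summarised with the following sketch, which takes per-window π for G0 and G6 and per-window F st (for example, as produced by VCFtools with 40 kb windows and 20 kb steps) and flags windows falling in the empirical top-5% tails used in the text. The arrays are random placeholders, and the orientation of the θπ ratio (G0 over G6) is an assumption, since the text does not state the order.

import numpy as np

# Per-window statistics; in practice these would be read from VCFtools window outputs.
rng = np.random.default_rng(0)
pi_g0 = rng.uniform(1e-5, 3e-5, size=5000)
pi_g6 = rng.uniform(1e-5, 3e-5, size=5000)
fst = rng.uniform(0.0, 0.2, size=5000)

log2_ratio = np.log2(pi_g0 / pi_g6)   # theta-pi ratio between generations (order assumed here)

# Empirical 5% cutoffs, as in the text.
hi_ratio = np.quantile(log2_ratio, 0.95)
lo_ratio = np.quantile(log2_ratio, 0.05)
hi_fst = np.quantile(fst, 0.95)

# Windows in the top-5% tails of both statistics, mirroring the classification in the text.
positive = (log2_ratio > hi_ratio) & (fst > hi_fst)
negative = (log2_ratio < lo_ratio) & (fst > hi_fst)

print("candidate positive-selection windows:", positive.sum())
print("candidate negative-selection windows:", negative.sum())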
| Candidate genes and functional analysis
According to genome annotation, candidate genes under selection were defined as within or overlapped with regions exhibiting selection signatures.Transcriptomic data generated from gonad tissues collected from the G0 and G6 generations from the same population used in the genetic analysis were used to explore the expression pattern of candidate genes associated with reproduction during artificial breeding.Specifically, sexually mature individuals, including males and females, were randomly selected from both generations and cultured under identical laboratory conditions.Total RNA was isolated based on the conventional guanidinium isothiocyanate method (Hu et al., 2006).Five RNA samples from each sex per generation were used for transcriptome construction.RNA sequencing libraries were constructed using an RNA-seq library kit (Vazyme, China), and RNA-seq data was generated using the high-throughput Illumina HiSeq2000 platform.After quality control, high-quality reads were mapped to the M. lateralis genome using STAR (Dobin et al., 2013), and the expression level was calculated in transcripts per million (TPM) using the RNA-Seq by Expectation Maximization (RSEM) method.Meanwhile, the transcriptome data for G6 has been previously published (PRJNA862073; Li et al., 2022), while the transcriptome data of the G0 generation will be released with the genome article (as shown in Supplementary Sheet 6).Three transcriptome samples from each sex per generation were used for differential expression analysis.The expression profiles of candidate genes were normalized and represented as relative expression (TPM candidate genes /TPM EF1A ), with elongation factor 1A (EF1A) serving as an internal reference gene widely used in bivalve (Volland et al., 2017;Zhao et al., 2018).Significant differences between G0 and G6 generations were determined using independent-sample ttests (p < 0.05, n = 3).
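A brief sketch of the expression comparison described above, with relative expression taken as the candidate gene's TPM over the TPM of the EF1A reference and compared between G0 and G6 with an independent-sample t-test; the TPM values below are invented for illustration.

from scipy import stats

# Hypothetical TPM values for one candidate gene and the EF1A reference (3 replicates each).
tpm_gene = {"G0": [12.1, 10.8, 11.5], "G6": [25.3, 22.9, 27.4]}
tpm_ef1a = {"G0": [810.0, 795.2, 802.4], "G6": [790.5, 805.1, 799.8]}

rel = {g: [x / y for x, y in zip(tpm_gene[g], tpm_ef1a[g])] for g in ("G0", "G6")}
t, p = stats.ttest_ind(rel["G0"], rel["G6"])
print(f"t = {t:.3f}, p = {p:.4f}")   # p < 0.05 -> significant difference between generations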
To gain insights into the biological significance of selection signatures, we performed gene ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) functional enrichment analysis using OmicShare (https://www.omicshare.com/tools/). GO terms and KEGG pathways with p-values less than 0.05 were considered significantly enriched.
| Phenotypic traits of M. lateralis
As shown in Figure 1c, the phenotypic traits of M. lateralis revealed the performance of different generations throughout the life cycle.During the embryonic stage, the fertilization and hatching percentages of M. lateralis exhibited a tendency to increase with successive laboratory reproductions, with fertilization percentages ranging from 95.18 ± 1.47% to 96.86 ± 2.03% and hatching percentages ranging from 73.54 ± 7.88% to 76.61 ± 6.73%.
The ANOVA test revealed no significant differences between generations (p > 0.05). At the larval stage, significant differences were observed in larval survival and growth among different generations (p < 0.05), with the advanced generation (G6) displaying the highest survival percentage (47.22 ± 4.81%) and growth rate (15.71 ± 3.13 μm/d), respectively. Similar trends were observed in the metamorphosis and grow-out stages of M. lateralis, with the advanced generation (G6) exhibiting the highest metamorphosis percentages (53.46 ± 4.57%) and survival percentages (93.72 ± 2.26%). In addition, high growth rates were found in the metamorphosis (19.84 ± 2.56 to 23.54 ± 7.58 μm/d) and grow-out stages (0.21 ± 0.03 to 0.23 ± 0.03 mm/d) across different generations, but no significant differences were detected. Overall, M. lateralis demonstrated the ability to maintain stable phenotypic traits during artificial breeding in the laboratory.
| Sequencing and genotyping of M. lateralis
After the sequencing of libraries, a total of 821,047,756 raw reads were generated from 310 samples (Supplementary sheet 1).After quality filtering, 804,926,229 clean reads were retained, with an average effective mapping rate of 98.85%.The data showed high quality scores, with averages of 97.57% and 94.06% for Q20 and Q30, respectively.The GC content was stable, ranging from 45.37% to 48.03%.On average, 33.75% of reads containing RAD tags were uniquely mapped to the reference genome of M. lateralis, reaching an average depth of approximately 68× per sample (Supplementary sheet 2).In total, 34,901 SNPs were genotyped in these samples.
The distribution of SNPs on M. lateralis chromosomes is illustrated in Figure S1a, with an average of 1169 SNPs distributed on each chromosome.The largest number of SNPs was observed on Chr1 (1894), while Chr19 had 324 SNPs (Figure S1b).Among all SNPs, 36.28% were located in intergenic regions, 35.23% in intragenic regions (24.42% in exons and 10.81% in introns), and 0.61% in UTR regions (0.22% in 5' UTR and 0.39% in 3' UTR) (Table S1).
| Genetic changes of M. lateralis during artificial breeding
The dataset comprising 22,196 SNPs was used to estimate the genetic diversity of M. lateralis (Table 1).MAF exhibited variability among generations, with mean MAF ranging from 0.1922 (G0) to 0.2011 (G6).Examining the distribution of MAF (Figure 2a) revealed a lower proportion of rare SNPs (0.05 < MAF <0.1) in the G6 generation (16.45%) compared to the G0 generation (18.12%).
Across generations, average H o ranged from 0.2733 to 0.2934, while mean H e ranged from 0.2842 to 0.2908 (Table 1).Notably, the H o and H e values of the G1, G2, and G6 populations were nearly identical (0.01 difference), while the G0 and G4 populations exhibited relatively larger differences between H o and H e values (0.0168 and 0.0138), respectively.Patterns revealed by heterozygosity analysis were supported by inbreeding analysis (Figure 2b, Table 1).F is values varied from −0.0244 to 0.0261 for each generation, with the G0 and G4 populations showing higher F is values of 0.0128 and 0.0261, respectively.As shown in Figure 2c and Table 1, all generations exhibited equal levels of π, with mean values ranging from 1.91 × 10 −5 to 2.01 × 10 −5 .In summary, M. lateralis consistently maintained high levels of genetic diversity during artificial breeding.
To explore the genetic relationships among different generations of M. lateralis, we conducted principal component analysis (PCA) and generated a neighbor-joining phylogenetic tree.The PCA results depicted different clusters for the initial population and the offspring of different generations (Figure 2d).Principal Component 1 (PC1) explained 5.19% of the total variance, separating G0, G1, and G2 from other generations, while PC2 explained 3.69% of the total variance, describing the genetic differentiation of G6 from other generations.Notably, all generations of M. lateralis were collectively clustered in the same class without substantial separation.The NJ tree revealed that M. lateralis individuals from different generations clustered into multiple subclades, indicating close genetic relatedness and a common ancestry (Figure 2e).Tajima's D is a widely used metric for inferring the types of variations in DNA sequences.As illustrated in Figure S3, Tajima's D values for M. lateralis of different generations generally exceeded 0, with respective average values of 0.834, 1.097, 1.042, 1.034, and 1.139, suggesting that M. lateralis was under selection.Simultaneously, an interesting finding reveals that 258 regions exhibit Tajima's D values exceeding 0 in the G0 generation, whereas these values drop below 0 in the G6 generation (Figure S4).Nevertheless, the quantity of these regions is notably less than those with values exceeding 0.
Pairwise F st indices between generations of M. lateralis were estimated to infer the genetic distance between generations.The levels of genetic differentiation among generations were low, with average F st values of 0.0192 (Table 2).The highest genetic distance (0.0371) was observed between G6 and the earlier generations (G1), while the lowest genetic distance (0.0054) was observed between G1 and G0, respectively.The average r 2 values of LD in different generations decreased with increasing marker distance between pairwise SNPs, showing a rapidly declining trend with the first 50 kb (Figure 2f).Overall, genetic differentiation did occur in M. lateralis during artificial breeding, but it remained at a relatively low level.
| Detection of selection signatures involved in adaptation to laboratory conditions during artificial breeding
To detect the genomic signature of selection during artificial breeding, we scanned the genome of offspring by comparing the advanced generation (G6) with the initial population (G0).Using the top 5% cutoffs for the θπ ratio and F st values (log 2 (θπ ratio) > 1.030 or log 2 θπ ratio < −1.054 and F st >0.106), we identified 168 candidate regions under positive selection and 148 candidate regions under negative selection (Figure 3a, Supplementary sheet 3).In addition, these selected regions were unevenly distributed in the genome of M. lateralis, with chromosome 3 harboring more regions with selection signatures (48), including the region with the highest F st value (0.3929) (Figure S2, Supplementary sheet 3).
Furthermore, 227 candidate genes were identified in positive selection regions, while 196 candidate genes were identified in negative selection regions (Supplementary sheet 3).Several important genes were identified in these regions, including GAL3ST1, IFT88, TSSK2, MED1, CDC123, MKS1, THADA, DNAJB5, DNAH2, and CES5A, which functionally involve a variety of biological processes.Of particular interest is the candidate gene (GAL3ST1) located in the region with the highest F st value (F st = 0.3929) (Figure 3b), which was associated with spermatogenesis during reproduction.Similar phenomena were also observed in male reproduction-related genes (IFT88 and TSSK2), which were identified in regions with high F st values (Figure 3c,d).Moreover, we found that the variation in allele frequencies of these genes was a continuous, long-term process during artificial breeding and eventually exhibited highly positive-selected signatures in these regions.Transcriptomic data indicated that the expression of these selected genes was affected during artificial breeding, as evidenced by significant changes in expression levels.As shown in Figure 3e-g, GAL3ST1, IFT88, and TSSK2 were highly expressed in gonad tissues of the advanced generation compared to the initial population (p < 0.05).
To further reveal the genetic basis of the adaptation of M. lateralis, positively selected genes were annotated in multiple categories of GO (Figure 4a, Supplementary sheet 4). The categories most significant in the "biological process" principal category were "snoRNA …".

| DISCUSSION

In the present study, all generations of M. lateralis were cultured under similar laboratory conditions, aiming to minimize environmental differences that could drive adaptive change between generations. Overall, we achieved a complete life cycle culture of the clam in the laboratory and maintained high reproductive performance during artificial breeding. This also confirms the potential of M. lateralis as a model bivalve, which is consistent with previous reports (Calabrese, 1969; Rhodes et al., 1975). Simultaneously, adaptive advantages in phenotypic traits were observed in subsequent M. lateralis generations, such as gradual increases in fertilization and larval survival, providing evidence of the species' adaptation to laboratory conditions.
In general, higher genetic variability within an experimental population enhances the likelihood of individuals possessing alleles better adapted to the environment, enabling them to survive, transmit favorable genetic characteristics to their offspring, and gradually undergo adaptive evolution (D'ambrosio et al., 2019;Durland et al., 2021).However, for experimental populations reared under laboratory conditions, artificial breeding usually contributes to a significant reduction in genetic variability, posing a threat to long-term genetic progress as well as diminishing the adaptive capacities of populations (Felsenstein, 1965).For example, in a study on the drone fly (Eristalis tenax L.), the fourth and eighth generations of laboratory populations showed a severe lack of genetic diversity compared to natural populations, accompanied by phenotypic divergence (i.e.wing traits and abdominal color patterns) (Francuski et al., 2014).In contrast, we did not observe a similar phenomenon in the artificial breeding of M. lateralis, as offspring consistently maintained high genetic diversity.
In a previous study, the loss of genetic variability in laboratory organisms was primarily due to the founder effect, followed by inbreeding (Francuski et al., 2014;Vlachos & Kofler, 2019).In our study, a strategic approach was employed by selecting more individuals from the germplasm population of M. lateralis as the initial population.This deliberate action aimed to increase the number of founders, thereby expanding the pool of genetic variations and overcoming the founder effect to some extent.In addition, during the artificial breeding of M. lateralis, offspring were generated from the initial population using random mating, without the imposition of deliberate artificial selection for certain traits.This approach effectively decreased the likelihood of inbreeding, a conclusion supported by the observed high heterozygosity and low levels of inbreeding across different generations of M. lateralis.
Nonetheless, the genetic structure of M. lateralis during artificial breeding exhibited dynamism rather than static constancy.
Wherein, PCA analysis showed no substantial separation among the offspring of M. lateralis, yet there was a discernible tendency for the offspring of the advanced generation to gradually deviate. This observed deviation might be attributed to selection, which often occurs in shellfish during artificial breeding (Guo et al., 2018; Langdon et al., 2003; Plough, 2012). In our study, Tajima's D results supported this hypothesis, as the values for M. lateralis of different generations generally exceeded 0. Selection, being one of the driving forces, plays an important role in the adaptive evolution of organisms. Individuals with certain genotypes exhibit great adaptability to their given environment, and their offspring show better population reproduction under natural selection, thereby influencing the overall genetic structure of the population (Burny et al., 2021; Langmüller et al., 2021; Neher, 2013). Genetic drift is also one of the significant driving forces behind evolution, leading to the rapid reduction of genetic diversity in small populations and thereby swiftly altering their genetic structure (Lynch et al., 2016; Masel, 2011). M. lateralis was initially introduced to our laboratory from the USA with a relatively small initial population. Subsequently, it underwent long-term cultivation in closed conditions, forming a relatively closed and small population.
Furthermore, owing to the prevalent "sweepstakes effects" observed in bivalves (Hedgecock, 2011), the genetic drift observed in M. lateralis is entirely plausible, further confirming the results obtained from the PCA.Simultaneously, we noted consistently low genetic differentiation between generations of M. lateralis (F st <0.05), and the genetic diversity of the offspring exhibited no significant decline.The marginal loss of genetic diversity and the minor fluctuations in F st values across populations imply that, in comparison to selection, genetic drift played a relatively minor role (Furlan et al., 2012;Matos et al., 2015;Tarazona et al., 2019).
In addition, we also observed an interesting finding that 258 regions exhibited Tajima's D values exceeding 0 in the G0 generation, whereas these values dropped below 0 in the G6 generation.
Nevertheless, the quantity of these regions was notably less than those with values exceeding 0, indicating the presence of genetic drift in the offspring, but at a weak level.Hence, both selection and genetic drift are pivotal in the adaptation of M. lateralis to laboratory conditions, with selection exerting a predominant influence.In summary, genetic differentiation arising from both selection and drift occurred in the artificial breeding process of M.
lateralis, though at a relatively modest level.
Although M. lateralis populations did not exhibit obvious genetic differentiation among generations at the level of the entire genome (F st <0.05), significant selection signatures were observed in specific genomic regions.These findings were not unexpected, considering the consistent imposition of laboratory conditions throughout the breeding process of M. lateralis populations.
Similar selection signatures have been reported in laboratory populations of other organisms, such as B. plicatilis (Tarazona et al., 2019), Daphnia magna (Orsini et al., 2012), and D. melanogaster (Claire et al., 2021). For instance, in the case of B. plicatilis, researchers reported genomic signatures indicating rapid phenotypic divergence in laboratory populations, supporting the evolution of divergent life-history traits over a short time span in rotifer populations (Tarazona et al., 2019). Nonetheless, the genetic variants crucial for adaptation to different environments are not fully shared between these studies. To the best of our knowledge, recent studies involving bivalves have identified specific genomic signatures of environmental adaptations, with limited focus on laboratory cultivation (Abdelrahman et al., 2017; Durland et al., 2021; Hornick & Plough, 2019; Li et al., 2021; Lim et al., 2021; Scobeyeva et al., 2021; Zhong et al., 2016). In the present study, 316 candidate regions were identified in the offspring of the advanced generation under selection using the top 5% cutoffs of the θπ ratio and F st values, of which 168 showed significant positive selection. Considering that these genomic signatures enable this species to adapt to laboratory culture, our findings can serve as a foundational platform for in-depth studies concentrating on bivalve adaptation.
Adaptation is a complex process that helps organisms thrive in an environment and is often associated with various biological processes. As organisms adapt to specific environments, their growth and metabolism typically undergo great changes due to variations in environmental conditions (Teletchea, 2016). In the present study, several genes associated with growth and metabolism, such as EGF1, MED1, PHYH, and CDC123, exhibited significant positive selection signatures in M. lateralis during artificial breeding. Furthermore, several immune-related genes, including BIRC2, CDC20, MKS1, etc., showed strong signals of positive selection. Generally, experimental organisms are cultured in laboratory conditions, subjected to the constraints of limited spaces and high density. These conditions frequently exert pressure on their immune response and may even lead to the occurrence of large-scale diseases (Lin et al., 2018).
Fortunately, during artificial breeding, M. lateralis had high survival rates without outbreaks of disease.
Improving reproduction in a breeding population has been recognized as a key factor for optimizing offspring production (Fox & Czesak, 2000;Izquierdo et al., 2001;Lind et al., 2019).In our study, we identified significant selection signatures in the genomic regions of M. lateralis associated with reproduction, especially in the regions of male reproduction-related genes (such as GAL3ST1, IFT88, and TSSK2), showing higher F st values.Of particular interest is the candidate gene (GAL3ST1) located in the region with the highest F st value (F st = 0.3929), which encodes a protein known to be involved in spermatogenesis (Suzuki et al., 2010).
IFT88 and TSSK2 are genes associated with spermatogenesis (Yap et al., 2022;Zhang et al., 2020).Furthermore, we observed an increase in the expression of GAL3ST1, IFT88, and TSSK2 during breeding in gonad tissues of advanced-generation animals compared to the initial population (p < 0.05).In addition, the allele frequency in these genes showed gradual changes between generations with artificial breeding.Besides, we also significantly identified several genomic signatures that are localized to functional genes (e.g.DNAH2 and CES5A) responsible for determining sperm morphology and capacitation.Specifically, DNAH2 is a gene associated with various morphological abnormalities of sperm flagella (Li et al., 2019), and CES5A is required for sperm capacitation and male fertility (Ru et al., 2015).Previous studies have shown that male traits typically exhibit substantial additive genetic variance and rapid evolutionary responses to selection (Evans & Simmons, 2008), which have been widely confirmed in vertebrates, such as humans (Swanson & Vacquier, 2002), mice (Vicens et al., 2014), and birds (Kleven et al., 2009).Consequently, we speculated that significant selection signatures in male genomic regions of M. lateralis may be related to the rapid selective response in sperm-associated genes during laboratory breeding.
One possible explanation for this phenomenon is that males with the most abundant and active sperm may be more successful reproductively during mass spawning of M. lateralis, where sperm-associated genes play a key functional role in regulating these processes.
The field of experimental evolution is expanding, with an increasing number of model organisms amenable to experimental culture and manipulation in a laboratory environment.In the present study, we have provided a detailed elucidation of the genetic basis of adaptation of M. lateralis to laboratory conditions during artificial breeding based on the long-term breeding practices of this species.
Our findings exhibit the potential of using M. lateralis as a valuable model for exploring the adaptive evolution of bivalves. However, laboratory conditions are defined by a number of various and interacting factors, including temperature, stocking density, diet, etc. In the present study, it proves challenging to distinguish the individual effect of each factor, hindering our ability to delve into the specific mechanisms underlying adaptation to any particular environmental condition. Thus, further research is needed to gain deeper insights into the adaptations of M. lateralis to laboratory conditions in the future breeding program.
FIGURE 1 Reproduction cycle (a), breeding schedule (b), and phenotypic traits (c) of Mulinia lateralis populations. The gray square represents the population size of each generation. The ellipse represents the number of individuals contributing to the next generation (red for females and yellow for males). G0 refers to the initial population, while G1–G6 refers to the offspring populations. Different colors indicate the M. lateralis population of different generations.
FIGURE 2 Genetic parameters of the initial population (G0) and offspring of different generations (G1–G6) in Mulinia lateralis. (a) Distribution of minor allele frequency (MAF); (b, c) Inbreeding coefficient and nucleotide diversity (40 kb windows); (d) Principal component analysis (PCA); (e) Neighbor-joining tree; (f) Linkage disequilibrium decay (LD decay). Different colors indicate the M. lateralis population of different generations.
TABLE 2 The pairwise F st among the different generations of Mulinia lateralis.
5 | CONCLUSION
This study reports the phenotypic and genetic changes as well as selection signatures of M. lateralis populations of different generations reared under laboratory conditions. The results demonstrate that M. lateralis could consistently maintain high genetic diversity during artificial breeding, with relatively low levels of genetic differentiation among generations. Several genomic regions showing significant selection signatures were identified, harboring genes associated with key adaptive traits throughout the life cycle of M. lateralis. Overall, our findings provide insights into the genetic basis for adaptation of M. lateralis to laboratory culture during artificial breeding and reveal the potential of using this species to explore the adaptive evolution of bivalves. | 2024-02-16T05:08:40.352Z | 2024-02-01T00:00:00.000 | {
"year": 2024,
"sha1": "5f249f5cefc16f413f38cbf6aab570c69b09c572",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/eva.13657",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "5f249f5cefc16f413f38cbf6aab570c69b09c572",
"s2fieldsofstudy": [
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
248119823 | pes2o/s2orc | v3-fos-license | The closest lineage of Archaeplastida is revealed by phylogenomics analyses that include Microheliella maris
By clarifying the phylogenetic positions of 'orphan' protists (unicellular micro-eukaryotes with no affinity to extant lineages), we may uncover the novel affiliation between two (or more) major lineages in eukaryotes. Microheliella maris was an orphan protist, which failed to be placed within the previously described lineages by pioneering phylogenetic analyses. In this study, we analysed a 319-gene alignment and demonstrated that M. maris represents a basal lineage of one of the major eukaryotic lineages, Cryptista. We here propose a new clade name 'Pancryptista' for Cryptista plus M. maris. The 319-gene analyses also indicated that M. maris is a key taxon to recover the monophyly of Archaeplastida and the sister relationship between Archaeplastida and Pancryptista, which is collectively called 'CAM clade' here. Significantly, Cryptophyceae tend to be attracted to Rhodophyta depending on the taxon sampling (ex., in the absence of M. maris and Rhodelphidia) and the particular phylogenetic 'signal' most likely hindered the stable recovery of the monophyly of Archaeplastida in previous studies.
Background
Our understanding of the evolutionary relationship among major eukaryotic groups has been progressed constantly. The foundation of the tree of eukaryotes was developed initially based on the combination of morphological characteristics (including those on the ultrastructural level) and molecular phylogenetic analyses of a single or few marker genes [1][2][3]. In recent years, 'phylogenomic' analyses-phylogenetic analyses of large-scale multigene alignments, particularly those comprising hundreds of genes-were often conducted to reconstruct deep splits in the tree of eukaryotes with high statistical support [4][5][6][7]. For instance, recent phylogenomic analyses have constantly reconstructed the clade of stramenopiles, Alveolata, and Rhizaria (SAR clade) [8], that of Opisthokonta, Amoebozoa, Breviatea and Apusomonadida (Amorphea) [9], and that of Collodictyonidae, Rigifilida and Mantamonas (CRuMs) [10].
There are many unicellular micro-eukaryotic lineages of which phylogenetic positions remain uncertain ('orphan' lineages). Some of the current orphan lineages most likely represent as-yet-unknown portions of the diversity of eukaryotes and hold clues to resolve the eukaryotic evolution. Prior to DNA sequencing experiments gaining in popularity in phylogenetic/taxonomic studies, diverse eukaryotes were isolated from the natural environments and examined by microscopes. If the morphological characteristics of the eukaryotes of interest showed no clear affinity to any other eukaryotes, their phylogenetic affiliations remained uncertain [11][12][13][14][15]. The analyses of small subunit ribosomal DNA (SSU rDNA)-one of the most popular gene markers for organismal phylogeny-succeeded in finding the phylogenetic homes of many lineages, of which morphological information was insufficient to resolve their phylogenetic affiliations [16][17][18][19]. More recently, orphan lineages, as well as newly found eukaryotes have been subjected to phylogenomic analyses [8,9,[20][21][22][23][24][25][26][27].
Phylogenomic analyses are not always valid for elucidating the phylogenetic positions of all of the orphan lineages recognized to date. For instance, the positions of Malawimonadida [28], Ancyromonadida [10], Hemimastigophora [29], Ancoracysta twista [30] and Microheliella maris [31] could not be clarified even after phylogenomic analyses. The pioneering studies might have failed to clarify the phylogenetic positions of the orphan lineages listed above due to insufficient data and/or taxon sampling in the alignments and various forms of systematic artefacts in tree reconstruction (e.g. long branch attraction or LBA [32]). However, there is a possibility that some of the orphan lineages are genuine deep branches that are critical to resolving the backbone of the tree of eukaryotes. In this study, we attempted to clarify the phylogenetic position of M. maris by analysing a new phylogenomic alignment. Microheliella maris was originally described as a member of the phylum Heliozoa based on the shared morphological similarities (e.g. the radiating axopodia with tiny granules and the centroplast) [33]. Cavalier-Smith et al. [31] then examined the phylogenetic position of M. maris by analysing the alignment comprising 187 genes. Nevertheless, M. maris is still regarded as one of the orphan eukaryotes [34], as the choice of the methods for tree reconstruction and taxon sampling affected largely the position of this eukaryote in the 187-gene phylogeny [31].
We here reassessed the phylogenetic position of M. maris by analysing a new phylogenomic alignment comprising 319 genes (88 592 amino acid positions in total). The 319-gene phylogeny placed M. maris at the base of the Cryptista clade with high statistical support, suggesting that this eukaryote holds keys to understanding the early evolution of Cryptista as well as Diaphoretickes. Indeed, we further demonstrated that M. maris and Rhodelphidia, which occupy the basal position of Cryptista and that of Rhodophyta, respectively, suppress the erroneous 'signal' attracting Cryptophyceae and Rhodophyta to each other and contribute to recovering (i) the monophyly of Archaeplastida and (ii) the sister relationship between Archaeplastida and the clade of Cryptista plus M. maris. Finally, we explored the biological ground for the phylogenetic artefact uniting Cryptophyceae and Rhodophyta together.
Cell culturing and RNA-seq analysis
We generated the RNA-seq data from M. maris and Hemiarma marina, a species of Goniomonadea, in this study. The culture of M. maris (studied in Yabuki et al. [33]) and that of H. marina (established in Shiratori and Ishida [35]) have been kept in the laboratory and were used in this study. The harvested cells of both organisms were subjected to RNA extraction using TRIzol (Life Technologies) by following the manufacturer's instructions. We shipped the two RNA samples to a biotech company (Hokkaido System Science) for cDNA library construction from the poly-A-tailed RNAs followed by sequencing using the Illumina Hi-seq 2000 platform. For M. maris, 1.6 × 10⁷ paired-end 100 bp reads (1.6 Gb in total) were obtained and then assembled into 30 305 unique contigs by TRINITY v. 2.8.4 [36,37]. For H. marina, we obtained 1.9 × 10⁷ paired-end 100 bp reads (1.9 Gb in total) and assembled them into 41 539 unique contigs by TRINITY v. 2.8.4 [36,37].
Global eukaryotic phylogeny
To elucidate the phylogenetic position of M. maris, we prepared a phylogenomic alignment by updating an existing dataset comprising 351 genes [29]. For each of the 351 genes, we added the homologous sequences retrieved by TBLASTN (E-value cut-off was set to 10⁻³⁰) from the transcriptomic data newly generated from M. maris and H. marina in this study (see above), as well as other eukaryotes that were absent in the original data [29], such as Marophrys sp. SRT127 [38], two species of Rhodelphidia (i.e. Rhodelphis limneticus and R. marinus) [26], and Ancoracysta twista [30]. Individual single-gene alignments were aligned by MAFFT v. 7.205 [39,40] with the L-INS-i algorithm followed by manual correction and exclusion of ambiguously aligned positions. Each of the single-gene alignments was subjected to a preliminary phylogenetic analysis using FASTTREE v. 2.1 [41,42] under the LG + Γ model. The resultant approximately maximum-likelihood trees with SH-like local supports were inspected to identify the alignments bearing aberrant phylogenetic signal that disagreed strongly with any of a set of well-established monophyletic assemblages in the tree of eukaryotes, namely Opisthokonta, Amoebozoa, Alveolata, stramenopiles, Rhizaria, Rhodophyta, Chloroplastida, Glaucophyta, Haptophyta, Cryptista, Jakobida, Euglenozoa, Heterolobosea, Diplomonadida, Parabasalia and Malawimonadida. A total of 32 out of the 351 single-gene alignments were found to violate the above-mentioned criteria and were excluded from the phylogenomic analyses described below. The remaining 319 single-gene alignments (electronic supplementary material, table S1) were concatenated into a single phylogenomic alignment containing 82 taxa with 88 592 unambiguously aligned amino acid positions. The coverage for each single-gene alignment is summarized in electronic supplementary material, table S1.
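The concatenation step can be illustrated with a small sketch that merges per-gene FASTA alignments into one supermatrix, padding taxa missing from a gene with gaps. The file names, taxon labels, and the simple FASTA parser are assumptions for illustration; the study used its own curation pipeline around MAFFT rather than this code.

from pathlib import Path

def read_fasta(path):
    """Very small FASTA reader: returns {taxon: aligned sequence}."""
    seqs, name = {}, None
    for line in Path(path).read_text().splitlines():
        if line.startswith(">"):
            name = line[1:].split()[0]
            seqs[name] = ""
        elif name:
            seqs[name] += line.strip()
    return seqs

def concatenate(gene_files, all_taxa):
    """Concatenate gene alignments; taxa missing from a gene are filled with gaps."""
    supermatrix = {t: "" for t in all_taxa}
    for f in gene_files:
        aln = read_fasta(f)
        length = len(next(iter(aln.values())))
        for t in all_taxa:
            supermatrix[t] += aln.get(t, "-" * length)
    return supermatrix

# Example with hypothetical file names and taxon labels:
# genes = sorted(Path("alignments").glob("*.fasta"))
# taxa = {"Microheliella_maris", "Hemiarma_marina", "Palpitomonas_bilix"}
# matrix = concatenate(genes, taxa)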
We first subjected the final alignment comprising 319 genes from 82 taxa (GlobE alignment) to the maximum-likelihood (ML) method by IQ-TREE v. 1.6.12 [43] with the LG + Γ + F + C60 model [44]. The robustness of the ML phylogenetic tree was evaluated with a non-parametric ML bootstrap analysis with the LG + Γ + F + C20 + PMSF ( posterior mean site frequencies) model (100 replicates). The ML tree inferred with the LG + Γ + F + C60 model was used as the guide tree for the bootstrap analysis incorporating PMSF. We also conducted Bayesian phylogenetic analysis with the CAT + GTR model using PHYLOBAYES-mpi v. 1.8a [45,46]. In this analysis, two MCMC runs were run for 5000 cycles with 'burn-in' of 1250. The consensus tree with branch lengths and Bayesian posterior probabilities (BPPs) were calculated from the remaining trees.
We evaluated the contribution of fast-evolving positions in the GlobE alignment to the position of M. maris.
Substitution rates of individual alignment positions were calculated over the ML tree by IQ-TREE v. 1.6.12 [43] and top 20%, 40%, 60% and 80% fastest-evolving positions were then removed from the original alignment. The processed alignments were then subjected to the ML bootstrap analysis with the UFBOOT approximation [47] (1000 replicates) by using IQ-TREE v. 1.6.12 [43] with the LG + Γ + F model. Henceforth, the alignment modification and the following ML bootstrap analyses are designated as 'FPR (fast-evolving position removal)' analysis.
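The fast-evolving-position removal step can be sketched as follows. It assumes per-site substitution rates are already available as a plain array (for instance, parsed from the site-rate output that IQ-TREE can write); the character matrix and rate values are illustrative, not the study's data.

import numpy as np

def remove_fastest_sites(alignment_columns, site_rates, fraction):
    """Drop the `fraction` of alignment columns with the highest estimated rates.

    alignment_columns: array of shape (n_sites, n_taxa), one character per cell.
    site_rates: per-site rates in the same order as the columns.
    """
    rates = np.asarray(site_rates)
    n_remove = int(round(fraction * len(rates)))
    keep = np.argsort(rates)[: len(rates) - n_remove]   # slowest sites are kept
    return alignment_columns[np.sort(keep)]             # preserve original site order

# Example: produce the 20%, 40%, 60% and 80% removal alignments used in the text.
# cols = np.array(...)   # hypothetical (n_sites, n_taxa) character matrix
# rates = np.array(...)  # hypothetical per-site rates
# for frac in (0.2, 0.4, 0.6, 0.8):
#     reduced = remove_fastest_sites(cols, rates, frac)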
We also examined the impact of the sampling of the genes in the GlobE alignment on the position of M. maris by 'RGS (random gene sampling)' analyses described below [48]. From the 319 genes in the GlobE alignment, 50 genes were randomly sampled and concatenated into a single alignment ('rs50g' alignment). The above procedure was repeated 50 times to obtain 50 of rs50g alignments. Likewise, we prepared (i) 50 of 'rs100g' alignments comprising 100 randomly sampled genes, (ii) 10 of 'rs150g' alignments comprising 150 randomly sampled genes and (iii) 10 of 'rs200g' alignments comprising 200 randomly sampled genes. The alignments comprising randomly sampled genes were subjected individually to the ML bootstrap analysis with the UFBOOT approximation (1000 replicates) by using IQ-TREE v. 1.6.12 [43] with the LG + Γ + F model.
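Likewise, the random gene sampling procedure amounts to repeatedly drawing gene subsets without replacement and concatenating them (for example with the hypothetical concatenation sketch above). The gene identifiers and the fixed random seed below are assumptions for illustration.

import random

def random_gene_sets(gene_names, subset_size, n_replicates, seed=1):
    """Return `n_replicates` random subsets of `subset_size` genes, sampled without replacement."""
    rng = random.Random(seed)
    return [rng.sample(gene_names, subset_size) for _ in range(n_replicates)]

# As in the text: 50 replicates of 50 and 100 genes, 10 replicates of 150 and 200 genes.
# genes = [f"gene_{i:03d}" for i in range(319)]   # hypothetical gene identifiers
# rs50g  = random_gene_sets(genes, 50, 50)
# rs100g = random_gene_sets(genes, 100, 50)
# rs150g = random_gene_sets(genes, 150, 10)
# rs200g = random_gene_sets(genes, 200, 10)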
Diaphoretickes phylogeny
To evaluate the impact of the inclusion of M. maris to the phylogenetic relationship among the species/lineages in Dipahoretickes, we excluded 22 taxa from the GlobE alignment to generate the second phylogenomic alignment, of which taxa were mostly members of Diaphoretickes. Note that the number of genes remained the same between the GlobE and the second, 'Diaph' alignments. The Diaph alignment was subjected to both ML and Bayesian phylogenetic analysis under all the same conditions as described above, except that we used the LG + Γ + F + C60 + PMSF model for the ML bootstrap analysis with the ML tree inferred with the LG + Γ + F + C60 model as the guide tree. Both FPR and RGS analyses (see above) were applied to the Diaph alignment. We also conducted both FPR and RGS analyses after excluding Rhodelphidia and M. maris alternatively from the Diaph alignment.
The taxon sampling of the Diaph alignment was further modified by excluding (i) Rhodelphidia and M. maris, (ii) Rhodelphidia, M. maris and Palpitomonas bilix, (iii) Rhodelphidia, M. maris, P. bilix and Goniomonadea and (iv) Rhodelphidia, M. maris, P. bilix and Cryptophyceae. We ran RGS analyses of all of the four alignments described above, and the last two were subjected to FPR analyses as well.
Results and discussion
3.1. Microheliella maris represents a lineage basal to Cryptista: proposal of 'Pancryptista' We analysed a transcriptome-based GlobE alignment consisting of 319 genes sampled from 82 eukaryotes, which represent the major taxonomic assemblages and several orphan taxa/lineages. The GlobE phylogeny recovered the major clades in eukaryotes, such as SAR, Amorphea, CRuMs, Discoba and Cryptista with full statistical support in both ML and Bayesian methods (figure 1; see also electronic supplementary material, figure S1). Microheliella maris branched at the base of the Cryptista clade, which comprises P. bilix, Goniomonadea including Hemiarma marina, and Cryptophyceae, with an MLBP of 99% and a BPP of 1.0. The GlobE alignment includes no data of Kathablepharidacea which is the other cryptistan subgroup, as their available data are extremely low site coverage. However, the lack of Kathablepharidacea most likely had little impact on the phylogenetic position of M. maris relative to Cryptista, as far as the alignment includes P. bilix which is more basal than Kathablepharidacea in the Cryptista clade [21]. The monophyly of Archaeplastida including Rhodelphidia (Rhodelphis limneticus and R. marinus) and Picozoa sp., both of which grouped with Rhodophyta, was recovered with an MLBP of 87% and a BPP of 1.0. The intimate affinity of Rhodelphidia and Picozoa to Rhodophyta in the GlobE phylogeny is consistent with the recent phylogenomic studies [26,49]. Neither of the two recently proposed major clades in eukaryotes, T-SAR (Telonemia plus SAR) [24] and Haptista (Centrohelea plus Haptophyta) [22], was reconstructed. Either or both ML and Bayesian phylogenetic analyses failed to give full statistical support to the nodes connecting the lineages/species in Diaphoretickes, namely Archaeplastida, Centrohelea, Haptophyta, Telonemia, SAR and Cryptista plus M. maris. Thus, we conclude that the analyses of the GlobE alignment are insufficient to retrace the early evolution of Diaphoretickes with confidence. We here examined the phylogenetic position of M. maris inferred from the GlobE alignment by the progressive removal of fast-evolving positions (FPR analyses). The contribution of fast-evolving positions in the GlobE alignment to the union of M. maris and Cryptista is most likely negligible, as the ultrafast bootstrap support values (UFBPs) for the clade comprising M. maris and Cryptista stayed 100% until the top 80% fastest-evolving positions were removed (purple line in electronic supplementary material, figure S2a). We detected two conflicting phylogenetic signals regarding the position of M. maris relative to the members of Cryptista included in the GlobE alignment, one placing M. maris at the base of the Cryptista clade and the other uniting M. maris and P. bilix directly (red and yellow lines, respectively, in electronic supplementary material, figure S2a). However, the former signal constantly dominated over the latter, regardless of the amount of fast-evolving positions in the alignment. Thus, we conclude that the basal position of M. maris to the Cryptista clade in the GlobE phylogeny (figure 1) is free from potential phylogenetic artefacts stemming from fastevolving positions.
To evaluate the impact of gene sampling on the position of M. maris in the GlobE phylogeny, we randomly sampled 50 genes, 100 genes, 150 genes and 200 genes from the 319 genes and concatenated them into 'rs50g,' 'rs100g,' 'rs150g,' and 'rs200g' alignments, respectively (note that the taxon sampling remained the same). The UFBPs for the clade comprising M. maris and Cryptista calculated from 50 of rs50g alignments and 50 of rs100g alignments distributed from 0 (or nearly 0) to 100% (electronic supplementary material, figure S2b; see also electronic supplementary material, table S2 for the details). The UFBP for the clade of M. maris and Cryptista appeared to be less than 40% in the analyses of 12 out of the 50 of rs50g alignments and five …
The sister relationship between Archaeplastida and Pancryptista: proposal of 'CAM clade'
The Diaph alignment, which was generated by excluding 22 taxa from the GlobE alignment, was analysed to explore the impact of M. maris on the phylogenetic relationships among the major lineages in Diaphoretickes. Twenty-one of the 22 taxa excluded from the GlobE alignment were not members of Diaphoretickes: most of Opisthokonta, all discobids, Paratrimastix pyriformis, Nutomonas longa, and Malawimonas jakobiformis. We excluded a single member of Diaphoretickes, Picozoa sp., from the Diaph alignment due to its instability in the GlobE phylogeny, which likely stemmed from its low site coverage in the GlobE alignment (figure 1). The phylogenetic relationships among the major Diaphoretickes lineages/species inferred from the Diaph alignment (e.g. the basal position of M. maris relative to the Cryptista clade) were essentially the same as those inferred from the GlobE alignment (electronic supplementary material, figures S1 and S2a). We here focus on the monophyly of Archaeplastida and the sister relationship between Archaeplastida and Pancryptista, both of which were fully supported in the ML and Bayesian analyses of the Diaph alignment (figure 2a). Neither the monophyly of Archaeplastida nor the sister relationship between Archaeplastida and Cryptista has previously been recovered unambiguously. For instance, Gawryluk et al. [26] analysed an alignment comprising 254 genes but could not settle the relationship among Chloroplastida, Glaucophyta, and Rhodophyta plus Rhodelphidia. The ML analysis of their 254-gene alignment placed Cryptista within the three lineages of Archaeplastida, although Bayesian analysis of the same alignment recovered both the monophyly of Archaeplastida and the sister relationship between Archaeplastida and Cryptista. In another phylogenomic study, the monophyly of Archaeplastida was not reconstructed in either the ML or the Bayesian analysis of an alignment comprising 248 genes, as Cryptista was tied with Rhodophyta [24]. Irisarri et al. [50] recently analysed a 311-gene alignment and demonstrated that taxon sampling, selection of alignment positions, and substitution models are critical to recovering the monophyly of Archaeplastida.
In contrast to the pioneering phylogenomic studies discussed above, both the monophyly of Archaeplastida and the sister relationship between Pancryptista and Archaeplastida were reconstructed from the Diaph alignment with full statistical support by both ML and Bayesian methods (figure 2a). Significantly, FPR analysis of the Diaph alignment had little impact on the UFBPs for the two nodes of interest (figure 2b). Both the monophyly of Archaeplastida and the sister relationship between Pancryptista and Archaeplastida received full or nearly full UFBPs until the top 60% fastest-evolving positions were removed (purple and green lines, respectively, in figure 2b). We also analysed rs50g, rs100g, rs150g and rs200g alignments generated from the Diaph alignment (figure 2c,f,i; see also electronic supplementary material, table S3 for the details). The results of the RGS analyses clearly indicate that the UFBPs for the two nodes of interest (and that for the monophyly of Pancryptista) increased in proportion to the number of genes considered. We conclude that Archaeplastida is a genuine clade, as demonstrated by Irisarri et al. [50], and that Pancryptista is the closest relative of Archaeplastida, and we therefore propose the name 'CAM clade' for the clade uniting Archaeplastida and Pancryptista. The proposed name is an acronym derived from the first letters of Cryptista, Archaeplastida and Microheliella.
It is significant that the monophyly of Archaeplastida and the sister relationship between Archaeplastida and Cryptista were recovered in a 311-gene phylogeny before the M. maris data became available [7,50]. Thus, we decided to evaluate systematically how the inclusion of M. maris, as well as that of Rhodelphidia, contributed to the recovery of the monophyly of Archaeplastida and the sister relationship between Archaeplastida and Pancryptista/Cryptista. We re-analysed rs50g, rs100g, rs150g and rs200g alignments generated from the Diaph alignment after excluding Rhodelphidia or M. maris (figure 2d,e,g,h,j,k; see also electronic supplementary material, table S4 for the details). In the analyses of the rs50g and rs100g alignments, the removal of Rhodelphidia/M. maris lowered the overall distributions of the UFBPs for the monophyly of Archaeplastida and the sister relationship between Archaeplastida and Pancryptista/Cryptista (figure 2d,e,g,h). However, most of the UFBPs for the two groupings of interest in the analyses of the rs200g alignments were around or greater than 90% (figure 2d,e,g,h; electronic supplementary material, table S4). Likewise, the overall distribution of the UFBPs for the monophyly of Pancryptista was apparently lowered in the analyses of the rs50g, rs100g and rs150g alignments in the absence of Rhodelphidia (figure 2j). After M. maris was excluded, the monophyly of Cryptista was constantly recovered with full UFBPs, except for a UFBP of 21.5% obtained in the analysis of a single rs50g alignment (figure 2k; electronic supplementary material, table S4). The removal of Rhodelphidia or M. maris thus had a moderate but apparent impact on the monophyly of Archaeplastida and the sister relationship between Archaeplastida and Pancryptista/Cryptista in alignments comprising 100 or fewer genes, although this impact can be overcome by increasing the alignment size.
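Operationally, each of these exclusion datasets is simply the same supermatrix with a named set of taxa pruned out before the tree inference is repeated. A minimal sketch is given below; the taxon labels and file names are placeholders and not necessarily the labels used in the actual alignments.

# Sketch of taxon-exclusion datasets: drop named taxa from a concatenated
# alignment before repeating the phylogenomic analysis. Labels are placeholders.
from Bio import AlignIO
from Bio.Align import MultipleSeqAlignment

EXCLUSION_SETS = {
    "no_Rhodelphis": {"Rhodelphis_limneticus", "Rhodelphis_marinus"},
    "no_Microheliella": {"Microheliella_maris"},
    "no_Rhodelphis_no_Microheliella": {"Rhodelphis_limneticus", "Rhodelphis_marinus",
                                       "Microheliella_maris"},
}

aln = AlignIO.read("diaph_alignment.fasta", "fasta")
for label, drop in EXCLUSION_SETS.items():
    missing = drop - {rec.id for rec in aln}
    if missing:
        print(f"warning ({label}): taxa not found in alignment: {missing}")
    kept = MultipleSeqAlignment([rec for rec in aln if rec.id not in drop])
    AlignIO.write(kept, f"diaph_{label}.fasta", "fasta")
    # each pruned alignment is then analysed exactly as the full alignment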
On the artefactual grouping of Rhodophyta and Cryptophyceae
The phylogenetic analyses described above indicated that taxon sampling is key to recovering the monophyly of Archaeplastida and the sister relationship between Archaeplastida and Pancryptista (CAM clade) with confidence. Why, then, did phylogenomic analyses in which either or both of Rhodelphidia and M. maris were absent often fail to recover the monophyly of Archaeplastida? For instance, a recent phylogenomic study [24], which considered neither Rhodelphidia nor M. maris, grouped Rhodophyta and Cryptista together instead of recovering the monophyly of Archaeplastida. The absence of both Rhodelphidia and M. maris had a greater impact on the UFBP for the monophyly of Archaeplastida and that for the sister relationship between Archaeplastida and Cryptista than the absence of either of the two lineages/species. Regardless of the number of randomly sampled genes, the distributions of the UFBPs for the two groupings of interest tended to be lower than the corresponding values calculated from the analyses excluding either Rhodelphidia or M. maris (compare figure 2d,e with figure 3a, and figure 2g,h with figure 3e; see also electronic supplementary material, table S3 for the details). Interestingly, a faint affinity between Rhodophyta and Cryptista became detectable in the absence of Rhodelphidia and M. maris (figure 3i). After P. bilix was additionally excluded (Cryptista was represented by Goniomonadea and Cryptophyceae), both the UFBP for the monophyly of Archaeplastida and that for the sister relationship between Archaeplastida and Cryptista were further lowered (figure 3b,f). In stark contrast, the exclusion of P. bilix enhanced the affinity between Rhodophyta and Cryptista (figure 3j). These results clearly indicate that, in the absence of Rhodelphidia and M. maris, P. bilix has a significant impact on the recovery of the monophyly of Archaeplastida to the exclusion of Cryptista.
Figure 2 (legend, in part). We repeated ultrafast bootstrap analyses using IQ-TREE 1.6.12 on the Diaph alignment after excluding no positions and the top 20, 40, 60 and 80% fastest-evolving positions. The plots in purple, green, blue and red indicate the ultrafast bootstrap support values (UFBPs) for the monophyly of Pancryptista, the monophyly of Archaeplastida, the CAM clade, and the union of Rhodophyta and Pancryptista, respectively. (c-k) Analyses of the alignments generated by random gene sampling (RGS). We sampled 50, 100, 150 and 200 genes randomly from the 319 genes in the Diaph alignment, concatenated them into 'rs50g', 'rs100g', 'rs150g' and 'rs200g' alignments, and subjected these to ultrafast bootstrap analyses using IQ-TREE 1.6.12. The UFBPs for the CAM clade (i.e. the sister relationship between Pancryptista and Archaeplastida), the monophyly of Archaeplastida, and the monophyly of Pancryptista are presented as box-and-whisker plots in (c), (f) and (i), respectively. The above-mentioned analyses were repeated after Rhodelphis spp. or M. maris were excluded from the alignments. The UFBPs from the analyses excluding Rhodelphis spp. and those from the analyses excluding M. maris are presented in (d), (g) and (j), and (e), (h) and (k), respectively. The UFBPs shown in the plots described above are summarized in electronic supplementary material, table S3.
We further analysed alignments in which Cryptista was represented solely by Cryptophyceae or by Goniomonadea. The most drastic results were obtained from the analyses of the alignments in which Cryptophyceae was the sole representative of Cryptista (figure 3c,g,k; see also electronic supplementary material, table S4 for the details). After the exclusion of the non-photosynthetic lineages in the CAM clade (i.e. Rhodelphidia, M. maris, P. bilix and Goniomonadea), the union of Cryptista (i.e. Cryptophyceae) and Rhodophyta dominated over the monophyly of Archaeplastida, particularly in the analyses of larger alignments (figure 3g,k). The analyses of the rs200g alignments recovered the union of Cryptista and Rhodophyta with UFBPs ranging from 43.2 to 97.6% (electronic supplementary material, table S4). Of note, the decrease in UFBP values was much more severe for the sister relationship between Archaeplastida and Cryptista than for the monophyly of Archaeplastida: the UFBPs for the former grouping were 0 or nearly 0, regardless of the alignment size (figure 3c; electronic supplementary material, table S4). Compared with the analyses in which Cryptophyceae was the sole representative of Cryptista, we observed only a mild suppression of the monophyly of Archaeplastida and of the sister relationship between Archaeplastida and Cryptista in the analyses of the alignments in which Cryptista was represented by Goniomonadea (figure 3d,h; electronic supplementary material, table S4). These observations coincide with the affinity between Goniomonadea and Rhodophyta being more weakly supported (figure 3l) than that between Cryptophyceae and Rhodophyta (figure 3k).
We revealed that Rhodophyta was attracted to Goniomonadea and Cryptophyceae erroneously in the ML phylogenies inferred from the alignments lacking Rhodelphidia, M. maris and P. bilix (see above). In the global eukaryotic phylogeny, Rhodelphidia interrupts the branch leading to the Rhodophyta clade. Likewise, M. maris and P. bilix, both of which are basal branches of the Pancryptista clade, break the branch leading to the clade of Cryptophyceae and Goniomonadea (i.e. Cryptomonada). Thus, the grouping of Rhodophyta and Cryptomonada is most likely a phylogenetic artefact in which two long branches, one exposed by the absence of Rhodelphidia and the other exposed by the absence of M. maris and P. bilix, attract each other, i.e. a long-branch attraction (LBA) artefact [32]. Importantly, this phylogenetic artefact could not be overcome completely even in the analyses of alignments comprising at least 200 genes (figure 3j-l). We therefore anticipated that the putative phylogenetic artefact uniting Rhodophyta and Cryptomonada would be enhanced further by the exclusion of Cryptophyceae (or Goniomonadea) (figure 3k,l), as this procedure extends the branch leading to the clade of Goniomonadea (or Cryptophyceae). The exclusion of Goniomonadea (i.e. Cryptophyceae being the sole representatives of Cryptomonada) appeared to enhance the putative phylogenetic artefact to a much greater degree than the exclusion of Cryptophyceae (i.e. Goniomonadea being the sole representatives of Cryptomonada) (compare figure 3c with d, figure 3g with h, and figure 3k with l). These results imply that the phylogenetic artefact uniting Rhodophyta and Cryptophyceae is substantially different from that uniting Rhodophyta and Goniomonadea. Altogether, we conclude that both alignment size and taxon sampling, particularly the sampling of the members of the CAM clade, matter heavily for reconstructing the monophyly of Archaeplastida and the sister relationship between Archaeplastida and Pancryptista/Cryptista with confidence.
Figure 3. Analyses assessing the phylogenetic affinity of Rhodophyta to Cryptophyceae and/or Goniomonadea. (a-l) Analyses of the alignments generated by random gene sampling (RGS). We excluded both Rhodelphis spp. and Microheliella maris from the 'rs50g', 'rs100g', 'rs150g' and 'rs200g' alignments, which were generated from the Diaph alignment (see Methods for details), and then subjected them to ultrafast bootstrap analyses using IQ-TREE 1.6.12. The ultrafast bootstrap support values (UFBPs) for the sister relationship between Archaeplastida and Cryptista, the monophyly of Archaeplastida, and the union of Rhodophyta and Cryptista are presented as box-and-whisker plots in (a), (e) and (i), respectively. The ultrafast bootstrap analyses on the rs50g, rs100g, rs150g and rs200g alignments were repeated after further exclusion of Palpitomonas bilix (b, f and j), P. bilix and Goniomonadea (c, g and k), and P. bilix and Cryptophyceae (d, h and l). The UFBPs shown in the plots described above are summarized in electronic supplementary material, table S4. (m,n) Analyses of the alignments processed by fast-evolving position removal (FPR). We modified the Diaph alignment in two ways: (i) the exclusion of Rhodelphis spp., M. maris, P. bilix and Cryptophyceae, and (ii) the exclusion of Rhodelphis spp., M. maris, P. bilix and Goniomonadea. The two modified Diaph alignments were processed by FPR and further subjected to ultrafast bootstrap analyses. We plotted the UFBPs for the monophyly of the SAR clade (brown), those for the monophyly of Archaeplastida (green), and those for the union of Rhodophyta and Goniomonadea/Cryptophyceae (red).
To investigate why Rhodophyta is artefactually attracted to Cryptophyceae more severely than to Goniomonadea (see above), we modified the Diaph alignment by excluding Rhodelphidia and all members of Pancryptista except Goniomonadea, and the resultant alignment was then subjected to FPR analysis (figure 3m). In the analysis of the alignment with all positions, the union of Goniomonadea and Rhodophyta received a UFBP of greater than 80%, while the monophyly of Archaeplastida was supported by a UFBP of less than 10%. The analyses of the alignments after removing the top 20-60% fastest-evolving positions drastically increased the UFBP for the monophyly of Archaeplastida (89-100%; green line in figure 3m), while the UFBP for the grouping of Rhodophyta and Goniomonadea was reduced to less than 10% (red line in figure 3m). Thus, we conclude that the union of Rhodophyta and Goniomonadea is a typical LBA artefact stemming from fast-evolving positions. We repeated the same analysis but substituted Goniomonadea with Cryptophyceae (figure 3n). Unexpectedly, the union of Rhodophyta and Cryptophyceae received UFBPs of 98%, 100%, 100%, 70% and 92% in the analyses after removal of no positions and the top 20%, 40%, 60% and 80% fastest-evolving positions, respectively (red line in figure 3n). The UFBP for the monophyly of Archaeplastida was less than 10%, except that the analysis after removal of the top 60% fastest-evolving positions gave a UFBP of 30% (green line in figure 3n). These results strongly suggest that, in terms of dependency on fast-evolving positions, the phylogenetic artefact uniting Rhodophyta and Cryptophyceae is distinct from the typical LBA artefact uniting Rhodophyta and Goniomonadea.
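Reading the support for a particular grouping out of each consensus tree can be automated; the sketch below uses the ete3 toolkit and assumes the UFBPs are stored as numeric internal-node supports in the Newick files, with placeholder taxon and outgroup names standing in for the real labels.

# Sketch: extract the ultrafast bootstrap support (UFBP) of a clade of interest
# from a consensus tree, using the ete3 toolkit. Taxon names are placeholders.
from ete3 import Tree

TARGET = {"Rhodophyta_sp", "Glaucophyta_sp", "Chloroplastida_sp"}   # e.g. an Archaeplastida stand-in

def clade_support(treefile, target_taxa, outgroup):
    tree = Tree(treefile)                  # numeric supports are parsed from internal node labels
    tree.set_outgroup(outgroup)            # root the tree so that "clade" is well defined
    node = tree.get_common_ancestor(list(target_taxa))
    if set(node.get_leaf_names()) != set(target_taxa):
        return None                        # the grouping was not recovered in this tree
    return node.support

for rep in range(50):
    ufbp = clade_support(f"rs200g_rep{rep:02d}.contree", TARGET, "Outgroup_placeholder_sp")
    print(rep, "not recovered" if ufbp is None else f"UFBP = {ufbp:.1f}")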
Exploring the biological perspective on the 'signal' uniting Rhodophyta and Cryptophyceae recovered in phylogenomic analyses
It is attractive to propose that the difference between the artefact uniting Rhodophyta and Cryptophyceae and that uniting Rhodophyta and Goniomonadea stems from the difference in lifestyle between these two closely related lineages in Cryptista. Goniomonadea are primarily heterotrophic and their nuclear genomes are free from endosymbiotic gene transfer (EGT) [51]. Indeed, the series of phylogenetic analyses described above demonstrated that typical LBA is sufficient to explain the union of Rhodophyta and Goniomonadea (illustrated typically by figure 3m). By contrast, the extant members of Cryptophyceae possess plastids that trace back to a red algal endosymbiont in the common ancestor of Cryptophyceae. While the red algal endosymbiont was being transformed into a host-governed plastid, a number of genes were transferred from the endosymbiont nucleus to the host nucleus. If a phylogenomic alignment contains genes acquired from the red algal endosymbiont, such genes are a source of phylogenetic 'signal' uniting Cryptophyceae and Rhodophyta. However, for the phylogenomic analyses in this study we selected the 319 genes such that none showed any apparent sign of EGT in the corresponding single-gene phylogenetic analysis. Additionally, we calculated the log-likelihoods (lnLs) of two tree topologies that were identical except for the position of Cryptophyceae, one bearing the monophyly of Archaeplastida (Tree 1) and the other bearing the grouping of Rhodophyta and Cryptophyceae (Tree 2), for each of the 319 single-gene alignments (note that Rhodelphidia, M. maris, P. bilix and Goniomonadea were omitted from the alignments) (electronic supplementary material, figures S5a and S5b). The 319 single-gene alignments were sorted by the lnL difference between the two test trees (normalized by alignment length), and the top 10 alignments preferring Tree 2 over Tree 1 were subjected individually to standard ML phylogenetic analyses (electronic supplementary material, figure S6; see also electronic supplementary material, table S5 for the details). Nevertheless, we did not detect any strong phylogenetic affinity between Rhodophyta and Cryptophyceae in any of the 10 single-gene ML analyses (electronic supplementary material, figure S6; see also electronic supplementary material, table S5). These results cannot be explained by a simple scenario assuming that a subset of the 'cryptophycean genes' in the phylogenomic alignment was in fact acquired endosymbiotically from the red algal endosymbiont, as briefly mentioned in Cavalier-Smith et al. [31]. We therefore examined an additional scenario, which assumes that a potentially large number of cryptophycean nuclear genes (including those that compose the phylogenomic alignment) are chimeras of sequence inherited vertically from before the red algal endosymbiosis and sequence acquired from the red algal endosymbiont. In each chimeric gene, the phylogenetic signal from the red algae-derived gene portion is likely insufficient to unite Rhodophyta and Cryptophyceae in a single-gene analysis. However, when multiple chimeric genes in the cryptophycean nuclear genomes are included in a phylogenomic alignment, the phylogenetic signal from the red algae-derived gene portions becomes detectable as the union of Rhodophyta and Cryptista in the absence of Rhodelphidia and the basally branching taxa in Pancryptista, such as M. maris and P. bilix.
To examine this second scenario, we additionally calculated the site-wise lnL differences between Trees 1 and 2, but no clear sign of putative red algal gene fragments was detected (electronic supplementary material, figure S7; see also electronic supplementary material, table S6). Although the results described above cannot completely exclude the potential chimerization of the nuclear genes in Cryptophyceae, at present we have no plausible explanation for the artefactual union of Rhodophyta and Cryptophyceae. When the principal reason why Rhodophyta and Cryptophyceae are artefactually attracted to each other in phylogenomic analyses is clarified, we may unveil an as-yet-unknown commonality in genome evolution between these two separate branches in the tree of eukaryotes.
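The gene-level screen described above amounts to summing per-site log-likelihoods under the two constrained topologies and ranking genes by the length-normalised difference; the site-wise comparison is the same bookkeeping without the per-gene sum. A sketch of the gene-level ranking is shown below; it assumes that per-site log-likelihood tables have already been exported for Tree 1 and Tree 2 for each gene, and the file layout is an illustrative assumption.

# Sketch: rank single-gene alignments by the length-normalised lnL difference
# between Tree 1 (Archaeplastida monophyletic) and Tree 2 (Rhodophyta +
# Cryptophyceae). Assumes one whitespace-separated file of per-site lnLs per
# gene and per tree; the directory layout is an illustrative assumption.
from pathlib import Path

def site_lnls(path):
    return [float(x) for x in Path(path).read_text().split()]

ranking = []
for gene_dir in sorted(p for p in Path("single_gene_lnls").iterdir() if p.is_dir()):
    lnl1 = site_lnls(gene_dir / "tree1_site_lnl.txt")
    lnl2 = site_lnls(gene_dir / "tree2_site_lnl.txt")
    assert len(lnl1) == len(lnl2), f"site counts differ for {gene_dir.name}"
    delta = (sum(lnl2) - sum(lnl1)) / len(lnl1)    # > 0 means the gene prefers Tree 2
    ranking.append((delta, gene_dir.name))

for delta, gene in sorted(ranking, reverse=True)[:10]:   # top 10 genes preferring Tree 2
    print(f"{gene}\t{delta:+.5f}")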
Conclusion
In this work, we have deepened our understanding of the early evolution of eukaryotes. The phylogenomic analyses presented here demonstrated that M. maris is critical to understanding the early evolution of Cryptista, as well as that of Archaeplastida. We also revealed that the deep branches of Archaeplastida and Pancryptista, namely Rhodelphidia, M. maris and P. bilix (although not examined in this study, Picozoa most likely has an equivalent impact), are critical to suppressing the cryptic and severe phylogenetic 'signal' in cryptophycean genes. The data are provided in electronic supplementary material [52]. | 2022-04-13T16:37:09.285Z | 2022-04-01T00:00:00.000 | {
"year": 2022,
"sha1": "2982d9eaee43f21ca454f5d93034c7dd180303f3",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "RoyalSociety",
"pdf_hash": "2982d9eaee43f21ca454f5d93034c7dd180303f3",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
235701708 | pes2o/s2orc | v3-fos-license | Severe onset of inflammatory myositis in a child: think to paraneoplastic myositis
Background Juvenile idiopathic inflammatory myopathies (JIIMs) are a group of heterogeneous, acquired, autoimmune disorders that affect the muscle. While the association between IIMs and malignancy has been widely reported in adults, cancer-associated myositis (CAM) is rare in children, so routine malignancy screening is not generally performed. This report describes a case of severe CAM in a child. Case presentation An 11-year-old girl presented with worsening dyspnea after a 3-week history of progressive proximal weakness, myalgia, dysphagia, and weight loss. Her past history was remarkable for a type I Arnold-Chiari malformation associated with an anterior sacral meningocele. Physical examination showed severe hypotonia and hypotrophy. Pulse oximetry and blood tests showed type II respiratory failure (SpO2 88%, pCO2 68 mmHg) and increased muscle enzyme levels (CPK 8479 U/L, AST 715 U/L, ALT 383 U/L, LDH 1795 U/L). The patient needed invasive mechanical ventilation. Inflammatory myositis was considered and treatment with intravenous methylprednisolone (30 mg/Kg/day for 3 days followed by 2 mg/Kg/day) and IVIG (1 g/kg/day for 2 days) was started. Muscle biopsy showed endomysial and perimysial necrosis and inflammation. Serum anti-TIF1-γ antibody positivity led to a malignancy screening. Whole-body MRI showed a mature teratoma underneath the sacral meningocele, and both lesions were surgically removed. Given the histological and clinical severity of the myopathy, mycophenolate (500 mg twice a day) and rituximab (360 mg/m2, 4 weekly infusions) were added. Due to extreme muscular wasting, severe malnutrition and intolerance to enteral feeding, the patient needed a transient tracheostomy and parenteral nutrition, followed by physiotherapy, speech therapy and nocturnal non-invasive ventilation. Complete remission was achieved 3 months later. Conclusions Among cancer-associated autoantibodies (CAAs) in adult patients, anti-TIF1-γ carries the highest risk of CAM, which most likely has a paraneoplastic pathogenesis. In children, anti-TIF1-γ antibody has been associated with severe cutaneous disease, lipodystrophy, and a chronic disease course, but not with CAM, which is overall rare in younger patients. Severe onset of a JIIM, especially if anti-TIF1-γ antibody positive, should prompt suspicion of CAM and lead to screening for malignancy.
Background
Juvenile idiopathic inflammatory myopathies (JIIMs) are a group of heterogeneous, acquired, autoimmune disorders that affect muscle and, to a lesser extent, skin, with onset during childhood. Juvenile dermatomyositis (JDM) is the most recognizable and frequent (up to 95% of JIIMs), with an incidence of approximately 2.5 per million [1]; other forms of JIIMs, such as juvenile polymyositis (JP), immune-mediated necrotizing myositis (IMNM) and juvenile connective tissue disease-associated myositis (JCTM), are even rarer and more difficult to identify than their adult counterparts.
All of the JIIMs commonly present with an acute or subacute onset of symmetric and proximal (hip and shoulder girdles, axial muscles) weakness; typical skin manifestations (Gottron papules, heliotrope rash, V-sign and shawl-sign rashes) are key features in JDM, but are usually lacking in other forms of JIIM; involvement of other organ systems, such as the gastrointestinal tract, pulmonary system, or joints, may also be present and is considered an element of disease severity [2]. Clinical diagnosis is based on the EULAR/ACR classification criteria [3]. Treatment with glucocorticoids along with methotrexate is the mainstay of therapy. More severe patients require adjunctive immunosuppressant drugs, IVIG, and/or rituximab to obtain remission.
Several myositis-associated autoantibodies (MAAs) have been recognized to date and are widely accepted for their ability to stratify patients into clinically homogeneous groups. Their use in making an accurate diagnosis and defining prognosis is emerging [4], but further characterization of their role is needed, especially in children.
This report shows the role of anti-TIF1-γ antibody in the diagnosis of CAM in an 11-year-old girl presenting with a severe JIIM onset.
Case presentation
An 11-year-old girl presented to the emergency department with worsening dyspnea and mild dysuria after a 3-week history of progressive proximal weakness of both upper and lower extremities, occasional bilateral leg myalgia, dysphagia and dysphonia. She had lost approximately 13% of her body weight. Her past history was remarkable for a type I Arnold-Chiari malformation incidentally diagnosed 5 years before, associated with a 3 cm-long cervical hydrosyringomyelia and an anterior sacral meningocele.
Physical examination was remarkable for severe diffuse muscle hypotonia and hypotrophy, with diminished deep tendon reflexes and abolished patellar reflexes. Pulse oximetry and capillary blood gas testing showed type II respiratory failure (SpO2 88%, pH 7.33, pCO2 68 mmHg, HCO3− 30 mmol/L). Blood tests were remarkable for neutrophilic leukocytosis (WBC 14,400/mm3, N 9,710/mm3) and elevated muscle enzyme levels (CPK 8479 U/L, AST 715 U/L, ALT 383 U/L, LDH 1795 U/L) with normal inflammatory markers. Neurological evaluation and an MRI scan of the brain and spine ruled out worsening of the Arnold-Chiari malformation. Capillaroscopy showed dilated and giant capillaries with avascular areas.
Findings were consistent with a severe onset of JIIM, and treatment with intravenous methylprednisolone (30 mg/Kg/day for 3 days, followed by 2 mg/Kg/day) and IVIG (1 g/kg, two infusions) was started. The girl was then admitted to the PICU, given her need for invasive mechanical ventilation.
Metabolic myopathies were excluded through evaluation of urinary organic acids and the serum acylcarnitine profile. Autoantibody screening showed ANA positivity (1:1280). A repetitive nerve stimulation test ruled out a simultaneous neuromuscular junction disorder. Muscle biopsy showed endomysial and perimysial necrosis and infiltration of mononuclear cells (CD4+ and CD8+ T-cells and NK cells) (Fig. 1), ruling out a mitochondrial myopathy. Evaluation of MAAs revealed anti-TIF1-γ and anti-PM/Scl100 antibody positivity. Whole-body MRI showed a 19 mm-wide mass underneath the previously documented meningocele (Fig. 2). After surgical removal of both the meningocele and the mass, the latter was histologically characterized as a mature teratoma.
Due to extreme hypotonia, muscular wasting, severe malnutrition and intolerance to enteral feeding, the patient needed a transient tracheostomy and parenteral nutrition, followed by physiotherapy, speech therapy and nocturnal non-invasive ventilation.
Given the clinical severity of the disease, additional immunosuppressive therapy with mycophenolate mofetil (500 mg twice a day for 2 weeks, then increased to 750 mg twice a day as a result of therapeutic drug monitoring) and rituximab (360 mg/m2 of body surface, 4 weekly infusions) was added, together with antimicrobial prophylaxis with TMP/SMX and folic acid supplementation. Slow steroid tapering was started (a 5 mg decrease every week until 25 mg/day, then 2.5 mg every 10 days).
Disease activity was adequately controlled: muscle enzyme levels normalized from day 20 after admission (CK 139 U/L, aldolase 6.9 U/L), and a slow but progressive clinical improvement was observed. The patient was discharged from the PICU on day 31.
Parenteral nutrition ensured stable weight gain until the patient recovered from swallowing difficulties and incomplete glottic closure (as revealed by fiber-optic endoscopy) at day 48, when oral nutrition was restored, followed by tracheostomy tube removal 5 days later.
The Childhood Myositis Assessment Scale (CMAS) [7] showed a slow but persistent improvement over the 5 weekly measurements, increasing from day 31 (CMAS: 10/52) to hospital discharge (CMAS: 35/52). Speech therapy and respiratory physiotherapy were integrated into the rehabilitation program.
Respiratory muscle function was the last to fully recover. Non-invasive ventilation was suspended at day 31, but hypoventilation persisted, especially at night, as shown by overnight transcutaneous capnography performed at day 46 (average pCO2 51.9 mmHg, time over 50 mmHg: 73%; average SpO2 93%, time below 88%: 5%). Nocturnal non-invasive ventilation was therefore maintained for another month, after which overnight capnography during spontaneous breathing confirmed recovery (average pCO2 45.1 mmHg, time over 50 mmHg: 4%).
Two months after discharge the patient was in complete clinical remission, with a CMAS of 52/52. Her laboratory tests were completely normal while she was being treated with mycophenolate and a low dose of steroids (5 mg/day).
Discussion and conclusions
The association between IIMs and cancer has been widely reported, with a 2- to 7-fold increased risk in adults, so that malignancy screening is suggested for all adult patients with newly diagnosed inflammatory myositis [8]. Cancer-associated myositis (CAM) is typically defined as the development of a malignancy within 3 years of the diagnosis of myositis. The pathogenesis of CAM is still unclear, but a paraneoplastic nature has been proposed, given the temporal coincidence of cancer diagnosis and myositis onset, the correlation between their clinical courses, and the common expression of myositis-specific autoantigens by cancer cells and regenerating muscle cells [9,10]. While no significant difference was observed in the incidence of cancer among IIM subgroups, recognized risk factors for CAM include male gender, older age at disease onset, extensive skin or muscle involvement, elevated inflammatory markers, negative ANA and/or MSAs and, interestingly, positivity for anti-SAE1, anti-NXP2, anti-HMGCR and anti-TIF1-γ antibodies [10,11], also referred to as cancer-associated autoantibodies (CAAs). Adult patients with anti-TIF1-γ antibody showed the highest risk (17-fold higher compared with the age- and sex-matched general population) and prevalence (40.7%) of CAM, with an estimated specificity for diagnosing CAM of 92% [12]. No correlation was found between individual CAAs and cancer type, prognosis (which is overall worse than in myositis without cancer), or the temporal relationship between myositis onset and cancer diagnosis [10].
In children, at least one myositis autoantibody can be identified in approximately 70% of JIIM patients [13]. Anti-TIF1-γ antibody is the most prevalent (22-36%) [4], and has been associated with more severe cutaneous disease, lipodystrophy, and a chronic disease course [14], but not with CAM [6]. The TIF1 family includes three proteins of 155 kDa, 140 kDa and 120 kDa (TIF1-α, TIF1-β and TIF1-γ, respectively), involved in several cellular pathways such as cell proliferation, apoptosis, and innate immunity [15]; in particular, high levels of TIF1-γ were found in both regenerating skeletal muscle cells [16] and tumor cells [17], supporting the hypothesis of a paraneoplastic mechanism causing CAM. Further studies are needed to explain the difference in the incidence of CAM between anti-TIF1-γ-positive adults and children; the correlation between age and risk of cancer observed even among anti-TIF1-γ-positive adult patients [18] could be part of the answer.
CAM is rare in children: an update by Morris [19] found only 12 pediatric cases over 45 years of literature up to 2008. Therefore, routine malignancy screening is not generally performed [20]. Nonetheless, as shown by our case, cancer can occur and carries a poorer prognosis, especially if not recognized. A severe onset, with or without CAA positivity (anti-TIF1-γ antibody in particular), should always be considered in JIIMs and should lead to screening for malignancy. Anti-PM/Scl100 antibody is one of the most common MAAs in JIIMs, accounting for approximately 4% of cases; it correlates with overlap myositis (OM) in adult patients, but data on the associated clinical phenotype in children are limited [14,21]. A positive ANA test does not necessarily identify a specific rheumatic disease [2]; ANA positivity is found in approximately 70% of JIIM patients [22], especially if anti-TIF1-γ positive [14].
Another important issue to consider in this case is the presence of a known underlying spinal dysraphism, which can be associated with cancer, as in our patient. Benign teratomas have already been reported in possible association with JIIMs, along with other paraneoplastic syndromes such as limbic encephalitis, seronegative polyarthritis, or autoimmune hemolytic anemia [23].
The treatment of CAM follows the rules of JIIM; as recently stated by the SHARE recommendations [24], the mainstay of treatment is high-dose glucocorticoid (preferably methylprednisolone pulses of 15-30 mg/Kg/dose for 3 days, followed by oral prednisolone 1-2 mg/Kg/day), initially in combination with methotrexate (15-20 mg/m2 weekly, preferably subcutaneously). Given the need for surgical intervention, in our case mycophenolate mofetil was preferred over methotrexate for its better profile in terms of infectious risk, being an effective and well tolerated option in JDM treatment as well [25]. IVIG can be added to first-line therapy for severe forms of JIIM presenting with marked dysphagia or weakness [26]. Other treatment options, variably used for refractory disease in the absence of head-to-head trials, include ciclosporin A, cyclophosphamide, azathioprine and biologics such as rituximab. In particular, in a trial of 200 adult and juvenile patients suffering from PM, DM or JDM and treated with rituximab, 83% reached a clear improvement [27], with the presence of MSAs predicting a more rapid response [28]. The severity of our patient's presentation prompted a very aggressive early treatment (steroids, IVIG, mycophenolate and rituximab), even though a teratoma was found and then successfully removed surgically. Steroid tapering should be considered only when clinical improvement is documented; a steroid-tapering regimen was recently proposed by the PRINTO group [29], suggesting to gradually reach a prednisone dose of 1 mg/Kg/day by month 2, then the safer dose of 0.2 mg/Kg/day by month 6, to maintain this dose up to month 12, and then to halve the dose twice more until steroid suspension at month 24. Withdrawal of the disease-modifying drug should be considered once the patient has been in remission and off steroids for a minimum of 1 year [24].
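As a purely illustrative worked example (the body weight is hypothetical and not that of our patient), in a 35 kg child these milestones would correspond to roughly 35 mg/day of prednisone by month 2 (1 mg/Kg/day) and 7 mg/day by month 6 (0.2 mg/Kg/day), held until month 12 and then halved twice (to about 3.5 and 1.75 mg/day) before suspension at month 24.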
Overall mortality for JIIMs is approximately 4%; clinical subgroup (JCTM > JPM > JDM), weight loss and dysphagia at illness onset are predictors of mortality [30]. A monocyclic course, with medication suspension within 2 years, is reported in 25% of patients, with another 25% having a polyphasic course; anti-TIF1-γ antibody positivity and a severe illness onset carry a greater risk of a chronic course, observed in the remaining 50% of patients [31]. Most adult patients with CAM obtain remission after removal of the malignancy, but in some cases myositis recurs even without a relapse of the cancer, probably because of a self-perpetuating, although cancer-triggered, immune response [12].
Severe onset of a JIIM, especially if anti-TIF1-γ antibody positive, should prompt suspicion of CAM and lead to screening for malignancy. | 2021-07-02T13:18:39.269Z | 2021-07-01T00:00:00.000 | {
"year": 2021,
"sha1": "d13ae5a9807bc6def1fd253b463b52481f8162e7",
"oa_license": "CCBY",
"oa_url": "https://ijponline.biomedcentral.com/track/pdf/10.1186/s13052-021-01098-1",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "29814381af48fd6dbd505c0bcedf728d22d621f1",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
46266463 | pes2o/s2orc | v3-fos-license | Oversized semi-compliant balloon to dilate an “undilatable” stenosis
A 56 year old woman presented for elective percutaneous coronary intervention (PCI), following the recurrence of limiting angina two years after coronary bypass operation. In this short time, the saphenous vein graft to her right coronary artery had blocked, and the left internal mammary artery (LIMA) anastomosed to her left anterior descending artery (LAD) …
Reactive oxygen species (ROS) are highly reactive chemical species comprising both free radicals such as superoxide and non-radicals such as hydrogen peroxide. When the normal balance between ROS generation and antioxidant systems is perturbed, a state of oxidative stress is said to exist, which has traditionally been considered deleterious due to tissue oxidation and damage. Recently, however, ROS have been recognised to exert more subtle effects. Tightly regulated ROS production modulates intracellular signalling pathways ("redox signalling") and can induce highly specific changes in cell phenotype, especially in pathological settings. ROS also inactivate the signalling molecule nitric oxide (NO) and cause endothelial dysfunction, which may itself be a contributor to disease pathogenesis.
The following articles in this minisymposium address several topical aspects of the roles of ROS in cardiovascular disease. The overview article by Shah and Channon considers general mechanisms, effects, and relevance of redox signalling, and is followed by an article by Jin and Berk addressing recently identified novel redox signalling mechanisms. Kathy Griendling reviews the fascinating family of enzymes known as NADPH oxidases, which have recently been identified as major players in redox signalling in several cardiovascular disorders. Finally, Verhaar and colleagues discuss the potential pathogenic importance of superoxide production by NOS, the enzyme that normally generates NO but can switch to ROS production when the NOS co-factor tetrahydrobiopterin is deficient, for example in diabetic vasculopathy. We hope that this minisymposium will provide an up-to-date review of the important field of oxidative stress and redox signalling and its relevance to clinical cardiovascular disease.
Oversized semi-compliant balloon to dilate an "undilatable" stenosis
A 56 year old woman presented for elective percutaneous coronary intervention (PCI), following the recurrence of limiting angina two years after coronary bypass operation. In this short time, the saphenous vein graft to her right coronary artery had blocked, and the left internal mammary artery (LIMA) anastomosed to her left anterior descending artery (LAD) showed a long, severe, heavily calcified stenosis proximal to the anastomosis. As the native vessels were not amenable to PCI, a decision was made to tackle the LIMA-LAD lesion.
The lesion was crossed with a BMW guidewire and balloon pre-dilatation was attempted. However, multiple coronary balloons (diameters 2.0-3.0 mm) failed to crack the severely calcified stenosis (panel A). Two balloons ruptured (at inflation pressures of more than 20 atm), leading to localised coronary dissection. Use of a cutting balloon (Boston Scientific), which includes microsurgical blades to incise the calcified plaque, was attempted, but even high inflation pressures failed to impact the narrowing. One of the ruptured balloons (Boston Scientific Maverick 2.0 mm balloon, inflated at 22 atm) retained the circumferential shape of the stenosis even after removal and washing (panel B). Eventually, an oversized semi-compliant balloon (Medtronic Extensor 3.5 mm) succeeded in dilating the stenosis (panel C), which was then successfully treated with a drug-eluting stent.
The treatment of non-dilatable, severely calcified stenosis remains challenging. The use of rotational ablation techniques (for example, Rotablator) may solve the problem, but vessel tortuosity may limit their utility in distally located stenoses. In this setting, using an oversized semi-compliant or a non-compliant balloon, with careful step-by-step inflations, may be helpful by permitting an increase of radial dilation force. | 2018-04-03T06:19:31.170Z | 2004-04-14T00:00:00.000 | {
"year": 2004,
"sha1": "7e7c86b409eeb0e225e8466531ffb761ec9c1768",
"oa_license": null,
"oa_url": "https://heart.bmj.com/content/90/5/485.2.full.pdf",
"oa_status": "GOLD",
"pdf_src": "Highwire",
"pdf_hash": "c6d64725b820fa32c37ee99afa4f052efa5777bd",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |