Is There Student Anxiety in Physical Education Learning during the COVID-19 Pandemic in Indonesia? Anxiety is one of the psychological aspects that may have influenced physical education learning during the COVID-19 pandemic. The purpose of this article is to examine students' anxiety in physical education learning during the COVID-19 pandemic. The method used in writing this article is a literature study. The PICO method was used to formulate the research question and search for articles. The authors used Google Scholar to retrieve nationally accredited articles published between March 2020 and October 2021. Data collection followed the PRISMA method: a total of 52 reference sources from the Google Scholar database were screened down to 4 references for the final literature review. The results of the literature study show that there is student anxiety in physical education learning during the COVID-19 pandemic in Indonesia. Half of the reviewed studies reported student anxiety at a moderate level; the other half reported student anxiety at a high level. This literature review has implications for understanding the anxiety condition of students in the learning process during the COVID-19 pandemic. These findings serve as material for further study toward the creation of appropriate learning methods to address student anxiety in physical education learning during the COVID-19 pandemic. Optimization through the right methods will also maximize the benefits of physical education for health during the pandemic. Capel (1997), studying physical education student teachers in England and Wales, found that they experienced moderate levels of anxiety after each school experience, with significant associations between the amounts of anxiety experienced after the different school experiences. The greatest anxiety was caused by being observed, evaluated, and assessed, with the suggestion that anxiety caused by assessment is more important than anxiety caused by being observed. The least anxiety was caused by relationships with school staff and by teaching assignments. Factor analysis identified two factors common to all three questionnaires, evaluation anxiety and class control anxiety, but the two other factors on each of the three questionnaires were inconsistent across the three administrations. These results are discussed in the light of early school-based teacher education. Physical education teachers continue to help students to be mentally healthy, raise standards for a healthier life, and develop character from childhood to adulthood (Phytanza et al., 2018; Phytanza & Burhaein, 2019; Pramantik & Burhaein, 2019). Educators provide opportunities for children through to adulthood to develop skills, build self-confidence, and understand the value of mental and physical health (Phytanza, Burhaein, Indriawan, et al., 2022). However, this has been challenged by the global COVID-19 pandemic. The COVID-19 pandemic has had an unprecedented impact on all aspects of human life (Burhaein, Ibrahim, et al., 2020; Burhaein, Demirci, et al., 2021; Phytanza & Burhaein, 2020). The pandemic has affected many social and economic sectors and has displaced millions of people around the world.
Schools and the education sector are among the sectors affected by the COVID-19 pandemic: students study independently at home and teachers are not present at school because of the pandemic (Irawan & Prayoto, 2021; Mahmood, 2021; Sibarani & Manurung, 2021; Widodo & Zainul, 2021). Following the guidance of the United Nations Educational, Scientific, and Cultural Organization (UNESCO), millions of schools and universities moved their classrooms online to ensure that student learning would not be interrupted during at-home isolation (Burhaein, Tarigan, Budiana, Hendrayana, Phytanza, et al., 2021; P. Purwanto, Lumintuarso, et al., 2021; S. Purwanto & Burhaein, 2021). The sudden and forced shift to online teaching has a direct impact on teaching pedagogy in all subjects, especially in physical education and sports teaching methods, whose implementation is based on practical lessons (Mumpuniarti et al., 2021; Nanda et al., 2021). Based on the problems above, there is urgency around student anxiety in physical education learning. The purpose of this review article is to describe students' anxiety in physical education learning during the COVID-19 pandemic.

METHODS

The authors used Google Scholar to collect data in the form of national articles. The publication dates of the articles were restricted to 2020 to 2021. The PICO method was used to search for articles and to formulate the research question, as written in Table 1. With the PICO method, questions can be formulated and articles searched with specific keywords, so that the results are more specific and can answer the purpose of this article; for example, entering the keywords (students) (COVID-19 pandemic) (physical education learning) into the journal search engine returns a set of journals matching the search. After conducting the journal search with the PICO method, the next step was data extraction by the authors. In carrying out this method, the authors applied the inclusion and exclusion criteria presented in Table 2, so that the results became increasingly specific; articles judged by the authors not to meet the criteria were excluded.
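As a minimal sketch of how such a PICO-based search string might be assembled (the component keywords below are illustrative assumptions rather than the authors' exact search terms):

# Illustrative sketch: assembling a search string from PICO components as
# described above. The keyword choices are assumptions for demonstration only.
pico = {
    "Population":   "students",
    "Intervention": "physical education learning",
    "Comparison":   None,  # no comparator in this review question
    "Outcome":      "anxiety",
    "Context":      "COVID-19 pandemic",
}

# Combine the non-empty components into a Google Scholar style query.
query = " AND ".join(f'"{term}"' for term in pico.values() if term)
print(query)
# "students" AND "physical education learning" AND "anxiety" AND "COVID-19 pandemic"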
A total of 138 articles were obtained from the search; after removing duplicates, 127 articles remained. A further 57 articles were excluded because the treatment given was not relevant to the review focus, leaving 70 full-text articles. Furthermore, 20 articles were excluded because they did not fit the inclusion criteria, leaving 50 articles; after deeper examination of the abstracts, methods, and other sections, another 34 articles were excluded, leaving 16 articles to be analyzed with qualitative methods. Data extraction and identification strategies used the PRISMA flow diagram (Ekelund et al., 2019; Moher et al., 2009; Tricco et al., 2018).

RESULTS AND DISCUSSIONS

Based on the analysis of articles obtained from Google Scholar, 4 articles were selected (of the records screened, 30 were Sinta-indexed and 22 non-Sinta-indexed; 12 duplicates were removed) and reviewed; these used quantitative descriptive methodology in physical education learning. The results of the research are summarized in Table 3. Based on the results in the table, we can conclude that there is student anxiety in physical education learning during the COVID-19 pandemic in Indonesia. This points to the importance of addressing students' anxiety levels. When optimized, learning has benefits for psychological health, especially regarding student anxiety. The physical benefits of exercise include improving physical condition and fighting diseases and viruses, including COVID-19, and doctors always recommend staying physically active. Exercise is also considered important for maintaining mental fitness and can reduce stress. Studies show that it is very effective at reducing fatigue, improving alertness and concentration, and improving overall cognitive function. With limited space and time in the midst of the COVID-19 pandemic, students must also continue to do physical activity in order to maintain and improve their physical fitness, including managing their anxiety levels (Jannah et al., 2021; Phytanza, Mumpuniarti, Burhaein, Lourenço, et al., 2021; Muhammad Saleh & Septiadi, 2021). This can be especially helpful when stress has depleted your energy or ability to concentrate. When stress affects the brain, with its many neural connections, other parts of the body also feel the impact. So it makes sense that if your body feels better, so does your mind. Exercise and other physical activity produce endorphins, chemicals in the brain that act as natural pain relievers, and also improve the ability to sleep, which in turn reduces stress (Melzer et al., 2004; Phytanza, Burhaein, & Pavlovic, 2021; P. Purwanto, Nopembri, et al., 2021). Meditation, acupuncture, massage therapy, and even taking deep breaths can cause the body to produce endorphins (Phytanza, Mumpuniarti, Burhaein, Demirci, et al., 2021). Low to moderate intensity exercise makes the body feel energized and healthy (Burhaein, 2020; Pramantik, 2021).

CONCLUSIONS

Half of the reviewed studies showed student anxiety at a moderate level; the other half showed student anxiety at a high level. This literature review has implications for understanding the anxiety condition of students in the learning process during the COVID-19 pandemic. These findings serve as material for further study toward the creation of appropriate learning methods to address student anxiety in physical education learning during the COVID-19 pandemic.
Optimization through the right methods will also maximize the benefits of physical education for health during the pandemic.
The potential of miR-153 as aggressive prostate cancer biomarker

Introduction: Prostate cancer (PC) is one of the most frequently diagnosed cancers in males. MiR-153, a member of the microRNA (miRNA) family, plays an important role in PC. This study aims to explore the expression and possible molecular mechanisms of miR-153 action. Methods: Formalin-fixed paraffin-embedded (FFPE) tissues were collected from prostatectomy specimens of 29 metastatic and 32 initial-stage PC patients. Expression levels of miR-153 were measured using real-time reverse transcription polymerase chain reaction (qRT-PCR). The 2^(-ΔΔCt) method was used for quantitative gene expression assessment. Candidate target genes of miR-153 were predicted by TargetScan. Mutations in target genes of miR-153 were identified using exome sequencing. Protein-protein interaction (PPI) network and Kyoto Encyclopedia of Genes and Genomes (KEGG) analyses were performed to investigate the potential molecular mechanisms of miR-153 in PC. Results: MiR-153 was significantly up-regulated in PC tissues compared to non-cancerous tissues. The analysis of correlation between the expression level of miR-153 and clinicopathological factors revealed a statistically significant correlation with the stage of the tumor process according to the tumor, node, metastasis (TNM) staging system (p = 0.0256). ROC curve analysis was used to evaluate the predictive ability of miR-153 for metastasis development and revealed miR-153 as a potential prognostic marker (AUC = 0.85; 95% CI 0.75-0.95; sensitivity = 0.72, specificity = 0.86). According to the logistic regression model, high expression of miR-153 increased the risk of metastasis development (odds ratio = 3.14, 95% CI 1.62-8.49; p = 0.006). Whole exome sequencing revealed nonsynonymous somatic mutations in the collagen type IV alpha 1 (COL4A1), collagen type IV alpha 3 (COL4A3), forkhead box protein O1 (FOXO1), 2-hydroxyacyl-CoA lyase 1 (HACL1), hypoxia-inducible factor 1-alpha (HIF-1A), and nidogen 2 (NID2) genes. Moreover, KEGG analysis revealed that the extracellular matrix-receptor (ECM-receptor) interaction pathway is mainly involved in PC. Conclusion: MiR-153 is up-regulated in PC tissues and may play an important role in aggressive PC by targeting potential target genes.

Introduction

Prostate cancer (PC) is a commonly diagnosed condition, with an estimated 1,414.3 thousand new cases of PC and 375.3 thousand deaths from the disease in 2020 [1]. In one third of PC patients the tumor progresses (perineural and stromal invasion) after initial regression in response to androgen deprivation therapy [2]. Despite current advances in surgical, chemotherapeutic and radiological methods of treatment, the five-year survival rate in castration-resistant patients is approximately 31.0% [3]. Most PC-related deaths occur due to the inability of existing treatments to prevent tumor spread [4]. Rectal examination and prostate-specific antigen (PSA) levels are used to diagnose PC. However, the medical and scientific communities have questioned routine PSA testing. PSA has high sensitivity but very low specificity for PC. It can be elevated in the presence of benign prostate disease, infection, inflammation or benign hyperplasia [5]. Routine PSA testing leads to a high percentage of false-positive results. Moreover, PSA levels correlate poorly with the stage of the disease, leading to misdiagnoses and overtreatment of indolent forms of PC [6].
Considering all the above, there is a need for new molecular genetic markers capable of both detecting the disease at the earliest stage and predicting its course. MicroRNAs (miRNAs) are considered to be among the most promising markers for various diseases, including PC. The discovery of miRNAs provided a conceptual breakthrough in cancer research. MiRNAs are non-coding RNAs (ncRNAs) of 19-22 nucleotides which are involved in post-transcriptional regulation of gene expression. MiRNAs are increasingly associated with the initiation, development and progression of malignancies. Aberrant miRNA expression has been identified in a variety of malignant tumors, with recent evidence suggesting that miRNAs function as tumor suppressor genes and/or oncogenes [7]. Recent studies have reported various miRNA expression profiles in PC tissues [8-11]. MiR-153 was found to be downregulated in various cancers, such as breast cancer (BC), gastric cancer (GC) and oral cancer (OC) [15]. However, there are few investigations devoted to miR-153 and its role in PC [12-15]. In this study, we evaluated the expression levels of miR-153 in PC tissue specimens and adjacent normal tissue specimens, the association of miR-153 with clinical characteristics and mutations in target genes of miR-153, and identified the pathways involved in PC progression. Our study aimed to evaluate and understand the expression level of miR-153 in malignant tumor and normal prostate tissue in patients with metastatic PC and initial stages of the disease, and the possible molecular mechanisms of miR-153.

Patients and samples collection

In this study, 61 PC patients (average age 60, range 41-80 years) who underwent surgical treatment between 2008 and 2020 in Bashkir State Medical University hospital were included. Ethical approval for this study was obtained from the Institute of Biochemistry and Genetics Bioethics Committee. All samples investigated in this study were obtained with written informed consent of the participants. We collected formalin-fixed paraffin-embedded (FFPE) tissues from prostatectomy material, including 29 metastatic and 32 localized (stages I, II) PC patients. The samples were classified using the tumor, node, metastasis (TNM) staging system from clinical stages I-IV.

Samples evaluation

After tissue sections were obtained, Hematoxylin and Eosin (H&E) staining was performed and the slides were examined by two independent experienced pathologists. DNA and RNA were isolated from tumor regions and normal prostate tissue from each patient. A healthy region of the prostate without tumor cells in the selected FFPE block was taken as the source of normal RNA and DNA. Total RNA and DNA extraction was performed using Quick-DNA/RNA FFPE Kits (Zymo Research) following the manufacturer's protocol. The process consisted of simple tissue deparaffinization in deparaffinization solution and proteinase K treatment, followed by RNA and DNA isolation in spin columns.

Real-time reverse transcription polymerase chain reaction (qRT-PCR)

The PCR amplification for the quantification of miR-153 and U6 RNA was performed using a TaqMan MicroRNA Reverse Transcription Kit (Applied Biosystems; Life Technologies Corp.) and a TaqMan Human MicroRNA Assay Kit (Applied Biosystems; Life Technologies Corp.). Expression levels were measured using a CFX96 PCR detection system (Bio-Rad). All reactions were performed three times for each sample. The 2^(-ΔΔCt) method was used for quantitative gene expression assessment. The 2^(-ΔΔCt) method is based on the assumption that the cycle threshold difference (ΔCt) between the target gene and the reference gene is proportional to relative target gene expression. The relative expression of miR-153 was expressed as fold difference relative to U6 RNA.
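As a minimal sketch of the 2^(-ΔΔCt) calculation described above (the Ct values are invented for illustration and are not data from this study):

import numpy as np

# Minimal sketch of the 2^(-ΔΔCt) calculation. Ct values are made up.
ct_target_tumor  = np.array([24.1, 23.8, 24.0])  # miR-153 Ct, tumor replicates
ct_ref_tumor     = np.array([18.0, 17.9, 18.1])  # U6 Ct, tumor replicates
ct_target_normal = np.array([26.5, 26.7, 26.4])  # miR-153 Ct, normal tissue
ct_ref_normal    = np.array([18.2, 18.0, 18.1])  # U6 Ct, normal tissue

delta_ct_tumor  = ct_target_tumor.mean()  - ct_ref_tumor.mean()   # ΔCt (tumor)
delta_ct_normal = ct_target_normal.mean() - ct_ref_normal.mean()  # ΔCt (normal)
ddct = delta_ct_tumor - delta_ct_normal                           # ΔΔCt
fold_change = 2 ** (-ddct)   # relative expression of miR-153 vs U6
print(f"ΔΔCt = {ddct:.2f}, fold change = {fold_change:.2f}")

A negative ΔΔCt yields a fold change greater than 1, i.e. up-regulation of the target in tumor relative to normal tissue, which is the pattern reported for miR-153 here.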
Whole exome sequencing

Patients with metastatic adenocarcinoma were chosen for exome sequencing to identify mutations in target genes of miR-153. Target genes were selected using TargetScan. DNA fragmentation, library preparation, and exome capture were conducted according to the manufacturer's recommendations. Selection of specific DNA fragments was conducted using the SureSelect system, followed by concurrent sequencing of the obtained libraries on an Illumina HiSeq 2000. All reads were aligned to the reference genome using Burrows-Wheeler Alignment (BWA) software. We used the human genome sequence (Genome Reference Consortium Human Build 37 (GRCh37/hg19)) as a reference. Identification of variants was conducted using the Genome Analysis Tool Kit (GATK). The identified variants were annotated with ANNOVAR using the scripts table_annovar.pl and annotate_variation.pl, which allow single nucleotide substitutions to be compared against a number of specialized databases and the prognostic functional significance of the revealed alterations to be annotated using the in silico programs SIFT, PolyPhen-2, LRT, Mutation Assessor, MutationTaster, phyloP, and GERP++ from dbNSFP v.3.0a. Additionally, we used the CLINVAR and CADD (Combined Annotation Dependent Depletion) tools. The exome sequencing procedure and bioinformatics were performed as previously described by Gilyazova et al. [16].

Network analysis

The network of mutated genes was generated using STRING-DB 9.1 [17]. The minimum required interaction score was set to "high confidence" (0.700). The Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway enrichment analysis was performed using the online KEGG tools (http://www.kegg.jp/).

Statistical analysis

The obtained data were analyzed in RStudio by calculating average values, standard deviations, and the arithmetic mean error. The data are presented as means ± standard deviation. To assess the significance of differences, the Mann-Whitney U test was used. The predictive ability of miR-153 was determined by receiver operating characteristic (ROC) curves. A logistic regression prediction model was applied to calculate prediction scores for individual samples. Changes were considered statistically significant at p ≤ 0.05.

MiR-153 expression and clinicopathological factors

Patients were divided into subgroups with high (n = 31) and low (n = 30) expression of miR-153, with the median miR-153 expression value defining the threshold. The analysis of correlation between miR-153 expression and clinicopathological factors (Table 1) revealed a significant correlation with the stage of the tumor process according to the TNM classification (p = 0.001). Expression levels of miR-153 in different TNM stages of PC are presented in Fig. 4. MiR-153 expression was significantly higher in metastatic PC tumors (mean ± SEM: 2.29 ± 0.41) than in non-metastatic tumors (mean ± SEM: 0.53 ± 0.12), with p < 0.0001.

ROC curve of miR-153

In addition, the receiver operating characteristic (ROC) curve was drawn to calculate the area under the curve (AUC) and assess the diagnostic ability of miR-153. The results revealed that miR-153 expression presented low diagnostic ability for discriminating tumor from normal prostate tissue. As shown in the ROC curve of miR-153 presented in Fig. 5, the AUC was 0.61 (95% CI 0.52-0.71; sensitivity = 0.66, specificity = 0.53). Furthermore, logistic regression analysis revealed that miR-153 was an independent predictor for PC (odds ratio = 1.57, 95% CI 1.13-2.36; p = 0.014). ROC curve analysis was then used to evaluate the predictive ability of miR-153 for metastasis development. As shown in Fig. 6, the AUC was 0.85 (95% CI 0.75-0.95; sensitivity = 0.72, specificity = 0.86), suggesting miR-153 as a potential prognostic marker. According to the logistic regression model, high expression of miR-153 increased the risk of metastasis development (odds ratio = 3.14, 95% CI 1.62-8.49; p-value = 0.006).
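The ROC and logistic regression analysis can be sketched as follows; the expression values below are synthetic stand-ins for the study data, with the group sizes (29 metastatic, 32 non-metastatic) taken from the text:

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)

# Synthetic stand-in data: miR-153 relative expression for 29 metastatic and
# 32 non-metastatic patients (values invented for illustration only).
expr = np.concatenate([rng.normal(2.3, 0.9, 29),    # metastatic
                       rng.normal(0.5, 0.4, 32)])   # non-metastatic
meta = np.concatenate([np.ones(29), np.zeros(32)])  # 1 = metastasis

# Predictive ability of expression for metastasis, as in the ROC analysis above.
auc = roc_auc_score(meta, expr)
fpr, tpr, thresholds = roc_curve(meta, expr)

# Logistic regression of metastasis on expression; the fitted odds ratio per
# unit of expression plays the role of the reported OR.
model = LogisticRegression().fit(expr.reshape(-1, 1), meta)
odds_ratio = np.exp(model.coef_[0][0])
print(f"AUC = {auc:.2f}, odds ratio per unit expression = {odds_ratio:.2f}")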
(Table 2. Pathogenic mutations identified in target genes for miR-153 using exome sequencing.) [...] phosphatase and tensin homolog deleted on chromosome 10 (PTEN) gene. None of the pathogenic mutations in miR-153 target genes was seen in all patients. This may be because shared somatic mutations were not the cause of metastatic PC. The most deleterious germline and somatic mutations are summarized in Table 2.

Protein-protein interaction (PPI) network and hub gene analysis

A PPI network consisting of 75 nodes and 23 edges (PPI enrichment value 0.0568, average local clustering coefficient 0.253) was constructed from 23 proteins. Gene Ontology (GO) analysis showed that overlapping miR-153 target genes were mainly enriched in collagen type IV trimer, basement membrane, extracellular matrix and cytoplasm. Regarding molecular function classification, miR-153 target genes were enriched in the following functions: transforming growth factor beta binding, ATPase-coupled intramembrane lipid transporter activity, nucleoside monophosphate kinase activity, collagen binding, and ATPase activity coupled to movement of substances. The extracellular matrix-receptor (ECM-receptor) pathway was the most prominent enrichment pathway in KEGG (FDR = 0.03) (Supplementary Table S1). The STRING analysis of the protein-protein interaction network of metastatic PC is presented in Fig. 7.
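As a toy sketch of how such network summary statistics are computed (the study used STRING itself; the edges below are placeholders, not the actual STRING output, although the gene names are the mutated targets reported above):

import networkx as nx

# Toy reconstruction of a PPI network summary. Edges are placeholders.
edges = [("COL4A1", "COL4A3"), ("COL4A1", "NID2"),
         ("COL4A3", "NID2"), ("FOXO1", "HIF1A")]
nodes = ["COL4A1", "COL4A3", "NID2", "FOXO1", "HIF1A", "HACL1"]  # HACL1 isolated

g = nx.Graph()
g.add_nodes_from(nodes)
g.add_edges_from(edges)

# Average local clustering coefficient, the statistic quoted for the network.
print(f"nodes = {g.number_of_nodes()}, edges = {g.number_of_edges()}")
print(f"average local clustering coefficient = {nx.average_clustering(g):.3f}")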
Discussion

Patients who have undergone a radical prostatectomy remain at risk of biochemical recurrence after surgery, even though they have increased survival rates [18]. PC prognostic biomarkers are important to facilitate optimization of existing treatment strategies. Recently, it has been shown that miRNAs are intimately related to the development of several different types of cancer and may be useful in determining therapeutic targets for effective treatment strategies in a variety of cancers [19,20]. Xie et al. showed that miR-520a inhibits non-small cell lung cancer (NSCLC) progression through suppression of ribonucleoside-diphosphate reductase subunit M2 (RRM2) and Wnt signaling pathways [21]. Dong et al. showed that miR-369 expression is reduced in hepatocellular carcinoma (HCC) tissues [22]. MiR-516a-3p expression is suppressed in PC tissue, and loss of miR-516a-3p expression promotes PC progression through targeting of ATP binding cassette subfamily C member 5 (ABCC5) [23]. In 2020, Wang et al. presented evidence that miR-1231 expression is decreased in PC tissues and cell lines and that reduced expression of this miRNA was significantly associated with the presence of lymph node metastasis (LNM), TNM stage, and clinical stage [24]. This research suggests that miRNAs play an important role in the process of cancer progression. Previous studies have shown that miR-153 is aberrantly expressed in several common cancers. Zhang et al. performed miRNA profiling and showed that miR-153 is overexpressed in tissues of advanced colorectal cancer (CC); the upregulation was further noted in primary CC compared with normal colorectal epithelium [25]. Further studies showed that miR-153 plays a role in promoting colorectal malignancy progression through enhancement of invasion and chemotherapy resistance. Hua et al. showed miR-153 to be activated in HCC cells, with a correlation between increased expression and poor outcome [26]. In our study we compared expression levels of miR-153 in tumor and normal prostate tissue in patients with metastatic PC and initial stages of the disease, and identified specific pathogenic mutations in miR-153 gene targets in PC patients. We showed that miR-153 expression is significantly increased in PC tissue compared to normal prostate tissue. Moreover, high expression of miR-153 was found to be significantly associated with TNM stage. Our results suggest that it may act as an oncogene in PC and may be involved in the development of PC. MiR-153 overexpression is able to stimulate the transcriptional activity of β-catenin, which leads to cell cycle progression, activation of proliferation and HCC cell colony formation. However, decreased expression of miR-153 is observed in some other cancers. Interestingly, Zhao et al. showed that miR-153 expression is suppressed in glioma cells compared to normal glial cells [27]. The authors showed that miR-153 suppresses cell invasion by regulating the expression of the snail family transcriptional repressor 1 (SNAI1), which is a target of miR-153. Guo et al. presented findings showing that miR-153 is significantly overexpressed in patients with nasopharyngeal carcinoma (NPC); moreover, this miRNA affects NPC progression through the transforming growth factor-beta 2 (TGF-β2)/Smad2 signaling pathway [28]. Wang et al. showed that miR-153 was overexpressed in BC tissue samples and MDA-MB-231 cells [29]. Our study revealed that miR-153 is indeed overexpressed in PC tissues. Our results are consistent with Wu et al., who identified miR-153 as overexpressed in PC and showed that miR-153 plays a crucial role in the increased proliferation of human PC cells via miRNA-mediated suppression of PTEN expression in PC cells [8]. So far, there are very limited data on the clinical significance of miR-153 and its role in PC. Bi et al. found that high miR-153 expression in PC tissues closely correlated with adverse clinical manifestations [11]. They presented data suggesting that PC patients with high levels of miR-153 expression had a lower five-year survival rate when compared with patients with low miR-153 expression levels. Importantly, the authors used multivariate Cox regression analysis to show that miR-153 expression is an independent factor in predicting 5-year overall outcome in patients with PC. Thus, based on the data of the present study and the results of previous publications, we can assume that miR-153 may serve as an accessible biomarker for PC prognosis. In our study we built upon existing reports of miR-153 significance by identifying specific somatic mutations in miR-153 target genes (Table 2). One of the mutated genes in a metastatic PC patient is the COL4A1 gene. COL4A1 is involved in epithelial-mesenchymal transformation (EMT).
In previous studies it was found that, depending on the age of PC patients and Gleason score, altered expression of COL4A1 together with 7 other genes may be an EMT marker among PC patients [30]. Mapelli et al. generated a 10-gene predictive classifier which showed that COL4A1, a low-luminal marker, supports the association of an attenuated luminal phenotype with metastatic disease [31]. We also found mutations in the FOXO1 gene in metastatic PC patients. The forkhead box O (FOXO) family has a common conserved DNA-binding "fork-box" domain and in mammals consists of four members: FOXO1, forkhead box class O 3a (FOXO3a), forkhead box class O 4 (FOXO4), and forkhead box class O 6 (FOXO6). (Fig. 7. The miR-153-target regulatory interaction network; PPI analysis showed that overlapping miR-153 target genes were mainly enriched in collagen type IV trimer, basement membrane, extracellular matrix and cytoplasm.) All FOXO factors are involved in a wide range of biological processes, including cell cycle arrest, apoptosis, DNA repair, glucose metabolism, resistance to oxidative stress and longevity [32]. The biological activity of FOXO factors mainly depends on post-translational modification by phosphorylation, acetylation or ubiquitination, thereby determining their intracellular transport [33]. Dong et al. demonstrated that FOXO1A inhibits androgen receptor (AR)-mediated gene regulation and cell proliferation in PC [34]. Another mutated gene in PC cells, minichromosome maintenance complex component 4 (MCM4), belongs to the minichromosomal maintenance (MCM) protein complex, which consists of six highly conserved proteins (MCM2-7) that collectively interact to promote DNA replication and DNA unwinding through their replicative helicase activity [35]. Cancers arising in different anatomic sites are also associated with overexpression of minichromosome maintenance complex component 2 (MCM2), MCM4, and minichromosome maintenance complex component 6 (MCM6), but there is not much information about the role of MCM4 in PC [35-37]. It is known that PSA may mediate MCM4 to promote the initiation and progression of PC, and it has been confirmed that PSA knockdown induces upregulation of MCM4 [38]. Another important tumor microenvironment component is the hypoxia-inducible factor (HIF) pathway. HIF1A was also mutated in a metastatic PC patient. There are some articles devoted to the analysis of mutations in HIF1A and their role in PC, although ultimately its role in PC remains unknown [39,40]. KEGG analysis showed that a key signaling pathway in metastatic PC is extracellular matrix (ECM)-receptor interaction signaling. The ECM is a non-cellular component of the tumor stroma. The ECM represents a complex network of macromolecules which undergo extensive reconstruction during tumor progression. Such remodeling of the extracellular matrix during cancer progression causes changes in its density and composition [41]. Damage to the ECM structure leads to reactive growth of tumor cells due to switching of intracellular signaling processes and cell cycle changes [42]. Proliferation increases, normal tissue architecture is lost, and local migration of tumor cells and invasion into the surrounding stromal tissue occur. One of the main factors determining the degree of tumor malignancy is the process of EMT, characterized as the loss of the epithelial phenotype by epithelial cells and the acquisition of a mesenchymal phenotype associated with the ability to migrate through the basal membrane.
EMT is accompanied by the loss of the cell adhesion molecule E-cadherin and of cytokeratins, and by increased N-cadherin, fibronectin, and vimentin [43]. ECM proteins provide biochemical signals that induce EMT. In turn, EMT becomes an inducer of metastasis, triggering various transcription factors. Thus, epithelial-mesenchymal transformation plays an important role in tumor progression and metastasis, involving various transcription factors (TFs) and signaling processes [43,44].

Conclusions

Our results show that high miR-153 expression is associated with increased TNM stage in PC patients. Several potential PC-related genes and pathways were identified in the study, which will improve our understanding of the molecular mechanisms that support prostate cancer progression and development. A key signaling pathway, the ECM-receptor interaction signaling pathway, was identified as possibly involved in the development of PC. Further investigations are needed to perform in-depth analysis of gene ontology, browser tracks, and expression levels of miR-153 targets in metastatic PC patients.

Compliance with ethical standards

Ethical approval for this study was obtained from the Institute of Biochemistry and Genetics Bioethics Committee. The study was carried out in accordance with the Declaration of Helsinki and local guidelines.

Informed consent

All samples investigated in this study were obtained with written informed consent of the participants.

Availability of data and material

All supporting data and materials are available from the corresponding author upon reasonable request.

Authors' contributions

Irina Gilyazova: conceptualization, writing (original draft), and project administration. Elizaveta Ivanova: writing (review and editing), investigation. Mikhail Sinelnikov: formal analysis, writing (review and editing), methodology. Ilgiz Gareev: resources and data curation. Aferin Beilerli, Ludmila Mikhaleva, and Yanchao Liang: validation and data curation. Valentin Pavlov and Elza Khusnutdinova: validation, visualization, supervision and funding acquisition. All authors have read and agreed to the published version of the manuscript.

Declaration of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Giant nonlinear Hall effect in twisted bilayer WTe2

In a system with broken inversion symmetry, a second-order nonlinear Hall effect can survive even in the presence of time-reversal symmetry. In this work, we show that a giant nonlinear Hall effect can exist in the twisted bilayer WTe2 system. The Berry curvature dipole of twisted bilayer WTe2 (θ = 29.4°) can reach up to ~1400 Å, which is much larger than that in previously reported nonlinear Hall systems. In the twisted bilayer WTe2 system there exist abundant band anticrossings and band inversions around the Fermi level, which produce a complicated distribution of Berry curvature and lead to nonlinear Hall signals that exhibit dramatically oscillating behavior. Its large amplitude and high tunability indicate that twisted bilayer WTe2 can be an excellent platform for studying the nonlinear Hall effect.

INTRODUCTION

Over the last few decades, the study of various Hall effects has become an important topic in condensed-matter physics. For the Hall effect or the anomalous Hall effect, it is well known that an external magnetic field or magnetic dopants are necessary to break time-reversal symmetry [1]. However, a recent theoretical study predicted that a second-order nonlinear Hall effect can exist in materials with time-reversal symmetry but without inversion symmetry [2]. Subsequently, experimental studies on bilayer and multilayer WTe2 validated this prediction through the observation of a transverse nonlinear Hall-like current with a quadratic current-voltage characteristic [3,4]. The nonlinear Hall effect arises from the non-vanishing dipole moment of the Berry curvature in momentum space, i.e. the Berry curvature dipole [2]. Theoretical predictions of the nonlinear Hall effect in a material can be made by calculating the Berry curvature dipole from the band structure. Multiple materials [5-17] have been predicted or validated to possess a strong nonlinear Hall effect, such as Weyl semimetals [5-8], the giant Rashba material bismuth tellurium iodide (BiTeI) under pressure [10], monolayer WTe2 and MoTe2 under an external electric field [11], and strained twisted bilayer graphene [12] or WSe2 [13,14]. Experimental observations of the nonlinear Hall effect are mainly limited to two-dimensional systems [3,4,14-16]. It is well known that the twist angle between adjacent layers can be used as a new degree of freedom to modulate the electronic structure of a two-dimensional system, which has attracted great attention since the recent discovery of unconventional superconductivity and correlated states in twisted bilayer graphene [18,19] and twisted bilayers of transition metal dichalcogenides [20]. Very recently, it was shown that strained twisted bilayer graphene [12] and WSe2 [13,14] present a large nonlinear Hall response. Since experimental results demonstrate a significant nonlinear Hall effect in bilayer WTe2 [3], it is interesting to know how twisting regulates the nonlinear Hall effect in this system. In this work, we study the nonlinear Hall effect in twisted bilayer WTe2 by using first-principles calculations combined with a semiclassical approach. Multiple twisted bilayer WTe2 structures are constructed, with twist angles ranging from 12° to 73°. It is found that twisted bilayer WTe2 has a more complicated band structure than the perfect bilayer system.
We choose the twisted bilayer WTe2 with twist angle θ = 29.4° as a typical system, and show that the Berry curvature dipole of twisted bilayer WTe2 can be strongly enhanced.

Twisted bilayer WTe2

Bilayer WTe2 has an orthorhombic lattice; the optimized lattice constants are calculated to be a1 = 3.447 Å and a2 = 6.284 Å. We construct a series of twisted bilayer WTe2 structures based on the method described in Ref. [21]. For simplicity, we start from the normal stacking of the perfect bilayer WTe2. As shown in Fig. 1, to construct the twisted bilayer WTe2, a supercell lattice is built as the bottom layer, where b1 and b2 are superlattice basis vectors generated from the primitive basis vectors a1 and a2. Correspondingly, the supercell basis vectors of the top layer, indicated by b1′ and b2′ in Fig. 1, share a mirror plane Mx with the bottom layer. Let b1 = m·a1 + n·a2 and b2 = p·a1 + q·a2, where m, n, p and q are integers; the twist angle then satisfies θ = 2 arctan(m|a1| / (n|a2|)). The twisted bilayer structure is formed by rotating the top layer around the origin by the angle θ and translating the top layer by the vector b2, while the bottom layer remains fixed. To ensure that the twisted bilayer structure is commensurate, b1 must be perpendicular to b2. However, this condition is hard to satisfy exactly because the lattice constants do not take special commensurate values. Approximately, we use a looser condition: for example, an angle between b1 and b2 ranging from 89° to 91° is acceptable. It is important to note that the twist angle changes slightly after structural relaxation. More details on the construction of twisted bilayers for orthorhombic lattices can be found in Ref. [21]. Table 1 gives the optimized twist angle, formation energy and average interlayer distance for each constructed twisted bilayer WTe2. Despite the twist angles ranging from 12° to 73°, these systems share very similar formation energies and average interlayer distances. It is noted that the interlayer spacing in twisted bilayer WTe2 is corrugated, and the average interlayer distance is slightly larger than that in the perfect bilayer system [1] (2.888 Å), suggesting a weakening of the interlayer coupling. Besides, the relatively small formation energy means that twisted bilayer WTe2 may form readily. Compared to twisted bilayer graphene, twisted bilayer WTe2 exhibits much more complicated Moiré patterns, since one monolayer of WTe2 consists of one layer of W atoms and two layers of Te atoms. Figure 2a, b depicts the top and side views of the optimized superlattice for twisted bilayer WTe2 with twist angle θ = 29.4° (structures with other twist angles are given in Supplementary Fig. 1). Such a complicated Moiré superlattice leads to an intricate electronic structure in twisted bilayer WTe2. Taking the system with twist angle θ = 29.4° as an example, there exist extensive band anticrossings and band inversions around the Fermi level, as shown in Fig. 2c. With spin-orbit coupling (SOC) considered, the band structure becomes even more intricate, as can be seen in Fig. 2d. Similarly, the systems with other twist angles also exhibit complicated band structures around the Fermi level, as shown in Supplementary Fig. 2. In our calculations, the constructed twisted bilayer WTe2 systems are gapless. However, it has been found experimentally that there is a small band gap in twisted bilayer WTe2 for some twist angles [22].
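A minimal sketch of this commensuration bookkeeping, using the stated lattice constants; the integer choices (m, n, p, q) below are hypothetical values picked for illustration, not the ones used in the paper:

import math

# Twist-angle and commensuration check for an orthorhombic bilayer, using the
# relaxed lattice constants a1 = 3.447 Å and a2 = 6.284 Å quoted above.
a1, a2 = 3.447, 6.284

def twist_angle(m, n):
    """theta = 2*arctan(m*|a1| / (n*|a2|)) for b1 = m*a1 + n*a2."""
    return math.degrees(2 * math.atan(m * a1 / (n * a2)))

def b_vectors(m, n, p, q):
    """Superlattice vectors b1 = m a1 + n a2, b2 = p a1 + q a2 (Cartesian)."""
    return (m * a1, n * a2), (p * a1, q * a2)

def angle_between(u, v):
    dot = u[0] * v[0] + u[1] * v[1]
    return math.degrees(math.acos(dot / (math.hypot(*u) * math.hypot(*v))))

m, n, p, q = 1, 2, -7, 1   # hypothetical integers
b1, b2 = b_vectors(m, n, p, q)
theta = twist_angle(m, n)
ang = angle_between(b1, b2)
print(f"theta = {theta:.1f} deg, angle(b1, b2) = {ang:.1f} deg")
print("commensurate (loose 89-91 deg criterion)?", 89.0 <= ang <= 91.0)

With these integers the unrelaxed angle comes out near 30.7°, in the ballpark of the 29.4° quoted after relaxation, and the b1-b2 angle passes the loose 89° to 91° criterion described above.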
This inconsistency may be due to the PBE functional underestimating the band gap in the band structure calculation. Besides, the difference in twist angles between calculation and experiment may be another reason for this contradiction.

Giant nonlinear Hall effect

Multiple bands that cross or nearly cross in momentum space may produce large gradients of the Berry curvature around the band edges and result in a strong nonlinear Hall response [2]. Next we focus on the twisted bilayer WTe2 with twist angle θ = 29.4°, which has the smallest number of atoms in a superlattice, to calculate its Berry curvature dipole and estimate the nonlinear Hall effect. The Berry curvature dipole components D_xz and D_yz of twisted bilayer WTe2 (θ = 29.4°) as a function of the chemical potential are shown in Fig. 3a. For a two-dimensional system, the Berry curvature dipole has units of length. It is noted that in the perfect bilayer WTe2 system, due to the presence of the M_y mirror plane, only the y component of the Berry curvature dipole is nonzero. However, the introduction of twist breaks this symmetry, so the x component of the Berry curvature dipole is also nonzero in twisted bilayer WTe2. It is clear that D_xz and D_yz exhibit drastic oscillating behavior near the Fermi level, and they can switch their signs dramatically within a very narrow energy window. On the other hand, we find that the magnitude of the Berry curvature dipole in our twisted bilayer WTe2 is much larger than in previous reports [3,4,11-16]. For example, the peak of D_yz located near the Fermi level is calculated to be ~1400 Å. As a comparison, the Berry curvature dipole is estimated to be of the order of 10 Å in monolayer or bilayer WTe2 [3,11], ~25 Å in strained twisted bilayer WSe2 [14], ~200 Å in strained twisted bilayer graphene [12], and ~700 Å in artificially corrugated bilayer graphene [16]. To gain a clear understanding of the features of the Berry curvature dipole in twisted bilayer WTe2, we analyze the band structure and the distribution of the Berry curvature. In general, large Berry curvature appears at the band edges, as displayed in Fig. 3b, which shows the band structure and Berry curvature Ω_{nk,z} along the Γ-Y line. As can be seen from Fig. 3b, the entanglement of multiple bands around the Fermi level causes a complicated distribution of the Berry curvature, while the large Berry curvature dipole arises from the drastic change of the Berry curvature in momentum space, as indicated by Eq. (2). We find that the peaks of the Berry curvature dipole shown in Fig. 3a mainly come from the contribution of the Berry curvature near the band edges shown in Fig. 3b; for instance, the peaks of the Berry curvature dipole originate from the band edges at energies of about −0.02, −0.01, 0.0 and 0.02 eV. (Table 1 lists the optimized twist angles θ, formation energies ΔE and average interlayer distances ΔZ for the constructed twisted bilayer WTe2 systems.) When the Fermi level is set to the charge-neutral point, we plot the Berry curvature Ω_z and the corresponding density of the Berry curvature dipole d_yz along the Y-Γ-Y direction in Fig. 3c, d, respectively. Here, the density of the Berry curvature dipole in momentum space is defined as d_bd = Σ_n f_{nk} ∂Ω_{nk,d}/∂k_b. Clearly, the Berry curvature and d_yz are mainly distributed in the region near the Y point, and the tremendous change of the Berry curvature contributes large magnitudes to d_yz.
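As an illustration of evaluating this dipole density numerically, here is a sketch using a toy Lorentzian Berry-curvature profile in place of the Wannier-interpolated Ω of the actual calculation; all numbers are invented:

import numpy as np

# Illustrative evaluation of the dipole density d_yz = sum_n f_nk dOmega/dk_y
# on a 1D cut of k-points, with a toy Berry-curvature peak standing in for the
# sharp feature near the Y point.
ky = np.linspace(-0.5, 0.5, 2001)           # k_y along Y-Gamma-Y (arb. units)
omega_z = 80.0 / ((ky - 0.35) ** 2 + 0.01)  # toy Lorentzian Berry-curvature peak
f_occ = 1.0                                 # band assumed occupied on this cut

d_yz = f_occ * np.gradient(omega_z, ky)     # finite-difference dOmega_z/dk_y
k_peak = ky[np.argmax(np.abs(d_yz))]
print(f"largest |d_yz| at k_y = {k_peak:.3f} (flank of the Omega peak)")

The largest dipole density sits on the steep flanks of the curvature peak rather than at its maximum, which is the behavior described above for the region near the Y point.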
Furthermore, it is found that the pronounced peaks (marked by "A") shown in Fig. 3c, d indeed come from the contribution of the band edge indicated by the red circle in Fig. 3b. As the Fermi level is shifted from the charge-neutral point, it passes through multiple band anticrossings in a narrow energy region, leading to the dramatic sign changes of the Berry curvature dipole around the charge-neutral point. As mentioned above, the Berry curvature dipole in twisted bilayer WTe2 exhibits dramatically oscillating behavior near the charge-neutral point. This indicates that the strong nonlinear Hall effect in our system may be observed at low temperatures. For clarity, we calculate the temperature dependence of the Berry curvature dipole D_yz for twisted bilayer WTe2 (θ = 29.4°) and perfect bilayer WTe2 according to the dipole expression above, with the occupation f_{nk} given by the Fermi-Dirac distribution at temperature T. Here, for simplicity, we fix the Fermi level at the charge-neutral point. As shown in Fig. 4, the Berry curvature dipole D_yz of perfect bilayer WTe2 decreases as the temperature rises. This is qualitatively consistent with the experimental result in few-layer WTe2 [4], where the nonlinear Hall response decreases monotonically with increasing temperature. When the temperature is above 20 K, the Berry curvature dipole D_yz of the twisted bilayer WTe2 (θ = 29.4°) is slightly larger than that of the perfect bilayer WTe2. However, D_yz for the twisted bilayer increases rapidly when the temperature drops below 20 K. These results suggest that a very strong nonlinear Hall response could be detected at low temperatures in twisted bilayer WTe2. On the other hand, the Berry curvature dipole in twisted bilayer WTe2 depends sensitively on the Fermi level; its magnitude and sign can be switched dramatically within a very narrow energy window. The large magnitude and highly tunable character of the Berry curvature dipole make twisted bilayer WTe2 an excellent platform to investigate the nonlinear Hall effect.
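Extending the previous toy sketch with Fermi-Dirac occupations gives a rough picture of how the dipole acquires its temperature dependence; the band, the curvature profile and the temperatures are again invented for illustration only:

import numpy as np

# Temperature dependence of a toy D_yz: same Lorentzian curvature as in the
# previous sketch, now weighted by Fermi-Dirac occupations f(E, T) with the
# chemical potential fixed at zero (the charge-neutral point).
kB = 8.617e-5                               # Boltzmann constant (eV/K)
ky = np.linspace(-0.5, 0.5, 4001)
E = 0.05 * np.cos(2 * np.pi * ky)           # toy band energy (eV)
omega_z = 80.0 / ((ky - 0.35) ** 2 + 0.01)  # toy Berry curvature

def D_yz(T):
    f = 1.0 / (np.exp(E / (kB * T)) + 1.0)      # Fermi-Dirac occupation
    integrand = f * np.gradient(omega_z, ky)    # f * dOmega_z/dk_y
    return np.trapz(integrand, ky)              # integrate over the cut

for T in (5, 20, 100):
    print(f"T = {T:4d} K: D_yz (arb. units) = {D_yz(T):+.1f}")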
DISCUSSION

In this work, we focused on the nonlinear Hall effect in twisted bilayer WTe2 with twist angle θ = 29.4°. It is worth understanding the angle dependence of the nonlinear Hall effect in twisted bilayer WTe2 systems. For comparison, we calculate the energy dependence of the Berry curvature dipole for twisted bilayer WTe2 with twist angle θ = 40.1°; the results are shown in Supplementary Fig. 3a. Similar to the system with twist angle θ = 29.4°, the Berry curvature dipole of twisted bilayer WTe2 (θ = 40.1°) also exhibits a large magnitude (larger than 1500 Å) and drastic oscillating behavior. However, considering the large computational cost, the nonlinear Hall effect of the other twist-angle systems was not calculated. Nevertheless, we can make a simple prediction of the features of the nonlinear Hall effect in these systems from their band structures. For example, rich band crossings also exist for twist angles θ = 12.3° and θ = 25.7° (see Supplementary Fig. 2). The complicated band structure in a narrow energy region should produce the dramatically oscillating behavior of the nonlinear Hall response in these twisted bilayer WTe2 systems. As a comparison, the band structure of the perfect bilayer WTe2 is much simpler (see Supplementary Fig. 2a), and the energy dependence of its Berry curvature dipole is also much smoother (see Supplementary Fig. 3b). Moreover, the entanglement of multiple bands around the Fermi level implies a drastic change of the Berry curvature in momentum space, which may lead to giant Berry curvature dipoles in these twisted bilayer WTe2 systems. Of course, the predictions above may not apply to small-twist-angle systems. On the other hand, the contributions to the nonlinear Hall effect can be divided into intrinsic (geometric) and extrinsic (disorder-induced) contributions [23]. Here, we focus on the intrinsic part of the nonlinear Hall conductivity, which is related to the Berry curvature dipole. However, recent works have revealed that the disorder-induced extrinsic part can make a more important contribution to the nonlinear Hall effect [24-26]. In twisted bilayer systems, the nonuniformity of the twist angle across the sample is believed to be the main source of disorder [27], which may contribute substantially to the extrinsic part of the nonlinear Hall effect. These issues deserve further study. In summary, we have predicted that a giant nonlinear Hall effect can exist in the twisted bilayer WTe2 system. We show that twisting can greatly change the band structure of bilayer WTe2. There exist abundant band anticrossings and band inversions around the Fermi level in twisted bilayer WTe2, which produce a strong nonlinear Hall signal in this system. The Berry curvature dipole of twisted bilayer WTe2 (θ = 29.4°) can reach up to ~1400 Å, much larger than that in previously reported nonlinear Hall systems. In addition, the nonlinear Hall effect in twisted bilayer WTe2 exhibits dramatically oscillating behavior due to the complicated distribution of the Berry curvature around the Fermi level. Our results show that twisted bilayer WTe2 can become an excellent platform to investigate the nonlinear Hall effect.

METHODS

Nonlinear Hall effect

The nonlinear Hall effect originates from the dipole moment of the Berry curvature over the occupied states. In a system with time-reversal symmetry but broken inversion symmetry, when an oscillating electric field E_c = Re{ξ_c e^{iωt}} is applied, a transverse response current j_a = Re{j_a^(0) + j_a^(2ω) e^{2iωt}} can be generated, where j_a^(0) = χ_abc ξ_b ξ_c* and j_a^(2ω) = χ_abc ξ_b ξ_c are the rectified current and the second-harmonic current, respectively. The nonlinear conductivity tensor χ_abc is associated with the Berry curvature dipole [2] as follows:

χ_abc = -ε_adc [e³τ / (2ħ²(1 + iωτ))] D_bd.    (1)

Here, D_bd is the Berry curvature dipole, ε_adc is the Levi-Civita symbol and τ is the relaxation time. The Berry curvature dipole can be written as [2]

D_bd = Σ_n ∫ [d²k/(2π)²] f_{nk} (∂Ω_{nk,d}/∂k_b),    (2)

with the Berry curvature given by

Ω_{nk,z} = -2 Im Σ_{m≠n} ⟨n|∂H/∂k_x|m⟩⟨m|∂H/∂k_y|n⟩ / (E_n − E_m)²,

where E_n and |n⟩ are the eigenvalues and eigenwavefunctions, respectively.
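As a sketch of turning a Berry curvature dipole into the second-order response via Eq. (1): all numerical values below (relaxation time, drive frequency, field strength) are assumptions, and the prefactor convention here follows the form of Eq. (1) above; conventions vary between references:

import numpy as np

# Converting a Berry curvature dipole into the second-order nonlinear response,
# chi = -eps * e^3 * tau / (2 * hbar^2 * (1 + i*omega*tau)) * D, as in Eq. (1).
e = 1.602e-19            # electron charge (C)
hbar = 1.055e-34         # reduced Planck constant (J s)
tau = 1.0e-12            # relaxation time (s), assumed
omega = 2 * np.pi * 1e5  # drive frequency (rad/s), low-frequency transport
D_yz = 1400e-10          # Berry curvature dipole ~1400 Angstrom, in metres

prefactor = e**3 * tau / (2 * hbar**2 * (1 + 1j * omega * tau))
chi = -prefactor * D_yz  # one tensor component; Levi-Civita sign absorbed

xi = 1.0e3                       # field amplitude (V/m), assumed
j_rect = chi * xi * np.conj(xi)  # rectified (DC) sheet current density
j_2w = chi * xi * xi             # second-harmonic component
print(f"|j_rect| = {abs(j_rect):.3e}, |j_2w| = {abs(j_2w):.3e} (A/m, toy values)")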
First-principles calculations

The first-principles calculations are performed using the Vienna ab initio simulation package (VASP) [29] with the projector augmented-wave method [30-32]. The exchange-correlation potential is described using the generalized gradient approximation (GGA) in the Perdew-Burke-Ernzerhof (PBE) form [33]. Spin-orbit coupling is taken into account self-consistently. The energy cutoff of the plane-wave basis set is 350 eV. A vacuum region larger than 15 Å is applied to ensure no interaction between the slab and its image. In our optimization, all structures are fully relaxed until the force on each atom is less than 0.02 eV/Å. The van der Waals interactions between the adjacent layers are taken into account using the zero-damping DFT-D3 method of Grimme [34]. The maximally localized Wannier functions [35-37] for the d orbitals of W and the p orbitals of Te are generated to compute the Berry curvature and the Berry curvature dipole. For the integration of the Berry curvature dipole, the first Brillouin zone is sampled with very dense k grids to obtain converged results. Considering the large computational cost, we separate the first Brillouin zone into multiple blocks, and the convergence test is carried out independently for each block; for example, some blocks are sampled with a k-point separation of 5 × 10⁻⁵ Å⁻¹.

DATA AVAILABILITY

The data that support the findings of this study are available from the corresponding authors upon reasonable request.
Creation and application of virtual patient cohorts of heart models

Patient-specific cardiac models are now being used to guide therapies. The increased use of patient-specific cardiac simulations in clinical care will give rise to the development of virtual cohorts of cardiac models. These cohorts will allow cardiac simulations to capture and quantify inter-patient variability. However, the development of virtual cohorts of cardiac models will require the transformation of cardiac modelling from small numbers of bespoke models to robust and rapid workflows that can create large numbers of models. In this review, we describe the state of the art in virtual cohorts of cardiac models, the process of creating virtual cohorts of cardiac models, and how to generate the individual cohort member models, followed by a discussion of the potential and future applications of virtual cohorts of cardiac models. This article is part of the theme issue 'Uncertainty quantification in cardiac and cardiovascular modelling and simulation'.

Introduction

Hindsight is a wonderful thing. If we know what will happen, it is easy to make the right decision. Biophysical patient-specific models strive to encode known physics and physiology within mathematical equations and to tune these models to represent individual patients. The aim is to use these digital twins to predict disease progression, better estimate risk and predict treatment response, so that the outcome might be known before a decision is made. With sufficiently accurate predictions, the choice of the best treatment for a patient shifts from being based on the current or past condition of the patient to the future one. While conceptually simple, the practical reality of determining the equations, tuning the parameters to patient data and generating reliable predictions remains a significant engineering and mathematical challenge. Overcoming such challenges has huge potential. Once a patient-specific model is created, it can be re-used to design new treatments, evaluate inclusion criteria, simulate imaging or diagnostic signals, or test mechanistic hypotheses. In contrast to recent advances in statistical regression models, which are limited to cases where large datasets are already available, biophysical models are based on physical laws and known physiological systems, and so have greater versatility in their predictions, mechanistic explanatory power and susceptibility to analysis. There has been significant investment in the development of biophysical models. Cardiac modelling, in particular, has recently made major advances in moving patient-specific modelling into the clinic [1]. The move to human-scale simulations has driven the development of the efficient and scalable code required for the simulation of large human hearts [2-4]. The ability to simulate human hearts increased the ability to tune models to clinical datasets, motivating the use of image- and signal-processing techniques to convert medical images into data that could be used to constrain the models. This work has culminated in the recent use of models to guide therapies in prospective studies of ventricular tachycardia ablation [5], atrial fibrillation ablation [6] and cardiac resynchronization therapy lead positioning [7]. This shift from developing research or proof-of-concept models to using models in clinical workflows requires a step change in the speed and robustness of model creation [8].
A patient-specific model needs to be created from standard clinical data robustly and reliably, assessed and interrogated, all within a short time frame, often less than 24 h. As the cost, in time, of model creation decreases significantly, this enables the development of large virtual cohorts of models. These virtual cohorts will allow inter-patient variability to be captured in cardiac model simulations. These virtual cohorts will allow virtual trials (VTs), which will impact clinical care through therapy design and development, including patient selection and therapy guidance.

The development of virtual patient cohorts poses new opportunities for cardiac modelling. How best to move from a cottage industry, where each model is handcrafted, to an industrialized process where models are produced en masse with limited to no human intervention, is a challenge. This white paper discusses the current state of patient-specific cardiac models; the process of developing virtual patient cohorts; how we validate these models; how we quantify uncertainty at the level of the individual model and at the level of the virtual cohort; and finally potential and future applications of virtual cohorts.

2. Examples of virtual cohorts of cardiac models

[Figure 1. Schematic of the strategies for obtaining a virtual cohort, based on biophysical models. (Online version in colour.)]

Published examples include personalized ventricular models built using an eikonal formulation [13], demonstrating the methodology on seven CT and electrophysiology datasets. Corrado et al. [14] presented a workflow to build personalized computational models from local multi-electrode catheter measurements and applied the technique to data from seven paroxysmal atrial fibrillation clinical cases. Kayvanpour et al. generated a larger cohort of models consisting of 46 heart failure patients, incorporating personalized anatomy, electrophysiology, biomechanics and haemodynamics [12]. As well as personalizing the cardiac anatomy, models may include a torso mesh from imaging data [15]. The effect of variability is also important; Muszkiewicz et al. investigated this by integrating cellular-level and ion channel recordings in human atrial models using right atrial appendage measurements from 35 patients [9]. Multiple studies have used computational models to test response to therapies, including ablation of atrial arrhythmias [16-19], ablation of ventricular arrhythmias [5], CRT optimization [10] and ventricular tachycardia risk assessment following CRT [11]. These studies vary in the degree of model complexity and personalization included and the size of the virtual cohort. For example, atrial ablation studies range from smaller studies, incorporating left and right atrial fibrosis distributions in bilayer models (n=12 [17]), or left atrial volumetric models including transmural fibrosis (n=12 [18]), to large studies using left atrial shell models for which only the anatomy was personalized (n=108 patients [19]).

3. Approaches to generating a virtual cohort of cardiac models

The aim of virtual cohorts of cardiac models is to account for inter-patient variability in simulation studies. A virtual cohort consists of multiple members, where each member of the cohort has a distinct parameter set. The variation in parameter sets and anatomy between members aims to represent the variability in the true patient population.
(a) Strategies for generating virtual cohorts

The parameter set for each member of the virtual cohort can be obtained in three ways: first, by having each member of the virtual cohort represent a specific patient from a real-world cohort (1:1 mapping); second, by generating a parameter set from inferred parameter distributions (sampling from inferred distributions); and, third, by completely randomly generating parameters and testing if these result in physiologically plausible models (random variation with acceptance criteria). Figure 1 gives a schematic summary, while the methods are detailed below.

(i) 1:1 mapping virtual cohorts

The development of 1:1 mapping virtual cohorts incrementally builds on current techniques for creating models of a specific patient's heart: it repeats this process multiple times in order to generate a number of specific models that, in turn, form the virtual patient cohort. While superficially simple, the process of repeating the patient creation workflow on new patients is often subject to subtle variations or the effect of artefacts in clinical data. Manual steps quickly become bottlenecks, data structures need to be standardized and cost functions that were tailored to the first case need to be generalized for all cases. In addition, the first case is always built on the best and most complete data. Moving to multiple cases exposes the challenges of obtaining multiple high-quality datasets from specific patients. This makes data collection for the generation of large virtual cohorts time-consuming and expensive. Nonetheless, the availability of data is also essential for the other strategies aiming at the construction of virtual cohorts, and a number of research groups are working in this area, addressing each of the challenges mentioned above [1,24,26-29].

(ii) Sampling from inferred distributions

In cases where a 1:1 mapping virtual cohort can be created for a representative subset of a population, it is possible to also infer the parameter variability and co-variability. This allows one to model the parameters as following a statistical distribution. By sampling from these distributions, new parameter sets can be generated representing new virtual members of the patient population. The statistical distribution of the parameters can be assumed to be of known form, for example Gaussian. The task, in this case, reduces to using the data to estimate the hyper-parameters of the distribution, corresponding to the mean vector and the variance-covariance matrix in the Gaussian example. Alternatively, if there are no principled reasons to assume a known form for the parameter distribution, this can be inferred by applying either Bayesian statistical methods [30] such as Markov chain Monte Carlo (MCMC) [31], or frequentist techniques such as bootstrapping [32], to individual or cohort data. The resulting parameter distribution is a discrete (or sample) approximation of the underlying distribution. In both Bayesian and frequentist approaches, parameter sets for members of the virtual cohort can then be generated by drawing from the inferred distributions, which have the property of representing the variance and covariance structure of the parameters emerging from the data. While this approach allows estimation of the effects of population variance, there are no guarantees that the generated cohort members will be physiologically plausible and, ideally, each member model should be evaluated to ensure physiological plausibility.
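To make the sampling strategy concrete, the sketch below fits a multivariate Gaussian to parameter sets from a small 1:1 cohort and draws new synthetic members, discarding implausible ones. It is a minimal illustration only: the parameter names, values and plausibility bounds are invented placeholders, not taken from any specific cardiac model.

```python
import numpy as np

# Hypothetical parameter sets inferred from a 1:1 mapping cohort
# (rows = patients; columns = e.g. conduction velocity, stiffness).
fitted = np.array([
    [0.62, 11.3], [0.55, 9.8], [0.70, 12.1],
    [0.58, 10.4], [0.66, 11.0], [0.60, 10.9],
])

# Estimate the hyper-parameters of the assumed Gaussian distribution:
# the mean vector and the variance-covariance matrix.
mu = fitted.mean(axis=0)
cov = np.cov(fitted, rowvar=False)

rng = np.random.default_rng(seed=1)

def plausible(theta):
    """Placeholder physiological-plausibility check (bounds invented)."""
    cv, stiffness = theta
    return 0.3 < cv < 1.0 and 5.0 < stiffness < 20.0

# Draw synthetic cohort members, keeping only plausible ones,
# as recommended in the text.
cohort = []
while len(cohort) < 100:
    theta = rng.multivariate_normal(mu, cov)
    if plausible(theta):
        cohort.append(theta)

print(f"Generated {len(cohort)} synthetic virtual patients")
```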
(iii) Random variation with acceptance criteria

In situations where the variance and co-variance of the model parameters are unknown, and sufficient measurements to infer a distribution of parameters are not available, it is possible to generate large numbers of parameter sets by randomly varying parameters to generate new members of a virtual cohort. Variations of this approach were first taken in [33,34]. Each proposed member of the virtual cohort can be evaluated against population measurements, and only models that fall within these physiological bounds are included in the virtual cohort, as performed in [35] and subsequent papers. This is a robust method that is simple to implement; it allows virtual cohorts to be built when only limited data, summary statistics or processing time are available, and it provides measures of how parameter variability might affect results by producing high-dimensional plausible regions in the parameter space (see the sketch after this section). Differences between virtual cohorts can then be evaluated by the degree of overlap in model predictions generated by the population. However, despite the formal similarity to the MCMC accept-reject procedure, the method fails to account for co-variation in model parameters and measured phenotypes; the parameter sets that are generated are possible, but there is no guarantee that they occur physiologically; as more variables are added to define acceptance criteria, it becomes less likely that any individual is near the mean of the multivariate population distribution; and, finally, the generated parameter sets are not samples from probability distributions, so standard statistical tests for differences between populations should not be applied.

(b) Synthetic versus patient-derived members of the virtual patient cohort

The strategies for model development can be separated into the development of synthetic virtual patients, where virtual patients are generated by sampling from distributions (whether inferred, or guessed and not rejected), and patient-derived virtual patients, generated using the 1:1 mapping approach, in which case the virtual patients correspond to actual real people. While the data cost for each patient is high in the 1:1 mapping approach, this does provide some guarantee that each virtual patient's heart will operate within some physiologically plausible space. In the case where only a limited number of models can be made, due to time or data restrictions, using a 1:1 mapping approach has the potential for patient-specific bias. The bias, if any, can potentially be estimated by comparing emergent model phenomena with population statistics from larger clinical trials or from population databases. On the other hand, the generation of synthetic virtual patients allows speculative studies to evaluate what could potentially happen in extreme edge cases. This could be useful when trying to identify rare events or edge patient cases, who may not be represented in available patient cohorts. However, this approach runs the risk of creating non-physiological or implausible models that can skew the results for the virtual cohort, especially in the case where parameters are randomly guessed. Once 1:1 mapping cohorts of sufficient size have been generated, these provide better bounds and parameter distribution estimates for constraining synthetic virtual patient approaches.
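A minimal sketch of the random variation with acceptance criteria strategy from (iii) above follows; the forward model is a toy stand-in for a cardiac simulator, and the parameter ranges and acceptance bounds are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(seed=2)

# Invented parameter ranges and output bounds for illustration only.
PARAM_LO = np.array([0.1, 1.0])   # e.g. [conductivity, stiffness]
PARAM_HI = np.array([2.0, 30.0])
APD_BOUNDS = (200.0, 350.0)       # "measured" population range (ms)

def simulate_apd(theta):
    """Stand-in for a cardiac simulation; a real model would go here."""
    conductivity, stiffness = theta
    return 180.0 + 60.0 * conductivity + 2.5 * stiffness

accepted = []
for _ in range(10_000):
    theta = rng.uniform(PARAM_LO, PARAM_HI)      # random variation
    apd = simulate_apd(theta)
    if APD_BOUNDS[0] <= apd <= APD_BOUNDS[1]:    # acceptance criterion
        accepted.append(theta)

# As noted in the text, the accepted sets span a plausible region but
# are not samples from a probability distribution.
print(f"{len(accepted)} of 10000 candidate members accepted")
```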
4. Constructing, constraining and validating virtual cohort models

Creating a virtual cohort requires the development of a template model for representing each member of the cohort. The template needs to be carefully designed to be able to capture patient variability, physiology, diseases and treatments of interest. In cases where the model is tied to clinical data for specific patients, the model complexity needs to reflect the available clinical data and the time and resources available to create the model. In virtual cohort strategies where models need to be tuned to represent all or a subset of specific patients, the model parameters must be inferred using nonlinear optimization, statistical or machine learning approaches. The constrained model is subsequently exposed to a validation step, in order to assess its generalization properties for prediction tasks. These three steps are depicted graphically in figure 2.

(a) Designing virtual cohort member model templates

A virtual patient cohort is made of virtual members who all share a common model structure. This means that variability is encoded in the anatomy, physiological parameters and boundary condition parameters, as opposed to differences in model structure. As with previous cardiac biophysical models, while the initial aim of a virtual patient cohort may be specific, if virtual cohorts are to be reused outside of their original application, the model template will need to be designed to be generic, reusable and easy to manipulate, allowing simulations to be run and analysed at scale. Ideally, the model template will be combined with a definition of the physiological envelope within which the member and virtual cohort models have been validated, to ensure appropriate use and reliable predictions.

(i) Physiologically relevant virtual cohort member model templates

The model template must encode physiologically relevant mechanisms for the virtual cohort application. The level of physiological detail in a model template needs to balance complexity against the ability to constrain model parameters. Increased complexity is motivated by the lack of a clear set of known important physiologically relevant mechanisms and the desire to make a general virtual cohort that can be applied to multiple applications. However, complex biophysical models often rely on representative or population parameters, and this may miss important patient-specific physiology. By contrast, the use of simple models is motivated by the intent of constraining most (if not all) of the model parameters to be patient-specific, by the inability to precisely constrain all the parameters of a complicated model with the available data, and by the need to contain simulation costs [23,36]. As a minimum, when tested, simulations of individual patients within the virtual cohort should be able to reproduce the corresponding clinical measurements from that patient to within a specified tolerance. The model parameters and predicted phenotypes should be prescribed into the mathematical model structure to ensure that the model captures fundamental physiology and that parameters are globally identifiable from the available data [37].

(ii) Clinically tractable virtual cohort member model templates

As described, cohorts of virtual patients can be created in two ways. First, synthetically, by sampling parameter distributions as part of an offline focused project to create a virtual patient cohort.
Second, from a cohort of patient-derived virtual patients created as part of a clinical trial or routine clinical care. When models need to be made at scale, or as part of the clinical workflow, the time taken to create each model becomes critical. As models in the near future are likely to be created as an adjunct to standard clinical practice, the data used to constrain model parameters and validate model predictions need to reflect available clinical data. Adhering to these two steps will allow virtual patient cohorts to fit directly into clinical applications, in which parameters are inferred by matching patient physiological dynamics [38].

(iii) Parameter and predicted phenotype sensitivity of virtual cohort member model templates

Model sensitivity is another important challenge in designing the virtual cohort member template model. Sensitivity is defined as the rate of change in simulated model predictions or outputs in response to changes in model parameters. For an example, see [39]. In biophysical cardiac models, the mapping from patient data to simulation predictions can be separated into two steps. First, the dependence of model parameters on input data can be determined. This sensitivity can be used to inform how values are measured [40,41], for example choosing between echocardiography, MRI or CT to measure cardiac mechanics to achieve a desired precision in inferred model parameters. Second, the dependence of the model outputs on model parameters can be determined. Parameters to which the model output of interest is relatively insensitive may not need to be personalized, whereas parameters to which the output is highly sensitive may need to be precisely personalized for each patient. Examples of sensitivity analysis applied to cardiovascular diseases, where model output measures are used to reflect clinical goals, can be found in [42,43].

(iv) Identifiability and uncertainty quantification of virtual cohort member model templates

In cases where models are based on patient-specific clinical data, the template model parameters, or a defined subset, should be derived from the available data. In the case where a single deterministic parameter set is used to reflect the physiology of each member of the virtual cohort, the model parameters should be uniquely identified for each specific patient. However, there is growing application of uncertainty quantification (UQ) to patient-specific models. Within this framework, parameters are defined by distributions as opposed to a single value. The distribution of a parameter for a given patient represents the uncertainty in the true parameter value for that patient. An advantage of the UQ approach is that it provides a natural framework for predictions: these are simply generated from multiple instances of the same patient, with parameters randomly sampled from the distributions in each simulation. In both cases, the creation of a virtual cohort based on the inferred parameters identifies a population-level distribution.

(v) Using experimental or population data in virtual cohort member model templates

The need to capture physiologically relevant mechanisms, create identifiable models and develop models rapidly and robustly has led to pragmatic modelling choices. For example, in tissue electrophysiology simulations, detailed representative biophysical cell models are used that are not tuned to the individual patient [5,6].
However, these models capture the complex interplay of cellular electrophysiology and calcium dynamics that is important in arrhythmia simulations, and they allow qualitatively relevant predictions that have been successfully used to inform patient treatment. Setting a non-personalized parameter to a single fixed value is often the only practical option, due to computational resource constraints. However, it would be more correct to view non-personalized parameters as uncertain and specify them using probability distributions, ideally representing population variability in that parameter conditional on known information about the patient (e.g. on sex or age). This approach is currently infeasible for whole-heart patient-specific models, due to both computational cost and lack of information on population (and sub-population) variability for the wide assortment of functional parameters in cardiac models. However, the increased availability of population databases, for example the UK Biobank [44], and, in the future, the availability of virtual cohorts of cardiac models, will provide population-based priors for informing some non-personalized model parameters in individuals. This will allow more sophisticated models to be made that are informed by a combination of population and patient-specific data, and that fit increasingly well within the UQ paradigm.

(b) Constraining parameters in virtual cohorts of cardiac models

With the rapid growth of clinical and consumer sensor technology, which makes large individual and population patient datasets readily available, there is an increasing need for accurate big-data analytics to construct and constrain virtual cohorts of cardiac models. The tools are supplied by emerging fields such as big-data informatics and machine learning [45,46], used to inform model parameters from large datasets. For example, modern deep-data techniques can be used to analyse and constrain larger numbers of model parameters, which increases understanding of how patient data could be used and shared. On the other hand, machine learning, especially its probabilistic version [47], provides efficient approaches for clustering, dimensionality reduction and constraining models. These methods are thus becoming ubiquitous in the way virtual cohort models and data are used in medicine.

(i) Nonlinear optimization and Bayesian approaches

The most intuitive approach for constraining the parameters of a cardiac physiological model is to minimize a cost function that represents the distance between the model, evaluated at a given set of parameters, and the data. A number of cost functions can be considered that, for example, weight each data point differently, or that add a penalty on the magnitude of the parameter vector. The task is thus to find the values of the parameters by means of iterative nonlinear optimization algorithms [48-50], where the nonlinearity is intrinsic in the way the parameters enter the cardiac model. The main drawback of this approach lies in the multimodality of the cost function: this is both a computational challenge for numerical algorithms, which need the ability to locate multiple minima, and an interpretability issue in cases where there is not a unique global minimum. A natural solution is offered by Bayesian inference, which forms a large part of the UQ methods mentioned above.
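Before turning to the fully Bayesian treatment, here is a minimal sketch of the cost-minimization step described above. The two-parameter forward model is invented and stands in for a cardiac simulator; several random restarts are used to mitigate the multimodality issue just noted, with scipy's least_squares as the iterative nonlinear optimizer.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(seed=3)

# Invented forward model standing in for a cardiac simulation:
# predicts a trace from two parameters.
t = np.linspace(0.0, 1.0, 50)

def forward(theta):
    amplitude, rate = theta
    return amplitude * (1.0 - np.exp(-rate * t))

# Synthetic "patient data" generated from known parameters plus noise.
true_theta = np.array([1.5, 4.0])
data = forward(true_theta) + 0.02 * rng.standard_normal(t.size)

def residuals(theta):
    # Cost function: distance between model output and patient data.
    return forward(theta) - data

# Restart from several random initial points to reduce the risk of
# converging to a local minimum of a multimodal cost function.
best = None
for _ in range(5):
    theta0 = rng.uniform([0.1, 0.1], [5.0, 10.0])
    fit = least_squares(residuals, theta0)
    if best is None or fit.cost < best.cost:
        best = fit

print("Recovered parameters:", best.x)
```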
In fully Bayesian approaches, a prior belief on the parameters (typically a distribution informed by population data) is updated to a posterior belief by means of observed patient data, whose information is modelled through the chosen likelihood function. Inference thus corresponds to the ability to sample from the posterior distribution of the parameters: this is, for example, the key step for obtaining predictions from the model, as explained above, but another typical task is using the samples to approximate integrals of interest (for example, the posterior mean). MCMC methods are the main class of algorithms for full posterior sampling with batch data, based on a principled accept-reject scheme. When the data are seen as arriving sequentially in time or space, and one wants to capture the intrinsic stochasticity of the sequential data, the corresponding class of Bayesian learning algorithms is provided by sequential Monte Carlo methods [51]. These generalize the Kalman filter to non-Gaussian data and nonlinear models; we refer to [52] for an electrophysiology example, aimed at constraining the Mitchell-Schaeffer model. A number of challenges arise in these seemingly easy procedures, primarily related to the difficulty of designing efficient proposal distributions, especially when the parameter of interest is high-dimensional and the posterior is multimodal. Computational solutions have been provided by means of (i) approximation of the posterior, as in variational Bayes approaches [53], approximate Bayesian computation [54,55], posterior tempering [56] and dimension reduction [57]; (ii) direct exploitation of the geometry of the posterior [58]; and (iii) state-space augmentation to ease the inference [59]. Finally, restarting MCMC methods multiple times from different initial points allows one to identify at least some of the different posterior modes, and to design a sampler with improved capacity to explore high-probability regions.

(ii) History matching approaches

When the variance-covariance structure of the parameters is not of strict interest, or is too costly to obtain, history matching (HM) provides an approach to constrain the parameters to plausible regions. The model is evaluated for a large space-filling design of parameter values (e.g. a Latin hypercube), which are accepted or rejected based on an implausibility criterion. As an example, in [60], HM is used to constrain a subset of the parameters of the Courtemanche and the Maleckar cell models. The main difference between the MCMC and the HM acceptance criteria is that the former, if successfully converged, guarantees that all the accepted parameter values are (correlated) samples from the posterior distribution, although assessing the convergence of MCMC chains remains a challenging task in itself [61]. The latter simply identifies regions of the parameter space that are consistent with the measured data, trying to account for uncertainty in both observations and predictions, but without guarantees. To tackle the computational burden of evaluating a very large number of samples, MCMC and HM often rely on a model surrogate or emulator, as described in a later section.

(iii) Machine learning approaches for mapping data to parameters

Data-driven machine learning has the advantage of extracting complex relationships from large amounts of data, giving a model which can later be executed efficiently for time-sensitive workflows.
This approach will complement virtual cohorts of cardiac models that are biologically and physiologically sophisticated but face challenges in assimilating data from different sources in a timely fashion. There are thus many opportunities for integrating machine learning methods with virtual cohorts of cardiac models. Machine learning models can be trained to learn a direct mapping between model parameters and outputs generated by virtual cohorts of cardiac models, which can later be used to allow fast calibration or UQ of virtual cohorts of cardiac models when given clinical data. In [62], a polynomial regression model was trained to predict myocardial electrical diffusion from simulated ECG data, which was then used for fast calibration of a cardiac electrophysiology model from clinical ECG data. Similar ideas were explored in [63,64] for personalizing cardiac electrophysiology models from higher-density surface ECG data. In [65], linear regression models and decision trees were trained to map input geometrical features to simulated haemodynamic features, which were then used for quantifying the uncertainty in haemodynamic outputs as a result of geometric uncertainty. These approaches provide an attractive, time-effective alternative for calibrating or quantifying the uncertainty in virtual cohorts of cardiac models in clinical workflows, in contrast to traditional optimization and statistical inference, which are typically computationally expensive. The challenge regarding the multimodality or non-identifiability of the parameters given available data, however, still remains, highlighting the importance of UQ to characterize the probabilistic distribution of the model parameters even in machine learning approaches [66]. An additional challenge arises from how well the machine learning relationship trained on simulation data may generalize to clinical data, and this approach was suggested as a preliminary step prior to more refined model personalization in [62,67].

(iv) Machine learning approaches for model emulation

With the rapid growth in their modelling capacity, data-driven deep learning models may also have a role in directly approximating the simulation model, for the purpose of accelerating data assimilation in virtual cohort models that are otherwise computationally prohibitive to realize. Similar to earlier surrogate/emulator models such as polynomial chaos and Gaussian processes, the fundamental idea is to learn to approximate the simulation-based solutions and then use these computationally efficient surrogates in later tasks such as data assimilation [68,69], which traditionally consists of the optimal integration of typically sparse real-world observations to improve model estimates such as forecasts or state reconstructions [70]. This is rather appealing for enabling data assimilation of virtual cohort models at scale, although several challenges remain to be addressed. Building the training database from simulations remains time-consuming, and it is not clear how exhaustive the simulations need to be in order for the machine learning surrogate to be able to mimic the simulation model over a wide range of parameter values [71,72].

(c) Validation of virtual cohorts

Validation is the process of assessing whether a model is suitably representative of the physical process it seeks to represent, and therefore whether predictions from the model are sufficiently close to those of the real system.
It should be noted that there is no such thing as a validated model, but rather a body of evidence that the model produces results which are consistent with the physical system being modelled in a specified regime or parameter space. While a model may generate accurate predictions within one region of parameter space, it may not necessarily extend to producing reliable, or even physiologically plausible, results outside of that region [73]. Our confidence in a specific model output should, therefore, reflect its position relative to the regime in which validation has been undertaken. Furthermore, models will often predict variables that cannot be, or were not, measured directly, for example stress in cardiac mechanics models. These variables can be of interest for understanding mechanisms underpinning emergent observations. There will be less confidence in model predictions that cannot be compared against experimental data. However, confidence in the model prediction can be gained if the model is physics-based and is validated across a wide range of conditions that alter the unmeasured model output.

(i) Challenges in validating individual models

Cardiac models are often highly complex, frequently spanning multiple scales, from tissue properties to individual ion channels. Each of these sub-components includes assumptions and parameters for which evidence supporting their representation of reality should be sought. Few parameters correspond to directly observable quantities, requiring inference procedures in order to incorporate these observations into the model. These may use simple approaches (such as linear regression), or necessitate the use of more sophisticated statistical models to tease out the relevant associations. For example, tissue conductivity in homogenized models cannot be measured experimentally/clinically; statistical models can be used to associate this quantity with an observable, such as conduction velocity. In calibrating a cardiac model to an individual, observations are often only acquired in a small region of the parameter space (e.g. during pacing or sinus rhythm) or at low spatio-temporal resolution, depending on time, practical and ethical constraints. The process of validation often requires evaluating the model multiple times; the complex nature of whole-heart cardiac models can make this computationally expensive and time-consuming, adding further challenge to the validation process.

(ii) Validating cohort models

These issues are further compounded when looking to validate cohort models. Independent of the approach used to generate the cohort model, we would like to validate it against real patient observations, or real cohorts of patients. The method used to generate the virtual cohort strongly impacts the ability to perform validation. For example, virtual patients generated using the 1:1 mapping method correspond to actual patients, from which other data can be obtained for validation. Synthetic virtual patients, on the other hand, cannot be validated in the same way, as there is no corresponding real patient to compare against. Cohorts of either type can be validated in a statistical manner by comparing cohort-level statistics against population-level statistics. Owing to the significant challenges in performing this validation, evidence supporting the validity of biophysical models in the existing literature tends to be sporadic and ad hoc. At the cellular scale, efforts to validate action potential models are generally quite prevalent.
While verification of computational implementations of tissue-scale modelling has been proposed [74,75], limited validation against actual patient data is found in many whole-heart modelling studies [14]. Many cohort studies cite evidence supporting their validity from previous studies, which in turn cite earlier studies, which ultimately provide limited actual evidence in themselves.

(iii) Improving validation in cohort models

To improve validation, each study should be able to reference a body of primary evidence (rather than earlier studies using the model) supporting the components of the model being used from earlier work, and include evidence that the model as a whole is representative of the population it aims to represent. This may include some cohort-level validation steps taken within the study, but also explicitly citing validation data for sub-components used, such as assessments of tissue-scale and action potential models, and their calibration methods, for the individual simulations within the cohort and for the regimes under consideration in the study. This procedure should be used to help establish the validity of results, and consequently the strength of conclusions drawn, during the peer-review process. This process would be made more effective through the pursuit of specific (multiple, independent) validation studies, either published through traditional journals or as white papers on pre-print servers, which include the raw data used. As part of this validation process, quantification and propagation of uncertainties in sub-components must inevitably play a role. Substantial work has already been done in the area of UQ for action potential models [76,77], but this needs to be propagated through to whole-organ models and cohort models [22]. Stating that a given outcome is the most likely outcome, given the available data, is a much stronger statement than stating that a given outcome is plausible. Achieving all this, and being transparent about the extent to which the constituent components are themselves validated, will provide greater strength to cohort studies and document a clear and unambiguous provenance of validation evidence to support their use.

5. Potential and future applications of virtual cohorts of patient models

The creation of virtual cohorts of cardiac models is a relatively new innovation in cardiac modelling. These ideas have been proposed and adopted in other fields of computational modelling [78-80], including cardiovascular [81,82] and thymus modelling [83], as well as modelling of insulin and glucose [84,85], pacing lead design [85] and immunomodulation [86]. Currently, as described above, cohorts of cardiac models are being developed for specific applications to answer clinical or scientific questions. However, as the number of virtual cohorts developed and made publicly available increases, so too will the applications of these models.

(a) Trial outcome prediction/proof of concept

One potential use of virtual cohorts is to simulate a clinical trial in advance of investing in the actual clinical trial. Assuming the simulations are expected to reliably predict the clinical endpoint of interest (difficult to ensure in practice), the advantages are clear: millions of dollars could be saved if the VT prevents a failing trial from going ahead. For example, in [87], the authors used a VT to retrospectively re-create the results of the Rhythm ID Goes Head-To-Head (RIGHT) trial, which compared the performance of two ICD devices.
The conclusion of the RIGHT trial was the opposite of what had been hypothesized at the time; the virtual trial reproduced this result.

(b) Responder identification

Related to the prediction of trial outcomes is the use of virtual cohorts to identify individuals who are likely, or not likely, to respond to the therapy, for example by identifying sub-populations within the intended patient population for which the proposed therapy is not likely to be effective. Studies using virtual cohorts could be used to derive better inclusion-exclusion criteria for the real trial, to reduce the size of the trial while maintaining adequate power. This could, depending on the effect size, ultimately be the difference between trial success or failure.

(c) Trial augmentation and reduction

It has been proposed that some clinical trials could be augmented with a corresponding VT. Results from the VT, if they agree with the real-world trial, could be used to end the real-world trial earlier than initially planned. The medical devices community, via a collaboration facilitated by the Medical Device Innovation Consortium (MDIC), has recently developed a Bayesian statistical framework for the formal integration of VT data and a real-world trial. The basic idea is that a VT is performed in parallel with the real trial, and VT results are weighted according to the extent that they match the real-world trial. Specifically, the number of virtual patients used is controlled by a discount function which uses the similarity between modelled and observed data. This is a powerful approach because it reduces the (pre-use) validation burden for the computational models; results from the VT will essentially be discarded if they fail to match the real-world trial. The approach is described in [88].

(d) Regulatory submissions

Virtual cohorts are already used in some applications to provide data for regulatory submissions, where performing a clinical trial would be impossible. Most notably, computational modelling has been used to evaluate the safety of metallic implantable medical devices when the patient is exposed to radiofrequency electromagnetic radiation during MRI. Implanted devices may heat and cause thermal tissue damage during MRI. Performing clinical trials to study whether heating remains within safe limits presents too great a risk to patients. Therefore, electromagnetic computational simulations are routinely used to predict potential MR heating for new implantable devices. A range of virtual patients (in this case, detailed whole-body anatomical models with electromagnetic material properties for each tissue) have been developed for these purposes. The Virtual Population [89] is a set of virtual patients covering a range of ages, BMIs and both sexes. Some members were generated from segmentation of data from real subjects, others through morphing (synthetic virtual patients). Simulation studies using the Virtual Population have been used to establish RF safety in regulatory submissions for scores of devices [90].

(e) Training machine learning algorithms using virtual cohorts

Virtual cohorts can be combined with machine learning approaches to generalize knowledge gained in virtual cohorts to future patient groups. In this setting, virtual cohorts have the ability to generate the high volumes of data required by machine learning and deep learning models that are otherwise difficult, expensive or impossible to obtain in experimental/clinical environments.
This provides a way of generating low-cost, high-volume synthetic data to initialize machine and deep learning models, which can then be retrained on potentially smaller but more relevant datasets. At the same time, machine learning models can mine the data generated by virtual cohort models and convert it into actionable knowledge for decision-making in the future. Several challenges exist in this process. How do we address the discrepancy between the virtual cohort models and reality? How do we introduce sufficient variation in the virtual cohorts such that the derived machine learning models generalize well when applied? Rapid advances in related machine learning concepts such as transfer learning and domain adaptation are likely to help resolve these challenges, as demonstrated in recent work [64,91].

(f) Virtual trials replacing clinical trials

The above applications use virtual cohorts to improve trial design (including whether to perform a clinical trial at all), reduce the size of a trial, provide evidence when a clinical trial is not possible or develop new algorithms. Ultimately, the holy grail for virtual cohorts is to replace clinical trials that are currently used to establish the safety and efficacy/effectiveness of medical products. The current exponentially increasing cost of bringing medical products to market demonstrates the urgent need for cheaper, more efficient (but equally reliable) methods; computational modelling provides one potential solution [92]. Should virtual cohorts become successful in the above applications, it may become feasible for some clinical trials to be replaced by VTs. However, the current limited use of virtual cohorts in the above applications demonstrates that we remain far from this ambitious goal. The numerous challenges described throughout this paper, both related to cohort development and validation, will need to be robustly addressed, as will other challenges, for example the social challenge of ensuring public confidence in such approaches.

6. What is needed to extend the application of virtual cohorts of cardiac models

(a) Creating member template models with a hierarchy of complexity

The most detailed member models should be based on the best available imaging of the target organ and should include a detailed multiscale representation of all involved physiological properties, with proper description from the (sub)cellular to the whole-organ level. However, we should also have a hierarchy of models for the same patients, with lower spatial accuracy and a more generic description of physiological properties. The type of model used should be dictated by its specific application. The creation of top-end models still remains challenging, due to problems with proper data collection and also due to insufficient understanding of some underlying physiological processes.

(b) Omics-driven models

In view of the huge amount of genomics and proteomics data now available, it would be useful to connect cardiac model parameters to the characteristics measured from omics. One of the most straightforward ways to do this would be to use widely available mRNA expression data and tune the conductivities of the corresponding ion channels [93]. Another interesting approach was recently proposed in [94], which uses a novel methodology of Cap Analysis of Gene Expression to tune model parameters to patient-specific data. In addition, as we obtain more and more data on cell regulatory systems, it would be good to add such data to electrophysiological models, as was done, e.g., by Tan et al. [95] for cardiomyocyte mechano-signalling.
It would be good to extend similar approaches to other regulatory systems.

(c) Modelling tissue substrate

The fine structure of cardiac tissue is very complex and heterogeneous, and its features have a significant impact on heart function. Usually, the micro-anatomical organization of cardiac myocytes is modelled by spatial fields of cardiac fibre-sheet orientations. The conduction system, in particular the Purkinje network (PN), is another structure that critically influences cardiac function [96]. Ex vivo imaging provides valuable information that has motivated the development of different rule-based models for both fibre-sheet fields and the PN. Unfortunately, these rule-based models are not patient-specific, and recent studies revealed that uncertainties in fibre-sheet fields [97,98] and in PN models [99] considerably impact the results of cardiac simulations. Fibrosis is present in many cardiovascular diseases, and it is known to participate as both trigger and substrate of arrhythmias. Many proof-of-concept studies have shown the importance of the cell-scale, intricate pattern of fibrosis [100,101]. However, today's non-invasive techniques only provide coarse-grained information about its location and shape. In the absence of fine-grained characterization of fibrosis, the amount of uncertainty significantly increases and challenges patient-specific modelling. In [29], to evaluate the pro-arrhythmic nature of a fibrotic region, the construction of a patient-specific model involved a collection of 500 biventricular models, each one representing a different yet possible cell-scale pattern of the patient's fibrotic region. Fortunately, emerging imaging techniques are expected to contribute patient-specific information about the fine structure of cardiac tissue [102].

(d) Publicly accessible virtual cohorts

The creation of patient-specific cardiac models requires access to patient data, access to tools to process the data and access to the compute resource required to run simulations to fit the model to the data. These all represent barriers to research groups developing patient-specific models, using virtual-patient cohorts or creating software to process and analyse simulation outputs. Recent interactive tools exploiting the computational power of relatively low-cost graphics cards [24] are designed to increase accessibility and can form the basis of a virtual-patient workflow. While repositories have been created for sharing patient data on public databases, for many groups this is not possible due to the use of historic data, data policies or questions of data ownership. However, fully anonymized computational models of patients' hearts that contain no clinical data present a lower barrier to public sharing. The publishing and sharing of virtual cohorts of cardiac models, analogous to the successful approach adopted in cardiac cell modelling, will accelerate the development and adoption of virtual cohorts of cardiac models.

7. Discussion and conclusion

The development of detailed biophysical virtual patient cohorts of cardiac models is an area of great potential but with many technical challenges. Interacting with industry and regulators will be important for the translation of virtual cohorts of patients into industrial and clinical tools. Modelling applications, including physiologically based pharmacokinetic models, provide an exemplar process for how to develop and validate models for use in clinical applications.
Similarly, the use of models by industry in the early, less heavily regulated phases of device, drug or product development will provide a real-world context for developing and applying virtual patient cohorts. The increased complexity and computational cost in moving from single-patient to multiple-patient modelling studies will require the creation of shared community resources. Improved access to simulation software, virtual cohorts of patients and model personalization workflows will facilitate the development and adoption of this modelling approach, and also reduce the number of projects that are forced to re-invent the wheel. Virtual cohorts of cardiac models provide a low-cost tool for quantifying the impact of patient variability on physiology, pathophysiology and treatments. The ability to perform low-cost simulations over a meaningful representation of a patient population is an important step in the translation of computational models of the heart into industrial and clinical applications.

Disclaimer. The mention of commercial products, their sources, or their use in connection with material reported herein is not to be construed as either an actual or implied endorsement of such products by the Department of Health and Human Services.

Data accessibility. This article has no additional data.

Authors' contributions. All authors contributed to the conception, writing and editing of this paper.

Competing interests. We declare we have no competing interests.
Mode Transition of Filaments in Packed-Bed Dielectric Barrier Discharges

We investigated the mode transition from volume to surface discharge in a packed-bed dielectric barrier discharge reactor by a two-dimensional particle-in-cell/Monte Carlo collision method. The calculations are performed at atmospheric pressure for various driving voltages and for gas mixtures with different N2 and O2 compositions. Our results reveal that both a change of the driving voltage and of the gas mixture can induce a mode transition. Upon increasing voltage, a mode transition from hybrid (volume + surface) discharge to pure surface discharge occurs, because the charged species can escape much more easily to the beads and charge the bead surface, due to the strong electric field at high driving voltage. This significant surface charging will further enhance the tangential component of the electric field along the dielectric bead surface, yielding surface ionization waves (SIWs). The SIWs will give rise to a high concentration of reactive species on the surface, and thus possibly enhance the surface activity of the beads, which might be of interest for plasma catalysis. Indeed, electron impact excitation and ionization mainly take place near the bead surface. In addition, the propagation speed of the SIWs becomes faster with increasing N2 content in the gas mixture, and slower with increasing O2 content, due to the loss of electrons by attachment to O2 molecules. Indeed, the negative O2− ion density produced by electron impact attachment is much higher than the electron and positive O2+ ion densities. The different ionization rates of N2 and O2 gases will create different amounts of electrons and ions on the dielectric bead surface, which might also have effects in plasma catalysis.

Introduction

Plasma catalysis is gaining increasing interest for various environmental applications, such as gaseous pollutant removal, the splitting of CO2, hydrogen generation and O3 production [1-9]. Plasma catalysis can be regarded as the combination of a plasma with a catalyst, and often results in improved performance, in terms of selectivity and energy efficiency of the process. Plasma is an ionized gas, consisting of various reactive species, like electrons, positive ions, negative ions and radicals. These reactive species are created by applying a potential difference to a gas. The gas itself remains at room temperature, which is beneficial in terms of energy saving compared to classical thermal catalysis. Plasma-based pollutant removal and gas conversion have recently gained increased attention, being possible alternatives for chemical reactors. The production of new molecules in the plasma can be selective when a catalyst is added to the plasma. Plasma catalysis can be realized by introducing dielectric packing beads (coated with catalyst material) in the discharge gap, forming a packed-bed dielectric barrier discharge (PB-DBD) reactor. The DBD generally occurs in a filamentary mode upon applying a high driving voltage [10-14], which induces a very fast ionization avalanche propagating from the powered electrode to the grounded electrode, i.e., a so-called streamer [12,13,15-19]. Each streamer starts when the driving voltage passes a certain threshold, and will further polarize the dielectric surface [20].
The ionization avalanches can occur either within the bulk plasma (so-called volume discharge) or along the surface of a dielectric (so-called surface discharge), sustained by electron impact surface ionization waves (SIWs) [21]. SIW discharges operating in N2 at low pressure (5-100 Torr), created by high-voltage (10-15 kV) nanosecond pulses and traveling along a dielectric surface or a liquid surface, have been experimentally investigated by Intensified Charge Coupled Device (ICCD) camera images [22], with measured propagation speeds of ∼5 × 10^5 m/s. Furthermore, a SIW discharge in hydrogen was experimentally reported, by measuring the time-resolved electric field at specific positions, on tens-of-nanoseconds pulse timescales, using a picosecond four-wave mixing technique [23]. The results showed that the discharge developed as a SIW propagating along the dielectric surface at an average speed of ∼10^6 m/s, with a maximum electric field of ∼2.3 × 10^6 V/m [23].

On the other hand, volume discharges can be sustained by filamentary microdischarges (MDs) in a PB-DBD reactor, by limiting the dimensions at atmospheric pressure, following the famous Paschen law [24-26]. A filamentary MD may be tens of micrometers across and can exist for a few nanoseconds [27], yielding a high concentration of reactive species in a narrow gap, which is beneficial for pollutant remediation. Furthermore, filamentary MDs can co-exist with SIWs along the surface of the dielectric beads in a PB-DBD reactor under proper conditions [28,29]. This may yield high concentrations of chemically active species and radicals on the surface, due to electric field enhancement along the dielectric surface, compared to an unpacked DBD.

The generation of reactive species at catalyst surfaces and in the gaps between the packing beads is very important for plasma catalysis. Therefore, it is crucial to understand the properties of surface and volume discharges and their formation mechanisms. However, only a few theoretical studies have been performed for PB-DBD reactors [11,29-37]. Russ et al. [30] adopted a two-dimensional (2D) hydrodynamic theory to study transient MDs in a PB-DBD reactor for a gas mixture of 80% N2, 20% O2 and 500 ppm NO, with 23 species and 120 plasma reactions. However, this work was limited to a one-directional discharge, without considering discharge channels in the voids of the packed beads. Kang et al. [31] implemented a 2D fluid model to investigate the typical properties of MDs in a ferroelectric PB-DBD, focusing only on the surface discharge with strong electric field for two packing beads. Chang et al. [32] studied a ferroelectric packed-bed N2 plasma for a spherical void between two pellets, based on the 1D Poisson equation and transport equations. This work focused on the electron density, electron temperature and electric field as a function of the applied voltage, the discharge gap size and the dielectric constants of the packing beads.
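For reference, the Paschen law invoked above is commonly written in its standard textbook form (not reproduced from this paper) as

$$ V_b = \frac{B\,p\,d}{\ln(A\,p\,d) - \ln\!\left[\ln\!\left(1 + 1/\gamma_{se}\right)\right]} $$

where $V_b$ is the breakdown voltage, $p$ the pressure, $d$ the gap distance, $A$ and $B$ gas-dependent constants, and $\gamma_{se}$ the secondary electron emission coefficient. The minimum of this curve explains why breakdown in the micrometre-scale voids of a packed bed at atmospheric pressure behaves differently from breakdown in a wide gap.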
In our previous work, we employed a kinetic 2D particle-in-cell/Monte Carlo collision (PIC/MCC) model to investigate the formation and propagation of a filamentary MD in a PB-DBD, and the results were compared to an unpacked DBD [11]. Van Laer and Bogaerts developed a comprehensive 2D fluid model for a PB-DBD in helium [33-35]. They reported that a packing enhanced the electric field strength and electron energy near the contact points of the dielectric beads. Microplasma discharges in humid air in a PB-DBD reactor were experimentally characterized through ICCD imaging and numerically simulated with the 2D multi-fluid simulator nonPDPSIM, with a negative driving voltage at atmospheric pressure [29]. The behavior of three kinds of discharge modes, including positive streamers (restrikes), filamentary MDs and SIWs, was reported in their work. Wang et al. [36] studied the microplasma characteristics by means of fluid modelling and experimental observations using ICCD imaging, for packing materials with different dielectric constants, in dry air at atmospheric pressure. They also studied the same three types of discharges predicted in Ref. [29]. In addition, they reported a transition in discharge mode from surface discharges to local filamentary discharges upon increasing the dielectric constant of the packing from 5 to 1000. Finally, Kang et al. [37] experimentally and numerically investigated surface streamer propagation on an alumina bead, and predicted three distinct phases, i.e., the generation and propagation of a primary streamer with a moderate electric field and velocity, rapid primary streamer acceleration with an enhanced electric field, and slow secondary streamer propagation.

Although several studies have been performed to better understand the microplasma properties in a PB-DBD reactor, as outlined above, the interaction mechanisms between the plasma and the catalyst are still poorly understood. Both physical and chemical effects can play a role in this interaction, and this will affect the discharge types and properties. Therefore, in the present work, we investigate the filament formation and the mode transition between volume and surface discharges, as a function of driving voltage and O2/N2 mixing ratio at atmospheric pressure, using a 2D PIC/MCC model in a micro-gap PB-DBD reactor. Specifically, we aim to obtain a better understanding of the production mechanisms and concentrations of reactive species sustained by the different modes.

Results and Discussion

The spatial and temporal evolutions of the species densities, the electric field, and the excitation and ionization rates are presented in this section, including the whole process of filament formation and transition to SIWs.
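Before turning to the results, the following sketch illustrates, in deliberately simplified form, the two building blocks of a PIC/MCC scheme: a particle push in an electric field and a null-collision Monte Carlo step. All numerical values and the cross-section are invented placeholders; the actual model used here is 2D and solves the self-consistent field, which this sketch does not.

```python
import numpy as np

rng = np.random.default_rng(seed=4)

# Schematic 1D electron push with a null-collision Monte Carlo step.
QE, ME = 1.602e-19, 9.109e-31
E_FIELD = -2.0e7            # V/m, order of the fields reported below
N_GAS = 2.5e25              # m^-3, roughly atmospheric pressure
DT = 1.0e-13                # s

def sigma_total(energy_ev):
    """Invented energy-dependent total cross section (m^2)."""
    return 1.0e-20 * (1.0 + 0.1 * energy_ev)

x = np.zeros(1000)                        # electron positions (m)
v = rng.normal(0.0, 4.0e5, size=1000)    # electron velocities (m/s)

SIGMA_MAX = sigma_total(100.0)            # upper bound for null collisions

for _ in range(200):
    # Push in a fixed field (a real PIC code would solve Poisson's
    # equation each step for the self-consistent field).
    v += (-QE / ME) * E_FIELD * DT
    x += v * DT

    # Null-collision method: test collisions at a constant maximum
    # frequency, then accept real ones with probability sigma/sigma_max.
    energy_ev = 0.5 * ME * v**2 / QE
    p_coll = 1.0 - np.exp(-N_GAS * SIGMA_MAX * np.abs(v) * DT)
    hit = rng.random(v.size) < p_coll
    real = rng.random(v.size) < np.minimum(
        sigma_total(energy_ev) / SIGMA_MAX, 1.0)
    # Crude isotropic elastic scatter for accepted collisions.
    scatter = hit & real
    v[scatter] *= rng.choice([-1.0, 1.0], size=scatter.sum())

print("mean position (m):", x.mean())
print("mean electron energy (eV):", (0.5 * ME * v**2 / QE).mean())
```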
Effect of Driving Voltage

It has been reported that the SIW properties depend on both the direction and magnitude of the driving voltage and on the surface properties [21]. In Figures 1-7, we present the effect of the driving voltage on the density of the reactive species and on the electric field for a dry air discharge (so only a mixture of 80% N2 and 20% O2, no other air components). Figure 1 shows the electron density for four different driving voltages, i.e., −5 kV (a1-a4), −10 kV (b1-b4), −20 kV (c1-c4) and −30 kV (d1-d4), and four different time intervals. Note that the different time intervals at the various driving voltages are based on the filament formation times under the different conditions. The driving voltages are all above the breakdown voltage of air (∼3 kV/mm) in glow mode. Since a filamentary discharge needs a higher breakdown voltage than a glow discharge, no filament occurs at −5 kV, so only the seed electron density (∼10^18 m^-3) is shown in Figure 1a1-a4, which corresponds with the values in Ref. [29]. A limited, volumetrically sustained filament appears at −10 kV, with an average speed of 3 × 10^5 m/s and a maximum electric field of 8.6 × 10^7 V/m. For −20 kV, the filament is formed in the gap at around 0.25 ns, with an average speed of 2 × 10^6 m/s. Subsequently, the SIWs develop and start dominating the discharge. The SIWs can be further classified into downward- and upward-propagating modes. The downward-directed SIWs are formed in the time interval of 0.25-0.6 ns, with an average velocity of 2.2 × 10^6 m/s, while the upward-directed SIWs develop in the time range of 0.6-0.8 ns, with an average velocity of 2 × 10^6 m/s. It is worth noting that a multi-channel filamentary MD also develops with the upward-directed SIWs, i.e., the plasma extends away from the dielectric surface. However, the maximum density in the MD is nearly two orders of magnitude lower than in the SIWs, as shown in Figure 1c4, which indicates that the discharge is mainly governed by the SIWs. For the driving voltage of −30 kV, the filament is formed and sustained in the time range of 0-0.2 ns, as nearly pure SIWs propagating downward along the surface with a high speed of 5 × 10^6 m/s, while no upward-directed SIW develops. There is obvious asymmetry in the electron density for −30 kV, due to the inherent statistical character of the streamers.

Therefore, as the driving voltage increases, a mode transition takes place: the filament is mainly sustained by an MD at low driving voltage (−10 kV), it becomes a combination of a limited portion of MD and a significant SIW at moderate driving voltage (−20 kV), and finally it is almost completely dominated by SIWs. The physical mechanism is that the charged species rapidly escape to the dielectric surface under the effect of the high driving voltage (strong electric field) and charge the dielectric. The surface charging will induce a strong tangential component of the electric field along the dielectric surface, and lead to the formation of SIWs. The SIWs will in turn yield high concentrations of reactive species, as well as a high electron density, on the surface, which will further charge the dielectric, and gradually form a filament along the dielectric surface. This filament may increase the surface activity of the beads, which is probably beneficial for plasma catalysis.
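As a rough consistency check on the reported speeds and time windows, assuming the downward-directed SIW traverses roughly half of a bead circumference (bead diameter 508 µm, as given later in the text):

$$ d \approx v\,\Delta t = (2.2 \times 10^6\ \mathrm{m/s})(0.35\ \mathrm{ns}) \approx 0.77\ \mathrm{mm}, \qquad \pi \times 254\ \mu\mathrm{m} \approx 0.80\ \mathrm{mm}, $$

so the travelled distance is indeed of the order of half a bead circumference. This is only an order-of-magnitude estimate, since the actual path in the 2D geometry is more complicated.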
Our results are consistent with the theoretical and experimental predictions in Refs. [22,23,29,36]. For −10 kV, the average speed of the volume streamer is 3 × 10^5 m/s, which is of the same order as in Refs. [29,36] (∼5 × 10^5 and ∼10^5-10^6 m/s). For −20 kV, both volume and surface filaments co-exist. Therefore, the filament can propagate both in the gaps and on the surface of the dielectric beads. This trend is in good agreement with recent experimental observations by fast-camera imaging for numerous packing beads with a dielectric constant of 25. Once the SIWs are initiated in the PB-DBD, we calculate a propagation speed of the SIWs of around 10^6 m/s, which is of the same order as in the literature (∼5 × 10^5 and ∼10^6 m/s) [22,23]. The reactive species are highly concentrated on the surface due to the presence of SIWs along the catalyst surface, which agrees with experimental observations in the literature [22,36].

Figures 2-4 show the densities of the N2+, O2+ and O2− ions, respectively, for the same conditions as in Figure 1. As shown in Figure 2, the N2+ ion density profile is a bit different from the electron density distribution at −20 kV, i.e., with a higher density in the voids between the beads. The maximum N2+ ion density is a bit higher than the electron density, due to the loss of electrons by attachment to O2 molecules. However, the N2+ ion density profile becomes very similar to the electron density distribution at −30 kV, because the discharge is then dominated by pure surface discharges. This indicates that electron attachment is not significant for a pure surface discharge. Figure 3 clearly shows that the O2+ ions are mainly formed along the dielectric surface by the SIWs, and the maximum density is about 20 times lower than the maximum electron density for all driving voltages, because it is more difficult for the discharge to become sustained in O2 gas [38]. This can be justified by the ionization rate shown later in Figure 5. Figure 4 presents the O2− ion density distribution. The O2− ions are generated by electron attachment; thus, the space and time distributions of the O2− ion density are nearly the same as for the electron density, but the maximum density is about two times smaller than the maximum electron density, due to the smaller fraction of O2 in air. Therefore, in total, the electrons and N2+ ions are present both along the dielectric surface and in the gaps between the beads, whereas the O2+ and O2− ions only concentrate along the dielectric surface at −20 kV. The difference in spatial distributions of the charged reactive species may affect the catalytic selectivity, which is important for plasma catalysis, as the interaction between the charged species and the surface might influence the morphology and work function of the catalyst, and accordingly affect the catalyst performance. However, the exact influence cannot be deduced from our model and can be quite complicated, which is beyond the scope of this work.

It has been reported [11,33,36,39] that the local electric field can be enhanced near the contact points and at the boundaries between the beads and the dielectric layers. This is because the dielectric beads and plates are strongly polarized by the applied electric field, which reduces the potential drop in the dielectric, but increases the potential drop in the gas gap. However, most previous works only focus on the local electric field in the gas gap [29,40]; they do not study the variation and influence of the electric field inside the dielectric materials.
Figure 6 shows the electric field amplitude for the same conditions as in Figure 1, and Figure 7 shows the vertical and horizontal electric field components for −20 kV (at 0.8 ns) and −30 kV (at 0.2 ns). As clearly seen from Figures 6 and 7, the presence of the dielectric beads and plates induces a sharp boundary with the gas phase, which strongly enhances the displacement electric field inside the dielectric material near this boundary. As a result, the electric field on the surface and in the voids of the packing beads is also enhanced, due to the significant charge accumulation in the narrow gaps (see Figures 1-4 above). Indeed, the local electric field inside the vertical gaps between two beads (or between a bead and a plate) is enhanced so much that the maximum value is much larger than the average external electric field, even for the lowest driving voltage of −5 kV without breakdown. The maximum values, presented in Figure 6, stay constant when there is no breakdown, but they increase with time when there is a discharge.

The electric field is more enhanced in the vertical gaps between the dielectric beads than inside the materials for the −10 kV case, due to the very limited filamentary MD, as displayed in Figure 1b. On the other hand, at −20 kV and −30 kV, the electric field on and near the surface of the dielectric materials is significantly enhanced, due to the high concentration of charged species on the catalyst surfaces (see Figures 1-4 above). In addition, as shown in Figure 6c4, the electric field is also clearly enhanced in the head of the upward-directed SIWs (noted in Figure 1c4 above), where the electric field is characterized by an overlap of both vertical and horizontal components. Behind the SIW head, the vertical component rapidly drops, as shown in Figure 7a1. Moreover, at −30 kV, the electric field has its maximum values on the surface of the beads and plates, and the values gradually decrease away from the surface boundary, because of the pure surface discharge mode. The high electric field on the dielectric bead surface is important for plasma catalysis [8], because it can induce locally stronger electron impact reactions, and thus higher reaction rates and a higher plasma density. The SIW discharges therefore play an important role in a PB-DBD reactor for plasma catalysis applications. The physical mechanism is related to the high concentration of reactive species on the surface of the beads, which yields tangential components of the electric field, and the latter in turn enhance the SIWs.

We calculated the tangential component of the electric field, Ex, to be around 2 × 10^7 V/m, as shown in Figure 7b2, which is larger than the value of 6 × 10^6 V/m from the literature [29]. The difference is mainly due to the much smaller discharge gap of 1 mm and bead size of 508 µm (bead diameter) used in this work, compared to the large discharge gap of 1 cm and bead size of 1.8 mm (bead diameter) in Ref. [29], as the same driving voltage will induce a much stronger electric field in a smaller gap. In addition, we consider ZrO2 dielectric beads with a dielectric constant of 22, which yields stronger polarization than for the quartz material with a dielectric constant of 4 used in Ref. [29]. This stronger polarization also induces a higher electric field in the gap.
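The gap-size argument can be made concrete with simple arithmetic (an illustrative estimate on our part, ignoring the local field enhancement by the beads): the average field is the applied voltage divided by the gap length,

\[
E_{\mathrm{avg}} = \frac{V}{d}: \qquad \frac{20\,\mathrm{kV}}{1\,\mathrm{mm}} = 2\times10^{7}\,\mathrm{V/m}, \qquad \frac{20\,\mathrm{kV}}{1\,\mathrm{cm}} = 2\times10^{6}\,\mathrm{V/m},
\]

so a tenfold reduction in gap size alone accounts for most of the difference between our tangential field and the value of Ref. [29], with the higher dielectric constant of ZrO2 contributing further enhancement.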
In order to elucidate the mechanisms giving rise to the SIWs on the dielectric surfaces, we need to distinguish where the reactive species are generated, i.e., on the dielectric surfaces or in the gap. To clarify this, the electron impact excitation rate (a1,b1) and ionization rate (a2,b2), for the N2 (a1,a2) and O2 (b1,b2) components in the mixture, are plotted in Figure 5, for 0.8 ns and a driving voltage of −20 kV.

As seen in Figure 5, both excitation and ionization mainly take place on the surface of the beads. The maximum excitation rate is about one order of magnitude higher than the maximum ionization rate, due to the lower threshold energy needed for excitation reactions (see Table 1 below). The excitation rate for the N2 component is almost uniformly distributed over all surfaces, and fills the gaps between beads 3, 4 and 5, indicating that the excited N2* molecules will be fairly uniformly distributed on these surfaces and in the gaps. The ionization rate for the N2 component is locally enhanced on the top surface of dielectric bead 4 and on the bottom dielectric plate, due to the high local electric field there, which corresponds to the higher N2+ ion density at these positions, shown in Figure 2c4. Although the excitation and ionization rates for the N2 component spread out from the surface and cause restrikes [29], which may further give rise to a multi-channel filamentary MD (see Figure 1c4 above), the maximum values in the restrikes are about one order of magnitude smaller than on the surface, as shown in Figure 5a1-a2. This again demonstrates that the SIWs are the main discharge mode at the driving voltage of −20 kV, while the MDs sustained by column-like filaments are more or less negligible.

Table 1. Reaction set of the N2/O2 gas mixture used in the model, as well as the threshold energies for the electron impact ionization and excitation collisions. The cross sections for all reactions are adopted from Refs. [41-44], and are downloaded from the LXCat database [45].

Reactions | Threshold Energy (eV)
Elastic: e + N2 → e + N2 | -
Elastic: e + O2 → e + O2 | -
Excitation: e + N2 → e + N2* | (various levels)
Excitation: e + O2 → e + O2* | (various levels)
Ionization: e + N2 → 2e + N2+ | 15.58
Ionization: e + O2 → 2e + O2+ | 12.06
Attachment: e + O2 → O2− | 0

On the other hand, both the excitation and ionization rates for the O2 component show discontinuous distributions on the bead surfaces, except for the top surface of the central bead 4, as shown in Figure 5b1-b2. This indicates that the production of the excited O2* molecules and the O2+ ions is not a continuous process, but rather a discrete one, as predicted in Ref. [29]. This discontinuous ionization rate leads to a rather discretely distributed O2+ ion density (see Figure 3 above). The difference between the ionization rates for the N2 and O2 gas components may be attributed to the different mean free paths for ionization collisions. The mean free path for ionization collisions with N2 is about 250 µm (hence similar to the radius of the beads, i.e., 254 µm), while it is ∼2 mm (similar to the perimeter of the beads) for O2. A shorter ionization mean free path results in more intensive ionization collisions, whereas an ionization mean free path longer than the bead radius and the gap distance between the beads results in weaker ionization and may yield a discrete distribution, as shown in Figure 5b2. Although the densities of the excited species (N2* and O2*) are not displayed in this work, the electron impact excitation rate reveals that these excited species are highly concentrated on the surface. The electron impact ionization rate profile can be considered as evidence for SIW formation on the surface.
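As a consistency check (our own estimate, not an output of the model), the ionization mean free path follows from the gas number density and an effective ionization cross section, λ = 1/(n σ). At atmospheric pressure and 300 K, the total number density is n ≈ 2.45 × 10^25 m^−3, so for the 80% N2 fraction,

\[
\lambda_{\mathrm{N_2}} = \frac{1}{n_{\mathrm{N_2}}\,\sigma_{\mathrm{ion}}} \approx \frac{1}{(0.8 \times 2.45\times10^{25}\,\mathrm{m^{-3}})\times(2\times10^{-22}\,\mathrm{m^2})} \approx 250\,\mu\mathrm{m},
\]

where the effective cross-section value of ∼2 × 10^−22 m^2 is an assumed order of magnitude appropriate for electron energies not far above the ionization threshold. The roughly tenfold longer O2 value then reflects the smaller O2 fraction in air combined with its lower effective ionization cross section at the local electron energies.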
Effect of the Gas Composition

We vary the O2/N2 mixing ratio in this subsection, to obtain better insight into the different character of electropositive and electronegative gases. The effect of the gas composition on the production of MDs inside porous ceramics was investigated experimentally by Hensel et al. [38], using photographic visualization and optical emission spectroscopy. It was found that a higher O2 content resulted in a redistribution of the MD channels inside the porous ceramics [38], while the total number of MD channels was reduced, since the breakdown voltage was increased. Their results can be partially correlated to the excitation and ionization rates, which are stronger in N2 than in O2, as shown in Figure 5 above. Therefore, it is important to understand the effect of the gas composition on the mechanisms of filament formation and mode transition between volume and surface filaments in a PB-DBD reactor, which has not been reported before.

In this section, we demonstrate that the gas composition is a critical parameter for the mode transition. Figure 8 shows the electron density for the different gas compositions. The filament is initiated from t = 0 with a few seed electrons. The filaments travel a similar distance within 0.2 ns, and arrive at the central bead 4 at 0.25 ns, for the different gas compositions, except in the cases of 10% O2 and 100% O2. However, after the filaments reach the central bead, their propagation speed becomes faster in N2 gas and gradually slower with increasing O2 content, as shown in Figure 8 at 0.4 and 0.7 ns. Indeed, the average speed of the filaments is 2 × 10^6 m/s during the first 0.25 ns, while during 0.25-0.4 ns, the average speed is 3 × 10^6 m/s for pure N2 gas, indicating the fastest SIW, 2.3 × 10^6 m/s for 10% O2, 2 × 10^6 m/s for 50% O2, and 1.7 × 10^6 m/s for pure O2.

The physical mechanism is that the discharge is more difficult to sustain in an electronegative gas, due to the loss of electrons by attachment to O2 molecules; thus it needs a higher breakdown voltage and exhibits a slower discharge evolution. This feature was experimentally confirmed by Hensel et al. [38], through measurements of the breakdown voltage in N2/O2 gas mixtures. Therefore, for the same driving voltage, the discharge can more easily be created in N2 than in O2, resulting in gradually slower propagation speeds of the filaments for higher O2 fractions in the mixture.

For pure N2 gas, our calculations predict a dominant surface discharge with a negligible volume discharge. Indeed, the maximum electron density (1.51 × 10^24 m^−3) on the surface is approximately five orders of magnitude higher than in the gap (3.51 × 10^19 m^−3). The electron density is almost uniformly distributed over the bead surfaces after the filament reaches the central bead 4.
The maximum electron density on the bead surfaces is represented by the red color in Figure 8. It is smooth for the first two cases, indicating that the electron density is indeed uniformly distributed, while it becomes discontinuous with increasing O2 content. However, the maximum electron density still occurs on the surface and not in the gap, giving rise to a high concentration of electrons on a specific part of the surface in pure O2. Again, this shows that the SIWs can produce a high density of charged species on the surface of the beads, which might be beneficial for gas treatment by plasma catalysis [46]. Finally, while adding more O2 to the mixture causes the SIWs to become discontinuous, the gas mixture barely affects the volume discharge, as seen in Figure 8.

Figure 9. N2+ ion density (m^−3) for the same conditions as in Figure 8.

Figure 10. O2+ ion density (m^−3) for the same conditions as in Figure 8.

The N2+ ion density profile (Figure 9) is similar to the electron density profile, but the maximum N2+ ion density is two times larger than the electron density, again due to the loss of electrons by attachment to the O2 molecules. The number of lost electrons can be determined from the O2− ion density (see Figure 11 below). The maximum N2+ ion density (2.91 × 10^24 m^−3) is always found on the surfaces, and the speed of the N2+ ion filament is almost the same as for the electron filament discussed above.

The O2+ ion density exhibits different profiles with increasing O2 content, as shown in Figure 10. The O2+ ion density is at least two times smaller than the electron density, for the same reason as above, i.e., it becomes harder for the discharge to be created at high O2 content for a fixed driving voltage, resulting in a lower O2+ ion density. As a result, the O2+ ion density is at least four times smaller than the N2+ ion density at equal gas fractions (50%/50% mixture). In pure O2, the propagation speed of the SIW is about two times slower than in pure N2, and no MD channel is formed above the surface of beads 3 and 5, due to the absence of an upward-directed SIW.

Figure 11. O2− ion density (m^−3) for the same conditions as in Figure 8.

The O2− ions (Figure 11) are again mainly formed on the surface by the SIWs, with a maximum value of 4.01 × 10^24 m^−3 in 100% O2, which is 2.6 times higher than the maximum electron density and 5.8 times higher than the maximum O2+ ion density, indicating that electron attachment is quite significant. Indeed, attachment is easier than ionization, because it requires no threshold energy, while the ionization threshold is 12.06 eV for O2 and 15.58 eV for N2. Therefore, the O2− ion density sharply increases with rising O2 content, as expected.

To summarize, the O2+ and O2− ion densities are enhanced, while the electron and N2+ ion densities drop on the bead surface, for higher O2 content, as expected. This might affect the surface reactions in plasma catalysis [47,48].
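The trends in this subsection can be compactly summarized with the standard effective ionization picture for electronegative gases (a textbook relation we add here for context, not a quantity computed by the model): the net electron production per unit length is governed by

\[
\alpha_{\mathrm{eff}} = \alpha - \eta,
\]

where α is the Townsend ionization coefficient and η the attachment coefficient. Raising the O2 fraction increases η while the ionization is largely carried by the (shrinking) N2 contribution, so α_eff drops; this manifests itself as the higher breakdown voltage, the slower filament propagation, and the growing O2− population described above.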
Model Assumptions

The dimensions of the whole reactor are 1.65 mm in the y direction and 10 mm in the x direction. The discharge is sustained between two parallel plate electrodes covered by two dielectric layers of 0.3 mm thickness, separated by a gap of 1 mm in the y direction. Dielectric beads are inserted in the gap, forming a packed-bed reactor. The dielectric constant of the layers and the beads is εr = 22, characteristic for ZrO2. A schematic illustration of the reactor is presented in Figure 12, showing in the x direction only the central part including the dielectric beads, but the entire simulation region extends from 0 to 10 mm in the x dimension. The dielectric layers and beads are colored in blue. We consider five dielectric spheres with diameter d = 508 µm. In order to leave some gas space for filament formation between the packing beads, these five beads are packed in a non-strict spherical packing manner [11], with a distance between adjacent bead centers of (0.1 + √3/2)d in the y dimension and 1.1d in the x dimension. Note that some gaps are present between the dielectric beads in Figure 12. This is intentional, to allow filament formation. Indeed, streamers cannot propagate in a 2D system without gaps between the beads, while they can propagate either along the bead surfaces or through the gaps between the beads in a 3D case. We thus assume a certain gap between the beads, to allow the streamers to propagate in our 2D model. Furthermore, even in experimental (3D) packed-bed reactors, there are some gaps between the dielectric beads, i.e., the cross sections between the beads are similar to the 2D model when many beads are packed together, as the beads cannot touch each other perfectly. This can be observed in Ref. [49]. On the other hand, it would require extremely long calculation times to model a PB-DBD with 5 beads in an entire 3D geometry, owing to severe mesh requirements and a very large number of macroparticles. Therefore, we consider a 2D model. The upper electrode (y = 1.65 mm), considered as the cathode, is powered by a pulsed voltage with a rise time of 0.1 ns, after which the voltage is kept constant for the duration of the simulation. The lower electrode (anode; y = 0 mm) is grounded. The dielectric surfaces are considered as absorbing boundaries for the reactive plasma species, i.e., all reactive species are removed from the simulation if they hit the surface of the dielectrics, and they cannot participate in the discharge anymore. The absorbing boundary condition is a reasonable and widely used boundary condition in PIC models [50]. When charged species are absorbed on the dielectrics, they can emit a secondary electron, and they deposit their charge on the dielectric surface. The deposited charge is determined by the charge of the ions striking the dielectric surface and by the secondary electron emission charge and coefficient, accounting for the formalism

\[
Q_D = Q_{\mathrm{ion}} + Q_{\mathrm{se}}.
\]

Here, Q_D is the deposited charge with an initial value of zero, Q_ion is the charge of the ions striking the dielectric surfaces of the beads or layers, and Q_se is the charge of the secondary electrons, respectively. The effect of secondary electron emission is self-consistently coupled in the PIC/MCC model, assuming a constant ion-impact secondary electron emission coefficient of 0.15 [29].
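As an illustration of this bookkeeping, the following minimal Python sketch shows how the surface-charge update and secondary electron emission could be implemented. All names are hypothetical and the sign conventions are our own assumption; the actual model relies on the boundary handling of the VSim framework rather than on code like this.

import random

E_CHARGE = 1.602176634e-19   # elementary charge (C)
GAMMA_SE = 0.15              # ion-impact secondary electron emission coefficient [29]

def absorb_ion(surface_charge, cell, ion_charge_number, inject_electron):
    """Absorb one ion macro-particle on a dielectric cell and update Q_D.

    surface_charge: dict mapping surface cell index -> accumulated charge Q_D
    cell: index of the surface cell that was hit
    ion_charge_number: +1 for N2+/O2+, -1 for O2-
    inject_electron: callback that adds a new electron macro-particle to the gap
    """
    q_ion = ion_charge_number * E_CHARGE
    q_se = 0.0
    # Only positive-ion impacts release a secondary electron here (an
    # assumption for this sketch).
    if ion_charge_number > 0 and random.random() < GAMMA_SE:
        inject_electron(cell)
        q_se = E_CHARGE          # the emitted electron leaves +e behind
    surface_charge[cell] = surface_charge.get(cell, 0.0) + q_ion + q_se
    return surface_charge

The accumulated surface charge would then feed back into the Poisson solve as a boundary charge density, which is precisely what drives the tangential fields and SIWs discussed above.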
Photoionization is neglected in this study. Indeed, we found in our previous work [39] that the results are nearly the same with and without photoionization in a gap of tens of µm, which is the typical gap size between the packing beads and between the dielectric layers and the beads in a PB-DBD. Furthermore, the photoionization rate was nearly two to three orders of magnitude lower than the electron impact ionization rate, even in a large gap (hundreds of µm) [29,36]. Indeed, as shown above, the filaments in the present model are mainly sustained by SIWs, and the volume discharges are negligible in a narrow gap of ∼10 µm.

The simulations are applied to a mixture of N2 and O2 at atmospheric pressure, with a temperature of 300 K. Only charged species, i.e., electrons (e) and ions (N2+, O2+ and O2−), are simulated in this PIC/MCC model, as the total ionization degree is typically less than 10%. Filaments are initiated from initial seed electrons emitted from the surface of the top dielectric plate in the region x ∈ [4.9, 5.1] mm. A few seed electrons can be generated by cosmic radiation or external emission. The emission current density is set to 10^5 A m^−2. Subsequently, the filament sustains itself through anode-directed avalanches due to electron impact ionization and secondary electron emission. Dissociation processes, and the behavior of atomic ions, are not included in the model, because their cross sections are relatively small; thus, omitting these reactions barely affects the kinetics of the streamer.

Simulation Method

We developed a 2D PIC/MCC model based on the VSim simulation software [51], which has been widely used and validated [11,39,51]. This PIC/MCC simulation is based on an explicit and electrostatic method, which was first introduced and described in detail in Ref. [52]. The PIC/MCC model has the advantage of accounting for the detailed kinetic behavior of the charged particles, compared to a fluid model. There are four steps in a PIC/MCC cycle: (1) pushing the particle velocities and positions based on the previous electric field; (2) weighting the positions of all charged particles to the grid to obtain the charge density; (3) solving the Poisson equation for the new electric field from this charge density; and (4) using a standard MCC method to describe electron impact elastic collisions, excitations, ionizations and electron attachment.

The particle pushing procedure is described by the Newton equations

\[
\mathbf{v}_p^{\,n+1/2} = \mathbf{v}_p^{\,n-1/2} + \frac{Z_p e}{m_p}\,\mathbf{E}^n\big(\mathbf{r}_p^{\,n}\big)\,\delta t, \qquad
\mathbf{r}_p^{\,n+1} = \mathbf{r}_p^{\,n} + \mathbf{v}_p^{\,n+1/2}\,\delta t.
\]

Here, n represents the nth time step (δt), p represents the electrons (e) and ions (N2+, O2+ and O2−), and r_p, v_p, Z_p and m_p are the particle position, velocity, charge number and mass, respectively. The new electric field at time t^{n+1} is solved from the Poisson equation

\[
\nabla\cdot\big(\varepsilon\,\nabla\phi^{\,n+1}\big) = -\rho^{\,n+1}, \qquad \mathbf{E}^{\,n+1} = -\nabla\phi^{\,n+1},
\]

where ε = εr ε0 is the permittivity, and ρ^{n+1} is the total charge density, given by

\[
\rho^{\,n+1} = \sum_p Z_p\,e\,S\big(\mathbf{r}-\mathbf{r}_p^{\,n+1}\big).
\]

The shape function S in space is chosen to be a first-order b-spline (b_l) function, S(r − r_p) = b_l((r − r_p)/Δr), where Δr stands for the space step in the respective direction. The time step δt fulfills the Courant condition

\[
c\,\delta t < \Big(\frac{1}{\delta x^2} + \frac{1}{\delta y^2}\Big)^{-1/2},
\]

with the light speed c, and the space steps in the x and y directions δx and δy. As mentioned above, the simulation region is 1.65 mm in the y direction and 10 mm in the x direction, and we use a mesh of 1000 × 500 uniform grid points. Dirichlet boundary conditions are adopted in the y direction, and Neumann conditions are used in the x direction.

A standard MCC method [53] is used to account for the electron impact elastic, excitation, ionization and attachment collisions with N2 and O2 gas molecules, as listed in Table 1. The cross sections and threshold energies used for these reactions are adopted from Refs. [41-44] and downloaded from the LXCat database [45].
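To make the four-step cycle concrete, the following self-contained Python sketch implements a toy 1D periodic electrostatic PIC loop with first-order (CIC) weighting and a stub where the MCC step would go. It is purely illustrative: the actual model is 2D, is built on VSim with dielectric boundaries, and all names and numerical values below are our own choices.

import numpy as np

# Toy 1D periodic electrostatic PIC cycle illustrating steps (1)-(4) above.
e = 1.602176634e-19      # elementary charge (C)
eps0 = 8.8541878128e-12  # vacuum permittivity (F/m)
me = 9.1093837015e-31    # electron mass (kg)

ng, L = 256, 1e-3        # grid points and domain length (m)
dx = L / ng
dt = 1e-14               # time step (s); must respect the stability limits
qp, mp, w = -e, me, 1e9  # electron macro-particles with weight w

rng = np.random.default_rng(0)
xp = rng.uniform(0.0, L, 10000)      # positions
vp = rng.normal(0.0, 4e5, xp.size)   # velocities (m/s)

def weight_charge(xp):
    """Step (2): first-order b-spline (CIC) weighting of charge to the grid."""
    rho = np.zeros(ng)
    j = np.floor(xp / dx).astype(int) % ng
    f = xp / dx - np.floor(xp / dx)          # fractional distance to node j
    np.add.at(rho, j, (1.0 - f) * qp * w / dx)
    np.add.at(rho, (j + 1) % ng, f * qp * w / dx)
    return rho

def solve_field(rho):
    """Step (3): spectral Poisson solve, d2(phi)/dx2 = -rho/eps0 (periodic)."""
    k = 2.0 * np.pi * np.fft.fftfreq(ng, dx)
    rho_k = np.fft.fft(rho)
    phi_k = np.zeros(ng, dtype=complex)
    nonzero = k != 0.0
    phi_k[nonzero] = rho_k[nonzero] / (eps0 * k[nonzero] ** 2)
    phi = np.real(np.fft.ifft(phi_k))
    return -np.gradient(phi, dx)             # E = -dphi/dx

def gather(E, xp):
    """Interpolate the grid field back to the particle positions."""
    j = np.floor(xp / dx).astype(int) % ng
    f = xp / dx - np.floor(xp / dx)
    return (1.0 - f) * E[j] + f * E[(j + 1) % ng]

for step in range(100):
    rho = weight_charge(xp)
    E = solve_field(rho)
    vp += (qp / mp) * gather(E, xp) * dt     # step (1): leapfrog velocity push
    xp = (xp + vp * dt) % L                  # ... and position push
    # Step (4) would go here: a Monte Carlo collision module sampling elastic,
    # excitation, ionization and attachment events from the Table 1 cross
    # sections; omitted in this sketch.

In the real 2D model, the spectral solve is replaced by a finite-difference Poisson solver that includes the spatially varying permittivity of the beads and the deposited surface charge.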
We consider only three types of ions, since they are the most important ones, with the lowest ionization thresholds and the largest densities compared to other ions. We consider a simulation time of up to 0.8 ns, which is enough to develop both the volume and surface discharges in a PB-DBD reactor. Thus, the effect of electron-ion and negative ion-positive ion recombination is negligible. Indeed, the recombination reactions have a much longer relaxation time, up to the microsecond time scale, as predicted by Kruszelnicki et al. [29], as they require a sequence of two-body or three-body reactions.

Conclusions

We have applied a 2D PIC/MCC model to study the formation and mode transition of filamentary discharges in a PB-DBD reactor with various driving voltages and N2/O2 gas mixtures at atmospheric pressure. The discharge is sustained between two parallel plate electrodes covered by two dielectric plates, separated by a gap distance of 1 mm. Dielectric beads are inserted in the gap to form a PB-DBD reactor.

As the driving voltage increases, a pure surface discharge gradually dominates, because the charged species can more easily escape to the beads and charge the bead surface, due to the strong electric field at high driving voltage. This significant surface charging enhances the tangential component of the electric field along the bead surface, yielding SIWs. The SIWs, in turn, yield a high concentration of reactive species on the bead surface, which will affect the chemical reactions (and energy efficiency) in plasma catalysis. Indeed, electron impact excitation and ionization mainly take place on the bead surfaces.

The propagation speed of the SIWs is fastest in pure N2 gas and becomes slower with increasing O2 content, because it is more difficult for the discharge to be created in an electronegative gas, yielding a slower discharge evolution, due to the loss of electrons by attachment to O2 molecules. This trend can also be understood from the significant difference in ionization rates between N2 and O2 gases. These different ionization rates will create different amounts of electrons and ions on the dielectric bead surface, which might have consequences for plasma catalysis.

Figure 2. N2+ ion density (m^−3) for the same conditions as in Figure 1.

Figure 3. O2+ ion density (m^−3) for the same conditions as in Figure 1.

Figure 4. O2− ion density (m^−3) for the same conditions as in Figure 1.

Figure 5. (a1,b1) Electron impact excitation rate (m^−3 s^−1) and (a2,b2) electron impact ionization rate (m^−3 s^−1), at 0.8 ns for a driving voltage of −20 kV, in an air discharge, for the nitrogen (a1,a2) and oxygen (b1,b2) components. The same color scale is used in all panels to allow comparison. The maximum values are noted in the figure.
Figure 6. Electric field amplitude |E| (V/m) for the same conditions as in Figure 1. The maximum values are noted in each panel. They occur at the contact points between the beads, and are larger than the color scale, but the color scale is chosen in such a way as to better illustrate the general behavior.

Figure 7. Electric field components in the y and x directions, (a1,b1) Ey (V/m) and (a2,b2) Ex (V/m), (a1,a2) for −20 kV (0.8 ns), and (b1,b2) for −30 kV (0.2 ns), in an air discharge. The maximum values are noted under the figure. They occur at the contact points between the beads, and are larger than the color scale, but the color scale is chosen in such a way as to better illustrate the general behavior.

Figure 12. Geometry used in the packed-bed dielectric barrier discharge (PB-DBD) reactor. The entire simulation domain is 10 mm in the x dimension and 1.65 mm in the y dimension, but only a smaller part in the x direction is shown here, for clarity. The discharge gap is 1 mm, and the top and bottom electrodes are covered by 0.3 mm thick dielectric plates. The top electrode is 0.05 mm thick, and it acts as the powered electrode. The bottom electrode (y = 0) is grounded. Dielectric packing beads with a diameter of 508 µm are placed in the gap. The bead numbering is used later in the paper.
A 35-Year Longitudinal Analysis of Dermatology Patient Behavior across Economic & Cultural Manifestations in Tunisia, and the Impact of Digital Tools

The behavior of dermatology patients has seen significantly accelerated change over the past decade, driven by the surging availability and adoption of digital tools and platforms. Through our longitudinal analysis of this behavior within Tunisia over a 35-year time frame, we identify behavioral patterns across economic and cultural dimensions and how digital tools have impacted those patterns in recent years. Throughout this work, we highlight the witnessed effects of available digital tools as experienced by patients, and conclude by presenting a vision for how future tools can help address the issues identified across economic and cultural manifestations. Our analysis is further framed around three types of digital tools: "Dr. Google", social media, and artificial intelligence (AI) tools, and across three stages of clinical care: pre-visit, in-visit, and post-visit.

Introduction

In this accelerating and expanding digital age, information technologies and image processing techniques have thrown us into an inevitable "iconosphere" whose psychological, epistemological, and cultural impacts are undeniable [13]. Amongst the various industries where the digital age has transformed the respective processes, products, and services, healthcare has significantly lagged all others, despite having the greatest potential, both in terms of the ability to fundamentally improve the delivery of care and in terms of the overall impact of that on civilization. Despite the connectivity potential and advanced digitization of this new age, patients are still at the mercy of inaccessibility to accurate clinical care in a timely or convenient manner. The potential transformative impact of this digital age is especially exciting, yet thus far disappointing, in lower socioeconomic regions of the world, where economic and cultural barriers uniquely inhibit patients' access to the requisite care.

A patient's healthcare "journey" has been significantly impacted by software technology [7]. Patient "empowerment" has been augmented by the easy capture of skin lesions, photographed by mobile phones or digital cameras from a distance, and by the proliferation of those "macroscopic" images in search engines and social media. This empowerment by technological advances is disseminated and encouraged by the media, which is driving patients to be more demanding in their education and in the vocalization of their care with their physicians.

With the healthcare challenges facing the world significantly worsened by COVID, there is an increasing imperative to better analyze and understand how digital tools can transform healthcare across economically and culturally distinct regions. To effectively design and deploy such tools, it is important that we analyze historical patient behavior across the three clinical stages (pre-, in-, and post-visit) over an extended time frame, and in unique cultural and economic landscapes. This is especially the case in underdeveloped countries such as Tunisia, where healthcare services have more primitive infrastructure and resources. In these regions, digital adoption further lags more developed regions, but there is greater potential to improve the current standard of care and address unique economic and cultural barriers [16,4].
By studying patient behavior over the last 35 years through the consistent lens of two leading dermatologists in Tunisia, we identified that the first 25 of the 35 years mainly experienced slow and minimal change. However, the last 10 years witnessed significant change due to the introduction of expansive digital tools such as social media, Google search, reverse image search, and AI tools. Over that time frame, we also witnessed an economic shift in Tunisia after the revolution in 2010-2011, and a significant deterioration in the average household income and in the level of access to, and quality of, both public and private healthcare.

This paper makes the following contributions:

- We describe the change in patients' behavior over 35 years in Tunisia, segmented into the two periods of 1987-2012 and 2012-2022, and we pattern this behavior across the coinciding economic and cultural environments.
- We discuss and highlight the witnessed effects of digital tools on patient behavior and relate that impact to the three stages of clinical care (pre-, in-, and post-visit).
- We present a case for how we envision digital tools, such as "AIPDerm", positively addressing the economic and cultural issues of Tunisia and similar regions, as well as significantly evolving the current standard of how digital tools impact patient behavior across the three clinical stages.

35 years of dermatology patient behavior across all three clinic stages

We analyze patient behavior in a single region of Tunisia across 35 years by examining the direct experiences of two dermatologists who continuously practiced in the region over that period. We segment the observed patient behaviors and perceptions into the three clinical stages: pre-, in-, and post-visit. Within each segment, we analyze the shift of those behaviors and perceptions over the 35-year period, and more specifically, over the more notable period of 2012-2022.

Pre-visit Perception

Preceding the knowledge-at-your-fingertips era of search engines, it was significantly difficult to access any breadth or depth of medical information, constraining patients to almost exclusively trust the diagnosis of the dermatologist. Following the launch and evolution of search engines, patients were more quickly spurred to act at the slightest illness manifestation or concern, now empowered to match their symptoms and be pre-emptively guided on the possible diagnoses, the urgency of the pathology, and the therapeutic possibilities. This empowerment, initially based on manually navigating search engines for textual matching of symptoms and conditions, has more recently evolved into patients taking photos of their skin lesions and using reverse image search functionality to find cases with similar visual representations and the corresponding possible skin diagnoses.

Around the 2010-2011 period, Facebook gained significant scale and reach in Tunisia, an adoption that has been validated extensively in the press and public domain, given the pivotal role it played in the Tunisian and other "Arab Spring" uprisings [3]. On the dermatology side, Tunisian citizens are constantly engaged in Facebook groups to exchange opinions at different engagement levels about i) existing medical prescriptions from prior medical visits, ii) personal judgments about the personality, professionalism, availability, and expertise of the respective dermatologist, and iii) personal experiences with specific treatments for a given pathology (e.g., acne).
While these discussions can represent useful natural language processing datasets for future AI tools and can guide the design of clinical trial recruitment, prevention campaigns, and chronic disease management [5], the quality of shared medical information on social media suffers from a lack of reliability and confidentiality [10]. As the working class's household income and purchasing power have been falling since the 2010-2011 revolution, many patients rely on their discussions on social media to self-educate and self-medicate, exposing themselves to various risks such as mistreatment and triggering rashes or drug toxicity. This is in contrast to the era preceding the digital stronghold (i.e., before the maturity of search engines and the scale-up of social platforms), which carried materially lower risk of erroneous and dangerous self-education or self-medication, and in which dermatologists were granted more exclusive trust in medical determinations.

In order to mitigate some of these risks and limitations of basic self-education, Tunisian website alternatives (e.g., med.tn, tobba.tn) offer a dedicated medical forum where subscribed doctors can answer medical questions posted by patients. However, doctors are asked not to provide a specific diagnosis; their answers instead revolve around reporting the severity of the patient's case, thus providing a more triage-like type of guidance.

Certain skin conditions can be embarrassing or culturally sensitive, such as sexually transmitted diseases (e.g., venereal vegetations, genital herpes, genital candidiasis), which have been, and continue to be, very taboo in Tunisian society. In these cases, most Tunisian patients have notably preferred sending a photo of their lesion accompanied by a textual description of symptoms. This option is something that has been enabled by the availability of mobile devices and platforms, and whose absence left a significant proportion of the population at the mercy of rigid cultural norms, often resulting in undiagnosed or untreated conditions.

In-visit Perception

Across the clinical time frame of 35 years of experience, we had extensive exposure to both public hospitals and private clinics in Tunisia. We characterize the profile and mindset of dermatology patients during medical visits and classify them into the following (potentially overlapping) categories.

Patients with a higher-education background

This category includes patients with higher-education and scientific backgrounds (e.g., engineers, physicians, architects) or with a background in studying advanced literature (e.g., lawyers, journalists, judges, administrative executives). They expect the dermatologist to direct them towards the right etiological and therapeutic approach after they have self-administered and pre-established a survey of their symptoms and possible diagnoses from Google searches and other digital tools. They also keep track of high-resolution images and videos of their skin lesions (e.g., angioedema and urticaria), which simplify and accelerate the diagnostic process for the dermatologist. This category of patients perceives the dermatologist as the single and final source of trustworthy diagnoses and treatments. These patient profiles also tend not to over-argue with or question the dermatologist's determination of the appropriate next steps and treatments (e.g., a blood test before a drug prescription).
Patients with limited income

This category is the one that most heavily utilizes social media and Google searches, with those tools most impacting their patient journey through the three clinical stages. These patients try to actively avoid paying medical consultation fees and can excessively opt for self-medication until a friend or pharmacist urges them to consult a dermatologist (e.g., when faced with a mole growing in size, out of concern about a melanoma diagnosis) or until their condition worsens. For most witnessed patients in this category, dermatologists in Tunisia are seen as the last resort after all self-medication methods have been exhausted. Over the past decade, with the worsening economic landscape since the 2010-2011 revolution, there has been an increase in patients who fit into this category and who actively seek to avoid the medical fees of multiple visits. We have noticed that they tend to consult a dermatologist for aggregated medical concerns within a single medical visit, which implies that they allow a build-up of illnesses and manifestations over an extended period of time before acquiescing to paying the fees associated with a single visit. They view this as allowing them to obtain a "multiple treatment package", without the intention of treating each of the skin diseases properly and within the appropriate time frame. This practice is not prohibited by Tunisian law, but it tends to create awkwardness during interactions with the dermatologist.

Patients who already "know" the diagnosis

While it is true that a diagnosis can sometimes seem obvious (e.g., vitiligo, a common wart, acne, or a burn), patients can often have overly inflated confidence in some of these cases and unnecessarily argue with dermatologists over their medical decisions. The patient consults to confirm the possible therapeutic arrangements, their effectiveness, and their aesthetic aspects. Such patients often perceive the role of the dermatologist as one oriented toward frequent follow-ups and very personalized medical advice, without which the patient is not satisfied. This is because the patient can unintentionally fail to consider the dermatologist's actual core diagnosis as part of the overall medical visit. The role of the dermatologist is to reassure the patient and track them regularly according to the severity of the disease, its location, its extent, and the punctuality of its treatment. Given that patients can be overconfident about their diagnosis, they tend to ask questions to better understand any discrepancy between the dermatologist's opinion and what they have read on social media or "Dr. Google". Questions such as "we know the diagnosis, so why are you asking for a blood test?" or "I saw photos similar to my case on Facebook; why are you complicating my case?" are not uncommon.

Patients with skin conditions in intimate body locations

Patients in Tunisia with genital skin lesions often refuse to be physically examined in person. They usually come with multiple photos taken before their visit and expect the dermatologist to give a prescription purely based on analyzing the pre-captured photos, as they believe that "they have nothing else" and that the photos must be sufficient for the medical examination. This request goes against the medical code of ethics but is common behavior in Tunisia, given the cultural backdrop and sensitivity in the region around intimate body locations and taboo topics such as sexual activity and disease contraction.
These embarrassing situations can leave doctors with a dilemma, and they often find themselves engaged in a time-consuming cultural discussion in order to convince the patient to accept the appropriate medical examination. For example, in the case of Behcet's disease, when a patient photographs their genital ulcers and asks the dermatologist for a therapeutic solution without being examined, the dermatologist must explain to the patient that it is imperative to inspect for oral ulcers and pseudo-folliculitis, and even to carry out a neurological examination. Predictably, given the cultural and sexual sensitivities on this front, our discussions with both male and female dermatologists in Tunisia confirm that this scenario is especially frequent when the patient and dermatologist are of opposite genders. In general, we observe that patients in Tunisia actively seek to validate their skin conditions in intimate locations by sending photos via messaging apps before being obliged to visit the dermatologist in person. It is worth noting that dermatologists may find themselves in various delicate situations that could give rise to legal interventions, such as the case of a molluscum of the perianal region of a child suggesting sexual abuse. In such medico-legal situations, taking photos becomes imperative and mandatory, but can also be problematic in front of some parents who oppose it in order to avoid cultural shame and stigma.

Uncooperative patients

Patients with lower intellectual levels or educational backgrounds, or with limited job and career prospects, are often seen spending time in Facebook groups and on other social media platforms looking for miracle depigmenting creams for their melasma, without the more basic concern of having a sunscreen that protects against ultraviolet type A (UVA), ultraviolet type B (UVB) and blue light. Some of these patients are often seen consulting dermatologists to "achieve" the cosmetic dreams that they have conjured in their imaginations through sporadic reading of various online sources, and who have thus established unreasonably high expectations of what can be achieved, and over short periods of treatment [15]. In such cases, many patients are quick to claim that the treatment must not be effective. It has therefore proven prudent for dermatologists in Tunisia to take regular photos showing the temporal evolution of the disease, and to present those depictions as clear, visually convincing evidence whenever the patient questions the improvement of their skin condition or the efficacy of the prescribed treatment.

Fig. 1 depicts the case of a patient who visited his hairdresser to have his beard shaved. The hairdresser suggested that the patient stop his nevus from increasing in volume by tying it off with a piece of fishing line. Fig. 1a shows the photo of the patient during his first medical visit, where the nevus is surrounded by the fishing line. The nevus shown in Fig. 1b illustrates its evolution after two months of treatment. The patient claimed that the state of his nevus had not improved, yet it was sufficient for the dermatologist to show him his photo in Fig. 1a from his first visit to convince him of the improvement.

Post-visit Perception

Patient follow-ups can be almost as important as the actual medical visit, given that they help ensure patients have adhered to their treatments and help in tracking the treatments' effects.
Given the observed historical tendency for patients to potentially neglect in-person follow-ups, digital tools have shown utility in enabling convenient remote check-ins for post-visit care and tracking. For example, the patient can use digital tools to send images of the progress of their burn to their dermatologist, allowing for remote monitoring as well as treatment adjustments if necessary.

A distinct phenomenon observed in Tunisia relates to differences between the treatment prescribed by dermatologists (and doctors in general) and the one ultimately provided by pharmacists. These differences can be due to the lack of availability of specific drugs or brands in pharmacies, which is a simple and logical justification. However, the most common reason for the difference is the unfortunate, unfolding competition between pharmacists, who can withhold patients' prescriptions until the unavailable drugs are in stock again, even though the specifically needed drugs are already available in other pharmacies. In such situations, the ideal patient contacts the dermatologist to verify whether the drugs they obtained from the pharmacist are equivalent to the ones prescribed. However, due to a lack of sensitivity and awareness in most cases, patients will start their treatment without giving adequate attention to the changes in the prescription made by their pharmacists.

Given these more nuanced and technical problems, which are seldom present in more developed countries, dermatologists in Tunisia often find themselves playing the role of "medical policeman". For example, the dermatologist can instruct their patient to take a photo of their treatment before beginning to use it, to allow the dermatologist to confirm what has been issued by the pharmacist. A similar issue, with cosmetic products, that has become more prevalent in recent years is that of patients exchanging drugs with friends, especially when dealing with budget limitations, thereby neglecting the specific prescribed dosage of drugs from the dermatologist for a personalized and optimal therapeutic result. From this perspective, the perceived role of dermatologists revolves around checking treatments provided by pharmacists and answering any questions, which is often expected for free in cases of complications and uncertainty.

The Historical Evolution of Digital Tools & Their Patient-Behavior Impact Across Clinical Stages

The adoption of digital tools has evolved significantly over the past ten years, with distinct progress impacting patient behavior as compared to the 25 years prior. However, the full potential of digital tools in the healthcare industry is still far from being realized, with historical implementations largely centering around broader communication and information-access platforms. While these improve patients' access to relevant medical information and provide the opportunity to discuss with fellow patients, such tools can carry a very high risk of mis-education or mistreatment, and can hinder dermatologists' efforts to provide optimal care. We investigate "Dr. Google", social media, and AI tools, the three major digital tools that were used over the last 10 years in Tunisia, and assess their impact across the clinical stages.
Social Media

The usage of social media in Tunisia, in particular Facebook, has undergone astonishing growth over the last ten years; today, Facebook drives 92.65% of Tunisian social media traffic (as of June 2022, with similar levels over the last 12 months), a dominance that significantly exceeds the other platforms in that category, which include YouTube, Twitter, Instagram, Pinterest, LinkedIn, Reddit, and other platforms [1]. Popular applications such as Facebook, Messenger, and Instagram are freely and instantly accessible and provide user-friendly platforms to search for and share medical information. Further, Tunisian patients also rely on Facebook groups and pages to seek other patients' opinions and ratings of different dermatologists or clinics. Such platforms offer support systems where disease-focused groups or communities can be created to share experiences and provide psychological support. Facebook and Messenger usage is not restricted to patients; dermatologists also utilize such tools to discuss and solicit each other's opinions and potential diagnoses on more complex or ambiguous cases [9]. Furthermore, social media and messaging apps facilitate useful patient-doctor and doctor-doctor communications. If the doctor has an online presence, such tools can be used to provide clinic communications and virtual assistance, and to facilitate follow-ups.

Social media platforms have generally been used by Tunisian dermatologists to raise awareness, encourage preventive behaviors and correct misguided ones, and offer advice and information on skin diseases to help minimize the risks of self-education and self-treatment. However, the availability of dermatologists to directly regulate content on such platforms is minimal, and their impact is unscalable given the broad usage of those platforms by the general population. Unfortunately, social media is not always a reliable source of information, especially for complex and highly personalized diagnoses and treatments, and has been shown to quite easily and quickly disseminate false or misleading information to widespread audiences. Such platforms are often used by commercial organizations or sales representatives seeking to aggressively promote products for profit, which can lead to harmful misinformation or consequences [14].

"Dr. Google"

The search engine market in Tunisia is heavily dominated by Google, with 94.95% of the total search engine market share as of June 2022 [1]. With around 72% of residents having access to the internet, and with over 90% of the population having at least one mobile phone, accessibility to the internet and information is quite high, enabling most patients to consult the internet before visiting a dermatologist. Patients usually seek information by searching their symptoms or by using more advanced search tools like reverse image search. The latter is a software tool that enables users to find visually similar images simply by uploading a photograph of the skin lesion. While these tools are used in both developed and underdeveloped countries, dermatologists in Tunisia have emphasized that patients in underdeveloped or economically challenged regions, such as Tunisia, have been driven to rely heavily on these online tools to solely self-medicate, deferring a consult with a specialist until their symptoms significantly deteriorate or misalign with their self-diagnosis.
The ability of patients to seek medical information through various online search engines outside their clinical visit can be beneficial but can also pose significant risks. Self-education through online resources can facilitate more comprehensive communication between patients and dermatologists, encourage the patient to ask more questions, alert the dermatologist to relevant aspects of their case, and make patients more engaged with their treatment across the clinical stages. However, the vast amount of information online can be overwhelming, and the probability of arriving at an accurate online diagnosis is low; relying on one is dangerous, yet patients unfortunately still do so to defer visits, especially in less developed or economically challenged countries.

The first AI tools

AI is playing a pivotal role in advancing e-health solutions for dermatology [8,11,2]. Dermatology is a suitable, high-value use case for AI, as it is a visually dependent specialty and numerous image recognition algorithms have already proven effective in detecting skin diseases and cancers [6]. This progress in AI has also fueled the design of several web- or mobile-based image diagnosis apps for skin conditions, such as "AIPDerm", which has had significant success in Europe. These AI-based tools are beneficial for various expertise levels within a clinic, from senior dermatologists to junior nurses, and help distinguish possible skin conditions and provide more sophisticated differential diagnosis possibilities along with ancillary information. AI-based diagnostic tools also play an important role in teledermatology, given that they can help patients avoid long waiting periods and pre-identify and triage the severity of their skin condition.

Currently, most dermatologists in Tunisia do not use AI tools for a second opinion. We asked more than 300 dermatologists about their reasons for not yet adopting AI tools, and more than 96% were worried about the following two aspects of these applications:

- Privacy concerns about users' health data: most applications indicate that they would not use uploaded images to target advertising, and would only save them to further train their classification algorithms if users gave them explicit permission to do so. However, dermatologists in Tunisia do not have an established legal framework for digital dermatology, and at this early juncture, still feel insecure about adopting these applications.
- Limited coverage of skin diseases: most dermatology AI applications are limited in the scope of their capabilities, as they are narrowly targeted toward specific skin diseases such as skin cancers. In other words, they do not cover a wide enough diagnostic capability to support the majority of skin diseases and cancers.

Remote triaging solutions based on AI tools offer the potential to address challenges in many low- and middle-income countries in Africa, or in regions with similar economic and cultural inhibitions, especially if offered at scale [12]. However, the need for in-person visits remains irreplaceable in many cases, particularly where precise and nuanced knowledge of the medical context is critical. Fig. 2 shows one of those cases, where the back of the patient has well-defined, painless but itchy, multiple lesions, such as scratches, ulcerations, or cuts. The patient uses a brush to scratch her back. All the AI applications we have tested failed on this case, as it looks like skin picking.
This is because the correct diagnosis, pathomimia (a.k.a. factitious dermatosis), is self-induced and often difficult to find if the dermatologist does not consider the psychological profile of the patient, which is confirmed by the opinion of a psychiatrist.

Although we have witnessed transformative evolutions in digital connectivity tools over the last few decades, healthcare remains stagnant in a world of under-adoption and mis-utilization. The available tools are constantly evolving to be far more sophisticated and tailored than the basic platforms broadly used today. We have begun to see the potential of these future tools in addressing the weaknesses and limitations of past tools, as global efforts in developing dermatology AI tools have been substantial. With respect to the future capabilities of AI systems, we believe their core features should address the patient care and digital tool issues described in this paper. One of these issues is dangerous self-education, and the vast degree of error and risk associated with it. A system that utilizes sophisticated AI diagnosis modules, such as a visual image diagnosis system, can minimize the associated error risk by providing a highly accurate diagnostic tool that is not at the mercy of patient diligence. It can be accessed by patients, with results sent to the clinic for oversight, and used as a high-confidence triaging and education tool benefiting both the pre-visit and in-visit stages. For the post-visit stage, a patient management platform can offer a patient portal that provides direct information about the patient's diagnosis and treatments, along with detailed reference material for contextual explanations, minimizing the possibility of speculation, skepticism, and non-adherence. This platform can also provide the ability for patients to asynchronously submit questions to the clinic regarding any concerns, minimizing misinformation and ensuring that patient actions and understandings are carefully managed by the clinic.

Such a system will also minimize patients' delays in addressing a possible ailment due to economic reasons and false confidence from self-education, by instantly providing a sophisticated triage and education tool at no incremental overhead cost, owing to its scalability. The system will also allow dermatologists and their staff to focus their efforts on the management and oversight of a single platform, while allowing them to follow up with patients on obtaining correct prescriptions and on subsequent treatment adherence, as well as enabling remote condition monitoring post-visit, which more efficiently qualifies the need for future in-person visits. Such a system has already been comprehensively developed, clinically validated, and successfully deployed at a country-wide level in Europe: "AIPDerm", which is based in Hungary and Spain.

Economically, in regions where fiscal stability is sensitive, we believe the scalability and accessibility of such platforms can be transformative. Culturally, we believe such a system is especially relevant in regions such as Tunisia, where longstanding cultural norms and sensitivities can inhibit the requisite treatment for many. In such regions, this can manifest within households where female members can be prohibited from seeing a physician by the male patriarch of the family on the basis of privacy, protection, and cultural sensitivities.
We believe that the system can help mitigate such issues by allowing patients to localize the lesion in an image, minimize the inclusion of other body parts, and avoid requiring the patient's physical presence, allowing for remote triaging of their case. Overall, we believe this envisioned system can largely solve the aforementioned patient behavior issues, including cultural and economic constraints, and reduce the misuse of existing basic tools and social media platforms.

Conclusion

Patient behavior in dermatology has undergone significant changes over the past 35 years, predominantly over the last decade, driven by significant transformation and adoption of digital tools. Patient behavior is analyzed across three clinical stages: pre-visit, in-visit, and post-visit, and is classified into several profiles that reflect observed patient behaviors. The causes of different patient behaviors are not singular, and this work illuminates the patterns predominantly witnessed in Tunisia. Digital tools have materially impacted how patients behave in all three clinical stages, with various economic and cultural aspects specific to Tunisia also contributing to the patterned patient behavior. While basic digital tools have exposed patients to the risks of self-education and self-medication, there is clear potential for minimizing those risks and for mitigating the economic and cultural concerns of Tunisia and various regions globally. The aim of this paper was to describe patient behavior over 35 years in Tunisia across the three clinical stages, while analyzing the coinciding effects of economic and cultural manifestations and the overarching impact of digital tools. We concluded by detailing how digital tools can mitigate the economic and cultural constraints of Tunisia and similar regions, and how specific innovations can significantly evolve the current standard of how digital tools impact patient behaviors and outcomes.
2022-08-08T01:15:06.970Z
2022-08-04T00:00:00.000
{ "year": 2022, "sha1": "0ded808d8cf68fee1579a3e07e40b257e960df59", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "0ded808d8cf68fee1579a3e07e40b257e960df59", "s2fieldsofstudy": [ "Medicine", "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
214937190
pes2o/s2orc
v3-fos-license
Skin Grafting

A wound that will accept a skin graft must be free of infection, free of devitalized tissue, and must have an adequate blood supply. Skin grafts are classified as either split thickness or full thickness and are selected depending on the size of the defect to be covered and the thickness of coverage that is desired for a particular area. Split-thickness skin grafts (STSGs) are typically used to cover defects when cosmesis is not of primary concern and when the size of the defect is too expansive to be covered by a full-thickness skin graft (FTSG). During the harvesting of an STSG, a thin layer of epidermis and dermis is excised using either a dermatome or the Humby knife at a thickness that is predetermined by the adjustable setting that is selected. STSGs are often meshed to expand the surface area of the graft, thereby minimizing the size of the necessary donor site while providing drainage holes for blood or serum. FTSGs are usually used to cover a smaller surface area and require primary closure of the donor site. In the harvesting of an FTSG, an ellipse is made around the defect pattern, which is then incised and undercut. The donor site is then closed primarily; or alternatively, if there is too much tension, an STSG may be used to cover the donor site. The effective healing of an STSG depends on the presence of a well-vascularized recipient site, close apposition of the graft to the recipient bed, and appropriate immobilization of the graft to foster development of nascent vascular connections. The 2 basic factors affecting graft take are the ability of the graft to receive nutrients and the presence of vascular ingrowth from the recipient bed. Factors that will negatively impact the healing of a skin graft include patient smoking, radiation, chemotherapeutic or immunomodulating drugs, and malnutrition. Complications of skin grafting include primary and secondary graft contracture; altered function, sensation, and hair growth of donor skin; and pigmentation mismatch.

HISTORY

The biological transfer of skin was first reported in 1869 by Jacques-Louis Reverdin, a Swiss surgeon working in Paris, who harvested very small slivers of epidermis using a lancet to apply to a forearm wound. 1 In 1870, G.D. Pollack of London reported success in 8 of 16 cases in which he tried small Reverdin grafts. 2 Also in London in 1870, George Lawson presented 3 cases of full-thickness skin grafts (FTSGs) before the Clinical Society of London. 3 L.X.E.L. Ollier of Lyon later described the use of intermediate-thickness skin grafts and FTSGs in 1872. 4 After the Franco-Prussian War of 1870 to 1871, Carl Thiersch of Leipzig, an accomplished microscopist, did experimental work on skin grafting and recommended very thin split grafts. 5 J.R. Wolfe of Glasgow, in 1875, 6 and Fedor Krause of Germany, in 1893, reported on FTSGs. 7 It was not until 1929 that Vilray P. Blair and James Barrett Brown of St Louis reported on their landmark achievement of using large split-thickness skin grafts (STSGs). 8 This technique was made easier by the invention of the dermatome by Earl Padgett and George Hood in 1939. In Europe at that time, tubed flaps dominated reconstructive surgery. Skin grafting was uncommonly used, except by Sir Archibald McIndoe, who used it for burn injuries. After World War II, however, the dermatome and an improved Blair's knife were introduced, and skin grafting became a common technique for appropriate cases in Europe as well. 2
Since then there have been improvements in graft harvesting techniques and graft stabilization. The underlying science of skin graft healing has also been studied and is now well understood.

WOUNDS REQUIRING SKIN GRAFTS

Simply stated, wounds come in 2 types: those without skin loss and those with skin loss. All wounds must be carefully evaluated, and the treating surgeon must classify the wound by etiology or mechanism, chronology, location, size, and nature of injury. Bleeding must be controlled, infection drained, and necrotic tissue debrided. If a wound can be closed primarily or will heal by secondary intention in a short period of time, then this may be the best option. Larger wounds will require more complicated surgical closure techniques such as skin grafting, local or distant flaps, or both. Generally speaking, skin grafts can only be placed on healthy vascularized tissue. If a wound bed is not adequate for a skin graft, then a flap might be needed. A wound that will accept a skin graft must be free of infection, free of devitalized tissue, and must have an adequate blood supply. In cases where a limb has inadequate arterial inflow, revascularization is necessary. Edematous tissues must be elevated and/or compressed until the edema resolves. Grafts will not adhere to radiated tissues. They usually will not take on bone without periosteum, cartilage without perichondrium, or tendon without paratenon. A skin graft should only be placed on wounds when they are appropriate and ready for grafting.

CLASSIFICATION OF SKIN GRAFTS

Skin grafts are classified as either split thickness or full thickness and are selected depending on the size of the defect to be covered and the thickness of coverage that is desired for a particular area. STSGs include epidermis and varying proportions of the underlying dermis. FTSGs are comprised of the epidermis and the entire thickness of the dermis, including structures such as sweat glands, sebaceous glands, hair follicles, and capillaries. 9

STSGs

STSGs are typically used to cover defects when cosmesis is not of primary concern, and when the size of the defect is too expansive to be covered by an FTSG. STSGs range in thickness from thin (0.008 to 0.012 inches), to medium (0.012 to 0.018 inches), to thick (0.018 to 0.030 inches). The thickness of the graft is dependent upon the thickness of the dermal component; the thicker the graft, the more closely the normal appearance of skin is likely to be retained. Thin STSGs may be harvested from any area of the body and take more readily than FTSGs. 9

FTSGs

FTSGs are usually used to cover a smaller surface area and require primary closure of the donor site. For this reason, and because of a superior cosmetic result, FTSGs are commonly used to cover defects of the face, the hand, or other openly visible areas. The most common donor sites for FTSGs are the ears, upper eyelids, neck, hypothenar eminence, and groin. Because these grafts consist of the entire dermal thickness and include the dermal appendages, FTSGs also maintain the hair pattern of their original location even after transplantation to the recipient site. FTSGs are also less likely to contract over time, making them more suitable than STSGs for covering defects on the hands, extremities, joint extensor surfaces, and face. The location and quality of the recipient site is of ultimate importance in deciding which type of skin graft to use.
A graft has the potential to take on any site that has effective microcirculation, which includes granulation tissue, dermis, fascia, periosteum, perichondrium, paratenon, adipose tissue, and muscle. Bare surfaces of tendon, bone, and cartilage, however, lack the microcirculation and collateralization necessary to support the graft. Sometimes, the application of artificial dermis on these surfaces may provide a suitable substrate on which a skin graft may take. 10

OPERATIVE TECHNIQUES

STSG

In the harvesting of STSGs, a power-driven dermatome or free-hand dermatome may be used. The free-hand dermatome, although free from an external power source and thus simpler to use, offers less control of the precise depth of the graft. Some centers utilize a local anesthetic pump inserted before graft harvesting into the graft donor site to aid in postoperative pain control. Before the harvesting of the graft, it is helpful to infiltrate the subcutaneous tissue with saline to make the dermis more prominent above the underlying structures. Vaseline-based lubrication also facilitates a friction-free surface between the dermatome and the epidermis during harvesting (Figures 1-6). 8 During graft harvesting, a thin layer of epidermis and dermis is excised using either a dermatome or the Humby knife at a thickness that is predetermined by the adjustable setting that is selected. When using the dermatome, it is helpful to have an assistant stretch the skin taut on the donor site as the dermatome is engaged and lowered to the skin surface. The dermatome should be turned on while in the air above the donor site and slowly lowered to the donor site at a 45-degree angle. Applying a constant force, the dermatome should be slowly advanced down the length of the donor site until the appropriately sized graft is obtained. At this point, with the power still engaged, the dermatome should be lifted off the skin at a 45-degree angle until it is completely free from the skin surface, at which point the power can be disengaged. When using a Humby knife to harvest a graft, it should be held with the sharp edge at a 45-degree angle to the skin and manipulated in a back-and-forth motion over tight skin in long, even strokes to separate the graft from the thick dermis. When a graft of the desired size has been taken, the back-and-forth motion is continued while supinating the wrist to remove the knife from the skin. 11 STSGs are often meshed to expand the surface area of the graft, thereby minimizing the size of the necessary donor site while providing drainage holes for blood or serum, which would otherwise collect underneath the graft. Meshing the graft also reduces the rate of disruption by shear forces that would otherwise preclude graft take. In addition, skin graft-recipient sites with surface irregularity are better suited for meshed grafts, as the increased pliability of the graft will allow closer apposition into the contours of an irregular surface. A mesh expansion ratio of 1.5:1 is typically preferred for most STSGs, although mesh ratios of 2:1 or 3:1 can also be used depending on the defect to be covered. A skin graft meshing device is used to roll the graft through a set of metal cutters to create a uniform meshed pattern throughout the entirety of the graft. The skin graft is placed on the appropriate side of the carrier and then passed through the machine.
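As a side note on the expansion ratios just mentioned, the nominal ratio implies simple donor-site arithmetic: a graft meshed at ratio R can nominally cover about R times its harvested area. The following is a minimal illustrative Python sketch of that arithmetic only; the function is hypothetical, not a clinical sizing tool, and the expansion actually achieved in practice is often somewhat less than the nominal ratio.

```python
def donor_area_needed(defect_area_cm2: float, mesh_ratio: float = 1.5) -> float:
    """Nominal donor-site area (cm^2) required to cover a defect with an
    STSG meshed at the given expansion ratio (e.g. 1.5, 2.0, or 3.0)."""
    if mesh_ratio < 1.0:
        raise ValueError("mesh ratio must be at least 1:1")
    return defect_area_cm2 / mesh_ratio

# Example: a 90 cm^2 defect covered with a 1.5:1 meshed graft nominally
# requires a donor site of about 60 cm^2.
print(donor_area_needed(90.0, 1.5))  # -> 60.0
```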
Disadvantages of meshing a graft include the characteristic "stocking net" appearance of the resultant graft and the risk of enhancing wound contraction as the small gaps between the skin edges close by contraction. An alternative to using a commercial mesher is using a scalpel to make multiple slices in the graft with the same effect. 12 After meshing, the graft should be placed over a clean, well-vascularized recipient wound bed with the dermal side, which is shinier in appearance, facing downward. Once the graft is appropriately laid over the recipient site, utmost care must be taken to ensure hemostasis, and inspection should reveal no hematoma formation. Saline flushes beneath the graft are useful in evacuating blood clots and therefore allowing better adherence of the graft. The graft should also be arranged to sit directly on the underlying recipient tissue, across all of the topographical variation of the wound. Quilting sutures may be used to anchor the graft onto the peaks and valleys of the recipient site from which the graft is more likely to shear away and thus not take. The edges of the graft are secured with interrupted or running absorbable sutures or skin staples, approximating the graft to the recipient tissue bed without strangulation. Edges of the STSG that overlie intact and healthy epidermis on the recipient bed should be trimmed so that there is minimal overlap. 11 Finally, the STSG is then dressed according to the surgeon's personal preference. Typically, a layer of nonstick material, such as antibiotic-impregnated gauze, is placed directly over the graft. Alternatively, a layer of antibiotic ointment followed by a dry sterile gauze can be used to help prevent desiccation and infection of the wound. A light pressure dressing providing 10 to 20 mm Hg of pressure with a cotton bolster is effective at enhancing graft adherence without causing pressure necrosis. Most recently, negative-pressure wound therapy has proven to be effective at bolstering grafts to recipient beds, thus improving graft outcomes. 11 Recent literature suggests that using a negative-pressure wound therapy device can reduce the time of inpatient wound bed preparation and enhance the rate of graft take due to improved inosculation combined with decreased hematoma and seroma formation, as well as less frictional disruption of the wound bed and skin graft interface. Patients managed with negative-pressure wound therapy are also able to ambulate earlier after surgery due to the stabilizing effect of the negative-pressure wound therapy device, thereby decreasing the overall length of inpatient hospitalization. 13 The dressing should be kept in place for 3 to 5 days, unless there is concern for underlying infection that would necessitate earlier evaluation of the wound. With the initial dressing change, utmost care must be taken to prevent shearing of the graft away from the underlying recipient bed. Moistening the dressing with saline before its removal will help prevent it from sticking to the underlying graft. After the initial dressing change, twice-daily wet-to-dry dressing changes or daily application of an antibiotic ointment such as bacitracin, or of nonstick gauze, will help to protect the graft in its initial healing phases. Some surgeons choose to continue negative-pressure wound therapy for the first 7 to 10 days after grafting.
Once the graft appears pink and well adherent to the wound bed, the graft may be exposed to open air with daily application of a gentle moisturizing cream. Both the donor site and graft site should be kept out of direct exposure to the sun, and once fully healed should always be protected with sunscreen. The STSG harvest site is typically dressed with a semiocclusive dressing that allows the accumulation of sterile fluid under the dressing that will bathe the exposed dermis, thereby initiating reepithelialization. The original operative dressing is typically left intact until reepithelialization is complete. If the fluid collection beneath the semiocclusive dressing appears purulent at any time, it should be removed and appropriate wound care should be initiated. 12 More classically, a fine mesh gauze impregnated with petrolatum ointment or another bacteriostatic substance is placed over the donor site, and the wound is treated with an external heat source to stimulate coagulation and desiccation of the exudate within the fine mesh gauze.

FTSG

Because of their superior cosmetic results, FTSGs are typically used in grafting defects on the face, hands, and other openly visible aesthetic surfaces. The posterior surface of the ear extending to the skin overlying the mastoid process is a common donor site for facial grafts due to the proximity of its color and texture when compared with facial skin. Skin of the upper eyelid is another frequently used source of FTSGs for facial skin grafts and can provide considerable coverage in individuals with marked redundancy of the upper eyelid. When a larger surface area is required for coverage, the skin of the lower posterior triangle of the neck in the supraclavicular region provides a satisfactory color and texture match for facial skin. Skin of the thigh or abdomen may be used for larger defects, although these sources tend to have suboptimal color and texture match for facial defects. Thigh and abdominal skin has a thicker dermis than facial skin and is described as providing a "masked" appearance when grafted to the face, although it may be useful in covering defects of the palmar surface of the hand or the sole of the foot. The redundant and mobile skin of the antecubital fossa and groin can be used for repair of a defect on the hand, but these sources are limited by their hair patterns and also by concern for contracture of the donor site over a mobile joint. Small defects of the palmar surface of the hand can be covered by hypothenar eminence grafts. FTSGs are fitted uniquely to the defect, and the use of a pattern is recommended to ensure that an adequate amount of tissue is excised. An irregular defect can be reconstructed on the donor site by using a marker to designate points mirroring those of the irregular defect before the skin is excised. Before the excision of the FTSG, infiltration of the subcutaneous tissue with fluid or local anesthetic helps to differentiate the plane between the dermis and subcutaneous adipose tissue. An ellipse is made around the defect pattern, which is then incised and undercut. In the excision of the FTSG, care is taken to leave no adipose tissue on the undersurface of the graft; any adipose tissue that remains adherent to the graft should be excised with scissors. 11 The donor site is then closed primarily; or alternatively, if there is too much tension, an STSG may be used to cover the donor site.
GRAFT HEALING

The effective healing of an STSG depends on the presence of a well-vascularized recipient site, close apposition of the graft to the recipient bed, and appropriate immobilization of the graft to foster development of nascent vascular connections. The ideal recipient bed is one that granulates rapidly and will therefore take a graft readily. Soft tissues, such as muscle and fascia, generally accept grafts with ease, although the ability of adipose tissue to take a graft is dependent on its vascularity. Connective tissues such as cartilage, bone, and tendon also accept grafts readily if they have an intact vascularized perichondrium, periosteum, or paratenon. Bare cartilage and bone may be able to support a small graft if the tissue is able to provide bridging vessels from the periphery of the recipient bed to support graft take. The initial stage of graft healing is characterized by adherence of the graft to the recipient bed through a fibrin network, which initiates the generation of vascular buds. This apposition permits plasmatic imbibition, during which the donor tissues receive nourishment by the passive absorption of plasma from the recipient bed through capillary action. This takes place over the first 24 to 48 hours after graft apposition, and during this phase, the graft may appear white and edematous. The inosculation phase of graft healing begins 48 to 72 hours after grafting and may continue for up to 1 week postoperatively. This phase is characterized by further development of the vascular buds within the fibrin network, which eventually anastomose with preexisting and newly forming blood vessels in the graft-recipient interface. During this phase, the graft may appear mottled or demonstrate variation from an erythematous blush to slight cyanosis. 14 The development of lymphatic flow in the graft begins in parallel with vascularization, and lymphatic flow is typically established by postoperative day 5 or 6. At this point, effective drainage of the excessive fluid that had bathed the graft-recipient interface may occur, resulting in decreased edema and reduced fluid weight of the graft from days 7 to 9. The final phase of graft healing is reinnervation of the graft, which typically commences in the first 4 weeks after grafting. Sensation generally returns to the periphery first and then gradually proceeds to the interior of the graft. Full return of sensation is more likely to occur in FTSGs compared with STSGs. The donor sites of STSGs heal by reepithelialization, during which the epithelial cells from remaining portions of dermal appendages migrate across the newly exposed dermis to establish a new epidermis. Within 3 to 4 weeks of graft harvesting, the epidermis of a healed donor site is fully differentiated. 14

FACTORS AFFECTING GRAFT TAKE

The 2 basic factors affecting graft take are the ability of the graft to receive nutrients and the presence of vascular ingrowth from the recipient bed. Any condition that precludes the graft from receiving nutrients by decreasing the rate of diffusion will thereby reduce the rate of graft take. Seroma and hematoma formation underneath the graft must be prevented with adequate hemostasis, as well as effective meshing or bolstering of the graft to promote adherence to the recipient bed without underlying collections.
Edematous wound beds provide difficult substrates for grafting, because their recipient beds may prevent adequate diffusion of nutrients from the bed to the graft in the initial stages of graft healing. Ischemic wounds must be evaluated for revascularization before attempting skin grafting. Vascular ingrowth in a new graft is most commonly disrupted by shear forces between the graft and the recipient bed, thereby decreasing the rate of graft take. Immobilization of the graft is of utmost priority to allow for inosculation of the graft, and this may be achieved by using a bolster dressing or negative-pressure wound therapy. To ensure immobilization of a joint containing a graft, the grafted extremity is often placed in a splint for 3 to 5 days to promote improved graft adherence. 9 The presence of contamination or infection of a wound bed will decrease the rate of graft take due to a higher rate of inflammatory mediators such as nitric oxide, cytokines, and interleukins that stimulate an inflammatory milieu in which the fibrin bonds between the graft and recipient bed are destroyed. Chronic wounds should have a clean, granulating wound bed before grafting is attempted. To ensure an optimal chance of graft take, preoperative tissue cultures should be obtained in wounds that have a history of infection or in chronic, nonhealing wounds that are likely colonized. Skin grafting is not recommended in wound beds with culture-proven >10^5 organisms/g of tissue. Radiated tissues are not adequately vascularized and thus are not acceptable substrates for STSGs. 15 In wounds where there is only a partial take of a graft, the nonviable tissue should be gently debrided, and the remaining defect is left to close by secondary intention or may undergo additional grafting. With either of these approaches, there is an increased risk of scarring, deformity, contracture, and irregularities of the contour of the graft.

FACTORS AFFECTING WOUND HEALING

Adequate wound healing requires effective transport of nutrients and sufficient delivery of oxygen to healing tissues. Oxygen is essential for cellular respiration in the graft bed and for hydroxylation of proline and lysine residues of collagen during wound healing. Although oxygenation in tissue beds can be decreased secondary to impaired delivery (poor cardiac function, anemia, or peripheral vascular disease), oxygenation of a highly metabolic graft bed can also be severely limited by smoking, which acutely stimulates vasoconstriction and delivers toxic compounds into the tissues. Patients undergoing skin grafts should be educated preoperatively about the significant risk of graft failure with concomitant smoking and should be counseled to quit smoking before skin grafting. Tissues that have undergone radiation therapy display an abnormal healing process that results in excessive synthesis of collagen, thinning of the epidermis, decreased density of blood vessels as well as sebaceous and sweat glands, and pigmentation changes in the dermis. Radiation imposes a higher risk of infection and an overall slower healing process that makes these tissues suboptimal substrates for skin grafting. 15 Malnutrition directly contributes to poor wound healing and should be considered when evaluating a patient for skin grafting. Hypoproteinemia limits the available supply of essential amino acids that is necessary to synthesize collagen and other proteins.
Cross-linking of new collagen is essentially impossible without adequate levels of vitamin C, whereas vitamin A is necessary for normal epithelialization and bone function and thus is intimately related to the early phase of graft healing. Zinc deficiency can lead to poor formation of granulation tissue and inhibition of cellular proliferation. Nutritional laboratory values, including albumin, prealbumin, and transferrin, should be evaluated preoperatively in patients whose malnutrition could limit their ability to support graft take. Chemotherapeutic agents are known to impair healing by inhibiting cellular proliferation. Similarly, adrenocortical steroids slow the collagen cross-linking process, thereby weakening incisions, and also interfere with wound epithelialization and contraction. The most profound effect of steroids is noted when they are administered several days before or after skin grafting. 11

COMPLICATIONS OF SKIN GRAFTING

One of the most common complications of both STSGs and FTSGs is graft contracture, which can lead to functional limitations and a suboptimal aesthetic result. Primary contraction is a result of the immediate recoil of the elastin fibers in the dermis within a freshly harvested graft. The amount of primary contraction is directly related to the amount of dermis in the graft; thus, FTSGs have more primary contraction when compared with STSGs. FTSGs contract to a surface area as small as 40% of their original surface area, whereas STSGs only contract half as much. Secondary contracture is caused by contraction of a healed graft due to the action of myofibroblasts in the graft. This is more commonly observed in STSGs compared with FTSGs and occurs to a degree that is inversely proportional to the dermal content of the STSG. The ability of a graft to carry out functions that it previously performed in its former location, such as sweating and providing hair growth, is directly related to the thickness of the dermis present and therefore the number of epithelial appendages transferred in the graft. Thinner STSGs tend to be more dry and require regular moisturization in comparison with thicker STSGs. FTSGs have the greatest sensory return in comparison with STSGs due to the transfer of neurilemmal sheaths in the graft, in addition to a greater number of hair follicles. 11 Finally, pigmentation mismatch between the graft and the recipient site is a common complication, especially when attempting to graft defects on the face or hands. Using dermal grafts from the posterior auricular region or eyelid can provide the best aesthetic result for facial defects, although the limited size of these available sources may preclude an ideal pigmentation match in all cases. A general rule is to replace tissue with "like" tissue for optimal pigmentation and texture match. 12
2019-08-20T05:11:03.629Z
2012-12-01T00:00:00.000
{ "year": 2012, "sha1": "6ca56b4ed4432beece205aec1bae1ba3ddcb6cf1", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "WoltersKluwer", "pdf_hash": "2da2739b908269efc5e3c65489b71e6f3bb99f32", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
55706703
pes2o/s2orc
v3-fos-license
Algorithm for wavelength assignment in optical networks

Wavelength division multiplexing (WDM) is a technology which multiplexes many optical carrier signals onto a single optical fiber by using different wavelengths. Wavelength assignment is one of the important components of the routing and wavelength assignment (RWA) problem in WDM networks. In this article, to decrease the blocking probability, a new algorithm for wavelength assignment, the Least Used Wavelength Conversion algorithm, is introduced as an enhancement to the previously used least-used wavelength assignment algorithm. The performance of this wavelength assignment algorithm is evaluated in terms of blocking probability, and the results show that the proposed technique is very promising for the future. The Least Used Wavelength Conversion algorithm is compared with algorithms such as the first-fit, best-fit, random, and most-used wavelength assignment algorithms.

INTRODUCTION

Wavelength division multiplexing (WDM) is a very promising technology to meet the ever increasing demands for high capacity and bandwidth. The term wavelength-division multiplexing is commonly applied to an optical carrier. In a WDM network, several optical signals are sent on the same fiber using different wavelength channels. WDM is used to address the routing and wavelength assignment (RWA) problem. The first part of this problem is to route the network, and the second is wavelength assignment. Wavelength assignment is one of the important issues in RWA. In general, if there are multiple feasible wavelengths between a source node and a destination node, then a wavelength assignment algorithm is required to select a wavelength for a given light path. Either a distinct wavelength is assigned on each link of the path, or the same wavelength can be used on each link. When wavelength conversion is not possible at intermediate routing nodes, a light path must occupy the same wavelength on each link over its physical route. Clever algorithms are needed in order to ensure that the RWA function is performed using a minimum number of wavelengths (Dar and Saini, 2013; Wason and Kaler, 2010).

The Least Used Wavelength Conversion algorithm (LUWC) is an improvement over the least-used wavelength assignment algorithm. In LUWC, the least-used wavelength assignment algorithm is executed until blocking. The concept of wavelength conversion is also used in this model. When a call is blocked, wavelength conversion is introduced and hence the blocking probability is reduced. If full wavelength conversion is used after the least-used wavelength assignment algorithm, the blocking probability is reduced to a very large extent and its value reduces to the minimum possible value. Hence, the overall performance of the network increases. The Least Used Wavelength Conversion algorithm is compared with algorithms such as the first-fit, best-fit, random, and most-used wavelength assignment algorithms. Simulation results proved that the proposed approach is very effective in minimizing the blocking probability of optical wavelength division multiplexed networks (Mukherjee et al., 2004). All the results are taken using MATLAB.
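Since the paper describes LUWC only in prose, its decision logic may be easier to follow in code. The following is a minimal, illustrative Python sketch (the authors' own simulations were in MATLAB); the data structures, such as per-link free-wavelength sets and network-wide usage counts, and the function name are our assumptions, not part of the original work:

```python
def assign_wavelength_luwc(route_links, free, usage, converters_available):
    """Least Used Wavelength Conversion (LUWC) sketch.

    route_links          -- list of link ids along the pre-computed route
    free[link]           -- set of wavelengths currently free on that link
    usage[w]             -- network-wide usage count of wavelength w
    converters_available -- True if intermediate nodes can convert wavelengths

    Returns a {link: wavelength} mapping, or None if the call is blocked.
    """
    # Phase 1: least-used assignment under the wavelength-continuity
    # constraint -- the chosen wavelength must be free on *every* link.
    continuous = set.intersection(*(free[link] for link in route_links))
    if continuous:
        w = min(continuous, key=lambda wl: usage[wl])
        return {link: w for link in route_links}

    # Phase 2: the call would be blocked, so fall back to wavelength
    # conversion -- continuity is relaxed, and any free wavelength may be
    # used on each individual link.
    if converters_available and all(free[link] for link in route_links):
        return {link: min(free[link], key=lambda wl: usage[wl])
                for link in route_links}

    return None  # blocked: at least one link has no free wavelength at all
```

Under full conversion (phase 2), a call is blocked only if some link has no free wavelength at all, which is why conversion drives the blocking probability toward its minimum possible value.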
WAVELENGTH ASSIGNMENT

There are two constraints to keep in mind while trying to solve RWA. One is the distinct wavelength assignment constraint: all light paths sharing a common fiber must be assigned distinct wavelengths to avoid interference (Arun and Azizoglu, 2000). This applies not only within all-optical networks but to access links as well. The other is the wavelength continuity constraint, which means the wavelength assigned to each light path remains the same on all the links it traverses from source end-node to destination end-node (Murthy and Gurusamy, 2002; Qi and Chen, 2006). The most important wavelength assignment algorithms are as follows:

(i) First-fit algorithm: In this algorithm, the wavelengths of the traffic matrix are first sorted in non-decreasing order. The algorithm then steps through this sorted list to select candidate chains to be joined. This process is carried on until all chains have been considered, forming a single chain representing a linear topology. The idea of the first-fit scheme is to pack the usage of the wavelengths toward the lower end of the wavelengths so that higher-numbered wavelengths can contain longer continuous paths. By using the first-fit algorithm, blocking can be reduced to a great extent compared to the random algorithm (Alyatama, 2005; Yuan and Zhou, 2000).

(ii) Random algorithm: In this algorithm, a wavelength is selected randomly from the available wavelengths. A number is generated randomly, and the wavelength corresponding to this randomly generated number is assigned. As this technique chooses a random wavelength from the set of all wavelengths that are available along the path, the blocking probability cannot be minimized as much as with other wavelength assignment algorithms (Alyatama, 2005).

(iii) Most-used: The most-used scheme furthers the idea of the first-fit scheme in packing the usage of wavelengths. In this scheme, all the available wavelengths that can be used to establish a connection are considered, and the wavelength that has been used the most is selected for the connection. The wavelength usage under the most-used scheme is more compact than under the first-fit scheme. Studies have shown that with precise global network state information, the most-used scheme performs slightly better than the first-fit scheme (Yuan and Zhou, 2000).

(iv) Best-fit algorithm: Among all the wavelengths in the list, this algorithm chooses an efficient wavelength for the assignment, known as the best-fit wavelength assignment. According to previous study, the best-fit algorithm performs better than other existing algorithms. The blocking probability of best fit decreases with the number of wavelengths and increases with the load given to the path per unit link (Yuan and Zhou, 2000).

BLOCKING PROBABILITY

If there is no free wavelength available on any link, the call will be blocked. In simple terms, the blocking probability as per the Poisson formula can be calculated as the ratio of calls blocked to the total number of calls generated:

P_b = (number of blocked calls) / (total number of calls generated)    (1)

The performance of any network can be measured through the blocking probability, which is the statistical probability that a telephone connection cannot be established due to insufficient transmission resources in the network. Also, the blocking probability on a link can be calculated by the well-known Erlang-B formula, as given by Wan et al. (2003) in Equation (2):

P_b(L, W) = (L^W / W!) / (sum over k = 0..W of L^k / k!)    (2)

where P_b(L, W) is the blocking probability for load L (in Erlangs) and W wavelengths.
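Equation (2) can be evaluated numerically without computing large factorials by using the standard Erlang-B recursion B(L, 0) = 1 and B(L, k) = L * B(L, k-1) / (k + L * B(L, k-1)). A minimal Python sketch follows (Python is used here for illustration; the paper's own results were produced in MATLAB):

```python
def erlang_b(load: float, wavelengths: int) -> float:
    """Blocking probability P_b(L, W) from Equation (2).

    load        -- offered traffic L on the link, in Erlangs
    wavelengths -- number of wavelengths W available on the link

    Uses the numerically stable recursion
    B(L, k) = L * B(L, k - 1) / (k + L * B(L, k - 1)), with B(L, 0) = 1.
    """
    b = 1.0
    for k in range(1, wavelengths + 1):
        b = (load * b) / (k + load * b)
    return b

# Example: roughly 7% of calls are blocked at a load of 5 Erlangs
# when 8 wavelengths are available on the link.
print(erlang_b(5.0, 8))  # ~0.07
```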
MODELING SETUP

Here, the enhanced algorithm is given and evaluated in terms of blocking probability. The following assumptions are made in designing the model:

(i) The network is connected in an arbitrary topology. Each link has a fixed number of wavelengths.
(ii) Traffic is point to point.
(iii) There is no queuing of connection requests. A blocked connection request is immediately discarded.

Firstly, a source-to-destination pair is selected. Then, using the shortest path routing algorithm, a route is selected. After that, with the help of the proposed algorithm, a wavelength is assigned to the route. The least-used wavelength assignment algorithm is executed until blocking occurs. If a call is blocked, the concept of wavelength conversion is introduced, which is done with the help of wavelength converters, and hence the blocking probability is reduced.

RESULTS AND DISCUSSION

Here, the simulation results of the proposed algorithm are shown. In the first phase, the number of wavelengths is increased while the load per unit link is kept constant. The results prove that the blocking probability of the proposed algorithm decreases with the increase in the number of wavelengths. In the second phase, the load per unit link is increased keeping the other
2018-12-12T08:56:07.272Z
2015-03-30T00:00:00.000
{ "year": 2015, "sha1": "6c1ca5efcb0456ec502995de3f06e3e6ae330e12", "oa_license": "CCBY", "oa_url": "https://academicjournals.org/journal/SRE/article-full-text-pdf/589695451826.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "6c1ca5efcb0456ec502995de3f06e3e6ae330e12", "s2fieldsofstudy": [ "Business", "Physics" ], "extfieldsofstudy": [ "Biology" ] }
268500487
pes2o/s2orc
v3-fos-license
Significance of navigated transcranial magnetic stimulation and tractography to preserve motor function in patients undergoing surgery for motor eloquent gliomas

Resection of gliomas in or close to motor areas carries a high risk for morbidity and the development of surgery-related deficits. Navigated transcranial magnetic stimulation (nTMS), including nTMS-based tractography, is suitable for presurgical planning and risk assessment. The aim of this study was to investigate the association of postoperative motor status with the spatial relation to motor eloquent brain tissue in order to increase the understanding of postoperative motor deficits. Patient data, nTMS examinations, and imaging studies were retrospectively reviewed, and corticospinal tracts (CST) were reconstructed with two different approaches of nTMS-based seeding. Postoperative imaging and nTMS-augmented preoperative imaging were merged to identify the relation between motor positive cortical and subcortical areas and the resection cavity. 38 tumor surgeries were performed in 36 glioma patients (28.9% female) aged 55.1 ± 13.8 years. Mean distance between the CST and the lesion was 6.9 ± 5.1 mm at 75% of the patient-individual fractional anisotropy threshold, and median tumor volume reduction was 97.7 ± 11.6%. The positive predictive value for permanent deficits after resection of nTMS positive areas was 66.7% and the corresponding negative predictive value was 90.6%. Distances between the resection cavity and the CST were higher in patients with postoperatively stable motor function. Extent of resection and distance between resection cavity and CST correlated well. The present study strongly supports preoperative nTMS as an important surgical tool for preserving motor function in glioma patients at risk. Based upon nTMS motor mapping and fibertracking, the risk of developing new motor deficits can be estimated. General risk factors for postoperative deterioration in motor function are the distance between the lesion and the pyramidal tract (LTD) [28-32] and the resection of preoperatively identified nTMS motor positive areas. Both nTMS motor positive spots within the primary motor cortex and even those within non-primary motor areas should be preserved in order to maintain the patients' motor status [28-30,33,34]. The aim of the present study was to provide information and to externally confirm previously published results concerning the association of postoperative motor status in motor-eloquent glioma patients with nTMS motor mapping results and nTMS-based tractography. We additionally investigated several approaches to reconstructing the pyramidal tract in order to counterbalance some common issues of DTI fibertracking [15,35]. The primary outcome of our study was deterioration of motor function within the first days after surgery and a residual deficit at the follow-up visit. The secondary outcome was the extent of resection on postoperative MRI.

Compliance with Ethical standards

The study design was approved by the Institutional Review Board of Paracelsus Medical University Nuremberg (Registration numbers IRB-2020-022 and IRB-2022-012). All procedures performed in studies involving human participants were in accordance with the 1964 Helsinki declaration and its later amendments. Informed consent for study participation was waived due to the retrospective study design and statistical analysis of anonymized data.
Patient cohort

Patients treated between April 2016 and June 2021 at our institution were retrospectively reviewed. Inclusion criteria were age over 18 years, preoperative nTMS and DTI for evaluation of the motor system, histologically confirmed glioma CNS WHO grade 2-4, availability of pre- and postoperative MR imaging, and surgery with the aim to achieve maximum safe resection. Patients with LTD >20 mm and one patient with postoperative subcortical ischemia on diffusion MRI that was likely to have caused a new motor deficit were excluded from the study. We further excluded four patients who died from medical complications before discharge from hospital. All surgical procedures were conducted under general anesthesia using a microsurgical 5-ALA fluorescence-guided approach with the nTMS data integrated into the neuronavigation system. Based upon clinical examination, preoperative motor deficits were assessed as "no deficit", "mild" (disruption of fine motor skills or pronation in the arm drift test, facial palsy), "moderate" (incomplete hemiparesis), and "severe" (inability to lift one extremity from the surface, or plegia of one or more extremities). Pre- and postoperative neurological examinations were performed by neurosurgeons. A permanent deficit was defined as a residual surgery-related deficit which occurred in the immediate postoperative phase and did not return to the preoperative status within at least six weeks of follow-up. Parts of this cohort have been published recently [36].

Imaging and navigated brain stimulation

All patients underwent pre- and postoperative MR imaging including a 3D post-contrast T1-weighted sequence. Preoperative imaging additionally included diffusion tensor imaging (DTI with 32 gradient directions). All nTMS examinations were conducted with the NexStim NBS 5 system (NexStim Oy, Helsinki, Finland; Fig. 1A), and the workflow followed recently published guidelines [37]. Slight modifications were made and are described in the following. Ambu Neuroline 700 surface electrodes were used in all cases (Ambu, Denmark) and attached to the target muscles in a belly-tendon fashion. The ground electrode was placed at the patient's elbow. A minimum of two intrinsic hand muscles (abductor pollicis brevis (APB), abductor digiti minimi (ADM)) was monitored for the upper extremity. Lower limb muscles (tibialis anterior and/or gastrocnemius muscle) were mapped for tumors close to the anatomical leg motor area. The first round of mapping was conducted to determine the hand motor hotspot at a default stimulation intensity of 30-35% of stimulator output. Stimulation intensity was increased if no MEPs could be elicited. The first round of mapping was started at the omega-shaped portion of the precentral gyrus. If MEPs from the hand muscles could be registered, the stimulation site which produced the greatest peak-to-peak amplitude was stimulated several times to ensure reproducibility of MEPs. That spot was considered the hand motor hotspot and was used for determination of the resting motor threshold. The patient-individual resting motor threshold (rMT) was determined using the NBS system's threshold hunting algorithm. Mapping of the cortical representations of the upper extremity muscles was performed at 110-120% rMT (with a standard of 110% rMT; intensity was increased only if patients had difficulties in muscle relaxation).
The leg area was mapped with a stimulation intensity of at least 110% of the upper-extremity rMT. The primary motor cortex of the upper extremity and the peritumoral cortex were mapped in all cases. Each examination was analyzed post hoc for positive muscle responses, defined as MEPs with amplitudes greater than 50 μV and with latencies within a range of 10-30 ms. nTMS positive spots outside the precentral gyrus were additionally evaluated for the electric field current at the corresponding extremity's motor hotspot. If the corresponding electric field current at the motor hotspot reached the electric field current of the patient's motor threshold, these stimulations were classified as false positive due to possible indirect stimulation of the primary motor cortex and subsequently excluded from the analysis.

DTI-tractography

The nTMS-positive spots were exported via the standard DICOM format for tractography using the Medtronic StealthStation S8 StealthViz/StealthDTI (Medtronic Inc, Louisville, CO, USA) module. In the post-hoc analysis, the nTMS spots were enlarged to a diameter of 6 mm and served as a cortical region of interest (ROI). A second standard anatomy-based cubic ROI was placed within the caudal pons. Each corticospinal tract (CST) was visualized as follows: tractography was performed at 75% and 50% of the patient-individual maximum fractional anisotropy (fractional anisotropy threshold, FAT). Minimum fiber length was set to 110 mm and the maximum directional change to 60°. Fibers which clearly did not belong to the corticospinal tract were removed. In addition to the above-mentioned approach, an nTMS-assisted anatomical CST reconstruction was performed. A cubic ROI was placed slightly below the nTMS-positive motor area and served as an alternative (sub-)cortical seeding region (Fig. 1B). Software settings were identical to the settings described above. The nTMS-assisted anatomical CST reconstruction was conducted because it was not possible to visualize the CST in all patients by purely using the enlarged nTMS spots as a cortical seeding region in our cohort. However, we did not aim to compare different approaches of tractography in this study. The closest distance to the lesion (lesion-to-tract distance (LTD)), measured on the axial plane, was recorded for statistical analysis. For reasons of clarity, only the statistical evaluation and results of the purely nTMS-based DTI fibertracking are displayed in the text unless mentioned otherwise. The results of the nTMS-assisted anatomical fibertracking are shown in the tables and figures.

Data and imaging analysis

Pre- and postoperative tumor volume was assessed using the Medtronic StealthStation S8 (Medtronic Inc, Louisville, CO, USA) software. A neuroradiologist evaluated all postoperative MRI examinations for residual tumor tissue. The post-contrast T1 studies were used for volumetric assessment in high-grade glioma patients, and T2/FLAIR images were used in non-enhancing tumors. Pre- and postoperative MRI as well as nTMS positive areas and reconstructions of the pyramidal tracts were merged using StealthViz/StealthDTI and the software's inbuilt automatic fusion algorithm. All fusion results were checked for correct fusion and manually corrected if necessary. nTMS spots were classified as resected if they projected into the resection cavity.
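The tracking parameterization described above, patient-individual FA thresholds at 75% and 50% of the maximum FA combined with the fixed length and angle constraints, can be captured in a small helper. The following Python sketch is purely illustrative; the function name and dictionary layout are our own and are not part of any vendor software:

```python
def cst_tracking_settings(max_fa: float, fat_fraction: float = 0.75) -> dict:
    """Assemble the CST tracking settings used in this study: the FA
    threshold is a fraction (0.75 or 0.50) of the patient's maximum FA;
    fiber length and curvature constraints are fixed."""
    assert fat_fraction in (0.75, 0.50), "study used 75% and 50% FAT only"
    return {
        "fa_threshold": fat_fraction * max_fa,   # patient-individual FAT
        "min_fiber_length_mm": 110,              # minimum fiber length
        "max_directional_change_deg": 60,        # maximum turning angle
    }

# Example: settings for a patient whose maximum FA is 0.52.
print(cst_tracking_settings(0.52))        # 75% FAT -> fa_threshold 0.39
print(cst_tracking_settings(0.52, 0.50))  # 50% FAT -> fa_threshold 0.26
```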
The minimum distance between the resection cavity and the CST reconstructions was measured on the axial plane (cavity-to-tract distance (CTD)). We classified the fused images as showing an intersection of the CST and the resection cavity if the distance between the resection cavity and any CST reconstruction was 0 mm. Pre- and postoperative imaging analyses were subsequently compared to the patients' preoperative and postoperative status. Postoperative motor status was assessed as stable, transient (surgery-related) deficit, or permanent (surgery-related) deficit. Clinical postoperative status was analyzed with regard to nTMS spots at tumor margins on preoperative imaging, resection of nTMS positive spots, and intersection with CST reconstructions, as well as differences in LTD and CTD measurements. Resected nTMS spots outside the precentral gyrus were additionally analyzed. The extent of resection was correlated with LTD and CTD measurements as well as with resected nTMS spots, intersection with the CST, and the postoperative clinical status. We further conducted a subgroup analysis of glioblastoma patients with similar analyses.

Statistical analysis

Statistical analysis was carried out using IBM SPSS® Version 27 for Windows 10 (IBM Corp. Released 2020. IBM SPSS Statistics for Windows, Version 27.0. Armonk, NY: IBM Corp). Metric variables are presented as means and standard deviations (±SD) for age and distances, and as medians and interquartile ranges (±IQR) for tumor volumes. Categorical variables are presented as absolute numbers (n) and percentages (%). Fisher exact tests and Fisher-Freeman-Halton tests were applied for categorical variables. For significant results, Cramer's V is provided as a measure of effect size. For categorical variables, positive and negative predictive values were calculated. The Mann-Whitney U test and Kruskal-Wallis tests were conducted for nonparametric testing of continuous variables. Cohen's d was calculated to measure effect size for non-parametric tests with significant results. Correlation analyses were performed using Spearman's correlation, with Spearman's rho (r_s) and p-values displayed. A p-value <0.05 was considered significant in two-tailed testing.

Patient cohort

38 tumor surgeries were performed in 36 patients. Two patients were operated on for primary and recurrent tumors. 11 patients (28.9%) were female, and mean age at surgery was 55.1 ± 13.8 years. 16 patients (42.1%) presented with motor deficits on admission, of whom 7 (18.4%) were classified as mild, 7 (18.4%) as moderate, and 2 (5.3%) as severe. Table 1 summarizes the baseline data of the patient cohort.
nTMS examination and tractography

Navigated brain stimulation was well tolerated in all patients, and no adverse events occurred in the cohort. MEPs could be obtained in all patients. Mean resting motor threshold (rMT) was 32.8 ± 11.6% of the system's stimulator output. nTMS positive spots at the tumor margin were present in 12 cases (31.6%) and were associated with preoperative motor deficits (p = 0.014). The closest distance between the CST fibers and the lesion (lesion-to-tract distance, LTD) measured 6.9 ± 5.1 mm at 75% FAT. In five cases (12.8%), the nTMS-based approach did not visualize any fibers which clearly belonged to the CST. nTMS-assisted anatomical fibertracking could be conducted successfully in all patients of the study. Mean LTD was 6.8 ± 5.6 mm at 75% FAT for the nTMS-assisted anatomical approach. LTDs between both approaches to tractography correlated well (r_s = 0.79, p < 0.001 at both 75% and 50% FAT). Mean distance between the resection cavity and the corticospinal tract was 9.7 ± 8.1 mm at 75% FAT. For all CST reconstruction approaches, LTD and CTD showed significant correlations. CTDs for the different CST reconstructions and FA values showed strong correlations. The results of the correlation analyses are outlined in Fig. 2A and B. Intersection of the reconstructed CSTs with the resection cavity was observed in 10 cases (26.3%).

Functional outcome in relation to nTMS and tractography

11 patients (28.9%) developed new motor deficits or aggravation of a preoperative motor deficit at discharge from hospital. 4 patients (10.5%) recovered by the follow-up visit, and 7 (18.4%) had a permanent surgery-related moderate motor deficit. No patient suffered from a permanent surgery-related severe motor deficit, and only two patients (9.1%) without a preoperative tumor-induced deficit suffered a permanent surgery-related motor deterioration. nTMS positive spots at the tumor margin were visualized in 12 cases (31.6%). In these cases, surgery-related motor decline occurred in 6 cases and the new deficit remained in 4 patients (p = 0.017, Cramer's V = 0.45). This resulted in a positive predictive value of 50% for any motor deterioration and 33.3% for permanent deficits if nTMS positive spots were present at tumor margins. Vice versa, the negative predictive value for stable motor function was 88.5% if the tumor did not infiltrate the nTMS positive area. Only two patients with nTMS positive spots at the tumor margin outside the precentral gyrus, as identified by nTMS, developed a permanent motor deficit. nTMS spots had to be resected in 6 cases (15.8%), and this was strongly associated with infiltration of nTMS-motor-positive gyri as observed in preoperative mapping (p < 0.001, Cramer's V = 0.64). Resection of nTMS positive spots led to permanent surgery-related deficits (p = 0.003, Cramer's V = 0.57). The positive predictive value for permanent deficits after resection of nTMS positive areas was 66.7% and the corresponding negative predictive value was 90.6% (Table 2, Fig. 3).

Fig. 2. Results of the correlation analyses of LTD and CTD measurements for nTMS-based fibertracking (A) and nTMS-assisted anatomical fibertracking (B). All correlation analyses revealed significant correlations (all p < 0.01) with a correlation coefficient r_s > 0.5.
Correspondingly, intersection of the CST and the resection cavity was associated with permanent motor deficits (p = 0.012, Cramer's V = 0.5). The resection of nTMS positive spots and of subcortical white matter pathways (i.e., intersection of the CST and the resection cavity) could explain all but one of the permanent deficits. The greatest CTD at which a permanent surgery-related deficit developed was 9.2 mm at 75% FAT, regardless of resection of nTMS positive spots. In the analysis of subcortical fiber pathways, we observed shorter LTDs in the anatomical approach if patients developed postoperative permanent motor deficits (Fig. 4A, B).

Extent of resection

Median residual tumor volume was 0.3 ± 3.0 cm³ for the whole cohort. Accordingly, the overall EOR was 97.7 ± 11.6% of the tumor volume. In glioblastoma patients, the median residual tumor volume was 0.2 ± 0.8 cm³ (EOR 94.4 ± 8.5%), whereas the median residual volume in grade 2 and 3 gliomas was 4.3 ± 9.25 cm³ (EOR 84.6 ± 17.5%). A gross total resection was achieved in 10 glioblastoma patients (38.5%) and in 2 (16.7%) of the grade 2 and 3 tumors. LTD measurements did not correlate with residual tumor volume or volume reduction (all p > 0.2). CTDs correlated with residual tumor volume (r_s = 0.42, p = 0.015 at 75% FAT). Residual tumor volume was associated neither with postoperative motor decline (p = 0.89) nor with the resection of nTMS spots (p = 0.097) or intersection with the CST on postoperative imaging (p = 0.4). There were no surgery-related complications apart from one hemorrhage which required re-craniotomy and evacuation of the hematoma.

Subgroup glioblastoma patients

This subgroup comprised 26 patients, of whom 12 (46.2%) had a preoperative motor deficit. Eight (30.8%) patients developed new motor deficits postoperatively, of which 2 (7.7%) were transient. Cortical motor eloquent spots were resected in 6 surgeries (23.1%), and injury of the CST was seen in 9 (34.6%) postoperative images. Persistent decline in motor function was associated with resection of nTMS spots (p = 0.004, Cramer's V = 0.63) and intersection with the CST (p = 0.014, Cramer's V = 0.57). Correspondingly, the positive predictive value for persistent motor deficits after resection of nTMS positive spots was 66.7% and the negative predictive value was 90%. For intersection of the CST and the resection cavity, the positive predictive value was 83.3% and the negative predictive value was 80%. Closer CTDs were associated with persistent worsening in motor function (p = 0.043, Cohen's d = 0.49 for 75% FAT and p = 0.065, Cohen's d = 0.43 for 50% FAT in the nTMS-assisted anatomical fibertracking; all p > 0.13 for the nTMS-based fibertracking).

Discussion

This study strongly supports the significance of functional imaging in glioma surgery. Using an nTMS and fibertracking approach, it was possible to resect highly motor eloquent intrinsic brain tumors with a minimal rate of surgery-related permanent motor deficits. Firstly, as a proof of principle, the resection of motor eloquent cortical tissue resulted in permanent motor deficits. Secondly, we demonstrated a clear relation between permanent motor deficits and the intraoperative distance to subcortical motor pathways.
In neuro-oncologic neurosurgery, the extent of resection is one of the main prognostic factors. The surgical goal should be, whenever feasible, a gross total resection in order to significantly improve the prognosis of the patients [1-4,8]. At the same time, it is not negotiable that both quality of life and long overall survival can only be maintained if patients do not suffer from postoperative motor deficits [7]. The extent of resection in our cohort was within the range of other published cohorts from different centers, indicating similar approaches and patient selection [13,28-31].

We observed a significant correlation between the estimated intraoperative distance to the pyramidal tract and the amount of tumor volume reduction, a sign of a more aggressive surgical strategy in patients with a lower risk of unfavorable postoperative functional outcome. Yet the extent of resection and postoperative motor status were independent in our analysis. This finding is congruent with a previously published cohort [28]. In previously published literature, the prevalence of postoperative deficits ranged between 9.75% [38] and 22% [28-30,32]. The LTD cut-offs varied between 8 and 12 mm [28-30,32]. A second important risk factor for long-term motor deficits is direct infiltration of the nTMS-positive area [28-30].

In our cohort, only 18.4% suffered from a new surgery-induced motor deficit. It needs to be pointed out that only 9.1% of the patients without a preoperative deficit deteriorated postoperatively. Infiltration of nTMS-positive cortex was linked to motor deficits but did not solely predict motor outcome [29-31,33,34]. Moser et al. [33] reported 62% permanent deficits after resection of nTMS-positive points, and nTMS-positive sites outside the precentral gyrus were moreover associated with the development of motor deficits. In line with these results, Muir et al. [34] observed motor decline in 50% of the patients if nTMS spots underwent resection during tumor removal. Additionally, nTMS-based tractography could precisely predict motor deficits in their cohort [34]. The spatial relation of motor-positive cortical areas and the tumorous lesion suggested a different importance of nTMS-positive spots outside the precentral gyrus. No patient with nTMS-positive spots at the tumor margin within the postcentral gyrus developed a permanent deficit in our cohort, and only two patients with infiltration of supplementary motor areas suffered from a permanent surgery-related deficit. The positive predictive value of nTMS-positive cortex at the tumor-brain interface for functional outcome was rather low in our cohort. The negative predictive values for preserved motor function if eloquent tissue was spared were remarkably high. These results do not advocate a less aggressive surgical strategy concerning the extent of resection. On the contrary, surgery aiming at gross total resection should be performed if motor-eloquent tissue is not endangered according to nTMS. Motor mapping of patients with preoperative motor deficits was possible in our cohort, and functional imaging data should be considered in the preoperative planning process of those patients.
The results of our study and previously published data strongly support the prognostic value of presurgical nTMS motor mapping. Furthermore, the overlay of nTMS spots and CST reconstructions with postoperative MRI correlated with postoperative outcome. In cases with an infiltrated motor-positive area, careful planning of the extent of resection is required in order to preserve the preoperative clinical status of the patients.

Limitations of the study

Limitations arise from the lack of intraoperative verification of the preoperative planning via electrophysiological monitoring, and from the commonly known limitations of DTI [15,39,40]. Nevertheless, the preoperative nTMS workflow was highly standardized and nTMS mapping could be conducted in all patients, even in those with severe preoperative deficits. Furthermore, it is broadly accepted that the spatial accuracy of nTMS is comparable to intraoperative mapping [18,19,22]. We have to state that, owing to the small sample size, non-significant results must be interpreted with caution and some analyses might be underpowered, such as the preoperative tractography. On the other hand, most of our findings are supported by the existing literature. We have to acknowledge that motor deficits were not quantified in a standardized way; however, no patient in the cohort developed a surgery-related plegia or functionally severe impairment. Moreover, the detailed analysis of the postoperative imaging classified all but one permanent surgery-related deficit correctly. This underlines the significance of functional imaging in surgery of motor-eloquent gliomas.

Conclusion

In the presented study, postoperative motor function was associated both with the resection of nTMS motor-positive spots and with the distance between the resection cavity and the pyramidal tract. nTMS-positive spots within the tumor or at the tumor-brain interface were at risk of being resected during surgery, with a subsequently elevated risk of permanent motor deficits. Our data strongly support aggressive surgical strategies in patients without nTMS-positive spots at the tumor margin, underlined by a negative predictive value of 90% for stable motor function if the motor cortex can be preserved during tumor resection.

Compliance with ethical standards

The study design was approved by the Institutional Review Board of Paracelsus Medical University Nuremberg (registration numbers IRB-2020-022 and IRB-2022-012). All procedures performed in studies involving human participants were in accordance with the 1964 Helsinki declaration and its later amendments. Informed consent for study participation was waived due to the retrospective study design and the statistical analysis of anonymized data.

Disclosures and conflict of interest

T.E. received research grants from "Verein zur Förderung des Tumorzentrums der Universität Erlangen-Nürnberg e.V".

Funding

There was no external funding.

Fig. 1. Identifying the hand motor area with nTMS. Motor-positive spots are displayed in orange (A). Seeding regions for fibertracking (B), with the nTMS-based ROI displayed in white and the anatomical ROI in pink. (For interpretation of the references to colour in this figure legend, the reader is referred to the Web version of this article.)

Table 1. Overview of the patient cohort.

Table 2. Positive and negative predictive values for motor outcome in relation to motor-eloquent tissue.
2024-03-18T15:12:04.913Z
2024-03-01T00:00:00.000
{ "year": 2024, "sha1": "89aee9591004ba7c05b977f72aa7519bfe44c980", "oa_license": "CCBYNCND", "oa_url": "http://www.cell.com/article/S240584402404146X/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "3148c20366fefc3f8f6ef52d16b5e245bb11bf2a", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
13280976
pes2o/s2orc
v3-fos-license
Constrained evolution in axisymmetry and the gravitational collapse of prolate Brill waves

This paper is concerned with the Einstein equations in axisymmetric vacuum spacetimes. We consider numerical evolution schemes that solve the constraint equations as well as elliptic gauge conditions at each time step. We examine two such schemes that have been proposed in the literature and show that some of their elliptic equations are indefinite, thus potentially admitting nonunique solutions and causing numerical solvers based on classical relaxation methods to fail. A new scheme is then presented that does not suffer from these problems. We use our numerical implementation to study the gravitational collapse of Brill waves. A highly prolate wave is shown to form a black hole rather than a naked singularity.

Introduction

Driven by the need for waveform templates in gravitational wave data analysis, numerical relativity has focused in recent years on the modelling of astrophysical sources of gravitational waves such as the inspiral and coalescence of compact objects. Such systems do not possess any symmetries and thus require a fully 3 + 1 dimensional numerical code. The advantage of assuming a spacetime symmetry, on the other hand, is that it allows for a dimensional reduction of the Einstein equations, which reduces the computational effort considerably so that greater numerical accuracy can be obtained. While spherical or planar symmetry yields the greatest reduction in computational cost, the intermediate case of axisymmetry is more interesting in that it permits the study of gravitational waves. In this article we focus on vacuum axisymmetric spacetimes and assume that the Killing vector is hypersurface orthogonal so that there is only one gravitational degree of freedom.

The axisymmetric Einstein equations can be simplified considerably by choosing suitable gauge conditions. Here we consider a combination of maximal slicing and quasi-isotropic gauge [1,2,3]. This gauge reduces the number of dependent variables to such an extent that only one pair of evolution equations corresponding to the one gravitational degree of freedom needs to be kept. All the other variables can be solved for using the constraint equations and gauge conditions. This fully constrained approach was taken in [4]. Partially constrained schemes (e.g., [5,6]) substitute some of the constraint equations with evolution equations; this is possible because the Einstein equations are overdetermined. Such (fully or partially) constrained schemes have proven very robust in simulations of strong gravity phenomena. Examples include the collapse of vacuum axisymmetric gravitational waves, so-called Brill waves [7], in [6]. Critical phenomena in this system were found in [5,8]. Critical phenomena in the collapse of massless scalar fields [9] and complex scalar fields with angular momentum [10] were also studied, as was the collapse of collisionless matter [11].

Nevertheless, constrained evolution schemes have been plagued with problems. The authors of [4] reported that their multigrid elliptic solver failed occasionally for the Hamiltonian constraint equation in the strong-field regime. This problem can be circumvented by using instead the evolution equation for the conformal factor. However, it was found that this was "not sufficient to ensure convergence in certain Brill-wave dominated spacetimes". Similar difficulties were encountered in [12,13].
The purpose of this article is to determine the cause of these problems and to develop an improved constrained evolution scheme. The suspect elliptic equations belong to a class of (nonlinear) Helmholtz-like equations, which are discussed quite generally in section 2. We point out that if these Helmholtz equations are indefinite (loosely speaking, if they have the "wrong sign") then their solutions, should they exist, are potentially nonunique. The same criterion is found to be related to the convergence of numerical solvers based on classical relaxation methods. In section 3, we review the partially constrained scheme of [6] and the fully constrained scheme of [4]. We show that some of the elliptic equations in these formulations are indefinite. This leads us to the construction of a modified fully constrained scheme that does not suffer from this problem. The arguments involved turn out to be closely related to questions of (non)uniqueness in conformal approaches to the initial data problem in standard 3 + 1 numerical relativity [14,15]. A numerical implementation of the new fully constrained scheme is described in section 4. In section 5, we apply it to a study of Brill wave gravitational collapse. After performing a convergence test and comparing results for a strong wave with "spherical" initial data, we focus on a highly prolate configuration, one of the initial data sets examined in [16]. By considering sufficiently prolate configurations, the authors of [16] were able to construct initial data without an apparent horizon but apparently with arbitrarily large curvature. They conjectured that such initial data would evolve to form a naked singularity rather than a black hole, which would constitute a violation of weak cosmic censorship. A numerical evolution of one of these prolate initial data sets was carried out in [6]. Due to a lack of resolution on their compactified spatial grid, the authors could not evolve the wave for a sufficiently long time; the trends in certain quantities suggested, however, that an apparent horizon would eventually form. Using our new constrained evolution scheme, we are able to evolve the same initial data for much longer, and we confirm that an apparent horizon does form. We conclude and discuss some open questions in section 6.

General remarks on Helmholtz-like elliptic equations

Let us first consider the Helmholtz equation

∆u + cu = f,  (1)

where ∆ is the flat-space Laplacian on R^n and c and f are smooth functions. We impose the boundary condition u → 0 at spatial infinity. More generally, a boundary condition u → a = const can always be transformed to this case by considering the function u − a. It follows from standard elliptic theory (see e.g. [17]) that (1) has a unique solution if c ≤ 0 everywhere. If c > 0 then multiple solutions may exist or there may not be any solution at all. For c > 0 the elliptic equation is said to be indefinite.

Next we consider the quasilinear equation

∆u + F(u) = f,  (2)

where F(u) is a smooth (not necessarily linear) function. Proving existence and uniqueness of solutions to this equation is nontrivial. However, a necessary condition for uniqueness can easily be obtained. Suppose u_0 is a given solution and consider a small perturbation of it, u = u_0 + δu. Writing F' = dF/du, we find that for u to be a solution of (2), δu must satisfy, to linear order,

∆δu + F'(u_0) δu = 0.  (3)

This is of the form (1) with c = F'(u_0) and f = 0. If F'(u_0) ≤ 0 then there is only the trivial solution δu ≡ 0, and we call the original problem (2) linearization stable.
If on the other hand F'(u_0) > 0 then multiple solutions of the linearized problem (3), and hence also of the nonlinear problem (2), may exist. As an example relevant to the formulations of the Einstein equations discussed in this article, we take

F(u) = c u^p,  (4)

with p ∈ R and c a smooth function. Then (2) is linearization stable provided that pc ≤ 0. We say that in this case the equation has the "right sign".

Because of the above considerations on the uniqueness of solutions, it is clearly desirable to have an equation with the "right sign" if a numerical solution is attempted. There is however also a more practical reason. Consider again the linear Helmholtz equation (1) in, say, n = 2 dimensions. Suppose we cover the domain with a uniform Cartesian grid with spacing h, denoting the value of u at the grid point with indices (i, j) by u_ij. A discretization of (1) using second-order accurate centred finite differences yields

h^{-2} (u_{i+1,j} + u_{i-1,j} + u_{i,j+1} + u_{i,j-1} - 4u_{ij}) + c_{ij} u_{ij} = f_{ij}.  (5)

We formally write this system of linear equations as

Au = b.  (6)

Large systems (6) are commonly solved using relaxation methods, which obtain a series of successively improved numerical approximations. For example, a step of the Gauss-Seidel method consists in sweeping through the grid (typically in lexicographical or in red-black order), at each grid point (i, j) solving the equation for u_ij and replacing its value,

u_ij ← (u_{i+1,j} + u_{i-1,j} + u_{i,j+1} + u_{i,j-1} - h² f_{ij}) / (4 - c_{ij} h²).

The relaxation converges provided the matrix A in (6) is strictly diagonally dominant, i.e., in each row of the matrix the absolute value of the diagonal term is greater than the sum of the absolute values of the off-diagonal terms (see e.g. [18]). In our example (5), the diagonal term is |4 - ch²| and the off-diagonal terms add up to 4, so that the condition for convergence is c < 0. (The other possibility, c large and positive, is not feasible because h → 0 in the continuum limit.) In practice, if c is positive but sufficiently small then the relaxation will still converge, but as c is increased convergence will begin to stall and ultimately the relaxation will diverge. Similar convergence criteria hold for other relaxation schemes such as the Jacobi or SOR methods. In particular, the multigrid method [19] is based on these relaxation schemes and will not converge if the underlying relaxation does not. These warnings do not apply to certain versions of the conjugate gradient method [18] or other Krylov subspace iterations, which ideally only require the matrix A to be invertible. For a combination of such methods with multigrid see e.g. [20].
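The convergence criterion just derived is easy to observe experimentally. The following minimal Python sketch (an illustration, not the code of the paper) runs red-black Gauss-Seidel on the discretization (5) with homogeneous Dirichlet data and constant c: for c ≤ 0 the residual decays steadily, while for a large positive c the iteration diverges.

```python
# Red-black Gauss-Seidel for the 2-D Helmholtz equation Lap(u) + c*u = f
# with homogeneous Dirichlet data, illustrating the convergence criterion:
# residual decays for c <= 0, diverges for sufficiently large c > 0.
import numpy as np

def gauss_seidel_helmholtz(c, n=33, sweeps=500):
    h = 1.0 / (n - 1)
    u = np.zeros((n, n))                   # boundary values stay at zero
    f = np.ones((n, n))                    # arbitrary smooth source
    for _ in range(sweeps):
        for colour in (0, 1):              # red-black ordering
            for i in range(1, n - 1):
                for j in range(1, n - 1):
                    if (i + j) % 2 == colour:
                        nb = u[i+1, j] + u[i-1, j] + u[i, j+1] + u[i, j-1]
                        u[i, j] = (nb - h*h*f[i, j]) / (4.0 - c*h*h)
    # residual of Lap(u) + c*u - f on the interior points
    res = (u[2:, 1:-1] + u[:-2, 1:-1] + u[1:-1, 2:] + u[1:-1, :-2]
           - 4.0*u[1:-1, 1:-1]) / h**2 + c*u[1:-1, 1:-1] - f[1:-1, 1:-1]
    return np.max(np.abs(res))

for c in (-10.0, 0.0, 1000.0):
    print(f"c = {c:8.1f}: max residual after 500 sweeps = "
          f"{gauss_seidel_helmholtz(c):.2e}")
```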
Formulations of the axisymmetric Einstein equations

We focus on axisymmetric vacuum spacetimes. Axisymmetry means that there is an everywhere spacelike Killing vector field ξ with closed orbits. Here we restrict ourselves to the case where the Killing vector is hypersurface orthogonal. We choose cylindrical polar coordinates t, z, r, φ such that ξ = ∂/∂φ. In the following, indices a, b, ... range over t, r, z, φ; indices i, j, ... over r, z, φ; and indices A, B, ... over r, z. The line element is written in the standard 3+1 form with lapse function α and shift vector β^A, and we impose as a gauge condition that the 2-metric on the t = const, φ = const hypersurfaces be conformally flat in our coordinates (quasi-isotropic gauge): the spatial metric obeys γ_rz = 0 and γ_rr = γ_zz, parametrized by a conformal factor ψ and a function S via γ_rr = γ_zz = ψ⁴ e^{2rS}. This condition must be preserved by the evolution equation for the spatial metric,

∂_t γ_ij = -2α K_ij + (L_β γ)_ij,

where K_ij is the extrinsic curvature of the t = const surfaces and L denotes the Lie derivative. From this we deduce the spatial gauge conditions (10), two first-order equations that express αU and αK^z_r in terms of first derivatives of the shift, where U ≡ K^z_z - K^r_r.

Maximal slicing K ≡ K^i_i = 0 is imposed, so that the extrinsic curvature has three degrees of freedom, which are taken to be K^z_r, U, and W ≡ (K^r_r - K^φ_φ)/r (this particular combination is motivated by regularity on the axis of symmetry [6]). The evolution equation for the extrinsic curvature is the vacuum ADM equation

∂_t K_ij = -D_i D_j α + α (R_ij + K K_ij - 2 K_ik K^k_j) + (L_β K)_ij,

where D is the covariant derivative compatible with the spatial metric γ_ij and R_ij is its Ricci tensor. Preservation of the maximal slicing condition implies D^i D_i α = αR = α K^i_j K^j_i (using the Hamiltonian constraint for the second equality), or

α_{,rr} + α_{,zz} + (2P^r + r^{-1}) α_{,r} + 2P^z α_{,z} - 2αψ⁴ e^{2rS} [ (1/3)(U + (1/2) rW)² + (1/4)(rW)² + (K^z_r)² ] = 0.  (12)

Here and in the following, P^r and P^z denote shorthand combinations of first derivatives of ψ and S.

There are many different ways of constructing an evolution scheme for the axisymmetric Einstein equations in the above gauge, depending on the number of constraint equations being solved. We review two schemes that have been used for numerical simulations and show that some of their elliptic equations are indefinite in the sense of section 2. Finally, we propose a new scheme that does not suffer from this problem.

A partially constrained scheme: Garfinkle and Duncan [6]

Garfinkle and Duncan [6] choose to solve only the Hamiltonian constraint equation (14) for the conformal factor ψ. This equation is of the type (2), (4) with p = 5 and c ≥ 0 (note that the second square bracket in (14) is non-negative). Hence it has the "wrong sign" and suffers from potential nonuniqueness of solutions as well as difficulties in solving it numerically using relaxation methods (section 2). The latter is not a concern in [6], though, because the authors use a conjugate gradient method. The momentum constraints D^j K_ij = 0 are not solved but only monitored during the evolution; written out explicitly in the variables K^z_r, U and W, they form the two equations (15). The extrinsic curvature variables K^z_r, U and W are all evolved using their time evolution equations [6]. The slicing condition is solved in the form (12), and this is a Helmholtz equation with the "right sign" (c ≤ 0 in (1); note that the square bracket in (12) is non-negative). In order to solve for the shift vector, additional derivatives are taken of equations (10), which combine to two decoupled second-order equations (16). These are Poisson equations (c = 0 in (1)) and do not cause any problems.

A fully constrained scheme: Choptuik et al [4]

A similar formulation was developed by Choptuik et al [4]. Their definitions of the corresponding variables σ_Ch and ψ_Ch (the subscript 'Ch' refers to [4]) differ slightly from our S and ψ. This difference does not have any consequences for the properties of the elliptic equations that we are concerned with here, and so for the sake of consistency we continue to use our convention (which agrees with the one in [6]). As a result, the equations discussed below differ from those in [4] in a minor way. In the same way as Garfinkle and Duncan, Choptuik et al also solve the Hamiltonian constraint, which is again indefinite. Unlike Garfinkle and Duncan, however, they also solve the momentum constraints. This is done by replacing K^z_r and U with first derivatives of the shift using the gauge conditions (10). The momentum constraints (15) then become two coupled second-order equations for the shift. The principal part of these two coupled equations is elliptic, and so far there is no need for concern. A problem arises, however, when equations (10) are substituted into the slicing condition (12), yielding equation (19): the term containing the square bracket then has the "wrong sign", p = -1 and c ≤ 0 in (2), (4).
A new fully constrained scheme

We observed that in both of the above schemes the Hamiltonian constraint was indefinite, and that in the second one the slicing condition was, too. We now present a scheme in which both equations, and in fact all the elliptic equations that are being solved, are definite. The Hamiltonian constraint can be cured by rescaling the extrinsic curvature variables with a suitable power of the conformal factor,

(K̃^z_r, Ũ, W̃) ≡ ψ^p (K^z_r, U, W).  (20)

In terms of the new variables, the exponent of ψ multiplying the second square bracket in (14) is 5 - 2p, so that the equation becomes definite for p ≥ 5/2. There is a preferred choice: for p = 6 the terms containing derivatives of ψ in the momentum constraints (15) all cancel under the substitution (20). The same rescaling of the extrinsic curvature was applied by Abrahams and Evans [5]. Their scheme is, however, not fully constrained: the extrinsic curvature variables are evolved as in [6].

The indefiniteness of the slicing condition was caused by the substitution (10), more precisely by its α dependence. The original motivation for this substitution was the desire to be able to solve the momentum constraints. However, we can still do this as before if we introduce a new vector η^A and express the rescaled extrinsic curvature components K̃^z_r and Ũ in terms of its first derivatives, without any factor of the lapse. The momentum constraints are then solved for η^A. The price we have to pay is that we still need to solve the spatial gauge conditions (16), where now K^z_r and U are expressed in terms of η^±. That is, we have to solve two more elliptic equations than Choptuik et al.

The elliptic equations, written out explicitly, are the momentum constraints (22), the Hamiltonian constraint (23), the slicing condition (24), and the spatial gauge conditions (25). We note that (22)-(25) form a hierarchy: the equations are successively solved for η^A, ψ, α, and β^A. After substituting the solutions of the previous equations, each equation in the hierarchy can be regarded as a decoupled scalar elliptic equation, or as an elliptic system in the case of (22). The terms in the second lines of (23) and (24) now have the "right signs". An exception common to all the schemes discussed in this section is the term multiplying ψ in the first line of (23): in general one expects S to oscillate, so that ∆(rS) can have either sign. This is the usual difficulty one faces in conformal formulations of the initial value equations; see section 3.4.

The variable S and its "time derivative" W̃ are evolved. This pair of evolution equations corresponds to the one dynamical degree of freedom. Note that if we had not restricted the Killing vector to be hypersurface orthogonal then there would be a second dynamical degree of freedom. In linearized theory these two degrees of freedom can be understood as the two polarization states of a gravitational wave. There are additional evolution equations for ψ and for η^+ and η^- (equivalently, for K̃^z_r and Ũ) that are not actively enforced but that can be used in order to test the accuracy of a numerical implementation. All the evolution equations are given in Appendix A. Here we remark that, assuming a solution of the elliptic equations is given, the principal part of the evolution equations is that of a wave equation (26) (where ≃ denotes equality of principal parts). This equation is clearly hyperbolic, a necessary criterion for the well-posedness of the Cauchy problem. See also [21] for a recent analysis of the hyperbolic part of a fully 3+1 dimensional constrained evolution scheme based on the Dirac gauge.
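The definiteness criterion can be made concrete with a few lines of code. The following sketch (illustrative only) applies the linearization-stability condition pc ≤ 0 from section 2 to the ψ-exponent 5 − 2p of the non-negative curvature term in the Hamiltonian constraint, for several choices of the rescaling power p in (20).

```python
# Quick check of the definiteness criterion p*c <= 0 for the Hamiltonian
# constraint: after the rescaling (20), the non-negative curvature bracket
# multiplies psi^(5 - 2p).
def hamiltonian_sign(p_rescale):
    """Return the effective psi-exponent and whether p_eff * c <= 0."""
    p_eff = 5 - 2 * p_rescale   # exponent of psi in the curvature term
    c_sign = +1                 # the bracket is a sum of squares, so c >= 0
    return p_eff, p_eff * c_sign <= 0

for p in (0, 2.5, 6):
    p_eff, stable = hamiltonian_sign(p)
    print(f"rescaling p = {p}: psi-exponent {p_eff}, "
          f"linearization stable: {stable}")
# p = 0 (no rescaling): exponent 5, indefinite ("wrong sign")
# p = 2.5: marginal;  p = 6: exponent -7, definite, matching CTT
```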
Relation to the (extended) conformal thin sandwich formulation

Our discussion of different constrained evolution schemes for the axisymmetric Einstein equations is closely related to conformal approaches to solving the initial value equations in standard 3 + 1 dimensional spacetime. Here one seeks to find a spatial metric γ_ij and extrinsic curvature K_ij satisfying the constraint equations on the initial slice, and often also a lapse α and shift β^i satisfying suitable gauge conditions. This is done by setting

γ_ij = ψ⁴ γ̃_ij,

where ψ is the conformal factor and the conformal metric γ̃_ij is assumed to be given. For simplicity, and for analogy with the axisymmetric formulations discussed above, we impose the gauge condition ∂_t γ̃_ij = 0. (In the axisymmetric case, we only controlled the r, z components, ∂_t γ̃_AB = 0.) We also assume maximal slicing K = 0 throughout, and we work in vacuum. It is well known in the conformal approach that the extrinsic curvature K^ij cannot be freely specified [22]; instead it has to be conformally rescaled,

K^ij = ψ^{-10} Ã^ij.

This corresponds to the proposed rescaling of the extrinsic curvature variables (20) in the new axisymmetric scheme. The Hamiltonian constraint now takes the form of the Lichnerowicz equation,

∆̃ψ - (1/8) R̃ ψ + (1/8) Ã_ij Ã^ij ψ^{-7} = 0,  (29)

where ∆̃ is the covariant Laplacian and R̃ the Ricci scalar of the conformal metric γ̃_ij. Note again that the last term in (29) has the "right sign" for linearization stability, cf. (23). As pointed out in the previous subsection, the linear term involving R̃ψ can have either sign. However, R̃ < 0 does not necessarily imply that multiple solutions exist. There is a well-developed theory for existence and uniqueness of solutions to (29); see [23].

In order to solve the momentum constraints, York's original conformal transverse traceless (CTT) method introduces a vector η^i and sets

Ã^ij = (L̃η)^ij,  (30)

where L̃ is the conformal Killing operator defined as

(L̃η)^ij = D̃^i η^j + D̃^j η^i - (2/3) γ̃^ij D̃_k η^k.  (31)

The momentum constraints now read D̃_j (L̃η)^ij = 0, in analogy with (22). In the CTT approach, any gauge conditions are solved after a solution of the constraint equations has been found. For example, maximal slicing K = ∂_t K = 0 implies an elliptic equation (33) for the conformal lapse α̃ = ψ^{-6} α, in which (30) is substituted. This equation has the "right sign", as it has in the new axisymmetric formulation (24) and in the one by Garfinkle and Duncan (12).

In contrast, the extended conformal thin sandwich (XCTS) method [24] directly expresses the extrinsic curvature in terms of the shift β^i, instead of (30). As a result, the slicing condition (33) acquires the wrong sign. This is precisely what happens in the scheme by Choptuik et al, cf. (10) and (19). Remarkably, a numerical study of the XCTS equations [14] showed that this system does admit nonunique solutions. Two solutions were found for small perturbations of Minkowski space, one of them even containing a black hole. The two branches meet for a certain critical amplitude of the perturbation. This parabolic branching was explained using Lyapunov-Schmidt theory in [15]. Because of the similarity with the XCTS equations, it is conceivable that the constrained axisymmetric formulation of Choptuik et al [4] might show a similar branching behaviour. This is clearly undesirable for numerical evolutions because the elliptic solver might jump from one solution branch to the other during the course of an evolution. However, even before this can happen, the multigrid method used in [4] will fail to converge, as explained in section 2.
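As a concrete illustration of the conformal Killing operator (31), here is a minimal sketch that evaluates its flat-space special case (γ̃_ij = δ_ij, so D̃ reduces to partial derivatives) by finite differences on a uniform grid; this is an illustration only, not the solver used in the paper.

```python
# Flat-space conformal Killing operator (31):
# (L eta)^{ij} = d_i eta^j + d_j eta^i - (2/3) delta^{ij} div(eta).
import numpy as np

def conformal_killing(eta, h):
    """eta: array of shape (3, n, n, n); returns (3, 3, n, n, n)."""
    # centred first derivatives d[i][j] = partial_i eta^j
    d = np.array([[np.gradient(eta[j], h, axis=i) for j in range(3)]
                  for i in range(3)])
    div = d[0][0] + d[1][1] + d[2][2]
    L = np.empty((3, 3) + eta.shape[1:])
    for i in range(3):
        for j in range(3):
            L[i, j] = d[i][j] + d[j][i]
            if i == j:
                L[i, j] -= (2.0 / 3.0) * div
    return L

# Check: for a conformal Killing vector of flat space, e.g. the rigid
# rotation eta = (-y, x, 0), the operator vanishes identically.
n, h = 16, 1.0 / 15
x, y, z = np.meshgrid(*(np.linspace(0.0, 1.0, n),) * 3, indexing="ij")
eta = np.array([-y, x, np.zeros_like(x)])
print(np.max(np.abs(conformal_killing(eta, h))))  # ~ 0 (round-off)
```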
Numerical method

In this section we describe a numerical implementation of the new fully constrained scheme presented in section 3.3. The equations are discretized using second-order accurate finite differences in space. A collection of the finite difference operators we use can be found in Appendix B.

Similarly to [6], and unlike [4], we use a cell-centred grid to cover the spatial domain [0, r_max] × [0, z_max]: grid points are placed at coordinates r_i = (i - 1/2) ∆r, 1 ≤ i ≤ N_r, where ∆r = r_max/N_r is the grid spacing and N_r is the number of grid points in the r direction. (Corresponding relations hold in the z direction.) Note that no grid points are placed at the boundaries. Ghost points are placed at r_0 = -(1/2) ∆r and at r_{N_r+1} = r_max + (1/2) ∆r. The values at these ghost points are set according to the boundary conditions, as described in the following. Here we only refer to the "physical" grid boundaries at r = 0, z = 0, r = r_max and z = z_max. In the adaptive mesh refinement approach discussed further below, additional finer grids are added that do not cover the entire spatial domain. These finer grids are also surrounded by ghost points. On grid boundaries that do not coincide with a "physical" boundary, ghost point values are interpolated from the coarser grid [25].

The boundary conditions at r = 0 follow from regularity on axis (see [26] for a rigorous discussion): either a Dirichlet or a Neumann condition is imposed depending on whether the variable is an odd or even function of r. All the equations being solved (both the elliptic equations and the evolution equations) are regular on the axis provided that these boundary conditions are satisfied. In addition, we impose reflection symmetry about z = 0, so that the variables are either odd or even functions of z, and this implies Dirichlet or Neumann conditions at z = 0. The r and z parities of all the variables are listed in Appendix B.

At the outer boundaries r = r_max and z = z_max, we impose Dirichlet conditions on the gauge variables, α = 1 and β^A = 0. For the variables u ∈ {ψ, η^A}, we impose the falloff condition

u = u_∞ + C(θ)/R,  (35)

for some function C(θ), where R = √(r² + z²) and θ = arctan(r/z) are spherical polar coordinates and u_∞ is the value of u at spacelike infinity, i.e., ψ_∞ = 1 and η^A_∞ = 0. This boundary condition obviously holds up to terms of O(R^{-2}) for any asymptotically flat solution of the constraint equations. For the "dynamical" fields u ∈ {S, W̃}, we follow [4] and impose a Sommerfeld condition at the outer boundary,

∂_t u = -u_{,R} - u/R.  (36)

This condition is only exact for the scalar wave equation in spherical symmetry. It is however expected to be a reasonable first approximation because S and W̃ obey a wave equation (26) to principal parts, and the elliptic variables will be close to their flat-space values near the outer boundary (α = ψ = 1, β^A = η^A = 0). See Appendix B for details on the discretization at the outer boundary.

The evolution equations are integrated forward in time using the Crank-Nicolson method, which is second-order accurate in time. The resulting implicit equations are solved by an outer Gauss-Seidel relaxation (in red-black ordering) and an inner Newton-Raphson method in order to solve for the vector of unknowns at each grid point. A typical value of the CFL number λ ≡ ∆t/min(∆r, ∆z) we use is λ = 0.5. Fourth-order Kreiss-Oliger dissipation [27] is added to the right-hand sides of the evolution equations, with a typical parameter value of ε = 0.5.
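The parity boundary conditions at the inner boundaries translate into a one-line ghost-point fill on the cell-centred grid. The following sketch shows the idea; the parities are taken from the table in Appendix B and the ghost rule u_0 = ±u_1 described there, while the array layout and variable names are illustrative.

```python
# Ghost-point fill for the cell-centred grid: odd variables get u_0 = -u_1
# (Dirichlet), even variables get u_0 = +u_1 (Neumann) across r = 0 / z = 0.
import numpy as np

# parities from Appendix B: (parity in r, parity in z); +1 even, -1 odd
PARITY = {
    "alpha": (+1, +1), "psi": (+1, +1),
    "beta_r": (-1, +1), "beta_z": (+1, -1),
    "eta_r":  (-1, +1), "eta_z":  (+1, -1),
    "S": (-1, +1), "W": (-1, +1),
}

def fill_inner_ghosts(u, name):
    """u has one ghost layer: u[0, :] sits at r_0 = -dr/2, u[:, 0] at
    z_0 = -dz/2; interior points start at index 1."""
    pr, pz = PARITY[name]
    u[0, :] = pr * u[1, :]   # axis r = 0
    u[:, 0] = pz * u[:, 1]   # reflection plane z = 0
    return u

u = np.random.rand(6, 6)
fill_inner_ghosts(u, "beta_r")
assert np.allclose(u[0, 1:], -u[1, 1:])  # beta^r is odd in r
```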
The elliptic equations are solved using a multigrid method [19]. The Full Approximation Storage (FAS) variant of the method enables us to solve nonlinear equations directly, i.e., we do not apply an outer Newton-Raphson iteration in order to obtain a sequence of linear problems. In the relaxation step of the multigrid algorithm, a nonlinear Gauss-Seidel relaxation (in red-black ordering) is applied directly to the full nonlinear equations. At each grid point, we solve simultaneously for the unknowns η^A, ψ, α, and β^A. Only the Hamiltonian constraint (23) requires the solution of a (scalar) nonlinear equation, and this is done using Newton's method; a single iteration is found to be sufficient. All interior grid points are relaxed, and afterwards the values at the ghost points are filled according to the boundary conditions. In order to transfer the numerical solution between the grids, we use biquadratic interpolation for prolongation and linear averaging for restriction.

For the prolate wave evolved in section 5.3, the elliptic equations become highly anisotropic and the standard pointwise Gauss-Seidel relaxation employed in the multigrid method no longer converges. A common cure for this problem is line relaxation [28]. We solve for the unknowns at all grid points in a line z = const simultaneously. One Newton-Raphson step is applied to treat the nonlinearity, and the resulting tridiagonal linear system is solved using the Thomas algorithm (see the sketch below). Note that this method has the same computational complexity as the pointwise Gauss-Seidel relaxation.

The wide range of length scales in the solutions we are interested in necessitates a position-dependent grid resolution. The classic adaptive mesh refinement (AMR) algorithm by Berger and Oliger [25] was designed for hyperbolic equations; including elliptic equations in this approach is rather complicated. A solution with numerical relativity applications in mind was suggested by Pretorius and Choptuik [29], and we use their algorithm here, with minor modifications due to the fact that our grids are cell centred rather than vertex centred. The key idea of the algorithm is that the solution of the elliptic equations on coarse grids is deferred until all finer grids have reached the same time; meanwhile the elliptic unknowns are linearly extrapolated in time and only the evolution equations are solved. We have found that this approach works well as long as no grid boundaries are placed in the highly nonlinear region. In particular, adaptive generation of finer grids in the course of the evolution causes small but noticeable reflections that, in our experience, make the study of problems such as Brill wave critical collapse unfeasible. For this reason, the evolutions presented in this article use fixed mesh refinement (FMR), i.e., the grid hierarchy is defined at the beginning of the simulation and remains unchanged as time evolves.
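The tridiagonal systems produced by the line relaxation above can be solved in O(n) operations with the Thomas algorithm. A minimal textbook implementation follows (an illustration, not the paper's code).

```python
# Thomas algorithm for a tridiagonal system; a, b, c are the sub-, main
# and super-diagonals, d is the right-hand side.
import numpy as np

def thomas(a, b, c, d):
    """Solve the tridiagonal system in O(n) without pivoting.
    a[0] and c[-1] do not affect the result (standard convention)."""
    n = len(d)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                       # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):              # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Example: the 1-D Poisson line u'' = 1, u(0) = u(1) = 0, 5 interior points.
n, h = 5, 1.0 / 6
a = np.full(n, 1.0); b = np.full(n, -2.0); c = np.full(n, 1.0)
d = np.full(n, h * h)
print(thomas(a, b, c, d))   # samples of u = x(x-1)/2: symmetric, negative
```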
Finally, we briefly discuss how an apparent horizon is found in a t = const slice. The horizon is parametrized as a curve R = f(θ) in spherical polar coordinates. Requiring the expansion of the outgoing null rays emanating from the horizon to vanish yields a second-order ordinary differential equation for f, which is solved using the shooting method. The boundary conditions are f'(0) = f'(π/2) = 0, i.e., the horizon has no cusps on the axes. We follow an idea of Garfinkle and Duncan [6] in order to monitor the approach to apparent horizon formation. For each point on the axis r = 0 ⇔ θ = 0, we find the angle γ (37) at which the curve starting from that point meets the z = 0 ⇔ θ = π/2 axis, and we take the maximum γ_max of this angle over all such curves. Obviously γ_max = π/2 for an apparent horizon, and the deviation from this value indicates how close we are to the formation of an apparent horizon.

Numerical results

As an application of our numerical implementation, we consider vacuum axisymmetric gravitational waves, so-called Brill waves [23]. The initial slice is taken to be time-symmetric, so that W̃ = η^A = 0 initially. We consider the same initial data (38) for the function S as in [16] and [6], where a, σ_r and σ_z are constants. The initial lapse and shift are taken to be α = 1 and β^A = 0. The momentum constraints (22) are trivially satisfied initially and only the Hamiltonian constraint (23) needs to be solved.

Convergence test

In order to check convergence of the code, we first consider a wave with parameters a = σ_r = σ_z = 1. This will disperse rather than collapse to a black hole but is still well in the nonlinear regime. The ADM mass is M_ADM = 0.034. We take the domain size to be r_max = z_max = 40. The FMR hierarchy contains three grids (figure 1). All the grids contain the origin, are successively refined by a factor of 2, and all have the same number of grid points N_r = N_z. We run the simulation with three different resolutions, N_r = N_z ∈ {64, 128, 256}. This enables us to carry out a three-grid convergence test: for each variable u we define a convergence factor

Q_u = ||u_{4h} - u_{2h}|| / ||u_{2h} - u_h||,  (39)

with the indices referring to the grid spacing. The norms are discrete L² norms taken over the subsets of all grids in the FMR hierarchy that do not overlap with finer grids. For a second-order accurate numerical method we expect Q_u = 4. Figure 2 confirms that the code is approximately second-order convergent. (Occasional values Q_u > 4 are not uncommon in similar numerical schemes [4,29].)

As noted earlier, there are additional evolution equations for the variables η^+, η^- and ψ that are not actively evolved in our constrained evolution scheme. We use these to check the accuracy of the numerical implementation in the following way. We keep a set of auxiliary variables η̂^+, η̂^- and ψ̂ which are copied from their unhatted counterparts initially but evolved using the evolution equations (A.3)-(A.5). During the evolution, we form the differences between the two sets. Doing so for two different resolutions (grid spacings h and 2h) allows us to define another convergence factor for each u ∈ {η^+, η^-, ψ} (referred to as residual convergence in the following),

q_u = ||û_{2h} - u_{2h}|| / ||û_h - u_h||.  (40)

The results in figure 3 are again compatible with second-order convergence. We note that the residual convergence test just presented is more severe than the three-grid convergence test in the following sense. For the residuals of the unsolved evolution equations to converge as desired, not only must the numerical solution be second-order convergent, but the constraint and evolution equations and their boundary conditions must be compatible. No exact boundary conditions are known at a finite distance from the source, and compatibility of the boundary conditions we use is only achieved at infinity. We deliberately chose the domain size in this convergence test to be sufficiently large (∼ 10³ M_ADM) so that the effect of the boundary on the convergence factors q_u is small.
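The three-grid convergence factor (39) is straightforward to compute once the numerical solutions are restricted to common grid points. The following sketch demonstrates it on mock 1-D data with a built-in O(h²) error term (an illustrative stand-in for the field data, not the code of the paper).

```python
# Three-grid convergence factor (39) on mock data: the "solutions" are a
# smooth profile plus an O(h^2) error, so Q should come out close to 4.
import numpy as np

def convergence_factor(u_4h, u_2h, u_h):
    """Q = ||u_4h - u_2h|| / ||u_2h - u_h||, evaluated on the coarsest points."""
    d1 = u_4h - u_2h[::2]           # restrict 2h data to the 4h points
    d2 = u_2h - u_h[::2]            # restrict h data to the 2h points
    return np.linalg.norm(d1) / np.linalg.norm(d2[::2])

def second_order_approx(n):
    """Mock 'numerical solution': exact profile plus an O(h^2) error."""
    x = np.linspace(0.0, 1.0, n + 1)
    h = 1.0 / n
    return np.sin(np.pi * x) + h**2 * np.cos(np.pi * x)

u_4h, u_2h, u_h = (second_order_approx(n) for n in (16, 32, 64))
print(convergence_factor(u_4h, u_2h, u_h))   # -> 4.0
```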
As another consistency test, we compute a numerical approximation to the ADM mass. This is evaluated as a surface integral (41) over a sphere of spherical radius R = R_M close to the outer boundary, where n^A is the unit normal in the spherical radial direction; these expressions are valid in linearized theory. We evaluate M_ADM in (41) for two radii R_M ∈ {14, 18} and extrapolate to infinity assuming that M_ADM(R_M) = M_ADM(∞) + const/R_M. The result in figure 3 shows how numerical conservation of the ADM mass improves with increasing resolution.

"Spherical" collapse

Next we consider a wave with σ_r = σ_z = 1 and a = 8.5. We refer to this as "spherical" because σ_r = σ_z, although of course the actual wave is not spherically symmetric. Unlike the one in section 5.1, this wave is super-critical and will collapse to form a black hole. The ADM mass is M_ADM ≈ 2. We run the simulation for two different domain sizes, r_max = z_max ∈ {10, 20}. The FMR hierarchy is of a similar type as in section 5.1. On the smaller domain there are three grids, and for the larger domain we add another coarse grid (figure 1). We run the simulation with two different resolutions, N_r = N_z ∈ {128, 256}. In [6], the same initial data were evolved on a non-uniform grid with spacing ∆r = ∆z = 1.92 × 10⁻² close to the origin. This is comparable to our lower-resolution grid hierarchy, which has grid spacing ∆r = ∆z = 1.95 × 10⁻² on the finest grid.

Figure 4 shows the residual convergence factors defined in (40). The general trend is that the convergence factors are close to 4 at late times but somewhat smaller at early times. Moving the outer boundary further out improves convergence considerably at early times, as can be seen particularly for the variables η^±. This demonstrates the effect of the outer boundary, where imperfect boundary conditions are imposed at a finite distance. Because of the elliptic equations involved in our evolution scheme, inaccuracies in the outer boundary conditions influence the entire domain instantaneously, not only after the outgoing radiation interacts with the boundary, as is the case in a hyperbolic scheme. Moving the outer boundary much further out by adding more coarse grids is not feasible for this evolution because of the computational cost involved in the current single-processor implementation of the code. In particular, the value of the conformal factor ψ far out appears to be very sensitive to the dynamics close to the origin. This is not the case for ψ̂, which is evolved from the initial data by a hyperbolic PDE. As a result, the difference ψ̂ − ψ has a large, spatially nearly constant contribution that is nearly resolution independent, thus causing the convergence factor q_ψ to degrade. At times later than those shown here, the convergence factors ultimately decrease because large gradients develop due to the grid-stretching property of maximal slicing. However, here we are only interested in the part of the evolution until just after the formation of the apparent horizon. Figure 4 also indicates that both increasing the resolution and increasing the boundary radius improve the numerical conservation of the ADM mass. For the larger domain at the higher resolution, the initial oscillations are at the 5% level and after t ≈ 1 the mass remains constant to within 1%.
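The extrapolation of the ADM mass to infinity described in section 5.1 amounts to solving a 2×2 linear system for M_ADM(∞) and the 1/R coefficient. A minimal sketch, with made-up sample masses rather than data from the paper:

```python
# Extrapolation of the ADM mass to infinity: assume M(R) = M_inf + C/R
# and solve for M_inf from two measurement radii (illustrative values).
import numpy as np

def extrapolate_adm(R1, M1, R2, M2):
    """Solve M1 = M_inf + C/R1, M2 = M_inf + C/R2 for M_inf."""
    A = np.array([[1.0, 1.0 / R1],
                  [1.0, 1.0 / R2]])
    M_inf, C = np.linalg.solve(A, np.array([M1, M2]))
    return M_inf

# e.g. masses measured at R_M = 14 and 18 (made-up numbers):
print(extrapolate_adm(14.0, 0.0337, 18.0, 0.0339))  # -> ~0.0346
```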
Next we evaluate the lapse function α at the origin r = z = 0. As a consequence of the singularity avoidance property of maximal slicing, the lapse is expected to collapse towards zero as a strong-gravity region of spacetime is approached. Our result in figure 5 is in good agreement with [6] and appears to be insensitive to the resolution and boundary location. We also plot the Riemann invariant I = R_{abcd} R^{abcd}/16 at the origin. The decay of this quantity after t ≈ 1 agrees roughly with [6], although we find a somewhat different behaviour at earlier times (rather than increasing right from the beginning, I first decreases for a short time). However, there is a rather strong dependence on resolution and outer boundary location in this case, which indicates that the results for I should be interpreted with care.

Finally we search for an apparent horizon. The evolution of the angle γ_max (cf. (37)) is shown in figure 6. It agrees reasonably well with [6], although we find that the horizon forms slightly earlier, at t = 3.6 ± 0.2 rather than at t = 3.9. Also shown in figure 6 is the mass of the horizon, computed from its area via A_AH = 16π M²_AH. When it first forms, the horizon has mass M_AH = 1.85 ± 0.05. The numerically computed ADM mass at this time is M_ADM = 2.04 ± 0.02, so that M_AH/M_ADM = 0.91 ± 0.03, as compared with M_AH/M_ADM = 0.82 reported in [6]. After its formation the horizon expands slightly (its mass increases by about 3%) and appears to ultimately settle down. The results stated here correspond to the run on the larger domain at the higher resolution, and the errors are estimated by comparison with the other runs.

Highly prolate collapse

We now turn to a highly prolate Brill wave with σ_r = 0.128, σ_z = 1.6, and a = 325, which again has M_ADM ≈ 2. This is one of the initial data sets considered in [16] and it was also evolved (until t ≈ 1.5) in [6]. Our spatial domain has size r_max = z_max = 20. The resolution on the base grid is taken to be N_r = 256 and N_z = 64. There are 6 grids in the FMR hierarchy. Again all the grids contain the origin and the grid spacing is successively halved in both dimensions. The number of grid points N_r in the radial direction is the same on all grids, but N_z is successively multiplied by a factor of (approximately) 1.34. In this way the finer grids are better adapted to the prolate shape of the initial data. The grid hierarchy is shown in figure 1. The spacing on the finest grid is ∆r = 2.44 × 10⁻³ and ∆z = 9.77 × 10⁻³. By comparison, the grid used in [6] has ∆r = 9.70 × 10⁻³ and ∆z = 3.74 × 10⁻² close to the origin, roughly four times coarser.

Figure 7 shows the evolution of the lapse function α and the Riemann invariant I at the origin. These agree well with [6], except for the t ≲ 0.5 part of I (but note the strong dependence of this quantity on resolution and outer boundary location apparent from figure 5). The approach to apparent horizon formation is shown in figure 8. We are able to evolve the wave for much longer than the authors of [6], and we confirm their conjecture that an apparent horizon will indeed form. It first appears at t = 3.9 and its shape is remarkably close to a sphere in our coordinates. At its formation the apparent horizon has mass M_AH = 1.990, and at this time the ADM mass has settled down to a value of M_ADM = 2.065, so that M_AH/M_ADM = 0.96. This is in accordance with the Penrose inequality [30], which conjectures that M_AH/M_ADM ≤ 1.
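For completeness, the following skeleton illustrates the shooting method used by the horizon finder of section 4: integrate the horizon-shape ODE from θ = 0 with f'(0) = 0 and root-find on f'(π/2) as a function of f(0). The right-hand side is a hypothetical placeholder, since the actual vanishing-expansion equation depends on the numerical metric data and is not reproduced here.

```python
# Generic shooting-method skeleton for the horizon ODE f'' = F(theta, f, f').
# `expansion_rhs` is a dummy stand-in for the vanishing-expansion condition.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

def expansion_rhs(theta, y):
    """Placeholder right-hand side (theta is unused in this dummy)."""
    f, fp = y
    return [fp, -f]                       # dummy dynamics for illustration

def shoot(f0):
    """Return f'(pi/2) for the trajectory with f(0) = f0, f'(0) = 0.
    Start slightly off theta = 0 to avoid the axis coordinate singularity."""
    sol = solve_ivp(expansion_rhs, (1e-6, np.pi / 2), [f0, 0.0],
                    rtol=1e-10, atol=1e-12)
    return sol.y[1, -1]

print(shoot(1.0))                          # ~ -1 for the dummy dynamics
# With the real expansion ODE one would bracket a sign change and root-find:
# f_star = brentq(shoot, R_lo, R_hi)       # R_lo, R_hi: bracketing guesses
```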
Conclusions

We considered constrained evolution schemes for the Einstein equations in axisymmetric vacuum spacetimes. One of the motivations for this work was to try to understand why the numerical elliptic solvers in some of these schemes, e.g. [4], failed to converge in certain situations. We found that this was related to the elliptic equations becoming indefinite. Apart from the implications for numerical convergence, we also pointed out that such equations might admit nonunique solutions. In section 3.3, we presented a new scheme that does not suffer from this problem. Its main features are a suitable rescaling of the extrinsic curvature with the conformal factor, and the separate solution of the momentum constraints and the isotropic spatial gauge conditions. Thus the scheme involves the solution of six elliptic equations rather than four as in [4]. Given that multigrid methods [19] can be used to solve these equations at linear complexity, this does not imply a severe increase in computational cost.

Our numerical implementation uses second-order accurate finite differences and combines mesh refinement with a multigrid elliptic solver, based on the algorithm of [29]. We work in cylindrical polar coordinates. Unlike in [6], we do not compactify the spatial domain but impose boundary conditions at a finite distance from the origin.

As an application of the code, we evolved Brill waves in section 5. We carried out a careful convergence test in section 5.1 and demonstrated that the code is approximately second-order convergent. For a stronger, super-critical wave (section 5.2), convergence of the residuals of the unsolved evolution equations was somewhat slower at earlier times. Varying the domain size indicated that this is mainly caused by inaccuracies in the outer boundary conditions we use. These errors appear to have little effect on the formation of the apparent horizon. In section 5.3, we evolved a highly prolate Brill wave. Such initial data were conjectured in [16] to form a naked singularity rather than a black hole, which would violate weak cosmic censorship. However, an apparent horizon does form in our evolution. We thus improve on the results of the authors of [6], who could not evolve the wave for a sufficiently long time to see an apparent horizon form, although they conjectured that this would happen eventually.

There are many directions in which this work could be extended. For simplicity we only considered vacuum spacetimes with a hypersurface-orthogonal Killing vector, i.e., vanishing twist. The addition of matter and twist should be straightforward. Care must be taken that the additional variables are rescaled with suitable powers of the conformal factor so that the Hamiltonian constraint (23) remains definite. An elegant framework capable of including twist is provided by the (2 + 1) + 1 formalism [31,32]. From a mathematical point of view, it would be interesting to prove that the Cauchy problem, or even the initial-boundary value problem, is well posed for the present (or a similar) formulation of the axisymmetric Einstein equations. These questions were studied for similar hyperbolic-elliptic systems in [33,34,35].

It is a disadvantage of constrained evolution schemes that inaccuracies in the outer boundary conditions influence the entire domain instantaneously. More work is needed on improved boundary conditions in the context of mixed hyperbolic-elliptic formulations of the Einstein equations. An alternative to an outer boundary at a finite distance would be the compactification towards spatial infinity used in [6]. However, outgoing waves ultimately fail to be resolved on such a compactified grid, which, because of the elliptic equations involved, again has adverse effects on the entire solution.
This problem is avoided if hyperboloidal slices are used, which can be compactified towards future null infinity (see [36] for a related review article). In this case the constraint and evolution equations become formally singular at the boundary, which needs to be addressed in a numerical implementation. On the computational side, the accuracy of the code could be improved by using fourth- (or higher-) order finite differences. Computational speed could be gained by parallelizing the code and running it on multiple processors.

It would be interesting to evolve even more prolate Brill waves than the one considered here. However, the elliptic equations then become more and more anisotropic, and the relaxation method employed in the multigrid method must be modified to ensure convergence. For the wave considered in this paper, line relaxation was found to accomplish this, but we have not been able to achieve convergence for even more prolate configurations. More sophisticated modifications such as operator-based prolongation and restriction [28] are likely to be required. In any case, in order to evolve some of the extremely prolate initial data sets considered in [16], where σ_r/σ_z ≈ 10⁻⁴ in (38), a radically different approach will probably be needed.

Another interesting application of our code would be Brill wave critical collapse. However, preliminary results indicate that we are currently unable to evolve waves close to the critical point for a sufficiently long time, because reflections from the interior AMR grid boundaries become increasingly pronounced as more and more finer grids are added close to the origin. The mesh refinement algorithm of [29] that we adopt appeared to be sufficiently robust in the scalar field evolutions of [9,10], but we suspect that the situation is quite different in vacuum collapse. An improved AMR algorithm for mixed hyperbolic-elliptic systems of PDEs will probably be required.

Appendix B

The r and z parities of the various variables are as follows:

odd in r: β^r, η^r, S, W̃
even in r: α, ψ, β^z, η^z
odd in z: β^z, η^z
even in z: α, ψ, β^r, η^r, S, W̃  (B.2)

The value of a variable u at the ghost point is set to be u_0 = -u_1 if u obeys a Dirichlet condition and u_0 = u_1 if it obeys a Neumann condition. This discretization at the boundary is second-order accurate. Dirichlet conditions at the outer boundary are implemented in a similar way. In order to impose the falloff condition (35), we note that Ru is a linear function of R, and so we linearly extrapolate Ru in the radial direction from the interior grid points to the ghost points in order to find the values of u there. The Sommerfeld condition (36) is rewritten in the form

∂_t u = -(r u_{,r} + z u_{,z})/R - u/R

and discretized at the outer ghost points on the base grid of the AMR hierarchy in order to integrate the values of u there forward in time. Backward differencing is used in the r direction at the r = r_max boundary, and similarly in the z direction.
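A hedged sketch of the boundary update just described follows, assuming a simple forward Euler step in time for the ghost values (the text does not specify which time integrator is used at the boundary); the grid layout and names are illustrative.

```python
# Sketch of the Sommerfeld update (36) at the outer r = r_max boundary,
# with backward differencing normal to the boundary. Forward Euler in time
# is an assumption for illustration.
import numpy as np

def sommerfeld_update(u, dt, dr, dz, r, z):
    """Advance the outermost r-column of u by dt using
    u_t = -(r*u_r + z*u_z)/R - u/R, with one-sided (backward) u_r."""
    R = np.sqrt(r**2 + z**2)
    u_r = (u[-1, :] - u[-2, :]) / dr               # backward in r
    u_z = np.gradient(u[-1, :], dz)                # centred along the boundary
    u_t = -(r * u_r + z * u_z) / R - u[-1, :] / R
    u[-1, :] = u[-1, :] + dt * u_t                 # forward Euler step
    return u

nz = 8
u = np.random.rand(4, nz)                          # radial points x boundary points
z = np.linspace(0.5, 7.5, nz)
sommerfeld_update(u, dt=0.01, dr=0.1, dz=1.0, r=10.0, z=z)
```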
2008-05-29T15:05:34.000Z
2008-02-26T00:00:00.000
{ "year": 2008, "sha1": "58a078d95dcbef5ada9748509545baee626708fd", "oa_license": null, "oa_url": "https://authors.library.caltech.edu/10957/1/RINcqg08.pdf", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "b7b1f0475d71c162f7e8c02d4682667d8f670c23", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
13782780
pes2o/s2orc
v3-fos-license
Systematic Review of Randomized Clinical Trials on Safety and Efficacy of Pharmacological and Nonpharmacological Treatments for Retinitis Pigmentosa

Aims. Several treatments have been proposed to slow down the progression of retinitis pigmentosa (RP), a hereditary retinal degenerative condition leading to severe visual impairment. The aim of this study is to systematically review data from randomized clinical trials (RCTs) evaluating the safety and efficacy of medical interventions for the treatment of RP. Methods. Randomized clinical trials on medical treatments for syndromic and nonsyndromic RP published up to December 2014 were included in the review. Visual acuity, visual field, electroretinogram, and adverse events were used as outcome measures. Results. The 19 RCTs included in this systematic review covered hyperbaric oxygen delivery, topical brimonidine tartrate, vitamins, docosahexaenoic acid, gangliosides, lutein, oral nilvadipine, ciliary neurotrophic factor, and valproic acid. All treatments proved safe but did not show significant benefit on visual function. Long-term supplementation with vitamin A showed a significantly slower rate of decline in electroretinogram amplitude. Conclusions. Although all medical treatments for RP appear safe, the evidence emerging from RCTs is limited, since they do not present comparable results suitable for quantitative statistical analysis. The limited number of RCTs, the poor clinical results, and the heterogeneity among studies negatively influence the strength of recommendations for the long-term management of RP patients.

Introduction

Retinitis pigmentosa (RP) comprises a group of inherited progressive retinal dystrophies, characterized by rod and cone photoreceptor degeneration and progressive loss of peripheral and central vision. Retinitis pigmentosa is often diagnosed in children and young adults and, as the disease progresses and more photoreceptors degenerate, patients experience centripetal visual loss leading to legal and functional blindness [1-3]. Due to the progressive nature of the disease, there is great interest in the development of therapeutic interventions that may halt the evolution of the disease or restore the lost visual function. Currently, the therapeutic approach is restricted to slowing down the degenerative process, treating ocular complications such as cataract and macular edema, and helping patients to cope with the social and psychological impact of blindness [1]. Nonpharmacological interventions are based on strategies of light protection, as evidence indicates that some genetic types of pigmentary retinopathies are partly light-dependent [4]. Hyperbaric oxygen therapy has also been proposed for RP patients in order to promote photoreceptor survival [5,6].

Several medical treatments have been proposed to slow down disease progression. Specifically, the trophic and antioxidant effects of vitamins have been evaluated in RP patients in order to demonstrate a protective action on photoreceptors [1,7,8]. Other nutritional supplements, including docosahexaenoic acid (DHA, an omega-3 fatty acid found in high concentrations in oily fish), lutein, and gangliosides, have been cited as potential therapeutic modalities that may help preserve the visual function of patients with RP.
Among them, DHA is considered important for photoreceptor function because the membranes containing rhodopsin and cone opsins in photoreceptor cells have very high concentrations of this fatty acid, while the protective effect of lutein supplementation has been demonstrated in age-related macular degeneration [9,10]. Other pharmacological treatments have been proposed for RP in small clinical studies, including oral valproic acid, oral nilvadipine, and beta-carotene [11-13]. Lastly, topical brimonidine tartrate 0.2% treatment and intravitreal delivery of ciliary neurotrophic factor (CNTF) have been proposed for their neuroprotective effects observed in animal studies [14,15].

The objective of this study was to systematically review the scientific evidence currently available in the literature, in order to assess the effects of medical interventions for the treatment of patients with RP. All randomized clinical trials evaluating any medical treatment for RP published up to December 2014 were included in the review.

Literature Search. Six observers, divided into three groups of two, independently performed a literature search of all publication years up to December 2014. The articles were identified through a computerized search for clinical trials in the Cochrane Controlled Trial Register (CENTRAL/CCTR) on the Cochrane Library (which contains the Cochrane Eyes and Vision Group trials register), Medline, and Embase. The search strategy was designed to identify randomized clinical trials, as recommended by the Cochrane Collaboration: (a) publication type was clinical trial. Articles were excluded if they did not satisfy one or more inclusion criteria or if they were irretrievable after performing all available search strategies, including requests to authors and editors. The articles' eligibility was initially determined by evaluating the titles, abstracts, and MeSH (medical subject headings). Four observers, divided into two groups of two, examined all 389 retrieved abstracts to consider their eligibility. After matching the decisions of the two groups, 360 abstracts were immediately excluded because they were either not randomized, not on medical treatment for RP, or related to different kinds of ocular disease. The remaining 29 complete articles were obtained and printed to identify whether they were suitable for inclusion in the review, and were distributed to four researchers randomly divided into two groups of two each. The observers were blinded to the names of the authors and institutions, the names of the journals, the sources of funding, and the sponsors of the studies. The observers of each group were also blinded to the decisions of the other group, and trial selection was matched between them. Nine trials were excluded because they did not match one or more inclusion criteria, and one was excluded because it was not eligible [16-25]. All the remaining 19 RCTs were included in the systematic review [5, 6, 8-11, 13-15, 17, 26-34] (Figure 1).

Outcome Measures. Outcome measures included in this review are changes in visual field (Goldmann perimeter and Humphrey visual field analyzer), best corrected visual acuity (BCVA), electroretinogram (ERG) amplitude, contrast sensitivity, dark adaptation, and treatment-related adverse events.

Assessment of Risk of Bias in Included Studies: Validity Assessment.
Two authors independently assessed the included studies for sources of systematic bias according to the guidelines in section 6 of the Cochrane handbook for systematic reviews of interventions. Specifically, the following criteria were considered. All studies included in the review were randomised controlled trials. The main quality attributes were scored as "low risk," "high risk," and "unclear risk" for the following areas: (i) whether or not the randomisation is properly concealed (random sequence generation and allocation concealment), (ii) whether or not the participants are masked (blinding), (iii) whether or not the outcome assessment is masked, (iv) incomplete outcome data, and (v) selective reporting bias. Other biases included the presence of commercial support, potential sources of bias related to the specific study design used, or cases in which we were not sure whether an important risk of bias existed (Figure 2). Data Synthesis. We were unable to conduct meta-analyses on RP treatments because of the clinical heterogeneity observed between studies. Different interventions, different time points for outcome measures, and different instruments and methods of outcome evaluation meant that a summary effect was not estimated. Statistical Analysis. RevMan5 software was used to analyse the data. We tested the heterogeneity between studies using the Chi-square test, with significant heterogeneity (p < 0.05) precluding meta-analysis. Results The 19 studies included in the systematic review dated from 1968 to 2014, all but one of these published in ophthalmic journals. Two trials were performed in Europe, 4 in Asia, and 13 in America. All were randomized clinical trials. Sixteen studies were double-masked or single- (investigator-) masked and three were unmasked (Table 1). RCTs included in this systematic review lasted from 4 months to 10 years, with a mean duration of approximately three and a half years. While in some trials randomization appeared to have been executed properly, that is, an unpredictable sequence of treatment allocation was concealed adequately from people recruiting participants into the trial, the following biases have arisen in the trials included in this review (Figure 2): (i) Sequence generation was not specified or performed in seven studies [5, 6, 13-15, 28, 33]. (iii) Five studies were unmasked [5,6,12,13,33]. DHA Supplementation. DHA supplementation was associated with an acceptable safety profile [30,34]. In a recent study, Hoffman et al. further evaluated the effects of oral DHA (30 mg/kg/day) supplementation versus placebo in a 4-year study in 78 patients with X-linked RP [29]. The loss rate of cone, rod, or maximal ERG function was not different between groups. In the same population, Hughbanks-Wheaton et al. confirmed the safety profile of long term DHA supplementation, with no differences in adverse event rates, antioxidant activity, platelet aggregation, or plasma lipoprotein levels between groups [29,31]. An additional study from Berson et al., in 221 patients with RP, showed that combined supplementation with vitamin A (15000 UI/day) and DHA (1200 mg/day) over a 4-year period did not slow the course of RP when compared to placebo in terms of visual field, visual acuity, and 30 Hz ERG changes [27]. A subsequent subgroup analysis performed by the same authors showed that, in RP patients not taking vitamin A therapy before entering the study, addition of DHA 1200 mg/d slowed the decline of visual field and ERG [26].
Specifically, the mean annual rate of decline of visual field was not significantly different between the placebo + vitamin A and DHA + vitamin A groups (30.26 ± 3.92 dB/year versus 39.41 ± 3.76 dB/year, p = 0.09), while patients not taking vitamin A prior to entry to the study showed significant differences between the placebo + vitamin A and DHA + vitamin A groups (52.5 ± 5.99 dB/year versus 30.7 ± 6.48 dB/year, p = 0.002). Similarly, the percentage of decline per year of 30 Hz ERG amplitude was not significantly different for those taking vitamin A prior to entry (9.23% in the placebo + A and 10.57% in the DHA + A groups), while patients not taking vitamin A prior to entry in the study showed a percentage of decline of 8.05% in the placebo + vitamin A group and of 12.99% in the DHA + vitamin A group, which was significant when comparing rates of decline in both groups of treatment (p = 0.02) [26]. Lutein Supplementation. Two RCTs evaluated the effects of lutein supplementation in patients with RP when compared to placebo [9,10]. Bahrami et al. demonstrated a positive effect of lutein supplementation on preserving visual field in a 6-month study on 34 patients with RP. Specifically, with lutein supplementation the mean retinal area of the central visual field, evaluated by Goldmann perimetry, was 0.018 log units higher (p = 0.038) when compared to placebo [9]. A later study from Berson et al. evaluated 225 patients with RP treated with lutein (12 mg/day) or placebo in a 4-year study. All patients were given 15000 UI/day of vitamin A. This study showed no significant difference in visual acuity, ERG amplitude, and the rate of decline between the lutein plus vitamin A and control plus vitamin A groups for the HFA 30-2 program. For the HFA 60-4 program, a decrease in the mean rate of sensitivity loss was observed in the lutein plus vitamin A group (p = 0.05) [9,10]. Gangliosides Supplementation. One RCT by Newsome et al. demonstrated no significant effects of ganglioside administration when compared to placebo on the progression of RP, evaluated by visual field and ERG, in a 4-month study in 32 patients [32]. Beta-Carotene Supplementation. Oral administration of the 9-cis β-carotene-rich alga Dunaliella bardawil (300 mg/day) was compared to placebo in a 3-month crossover study in 29 patients with RP. This study showed a significant improvement of maximal dark-adapted ERG b-wave amplitude responses when compared to placebo (+8.4 μV versus −5.9 μV, resp., p = 0.001), with 34.5% of patients showing an increase of more than 10 μV for both eyes in the Dunaliella group. Light-adapted single-flash b-wave amplitudes were also significantly improved in the Dunaliella group as compared to placebo (+17.8 μV versus −3 μV, resp., p = 0.01), while no significant changes were observed in visual acuity and visual field assessment [11]. Oral Valproic Acid. In this study, only the evaluators were masked and 30 patients were randomized to valproic acid or no treatment. Visual acuity improved in the treatment group when compared to baseline, and a statistically significant difference with controls was observed after 1 year (1.3 versus 1.83 logMAR, resp.; p ≤ 0.01). Multifocal ERG also showed a significant improvement in the valproic acid group (p ≤ 0.01) [12]. 3.1.7. Brimonidine 0.2% Eye Drops. Topical treatment with brimonidine tartrate 0.2% (Alphagan; Allergan, Irvine, CA) was administered to 17 patients with RP for 24 to 36 months.
At the end of the study there were no significant benefits in visual field, visual acuity, and contrast sensitivity when compared to artificial tears [14]. Ciliary Neurotrophic Factor Intraocular Implants. Encapsulated cell-ciliary neurotrophic factor intraocular implants were applied to one randomly selected eye of 133 patients included in two clinical trials on early-stage (CNTF4) and late-stage (CNTF3) RP. Implants were retained for 12 or 24 months, with 42 patients receiving low dose (5 ng/day) implants and 91 patients receiving high dose implants (20 ng/day). At the end of the study no significant differences in BCVA and ERG amplitude were observed, and the high dose implants induced a significant worsening in Humphrey visual field sensitivity (CNTF3: −98.4) [15]. 3.1.10. Hyperbaric Oxygen Delivery. The effects of HBO therapy were evaluated in two studies by Vingolo et al. The first study reported an improvement of low-noise ERG in 11% and unchanged levels in 89% of patients in the HBO treatment group, while 62% of patients in the control group showed worsening of ERG and 38% remained unchanged, with a significant difference between groups (p < 0.001) [6]. The other study compared HBO to vitamin A treatment and showed that the HBO group had a slower decline in visual function, a higher percentage of visual field stabilization, and an improvement of low-noise ERG b-wave amplitude. However, in this study the dose regimen of the control group (vitamin A group) was not specified, the dropouts were not described, and the ERG instrument was changed after 3 years [5]. Discussion Nineteen RCTs included in this review evaluated the effects of medical treatments on the progression of RP, including vitamins A and E, DHA, lutein, and ganglioside supplementation, HBO delivery, topical brimonidine tartrate treatment, intraocular CNTF release, oral nilvadipine, beta-carotene, and oral valproic acid. Among the treatments evaluated in this systematic review, long term vitamin A supplementation showed a good short and long term safety profile but poor clinical results, the only significant effect being a slower rate of decline of ERG amplitude [28,33,35]. All the RCTs evaluating the efficacy and safety of DHA treatment compared to placebo or to vitamin A failed to demonstrate a beneficial effect of DHA supplementation, despite the good safety profile [26, 27, 29-31, 36]. The only beneficial effect was described by Berson et al. in a subgroup analysis, showing that patients with RP beginning vitamin A therapy plus DHA showed a better clinical outcome of the disease at 2 years [27]. Although a clear benefit on outcomes that would be clinically relevant for patients, such as visual acuity or visual field amplitude, has not been achieved, based on the results of all these studies, the evidence supports the supplementation of adults with early or middle stages of RP with 15000 IU of oral vitamin A palmitate every day, while supplementation with high dose vitamin E should be avoided [1,8,33,35]. The encouraging results on the use of lutein supplementation in patients with RP obtained by Bahrami et al. were not confirmed by the subsequent RCT by Berson et al., which failed to demonstrate beneficial effects of lutein supplementation in terms of visual acuity and visual field [9,10].
Other encouraging results come from small studies on 9-cis β-carotene-rich alga Dunaliella bardawil supplementation, oral valproic acid, and oral nilvadipine, which were shown to slow the decline of clinical outcomes in patients with RP [11][12][13]. However, these data should be confirmed by further larger, double-masked, controlled studies. The other medical treatments, including ganglioside supplementation, topical brimonidine tartrate 0.2%, and intravitreal CNTF delivery, showed no beneficial effects on RP progression and clinical outcomes [14,15,32]. The latter even induced a worsening in visual field sensitivity when administered at higher doses [15]. Two RCTs by Vingolo et al. showed a significant improvement of low-noise ERG in patients with RP treated with HBO as compared to an untreated control group [6]. In their later study, a slower progression in the HBO group was also reported [5]. However, the conclusions of the authors are not definitive, as they are not supported by the data due to inappropriate statistical analysis and poor study design. Nevertheless, we were unable to draw any conclusions at this time regarding the applicability or otherwise of the currently available medical interventions for RP because of the general paucity of evidence from the limited number of RCTs available and because of the differences between studies (e.g., different instruments and different outcome measures) that made it impossible to compare statistical data from different studies. Outcome assessment was mostly performed by visual acuity, visual field, or ERG evaluations; however, different instruments or different outcome measures were used for the same variable in different studies. For example, visual field was a common outcome assessed by Goldmann perimetry or Humphrey visual field; however, while most of the studies using Goldmann perimetry evaluated the visual field area, the studies using HFA evaluated the mean deviation or the total point score from different programs such as 10-2, 30-2, or 60-4. More standardized outcome measures to be used in RCT evaluation in RP patients should be assessed in the future, to make results reproducible and comparable. Owing to different formats of outcome reporting (e.g., mean rate of decline versus mean observed changes and continuous versus dichotomous data), different kinds of effect sizes had to be used, rendering both pooling and comparing of findings difficult. Further RCTs with more standardized outcome measures and reporting are needed to definitively demonstrate the efficacy of the proposed therapeutic interventions, and novel therapeutic agents aimed at improving visual function in patients with RP are highly sought after. Disclosure. Flavio Mantelli is an employee of Dompé US.
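As a concrete aside on the Statistical Analysis section above: the between-study chi-square heterogeneity test that precluded meta-analysis is, in its standard form, Cochran's Q. The following is a minimal scipy-based sketch with hypothetical effect estimates; it is an illustration of the test, not the RevMan5 computation the authors actually used.

```python
import numpy as np
from scipy import stats

def cochran_q(effects, variances):
    """Cochran's Q heterogeneity test over per-study effect estimates.

    effects   : per-study effect sizes (hypothetical numbers here)
    variances : squared standard errors of those effects
    Returns (Q, p); p < 0.05 indicates heterogeneity that would
    preclude pooling the studies in a meta-analysis.
    """
    effects = np.asarray(effects, dtype=float)
    weights = 1.0 / np.asarray(variances, dtype=float)    # inverse-variance weights
    pooled = np.sum(weights * effects) / np.sum(weights)  # fixed-effect pooled estimate
    q = np.sum(weights * (effects - pooled) ** 2)
    dof = len(effects) - 1
    p = stats.chi2.sf(q, dof)  # Q ~ chi-square with k-1 df under homogeneity
    return q, p

# Hypothetical example: three studies with incompatible effects
q, p = cochran_q([0.8, -0.1, 1.9], [0.04, 0.09, 0.16])
print(f"Q = {q:.2f}, p = {p:.4f}")  # small p -> heterogeneity, skip pooling
```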
2016-05-12T22:15:10.714Z
2015-08-03T00:00:00.000
{ "year": 2015, "sha1": "eae671b60b2ee14be7a2c3da67f0db71da4c3b1b", "oa_license": "CCBY", "oa_url": "http://downloads.hindawi.com/journals/joph/2015/737053.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "eae671b60b2ee14be7a2c3da67f0db71da4c3b1b", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
246883055
pes2o/s2orc
v3-fos-license
Integration of vegetation classification with land cover mapping: lessons from regional mapping efforts in the Americas Aims: Natural resource management and biodiversity conservation rely on inventories of vegetation that span multiple management or political jurisdictions. However, while remote sensing data and analytical tools have enabled production of maps at increasing spatial resolution and reliability, there are limited examples where national or continental-scaled maps are produced to represent vegetation at high thematic detail. We illustrate two examples that have bridged the gap between traditional land cover mapping and modern vegetation classification. Study area: Our two case studies include national (USA) and continental (North and South America) vegetation and land cover mapping. These studies span conditions from subpolar to tropical latitudes of the Americas. Methods: Both case studies used a supervised modeling approach with the International Vegetation Classification (IVC) to produce maps that provide for greater thematic detail. Georeferenced locations for these vegetation types are used by machine learning algorithms to train a predictive model and generate a distribution map. Results: The USA LANDFIRE (Landscape Fire and Resource Management Planning Tools Project) case study illustrates how a history of vegetation-based classification and availability of key inputs can come together to generate standard map products covering more than 9.8 million km2 that are unsurpassed anywhere in the world in terms of spatial and thematic resolution. That being said, it also remains clear that mapping at the thematic resolution of the IVC Group and finer requires very large and spatially balanced inputs of georeferenced samples. Even with extensive prior data collection efforts, these remain a key limitation. The NatureServe effort for the Americas, encompassing 22% of the global land surface, demonstrates methods and outputs suitable for worldwide application at continental scales. Conclusions: Continued collection of input data used in the case studies could enable mapping at these spatial and thematic resolutions around the globe. Abbreviations: CART = Classification and Regression Tree; CONUS = Conterminous United States; DSWE = Dynamic Surface Water Extent; EPA = United States Environmental Protection Agency; FGDC = Federal Geographic Data Committee; IVC = International Vegetation Classification; LANDFIRE = Landscape Fire and Resource Management Planning Tools Project; LFRDB = LANDFIRE Reference Database; LiDAR = Light Detection and Ranging; NDVI = Normalized Difference Vegetation Index; NLCD = National Land Cover Database; USNVC = United States National Vegetation Classification; USA = United States of America; WWF = World Wildlife Fund or Worldwide Fund for Nature. Introduction Natural resource management and biodiversity conservation often rely on inventories of vegetation that span multiple management or political jurisdictions. However, while remote sensing data and analytical tools have enabled production of maps at increasing spatial resolution and accuracy, there are limited examples where maps are produced at large national or continental scale to represent vegetation at high thematic detail. For example, the U.S. National Land Cover Database (NLCD) (Wickham et al. 2014) depicts land cover at 30 m pixel resolution, and is periodically repeated, enabling detection of major trends in land use of relevance to a broad range of resource management decisions.
However, these and similar land cover products around the world depict relatively few distinct map classes (Wickham et al. 2021). As vegetation classification has advanced, the potential to map far more ecologically distinct map classes presents important opportunities to address pressing societal needs (Lavrinenko 2020). Regional map products of increasingly high thematic and spatial resolution have proven essential for successfully mapping species ranges (Aycrigg et al. 2010), assessing ecosystem representation within protected areas (Pliscoff and Fuentes-Castillo 2011), and in systematic, place-based conservation planning (Groves and Game 2016). They are needed for documenting relative risk of ecosystem collapse, as can be assessed through the International Union for Conservation of Nature (IUCN) Red List of Ecosystems framework (Keith et al. 2013). Additionally, depiction of vegetation structure and composition at moderate to finer levels of thematic detail enables assessment of dynamic processes, such as wildfire regimes (Rollins 2009). We will review two case studies of both national and continental land cover mapping that have bridged the gap between traditional land cover mapping and modern vegetation classification. We trace recent development of terrestrial ecological classification in the Americas with specific reference to land cover mapping applications at regional to national and continental scales. This experience assisted in development of the International Vegetation Classification (IVC) (Faber-Langendoen et al. 2018). In turn, that classification hierarchy has been utilized directly in the mapping process. The two case studies span conditions from subpolar to tropical latitudes of the Americas. One, limited to the United States, aimed to map the current distribution of the Group level of the IVC hierarchy (Table 1). The second aimed to map potential/historical distributions at the Macrogroup level of the IVC hierarchy (Table 1) across temperate and tropical North America and all of South America. Our first case study reviews the experience of the U.S. Landscape Fire and Resource Management Planning Tools Project (LANDFIRE) that, since the mid-2000s, has produced a series of moderate-to-high resolution national map products to facilitate strategic decision support to both wildfire and wildlife habitat managers. Resultant map layers describe vegetation composition and structure and can be compared to expected conditions to indicate alteration to expected natural wildfire regimes (Rollins 2009). They can also be readily associated with wildlife species where habitat requirements are developed by the U.S. Gap Analysis Program (Gergely and McKerrow 2013). Use of vegetation classification hierarchy, systematic treatment of the "natural-to-cultural" land cover continuum, handling of field observations, and spatial modeling with remotely sensed data all contribute to advanced LANDFIRE map products. Our second case study describes continental-scale mapping that encompasses temperate and tropical latitudes of North America and all of South America. Mapping methods such as those used by LANDFIRE were adapted to the hemisphere to provide a more thematically detailed view than had been previously attained. The intent of this effort was to support conservation status assessment of ecosystem types occurring within and across national borders (Comer et al. 2020).
Both case studies used a supervised modeling approach, which includes a priori classification of vegetation types as the basis for mapping (Cihlar 2000; De Cáceres and Wiser 2012). That is, each began with a set of classification concepts where vegetation types are known and described. Georeferenced locations for these types are used by machine learning algorithms to train a predictive model and generate a map of their distribution (Muchoney et al. 2000). This approach was well suited to these efforts because development of a predictive distribution of described natural ecosystem types prior to intensive human intervention was needed for both case studies. Many terrestrial ecosystems across temperate North America support natural wildfire regimes of varying frequency and intensity. The multi-agency LANDFIRE effort was established in 2001 to produce a series of moderate-to-high resolution map products, along with state-and-transition models, to characterize expected and actual vegetation condition with regards to natural disturbances like wildfire. All products of the effort are intended to facilitate strategic decisions by both wildfire and wildlife habitat managers. Beginning with a vegetation-based classification standard, conceptual and quantitative state-and-transition models describe expected succession and disturbance pathways, as well as characteristic fuels, for a given vegetation type. Spatial models, called biophysical settings, aim to depict the likely historical location of each type, given natural disturbance regimes. These models were based on the terrestrial ecological systems classification developed by NatureServe. Then, current land cover map products aim to depict the location of each natural vegetation type, cultural land cover, vegetation canopy closure, canopy height, and recent disturbances. The ecological systems classification was developed in the early 2000s in part to address deficiencies in the U.S. National Vegetation Classification Standard, as it existed at the time (Federal Geographic Data Committee [FGDC] 1997). Specifically, the ecological systems classification established classification units that integrated geophysical characteristics with natural disturbance regimes to describe recurring assemblages of plant communities (Schulz 2007), with 638 units currently described for the conterminous USA. The national application of that classification led to substantial revisions to the U.S. National Vegetation Classification (USNVC), following the newly established IVC framework (Faber-Langendoen et al. 2014). This included additional hierarchical levels, making the USNVC more usable for mapping applications. In addition to using the ecological systems classification, the second major national remap effort by LANDFIRE adopted use of the USNVC Group-level concepts (see below) for mapping existing vegetation and land cover. In this case study, we will focus on this aspect of the LANDFIRE products. Mapping units The LANDFIRE map legend for existing vegetation and land cover encompasses the continuum from natural to ruderal and cultural vegetation types. The hierarchical structure of the USNVC includes broad units at upper levels defined by vegetation physiognomy, followed by progressively narrower units at lower levels defined by vegetation floristic composition (Federal Geographic Data Committee [FGDC] 2008). The full spectrum, from "natural" to "cultural" vegetation types, is encompassed by the USNVC, but here we will reference use of "natural" to "ruderal" units.
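Whatever the exact legend, the supervised workflow described above (label georeferenced samples to legend units, then let a machine learning algorithm relate predictor layers to those labels) is mechanically simple. Below is a minimal sketch in which a single scikit-learn classification tree stands in for production tools such as See5; all predictor values and class names are hypothetical.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Hypothetical training samples: one row per georeferenced plot; columns are
# predictor-layer values sampled at the plot location (e.g., NDVI, elevation,
# landform code, summer reflectance). Labels are map-legend classes.
X_train = np.array([
    [0.71,  320.0, 2, 0.18],
    [0.42, 1480.0, 7, 0.25],
    [0.15, 2900.0, 9, 0.31],
    [0.68,  350.0, 2, 0.17],
])
y_train = ["Tallgrass Prairie", "Ponderosa Pine Woodland",
           "Alpine Tundra", "Tallgrass Prairie"]

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)

# Applying the trained model to every pixel's predictor stack yields the map.
pixels = np.array([[0.70, 340.0, 2, 0.18], [0.16, 2850.0, 9, 0.30]])
print(tree.predict(pixels))  # expected: ['Tallgrass Prairie' 'Alpine Tundra']
```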
Table 1 provides an example of the USNVC hierarchy, with defining characteristics. Here, tallgrass prairie types have been well described at all levels of the hierarchy down to the association level, where multiple dominant and diagnostic species are used to define a given type. Over 6,000 associations describe natural vegetation types within the conterminous United States (CONUS). While this level of thematic detail is not currently feasible to map beyond relatively local scales, Group and Alliance levels are increasingly feasible to target in regional and national map legends. Within the CONUS, LANDFIRE mapped nearly 300 natural USNVC Group concepts. Descriptions of this classification hierarchy and these units may be found at the USNVC website (http://USNVC.org/) and NatureServe Explorer (https://explorer.natureserve.org/). While the USNVC Group level provided a useful classification of natural vegetation units for the map legend, additional map legend categories were used to provide a robust map product. First, the revised USNVC includes "ruderal" units that are defined as including plant assemblages with no natural analog. These commonly result from prior land conversion and subsequent abandonment, so they encompass what are often referred to as "old fields" and secondary forests, where exotic species and/or native species are present in abundances not found where prior human influence is less discernable. Several ruderal vegetation units, approximating the Group level, were documented for use in LANDFIRE map products. Second, a series of map class modifiers were used to facilitate mapping structural variants within each USNVC Group-based map class, informed by the National Land Cover Database (NLCD; Homer et al. 2015). For example, where "evergreen," "deciduous," and "mixed" variants of a given forest type were discernable, they were mapped separately. Additionally, where "forest" vs. "shrubland" or "herbaceous" structural stages in forest succession occurred in discernable pattern, they were also differentiated in the map legend. Training samples Modern mapping methods include use of georeferenced sample locations - each labeled to the intended map units they represent - to train models that will combine predictor layers to generate a vegetation map. Due to the very large number of georeferenced samples needed for national land cover mapping at thematic levels like the USNVC Group, LANDFIRE produced algorithmic tools called "autokeys" for processing vegetation sample plot data for subsequent modeling and mapping. The autokey algorithm scans the content of each sample plot to detect species presence and abundance, as well as structural categories, to determine to which map legend class the sample belongs. It then applies the appropriate label for use in subsequent modeling steps. Autokeys were designed and implemented within regions determined by clustering ecologically similar ecoregions modified from US Forest Service (Cleland et al. 2007) and US Environmental Protection Agency (EPA) (Omernik and Griffith 2014) sources. In the CONUS project area, 278 USNVC Groups were processed for use in the LANDFIRE legend. Each autokey pertains to one of 17 regions, including the southern tip of Florida, which was treated along with adjacent Caribbean islands. Expert ecologists reviewed and hand labeled nearly 18,000 samples to assess autokey performance.
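A toy version of the autokey idea, condensed to a single function: it inspects a plot's recorded species cover and structure and returns a map-legend label. Real LANDFIRE autokeys encode far more regional rules; the species names and thresholds below are hypothetical.

```python
# Toy "autokey": assign a sample plot to a map-legend class from its recorded
# species cover and structure. All names and thresholds are hypothetical.
def autokey(plot):
    cover = plot["species_cover"]          # {species name: percent cover}
    total = sum(cover.values())
    if total < 10:
        return "Barren or Sparsely Vegetated"
    if plot["tree_cover"] >= 10:           # tree lifeform takes precedence
        if cover.get("Pinus ponderosa", 0) >= 25:
            return "Ponderosa Pine Forest & Woodland"
        return "Undetermined Forest Group"
    # herbaceous rule: combined cover of diagnostic tallgrass species
    if cover.get("Andropogon gerardii", 0) + cover.get("Sorghastrum nutans", 0) >= 25:
        return "Tallgrass Prairie Group"
    return "Undetermined Herbaceous Group"

plot = {"tree_cover": 0,
        "species_cover": {"Andropogon gerardii": 30, "Sorghastrum nutans": 15}}
print(autokey(plot))  # -> "Tallgrass Prairie Group"
```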
The total number of samples labeled by autokeys varied by region, from a high of 80,148 in the Rocky Mountains to 3,517 in the North Coast region. For most regions, the proportion of plots used in assessment that were reviewed by experts was 4% to 8%. Validation statistics for each map legend category were used throughout the development and final evaluation of each autokey. The overall validation statistic is a useful measure of how well each autokey performed across all types: it is the number of matches between expert and autokey labels, divided by the total number of expert plots, multiplied by 100, and it was calculated for each autokey. Overall agreement for the USNVC Group keys ranged from a high of 90% (Texas-Oklahoma Hill Prairie) to a low of 39.9% (Coastal Plain). In most cases, lower performance occurred where substantial proportions of the landscape are dominated by ruderal vegetation, and distinguishing among very similar vegetation types using sample plots becomes more challenging. Although over 500,000 vegetation samples were labeled through autokeys, there were still hundreds of thousands of samples with insufficient quantitative information to run through the autokeys. These often included documented locations from local natural resource inventories where an existing classification was used to label the location without including vegetation composition and structure. A series of classification crosswalks were used to reconcile these differences and label samples to the intended unit on the LANDFIRE map legend. In the 2016 LANDFIRE map, over one million samples were processed either by autokeys or through expert labeling across the CONUS. Modeling process and resulting map A key factor in ecosystem modeling is determining the boundaries within which to build and apply models for existing vegetation types. Initial prototyping found that Omernik Level III Ecoregions (Omernik and Griffith 2014) provided a more ecologically meaningful framework for modeling existing vegetation types than map zones used in previous LANDFIRE mapping efforts (Picotte et al. 2019). The ecoregions were grouped to create 13 vegetation production units (Figure 1) across the CONUS that were of a manageable size for efficiently preparing satellite imagery and georeferenced sample inputs. These were similar to, but not identical to, the ecoregion clusters used for design and implementation of autokeys. The modeling process began by removing sample plots in recently disturbed areas collected prior to the disturbance, using LANDFIRE annual disturbance products. Spectral outliers were identified by summing Landsat bands one through six for each class and sample plot; plots greater than two standard deviations from the class mean were removed. The resultant filtered plots were used to model lifeform and vegetation structure. The spectral test was performed separately for vegetation types prior to withholding samples for map validation. Vegetation structure is important for fire behavior fuel models. Therefore, existing vegetation products were designed to nest by lifeform. For example, pixels identified as tree in the lifeform mask will be assigned a tree cover, height, and vegetation type. The lifeform modeling process began with an initial output using the filtered sample plots. The initial lifeform model output was improved through an iterative process by adding expert-labeled training samples based on desktop review of aerial photos to correct obvious mapping errors.
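Two of the quantitative steps just described, the overall validation statistic and the spectral outlier screen, each reduce to a few lines; a minimal numpy sketch with hypothetical data:

```python
import numpy as np

def overall_validation(expert_labels, autokey_labels):
    """Overall validation statistic as defined in the text: matches between
    expert and autokey labels, divided by total expert plots, times 100."""
    matches = sum(e == a for e, a in zip(expert_labels, autokey_labels))
    return 100.0 * matches / len(expert_labels)

def filter_spectral_outliers(band_values, class_labels, n_sd=2.0):
    """Keep plots whose summed Landsat band 1-6 values lie within n_sd
    standard deviations of their class mean, mirroring the screen above."""
    sums = np.asarray(band_values).sum(axis=1)   # one summed value per plot
    labels = np.asarray(class_labels)
    keep = np.ones(len(sums), dtype=bool)
    for cls in np.unique(labels):
        idx = labels == cls
        mu, sd = sums[idx].mean(), sums[idx].std()
        keep[idx] = np.abs(sums[idx] - mu) <= n_sd * sd
    return keep

# Hypothetical data: 5 plots x 6 bands, two legend classes
bands = np.random.default_rng(0).uniform(0.0, 0.6, size=(5, 6))
print(filter_spectral_outliers(bands, ["A", "A", "A", "B", "B"]))
print(overall_validation(["A", "B", "A"], ["A", "B", "B"]))  # -> ~66.7
```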
Using USNVC Group concepts as a guide, sample plots were separated into three lifeforms: tree, shrub, and herbaceous vegetation types, as well as barren or sparsely vegetated types (<10% total cover). Plots were further separated into wetland vs non-wetland categories, and alpine vs non-alpine categories where they existed. Classification tree models were generated with the See5 algorithm using raster predictor variables (Table 2). Models were generated within individual ecoregions to produce categorical outputs for each lifeform layer. The layers were then combined using the lifeform mask, wetland mask, barren/sparse mask, and alpine mask, created with a separate modeling process, to restrict the mapping of certain vegetation to appropriate locations where applicable. After modeling, vegetation types that comprised a mix of evergreen and deciduous dominant/co-dominant species to varying degrees were separated using the NLCD (Homer et al. 2015) categories to further refine the map and aid in assessing fire behavior fuel models. An overview of the modeling process is shown in Figure 2. The draft thematic map was edited using rulesets based on geography or topography, or manual pixel reclassification with hand-drawn polygons based on expert opinion and review. Draft maps were also revised by removing problematic plots identified during the modeling process, reclassifying plots to a better fit, or adding sample plots based on expert opinion to correct modeling errors and improve mapping of problematic classes. Draft maps were reviewed by regional experts with NatureServe and staff from state agencies. Wildland Urban Interface maps produced by the Forest Service (Radeloff et al. 2017) were used in combination with rulesets based on existing vegetation to identify ruderal vegetation in proximity to developed areas. Recently disturbed areas (within the previous 10 years) were identified using LANDFIRE disturbance products to appropriately label transitional vegetation. For example, it was more appropriate to label regenerating clearcuts in the Pacific Northwest as recently disturbed herbaceous cover rather than native montane grassland. Vegetation percent cover and height training samples were derived from field estimates of vegetation cover and height. In addition, tree canopy percent cover estimates were calculated from the percentage of the Light Detection and Ranging (LiDAR) point cloud above 3 m, and tree height estimates were derived from the 90th percentile of LiDAR returns. Sample plots were separated into three lifeforms: tree, shrub, and herbaceous cover and height. Regression tree models predicting percent cover and height of dominant vegetation were generated with the Cubist algorithm using the following predictor layers identified in Table 2: seasonal Landsat imagery, tasseled cap, and NDVI 5-year statistics. Models were generated within a vegetation production unit to predict continuous outputs for each lifeform layer. The layers were then combined using the lifeform mask. Recently disturbed areas (within the previous 3 years) did not model well because the satellite image composites spanned multiple years and comprised a mixture of pre- and post-disturbance pixels. Disturbance severity and timing rulesets based on LANDFIRE annual disturbance products were used to assign lifeform and estimated cover.
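The LiDAR-derived structure targets mentioned above are also easy to make concrete; a minimal sketch, with a hypothetical point cloud of heights above ground:

```python
import numpy as np

def lidar_canopy_metrics(return_heights, canopy_cutoff=3.0):
    """Training targets as described in the text: percent canopy cover is the
    share of LiDAR returns above 3 m; canopy height is the 90th percentile
    of return heights. Heights are above-ground, in meters."""
    h = np.asarray(return_heights, dtype=float)
    cover_pct = 100.0 * np.mean(h > canopy_cutoff)
    height_m = np.percentile(h, 90)
    return cover_pct, height_m

# Hypothetical point cloud for one plot
heights = [0.1, 0.3, 4.2, 7.8, 12.5, 15.1, 0.0, 9.9]
print(lidar_canopy_metrics(heights))  # -> (62.5, ~13.3)
```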
Several masks were developed to identify open water, barren land, sparse vegetation, developed, and agricultural lands. Open water was identified using custom modeling methods based on the Landsat Level 3 Dynamic Surface Water Extent (DSWE) Science Product from the U.S. Geological Survey. Fragmented segments along streams and rivers were connected using National Hydrography Dataset (USGS 2016a) flowlines to form a continuous network. Barren land (0% vegetated cover) and sparse vegetation (<9% total cover) were identified using NDVI 5-year median thresholds and calibrated by location based on known barren and sparsely vegetated areas. Developed land and snow/ice were identified using NLCD. Ruderal vegetation classes were modeled within NLCD Class 21 (Developed - Open Space) in order to assign the appropriate fire behavior fuel models. Agricultural land was identified using the Cropland Data Layer (USDA 2015) and summarized by Common Land Unit polygons (USDA 2006) using a zonal majority. The mapping process for existing vegetation based on the USNVC Group concepts generated a 30 m pixel resolution map raster with 499 natural, ruderal, and cultural map classes (Figure 3). The majority of these map classes are reflected in the more than 300 USNVC Groups. In addition to these, map class modifiers distinguish structural variants within each natural USNVC Group. Then a series of map classes reflect USNVC Groups for ruderal vegetation plus cultural land cover derived from other sources as described above. Evaluation of LANDFIRE map The LANDFIRE Program implemented a map assessment approach that utilized existing information because no resources were available to collect additional data. The goal was to provide assessment results in tandem with data delivery. The assessment sampling strategy for the LANDFIRE 2016 Remap randomly withdrew 10% of the available plots for each type in the terrestrial ecological systems classification developed by NatureServe within a vegetation production unit, with two caveats. First, no more than 300 plots were withdrawn for any ecological system type. Second, no plots were withdrawn for an ecological system type within an individual production unit if fewer than 30 plots would be available for the assessment, in an effort to maximize training data for modeling categories with few plots. This same set of withdrawn plots was used for the USNVC assessment, although the plot labels were assigned using independent methods and therefore did not always represent conceptually equivalent types. Withdrawn plots were never used in the modeling process, so they represent an independent assessment sample. Confusion tables were created for each of the six LANDFIRE GeoArea delivery packages across CONUS by cross-tabulating the autokey USNVC Group assignment for each assessment plot against the LANDFIRE USNVC Group assignment for map pixels at the plot location. Category-agreement tables were then generated from each GeoArea contingency table. No stratification, spatial buffering, or category weighting was used (Table 3). A summary report was also generated for each GeoArea to provide an initial indication of map performance (available from https://www.landfire.gov/remape-vt_assessment.php). The assessment sample was based on plots previously available to the program, so the sample size and distribution reflected the overall plot numbers and categorical distribution present in any GeoArea. Across the continent the assessment sample was not sufficient for most of the mapped categories.
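The cross-tabulation behind these assessments is straightforward to reproduce; a pandas sketch with hypothetical plot labels:

```python
import pandas as pd

# Hypothetical withheld assessment plots: the autokey label of each plot
# versus the LANDFIRE map label of the pixel at the plot location.
plots = pd.DataFrame({
    "autokey": ["Ponderosa Pine", "Ponderosa Pine", "Mixed Conifer",
                "Tallgrass Prairie", "Mixed Conifer"],
    "map":     ["Ponderosa Pine", "Mixed Conifer", "Mixed Conifer",
                "Tallgrass Prairie", "Ponderosa Pine"],
})

# Confusion (contingency) table: autokey assignment x map assignment
confusion = pd.crosstab(plots["autokey"], plots["map"])
print(confusion)

# Per-category agreement (%) for each assessed group
agreement = (100.0 * (plots["autokey"] == plots["map"])).groupby(plots["autokey"]).mean()
print(agreement)
```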
While the agreement results were not high, there was variation in the results across GeoAreas and across categories within each GeoArea. No consistent error patterns were identified, although there is some indication that forest types tend to have lower error rates than shrub and herbaceous types. The opportunities for comparing category error rates across GeoAreas are limited by the sample sizes. For example, Southern Rocky Mountain Ponderosa Pine was mapped in both the Southwest and Northwest GeoAreas, but the limited extent in the Northwest GeoArea resulted in too small an assessment set for this GeoArea. Results were not specifically linked to the number of categories assessed. For example, while the Southeast GeoArea had the lowest number of assessed categories and the lowest percentage of assessed categories with more than 70% agreement, the North Central had the lowest percentage of assessed categories with more than 50% agreement. Readers should note that these are absolute errors. If the plot assignment did not match the map assignment exactly, it was designated as an error, so errors between floristically similar groups are counted the same as errors between floristically dissimilar groups. Users can review results for USNVC Groups of specific interest to fully understand the results of the assessment analysis. To understand the results and ramifications for the USNVC Group-based map, a small portion of the Category Agreement Table for the Northwest GeoArea is presented in Table 4. One row represents higher agreement and one row lower agreement. These types can occur immediately adjacent to each other across the western landscapes where they are found, and share substantial floristic composition, while the Dry-Mesic Spruce - Fir Forest & Woodland is much less similar, so those errors may be more substantial depending on the application. This type of variation in agreement results was common across the GeoAreas. Map users should also note that, in addition to the issues with sample size and distribution, these results do not indicate the scope of misclassifications, e.g., how much area within a GeoArea had agreement greater than or less than 50% or 70%. Case study 2 - Continental Americas (NatureServe) Accelerating landscape change threatens biodiversity worldwide, so documented trends in the extent of ecosystems provide a foundation that can be used for conservation action. However, a comprehensive ecosystem classification of sufficient thematic detail to support these types of analyses has been lacking across the Americas. While a number of ecosystem classification maps exist at regional (Sano et al. 2010), continental (Stone et al. 1994; Eva et al. 2004), and global (Sayre et al. 2020) extents, nearly all utilize thematic classifications with a limited number of land unit descriptors that do not differentiate floristic composition among types. This second case study includes a project area of approximately 32.6 million km2, or nearly 22% of the global land surface, excluding the Boreal and Arctic regions of North America. The aim was to produce both "potential" and "current" distribution maps for major terrestrial ecosystem types that would be suitable for continental-scale assessment and planning, and also include units suitable for on-the-ground conservation action.
The "potential distribution" includes biophysical conditions where each type might occur today had there not been any prior intensive human intervention. "Current distribution" then accounts for those areas of intensive intervention and conversion, as of approximately 2010. For this effort an effective minimum map unit size, or mapped pixel resolution, ranged from 270 m to 450 m. Mapping across the hemisphere brings challenges of working with a high diversity of vegetation types and uneven availability of modeling inputs. Different modeling approaches and lower levels of thematic and spatial resolution in map products may be useful. Building from experience in the United States, we developed new spatial models of potential distributions of vegetation Macrogroups as defined by the International Vegetation Classification or IVC (Faber-Langendoen et al. 2014) ( Table 1). We then combined current land use classes, derived from globally-available land use maps, with potential distribution maps of natural types to estimate their current extent. Similar modeling methods previously applied in Africa were adapted for this effort in the Americas. Analytically, RandomForest (Gislason et al. 2006) classification and regression trees (CART) (Breiman et al. 1984) were used to identify relationships of predictor layers for combinations of map surfaces relative to the location of georeferenced samples for each target class from the desired map legend (Hansen et al. 1996, De'ath andFabricus 2000). A combination of ArcGIS (10.1), ER-DAS Imagine, and the data mining tool See 5 (Rulequest Research 2012) was used to develop models representing vegetation type distributions. Table 5 provides a summary of map inputs, including existing map sources for potential distribution modeling. Here we emphasize project components outside the USA. Existing national and regional maps, along with georeferenced field sample data for vegetation types, were all reconciled thematically to the IVC and NatureServe ecological systems classifications . Again, given the intent to map potential distribution of "natural" vegetation, only these types were sampled from existing sources. Given limitations of available field samples, randomized samples were also gathered from existing local maps in order to provide a robust and spatially balanced representation of each target map class where there was an acceptable level of confidence in the map source. Here we define "acceptable" as being judged sufficiently reliable by project ecologists experienced in the region and familiar with each map source. Next, we screened the patch sizes of a given type, patches > 10 km 2 in area provided the pool of source areas for sample selection. Selection of the 10 km 2 is again an expert judgment, having evaluated existing maps and concluded that sampling from types depicted in smaller areas risked introducing substantial error. We acknowledge that this risks exclusion of naturally rare ecosystem types, but we judged this risk was warranted given the quality of existing map information for this purpose. This pool of map polygons encompasses 95% of natural landscapes. Stratified random sample selection was weighted by continent-wide area of each type using the log 10 (area)*100, providing a sample total weighted towards types of lesser area. A total of 595,951 georeferenced samples were generated for the Americas, with an additional 70,380 held aside for map validation. 
Mapping inputs Explanatory variables, represented as map surfaces, included a series of biophysical factors, such as bioclimate, landform, slope, and aspect, as well as surface flow accumulation (Table 6). Bioclimates, as modeled by Metzger et al. (2013), reflect the categorization of temperature and precipitation regime applied to globally-available remotely sensed data, resulting in a total of 125 unique bioclimates at 1 km2 spatial resolution. Geophysical map surfaces were developed using 90 m × 90 m digital elevation data (Jarvis et al. 2008). Slope and aspect were measured in terms of degrees. The methodology for the landform class derivation used a variable moving window to assess relative relief and followed other regional scale approaches to model macro-landforms (Dikau et al. 1991; True et al. 2000). Landforms as discrete units were derived from Weiss (2001), who used the 90 m continental digital elevation to assign pixels into one of the following regional physiographic types: "canyons, deeply incised streams," "midslope drainages, shallow valleys," "upland drainages, headwaters," "u-shaped valleys," "plains," "open slopes," "upper slopes, mesas," "local ridges, hills in valleys," "midslope ridges, small hills in plains," and "mountain tops, high ridges." Land surface flow accumulation was derived from existing continental data (Lehner et al. 2006), based on a 90 m × 90 m hydrologically conditioned digital elevation model. This data set specifically aims to use topographic surfaces to indicate stream flow direction and flow accumulation for use in stream, wetland, and riparian ecosystem analysis. EarthSat NaturalVue (1998-2002) multi-year composites of a red-green-blue translation of 6 bands from Landsat 5-7 at 150 m pixel resolution served as the only spectral inputs to the model. These data served to differentiate the locations of natural types tending to occur in proximity and, therefore, in similar geophysical settings. An overview of the hierarchical modeling process to map potential distributions of IVC Macrogroups is shown in Figure 4. IVC vegetation hierarchy modeling We used a sequential mapping process where maps derived for multiple broader levels of the IVC classification hierarchy were then used as input to modeling distributions of types defined at lower hierarchical levels. In this application, the first thematic level for inductive modeling was the IVC Division (Level 4 from Table 1). For example, in South America the 80 types may be viewed as continental expressions of vegetation formations, with vegetation responding most directly to global climate patterns. On average, 1,032 (Min = 53, Max = 5,724) samples per map class were used to generate the South America portion of this map. No satellite imagery was used in the development of the IVC division-level map output; the EarthSat NaturalVue (1998-2002) imagery was used for both the Macrogroup and ecological systems levels of classification. Macrogroups were subsequently modeled using an average of 1,234 (Min = 33, Max = 3,433) samples per map class. Both the IVC division map output and EarthSat NaturalVue (1998-2002) imagery were used as map inputs for the macrogroup model.
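The sequential, hierarchy-aware modeling described here can be sketched compactly: the class predicted at a broader IVC level becomes an additional predictor for the next, finer level. The sketch below uses RandomForest, consistent with the CART family named above, on purely synthetic placeholder arrays; it demonstrates only the mechanics, not a meaningful model.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-ins for per-pixel predictor stacks (bioclimate, landform,
# slope, flow accumulation, spectral composites) and training labels.
rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 6))                 # biophysical + spectral predictors
y_division = rng.integers(0, 4, size=500)      # IVC Division labels (placeholder)
y_macrogroup = rng.integers(0, 10, size=500)   # IVC Macrogroup labels (placeholder)

# Step 1: model the broader Division level from the base predictors.
division_model = RandomForestClassifier(n_estimators=100, random_state=0)
division_model.fit(X, y_division)
division_pred = division_model.predict(X)      # the division map output

# Step 2: the division prediction joins the predictor stack for Macrogroups,
# mirroring "IVC division map output ... used as map inputs" above.
X_macro = np.column_stack([X, division_pred])
macro_model = RandomForestClassifier(n_estimators=100, random_state=0)
macro_model.fit(X_macro, y_macrogroup)
print(macro_model.predict(X_macro[:3]))
```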
Terrestrial ecological systems, being most numerous and most finely differentiated among the classification units used in this effort, were modeled using the macrogroup map as an additional model input. Once completed, the modeled terrestrial ecological systems layer is the finest thematic scale achievable using this technique. Because these units could be conceptually nested into IVC macrogroup concepts, the "bottom-up" aggregation of maps depicting these units is expected to provide the most reliable map of macrogroups. This aggregation was then reviewed and edited to finalize the distribution of each IVC macrogroup (Figure 5). Numbers of map classes by region and classification level are listed in Table 7. Map editing and refinement Over-prediction of more common (over rare, or low sample size) land cover types is a common source of error in CART-based inductive modeling of land cover (Weiss 1995; Lowry et al. 2007). That is, more common land cover classes can be over-mapped at the expense of less common classes. This could be anticipated in this particular application, where there is high similarity in predictor variable combinations for vegetation types that are naturally adjacent. In these instances, predicted distributions may be skewed in favor of some over other types in portions of their range of co-occurrence. Because of this, some form of expert-based review and map refinement is unavoidable. We assumed that over-prediction would be concentrated in regional landscapes with extensive land use history and only fragmentary remnants of natural vegetation types. However, because the generalized distribution of each terrestrial ecological system type had been previously documented by country and World Wildlife Fund (WWF) ecoregion, this knowledge was used in expert type-by-type review and refinement. Draft model outputs were attributed as extent measures per WWF ecoregion. These distributions were compared against known ecoregion distributions to identify likely error. Types found to be in error had their pixel distributions recoded to most-likely correct types for each WWF ecoregion. In turn, a second-phase expert review and refinement followed the procedure used in ecoregions but was applied to each type using a common grid of 100 km2 hexagons (Sahr 2013). Again, with each type attributed to the hexagon grid, type-by-type review led to final recoding of pixels to most-likely correct types. Final map products were produced at 90 m and 270 m pixel resolutions, with resampling to the unified pixel size accomplished using a bilinear interpolation technique suitable for continuous data. Map validation As noted above, during initial sample data collection from map sources, georeferenced samples of each vegetation type were gathered and set aside for use in map validation. These samples were gathered for types that had existing polygons in regional/local source maps > 10 km2 in size in South America and > 5 hectares for temperate and tropical North America. Of the 315 Macrogroup map classes in North and South America, 284 had sufficient samples to be quantitatively assessed. Once map edits were finalized for the 90 m products, validation samples were used to score the degree of agreement between samples and map classes for each map class at three spatial scales. Circular buffers around each sample encompassed 1 km2 (within 6 pixels of center) and 5 km2 (within 28 pixels of center).
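The buffered agreement check amounts to asking whether a sample's labeled type occurs anywhere within a pixel neighborhood of its map location. A toy version follows; square windows stand in for the circular buffers, and the class raster and samples are hypothetical:

```python
import numpy as np

def neighborhood_agreement(class_map, samples, radius_px):
    """Score a validation sample as agreeing if its labeled class occurs
    anywhere within radius_px pixels of its map location (square window
    approximating the circular buffers described in the text)."""
    rows, cols = class_map.shape
    hits = 0
    for r, c, label in samples:                 # (row, col, expected class id)
        r0, r1 = max(0, r - radius_px), min(rows, r + radius_px + 1)
        c0, c1 = max(0, c - radius_px), min(cols, c + radius_px + 1)
        if (class_map[r0:r1, c0:c1] == label).any():
            hits += 1
    return 100.0 * hits / len(samples)          # percent agreement

# Hypothetical 90 m class raster; radius 6 px and 28 px mirror the
# 1 km2 and 5 km2 buffer checks.
cmap = np.random.default_rng(2).integers(0, 5, size=(100, 100))
samples = [(10, 10, 3), (50, 50, 4), (90, 20, 1)]
print(neighborhood_agreement(cmap, samples, radius_px=6))
```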
A point sample was defined from the centroid of each pixel of the 6 × 6 neighborhood of the 90 m product and is equivalent to the 270 m version of each map. Overlay of these samples on the final map product generated tabular summaries to determine whether or not the mapped class present matched the type labeled to each sample; i.e., the same types co-occur within the buffered area. While truly independent samples could not be acquired to evaluate a spatial model depicting "potential/historical" extent of these vegetation types, this technique provides one initial measure of map quality and serves as a primary input to decisions regarding use of the map for type-by-type assessment. Thus, the percentage of agreement between validation samples and maps can indicate the degree of map reliability for use with a practical minimum map unit of 270 m vs. 1 km2 vs. 5 km2. Table 8 provides a summary of validation statistics for potential distribution maps of macrogroups. Additional detail is found in Suppl. material 1. For the 315 mapped macrogroup types, per-class numbers of samples for the 1-km2 validation sample area varied from a high of 972 to a low of 10. Summary validation statistics indicate high (> 80%) to moderate (> 60%) map accuracy overall, and on a per map class basis, at 270 m vs. 1 km2 vs. 5 km2 map resolutions. Using the most demanding "point" (or 270 m) validation sample area, 8 types scored at 90-100% agreement, 11 types scored 80-90% agreement, 16 types scored 70-80% agreement, 39 types scored 60-70% agreement, and 52 types scored 50-60% agreement. A total of 158 types (56% of all assessed map classes) scored below 50% agreement. The inclusion of the 1 km2 assessment was limited to the North American portion of the map product for two reasons. First, the sample sizes available for CONUS in North America were substantially higher than those for adjacent countries. Second, the inclusion of the 1 km2 assessment allowed examination of model performance over a gradient of neighborhood sizes. Using the 1-km2 validation sample area in North America only, 44 types scored at 90-100% agreement, 32 types scored 80-90% agreement, 14 types scored 70-80% agreement, 8 types scored 60-70% agreement, and 9 types scored 50-60% agreement. A total of 12 types (10% of all assessed map classes) scored < 50% agreement. For 1 km2 samples, the total sample agreement was 85% and the median level of map class agreement for the types assessed was 88%. These results indicate that map reliability is limited on a per pixel basis (at 270 m pixels), but within relatively small clusters of adjacent pixels the reliability of the map increases for most map classes. Conclusions There is scientific value in documenting the location and trends in the extent and condition of ecosystem types to inform public policy and conservation action. These two case studies illustrate what can be accomplished with the systematic application of robust, hierarchically-structured, vegetation-based classification and machine learning tools that utilize georeferenced sample locations and robust predictor maps. The USA LANDFIRE case study illustrates where a deep history of vegetation-based classification and investments in key inputs to mapping (e.g., georeferenced samples, remote sensing data, sophisticated algorithms) can come together to generate standard map products covering more than 9.8 million km2 of U.S. land that are unsurpassed, in terms of spatial and thematic resolution, anywhere in the world.
That being said, it also remains clear that mapping at thematic resolutions of the USNVC Group and finer requires very large and spatially balanced inputs of georeferenced samples, and even with the extensive prior investments, these remain a key limitation affecting the quality of map outputs. While one can reasonably say that "we know enough" about vegetation types at "mid" scales of the classification hierarchy (e.g., the USNVC Group), sufficient numbers of georeferenced samples that depict the full spectrum of those classification units are lacking across their range of distribution. Efforts such as LANDFIRE provide knowledge of where these gaps exist so that new data collection could maximize its effect on future map iterations. The NatureServe effort for the Americas - encompassing 22% of the global land surface - demonstrates methods and outputs suitable for worldwide application at continental scales, albeit more challenging in parts of the globe with a more limited history of ecosystem classification and mapping, and more limited availability of predictor layers. Along with this mapping approach, the rich text, tabular, and map data set accompanying that study provide a foundation for deepened analysis and conservation action across the Americas. Continued collection of the input data used in the case studies could enable mapping at these spatial and thematic resolutions around the globe. Data availability Data associated with the LANDFIRE case study are available from www.landfire.gov; data associated with the NatureServe case study are accessible from: https://transfer.natureserve.org/download/Longterm/Ecosystem_Americas/Maps/ Author contributions P.J.C. was a team member in both case studies and led the manuscript preparation; J.C.H. developed and implemented map production of the second NatureServe case study; D.D. completed primary mapping tasks in the LANDFIRE case study, and provided manuscript text and review; J.S. coordinated with P.J.C. on classification-related efforts for the LANDFIRE case study, and contributed text for the manuscript. All authors critically revised the manuscript.
ABJM quantum spectral curve at twist 1: algorithmic perturbative solution
We present an algorithmic perturbative solution of the ABJM quantum spectral curve at twist 1 in the sl(2) sector for arbitrary spin values, which can be applied, in principle, to arbitrary orders of perturbation theory. We determined the class of functions -- products of rational functions in the spectral parameter with sums of Baxter polynomials and Hurwitz functions -- closed under elementary operations, such as shifts and partial fractions, as well as differentiation. It turns out that this class of functions is also sufficient for finding solutions of the inhomogeneous Baxter equations involved. For the latter purpose we present a recursive construction of the dictionary of solutions of the Baxter equations for given inhomogeneous parts. As an application of the proposed method we present the computation of anomalous dimensions of twist 1 operators at six loop order. There is still room for improvement of the proposed algorithm, related to the simplification of the arising sums. The advanced techniques for their reduction to the basis of generalized harmonic sums will be the subject of a subsequent paper. We expect this method to be generalizable to higher twists as well as to other theories, such as N=4 SYM.
Eventually, a detailed study of TBA equations for super spin chains corresponding to the N = 4 SYM and ABJM models has led to their simplified alternative formulations in terms of the Quantum Spectral Curve (QSC), a set of algebraic relations for Baxter-type Q-functions together with analyticity and Riemann-Hilbert monodromy conditions for the latter [77][78][79][80][81][82][83][84]. Within the quantum spectral curve formulation one can relatively easily obtain numerical solutions for any coupling and state [85][86][87]. Also, the QSC formulation has allowed the construction of iterative perturbative solutions for these theories at weak coupling up to, in principle, arbitrary loop order [81,88,89]. The mentioned solutions are, however, limited to the situation when the states' quantum numbers are given explicitly by some integer numbers. Recently, in Ref. [90,91], we started developing techniques for the solution of the QSC equations treating the state quantum numbers as symbols. The first technique, based on the Mellin space transform [90], turned out to be too complex for an all-loop generalization. On the other hand, in Ref. [91] we suggested that there should be a relatively easy way to obtain a perturbative solution for the spectrum of twist 1 operators in the sl(2) sector of the ABJM model working directly in spectral parameter space. The goal of this paper is to present the algorithm for the perturbative solution of the ABJM quantum spectral curve at twist 1 in the sl(2) sector to any loop order. The latter is based on the existence of a class of functions -- products of rational functions in the spectral parameter with sums of Baxter polynomials and Hurwitz functions -- which is closed under elementary operations, such as shifts and partial fractions, as well as differentiation. The introduced class of functions is sufficient for finding solutions of the inhomogeneous Baxter equations involved, using a recursive construction of the dictionary of solutions of the Baxter equations for given inhomogeneous parts. As an application of the proposed method we present the computation of anomalous dimensions of twist 1 operators at six loop order.
There is still room for improvement of the proposed algorithm, related to the simplification of the arising sums, and we plan to present advanced techniques for their reduction to the basis of generalized harmonic sums in one of our subsequent papers. The presented approach has the potential for generalization to higher twists of operators, as well as to other theories such as N = 4 SYM and twisted N = 4 SYM and ABJM models.
This paper is organized as follows. In the next section we give the necessary details on the ABJM quantum spectral curve equations, putting emphasis on the Pν-system. Section 3 contains all the details of our solution of the Riemann-Hilbert problem for the Pν-system, used for the calculation of anomalous dimensions of twist 1 operators. Next, section 4 contains the results for the anomalous dimensions up to six loop order and their discussion. Finally, in section 5 we present our conclusions. Appendices and Mathematica notebooks contain further details of our calculation.
ABJM quantum spectral curve
As already mentioned in the introduction, the ABJM model is the second most popular playground for testing the AdS/CFT correspondence. It is a three-dimensional N = 6 Chern-Simons theory based on the product U(N) × Û(N) of two gauge groups with opposite Chern-Simons levels ±k. In the planar limit, where N, k → ∞ with the 't Hooft coupling λ = N/k kept fixed, this theory has a dual description in terms of IIA superstring theory on AdS_4 × CP^3. The field content of the ABJM model consists of two gauge fields A_µ and Â_µ, four complex scalars Y^A and four Weyl spinors ψ_A, with the matter fields transforming in the bi-fundamental representation of the gauge group. The global symmetries of ABJM theory with Chern-Simons level k > 2 are given by the orthosymplectic supergroup OSp(6|4) [38,92] and the "baryonic" U(1)_b [92]. The bosonic subgroups of the OSp(6|4) supergroup are related to the isometries of the superstring background AdS_4 × CP^3.
In the present paper we will be interested in the calculation of anomalous dimensions of twist 1 gauge-invariant operators from the sl(2) sector for arbitrary spin values S. The latter are given by single-trace operators of the form given in [93], where twist 1 corresponds to L = 1. The expressions for the anomalous dimensions can be conveniently obtained by solving the corresponding spectral problem for the OSp(6|4) spin chain. The most advanced framework for that at the moment is offered by the quantum spectral curve (QSC) method, an alternative reformulation of the Thermodynamic Bethe Ansatz (TBA) as a finite set of functional equations, the Q-system. The most important advance it provides is the considerable simplification of spectral-problem calculations. In the case of the ABJM model the QSC formulation was introduced in Ref. [82,83], see also Ref. [89]. To perform actual calculations of anomalous dimensions we will use monodromy conditions for the part of the ABJM Q-system known as the Pν-system [82,83]. The latter consists of six functions P_A, A = 1, . . . , 6 and eight (4 + 4) functions ν_a, ν^b, a, b = 1, . . . , 4, satisfying a nonlinear matrix Riemann-Hilbert problem [82,83]. Here and in the following, f̃ will denote a function f analytically continued around one of the branch points on the real axis. In addition, the P- and ν-functions should satisfy the extra constraints P_5 P_6 = 1 + P_2 P_3 − P_1 P_4 (2.5) and ν_a ν^a = 0 (2.6). Both P_A and ν_a, ν^a are functions of the spectral parameter u.
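For reference, the two algebraic constraints just quoted, (2.5) and (2.6), read in display form (the raised/lowered placement of the ν indices follows the standard convention of the ABJM QSC literature):

    P_5 P_6 = 1 + P_2 P_3 - P_1 P_4, \qquad \nu_a \nu^a = 0.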
The P_A functions have a single cut on the defining Riemann sheet running from −2h to +2h (h is the effective ABJM QSC coupling constant; in contrast to N = 4 SYM, it is a nontrivial function of the 't Hooft coupling λ [25,94], and there is a conjecture for the exact form of h(λ) [95,96], made by comparison with localization results). The ν_a, ν^a functions have an infinite set of branch cuts located at the intervals (−2h, +2h) + in, n ∈ Z, and satisfy simple quasi-periodicity relations, where P is a state-dependent phase fixed from the self-consistency of the QSC equations [83]. To get a QSC description of states in the sl(2) sector (2.1) it is sufficient to consider the Pν-system reduced to symmetric, parity-invariant states. The reduced Pν-system is identified by the constraints P_5 = P_6 = P_0, ν^a = χ^{ab} ν_b and is written as in [82,83,89]; the ν_a now satisfy periodic/anti-periodic constraints (σ = ±1), with f^[n](u) = f(u + in/2), and the constraint for the P-functions takes a correspondingly reduced form. In addition to the above analytical properties of the P- and ν-functions, it is required [83,89] that they are free of poles and stay bounded at the branch points. The quantum numbers of the spin chain states under consideration, that is twist L, spin S and conformal dimension Δ, are encoded in the behavior of the P-, ν-functions at large values of the spectral parameter u [82,83,89], which serves as boundary conditions for the Riemann-Hilbert problem under study. The anomalous dimension γ, which is our main interest here, is given by γ = Δ − L − S.
Solution of Riemann-Hilbert problem for Pν-system
To solve the Riemann-Hilbert problem for the fundamental Pν-system it is convenient to add to the original equations (2.8)-(2.9) their algebraic consequences [89]. We will also need the equations following from the sum of equations (2.8) and (3.15), in which p_A = (xh)^L P_A and x is the Zhukovsky variable used to parameterize the single cut of the P-functions on the defining Riemann sheet. In summary, the equations we are going to solve are (3.20)-(3.28). In addition, there are simple consequences of the given cut structure of the ν-functions which will be used during the solution: namely, certain combinations of functions do not have cuts on the real axis. To find a perturbative solution of the above system of equations we will use an expansion of the ν_a(u) functions in the QSC coupling constant h together with a parametrization of the P-functions as in [88,89], in which we have also accounted for the large u asymptotics of the P-functions (2.13). We would like to note that, due to the residual gauge symmetry of the QSC equations, the coefficients m_{i,k} are functions of the spin S only, and the mentioned gauge freedom can also be used to set A_1 = 1 and A_2 = h^2. The analytical continuation of the P-functions through the cut on the real axis is simple and is given in [88]. In what follows we will consider the perturbative solution in the special case of twist 1 operators, so from now on we put L = 1.
Sums of Baxter polynomials
Recently, in Ref. [91], we suggested that the full all-loop solution of the Pν-system (2.8)-(2.9) can be obtained in terms of linear combinations of products of rational functions (in the spectral parameter u), Hurwitz functions and Baxter polynomials, and showed an explicit example at four-loop order. The purpose of this paper is to present an explicit all-loop solution. To do that, let us first introduce the necessary notation.
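One piece of notation already used above can be recorded explicitly: the Zhukovsky variable. A standard convention in the QSC literature (assumed here, since the paper's explicit formula is not reproduced in this text) is

    x + \frac{1}{x} = \frac{u}{h}, \qquad x(u) = \frac{u + \sqrt{u - 2h}\,\sqrt{u + 2h}}{2h},

with |x(u)| ≥ 1 on the defining sheet, so that the branch points sit exactly at u = ±2h, matching the cut of the P-functions described above.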
The expressions for the Baxter polynomials are obtained as leading order solutions for the ν-functions, where Q_S(u) is given by (3.37) and α is some spin-dependent constant to be determined later. Let us now introduce the class of sums (3.38) involving Baxter polynomials, Q(u)|w_1(•), w_2(•), . . . , w_n(•), where the w_k are some weights. In the case when the argument of Q_S is u, we will often drop it and simply write Q|w_1(•), w_2(•), . . . , w_n(•). We also introduce a shortcut notation for these sums. Note that the Q|w_1, W-sums satisfy the usual stuffle relations. Here and below we write the weights w_k(•) in several equivalent ways, w_k(•) ≡ w_k(j) ≡ w_k, and use W to denote an arbitrary (maybe empty) sequence of weights. It turns out that the weights at twist 1 can always be reduced to a set of canonical weights (3.42), for which we introduce special notations.
In Ref. [91] we considered elementary operations on Baxter polynomials, such as shifts and partial fractions. These can also be extended to the sums of Baxter polynomials. In particular, the shift in the spectral parameter u can be performed using relations valid for a = ±1; next, there are rules for partial fractions (a = ±); finally, the spin S of the Baxter polynomials can be shifted using (3.50).
Remarkably, the introduced class of functions, Q| . . ., is closed under differentiation. In order to prove this, let us first consider the sums (3.51) and show that they reduce to linear combinations of our standard sums (3.41). We proceed by induction over the depth of the sum. We first split the summand by partial fractions, where the lower sign is chosen if w_1(j_1) contains the factor (−)^{j_1}. Then the denominator j_0 − j_1 cancels in the first term and, therefore, this term gives rise only to the standard sums (3.41). The second term gives rise to new sums. In order to transform these sums, we observe an identity proved by the substitution j_1 → j_0 + j_2 − j_1. Then, by (3.55), the inner sum is again of the form (3.51), but with the depth reduced. This proves the induction step. Now, using the differentiation formula for Q_S, and noting that the inner sums in the resulting expression are both of the form (3.51), they can be expressed as linear combinations of the standard sums (3.41). Therefore, i∂_u Q| . . . can indeed be expressed as a linear combination of Q| . . . sums. In particular, the expansion of the Q|W-sums at u = i/2 can be expressed in terms of W-sums.
Let us summarize the results of the present subsection. We introduced a class of functions -- products of rational functions in the spectral parameter u with Q|W-sums (3.38) -- closed under elementary operations, such as argument shifts and partial fractions, as well as under differentiation. As we will see in the next subsection, this class of functions, extended to products with Hurwitz functions, is also sufficient for finding a perturbative solution of the inhomogeneous Baxter equations.
Solutions of Baxter equations
The most complicated part of the perturbative solution of the Riemann-Hilbert problem is the solution of the Baxter equations (3.58) and (3.59). The solution of these equations contains in general two pieces: the solution of the homogeneous equation with arbitrary periodic coefficients and some particular solution of the inhomogeneous equation.
Homogeneous solution
The first homogeneous solutions of the Baxter equations (3.58) and (3.59) are easy to find; they are given in terms of Φ^per_Q(u) and Φ^anti_Q(u), arbitrary i-(anti)periodic functions of the spectral parameter u.
To find the second solutions, let us consider a suitable identity, from which it is easy to see the second homogeneous solution of the second Baxter equation (3.59). Then the general solutions of the first and second homogeneous Baxter equations are written in terms of Φ^per_a and Φ^anti_a, arbitrary i-periodic and i-anti-periodic functions of the spectral parameter u. They have to be determined from the consistency conditions implied by the equations (3.20)-(3.28). We will parametrize their u dependence, similarly to Ref. [88,89], using the basis of i-periodic and i-anti-periodic combinations of Hurwitz functions, P_k(u); note that P_k(u) can be expressed via elementary functions. The functions Φ^per_a and Φ^anti_a are then written as finite sums over this basis, with the upper limits of summation depending on the order of perturbation theory: k = 1 for NLO, k = 2 for NNLO, and so on.
Dictionary for inhomogeneous solutions
To find particular solutions of the Baxter equations (3.58) and (3.59), let us introduce the operators F_{±1} which are right inverses of the Baxter operators B_{±1} of Eqs. (3.58), (3.59), i.e., satisfy B_{±1} F_{±1} = 1. Note that they are nothing but the operators F^S_{1,2} introduced in our previous paper [91]. Our basic idea now is to compile a dictionary sufficient to treat all the functions appearing in the right-hand sides of the Baxter equations.
Action of F_{±1} on Q|W. We first act with the operators B_{±1} on the functions Q(u)|w, W. Replacing w → 1_+ · w in the first formula and w → 1̄_+ · w in the second, we obtain the corresponding entries in our dictionary.
Action of F_{±1} on Q|W ξ_{a_1,a_2,...}. The basic idea of calculating F_{±1}[Q|W ξ_A] is to use the analogue of the summation-by-parts formulae from Ref. [88,89], which hold for any two functions f and g. Substituting g → F_{±σ}[g] and f = ξ_{a,A}, we obtain the required relations. (In the present section σ = ±1 is an arbitrary sign, not to be confused with σ = (−)^S entering (2.11) and other equations of the Pν-system.) The operators F_{±1} in the right-hand sides of these equations act on 'simpler' objects, because the number of indices of the ξ-functions is reduced by one. The formula (3.80) deserves separate mention: in order to find F_σ[Q|ξ_{σ|a|,A}], we consider an auxiliary identity, transform its right-hand side using the formulas (A.133), and obtain the desired entry. From now on we will present only the final entries of our dictionary, as the derivations go more or less along the same lines as before. The action of F_{±1} on ξ_A/(u ± i/2)^a is obtained in the same way.
Action of F_{±1} on u^n Q|W ξ_A. Finally, the right-hand sides of the Baxter equations may also contain terms of the form u^n Q|W ξ_A with n = 1, 2. First, we use the same summation-by-parts technique as before: namely, we use Eq. (3.77) with g = u^n Q|W to reduce the problem to the calculation of F_σ[u^n Q|W]. In order to calculate F_σ[u^n Q|W], it is convenient to introduce the notation (3.88), in terms of which the required action is easily expressed; here we assume that w(j) is one of the canonical weights (3.42). Then we use partial-fractioning identities, choosing the lower sign if w(j) contains the (−)^j factor and the upper sign otherwise. The first term in Eq. (3.94) is then obviously a combination of canonical weights. The second term contains a shifted weight 1_±(j − 1). Note that this shift is correlated with the superscript {n} of the Q-functions. Therefore, we need transformation rules for the sums of the form (3.96), where w(•) is one of the canonical weights (3.42) and the weight 1 stands for one of the four weights 1_+, 1_−, 1̄_+, 1̄_−.
First, we note simple consequences of Eq. (3.49), valid for n > 0. The sums of the form (3.96), after the substitution of the definition (3.88) and the shift j_1 → j_1 + n, are almost of the required form, except for the upper limit of summation over j_2, which is j_1 + n − 1, i.e., shifted by n. We can then treat the missing/redundant terms in a recursive manner: for example, after one such step the second term is again of the same form as those in the right-hand side of Eq. (3.89). Note that special attention should be paid to the sums with depth less than or equal to |n|. The full set of reduction rules can be found in the code of the attached Mathematica file.
Constraints solution
Now, knowing how to find the solutions of the two Baxter equations (3.58) and (3.59) in each order of perturbation theory, we may proceed with the determination of the constants in the ansatz for the P-functions together with the additional φ-functions (3.100), as well as with the requirement of the absence of poles in the combinations (3.28) for ν.
NLO
Following the above procedure step by step at NLO we get: first, from equation (3.24), the relation (3.103); then, from equation (3.25), the φ-function at this order; and from the absence of poles in equations (3.28) we obtain the remaining conditions. (The expressions for the q^{(1|2)}_{1,2}-functions can be found in the accompanying Mathematica files.)
NNLO
At NNLO we did not reduce all the w_1(•), w_2(•), . . . , w_n(•) sums at intermediate steps to H- and B-sums. Such a reduction was performed only for the NNLO anomalous dimensions at the end. Moreover, this final reduction was not algorithmic -- we simply solved a system of equations for 768 spin values, which is the dimension of our H̄ basis at weight 5, corresponding to NNLO. Still, our preliminary considerations show that the required algorithmic reduction at all steps is possible and, what is more important, that it will make our algorithm much more efficient. The details of this reduction will be the subject of one of our subsequent papers. Following the steps of the general procedure for the constraints solution at NNLO, we obtain the corresponding results.
Anomalous dimensions
The expressions for the anomalous dimensions can be easily obtained from the corresponding expressions for the A-coefficients (see appendix B for the definition of H̄-sums). The obtained results are in complete agreement with previous results at fixed spin values [89,97,98]. Note that our H̄-sums here can be further rewritten using the cyclotomic or S-sums of Ref. [99,100], provided one extends the definition of the latter to complex values of the x_i parameters. It is also possible to express them in terms of the twisted η-functions introduced in Ref. [41]. Here we see that the maximal transcendentality principle [104,105] also holds for the anomalous dimensions of ABJM theory, with account of finite size corrections, up to six loop order, and it is now natural to assume its validity for the ABJM model to all orders. That is, the results for the anomalous dimensions in each order of perturbation theory are expressed in terms of H̄-sums of uniform weight w, where w = 3 at NLO and w = 5 at NNLO. In general, the size of the basis of H̄-sums at weight w is equal to 3·4^{w−1}, and at NNNLO (w = 7) we should have 12288 such sums. Moreover, while discussing the solution of the NNLO constraints in subsection 3.3, we noted that at present we are missing an automatic reduction of the w_1(•), w_2(•), . . . , w_n(•) sums arising at different steps of our calculation to H̄-sums, which makes intermediate expressions even larger.
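As a quick consistency check of the basis counting just quoted, evaluating the formula at the three relevant weights reproduces both numbers appearing in the text (768 at NNLO and 12288 at NNNLO):

    3 \cdot 4^{3-1} = 48, \qquad 3 \cdot 4^{5-1} = 768, \qquad 3 \cdot 4^{7-1} = 12288.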
We are planning to address this latter issue in one of our subsequent publications. In addition, it is desirable to construct a Gribov-Lipatov reciprocity-respecting basis [106][107][108] of generalized harmonic sums also for the ABJM model. The latter, in the case of N = 4 SYM, is known to be much more compact compared to the original basis of harmonic sums and was used in Ref. [109][110][111][112][113] to simplify the reconstruction of the full spin S dependence of anomalous dimensions from the knowledge of anomalous dimensions at a set of fixed spin values.
Conclusion
In this paper we have presented an algorithmic perturbative solution of the ABJM quantum spectral curve for the case of twist 1 operators in the sl(2) sector of the theory. The solution treats the operator spin S as a symbol and applies to all orders of perturbation theory. The presented solution is performed directly in spectral parameter u-space and effectively reduces the solution of the multiloop Baxter equations, given by inhomogeneous second order difference equations with complex hypergeometric functions, to a purely algebraic problem. The solution is based on the introduction of a new class of functions -- products of rational functions in the spectral parameter with sums of Baxter polynomials and Hurwitz functions -- which is closed under elementary operations, such as shifts and partial fractions, as well as differentiation. This class of functions is also sufficient for finding solutions of the inhomogeneous Baxter equations involved. For the latter purpose we presented a recursive construction of the F_{±1} images of products of Hurwitz functions with arbitrary indices, or of fractions 1/(u ± i/2)^a, with leading order Baxter polynomials or their sums. The latter enter the inhomogeneous pieces of the multiloop Baxter equations at different orders of the perturbative expansion in the coupling constant. Similarly to Ref. [88,89], where all the operations performed closed on trilinear combinations of rational, η- and P_k-functions, all our operations close on four-linear combinations of rational, η-, P_k- and Q|W-functions.
As a particular application of our method we have considered the anomalous dimensions of twist 1 operators in ABJM theory up to six loop order. The obtained result was expressed in terms of generalized harmonic sums decorated by fourth-root-of-unity factors, introduced by us earlier. The results for the anomalous dimensions respect the principle of maximal transcendentality.
It should be noted that there is still room for improvement of the proposed algorithm, related to the simplification of the sums arising at different steps of the presented solution. The advanced techniques for their reduction to H̄-sums will be the subject of one of our subsequent papers. We expect the presented method to be generalizable to higher twists as well as to other theories, such as N = 4 SYM. The developed techniques should also be applicable to the solution of twisted N = 4 and ABJM quantum spectral curves with P-functions having twisted non-polynomial asymptotics at large values of the spectral parameter, see [79,84,114] and references therein. The latter models have recently received a lot of attention in connection with the advances in so-called fishnet theories [115][116][117][118][119][120][121][122][123][124][125][126][127][128][129][130]. Moreover, similar ideas should also be applicable to the study of the BFKL regime within the quantum spectral curve approach [54][55][56] for N = 4 SYM.
In the latter case we also have a perturbative expansion in which both the coupling constant g and the parameter w ≡ S + 1, describing the proximity of the operator spin S to −1, are considered small, while their ratio g^2/w remains fixed.
A Hurwitz functions
We define the Hurwitz functions ξ_A entering the presented solution as shifted versions of the Hurwitz functions η_A introduced in [88,89]. Here A denotes an arbitrary sequence of indices, and the ξ-function without indices is identical to unity.
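For orientation, a commonly used convention for the nested Hurwitz η-functions of [88,89] is the multiple sum

    \eta_{a_1,\dots,a_k}(u) = \sum_{n_1 > n_2 > \dots > n_k \ge 0} \; \prod_{j=1}^{k} \frac{1}{(u + i n_j)^{a_j}},

quoted here only as a hedged reference convention: the exact shift relating ξ_A to η_A is truncated in the text above, so this display should not be read as the paper's precise definition.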
Induction of Apoptosis by Eugenol and Capsaicin in Human Gastric Cancer AGS Cells - Elucidating the Role of p53
Arnab Sarkar, Shamee Bhattacharjee, Deba Prasad Mandal
Background: Loss of function of the p53 gene is implicated in defective apoptotic responses of tumors to chemotherapy. Although the pro-apoptotic roles of eugenol and capsaicin have been amply reported, their dependence on p53 for apoptosis induction in gastric cancer cells is not well elucidated. The aim of the study was to elucidate the role of p53 in the induction of apoptosis by eugenol and capsaicin in a human gastric cancer cell line, AGS. Materials and Methods: AGS cells were incubated with or without various concentrations of capsaicin and eugenol for 12 hrs, in the presence and absence of p53 siRNA. Cell cycling, annexin V and expression of the apoptosis-related proteins Bax, the Bcl-2 ratio, p21, cyt c-caspase-9 association, caspase-3 and caspase-8 were studied. Results: In the presence of p53, capsaicin was a more potent pro-apoptotic agent than eugenol. However, silencing of p53 significantly abrogated apoptosis induced by capsaicin but not that by eugenol. Western blot analysis of pro-apoptotic markers revealed that, as opposed to capsaicin, eugenol could induce caspase-8 and caspase-3 even in the absence of p53. Conclusions: Unlike capsaicin, eugenol could induce apoptosis both in the presence and absence of functional p53. Agents which can induce apoptosis irrespective of the cellular p53 status have immense scope for development as potential anticancer agents.
Introduction
Evasion of cell death is one of the important hallmarks of malignant transformation and hence provides an attractive opportunity for intervention. To this day, induction of apoptosis is considered to be a key strategy in the treatment of cancer. A host of literature suggests that the success of many anticancer therapies lies in their ability to generate a potent pro-apoptotic stimulus. However, due to the severe side-effects of conventional chemotherapies, phytochemicals from dietary sources are receiving increasing prominence as potential sources of more compatible anticancer drugs. A myriad of literature encompassing both in vitro and in vivo studies indicates apoptosis induction as a mechanism of action for a wide range of dietary phytochemicals.
Eugenol (4-allyl-2-methoxyphenol), a phenolic compound found abundantly in the essential oil of a commonly consumed spice, clove (Syzygium aromaticum), is one such phytochemical. It has been exploited for various medicinal applications as an antiseptic, analgesic, antioxidant, antiviral and antibacterial agent (Pramod et al., 2010). Furthermore, many recent studies have explored the anticancer potential of eugenol and its ability to induce apoptosis in diverse cancer cell lines as well as in in vivo tumor models (Jaganathan and Supriyanto, 2012; Majeed et al., 2014).
Capsaicin is yet another phytochemical, found naturally as the pungent constituent of hot chilli peppers of the genus Capsicum (family Solanaceae). There are various reports which highlight the physiological and pharmacological properties of capsaicin, including its anticancer property. The capacity of capsaicin to suppress the growth of cancer cells is considered to be primarily mediated through induction of apoptosis (Lin et al., 2013).
Despite increasing evidence for, and molecular unmasking of, apoptotic induction by the above-mentioned phytochemicals, the involvement of p53 in the apoptotic cascade induced by eugenol and capsaicin in gastric cancer cells is not very clear. Although a fair number of studies have focused on the anticancer potential of capsaicin and eugenol, the influence of p53 status on induction of apoptosis by these two phytochemicals is not well characterized. Various mechanisms for the induction of apoptosis have been proposed for capsaicin and eugenol. While several studies have reported that capsaicin induces apoptosis by upregulating p53, there are also a few instances of p53-independent induction of apoptosis by capsaicin (Mori et al., 2006). Similarly, there are reports of both p53-dependent as well as independent induction of apoptosis by eugenol in diverse cellular systems (Al-Sharif et al., 2013).
The purpose of this study is to elucidate the mechanism of apoptotic induction by eugenol and capsaicin in gastric cancer cells and to investigate the outcome of treatment by these phytochemicals in the presence and absence of p53.
Chemicals
The Annexin V assay kit was purchased from Abcam (USA). The Cycle TEST PLUS DNA reagent kit was procured from the Becton Dickinson Immunocytometry system (San Jose, CA). The p53 siRNA transfection kit (Dharmacon ON-TARGET Plus siRNA) was purchased from GE Healthcare. Anti-mouse antibodies against p53, p21, Bax, Bcl-2, Cyt c, caspase-3, caspase-8, procaspase-9 and PCNA were procured from Santa Cruz (USA). Bacitracin, leupeptin, pepstatin A, PMSF, phosphatase inhibitor cocktails, protein A-Sepharose beads, RNase, NAC and NBT were purchased from Sigma (St. Louis, MO). Nitrocellulose membranes and filter papers were obtained from Pall Corporation, USA. Acetonitrile, methanol and ethanol were HPLC grade and were purchased from Merck. The remaining chemicals and materials were purchased from local firms (India) and were of the highest grade.
Cell culture
AGS cells were routinely maintained in RPMI 1640 supplemented with 10% fetal bovine serum, insulin (0.1 units/mL), L-glutamine (2 mM), sodium pyruvate (100 μg/mL), non-essential amino acids (100 μM), streptomycin (100 μg/mL), penicillin (50 units/mL) and tetracycline (1 μg/mL) (Sigma Chemical Co.) at 37°C in a humidified incubator containing 5% CO2. Cells were incubated with or without various concentrations of capsaicin and eugenol for 12 hrs. The cells were then processed for the analysis of cell count and cell cycle, detection of apoptosis and Western blotting of several pro- and anti-apoptotic proteins, as described in the following sections.
Hemolytic assay
Fresh human blood was centrifuged at 4000 × g for 10 min and the cell pellet was washed thrice and re-suspended in 10 mM PBS at pH 7.4 to obtain a final concentration of 1.6 × 10^9 erythrocytes/mL. Equal volumes of erythrocytes were incubated with varying concentrations of capsaicin and eugenol with shaking at 37°C for 1 hr. Samples were then subjected to centrifugation at 3500 × g for 10 min at 4°C. RBC lysis was measured at the different compound concentrations by taking the absorbance at an OD of 540 nm. Complete hemolysis (100%) was determined using 1% Triton X-100 as a control. The hemolytic activity of the spice active components was calculated as a percentage using the following equation: H = 100 × (Op − Ob)/(Om − Ob), where Op is the optical density at a given compound concentration, Ob is the optical density of the buffer and Om is the optical density of the Triton X-100 control.
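A minimal sketch of the hemolysis calculation just described (variable names are illustrative; Op, Ob and Om are the 540 nm absorbances of the sample, the buffer blank and the Triton X-100 control):

    def percent_hemolysis(od_sample, od_buffer, od_triton):
        # H = 100 * (Op - Ob) / (Om - Ob); Triton X-100 defines 100% lysis
        return 100.0 * (od_sample - od_buffer) / (od_triton - od_buffer)

    # e.g. an absorbance of 0.30 against a buffer blank of 0.05 and a
    # Triton control of 1.05 corresponds to 25% hemolysis:
    assert abs(percent_hemolysis(0.30, 0.05, 1.05) - 25.0) < 1e-9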
Cell viability assay
The effect of capsaicin and eugenol on the viability of AGS cells was determined by the Trypan blue exclusion test. Cells were treated with different doses of these two phytochemicals and, at a definite time point (12 hrs), the cells that could exclude the Trypan blue dye were counted in a haemocytometer as viable cells. Vehicle-treated cells were considered as control and the viability of these cells was taken as 100%. The IC50, the dose which induced 50% cell killing, was determined for the compounds.
MTT assay
The cytotoxicity of the spice principles was tested on AGS cells by the MTT assay. Briefly, cells were seeded in a 96-well microtitre plate (2 × 10^4 cells/well in 100 µL of complete medium) and then incubated with different concentrations of the two phytochemicals. After 12 h of exposure to the phytochemicals, 50 µL of MTT (5 mg/5 mL) was added to each well, and the cells were incubated in the dark at 37°C for an additional 4 h. Thereafter, the medium was removed, the formazan crystals were dissolved in 200 µL of dimethyl sulfoxide, and the absorbance was measured at 570 nm.
siRNA transfection
Gene silencing of p53 was done using siRNA in accordance with the manufacturer's instructions. For siRNA transfection experiments, AGS cells were plated and transfected after 24 h at ~70% confluency using DharmaFECT 1 siRNA transfection reagent, according to the manufacturer's instructions. After 60 h of transfection, cells were exposed to capsaicin and eugenol for 12 h prior to analysis of the experimental parameters. A non-targeting control siRNA ("scramble" siRNA) was used as a negative control. siGLO incorporation was used to analyze the extent of transfection, which in this case was approximately 90%. Western blotting of protein expression was used to corroborate the result.
Cell cycle distribution analysis
For the determination of the cell cycle phase distribution of nuclear DNA, AGS cells (1 × 10^6 cells) were harvested. Then, cells were fixed with 3% p-formaldehyde, permeabilized with 0.5% Triton X-100, and nuclear DNA was labeled with propidium iodide (PI, 125 μg/mL) after RNase treatment using the Cycle TEST PLUS DNA reagent kit. The cell cycle phase distribution of nuclear DNA was determined on a FACS Calibur using Cell Quest software (Becton-Dickinson). Histogram displays of DNA content (x-axis, PI fluorescence) versus counts (y-axis) were generated, and Cell Quest statistics were employed to quantitate the data at the different phases of the cell cycle.
Annexin V assay
Apoptosis assays were carried out based on the instructions from the Annexin V Apoptosis Kit (Biovision). Briefly, PI and Annexin V were added directly to AGS cells. The mixture was incubated for 15 min at 37°C. Cells were fixed and then analyzed on a FACS Calibur (equipped with a 488 nm Argon laser light source; 515 nm band pass filter, FL1-H, and 623 nm band pass filter, FL2-H) (Becton Dickinson). Electronic compensation of the instrument was done to exclude overlapping of the emission spectra. A total of 10,000 events were acquired; the cells were properly gated, and dual-parameter dot plots of FL1-H (x-axis; Fluos fluorescence) versus FL2-H (y-axis; PI fluorescence) were displayed in logarithmic fluorescence intensity.
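Relating back to the Trypan blue and MTT readouts described at the start of this section, viability percentages and an IC50 can be derived with a few lines of code. The sketch below is a hypothetical illustration: the linear-interpolation approach and the example numbers are assumptions, not the study's actual analysis pipeline.

    import numpy as np

    def percent_viability(a570_treated, a570_control, a570_blank=0.0):
        # MTT readout: viability relative to the vehicle-treated control (100%)
        return 100.0 * (a570_treated - a570_blank) / (a570_control - a570_blank)

    def ic50(doses, viability_pct):
        # dose at which viability crosses 50%, by linear interpolation;
        # np.interp needs increasing x, and viability falls with dose, so flip
        doses = np.asarray(doses, dtype=float)
        v = np.asarray(viability_pct, dtype=float)
        return float(np.interp(50.0, v[::-1], doses[::-1]))

    doses = [0, 50, 100, 200, 400]   # e.g. capsaicin doses in uM (hypothetical)
    viab = [100, 92, 75, 58, 31]     # hypothetical viability measurements
    print("IC50 ~ %.0f uM" % ic50(doses, viab))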
Western blot analysis
AGS cell lysates were obtained and equal amounts of protein from each sample were diluted with loading buffer, denatured, and separated by 10% sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE), followed by protein transfer to polyvinylidene fluoride (PVDF) membranes. The effect of the two spice principles on the expression of certain cell cycle proteins such as p53, p21 and PCNA, and on apoptotic proteins such as the Bax/Bcl-2 ratio, caspase-3 and caspase-8, was determined. Proteins were detected by incubation with the corresponding primary antibodies (anti-p53, anti-p21, anti-PCNA, anti-Bax, anti-Bcl-2, anti-caspase-3 and anti-caspase-8), followed by blotting with an HRP-conjugated secondary antibody. The blots were then detected using a chemiluminescent kit from Thermofisher. This analysis was performed three times.
Co-immunoprecipitation
For the determination of the direct interaction between cyt c and procaspase-9, the co-immunoprecipitation technique was applied. For this purpose AGS cells were harvested and lysed in IP buffer (50 mM Hepes, pH 7.6, 200 mM NaCl, 1 mM EDTA, 0.5% Nonidet P-40) containing protease inhibitor (10 μg/mL each of benzamidine, trypsin inhibitor and bacitracin; 5 μg/mL each of leupeptin, pepstatin A and PMSF) and phosphatase inhibitor (5 μM each of o-phosphoserine, o-phosphotyrosine, o-phosphothreonine, β-glycerophosphate, p-nitrophenylphosphate and sodium vanadate) cocktails. The lysate (200 μg protein) was incubated for 4 h under rocking conditions at 4°C with 2 μg of anti-cyt c antibody, and the immune complex was then incubated with protein A-Sepharose beads. The immunoprecipitate was centrifuged at 12,000 rpm for 2 min at 4°C and the pellet was washed with ice-cold PBS containing phosphatase inhibitor. The immunopurified protein was then used to detect the presence of the associated protein, procaspase-9, by Western blot analysis using a specific antibody against the protein of interest as described above.
Statistical analysis
The experiments were repeated three times and the data were analyzed statistically. Values are shown as mean ± standard error of the mean, except where otherwise indicated. One-way ANOVA was used to evaluate statistical differences. Statistical significance was considered at P < 0.05 or P < 0.01.
Selection of a sub-lethal concentration of capsaicin and eugenol
The Trypan blue exclusion test and MTT assay revealed that more than 40% cell killing was observed for capsaicin and eugenol at and above concentrations of 250 μM and 1 mM, respectively. Hemolytic data also suggested that capsaicin above 250 μM and eugenol above 1 mM are very toxic. Hence, we selected a concentration of 200 μM for capsaicin and 0.7 mM for eugenol for our experiments (Figure 1). In order to analyze the effect of the spice principles on AGS cells, flow-cytometric analysis of the cell cycle phase distribution was performed (Figure 2).
Modulation of cell cycle phase distribution of AGS cells
Treatment with capsaicin resulted in a significant increase in the sub-G0 region (hypoploidy population); Figure 2 shows representative data of the various experiments. As compared to 5.33% ± 0.058% cells in the hypoploidy region of the AGS control group, capsaicin treatment recorded 19.86% ± 0.78% cells. Eugenol treatment at 0.7 mM also caused an increase in the hypoploidy peak (7.92% ± 0.06%), although to a much lesser extent than capsaicin. However, the synthetic phase, or S-phase, showed a distinctive depression in the eugenol-treated group as compared to the untreated counterparts.
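The group comparisons reported in this and the following sections rest on the one-way ANOVA described under "Statistical analysis" above. A minimal sketch with SciPy is shown below; the triplicate values are hypothetical stand-ins, not the study's data.

    from scipy.stats import f_oneway

    control = [5.2, 5.4, 5.3]        # hypothetical triplicate readouts
    eugenol = [8.3, 8.6, 8.5]
    capsaicin = [23.7, 24.1, 23.9]

    f_stat, p_value = f_oneway(control, eugenol, capsaicin)
    print("F = %.2f, p = %.3g" % (f_stat, p_value))
    if p_value < 0.05:
        print("difference among group means is statistically significant")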
Effect of capsaicin and eugenol on AGS apoptosis
In order to confirm the increase in apoptotic induction by the phytochemicals, we performed the annexin V/PI assay in all the groups by staining the AGS cells with FITC-tagged annexin V and PI and measuring the fluorescence intensity in a flow cytometer. The results suggest a significant increase in the percentage of annexin-positive cells from 5.18% ± 0.04% in the AGS control group to 8.48% ± 0.19% and 23.9% ± 0.2% in the eugenol and capsaicin treated groups, respectively. The percentage of PI-positive cells was also increased significantly by both phytochemicals (9.03% by eugenol and 17.15% by capsaicin) (Figure 3).
Validating the intrinsic pathway of apoptosis
The increase in the percentage of apoptotic cells in the capsaicin-treated group was further confirmed by co-immunoprecipitating caspase-9 with cyt c, two proteins which participate in apoptosome formation. Capsaicin as well as eugenol treatment increased the interaction of the two proteins as compared to the AGS control group (Figure 4).
Western blot analysis of protein expressions
Capsaicin and eugenol treatment was found to substantially increase the expression of pro-apoptotic proteins such as caspase-3, Bax and caspase-8, with a simultaneous decrease in Bcl-2 and PCNA expression, as compared to AGS control cells (Figure 4). The extent of the decrease in PCNA expression was much greater in the eugenol-treated group than with capsaicin treatment, which corroborates the decrease in the S-phase cell population mentioned above. Interestingly, the phytochemicals differed remarkably in their ability to induce the p53 and p21 proteins. Whereas there was a significant induction of these proteins by capsaicin, eugenol treatment did not cause any significant change in the expression of p53 and p21 as compared to AGS control.
p53 is essential for induction of apoptosis by capsaicin but not for eugenol
Silencing p53 caused a significant decrease in the ability of capsaicin to induce apoptosis in AGS. As shown in the Figure 2 lower panel, the percentage of hypoploid cells upon capsaicin treatment was brought down to 8.62% ± 0.32% in p53-/- AGS as compared to 19.86% ± 0.78% in AGS with wild type p53 expression. The annexin V/PI assay also reflected a similar trend, with a significant reduction in the percentage of annexin-positive cells upon capsaicin treatment in p53-/- AGS cells. However, p53 knock down did not influence the apoptotic activity of eugenol in AGS, as evident from both the hypoploidy peak and the percentage of annexin-positive cells induced by eugenol treatment (Figure 3). Quite interestingly, silencing p53 significantly affected the capacity of eugenol to reduce the percentage of the S-phase cell population.
Study of protein expressions in p53-/- AGS cells treated with spice principles
The significant elevation in the expression of pro-apoptotic proteins such as caspase-3, caspase-8 and Bax, and the decrease in that of the anti-apoptotic protein Bcl-2 and the proliferative marker PCNA, induced by capsaicin in AGS with wild type p53, was completely offset by silencing p53 in AGS (Figure 5), suggesting a decreased ability of capsaicin to induce apoptosis in the absence of p53. In eugenol-treated cells, silencing p53 did not cause much alteration in the expression of either caspase-3 or caspase-8 as compared to eugenol-treated cells with wild type p53. Eugenol, however, was unable to modulate the Bax/Bcl-2 ratio and the cyt c-caspase-9 association in p53-/- cells. Moreover, the extent of reduction in PCNA expression was much less in eugenol-treated p53-/- cells, suggesting that eugenol modifies the expression of PCNA in a p53-dependent manner.
Discussion
p53 mutation is a common event in human cancers which causes defects in apoptosis and makes cancer cells resistant to chemotherapeutic agents (Do et al., 2012). Alterations in p53 occur in gastric carcinoma and increase in frequency during the course of gastric carcinoma development (Fenoglio-Preiser et al., 2003; Busuttil et al., 2014). The chemosensitivity of gastric cancer towards chemotherapy has been shown to be abrogated in the absence of p53 (Osaki et al., 1997). Therefore, identification of agents that can kill cancer cells with mutated/deleted p53 is a promising anticancer strategy.
In the present study we showed that capsaicin induces apoptosis in AGS cells through upregulation of p53 and that the apoptotic activity of capsaicin is p53-dependent. In contrast, eugenol was found to induce apoptosis independent of p53. Moreover, eugenol was also found to be a potent inhibitor of cellular proliferation, as evident from a significant decrease in the population of S-phase cells and a corresponding decrease in PCNA expression upon eugenol treatment. This antiproliferative activity of eugenol was, however, dependent on p53.
Mechanistically, we found that the ability of capsaicin to induce the expression of pro-apoptotic proteins such as Bax, caspase-3 and caspase-8 was almost completely obliterated by knocking down p53. Moreover, the cyt c-caspase-9 association, which is considered to be important in caspase-9-mediated apoptosis, did not change significantly in capsaicin-treated p53-/- AGS as compared to untreated controls. Similarly, eugenol too could not modulate the levels of either the Bax/Bcl-2 ratio or the cyt c-caspase-9 interaction in p53-/- AGS cells, suggesting inhibition of the intrinsic pathway of apoptosis in the absence of p53. Interestingly, however, eugenol was able to enhance the expression of both caspase-8 and caspase-3 even in the absence of p53. As it is known that apoptosis can be induced by the extrinsic pathway via sequential activation of caspase-8 and caspase-3 (Kim et al., 2007), it can be hypothesized that while the intrinsic pathway of apoptotic induction by eugenol is dependent on p53, the extrinsic pathway is not. In the case of capsaicin, on the other hand, both the intrinsic and extrinsic pathways were dependent on the cellular p53 status. In line with our study, Al-Sharif et al. (2013) had previously demonstrated the p53-independent mechanism of apoptosis induction by eugenol in breast cancer cells. The dependence of capsaicin on p53 for the induction of apoptosis has also been reported in an earlier study (Jin et al., 2014).
This study also showed eugenol to be a potent inhibitor of cell proliferation, and the antiproliferative activity of eugenol was compromised in the absence of p53. p53 is known to directly control DNA replication and repair by modulating the levels of PCNA, and some studies have found that higher levels of p53 repress the PCNA promoter (Xu and Morris, 1999). The decrease in the percentage of S-phase cells in eugenol-treated AGS was matched by a similar reduction in PCNA expression. However, in the absence of p53, the repression of the PCNA promoter could not be effected by eugenol, leading to obliteration of its antiproliferative effect. Capsaicin, on the other hand, was not found to exert any anti-proliferative effect on AGS cells either in the presence or absence of p53.
In conclusion, we demonstrated that both eugenol and capsaicin induced apoptosis in AGS by the intrinsic as well as extrinsic apoptotic pathways. Between the two phytochemicals, capsaicin caused a more potent apoptotic induction than eugenol in AGS cells in the presence of p53. In the absence of p53, however, eugenol was a better apoptotic agent than capsaicin because of its ability to induce the extrinsic pathway of apoptosis in a p53-independent manner. Loss of function of the p53 tumor suppressor gene is implicated in the defective apoptotic response of tumors to chemotherapy. Thus, agents which can induce apoptosis irrespective of the cellular p53 status have immense scope for development as potential anticancer agents. Therefore, eugenol warrants further investigation for its potential use as an anticancer agent against p53-defective or null tumors with poor prognosis.
Figure 1. Effect of Treatment with Capsaicin and Eugenol on AGS Cell Viability. Percentage hemolytic activity of capsaicin (A) and eugenol (D) on human red blood cells. Cell count at various concentrations of capsaicin (B) and eugenol (E) using the Trypan Blue exclusion method. Percentage cell viability at various concentrations of capsaicin (C) and eugenol (F) measured by MTT assay.
Figure 2. Cell Cycle Analysis of Treated and Untreated AGS Cells by Flow Cytometer using Propidium Iodide (PI) as DNA-binding Fluorochrome. A) Histogram display of DNA content (x-axis, PI-fluorescence) versus counts (y-axis). The upper and lower panels display the cell cycle phase distribution of treated and untreated AGS cells with wild type p53 and silenced p53, respectively. B) Bar diagram representation of the cell cycle phase distribution of AGS (wild type and p53 knock-out cells) from the different experimental groups. The data are represented as mean + S.D. for three different experiments performed in triplicate.
Figure 4. Western Blot detection of Pro-apoptotic and Proliferative Proteins in Treated and Untreated AGS with wild type p53. Detection of p53, p21, Bcl-2, Bax (upper panel), caspase-3, caspase-8 and PCNA (lower panel) in vehicle control, capsaicin and eugenol treated AGS. Co-IP of pro-caspase-9 with cyt c suggesting formation of the apoptosome. Equal loading of protein in the lanes was confirmed by GAPDH. The indicated proteins are represented as bar diagrams of mean + SD of their relative densities as measured from three independent experiments. Each test was performed 3 times and the images presented are typical of 3 independent experiments. The data are presented as mean ± SD.
In the Pursuit of an Identity: Analysing the Case of Male Health Care Providers
Being a female-concentrated job, nursing has forgotten the place of men within the profession despite their contribution since time immemorial. The heightened efforts of Florence Nightingale to transform nursing into a respectable female occupation denied men the opportunity to enter this domain. Despite their growing representation, they are still a minority in nursing in countries across the globe. When occupational roles do not conform to the gender-appropriate roles prescribed by society, the 'male' nurses' prestige and self-esteem are at risk, since others recognize them neither as true nurses nor as real men. Drawing majorly from secondary sources and data gathered from an anthropological study of in-home care providers in the South Indian state of Kerala, this paper on the predicament of men in nursing throws light on the 'spoiled identity' they carry; the work stress, gender stereotyping, stigma and discrimination they encounter by always being suspected and having their very identity and sexual orientation questioned. A note on the strategies employed by them to overcome the problems is also within the purview of this paper.
As a job performed mainly by women, nursing has forgotten the place of men within the profession despite their contribution since time immemorial. Florence Nightingale's intense efforts to transform nursing into a respectable female occupation denied men the opportunity to enter this field. Although the number of men entering the profession has increased over the years, they still remain a minority in the field of nursing in countries across the world. When occupational roles do not conform to the gender-appropriate roles prescribed by society, the prestige and self-esteem of these male nurses are at risk, since others recognize them neither as true nurses nor as real men. Drawing mainly on secondary sources and data gathered from an anthropological study of home care providers in the state of Kerala, in southern India, this paper on the situation of men in the field of nursing sheds light on the 'spoiled identity' they carry with them; the work stress, gender stereotypes, stigma and discrimination they face by always being under suspicion and having their identity and sexual orientation questioned. A note on the strategies they employ to overcome these problems also falls within the scope of this paper.
Keywords: nursing, profession, nurses, care provider, identity
Nursing, perceived to be a profession meant for women then and now, is a field that largely retains the characteristic of female domination across the globe. Any discussion on the profession of nursing would remain incomplete without a mention of the efforts of Florence Nightingale, whose brainchild was to transform nursing into a respectable female occupation. When feminine qualities found parallels with the attributes of a nurse, doors to enter the profession were closed for men. Women, who are naturally considered as caregivers, became the suitable members to perform the role of nurses, thereby rendering nursing the tag of a more or less single-sex occupation.
Landivar (2013) identified an ascent in the number of males in the US registered nurse force since the 1970s, yet they remained a modest 11 percent in 2011 (as cited in Cottingham, 2019, p.198). Purnell (2007) demonstrated a worldwide perspective in which men comprise less than 10% of the nursing workforce in China, Denmark, Finland, Hungary, Australia, Mexico, and New Zealand. Few men seemed to become nurses in Pakistan and Arab Middle-Eastern countries. Italy, Spain, and Portugal had about 20% of men as nurses, while England and Israel had slightly fewer men in the profession. Male nurses in the Czech Republic and Francophone Africa displayed greater numbers, and men outnumbered women in the latter. Bernabeu-Mestre, Carrillo-García, Galiana-Sánchez, García-Paramio, & Trescastro-López (2013) find that "women represent more than 80% of nursing professionals in Spain" (p.288). Ayala, Holmqvist, Messing, & Browne (2014) report that only 6-10 percent of nurses in Chile are men, according to the 2013 statistics available (p.1481). Aspiazu (2017) states that "in 2016, 85% of nurses in Argentina were women" (p.11). These numbers do not lie.
However, contrary to what is imagined, Mackintosh (1997) claims that "men's contribution has been perceived as negligible due to [the impact] the 19th-century female nursing movement has had on the occupation's historical ideology" (p.232). The historical backdrop of nursing has more or less entirely neglected the role of men despite their involvement as caregivers in asylums, the army, etc. Many scholars have similar findings:
Men have played a dominant role in organized nursing dating back to 330 A.D. in the Byzantine Empire. During this era, hospitals were one of the major institutions where nursing emerged as a separate occupation, primarily for men (Bullough, 1994). Moreover, military, religious, and lay orders of men known as nurses have a long history of caring for the sick and injured during the Crusades in the 11th century (MacPhail, 1996). In the United States, men served as nurses during the Civil War. John Simon, the lesser-known rival of Florence Nightingale, was the founder of an experimental field hospital in Germany during the Franco-Prussian War (1870-1871). Male nurses were hired to staff the hospital, and mortality rates among the troops were kept abnormally low (Halloran & Welton, 1994). (Meadus, 2000, p.6)
These works attempted to highlight that nursing is not a field alien to men, though women have established a kind of monopoly over it in the last few centuries. In the mid-20th century, nursing witnessed a drastic change in the gender composition of its workforce. The research by Ramacciotti and Valobra (2017) and Ramacciotti (2020) reveals that the history of nursing in Argentina saw a state-sponsored feminization of the profession. The efforts of the Eva Perón Foundation in this direction are mentioned by the authors. They point out that "the nursing schools reinforced this discourse that accentuated the feminization process and made the role of males invisible" (Ramacciotti & Valobra, 2017, p.382). Though male entry was restricted by several nursing schools, this was not uniform. The authors provided instances of nursing schools that did not discriminate against men; the Red Cross School of Nurses in Santa Fe and the Ministry of Health of the Province of Buenos Aires' school for paratroopers were among them.
"The 1944 San Juan earthquake in Argentina is also considered a landmark event that attracted many women to join nursing" (Ramacciotti, 2020, p.48). Ramacciotti does not forget to mention how female pioneers in healthcare in the country (who were influenced by the system created by Florence Nightingale) found nursing an ideal occupation for women. "Cecilia Grierson's efforts to professionalize nursing seem to have resulted in its feminization" (Ramacciotti, 2020, p.62). Studies also demonstrate the association of gender and caregiving, and the tendency to replicate the sexual division of labor in the private sphere into the public sphere. Gill (2018) identifies this "gendered construction of the occupational realms" in the Indian context (p.44). Ramacciotti (2020) also shares a similar view when she says "women occupied jobs in which they 314 Ajith -In the Pursuit of an Identity displayed that supposed feminine nature" since they were efficient in those tasks (p.36). It is also true that nursing as a profession does not enjoy a great status; nurses' subservient position compared to doctors is particularly remarkable. Nurses were considered as lacking in skill and there had been a dishonour attached to nursing prevalent in the country. Nurses have many times been questioned of their character since they involve in intimate body-care, interact with men, work during night hours, and deal with polluting substances (Abraham 2004;French, Watters & Matthews, 1994;Hollup, 2014;Nair & Healey 2008;Percot 2006;Varghese & Rajan 2011). Authors like Etzioni (1969) have classified nursing in the category of semi-professions, along with teaching and social work. Yet, nursing had not received due respect until recently in a developing country like India (Gill, 2018). A transformation in this scenario began to occur due to globalization and the increased demand for nurses overseas. Indian nurses who have been migrating internationally witnessed respect for their profession and improved working conditions which paved way for their economic security (George 2005;Nair & Healey, 2008;Percot 2006). Even though the major motive behind migrating was financial independence, overseas migration not only helped the nurses fetch themselves enhanced earnings but also became the reason for many to select nursing as a vocation back in their homeland. More nurses became sought after in the marriage market and a preference for nurses working abroad was observed (Percot, 2006a). The better prospects that migration offered were able to attract a lot of men to enter the field and various studies reported an upsurge in nursing becoming a career alternative for men in the last few decades (Buerhaus, Staiger, & Auerbach 2004;Hodges et al., 2017;Trossman, 2003;Walton-Roberts, 2010;Zamanzadeh et al., 2013). However, they are still negligible and a nation-wide analysis of the nursing institutions in many countries would testify this. Predominantly, female occupations are thought to be less professional compared to male-dominated ones and various scholars adhere to the fact that gender acts as an obstacle to professional advancement. Gilbert and Rossman (1992) opine that this happens since "women are viewed from a gender perspective as less able to take on leadership roles" (p.234). The situations of 315 . men who go into female-concentrated professions have to be looked at from various angles. This paper majorly focuses on the difficulties, as well as benefits the 'male' nurses experience due to being in this field. 
The gender bias and discrimination that 'male' nurses are subjected to for doing 'women's work' have also been highlighted.

The Two Conjectures About Male Nurses and The Issue of Identity

Certain professions are considered appropriate for women, the oft-cited ones among these being teaching and nursing. Women, meanwhile, have not limited themselves to the realms thought to be apt for them. They have been entering previously male-dominated professions and have become doctors, lawyers, and businesswomen. Though their journeys were hard, many scholars opine that women received much more support and encouragement compared to men who enter female-concentrated jobs like nursing. The belief that nursing is an arena meant for women remained even after men re-entered the vocation, thereby leading Mackintosh (1997) to produce the assumptions "that the introduction of male nurses was an attempt in some way to violate the respectability of the occupation and male nurses could therefore not be "real" men since men were not naturally capable of performing caring activities" (p.235). These two conjectures have to be analyzed critically since these statements threaten the identity of male nurses in particular and men in general. That a profession is seen as naturally suitable for a particular gender does not mean that the entry of another gender into it would make it less respectable.

Men's Entry: Boon or Bane?

The world has been witnessing the interplay between gender and the division of labor, due to which occupational sex-typing prevails. Elliott (2016) suggests that "the central features of caring masculinities are their rejection of domination and their integration of values of care, such as positive emotion, interdependence, and relationality into masculine identities" (p.241). In a system where men are favoured, those occupations labelled traditionally male have had an upper hand. Evans (1997) proposes that "in female-dominated occupations such as nursing, patriarchal gender relations function and due to this male nurses tend towards powerful and authoritarian positions" (p.226). A few authors have found that nursing has been a profitable game for men who chose it as a career. Lupton (2006) identifies three main phenomena and writes that:

Firstly, men progress more quickly than women to senior positions - riding the 'glass escalator' (Williams, 1995). Secondly, men may be channelled into particular specialities in occupations that are regarded (by themselves and by others) as more appropriate to their gender, and that often carry greater rewards and prestige - which may be both a cause and a consequence of their gender associations. The third advantage relates to remuneration. Williams (1995) and England and Herbert (1993) have shown that men are paid more than women in female-concentrated occupations. (pp.105-106)

Though men have had these obvious advantages concerning remuneration and placement in elite/superior positions, many difficulties are involved in the process. Heikes (1991) puts forth the idea that:

Male nurses, like other tokens, experience stress at work due to their token status and tend to experience conflict in the work setting because of incompatible gender socialization. The very traits which help males in other occupations - ambition, assertiveness, and strength - create stress and friction for the male which may hinder them occupationally.
In addition to work-related stressors, they must cope daily with the consequences of being a man doing "women's work". (p.398)

Cockburn (1988, p. 40) has suggested that "the handful of men who cross into traditional female areas of work at the female level will be written off as effeminate, tolerated as eccentrics or failures" (as cited in Lupton, 2006, p.106). Simpson (2004) studied masculinity at work and noted that "emotional labour such as teaching, nursing, and social work may call for special abilities that only women are deemed to possess (Hochschild, 1983)". This can "invite challenges to men's sexuality and masculinity if they adopt a more feminine approach" (Simpson, 2004, p.7). Another well-documented theme is that of role strain 1. Stott (2004) identifies that males tended to choose "less intimate" specialization areas such as administration, anesthetics, and psychiatric nursing to cope with role strain in a female-dominated profession (p.92). She also comments that "the male nurses perceived themselves as being the victims of discrimination" (p.92). "Issues such as role strain, minority status, and stereotypical attitudes are perceived as being central to the unique conflicts facing men in nursing" (Stott, 2004, p.95).

Do Men Get Support to Enter Female-Concentrated Professions?

According to Egeland and Brown (1988), "males appear to encounter more negative criticism from the public on entering female-identified occupations. For example, they are "held suspect" and penalized for role violation" (as cited in Meadus, 2000, p.6). These men encounter a lot of hardships and are subjected to criticism and ridicule since they challenge the conventional image that nursing has (Villeneuve, 1994; Williams, 1992). Meadus (2000) notes that "another commonly held stereotype concerning men who choose nursing as a career is that they are effeminate or gay (Boughn, 1994; Gray et al., 1996; Williams, 1992; Williams, 1995)" (p.8). "The stigma associated with homosexuality exposes male nurses to homophobia in the workplace" (Harding, 2007, p.636). In the opinion of Mangan (1994), "the labelling of male nurses as effeminate or homosexual can be interpreted as a social control mechanism that redefines nursing as a woman's work" (as cited in Meadus, 2000, p.9). Segal (1962) elucidates that "male nurses are suspect because they enter a traditionally female occupation. They are involved in a status contradiction between characteristics ascribed to men in our society and characteristics that are supposed to inhere among members of the nursing profession" (p.37). Since the perception that female-oriented jobs are easy to perform exists, many a time male nurses are regarded as lacking the skills to perform male-oriented professions. They are also categorized as lazy by some since they choose female-dominated jobs that are seen to demand less effort and physical strength. Citing Mackintosh (1997), Anthony (2004) asserts that "when the nursing profession perpetuates the feminine stereotype and uncritically uses a feminine language and value system, individuals who do not fit that stereotype are at risk for being oppressed and silenced by those who do fit the image" (p.123). It is not just in the workplace that male nurses encounter these difficulties and suffering.
During their education as well, male students are likely to suffer from issues and barriers such as "the feminine paradigm in nursing education, a lack of role models and isolation, gender-biased language, differential treatment, different styles of communication, and issues of touch and caring" (O'Lynn, 2007, p.174). It has been accentuated by Evans (1997) that male nurses employ "strategies that allow them to distance themselves from female colleagues and the quintessential feminine image of nursing itself, as a prerequisite to elevating their prestige and power" (p.226). The gender-specific specializations whereby male nurses are involved in selecting the more 'masculine' elements have been analyzed already. Such specialties include psychiatry, anaesthesiology, emergency care, etc.

Significance of the Study

As far as the Indian scenario is concerned, the literature suggests that the country has produced umpteen nurses and has been a supplier of nursing care services worldwide. The greater part of these studies on nursing has had hospital-based nursing as the focus. Only a handful of studies have looked at home-based nursing care, as research conducted in India has mainly concentrated on nursing homes, community nursing, and palliative care. Indian nurses have migrated internationally, and a lion's share of them have been nurses from Kerala, "the leading Indian state for the training and 'export' of nurses for the international market" (Walton-Roberts & Rajan, 2013, Introduction, para 3). 'Home nursing' (home-based nursing) has become a buzzword in the arena of nursing care in Kerala over the last few decades. The care providers stay with the patients in the latter's homes and deliver their caregiving services. Though some studies, like the one by George and Bhatti (2019), have focused on male nurses who work in hospitals in Kerala, there has been no scholarly effort to look into the cases of men who work as in-home care providers. The need for in-depth studies on home-based care and live-in care providers prompted the researcher to undertake the study with the aid of an anthropological perspective and methodology.

Methodology

The data for the study were collected from May 2018 to July 2019 from three categories of people who formed part of the study, viz. the care providers, the owners of nursing agencies who supply the care providers, and the customers who avail the services to take care of patients. The areas chosen for the study were districts in southern Kerala where the home-based care industry is more prevalent: Thiruvananthapuram, Kollam, Pathanamthitta, Alappuzha, Kottayam, and Ernakulam. In-depth interviews of the respondents were conducted with the aid of a semi-structured schedule. The face-to-face interviews lasted 55 minutes on average. The nursing agencies with which the participants were associated served as the sampling frame. Since the number of men working for an agency is small, the snowball technique had to be applied to reach a desirable sample size (its logic is sketched below). The data presented in this paper consider the cases of 20 male care providers who were part of a study that had a total sample of 150 care providers. The interviews were conducted in the regional language (Malayalam) and were not tape-recorded. The anonymity of the respondents was assured and their informed consent sought. The interview schedule, initially framed in English, was later translated into Malayalam for convenience.
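The snowball recruitment mentioned above amounts to a referral-chain procedure: seed respondents drawn from agency rosters refer further male caregivers, who are recruited in turn until the target sample size is reached. The sketch below reduces that logic to code; the names and referral structure are entirely hypothetical and are not drawn from the study.

```python
# Illustrative sketch only: the referral-chain ("snowball") recruitment
# described above, reduced to its logic. Seeds come from agency rosters;
# each respondent may refer further male caregivers until the target sample
# size is reached. Names and referrals here are entirely hypothetical.

from collections import deque

def snowball_sample(seeds, referrals, target_n):
    """Breadth-first recruitment through referral chains."""
    recruited, queue, seen = [], deque(seeds), set(seeds)
    while queue and len(recruited) < target_n:
        person = queue.popleft()
        recruited.append(person)
        for referred in referrals.get(person, []):
            if referred not in seen:  # avoid approaching anyone twice
                seen.add(referred)
                queue.append(referred)
    return recruited

referrals = {"A": ["B", "C"], "B": ["D"], "C": ["E", "F"]}
print(snowball_sample(seeds=["A"], referrals=referrals, target_n=5))
# ['A', 'B', 'C', 'D', 'E']
```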
The respondents were asked questions about their reasons and motivations for taking up the job, their prior work experiences, their relationship with the patients, and the pros and cons of this non-traditional occupational choice. They were also asked to briefly narrate their experiences in the field.

Previous Employment and Prior Work Experiences

The twenty men were aged between 32 and 70 (M = 48.05 years, SD = 11.44). Their experience in the job ranged from a few years to about two decades (M = 7.7 years, SD = 4.76). Though only four of the men held academic nursing qualifications, five others had prior experience as hospital staff, such as nursing attendants and aides. Low pay and difficult working conditions were the two main reasons why these people left hospital-based work. Fifty-five percent of the sample had taken nursing training from the private nursing agencies that they are associated with. The nursing-trained participants also include five people who were trained under the Indian Red Cross Society and thus guided by hospital nurses. Most of the respondents perform healthcare tasks like insulin administration, monitoring blood pressure, catheter care, nasogastric tube feeding, etc.

Reasons for Becoming a Male Caregiver

An inquiry into the rationale for taking up the job revealed that eight men were influenced by their relatives and friends. The wives of four of these eight men also worked as in-home care providers and were the ones who motivated their spouses to take up the job. While four participants opined that they found the job satisfying and rewarding, five expressed a desire to help people since they considered serving the sick a divine activity. Nine participants mentioned that this job paid them better than their previous occupations. For them, better pay was a motivating factor in addition to their interest in taking care of patients. However, it emerged that this was never a first-choice occupation for any of the study participants. Thomas recalls, 'My wife has been working in the field since 2007 and she persuaded me to join the same nursing agency she was working for. After I got in, I realized that I love to care for the elderly. I am not in this job just to earn money; I want to make a change in the lives of people. My previous job used to pay me better but here I find myself more satisfied when I serve the ailing and needy. I think I should thank my wife for leading me to this job'. John discerns, 'It needs a lot of patience and willpower to be in the profession of nursing. It is never an easy job for men to do this since we have not done the caring work at our homes. But one thing for sure is that caring for the sick is a divine activity. My parents are happy about the good work I am doing by helping those in need. It was my elder sister (a hospital nurse) who, from my school days, has inspired me by telling the stories of nurses who did the caring when doctors did the curing. But nursing was not my first choice of occupation. When I wanted a change from my previous job, I thought why not give home-based care a try. And here I am'.

Perceptions About Nursing and Caregivers

Out of the 20 participants, six either considered caregiving a women's job or thought it is perceived so by society. The rest voiced the idea that they were comfortable working in a female-dominated sector and that men are also equally capable of providing care to the ailing.
Some respondents believe that as long as they look after male patients alone, there is nothing to feel ashamed about. Nevertheless, only a quarter of the participants have revealed their occupational identity to all of their relatives and friends. While 12 men made their family members aware of their job, three kept it hidden even from their close kin. Of the 17 who had informed their family members about the job, six said that their family members were very supportive. The rest pointed out that their families had mixed opinions about them doing the job. A major reason why men hid their work identity was to avoid disclosing their involvement in home-based work. They were afraid of the humiliation they might have to suffer if their relatives and friends knew that they were doing 'women's work'. It is this fear that led 14 of the participants to pass themselves off as hotel cooks, shopkeepers, security personnel, and teachers. Most of them worked in other districts and did not like to work near their hometowns. This confirms that many men often do not get enough social support to perform their roles as care providers. When they were criticized and ridiculed for their choice of job, some care providers sought solace in the words of their male co-workers and relatives who performed similar work. Michael states, 'I am not alone, my brother-in-law also works as a home-based caregiver. I also have a few friends who do the same job. I used to be a nurse at a private hospital. I left the job since I had difficulties with the work conditions. Later, I went abroad in search of a better life but I had to return to Kerala due to personal reasons. After a couple of years, I took up this job. Even if others believe that this is a small job, for me it is a big deal. It is this job that fetches my bread and butter now and I am well pleased with it'. Harry, who dropped his job as a hospital nurse, has a comparable opinion and says, 'my uncle and friends who are home nurses have been my support system. Even when my wife's family asked me to quit the job and find another one, they stood by me and supported my decision to continue in the job'. Some men in the study had to choose the job at the cost of losing their prestige. In such cases, the desire to help people and work satisfaction were the pull factors (besides better pay than the previous job).

Problems Facing Male Caregivers

Five men recalled instances of physical or mental harassment they had experienced from the patient or the patient's relatives. It was mostly the patients who physically assaulted their care providers out of anger and frustration. But the caregivers understand the helpless conditions of the patients and adjust to them. Three participants pointed out how the labels of 'gay men'/'homosexual' haunt the men in the caregiving profession, and one respondent recollected an incident where he was approached by a patient's relative after being mistaken for a gay man. The participants were asked if their gender restricted them from performing their roles as care providers. Though some of them believe that the provision of care is women's work and that the work performed by male care providers is thus stigmatized, eight men stated that it does not matter since they do not take care of patients of the opposite sex. A few reiterated that the job still lacks respect in our society. However, none of the respondents opined
that they find any aspect of their work polluting, whether it be bathing or toileting the patient. It is also worth noting that none of them expressed dissatisfaction with the job, though they have various inconsequential difficulties due to staying away from their families. There still prevails a stigma around nursing, and the study participants reveal that it is worse when it comes to home-based nursing. Men who perform non-traditional occupations can have a sense of shame when they do not live up to social expectations. Adam comments, 'People think that we are unqualified and good for nothing since we are in-home care providers. This makes me sad because many people like me are trained in healthcare tasks. I have not disclosed my occupational identity to my relatives since they believe it is unmanly to involve in home-based care work. They will tell you that men are meant to work outside homes and not inside.' John also has a similar opinion on the societal perception of home-based caregivers. He says, 'Some people have asked me why I go for a job where I have to clean people's dirt and fecal matter. I was told that it's better to work in a tea shop or a hotel'. John hypothesizes that even if nurses are academically qualified personnel, they are equated to servants or cleaning staff. He laments that society doesn't recognize the good work caregivers are doing. Jacob mentions that 'Not just women, but men in this field are also prone to harassment and moral policing. We men are considered homosexuals. People call us 'Chanthupottu' (a term which became popular in Kerala after a Malayalam movie of the same name and is used by some people in the state to refer to homosexual or effeminate men). Even now, caregiving does not have a good reputation in our society'. George finds that 'Men in this job find it difficult to get marriage alliances. A male home nurse is often portrayed as a gay man'. He also adds that 'Our society perceives that being homebound is not something meant for men and it is due to this reason that most of us do not reveal our job identities. I'd rather work as a hospital nurse; I think that has better acceptance in our society.' When their family members and relatives did not take proper care of elderly patients (due to various reasons), paid care providers came to their rescue. They have been a tower of strength for the sick elderly as they sustain the lives of the patients by providing emotional support in addition to physical tending. Francis describes, 'It feels good to be in this job since there is a homely atmosphere in the patient's house. I take care of an 87-year-old man who broke his leg recently. I don't know how long he will survive. I consider him like my grandfather. His children are busy with work and they do not spend enough time with him. He enjoys my company and I often crack jokes to make him laugh. I feel happy when I see him smile'. Vijay is of the view that one has to be a good listener when involved in elderly patient care. He says, 'Many times the adult children do not have time to listen to what their elderly parents have to say. Then, it is the care providers who have to lend their ears to the stories of elderly patients. This will help develop bonding and attachment between the two and might also help improve the health condition of the patient'.
Strategies Employed to Sustain in the Field

In India, even when they enter female-dominated occupations, the majority of men try to make sure that they conform to mainstream/traditional masculine values. This is in contrast to the study undertaken by Bodoque-Puerta et al. (2019) in Spain, which found that male social caregivers "distance themselves from the values of traditional masculinity to construct an alternative masculinity" (p.220). It was observed that the male care providers consider the knowledge to perform physiotherapy as something that gives them an upper hand compared to their female counterparts. The study participants consider being able to perform physiotherapy a desirable qualification for male care providers. Twelve out of the 20 participants have done physiotherapy courses, and 5 men have learned the basics of the same through experience in the field over the years. This aspect is akin to male nurses concentrating on specializations like psychiatric nursing, which appear to help them display their masculine qualities. Most of the participants also mentioned that they do not involve themselves in performing any household tasks at the patient's home unless they work for elderly patients who do not have any co-resident relatives. This is in contrast to the female care providers, who often help with the household chores though they are employed for the patient's healthcare. The respondents highlighted the fact that they see to it that they maintain 'clean habits' like abstaining from alcohol consumption and smoking, since these are considered pre-requisites to enter the field and remain in the job.

Does a Glass Escalator Exist?

A feature of the 'glass escalator' (Williams, 1992) that is observed in this study is the gender-wage gap. Female care providers formed the majority of the larger study. Compared to them, the male care providers earned about 2000 rupees more. The average monthly salary of the men is around 16000 rupees (~225 USD) and thus approximately 15 percent more than that of women (the arithmetic is sketched below). According to the nursing agency owners, this wage gap exists because taking care of male patients is riskier and more difficult. When asked about their opinion on the wage gap and why they thought men out-earned women, half of the respondents felt that since it happens in other jobs, the same is replicated in this job too. The other half believed that they are paid more as it is tougher to take care of male patients, and therefore, to retain male care providers in the job, they should have better pay than their female co-workers. In any case, most of the respondents maintained that their work is harder than that of their female colleagues. It could also be noticed that when choosing an employee, clients were particular about habits such as smoking and alcohol consumption among male care providers. They do not wish to employ men who could be a threat to the safety of the female family members. John remarks, 'I do not consume alcohol or smoke cigars. I have always behaved well with everyone I have worked for. I think teetotallers are better liked by patients' families. Though this is the case, bachelors like me find it difficult to make it into this field because we are held suspect for various reasons'. Unlike hospital-based nurses, in-home care providers do not have any benefits or job promotions/advancements in their positions.
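Taking the round figures above at face value (around 16,000 rupees for men and a gap of about 2,000 rupees), the implied premium can be checked directly. The sketch below uses these approximate reported values rather than the study's raw payroll data, and puts the premium at roughly 14-15 percent:

```python
# Illustrative sketch only, using the approximate figures reported above
# (not the study's raw payroll data).

men_avg = 16000              # reported average monthly salary of men, INR
gap = 2000                   # reported difference in favour of men, INR
women_avg = men_avg - gap    # implied women's average: 14,000 INR

print(f"men earn ~{100 * gap / women_avg:.1f}% more than women")  # ~14.3%
```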
Since the employees carry the same job title irrespective of gender, it cannot be completely categorized as an instance of a glass escalator. Moreover, some of the above narratives demonstrate the difficulty that men (especially unmarried men) experience in gaining entry to the field and sustaining themselves in it. As male care providers are employed only for patients of the same gender, they do not pose a threat to the opportunities of the womenfolk.

Discussion

This study complements the prior research on men in nursing by considering the cases of a few men in home-based care services from the state of Kerala in South India. The efforts of male nurses towards the profession have not been adequately recognized though they have been actively involved in caring for and nursing people. Due to the scrutiny and suspicion that arise when men enter nursing, several men identified a split in terms of how they presented themselves at work and outside work. Some do not like to be known as nurses in a social context since their tags as 'male' nurses often go against the desirable notions. It is also worth noting that a non-gendered occupational title can have a positive effect on societal perceptions. "In Mauritius, the professional title and grade 'nursing officer' is non-gendered and thus does not represent a barrier to men" (Hollup, 2014, p.758). On the contrary, some of the study participants from Kerala consider the term 'home nurse' to suit female caregivers more than males. Studies have reflected that while most men expressed higher levels of satisfaction with their career, role strain is inescapable. Participants in the present study also expressed similar views and tensions related to the public perceptions of men performing caring work and the indifferent attitudes of people around them. But they believe that as long as they look after male patients alone, there is nothing to feel ashamed about. They also highlight the fact that they work as per the care plan and do not cross the boundaries to help their clients with domestic chores (unlike the female caregivers, who go the extra mile). In their study on male nurses from Kerala, George and Bhatti (2019) find that "nursing is not perceived suitable for men and the majority of the participants complained about the difficulties in convincing their families or being ridiculed by their friends" (p.121). The respondents of the current study had similar opinions, though not all of them consider the job as women's forte. However, the present study sheds light on men performing home-based care being prone to a double stigma: for taking up nursing, a women's job, and for working at home, a sphere meant for women. Likewise, Acker (1990) highlighted the gendering of occupations and workplaces (cited by Scrinzi, 2010, p.46). Prior research on men in nursing highlighted men's choice of specialism and its association with career progression. Male caregivers in Kerala also showed an interest in one special area, the ability to perform physiotherapy, and they take pride in being able to perform the task. However, this does not contribute to career progression. An effort made to analyze the presence of a glass escalator as experienced by the study participants shows that there are no real 'hidden advantages' (Williams, 1992) that they have over female caregivers. It was observed that support from significant others was a persuasive factor in occupational selection.
Zamanzadeh et al. (2013) came up with parallel findings from their study of male nurses, and the authors add that "most of this support comes from females who are close to men that are interested in pursuing a nursing profession" (p.53). This has been true to an extent in the current study. However, unlike their study, which found career opportunities and salary to be the most important motivators for entrance (Zamanzadeh et al., 2013, p.53), this study shows that the desire to help people and the satisfaction derived from the job have also been important motivators for the male nurses. According to Araüna et al. (2018), the ability to use and maintain a sense of humor in critical situations has been identified as a characteristic of male nurses. The caregivers in the study also exhibited similar conduct. Dill et al. (2016), in an analysis of a 'wage penalty', find that "men who are involved in direct care work occupations experience a penalty for caring" (p.354). However, the participants in the present study do not perceive the existence of such a penalty. The narratives provided above exemplify that the wages they earn are satisfactory, and a considerable proportion of the sample find them better compared to those of their previous occupations.

Conclusion

The paper aimed to analyze male in-home care providers' reasons and motivations for taking up the job, their prior work experiences, their relationships with clients/patients, and the perceived pros and cons of this non-traditional occupational choice. The study from the Indian state had male caregivers comprising only 13.33% of the total sample. The situation is not different in other states of the country. It is identified that home-based care is also female-concentrated, akin to hospital-based nursing. Authors like Ayala et al. (2014) consider the entry of men in substantial numbers to benefit the future of nursing as they assume that "the masculine presence could counterbalance an alleged lack of political power" (p.1483). But the authors caution that this move "may lead to the reproduction of earlier historical inequalities if not handled judiciously" (Ayala et al., 2014, p.1484). Although Shen-Miller and Smiler (2015) opine on the possibility of an overall rise in wages in the field as a result of men's substantial entry, a remark is also made about a subsequent gender-wage gap that could occur (p.272). In a similar line, Bernabeu-Mestre et al. (2013) opine that the inclusion of more men into nursing should not come at the cost of compromising equality in terms of responsibility and power (p. 288). However, the current study participants do not reflect the presence of a glass escalator, barring the aspect of the gender-wage gap. While several authors consider male (re)entry to nursing a welcome move, multiple interpretations have come up from various studies conducted globally. "The presence of men in feminized occupations can challenge the sex-typing that characterize them" (Scrinzi, 2010, p.59). "It is anticipated that a large influx of men will raise the profession's status and prestige" (Evans, 1997, p.227). Elliott (2015) mentions the need to encourage men to engage in gender-equal and caring practices (p. 247). Research by Aranda et al. (2015, p. 105) comes up with similar findings and argues that the inclusion of men "could produce a change in gender stereotypes".
Dill et al. (2016) are also optimistic that "the presence of men in low- and middle-skill care work occupations may redefine "women's work" as both men's and women's work" (p.355). As Mackintosh (1997) rightly puts forward, "the contribution men have made to nursing history should be recognized more positively, thereby allowing male nurses the opportunity to fulfill their roles with full knowledge of their place in the historical background of the profession" (p.236). The present study is also in favor of bringing a change to the 'feminization of nursing' (O'Lynn, 2007) and thereby helping men reinforce their caring identities. It also finds that caregiving becoming a gender-neutral activity, as an after-effect of the enlarged representation of male caregivers, is a possibility that cannot be ignored. Though the sample size of the study is limited, it is envisioned that the study will pave the way for future research and can be replicated in other parts of the country.

Ethical Approval

The study on home-based caregivers from Kerala was approved by the University of Hyderabad. The Institutional Ethics Committee's certificate to conduct the research was granted on 09/03/2018 for Application No. UH/IEC/2018/6.

Notes

1 Role strain is "when an individual is likely to experience tension in coping with the requirements of incompatible roles" (Jary, D., & Jary, J., 1991, p. 538)
Methylglyoxal couples metabolic and translational control of Notch signalling in mammalian neural stem cells

Gene regulation and metabolism are two fundamental processes that coordinate the self-renewal and differentiation of neural precursor cells (NPCs) in the developing mammalian brain. However, little is known about how metabolic signals instruct gene expression to control NPC homeostasis. Here, we show that methylglyoxal, a glycolytic intermediate metabolite, modulates Notch signalling to regulate NPC fate decision. We find that increased methylglyoxal suppresses the translation of Notch1 receptor mRNA in mouse and human NPCs, which is mediated by binding of the glycolytic enzyme GAPDH to an AU-rich region within the Notch1 3ʹUTR. Interestingly, methylglyoxal inhibits the enzymatic activity of GAPDH and engages it as an RNA-binding protein to suppress Notch1 translation. Reducing GAPDH levels or restoring Notch signalling rescues methylglyoxal-induced NPC depletion and premature differentiation in the developing mouse cortex. Taken together, our data indicate that methylglyoxal couples the metabolic and translational control of Notch signalling to control NPC homeostasis.

Gene regulation and metabolism co-ordinate self-renewal and differentiation of neural precursors (NPCs) in the developing brain. Here the authors show that methylglyoxal, a glycolytic intermediate metabolite, promotes GAPDH-dependent translational repression of Notch1, thereby promoting NPC differentiation.

During the development of the mammalian brain, neural precursor cells (NPCs) self-renew and differentiate to give rise to appropriate numbers of neurons 1 . Key to this balance of self-renewal and differentiation is the crosstalk between gene expression and metabolism, two fundamental processes that co-ordinate NPC fate decision [2][3][4] . The expression of metabolic genes is known to be tightly controlled to support the metabolic shift during NPC differentiation. For example, activation of Notch signalling in NPCs induces the expression of pro-proliferative genes (e.g. the basic helix-loop-helix transcription factor Hes1) to maintain the self-renewal of NPCs while at the same time upregulating the expression of glycolytic enzyme genes (e.g. hexokinase 2 and lactate dehydrogenase) that are required by NPCs for energy production and biosynthesis 5,6 . On the other hand, during neurogenesis NPCs express proneural genes to induce differentiation and downregulate the expression of glycolytic enzymes to support the metabolic transition from glycolysis to mitochondrial oxidative phosphorylation as the primary energy source of neurons 5,6 . Accumulating evidence suggests that NPC metabolism is not a simple adaptation to different cellular states but instead plays a more direct role in regulating self-renewal and differentiation. Although NPCs rely on glycolysis, their mitochondria exhibit an elongated morphology and are functional, with a forced metabolic switch to mitochondrial oxidative phosphorylation enhancing their differentiation 7,8 . These observations reveal the reciprocal nature of the relationship between metabolism and gene expression critical for NPC fate decision. However, while the pathways that regulate metabolic gene expression are well described, the signals and mechanisms that mediate the metabolic feedback control of gene expression for proper NPC fate decision remain poorly understood. Glycolysis produces a wealth of metabolites. How do these metabolic cues instruct gene expression in NPCs?
One mechanism used in the adult murine NPCs is mediated by a cyclic AMP-responsive element-binding protein (CREB)-dependent pathway that senses the level of glucose to direct Hes1 transcription and thereby the self-renewal of NPCs 9 . A second mechanism may involve glycolytic enzymes acting as RNA-binding proteins (RBPs) to regulate target mRNAs post-transcriptionally [10][11][12] . For example, glyceraldehyde-3-phosphate dehydrogenase (GAPDH) is a key glycolytic enzyme that catalyzes the conversion of glyceraldehyde-3-phosphate (G3P) into 1,3-bisphosphoglycerate (1,3-BPG) 13 . Interestingly, GAPDH can bind to the AU-rich element in the 3ʹ untranslated region (3ʹUTR) of mRNAs and subsequently alter their stability and translation 14 . This dual function of GAPDH is best described in immune cells. In T cells, where oxidative phosphorylation serves as the primary energy source, GAPDH functions as an RBP to repress the translation of the interferon-γ mRNA 10 . When T cells are activated and switch from oxidative phosphorylation to glycolysis, GAPDH is re-engaged in the glycolytic pathway and no longer represses the translation of interferon-γ mRNA 10 . What controls the functional switch of metabolic enzymes is still largely unknown. One means of switching may involve feedback or feedforward control of their enzymatic activities by post-translational modifications with intermediate metabolites 15,16 . For example, methylglyoxal, an intermediate metabolite produced from G3P during glycolysis, modifies GAPDH in a non-enzymatic manner, leading to inhibition of its enzymatic activities 17 . The competitive binding of the enzyme cofactor nicotinamide adenine dinucleotide (NAD) and RNA to the same domain on GAPDH suggests that its compromised activity for glycolysis may otherwise promote its engagement as an RBP to regulate target mRNAs 18,19 . We have recently found that an increase in methylglyoxal levels depletes NPC numbers in the developing mouse cortex 20 , raising the possibility that methylglyoxal may serve as a metabolic signal to regulate specific genes for NPC homeostasis by modulating RNA-binding enzymes such as GAPDH. Here, we show that methylglyoxal induces feedback regulation of Notch signalling in NPCs by engaging GAPDH as an RBP. An increase in methylglyoxal levels reduces the enzymatic activity of GAPDH and promotes its binding to Notch1 mRNA in NPCs. This leads to the translational repression of Notch1 mRNA and a reduction in Notch signalling, ultimately causing premature neurogenesis. This study provides a mechanistic link for the metabolic regulation of gene expression in NPC homeostasis.

Results

Excessive methylglyoxal depletes neural precursors. We have previously shown that the methylglyoxal-metabolizing enzyme glyoxalase 1 (Glo1) maintains NPC homeostasis, thereby preventing premature neurogenesis in the developing murine cortex 20 . To determine whether Glo1 controls NPC differentiation by enzymatically modulating methylglyoxal, we initially assessed methylglyoxal-adduct levels in NPCs and neurons in the cortex 21,22 . Immunostaining of embryonic day 16.5 (E16.5) cortical sections for a major methylglyoxal adduct, MG-H1, showed only weak immunoreactivity in the cytoplasm of Pax6+ radial precursors in the ventricular and subventricular zones (VZ/SVZ) (Fig. 1a, b, Supplementary Fig. 1a).
MG-H1 production was gradually increased in newborn neurons migrating in the intermediate zone (IZ) and became highly enriched in the cortical plate (CP), where it accumulated in the nuclei of neurons expressing the neuronal markers βIII-tubulin (cytoplasmic) and Brn1 (nuclear) (Fig. 1a, b, Supplementary Fig. 1a). The gradual increase in methylglyoxal levels from NPCs to neurons was consistent with a previous study 23 and is in agreement with the higher expression level of Glo1 in NPCs than in neurons 20 . We next manipulated Glo1 enzymatic activity using S-p-bromobenzylglutathione diethyl ester (BBGD), a cell-permeable and reversible Glo1 inhibitor 24 . As expected, upon incubation with BBGD, methylglyoxal levels were significantly elevated in isolated E13.5 cortical tissues (Fig. 1c). We then injected BBGD into the lateral ventricle at E13.5, followed by in utero electroporation of a plasmid encoding nuclear EGFP to label and track NPCs and the neurons they give rise to. The reversible effect of BBGD allows the manipulation of NPCs adjacent to the lateral ventricle, with a minimal impact on migrating newborn neurons in the IZ. Cortical sections were immunostained for EGFP and cell-type-specific markers three days after treatment. We found that BBGD exposure led to a reduction of EGFP+ cells in the VZ/SVZ (Fig. 1d, e). In contrast, the proportion of EGFP+ cells in the CP was increased, with no change in proportions in the IZ (Fig. 1d, e). In line with the altered cell distribution, we found fewer EGFP+ cells that also expressed the radial precursor marker Pax6, and more EGFP+ cells expressing the neuronal marker Satb2, in cortices exposed to BBGD (Fig. 1f, g). To further confirm that the alterations of NPCs were due to an aberrant increase in methylglyoxal, we labeled proliferating NPCs with bromodeoxyuridine (BrdU), followed by injection of PBS or methylglyoxal into the lateral ventricle. Two days later, we found fewer BrdU+ cells in the VZ/SVZ as well as fewer BrdU+, Pax6+ radial precursors in cortices that received methylglyoxal (Supplementary Fig. 1b-g). These results indicate that an increase in methylglyoxal shifts the balance of NPC homeostasis towards neurogenic differentiation.

Methylglyoxal regulates Notch signalling to affect NPCs. Notch signalling plays a crucial role in stem-cell maintenance 5 . Perturbing components of Notch signalling leads to NPC depletion and aberrant neurogenesis characterized by NPC apical detachment and mislocalization in the developing cortex 25 , which is phenocopied by Glo1 knockdown 20 . Therefore, we asked whether methylglyoxal alters Notch signalling in NPCs, using a Notch signalling reporter that contains the responsive element of the canonical Notch effector c-promoter binding factor 1 (CBF1) upstream of the EGFP gene (CBFRE-EGFP) 25,26 . We co-electroporated the Notch signalling reporter with control or Glo1 shRNA plasmids into E13.5 cortices, together with a plasmid encoding DsRed2 driven by the constitutive CMV promoter, used as an internal transfection control. After two days, almost 70% of DsRed2+ cells in the VZ/SVZ of the control cortex expressed EGFP, indicating active Notch signalling in the transfected cells (Fig. 2a, b). However, following Glo1 knockdown, this number was reduced to less than 40%, suggesting that Notch signalling is suppressed by the excessive methylglyoxal induced by Glo1 knockdown. To further assess the changes in Notch signalling, we examined the mRNA levels of downstream Notch targets in control and Glo1 knockdown cells.
To this end, we co-electroporated control or Glo1 shRNAs with an EGFP plasmid into E13.5 cortices and collected EGFP+ cells 2 days later by fluorescence-activated cell sorting (FACS) for quantitative real-time polymerase chain reaction (qRT-PCR) analysis. The mRNA levels of the Notch1 targets Hes1, Hey1, and Hey2 were significantly reduced following Glo1 knockdown (Fig. 2c), suggesting a reduction of Notch signalling. To confirm these findings, we also examined the effect of Glo1 inhibition on Notch signalling in a relatively homogeneous NPC population, in vitro cultured human embryonic stem-cell (H9)-derived NPCs (hNPCs). Treatment with BBGD caused a significant increase in methylglyoxal levels (Supplementary Fig. 1h) and reduced the mRNA levels of the HES1, HES2, HES5, and HEY2 genes (Fig. 2d). The expression of MASH1, which is repressed by the HES protein family, was correspondingly upregulated (Fig. 2d). Moreover, knockdown of GLO1 in hNPCs caused an increase in intracellular methylglyoxal levels (Supplementary Fig. 2a-d), accompanied by a reduction in expression of Notch1-responsive genes (Supplementary Fig. 2e). These results indicate that excessive methylglyoxal reduces Notch signalling in NPCs. Further support for this conclusion came from the direct application of exogenous methylglyoxal to hNPCs, which also increased intracellular methylglyoxal levels (normalized fold change, 10.16 ± 1.80; p < 0.01; n = 3) and suppressed Notch1-responsive genes (Supplementary Fig. 2f). To determine if physiological levels of methylglyoxal regulate Notch signalling, we overexpressed Glo1 in the cortex by in utero electroporation to deplete intracellular methylglyoxal and, at the same time, co-electroporated the Notch signalling reporter. Glo1 overexpression enhanced Notch signalling in the transfected cells (Fig. 2e, f and Supplementary Fig. 2g-j). Consistently with these results, ectopic expression of human GLO1 in cultured hNPCs reduced methylglyoxal levels (Supplementary Fig. 1h) and caused an increase in the expression of Notch1-responsive genes (Supplementary Fig. 2k). Taken together, our results indicate that methylglyoxal regulates Notch signalling in NPCs and raise the possibility that impaired Notch signalling might account for the depletion of cortical NPCs induced by increased methylglyoxal. If so, then restoring Notch signalling in NPCs might antagonize the effect of methylglyoxal and rescue Glo1 knockdown-induced NPC depletion. Notch signalling is initiated when the Notch1 receptor binds to its ligands, followed by protease cleavage and release of the Notch intracellular domain (NICD), which translocates into the nucleus to activate downstream targets 5,26 . To test our hypothesis, we constitutively activated Notch signalling by ectopically expressing NICD in the E13.5 cortex in which Glo1 was also knocked down. After 3 days, our analysis of cortices showed that NICD expression completely reversed the phenotype induced by Glo1 knockdown, resulting in significantly more EGFP+ cells in the VZ/SVZ as well as EGFP+, Pax6+ radial precursors and a reduction in Satb2+ neurons (Supplementary Fig. 3a-c).
Given the important role of Notch signalling in NPC maintenance, we wondered whether the phenotypic rescue by NICD was due to an unspecific overactivation of Notch signalling. We therefore titrated the dose of NICD used in electroporation to a level at which NICD itself did not change cortical development (Supplementary Fig. 3d-f). This low dose of NICD was sufficient to normalize the aberrant distribution of EGFP+ cells in the VZ/SVZ induced by Glo1 knockdown (Fig. 2g, h) and rescued the proportions of EGFP+, Pax6+ radial precursors and Satb2+ neurons to control levels (Fig. 2i, j, Supplementary Fig. 3g). These results demonstrate that the downregulation of Notch signalling mediates methylglyoxal-induced NPC depletion and premature neurogenesis. Interestingly, NICD expression did not rescue the reduction of EGFP+ cells in the CP (Fig. 2g, h, Supplementary Fig. 3a, b), suggesting that methylglyoxal regulates neuronal migration independent of Notch signalling.

Fig. 2 Methylglyoxal perturbs NPC by suppressing Notch signalling. a, b E13.5 cortices were co-electroporated with CBFRE-EGFP and DsRed2, and control or Glo1 (shGlo1) shRNAs and analyzed two days later. a Images of sections immunostained for EGFP (green) and DsRed2 (red). Arrows denote double-labelled cells, and arrowheads denote cells with reduced Notch signalling. b Quantification of sections as in a for EGFP+, DsRed2+ cells. n = 4 and 5 embryos each. c qRT-PCR analysis of FACS-sorted EGFP+ cells from cortices co-electroporated with EGFP and control or Glo1 shRNAs. n = 3 experiments. d qRT-PCR analysis of low-passage hNPCs treated with BBGD or DMSO for 48 h. n = 3 experiments. e, f E13.5 cortices were co-electroporated with CBFRE-EGFP and DsRed2 plus empty vector control or a plasmid expressing Glo1 (Glo1) and analyzed two days later. Images (e) and quantification (f) of EGFP (green) and DsRed2 (red) double-positive cells. n = 3 and 5 embryos each. g-j A low dose of plasmids (0.5 µg µl−1) expressing NICD was co-electroporated with Glo1 shRNA and EGFP into E13.5 cortices. Cortical sections were analyzed three days later. g Images of sections immunostained for EGFP (green). Dotted white lines denote the ventricular surface and boundaries between VZ/SVZ, IZ, and CP. h Quantification of the relative localization of EGFP+ cells. n = 4 embryos each. i Images of the VZ or CP of electroporated sections that were immunostained for EGFP (green) and Pax6 (red, left panels) or Satb2 (red, right panels). Arrows denote double-labelled cells. j Quantification of the proportion of EGFP+ cells that were also positive for Pax6 or Satb2. n = 4 embryos each. LV lateral ventricle, VZ ventricular zone, SVZ subventricular zone, IZ intermediate zone, CP cortical plate. Scale bars, 25 µm in a, e and g; 10 µm in i. Data are presented as mean values ± SEM and analyzed using two-tailed, unpaired Student's t-test with Bonferroni correction. **p < 0.01, *p < 0.05, ns = p > 0.05. Source data and p-values are provided as a "Source Data file".

Methylglyoxal regulates translation of Notch1 mRNA. Our data suggest that methylglyoxal may target specific component(s) of the Notch pathway to regulate NPC homeostasis. Therefore, we asked whether one target could be the Notch1 receptor itself, since it initiates the signalling cascade and is under extensive regulation for precise cell fate decision [27][28][29][30][31][32] . To test whether methylglyoxal regulates Notch1 expression in NPCs, we electroporated control or Glo1 shRNAs and a nuclear EGFP plasmid into the E13.5 cortex and examined Notch1 protein expression by immunostaining EGFP+ cells 2 days later. The quantification of Notch1+ cells in the VZ/SVZ showed a 35% reduction following Glo1 knockdown (Fig. 3a, b), in line with our observation of attenuated Notch signalling (Fig. 2a-c). To determine if the reduction in Notch1 protein abundance was due to a decrease in its mRNA levels, we sorted EGFP+ cells from electroporated cortices by FACS and examined Notch1 mRNA by qRT-PCR.
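Fold changes of the kind reported throughout this study (e.g., "normalized fold change, 10.16 ± 1.80") are conventionally derived from qRT-PCR cycle-threshold (Ct) values with the 2^-ΔΔCt method of Livak and Schmittgen (2001). The study does not spell out its quantification pipeline, so the sketch below is only a generic illustration of that calculation, with hypothetical Ct values and an unspecified reference gene, not the authors' actual analysis.

```python
# Illustrative sketch only: the 2^-ddCt calculation conventionally used to
# turn qRT-PCR cycle-threshold (Ct) values into normalized fold changes.
# All Ct values below are hypothetical and are not the study's data.

def fold_change(ct_target_treated, ct_ref_treated,
                ct_target_control, ct_ref_control):
    """Relative expression (treated vs. control) of a target gene,
    normalized to a reference gene (Livak & Schmittgen, 2001)."""
    d_ct_treated = ct_target_treated - ct_ref_treated  # normalize to reference
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_treated - d_ct_control                # treated relative to control
    return 2 ** (-dd_ct)

# Hypothetical Ct values for a Notch target after Glo1 knockdown vs. control:
print(fold_change(ct_target_treated=26.1, ct_ref_treated=18.0,
                  ct_target_control=24.6, ct_ref_control=18.1))
# ~0.33, i.e. roughly a three-fold reduction in the knockdown condition
```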
Glo1 knockdown did not affect Notch1 mRNA levels (Fig. 3c). Similarly, we found that GLO1 knockdown caused a reduction in NOTCH1 protein abundance in hNPCs without changing its mRNA levels (Supplementary Fig. 4a-c). We next examined NOTCH1 protein and mRNA levels in cultured hNPCs treated with BBGD or directly with methylglyoxal. Western blot and qRT-PCR analyses showed that while BBGD and methylglyoxal treatment caused a reduction in NOTCH1 protein abundance (Fig. 3d, e, Supplementary Fig. 4d, e), NOTCH1 mRNA levels again were unaltered (NESTIN mRNA used as a control) (Fig. 3f, Supplementary Fig. 4f). Given that glycolysis is the primary source of intracellular methylglyoxal, we asked whether an increased glycolytic flux induced by a high-glucose condition could lead to similar changes in hNPCs. However, high-glucose medium did not affect methylglyoxal levels in hNPCs (Supplementary Fig. 4g). In contrast, ectopic expression of GLO1, which reduced methylglyoxal in hNPCs (Supplementary Fig. 1h), caused an increase in NOTCH1 protein abundance (Supplementary Fig. 4h, i) but did not affect NOTCH1 mRNA levels (Supplementary Fig. 4j). These results suggest that the regulation of NOTCH1 expression may occur at the translational level. To test this, we performed polysome profiling to measure mRNA translational status by separating non-translated mRNAs from those associated with multiple ribosomes (heavy polysomes) for active translation (Fig. 3g). We observed that BBGD treatment did not alter the polysome profile in hNPCs (Fig. 3g), suggesting that global translation was not perturbed by excessive methylglyoxal. This was consistent with the lack of changes seen in total protein synthesis measured by puromycin metabolic incorporation (Supplementary Fig. 4h, i). We then examined the polysomal distribution of NOTCH1 mRNA by qRT-PCR. BBGD treatment induced a robust shift in NOTCH1 mRNA towards lighter polysome fractions, with a concomitant increase in fractions containing non-translated mRNA (Fig. 3h, i), indicating reduced translation. In contrast, the highly expressed GAPDH mRNA was similarly engaged in translation in both conditions (Fig. 3h, i). Together, these results indicate that methylglyoxal negatively regulates NOTCH1 expression in NPCs at the level of translation.

Suppression of Notch1 translation requires an AU-rich motif. Translational regulation of Notch1 mRNA has been described in C. elegans during germline development 33,34 , and recently in mouse T cells during thymocyte development 35 . In these cases, the translational regulation of Notch receptors is mediated by the 3ʹUTR of Notch1 mRNA. Therefore, we asked whether the Notch1 3ʹUTR mediated methylglyoxal-induced translational repression. Using a luciferase reporter assay, we placed the full-length Notch1 3ʹUTR downstream of the firefly luciferase gene in a dual-luciferase vector and electroporated it with control or Glo1 shRNAs into the E13.5 cortex. Analysis of the cortex 2 days later showed that Glo1 knockdown markedly suppressed firefly luciferase activity (Fig. 3j), while luciferase mRNA levels remained unchanged (Fig. 3k), suggesting the presence of a translational regulatory component within the Notch1 3ʹUTR. The AU-rich element within the Notch1 3ʹUTR mediates the translational regulation of Notch1 by interacting with RBPs in T cells 35 . We found that the deletion of this AU-rich region in the Notch1 3ʹUTR completely abolished the Glo1 knockdown-induced reduction in luciferase activity in the cortex (Fig. 3l).
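As the Fig. 4 legend below notes, firefly luciferase values in dual-luciferase assays of this kind are normalized to Renilla luciferase measured in the same samples, which corrects for transfection efficiency and lysate amount. A minimal sketch of that normalization, using hypothetical luminometer readings rather than the study's data:

```python
# Illustrative sketch only: normalizing a dual-luciferase reporter assay.
# Firefly luciferase (reporter under the Notch1 3'UTR) is divided by Renilla
# luciferase from the same well to correct for transfection efficiency and
# lysate amount. All readings below are hypothetical, not the study's data.

from statistics import mean

def normalized_activity(firefly, renilla):
    """Per-well firefly/Renilla ratios for one condition."""
    return [f / r for f, r in zip(firefly, renilla)]

control = normalized_activity(firefly=[9200, 8800, 9500],
                              renilla=[4100, 3900, 4200])
knockdown = normalized_activity(firefly=[4300, 4600, 4100],
                                renilla=[4000, 4200, 3900])

print(f"relative to control: {mean(knockdown) / mean(control):.2f}")  # ~0.48
```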
Similarly, in cultured hNPCs, BBGD treatment significantly suppressed the luciferase activity from the reporter with the full-length Notch1 3ʹUTR but not from the construct lacking the AU-rich region (Fig. 3m, n), suggesting that the AU-rich region present in the Notch1 3ʹUTR mediates the translational regulation of Notch1 mRNA.

GAPDH binds to the Notch1 3ʹUTR to repress its translation. Given the involvement of the AU-rich region, RBP(s) may interact with this region to mediate methylglyoxal-induced translational repression of Notch1 mRNA. We reasoned that methylglyoxal might modulate this interaction by changing the availability or enzymatic activity of an RBP, since the post-translational addition of methylglyoxal moieties can change protein stability and functions 21,[36][37][38] . We focused our search of RBPs on those known to bind AU-rich elements and to be modified by methylglyoxal. The glycolytic enzyme GAPDH is a known target of methylglyoxal modification, and this modification inhibits its enzymatic activity for glycolysis 17 . Interestingly, when its enzymatic function in glycolysis is disengaged in T cells, GAPDH acts as an RBP to suppress the translation of interferon-γ mRNA after binding an AU-rich sequence within the interferon-γ 3ʹUTR 10 . This raises the possibility that GAPDH mediates the effect of methylglyoxal on Notch1 translation in NPCs. To test this idea, we first asked whether GAPDH interacts with the Notch1 3ʹUTR. We performed an RNA electrophoretic mobility shift assay (REMSA) on in vitro transcribed and radioactively labelled Notch1 3ʹUTR with different amounts of recombinant GAPDH (rGAPDH) protein, followed by the separation of RNA-protein complexes on native polyacrylamide gels. Multiple shifted bands of labelled RNA were detected in the presence of rGAPDH in a dose-dependent manner, indicating a direct interaction between GAPDH and the Notch1 3ʹUTR (Fig. 4a). This interaction was specific, as an excess of unlabeled Notch1 3ʹUTR RNA out-competed the binding of rGAPDH and abolished the shifted bands of labelled Notch1 3ʹUTR RNA, while yeast tRNA was unable to out-compete the binding of rGAPDH to the Notch1 3ʹUTR RNA (Fig. 4b). We next asked whether the AU-rich region within the Notch1 3ʹUTR mediates this interaction. We repeated the REMSA using an in vitro synthesized Notch1 3ʹUTR containing a deletion of the AU-rich region and found no shifted bands in the presence of rGAPDH (Fig. 4c). Moreover, this mutant form of the Notch1 3ʹUTR was unable to compete for the binding of rGAPDH to the full-length Notch1 3ʹUTR (Fig. 4c). Given that the AU-rich region in the Notch1 3ʹUTR can be bound by rGAPDH and was critical for the translational suppression of Notch1 mRNA in cortical precursors (Fig. 3j, l), we speculated that GAPDH could suppress Notch1 mRNA translation. To test this, we co-transfected the Notch1 3ʹUTR luciferase reporters into human embryonic kidney (HEK) 293 cells with plasmids overexpressing GAPDH or EGFP as a control. Indeed, the luciferase assay showed that ectopic expression of GAPDH caused a 30% decrease in the luciferase activity from the full-length Notch1 3ʹUTR, but not from the mutant 3ʹUTR lacking the AU-rich region (Fig. 4d), suggesting that GAPDH acts as a translational repressor on the Notch1 3ʹUTR via the AU-rich region.

Fig. 4 GAPDH binds Notch1 mRNA 3ʹUTR in response to excessive methylglyoxal and regulates Notch1 translation. a-c REMSA was performed using the labelled (hot) full-length (FL) or AU-rich element-deleted (dARE) Notch1 3ʹUTR RNA probe in the presence of incremental amounts of recombinant GAPDH protein and in the presence of unlabeled (cold) specific or unspecific RNA probes. Arrows and arrowheads indicate free RNA probes and RNA-GAPDH complexes, respectively. n = 3 experiments. d Luciferase activity from a reporter containing the full-length (FL) or AU-rich element-deleted (dARE) Notch1 3ʹUTR co-transfected with a plasmid expressing EGFP control or GAPDH in HEK293 cells. Firefly luciferase activity values were normalized to Renilla luciferase activity in the same samples. n = 3 experiments. e-i hNPCs were treated with DMSO or BBGD for 48 h. e GAPDH activity in the cell lysates. The same number of cells were seeded prior to treatment, and the GAPDH activity was further normalized by total protein mass. n = 3 experiments. f Western blots probed for GAPDH and reprobed for ACTb as a loading control. n = 3 experiments. g qRT-PCR analysis of mRNA enrichment by RNA immunoprecipitation (RIP) with control IgG or an anti-GAPDH antibody (normalized to the total RNA input). n = 3 experiments. h qRT-PCR analysis of the relative distribution of the HIF1a and c-MYC mRNAs across all fractions from polysome profiling. i Quantification of the relative enrichment of HIF1a and c-MYC mRNAs in the heavy polysome fractions. Each point corresponds to the value of each fraction normalized to the total HIF1a or c-MYC mRNA. n = 3 experiments. Data are presented as mean values ± SEM and analyzed using two-tailed, unpaired Student's t-test. **p < 0.01, ns = p > 0.05. Source data and p-values are provided as a "Source Data file".

Methylglyoxal engages GAPDH to suppress Notch1 mRNA. Our data suggest that the translation of Notch1 mRNA is controlled by its interaction with GAPDH. It is known that methylglyoxal can post-translationally modify GAPDH, and this modification inhibits its enzymatic activity for glycolysis 17 . Therefore, we explored the model that methylglyoxal modulates the dual function of GAPDH in NPCs by altering its enzymatic activity and subsequently engaging it as an RBP to bind Notch1 mRNA, leading to translational repression. To test this, we first assessed GAPDH activity in hNPCs. Following BBGD treatment, we found that while GAPDH protein levels remained unchanged, GAPDH enzymatic activity was suppressed by approximately 80% (Fig. 4e, f). We next asked if the suppression of GAPDH enzymatic activity induced by BBGD could lead to an enhanced interaction between GAPDH and NOTCH1 mRNA. We performed RNA immunoprecipitation (RIP) using antibodies against GAPDH or control isotype IgG on extracts of hNPCs treated with or without BBGD. qRT-PCR analysis of the immunoprecipitated RNA showed that the GAPDH antibody enriched NOTCH1 mRNA as well as other known GAPDH target mRNAs encoding regulators of NPC fate decision, including HIF1a 39,40 and c-MYC 41 (Fig. 4g). The interaction was specific, as these target mRNAs were not immunoprecipitated by control IgG, and the GAPDH antibody was not able to immunoprecipitate NESTIN mRNA or 18S RNA, used as negative controls (Fig. 4g).
Interestingly, following BBGD treatment, the amount of GAPDH-enriched NOTCH1 mRNA, but not of HIF1a or c-MYC mRNAs, was markedly increased, suggesting the existence of functional selectivity (Fig. 4g). In agreement with this observation, we found that HIF1a and c-MYC mRNAs were actively engaged in translation regardless of BBGD treatment (Fig. 4h, i).

GAPDH knockdown rescues methylglyoxal-induced NPC depletion. The above results support a model whereby the modification of GAPDH by methylglyoxal engages it as an RBP to selectively suppress Notch1 mRNA translation, leading to NPC depletion and premature neurogenesis. If the model is correct, a reduction in GAPDH abundance should ameliorate the impact of excessive methylglyoxal on NPCs in the developing cortex by releasing the repression on Notch1 translation. To test this, we used a GAPDH shRNA that efficiently knocked down GAPDH expression in transfected cultured NPCs (Fig. 5a, b). We co-electroporated this shRNA with EGFP plus control or Glo1 shRNAs into E13.5 cortices. Analysis 3 days later showed that while Glo1 knockdown caused a robust reduction in EGFP+ cells in the VZ/SVZ and CP, concurrent knockdown of GAPDH and Glo1 partially rescued the distribution of EGFP+ cells in the VZ/SVZ but not the CP (Fig. 5c, d). Moreover, GAPDH knockdown normalized the proportions of EGFP+ cells that were also positive for Pax6 or Satb2 (Fig. 5e, f). Together, these data suggest that methylglyoxal controls NPC homeostasis, at least in part, by engaging GAPDH as an RBP to regulate Notch signalling.

Discussion

The coordination of gene expression and metabolism is essential for NPC homeostasis 2. While much progress has been made in understanding the gene networks that direct metabolic transitions during neurogenesis 42,43, little is known about how metabolic cues signal back to gene expression programs to co-ordinate NPC self-renewal and differentiation. Our results reveal a mechanism mediating the metabolic control of gene expression in NPC homeostasis. We show that the intermediate metabolite methylglyoxal regulates the expression of the Notch1 receptor through the RBP function of the glycolytic enzyme GAPDH. As such, the metabolic signal is coupled to the translational control of Notch signalling to regulate the balance of NPC self-renewal and differentiation (Supplementary Fig. 4m). Accumulating evidence suggests that intermediate metabolites are critical players in the crosstalk between metabolism and gene expression for stem-cell fate decision 2,42,43. For example, two well-studied intermediates in the tricarboxylic acid (TCA) cycle, α-ketoglutarate and flavin adenine dinucleotide, function as cofactors of histone and DNA demethylases to epigenetically modulate global transcription for stem-cell maintenance 4,15,44. In NPCs, however, glycolysis is the major source of energy, producing a plethora of intermediate metabolites whose roles as signalling molecules in gene regulation and NPC homeostasis are poorly understood. Our results show that methylglyoxal, a glycolytic intermediate derived from G3P 21, can modulate the translation of Notch1 mRNA and the activity of Notch signalling, to co-ordinate NPC self-renewal and differentiation. Like other small signalling molecules, the magnitude of methylglyoxal signalling is likely to be determined by its steady-state concentration as the result of synthesis and degradation.
Although high glycolytic rates in NPCs inevitably increase the production of methylglyoxal, high expression levels of Glo1 in NPCs keep its concentration low (Fig. 1 and Supplementary Fig. 1), which permits active Notch signalling in NPCs. Therefore, it is likely that the rate of methylglyoxal clearance by Glo1, rather than synthesis from glycolysis, dictates its signalling activity as a metabolic feedback control of NPC fate decision. This idea is supported by our findings that manipulation of Glo1 activity in NPCs had a strong impact on Notch signalling and NPC homeostasis (Figs. 1-3, Supplementary Figs. 1-4), while increasing glycolytic influx did not affect intracellular methylglyoxal levels (Supplementary Fig. 4d) or NOTCH1 expression. However, how Glo1 activity is instructed by glycolysis to modulate intracellular methylglyoxal levels remains elusive. It should be noted that the crosstalk mediated by methylglyoxal likely operates at multiple levels of gene regulation. In this regard, methylglyoxal has recently been shown to change histone modification and alter the epigenetic control of gene transcription 37. Furthermore, methylglyoxal can also act directly on transcription factors to allow transcriptional regulation of specific genes in mammalian endothelial cells 21,38. In NPCs, we found that methylglyoxal regulated Notch1 receptor mRNA at the translational level (Fig. 3), suggesting a role of methylglyoxal in gene regulation beyond transcriptional regulation. A recent study in human umbilical vein endothelial cells corroborates this notion, showing that methylglyoxal affects the regulation of cell adhesion molecule-1 at the post-transcriptional level 45. Given the diverse mechanisms in different contexts, it is possible that multiple stemness-related pathways may also contribute to the effect of methylglyoxal on NPC fate decision. Nonetheless, two lines of evidence suggest the presence of selectivity and indicate that methylglyoxal acts, at least in part, through Notch signalling. First, methylglyoxal induces translational suppression of the Notch1 receptor but not of HIF1a and c-MYC, both of which are critical regulators of NPC biology [39][40][41] (Figs. 3, 4). Second, reactivation of Notch signalling results in a phenotypic rescue of Glo1 knockdown (Fig. 2).

How do NPCs sense metabolic signals to co-ordinate translational regulation? One well-studied regulator involved in metabolic sensing and mRNA translation is the mechanistic (mammalian) target of rapamycin (mTOR). By interacting with sensor proteins, mTOR senses nutrients and amino acids, in part coordinating global protein synthesis through the cap-dependent translational machinery [46][47][48]. Nonetheless, how intermediate metabolites are sensed to regulate the translation of specific mRNA targets is not clear. Our results on methylglyoxal suggest a plausible mechanism for intermediate metabolite sensing and signalling, involving post-translational conjugation on RNA-binding metabolic enzymes. First, methylglyoxal is known to react with side chains, such as those of lysine and arginine, to change protein function 21,22,36,49. Although this reaction is thought to be enzymatically independent, the conjugation is target-specific in live cells, through mechanisms that are still unclear. In the case of histones, methylglyoxal is conjugated selectively to the histone H3 and H4 families but not to H2, leading to changes in epigenetic modification and transcription 37.
The modification also occurs on some glycolytic enzymes, such as GAPDH, altering their enzymatic activity (Fig. 4) 17,50. Interestingly, another glycolytic intermediate, 1,3-BPG, has also been found to modify metabolic enzymes and modulate their activity, suggesting that conjugation may be a common mechanism by which intermediates can be sensed to initiate signalling 51. Second, numerous metabolic enzymes can act as target-specific RBPs 52. This dual function was initially observed in only a few enzymes, but recent mRNA-bound proteomic studies have identified a much broader class of metabolic enzymes involved in various pathways, such as glycolysis and the TCA cycle 11,12. Our understanding of the functional roles of these dual-role enzymes in RNA regulation under different biological contexts is still minimal. In NPCs, we found that GAPDH can bind to the 3ʹUTR of Notch1 mRNA and suppress its translation (Figs. 3, 4). This binding was mediated by an AU-rich region, consistent with previous reports that GAPDH recognizes AU-rich sequences 10,18. Importantly, we found that GAPDH contributed to NPC fate decision, at least in part by coordinating Notch signalling through its RNA-regulatory activity (Fig. 5). This finding represents a previously unrecognized route of regulation for NPC fate decision by metabolic enzymes. It should be noted that, in addition to its RNA- and glycolysis-related activities, GAPDH is known to perform other roles 13. For example, it is known that upon cellular stress, GAPDH is translocated into the nucleus and triggers a signalling cascade resulting in apoptosis 53. Therefore, it is possible that the changes in NPC homeostasis induced by GAPDH knockdown may reflect a combinatorial effect on multiple mRNA targets and different biological processes. Further studies are required to dissect the contribution of each target and process.

Fig. 5 Reducing GAPDH abundance partially rescues the NPC depletion induced by excessive methylglyoxal. a, b Cultured E12.5 cortical precursors were co-transfected with EGFP and control or GAPDH shRNAs (shGAPDH) for three days, immunostained for EGFP (green) and GAPDH (red) (a) and quantified for EGFP-positive cells with detectable GAPDH (b). Arrows and arrowheads denote EGFP-positive and -negative cells, respectively. Unpaired t-test; n = 3 experiments. c-f E13.5 cortices were co-electroporated with control or Glo1 shRNAs together with or without an shRNA against GAPDH, and coronal cortical sections were analyzed three days later. c Images of electroporated sections immunostained for EGFP (green). Dotted white lines denote boundaries between VZ/SVZ, IZ and CP. d Quantification of sections as in c for the relative location of EGFP+ cells. n = 3 (Control), 8 (shGlo1), 6 (shGAPDH) and 6 (shGAPDH + shGlo1) embryos. e Images of the VZ or IZ of electroporated sections that were immunostained for EGFP (green) and Pax6 (red, top panels) or Satb2 (red, bottom panels). f Quantification of sections as in e for the proportion of EGFP+ cells that were also positive for Pax6 or Satb2. n = 3 (Control), 8 (shGlo1), 6 (shGAPDH) and 6 (shGAPDH + shGlo1) embryos. Sections in a and c were counterstained with Hoechst 33258 (blue). Scale bars, 100 µm in c, 10 µm in a and e. Data are presented as mean values ± SEM and analyzed using a two-tailed, unpaired Student's t-test with Bonferroni correction. **p < 0.01, *p < 0.05, ns = p > 0.05. Source data and p-values are provided as a "Source Data file".
Third, the RNA-regulatory activity of metabolic enzymes is related to their enzymatic functions. For example, when GAPDH is dismissed from glycolytic metabolism in T cells, its RBP function as a translational repressor of interferon γ is engaged, and when GAPDH participates in glycolysis, it no longer suppresses interferon γ translation 10. Consistently, we found that inhibition of GAPDH enzymatic activity by methylglyoxal significantly enhanced its binding to Notch1 mRNA for translational repression (Fig. 4). A decrease in GAPDH abundance, and thus less GAPDH to suppress Notch1 mRNA, reversed the impact of methylglyoxal on NPC homeostasis (Fig. 5). These data support the idea that GAPDH is engaged as an RBP when its enzymatic activity is inhibited. Although our study reveals a functional switch of GAPDH induced by methylglyoxal in NPCs, how GAPDH enzymatic activity modulates its RNA-binding ability still requires further studies. Taken together, our findings in this study demonstrate a regulatory node that utilizes glycolytic metabolites to co-ordinate gene expression for NPC homeostasis.

Methods

Animals. All animal use was approved by the Animal Care Committees of the Hospital for Sick Children and the University of Calgary in accordance with the Canadian Council on Animal Care policies. Female CD1 mice (8-12 weeks old), purchased from Charles River Laboratory, were used for all animal experiments. Mice were housed in groups of 1-5 per cage, at a temperature of 24°C, under a 12 h light-dark cycle with free access to food and water. For Glo1 inhibition in vivo, E13.5 embryos were used for the injection of 100 μM S-p-bromobenzylglutathione cyclopentyl diester (BBGD) (Sigma) into the lateral ventricles together with in utero electroporation.

Cell culture. H9 hESCs were obtained from the National Stem Cell Bank (WiCell, Madison, WI) and cultured with mTeSR1 culture medium (StemCell Technologies) on BD hESC-qualified Matrigel. Pluripotent stem-cell work was approved by the Canadian Institutes of Health Research Stem Cell Oversight Committee and The Hospital for Sick Children Research Ethics Board. An embryoid body (EB)-based method was used for the induction of NPCs 54. EBs were made by treatment of hESCs with 2 mg ml−1 collagenase type IV and cultured in DMEM/F12 media supplemented with 20% KSR, 1x non-essential amino acids (NEAA), 1x beta-mercaptoethanol, 1x penicillin-streptomycin, 2.5 μM dorsomorphin, and 5 μM SB431542. After 4 days, the EBs were plated on Matrigel-coated dishes and cultured in induction media (DMEM/F12 media, N2 supplement, NEAA and 20 ng ml−1 bFGF) for eight days. Cells were then fed with induction media plus B27 supplement every other day. After 1 week, neural rosettes were isolated by Dispase (StemCell Technologies) treatment, re-plated on Matrigel-coated plates and maintained in induction media for up to four passages. HEK 293 cells were maintained in DMEM (Gibco) supplemented with 10% fetal bovine serum and 1% penicillin/streptomycin 55.

Plasmids and reagents. The pEF1α-EGFP plasmid expressing nuclear EGFP has been previously reported 56. Glo1 and control shRNAs in the pSUPER vector were published previously 20. The shRNA against human GLO1 (GATGGCTACTGGATTGAAA) was cloned into the pSUPER vector. The shRNA against mouse GAPDH was obtained from EZbiolab. To generate luciferase reporter plasmids, the 3ʹUTR of Notch1 mRNA was amplified from genomic DNA by PCR and cloned into the pmirGLO vector (Promega) downstream of the firefly luciferase gene.
The AU-rich element (141-367 of the 3ʹUTR) was deleted to generate the Notch1 3ʹUTR-dARE plasmid. A Renilla luciferase, serving as the internal control, is expressed independently from the same pmirGLO plasmid. The pCAG-mGFP (#14757), CBFRE-EGFP (#17705) and 3XFlagNICD1 (#20183) plasmids were obtained from Addgene. Myc-DDK-GAPDH was obtained from Origene. Human GLO1 cloned into the lentiviral vector pLX304, from the ORFeome initiative, was acquired from DNASU. All clones were verified by sequencing. Recombinant rabbit GAPDH was purchased from Sigma.

Antibodies. The donkey anti-mouse and anti-rabbit Alexa 488, 555 and 647-conjugated secondary antibodies were obtained from ThermoFisher and used at 1:500 dilutions. HRP-conjugated goat anti-mouse, anti-rabbit or anti-chicken secondary antibodies were purchased from ThermoFisher and used at 1:5000 dilutions. NIR-conjugated goat anti-mouse and anti-rabbit secondary antibodies were acquired from LI-COR and used at 1:25,000 dilutions.

Lentiviral particles, transduction, and transfections. Lentiviruses were produced in HEK293T cells using the third-generation packaging system (pMDG.2, pRSV-Rev, and pMDLg/pRRE). Supernatant containing viral particles was collected 48 h post-transfection, filtered and concentrated by 91,000 × g centrifugation for 2 h at 4°C. For transduction, hNPCs were incubated overnight with medium supplemented with the concentrated virus. hNPCs were transfected with Lipofectamine 2000 (Invitrogen) according to the manufacturer's instructions. HEK293 cells were transfected with polyethylenimine.

RNA immunoprecipitation. hNPCs were seeded in two 10 cm dishes and grown overnight. Cells were then washed in ice-cold PBS. RNA immunoprecipitations were carried out using the EZ-Magna RIP RNA Immunoprecipitation Kit (Millipore) according to the manufacturer's instructions. Five micrograms of rabbit anti-GAPDH antibody (Sigma) was used per reaction. Five micrograms of rabbit isotype IgG antibody was used as a negative control. Target transcripts were detected by qRT-PCR and plotted as % of input.

Methylglyoxal assay. Methylglyoxal concentration was determined using the methylglyoxal colorimetric assay kit (BioVision) according to the manufacturer's instructions. To examine methylglyoxal levels in tissues, cortices were dissected from E13.5 embryonic brain, lysed with PBS/0.2% Triton-X and subjected to DMSO or BBGD treatment (50 μM) for 3 h. For cultured hNPCs, 5 × 10^5 cells were lysed in 100 μl of PBS/0.1% Triton-X, and 20 μl or serial dilutions were used to determine methylglyoxal concentration changes using the same method as above.

Fluorescence-activated cell sorting. For assessment of Notch1 mRNA in vivo, a nuclear EGFP plasmid and control or Glo1 shRNAs were in utero electroporated into different E13.5 embryos from the same mother. After 2 days, EGFP+ cortices from embryos of each group were dissected, pooled, and dissociated into single-cell suspensions in ice-cold HBSS. Dissociated cells were filtered through a 40 µm cell strainer to obtain suspended single cells for FACS using a BD FACS Aria II cell sorter (the University of Calgary Flow Cytometry Facility). The EGFP signal was detected at a 530/30 nm bandpass using a 488 nm laser, and dead cells were stained with propidium iodide and excluded from the analysis. EGFP+ cells were sorted into ice-cold FBS, followed by centrifugation for 5 min at 200 × g at 4°C and total RNA analysis.

Quantitative real-time PCR and immunoblotting.
Total RNA was isolated from sorted mouse embryonic cerebral cortical cells or hES-derived NPCs using Trizol (ThermoFisher) according to the manufacturer's instructions. cDNA was synthesized from 500 ng of total RNA using the SuperScript III reverse transcriptase kit (ThermoFisher) with random hexamer primers according to the manufacturer's instructions. qRT-PCR was performed using SYBR Select PCR Master Mix (ThermoFisher) with amplification for 40 cycles at an annealing temperature of 60°C, using the ViiA7 Real-Time PCR system (ThermoFisher). All primers used in this study are described in Table 1. Fold changes were calculated by the 2^−ΔΔCt method. For immunoblotting, hNPCs were lysed with radioimmunoprecipitation assay (RIPA) buffer (25 mM Tris-HCl, pH 7.6, 150 mM NaCl, 1% Nonidet P-40, 1% sodium deoxycholate, and 0.1% SDS). Equivalent protein mass was loaded on SDS-PAGE and transferred to Nitrocellulose membrane Hybond ECL (GE HealthCare). HRP-conjugated secondary antibodies (Invitrogen) were used, and the membranes were developed with SuperSignal West Pico Chemiluminescent Substrate (Pierce). Images were acquired using a ChemiDoc MP (BioRad) and quantified using Imagelab v5.2.1 software (BioRad). Cultured HEK293 cells were lysed with 1x sample buffer with 1 mM dithiothreitol (DTT), boiled for 8 minutes, and analyzed by SDS-PAGE 20.

In utero electroporation. Expression constructs for nuclear EGFP or luciferases were co-electroporated with shRNAs or overexpression constructs at a 1:3 ratio or, when two additional plasmids were co-electroporated, at a 1:2:2 ratio for a total of 4 µg DNA, or as indicated 55. For detecting Notch signalling, control or Glo1 shRNAs were co-electroporated with CBFRE-GFP and a plasmid expressing DsRed2 at a 2:2:1 ratio. Prior to injection, plasmids were mixed with 0.5% trypan blue. Following injection into the lateral ventricles, the square electroporator CUY21 EDIT (TR Tech, Japan) was used to deliver five 50 ms pulses of 40-50 V with 950 ms intervals per embryo. Brains were dissected and analyzed 2 or 3 days post electroporation.

Luciferase assay. Luciferase assays were carried out using the Dual-Luciferase reporter kit (Promega) according to the manufacturer's instructions. To examine the activity and mRNA levels of luciferases in vivo, cortices were lysed 2 days after electroporation with pmirGLO plasmids together with control or Glo1 shRNAs. For HEK293 cells, a plasmid expressing mouse GAPDH or EGFP was co-transfected with the luciferase reporter plasmids using Lipofectamine 2000 (Invitrogen) and incubated for 16-24 h. Transfected cortices or cells were lysed and subjected to either luciferase assay or total RNA extraction for qRT-PCR analysis. The activity and mRNA levels of Renilla luciferase were used as an internal control for normalization. The results presented are the average of at least three independent experiments.

Immunostaining and histological analysis. For immunostaining of cortical sections, embryonic brains were dissected in ice-cold HBSS, fixed in 4% paraformaldehyde at 4°C overnight, cryopreserved with 30% sucrose and cryosectioned coronally at 16 μm. Sections were blocked at room temperature with 5% BSA (Jackson Immunoresearch) in PBS with Triton-X, and incubated with primary antibodies in blocking buffer overnight at 4°C, followed by incubation with appropriate secondary antibodies in blocking buffer at room temperature for 1 h. Nuclei were stained with Hoechst 33258 (Sigma).
For the detection of BrdU on cortical sections, sections were first immunostained for the indicated antibodies, followed by HCl treatment (1 N and 2 N) for 40 min, overnight incubation with anti-BrdU antibody at 4°C and 1 h incubation with the secondary antibody at room temperature 20.

RNA electrophoretic mobility shift assay (REMSA). For probe synthesis, the two fragments corresponding to the Notch1 3ʹUTR and Notch1 3ʹUTR dARE were PCR amplified, and a T7 promoter sequence (TAATACGACTCACTATAGGG) was added upstream of the forward primer for in vitro transcription. In vitro transcriptions were carried out using the MAXIscript T7 Kit (Invitrogen) and 50 μCi of 32P-labelled UTP, 3,000 Ci mmol−1, 10 µCi µl−1 (EasyTide BLU507T, 250 µCi), according to the manufacturer's instructions. Corresponding probes without 32P-labelled UTP were also synthesized and used for the specific competition control. After DNase digestion, the probes were PAGE purified. For the binding reaction and electrophoresis, 100,000 cpm of each probe was heated to 65°C for 10 min to denature and slowly cooled down to room temperature. Probes were combined with the recombinant GAPDH protein as indicated in binding buffer EBKM (25 mM HEPES pH 7.6, 5 mM MgCl2, 1.5 mM KCl, 75 mM NaCl, 6% sucrose, 1x complete protease inhibitor cocktail - Roche) plus 100 μg ml−1 BSA for 15 min at room temperature. Cold probes or 2.5 μg of tRNA were used as indicated. Two microlitres of a 50 mg ml−1 heparin sulphate stock solution was then added to the reaction for an additional 15 min at room temperature. Samples were run on 4% native PAGE for 2 h at 200 V at 4°C in 0.5x TBE buffer. The gel was dried and exposed to a phosphorimager screen, and images were acquired on a Typhoon FLA 9500 and analyzed using ImageQuant TL 8.1 software.

GAPDH activity assay. GAPDH activity was detected using the KDalert GAPDH assay kit (Ambion) according to the manufacturer's instructions. The same number of cells was seeded, four wells per condition, onto 96-well plates and treated with either BBGD or DMSO for 48 h. The results were normalized by total protein mass and expressed as a percentage of GAPDH activity relative to DMSO. The results presented are from three independent repeated experiments.

Polysome fractionation. Human H9 NPCs (3 × 100-mm culture dishes) were treated with DMSO or BBGD for 48 h, followed by a 10 min incubation with 100 μg ml−1 cycloheximide (CHX) (Sigma) at 37°C. Cells were washed with ice-cold PBS containing CHX and harvested with lysis buffer (20 mM Tris-HCl pH 7.5, 100 mM KCl, 5 mM MgCl2, 1% Triton X-100, 1 mM DTT, 0.5% sodium deoxycholate, 100 µg ml−1 CHX, 1% RNaseOUT) supplemented with protease inhibitors (Roche). After centrifugation at 13,000 × g for 5 min, the lysates were layered onto a 10-50% continuous sucrose gradient and centrifuged at 39,000 rpm in a Beckman SW-41Ti rotor for 90 min at 4°C. Each gradient was collected in 12 fractions [57][58][59]. RNA was extracted from all fractions and analyzed by qRT-PCR. Ten nanograms of synthetic luciferase transcript (NEB) was added to each fraction for normalization. Total RNA was then extracted using Trizol (Invitrogen). cDNA synthesis and qRT-PCR were carried out. The Ct values of transcripts were then normalized to the spiked-in luciferase transcript and plotted as the percent detected in each fraction.

Puromycin metabolic incorporation assay.
hNPC cultures in one well of a six-well plate treated with DMSO or BBGD were pulsed with 1 µM puromycin (Sigma) for 1 h and then processed for western blots as described above using an anti-puromycin antibody (EQ0001, KeraFast - 1:1000). The puromycin incorporation, representing protein synthesis during the pulse phase, was determined by measuring the total lane signal from 15-250 kDa. Signals were quantified using Empiria (LI-COR), normalized to total protein levels using Revert total protein stain (LI-COR), and presented as percent change relative to the DMSO control.

Microscopy and quantification. For the analysis of embryonic brains with in utero electroporation, 3-4 anatomically matched sections per brain from at least three embryos of two to three independent mothers for each group were imaged with a ×20 objective on an Olympus IX81 fluorescence microscope equipped with a Hamamatsu C9100-13 back-thinned EM-CCD camera and a Yokogawa CSU X1 spinning disk confocal scan head 20. Images were processed using Volocity software (Perkin Elmer). Pax6, Tbr2, and Hoechst staining were used to define the ventricular zone (VZ), subventricular zone (SVZ), and cortical plate (CP).

Statistics. All data were expressed as the mean plus or minus the standard error of the mean (SEM). With the exception of the microarray data, statistical analyses were performed with a two-tailed Student's t-test or, where relevant, ANOVA with Tukey or Dunnett's post-hoc tests, unless otherwise indicated.
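Two of the quantifications described in the Methods above, the 2^−ΔΔCt fold-change calculation and the normalization of polysome fractions to a spiked-in luciferase transcript, reduce to simple arithmetic on Ct values. A minimal sketch follows; all Ct values below are invented for illustration and are not data from this study.

```python
import numpy as np

def fold_change_ddct(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Relative expression by the 2^-(ddCt) method:
    dCt = Ct(target) - Ct(reference); ddCt = dCt(sample) - dCt(control)."""
    ddct = (ct_target - ct_ref) - (ct_target_ctrl - ct_ref_ctrl)
    return 2.0 ** (-ddct)

# Hypothetical Ct values: NOTCH1 vs a NESTIN reference, BBGD vs DMSO control.
print(round(fold_change_ddct(24.1, 18.0, 23.9, 17.9), 2))  # ~1, i.e. mRNA unchanged

# Polysome profiling: per-fraction Ct values are first normalized to the
# luciferase spike-in added to each fraction, then expressed as the percent
# of the total transcript recovered across the 12-fraction gradient.
ct_notch1 = np.array([26.5, 26.8, 27.0, 27.5, 28.2, 28.0, 27.3, 26.6,
                      26.2, 26.0, 26.4, 27.1])   # invented values
ct_spike  = np.array([20.0, 20.1, 19.9, 20.0, 20.2, 20.1, 20.0, 19.9,
                      20.0, 20.1, 20.0, 20.0])
rel = 2.0 ** (-(ct_notch1 - ct_spike))   # spike-normalized relative abundance
percent = 100.0 * rel / rel.sum()        # percent detected in each fraction
print(np.round(percent, 1))
```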
Risk factors for mortality and multidrug resistance in pulmonary tuberculosis in Guatemala: A retrospective analysis of mandatory reporting

Highlights
• National TB cohort analyzing risk factors associated with MDR-TB and mortality in Guatemala.
• Indigenous ethnicity and prior TB treatment were associated with increased risk of mortality and MDR-TB.
• HIV/unknown HIV status were associated with increased mortality, and diabetes with risk for MDR-TB.

Introduction

Tuberculosis (TB) was the leading cause of mortality from a single infectious agent in 2019, with an estimated 1.4 million deaths worldwide. [1] Risk factors for TB mortality include HIV infection, diabetes, anemia, and chronic lung, heart, and liver disease. [2] Multidrug-resistant tuberculosis (MDR-TB), defined as resistance to isoniazid and rifampin, is a growing threat and a significant burden on health systems worldwide. The World Health Organization (WHO) estimated that in 2019 there were 465,000 new cases of rifampin-resistant TB (RR-TB) or MDR-TB worldwide, with 182,000 associated deaths. [1] RR-TB, detectable by readily available point-of-care testing, is an excellent surrogate marker for concomitant INH resistance, with INH resistance identified in 90% of RR-TB cases. [3] Among new TB cases in 2019, 3.3% were MDR-TB, of which 17.7% had previously been treated. [1] MDR-TB is associated with a higher cost of treatment, lower cure rates, and higher mortality than non-MDR-TB, highlighting its public health importance. [4][5][6] One study from 2000 to 2012 in Peru found that less education, history of prior TB, diabetes, and HIV infection were associated with increased mortality in patients with MDR-TB. [7] A meta-analysis with >20,000 patients found that previous TB disease and prior TB treatment, non-completion of TB treatment, and failure of TB treatment were strongly associated with MDR-TB. [8] Another systematic review found HIV was not associated with the incidence of MDR-TB but was associated with primary MDR-TB. [9] However, a study from Peru noted a ten-fold increased risk of MDR-TB in people living with HIV (PLWHIV). [10] Other studies from Latin America identified male sex, diabetes, and smoking as risk factors for MDR-TB. [11][12][13] In 2019, Guatemala reported 3,716 new cases of TB and an estimated 130 cases of RR-TB/MDR-TB, an indication of success in the efforts to improve access to TB diagnostics, care, and treatment that warranted a change in Guatemala's TB burden status from moderate to low by the Global Fund to Fight AIDS, Tuberculosis, and Malaria. [1,14] However, significant disparities still exist in access to healthcare in rural and indigenous communities in Guatemala, which are frequently subject to discrimination and language barriers. [15][16] A Guatemalan National TB Program (GNTBP) report in 2015 found that the Central and Northwestern regions had the highest rates of TB, and that in >15% of TB cases, HIV diagnosis was unknown. [17] A single-center study from Guatemala found that the most significant risk factor for treatment failure was resistance to ≥2 TB drugs (OR 6.4, CI 2.3-17.8), with resistance to isoniazid (19%), streptomycin (18%) and rifampin (3%) being the most commonly seen, but only 2.8% had resistance to both isoniazid and rifampin. [18] The intent of this study was to identify risk factors for mortality and MDR-TB in adults with pulmonary TB in Guatemala.
Study design. We conducted a retrospective study of all adults with pulmonary TB reported to the GNTBP from January 1, 2016 to December 31, 2017. The Human Research Protection Office at Washington University in St. Louis and the Research Ethical Committee at the Guatemalan Ministry of Health approved the study with a waiver of informed consent.

Data collection. The GNTBP database collects standardized information on TB cases from all public healthcare facilities in the country, as TB case reporting is mandatory in Guatemala. The form collects age, sex, level of education, occupation, condition at diagnosis as defined by the WHO (i.e., new case, recurrent TB, previous loss to follow-up, or treatment failure), site of infection (pulmonary vs. extrapulmonary), resistance profile (non-MDR-TB vs. MDR-TB), prior treatment history, and the method used for the diagnosis (Supplementary Figure 1). Standard TB treatment regimens used are described in the Guatemalan Ministry of Public Health and Social Assistance's TB treatment protocol. [19] We included all adults ≥18 years old at the time of the case notification. We excluded patients with extrapulmonary disease and those with unknown mortality status at the time of data collection. Date of death was validated using the Guatemalan National Registry of Persons database (Registro Nacional de Personas [RENAP]).

Variable definitions. TB disease was defined as microbiologically, histopathologically, or clinically confirmed. Microbiological diagnosis was based on acid-fast bacilli (AFB) visualized on stain, culture positive for Mycobacterium tuberculosis complex, or a positive molecular test (i.e., GeneXpert assay or in-house Mycobacterium tuberculosis [MTB] PCR testing) from respiratory specimens. Histopathological diagnosis was defined as tissue biopsy with granulomatous inflammation and/or AFB on specific stains from respiratory tract tissue samples. Clinical diagnosis was based on symptoms suggestive of TB as determined by the treating physician and supported by improvement after TB therapy, consistent radiographic findings, a positive lipoarabinomannan assay, positive TB skin testing and/or interferon gamma release assay. MDR-TB cases were defined as having a positive rpoB rifampin-resistance mutation by the Xpert MTB/RIF (Cepheid, Sunnyvale, CA) test or resistance to both isoniazid (INH) and rifampin on drug susceptibility testing. Non-MDR-TB cases were defined as cases that did not fit the case definition for MDR-TB. Lower educational level was defined as fewer than six years of formal education (i.e., incomplete primary school). Indigenous ethnicity was determined by self-identification as Mayan, Garífuna, or Xinca. Guatemalan regions were based on the eight geopolitical definitions used by the Guatemalan government (Metropolitan, North, Northeast, Southeast, Central, Southwest, Northwest, Petén), using the patient's residence address at the time of notification. [20] Malnourishment was defined as body mass index (BMI) <18.5 kg/m² at the time of the case notification, as obtained from local medical records. HIV diagnosis, receipt of antiretroviral therapy (ART), diabetes, hypertension, chronic liver or kidney disease, history of previous TB treatment, drug or alcohol abuse, incarceration history, and pregnancy were obtained from non-mandatory reporting captured in the case-report form.

Statistical analysis. The primary outcome was all-cause mortality after the diagnosis of pulmonary TB, and the secondary outcome was MDR-TB.
Pearson chi-square or Fisher's exact tests, and the t-test or Mann-Whitney U test, were used for descriptive statistics, as appropriate. Variables significantly associated with mortality in the univariate analysis (p < 0.2) were included in the multivariable model. Multivariate binary logistic regression was used to evaluate risk factors associated with mortality and MDR-TB. In the multivariate logistic regression model, we adjusted for significant risk factors identified in the univariate analysis. For mortality, we adjusted for HIV diagnosis, prior TB treatment, education level, ethnicity, diabetes, and MDR-TB. For MDR-TB, we adjusted for previous TB treatment, education level, diabetes, and the patient's Guatemalan region of residence. All statistical tests were two-tailed, and significance was set at α = 0.05. All statistical analyses were done using SPSS (IBM, Armonk, New York, USA, version 26).

Results

Patients with MDR-TB had a lower education level compared to those with non-MDR-TB (100% vs 83.7%, p = 0.001) (Table 2). There was a significant difference in the geographical distribution of MDR-TB, with fewer MDR-TB cases in the Metropolitan region (2.3% vs 21.7%) compared to the Central (39.5% vs 22.7%) and Southwest regions (44.2% vs 30.4%) (p = 0.003). Patients with MDR-TB were also more likely to have diabetes (41.9% vs 13.4%, p < 0.001) and to have received prior TB therapy (79.1% vs 5.8%, p < 0.001) compared to those with non-MDR-TB.

Multivariate analysis. In the multivariate analysis, higher odds of mortality were associated with previous TB treatment (OR 3.57, CI 2.24-5.

Discussion

In our study we found that of 3,945 cases of pulmonary TB in Guatemala, 154 (3.9%) died and 43 (1.1%) had MDR-TB over a two-year period. By comparison, WHO estimated that in 2017 alone, Guatemala had 4,300 cases of TB, of which 370 (8.6%) had death as an outcome and 130 were MDR/RR-TB (3.0%). [21] The discrepancy in the number of cases in our study might be partially attributable to including only adults with pulmonary TB, but could also reflect systematic underreporting. [22][23] WHO also reported that among patients with TB, PLWHIV had lower mortality than people without HIV (0.41 per 100,000 population vs 1.8 per 100,000 population). [21] In our study PLWHIV had 3.98 times higher odds of death than patients without HIV coinfection, findings consistent with a previous meta-analysis that found a four-fold increase in mortality in TB/HIV coinfected patients. [24] The WHO estimates of HIV testing rates (94%) in patients with TB in Guatemala are in keeping with what was observed in this study (89.7%). Unknown HIV diagnosis was associated with higher odds of mortality in our study. Limited local testing availability in remote areas or difficulties accessing the healthcare system may account for the mortality difference seen among those with an unknown HIV diagnosis. According to UNAIDS, 32% of PLWHIV in Guatemala do not know their diagnosis, and a recent study found that nearly 60% have a CD4 count ≤200 cells/mm³ at the time of diagnosis. [25][26] Lack of information on CD4 count and viral load in PLWHIV included in our study is a limitation; such data could provide insight into the associations found. We also found that clinically-diagnosed cases had higher odds of mortality, and PLWHIV were more likely to receive a clinical diagnosis. The diagnosis of TB can be difficult, and in individuals with advanced HIV disease, who tend to have atypical clinical presentations, diagnostic testing yield is lower. [27]
Additionally, without microbiological confirmation, these patients may have harbored another infection with similar manifestations, such as histoplasmosis, suggesting the need for reliable point-of-care diagnostic tests. Reinhardt et al. found that 11.4% of Guatemalan patients with an AIDS-defining illness at HIV diagnosis had histoplasmosis. [26] Furthermore, in Latin America, the estimated incidence of histoplasmosis is equivalent to that of TB in PLWHIV, and the overall prevalence of previous exposure to histoplasmosis was highest in Guatemala, although this was largely based on older studies using the histoplasmin test. [26,[28][29] The higher mortality seen in clinically-diagnosed TB is in keeping with recently published data suggesting that in severely immunosuppressed, ART-naïve PLWHIV, empirical treatment for tuberculosis was not superior to test-guided treatment in reducing mortality. [30] Lower educational level was associated with greater odds of mortality and MDR-TB, although the latter was not significant on multivariate analysis. In Peru, less education was also associated with higher mortality in patients with MDR-TB (OR 3.06, CI 1.43-6.55), and more education was associated with less loss to follow-up and lower mortality (OR 0.39, CI 0.16-0.94). [7,31] Primary school or lower education has also been associated with higher mortality in China (OR 2.51, CI 1.34-4.70), [32] and higher rates of MDR-TB have been associated with less education in studies from Pakistan and Turkey. [33][34] Similarly, the odds of mortality and MDR-TB were greater for those of indigenous ethnicity. Cerón et al. demonstrated that indigenous people in Guatemala experience multiple barriers to care, including language barriers and discrimination, likely contributing to the disparity of outcomes. [16] We found indigenous people had a lower educational level, more malnutrition, and more commonly had a clinical diagnosis, reflecting underlying social disparities. We also found that patients with malnutrition had a seven-fold increased odds of mortality, a finding particularly worrisome as Guatemala has one of the highest rates of malnutrition in Latin America. [35] However, malnutrition at the time of notification may be related to TB disease itself, as consumption is fairly common in people with advanced TB disease. Previously reported risk factors associated with MDR-TB were also seen in our study, particularly diabetes mellitus and prior TB treatment. A meta-analysis that included studies from fifteen countries, including Mexico and Peru, found that diabetes was a significant risk factor for the development of MDR-TB (OR 1.97, CI 1.58-2.45). [11,36] A single-center, prospective study of patients diagnosed with pulmonary TB in Guatemala found that prior TB treatment for more than two weeks (OR 3.0; CI 1.5-10.3) was associated with resistance to ≥2 antituberculous antimicrobials. [18] This is the first study to assess risk factors associated with TB mortality and with MDR-TB using nationwide mandatory reporting in the Central American region. Limitations of our study include its retrospective nature, the limited number of variables obtained from mandatory reporting forms, and the significant amount of missing data. The standardized reporting form allows only for mutually-exclusive selection of pulmonary or extrapulmonary TB at the time of the notification, likely leading to the exclusion of patients with extrapulmonary TB who also had pulmonary involvement.
Table 3. Risk factors associated with mortality and with multidrug resistance in a multivariable binomial logistic regression analysis of adult patients with pulmonary tuberculosis, Guatemala, 2016-2017.

Additionally, the limited number of variables and incomplete data are reflective of the usual standard of data gathering on cases in the country, as the mandatory reporting form was not designed for research purposes. Furthermore, data collection on all comorbidities is not mandatory nor standardized, likely leading to underreporting, as reflected by the low prevalences seen. Self-reported ethnicity might have led to misclassification, as might the relevant amount of missing data for this variable (21.7%). The overall small number of MDR-TB cases reported hinders the representativeness of the associations found with MDR-TB. Use of rifampin resistance as a surrogate marker for MDR-TB may also have led to an overestimation of the number of MDR-TB cases. Lastly, the primary outcome was all-cause mortality, and deaths might not have been directly related to TB. However, WHO assumes the proportion of deaths attributable to TB is the same as the observed proportion in recorded deaths (i.e., overall mortality). [37] In this two-year study of patients with pulmonary TB in Guatemala, we found that HIV infection, unknown HIV diagnosis, prior TB treatment, indigenous ethnicity, lower education level, and malnutrition were significantly associated with overall mortality risk. Risk of MDR-TB was higher in patients with indigenous ethnicity, prior treatment for TB, and diabetes. Higher TB mortality and MDR-TB risk in indigenous populations might reflect social disparities seen in Guatemala and other Latin American countries. Additional studies are needed to further characterize TB morbidity and mortality in Central America.
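For readers who wish to reproduce this kind of multivariable analysis on comparable registry data, a minimal sketch of the mortality model is given below. It assumes a hypothetical flat file with one row per notified case; the file name, column names, and codings are illustrative, not the actual GNTBP export, and the adjustment set mirrors the one described in the statistical analysis section.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical case-level file: one row per notified pulmonary TB case.
df = pd.read_csv("tb_cases.csv")

# Multivariable binary logistic regression for all-cause mortality, adjusted
# for HIV status (negative/positive/unknown), prior TB treatment, lower
# education, indigenous ethnicity, diabetes, and MDR-TB.
model = smf.logit(
    "died ~ C(hiv_status, Treatment(reference='negative'))"
    " + prior_tb_treatment + low_education + indigenous + diabetes + mdr_tb",
    data=df,
).fit()

# Report adjusted odds ratios with 95% confidence intervals.
ci = model.conf_int()
ci.columns = ["2.5%", "97.5%"]
ci.insert(0, "OR", model.params)
print(np.exp(ci).round(2))
```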
An anisotropic integral operator in high temperature superconductivity

A simplified model in superconductivity theory studied by P. Krotkov and A. Chubukov \cite{KC1,KC2} led to an integral operator $K$ -- see (1), (2). They guessed that the equation $E_0(a,T)=1$, where $E_0$ is the largest eigenvalue of the operator $K$, has a solution $T(a)=1-\tau(a)$ with $\tau(a) \sim a^{2/5}$ ($*$) when $a$ goes to 0. $\tau(a)$ imitates the shift of the critical (instability) temperature. We give a rigorous analysis of the anisotropic integral operator $K$ and prove the asymptotics ($*$) -- see Theorem 8 and Proposition 10. The Additive Uncertainty Principle (of Landau-Pollak-Slepian [SP], \cite{LP1,LP2}) plays an important role in this analysis.

0. Many models of high temperature superconductivity [LV] lead to families of integral operators with anisotropic kernels. The structure and spectral analysis of these operators can be difficult and quite interesting, because standard analytical methods (perturbation theory, Fourier transform, etc.) do not necessarily help us. P. Krotkov and A. Chubukov [KC1, KC2] [see [KC2], section B.2, (46)-(60)] simplified one of the local Eliashberg gap equations by dropping the Matsubara frequency summation and came to an operator in $L^2(\mathbb{R})$. The subspaces $H_e$ and $H_o$ in $L^2(\mathbb{R})$ of even and odd functions are invariant, so we will consider the restrictions of $K$ to them. The toy model in [KC2], Sect. B.2, views $\tau(a)$ as an imitation of the shift of the critical (instability) temperature, where $a$ is the dimensionless quantity proportional to both the curvature and the interaction (see (36), (37) in [KC2] for details). Heuristic manipulations (46)-(62) in [KC2] were intended to make us believe that $\tau(a) \sim c\,a^{2/5}$. It is, maybe, quite remarkable that the exponent $2/5$ appears in this analysis. We will show in this essay that indeed, for $a > 0$ small enough, $c_1 a^{2/5} \le \tau(a) \le c_2 a^{2/5}$, where $c_1, c_2 > 0$ are absolute constants (see Prop. 10 in Sect. 4 below).

1. But first we will find good estimates of the shift $\psi$ in (9), i.e., of the behavior of the largest eigenvalue of $K_o$ in (7). Of course, $E(a) = \|K_a\|$, $E_e(a) = \|K_a|_{H_e}\|$ and $E_o(a) = \|K_a|_{H_o}\|$, because $K$ is a self-adjoint compact operator and $H_e$, $H_o$ are its invariant subspaces.

Lemma 1. For $a > 0$, (12) and (13) hold.

Proof. As (3) shows, after the Fourier transform ... Or, if you wish, they are "attained" if $f_e = \delta(x)$ and $f_o = \delta'(x)$. If $a > 0$, the norm is attained. We have a strict inequality in (20) because $K(x, y) > 0$ and an odd $g(x) \not\equiv 0$ is negative and positive on some subsets of positive measure, so (21) follows. Indeed, if $g$ is odd and not identically zero, put ... Then the measures $\lambda(G_+) > 0$, $\lambda(G_-) > 0$ are positive, and the double integral splits into four parts. Two of them (over $G_+ \times G_+$ and $G_- \times G_-$) are positive and two (over $G_+ \times G_-$ and $G_- \times G_+$) are negative, because $K(x, y) > 0$ everywhere, with the excess being equal to ... Almost the same argument shows that $E(a) < 1$. Indeed, ... Lemma 1 is proven.

2. From now on we analyze the integral operator $K = K_a$, $a > 0$, with the kernel (2), $T = 1$. By (7) we can consider it in block form, where $K_e$, $K_o$ are integral operators, but their kernels are not uniquely determined because, say, for $h \in H_o$ and $A(x, y) = A(x, -y)$, $\int A(x, y)\,h(y)\,dy = 0$ anyway. To analyze $K_o$ we change the kernel (2) to its antisymmetrization and still have a representation; of course, this is a twin of (26). The explicit formula for $K'$ is (30), and in polar coordinates $(r, \varphi)$ it takes the form (32). We want to get an estimate for $E_o(a)$ from below by choosing a test function $h > 0$, to be specified later, $hH = 1$, and doing explicit calculations of quadratic forms.
So we obtain Proposition 2.

Proposition 2. For the kernel (30), (32), with the notations (33), (34), for small enough $a > 0$: (35) $\alpha(a) \le C a^{2/5}$, with $C$ an absolute constant, $C < 3$.

3. As we noticed in (21), ... To evaluate this quantity from above, we will use the following Schur lemma [Sc].

Lemma 3. Let $A$ be an integral symmetric operator, i.e., $A(x, y) = A(y, x)$; if $\int |A(x, y)|\,dy \le C$ for almost every $x$, then $\|A\| \le C$.

(See more about the Schur lemma, or Schur test, in [DK], Section 3, or [HS], Theorem 5.2. More general statements in the context of operator interpolation theory can be found in [Mi, Ca].)

It is quite surprising that this lemma gives us an estimate of the norm $\|K_o\|$ that is sharp up to the second term. The proof is straightforward, so we can consider only $x > 0$ in (88). Then ... Notice that the denominator ... Therefore, with $\xi = Y - x^2 + 1$, and $\xi = 2xt$, ... In (95) we have two positive factors, each of them less than 1, so if we expect their product to be close to 1 we want each of them to be close to 1. This will be achieved if ... So far we have used the rough inequalities (93)-(94), and we do not expect to get sharp constants. Therefore we do not look for the exact $x_*$, but we want reasonable estimates for ... Notice that for $v > 1$ ... Therefore, by (94), (98), (99), (100), and by an elementary inequality ... Again, as in (82), ... gives the best result on the right side of (102). This leads us to the inequality ..., where by (103) ... We proved (88)-(89) with $c = 2/3$.

4. In (33) we chose a smooth cut-off, but the calculations of Sect. 2 could be done (as long as we do not try to find sharp constants) with other $f_*$'s, say ... Then again the integral (43) will play an important role, i.e., we use the following.

Lemma 5. If $C > A > 0$ then ...

The proof was given in Sect. 2, formulas (44)-(51). Now ..., and, with the integrand being positive, we have ... The same analysis as on pp. 7-10 will show that if ..., where $\varphi(a) \le C_4 a^{2/5}$, although, because of (113) with different upper bounds of integration, this absolute constant $C_4$ will be worse than in (84) or (86), even if we try to choose $\lambda$ appropriately.

5. In the previous section we saw that the Schur lemma gives good upper estimates (88)-(89) of the norm of an integral operator with the kernel $K'$ of (30), (32). But an attempt to apply Lemma 3 to the kernel $K$ of (2), $T = 1$, does not give the right order of the term which is an analogue of $\beta$ in (88). Even if we take $x = 0$ ... So even if the estimate ..., $a \le a_*$, for some $a_* > 0$, were correct, it would be far away from the estimate from below (86). However, a more skillful use of the Schur lemma (or its proof), combined with the Uncertainty Principle (in its additive form), gives (!) good estimates of the norm $\|K_a\|$ of the full operator (1), (2). These constructions were suggested by Fedor Nazarov [private communication, Oct. 26, 2006].

Lemma 6 (Uncertainty Principle). Let $f \in L^2(\mathbb{R}^1)$ ... For any $h > 0$, $hH = 1$, one of the two inequalities (a) or (b) holds: ...

This is a version of the celebrated Landau-Pollak-Slepian inequalities (Additive Uncertainty Principle). In the Appendix we discuss it and give a proof of Lemma 6, to make the present paper a self-contained exposition. Now we will give an estimate from above of the norm of the $K$-image, $K$ of (1), (2), $T = 1$. We can assume [see (18)] ... If (b) in Lemma 6 holds, we do the following estimates: ... We used (b) and the elementary inequalities ... If (a) holds, we do as Schur did, i.e., by the Cauchy inequality, with $K \cdot f = K^{1/2}(K^{1/2} f)$, ... For any $y$, $M(y) \le 1$; but if $|y| \ge H$ we get a better estimate: notice that if $|x - y| \le 1$ then ..., if $a^2 H^4 < 1$. Let us choose $h > 0$ such that $\frac{1}{6} h = \frac{1}{6} a^2 H^4$, $hH = 1$, i.e., $h = (a^2)^{1/5} = a^{2/5}$.
(114) and (115) give the same estimate in cases (b) and (a), respectively. By Lemma 6, this covers all possible cases. These inequalities give the upper bound for the square of the norm, $\|K\|^2$; therefore Prop. 2 gives estimates from above of the deficiency term $\psi$ in (9) in the case of the subspace of odd functions. But with the inequalities (12) of Lemma 1, Props. 7 and 2 together complete the proof of the following statement.

Theorem 8. Let $E(a) = E_e(a)$ and $E_o(a)$ be the largest eigenvalues of the integral operator $K_a$ of (116) on the subspaces of even and odd functions, respectively. Then for $a > 0$ small enough ... $\frac{1}{12}\,a^{2/5}$.

6. Now we can give the asymptotics of $\tau(a)$ in the solution (10) of the equation $E_o(a, T) = 1$.

Lemma 9. The norm $N(a, T)$ of the operator $K_{aT}$ of (1) has the property ...

The same is true for the norms $N_e$, $N_o$ of $K^e_{aT}$, $K^o_{aT}$, the restrictions of $K$ of (1) to the subspaces of even and odd functions. If we take in (118) ... for small $t$. If $T = 1 - \tau(a)$ and $\tau(a) \to 0$ ($a \to 0$) as in (10), the equation (122) ... With $\tau(a) \to 0$, $\frac{1}{2} < 1 - \tau(a) < 1$, so (126) and (124) imply ... We proved the following.

Remark. If we knew that $\lim \psi(a)\,a^{-2/5}$ existed and were equal to $L$, then the same argument would tell us that $\lim_{a \to 0} \tau(a)\,a^{-2/5} = L$.

7.1. ... [KMS, Pa]. He assumes that ... satisfies the following: ... Then a positive definite kernel $V(x, y)$ is given, and its eigenvalues ...; moreover, each sequence of $A$'s ($Aa = 1$) tending to infinity has a subsequence for which $\psi_j(a)$, $T_A \psi_j(a) = \mu_j \psi_j(a)$, converges in $L^2(I)$ to an eigenfunction of $V$ belonging to the eigenvalue $\lambda_j$. See details in [Wi2]. Using Weyl symbols, H. Widom gave (private communication) a heuristic argument which leads to the conjecture that this operator $M$ exists; it has the symbol $|s| + 4x^4$, or, in other terms, it is determined by the quadratic form ...

7.2. The integral operator (1)-(2) was brought to my attention by P. Krotkov, through their analysis of models in superconductivity. From a mathematical point of view, the kernel (2) is interesting because
- it is NOT translation invariant,
- the polynomial in the denominator is NOT homogeneous; it has terms of order 2 and 4.

Although our analysis and results could be extended to a broader family of such kernels, a complete understanding of the interplay of the orders of the terms depending on $(x - y)$ and $(x + y)$, or $(x^2 + y^2)$, would be very instructive. Notice, for example, that the following is true.

Proposition 11. Let $t > 0$ be fixed and let $a > 0$ go to zero. Then its norms $N = N_e$ and $N_o$ satisfy the inequalities ..., where $c$, $C$ are constants depending on $t$ but not on $a$.

The operator $K_a$ with the kernel (129) is compact for any $t > 0$, $a > 0$. The conjecture of Section 7.1 can easily be formulated for this example as well. How to prove it?

... On the other hand, for some sequence, $b = \lim \langle u_n, v_n \rangle = \lim \langle P u_n, R v_n \rangle \le \|RP\|$.

Proof. If $t = 0$ this is the Pythagorean identity. If $t = 1$ this is the Cauchy inequality. We are ready to prove the following.

Proposition 14. For any $f$, $\|f\| = 1$, ...

Proof. $P$ is an orthogonal projector, so $\langle Pf, f - Pf \rangle = 0$; then (143) $\langle f, Pf \rangle = \|Pf\|^2$, $\|Pf\| = \langle f, u \rangle$, where $u = Pf/\|Pf\|$, and (144) $1 = \|f\|^2 = \|Pf\|^2 + \|f - Pf\|^2$.
Under conditions of Theorem 16 Of course, the main example for us is H = L 2 (R) with a unitary operator V = F , the Fourier transform y − x f (x) dx, −1 ≤ y ≤ 1, 0, if |y| > 1. Now remaining "hard analysis" question is to evaluate the norm b 2 = λ 0 of this operator. The original paper [SP] gives the value 0.57258. We'll give a worse (larger) estimate. It comes if we use (again!) Schur lemma to claim that (161) T 0 ≤ 1 π max |x|≤1 1 −1 sin(y − x) y − x dy := B 2 .
Dissecting the size evolution of elliptical galaxies since z~1: puffing up vs minor merging scenarios

We have explored the buildup of the local mass-size relation of elliptical galaxies using two visually classified samples. At low redshift we compiled a subsample of 2,656 elliptical galaxies from SDSS, whereas at higher redshift (up to z~1) we extracted a sample of 228 objects from the HST/ACS images of the GOODS. All the galaxies in our study have spectroscopic data, allowing us to determine the age and mass of the stellar component. Using the fossil record information contained in the stellar populations of our local sample, we do not find any evidence for an age segregation at a given stellar mass depending on the size of the galaxies. At a fixed dynamical mass there is only a <9% size difference between the two extreme age quartiles of our sample. Consequently, the local evidence does not support a scenario whereby the present-day mass-size relation has been progressively established via a bottom-up sequence, where older galaxies occupy the lower part of this relation, remaining in place since their formation. We find a trend in size that is insensitive to the age of the stellar populations, at least since z~1. This result supports the idea that the stellar mass-size relation is formed at z~1, with all galaxies populating a region which roughly corresponds to 1/2 of the present size distribution. The fact that the evolution in size is independent of stellar age, together with the absence of an increase in the scatter of the relationship with redshift, does not support the puffing up mechanism. The observational evidence, however, cannot reject at this stage the minor merging hypothesis. We have made an estimation of the number of minor merger events necessary to bring the high-z galaxies onto the local relation compatible with the observed size evolution. Since z=0.8, if the merger mass ratio is 1:3 we estimate ∼3±1 minor mergers, and if the ratio is 1:10 we obtain ∼8±2 events.

INTRODUCTION

Present-day galaxies show a clear correlation between mass and size, with the most massive galaxies having the larger sizes. This mass-size relationship has been known both for elliptical and spiral galaxies for many years. With the advent of large surveys, like the Sloan Digital Sky Survey (SDSS, York et al. 2000), it has been possible to quantify this correlation with high accuracy (see e.g. Shen et al. 2003). However, the mechanisms by which this relationship is built remain uncertain. For instance, we do not have conclusive answers to questions like: "were the galaxies born in-situ at the positions where we find them in the local mass-size relation, or were they born in another part of this diagram, drifting to their present location?". If so, "how much have they grown and what are the mechanisms responsible for this displacement?". Answering these questions is directly connected to our understanding of how the assembly of the galaxies has proceeded through cosmic time. In this paper we will particularly focus on spheroidal galaxies, as it has been shown in the last few years that their stellar mass-size relation has changed dramatically with redshift. Several papers have explored the evolution of the stellar mass-size relation of spheroid-like galaxies (e.g. Trujillo et al. 2004; McIntosh et al. 2005; Trujillo et al. 2006a, 2007; Buitrago et al. 2008; Ferreras et al. 2009b; Saracco et al. 2011). In general, they all agree with a significant evolution of this relation with redshift.
Their results can be summarized as follows: at fixed stellar mass, spheroid-like galaxies were significantly more compact at higher redshift (e.g. Daddi et al. 2005; Trujillo et al. 2006b; Longhetti et al. 2007; Zirm et al. 2007; van der Wel et al. 2008; van Dokkum et al. 2008; Cimatti et al. 2008; Damjanov et al. 2009; Carrasco et al. 2010), with an increase in the effective radii by a factor of ∼2 (∼4) since z∼1 (z∼2) (e.g. Trujillo et al. 2007). However, these observational results say little about the amount of size evolution of individual galaxies on the mass-size plane. Nevertheless, at least a few basic statements can be established regarding the growth of individual galaxies based on the current observational evidence. First, at high-z there are no big spheroidal objects, implying that the present-day large elliptical galaxies have either formed recently (in-situ) with large sizes or they are the product of the evolution of previous compact galaxies that populated the high-z stellar mass-size plane. Second, the near absence of compact massive galaxies in the nearby Universe (Valentinuzzi et al. 2010; Taylor et al. 2010), which were very common in the early Universe, indicates that individual objects (at least the very old and compact ones) have evolved significantly in size. Some recent works have conducted a detailed analysis of the buildup of the local spheroid mass-size relationship (Valentinuzzi et al. 2010). These works propose that the formation of this relation is the result of two steps: a) the continuous emergence of galaxies as early-type systems with larger sizes, as cosmic time increases, due to the decreasing availability of gas during their formation phase (Khochfar & Silk 2006), and b) their subsequent growth through either gas expulsion in the so-called puffing up scenario (Fan et al. 2008, 2010; Damjanov et al. 2009) or by minor merging activity (Naab et al. 2009; Hopkins et al. 2009). If the above scenario is correct, i.e. that newly assembled galaxies are born with larger sizes as redshift decreases, we should observe that the number density of spheroid-like massive galaxies at fixed stellar mass decreases with increasing redshift. Furthermore, a gradual change of the age of the galaxies at fixed stellar mass should be expected, in the sense that larger galaxies should be younger. However, there is no compelling evidence of a significant drop in the number density of elliptical galaxies up to z∼1 (see e.g. Ferreras et al. 2009b), weakening this formation scenario. On the other hand, in van der Wel et al. (2008) and Valentinuzzi et al. (2010), there is some hint that larger spheroid-like galaxies, at fixed dynamical and stellar mass, are younger than their compact mass equivalents. In this paper we reexamine the buildup of the mass-size relationship of spheroidal galaxies with two significant improvements in relation to previous work. First, this paper addresses the issue of the evolution of early-type galaxies on the size-mass plane by comparing a nearby and a distant sample of galaxies, classified and analyzed in the same way. We will show in this paper that previous studies of the local stellar mass-size relation of early-type galaxies are severely contaminated by galaxies of other morphological types. For this reason, the present study is the first one exploring objects that have been classified only visually, and not by any other criteria, such as structural parameters or colours.
The second advantage of the present work is that we have quality spectra for all our targets, allowing us, by exploring their spectral energy distributions (SEDs), to obtain reliable star formation histories (SFHs). Spectroscopic data are essential to robustly determine the properties (stellar mass and age) of the underlying stellar populations in both the local and high redshift samples, allowing us to make a much more consistent assessment of the increase in galaxy size on an individual galaxy basis. The information about the ages allows us to explore whether the size evolution depends on the properties of the stellar populations of these galaxies. This information is key to distinguish between the two most likely mechanisms of size growth proposed in the literature for elliptical galaxies: the puffing up versus the minor merging scenario. We will amply discuss in this paper the implications of our findings in relation to these two models.

The paper is structured as follows. In Section 2 we describe the local sample. The connection between stellar age and the mass-size relationship is explored in Section 3. In Section 4 we present our moderate redshift sample and in Section 5 we quantify the size evolution of our galaxies. Section 6 is devoted to exploring which evolutionary scenario is more plausible according to these observations, finally concluding in Section 7 with an overview of our results. In this paper we adopt a standard ΛCDM cosmology, with Ωm=0.3, ΩΛ=0.7 and H0=70 km/s/Mpc.

THE LOCAL SAMPLE

Our local sample is taken from the morphological catalogue of Nair & Abraham (2010, hereafter NA10), obtained via visual classification from SDSS imaging. The NA10 catalogue comprises 14,034 galaxies from SDSS Data Release 4 (DR4, Adelman-McCarthy et al. 2006) in the redshift range 0.01 < z < 0.1 with an extinction-corrected g-band magnitude brighter than 16. From this catalogue, we select the elliptical galaxies (c0, E0 and E+), with T-Type class −5, resulting in a final sample comprising 2,656 galaxies with available spectra. Our choice of visually classified galaxies aims at minimizing the impact of morphological contaminants, which frequently degrade automated classification samples. To illustrate this issue, we compare in Fig. 1 the NA10 sample used in this paper with the Graves et al. (2008) early-type sample used in van der Wel et al. (2009) for an analysis similar to the one presented here. In contrast with our selection, which is purely based on morphology, the Graves et al. (2008) galaxies are selected by the following criteria: a) located on the red sequence, b) no emission lines in their spectra and c) concentration parameter C > 2.5. From the comparison between both samples shown in Fig. 1, it follows that a sample of early-type galaxies based on the above criteria will be contaminated by bulges of both face-on and edge-on spirals. Unfortunately, those contaminants are not distributed evenly throughout the sample. Instead, they tend to concentrate in certain regions of the parameter space, introducing undesired systematic effects. For example, we see that the largest galaxies in the mass-size plane, at a fixed mass, are heavily contaminated by spiral galaxies. Another important advantage of our selection purely based on morphology is that it is neither biased against galaxies with recent star formation activity nor against those with a passive evolution in their star formation history. This is relevant for the purpose of this paper, as shown below.
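To make the selection above concrete, it can be phrased as a simple catalogue query. The sketch below is only illustrative: the column names (TType, z, g_corr, has_spectrum) are hypothetical placeholders, not the actual NA10 field names.

```python
import pandas as pd

# Hypothetical column names; the real NA10 catalogue fields may differ.
na10 = pd.read_csv("na10_catalogue.csv")

ellipticals = na10[
    (na10["TType"] == -5)               # elliptical classes c0, E0 and E+
    & na10["z"].between(0.01, 0.10)     # NA10 redshift range
    & (na10["g_corr"] < 16.0)           # extinction-corrected g-band cut
    & (na10["has_spectrum"])            # keep objects with available spectra
]
print(len(ellipticals))  # the selection in the text yields 2,656 galaxies
```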
We take advantage of the fact that virtually all of the galaxies in the sample of Graves et al. (2008) have their morphology visually studied by the Galaxy Zoo project (Lintott et al. 2011). We use their fraction of votes for ellipticals (p_el), spirals (both clockwise and anti-clockwise, p_sp) and edge-on spirals (p_edge) to identify the Late Type Galaxy (LTG) contaminants. We have carried out a further visual check of these contaminants, finding complete agreement with the results of Galaxy Zoo. Despite the small overall contamination rate (∼1.8%), the face-on LTGs concentrate in the region with large radii, while the edge-on LTGs (∼8%) concentrate towards low values of Re and M_dyn. The spectroscopic data and photometric parameters of the NA10 sample are retrieved from the SDSS archive. We have used spectra from DR7 (Abazajian et al. 2009), to benefit from the improved flux calibration introduced in DR6 (see Adelman-McCarthy et al. 2008). The SDSS spectroscopic data cover a wavelength range from roughly 3,800 to 9,200 Å at an average spectral resolution of 3.25 Å (FWHM). This instrumental resolution is not constant but varies in a complex way with wavelength and fiber arrangement. All spectra are both de-redshifted and corrected for Galactic foreground extinction, using the dust maps of Schlegel et al. (1998). Hereafter, all size estimates are quoted as the circularized effective radius Re ≡ (b/a)^{1/2} × R_deV, with the parameters deVRad_g and deVAB_g taken from the photometric SDSS pipeline. In principle, velocity dispersion data (σ) are also available from the DR7 SDSS pipeline, although with a moderately high fraction of missing values, amounting to over 15% of our SDSS sample. Consequently, we have re-calculated the values of the velocity dispersion with the same spectral fitting method used in this study (STARLIGHT, see §3), taking as velocity dispersion the smoothing parameter of the stellar population mixture that produces the best fit to the observed spectrum. La Barbera et al. (2010) show that there is good agreement between the STARLIGHT and SDSS-DR7 velocity dispersion values, with only a small systematic trend at the low (< 90 km/s) and high (> 280 km/s) ends of the σ range. Very few measurements (0.4%), with σ < 40 km/s, are excluded, because they are considerably smaller than the resolution of the base SSP models (58 km/s). We have used the Jørgensen et al. (1995) prescriptions to correct the velocity dispersion to the same fraction of the effective radius, Re/8, instead of the fixed fiber diameter (3 arcsec).
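A minimal sketch of these two measurements follows. The circularized radius is exactly as defined above; for the aperture correction we assume the Jørgensen et al. (1995) power law σ ∝ r^(−0.04) (the exponent is not quoted in the text and is an assumption here) and a fiber radius of 1.5 arcsec, half the 3 arcsec diameter quoted above.

```python
def circularized_radius(r_dev_arcsec: float, axis_ratio_ba: float) -> float:
    """Re = (b/a)^(1/2) * R_deV, from the SDSS deVRad_g and deVAB_g parameters."""
    return (axis_ratio_ba ** 0.5) * r_dev_arcsec

def sigma_at_re8(sigma_fiber: float, re_arcsec: float, r_fiber: float = 1.5) -> float:
    """Correct a fiber velocity dispersion to an aperture of Re/8.

    Assumes the Jorgensen et al. (1995) power law sigma(r) ~ r^-0.04, so
    sigma(Re/8) = sigma_fiber * (r_fiber / (Re/8))**0.04.
    """
    return sigma_fiber * (r_fiber / (re_arcsec / 8.0)) ** 0.04

re = circularized_radius(r_dev_arcsec=3.2, axis_ratio_ba=0.8)  # illustrative values
print(re, sigma_at_re8(sigma_fiber=200.0, re_arcsec=re))
```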
THE LOCAL MASS-SIZE PLANE: DISTRIBUTION OF GALAXIES ACCORDING TO THEIR STELLAR AGE

Both van der Wel et al. (2009) and Valentinuzzi et al. (2010) have argued that there is an age gradient within the mass-size plane of early-type galaxies: at fixed mass, galaxies with larger sizes are found to be younger. As argued in §1, this observation is expected if newly assembled spheroidal galaxies feature larger sizes than those systems assembled earlier. For this reason, we have revisited the age distribution of our galaxies in the NA10 sample. The age of the stellar populations of our galaxies is estimated as follows. We use the spectral fitting code STARLIGHT (Cid Fernandes et al. 2005) to find combinations of single stellar population (SSP) models that, broadened with a given velocity dispersion, achieve the best match with the observed galaxy spectrum. For the present study, we have used the spectral energy distributions of the MILES SSP models (Vazdekis et al. 2010) with a Kroupa Universal Initial Mass Function (Kroupa 2001). These models are based on the MILES stellar library (Sánchez-Blázquez et al. 2006; www.iac.es/proyecto/miles), which combines both a rather complete coverage of the stellar atmospheric parameters and a relatively high and nearly constant spectral resolution, 2.3 Å (FWHM), optimally suited for the spectral resolution of the SDSS data. Our base for the fitting using STARLIGHT consists of 138 solar-scaled SSP models with 6 different metallicities, ranging from Z=1/50 to 1.6×Z⊙, and 23 different ages, from 0.08 to 11.22 Gyr. Extinction due to foreground dust is modeled with the CCM law (Cardelli et al. 1989) and masks are used to avoid emission lines or bad pixels. The Mstars parameter, the fraction of the initial stellar mass which still remains as stars at a later time, is extracted from the model predictions and used to calculate the stellar mass of our galaxies, Ms. In the present work, we have characterized the stellar population mixture of each galaxy by its mass-weighted average age, calculated as

⟨age⟩_M = Σ_{j=1}^{N*} µ_j t_j,

where N* is the number of SSP models in the base and µ_j is the mass fraction vector, defined as the fractional contribution of the SSP with age t_j and metallicity Z_j to the total flux, converted into mass with the M/L_j of each SSP. Stellar masses, Ms, are also computed with the µ_j and M/L_j. Once each SFH has been calculated, the corresponding lookback times are added in order to set all the histories to a common z=0 ground. This offset ranges between 0.1 (z=0.01) and 1.3 Gyr (z=0.1).
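In code, the mass-weighted age amounts to converting STARLIGHT's light fractions into mass fractions with each SSP's M/L and averaging the SSP ages; the input vectors below are illustrative, not real STARLIGHT output.

```python
import numpy as np

# Illustrative STARLIGHT-like output: light fractions x_j, SSP ages t_j (Gyr)
# and mass-to-light ratios (M/L)_j for each SSP in the base.
x = np.array([0.1, 0.3, 0.6])      # fractional contribution to the total flux
t = np.array([0.5, 5.0, 11.0])     # SSP ages in Gyr
ml = np.array([0.4, 2.0, 4.0])     # (M/L)_j of each SSP

mu = x * ml                        # convert light fractions into mass
mu /= mu.sum()                     # normalized mass-fraction vector mu_j
age_mass_weighted = np.sum(mu * t)  # <age>_M = sum_j mu_j t_j
print(f"mass-weighted age = {age_mass_weighted:.2f} Gyr")
```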
Figure 2 shows the size-mass correlation of our SDSS sample of early-type galaxies, split according to the (mass-weighted) ages determined by STARLIGHT. We follow the spirit of the modeling of van der Wel et al. (2008) and Valentinuzzi et al. (2010), whereby age-segregated samples are expected to occupy different regions of the mass-size relation. In order to maximize the difference according to age, we only show the upper and lower quartiles of the age distribution, corresponding, respectively, to galaxies older than 11.7 Gyr (black crosses) and younger than 10.2 Gyr (grey triangles). These symbols represent the median within bins taken at a fixed number of galaxies per bin. The error bar gives the RMS scatter within each bin. The figure shows that the predicted segregation in the mass-size relation with respect to age is not significant when plotted against stellar mass (panel a). This is in contradiction with the results shown in Valentinuzzi et al. (2010) (their Fig. 3). We have quantified the separation between the young and the old galaxy families by zooming in on the region where the masses of the galaxies of our two extreme age quartiles overlap (Fig. 3). We quantify the size difference as ∆Re/Re ≡ 2(Re,young − Re,old)/(Re,young + Re,old). As we expect from a visual inspection of the figure, the size difference between the two families, at a fixed stellar mass, is negligible (being compatible with zero within the statistical uncertainty). The reasons for this discrepancy could be several. On the one hand, Valentinuzzi et al. (2010) segregate their galaxies using luminosity-weighted ages instead of the mass-weighted ages used here. Another possibility is that their early-type selection criteria, based on the automatic code MORPHOT, could include a larger number of spiral galaxies as contaminants, in contrast with a visual classification. We have explored whether using luminosity-weighted ages changes our results and find that this is not the case. In fact, if we repeat the previous exercise using luminosity-weighted ages, we find that the difference between the two extreme quartiles is 1.7±0.9% (i.e. very similar to the mass-weighted ages). Finally, our stellar mass-size relation is compared with the early-type relation of Shen et al. (2003). The agreement is very good for objects with stellar mass Ms < 3×10^11 M⊙. However, at the high mass end we note that the sizes of our galaxies are slightly larger than those provided by Shen et al. (2003), a result in agreement with Guo et al. (2009), who find a similar underestimate of the Shen et al. sizes for a similar sample of visually inspected early-type galaxies.

Figure 1. Dynamical mass-size relation of the sample used in this paper (NA10 sample; black dots in the right column) compared to the early-type sample selection of Graves et al. (2008, grey data points). This figure illustrates the different positions that spiral galaxies (i.e. contaminants) occupy in this diagram (see text for details). Late-type galaxy contaminants (LTGs) are not distributed homogeneously over the early-type galaxy footprint: face-on LTGs mainly live in the top part of the diagram (large radii), whereas edge-on LTGs populate the bottom-left corner (low sizes and masses).

Our previous results show that at fixed stellar mass galaxies do not show any significant difference in age. However, an interesting change is found when dynamical masses are considered instead of stellar ones (panels c and d). In this case, the age segregation is apparent, with younger galaxies having slightly larger sizes. Under the assumption of dynamical homology (i.e. estimating the dynamical masses as M_dyn = 5σ²Re/G; Cappellari et al. 2006), the size difference among the two extreme quartiles reaches a value of ∼16% (∼13% in the case of luminosity-weighted ages). However, elliptical galaxies are well known for not being an homologous family. If we repeat the same analysis using this time the dynamical mass accounting for the non-homology, following the expression provided by Bertin et al. (2002),

(1) M_dyn = K_V(n) σ² Re / G, with

(2) K_V(n) ≡ 73.32 / [10.465 + (n − 0.94)²] + 0.954,

and n being the Sérsic index of the elliptical galaxies in our sample (determined from Blanton et al. 2005), we find that the size difference, ∆Re/Re, decreases significantly to ∼9% (∼8% in the case of luminosity-weighted ages).
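For concreteness, the homologous and non-homologous dynamical-mass estimates compare as in the sketch below; the K_V(n) coefficient is the fitting function of Bertin et al. (2002), reproduced from that reference rather than from the (garbled) equation in this extraction.

```python
G = 4.301e-6  # gravitational constant in kpc * (km/s)^2 / Msun

def mdyn_homology(sigma_kms: float, re_kpc: float) -> float:
    """Homologous estimate M_dyn = 5 sigma^2 Re / G (Cappellari et al. 2006)."""
    return 5.0 * sigma_kms**2 * re_kpc / G

def mdyn_nonhomology(sigma_kms: float, re_kpc: float, n: float) -> float:
    """Non-homologous estimate M_dyn = K_V(n) sigma^2 Re / G, with the
    Sersic-index-dependent coefficient of Bertin et al. (2002)."""
    kv = 73.32 / (10.465 + (n - 0.94) ** 2) + 0.954
    return kv * sigma_kms**2 * re_kpc / G

# Example: sigma = 200 km/s, Re = 5 kpc, Sersic n = 4 (K_V ~ 4.65 vs. 5)
print(f"{mdyn_homology(200, 5):.2e}  {mdyn_nonhomology(200, 5, 4):.2e}")
```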
Our findings about the size difference between the old and young galaxies at a fixed dynamical mass are in qualitative agreement with the findings of van der Wel et al. (2009).

Figure 2. The grey triangles (black crosses) represent the youngest (oldest) quartiles of the age (mass-weighted) distribution according to our modeling with STARLIGHT. The dotted line in panel a) is the scaling relation of early-type galaxies according to Shen et al. (2003).

Given that dynamical mass estimates depend on the velocity dispersion quadratically, one would expect the size difference of the galaxies in the two age quartiles to be linked to a change in the velocity dispersion between the young and the old population. Panel b) of Fig. 2 confirms this point: at fixed velocity dispersion, the older subsample is significantly larger (35.3±0.5%; 25.3±1.7% in the case of luminosity-weighted ages) than the younger galaxies. Alternatively, one could interpret this result as follows: at fixed effective radius, older galaxies have lower velocity dispersion than their younger counterparts (although the region of overlap between old and young galaxies at fixed size is arguably rather small). Although we agree qualitatively with van der Wel et al. (2009) on a size difference between the young and old galaxy families at a fixed dynamical mass, our results on the size difference at a fixed velocity dispersion are in contrast with theirs. These authors show (in their Fig. 1) that at fixed velocity dispersion the age of the galaxies is independent of their size. The reason for this discrepancy could be twofold: first, their ages are luminosity-weighted, in contrast with our mass-weighted ages; and second, their sample suffers from some contamination by spiral galaxies in key places of the mass-size diagram. Irrespective of the comparison with other works, our results indicate that the size variation due to changes in the stellar population ages of the elliptical galaxies in the local Universe is very small. Although the age trend goes in the direction (i.e. older galaxies being more compact than young ones at fixed stellar mass) that one would expect from a progressive bottom-up scenario for the buildup of the local mass-size relation, it is clear that the differences in size are too small to reproduce the large size variation with cosmic time found at high redshift. We will return to this point more extensively in the following sections. We conclude that the stellar population ages do not trace the epoch of the full assembly of the elliptical galaxies and, consequently, that after the formation of the bulk of their stellar content, elliptical galaxies have experienced a significant evolution in their size.

Dynamical structure change of the galaxies with age

The virial theorem predicts that, at fixed mass, the velocity dispersion will change as the inverse square root of the galaxy size. Consequently, one would expect, due to the strong size evolution with redshift observed in the elliptical population, that the velocity dispersion of the high redshift objects would be significantly larger than that found in local galaxies. However, observations are at odds with this scenario: the velocity dispersion of the elliptical galaxies, at fixed stellar mass, only changes moderately with redshift. Hopkins et al. (2009b) have explained this mild change in the velocity dispersion by suggesting that the contribution of the dark matter halo to the gravitational potential of the galaxy changes with cosmic time. According to that model, in the present Universe the contribution of the dark matter halo to setting the velocity dispersion of the galaxies will be higher than in the past. We can explore whether our local sample shows any hint of a dynamical structure change as a function of age, as suggested by the Hopkins et al. (2009b) idea. To do that, we explore both the baryonic fraction (top) and the velocity dispersion (bottom) of our local galaxies against the age of their stellar populations in Fig. 4. The baryonic fraction is defined as the ratio between the stellar mass and the dynamical mass, and should roughly correspond to the net baryon fraction within the effective radius for our early-type galaxies. We bin the sample according to stellar mass as labelled.
In order to illustrate the effect of non-homology, we show the homologous estimates as thick lines and the non-homologous models (using dynamical masses from Eq. 1) as thin lines. The strong trend habitually found between the stellar populations and the velocity dispersion is evident, with the oldest galaxies being the most massive ones (see e.g. Bernardi et al. 2005; Graves et al. 2008; Rogers et al. 2010; Napolitano et al. 2010), with a larger dark matter content within the optical radius (see e.g. Tortora et al. 2009; Leier et al. 2011).

Figure 4. The stellar mass-weighted age is shown with respect to baryon fraction (top) or velocity dispersion (bottom) for a range of stellar masses as labelled. The points and error bars give the median value and error in age bins chosen at a fixed number of galaxies per bin. The thick lines indicate that the dynamical masses have been calculated assuming homology, whereas the mass estimates assuming non-homology are shown as thin lines.

This trend of an increased dark matter content with galaxy mass is also consistent with the results pertaining to whole halos, as shown when comparing observed stellar mass functions with cosmological halo abundances (see e.g. Moster et al. 2010). We can now probe Fig. 4 in more detail: at fixed stellar mass, the oldest galaxies feature higher velocity dispersions and a lower baryon fraction. The higher velocity dispersion for the older galaxies is in agreement with findings at high redshift and supports the idea of Hopkins et al. (2009b). However, the decrease in the baryonic fraction as a function of age seems at odds with the high-z findings. It is interesting to note that our estimation of the dynamical mass is done with the sizes found in the local Universe, so a direct relation with the baryonic fraction estimated at high-z is not straightforward. From the analysis of the local relation, we find that the age of the most massive local ellipticals is quite homogeneous and also that their dynamical structure change is limited, with Ms/M_dyn ∼ 0.4±0.1. This suggests that the most massive elliptical galaxies formed via an earlier, very homogeneous formation process. This scenario is consistent with the observed lack of evolution in the number density of massive early-type galaxies (see e.g. Fontana et al. 2006; Ferreras et al. 2009b; Banerji et al. 2010). For our intermediate and lower stellar mass bins, elliptical galaxies show a much more important trend between age and dynamical structure. For instance, we see that for present-day Ms ∼ 10^11 M⊙ ellipticals, the baryonic fraction can change between 0.3 and 0.7 and the velocity dispersion between 150 and 250 km/s. We note that our trend, at fixed stellar mass, towards a lower baryon fraction in older populations is at odds with Shankar & Bernardi (2009) and Napolitano et al. (2010). They obtain the opposite trend, namely more dark matter in the younger populations at fixed stellar mass (Napolitano et al. 2010) or corrected luminosity (Shankar & Bernardi 2009). However, our range of stellar masses and ages is much narrower, concentrated towards the high mass end. Furthermore, the age estimates of Napolitano et al. (2010) are based on broadband photometry alone, a method considered robust for the determination of the stellar M/L but not for age estimates. Shankar & Bernardi (2009) use instead the spectroscopic ages from Gallazzi et al. (2005), who use a combination of spectroscopic line strengths. In an independent study carried out with 40,000 ETGs from SDSS (de la Rosa et al.
in preparation), several methods and SSP models are compared. The method of Gallazzi et al. (2005) with Bruzual & Charlot (2003) models provides systematically younger ages than the spectral-fitting technique with the MILES population synthesis models used for the present study. Furthermore, by comparing the performance of each model-method combination on repeated observations of the same SDSS targets (∼2300 repeated spectra), the spectral-fitting approach is shown to be considerably more robust than other age dating methods. The difference between older and younger galaxies may reflect different channels of galaxy formation within the same stellar mass bin. As the size of the galaxies is proportional to the dynamical mass and inversely proportional to the square of the velocity dispersion, we obtain, as expected, a slight trend towards smaller galaxies as a function of the stellar population age. This trend is unable to explain the strong size evolution found at high redshift. The detailed analysis of the local mass-size relation reveals that the information it contains is unable to fully explain which mechanisms the elliptical galaxies have followed to reach their present sizes. For this reason, it is necessary to conduct a direct comparison of the properties of the local galaxies with those of equivalent galaxies at high-z to extract such information. This is what we do in the following sections.

MODERATE REDSHIFT SAMPLE

In order to understand in more detail the size evolution of massive galaxies and its relation to age, we include in our study a sample of visually classified early-type galaxies at moderate redshift (z ≲ 1). The comparison with the local sample allows us to probe the evolution of the mass-size relationship over the past ∼8 Gyr. The deep images of the Great Observatories Origins Deep Survey (GOODS) fields (Giavalisco et al. 2004) taken by the Advanced Camera for Surveys (ACS) on board the Hubble Space Telescope (HST) provide the optimal dataset for visual classification of galaxy morphologies out to redshifts z ≲ 1. We use the catalogue of early-type galaxies of Ferreras et al. (2009a) in the North and South GOODS fields, comprising 910 visually classified early-type galaxies brighter than F775W = 24 mag (AB). For a proper comparison with the evolved local sample, we need a reliable estimate of the stellar age. The broadband photometry of the GOODS sample is not good enough for our purposes, and we consider a subsample with available spectral data. The PEARS sample of early-type galaxies (Ferreras et al. 2009c) comprises 228 galaxies from the GOODS catalogue, with available slitless spectroscopy using grism G800L (HST/ACS). The spectral resolution depends on galaxy size, with an average value R ≡ λ/∆λ ∼ 50 for our objects. This sample covers the redshift range 0.4 < z < 1.3. The lower redshift limit was dictated mainly by the requirement of having the 4000 Å break within the sensitivity range of the grism data. In Ferreras et al. (2009c), stellar ages are determined using a grid of composite models, including chemical enrichment, from which best fit ages and metallicities are obtained. However, in order to reduce the systematics, we only use this modelling to generate the best fit spectra at a resolution similar to that of the local sample.
We note this method should introduce a very small systematic, given that the values of the reduced χ² obtained for the PEARS sample are always of order one, and that the method used in this paper to determine ages uses the full SED for fitting, not individual absorption lines. Ages and stellar masses are re-computed from these spectra, using the same methodology as for the local sample (i.e. STARLIGHT; Cid Fernandes et al. 2005), with the only difference being the age range of the model populations. For these galaxies we restrict the oldest SSPs to the age of the Universe at the redshift of the galaxy. This approach is well justified, as STARLIGHT uses the full SED to constrain the stellar populations, a technique equivalent to the one used with the PEARS dataset. Comparisons between the STARLIGHT ages and stellar masses and those determined with the chemical enrichment modelling in Ferreras et al. (2009c) are fully consistent within the error bars.

Backtracking the evolution of local early-type galaxies

By extracting the star formation histories (SFHs) of our local galaxies, one can backtrack their evolutionary paths and estimate the amount of new stellar mass created due to the formation of new stars, as well as the age of the stellar populations at a given redshift. To minimize systematic effects, we apply the same methodology to both local and distant galaxies to determine their ages. In Fig. 5 we show the predicted amount of stellar mass formed since z∼1 for the galaxies in our local sample, according to their SFHs. Fig. 5 uses the best fit models from STARLIGHT to quantify the net increase in stellar mass from recent phases of star formation. We show the mass growth as the ratio between the stellar mass already in place at some redshift (Mz) and the current mass at redshift zero (M0) for four redshift bins. The sample is split at the median of the age measured at redshift zero. One can see that the stellar mass growth at z ≲ 0.6 stays well below 10% for most of the galaxies, especially for the most massive galaxies, which belong mostly to the oldest half (black solid lines). We have applied to all our galaxies in the local relation the evolution in mass predicted from their SFHs, and we have rebuilt the local stellar mass-size relation taking that evolution into account. We consider both the change in (mass-weighted) stellar age and the change in stellar mass of the galaxy. For simplicity, we assume that, within a galaxy, the star formation history does not have a radial trend. Our sample does not allow us to probe this point in detail, but we note that studies of the colour gradients of early-type galaxies at moderate redshift almost always find the star formation concentrated in the centre, i.e. in a blue core (Ferreras et al. 2009a). The stellar mass-size relation of our PEARS sample in comparison with the local sample is shown in Fig. 6. The figure shows how the local stellar mass-size relation would look at different redshifts if we correct for the stellar mass evolution. One can see that the redshifted local stellar mass-size relation changes very little in the high mass regime. The evolution is more evident at lower masses, where the galaxies clearly deviate from the local relationship.
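Operationally, this backtracking reduces to summing the mass fractions of all SSPs older than the lookback time to the target redshift, in the adopted cosmology (Ωm=0.3, ΩΛ=0.7, H0=70 km/s/Mpc). A minimal sketch, with illustrative (not real) STARLIGHT output vectors:

```python
import numpy as np
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)  # the cosmology adopted in this paper

def mass_in_place(mu, t_gyr, z_target):
    """Fraction M_z / M_0 of the present stellar mass already formed at z_target.

    mu    : normalized mass-fraction vector of the SSP base (sums to 1)
    t_gyr : SSP ages in Gyr, already shifted to a common z = 0 ground
    """
    t_lookback = cosmo.lookback_time(z_target).value  # Gyr
    return mu[t_gyr >= t_lookback].sum()

mu = np.array([0.05, 0.15, 0.80])   # illustrative mass fractions
t = np.array([1.0, 6.0, 12.0])      # ages (Gyr) on the z = 0 ground
print(mass_in_place(mu, t, z_target=0.6))  # mass already in place at z = 0.6
```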
SIZE EVOLUTION

We are now in a position to explore the size evolution of the early-type galaxies after accounting for the stellar mass growth due to new star formation. In fact, the comparison with the observed PEARS sample at similar stellar ages will allow us to determine the evolution of size at a given stellar mass. In the top-left panel of Fig. 6, the local sample is shown using the same criterion as in Fig. 2, with individual galaxies shown as small dots. We include in that panel the local trend of SDSS early-type galaxies (long dashed line, Shen et al. 2003), showing agreement with our local sample, except at the most massive end, as discussed in Section 3. In the following panels, PEARS individual galaxies appear as solid (open) circles, with ages younger (older) than the median within each redshift bin. The standard downsizing trend is apparent in this figure, with the younger PEARS galaxies having the lowest stellar masses. If the model proposed in van der Wel et al. (2008) were correct, with the youngest galaxies being more extended, at a given stellar mass, than their older counterparts, one would expect this segregation to be more evident at higher redshifts, where the effect of lookback time makes it easier to discriminate with respect to age (i.e. a reduced age-metallicity degeneracy). However, no clear trend with respect to galaxy size is found in our data. Our best fit models for the local sample predict very small stellar mass changes (see Fig. 5), at levels that correspond to ∆logMs ≲ 0.05 dex along the horizontal direction in Fig. 6. The comparison with the PEARS sample shows that there is a noticeable "vertical" evolution (i.e. a change in size). This can be illustrated by comparing the (redshift zero) size of the local galaxies with the observed size of the PEARS galaxies, within subsamples of the same stellar age.

Figure 6. Comparison of the stellar mass-size relation for the extrapolated values of the local sample and the PEARS sample, in redshift bins, as labelled. The black crosses (grey triangles) represent the oldest (youngest) half in mass-weighted age of the local sample, extrapolated backwards in time according to the best fit star formation history (see text for details). The circles correspond to the PEARS sample. Open (solid) circles represent galaxies older (younger) than the median value, computed within each redshift bin. Note the local sample is corrected for the evolution of the total stellar mass, but not in size, so that a vertical shift should be expected when comparing both samples (see Figure 7). At z=0 (top-left panel) individual galaxies are shown as dots, and the local early-type stellar mass-size relation from SDSS (Shen et al. 2003) is given as a long-dashed line in all the panels.

Fig. 7 shows the size evolution for galaxies with stellar mass in the range 5×10^10 < Ms/M⊙ < 3×10^11. We have fitted our evolution using the following parametrization: R(z) = R(0)(1 + γz), with R(0) the size obtained from the local stellar mass-size relation (Shen et al. 2003). Our data are compatible with the value γ = −0.657±0.122 for the full galaxy sample, with γ = −0.631±0.176 for the young subsample, and with γ = −0.674±0.160 for the older subsample (uncertainties quoted at the 68% confidence level). Fig. 7 shows that the size evolution is significant, in agreement with, e.g., Trujillo et al. (2007), with galaxies at z∼1 being ∼50% smaller in size than their local counterparts. Notice the small difference between the trends of the sample segregated with respect to age (large open/solid grey circles).
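As an illustration (not part of the source analysis), the fitted parametrization can be evaluated directly to see how much smaller a galaxy of fixed stellar mass was at a given redshift:

```python
gamma = -0.657   # best fit for the full sample (+/- 0.122, 68% c.l.)

def relative_size(z):
    """R(z)/R(0) = 1 + gamma * z, with R(0) from the local Shen et al. (2003) relation."""
    return 1.0 + gamma * z

for z in (0.4, 0.8, 1.0):
    print(f"z = {z:.1f}: R(z)/R(0) = {relative_size(z):.3f}")
# e.g. at z = 0.8 a galaxy of fixed stellar mass is ~53% smaller than today
```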
This lack of age dependence is one of the most important results of this work: it implies that the amount of size evolution that elliptical galaxies undergo since z∼1 is independent of the age of the galaxies in each redshift interval. This means that the full population of elliptical galaxies, independently of its level of star formation, experiences a similar evolutionary mechanism for its assembly. This is once more a result in contradiction with the idea that younger galaxies at all redshifts are born with significantly larger sizes than their older massive counterparts. In other words, our results point to a similar displacement in the stellar mass-size relation of all the galaxies in the sample (independently of their age).

CONSTRAINING THE DIFFERENT EVOLUTIONARY PATHS OF THE ELLIPTICAL GALAXIES SINCE Z∼1

In this section we explore the currently most likely scenarios proposed to explain the evolution of elliptical galaxies on the mass-size plane. We use the results obtained here and in previous papers to constrain those scenarios. In what follows we consider that both the size and the stellar mass growth of the elliptical galaxies can be described as the contribution of three different processes: i) formation of new stars in the galaxies as a result of gas consumption, ii) accretion of already formed stars from the merging of different subunits, and iii) gas ejection from the activity of either an AGN and/or supernova galactic winds.

Figure 7. Size evolution of the PEARS galaxies split according to their stellar age. Only galaxies with stellar mass between 5×10^10 and 3×10^11 M⊙ are considered. Small open (solid) circles show the evolution for galaxies older (younger) than the median within each redshift bin. The diamonds give the size evolution of massive (Ms > 10^11 M⊙) spheroids (Sérsic index n > 2.5) from Trujillo et al. (2007). The error bars of the values of Trujillo et al. (2007) represent the scatter of their sample. The lines represent a linear fit R(z) = R(0)(1 + γz) to the different galaxy populations: solid line (older subsample) and dashed line (younger subsample). The grey solid area is the fit to the full population including the 68% confidence level.

We parameterize the effect of these three processes on the mass and size of the galaxies as follows:

∆Ms = ∆Ms,SF + ∆Ms,acc   (3)
∆re = ∆re,SF + ∆re,acc + ∆re,agn   (4)

with ∆Ms,SF and ∆Ms,acc representing the increase of the stellar mass due to star formation and due to the accretion of new stars into the galaxies, respectively. ∆re,SF, ∆re,acc and ∆re,agn correspond to the increase in size by star formation, by accreted stars, and by expansion due to galactic winds, either created by the effect of a central AGN or by supernovae explosions.

Observational facts

The results of this paper show that ∆Ms,SF is very small (i.e. ∆Ms,SF ≪ Ms) and also that the evolution of the size of the galaxies is quite independent of the age of their stellar population, so ∆re(old) ∼ ∆re(young). Due to the small increase in the stellar mass from in-situ star formation, we assume from now on, to simplify the discussion, that, if any, ∆Ms ≈ ∆Ms,acc for the elliptical galaxies since z∼1.

6.2 Puffing up model: AGN and/or supernova galactic wind effects

Fan et al. (2008, 2010) have proposed a mechanism based on the removal of gas as a result of AGN activity to explain the size growth of early-type galaxies.
According to these authors, the rapid expulsion of large amounts of gas by quasar winds destabilizes the galaxy structure in the inner, baryon-dominated regions, and leads to a more expanded stellar distribution. A similar idea, but based on the gas expulsion associated with stellar evolution, has been proposed by Damjanov et al. (2009). The prediction from the puffing up model can be parameterized as follows:

∆Ms,SF = 0   (5)
∆Ms,acc = 0   (6)
∆Ms = 0   (7)
∆re,SF = 0   (8)
∆re,acc = 0   (9)
∆re = ∆re,agn   (10)

In other words, all the galaxies in the stellar mass-size relation should just evolve vertically in this relation, without any increase of stellar mass. Consequently, the size evolution we observe at fixed stellar mass should be directly interpreted as the total size evolution of the galaxies. This model agrees with the observations in predicting little formation of new stars, due to the removal of gas from the galaxies. In addition, this model fits well with the lack of evidence for significant evolution in the number density of massive ellipticals since z∼1. However, we find that our data are in conflict with the model in several respects. First, according to the Fan et al. (2008) model, after the formation of the compact structure, the AGN activity will remove the gas, triggering a fast growth process (∼20-30 Myr based on recent simulations; Ragone-Figueroa & Granato 2011). This would imply that galaxies with stellar populations older than ∼1 Gyr should already be located on the local stellar mass-size relation. This is not what our data show. We have galaxies (old and young) at the same distance from the local relation at all redshifts. For instance, at z=1 the mass-weighted age of our sample is 3.9 Gyr for the old subsample and 3.5 Gyr for the young subsample. We can consequently conclude that the mechanism operating in the size evolution of our galaxies does not know about the age of the stellar populations. This is in contradiction with the puffing up model. In addition, a natural prediction of the puffing up model is that the scatter of the stellar mass-size relation will increase with redshift (Fan et al. 2010), with some galaxies already in place on the local relation and others still in a very compact phase. We do not observe any increase in the scatter of the stellar mass-size relation with redshift in our data.

Major dry mergers

Major mergers (i.e. mergers of galaxies with similar mass) were first considered as one of the likely paths for size growth in elliptical galaxies. Major dry mergers can increase the size in a way almost directly proportional to the mass increase (e.g. Ciotti & van Albada 2001; Nipoti et al. 2010; Boylan-Kolchin et al. 2006; Naab et al. 2007). This evolution is not strong enough to be compatible with the low number of major mergers observed at least since z∼1 (Wild et al. 2009; de Ravel et al. 2009; López-Sanjuan et al. 2010), as well as with recent numerical simulations (Khochfar & Silk 2009). For this reason, we will not consider this mechanism further.

Minor dry merging

Another possible scenario for elliptical galaxy growth involves minor mergers on parabolic orbits (e.g. Khochfar & Burkert 2006; Maller et al. 2006; Naab et al. 2009; Hopkins et al. 2009b). Through this channel, the newly accreted stars, as well as the redistributed stars of the main galaxy, preferentially populate the outer regions of the objects. For this reason, this mechanism has been considered a very efficient way of growing in size. Fan et al. (2010), following Naab et al.
(2009), show that the fractional variation of the gravitational radius and the velocity dispersion of the main galaxy before (i) and after (f) a minor merger is

r_f/r_i = (1 + η)² / (1 + η^{2−α})   (11)
σ_f²/σ_i² = (1 + η^{2−α}) / (1 + η)   (12)

with η defined through M_f = M_i(1+η) and α representing the exponent of the local stellar mass-size relation (R = bM_⋆^α). Shen et al. (2003) propose α ≈ 0.56 with b = 2.88×10^{−6+11α} (for masses in units of 10^11 M⊙). In what follows, we implicitly assume that the gravitational radius is proportional to the effective radius of our galaxies. This is only strictly correct as long as the galaxies do not change the shape of their surface brightness profiles during the minor merger process. It can be shown that after N mergers of equal mass ratio η, the final mass, radius and velocity dispersion can be written as

M_N = M_i (1 + η)^N   (13)
R_N = R_i [(1 + η)² / (1 + η^{2−α})]^N   (14)
σ_N² = σ_i² [(1 + η^{2−α}) / (1 + η)]^N   (15)

We can now make an estimation of the number, N, of minor mergers a galaxy requires in order to reach the present stellar mass-size relation. The final size of the galaxy can be written in terms of the initial size, the size evolution at a fixed stellar mass (provided by the observations) and the difference in stellar mass as follows:

log R_N = log R_i + ∆log R|_{M, fixed} + α ∆log Ms   (16)

The evolution at fixed stellar mass at different redshifts is determined by the size evolution found in our data, ∆log R|_{M, fixed} = −log(1 + γz), with γ = −0.657 ± 0.122. Using Eqs. (13), (14) and (16) we find for the number of minor mergers:

N = −log(1 + γz) / log[(1 + η)^{2−α} / (1 + η^{2−α})]   (17)

We show in Fig. 8 this number as a function of redshift for two different values of η: 1/3 and 1/10. As expected, the number of minor mergers is a function both of redshift and of the mass increase per merger, η.

Figure 8. Predicted number of minor mergers as a function of redshift according to the observed size evolution in our data. The grey regions represent the number of mergers within a 1σ uncertainty, for two choices of the mass ratio (η). As expected, the number of minor mergers is a function of η, being larger for smaller ratios.

We can use these estimates of the number of minor mergers as a function of redshift to determine the increase in size, stellar mass and velocity dispersion that individual galaxies undergo if their evolution is dictated by the minor dry merging scenario. This is quantified in Table 1. Since z∼0.8, individual objects undergo size growth by a factor of around 3.5, whereas the stellar mass grows by a factor of around 2.5. As expected, the velocity dispersion of the individual galaxies decreases with time due to the minor merging. The evolution is, however, very mild. We can compare this evolution with observed values in the literature. The comparison is not straightforward, as those measurements give the velocity dispersion increase at a fixed stellar mass, whereas we have followed the evolution of individual galaxies, and we observe that the increase in stellar mass is not negligible. The observed velocity dispersion change, ranging from 0.84 to 0.90 since z=0.8, is in good correspondence with the values predicted here from the observed size evolution.
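Numerically, Eq. (17) as reconstructed above reproduces the merger counts quoted in the text within the stated uncertainties; a minimal sketch:

```python
import numpy as np

alpha = 0.56   # slope of the local mass-size relation (Shen et al. 2003)
gamma = -0.657  # size-evolution fit from Section 5

def n_mergers(z, eta):
    """Number of minor mergers of mass ratio eta needed to reach the local
    relation from redshift z, i.e. Eq. (17) as reconstructed above."""
    growth_fixed_mass = -np.log10(1.0 + gamma * z)
    per_merger = np.log10((1.0 + eta) ** (2.0 - alpha) / (1.0 + eta ** (2.0 - alpha)))
    return growth_fixed_mass / per_merger

print(f"z = 0.8, eta = 1/3 : N = {n_mergers(0.8, 1/3):.1f}")   # ~3.3, consistent with ~3+-1
print(f"z = 0.8, eta = 1/10: N = {n_mergers(0.8, 0.1):.1f}")   # ~7.3, consistent with ~8+-2
```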
DISCUSSION AND CONCLUSIONS

In this paper we have explored how the local stellar mass-size relation of elliptical galaxies has been built up since z∼1. We have compiled a sample of visually classified elliptical galaxies out to z∼1 from the GOODS datasets, as well as from SDSS data. All our galaxies have spectroscopic data that enable a robust constraint on the age and mass of the underlying stellar populations. Both the study of the fossil record in the local relation and the analysis of the evolution of the stellar mass-size relation with redshift agree on an evolutionary mechanism that is mostly insensitive to the age of the stellar populations of the galaxies at all redshifts. We do not find any clear evidence for a progressive buildup of the local stellar mass-size relation following a bottom-up sequence. In other words, we do not observe that the smaller galaxies, at fixed stellar mass, are generally older than the larger galaxies. On the contrary, the local stellar mass-size relation seems to be in place (with a similar slope and scatter) at least since z∼1, but with all the galaxies presenting a "vertical drift" towards smaller sizes at earlier epochs. The analysis of our data rejects the puffing up scenario, which proposes that the growth in size is due to the rapid expulsion of large amounts of gas by the effect of AGN- or supernovae-driven winds. In fact, two key predictions of this model, the increase of the scatter in size with redshift at fixed mass and the absence of old galaxies with small sizes, are not observed in our data. Our data, however, are not in conflict with an increase of the galaxy sizes through minor merging. Minor merging has also been favoured by studies using different methods and by samples at higher redshift (e.g. Bezanson et al. 2009; van Dokkum et al. 2010). Under this hypothesis we have calculated the number of minor mergers that would be necessary to build the local stellar mass-size relation in agreement with the observed size evolution. Since z=0.8, we find ∼3±1 mergers with mass ratio 1:3 or ∼8±2 with ratio 1:10. The data analyzed in this work, together with the evidence collected in recent papers (e.g. Kaviraj et al. 2009; Shankar et al. 2010; Nierenberg et al. 2011), leave only the minor merging scenario as a viable mechanism for the size increase of elliptical galaxies, at least since z∼1. Ultimately, however, proving that elliptical galaxies grow by minor merging will require a direct quantification of the minor merger events found in high redshift galaxies and an exploration of the age and metallicity gradients of the stellar populations in local elliptical galaxies.
Analysis of the combined effect of rs699 and rs5051 on angiotensinogen expression and hypertension

Abstract

Background: Hypertension (HTN) involves genetic variability in the renin-angiotensin system and influences antihypertensive response. We previously reported that angiotensinogen (AGT) messenger RNA (mRNA) is endogenously bound by miR-122-5p and that rs699 A > G decreases reporter mRNA in the microRNA functional assay PASSPORT-seq. The AGT promoter variant rs5051 C > T is in linkage disequilibrium (LD) with rs699 A > G and increases AGT transcription. The independent effect of these variants is understudied due to their LD; we therefore aimed to test the hypothesis that increased AGT by rs5051 C > T counterbalances AGT decreased by rs699 A > G, and that when these variants occur independently, this translates to HTN-related phenotypes.

Methods: We used in silico, in vitro, in vivo, and retrospective models to test this hypothesis.

Results: In silico, rs699 A > G is predicted to increase miR-122-5p binding affinity by 3%. Mir-eCLIP results show rs699 is 40-45 nucleotides from the strongest microRNA-binding site in the AGT mRNA. Unexpectedly, rs699 A > G increases AGT mRNA in an AGT-plasmid-cDNA HepG2 expression model. Genotype-Tissue Expression (GTEx) and UK Biobank analyses demonstrate that liver AGT expression and HTN phenotypes are not different when rs699 A > G occurs independently from rs5051 C > T. However, GTEx and the in vitro experiments suggest rs699 A > G confers cell-type-specific effects on AGT mRNA abundance, and suggest that paracrine renal renin-angiotensin-system perturbations could mediate the rs699 A > G associations with HTN.

Conclusions: We found that rs5051 C > T and rs699 A > G significantly associate with systolic blood pressure in Black participants in the UK Biobank, demonstrating a fourfold larger effect than in White participants. Further studies are warranted to determine if altered antihypertensive response in Black individuals might be due to rs5051 C > T or rs699 A > G. Studies like this will help clinicians move beyond the use of race as a surrogate for genotype.

• … C > T in the liver, demonstrating that these variants instead may be involved in influencing hypertension through nonliver-mediated mechanisms.
• The results demonstrate that rs699 A > G and rs5051 C > T are associated with AGT messenger RNA abundance in a cell-type-specific manner and have small but clinically meaningful associations (up to 2.7 mmHg) with blood pressure.
• The association between rs699 A > G and rs5051 C > T and increased AGT expression in the kidney may have clinical significance, since the kidney expresses the necessary enzymes to convert AGT to the prohypertensive angiotensin II, as well as expressing the angiotensin II and pressor-control receptors responsible for blood pressure-raising effects.
• Further studies are warranted to investigate the potential for direct effects of rs699 A > G or rs5051 C > T on the kidney and to determine if the reason for altered antihypertensive response in Black individuals might be due, in part, to the increased allele frequency of rs5051 C > T or rs699 A > G.
| INTRODUCTION

Hypertension (HTN) is known to involve genetic variability in the renin-angiotensin system (RAS),1 and one of the most implicated and studied components of the RAS is angiotensinogen (AGT). AGT is a 485 amino acid pre-prohormone that is cleaved by renin to the decapeptide angiotensin I, followed by angiotensin-converting enzyme (ACE) cleavage to the octapeptide angiotensin II, which causes salt and fluid retention by the kidneys to raise blood pressure. Even small increases in AGT expression have been shown to significantly increase blood pressure.2,3 Two single nucleotide polymorphisms (SNPs) in AGT, rs699 (a missense SNP) and rs5051 (a promoter SNP), have been associated with increased AGT messenger RNA (mRNA) expression and plasma AGT protein concentrations,4-7 suggesting a mechanistic basis for suspecting that these variants contribute to HTN-related phenotypes. Rs699 A > G is a common variant that codes for a methionine to threonine (M > T) substitution at position 259 of the AGT protein (this variant has been previously referred to as M235T in the literature). An association with HTN was first published in 1992 in a study comparing hypertensive subjects to controls, in which increases in AGT plasma concentrations were found in subjects homozygous for the G allele.7 In attempts to find a genetic risk factor for HTN, many studies have since been conducted searching for associations between this variant and treatment response or disease susceptibility. There is controversy, since some studies have not found statistically significant associations,8-14 but as a whole there appears to be a connection between the rs699 locus and both HTN4,15-25 and response to antihypertensive drugs, including ACE inhibitors,26,27 angiotensin receptor blockers,28-30 and aldosterone antagonists.31 Of particular importance, in vitro evidence shows that rs699 A > G (amino acid change: M > T) has no effect on the conversion of AGT to angiotensin I by renin, which is supported by the fact that the variant does not lie within the renin binding site.32 The finding that rs699 A > G has no deleterious effect on protein activity suggests that disease associations could involve other mechanisms (e.g., microRNA regulation, splice site modification, etc.) or one or more other causal variants in linkage disequilibrium (LD). The AGT promoter variant rs5051 C > T is in LD (r² = 0.94) with rs699 A > G. In vitro assays have shown rs5051 C > T significantly increases AGT transcription (up to 68.6%) through alterations in transcription factor binding.32,33 Due to the LD of these variants, studies in humans have not detangled the effect of each variant independently. In vitro studies of these SNPs have unraveled complex mechanistic hypotheses, but there remains a gap in understanding how each variant in isolation contributes to AGT expression in the liver (the main tissue which feeds AGT to the systemic circulation) and in important extrahepatic tissues, such as the kidney, brain, and vasculature.

We previously reported that (1) AGT mRNA is endogenously bound by miR-122-5p in human hepatocytes using the mir-eCLIP assay and that (2) rs699 A > G significantly decreases reporter mRNA levels in hepatocytes using the high-throughput functional screening assay PASSPORT-seq.34 Since microRNAs bind to and decrease mRNA abundance, our assay results suggest miR-122-5p decreases AGT mRNA more in the presence of the rs699 A > G variant. The putative decrease in reporter mRNA due to rs699 A > G would, in theory, oppose (i.e., "counterbalance") the effect of rs5051 C > T on overall AGT mRNA expression in the human liver. The opposing effect of these tightly linked variants could make sense in the context of human evolution, assuming that an imbalance in these variants would lead to unfavorable cardiovascular or other consequences. In addition to its published association with HTN, rs699 A > G has also been associated with increased power and strength performance,35 supporting the possibility of its role in natural selection. Both rs699 A > G and rs5051 C > T are common in humans, with variant (rs699 G) allele frequencies ranging from 33% in Danish people to 95% in African people, depending on the data source.36 These variants occur independently from one another in about 1.3% of alleles (or 2%-3% of people) based on LDpair37 analysis in 1000 Genomes Project data.
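For intuition only (this calculation is not in the source), the fraction of haplotypes carrying one variant allele but not the other follows from r² and the two allele frequencies; the frequencies used below are illustrative placeholders, not the actual population values.

```python
import math

def discordant_haplotype_fraction(r2: float, p_a: float, p_b: float) -> float:
    """Fraction of haplotypes carrying one variant allele but not the other,
    given LD r^2 and the two variant allele frequencies (illustrative only;
    assumes positive D, i.e. the two variant alleles travel together)."""
    d = math.sqrt(r2 * p_a * (1 - p_a) * p_b * (1 - p_b))  # D recovered from r^2
    f_a_not_b = p_a * (1 - p_b) - d  # carries rs699 G but not rs5051 T
    f_b_not_a = p_b * (1 - p_a) - d  # carries rs5051 T but not rs699 G
    return f_a_not_b + f_b_not_a

# With r^2 = 0.94 and equal illustrative allele frequencies of 0.42, about
# 1.5% of haplotypes carry the two variants independently, the same order
# of magnitude as the ~1.3% LDpair estimate quoted above.
print(discordant_haplotype_fraction(0.94, 0.42, 0.42))
```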
Since microRNAs bind to and decrease mRNA abundance, our assay results suggest miR-122-5p decreases AGT mRNA more in the presence of the rs699 A > G variant. The putative decrease in reporter mRNA due to rs699 A > G would, in theory, oppose (i.e., "counterbalance") the effect of rs5051 C > T on overall AGT mRNA expression in the human liver. The opposing effect of these tightly linked variants could make sense in the context of human evolution, assuming that an imbalance in these variants would lead to unfavorable cardiovascular or other consequences. In addition to its published association with HTN, rs699 A > G has also been associated with increased power and strength performance,35 supporting the possibility of its role in natural selection. Both rs699 A > G and rs5051 C > T are common in humans, with the variant (rs699 G) allele frequencies ranging from 33% in Danish people to 95% in African people, depending on the data source.36 These variants occur independently from one another in about 1.3% of alleles (or 2%-3% of people) based on LDpair37 analysis of 1000 Genomes Project data. However, HTN phenotypes have not yet been intentionally studied in humans where these SNPs occur independently from one another. Thus, our hypothesis is that individuals with unbalanced rs699 A > G and rs5051 C > T genotypes (i.e., having more of one variant allele than the other) would exhibit corresponding changes in AGT-related phenotypes, specifically blood pressure and the development of HTN. The objective of this study was to assess the isolated effect of rs699 A > G in silico, in vitro, in vivo, and retrospectively in clinical and Biobank data where rs699 A > G and rs5051 C > T occur independently of each other in research participants and samples. Table 1 provides a clear depiction of our hypothesis.

| METHODS

| In silico

We used RNAduplex to test the change in binding affinity between hsa-miR-122-5p and the AGT mRNA with and without the rs699 A > G variant. The ViennaRNA suite, version 2.4.14,38 was used to call RNAduplex with default settings to test the effect of rs699 A > G on binding strength. The miR-122-5p sequence was obtained online from miRBase (https://www.mirbase.org/cgi-bin/mirna_entry.pl?acc=MI0000442), and the AGT region of rs699 was obtained from the PASSPORT-seq oligo sequence used to clone the reporter construct.34 This sequence included complementary primer ends, which were included in the analysis to make sure miR-122-5p was not predicted to target these. VARNA, version 3.93,39 was used to create the RNA-binding schematic figure by entering the dot-bracket notation output from RNAduplex and manually editing the figure (using Adobe Illustrator version 27.1.1) to display as intended.
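To make the RNAduplex step concrete, the following is a minimal Python sketch, assuming the ViennaRNA Python bindings expose duplexfold() as in recent releases. The miR-122-5p sequence is the miRBase entry referenced above, but the AGT target windows are synthetic placeholders; we do not reproduce the actual PASSPORT-seq oligo here.

```python
# Minimal sketch of the duplex-energy comparison, assuming the ViennaRNA
# Python bindings (import RNA) provide RNA.duplexfold(). The target
# sequences below are synthetic placeholders, NOT the study's oligo.
import random
import RNA

MIR_122_5P = "UGGAGUGUGACAAUGGUGUUUG"  # hsa-miR-122-5p (miRBase MI0000442)

# Hypothetical AGT mRNA windows around rs699 (reference vs. variant base).
TARGET_REF = "CACCCUGGCUCCCAUCCUGAAAGUGGACACCAACCUUUUC"
TARGET_VAR = "CACCCUGGCUCCCGUCCUGAAAGUGGACACCAACCUUUUC"

def duplex_dg(mirna: str, target: str) -> float:
    """Predicted duplex free energy (kcal/mol); lower = stronger binding."""
    return RNA.duplexfold(mirna, target).energy

dg_ref = duplex_dg(MIR_122_5P, TARGET_REF)
dg_var = duplex_dg(MIR_122_5P, TARGET_VAR)

# Contextualize the change as in the Results: weakest plausible binding
# from 2000 scrambles of the target, strongest from a perfect-match siRNA.
def revcomp(seq: str) -> str:
    pairs = {"A": "U", "U": "A", "G": "C", "C": "G"}
    return "".join(pairs[b] for b in reversed(seq))

weakest = max(
    duplex_dg(MIR_122_5P, "".join(random.sample(TARGET_REF, len(TARGET_REF))))
    for _ in range(2000)
)
strongest = duplex_dg(revcomp(TARGET_REF), TARGET_REF)

pct = 100 * abs(dg_var - dg_ref) / abs(strongest - weakest)
print(f"deltaG: ref {dg_ref:.1f} -> var {dg_var:.1f} kcal/mol "
      f"(~{pct:.0f}% of the plausible range)")
```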
| In vitro

We constructed plasmid expression systems for the full-length AGT cDNA with or without the rs699 A > G variant on an otherwise isogenic background to test the effect of the variant on AGT mRNA abundance. The untagged human AGT cDNA expression plasmid, under a cytomegalovirus promoter, was obtained from Origene (catalog # SC322276), along with an empty vector to be used as a control (catalog # PS100020). The AGT plasmid initially contained the rs699 A > G variant ("AGT.G") and thus was sent to GenScript for site-directed mutagenesis to mutate the AGT cDNA from G (variant) to A (wild type, "AGT.A"). Since AGT is on the negative strand in the human genome, the presented nomenclature of this mutagenesis has been complemented to the positive strand. The resulting plasmids (empty vector, AGT.G, and AGT.A) were amplified in 5-alpha competent Escherichia coli (New England BioLabs catalog # C2987) and purified using Qiagen MaxiPrep according to the manufacturer's instructions. The vector plasmid concentration was 814 ng/µL, AGT.G was 485 ng/µL, and AGT.A was 616 ng/µL, as determined by Nanodrop A260. Qubit DNA quantitation confirmed these measurements were accurate relative to each other, although Qubit-obtained concentrations were slightly higher than those from the Nanodrop. One million HepG2 cells were thawed from frozen stock and grown overnight in DMEM + 10% fetal bovine serum (FBS), resulting in 8 million cells. Cells were resuspended in 24 mL of media, and 1 mL was distributed into each well of two six-well plates. A similar procedure was used for HEK293 and HT29 cells.40,41 Lipofectamine 2000 was used according to the manufacturer's instructions, using media without FBS, to transfect 4.4 µg of plasmid per well. Cells were incubated for 3.5 h, followed by replacement with FBS-containing media. Cells were incubated at 37 degrees Celsius for 48 h before RNA isolation using Qiagen RNeasy according to the manufacturer's protocol. TaqMan Gene Expression Assays for AGT (Hs01586213_m1, which does not bind near rs699) and glyceraldehyde-3-phosphate dehydrogenase (GAPDH, Hs02786624_g1) were used according to the manufacturer's protocol to quantify the relative mRNA abundance in cells transfected with AGT.G versus AGT.A. Only one transfection was done for each group in the HEK293 cells, whereas three transfections were done per group in HT29 and four transfections per group in HepG2. Each separate transfection was considered a biological replicate. Quantitative polymerase chain reaction (qPCR) was done in triplicate for each bioreplicate. QuantStudio was used for the qPCR reactions.

To determine the proximity of rs699 to the miR-122-5p binding site, we repeated mir-eCLIP in five additional replicates of primary hepatocytes, resulting in a total of six mir-eCLIP data sets for analysis. The mir-eCLIP assay was performed in single-donor primary hepatocytes obtained from Xenotech (lot HC2-47) using the kit reagents and protocol provided by Eclipse Bioinnovations (catalog number not yet available). This protocol is based on the published single-end seCLIP protocol42 and was refined as previously described.34 Pooled hepatocytes were used for Run 1 (previously completed), and data from this assay34 were used. Single-donor hepatocytes were used for Runs 2-6. Runs 2 and 3 were performed by Eclipse Bioinnovations, and Runs 4-6 were performed by the study team. Over 50 million reads were obtained for each sample from Illumina NovaSeq 6000 sequencing. Hyb (version 1) was used to call chimeric reads from the sequencing data.43 A custom R pipeline was used to further analyze the Hyb output, and bedgraph files were generated for upload into the UCSC Genome Browser.44 UCSC Genome Browser printouts were further edited in Adobe Illustrator version 27.1.1 in accordance with the UCSC Genome Browser user license.
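As a worked illustration of the relative quantification applied to these qPCR data (spelled out in the statistical methods below), the following sketch computes a 2^-ΔΔCt-style fold change from invented Ct values; the numbers are hypothetical and chosen only to mimic the direction of the HepG2 result.

```python
# Worked delta-delta-Ct sketch with hypothetical cycle thresholds;
# AGT Ct values are normalized to GAPDH and expressed relative to the
# wild-type (AGT.A) group, as described in the statistical methods.
import statistics

agt_ct = {"AGT.A": [22.1, 22.3, 22.0, 22.2], "AGT.G": [21.3, 21.2, 21.5, 21.4]}
gapdh_ct = {"AGT.A": [17.0, 17.1, 16.9, 17.0], "AGT.G": [17.1, 17.0, 17.0, 17.1]}

def fold_change(group: str, reference: str = "AGT.A") -> float:
    # delta Ct: normalize each group's AGT Ct to its own GAPDH Ct.
    d_group = statistics.mean(agt_ct[group]) - statistics.mean(gapdh_ct[group])
    d_ref = statistics.mean(agt_ct[reference]) - statistics.mean(gapdh_ct[reference])
    # delta-delta Ct, converted to a linear fold change.
    return 2 ** -(d_group - d_ref)

print(f"AGT.G vs. AGT.A fold change: {fold_change('AGT.G'):.2f}")  # ~1.8 here
```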
| In vivo

To assess AGT expression levels across genotype groups according to our hypothesis, we requested access to Genotype-Tissue Expression (GTEx) Project v8 data through dbGaP. Once approved, files were downloaded according to the AnVIL instructions provided online (https://anvilproject.org/learn/reference/gtex-v8-free-egress-instructions). BAM files were downloaded for "Liver," "Brain-Cerebellum," "Colon-Sigmoid," "Kidney-Cortex," "Tibial Artery," and "Coronary Artery," and featureCounts (from the Subread package, version 2.0.1) was used to determine read counts for AGT for each genotype group. Rs699 and rs5051 were extracted from whole-genome sequencing VCF files using Plink version 2.0.45 Expression quantitative trait loci (eQTLs) and cross-tissue expression data were determined using the GTEx web browser by searching for "rs699" or "rs5051." Web browser printouts were assembled into figures using Adobe Illustrator version 27.1.1 in accordance with the GTEx user license.

| Clinical and Biobank

To test our hypothesis in HTN phenotypes, we used two Biobank data repositories: Indiana University Simon Comprehensive Cancer Center Advanced Precision Genomics (APG) clinic research participants46 and the UK Biobank. APG patients provided consent for research and for reporting research results generated with their data. APG data were retrospectively assessed for HTN phenotypes under a research protocol approved by Indiana University's Institutional Review Board, utilizing available clinical sequencing data to determine rs699 and rs5051 variant genotypes for each participant. Pharmacy prescription claims data for each participant were used to determine total blood pressure medication fills per year or the maximum number of unique blood pressure medications filled in any quarter of the year. UK Biobank data were accessed under an approved material transfer agreement and downloaded according to UK Biobank instructions. The provided imputed genotype files were used to determine rs5051, and the measured genotype files were used to determine rs699. Plink version 2.0 was used to filter by variant and generate .raw genotype files for upload into R.
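Given rs699 and rs5051 genotypes extracted this way, the ordinal genotype groups referred to throughout (Table 1) can be constructed from allele counts. Because Table 1 itself is not reproduced in the text, the exact within-band ordering below is our illustrative assumption; only the banding (groups 1-3 carry more rs5051 T than rs699 G alleles, groups 4-6 are balanced, groups 7-9 carry more rs699 G) and the extremes (groups 1 and 9) are anchored in the text.

```python
# Sketch of one plausible ordinal genotype grouping behind Table 1.
# The within-band ordering is an illustrative assumption.
GROUPS = {
    # (rs699 G allele count, rs5051 T allele count) -> ordinal group
    (0, 2): 1, (0, 1): 2, (1, 2): 3,   # more rs5051 T than rs699 G
    (0, 0): 4, (1, 1): 5, (2, 2): 6,   # balanced (the vast majority, due to LD)
    (1, 0): 7, (2, 1): 8, (2, 0): 9,   # more rs699 G than rs5051 T
}

def genotype_group(rs699_g: int, rs5051_t: int) -> int:
    """Map per-individual variant allele counts to a genotype group."""
    return GROUPS[(rs699_g, rs5051_t)]

assert genotype_group(2, 2) == 6  # double homozygous variant, balanced
assert genotype_group(1, 2) == 3  # one more rs5051 T allele than rs699 G
```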
UK Biobank data fields 4080, 4079, 2966, 21022, 21000, 6177 + 6153, 131286, 21001, 20116, 22032, 1558, 1478, 30710, 30630, 30640, and 26414 + 26431 + 26421 were used to determine systolic blood pressure, diastolic blood pressure, age at HTN diagnosis, age at recruitment, ethnic background, taking blood pressure medication (yes/no), date of HTN diagnosis, body mass index (BMI, kg/m²), smoking status (0 = never, 1 = previous, 2 = current), physical activity (0 = low, 1 = moderate, 2 = high), alcohol intake (1 = daily, 2 = 3-4x/week, 3 = 1-2x/week, 4 = 1-3x/month, 5 = special occasions, 6 = never), preference for adding salt to food (1 = never/rarely, 2 = sometimes, 3 = usually, 4 = always), blood C-reactive protein (CRP), blood apolipoprotein A, blood apolipoprotein B, and education level, respectively. Sex was predefined in the .fam files provided with the UK Biobank data, which was determined by chromosome X intensity. UK Biobank genotypes were quality controlled by the UK Biobank organization (https://biobank.ndph.ox.ac.uk/ukb/refer.cgi?id=531). Further details of how the phenotype and covariate values were averaged, combined, and cleaned are provided in the R code published on GitHub (https://github.com/Nickpowe/AGT_rs699_HTN). For example, systolic blood pressure was averaged across 1-4 visit instances for each individual. Race and ethnicity were broadly grouped into White, Black, Indian, Asian, White and Black, White and Asian, Prefer not to Answer, Mixed, and Other, based on the more granular data provided by UK Biobank survey responses. Blood biochemistry data were obtained from just the first instance (whenever multiple instances existed), since most of the blood pressure values were recorded at the first instance.

| Statistical methods

qPCR was analyzed using two-sample, two-sided t tests on data that (1) averaged the technical replicates or (2) kept technical replicates separate (both results provided). Fold changes were calculated by comparing to the wild-type group (AGT.A) after normalizing the wild-type cycle thresholds to the average of all wild-type cycle thresholds. All cycle threshold values were also normalized to GAPDH. GTEx eQTLs were precalculated (https://gtexportal.org/home/methods) based on the normalized expression slope (slope of the regression estimate) and are displayed with 95% confidence intervals. Statistical analyses of the GTEx genotypes and the APG data were not conducted due to low sample sizes in the high and low genotype groups. UK Biobank data were analyzed using base R linear or logistic regression. Covariates were included as independent factors along with genotype groupings in multiple regression models. Tests for normality were done visually, given the large amount of data. Rs5051 was imputed, and imputation dosages were converted to best-guess genotypes by rounding to the nearest whole genotype number (0, 1, or 2) when used in genotype groupings; otherwise, the imputation dosage was used in the regression. Statistical analyses were stratified into subgroups based on age (<50 or ≥50), blood pressure medication use (yes or no), and race and ethnicity groups. Regression estimates for systolic blood pressure, diastolic blood pressure, mean blood pressure (average of systolic and diastolic), and age at HTN diagnosis were compared by dividing the estimates by the standard deviation for the whole cohort (for each phenotype, respectively), resulting in z-score-like values. R version 4.1.1 was used for all statistical analyses and to generate all figures.
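The UK Biobank models were fit in base R (code available on GitHub, as noted above). Purely as an illustration of the covariate-adjusted model structure, a Python sketch with synthetic data and a reduced covariate set might look as follows; column names and values are hypothetical stand-ins for the derived phenotype fields.

```python
# Illustrative covariate-adjusted association test on synthetic data.
# The real analysis used base R with the full covariate set (sex, age,
# BMI, smoking, activity, alcohol, salt preference, CRP,
# apolipoproteins A/B, and education).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 10_000
df = pd.DataFrame({
    "sbp": rng.normal(138, 18, n),       # systolic blood pressure (mmHg)
    "rs699_g": rng.integers(0, 3, n),    # 0/1/2 variant alleles
    "sex": rng.integers(0, 2, n),
    "age": rng.integers(40, 70, n),
    "bmi": rng.normal(27, 4, n),
    "smoking": rng.integers(0, 3, n),    # 0 never, 1 previous, 2 current
})

model = smf.ols("sbp ~ rs699_g + sex + age + bmi + C(smoking)", data=df).fit()
print(model.params["rs699_g"], model.pvalues["rs699_g"])

# z-score-like effect as in Figure 7: the estimate divided by the
# cohort standard deviation of the phenotype.
print(model.params["rs699_g"] / df["sbp"].std())
```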
| RESULTS

| In silico analysis of microRNA-binding site creation

We previously reported that AGT mRNA is endogenously bound by miR-122-5p in liver cells using the mir-eCLIP assay and that rs699 A > G significantly decreases reporter mRNA levels in liver cells using the high-throughput functional screening assay PASSPORT-seq.34 Thus, we further investigated the role of miR-122-5p on AGT expression using in silico tools. The existing microRNA-binding and mirSNP prediction tools (mirDB, miRdSNP, PolymiRTS, and others) focus on 3'UTRs of target genes and thus fail to consider the rs699 site, which is in exon 2 of AGT. Therefore, we used RNAduplex to test the hypothesis that rs699 A > G creates a stronger binding site for miR-122-5p, as this could explain the downregulation seen in the reporter assay. The results of the RNAduplex binding analysis showed that the variant construct (Figure 1, right) was predicted to pair an additional four bases (three of which are GU pairs) compared with the reference construct (Figure 1, left) and increased the binding strength from a deltaG of −11.3 to −12.3 kcal/mol (a lower free energy state supporting stronger binding). To put this deltaG change in context, we randomly scrambled the target sequence 2000 times and recorded the smallest deltaG, and subsequently recorded the deltaG of a perfectly matching siRNA, to generate the highest and lowest plausible values (−7.1 to −39.3). The largest plausible RNAduplex change was therefore 32.2 kcal/mol, and compared with these extremes, the rs699 A > G variant increased the binding strength by 3% (a 1.0 kcal/mol change over a 32.2 kcal/mol plausible range).

| Mir-eCLIP to verify miR-122-5p binding sites

To further test the hypothesis that miR-122-5p binds near the location of rs699, we repeated the mir-eCLIP assay in five replicates of individual-donor hepatocytes to gain repeated measures of where microRNAs bind in this location. We found that the rs699 location is about 40-45 nucleotides away from the strongest microRNA-binding peak (Figure 2) in the entire AGT mRNA. miR-122-5p constitutes the majority of the mir-eCLIP reads, which is not surprising given that miR-122-5p makes up close to 70% of the abundance of all microRNAs in hepatocytes.34 We also detected miR-26a, miR-26b, and several other microRNAs contributing to the main peak shown in Figure 2, but these were not highly prominent. This analysis demonstrates that there is microRNA-binding activity occurring in the specific location of rs699, but that the rs699 A > G variant may be exerting more of its effect indirectly, given its close proximity to (but not exact overlap with) a very strong site of microRNA binding.
| eQTL analysis of rs699 in GTEx

Since the reporter assay demonstrated a fivefold decrease in mRNA, and the role of miR-122-5p in regulating AGT mRNA abundance is supported by the in silico evidence and the in vitro mir-eCLIP evidence, we used the GTEx data to test the association between rs699 A > G and AGT mRNA expression in human liver. Figure 3 (right) shows that there is no significant change in AGT mRNA abundance according to the rs699 genotype in the liver. However, in other tissues, like the sigmoid colon and cerebellum, there is a strong, significant increase and decrease in AGT, respectively. Given the potential to affect the RAS, we also analyzed kidney and arterial tissues. The arterial tissues show a small decrease in AGT, and the kidney cortex tissue shows a significant increase in AGT in the presence of the rs699 G allele (Figure 3, right). Because individuals with rs699 A > G had increased AGT expression in the kidney cortex, we sought to understand the cell-specific expression patterns within the kidney. To this end, we analyzed publicly available data from the Kidney Precision Medicine Project (www.kpmp.org).47 Although genotype data are not available, cell-specific expression patterns were observed in injury. Significantly, AGT and ACE expression was observed within the proximal tubular cell (p < 0.001), which contributes to hypertensive phenotypes through AGT-induced salt and water reabsorption.48,49 Interestingly, the Kidney Tissue Atlas also revealed increased expression of AGT in degenerative vascular smooth muscle cells (VSMCs), a form of injured VSMC more abundant in HTN and chronic kidney disease. These findings further support a triad of associations in the same direction of effect: between (1) SNP rs699 A > G and increased HTN risk, (2) SNP rs699 A > G and increased renal cortical expression of AGT, and (3) AGT expression and phenotypic cellular changes in HTN and CKD. Figure 3 (left) shows the bulk mRNA expression for each tissue and reveals a possible trend towards a stronger rs699 A > G downregulation effect in tissues expressing higher levels of AGT, like the brain and arteries. As expected, due to the significant LD, rs5051 C > T results in very similar findings (not shown), and thus these two variants need to be decoupled to address our hypothesis.

We hypothesized the effect of rs699 and rs5051 genotypes would occur according to Table 1, where rs699 A > G decreases AGT expression, reduces blood pressure, and is protective of HTN, and where rs5051 C > T increases AGT expression, increases blood pressure, and contributes to increased development of HTN. In the GTEx liver data, there were three individuals who had one more rs5051 T allele than rs699 G, and three individuals who had one more rs699 G allele than rs5051 T. We expected to see elevated AGT expression in genotype groups 1-3 (more rs5051 T alleles than rs699 G) and decreased AGT expression in groups 7-9 (more rs699 G alleles than rs5051 T). Due to the low sample size, we could not conclude from this analysis whether the two SNPs have opposing effects on AGT expression, because there were too few individuals in the high and low groups (Figure 4). The analysis in genotype groups 4-6 for all tissues demonstrated a similar trend to that observed in the individual-SNP GTEx data shown in Figure 3, and the lower and upper genotype groups did not reveal marked effects on AGT expression.
| Mechanism in different cell types

Given that significant AGT expression eQTLs existed in opposite directions in the colon and cerebellum, we conducted transient transfections of AGT in three cell types to test the effect of rs699 A > G on mRNA abundance, using a plasmid containing the full AGT gene with or without rs699 A > G on an otherwise isogenic background. We could not identify a suitable cell model for cerebellum; therefore, we utilized only liver (HepG2) and colon (HT29) models. We also included kidney cells (HEK293). We expected that in HepG2 cells, where miR-122-5p is the 48th most highly expressed microRNA (out of 680 total), AGT would be decreased by the rs699 A > G variant. However, rs699 A > G caused a near-significant 1.8-fold increase in AGT mRNA in HepG2 liver cells after a 48-h transient transfection (Figure 5A). On the basis of the GTEx results, we expected HT29 colon cells to express more AGT from the rs699 A > G construct, yet rs699 A > G caused a near-significant twofold decrease in AGT mRNA in HT29 colon cells after a 48-h transient transfection (Figure 5B). Rs699 A > G did not cause a significant change (1.1-fold) in AGT mRNA in HEK293 kidney cells after a 48-h transient transfection (Figure 5C). These results isolate the effect of rs699 A > G and indicate that cell-type-specific factors lead to seemingly differential regulation of AGT mRNA abundance. Given the differing effect of this SNP in different cell types, further work is needed, if warranted, to understand how AGT expression is modulated by this SNP.

| Analysis of Biobank data

To further test our hypothesis, and to determine if there is value in more rigorous studies of these variants, we used available clinical cohort and Biobank data. The first cohort consisted of cancer patients from the Indiana University Advanced Precision Genomics clinic, for whom we have HTN medication fill data. We divided the subjects into genotype groups as shown in Table 1, and Figure 6A,B shows the results of this analysis for two phenotypes: (1) total blood pressure medication fills per year, and (2) maximum number of concomitant unique blood pressure medications filled in any quarter of a year. We did not perform statistical comparisons on these data, since there were too few individuals in the high- and low-risk genotype groups. However, it appears there could be a difference in HTN phenotypes across the genotype groups.

The trend towards lower HTN medication usage in the groups with more rs699 G alleles (groups 7-9) appeared supportive of our hypothesis; therefore, we tested our hypothesis again in a much larger retrospective analysis of the UK Biobank data. Among 462,417 individuals tested in the UK Biobank, we found that rs699 A > G was significantly associated with a 0.11 mmHg increase in systolic blood pressure per rs699 G allele (p = 0.005). Not surprisingly, rs5051 C > T was also significantly associated with a 0.10 mmHg increase in systolic blood pressure (p = 0.011) in the same individuals. The direction of effect of these results is consistent with the literature when the two variants are not considered as a combined genotype. We repeated this analysis after controlling for the following covariates: sex, age, BMI, smoking status, physical activity level, alcohol intake, preference for adding salt to food, blood CRP, blood apolipoprotein A, blood apolipoprotein B, and level of education. These were chosen because they have previously been found to correlate with or be predictive of HTN.50
After controlling for these factors, we found that rs699 A > G was significantly associated with a 0.36 mmHg increase in systolic blood pressure per rs699 G allele among 311,004 individuals (p < 0.001). Not surprisingly, rs5051 C > T was also significantly associated with a 0.35 mmHg increase in systolic blood pressure (p < 0.001) in the same individuals (Figure 7C,F) per genotype group.

When we conducted the analysis by including rs699 and rs5051 genotype combinations according to our hypothesis, and controlling for the same covariates, there was a statistically significant association that supported our hypothesis that an imbalance favoring rs699 G over rs5051 T (i.e., genotype groups 7-9) would decrease systolic blood pressure (linear regression estimate = −0.25 mmHg change per genotype group, p < 0.001). Figure 7A shows these results as z scores (normalized to the standard deviation of the full cohort), along with similar results for mean blood pressure, diastolic blood pressure, and age at HTN diagnosis. However, when we inspected the blood pressure averages for each genotype group (Figure 7D), it appeared that the regression was being driven by the large sample size in genotype groups 4-6, and that the lower and higher genotype groups (1-3 and 7-9) did not follow the hypothesized pattern. To further isolate the effect of rs699 A > G, we binned genotype groups into 1 versus 2-3 versus 4-6 versus 7-8 versus 9, and found that there was a nonsignificant trend towards an increase in blood pressure (Figure 7B,E), which statistically demonstrates that rs699 A > G is unlikely to be exerting a blood pressure-lowering effect. Table 2 provides the regression estimates for the systolic blood pressure phenotypes, including estimates for the covariates.

We repeated the covariate-corrected analyses for systolic blood pressure, stratifying the analysis by (1) whether someone was on a blood pressure (BP) medication, (2) age above or below 50, and (3) race and ethnicity. We found that rs5051 C > T is associated with a 0.32 mmHg increase in systolic blood pressure in those not taking BP medications (p < 0.001, n = 243,277), and a nonsignificant 0.08 mmHg increase in systolic blood pressure in those taking BP medications (p = 0.399, n = 67,725). The genotype groups followed the same nonsignificant pattern as in the nonstratified analysis. Results were also not markedly different when stratified by age group (<50 or ≥50). The genotype group analysis results followed similar nonsignificant patterns in the race and ethnicity groups. The association between systolic blood pressure and the single variant (rs5051 C > T) in individuals of self-identified Asian ancestry (not including "Indian" ancestry) was not statistically significant (p = 0.15). In contrast, rs5051 C > T was associated with a 1.17 mmHg increase in systolic blood pressure in Black participants (p = 0.032, n = 4580), a more than fourfold larger effect size than that seen in White participants (0.25 mmHg increase, p < 0.001, n = 294,582). Figure 8A shows the non-covariate-adjusted numbers per rs5051 genotype group. Other race and ethnicity groups were not statistically significant. Interestingly, the rs5051 C > T variant has an 88% allele frequency in Black participants, compared with a 40% allele frequency in White participants in the UK Biobank data. Figure 8B shows phased haplotypes from the 1000 Genomes Project, illustrating that the rs5051 T allele is far more common in those with African ancestry.
| DISCUSSION

There is a large body of literature describing investigations into rs699 and rs5051 in modulating the influence of AGT on HTN, but due to the LD between these variants, the independent mechanisms of each SNP have remained difficult to delineate. Our work presented here expands the body of knowledge and is novel in its focus on the independent effect of each SNP on AGT expression and HTN. On the basis of evidence from our previous work, we hypothesized that rs699 A > G was responsible for decreasing AGT abundance via increasing the binding strength with miR-122-5p, and that, when unopposed by the increased transcription caused by rs5051 C > T, it would reduce AGT expression and be protective against HTN. Approximately 2%-3% of people have an imbalance of these two SNPs, indicating the research question is of significance to the human population.

TABLE 2 Regression results for systolic blood pressure in the UK Biobank.

Our in silico results support our hypothesis, but should be viewed conservatively, since the change in binding strength was only around 3%. The additional mir-eCLIP assay results strongly confirm that miR-122-5p binds to AGT in hepatocytes, and that rs699 is near one of these binding sites. However, our in vitro results (which isolate rs699 A > G) do not support our hypothesis, since we expected rs699 A > G to cause decreased AGT mRNA abundance in HepG2 hepatocytes (where miR-122-5p is highly expressed), but instead AGT mRNA was increased. In vivo, GTEx data did not show a strong liver upregulation of AGT in the three individuals with unopposed rs5051 C > T, and did not show an obvious downregulation of AGT in the three individuals with unopposed rs699 A > G. Further analysis of retrospective clinical data from IU cancer patients did not show a clinically meaningful association with several measures of HTN, though this analysis was likely underpowered and potentially confounded by cancer therapies. Retrospective UK Biobank analysis, which provided a large clinical population in which to isolate the independent effects of rs699 A > G and rs5051 C > T, also did not show a clinically meaningful association with systolic blood pressure based on our hypothesis. On the basis of these results, it is unlikely that the rs699 A > G variant strongly modifies miR-122-5p binding strength directly. Our experiments were robustly executed, well powered, and therefore successful in testing both the mechanistic and translational aspects of this hypothesis.

However, we did confirm the previous associations between rs699 A > G (or rs5051 C > T, due to their LD) and increased blood pressure in the large UK Biobank data set, and expanded on these findings by investigating differences between self-reported race groups. Importantly, rs5051 C > T and/or rs699 A > G had a significant association with systolic blood pressure in Black participants in the UK Biobank, demonstrating a fourfold larger effect size than that seen in White participants. It is noteworthy that the rs699 G and rs5051 T alleles are approximately twice as common in African ancestry groups relative to Europeans. In addition to differences in minor allele frequency, it is known that the LD blocks of AGT differ between White and Black people.51
Salt-sensitive hypertensive phenotypes are reported to be more common in individuals of African ancestry,52-54 which lends support to the possibility that rs5051 C > T or rs699 A > G mediates blood pressure changes more dramatically in Black patients through changes in AGT expression or abundance.57,58 Aligning with this, the Eighth Joint National Committee (JNC 8) endorses different initial antihypertensive therapy for Caucasian and Black individuals. In individuals with normal kidney function, the JNC recommends ACE inhibitors as first-line therapy only in Caucasians.59 Here, we provide evidence that rs5051 C > T or rs699 A > G may be involved in hypertensive phenotype differences between individuals with European and African ancestry that may underlie established differences in antihypertensive treatment efficacy between these groups. Despite this, significant admixture exists between populations. As access to whole-genome sequencing in clinical care increases, studies like this will allow clinicians to choose the most effective blood pressure medications (potentially non-RAS-acting agents) based on rs699 or rs5051 genotype instead of race, presenting a possible solution to the problem of admixture in race-based medication choice; a solution in need of further research.

An additional valuable finding of our study is the observation that rs699 and rs5051 have cell-type-specific effects, along with the genesis of a new hypothesis regarding the role of these variants in HTN. In HT29 colon cells, rs699 A > G decreased AGT mRNA, while the opposite result was seen in HepG2 cells, demonstrating clear cell-type-specific effects attributable to rs699 (independent from rs5051). GTEx analyses also demonstrate strong eQTLs with opposite directions of effect in different tissue types, supporting that the functional effect of rs699 A > G (or rs5051 C > T, since they are in LD in these samples) is cell-type specific. The lack of an eQTL in liver samples suggests these variants influence blood pressure-related phenotypes in nonliver tissues. This is an important finding because it contradicts the common belief that increased endocrine secretion of AGT from the liver into the circulation is responsible for the blood pressure-raising effects of rs699 or rs5051. Therefore, we also investigated arterial and kidney tissues due to their potential relevance to the RAS. Although arterial tissues (which consist mostly of smooth muscle and endothelial cells) in the GTEx data do not show increased AGT expression with rs699 or rs5051, there was a significant increase in the kidney cortex. Cortical kidney samples (which consist of many cell types, including endothelial) demonstrate that AGT expression is increased in the presence of the rs699 G allele (p = 0.049). This is interesting because the kidney is the only tissue in the body that expresses appreciable amounts of AGT, renin (REN), ACE, and the angiotensin II receptor (AGTR1): the four genes whose expression is needed to convert AGT to angiotensin II and activate angiotensin II receptors. Using the recently developed Kidney Tissue Atlas47 (https://atlas.kpmp.org/explorer/dataviz), we can see that this pathway may occur through paracrine signaling, where AGT and ACE are expressed by proximal tubule cells, REN is secreted by renin-specific granular cells, and AGT and AGTR1 are expressed in VSMCs. Additionally, angiotensin II acts on other receptors in the proximal tubule to increase salt reabsorption and water retention.48,49
The Kidney Tissue Atlas also shows increased expression of AGT in degenerative VSMCs, those VSMCs most affected by HTN in the setting of arteriosclerosis. The presence of this pathway occurring locally in the kidney, near the site of angiotensin II and pressor-control receptors and the corresponding kidney arteriolar vasoconstriction and arteriosclerosis, underlines the potential relevance of AGT variants in raising blood pressure and reducing renal blood flow through a paracrine (rather than liver-derived endocrine) pathway resulting from increased AGT expression. The local paracrine RAS pathway in the kidney is substantiated by evidence in the literature;60,61 thus, our finding that the rs699 or rs5051 variants may disrupt this process specifically in the kidney presents an interesting hypothesis: that these AGT variants and/or renal AGT expression may underlie the interconnectedness between kidney disease and HTN. Our findings further support the associations between the rs699 G allele (or rs5051 T allele) and increased HTN risk, the mediation of arteriosclerosis, and the link to renal expression of AGT with its upstream and downstream pathway components.

Our study has several limitations. The in silico analysis is taken out of the context of the secondary structure of the mRNA and does not account for the steric hindrance of RNA-binding proteins or N6-methyladenosine modifications that can alter microRNA-binding strength. Thus, we stress that the results of this analysis need to be interpreted conservatively. The in vitro results would have benefited from additional replicates conducted on different lots or different passages of cells, with transfections occurring on separate days. Despite this limitation, however, our experiments were fit for the purpose of deciding whether the effect of rs699 aligned with our previous observations. The in vivo GTEx and clinical analyses in our institutional cohort were underpowered, but this limitation is inherent to retrospective data analysis, where additional subjects cannot be recruited. Additionally, the tight linkage between rs699 A > G and rs5051 C > T necessitates the use of very large clinical cohorts to assess the independent effects of either SNP. Considering this, a great strength of our study is that the UK Biobank analysis was overpowered and thus very useful in statistically rejecting the alternative hypothesis that rs699 A > G acts as a counteracting force against rs5051 C > T. While we did not investigate the doses or number of concomitant HTN medications that patients were prescribed in the UK Biobank (which may be markers of resistant HTN phenotypes), we compensated for this limitation by performing analyses assessing the impact of rs699 and rs5051 on HTN phenotypes in patients who were not prescribed HTN medications. Another strength of this study is the combination of experiments and analyses spanning multiple disciplines of science (in silico, in vitro, in vivo, and observational) to test a single hypothesis.
Another potential limitation of this study is that we previously identified the importance of rs699 using a high-throughput assay designed to screen for genetic variants that modify microRNA binding in an indirect manner. While this is a potential disadvantage, it is also a strength for identifying functional variants that exert their effect indirectly. Our results suggest that rs699 A > G could modulate the binding of miR-122-5p indirectly through other cell-type-specific mechanisms, such as altered recruitment of RNA-binding proteins or splicing machinery. In fact, ENCODE eCLIP experiments for RNA-binding proteins in HepG2 demonstrate that proteins involved in splicing and pre-mRNA processing (AQR, BUD13, CDC40, NOL12, PPIG, RBM15, RBM22, SRSF1, SRSF9, and TRA2A) bind to AGT mRNA at the rs699 location, giving plausibility to this hypothesis (these data can be viewed at www.encodeproject.org).62 While not a main analysis of this paper, we used the GTEx liver AGT RNA-seq data to measure alternative splicing events at the exon junction near rs699 (3′ end of exon 2) to aid in interpreting these data. We found that exon 2 was spliced to a noncanonical exon (i.e., not joined to canonical exon 3) on average 10% of the time, relative to all exon 2 junction reads for each individual (range 0-21%, n = 208). However, this alternative splicing did not correlate with the rs699 genotype (nor did any single splice variant), indicating that rs699 A > G does not interfere with AGT exon 2 splicing in the liver. DROSHA, UCHL5, and XPO5 were also identified as binding at the rs699 location based on ENCODE data, but the strongest signal was for SND1 binding. SND1 is an endonuclease that mediates microRNA decay63 and is a component of the RNA-induced silencing complex (RISC),64 which is interesting given our finding that this site is near a miR-122-5p binding site. Thus, microRNA-mediated recruitment of SND1-containing RISC to this location is evident, but its significance remains unknown.

In summary, we comprehensively evaluated the effects of rs699 and rs5051 on AGT expression and HTN phenotypes using in vitro, in vivo, and retrospective clinical approaches, including analyses that isolated the individual effects of each variant. We tested, and ultimately rejected, our overarching hypothesis that rs699 A > G reduces AGT expression independently from rs5051 in the liver, demonstrating that these variants instead may be involved in influencing HTN through nonliver-mediated mechanisms. We demonstrated that rs699 and rs5051 are associated with AGT mRNA abundance in a cell-type-specific manner and have small but clinically meaningful associations (up to 2.7 mmHg) with HTN phenotypes. Our finding that rs699 A > G increased AGT expression in the kidney may have clinical significance, since the kidney expresses the necessary enzymes to convert AGT to the prohypertensive angiotensin II, as well as the angiotensin II and pressor-control receptors responsible for blood pressure-raising effects. Further studies are warranted to investigate the potential for direct effects of rs699 in the kidney and to determine if the reason for altered antihypertensive response in Black individuals might be due, in part, to the effect of rs5051 C > T or rs699 A > G.
FIGURE 1 Schematic showing the RNAduplex-predicted increase in binding strength with the rs699 A > G variant. The variant position is highlighted in yellow. Nucleotide bonds that are predicted to arise due to the variant are shown in red. GC pairs are shown with double lines, AU pairs are shown with single lines, and GU pairs are shown with single lines with open dots. The variant nomenclature "rs699 A > G" is complemented to the positive strand, whereas the nucleotides shown are for the AGT mRNA structure, which is transcribed from the negative DNA strand. A, adenine; AGT, angiotensinogen; C, cytosine; deltaG, kcal/mol change in free energy; G, guanine; hsa, Homo sapiens; miR, microRNA; mRNA, messenger RNA; U, uracil.

FIGURE 2 Genome Browser view of our mir-eCLIP data in BedGraph format, showing the proximity of rs699 to the most prominent microRNA-binding peak in the AGT mRNA. AGT, angiotensinogen; kb, kilobase; max, maximum; mRNA, messenger RNA; UTR, untranslated region.

FIGURE 3 GTEx web-portal data. (Left) Bulk mRNA expression data in log-transformed units. (Middle) eQTLs for rs699. (Right) Violin plots showing the more granular eQTL data for rs699 for liver, and for the tissues with the most extreme eQTLs or potential relevance to the renin-angiotensin system. AGT, angiotensinogen; eQTL, expression quantitative trait locus; mRNA, messenger RNA; NES, normalized expression slope; TPM, transcripts per million.

FIGURE 4 Liver AGT expression across the genotype groups shown with the hypothesized effect on AGT expression. (Left column; A-C) Log-transformed TPM AGT reads in liver (n = 208), brain (n = 209), and colon (n = 318). (Right column; D-F) Log-transformed TPM AGT reads in kidney (n = 76), coronary artery (n = 213), and tibial artery (n = 583). AGT, angiotensinogen; mRNA, messenger RNA; TPM, transcripts per million.
FIGURE 5 (A-C) qPCR results for three different cell lines. (A) AGT expression in HepG2 liver cells, (B) AGT expression in HT29 colon cells, and (C) AGT expression in HEK293 kidney cells. Biorep, biological replicate; Ct, cycle threshold; NA, not assessed; qPCR, quantitative polymerase chain reaction; reps, replicates; WT, wild type.

FIGURE 6 Indiana University Advanced Precision Genomics cohort results for (A) total fills per year and (B) maximum concomitant medications at any given time.

FIGURE 7 Blood pressure phenotype regression estimates (corrected for covariates and normalized to standard deviation) with (A) genotype groups according to our ordinal hypothesis, (B) binned genotype groups that test only groups with unbalanced rs699 G and rs5051 T, and (C) rs5051 genotype group. Systolic blood pressure boxplots are shown for (D) genotype groups according to our hypothesis, (E) binned genotype groups that test only groups with unbalanced rs699 G and rs5051 T, and (F) rs5051 genotype group. Boxplots display the median (black bar) and 25th-75th percentile (box); mean values are labeled over the box. HTN, hypertension.

FIGURE 8 Analysis of (A) the rs5051 C > T association with systolic blood pressure in Black participants in the UK Biobank data, and (B) allele haplotype frequencies for rs699 and rs5051 in the 1000 Genomes Project for those with African ancestry. The boxplot displays the median (black bar) and 25th-75th percentile (box); mean values are labeled over the box. LD, linkage disequilibrium.
2024-04-14T15:46:00.760Z
2023-12-26T00:00:00.000
{ "year": 2023, "sha1": "6e1e83c9bab2b5ea8b8b855c764b7316a4ba532b", "oa_license": "CCBYNCND", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/cdt3.103", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "818a94d9a94df70df80df3735ac06e102ff3af4e", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
237495099
pes2o/s2orc
v3-fos-license
Soliciting User Preferences in Conversational Recommender Systems via Usage-related Questions

A key distinguishing feature of conversational recommender systems over traditional recommender systems is their ability to elicit user preferences using natural language. Currently, the predominant approach to preference elicitation is to ask questions directly about items or item attributes. These strategies do not perform well in cases where the user does not have sufficient knowledge of the target domain to answer such questions. Conversely, in a shopping setting, talking about the planned use of items does not present any difficulties, even for those that are new to a domain. In this paper, we propose a novel approach to preference elicitation by asking implicit questions based on item usage. Our approach consists of two main steps. First, we identify the sentences from a large review corpus that contain information about item usage. Then, we generate implicit preference elicitation questions from those sentences using a neural text-to-text model. The main contributions of this work also include a multi-stage data annotation protocol using crowdsourcing for collecting high-quality labeled training data for the neural model. We show that our approach is effective in selecting review sentences and transforming them into elicitation questions, even with limited training data. Additionally, we provide an analysis of patterns where the model does not perform optimally.

INTRODUCTION

Traditional recommender systems model user preferences based on historical data (e.g., click history, past visits, item ratings) [7]. These systems often do not take into account that users might have made mistakes in the past (e.g., regarding purchases) [28] or that their preferences change over time [9]. Additionally, for some users, there is little historical data, which makes modeling their preferences difficult [11]. A conversational recommender system (CRS), on the other hand, is a multi-turn, interactive recommender system that can elicit user preferences in real time using natural language [10]. Given its interactive nature, it is capable of modeling dynamic user preferences and taking actions based on users' current needs [7]. One of the main tasks of a conversational recommender system is to elicit preferences from users. This is traditionally done by asking questions either about items directly or about item attributes [4-7, 12, 25, 26, 30, 32]. Asking directly about specific items is inefficient due to the vast number of items in the collection; therefore, the majority of the research is focused on the estimation and utilization of users' preferences towards attributes [7]. Common to these approaches is that the user is explicitly asked about the desired values for a specific product attribute, much in the spirit of slot-filling dialogue systems [8]. For example, in the context of looking for a bicycle recommendation, we might have wheel dimensions or the number of gears as attributes in our item collection. In this case, a system might want to ask a question like How thick should the tires be? or How many gears should the bike have? However, ordinary users often do not possess this kind of attribute understanding, which might require extensive domain-specific knowledge. Instead, they only know where or how they intend to use the item. For example, a user might only be interested in using the bike for commuting, but does not know what attributes might be good for that purpose.
The novel research objective of this work is to generate implicit questions for eliciting user preferences, related to the intended use of items. This stands in contrast with explicit questions that ask about specific item attributes. Our approach hinges on the idea that usage-related experiences are captured in item reviews. By identifying review sentences that discuss particular item features or aspects (e.g., fat tires) that matter in the context of various activities or usage scenarios (e.g., for conquering tough terrain), those sentences can then be turned into preference elicitation questions. In our envisaged scenario, a large collection of implicit preference elicitation questions is generated offline and then utilized later in real-time interactions by a CRS; see Fig. 1 for an illustration. In this paper, our focus is on the offline question generation part. Specifically, we start with candidate sentence selection, which can effectively be implemented based on part-of-speech tagging and simple linguistic patterns. Given a candidate sentence as input, question generation produces an implicit question or the label N/A (not applicable). This is done by fine-tuning a pre-trained, sequence-to-sequence model for text generation [24]. The main challenge associated with this task is the collection of high-quality training data. We develop a multi-stage data annotation protocol via crowdsourcing to generate a sentence-to-question dataset. The process consists of generating questions, validating them, and expanding the variation of questions. Evaluating our proposed approach against held-back test data shows its effectiveness and its capability of generating questions that are suitable for preference elicitation, can simply be answered, and are grammatically correct. In summary, our main contributions in this paper are as follows: (1) Introduce the novel task of eliciting preferences in CRSs via implicit (usage-oriented) questions. (2) Devise an approach for generating usage-related questions based on a corpus of item reviews, consisting of two main steps: candidate sentence selection (based on linguistic patterns) and question generation (using a neural sequence-to-sequence model). (3) Develop a multi-stage data annotation protocol using crowdsourcing for collecting high-quality ground truth data. (4) Perform an experimental evaluation of the proposed approach, followed by an analysis of results. The resources developed in this paper (crowdsourced dataset and question generation model) are made publicly available at https://github.com/iai-group/recsys2021-crs-questions.

RELATED WORK

In this paper, we focus on question-based user preference elicitation and natural language generation, both of which are identified as major challenges in [7]. That is, we provide novel answers to the questions of what to ask and how to ask.

Preference Elicitation

Commonly, preference elicitation questions target either items or their attributes. Typical of early studies on CRS, item-based elicitation approaches ask for users' opinions on an item itself, using a combination of methods from traditional recommender systems, such as collaborative filtering, with user interaction in real time [27, 35]. The selection of items may be approached as an optimization problem using a static preference questionnaire method [25] or multi-armed bandit algorithms that capture the exploration-exploitation tradeoff [6, 27].
Asking about items directly has been found inefficient, as large item sets would require several conversational turns and, in turn, increase the likelihood of users getting bored [7]. Alternatively, attribute-based elicitation aims to predict the next attribute to ask about. It is often cast as a sequence-to-sequence prediction problem, lending itself naturally to sequential neural networks. However, obtaining large conversational datasets to train conversational recommender systems is challenging [10]; therefore, non-conversational data is often leveraged. Christakopoulou et al. [5] propose a question & recommendation (Q&R) method, utilizing data from a non-conversational recommendation system and developing surrogate tasks to answer the questions What to ask? and How to respond? A similar approach of training a sequential neural network on non-conversational data is taken by Zhang et al. [32], who convert Amazon reviews into artificial conversations. The assumption is that the earlier aspect-value pairs appear in a review, the more important they are to the user, and the higher they should be prioritized as questions. Additionally, they develop a heuristic trigger to decide whether the model should ask about another attribute or recommend an item. Another way to elicit preferences is in the form of critiques, i.e., feedback on attribute values of recommended items [4]. For example, if the recommendation is for a phone, a critique might be not so big or something cheaper. Such methods often employ heuristics as elicitation tactics [18, 19]. In recent work, Balog et al. [1] study the problem of robustly interpreting unconstrained natural language feedback on attributes.

Question Generation

While there is research on end-to-end frameworks that enable CRSs to both understand user intentions and generate fluent and meaningful natural language responses [13], the predominant approach is still to use templates or to construct utterances using predefined language patterns [7]. Looking at the broader field of dialogue systems, there are two additional strands of research that could be applied to CRSs as well: retrieval-based and generation-based methods. Instead of relying on a handful of templates, retrieval-based methods utilize a large collection of possible responses. The basic approach to retrieving the appropriate response is based on some notion of similarity between the user query and candidate responses, with the simplest being the inner product [31]. Generation-based methods in dialogue systems are typically based on sequence-to-sequence modeling. These models are usually trained on a hand-labeled corpus of task-oriented dialogue [3]. Our proposed approach shares elements of both of these methods: it generates questions using a sequence-to-sequence model and stores them in a collection that can be queried using retrieval-based methods.

APPROACH

We present an approach for generating a collection of implicit elicitation questions from a review corpus. Item review datasets tend to be very large, with both the number of items and the number of reviews in the thousands or even millions, making labeling the entire dataset extremely expensive [14]. To overcome this, we extract candidate sentences from the corpus that have a high probability of mentioning item-related activity or usage (Section 3.1). We then wish to train a model that can take a candidate sentence as input and generate a preference elicitation question from it, or the label N/A if this is not possible (Section 3.2).
We opt for a pre-trained, transformer-based, state-of-the-art, sequence-to-sequence model (T5).

Candidate Sentence Selection

We identify sentences that describe some item feature or aspect (§3.1.1) and mention some activity or usage (§3.1.2); for example, "The fat tires are perfect for / conquering tough terrain," where the part before the slash contains an aspect-value pair and the part after it describes an activity.

3.1.1 Aspect-Value Pair Extraction. An aspect in this context is a term that characterizes a particular feature of an item [17] (e.g., wheel, seat, or gear are aspects of a bicycle). Value words are terms that describe an aspect (e.g., a wheel might be large or small, a seat can be hard or comfortable). Here, we extract all sentences that mention some aspect-value pair for a given category of items, using the phrase-level sentiment analysis approach proposed by Zhang et al. [33, 34]. The motivation for this step stems from the assumption that an activity or usage can be mapped to a particular aspect of an item.

3.1.2 Activity Identification. In this step, the goal is to classify sentences that mention some item-related activity or usage. Inspired by Benetka et al. [2], our approach revolves around using part-of-speech (POS) analysis and rules of the English language. We filter for the preposition for followed by a verb in progressive tense, heuristically, by looking for -ing endings (e.g., for commuting, for hiking). Note that there might be other formulations that describe activity or usage. Our goal is not to extract all possible sentences containing mentions of activity or usage; a high-recall approach would likely come at the cost of a larger fraction of false positives. Instead, we focus on achieving high precision.

Question Generation

The main motivation for this step is generating natural-sounding questions that are easy for users to understand and answer, without needing any additional context. Consider the sentence The fat tires are perfect for conquering tough terrain. An example of converting it to a yes/no usage-related question might be Would you like a bike that is great for conquering tough terrain? Note that not all candidate sentences that pass our selection heuristic are viable for conversion to a question, e.g., Thank you so much for coming up with such a great product. This sentence is too vague and does not mention any action or usage for the item, and thus should be labeled as not applicable (N/A). Learning to generate questions is done by fine-tuning a large, pre-trained, sequence-to-sequence language model. There are two main benefits of using transfer learning from a pre-trained model. First, it increases the learning speed; as both the syntax and semantics of the English language are already learned, there are fewer things the model needs to learn. Second, it reduces the amount of labeled data needed to train models to high performance. Specifically, we use T5 [24], as it is a state-of-the-art approach that can be used for both N/A classification and generation in one go, where N/A classification is posed as a text-to-text problem. Obtaining high-quality labeled data for fine-tuning the model is a challenge on its own; we develop a multi-step data collection protocol using crowdsourcing, which we discuss in Section 4.2. In our experiments (in Section 5), we evaluate our question generation models using different numbers of parameters for pre-training and varying amounts of training data for fine-tuning.
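To make the activity-identification heuristic concrete, the following is a minimal sketch that uses a plain regular expression as a stand-in for the full POS-tagging pipeline; the example sentences are our own illustrations.

```python
# Minimal sketch of the "for + [verb in progressive tense]" heuristic,
# approximated with a regular expression instead of a POS tagger.
import re

# "for" followed by a token ending in -ing (e.g., "for commuting").
ACTIVITY_PATTERN = re.compile(r"\bfor\s+\w+ing\b", re.IGNORECASE)

def mentions_activity(sentence: str) -> bool:
    """Return True if the sentence matches the activity heuristic."""
    return ACTIVITY_PATTERN.search(sentence) is not None

sentences = [
    "The fat tires are perfect for conquering tough terrain.",
    "Thank you so much for coming up with such a great product.",
    "The seat is very comfortable.",
]
candidates = [s for s in sentences if mentions_activity(s)]
print(candidates)
# The first two sentences match; the second is a false positive of the
# kind the downstream N/A label is designed to catch.
```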
DATA COLLECTION

This section describes the process of creating our dataset, which consists of a set of review sentences with, for each sentence, either (i) a corresponding set of five preference elicitation questions or (ii) the label not applicable (N/A).

Candidate Sentence Selection

The starting point for getting the candidate sentences is the Amazon review and metadata datasets [22], where item reviews from Amazon are extracted along with product metadata information such as title, description, price, and categories. There are, in total, 233.1M reviews about 15.5M products. Due to the sheer dataset size, we focus our research on three main categories: Home and Kitchen, Patio, Lawn and Garden, and Sports and Outdoors. From these, we further sub-select 12 diverse subcategories (simply referred to as categories henceforth): Backpacking Packs, Tents, Bikes, Jackets, Vacuums, Blenders, Espresso Machines, Grills, Walk-Behind Lawn Mowers, Birdhouses, Feeders, and Snow Shovels. Sentence splitting and aspect-value pair extraction are performed using the Sentires toolkit [33, 34]. This step discards many non-viable sentences. The remaining ones are POS-tagged using the Stanford NLP toolkit [20]. Finally, we filter for sentences that match our activity detection heuristic ("for + [verb in progressive tense]"). Our sentence selection process is designed to favor precision over recall, and was validated by manual inspection of the results. Upon completion of the crowdsourcing tasks (described in Section 4.2), we find that over 75% of the selected sentences can be turned into questions. This shows that our simple method can indeed identify candidate sentences with high precision. Our final candidate sentence set contains 100 sentences for each of the categories, except Birdhouses, where only 15 candidate sentences are found due to the size of that category; that is, a total of 1,115 sentences over 12 categories.

Question Generation using Crowdsourcing

Crowdsourcing was done on the Amazon Mechanical Turk (AMT) platform in three steps. The task was available to workers with a 95% approval rate and with at least 1,000 approved human intelligence tasks (HITs).

4.2.1 Step 1: Question Collection. Crowd workers are given a review sentence (describing some aspect of or use for a product) and a product category as input, and are tasked with rewriting the sentence into a question or marking it as not applicable. They are specifically instructed to formulate a question that a salesperson or a recommender agent might ask a customer, such that it is a standalone question that can simply be answered with yes/no. For every input sentence, we collected responses from three different workers. Sentences found non-applicable by at least two workers are set as N/A. The task was re-run if a single worker responded with N/A. This process resulted in approx. 2,600 sentence-question pairs.

4.2.2 Step 2: Validation and Filtering. Next, we validate all responses (i.e., generated questions) for applicable sentences collected in Step 1 using crowdsourcing. We employ three different workers in this step, each judging whether a question is suitable for preference elicitation, can simply be answered with yes/no, and is grammatically correct. Generated questions that are found invalid by all three workers on a single aspect, or by at least two workers on at least two aspects, are automatically rejected. Those that are marked invalid on multiple aspects but do not fall into the former category are manually checked by an expert annotator (one of the authors). All other questions are approved. Steps 1 and 2 were run multiple times until all questions were approved.

4.2.3 Step 3: Expanding Question Variety.
4.2.3 Step 3: Expanding Question Variety. Our main motivation for expanding the question variety is to add new ways of asking implicit questions. To this end, we task a new set of workers with paraphrasing the questions obtained and validated in Steps 1 and 2. Each worker receives all three versions of the questions from Step 1 as input and is asked to produce a new (paraphrased) question that expresses the same meaning. Note that this set of workers does not get to see the original sentences, only the questions generated from them by other workers. For each set of three questions, two additional paraphrases were collected. Since generating paraphrases proved to be a much simpler task than generating questions from review sentences, no additional quality assurance steps were necessary.

Final Dataset

Out of the 1,115 candidate sentences, 277 were labeled as non-applicable (not containing relevant usage-related information), which is below 25%. This shows that our high-precision approach to selecting candidate sentences is effective. We note that our sentence selection method works better for some categories than for others: the fraction of viable sentences ranges from 52% (Espresso Machines) to 84% (Backpacking Packs). For the remaining 838 sentences, a total of five questions are generated, three based on the candidate sentence and two via paraphrasing. Table 1 shows two example sentences from our dataset. The total cost of generating the dataset was $1,200.

Table 1: Two example sentences from our dataset.

Sentence: Great for making smoothies with frozen fruit.
Generated questions:
- Are you looking for a blender that's great for making smoothies with frozen fruit?
- Would you be interested in a blender that is great for making smoothies with frozen fruit?
- Are you interested in a blender for making smoothies with frozen fruit?
Paraphrases:
- Do you want a blender that's great for making smoothies with frozen fruit?
- Would you like a blender that is great for making smoothies with frozen fruit?

Sentence: This product is excellent for doing the job.
Label: N/A (The input sentence passes our candidate selection heuristic, but the activity is too broad and can apply to any item.)

RESULTS AND ANALYSIS

With our experiments, we aim to answer the following research question: How effective is our method for generating implicit questions for preference elicitation? Specifically, given a candidate sentence as input, our approach should either generate a question or label it as non-applicable (N/A) if a usage-related question cannot be generated.

Experimental Setup

We train small, base, and large T5 models, which vary in the number of layers, self-attention heads, and the dimension of the final feed-forward layer; the resulting numbers of parameters are shown in Table 2. We use 80% of the data for training, while the rest is test data. In our training, we employ teacher forcing [29], regularization by early stopping [21], and the adaptive gradient method AdamW [16] with linear learning rate decay. For each sentence, we have either N/A or a set of reference questions. We evaluate question generation both as a classification task, in terms of Accuracy (detecting N/A), and as a machine translation task, where the set of human-generated questions serves as reference translations. Specifically, we report on BLEU-4, which uses modified n-gram precision up to 4-grams [23], and ROUGE-L, a recall-based metric based on the longest common subsequence [15].
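As a concrete illustration of this evaluation setup, the sketch below scores one generated question against its reference set using common open-source implementations (nltk for BLEU-4 and the rouge-score package for ROUGE-L). The paper does not specify which implementations were used, and it reports ROUGE-L as recall-based while the snippet shows the F-measure, so treat this purely as an illustration:

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction
from rouge_score import rouge_scorer

def evaluate_question(prediction: str, references: list) -> dict:
    """Score one generated question against its set of reference questions."""
    smooth = SmoothingFunction().method1
    # sentence_bleu supports multiple references natively (tokenized word lists)
    bleu4 = sentence_bleu([r.split() for r in references], prediction.split(),
                          weights=(0.25, 0.25, 0.25, 0.25),
                          smoothing_function=smooth)
    # rouge_score is single-reference, so take the best match over the reference set
    scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)
    rougeL = max(scorer.score(ref, prediction)["rougeL"].fmeasure
                 for ref in references)
    return {"BLEU-4": bleu4, "ROUGE-L": rougeL}

refs = ["Would you like a bike that is great for conquering tough terrain?",
        "Are you looking for a bike that can conquer tough terrain?"]
print(evaluate_question("Do you want a bike for conquering tough terrain?", refs))
```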
Table 2 shows the results in terms of non-applicability classification (Accuracy) and question generation (BLEU and ROUGE). On both tasks, larger pre-trained models tend to perform better, which is expected. The difference, however, is more pronounced for non-applicability detection than for question generation.

We further investigate how the amount of training data affects model performance by considering different ways of data reduction. We use the best-performing model for this experiment, i.e., T5 large. In sentence-based data reduction, shown in Fig. 3 (Left), only a subset of the available sentences is used for training (using all available questions corresponding to those sentences). We observe a drop in Accuracy when we reduce the amount of training data to 25% or lower, while question generation performance is less severely affected. In question-based data reduction, shown in Fig. 3 (Right), we split the dataset based on the number of questions available for each sentence. We consider using a single question (Q1), the three initially generated questions (Q3), and the three initial questions plus the two paraphrases (Q5). We find that reducing the number of questions has surprisingly little effect. This suggests that it is more beneficial to collect a small number of questions for a larger set of sentences than vice versa.

Analysis

A closer look at specific sentence-question pairs reveals two patterns that leave room for future improvement. First, some of the generated questions are too generic. These are correct in terms of grammar and structure, but unsuitable for eliciting meaningful user preferences, e.g., Do you need a grill that is good for grilling certain things? Instead of returning N/A (which is indeed the corresponding response in our dataset), the model generated a question that is so vague and generic that it is hard to think of a scenario where it would not be answered affirmatively. The second pattern concerns complex questions that ask about more than one usage or activity, e.g., Are you looking for a backpacking pack that is a good size for traveling on an airplane or going on a camping trip for a few days or packing for a few days trip? This question is too complex to elicit any meaningful information without the user having to elaborate on which options they agree with and which they do not. Such questions should instead be split into several simpler ones that are both easier to interpret and easier to answer. Note that crowd workers were not instructed to simplify complex questions, so it is not surprising that this is what the model has learned.

CONCLUSION AND FUTURE DIRECTIONS

In this paper, we have studied how a conversational recommender system can elicit a user's needs through natural language by asking indirect questions about how the desired product will be used. This contrasts with most prior work, which considers how to ask directly about desired product attributes. Our method starts with a corpus of reviews, then identifies statements that characterize how products are used and how this ties to product attributes. These statements are then transformed into preference elicitation questions. We show that our approach effectively selects such statements (with high precision) and transforms them into effective questions. We emphasize that this work focuses on the first stage of recommendation, understanding the user's needs, and doing this in an engaging way. The most important future direction is determining how answers to these questions should best be applied to the recommendation task once the user's need is understood.
Here, we believe that sentence embedding techniques are likely to be effective. Second, as this work builds on top of large language models, language safety is a key consideration warranting further study before our approach could be used in practice; nevertheless, during experimentation we did not observe concerning language or hallucinations. We also note that the offline question generation process lends itself well even to manual control over the language model output.
2021-09-14T13:07:34.161Z
2021-09-13T00:00:00.000
{ "year": 2021, "sha1": "8951a6729c92041ac5ce203c3dc909bbc24153be", "oa_license": null, "oa_url": "http://arxiv.org/pdf/2111.13463", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "135e4a2b92f32cda1ead358cd9c17556279269e3", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
73581227
pes2o/s2orc
v3-fos-license
Research on Dietary Nutrition Supplement of Taekwondo Players

To comprehend the characteristics of Taekwondo and the problems in the nutrient and dietary structure of Taekwondo players through this study, and then formulate a scientific and reasonable nutrient and dietary structure for Taekwondo players, in order to improve their athletic ability while improving their physical quality. To comprehend the energy consumption characteristics of Taekwondo and the current nutrient and dietary structure of Taekwondo players through literature collection and comparative research, and then propose a scientific and reasonable nutrient and dietary structure. A reasonable nutrient and dietary structure can not only improve physical function and guarantee normal internal metabolism of Taekwondo players, but can also effectively improve their athletic ability and their performance in competition.

INTRODUCTION

The promotion of athletic ability in Taekwondo players relies not only on strict daily training and good physical quality, but also depends to a large extent on scientific and reasonable supplementation of dietary nutrition. Speed, flexibility, and explosive power are basic qualities of Taekwondo players in competition and in training; the characteristics of competition events require long-duration, high-intensity aerobic exercise capacity and short-duration, high-intensity anaerobic capacity in energy supply (Li and He, 2013). Competition also requires athletes to accomplish technical movements with high quality in high-speed and intense confrontation; however, if Taekwondo players cannot obtain timely supplementation of dietary nutrition, their athletic ability will be affected to a large extent. Therefore, the Taekwondo training center must formulate a scientific and reasonable dietary nutrition structure according to the energy consumption characteristics of Taekwondo and the physical quality of Taekwondo players, supplying carbohydrate, protein, water, vitamins and inorganic salts in time; only in this way can it effectively guarantee normal metabolism of Taekwondo players and improve their athletic ability (Yongjin, 2014).

RESEARCH OBJECTS AND METHODS

Research objects: Twelve Taekwondo players from a Taekwondo club were selected as research objects, with an average age of 22.35 years and an average weight of 75.5 kg; there was no distinct difference in physical quality among these 12 players.

Research methods: Literature research method: By means of literature on the characteristics of Taekwondo and the energy consumption characteristics of Taekwondo players collected from the library, the internet and the readers' center, to analyze and summarize the problems in dietary nutrition supplementation of Taekwondo players in our country and put forward relevant solutions based on those problems, providing a reliable theoretical foundation for this study.

Comparative research method: To summarize the daily intake of various foods and nutrients of Taekwondo players through an eight-week field investigation of the research objects, compare these results with the standards for athletes, and then analyze and summarize the current dietary nutrition situation of Taekwondo players.

RESULTS

Ratio table of intake of thermal energy and thermal nutrition and energy supply in three meals (Table 1).
It can be observed from the data above that the proportions of carbohydrate, fat, and protein in Taekwondo players' diet are 49.1, 28.7, and 22.2%, respectively, while the standard proportions of carbohydrate, fat, and protein in Taekwondo players' diet in our country are 50-55, 30-35, and 13-15%, respectively. The proportion of protein in Taekwondo players' diet is on the high side, while the proportions of carbohydrate and fat are on the low side. The energy supply standards of three meals for Taekwondo players in our country are breakfast 35%, lunch 40%, and dinner 25%, and currently there are certain deviations in the energy supply proportions of Taekwondo players' three meals.

Daily intake of inorganic salt of Taekwondo players (Table 2): From Table 2 it can be observed that in the daily diet of Taekwondo players, the daily intakes of inorganic salts such as sodium, potassium, calcium, and phosphorus are lower than the standard for athletes, which has affected the athletic ability of Taekwondo players to a large extent.

Daily intake of vitamins of Taekwondo players (Table 3): From Table 3 it can be observed that in the daily diet of Taekwondo players, the daily intake of vitamins is lower than the standard for athletes. This may be caused by the traditional cooking habits of our country, because much of the vitamin content is destroyed during the processing of vegetables and other food; or it may be caused by closed training, since Taekwondo players cannot go out to buy fruit, and with basically no supplementation of fruit or raw vegetables, insufficient fruit leads to insufficient vitamin intake (Yongjin, 2014).

Characteristics of taekwondo and analysis of energy consumption of taekwondo:

Characteristics of taekwondo: Taekwondo is a group activity of offensive and defensive confrontation centered on rebound and shoot, and an event of rapid change between offense and defense in physical and technical ability, of the speed-and-power type with strong confrontation. All of this indicates that Taekwondo is not only a sports event of technical ability, but also a sports event with high requirements for physical ability. According to the regulations of FIBA, the shot clock in a Taekwondo game is 24 sec; therefore players basically have to complete a physical confrontation within 24 sec. In the meantime, players may run back and forth about 150 times on the field during an intense competition, with a maximum of about 200 times, for a total distance of around 5000-6000 m. Secondly, players may complete 160 jumps within the 48 min of a competition, and some high-level players may complete 80 scurries over a distance of 10 m and sometimes over 15 m. It can be observed that the sport is characterized by high intensity, high density and long duration, with high requirements on the physical ability of players.
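Before turning to energy systems, the macronutrient proportions reported above can be reproduced with simple arithmetic. The sketch below is illustrative only: the gram values are hypothetical, chosen to match the reported 49.1/28.7/22.2% split, and standard Atwater energy factors are assumed.

```python
# Atwater factors (kcal per gram), standard nutrition constants
ATWATER = {"carbohydrate": 4.0, "fat": 9.0, "protein": 4.0}

def energy_proportions(grams: dict) -> dict:
    """Share of total dietary energy contributed by each macronutrient."""
    kcal = {k: grams[k] * ATWATER[k] for k in ATWATER}
    total = sum(kcal.values())
    return {k: round(100 * v / total, 1) for k, v in kcal.items()}

# Hypothetical daily intake (grams); yields roughly 49.0/28.7/22.2%
print(energy_proportions({"carbohydrate": 430, "fat": 112, "protein": 195}))
```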
Analysis of energy consumption of taekwondo: It can be concluded from the characteristics of Taekwondo that it is a sports event requiring mixed energy supply, with intermittent anaerobic energy supply as the primary component. During an intense Taekwondo game, energy mainly comes from anaerobic metabolism, primarily the phosphagen system and the glycolysis system. Under general conditions, the time limit of a Taekwondo game is 48 minutes, which is relatively long, and during exercise there are frequent explosive jumps and throws. During this time the main energy substances consumed by players are carbohydrate, fat, and protein, of which muscle glycogen consumption is primary. Related information shows a direct relation between tiredness and the diminishment of muscle glycogen reserves: when blood sugar content is on the low side, energy supply cannot meet the need for muscular activity, which leads to a diminishment in the athletic ability of Taekwondo players. During exercise, metabolites are generated by energy metabolism inside the body. Although the human body can eliminate these metabolites, during a high-intensity Taekwondo game there will be a massive accumulation of metabolites that cannot be eliminated in time, which will disturb the environmental balance in the body, leading to a diminishment in athletic ability. For instance, Taekwondo players often need to perform explosive scurries and sudden stops, leading to the accumulation of creatine in muscle and a drop in the pH value in muscle, causing a diminishment of muscle function and affecting the athletic ability of players. In conclusion, the energy consumed by players includes both energy provided by anaerobic metabolism and energy provided by aerobic metabolism, primarily the anaerobic phosphagen-glycolysis pathways (Xin, 2011).

Requirements of dietary nutrition of taekwondo: Since Taekwondo is a sports event of high intensity and high energy consumption, Taekwondo players have relatively high requirements for dietary nutrition (Song, 2014).

Carbohydrate: Carbohydrate is the foremost energy substance in the human body, and two thirds of bodily heat is provided by carbohydrate, so it is very important for Taekwondo players to supplement carbohydrate scientifically and reasonably. Taekwondo players shall therefore increase their reserve of muscle glycogen in a scientific and reasonable way before the game. During an intense Taekwondo game, players shall supply carbohydrate with low tension and low osmotic pressure in a timely manner, in order to effectively improve their athletic ability and relieve tiredness.

Fat: Fat accounts for 25-30% of total heat in the human body and has the functions of protecting tissues and organs, heat preservation, and alleviating impact force. During Taekwondo activity, metabolic exhaustion of carbohydrate diminishes after fat metabolism is strengthened, which in turn effectively improves endurance. Therefore, the training center shall further strengthen the supplementation of fat, since fat is an essential nutrient substance for Taekwondo players.
Protein: The energy provided by protein accounts for approximately 15% of the total energy of Taekwondo players; it is involved in the restoration and reconstruction of damaged body tissue, and protein is related to muscle contraction and the transmission of excitability in the nervous system. Therefore, protein is an essential nutrient substance for Taekwondo players, and the training center shall further strengthen the supplementation of protein.

Water: Water is an essential substance for sustaining life, and there is a close connection between water and the metabolism and transportation of various energy substances. During intense competitions or training, dehydration will occur if there is no timely supplementation of water after the massive perspiration of Taekwondo players. Relevant research shows that dehydration reaching 2% of body weight is mild dehydration, which causes a feeling of thirst and influences the athletic ability of Taekwondo players to a certain extent; when dehydration reaches approximately 5% of body weight, it causes a distinct diminishment in the athletic ability of Taekwondo players. Therefore, Taekwondo players must pay attention to the supplementation of water during intense Taekwondo activity.

Vitamin and inorganic salt: Micronutrients, such as vitamins and inorganic salts, have special significance in Taekwondo. Among them, vitamin B1 can effectively promote energy metabolism and maintain the normal excitability of the nervous system of Taekwondo players. Vitamin E can effectively promote the synthesis of protein, improve blood circulation, and thereby improve the athletic ability of Taekwondo players. Phosphorus in inorganic salt can synthesize phosphate, which further strengthens the phosphorylation process and effectively improves the athletic ability of Taekwondo players; an insufficiency of phosphorus will impair the metabolism of muscle energy and athletic ability. In the meantime, various inorganic salts cannot be synthesized in the body but are discharged from the body during metabolism. Therefore, Taekwondo players must pay attention to the supplementation of vitamins, inorganic salts, and various microelements; only in this way can the athletic ability of Taekwondo players be improved (Hongyan, 2012).
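The dehydration thresholds cited in the water section translate directly into a simple calculation. The sketch below is illustrative only; the weights are hypothetical (the 75.5 kg baseline echoes the players' reported average weight), and the 2%/5% cut-offs are those quoted above.

```python
def dehydration_status(baseline_kg: float, current_kg: float) -> str:
    """Classify fluid loss as a percentage of baseline body weight
    using the thresholds cited in the text (~2% mild, ~5% severe)."""
    loss_pct = 100 * (baseline_kg - current_kg) / baseline_kg
    if loss_pct >= 5:
        return f"severe dehydration ({loss_pct:.1f}% body-weight loss)"
    if loss_pct >= 2:
        return f"mild dehydration ({loss_pct:.1f}% body-weight loss)"
    return f"adequately hydrated ({loss_pct:.1f}% body-weight loss)"

print(dehydration_status(75.5, 73.5))  # ~2.6% loss -> mild dehydration
```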
ADVICE ON SUPPLEMENT OF DIETARY NUTRITION OF TAEKWONDO PLAYERS

Set up a dietician department and formulate a scientific and reasonable diet structure: The Taekwondo training center shall hire a professional sports dietitian to be in charge of the dietary nutrition of Taekwondo players. Firstly, the sports dietitian shall formulate different nutrient diet structures according to the physical function and nutrient-element requirements of each Taekwondo player. Secondly, the sports dietitian shall also formulate different nutrient diet structures according to the different training phases of Taekwondo players. Only in this way can the physical quality and athletic ability of Taekwondo players be effectively improved. Lastly, the sports dietitian shall also strengthen cooperation with the coach and promote rational, nutritious eating, trying to have every Taekwondo player actively follow a rational diet.

Supplementation of carbohydrate: As the energy-supplying substance of Taekwondo, carbohydrate has low oxygen consumption and high energy-supplying efficiency and is the main energy substance of both aerobic and anaerobic energy supply; therefore Taekwondo players shall further strengthen the supplementation of carbohydrate. Firstly, in the daily diet Taekwondo players shall eat more grain in order to increase their carbohydrate reserve. Secondly, during competition or high-intensity training, Taekwondo players shall drink an oligosaccharide carbohydrate beverage in the half-time interval to avoid massive loss of body fluid and consumption of glycogen, which will effectively strengthen their athletic ability and relieve tiredness (Jianguo, 2013).

Strengthen intake of water: Taekwondo is a sports event with intense physical confrontation. After strenuous exercise, Taekwondo players release massive amounts of sweat and shall pay attention to the supplementation of water. During training or time-outs, Taekwondo players must take a proper amount of fluid of 44-500 mL; do not wait to drink until thirsty; do not drink mineral water during exercise, as it quickly lowers plasma osmotic pressure and increases urine volume, leading to water loss instead; hypertonic beverages and carbonated beverages are even more strongly discouraged, while a beverage containing 0.9% sodium chloride, 0.5% glucose, and certain amounts of potassium chloride and magnesium aspartate is a better choice.

Supplementation of vitamin and inorganic salt: In the daily nutrition diet, Taekwondo players must further strengthen the supplementation of vitamins and inorganic salts. Firstly, Taekwondo players shall eat more beans, nuts, lean meat, animal giblets, millet, Chinese cabbage, fermented food, rice, cucumber, eel, eggs, and milk, which are rich in vitamins B1 and B2, in order to effectively supply vitamins to Taekwondo players and guarantee normal internal metabolism. Secondly, Taekwondo players shall also further strengthen the supplementation of inorganic salts such as calcium, magnesium, iron, and zinc. For instance, Taekwondo players shall take one glass of soybean milk for breakfast and a glass of milk at lunch, at dinner, and before sleep, so as to supplement vitamin D and magnesium alongside calcium, which will also improve the absorption of calcium. And during intense competitions or Taekwondo training, Taekwondo players shall drink a functional beverage rich in inorganic salts, so as to effectively supplement various kinds of inorganic salts along with water.

CONCLUSION

Diet is the most important material basis for maintaining the physical capability and basic nutrition of Taekwondo players. Only with reasonable dietary nutrition is it possible to promote healthy development, eliminate sports fatigue, improve the athletic ability of players, and lay a solid foundation for achieving excellent performance. Therefore, the Taekwondo training center must formulate a scientific and reasonable dietary nutrition plan according to the characteristics of Taekwondo and the physical function of each Taekwondo player, supplementing various nutrient elements in time; only in this way can it effectively improve the physical quality of Taekwondo players and improve their athletic ability.

Table 1: Ratio table of intake of thermal energy and thermal nutrition and energy supply in three meals of Taekwondo players
2018-12-15T04:18:02.067Z
2014-10-10T00:00:00.000
{ "year": 2014, "sha1": "1ed07aeb3e2693992765262dfe8ddda12f300dd1", "oa_license": "CCBY", "oa_url": "https://www.maxwellsci.com/announce/AJFST/6-1171-1174.pdf", "oa_status": "HYBRID", "pdf_src": "ScienceParseMerged", "pdf_hash": "1ed07aeb3e2693992765262dfe8ddda12f300dd1", "s2fieldsofstudy": [ "Agricultural and Food Sciences" ], "extfieldsofstudy": [ "Engineering" ] }
240517753
pes2o/s2orc
v3-fos-license
Characteristic Respondents with Creatinine Levels in Patients Undergoing Hemodialysis

Received 13 February 2021; Accepted 4 August 2021; Published 5 September 2021

The kidneys have an important role in the body in maintaining electrolyte composition, volume stability, and extracellular fluid. An important function of the kidneys is to filter the end products or waste products of the body's metabolism, for example creatinine. The creatinine level is a parameter of renal function, so it is necessary to know the patient characteristics related to creatinine levels. The purpose of this study was to analyze the relationship between the characteristics of age, sex, occupation, and duration of hemodialysis and the creatinine levels of patients undergoing hemodialysis. This study used an analytic cross-sectional design. The population of this study was 74 patients with chronic renal failure who underwent regular hemodialysis twice a week at Bangli Hospital. The sampling technique used was purposive sampling. Creatinine data are secondary data obtained from documents recorded in the Hemodialysis Room at Bangli Hospital. Results: occupation and creatinine levels (p = 0.099); sex and creatinine levels (p = 0.094); length of hemodialysis and creatinine levels (p = 0.406); age and creatinine levels (p = 0.046). There is a relationship between age and creatinine levels in patients undergoing hemodialysis. This open access article is under the CC-BY-SA license.

INTRODUCTION

The kidneys have an important role in the body in maintaining electrolyte composition, volume stability, and extracellular fluid. An important function of the kidneys is to filter the end products or waste products of the body's metabolism, such as creatinine; if metabolic waste accumulates in the body, these substances can become toxic, especially to the kidneys. There are several causes of increased serum creatinine levels, namely kidney disease, excessive fatigue, kidney dysfunction accompanied by infection, use of drugs that are toxic to the kidneys, dehydration, and uncontrolled hypertension. Creatinine testing can help assess kidney function: if the kidneys fail, creatinine will increase, so dialysis treatment or a kidney transplant becomes absolutely necessary (Nuratmini, 2019). Chronic kidney failure is a health problem around the world; its incidence has increased every year, and the occurrence of kidney failure raises the risk of heart and blood vessel disease and increases mortality (Setyaningsih, Puspita, & Rosyidi, 2019). Approximately 1 in 10 of the world's population is identified as having chronic kidney problems. According to the Ministry of Health (2018) and BPJS Kesehatan Indonesia, kidney disease is second only to heart disease. Creatinine is a normal product of muscle metabolism and is excreted at fairly constant levels, regardless of factors such as fluid intake, diet, and exercise (Nurarif & Kusuma, 2015). Creatinine is found in muscles, the brain, and in the blood in a phosphorylated form as phosphocreatine. Very little creatinine is present in normal urine. Creatinine is transported through the bloodstream to the kidneys, which filter most of the creatinine and excrete it into the urine. Creatinine levels change in response to kidney dysfunction: serum creatinine will increase with decreased glomerular filtering ability.
Serum creatinine levels reflect kidney damage most sensitively because creatinine is produced constantly by the body (Suryawan, Arjani, & Sudarmanto, 2016). From this description, the creatinine level is a parameter of kidney function, so it is necessary to know the patient characteristics related to creatinine levels. The purpose of this study was to analyze the relationship between the characteristics of age, gender, occupation, and duration of hemodialysis and the creatinine levels of patients undergoing hemodialysis.

Research participants: The population of this study was 74 patients with CRF undergoing regular HD twice a week at Bangli Hospital. The sampling technique used was purposive sampling.

Research procedure: This study used an analytic cross-sectional design to determine the relationship between patient characteristics and creatinine levels in patients undergoing hemodialysis. The study first requested permission from the education and training department, then from the head of the hemodialysis room, to obtain the latest patient creatinine data.

Instrument: Creatinine data are secondary data obtained from documents recorded in the HD Room at Bangli Hospital. The collected data were then tabulated into a data collection matrix prepared by the researcher.

Data analysis: The data are described using descriptive statistics, and the relationship between patient characteristics and creatinine levels was determined using the Pearson/Spearman statistical test.

Based on the research results, most respondents had high creatinine levels (98.6%), while 1.4% had normal creatinine levels. The highest creatinine level was 17.22 mg/dL and the lowest 1.14 mg/dL. After the analysis, it can be seen that there is no relationship between occupation and creatinine levels (p = 0.099), no relationship between gender and creatinine levels (p = 0.094), and no relationship between length of HD and creatinine levels (p = 0.406), but there is a relationship between age and creatinine levels (p = 0.046).

DISCUSSION

Based on the examination of creatinine levels after undergoing HD in 74 patients with chronic renal failure, most patients in the early elderly age range (46-55 years) had high creatinine levels (100%). This is in line with research conducted by Nuratmini (2019). In elderly patients with chronic kidney failure, lifestyle, stress, excessive fatigue and activity, energy-drink habits, consumption of supplemental drinks, and insufficient drinking water are factors that trigger chronic kidney failure.

Based on the examination of creatinine levels after undergoing HD in 74 patients with chronic renal failure, most of the high creatinine levels were found in male patients (100%). This is in line with research conducted by Yudhawati, Supriati, & Wihastuti (2019), in which more men experienced chronic kidney failure, 61 people (53.3%). Meanwhile, a normal creatinine level after HD was found in one male patient (2.1%). This is in line with research conducted by Paramita (2019): creatinine levels are above normal in men due to the unhealthy lifestyle applied every day, such as the dietary patterns of patients with chronic kidney failure; men usually do not avoid smoking,
drinking alcohol, or failing to exercise diligently while consuming supplements; therefore, patients need self-awareness to regulate their food properly to maintain creatinine levels during hemodialysis therapy.

Based on the examination of creatinine levels after undergoing HD in 74 patients with chronic renal failure, most respondents working as private workers had high creatinine levels (96.5%). One factor increasing creatinine levels in patients with chronic kidney failure is excessive physical activity: excessive muscle mass results in increased creatinine levels in the glomeruli, so that the kidneys are unable to function properly and hemodialysis therapy is needed. This can be seen from the research results stating that most kidney failure patients are still working. The more a patient forces himself to work, the more muscle mass progresses alongside a decline in kidney function; and if kidney function decreases, the creatinine level increases (Isnabella, 2017).

Based on research conducted at the Hemodialysis Unit of Bangli Hospital, Bangli Regency, regarding the length of time chronic kidney failure patients have undergone hemodialysis therapy, most respondents with an HD duration of 1-5 years had high creatinine levels (100%). In line with the research of Daryaswanti (2019), most respondents had undergone HD for less than 5 years. The creatinine level examination of patients with chronic renal failure in this study was carried out when the patient underwent the first hemodialysis therapy of that month. Creatinine levels within normal limits in patients with chronic renal failure were <0.5 mg/dL.

Based on the research, there was no decrease in creatinine levels in patients with chronic renal failure after undergoing hemodialysis therapy, with 98.6% of patients retaining high levels. These results are in line with those conveyed by Nuratmini (2019): in patients with chronic renal failure, creatinine levels are abnormally increased before hemodialysis therapy, and routinely undergoing hemodialysis therapy does not return creatinine levels to normal. Such patients also experience impaired erythropoietin production, which causes erythropoietin deficiency and early death of erythrocytes. Hemodialysis therapy does not completely replace renal function; although patients undergo routine hemodialysis therapy, it is limited to controlling uremia symptoms, maintaining patient survival, and preserving remaining renal function in patients with chronic renal failure.

Based on research conducted at the Hemodialysis Unit of Bangli Hospital, Bangli Regency, chronic kidney failure patients undergo hemodialysis therapy, and based on the secondary data collected on creatinine levels after undergoing HD, most patients still had high creatinine levels (98.6%). Creatinine levels need to be monitored as an indicator of kidney damage, and this examination is carried out every time the patient undergoes HD therapy; it appears that the creatinine levels of patients about to undergo HD therapy keep changing, even exceeding normal levels.
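As an illustration of the correlation analysis described in the methods, the sketch below runs a Spearman rank correlation on hypothetical data shaped like this study's (74 patients, age vs. post-HD creatinine). The numbers are invented for demonstration only and do not reproduce the reported p-values.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
# Hypothetical records: age (years) and post-HD creatinine (mg/dL) for 74 patients
age = rng.integers(25, 75, size=74)
creatinine = 4.0 + 0.06 * age + rng.normal(0, 2.5, size=74)

rho, p = spearmanr(age, creatinine)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")  # p < 0.05 indicates an association
```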
High creatinine levels after hemodialysis therapy can be caused by the creatinine molecular weight of 113 Da (Dalton), which makes it difficult for creatinine molecules to be eliminated from the bloodstream during the hemodialysis process. Hemodialysis therapy is able to reduce creatinine levels in the blood but is unable to clear creatinine adequately, so levels depend on muscle mass. The normal value of creatinine in men is 0.6-1.4 mg/dL, while in women it is 0.5-1.2 mg/dL (Dugdale, 2013). By the age of 60 years, the number of kidney nephrons will have decreased due to damage; therefore, kidney function declines. The reduced number of nephrons causes the remaining nephrons to take over the function of the damaged ones, so the workload of the remaining nephrons becomes heavier. This is one of the factors in the occurrence of chronic kidney failure. Based on the results of the study, creatinine levels in the elderly were higher than in younger patients. To determine the status of an elderly person, appropriate reference values must be used. Reports show that older people have higher serum creatinine concentrations than young adults. Whether this increase is an actual effect of aging or due to an increased incidence of disease with increasing age is debated (Tiao, Semmens, Masarei, & Michael, 2002).

Limitations of the Study: This study was conducted in one hospital, so the results cannot be generalized.

CONCLUSIONS

The number of kidney nephrons decreases due to damage. The reduced number of nephrons causes the remaining nephrons to take over the function of the damaged ones, so the workload of the remaining nephrons becomes heavier. This is one of the factors in the occurrence of chronic kidney failure. Based on the results of the study, creatinine levels in the elderly were higher than in younger patients.
2021-11-04T00:08:39.781Z
2021-09-14T00:00:00.000
{ "year": 2021, "sha1": "f4f0820fd074cc89cbcc20071514f364ac24dd95", "oa_license": "CCBYSA", "oa_url": "https://aisyah.journalpress.id/index.php/jika/article/download/6S109/pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "8b3be79956b53b03826a98abfeb85e11a6aae859", "s2fieldsofstudy": [], "extfieldsofstudy": [ "Medicine" ] }
80775230
pes2o/s2orc
v3-fos-license
To Supplement or not to Supplement? The Rationale of Vitamin D Supplementation in Systemic Lupus Erythematosus

REVIEW ARTICLE

Alessandra Nerviani, Daniele Mauro, Michele Gilio, Rosa Daniela Grembiale and Myles J. Lewis. Experimental Medicine and Rheumatology, William Harvey Research Institute, Barts and The London School of Medicine and Dentistry, Queen Mary University of London, London, UK; IRCAD, Interdisciplinary Research Center of Autoimmune Diseases, Novara, Italy; Internal Medicine Emergency Department, San Carlo Hospital; Department of Health Sciences, University of Catanzaro "Magna Graecia", Catanzaro, Italy

INTRODUCTION

Systemic Lupus Erythematosus (SLE) is a chronic multifactorial systemic autoimmune disease affecting women more frequently than men, with a peak of incidence in childbearing age [1]. Abnormal activation of the immune system, chronic inflammation, and tissue damage constitute the hallmark of the disease. SLE clinical manifestations are widely heterogeneous, ranging from mild symptoms of fatigue and oral ulceration to life-threatening renal and neurologic complications. Typically, the disease fluctuates between clinical flares and quiescence; however, recurrent flares may ultimately lead to irreversible organ damage [2]. The aetiology of lupus has not been fully elucidated yet, but it has been associated with a variety of factors including genetic and epigenetic predisposition, female sex hormones, and environmental factors such as infections, ultraviolet (UV) exposure and cigarette smoking [3]. Even if remarkable advances have been made in unravelling lupus pathogenesis, it remains not entirely defined. Increased production of type I interferon (IFN) by innate immune cells, activation of T helper (Th) 1/Th17 cells with lowered interleukin (IL) 4 production [4], defects in the clearance of apoptotic debris, persistence of autoantigens, and release of autoantibodies are among the crucial events leading to SLE development. Eventually, damage to target organs and tissues is mediated by immune complex (IC) deposition and complement activation. To date, treatment of lupus mostly depends on immunosuppressive agents; nonetheless, the complexity of the pathogenic mechanisms involved might offer several further options for immunomodulatory therapy in the future [5].

Vitamin D is a steroid hormone, primarily known for its role in the regulation of calcium and phosphorus homeostasis and in bone protection; more recently, a potential novel role for vitamin D as a modulator of the immune system has been described too. Once activated, vitamin D exerts its activity by binding Vitamin D Receptors (VDRs). In humans, VDRs are widely expressed, including on numerous immune cells, suggesting that vitamin D may play an essential function in controlling immune system responses. This finding has encouraged several studies aiming at elucidating the immunomodulatory properties of the vitamin D/VDR axis [6, 7]. Defective signalling surely affects bone health and development, but it could also be associated with an increased risk of multiple chronic diseases such as autoimmune conditions, infectious diseases, and cancer [8]. In 1995, Muller et al.
first described the link between low vitamin D levels and lupus [9]; since then, several subsequent reports have confirmed a higher prevalence of vitamin D deficiency amongst SLE patients compared to the general population, often also observing a correlation with disease severity [10-16].

Whether or not vitamin D deficiency could ultimately contribute to SLE onset, progression or clinical phenotype is still an unanswered question. Clinical trials assessing the therapeutic efficacy of vitamin D supplementation as an immunomodulatory agent in SLE have given contrasting results, but more extensive studies will hopefully help to shed light on this topic.

Here, we will summarise the current knowledge about vitamin D deficiency prevalence, risk factors and possible pathogenic role in SLE; also, critical molecular studies aiming at an in-depth characterisation of the immunomodulatory effects of vitamin D will be reviewed. Finally, we will focus on the link between vitamin D deficiency and clinical aspects of SLE and will recapitulate the results of the clinical trials assessing the effects of vitamin D supplementation in SLE.

VITAMIN D METABOLISM

Vitamin D is a steroid hormone essential for calcium metabolism and bone protection. It is partly obtained from the diet (vitamin D2 or ergocalciferol), but it predominantly derives from the photo-conversion of 7-dehydrocholesterol into vitamin D3 (or cholecalciferol), which occurs in the skin in response to UV radiation [17]. Both ergocalciferol and cholecalciferol need to undergo chemical modifications to become biologically active. The activation requires two steps: i) in the liver, cytochrome P450 (CYP) hydroxylases, particularly CYP2R1, convert cholecalciferol into 25-hydroxy-vitamin D3 [25(OH)D3]; ii) in the kidney, the 1α-hydroxylase CYP27B1 generates the active form 1,25-dihydroxy-vitamin D3 [1,25(OH)2D3], or calcitriol, which is around 10 times more potent than 25(OH)D3. To maintain adequate levels of calcitriol, the 24-hydroxylase (CYP24A1) acts as negative feedback on vitamin D activation by hydroxylating both 25(OH)D3 and 1,25(OH)2D3 to generate less active molecules. Parathyroid hormone (PTH) is capable of modulating the equilibrium between the "activator" CYP27B1 and the "inhibitor" CYP24A1, shifting the system towards calcitriol formation. A low concentration of calcium in the serum upregulates PTH; conversely, high levels of calcitriol can down-regulate and suppress it [18, 19]. More recently, the phosphaturic hormone fibroblast growth factor 23 (FGF23) has emerged as a negative regulator of 1,25(OH)2D3 generation [20].

Historically, 1,25(OH)2D3 was known for its ability to enable the absorption of calcium in the gastrointestinal tract to control calcium homeostasis. Constantly low levels of calcitriol impair the intake of calcium from the intestine and favour the mobilisation of calcium from the bone, ultimately leading to pathologies such as osteomalacia, osteoporosis and rickets [21, 22]. 1,25(OH)2D3 acts by binding its receptor VDR and mediating its conformational modification; VDR works as a transcription factor regulating the DNA expression of the Vitamin D Response Elements (VDREs) [23]. Of notable importance in the context of autoimmune diseases such as SLE is the effect of the chronic use of corticosteroids, which can increase the activity of the calcitriol-inhibitor CYP24A1 while lowering the intestinal absorption of calcium [24, 25].
Vitamin D Deficiency Definition

In current practice, vitamin D status is evaluated by measuring the serum concentration of 25(OH)D3, a mirror of the most abundant pool sequestered in adipose tissue and muscles [26]. Analytical variability and discrepancies in epidemiologic studies fuel the controversy over a universally accepted definition of vitamin D deficiency. Several techniques ranging from liquid chromatography and chemiluminescence to immunoassay have been developed, resulting in an intra-sample variability of up to 20% [27]. So far, liquid chromatography is probably the most reliable methodology, as it is not influenced by the presence of other vitamin D species and metabolites [28]. In 2010, an international collaborative venture was launched by the National Institutes of Health aiming at promoting standardisation of the laboratory measurement of 25(OH)D3 and defining the appropriate concentration of plasmatic vitamin D [29]. Several international health and scientific organisations have approached this issue, including the World Health Organization (WHO), the Institute of Medicine (IOM) and the Endocrine Society (ES), reaching different conclusions regarding the desirable level of circulating 25(OH)D3. The ES sets the threshold at 30 ng/mL and the IOM at 20 ng/mL, a discrepancy partially justified by the different target populations considered. Nevertheless, both measures seem poorly representative for non-white ethnicities [27, 30, 31]. Despite using different criteria, both the ES and the IOM based their statements on a systematic review of the literature focusing mainly on 'skeletal' outcomes such as the PTH inflection point, calcium absorption, osteomalacia, rickets, Bone Mineral Density (BMD), and fractures [27, 30, 31]. In the absence of a universally accepted definition, most studies currently use a cut-off of 30 ng/mL to designate vitamin D insufficiency, and values under 20 ng/mL for vitamin D deficiency [17, 30, 32, 33]. Further studies focusing on non-musculoskeletal outcomes and representative of a more varied genetic background/ethnicity remain to be carried out [28].
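The working cut-offs described above amount to a simple decision rule. As a minimal illustration (the rule follows the 20/30 ng/mL thresholds most studies adopt; the function itself is hypothetical and not from the review), a serum 25(OH)D3 measurement can be classified as follows:

```python
def vitamin_d_status(serum_25ohd_ng_ml: float) -> str:
    """Classify serum 25(OH)D3 using the commonly used cut-offs
    (<20 ng/mL deficiency, 20-30 ng/mL insufficiency)."""
    if serum_25ohd_ng_ml < 20:
        return "deficient"
    if serum_25ohd_ng_ml < 30:
        return "insufficient"
    return "sufficient"

# Conversion note: 1 ng/mL of 25(OH)D3 is about 2.5 nmol/L, so 20 ng/mL = 50 nmol/L
print(vitamin_d_status(24.0))  # -> insufficient
```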
Prevalence of Hypovitaminosis D in Disease

Vitamin D deficiency is common in the general population and, as expected, its prevalence increases at higher latitudes [17]. Nonetheless, the still significant frequency of vitamin D deficiency in countries with high sun exposure suggests that factors other than UV radiation influence the levels of 25(OH)D3, e.g., genetics, diet, cultural habits and clothing [17, 34]. A growing number of preclinical works is shedding light on the pleiotropic effects of vitamin D on virtually all cell types. Thus, it is not surprising that a plethora of observational studies have described altered levels of 25(OH)D3 in multiple conditions such as diabetes [35], atherosclerosis, cancer, and autoimmunity. Indeed, in most case-control studies on autoimmune diseases including SLE, patients had persistently lower vitamin D levels than controls [17, 36-39]. The inclusion of healthy controls as a comparator group is mandatory for rightly interpreting data on vitamin D deficiency/insufficiency because of its broad diffusion and the high variability of its prevalence worldwide. Among 14 controlled studies reviewed by Reynolds and Bruce in 2017 [40], 12 showed significantly lower levels of serum vitamin D in SLE, while only two failed to detect different average values between cases and controls [11, 41]. Hence, despite a prevailing consensus that SLE patients have significantly lower levels of vitamin D, variations in assay techniques, cut-off values, seasonality, ethnicity, age, sex, disease duration and latitude all contribute to making the frequency of vitamin D deficiency/insufficiency challenging to compare between different studies. Overall, the rate of vitamin D deficiency/insufficiency diverges considerably across multiple cohorts, ranging from 8 to 30% [10, 42-47].

Risk Factors for Vitamin D Deficiency

The recurrent finding of lower vitamin D levels in SLE patients raises a burning question still far from being answered: is vitamin D deficit a contributing factor to SLE development? Or is it merely a consequence of the disease? Several plausible hypotheses about this relationship have been advanced. Reduced UV exposure is indeed frequent in SLE patients: active sun avoidance and the use of high-factor sunblock are recommended by clinicians worldwide due to the photosensitivity typical of the disease [14, 48]. Additionally, it is conceivable that symptoms such as fatigue and polyarthralgia may limit the outdoor activity of SLE patients, impacting UV exposure and vitamin D metabolism. However, a recent study by Shoenfield et al. analysing CYP24A1 polymorphisms showed that photosensitivity and sun exposure behaviour did not correlate with vitamin D levels, suggesting that the genetic background is a stronger driver of vitamin D status [49, 50]. Renal involvement typical of SLE may also affect vitamin D metabolism by impairing the 1-hydroxylation occurring in the kidneys. Epidemiological studies identified nephritis as one of the most influential predictors of vitamin D deficiency [14, 42, 51]. Medications could, in theory, contribute to reducing the levels of vitamin D.
Corticosteroids, for instance, may lower intestinal calcium absorption and increase vitamin D catabolism; therefore, when prescribed for SLE treatment, steroids may contribute to vitamin D deficiency [51-53]. Though, even if plausible, this cannot fully explain the low level of vitamin D found in SLE patients, especially before diagnosis when patients are still steroid-naïve [48, 49]. Genetic variations also participate in regulating the absorption and metabolism of 25(OH)D3 [54]. For instance, intronic polymorphisms of the regulatory region of the gene encoding the negative regulator CYP24A1 [18] have been associated with SLE: in subjects with increased genetic risk for SLE development, the presence of two copies of the minor allele increases 25(OH)D3 levels, reducing the risk of having the disease [50].

Finally, 25(OH)D3 may also act as a negative acute-phase reactant, dropping as a consequence of acute or chronic inflammation. This relationship has recently been confirmed by a meta-analysis demonstrating that serum 25(OH)D3 levels decrease following traumatic events such as orthopaedic surgery and acute pancreatitis and correlate inversely with C-reactive protein values [55].

ROLE OF VITAMIN D IN THE IMMUNE SYSTEM REGULATION

As previously mentioned, the discovery of the expression of VDRs by a plethora of innate and adaptive immune cells [56-58] led to the hypothesis of a role for vitamin D in immune system regulation and, potentially, in the pathogenesis/progression of conditions characterised by an impaired immune response. In vitro studies have shown in multiple cell types that the vitamin D/VDR axis has important and broad immunomodulatory properties, overall mediating a negative regulation of several lupus-related immunological abnormalities. In the adaptive system, 1,25(OH)2D3 can decrease the proliferation of B cells by up-regulating their apoptosis, inhibit B cell-to-plasma cell differentiation and immunoglobulin (Ig) class switching, and limit the production of autoantibodies, including anti-double-stranded (ds) DNA [59-61]. Since B lymphocytes are CYP27B1-expressing cells, an autocrine regulation of the response to calcitriol seems likely [61, 62]. Similarly, in T cells the main effect of vitamin D stimulation is a modulation of their activation. Numerous T cell populations such as Th CD4+ and CD8+ cells express the VDR and can be targeted by vitamin D [57], and VDR expression seems to be induced in response to an initial T-Cell Receptor (TCR) signal [63]. On the one hand, vitamin D negatively regulates the release of proinflammatory cytokines, e.g., IL-17A and IL-12p70, and reduces the relative percentage of the Th17 subset [64, 65]. Conversely, it raises, at least temporarily, the number of T-regulatory (Treg) cells and the expression of Treg-specific markers [66]. Among the innate immune cells, both DCs and monocytes/macrophages express the VDR and the activating enzyme CYP27B1 [67]. In monocytes, treatment with 1,25(OH)2D3 decreases MHC II and CD80/CD86 expression [68], inhibits monocyte differentiation [69] and limits the pro-inflammatory effects secondary to the activation of Toll-Like Receptors (TLRs), e.g., TLR9 [70]. Vitamin D-treated macrophages show a preferential M2 phenotype [71], characterised by reduced production of Tumor Necrosis Factor (TNF)-α, IL-1β, IL-6, and nitric oxide but increased IL-10 [72], and have a limited ability to activate T cells [73]. Vitamin D can also modulate the immune response by inhibiting the maturation
of DCs [69]: these immature and tolerogenic DCs have immunomodulatory properties and are able to promote Treg differentiation while restraining the proliferation of inflammatory T cells [74-76]. Notably, when monocyte-derived DCs from lupus patients were treated with dexamethasone in combination with vitamin D3, they became able to promote IL-10-expressing Tregs and to inhibit the pro-inflammatory T cell phenotype [77]. Neutrophils express VDRs too [78]; studies in SLE showed that, when bound by its ligand 1,25(OH)2D3, the activated VDR mediates an overall improvement of the endothelial damage secondary to decreased generation of Neutrophil Extracellular Traps (NETs) [79]. Some of the vitamin D effects observed in vitro were replicated in animal models of lupus, for instance, its ability to favour Treg differentiation and Foxp3 expression [80], to reduce IL-17/23, IFN-gamma and IL-6, and to decrease the titre of anti-dsDNA antibodies [80, 81]. A potential therapeutic role for vitamin D in improving clinical manifestations of lupus was hypothesised based on the decreased severity of the disease observed in MRL/1 mice treated with 1,25(OH)2D3 [82]. Immunomodulatory properties of vitamin D were also confirmed in both controls and SLE patients treated with vitamin D supplementation: in healthy volunteers receiving 12 weeks of oral cholecalciferol supplementation (140,000 IU/month), the number of Treg cells significantly increased [83]. Also, pro-inflammatory cytokine production by cells isolated from vitamin D-deficient but otherwise healthy participants was reduced in subjects who corrected their vitamin D levels following supplementation [84]. The possibility that vitamin D could act as an immunomodulatory therapeutic agent prompted numerous studies assessing the role of vitamin D supplements in improving the immune and clinical responses in SLE. As will be discussed later, cholecalciferol administration to lupus patients seems to shift the ratio between Th1/Th17 effector cells and Tregs in favour of the latter [85-87], meanwhile decreasing the number of memory B cells and the production of anti-dsDNA antibodies [85-87]. Consistently, a negative correlation between 25(OH)D3 levels and the presence of anti-dsDNA antibodies has been repeatedly observed [11, 90]. Finally, some [11, 88], though not all studies [91], showed a negative correlation between serum vitamin D values and the IFN signature in lupus patients.

DOES HYPOVITAMINOSIS D PLAY A ROLE IN SLE DEVELOPMENT?
As discussed above, although the high prevalence of vitamin D deficiency in lupus has been broadly demonstrated and accepted, its potential role in the development, progression and clinical manifestations of the disease is still under investigation. Several studies have tried to establish a pathogenic function for impaired vitamin D levels in autoimmune diseases, though this is scientifically challenging. The body of evidence on the immunological and non-immunological disease-associated pathways potentially controlled by vitamin D is growing exponentially, but conclusive mechanistic correlations in humans are elusive and hard to prove, particularly because most of the available data come from observational studies. Interestingly, significantly lower vitamin D levels have been observed in subjects with anti-nuclear antibody (ANA) positivity but without clinically proven SLE, hence suggesting that a breach of immune tolerance may be more common in vitamin D-deficient subjects [88]. A retrospective analysis of hospital admission records in England related to diseases associated with vitamin D deficiency, including osteomalacia and rickets, revealed an increased future risk of developing immune-mediated conditions such as SLE, Rheumatoid Arthritis (RA) and systemic sclerosis in these patients [92]. However, due to the intrinsic limitations of this kind of study, confounders and reverse causality cannot be ruled out [92]. More recently, vitamin D deficiency in high-risk subjects (SLE siblings), along with CYP24A1 polymorphisms, has been associated with a higher prevalence of SLE onset within a follow-up period of 6 years [50]. In keeping with this, patients who progress from an undifferentiated Connective Tissue Disease (CTD) to a defined CTD seem more likely to have lower vitamin D levels than non-progressors [93].

As will be discussed later in this manuscript, experimental vitamin D administration (or deprivation) in animal models of SLE offers some insight on this topic. It seems indeed that the administration of 1,25(OH)2D3 to MRL/1 mice, a model of spontaneous SLE, prevents dermatological lesions such as alopecia and ear necrosis [82], and reduces the severity of proteinuria and arthritis, overall increasing the lifespan [94].
Vitamin D Receptor (VDR) Gene Polymorphisms Correlate With Risk of SLE

Genetics may further help to elucidate the link between vitamin D and SLE. Some of the numerous polymorphisms located within the VDR gene have indeed been associated with a higher risk of developing SLE in multiple studies. Meta-analyses of genetic studies confirmed the correlation for some of the SNPs in VDR in Asians but not in Caucasians. Among those, the most extensively studied variants are TaqI (rs731236), BsmI (rs1544410), ApaI (rs7975232), and FokI (rs2228570) [95]. More specifically, the B allele in the VDR BsmI associates with a raised risk of SLE in the general population, with the strongest correlation in Asians and a lack of association in Caucasians. The association between the VDR FokI and the risk of SLE was confirmed too; though, a subsequent sub-analysis performed categorising patients by ethnicity again failed to identify any correlation in Caucasians. Data coming from the three genetic studies on ApaI revealed an association only in patients of African origin, and should in any case be taken with some caution because of the limited sample size [96,97].

CORRELATION BETWEEN VITAMIN D DEFICIENCY AND CLINICAL AND SEROLOGICAL MANIFESTATIONS IN SLE

In keeping with the above-described modulatory properties of 25(OH)D3 on immune system cells, a considerable effort has been made over the last decades to investigate the association between vitamin D levels and lupus severity, disease progression, immunologic status, and comorbidities. To date, even if numerous studies have been published worldwide over the last decade (Table 1), data in this field are not yet conclusive: while some studies reported an inverse correlation between vitamin D levels and lupus disease activity, disease flares, Cardiovascular (CV) involvement, renal disease, fatigue, and anti-dsDNA titre, these results were not consistently replicated. The interest in evaluating the correlation between vitamin D and clinical manifestations is not limited to lupus but has also been raised in other autoimmune conditions [98]. For instance, in two recently published meta-analyses, a significant inverse correlation between serum 25(OH)D3 levels and disease severity was found in both Crohn's disease and RA [37,38].
Vitamin D and SLE Disease Activity

Over the last decade, a considerable number of reports investigated the association between the serum concentration of vitamin D and the severity of the disease in lupus patients. Unfortunately, heterogeneous indexes have been used for assessing disease activity (e.g., SLEDAI, SELENA-SLEDAI, BILAG, ECLAM), making direct comparison between trials somewhat tricky. Independently of the score used, a significant proportion of these observational studies showed the existence of an association between lower 25(OH)D3 serum concentration and higher disease activity [11, 42, 43, 45, 89, 103, 105, 108, 109, 121, 126, 128, 129, 131, 132, 135, 136]. However, this correlation was not confirmed in all the studies [10,12,44,48,90,101,106,114,118,124,130,133]. The reasons behind these discrepant results might be several; as mentioned, various indexes of disease activity have been used throughout the studies, as no single standard measurement exists. Moreover, the different ethnicity of the subjects included could also play a substantial role. Despite the evidence of an association between vitamin D levels and disease activity, a direct causal relationship has not been found yet and cannot be drawn as a conclusion from observational studies. On the one hand, in keeping with the effects of vitamin D/VDR on the immune system, a deficit in vitamin D might represent a trigger for the development of autoimmunity and more aggressive disease. On the other hand, however, 25(OH)D3 concentration might be lowered secondary to the presence of systemic inflammation. Interestingly, even if continuous high disease activity correlates with organ damage, most of the studies failed to show a correlation between low serum 25(OH)D3 and lupus-related organ damage [105]; in some circumstances, lower vitamin D levels have been associated with disease flares [42,112,122]. A more consistent consensus has been reached with regards to the negative correlation between vitamin D and ANA titres [11,46,88,90,100,128,129], in keeping with the in vitro ability of vitamin D to inhibit B cell activation and autoantibody production.

It is possible that some confounding factors could drive the link between low levels of 25(OH)D3 and severity/features of the disease. A meta-analysis published in 2014 analysed the results of 11 articles that reported a Pearson correlation coefficient between vitamin D levels and disease activity, included more than 20 patients, and considered at least one confounding factor for vitamin D serum concentration. Here the Authors showed that the most commonly identified confounding factors were renal function, proteinuria, BMI, and concurrent treatment including Disease-Modifying Anti-Rheumatic Drugs (DMARDs), steroids and vitamin D supplementation [114]. Among the specific lupus-related clinical manifestations, nephritis [14,42,51,108,123] and CV involvement have most often been associated with vitamin D deficiency (the correlation with CV manifestations will be discussed in detail in the next paragraph).
Vitamin D and Cardiovascular Disease in Lupus Patients

With regards to CV disease, it is well accepted that patients affected by SLE have an increased CV risk, which manifests at an earlier age in comparison to the general population and translates into higher CV-related mortality [138,139]. The raised prevalence of CV events can be explained by the contribution of risk factors related to both the disease itself, such as chronic inflammation, and the disease treatment, including long-term use of steroids, both in association with traditional CV risk factors (e.g., smoking, hypertension, high low-density lipoprotein levels, obesity, impaired glucose metabolism) [140]. Accelerated atherosclerosis triggered by traditional and disease-related risk factors such as disease duration, raised homocysteine levels and pro-inflammatory cytokines [141], and the metabolic syndrome seem to be particularly important, the latter being present in almost half of lupus patients at disease onset and being associated with cumulative damage of organs and tissues [142,143]. Since vitamin D deficiency has been described as a risk factor for the occurrence of CV disease in the general population [144-146], its association with and role in the development of CV disease has also been assessed in the context of SLE. Once again, even if data in this field are somewhat contradictory, there is substantial evidence that vitamin D deficiency is associated with CV risk factors in lupus [42,113,116,126,134,147,148]. Some Authors have supported a direct correlation between low vitamin D levels and the age-adjusted total area of the carotid plaque in lupus patients [127]. This has not been confirmed in a different study [126], which, nonetheless, highlighted how vitamin D deficiency is associated with increased aortic stiffness [126]. The relationship between low vitamin D and the metabolic syndrome has been shown too [99]. Evidence from observational studies prompted interventional trials aiming at assessing the value of vitamin D supplementation for controlling/reducing CV risk factors. Results from the Women's Health Initiative (including 36282 post-menopausal women) did not support a role for vitamin D in modifying CV risk in the general female population; however, the design of the study, which allowed personal supplementation of vitamin D in the untreated arm, might have constituted a fundamental confounding factor [149]. A meta-analysis published by Chowdhury et al. in 2014, considering 73 observational studies and 22 randomised controlled trials, concluded that a negative correlation between vitamin D levels and mortality rate (including CV-related causes) exists in the general population and that supplementation with 25(OH)D3 can decrease overall mortality in adults (average age 56-85 years old) [150]. Thus, although robust lines of evidence are lacking, vitamin D supplementation is encouraged in lupus patients [151] in keeping with the raised CV risk and related mortality.
Vitamin D and Lupus-Related Fatigue

Fatigue is one of the most common symptoms described by patients affected by SLE, being present in around 80% of all lupus patients and conferring disability in more than half of them [152]. Vitamin D deficiency has been reported in several studies as a factor associated with fatigue in SLE, even when no other clinical correlations were found [48,104,133,153]. Salman-Monte et al., for instance, have recently shown that non-supplemented SLE female patients with insufficient vitamin D levels had significantly higher fatigue compared to subjects with normal vitamin D serum ranges [104]. Moreover, increased 25(OH)D3 levels secondary to supplementation seem to have a favourable influence on fatigue, as suggested by the significant inverse correlation between changes in vitamin D levels and differences in the VAS fatigue score post-supplementation [133].

VITAMIN D SUPPLEMENTATION IN SLE: GOALS, REGIMENS AND THERAPEUTIC EFFECTS

At present, universally accepted guidelines about which categories of patients need to be tested for vitamin D deficiency have not been published, but recommendations come from diverse societies and organisations. For instance, the National Osteoporosis Society (NOS) suggested that only patients with bone or musculoskeletal symptoms should be tested [154], while the ES advised the measurement of vitamin D for patients affected by obesity, liver and chronic kidney disease and, more generally, subjects of Hispanic and African-American ancestry [30]. The controversy, which was later discussed in two additional reports [155,156], effectively exemplifies the difficulty of finding an international agreement in the field of vitamin D, already evident in the discrepancies of the results reported in the observational studies listed above.

As with screening, no worldwide-accepted guidelines currently exist with regards to the supplementation of vitamin D (target levels and therapeutic regimens) in either the general population or specific groups of patients, e.g., SLE. Cholecalciferol is the most common form of vitamin D used for supplementation in routine care [30]. The recommended amount of vitamin D intake varies hugely according to the different guidelines, from 600 IU/day (dietary intake only) advocated for the general population by the IOM [157] to 1500-2000 IU/day for subjects at high risk as suggested by the ES [30]. The NOS proposed, for patients with values < 30 nmol/L, a loading dose of 300000 IU followed by a maintenance dose of 800-2000 IU/day [154]. Conversely, the European Society for Clinical and Economic Aspects of Osteoporosis and Osteoarthritis recommended supplementation of 800 to 1000 IU/day for baseline 25(OH)D3 values below 50 nmol/L (20 ng/mL) [158]. A comprehensive meta-analysis including 11 randomised trials of vitamin D supplementation concluded that treatment with ≥ 800 IU/day was helpful for preventing non-vertebral fractures in older adults [159], but there was no mention of any potential extra-skeletal effect of vitamin D.
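The regimens above are quoted on different schedules and in mixed units, which makes them hard to compare at a glance. The following short sketch is not part of the original review; it simply normalises each cited regimen to an average daily dose and applies the standard 25(OH)D unit conversion (1 ng/mL ≈ 2.496 nmol/L). The labels are illustrative.

```python
# Illustrative arithmetic only (not from the review): normalise vitamin D
# regimens to an average daily dose and convert 25(OH)D units.

def iu_per_day(dose_iu: float, interval_days: float) -> float:
    """Average daily dose for a regimen repeating every `interval_days`."""
    return dose_iu / interval_days

def ng_ml_to_nmol_l(ng_ml: float) -> float:
    """Convert serum 25(OH)D from ng/mL to nmol/L (1 ng/mL ~ 2.496 nmol/L)."""
    return ng_ml * 2.496

regimens = {
    "IOM, general population (600 IU/day)": iu_per_day(600, 1),
    "ES, high-risk subjects (2000 IU/day)": iu_per_day(2000, 1),
    "NOS maintenance, upper bound (2000 IU/day)": iu_per_day(2000, 1),
    "Monthly bolus, 140000 IU/month": iu_per_day(140_000, 30),
    "Weekly bolus, 50000 IU/week": iu_per_day(50_000, 7),
}

for label, daily in regimens.items():
    print(f"{label}: ~{daily:,.0f} IU/day")

# The ESCEO threshold of 50 nmol/L corresponds to 20 ng/mL:
print(f"20 ng/mL = {ng_ml_to_nmol_l(20):.0f} nmol/L")
```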
Despite the favourable effects of vitamin D administration in murine models of autoimmunity, the efficacy of vitamin D supplementation as an immunomodulatory agent for treating lupus and other autoimmune diseases is currently under debate. A conventional therapeutic approach to SLE patients is missing; in routine care, the goal of supplementation is the prevention of fractures and the protection of the bone, usually represented by values > 30-40 ng/mL [160]. The adjustment of the dose at the individual level should be made based on specific risk factors, in particular the concurrent use of steroids, in keeping with their known negative action on vitamin D absorption [33, 161-163]. A large inception study including 223 Spanish newly diagnosed SLE patients and assessing the nature of treatment during the first year of follow-up concluded that vitamin D supplementation was below the optimal dose required [164].

Multiple interventional trials have been performed, but further trials will be needed to clarify the potential therapeutic effects of vitamin D in SLE beyond bone protection. To date, only conventional doses of vitamin D have been used in trials, aiming to correct the deficiency but not to achieve predefined serum levels. Vitamin D supplementation is advantageously very well tolerated and only rarely toxic, and complications associated with vitamin D toxicity (e.g., hypercalcemia, hypercalciuria, calcifications) start appearing when levels are > 80 ng/mL [160]. Still, it would be interesting to evaluate the effects of higher doses of vitamin D in patients with lupus, trying to reproduce and enhance the immunomodulation observed in vitro and in animal models. Here, we have listed and discussed the most relevant interventional trials of vitamin D supplementation in lupus patients.

Interventional Clinical Trials

A large double-blind randomised controlled trial published in 2013 by Abou-Raya et al. included 267 Egyptian SLE patients with SLEDAI > 1 and serum vitamin D level < 75 nmol/L, randomised to receive cholecalciferol 2000 IU/day or placebo in a 2:1 ratio. At the end of the study (12 months), the SLEDAI was improved in the treatment arm; moreover, compared to the placebo group, cholecalciferol-treated patients had lower anti-dsDNA and inflammatory cytokine serum levels [89]. In the same year, Petri et al. assessed the effects of vitamin D supplementation in 1006 SLE patients (Hopkins Lupus Cohort). Lupus patients with baseline 25(OH)D3 levels < 40 ng/mL (around 80% of the recruited subjects) received 50000 IU/week of ergocalciferol in combination with a total of 400 IU calcium/cholecalciferol daily. Patients who corrected the vitamin D deficiency had a modest but significant improvement of disease activity measured with the SELENA version of SLEDAI. Moreover, an amelioration of the urine protein-to-creatinine ratio was observed too [165]. Ruiz-Irastorza et al. assessed the relationship between vitamin D supplementation and changes in clinical variables in 60 patients with SLE treated as per routine care for vitamin D deficiency and recruited in a previous observational study. Despite increasing the concentration of serum vitamin D, a remarkable proportion of patients still had low levels post-supplementation. The advantageous effect of increasing vitamin D levels was observed on VAS fatigue. However, no significant effects were found on lupus disease activity or organ damage [133]. Similar conclusions, i.e.
the absence of modifications in disease activity (assessed by SLEDAI) following vitamin D treatment, were drawn in an open-label study enrolling 20 lupus patients and treating them with 100000 IU/week for one month followed by 100000 IU/month for six months. On the other hand, anti-dsDNA titres, memory B cell numbers, and the percentage of Th1/Th17 cells decreased while Tregs increased. At the end of the trial, patients had reached significantly higher levels of vitamin D compared to baseline [85]. In 2015, Aranow et al. reported the results of a double-blind randomised controlled trial evaluating the effects of vitamin D supplementation on the IFN signature response in SLE patients. 57 lupus patients with baseline vitamin D < 20 ng/mL and stable inactive disease were enrolled. Patients were required to have an IFN signature at baseline, quantified by gene expression of 3 IFN-related genes. Patients were randomised into 3 groups to receive 4000 IU/day of cholecalciferol, 2000 IU/day of cholecalciferol, or placebo for 12 weeks. At the end of the study, 16/33 patients receiving active treatment had replenished vitamin D serum levels, but no significant IFN-signature response was observed even in patients who achieved adequate vitamin D concentrations. The absence of an IFN response might be explained by the relatively short duration of the study and by the disease status at baseline (inactive) [91]. In another recent randomised trial, 34 female lupus patients received two different regimens of vitamin D supplementation for a total of 2 years. Patients were supplemented with one of the two schemes for 12 months and afterwards switched to the second therapeutic strategy. One scheme, standard, consisted of cholecalciferol 25000 IU/month; the other, more intensive, of a 300000 IU initial loading dose followed by 50000 IU/month. Overall, the study failed to find clinical efficacy (disease activity and serology) of vitamin D supplementation in lupus patients independently of the regimen. It showed, however, favourable immunological variations such as enrichment of Tregs and increased release of Th2 cytokines. Remarkably, only the most intensive regimen allowed the achievement of adequate levels of vitamin D [86,166]. The superiority of a high loading dose of cholecalciferol in correcting vitamin D deficiency had been previously observed [167]. In another randomised placebo-controlled study, 45 vitamin D-deficient lupus patients were enrolled and received vitamin D (50000 IU/week for 12 weeks followed by 50000 IU/month for three months); an additional 45 patients were randomised to receive placebo. Even if the level of vitamin D significantly increased after supplementation (but not in the placebo group), there was no difference in the SLEDAI between the two groups [168]. Lima et al. instead confirmed similar results as published by Abou-Raya et al. and Petri et al. in young adults affected by juvenile SLE: 40 patients were enrolled in a placebo-controlled trial and randomised 1:1 to receive cholecalciferol 50000 IU/week or placebo for 24 weeks. Vitamin D supplementation significantly improved the disease-related fatigue; moreover, a significant difference in SLEDAI and ECLAM was reported in favour of the treated group [169]. The beneficial effects of vitamin D on juvenile-onset SLE patients have also been established in the study of AlSaleem et al., in which 28 children (24 with low vitamin D levels) with lupus were recruited and received cholecalciferol 2000 IU/daily. After 12 weeks, a significant proportion of patients had improvement in SLEDAI score and autoantibody titres [170].
The influence of vitamin D supplementation on endothelial function, known to be impaired in patients with SLE [171], was evaluated in a pilot case-control study recently published by Kamen et al. Vitamin D-deficient lupus patients were randomised to receive oral vitamin D supplementation or placebo. In the absence of replenishment of vitamin D levels (not reaching ≥ 32 ng/mL), the Flow-Mediated Dilation (FMD), an indirect measure of endothelial function, did not improve. Contrariwise, around 50% of the patients who increased their vitamin D concentration had better values of FMD by the end of the trial [172]. Furthermore, in vitamin D-deficient patients treated with oral supplementation, a positive correlation between the improvement of FMD values and the change in vitamin D levels post-treatment was demonstrated [173]. Overall, these positive results call for further, larger studies assessing this aspect in lupus patients.

The potential favourable action of vitamin D supplementation on endothelial function is not disease-specific; in fact, a single high dose of oral vitamin D was able to significantly improve FMD values in patients affected by type 2 diabetes mellitus in comparison with healthy controls [174]. Ex vivo studies corroborated the possible ability of vitamin D to positively enhance endothelial repair mechanisms and global endothelial function [173,175], for example by reducing NETosis [79]. Studies in experimental models of SLE (MRL/lpr) also showed that lower levels of vitamin D correlated with impaired endothelium-dependent vasodilation and defective neoangiogenesis, in agreement with the human findings [176].

Conclusive Remarks on Interventional Trials

Overall, drawing definitive conclusions from the interventional studies discussed above is still not feasible because of the inconsistency of the results. Several reasons can explain the disagreement between findings: the limited number of patients included in some trials; a still relatively low number of double-blind randomised controlled trials; the heterogeneous features of the patients enrolled (e.g., different baseline vitamin D levels, various disease activity, concomitant treatment); and a non-univocal treatment regimen (dose/duration/final goal).
The central open question remains whether or not vitamin D might constitute a valuable therapeutic approach for modulating the immune response and the clinical/serological manifestations of lupus, potentially acting as a sparing agent for other, more harmful medications currently in use. Numerous reviews of the literature have lately been published, but the Authors rarely reached an incontrovertible consensus towards one conclusion or the other [33,40]. It is plausible that the lack of vitamin D-related clinical effects in some studies lies in an inadequate therapeutic approach regarding the dose, the duration and the patients' selection. Since it has been observed that patients with autoimmune diseases have persistently raised values of PTH, it is likely that the goal of supplementation should be PTH suppression rather than a "target" vitamin D plasma concentration [177]. The increasing interest in the therapeutic utility of vitamin D supplementation in the prevention and management of pathologic conditions is not limited to lupus but also involves other major chronic diseases, both autoimmune and not (type 1 diabetes, multiple sclerosis, and CV disorders) [178]. In conclusion, the promising results reported in some studies [89,165] need to be confirmed, and further large clinical trials are therefore warranted in this field.

CONCLUSIONS AND TAKE HOME MESSAGES

1. Patients with SLE are more prone to be vitamin D deficient compared to the general population; however, vitamin D deficiency is common also in healthy individuals.
2. Potential determinants of vitamin D deficiency in SLE include reduced UV exposure, genetic variations, corticosteroid treatment, and renal disease.
3. Current knowledge is not conclusive with regards to the role of vitamin D deficiency in the development of autoimmunity and, specifically, SLE. Increased risk of SLE associates with polymorphisms of the VDR; the higher incidence of vitamin D deficiency in ANA-positive non-lupus subjects and in siblings of lupus patients (high-risk subjects) is in favour of a causal relationship, but this has not been confirmed yet.
4. Immune cells express VDRs ubiquitously. Overall, vitamin D up-regulates anti-inflammatory responses, a shift towards Treg and Th2, reduced B cell activation and Ig production (including anti-dsDNA), and enhanced tolerogenicity of dendritic cells. In experimental models of lupus, vitamin D supplementation can improve the disease.
5. Numerous observational studies have investigated the correlation between vitamin D levels and clinical/serological manifestations of lupus, with contrasting results. A negative relationship between vitamin D levels and disease activity, renal disease, CV risk factors and complications, fatigue, and anti-dsDNA titres has been described but not conclusively accepted.
6. Several interventional studies have tried to define the therapeutic value of vitamin D supplementation on disease activity, renal function, CV risk, fatigue, immunological profiles, and the IFN signature, once again drawing controversial conclusions. Further large clinical trials with well-defined therapeutic protocols and goals are warranted to shed light on this topic.

CONSENT FOR PUBLICATION

Not applicable.

Table 1. Relevant studies published over the last decade investigating the correlation between vitamin D levels and clinical and serological manifestations in SLE.
-Galil et al., 2017 [102]: 123 SLE patients and 100 controls (Egypt). Negative correlation between vitamin D and SLEDAI in the high disease activity group and in patients with lupus nephritis; negative correlation between 25(OH)D3 and IFN-α serum level/gene expression (stronger in patients with lupus nephritis).
Eloi et al., 2017 [103]: 199 SLE patients (Brazil). Negative correlation between vitamin D and SLEDAI.
Salman-Monte et al., 2016 [104]: 102 female SLE patients (Spain). Negative correlation between vitamin D insufficiency and fatigue; lower 25(OH)D3 levels associated with more oral corticosteroids.
Gao et al., 2016 [105]: 121 SLE patients and 150 controls (China). Severe vitamin D deficiency is prevalent in moderate/high disease activity (SLEDAI), but no correlation with organ damage (SDI).
Simioni et al., 2016 [106]: 153 SLE patients and 85 controls (Brazil). No correlation between vitamin D and SLEDAI; lower levels of vitamin D associate with leukopenia.
Kokic et al., 2016 [107]: 22 female SLE patients and 21 controls (Croatia). Negative correlation between vitamin D levels and IFN-γ.
McGhie et al., 2014 [117]: 75 patients with SLE (Jamaica). Negative correlation between vitamin D and BILAG score (trend).
Additional row fragments (study attribution lost): negative correlation between 25(OH)D3 and anti-dsDNA titre and plasma/gene expression of IFN-α; higher plasma IFN-α in treatment-naïve SLE patients compared to treated patients and controls; 25(OH)D3 negatively correlated with anti-dsDNA, anti-Sm, and IgG levels and positively correlated with complement levels; lower 25(OH)D3 levels associated with a higher prevalence of pericarditis, neuropsychiatric disease and deep vein thrombosis.
Prediction of Liver Enzyme Elevation Using Supervised Machine Learning in Patients With Rheumatoid Arthritis on Treatment with Methotrexate

Objective: The aim of this study is to develop a machine learning (ML) model to accurately predict liver enzyme elevation in rheumatoid arthritis (RA) patients on treatment with methotrexate (MTX) using electronic health record (EHR) data from a real-world RA cohort. Methods: Demographic, clinical, biochemical, and prescription information from 569 RA patients initiated on MTX were collected retrospectively. The primary outcome was liver transaminase elevation above the upper limit of normal (40 IU/L) following the initiation of MTX. The total dataset was randomly split into a training (80%) and test set (20%) and used to develop a random forest classifier model. The best model was selected after hyper-parameter tuning and fivefold cross-validation. Results: A total of 104 (18.2%) patients developed elevated transaminases while on MTX therapy. The best-performing predictive model had an accuracy/F1 score of 0.87. The top 10 predictive features were then used to create a limited-feature model that retained most of the predictive accuracy, with an accuracy/F1 score of 0.86. Baseline high-normal transaminase levels and higher lymphocyte and neutrophil blood count proportions were the strongest predictors of elevated transaminase levels after MTX therapy. Conclusion: Our proof-of-concept study suggests the possibility of building a well-performing ML model to predict liver transaminase elevation in RA patients being treated with MTX. Similar ML models could be used to identify "high-risk" patients and target them for early stratification.

Introduction

Methotrexate (MTX) is the most used disease-modifying anti-rheumatic drug (DMARD) in the treatment of rheumatoid arthritis (RA) [1]. Chronic use of this medication warrants close laboratory monitoring given its propensity for liver damage and myelosuppression [2]. MTX-associated hepatic dysfunction is a well-described adverse effect seen in up to a quarter of patients on long-term treatment [3]. The risk of hepatic dysfunction increases in patients with pre-existing liver damage, including non-alcoholic steatohepatitis (NASH), alcohol consumption, chronic viral hepatitis, and concurrent use of other hepatotoxic medications. Monitoring of therapy with frequent liver enzyme testing is recommended every two to three months for patients on a stable dose of MTX according to American College of Rheumatology (ACR) guidelines [4].

Recent advances in health care, coupled with the extensive use of electronic health records (EHRs), have led to the accumulation of large amounts of patient data. Approaches such as machine learning (ML) allow us to leverage these large data sets to predict outcomes and support clinical decision-making [5]. There are two main types of ML algorithms or methodologies. The first is supervised ML, which aims to predict a known or predetermined outcome using labelled input data. Commonly used supervised ML algorithms are support vector machines and random forest algorithms. The other type is called unsupervised ML, where there are no outputs to predict; instead, the aim is to find naturally occurring patterns or groupings within the unsorted data using massive statistical computing power. Clustering and dimensionality reduction are typical examples of unsupervised learning [6].
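To make the supervised/unsupervised distinction concrete, the short sketch below contrasts the two paradigms on synthetic data using scikit-learn. It is illustrative only and not part of the study's pipeline.

```python
# Supervised vs. unsupervised learning in scikit-learn; synthetic data only.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# Supervised: learn to predict the labelled outcome y from the inputs X.
clf = RandomForestClassifier(random_state=0).fit(X, y)
print("Training accuracy:", clf.score(X, y))

# Unsupervised: no labels; find structure within X alone.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
X_2d = PCA(n_components=2).fit_transform(X)  # dimensionality reduction
print("Cluster sizes:", [int((clusters == k).sum()) for k in (0, 1)])
```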
Over the past decade, ML has been applied to various aspects of healthcare, and the field of rheumatology has been no exception. ML models have shown promise in tasks ranging from automated detection of disease flares to predicting response to therapy [7-17]. One can note how ML has used diverse data sources to predict disease classification and genetic patterns and to answer research questions accurately in rheumatology. ML modeling depends not only on the dataset size but also on the nature of the dataset. Overlap between the target variable and the dataset categories can result in bias and lead to overfitting. An overfitted ML model performs well on the data it was developed on but generalizes poorly, leading to inaccurate real-world applicability.

There is, however, a paucity of literature showing the application of ML for the prediction of therapy-related adverse events in rheumatology. The development of such models would allow the identification of high-risk patients who can be potentially targeted for closer monitoring to prevent these adverse outcomes. Conversely, low-risk patients could avoid extra outpatient visits, thereby reducing costs.

RA is the most common autoimmune inflammatory arthritis, with a global age-standardized point prevalence of 247 per 100,000 population [18]. All clinical practice guidelines worldwide emphasize low-dose MTX as the cornerstone of treatment in RA [19]. Thus, developing a predictive model will help stratify and individualize long-term treatment for a sizable cohort of RA patients. To the best of our knowledge, no study has been published on predicting MTX-associated adverse events in RA using ML. With this background, we conducted this study using data from a real-world RA cohort. These data were then analyzed to develop a supervised ML model to predict the occurrence of liver enzyme elevation in RA patients on MTX for three months or longer.

Study design, participants, and settings

We conducted a retrospective cohort study, with ethical clearance obtained from the Institutional Ethics Committee (ECASM-AIMS-2023-507). Participant consent was waived as it was a retrospective analysis of routinely collected clinical data. We retrospectively reviewed the EHRs of all outpatient visits between April 2016 and September 2018 at the Department of Rheumatology of a quaternary care center in Kerala, India. We then included patients diagnosed with RA by a treating rheumatologist according to the 2010 ACR guidelines who were prescribed MTX. We also included seronegative patients with radiological features suggestive of RA. We used a step-up regimen at our center, with patients being started on low-dose weekly MTX with or without bridge steroids and intra-articular injections on the first visit unless there were any absolute contraindications. The initial doses ranged from 10 to 15 mg per week and were titrated upward to 20-25 mg per week, either as a single or a split dose, according to the disease activity assessment. Patients who did not respond to monotherapy were escalated to combination DMARD therapy. The choice of the second agent depended on many factors such as patient preferences, cost, comorbid conditions, and the discretion of the treating rheumatologist. Patients who received biologics were excluded from the cohort. This was done to prevent selection bias in the data modelling, as patients are often shifted to biological therapy when they cannot tolerate full-dose MTX monotherapy, MTX and DMARD combination therapy, or targeted synthetic DMARD (JAKi) combination therapy.
Data collection

From the EHRs, the following details were collected for the included participants: age, sex, the initial dose of MTX (mg/week), the maximum dose of MTX used (mg/week), and the duration of therapy (in years). In addition to the absolute doses, the initial and maximal doses were also grouped into <12.5 mg/week, 12.5 to <20 mg/week, and ≥20 mg/week for data analysis by the ML algorithm. The duration of therapy was also dichotomized into more than 1.5 years and less than 1.5 years to ensure the clinical applicability of the model and avoid any risk of overfitting. The baseline transaminase levels (serum glutamate pyruvate transaminase [SGPT] and serum glutamic-oxaloacetic transaminase [SGOT]) and serum creatinine levels were also noted. We also collected treatment details of any concomitant use of leflunomide, sulfasalazine, mycophenolate mofetil, and hydroxychloroquine, in combination or as a triple regimen. The DMARD prescriptions were classified according to the different levels of the step-up protocol that individual patients were exposed to; i.e., patients on MTX monotherapy who had a flare-up and were escalated to combination DMARD therapy were included in both groups. This was because patients who are escalated to a combination therapy are often de-escalated to monotherapy over time. Also, the following baseline hemogram values were collected: hemoglobin (gm/dL), total white cell count, neutrophil and lymphocyte percentage, and neutrophil-lymphocyte ratio. Details about the presence of fatty liver of moderate/high grade on ultrasound were also collected from the electronic health system if they were available before the initiation of MTX. The primary outcome variable was an elevation of SGOT or SGPT above the upper limit of the normal range at our reference laboratory (40 IU/L for both SGOT and SGPT). The tests were conducted using a Beckman-Coulter automated analyzer based on the spectrophotometry method in the central biochemistry laboratory of the quaternary center.

Statistical methods

All data from the electronic medical records were tabulated in MS Excel 2019 (Microsoft Corporation, Redmond, WA, USA) to perform descriptive analysis. Categorical data were presented as numbers and percentages. Continuous data were presented as means and standard deviations. Fisher's exact and Pearson's chi-square tests were used to test categorical data distribution, and the odds ratio with 95% confidence interval was reported. Student's t-test and Mann-Whitney U tests were used to compare continuous data.
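A minimal sketch of these comparisons using pandas and SciPy is shown below. The DataFrame `df` and its column names ("elevated_lft", "fatty_liver", "age") are hypothetical placeholders, not the study's actual dataset.

```python
# Hypothetical group comparisons; assumes a DataFrame `df` with a binary
# outcome "elevated_lft", a binary exposure "fatty_liver", and "age".
import numpy as np
import pandas as pd
from scipy import stats

# Categorical variable vs. outcome: Fisher's exact test plus odds ratio.
table = pd.crosstab(df["fatty_liver"], df["elevated_lft"]).to_numpy()
odds_ratio, p_fisher = stats.fisher_exact(table)

# Woolf 95% confidence interval for the odds ratio (assumes no zero cells).
se_log_or = np.sqrt((1.0 / table).sum())
ci_low, ci_high = np.exp(np.log(odds_ratio) + np.array([-1.96, 1.96]) * se_log_or)

# Continuous variable vs. outcome: t-test, or Mann-Whitney U if non-normal.
grp0 = df.loc[df["elevated_lft"] == 0, "age"]
grp1 = df.loc[df["elevated_lft"] == 1, "age"]
_, p_ttest = stats.ttest_ind(grp0, grp1)
_, p_mwu = stats.mannwhitneyu(grp0, grp1)

print(f"OR = {odds_ratio:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f}), p = {p_fisher:.3g}")
print(f"t-test p = {p_ttest:.3g}; Mann-Whitney p = {p_mwu:.3g}")
```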
Machine learning methodology

The data were imported into Python 3.0 using the JupyterLab (Project Jupyter, 2019) coding environment to apply the ML risk algorithms. The Scikit-learn Python library, specifically developed for data science and ML, was used for ML modeling. The development of hepatotoxicity was the target variable to be predicted. All the other 23 variables collected above were used as inputs. Missing values were considered absent/zero. This study cohort of RA patients (input) and the primary outcome (liver enzyme elevation) was split into training and validation cohorts. The training cohort was derived from a random sampling of 80% of the RA cohort, and the validation cohort comprised the remaining 20%. The training cohort data were imported into the random forest classifier algorithm. The random forest algorithm can be considered one of ML's representative algorithms and is known for its simplicity and effectiveness. It can be defined as a decision-tree-based ensemble classifier that produces the final classification by voting across many classification trees. Decision trees work by learning simple decision rules extracted from the data features. The deeper the tree, the more complex the decision rules and the closer the fit of the model [20].

We also used a Python tool known as RandomizedSearchCV for hyper-parameter tuning to find the optimal parameters (sklearn.model_selection.RandomizedSearchCV, 2019). The best model, developed using these 21 features, was assessed using the accuracy score, a classification report (with precision, recall, and F-score), and receiver operating characteristic (ROC) plotting, along with area under the curve (AUC) calculation. The feature importance of each variable in predicting liver enzyme elevation in the trained model was also calculated. Following this, the top 10 important features were selected, and a limited-features classifier model was trained using the same cohort, again split into training and validation sets (80% vs 20%). The performance of this limited-features model was also assessed using precision, recall, and F-score, in addition to ROC plotting with AUC analysis. Figure 1 graphically shows the study methodology used to develop these classifiers (full and limited features).

Results

Records of RA patients who visited the rheumatology outpatient clinic from September 1, 2016, to August 31, 2018, were examined for eligibility, and 569 patients were included for analysis. Among the included patients, 104 (18.2%) developed an elevation of liver enzymes while on MTX therapy. The baseline characteristics of the cohort stratified for the development of liver enzyme elevation are shown in Table 1. Using the Scikit-learn random forest classifier algorithm, an ML model was derived; the best model was selected after hyper-parameter tuning and fivefold cross-validation. A precision of 0.89, recall of 0.87, and an accuracy/F1 score of 0.87 were achieved. The feature importance of the input variables is shown in Table 2. The top 10 features and their scores are shown graphically in Figure 2. From the top 10 features shown in Figure 2, a limited-feature random forest classifier model was trained. A precision of 0.81, recall of 0.86, and an accuracy/F1 score of 0.86 were achieved in this limited-feature model. Figure 3 shows the AUC of the full-feature model versus the limited-feature model (0.658 vs 0.635).
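The workflow described in the methodology can be condensed into the following scikit-learn sketch. It mirrors the reported steps (80/20 split, random forest, RandomizedSearchCV with fivefold cross-validation, feature importances, top-10 limited-feature model), but the DataFrame `df`, the search grid, and the random seeds are placeholders rather than the study's exact configuration.

```python
# Sketch of the reported pipeline; `df` and column names are hypothetical.
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split, RandomizedSearchCV
from sklearn.metrics import classification_report, roc_auc_score

X = df.drop(columns=["elevated_lft"])  # the study used 23 input variables
y = df["elevated_lft"]                 # target: liver enzyme elevation (0/1)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.20, stratify=y, random_state=42)

# Hyper-parameter tuning with randomized search and fivefold CV.
param_dist = {
    "n_estimators": [100, 300, 500],
    "max_depth": [None, 5, 10, 20],
    "min_samples_leaf": [1, 2, 5],
}
search = RandomizedSearchCV(
    RandomForestClassifier(random_state=42),
    param_distributions=param_dist, n_iter=20, cv=5,
    scoring="f1", random_state=42)
search.fit(X_train, y_train)
model = search.best_estimator_

print(classification_report(y_test, model.predict(X_test)))
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))

# Rank features, then refit a limited model on the 10 most important ones.
ranked = sorted(zip(X.columns, model.feature_importances_),
                key=lambda kv: kv[1], reverse=True)
top10 = [name for name, _ in ranked[:10]]
limited = RandomForestClassifier(random_state=42).fit(X_train[top10], y_train)
print("Limited-model AUC:",
      roc_auc_score(y_test, limited.predict_proba(X_test[top10])[:, 1]))
```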
Discussion

Our study highlights the feasibility of developing an ML model to predict the development of liver enzyme elevation in RA patients on treatment with MTX. We reviewed the EHRs of 569 patients with RA and extracted their clinical, biochemical, and prescription information for analysis. These data were then used to train a random forest model to predict the risk of liver enzyme elevation, which achieved an accuracy score of 0.87. Our analysis showed that the attributes with the strongest influence on the development of hepatic dysfunction were white cell count along with the neutrophil and lymphocyte percentages, hemoglobin levels, duration of therapy, age, NLR, creatinine, and baseline SGOT and SGPT levels. The limited-feature model developed using the most important attributes also proved reliable, retaining much of the predictive accuracy.

As most of these parameters are readily available during a clinical visit, it is possible to apply these features at a patient's first visit to predict the risk of a therapy-related adverse event. The current ACR guidelines recommend monthly monitoring of SGOT and SGPT for the first three months and then every two to three months thereafter [5]. Using a prediction model, we could potentially stratify these patients based on the risk of therapy-related adverse events. Low-risk patients could be identified for less frequent testing and in-person visits, thus helping to cut costs. In contrast, high-risk patients could be identified for lower ceiling doses of MTX and planned for early tapering, thus minimizing the risk of liver damage.

Accurate risk prediction is a cornerstone of public health. Several risk prediction scores or tools have been developed to predict the risk of cardiovascular events and malignancies [21,22]. However, such individual risk assessment tools are lacking in autoimmune diseases.

Elevation of liver enzymes is the second most common adverse event noted in patients on treatment with MTX, with more than 20% reporting this event in a large meta-analysis of MTX monotherapy [23]. The risk of this iatrogenic adverse event is even higher in patients on combination DMARD therapy [24]. Other studies have also shown risk factors for developing MTX-related hepatic dysfunction similar to those identified by our full-feature model. Baseline elevation of SGPT was identified as the strongest predictor of SGPT elevations during MTX therapy in a recent study of 213 RA patients followed up for a mean duration of 4.3 years [25].

Another important risk factor for the elevation of liver enzymes is the presence of non-alcoholic fatty liver disease (NAFLD) [19]. We excluded all ultrasounds conducted after the initiation of MTX to avoid overfitting the model, as patients with elevated liver enzymes are more likely to have undergone liver biopsies. Studies have shown the role of NAFLD screening in predicting MTX-associated hepatic dysfunction [26]. As with all ML models, the greater the number of data points analyzed, the greater the accuracy of the model [27]. Therefore, inputting additional data (NAFLD status, genomic data) in future studies could lead to a more accurate model. Another limitation of our study was the population demographic sampled; as the model was developed using only patients of South Asian (Indian) descent, transferring this model to other populations may be difficult.
Conclusions

We successfully developed a highly accurate ML model to predict the risk of liver enzyme elevation in RA patients on treatment with MTX. Similar models could help individualize treatment approaches and minimize therapy-related adverse events. Further prospective studies with expanded analysis, including pharmacogenomic and other risk factors (such as NAFLD) and involving larger and more diverse populations, could develop more robust and clinically useful models.

Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.

FIGURE 3: Comparison of ROC curves of liver enzyme elevation machine learning models: (A) complete feature vs. (B) limited feature. ROC, receiver operating characteristic.

TABLE 1: MTX-RA cohort stratified by the incidence of hepatic dysfunction (N = 569).
miR-212 and miR-132 Are Downregulated in Neurally Derived Plasma Exosomes of Alzheimer's Patients

It was recently discovered that brain cells release extracellular vesicles (EVs) which can pass from brain into blood. These findings raise the possibility that brain-derived EVs present in blood can be used to monitor disease processes occurring in the cerebrum. Since the levels of certain micro-RNAs (miRNAs) have been reported to be altered in Alzheimer's disease (AD) brain, we sought to assess miRNA dysregulation in AD brain tissue and to determine if these changes were reflected in neural EVs isolated from blood of subjects with AD. To this end, we employed high-content miRNA arrays to search for differences in miRNAs in RNA pools from brain tissue of AD (n = 5), high pathological control (HPC) (n = 5), or cognitively intact pathology-free controls (n = 5). Twelve miRNAs were altered by >1.5-fold in AD compared to controls, and six of these were also changed compared to HPCs. Analysis of hits in brain extracts from 11 AD, 7 HPC and 9 control subjects revealed a similar fold difference in these six miRNAs, with three showing statistically significant group differences and one showing a strong trend toward group differences. Thereafter, we focused on the four miRNAs that showed group differences and measured their content in neurally derived blood EVs isolated from 63 subjects: 16 patients with early stage dementia and a CSF Aβ42 + tau profile consistent with AD, 16 individuals with mild cognitive impairment (MCI) and an AD CSF profile, and 31 cognitively intact controls with normal CSF Aβ42 + tau levels. ROC analysis indicated that measurement of miR-132-3p in neurally derived plasma EVs showed good sensitivity and specificity to diagnose AD, but did not effectively separate individuals with AD-MCI from controls. Moreover, when we measured the levels of a related miRNA, miR-212, we found that this miRNA was also decreased in neural EVs from AD patients compared to controls. Our results suggest that measurement of miR-132 and miR-212 in neural EVs should be further investigated as a diagnostic aid for AD and as a potential theragnostic.
INTRODUCTION

Alzheimer's disease is a devastating disorder for which there is no cure or effective treatment. Symptom onset is insidious, and even in sophisticated centers clinical diagnosis is imperfect (Salloway et al., 2014; Monsell et al., 2015). Advances in brain imaging and the development of robust immunoassays to measure tau and amyloid β-protein (Aβ) in cerebrospinal fluid (CSF) have greatly aided diagnosis (Blennow et al., 2012). Specifically, measurement of tau and Aβ in CSF, or quantitation of amyloid or tangle pathology by PET imaging, can be used to identify mild cognitive impairment (MCI), a frequent precursor of AD (Jack et al., 2013; Bao et al., 2017), and use of these markers is now common in clinical research (Blennow et al., 2012; Jack et al., 2013). But PET imaging is expensive and is restricted to use in certain geographies. Assessment of markers in CSF is more amenable to general use, but CSF sampling remains unpopular with patients. Thus, there is a pressing need for less costly and intrusive, and more widely available, biomarkers that can replace or supplement current CSF and PET markers (Bateman et al., 2019). Measurement of a blood-based analyte would be ideal since blood collection is widely accepted by patients and can be done almost anywhere. Unlike CSF, the contents of blood are influenced by many organs, and therefore changes in blood analytes are often not sensitive to minor changes that occur in brain. For instance, while the levels of Aβ42 in CSF change early in AD, measurement of Aβ42 in plasma has not proved as useful (Janelidze et al., 2016).

The discovery that brain cells release extracellular vesicles (EVs), a portion of which enter the bloodstream, offers the potential of monitoring changes occurring in the brain by isolating EVs from venous blood. EVs are small packages of cytosol encapsulated by a lipid bilayer and are released by all cells, including neurons and glia (Fruhbeis et al., 2012; Raposo and Stoorvogel, 2013; Paolicelli et al., 2019). Exosome is the term most frequently applied to EVs, but there are many classes of EVs, primarily defined by their cellular origin. Exosomes arise through the endolysosomal pathway and are specifically formed by inward budding of multivesicular bodies to produce intraluminal vesicles. After maturation, multivesicular bodies can fuse with the plasma membrane and exosomes are released (Pan et al., 1985; Johnstone et al., 1987). The method used here employs a proprietary reagent, ExoQuick, to precipitate total "exosomes" from plasma, followed by enrichment of material of "neural" origin using an antibody to the neural cell adhesion molecule L1CAM (Fiandaca et al., 2015). In two independent studies, analysis of samples prepared using this protocol revealed significant increases in pT181-tau, pS396-tau and Aβ1-42 in patients with AD vs. samples from age-matched, cognitively intact controls (Fiandaca et al., 2015; Winston et al., 2016).
Other protein cargoes in neural exosomes have also been found to change in AD (Goetzl et al., 2019). However, the ability to discriminate AD from AD-MCI and controls required a statistical approach that constrains comparison across cohorts analyzed on different occasions. Furthermore, a more recent publication employing a similar method failed to detect elevation of pT181-tau or Aβ1-42 in neural exosomes from AD subjects (Guix et al., 2018).

Micro-RNAs (miRNAs) are small non-coding RNA molecules which base-pair with complementary sequences within mRNA to decrease post-transcriptional gene expression (O'Brien et al., 2018), and accumulating evidence indicates that the expression of certain miRNAs is altered in AD (Lau et al., 2013; Patrick et al., 2017; Pichler et al., 2017; El Fatimy et al., 2018; Swarbrick et al., 2019). However, it is unknown whether miRNA dysregulation in the brain is reflected in neural exosomes. To this end, we performed an unbiased analysis of miRNAs in human brain tissue to inform miRNA candidates for targeted analysis in neurally derived plasma exosomes. First, we measured miRNAs in pooled RNA isolated from five individuals with AD, five cognitively intact high pathology controls (HPCs), and five pathology-free controls. HPC refers to subjects with sufficient amyloid plaques and neurofibrillary tangles to be diagnosed as AD, but who at the time of death had no evidence of cognitive impairment. Recently, it has been suggested to consider such individuals as AD stage 1 and 2 (Jack et al., 2018). Results were highly similar across the pooled and individual brain samples. In a total of 27 individual brain samples (9 controls, 7 HPC and 11 AD), miR-182-5p, miR-591, miR-32-3p, and miR-132-3p were dysregulated in AD vs. both controls and HPC.

Armed with this information, we asked whether the miRNAs changed in AD brain were also altered in neural exosomes isolated from plasma. Since diagnosis of AD based on clinical criteria alone is uncertain (Monsell et al., 2015), we were careful to use specimens from individuals whose clinical diagnosis was confirmed by the best current biomarkers: CSF Aβ42 and tau. Using plasma from patients that conformed to these criteria, we prepared neural exosomes from 31 cognitively intact biomarker-negative controls, 16 CSF biomarker-positive MCI, and 16 CSF biomarker-positive AD subjects. RNA was extracted from neural exosomes, reverse transcribed, and analyzed by qRT-PCR for the four miRNAs most altered in AD brains. miR-132 was decreased by ∼9-fold in AD exosomes vs. controls, and ∼5.4-fold in AD vs. AD-MCI. Since miRNAs within the same cluster are often co-regulated, we also measured miR-212, a miRNA located in tandem with miR-132. Strikingly, the levels of miR-212 were reduced fourfold in AD samples compared to controls. These results suggest that concerted dysregulation of miR-132 and miR-212 occurs in AD and that measuring the levels of these miRNAs in neurally derived exosomes may offer a window on changes occurring in AD brain. Our results in neural exosomes are consistent with prior studies that detected down-regulation of miR-132 and miR-212 in AD brain (Pichler et al., 2017) and down-regulation of miR-132 in blood (Denk et al., 2018).

Brain Donors

Human specimens were obtained from the Rush Alzheimer's Disease Center, Rush University Medical Center, and used in accordance with the Partners Institutional Review Board (Protocol: Walsh BWH 2011).
Samples of frozen frontal cortex were from the Religious Orders Study (ROS), a longitudinal clinical-pathological cohort study of aging and Alzheimer's disease. Participants from religious communities were enrolled at a time when they were free of known dementia. The study was approved by the Institutional Review Board of Rush University Medical Center. All participants signed an informed consent, Anatomic Gift Act, and a repository consent to allow their data and biospecimens to be shared for research. They were followed over time until brain donation. Overall follow-up and autopsy rates were about 95%. Each participant underwent uniform structured annual cognitive and clinical evaluations. A diagnosis of AD dementia required evidence of meaningful cognitive decline in two domains of cognition, one of which was episodic memory. Clinical summary diagnostic opinion was made post-mortem based on all available clinical information by a neurologist blinded to pathologic data. A neuropathologic diagnosis of high (1), intermediate (2), low (3), or no AD (4) was based on the modified NIA-Reagan diagnosis of AD. This assessment relies on the severity and distribution of both neurofibrillary tangles and neuritic plaques (Bennett et al., 2006). Individuals were categorized as controls, high pathology controls (HPC), or AD based on combined clinical and pathological criteria (Table 1). Individuals with an NIA-Reagan score of 1 or 2 and a clinical diagnosis of AD dementia were defined as AD (n = 11), whereas patients with an NIA-Reagan score of 1 or 2 and no evidence of dementia were designated HPC (n = 8). Patients in the control group had an NIA-Reagan score of 3 and no evidence of dementia.

Study Participants, CSF and Blood Collection

Specimens were from research participants in the UCSD Shiley-Marcos Alzheimer's Disease Research Center, collected and used in accordance with IRB approval (Table 2). Each participant donated both EDTA plasma and CSF, which were obtained using standardized protocols. CSF was collected into polypropylene tubes, centrifuged at 1,500 × g for 10 min, and aliquoted into polypropylene storage tubes. Blood was drawn on the same day as lumbar puncture, and plasma was isolated and aliquoted. On the day of analysis, samples were thawed at room temperature and used for subsequent exosome isolation. 97% of the sample was recovered and subjected to centrifugation at 100,000 × g and 4 °C for 70 min to pellet exosomes.

RNA Isolation and miRNA Analysis From Brain Tissue

Gray matter from frozen cortex was dissected on dry ice. Total RNA was extracted from ∼20 mg gray matter using a miRNeasy Advanced Kit (Qiagen, Louisville, KY, United States) according to the manufacturer's guidance. Briefly, samples were suspended in 200 µL ice-cold RIPA buffer followed by addition of 120 µL kit-provided RPL buffer. Tissue was homogenized by gently pipetting samples up and down. Forty µL kit-provided RPP buffer was added to samples and vortexed for ∼30 s to facilitate precipitation of proteins. Samples were then centrifuged at 12,000 × g for 3 min to pellet the precipitate and debris, and approximately 230 µL of supernatant was transferred to a new 1.5 mL tube. An equal volume of isopropanol was added and mixed by pipetting before transferring the entire sample to an RNeasy UCP MinElute column. Columns containing sample were centrifuged at 12,000 × g for 1 min. Flow-through was discarded and columns were washed according to the manufacturer's protocol. RNA was eluted with two rounds of 20 µL RNase-free water.
RNA yield and quality were quantified on a NanoDrop. Fifty ng/µL aliquots of RNA were prepared and stored at −20°C. One hundred and fifty ng of RNA was used for reverse transcription (RT) using a miScript II RT kit (Qiagen, Louisville, KY, United States), where samples were incubated at 37°C for 60 min followed by inactivation at 95°C for 5 min. RT efficiency was determined by measuring levels of miRTC, an RT control.

RNA Isolation and miRNA Analysis From iN Cells and iN-Derived Exosomes
iN cells were harvested and centrifuged at 300 × g for 10 min to pellet cells. Cell pellets were washed with PBS and lysed with RPL buffer. Exosomes from iN CM were isolated by differential centrifugation as described above, washed with PBS, and lysed in RPL buffer. Subsequent steps were followed identically to the isolation of miRNA from brain tissue.

Isolation of Exosomes and Enrichment of Neuronal Exosomes
To verify that our methods could reliably detect miRNA expression in an exosomal fraction of human plasma, and to optimize preparative and analytical methods prior to processing precious clinical samples, we first tested experimental conditions using human plasma (each in technical duplicate) from two healthy volunteers. The method to isolate neuron-derived exosomes from plasma has been validated and described in detail before (Guix et al., 2018), and is a slightly modified version of the procedure pioneered by the Goetzl/Kapogiannis group. Five µL thrombin (System Bioscience, Palo Alto, CA, United States) was added to 500 µL aliquots of plasma to induce clot formation and allow the removal of fibrin and related proteins. Reactions were mixed by inversion and incubated for 30 min at room temperature before dilution with 495 µL Ca²⁺- and Mg²⁺-free Dulbecco's Phosphate-Buffered Saline (DPBS) (Sigma-Aldrich, St. Louis, MO, United States) containing 3× phosphatase (Thermo Fisher, Carlsbad, CA, United States) and protease (Roche, Branchburg, NJ, United States) inhibitors. Thereafter, the samples were centrifuged at 6,000 × g for 20 min at 4°C and the supernatant transferred to a new tube. Next, 252 µL ExoQuick (System Bioscience, Palo Alto, CA, United States) was added to the supernatants; samples were mixed by inversion and left to stand at 4°C for 1 h. Vesicles present in the plasma were recovered by centrifugation at 1,500 × g for 20 min at 4°C and resuspended by vortexing in 500 µL MilliQ water containing 3× phosphatase and protease inhibitors. ExoQuick pellets were then mixed overnight at 4°C on a vertical rotating mixer. To isolate neuronal exosomes from the suspensions of total plasma exosomes, 4 µg biotinylated anti-L1CAM antibody (eBioscience, San Diego, CA, United States) in 42 µL 3% bovine serum albumin in DPBS (BSA/DPBS) was added to the resuspended vesicles and incubated on a rotating mixer for 1 h at 4°C. Thereafter, 15 µL of pre-washed Streptavidin-Plus UltraLink Resin (Thermo Fisher, Danvers, MA, United States) in 25 µL 3% BSA/DPBS was added to each sample and incubated for 4 h at 4°C with rotation. Neuronal exosomes bound to the antibody/resin complex were recovered by centrifugation at 200 × g for 10 min at 4°C, and washed once with 3% BSA/DPBS before elution in 200 µL 0.1 M glycine (pH 3.0). Resin was removed by centrifugation at 4,500 × g for 5 min at 4°C.
Thereafter, 195 µL supernatant was neutralized with 15 µL of 1 M Tris-HCl (pH 8.0), and exosomes were lysed by adding 360 µL M-PER (Thermo Fisher, Carlsbad, CA, United States) and 25 µL 3% BSA/DPBS containing 1× phosphatase and protease inhibitors, followed by 2 freeze-thaw cycles before downstream analysis. Bioanalyzer analysis showed that samples were enriched with small RNA species following elution from RNeasy UCP MinElute columns. We cannot completely rule out that some plasma RNA outside of exosomes might be isolated along with exosomal RNA in our protocol. However, this seems unlikely, since free-floating RNA is rapidly degraded in plasma, and RNA protected in RNA-binding proteins or lipids would not allow isolation by L1CAM immunoprecipitation.

miRNA Quantitative PCR Array
The miRNA expression profile of RNA extracted from brain tissue was analyzed using a miScript miRNA PCR Array Human miRBase Profiler HC Plate 1, MIHS-3401Z (Qiagen, Hilden, Germany). Each array contains 372 unique miRNA assays, 3 snoRNA and 3 snRNA housekeeping genes, duplicate RT primer assays to evaluate RT efficiency, and duplicate PCR primer assays to evaluate the efficiency of the qPCR reaction (for a full list of miRNAs detectable using miScript arrays see: https://b2b.qiagen.com/~/media/genetable/mi/hs/34/mihs-3401z). Six µL RNA (initial concentration = 50 ng/µL) was prepared in 20 µL reverse-transcription reactions (final concentration = 15 ng/µL) using miScript HiSpec Buffer, where mature miRNAs are polyadenylated by poly(A) polymerase and reverse transcribed into cDNA using oligo-dT primers. The oligo-dT primers have a 3′ degenerate anchor and a universal tag sequence on the 5′ end, allowing amplification of mature miRNA in real-time PCR. Each reverse-transcription reaction was further diluted with 90 µL RNase-free water (4.5-fold dilution) before being used as template for real-time PCR analysis on the miScript miRNA HC PCR Array. The array contained miRNA-specific miScript primers, the miScript SYBR Green kit with the miScript universal primer (reverse primer), and QuantiTect SYBR Green PCR Master Mix. miRNA cDNA was quantified using a ViiA 7 Real-Time PCR System (Applied Biosystems, Waltham, MA, United States). The thermal cycling protocol was as follows: 95°C for 10 min (PCR activation step), then 45 amplification cycles of 94°C for 15 s (denaturation), 55°C for 30 s (annealing), and 70°C for 30 s (extension), followed by fluorescence data collection. A no-template control (NTC) of RNase-free water was co-purified and profiled like the samples to measure background. We used the 3 snoRNAs and 3 snRNAs in addition to miR-16 as endogenous normalization controls in the arrays. MicroRNA expression was quantified as delta Ct (dCt) values, where Ct = threshold cycle and dCt = Ct(target miRNA) − average Ct of the 7 normalization controls. Relative miRNA expression was calculated as ddCt (AD vs. CTRL) = mean dCt(AD) − mean dCt(CTRL), or ddCt (AD vs. HPC) = mean dCt(AD) − mean dCt(HPC). Higher values indicate higher expression. Signals with |ddCt| ≥ 0.58, which corresponds to a fold change of ≥1.5, were considered differentially regulated. The assay showed good technical reproducibility when repeated three times on three different days (Supplementary Figure S2).

Quantitative PCR of Individual miRNAs
qPCR of individual miRNAs was carried out as described for the miRNA array with the following modification: cDNA was diluted 1:100 with RNase-free water in primer-specific qPCR for each miR.
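As a concrete illustration of the normalization arithmetic described in the array section above, the Python sketch below computes dCt against the mean of the seven normalization controls, a ddCt between group means, and the corresponding fold change, flagging hits at the |ddCt| ≥ 0.58 threshold. All Ct values and miRNA names here are placeholders, not the study's data.

```python
import numpy as np

# Hypothetical Ct values (names and numbers are placeholders, not study data).
ct_targets = {"miR-132-3p": 26.1, "miR-182-5p": 29.4}
ct_norm_controls = [24.0, 24.5, 23.8, 25.1, 24.9, 24.2, 24.6]  # 3 snoRNAs, 3 snRNAs, miR-16

# dCt = Ct(target miRNA) - mean Ct of the 7 normalization controls
dct = {m: ct - np.mean(ct_norm_controls) for m, ct in ct_targets.items()}

# ddCt between group means, e.g., AD vs. control (hypothetical group means).
ddct = 1.9 - 0.8  # mean dCt(AD) - mean dCt(CTRL)
fold_change = 2 ** abs(ddct)

# The paper's threshold: |ddCt| >= 0.58 corresponds to a fold change >= 1.5.
is_hit = abs(ddct) >= 0.58
print(dct, f"ddCt={ddct:.2f}", f"FC={fold_change:.2f}", f"hit={is_hit}")
```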
For qPCR validation in brains, miR-16 and SNORD95 were used as endogenous normalization controls, whereas miR-16 alone was used as the normalization control for qPCR of miRNAs in neural exosomes. miR-16 has been evaluated as an endogenous normalization control in plasma, and similar plasma levels were found in AD and controls (Bhatnagar et al., 2014). Moreover, several previous studies found miR-16 levels to be consistent across AD and control brains (Hebert et al., 2008; Banzhaf-Strathmann et al., 2014). A list of all primers used for qRT-PCR is presented in Supplementary Table S1.

Statistics
Statistical analyses were carried out using GraphPad Prism, version 8 (La Jolla, CA, United States). The Shapiro-Wilk test was used to determine normal distribution of data. Significance was determined by one-way ANOVA followed by Tukey's post hoc test for normally distributed data; otherwise, the Kruskal-Wallis test followed by Dunn's post hoc test was used (a minimal sketch of this decision rule is given after the results paragraph below). Mean age of brain and plasma donors was highly similar between diagnostic groups, and the age distribution did not differ significantly. Thus, we did not control for age in our analyses. P-values are reported without correction for multiple testing because of the exploratory nature of this study. The significance threshold was set to a two-sided p < 0.05.

RESULTS
miRNAs Are Altered in AD Brains
To determine whether miRNAs are dysregulated in AD brain, we pooled RNA isolated from the brains of five individuals who died with AD, five individuals who had AD pathology but were without cognitive impairment at the time of death, and five individuals who were cognitively normal and free of AD pathology (Table 1).

FIGURE 1 | miRNAs are differentially expressed in AD compared to control and HPC brains. (A) Workflow of miRNA candidate discovery. RNA was isolated from cortical brain tissue of individuals who were cognitively intact (n = 5), AD (n = 5) or HPC (n = 5) (see Table 1 for demographic data). RNA from the 5 control, 5 AD or 5 HPC brains was pooled and used as template for cDNA synthesis. cDNA from pooled control, AD or HPC samples was used as template for qRT-PCR. (B) Comparative analysis of profiled miRNAs between disease groups. miRNAs that are upregulated (red) and downregulated (blue) by >2-fold are indicated. Each point represents log10(2^−ΔCt) average values from each cDNA pool run in triplicate (3 plates). (C) miRNAs that were significantly altered by >2-fold. A total of 16 miRNAs were dysregulated in AD brains compared to control or HPC brains. Six of those miRNAs were consistently altered specifically in AD brains (miR-608, -219a-5p, -1304-5p, -182-5p, -566, -591). Six additional miRNAs were upregulated >2-fold in AD brains compared to control, but <2-fold compared to HPC brains (miR-32-5p, -519e-5p, -132-3p, -212-3p, -662, -138-3p). Red text = upregulated miRNAs; blue text = downregulated miRNAs.

RNA was extracted from cortical gray matter, pooled, reverse transcribed into cDNA, and the cDNA was used for miRNA assays (Figure 1A). Since it is often difficult to distinguish miRNAs that are altered due to disease from those altered in normal aging, comparative analysis of profiled miRNAs was performed between AD brains relative to control, and relative to HPC. Of the 372 miRNAs that were measured, 12 miRNAs were changed >2-fold in AD brains compared to controls (4 miRNAs were up- and 8 miRNAs were downregulated, respectively) (Table 3 and Figure 1B).
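As referenced in the Statistics section above, the test-selection logic (Shapiro-Wilk normality check, then a parametric or non-parametric route) can be sketched as follows. This is a minimal illustration with made-up dCt values, using scipy and the scikit-posthocs package; it is not the authors' Prism workflow.

```python
from scipy import stats
import scikit_posthocs as sp  # provides Dunn's post hoc test
import pandas as pd

# Hypothetical dCt values per diagnostic group (illustrative only).
groups = {
    "control": [0.8, 1.1, 0.9, 1.3, 1.0],
    "HPC":     [1.0, 1.4, 1.2, 0.9, 1.1],
    "AD":      [1.9, 2.2, 1.7, 2.4, 2.0],
}

# Shapiro-Wilk on each group: all normal -> parametric route.
all_normal = all(stats.shapiro(v).pvalue > 0.05 for v in groups.values())

if all_normal:
    stat, p = stats.f_oneway(*groups.values())   # one-way ANOVA
    posthoc = stats.tukey_hsd(*groups.values())  # Tukey's HSD (recent SciPy)
else:
    stat, p = stats.kruskal(*groups.values())    # Kruskal-Wallis
    df = pd.DataFrame([(g, x) for g, xs in groups.items() for x in xs],
                      columns=["group", "value"])
    posthoc = sp.posthoc_dunn(df, val_col="value", group_col="group")

print(f"omnibus p = {p:.3f} (two-sided threshold 0.05)")
```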
Compared to HPC brains, 10 miRNAs were changed >2-fold (4 up- and 6 downregulated, respectively) (Table 3 and Figure 1B). Six miRNAs were upregulated (miR-608, miR-219a-5p) or downregulated (miR-566, -182-5p, -591, -1304-5p) specifically in AD brains vs. both HPCs and controls (Figure 1C and Table 3). Compared to controls, 8 miRNAs were increased in HPC brains and no miRNAs were decreased (Figure 1B, right panel). To assess the robustness of the findings obtained using miRNA array analysis, we used miR-specific qRT-PCR to measure levels of the 12 miRNAs that were found to be most changed in our array experiments. In our initial qRT-PCR we used exactly the same pools as employed for the array analysis. The changes detected by array analysis of pooled brain RNA (Table 3) were largely confirmed by qRT-PCR analysis of individual miRs (Table 4, columns 1 and 2). Nine of 12 miRs showed the same directional and comparable magnitude changes by both arrays and qRT-PCR, but miR-1304, miR-138-1-3p, and miR-519e-5p changed in opposite directions (compare Tables 3, 4; a toy sketch of this direction-concordance check is given at the end of this passage). Thereafter, we undertook qRT-PCR analysis of RNA preparations from each of the 15 individual brains used for the initial pools. When values were averaged for each disease group (i.e., for the 5 AD, the 5 HPC and the 5 controls), the directions of change detected by qRT-PCR and array analysis were in complete agreement (compare Tables 3, 4, columns 3 and 4). We then went on to use qRT-PCR to measure the levels of the same 12 miRs using RNA isolated from a total of 27 brains (11 AD, 7 HPC, and 9 cognitively intact controls). Here the results were similar to those obtained using array analysis of pooled samples (Table 3) and of the 15 individual samples (Table 4, compare columns 3 and 4 vs. columns 5 and 6). Two of these 12 miRNAs (miR-182-5p, miR-32-5p) were significantly changed from controls. miR-132-3p was significantly different between disease groups (p < 0.05), but did not reach statistical significance in pair-wise comparisons (control vs. AD, p = 0.130; HPC vs. AD, p = 0.094). miR-219a-5p showed a strong trend for change between disease groups (p = 0.051) (Figure 2). It is important to note that the level of miR-9, a neuron-specific miR, was unchanged between disease groups (p = 0.769). The latter finding indicates that dysregulation of miR-182-5p, miR-32-5p, and miR-132-3p is disease-specific.

miRNAs Can Be Detected in Neural Exosomes Derived From Blood
To determine whether miRNAs can be reliably detected in neural exosomes, we initially measured reference miRNAs in exosomes isolated from human iPSC-derived cortical neurons (iNs) (Guix et al., 2018). Ubiquitously expressed miR-16 (Chevillet et al., 2014) was detected with cycle threshold (Ct) values of 27 and 31 in iN cell lysates and their exosomes, respectively. Neuron-specific miR-9 (Coolen et al., 2013) was readily detected in both iN cell lysates (Ct = 25) and exosomes (Ct = 32), whereas miR-451a, a peripherally expressed miRNA, was barely detectable, with Ct values near the upper limit of reliable quantitation. These data are consistent with the relative enrichment of neuronal miRs in neurons and neuron-derived exosomes, and support the use of miR-9 and miR-451a as markers to assess the neuronal origin of exosomes (Figure 3A). Next, we examined whether we could discern a difference in the levels of miR-9 and miR-451a in total plasma exosomes vs. neural exosomes isolated from plasma using L1CAM (Figure 4B). Exosomes were isolated from the plasma of two healthy donors.
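As flagged above, a simple way to express the cross-method agreement between array and qRT-PCR results is to compare the sign of the fold change for each miRNA. The pandas sketch below uses random hypothetical ddCt values, not the paper's data, and simply counts how many of 12 miRNAs change in the same direction by both methods.

```python
import pandas as pd
import numpy as np

# Hypothetical ddCt values for 12 candidate miRNAs (not the paper's data).
df = pd.DataFrame({
    "miRNA": [f"miR-{i}" for i in range(1, 13)],
    "ddct_array": np.random.default_rng(0).normal(0, 1, 12),
    "ddct_qpcr":  np.random.default_rng(1).normal(0, 1, 12),
})

# Same direction = same sign of ddCt in both methods.
df["concordant"] = np.sign(df["ddct_array"]) == np.sign(df["ddct_qpcr"])
print(f"{df['concordant'].sum()} of {len(df)} miRNAs agree in direction")
```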
RNA was extracted from total ExoQuick pellets or L1CAM-precipitated neural exosomes and used for qPCR (Figure 3B). miR-9 was low in total exosomes (Ct = 34-37), while miR-451a was higher (Ct = 26-31). In contrast, L1CAM-isolated exosomes had high levels of miR-9 (Ct = 27-33), suggesting the L1CAM IP effectively enriched for neural exosomes (Figure 3C). The fact that we observed comparable levels of miR-451a in both total and L1CAM exosomes suggests our neural exosome preparation contained both neural and peripheral exosomes. Values for the house-keeping miR-16 were similar (Ct = 30-33) in both L1CAM and total exosomes. Notably, immunoprecipitation with the non-specific mAb, 46-4, did not lead to an enrichment of the neural miR-9. Together, these results confirm that measurement of miR-9 can be used to assess neural enrichment.

Levels of miR-132 and miR-212 Are Decreased in Neural Exosomes in Alzheimer's Disease
Diagnosis of AD based on clinical criteria alone is imperfect: typically 20-30% of individuals diagnosed as having AD have other neurological disorders, and considerable numbers of individuals that have AD are misdiagnosed as having other neurologic disorders (Monsell et al., 2015).

FIGURE 2 | miRNA validation in AD brain tissue by qRT-PCR. miRNA candidates were measured by miRNA-specific qRT-PCR. (A) Neuron-specific miR-9 levels, which were similar between disease groups in array analysis, were also unchanged by qRT-PCR [χ²(2) = 0.5263, p = 0.769]. Two of the original hits, (B) miR-182-5p (**p = 0.009) and (C) miR-32-5p (control vs. HPC, *p = 0.049; control vs. AD, *p = 0.025), identified by miRNA arrays were validated by qRT-PCR. (D) miR-132-3p was significantly changed between disease groups [χ²(2) = 6.156, p = 0.046], but did not reach statistical significance in pair-wise comparisons. (E) miR-219a-5p showed a strong trend for change between the three disease groups [χ²(2) = 5.906, p = 0.051]. (F) miR-212-3p levels were not significantly changed between disease groups. Fold change was determined by 2^−ΔCt relative to miR-16. Significance was determined by Kruskal-Wallis test followed by Dunn's post hoc test.

To avoid the use of samples with uncertain diagnosis, we employed samples from individuals whose clinical diagnosis was confirmed by CSF Aβ42 and tau. Using these criteria, 63 subjects from the UCSD Shiley-Marcos Alzheimer's Disease Research Center were identified (31 cognitively intact biomarker-negative controls, 16 CSF biomarker-positive MCI, and 16 CSF biomarker-positive AD) and their plasma was used to isolate neural exosomes. RNA was extracted from neural exosomes, reverse transcribed, and analyzed by qRT-PCR for five of the 12 miRNAs found to be altered in AD brains, plus two miRNAs used to assess neuronal origin (miR-9 and miR-451a). The five miRNAs analyzed (miR-132-3p, miR-182-5p, miR-32-5p, miR-219a-5p, and miR-591) were chosen because they were consistently altered in AD vs. control brain by >1.5-fold (Tables 3, 4). In accord with our initial experiments using L1CAM-isolated exosomes (Figure 3), neuron-specific miR-9 was readily detected in all EV preps (Ct ≤ 35) and at higher levels than the peripherally derived miR-451a (Ct ≤ 40). Similar to brain samples, fold-changes of miR-9 and miR-451a were comparable between disease groups when normalized to miR-16 (Figures 4A,B). miR-219a-5p could not be reliably detected in neural exosomes, and miR-32-5p, miR-182-5p, and miR-591 were unchanged between disease groups (Supplementary Figure S3).
In contrast, miR-132 was significantly decreased (Figure 4C). miR-132, a miRNA that was regulated between AD, HPC, and control brains in our study, as well as in prior studies (Wong et al., 2013; Salta et al., 2016; Pichler et al., 2017), was decreased by ∼9-fold (log2 fold decrease of 3.15) in plasma-derived neural exosomes in AD vs. controls, and ∼5.4-fold in AD vs. AD-MCI (Figure 4C). Based on this result and the fact that miRNAs within the same cluster are often co-regulated, we sought to test whether miR-212, a miRNA in tandem with miR-132, was also altered in AD. Strikingly, the levels of miR-212 were reduced 4-fold in AD samples compared to controls (Figure 4D). Interestingly, higher CSF tau/Aβ1-42 levels showed a modest, but statistically significant, association with lower levels of both miR-132 and miR-212 in plasma neural exosomes. Collectively, these results suggest that concerted dysregulation of miR-132 and miR-212 occurs in AD and that measuring the levels of these miRNAs in neurally derived exosomes may offer a window on changes occurring in AD brain. Receiver operating characteristic (ROC) curves are a common tool used to assess the diagnostic utility of novel biomarkers (a minimal worked sketch of this type of analysis follows after the Discussion opening below). ROC analysis (Figures 5A,B) revealed that miR-132 levels separated controls from AD-MCI with an AUC of 0.58 (95% CI: 0.38-0.78), and controls from AD dementia with an AUC of 0.77 (95% CI: 0.61-0.93). miR-212 showed better discrimination than miR-132 between both AD-MCI and controls, and AD and controls (Figure 5). ROC analysis of miR-212 levels separated controls from AD-MCI with an AUC of 0.68 (95% CI: 0.5-0.86), and controls from AD dementia with an AUC of 0.84 (95% CI: 0.72-0.96) (Figures 5C,D). miR-212 achieves a sensitivity (the ability to predict AD cases) of 92.2% (95% CI: 68.5-99.6%) and a specificity (the ability to exclude controls) of 69.0% (95% CI: 50.8-82.7%) at the best cut-off point determined by Youden's J statistic. Overall, our ROC analyses indicate that measurement of miR-212 in neurally derived plasma exosomes shows sufficient diagnostic sensitivity and specificity to pursue its use as a potential screening assay for AD.

FIGURE 3 | miRNA analysis of brain-derived exosomes in blood. (A) Ct values for neuron-specific miR-9, plasma-abundant miR-451a, and ubiquitously expressed miR-16 from induced neuronal (iN) cell lysates (CL), and iN exosomes isolated by ultracentrifugation of iN conditioned medium. (B) Flow chart for the detection of miRNAs from plasma exosomes. RNAs were isolated from total plasma exosomes (ExoQuick pellets), or neural exosomes immunoprecipitated (IP) from total plasma exosomes with anti-L1CAM antibody. RNA was reverse transcribed with Qiagen's HiFlex buffer to generate cDNAs, and subsequently used as template for qPCR using miRNA-specific primers for miRNA detection. (C) Ct values for miR-9, miR-451a, and miR-16 from plasma-derived total and L1CAM (L1)-immunoprecipitated exosomes. As a negative control, an irrelevant antibody, 46-4, was used for immunoisolation. Total and IP'ed exosomes were isolated in duplicate (#1 and #2) from 2 healthy control donors (Plasma 1 and Plasma 2). Low Ct values indicate high levels of miRNA, and high Ct values indicate low levels of miRNA. The reliable Ct limit is ∼40.

DISCUSSION
Despite considerable progress in our understanding of AD, significant gaps remain, and there is a desperate need for disease biomarkers that can offer insight about pathogenesis and which may serve as theragnostic indicators.
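As flagged above, ROC statistics of this kind can be reproduced on any two-group dataset with the scikit-learn sketch below. The miRNA levels, labels, and the sign convention (lower level indicates disease) are placeholders, not the study's data; the Youden cut-off is found by maximizing J = sensitivity + specificity − 1 over thresholds.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Hypothetical normalized miR-212 levels: label 0 = control, 1 = AD.
levels = np.array([2.1, 1.8, 2.4, 2.0, 1.9, 0.6, 0.4, 0.9, 0.5, 0.7])
labels = np.array([0,   0,   0,   0,   0,   1,   1,   1,   1,   1])

# Lower miRNA level indicates disease, so use the negated level as score.
scores = -levels
auc = roc_auc_score(labels, scores)

fpr, tpr, thresholds = roc_curve(labels, scores)
youden_j = tpr - fpr                      # J = sensitivity + specificity - 1
best = np.argmax(youden_j)
print(f"AUC = {auc:.2f}")
print(f"best cut-off = {-thresholds[best]:.2f}, "
      f"sensitivity = {tpr[best]:.2f}, specificity = {1 - fpr[best]:.2f}")
```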
To address this void, we exploited two exciting areas of AD research, namely miRNAs and neural exosomes. miRNAs are small regulatory molecules that post-transcriptionally repress gene expression and thereby regulate diverse biological processes, including neuronal differentiation, plasticity, survival, and regeneration (Kosik, 2006; Sambandan et al., 2017). Several neuronal miRNAs have been directly linked to AD, but there has been controversy as to which miRNAs are most important (Herrera-Espejo et al., 2019; Swarbrick et al., 2019). In order to better define the most relevant miRNAs that are dysregulated in AD, we conducted unbiased profiling of frontal cortex from three distinct groups of age-matched elderly individuals. Unlike some previous studies which compared only AD vs. controls (Salta et al., 2016; Pichler et al., 2017), we also compared AD vs. HPCs to distinguish between miRNAs that were altered by active neurodegeneration vs. those that may change in the absence of disease, but in response to amyloid and tangles. To ensure the robustness of our data we analyzed the same 5 AD, 5 HPC and 5 control samples in several different ways and then examined these 15 samples alongside additional AD, HPC and control samples. Initially, we used miRNA array analysis in an unbiased effort to profile differences between pools of RNA from AD, HPCs, and controls. Thereafter, we used miR-specific qRT-PCR to investigate the levels of the 12 miRNAs that were most altered in the array experiments, using both disease-group pools of RNA and RNA from individual brains. This approach yielded similar results regardless of the method used or whether RNA was from pools or individual brains, and these changes were maintained when additional brains were analyzed. Although all 12 miRNAs showed consistent fold differences (>1.5-fold) between AD, HPCs, and controls, only three miRNAs (miR-182-5p, miR-32-5p, and miR-132-3p) stood out in terms of group-based differences. When we turned to analyze neural exosomes, we assessed miR-182-5p, miR-32-5p, and miR-132-3p, plus two other miRNAs (miR-219a-5p and miR-591) which in brain tissue showed consistent fold changes. Based on our iPSC-neuron experiments, we also used miR-9 and miR-451a to assess the neuronal origin of our exosome preparations.

FIGURE 4 | miR-132/212 is significantly decreased in AD neural exosomes isolated from plasma. (A) Neural-specific miR-9 was abundantly detected in all measured samples and was not significantly different between disease groups [χ²(2) = 1.195, p = 0.550]. (B) Non-specific miR-451a was also detected in samples, but was unchanged between disease groups [F(2) = 0.035, p = 0.965]. Of the miRNAs significantly altered in AD brains, (C) miR-132 was significantly decreased in neural-derived plasma exosomes from AD subjects (**p = 0.0333). (D) miR-212, a miRNA transcribed in tandem with miR-132, was also significantly decreased in AD (***p = 0.001). Significance was determined by one-way ANOVA followed by Tukey's post hoc test for normally distributed data; otherwise, Kruskal-Wallis test followed by Dunn's post hoc test was used.

miR-9 is one of the most highly expressed miRNAs in the vertebrate brain (Coolen et al., 2013), and here we show that miR-9 is enriched in human iPSC-derived neurons and neuronal exosomes. As we had anticipated, the peripherally expressed miR-451a (Okamoto et al., 2018) was barely detected in human neurons or neuronal exosomes.
Applying these markers to our L1CAM-isolated plasma exosomes, we found that miR-9 was readily detected, whereas miR-451a was present at only low levels. It is important to note that L1CAM is not uniquely expressed in brain, but is also found in kidney cells. Therefore, we cannot completely rule out that a small contribution may come from L1CAM-positive kidney exosomes. However, the relative enrichment of neural-to-peripheral miRNAs in L1CAM-isolated exosomes indicates that the bulk of these exosomes are of neuronal origin. Consequently, the use of miR-9 should provide the field with a tool to differentiate between neural and non-neuronal exosomes. Of the four miRNAs most dysregulated in AD brain, only miR-132 was significantly altered in AD neural exosomes. miR-219 could not be reliably detected, whereas miR-32 and miR-182 were reliably detected, but their levels were similar across disease groups in neural exosomes. Why these miRNAs are dysregulated in brain, but unaltered in bona fide neural exosomes, is not completely clear, but it is reasonable to assume that the miRNAs measured in brain tissue are derived from many different cell types, whereas our neural exosomes are predominantly neuronal in origin. In this regard, it is interesting to note that miR-132 is one of the most abundant brain-enriched miRNAs, whereas miR-182 is highly enriched in cells of myeloid lineage (Pucella et al., 2015). Indeed, in future studies it will be important to determine if the four miRNAs we found to be most changed in AD brain are dysregulated in exosomes from microglia, astrocytes or oligodendrocytes. Notwithstanding the importance of investigating miRNAs in exosomes from other brain cell types, it is intriguing that miR-132 is downregulated in both AD brain and neural exosomes. This finding is consistent with numerous other studies which have found miR-132 to be dysregulated in AD brain, and with a host of studies which tie miR-132 down-regulation to AD pathogenesis (Lau et al., 2013; Wong et al., 2013; Hernandez-Rapp et al., 2016; Salta et al., 2016; Patrick et al., 2017; Pichler et al., 2017). For instance, reductions in miR-132 appear to occur before neuronal loss, in vitro miR-132 protects neurons against both Aβ and glutamate (Wong et al., 2013), and overexpression of miR-132 reduces tau pathology and caspase-3-dependent apoptosis in tau transgenic mice (El Fatimy et al., 2018). Since miRNAs within the same cluster are often co-regulated, we measured miR-212 in neural exosomes.

FIGURE 5 | Receiver-operating-characteristic (ROC) curves demonstrate that miR-132 and miR-212 in neural exosomes evidence good separation between NCI and AD. ROC curves of (A) miR-132 and (B) miR-212 allowed good separation of NCI from AD patients. Neither (C) miR-132 nor (D) miR-212 levels in NCI vs. AD-MCI patients allowed good separation. Areas under the curve (AUC) for all ROC analyses were calculated using a non-parametric approach.

The sequence of miR-212 is closely similar to that of miR-132, and both are enriched in neurons and exert regulatory functions on neuronal survival, maturation, plasticity and memory (Wanet et al., 2012). Moreover, downregulation of the miR-132/212 cluster in the frontal cortex has been reported for patients with amnestic MCI and mild AD (Weinberg et al., 2015). Here, we found that miR-212 was strongly dysregulated in AD neural exosomes, and tended to be decreased in AD brain.
Why the decrease in miR-212 was significant in neural exosomes, but not brain, is unclear, but (as with miR-182) this may relate to the relative distribution of miRNAs in neurons vs. glia. Importantly, ROC analysis indicated that measurement of miR-132-3p and miR-212-3p in neurally derived plasma exosomes showed good sensitivity and specificity to diagnose AD, but did not effectively separate individuals with AD-MCI from controls. The criteria used to identify dysregulated miRNAs in AD brain required that they were specifically altered in AD vs. both HPCs and controls. Given that at least some HPCs may be on a path to AD, it is perhaps not surprising that miRNAs that are preferentially dysregulated in AD brain compared with HPC brain do not discriminate AD from AD-MCI when measured in neural exosomes. Future studies should investigate whether miRNAs that are dysregulated in HPC brain compared to control brain might discriminate AD-MCI from controls when quantified in neural exosomes. Nonetheless, quantification of miR-132 and miR-212 in neural exosomes should aid diagnosis of symptomatic patients. While this is not the population most in need of blood-based biomarkers, measurement of neural exosome miR-132 and miR-212 may have theragnostic potential, and it would be particularly interesting to measure these markers longitudinally. Future studies should determine whether the observed changes in miR-132 and miR-212 can also be seen in total plasma exosomes or are specific for neural exosomes.

DATA AVAILABILITY STATEMENT
The raw data from our array experiments can be accessed through https://www.ebi.ac.uk/arrayexpress/browse.html?query= under the accession number E-MTAB-8283. All other data supporting the conclusions of this manuscript will be made available by the authors, without undue reservation, to any qualified researcher. Religious Orders Study resources can be requested at the Rush Alzheimer's Disease Center Research Resource Sharing Hub at www.radc.rush.edu.

ETHICS STATEMENT
The studies involving human participants were reviewed and approved by the Partners Institutional Review Board. The patients/participants provided their written informed consent to participate in this study.

AUTHOR CONTRIBUTIONS
DW conceived the project, designed and supervised the research, and wrote the manuscript. DC performed the exosome isolations, analyzed the brain tissue and exosomes for miR content, analyzed data, prepared figures, and wrote the manuscript. DM coded and decoded the specimen designations to ensure all experiments were done blind to the disease status of donors/patients, conducted statistical analysis, prepared figures, and wrote the manuscript. WL dissected the brain tissue and assisted with preparation of iPSC-derived neurons. DS provided critical guidance. MM and DK provided expert guidance on the preparation of neural exosomes. DG and RR supplied archived samples and relevant clinical data. DB provided brain samples, and detailed clinical and postmortem data. All authors critically appraised the manuscript.

FUNDING
This work was supported by grants to DW from the Alzheimer's Drug Discovery Foundation and by an Alzheimer's Association Zenith Award. DM is a research fellow funded by the German Research Foundation (DFG ME 4858/1-1). DW is an Alzheimer's Association Zenith Fellow. The Religious Orders Study was supported by NIA grants P30AG10161 and R01AG15819.

ACKNOWLEDGMENTS
We are grateful to brain donors and their families.
We thank clinician colleagues for referring the patients, collecting blood, and performing lumbar punctures. We acknowledge the ARCND for providing access to the real-time PCR system. We also thank Dr. Tracy Young-Pearse for the YZ1 human iPSC cells used to generate the iPSC neurons employed in this study.

SUPPLEMENTARY MATERIAL
The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fnins.2019.01208/full#supplementary-material

FIGURE S1 | Identification of AD and AD-MCI subjects based on cognitive testing and best available CSF biomarkers used for subsequent exosome isolation. A total of 63 plasma samples were obtained from the UCSD biomarker collection. CSF from these subjects was analyzed, and levels of Aβ42 and tau yielded clear segregation of the disease groups.

FIGURE S2 | Replicate plates within pooled disease groups show good correlation. Spearman correlation between replicates of different plates run on different days for the same sample/cDNA shows high correlation, with R² > 0.99 for all.

TABLE S1 | miRNA list and primers.
2019-11-26T14:12:09.180Z
2019-11-26T00:00:00.000
{ "year": 2019, "sha1": "9cf2f160d8e2d98ec9f82f4c52a618c81ae60206", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fnins.2019.01208/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "9cf2f160d8e2d98ec9f82f4c52a618c81ae60206", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
246603065
pes2o/s2orc
v3-fos-license
The soil mite Cunaxa capreolus (Acari: Cunaxidae) as a predator of the root-knot nematode, Meloidogyne incognita, and the citrus nematode, Tylenchulus semipenetrans: Implications for biological control
Plant-parasitic nematodes (PPNs) are dangerous pests, causing serious losses to the world's agricultural crops. As soil-dwelling predaceous mites are known as potential biological control agents against many pests, we investigated the interactions between the cunaxid mite, Cunaxa capreolus (Berlese), and two plant parasites, the root-knot nematode Meloidogyne incognita (Kofoid and White) and the citrus nematode Tylenchulus semipenetrans Cobb, under laboratory conditions. The predatory mite C. capreolus completed its life span when fed on egg masses (EM) and second-stage juveniles (J2) of M. incognita and J2 juveniles of T. semipenetrans as food sources in the laboratory, in sealed arenas at 32°C and 60% relative humidity in the dark. Males developed slightly faster than females irrespective of prey. Adult females lived longer than males and showed a higher rate of food consumption. Life table parameters indicated that feeding C. capreolus on J2 juveniles of M. incognita and J2 juveniles of T. semipenetrans led to the highest reproduction rates (rm = 0.185 and 0.167 females/female/day), while feeding on EM of M. incognita gave the lowest reproduction rate (rm = 0.085). The results show that C. capreolus multiplied rapidly when juveniles of M. incognita and T. semipenetrans were offered as prey, indicating the mite's potential for regulating population densities of these two pests. Future research should focus on understanding the crop and soil management applications required to enable this cunaxid mite and other predatory species to thrive. The implications of these results for biological control of plant-parasitic nematodes are discussed.

Introduction
Plant-parasitic nematodes (PPNs) are important pests causing economically high yield losses in plants cultivated worldwide, turning horticultural areas into poor land, unviable for crop production (Szabó et al. 2012). It is estimated that annual losses are up to 80 billion dollars a year (Jones et al. 2013). Consequently, the integrated management of major nematode pests is essential to improve world crop production. In Saudi Arabia, damage and losses are caused mainly by Meloidogyne spp. and the citrus nematode Tylenchulus semipenetrans (Fouly, 2005; Al-Yahya, 2018). The citrus nematode causes the disease known as citrus slow decline, which limits citrus production across a range of edaphic and environmental conditions (Campos-Herrera et al. 2019). The root-knot nematode Meloidogyne incognita (Kofoid & White) is one of the most important nematodes associated with reduction in the yield and quality of agricultural crops in the world (Thongkaewyuan and Chairin 2018). In Saudi Arabia, losses to plants caused by M. incognita are more severe and complex than in cold countries, since the climate is suitable for the developmental activity and reproduction of nematodes throughout the year (Colagiero and Ciancio 2011). Control of plant-parasitic nematodes by nematicides has become less desirable because of increased environmental awareness and public concerns about nematicide residues and contamination of food and water (Athanassiou and Palyvos 2006; Sikora et al. 2008). Because of the aforementioned concerns, alternative methods should be developed, such as biological control agents for the management of plant-parasitic nematodes.
Common soil predators feed on plant-parasitic nematodes and may have potential in biological control (Ekmen et al. 2010; Heidemann et al. 2011). Most of these predators of root-knot nematodes are widely distributed and common in soils, including mites, predatory free-living nematodes, collembolans (springtails) and other organisms (Agbenin, 2011; Stirling et al. 2017; Campos-Herrera et al. 2019). Soil predatory mites are among the most effective biocontrol agents of several pests (Navarro-Campos et al. 2012), and of nematodes, as first reported by Linford and Oliveira (1938). Later on, numerous studies were conducted to investigate nematophagous mites. Muraoka and Ishibashi (1976) identified the predation of Cephalobus sp. (Nematoda: Cephalobidae) by 41 species of soil predatory mites. Sharma (1971) indicated that Hypoaspis aculeifer (Canestrini) (Acari: Laelapidae) significantly reduced the population of Tylenchorhynchus dubius (Bütschli, 1873) (Nematoda: Tylenchida) on potted plants. Imbriani and Mankau (1983) reported the predation of Aphelenchus avenae Bastian, 1865 (Nematoda: Aphelenchidae) and eight other nematode species by Lasioseius scapulatus Kennett, 1958 (Acari: Ascidae). Hypoaspis calcuttaensis (Acari: Laelapidae) showed great capability in consuming saprophagous, plant-parasitic and predaceous nematodes (Bilgrami, 1997). Walter and Kaplan (1991) reported that the cunaxid mite Coleoscirus simplex (Ewing, 1917) (Acari: Cunaxidae) fed on greenhouse cultures of root-knot nematodes (Meloidogyne spp.), where it preyed on vermiform nematodes and soil arthropods. Oliveira et al. (2007) estimated the consumption rate of Pergalumna sp. (Acari: Oribatida: Galumnidae) on the root-lesion nematode Pratylenchus coffeae (Zimmermann, 1898) (Nematoda: Pratylenchidae) and second-stage juveniles of the root-knot nematode M. javanica. Chen et al. (2013) estimated that the predatory mite Blattisocius dolichus significantly reduced the density of Radopholus similis. Currently, there are more than 400 known species in the family Cunaxidae around the world (Skvarla and Dowling 2019). All members of this family are considered to be free-living predators feeding on nematodes, fungal spores, spider mites, fungus gnats, small insects, as well as eggs of other soil-inhabiting microarthropods (Skvarla et al. 2014). Knowledge of the Cunaxidae fauna of Saudi Arabia is limited to only two species, Cunaxa setirostris (Hermann) and Cunaxa capreolus (Berlese), both reported in debris and top soil layers of eucalyptus trees and date palm, Phoenix dactylifera, in the Qassim region and Sakaka governorate, Kingdom of Saudi Arabia, respectively (Al Rehiayani, 2011 and Elmoghazy, 2016). However, there is no study to date which has examined the potential of C. capreolus against M. incognita or T. semipenetrans, in Saudi Arabia or anywhere else. Therefore, the objective of this study is to report on the feeding behavior and life history of a cunaxid mite, C. capreolus, that colonized root-knot nematode cultures in Saudi Arabia, and to discuss biological control of the root-knot nematode M. incognita and the citrus nematode T. semipenetrans within an ecological framework.

Nematodes
The root-knot nematode, Meloidogyne incognita, and the citrus nematode, Tylenchulus semipenetrans, were collected from a greenhouse and a citrus orchard field, respectively, at the Agricultural Experimental Station, College of Agriculture and Veterinary Medicine, Qassim University, Al-Mulida district (26.3489°N, 43.7668°E), Saudi Arabia.
Egg masses of the root-knot nematode for these experiments were obtained from greenhouse cultures of a population of Meloidogyne incognita originally isolated from eggplant and reared on tomato (Lycopersicon esculentum Mill. cv. Peterson) under greenhouse conditions. For each experiment, Meloidogyne-infected roots of tomato were collected from 10-week-old greenhouse cultures and washed free of soil. The roots were cut into 1- to 2-cm long segments, with each segment containing two egg masses. The citrus nematodes, Tylenchulus semipenetrans, were collected from soil samples, including roots, from lemon trees; the roots were gently washed free of soil and cut into 2- to 3-cm long segments, with each segment containing one or two egg masses. The egg mass and larval stages provide an important tool in studying population dynamics of Meloidogyne spp. and other plant-parasitic nematodes having their eggs aggregated in gelatinous matrices (Byrd et al. 1972). The second-stage juveniles of Meloidogyne incognita were extracted from tomato roots, while those of Tylenchulus semipenetrans were extracted from soil mixed with lemon roots. The juveniles were extracted by the sieving and sucrose centrifugation method (Jenkins, 1964). The sieve used had a pore size of 38 micrometers (400-mesh). The juvenile suspension was adjusted to 100 J2/ml for the experiments by transferring one ml of nematode suspension to a nematode counting slide and counting under a dissecting microscope at 40× magnification (a short calculation sketch of this concentration adjustment is given after this methods passage). The centrifugal flotation method used in this study is one of the best methods, as it allows isolation of active as well as slow-moving and inactive nematodes (Bezooijen, 2006).

Predatory mite
The mite C. capreolus was originally extracted from tomato greenhouse soil at the Experimental Research Station, Qassim University, Buraidah, Al-Qassim, Saudi Arabia. Quantitative samples were composed of three equidistant cores, 30 mm in diameter and 80 mm in depth, from each pot culture. The subsamples were combined and extracted in Berlese-Tullgren funnels, using 20-cm-diameter powder funnels (Krantz, 1978) with a rheostat-controlled light source for 24 h, by which time >99% of the active mites had been extracted. The colony was maintained in darkness at 32 ± 1°C and 60 ± 5% RH, with second-stage juveniles of Meloidogyne incognita (Kofoid & White) (Tylenchida: Meloidogynidae) supplied as the food resource, in the Laboratory of Acarology, Qassim University.

Experimental arenas
All experiments were conducted in rearing cells (2 cm in diameter and 0.8 cm deep) filled with a mixture of activated charcoal and plaster of Paris at a 1:7 ratio to a depth of 0.5 cm and covered by a glass slide to prevent mites from escaping; the two parts were held together by a binder clip.

Prey
Three different prey types were evaluated for their effect on development, oviposition, fecundity, life table parameters and predation rate of C. capreolus:
1. Second-stage juveniles (J2) of M. incognita.
2. Second-stage juveniles (J2) of T. semipenetrans.
3. Egg masses (EM) of M. incognita.

Continuous predation of C. capreolus on J2 juveniles of M. incognita, J2 juveniles of T. semipenetrans and EM of M. incognita
Gravid females were transferred with a moistened brush into rearing cells containing second-stage juveniles of M. incognita and allowed to lay eggs for one day; the resultant eggs were then isolated for the different biological experiments. Eggs were placed singly in individual rearing cells, and the newly hatched larvae (50 for every test) were supplied with the food resource to be evaluated (one of the three prey types).
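As flagged above, the adjustment of a counted stock suspension to the 100 J2/ml working concentration is a simple C1V1 = C2V2 dilution calculation. The Python sketch below uses hypothetical counts and volumes, purely to illustrate the arithmetic.

```python
def dilution_volume(count_per_ml, stock_ml, target_per_ml=100.0):
    """Volume of water (ml) to add so a J2 suspension reaches the target
    concentration, from C1*V1 = C2*V2. Assumes the stock is more
    concentrated than the target."""
    final_ml = count_per_ml * stock_ml / target_per_ml
    return final_ml - stock_ml

# Example: a 1 ml aliquot counted at 240 J2/ml from a 50 ml stock suspension.
print(f"add {dilution_volume(240, 50):.0f} ml water")  # -> add 70 ml water
```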
After the deutonymph stage, males were put with the females for mating. Males were then transferred into new arenas and individually reared until their death. Three experiments were designed to quantify the amount of predation on J2 juveniles of M. incognita, J2 juveniles of T. semipenetrans and EM of M. incognita. In the first experiment, 100 J2 juveniles of M. incognita were added daily to each rearing cell. In the second experiment, 100 J2 juveniles of T. semipenetrans were added daily to each rearing cell. In the third experiment, two M. incognita egg masses and a drop of water were added daily to each rearing cell. Each rearing cell was sealed and put into an incubator at 32 ± 1°C and 60 ± 5% RH in darkness. Replacement of the prey was carried out daily; records of developmental rate, predation rate, reproduction and behavior were made twice a day under a standard binocular microscope, and predators were transferred to new arenas every 2-3 days to keep a constant prey supply. The eggs of mites and prey residue were removed daily from the rearing cells. The necessity of mating was determined by adding adult males to independent arenas with virgin females of various ages and scoring for subsequent production of eggs. The developmental time and survival to the adult stage of the females used in the experiments, and of the progeny (N = 50) of each female in each treatment, were observed to calculate life-table parameters, following Hulting et al. (1990).

Statistical analysis
The life history data, the number of eggs deposited and the number of prey consumed by all individuals of C. capreolus on the three types of prey were analyzed by analysis of variance (ANOVA) and simple correlation using the SAS program (SAS Institute, 2005). Differences between means were tested at the 5% level by Duncan's Multiple Range Test (DMRT). The life table parameters of the cunaxid mite, C. capreolus, were calculated according to Hulting et al. (1990).

Life history of C. capreolus
Cunaxa capreolus was able to complete its life cycle, including egg, larva, protonymph, deutonymph, tritonymph and adult, when using egg masses and J2 juveniles of M. incognita and J2 juveniles of T. semipenetrans as food resources. Females of C. capreolus pass through one larval and three nymphal stages before reaching adulthood, while males have one larval and two nymphal stages (protonymph and deutonymph). Each motile stage is preceded by a quiescent one. The development times of immature stages of C. capreolus fed on EM of M. incognita, J2 juveniles of M. incognita and J2 juveniles of T. semipenetrans are presented in Table 1. To mature from egg to adult, females required 20.65, 15.60 and 16.22 days on EM of M. incognita, J2 juveniles of M. incognita and J2 juveniles of T. semipenetrans, respectively (Table 1). Total development time (from egg to adult) of C. capreolus was slightly faster in males than in females, which may ensure insemination of females soon after their emergence, a prerequisite for the onset of oviposition. The generation period and adult longevity lasted 25.92 and 22.72 days, 18.65 and 27.62 days, and 19.33 and 26.44 days when C. capreolus fed on EM of M. incognita, J2 juveniles of M. incognita and J2 juveniles of T. semipenetrans, respectively (Table 2). Females always deposited their eggs singly and at random in protected places. Mating was necessary for oviposition in C.
capreolus, and was required for maximum reproduction of the females, as unmated females produced lower numbers of eggs compared to mated ones. The sex ratio was calculated from the developmental experiment: the values were 80, 70 and 56% females (Table 5) when the predatory mite fed on J2 juveniles of M. incognita, J2 juveniles of T. semipenetrans and EM of M. incognita, respectively. The longest oviposition period, 22.12 days, was observed when C. capreolus fed on J2 juveniles of M. incognita. The life span period likewise followed the same trend on the different prey. Table 3 shows the numbers of J2 juveniles of M. incognita, J2 juveniles of T. semipenetrans and EM of M. incognita consumed by predatory females and males. Both immature female and male C. capreolus kept preying on J2 juveniles of M. incognita and J2 juveniles of T. semipenetrans within one week, with no significant difference (F female = 1.61, F male = 1.56).

Fecundity
Results presented in Table 4 showed that C. capreolus females fed on J2 juveniles of M. incognita exhibited the highest fecundity, while feeding on EM of M. incognita led to the lowest rates of fecundity and oviposition. The total number of eggs deposited by each female mite was significantly highest for females fed J2 juveniles of M. incognita, followed by J2 juveniles of T. semipenetrans, and then EM of M. incognita, which occupied the last rank (F = 129.4 and P < 0.01).

Feeding behavior
The soil predatory mite C. capreolus searched actively for nematodes around the experimental arena. Once a nematode was found, the predatory mite probed it with its first pair of legs and pedipalps, snatched it with its chelicerae, and devoured it. The chelicerae are the main killing and feeding organs. The first pair of legs is used to hold the prey during attack and feeding. After each predation event, the predatory mite cleaned its mouthparts with its first pair of legs and immediately started the next search. Cunaxa capreolus took one minute to consume a nematode. Several specimens were observed feeding on the EM of the root-knot nematode M. incognita. The possibility that C. capreolus fed only on the gelatinous matrix that surrounds the nematode eggs cannot be excluded, though several mite specimens were observed with the rostrum and chelicerae penetrated into the gelatinous matrix, feeding on the eggs inside.

Discussion
Few studies have been carried out on the life history of the predatory cunaxid mite C. capreolus fed on different prey. Zaher et al. (1975) reported that C. capreolus preying on booklice completed development in 16.5 days and had an oviposition rate of 43.5 eggs/female at 30 ± 1°C, which agrees closely with the current findings. When feeding on free-living nematodes, C. capreolus reached maturity in 17.35 days at a temperature of 35 ± 2°C and 75 ± 5% R.H. (Mostafa et al. 2016). When fed on booklice (Psocoptera) at 15°C, C. capreolus had extremely prolonged nymphal stages and a low oviposition rate of 0.41 eggs/day/female, compared with the higher rates reported by Zaher et al. (1975) at 30°C and Mostafa et al. (2016) at 35°C for C. capreolus fed on various diets. The differences encountered in the cited literature may be due to differences in foods and experimental conditions. Omar and Mohamed (2014) also determined the durations of the various adult stages of Cunaxa setirostris (Hermann), which were longer than in the present study, except for being close in the pre-oviposition period, when fed on Tetranychus urticae Koch, Tydeus californicus (Banks) and Eutetranychus africanus (Tucker).
These differences could reflect the different foods. In the same study, total fecundity ranged between 18 and 73 eggs; total fecundity when fed on T. californicus approximated the current result. On the other hand, Zhang (2003) reported that the generation time of the cunaxid mite Coleoscirus simplex lasted 14 days, with a daily rate of 4.35 eggs deposited per female. Among the data presented for nine species of Cunaxidae fed on various diets, mean oviposition rates ranged between 0.4 and 2.60 eggs/day (Zaher et al. 1975; Soliman et al. 1975; Walter and Kaplan 1991; Arbabi and Singh 2000; Castro and Moraes 2010; and Mostafa et al. 2016). Only 2 of 9 results are higher than the current finding. When C. capreolus fed on EM of M. incognita, there was a significant increase in the development and pre-oviposition periods and a reduction in the oviposition period and fecundity; consequently, the predator's performance was poor. Although females of C. capreolus could feed on EM of M. incognita, their oviposition rate was lower than that on J2 juveniles of M. incognita and J2 juveniles of T. semipenetrans. J2 juveniles of M. incognita are thought to be a profitable prey for C. capreolus, while EM of M. incognita are only subsidiary, or alternative, prey. The maximum reproduction (2.01 and 1.90 eggs/♀/day) was recorded when C. capreolus fed on J2 juveniles of M. incognita and J2 juveniles of T. semipenetrans, while the minimum reproduction (0.65 eggs/♀/day) was observed when C. capreolus fed on EM of M. incognita. It seems that J2 juveniles of M. incognita and J2 juveniles of T. semipenetrans are more suitable as main prey than EM of M. incognita. It is of interest to note that Zaher et al. (1975) revealed that the citrus brown mite Eutetranychus orientalis gave the highest reproductive rate compared with booklice (Psocoptera). Similar results were recorded by Walter and Kaplan (1991) and Mostafa et al. (2016). Based on studies with stigmaeids and phytoseiids, the general picture arises that food quality influences developmental time and strongly influences fecundity and immature viability (Al-Azzazy 2002; Gnanvossou et al. 2003; Al-Azzazy 2005; Al-Azzazy 2018). On the other hand, several authors (Momen and Hussein 1999; Castro and Moraes 2010) have stated that the presence of alternative food should help predatory mites to survive periods of prey scarcity and thus prevent severe declines in soil-dwelling predatory mite populations during shortages of primary foods. To reduce the dominance of plant-parasitic nematodes, effective biocontrol agents must focus their actions upon the target pest without harming other organisms. Our experiments in laboratory arenas clearly show that the predatory mite C. capreolus has the capacity to kill or damage large numbers of plant-parasitic nematodes. All active developmental stages of the predatory mite preyed on nematodes. Mite tritonymphs were most voracious and consumed many more nematodes than larvae, protonymphs and deutonymphs did. Daily consumption increased significantly in the adult. The consumption rate increased from the pre-oviposition to the oviposition period and declined in the post-oviposition period. In this study, the numbers of EM of M. incognita consumed by female and male C. capreolus were not less than 0.38 and 0.336 (averaging 1.04 and 0.86 per day) at 32°C and 60% relative humidity over 42 days, respectively, which showed that C.
capreolus possesses a continuous and stable ability to prey on EM of M. incognita. Al Rehiayani and Fouly (2005) reported that, when 200 individuals of Cosmolaelaps simplex were released into the rhizosphere soil of potted citrus seedlings two weeks after inoculation with 1,000 juveniles of T. semipenetrans, the number of juveniles decreased by 65% compared to the nematode-alone control 75 days after the predatory mites were released. Life table parameters indicated that C. capreolus had high biotic potential when preying upon J2 juveniles of M. incognita and J2 juveniles of T. semipenetrans, while it had low biotic potential when fed on EM of M. incognita. The population growth parameters were more favorable for C. capreolus fed on J2 juveniles of M. incognita and J2 juveniles of T. semipenetrans compared to EM of M. incognita. This is confirmed by the intrinsic rate of natural increase (rm), which was 0.185 and 0.167 on J2 juveniles of M. incognita and J2 juveniles of T. semipenetrans, respectively, while it was 0.085 on EM of M. incognita. It is certain that the observed low potential is not an intrinsic characteristic of C. capreolus, but rather the result of the unsuitability of EM of M. incognita as prey. This statement reflects the high fertility of C. capreolus when fed on J2 juveniles of M. incognita and J2 juveniles of T. semipenetrans. These results indicate that J2 juveniles of M. incognita and J2 juveniles of T. semipenetrans provide C. capreolus with higher reproductive capability than does EM of M. incognita, suggesting that J2 juveniles of M. incognita and J2 juveniles of T. semipenetrans could be evaluated in future studies as prey for mass production of C. capreolus. The gelatinous matrix of egg masses of M. incognita may serve as a barrier to invasion by some soil predatory mites, small arthropods and microbial antagonists in the soil (Orion et al. 2001). This may be one of the reasons why this species (as well as cunaxids in general) is often found in very low numbers on EM of M. incognita, compared to Blattisociidae, Oribatidae and Ascidae (Al Rehiayani and Fouly 2005; Xu et al. 2014). Our results confirmed that the predatory mite C. capreolus has an inherent potential for the control of M. incognita and T. semipenetrans, and the presented information will be important in the management of these pests. It appears that this mite, as well as other possible biological agents, may be important in balancing these pest nematode populations in field ecosystems. Finally, the findings discussed above should help to gain a better understanding of the efficacy and utilization techniques of the predatory cunaxid mite, C. capreolus, as a facultative predator, in biological control programs against plant-parasitic nematodes. Further work needs to be done in the presence of nematodes in soil, in pots and microplots.
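Intrinsic rates of increase like the rm values quoted above are conventionally obtained from an age-specific life table by solving the Euler-Lotka equation, sum over ages x of exp(-rm*x)*lx*mx = 1. The Python sketch below solves for rm by root-finding on made-up survival (lx) and fecundity (mx) schedules; it is not the study's data, and Hulting et al. (1990) additionally describe a jackknife procedure for the variance of rm that is not reproduced here.

```python
import numpy as np
from scipy.optimize import brentq

# Hypothetical pivotal ages x (days), survival lx, and daily female
# offspring per female mx -- illustrative values only.
x = np.arange(15, 40)
lx = np.linspace(1.0, 0.2, len(x))
mx = np.where((x >= 18) & (x <= 35), 1.5, 0.0)

def euler_lotka(rm):
    # rm is the root of sum(exp(-rm * x) * lx * mx) - 1 = 0.
    return np.sum(np.exp(-rm * x) * lx * mx) - 1.0

rm = brentq(euler_lotka, 1e-6, 1.0)
print(f"rm = {rm:.3f} females/female/day")
```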
Gaming and the Virtual Sublime: Rhetoric, Awe, Fear, and Death in Contemporary Video Games

Can you have a transformative experience as a result of falling through a programming error in the latest triple-A title? Does looking out across a vast virtual vista of undulating mountains and tumultuous seas edge you closer to the sublime? In an effort to answer these sorts of questions, Gaming and the Virtual Sublime considers the 'virtual sublime' as a conceptual toolbox for understanding our affective engagement with contemporary interactive entertainment. Through a detailed examination of the history of the sublime, from pseudo-Longinus' jigsaw puzzle of the sublime in rhetoric, through the eighteenth-century obsession with beauty and terror, past the Kantian mathematical and dynamical sublime, all the way to Lyotard's 'unpresentable event' and Deleuze's work on chaos and rhythm, this book road-tests these differing components in a far-reaching exploration of how video games - as virtual spaces of affect - might reshape our opportunities for sublime experience. Using playthroughs, developer diaries, forum discussions and contemporaneous reviews, games ranging from the heartbreak of That Dragon: Cancer through to the abject body-horror of Outlast (with a dash of Tetris in-between) are discussed in terms of the experience(s) of play, their design and their co-creation with gamers, with a specific focus on rhetoric and narrative; awe; fear and terror; death and boredom. Written in an engaging and accessible style, this book is a must-read for philosophers, scholars, and those interested in games and popular culture more broadly.

Acknowledgments

My thanks go to a number of colleagues who have supported me with ideas and advice during the writing of this book, including Dr Jack Denham, Dr David Hill, Dr Adam Formby, Professor David Beer and Dr Steven Hirschler (who all chipped in on matters sublime and ludological) and Dr Rosie Smith (who listened to me grumble). Even more substantive thanks go to my wife and my amazing daughter, both of whom put up with far too many disrupted evenings and weekends.

Chapter 1 Introduction: What are Games for?

Peacock Hey!, Byron and the Significance of Skeletons

My daughter has had an imaginary friend called Peacock Hey! since she was two years old (she has just had her 4th birthday at the time of writing). Peacock Hey! is a 100-foot-tall mermaid whose age varies wildly depending on the context of the story she is the protagonist of. These stories all have a similar thread: something my daughter has been asked to do has previously been experienced by Peacock Hey! and, because Peacock Hey! has done something before, it's okay for my daughter to do it too. Peacock Hey!, despite not being real, makes the world a more familiar place, a less frightening place. Although not as well-known as Don Juan, Byron's (2004) poem Childe Harold's Pilgrimage still manages to capture the awe of those men - and it was ostensibly men - fanning out across Europe on the original 'Grand Tour'.
In the section titled 'There is a Pleasure in the Pathless Woods', Byron writes:

There is a pleasure in the pathless woods,
There is a rapture on the lonely shore
There is a society where none intrudes
By the deep Sea, and music in its roar:
I love not Man the less, but Nature more […]

Byron's poem was written following his travels through the Mediterranean (Heffernan, 2006) and this passage sees him considering the vitality and inspiration of nature; the enjoyment of losing oneself in the forest, the power and ferocity of the sea. What Byron is describing is the ability of nature to instil in a person a sense of awe, or even what some would term the sublime (see Needler, 2010). Now the obvious question is what connects my daughter's imaginary friend to Byron's Grand Tour? The answer is that both exemplify, in different ways, attempts to codify and comprehend experiences that initially transcend our ability to understand them. For my daughter, this is framed largely through her life course. She is four, going on five, so many of the things that adults take for granted are new and daunting - the unknown that only becomes known through experience. Byron, as he identifies in the poem, uses the verse to try and explain that which is perhaps beyond explanation. Both of these examples get to the root of the sort of issues this book will explore in terms of how we try to make sense of experiences that challenge our perception of ourselves and the world around us.

Further into Childe Harold's Pilgrimage, Byron contrasts his admiration for the wonders of the natural world by offering a description of late eighteenth-century Greece that Shaw (2013) describes as 'withering' (p. 148). In Byron's work, Shaw sees elements of the sublime, identifying a connection between the isolated exemplar and the universal - the individual experience connected to the whole - which may or may not have something to do with that 'fatal divide between the human and the divine' (p. 153) often identified in what came to be called the Romantic sublime. For my purposes, it demonstrates at the very least a classic example of the relationship between subject and object, and the range of impacts each can have on the other.

Before proceeding, it is perhaps worth giving some context to the book. My first piece of published writing as an academic explored the ways in which communities of video game players and fans (I will endeavour to use 'gamers' throughout) used a single-player role-playing game - Fallout 4 (Bethesda, 2015) - to discuss their own fears of dying (Spokes, 2018). Specifically, the game is set in a post-apocalyptic future where the East Coast of the United States has been largely destroyed as part of a nuclear war. Gamers navigate a hellish landscape of collapsed buildings and mutated animals, trying to make sense not only of what happened, but where humanity is headed. The totality of the destruction is immense, but what chimed most with gamers were the day-to-day tragedies of individual deaths, typified by skeletons placed in the game environment. Some of the skeletons were arm-in-arm, suggesting a final embrace at the point of impact; others were going about their everyday lives, shopping at the supermarket with their children in the trolley when the bomb fell.
Gamers used these skeletons to situate and try to imagine their own deaths, or avoidance of death, in a similar scenario: the game was an opportunity for discussion, and also a way of attempting to deal with the magnitude of the death of humanity (albeit in a representational sense). Video games, and virtual worlds more broadly, engage us on both micro- and macro-levels. As Muriel and Crawford (2020, p. 140) observe, 'video games and their culture can help us understand wider social topics such as agency, power, everyday life and identity in contemporary society' and in working through this book I am keen to consider in detail the relationship between video games as the individualistic pursuit they are frequently typified as, and video games as virtual and simulational spaces for testing us, for allowing us to explore their wider ramifications as reflective of the social conditions, practices and processes we engage in and with: in the context of the latter, much like Peacock Hey!, video games may act as a proxy for making the macro more manageable. But equally I want to explore the ways in which they push us towards connecting with ideas and environments that challenge and antagonise our understanding of how things are.

Using case studies from contemporary video gaming, this book can be thought of as an experiment in applying various philosophies of the sublime in an effort to see how well they work in the context of interactive media. My focus is on unpacking a variety of philosophical ideas in relation to four key areas: rhetoric, awe, fear and death. I am interested in whether or not it is possible to experience the awe of a Byronic landscape through the television screen, whether fear can be propagated in a pizzeria staffed by animatronic animals, and whether repeating the same action over and over and over again can be the path to a transcendent experience. More than that though, the central question this book is asking is how useful the concept of the sublime is in understanding our recent entanglements with representative virtual and simulational spaces: is the sublime fit for purpose, or does it need recasting for the virtual age?

Where best to begin…

At this early stage, it is already worth reflecting on why this chapter and indeed the book is about 'gaming'. By 'gaming' I mean specifically the processes, practices and experiences of playing video games (which is what gamers do!). Throughout I will be exploring ideas that may be applicable to games more broadly, including things like 'play' and 'affect', but my primary interest is in the application of the sublime to the video game. To caveat this, I am not suggesting that the sublime is something applicable to all games, but rather that our understanding of video games might be enriched - or at least more thoroughly scrutinised - through a conceptual framework based on sublime ideas. Throughout the book I will use 'game(s)' and 'video game(s)' interchangeably, much like 'player' and 'gamer': if I refer to 'games' in a broader sense outside of the video game, this will be clearly demarcated.

The bulk of this chapter then will set some groundwork for exploring video games as well as developing some of the associated terminology that will be used in this book. Firstly, it is important to understand what a contested term 'video game' is, so some definitional work is definitely in order to establish a working description to problematise later on.
Secondly, I'll double down on differentiating the focus of this book from 'games' more generally, as this will enable us to better understand competing arguments about what games do and why we play them. Here I will reflect on both Huizinga's and Caillois' discussions of things like 'play'. Stemming from this, contemporaneous research that outlines an ontological distinction between games as objects and games as processes will allow the interrogation of dominant schools of thought in the associated field of Game Studies. Thirdly, research on the impact of games, in terms of their power to offer new 'possibility spaces' (Salen & Zimmerman, 2004), socio-political engagement and critical tools for understanding a changing world, will begin to show the affective resonance of this form of popular culture, and how far removed it has become from simplistic notions of passively consumed entertainment.

What are Video Games?

An obvious question, right - but what a video game is is heavily contested. Initially, a 'video game' can be challenged terminologically: Perron and Wolf (2009, p. 6) acknowledge this by explaining that although the field of academic research into video games has developed exponentially in the last 20 years or so, 'a set of agreed-upon terms has been slow to develop, even for the name of the subject itself ("video games", "videogames", "computer games", "digital games", etc.)'. This issue is compounded, they continue, by games journalists using a variety of different terms, and professional organisations similarly muddying the waters. The video games industry also uses all sorts of terms, for example, 'electronic entertainment'. It is important then to identify that there is by no means agreement around terminology, but also that it is useful to have a term in use for the sake of clarity. Following Crawford's (2012) detailed discussion, this book will adopt 'video games' as the standard term throughout, for the simple reason of frequency: it is after all the most used term.

What a video game is could be understood definitionally. I could say that a video game is 'a computer game that you play by using controls or buttons to move images on a screen' (Collins Dictionary, Video Game, 2019) but as with the use of the term 'video game', this definition is also problematic. For a start, the definition suggests a video game is a 'computer game' - a terminological concern - before stating that you 'play' it using controls. What you do with a video game depends very much on your understanding of 'play', as well as arguments about the nature of active versus passive engagement through the manipulation of 'images on a screen'. So not that helpful. Perhaps a video game can be understood simply as a form of interactive media? There are a few ways of situating games here - again, Crawford (2012) is invaluable in detailing the arguments for and against the view that games are media - including how video games are entertainment products first and foremost. This certainly chimes with their rise to prominence as a cultural product (Donovan, 2010); for example, in 2018 Grand Theft Auto V (Rockstar North, 2013) became the most profitable entertainment product of all time (Donnelly, 2018) when compared to film and literature. In addition to this, lots and lots of people play video games. Video games can be thought of as products of culture industries spread across the globe, where large development teams spend many years and many millions of dollars designing products to sell to gamers.
Video games have considerable reach in terms of how many people come into contact with them, directly or indirectly. Red Dead Redemption 2 (RDR2) (Rockstar Studios, 2018), according to some estimates (Takahashi, 2018), had 2,800 employees working on its development for seven years at a cost of around $170 million; given that Take Two, the owner of the developer, made $725 million in the three days after the release date (Sarkar, 2018), selling 17 million copies of the title, this is an indication both of the size and scope of the industry and of what a sound financial investment video game development appears to be at present.

If video games are media, it is worth thinking about them in relation to both their production and their consumption. As Warde (2005) argues, consumption is not simply the end result of production, and is not entirely about passive engagement with particular objects and practices, but is instead an active series of processes and relationships that reinforce as well as challenge a variety of socio-cultural structures (consumption as a type of subcultural capital, for instance). As a form of material culture, video games offer a way of exploring these relationships as a type of social reality, where different consumers use video games in a variety of ways that complicates any simplistic binary between production and consumption: simply put, video games are a lens through which we can explore contemporary culture (Denham & Spokes, 2018; Muriel & Crawford, 2018).

The scale and scope of video games makes them a viable locus of academic study and research, and research into video games, as I'll detail throughout, is necessarily diverse and able to draw on a variety of perspectives. As Grey (2009, p. 1) contends, games can 'be read critically, not simply as expressions of culture or as products of consumption, but as objects through which we can think'; this thinking might involve the formal qualities of the game itself in relation to interactions between programmers and players (Cremin, 2012), methodological issues around capturing and detailing what constitutes 'play' (Giddings, 2009) or the role of memory in creating players' identities and associated narratives (Mukherjee, 2011). Bogost (2010, p. ix) argues that video games have important persuasive powers in terms of how we see the world around us:

Video games can … disrupt and change fundamental attitudes and beliefs about the world, leading to potentially significant long-term social change. I believe that this power is not equivalent to the content of video games, as the serious games community claims. Rather, this power lies in the very way video games mount claims through procedural rhetorics. Thus, all kinds of video games … possess the power to mount equally meaningful expression.

As Nieborg and Hermes (2008) tell us, a multitude of disciplinary approaches and analytical tools can be used to explore video games, and reciprocally video games may help to illuminate current debates in other disciplines. We can also see these sorts of activities in the related field of Game Studies, and it is worth spending some time working through the development of key arguments in this area of research to shed more light on what a video game is and what a video game does.

Video Games and Play

Play is something intrinsic to all games, video or otherwise, and the notion of play is central to numerous positions in the field of Game Studies.
Debates around what play is and what happens to us when we play - echoing the opening question about what a video game is - typically emerge in response to the work of two scholars: Dutch cultural theorist Johan Huizinga (particularly his 1938 book Homo Ludens) and French sociologist Roger Caillois (in Man, Play and Games, originally published in 1961). Huizinga (1955) defines play in relation to its form, as something that stands outside of quotidian experience but that completely encapsulates a person in the sense of its absence of utility and its emphasis on the imaginary: he also describes it as a 'free activity' (pp. 34-35). Play, he goes on, takes place in a demarcated spatio-temporal location and is governed by specific rules that result in group coherence, which again underscores the separation from everyday life. In the context of video games, on a rudimentary level we can already see the ways in which play is operationalised: imaginary worlds, a leisure pursuit separate from work, rules as defined by developers. But what Huizinga speaks to is essentially a series of binary distinctions, a dialectic of sorts. The absence of utility suggests that Huizinga sees play as the opposite of seriousness (Ehrmann, Lewis & Lewis, 1968), that play cannot form part of our everyday experience, and that the imaginary runs counter to reality. This binary is important when thinking about video games, because not only is play frequently considered a quotidian experience, but the philosophical notion of the virtual also challenges these distinctions, particularly the representational and the real (see Chapter 4). Huizinga's work has had considerable impact on the field of Game Studies, including academic research in relation to video games: for example, Salen and Zimmerman's (2004) landmark study on play-through-game-design is titled Rules of Play, and expressly unpacks Huizinga's work in defining 'meaningful' play (pp. 31-36). Likewise Juul's (2005) work on the constituent elements of the video game again draws on both the power of the imaginary and the need to see play through codified systems of rules (p. 1).

Caillois' (2001) work, situating games and play as conditional in many social structures and behaviours, reproduces some of Huizinga's original contentions (free, separate, rule-based, make-believe) through 'six characteristics', including the notion of games being unproductive. Whilst many binary distinctions remain, the interplay between 'play' and 'games' is more pronounced, with Caillois developing a spectrum that runs from ludus (codified rules for structured action) through to paidia (or playfulness, the more unstructured, spontaneous nature of play). His argument, crudely distilled, is that the push and pull between ludus and paidia is what leads to potential instability in cultures as rules are routinely established and reformed. Ehrmann, Lewis, and Lewis (1968) acknowledge the important ways in which Caillois attends to this neglected area of Huizinga's work, but decry his obsession with categorisation (p. 32). Thankfully his expanded categorisation does push past Huizinga's focus on competition, offering the aforementioned two types of play (ludus and paidia) and four differing forms: these forms - agon (competition), alea (chance), mimicry (role play) and ilinx (perception altering) - have in turn informed scholars of Game Studies, including Walther's (2003) application of these categories to video games, notably Hitman: Codename 47 (IO Interactive, 2000).
For Walther, the suitability of these categories rests firstly on accepting the ways in which the categories may interrelate. Hitman is initially about mimicry in that the player must locate their 'play-mood' in response to the role (which is make-believe), but this itself involves a complicated relationship between the gamer, their avatar and the game space. Following this, mimicry becomes agon, whereby the game design directs gamers towards rule-specific competitions (missions that need completing). Gaming here is clearly multiple, challenging the simplistic delineations previously discussed. Sutton-Smith, in his two-volume study of games cunningly titled The Study of Games (Avedon & Sutton-Smith, 1971), considers multiplicity in terms of how various groups - from academics and educators to the military - have their own definitions, so that the meanings of games and play are not inherent, but vary depending on the people thinking about them and engaging with them. This resonates in van Vught and Glas' (2018) analysis of games as objects and games as processes. They consider these positions as 'opposing ontological strategies', with the former occupying a space where 'the game object provides some core structure that encourages or even enforces certain play actions to be performed' (p. 4) and the latter more about the action or processes involved in play, rather than the object itself ('this mission that happens to feature in Hitman: Codename 47 is exciting!'). Play is bifurcated depending on the view of the gamer. In the first instance, play is more of a methodological concern and in the second, play is the 'object of analytical interest', and whilst this again implies a dialectic, it is one underscored by multiple meanings of gaming and play that are gamer-dependent. Despite not engaging with video games explicitly, Sutton-Smith's appraisal is still regarded as a helpful interjection, recognising as it does the multiple ways in which play can be conceptualised in relation to video games (Juul, 2001). Similarly, Tavinor (2009, p. 32) argues that scholars of video games need to 'construct a definition that offers the possibility that there may be more than one way to be a videogame [sic.]'.

Indeed, in Game Studies there have historically been two frequently oppositional approaches to understanding the video game - ludological and narratological - and both these positions are worth exploring to strengthen the foundations of this study. I have already detailed a handful of binaries in studies of games, both in a historical and contemporaneous sense (play as structured vs. unstructured, games as objects vs. games as processes) and, in terms of video games, there is a similar distinction that has previously seen video games scholars framing games as either ludological or narratological. It is worth saying that much of the initial antagonism between key players in this debate has abated, but it nonetheless demonstrates entrenched views on what video games are, what they do, and how they can be understood. In addition, these debates can also be understood through the sublime, where ludology maps on to the embodied, affective experience of a sublime happening and narratology could be seen as the more traditional, pseudo-Longinian emphasis on rhetoric (see Chapter 2). For clarity I am going to present these debates in chronological order, rather than by perpetuating the binary. In Hamlet on the Holodeck: The Future of Narrative in Cyberspace, Janet Murray (2017, p. 10)
situates video games in terms of their 'promise to reshape the spectrum of narrative expression, not by replacing the novel or the movie but by continuing their timeless bardic work within another framework'. By contextualising video games this way, Murray imposes narrative sensibilities on the medium, or rather she shows how video games are narratological in that as a 'new compositional tool [they should be placed] as firmly as possible in the hands of the storytellers' (p. 284). At the time, Game Studies scholars were less than receptive to what they saw as a type of reductionism. Aarseth (1997) sees video games not as passively read texts, but as ergodic experiences, whereby the gamer is required to actively participate instead. More dramatically, Juul (1998) states that 'computer games are not narratives […] rather the narrative tends to be isolated from or even work against the computer-game-ness of the game', which sets the scene for a divorcing of story from play. Further to this, Eskelinen (2001) argues that 'outside academic theory people are usually excellent at making distinctions between narrative, drama and games. If I throw a ball at you I don't expect you to drop it and wait until it starts telling stories'.

From here we might characterise a series of skirmishes between the two camps. For the ludologists, a sustained critique of games-as-narratives can be found in the work of Frasca (2007) and Mäyrä (2008), to name a few. Another notable battle played out between Jenkins (2004) and Eskelinen (2004), with Jenkins attempting to offer spatialised dynamics as an in-between space - because 'it makes sense to think of game designers less as storytellers than as narrative architects' (p. 121) - which Eskelinen characterises as a position ignorant of key critiques that breaks no new ground in theorising video games. These arguments are increasingly met with calls for a middle ground from both narratologists and games designers/developers (see Mukherjee, 2015). Murray, having perhaps unwittingly instigated the initial conflict, appeals for calm, stating that 'Game studies, like any organised pursuit of knowledge, is not a zero-sum team contest, but a multidimensional, open-ended puzzle that we all are engaged in cooperatively solving'. She calls it 'the last word' (Murray, 2005), and I am inclined to agree.

What we glean from these disagreements are the impassioned positions that some scholars take over what a game is and what it does. Is it a narrative or is it about play? Once again, the discussion is sometimes reduced to an unproductive binary distinction, whereas, as demonstrated earlier, there are multiple readings of what games are. Perhaps the most useful thing to take from these arguments, as Crawford (2012) does, is the sorts of features we tend to find in games: things like agency (what the gamer does or is able to do) and interactivity, both of which will thread throughout the empirical chapters of this book (Chapters 5-8). Games involve an interrelationship between play and narrative, the locus of which is where successful titles afford gamers affective experiences that both have a lasting impact and push towards what might be considered a sublime encounter.
Knowledge-enhanced Iterative Instruction Generation and Reasoning for Knowledge Base Question Answering

Multi-hop Knowledge Base Question Answering (KBQA) aims to find the answer entity in a knowledge base which is several hops from the topic entity mentioned in the question. Existing retrieval-based approaches first generate instructions from the question and then use them to guide multi-hop reasoning on the knowledge graph. As the instructions are fixed during the whole reasoning procedure and the knowledge graph is not considered in instruction generation, the model cannot revise its mistake once it predicts an intermediate entity incorrectly. To handle this, we propose KBIGER (Knowledge Base Iterative Instruction GEnerating and Reasoning), a novel and efficient approach to generate the instructions dynamically with the help of the reasoning graph. Instead of generating all the instructions before reasoning, we take the (k-1)-th reasoning graph into consideration to build the k-th instruction. In this way, the model can check the prediction from the graph and generate new instructions to revise incorrect predictions of intermediate entities. We do experiments on two multi-hop KBQA benchmarks and outperform the existing approaches, becoming the new state of the art. Further experiments show our method does detect incorrect predictions of intermediate entities and has the ability to revise such errors.

Introduction

Knowledge Base Question Answering (KBQA) is a challenging task that aims to answer natural language questions with a knowledge graph. With the fast development of deep learning, researchers leverage end-to-end neural networks [12, 20] to solve this task by automatically learning entity and relation representations, followed by predicting the intermediate or answer entity.

Fig. 1: An example from the WebQSP dataset. The red arrows denote the right reasoning path, the blue arrows denote a wrong reasoning path and the purple arrow denotes the revision in our approach.

Recently in the KBQA community, there has been more and more interest in solving complicated questions where the answer entities are multiple hops away from the topic entities. One popular way to solve multi-hop KBQA is the information retrieval-based methods, which generate instructions from questions and then retrieve the answer from the knowledge graph by using the instructions to guide the reasoning [9, 18]. Although achieving good performance on multi-hop KBQA, existing retrieval-based methods treat instruction generation and reasoning as two separate components. Such methods first use the question alone to generate the instructions for all the hops at once and then use them to guide the reasoning. As the instructions are fixed during the whole reasoning procedure and the knowledge graph is not considered in instruction generation, the model cannot revise its mistake once it predicts an intermediate entity incorrectly. When the model reasons to an incorrect intermediate entity, the fixed instructions will guide the reasoning from the wrong prediction, which induces error accumulation.

We take an example from WebQSP, as shown in Figure 1, to show the importance of the knowledge graph in predicting intermediate entities and revising incorrect predictions. For the query question "Who does Brian Dawkins play for 2011", its topic entity is the football player "Brian Dawkins". Both "play" and "2011" can be attended to in the instruction generated at the first step.
The baseline method NSM [9] reasons to a wrong intermediate entity "extra salary", perhaps because the model mistakes the time constraint "2011" for a regular number and chooses the "extra salary" entity, which is related to numbers and currency. At the second step, the instructions of NSM continue to guide the reasoning from the wrong intermediate entity and reach the wrong answer "Clemson Tigers football", which does not satisfy the time constraint. However, with the knowledge graph information that the "roster" entity connects to the type entity "to 2011" and the named entity "Denver Broncos (football team)", while the entity "extra salary" is not linked to another entity by the relation "team roster", the instructions could revise the error by re-selecting "roster" as the intermediate entity and linking the predicate "play" with the relation "team roster" to derive the answer entity "Denver Broncos".

To introduce the knowledge graph into generating instructions from the query question, we propose our approach, Knowledge Base Iterative Instruction GEnerating and Reasoning (KBIGER). Our method has two components, the instruction generation component and the reasoning component. At each step, we generate one instruction and reason one hop over the graph under the guidance of the instruction. To generate the k-th instruction, we take both the question and the (k − 1)-th reasoning graph into consideration. In this way, our model can obtain the results of the last reasoning step and is able to revise possible mistakes by generating the new instruction. Then we utilize the instruction created to extend the reasoning path. Besides, we also adopt knowledge distillation to enhance the supervision signal for intermediate entities, following [9]. We do experiments on two benchmark datasets in the field of KBQA and our approach outperforms the existing methods by 1.0 points Hits@1 and 1.0 points F1 on WebQSP, and by 1.4 and 1.5 points on CWQ, becoming the new state-of-the-art.

Our contributions can be summarized as follows:
1. We are the first to consider the reasoning graph of previous steps when generating the new instruction, which makes the model able to revise errors in reasoning.
2. We create an iterative instruction generation and reasoning framework, instead of treating instruction generation and reasoning as two separate phases as current approaches do. This framework can fuse information from the question and the knowledge graph in a deeper way.
3. Our approach outperforms existing methods on two benchmark datasets in this field, becoming the new state-of-the-art.

Related Work

Over the last decade, various methods have been developed for the KBQA task. Early works utilize machine-learned or hand-crafted modules like entity recognition and relation linking to find the answer entity [22, 1, 5, 6]. With the popularity of neural networks, recent researchers utilize end-to-end neural networks to solve this task. These methods can be categorized into two groups: semantic parsing based methods [22, 12, 8, 16] and information retrieval based methods [26, 25, 3, 15]. Semantic parsing methods convert natural language questions into logic forms by learning a parser and predicting the query graph step by step. However, the predicted graph depends on the prediction of the last step, and if at one step the model inserts an incorrect intermediate entity into the query graph, the predictions afterward will be unreasonable.
Information retrieval-based methods retrieve answers from the knowledge base by learning and comparing representations of the question and the graph. [13] utilizes a Key-Value Memory Network to encode the questions and facts to retrieve answer entities. To decrease the noise in questions, [24] introduces variance reduction into retrieving answers from the knowledge graph. Under the setting of a supplemented corpus and an incomplete knowledge graph, [19] proposes PullNet to learn what to retrieve from a corpus. [18] proposes TransferNet to support both label relations and text relations in a unified framework. For the purpose of enhancing the supervision of the intermediate entity distribution, [9] adopts the teacher-student network in multi-hop KBQA. However, it ignores the utility of the knowledge graph in generating information from questions, and the instructions generated are fixed during the whole reasoning process, which undermines the ability to revise incorrect predictions of intermediate entities.

Preliminary

In this part, we introduce the concept of the knowledge graph and the definition of the multi-hop knowledge base question answering task (KBQA).

Knowledge Graph (KG). A knowledge graph contains a series of factual triples and each triple is composed of two entities and one relation. A knowledge graph can be denoted as G = {(e, r, e′) | e, e′ ∈ E, r ∈ R}, where G denotes the knowledge graph, E denotes the entity set and R denotes the relation set. A triple (e, r, e′) means relation r exists between the two entities e and e′. We use N_e to denote the entity neighbourhood of entity e, which includes all the triples involving e, i.e., N_e = {(e′, r, e) ∈ G} ∪ {(e, r, e′) ∈ G}.

Multi-hop KBQA. Given a natural language question q that is answerable using the knowledge graph, the task aims to find the answer entity. The reasoning path starts from the topic entity (i.e., the entity mentioned in the question) and ends at the answer entity. Other than the topic entity and answer entity, the entities in the reasoning path are called intermediate entities. If two entities are connected by one relation, the transition from one entity to another is called one hop. In multi-hop KBQA, the answer entity is connected to the topic entity by several hops.
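To make these definitions concrete, the toy Python sketch below models a knowledge graph as a set of (head, relation, tail) triples and exposes the neighbourhood N_e as all triples in which an entity appears; the entity and relation names mirror Figure 1 for illustration and are not actual Freebase identifiers.

```python
# Minimal sketch: a knowledge graph as a set of (head, relation, tail)
# triples, with N_e defined as every triple in which entity e appears.
from collections import defaultdict

class KnowledgeGraph:
    def __init__(self, triples):
        self.triples = set(triples)
        self.neighbourhood = defaultdict(set)  # entity -> triples touching it
        for h, r, t in self.triples:
            self.neighbourhood[h].add((h, r, t))
            self.neighbourhood[t].add((h, r, t))

    def n_e(self, entity):
        """Return N_e: all triples that include `entity` as head or tail."""
        return self.neighbourhood[entity]

# Toy data echoing Figure 1 (names are illustrative).
kg = KnowledgeGraph([
    ("Brian Dawkins", "teams", "roster"),
    ("roster", "team roster", "Denver Broncos"),
    ("roster", "to", "2011"),
])
print(kg.n_e("roster"))  # every triple touching the intermediate entity
```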
Our approach is made up of two components, instruction generation and graph reasoning. In the former component, we utilize the query and the knowledge graph to generate instructions for guiding the reasoning. In the graph reasoning component, we adopt a GNN to reason over the knowledge graph. We also apply the teacher-student framework, following [9], to enhance the intermediate supervision signal. For each question, a subgraph of the knowledge graph is constructed by reserving the entities that are within n hops of the topic entity to simplify the reasoning process, where n denotes the maximum number of hops between the topic entity and the answer.

Instruction Generation Component

The objective of this component is to utilize the text of the query question and the information of the knowledge graph to construct a series of instructions {i^(k)}_{k=1}^n, i^(k) ∈ R^d, where i^(k) denotes the k-th instruction for guiding the reasoning over the knowledge graph. We use a BiLSTM to encode the question to obtain a contextual representation of each word, where the representation of the i-th word is denoted as h_i. We utilize the last hidden state h_l as the semantic representation of the question, i.e., q = h_l. Both the question and the aggregation result of the previous reasoning graph are used to generate instructions. The instruction generated is thereby adapted to the reasoning graph instead of being fixed during the whole reasoning process. We utilize the attention mechanism to attend to different parts of the query at each step, and construct the k-th instruction from the question representation together with e_graph^(k−1) ∈ R^d, the representation of the (k−1)-th reasoning graph, which we explain below.

Graph Aggregation. In this stage, we combine the query question and the KB entity representations to generate the whole-graph representation e_graph^(k−1). We adopt the attention mechanism to assign a weight to each entity in the reasoning graph and aggregate the entities into a graph representation, with learnable parameters W_gate ∈ R^{d×d} and b_q ∈ R^d, where "·" denotes the inner product. In this way, the model is aware of the graph structure of the intermediate entities and has the ability to generate new instructions to revise incorrect predictions.

Entity Initialization. We believe the relations involving an entity contain important semantic information about it, which can be used to initialize the entity. We set the initial embedding of each entity in the subgraph by considering the relations involving it, where (e′, r, e) denotes a triple in the subgraph, r ∈ R^d denotes the embedding vector of relation r, and W_E is a parameter to learn.

Reasoning Component

The objective of this component is to reason over the knowledge graph under the guidance of the instruction obtained from the instruction generation component. First, for each triple (e′, r, e) in the subgraph, we learn a matching vector m^(k)_{(e′,r,e)} between the triple and the current instruction i^(k), where r denotes the embedding of relation r and W_R are parameters to learn. Then for each entity e in the subgraph, we multiply the activating probability of each neighbour entity e′ by the matching vector m^(k)_{(e′,r,e)} ∈ R^d and aggregate them as the representation of information from its neighbourhood. The activating probability of entities is derived from the distribution predicted by the previous reasoning step. We concatenate the previous entity representation with its neighbourhood representation and pass the result into an MLP network to update the entity representation. Then we compute the distribution of the k-th intermediate entities, where each column of E^(k) is an updated entity embedding e^(k) ∈ R^d and W_E ∈ R^d is a parameter to learn.

Algorithm

To conclude the above process of iterative instruction generation and reasoning, we organize it as Algorithm 1. We generate the instruction from the question and reason over the knowledge graph alternately. The instruction component sends instructions to guide the reasoning and the reasoning component provides the instruction component with knowledge from the related graph. This mechanism allows the two components to communicate information with each other.

Algorithm 1: Iterative instruction generation and reasoning
1: For each entity in the subgraph, initialize the entity embedding by Eq. 7. Let n denote the number of hops to reason. Use GloVe and an LSTM to obtain the word embeddings h_j and the question embedding q for the query question.
2: for k = 1, 2, ..., n do
3:   Generate the instruction based on i^(k−1) and E^(k−1), obtaining i^(k)
4:   Reason over the knowledge graph based on i^(k), P^(k−1) and E^(k−1), obtaining P^(k) and E^(k)
5: end for
6: Based on the final entity distribution P^(n), take the entities whose activating probability exceeds a given threshold as answer entities.
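Because the equations are compressed in the extraction above, a rough sketch may help fix ideas. The following Python/PyTorch fragment is an illustrative reconstruction of one alternation of instruction generation and graph reasoning; the module names (attn, gate, update), shapes and message-passing form are simplifying assumptions of mine, not the authors' exact formulation.

```python
# Minimal sketch of one KBIGER-style step: build an instruction from the
# question conditioned on the previous graph state, then reason one hop.
# Shapes: q (d,), word states H (L, d), entity states E (N, d),
# entity distribution p (N,). All module names are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

d = 128                         # hidden size, matching the paper's setting

attn = nn.Linear(d, 1)          # scores question words for attention
gate = nn.Linear(2 * d, d)      # fuses question with the graph summary
update = nn.Linear(2 * d, d)    # MLP updating entity representations

def aggregate_graph(E, p):
    """Graph summary e_graph: entity states weighted by activation."""
    return (p.unsqueeze(1) * E).sum(dim=0)              # (d,)

def make_instruction(q, H, e_graph):
    """Attend over question words, conditioned on the previous graph."""
    query = torch.tanh(gate(torch.cat([q, e_graph])))   # (d,)
    alpha = F.softmax(attn(H * query).squeeze(-1), dim=0)
    return (alpha.unsqueeze(1) * H).sum(dim=0)          # instruction i^(k)

def reason_step(i_k, E, p, edges, rel_emb):
    """One hop: neighbour messages weighted by instruction-relation match."""
    msg = torch.zeros_like(E)
    for head, rel, tail in edges:                       # integer ids
        match = i_k * rel_emb[rel]                      # matching vector
        msg[tail] += p[head] * match                    # weighted by activation
    E_new = torch.tanh(update(torch.cat([E, msg], dim=1)))
    p_new = F.softmax(E_new @ i_k, dim=0)               # next entity distribution
    return E_new, p_new
```

Iterating aggregate_graph, make_instruction and reason_step n times reproduces the alternation of Algorithm 1, with the graph summary feeding back into the next instruction.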
Teacher-Student Framework

We adopt a teacher-student framework in our approach to enhance the supervision of the intermediate entity distribution, following [9]. Both the teacher network and the student network have the same architecture, in which the instruction generation component and reasoning component proceed iteratively. The teacher network learns the intermediate entity distributions as supervision signals to guide the student network. The loss function of the student network is designed as follows:

L_s = D_KL(P_s^(n), P*) + λ Σ_{k=1}^{n−1} D_KL(P_s^(k), P_t^(k))

where D_KL(·) denotes the Kullback-Leibler divergence, P_s^(k) and P_t^(k) denote the predicted distributions of the k-th intermediate entity by the student network and teacher network, P* denotes the golden distribution of the answer entity, and λ is a hyper-parameter to tune.

Experiments

Table 1: Statistics of the datasets. The column "entities" denotes the average number of entities in the subgraph for each question.

Dataset    train   dev    test   entities
WebQSP     2,848   250    1,639  1,429.8
CWQ        27,639  3,519  3,531  1,305.8

Datasets, Evaluation Metrics and Implementation Details

We evaluate our method on two widely used datasets, WebQuestionsSP and Complex WebQuestions. Table 1 shows the statistics of the two datasets. WebQuestionsSP (WebQSP) [22] includes 4,327 natural language questions that are answerable based on the Freebase knowledge graph [2], which contains millions of entities and triples. The answer entities of questions in WebQSP are either 1 hop or 2 hops away from the topic entity. Following [17], we prune the knowledge graph to contain the entities within 2 hops of the mentioned entity. On average, there are 1,430 entities in each subgraph. Complex WebQuestions (CWQ) [21] is expanded from WebQSP by extending question entities and adding constraints to answers. It has 34,689 natural language questions which require up to 4 hops of reasoning over the graph. Following [19], we retrieve a subgraph for each question using the PageRank algorithm. On average, there are 1,306 entities in each subgraph.

Following [20, 19, 9], we treat the multi-hop KBQA task as a ranking task. For each question, we select a set of answer entities based on the predicted distribution. We utilize two evaluation metrics, Hits@1 and F1, that are widely applied in previous work [20, 19, 18, 17]. Hits@1 measures the percentage of questions for which the predicted answer entity with maximum probability is in the set of ground-truth answer entities. F1 is computed from the set of predicted answer entities and the set of ground-truth answers. Hits@1 focuses on the entity with the maximum probability in the final predicted distribution, while F1 focuses on the complete answer set.

Before training the student network, we pre-train the teacher network on the multi-hop KBQA task. We optimize all models with the Adam optimizer, where the batch size is set to 32 and the learning rate is set to 7e-4. The reasoning steps are set to 3 for WebQSP and 4 for CWQ. The coefficient in Eq. 14 is set to 0.05. The hidden size of the LSTM and GNN is set to 128.
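Both metrics are simple to compute once a model emits a ranked entity list and a thresholded answer set; the Python sketch below captures the semantics described above (the entity names are illustrative).

```python
# Minimal sketch: Hits@1 and set-overlap F1 for multi-hop KBQA evaluation.
def hits_at_1(ranking, gold):
    """1.0 if the top-ranked entity is a gold answer, else 0.0."""
    return float(ranking[0] in gold)

def f1(predicted, gold):
    """F1 between the predicted answer set and the gold answer set."""
    tp = len(set(predicted) & set(gold))
    if tp == 0:
        return 0.0
    precision = tp / len(predicted)
    recall = tp / len(gold)
    return 2 * precision * recall / (precision + recall)

ranking = ["Denver Broncos", "Clemson Tigers football"]  # by probability
answers = {"Denver Broncos"}
print(hits_at_1(ranking, answers), f1(ranking[:1], answers))  # 1.0 1.0
```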
Baselines to Compare

KV-Mem [13] takes advantage of Key-Value Memory Networks to encode knowledge graph triples and retrieve the answer entity. GraftNet [20] uses a variation of the graph convolutional network to update the entity embeddings and predict the answer. PullNet [19] improves GraftNet by retrieving relevant documents and extracting more entities to expand the entity graph. EmbedKGQA [17] utilizes pre-trained knowledge embeddings to predict answer entities based on the topic entity and the query question. NSM [9] makes use of the teacher framework to provide the distributions of intermediate entities as supervision signals for the student network. TransferNet [18] unifies two forms of relations, label form and text form, to reason over the knowledge graph and predict the distribution of entities.

Results

The results of the different approaches are presented in Table 2, from which we can draw the following conclusions. Our approach outperforms all the existing methods on both datasets and both evaluation metrics, becoming the new state-of-the-art. It is effective to introduce information from the knowledge graph into generating instructions from the query question and to proceed iteratively between the instruction generation component and the reasoning component. Our approach outperforms the previous state-of-the-art NSM by 1.0 Hits@1 score and 1.0 F1 score on WebQSP as well as 1.4 Hits@1 score and 1.5 F1 score on CWQ. CWQ contains more complicated query questions, and our model is good at answering complex questions with more hops of reasoning. Compared to the Hits@1 metric, the F1 metric counts the prediction of the whole set of answer entities instead of the answer with maximum probability. The performance of our model on the F1 metric shows it can predict the whole answer set instead of just one answer.

Analysis

In this part, we conduct two ablation studies and evaluate the effectiveness of our approach in revising incorrect predictions of intermediate entities. Furthermore, we give an example from WebQSP to show the revision of an incorrect prediction of an intermediate entity.

Ablation Study and Error Revision

To demonstrate the effectiveness of the different components of our approach, we carry out the following ablation studies on the WebQSP and CWQ datasets.

Ablation study 1: In this model, we revise the instruction computation by removing the graph-entity component and deriving all n instructions before the reasoning, where n denotes the number of hops between the answer and the topic entity. In this model, the instructions generated from the question do not consider the information of the knowledge graph, and they are fixed during the whole reasoning process. Regardless of the structure of the knowledge graph, the instructions sent by the question are the same.

Ablation study 2: In this model, we remove the teacher network and only use the hard labels in the datasets for training.

Table 3: Results of ablation models on WebQSP and CWQ.

To explore whether the introduction of the knowledge graph into instruction generation can identify incorrect predictions of intermediate entities and revise them, we annotated the intermediate entities of the multi-hop cases where the baseline NSM fails to answer the question correctly but we get the right answer. By checking the reasoning process, we classify the cases into 2 groups and compute the proportions: (1) cases where we fail to predict the intermediate entity at first but revise the error in the following steps and answer the question correctly; (2) cases where we answer the question correctly by predicting all the intermediate entities correctly. As shown in Figure 3, among the cases where our model predicts correctly and the baseline NSM fails to answer, 58% are cases where we revise an incorrect prediction of an intermediate entity, which shows the efficiency of introducing the knowledge graph into instruction generation in an iterative manner in decreasing error accumulation along the reasoning path.
Case Study

In this section, we take one case from WebQSP to show the effectiveness of our method in revising the incorrect prediction of an intermediate entity. In Figure 4, for the question "Who plays London Tipton in Suite Life on Deck?", the purple path denotes the right reasoning path: "Suite Life on Deck (TV series)" -[series cast]-> "cast" -[actor starring]-> "Brenda Song (actor)", where the entity "cast" satisfies the character constraint "London Tipton". In the first step, both our model and NSM predict the wrong intermediate entity "Suite Life of Zack and Cody" because of the high similarity between "Suite Life on Deck" and "Suite Life of Zack and Cody". NSM fails to revise the wrong prediction of the intermediate entity for lack of graph information and derives the wrong answer along the blue path. By contrast, utilizing the knowledge graph information that "Suite Life of Zack and Cody" does not connect to any entity satisfying the character constraint "London Tipton", our model revises the incorrect prediction of the intermediate entity and obtains the correct answer "Brenda Song". At the first step, the graph aggregation in our model attends to content-related entities such as "Suite Life of Zack and Cody", which are one hop away from the topic entity. At the second step, the attention weights of "Brenda Song" and "London Tipton" rise from 0.08 to 0.23, indicating that the graph structure around the answer entity is focused on by our model. This case shows the effectiveness of introducing the knowledge graph into instruction generation in revising the incorrect prediction of an intermediate entity.

Conclusion and Future Work

In this paper, we propose KBIGER, a novel and efficient approach with a framework of iterative instruction generation and reasoning over the graph. We introduce the knowledge graph structure into instruction generation from the query question, and the model can revise incorrect predictions of intermediate entities within the reasoning path. We conduct experiments on two benchmark datasets of this field and our approach outperforms all the existing methods. In the future, we will incorporate knowledge graph embeddings into our framework to fuse the information from the query question and the knowledge graph in a better manner.
Effect of Skin Layer on Electric Impedance Scanning Imaging

In the study of Electric Impedance Scanning Imaging (EISI), some scholars believe that the low electrical impedance of the skin layer reduces the detection sensitivity of the imaging system. However, in previous numerical model analyses, there is little work analyzing the influence of the skin layer. Based on the actual size and electrical parameters of Chinese female breasts and the structure of the detection probe, the static electric field equation was solved by the finite element method. The results show that the skin layer not only reduces the current detected by the probe, but also significantly reduces the sensitivity of the probe. The smaller the conductivity of the skin layer, the more pronounced the decrease in sensitivity. In practice, if the conductivity of the skin layer can be improved by some means, the sensitivity of the probe can be increased. Furthermore, if the thickness and conductivity parameters of the skin layer can be accurately obtained, and the influence of the skin layer on the overall EISI image thereby eliminated, the imaging quality and detection sensitivity of the system should be significantly improved.

Introduction

Breast cancer is one of the malignant tumors to which women are most susceptible. Due to the significant differences in electrical admittance between cancerous breast tissue and normal tissue, many corresponding breast cancer detection or diagnosis techniques have been developed, such as electrical impedance tomography, breast surface potential diagnosis, four-electrode impedance measurement and Electric Impedance Scanning Imaging (EISI). EISI inspection has been widely used in breast cancer detection and diagnosis due to its simple operation, comfort and low cost. In an EISI examination, a constant low-frequency voltage is applied to a hand-held stainless-steel cylinder. The scan probe, with a planar array of electrodes, is placed on the breast and all electrodes in the array are kept at virtual ground. Due to the good electrical conductivity of the human pectoralis muscle, it can be regarded as an equipotential surface, and an approximately parallel electric field is established between the pectoralis muscle and the probe. When there is a cancerous lesion in the breast tissue, the electric field, originally distributed in parallel, is disturbed due to the differences in electrical conductivity, and so is the current distribution detected by the probe [1].

Seo et al. modeled EISI as a semi-infinite space. Through the boundary element method, the imaging characteristics of a cancerous lesion in the breast were simulated, and it was preliminarily pointed out that the current disturbance is closely related to the depth of the cancerous lesion [2]. Since the model does not involve the actual size of the breast or the actual structure of the probe, it cannot reflect the boundary effects of the imaging process. Scholz et al. modeled the imaging process as a single volume conductor problem and conducted a preliminary simulation analysis based on the finite element method [3]. However, Scholz's imaging model has the following shortcomings: first, the breast is treated as a single tissue, and the skin layer is not considered; second, the equipotential surface of the pectoralis major muscle is regarded as the same size as the surface of the probe, which reduces boundary effects to a certain extent. In fact, the equipotential surface of the pectoralis muscle is the same size as the breast, and is larger than the surface of the probe.
Third, the surface of the probe is idealized as a zero-potential surface, and the boundary conditions at the electrode spacing are not considered. This paper constructs a breast imaging model that is closer to the actual EISI configuration. The current distribution on the breast surface was obtained using COMSOL software, and the effect of the skin layer on imaging is analyzed.

Methodology and modelling

The electrical parameter of human tissues can be expressed in the electrical admittance form shown in equation (1):

σ* = σ + jωε₀ε    (1)

where σ is the conductivity, ε₀ is the dielectric constant of vacuum, with a value of 8.85×10⁻¹² F/m, ε is the relative dielectric constant, ω = 2πf is the angular frequency of the excitation signal, and f is the frequency of the excitation signal.

The EISI detection process applies a low-frequency sinusoidal excitation signal (excitation frequency < 20 kHz) and then detects the current on the surface of the breast. Considering that there is no current source inside the breast, there is no current accumulation effect, so the potential distribution inside the breast satisfies the typical Laplace equation (2) [4, 5]:

∇·(σ∇φ) = 0 in Ω    (2)

where Ω represents the breast region to be detected and φ is the potential inside the breast. During an EISI inspection, a sinusoidal excitation voltage is introduced into the pectoralis major muscle (which has good electrical conductivity) by the electrode rod held by the patient; the current then flows out through the breast tissue via the probe electrodes on the surface of the breast (all electrodes are virtually grounded). Therefore, in addition to satisfying equation (2), the potential must also meet the Dirichlet boundary condition shown in formula (3). The Dirichlet boundary Γ₁ includes the surface of the probe (including the surfaces of the measuring electrodes and the guard electrode) and the plane where the pectoralis muscle is located; the boundary conditions are U = 0 and U = 2.0 V (the EISI excitation voltage amplitude), respectively:

φ = U on Γ₁    (3)

With the exception of boundary Γ₁, the other surfaces are collectively referred to as the Neumann boundary Γ₂. The outer material in contact with boundary Γ₂ is air. Since the electrical conductivity of skin is much greater than that of air, there is no boundary current at Γ₂, and the electric field described by equation (2) must satisfy the Neumann boundary condition (i.e., the electrical isolation condition) shown in equation (4):

∂φ/∂n = 0 on Γ₂    (4)

By solving equations (2)-(4), the potential distribution φ of EISI can be obtained. On the surface of the probe, the electrode can be considered an ideal conductor and therefore an equipotential body, so the tangential component of its electric field is zero. Therefore, on the contact surface between the probe and the breast, the electric field strength in the tissue has only a normal component [4]. The current density distribution on the electrode surface is calculated by means of equation (5), and the current of a measuring electrode is obtained by integrating over the electrode surface S, as in equation (6):

J = −σ ∂φ/∂n    (5)

I = ∫_S J dS    (6)

When the excitation frequency is less than 100 kHz, the conduction current in human tissue is much larger than the displacement current, so the dielectric effect can be ignored. EISI's measuring frequency is in the range of 200 Hz-20 kHz, so only the conductivity is considered in the subsequent simulations.
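As a quick numerical check of equation (1), and of the claim that conduction dominates in this band, the following Python sketch evaluates the complex admittance for assumed tissue values; the fat conductivity comes from this paper, while the relative permittivity is an illustrative assumption, since the paper does not specify one.

```python
# Minimal sketch: complex admittance sigma* = sigma + j*omega*eps0*eps_r
# at the top of the EISI band. eps_r = 1e4 is an assumed placeholder value.
import math

eps0 = 8.85e-12     # vacuum permittivity, F/m (as given in equation (1))
sigma = 0.04        # conductivity of fatty breast tissue, S/m
eps_r = 1e4         # assumed relative permittivity at low frequency
f = 20e3            # excitation frequency, Hz
omega = 2 * math.pi * f

sigma_star = complex(sigma, omega * eps0 * eps_r)
print(f"sigma* = {sigma_star:.4f} S/m")
print(f"conduction/displacement ratio = {sigma / sigma_star.imag:.1f}")
```

With these assumed values the conduction term is a few times larger than the displacement term, consistent with neglecting the dielectric effect below 100 kHz.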
The female breast consists of breast tissue and surface skin. Breast tissue consists of glands and fat. Young women have dense breasts in which glands dominate; as age increases, the glands gradually degenerate and fat becomes the main component of the breast tissue. The size distribution of women's breasts in China is 100-170 mm [6], and the breast thickness in the flat posture is 30-60 mm [7]. The average thickness of breast skin is 5 mm, and its conductivity is 0.01 S/m. During EISI, the breast is pressed and smoothed by the physician before examination, so it can be approximated as a square column with a side length of 100-170 mm and a thickness of 30-60 mm. The column has a two-layer structure whose surface is the skin layer.

Fig. 1 shows a computational example. Fig. 1(a) is a breast EISI measurement model with a size of 100 mm; within the breast tissue there is a tumor with a radius of 15 mm at a depth of 20 mm. The tumor conductivity is 0.7 S/m, the mammary tissue is fat with a conductivity of 0.04 S/m, and the skin layer is 5 mm thick with a conductivity of 0.01 S/m. Fig. 1(b) is the mesh model; the measuring electrodes and the guard electrode are manufactured by a PCB copper-coating process with a thickness of 100 μm. Fig. 1(c) shows the distribution of the electric field lines. Solving equations (2)-(6) gives the current distribution over the measuring electrodes on the probe surface, shown in Fig. 1(d). To reduce the current disturbance caused by the breast being larger than the probe, a guard electrode is placed around the measuring electrodes (square, side length 3 mm, spacing 1 mm). This reduces the interference of the boundary effect on the measuring electrodes to some extent and helps to highlight the current disturbance introduced by the conductivity difference between the tumor lesion and the surrounding normal tissue.

Effect of presence or absence of skin layer on EISI

Using the model and simulation method of the previous section, the effect of the skin layer on EISI was analyzed. The breast size was 100 mm, the lesion depth was 20 mm, the lesion radius was 15 mm, and the lesion conductivity was 0.7 S/m. On this basis, four situations were simulated. Fig. 2(a) is the case without a skin layer, i.e., the breast tissue is 50 mm thick; the tissue is fat with a conductivity of 0.04 S/m. Fig. 2(b) differs from Fig. 2(a) only in that a skin layer is included, 5 mm thick with a conductivity of 0.01 S/m. In Fig. 2(c) there is no skin layer, but the breast tissue is a mixture of fat and gland with a conductivity of 0.3 S/m. Similarly, Fig. 2(d) adds a 5 mm skin layer with a conductivity of 0.01 S/m to the case of Fig. 2(c).

To evaluate objectively the degree of current disturbance caused by a cancerous lesion, this paper defines a breast cancer significance measure (BCSM): the ratio between the average detected current of the four electrodes at the center of the probe and the average detected current of the 28 electrodes along the four edges of the probe. If the BCSM is less than 1, the disturbance produced by the cancerous lesion is submerged in the boundary effect and is difficult to recognize. For fatty breast tissue, the BCSM was 1.6939 without the skin layer; with the skin layer it dropped markedly to 1.2244. If the breast tissue is dominated by glands, its conductivity increases significantly and the conductivity ratio between the lesion and the breast tissue decreases, so the disturbance is further reduced: the BCSM is 1.1576 without skin, and only 1.0285 with skin, making the current disturbance caused by the cancerous lesion in Fig. 2(d) difficult to detect.

Fig. 1 (caption): Conductivity of breast tissue is 0.3 S/m, radius of the cancerous lesion is 15 mm, depth of the cancerous lesion is 20 mm, conductivity of the cancerous lesion is 0.7 S/m, and the thickness of the skin layer is 5 mm.
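The BCSM defined above is straightforward to compute from the per-electrode currents. The Python sketch below assumes an 8 × 8 measuring-electrode array, which is consistent with the 4 central and 28 edge electrodes mentioned in the text but is not stated explicitly in the paper.

import numpy as np

def bcsm(currents):
    """Breast cancer significance measure: mean current of the 4 central
    electrodes over the mean current of the 28 edge electrodes. An 8 x 8
    electrode array is assumed here (28 perimeter electrodes plus a 2 x 2
    central block); the paper gives only the 4/28 electrode counts."""
    c = np.asarray(currents, dtype=float).reshape(8, 8)
    center = c[3:5, 3:5].mean()
    edge = np.concatenate([c[0, :], c[-1, :], c[1:-1, 0], c[1:-1, -1]]).mean()
    return center / edge

# Synthetic example: uniform background current with a raised central response.
demo = np.ones((8, 8))
demo[3:5, 3:5] = 1.7
print(f"BCSM = {bcsm(demo):.4f}")  # > 1: lesion disturbance exceeds boundary effect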
The conductivity of the skin layer was set to 0.01, 0.02, 0.03, and 0.04 S/m. Fig. 3 shows the current distribution detected by the probe in the four cases, together with the corresponding BCSM values. As can be seen from Fig. 3, skin conductivity directly affects detection sensitivity: the higher the conductivity of the skin layer, the higher the current detected by the probe and the higher the BCSM value of the image.

Conclusion

In EISI, the skin layer has a strong influence on imaging. Based on the results and discussion presented above, the following conclusions are drawn. (1) Under otherwise identical conditions, the sensitivity of the probe to a cancerous lesion is greatly reduced because the conductivity of the skin layer is very low; the presence of the skin layer greatly reduces the current detected by the probe and thus the degree of current disturbance caused by the cancerous lesion. (2) The conductivity of the skin layer directly affects the sensitivity of EISI: the higher the skin conductivity, the higher the detection sensitivity, and conversely the lower. (3) No significant correlation was found between skin thickness and sensitivity. In practice, if the skin is moistened with physiological saline before detection, the conductivity of the skin layer is improved, which helps to raise the sensitivity of EISI. If the conductivity and thickness of the skin layer can be measured accurately, the effect of the skin layer can be removed from the impedance image and the sensitivity of the system improved. In addition, Fig. 1(c) shows that the boundary effect of the probe leads to larger currents at the edge electrodes, which further masks the current disturbance originally caused by the cancerous lesion and lowers the detection sensitivity of EISI. As a next step, the relationship between the detection sensitivity of EISI and parameters such as the width of the guard electrode and the spacing between the guard electrode and the outermost measuring electrodes should be analyzed further, to improve the system through optimized design.
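Conclusions (1) and (2) can be illustrated with a much simpler model than the FEM simulation: treating a unit-area column of skin and breast tissue as two resistive layers in series. The sketch below is only a 1-D approximation under an assumed 5 mm / 45 mm split of the 50 mm total thickness, not the paper's finite element model, but it reproduces the trend that lower skin conductivity reduces the detected current.

def normalized_current(sigma_skin, t_skin, sigma_tissue, t_tissue):
    """Current density through a unit-area column of skin and breast tissue
    in series, per volt of excitation (1-D approximation, A/m^2 per V)."""
    resistance = t_skin / sigma_skin + t_tissue / sigma_tissue  # ohm * m^2
    return 1.0 / resistance

# Skin conductivities from Fig. 3; 5 mm skin over 45 mm of fat (assumed split).
for s_skin in (0.01, 0.02, 0.03, 0.04):
    j = normalized_current(s_skin, 0.005, 0.04, 0.045)
    print(f"skin sigma = {s_skin:.2f} S/m -> J = {j:.3f} A/m^2 per volt")

As the skin conductivity rises from 0.01 to 0.04 S/m, the series resistance of the column falls and the current per volt increases, mirroring the BCSM trend in Fig. 3.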
Deletion of the Murine Cytochrome P450 Cyp2j Locus by Fused BAC-Mediated Recombination Identifies a Role for Cyp2j in the Pulmonary Vascular Response to Hypoxia

Epoxyeicosatrienoic acids (EETs) confer vasoactive and cardioprotective functions. Genetic analysis of the contributions of these short-lived mediators to pathophysiology has been confounded to date by the allelic expansion in rodents of the portion of the genome syntenic to human CYP2J2, a gene encoding one of the principal cytochrome P450 epoxygenases responsible for the formation of EETs in humans. Mice have eight potentially functional genes that could direct the synthesis of epoxygenases with properties similar to those of CYP2J2. As an initial step towards understanding the role of the murine Cyp2j locus, we have created mice bearing a 626-kb deletion spanning the entire region syntenic to CYP2J2, using a combination of homologous and site-directed recombination strategies. A mouse strain in which the locus deletion was complemented by transgenic delivery of BAC sequences encoding human CYP2J2 was also created. Systemic and pulmonary hemodynamic measurements did not differ in wild-type, null, and complemented mice at baseline. However, hypoxic pulmonary vasoconstriction (HPV) during left mainstem bronchus occlusion was impaired and associated with reduced systemic oxygenation in null mice, but not in null mice bearing the human transgene. Administration of an epoxygenase inhibitor to wild-type mice also impaired HPV. These findings demonstrate that Cyp2j gene products regulate the pulmonary vascular response to hypoxia.

Introduction

Human cytochrome P450 2J2 (CYP2J2) is abundant in cardiovascular tissues and pulmonary endothelium [1] and metabolizes arachidonic acid (AA) to epoxyeicosatrienoic acids (EETs) and hydroxyeicosatetraenoic acids (HETEs), short-lived mediators with potent vascular protective properties [2][3][4]. The mouse chromosomal locus syntenic to human CYP2J2 contains 10 genes (8 presumed genes and 2 pseudogenes) spanning 626 kb on chromosome 4. Gene clusters in the mouse that are syntenic to a single human gene are not uncommon, but their study is rarely straightforward. Mutant mice with short gene deletions can be generated through conventional gene targeting strategies [5], but the deleted region rarely exceeds ten kilobases in most applications of the existing technology. Bacterial artificial chromosomes (BACs), which can be up to ~250 kb long, have been used for gene targeting [6][7][8], but even in these cases the length of the BAC sets a formal upper limit on the size of the deletion. Deletion of a large DNA region has been accomplished by sequential introduction of loxP sites followed by expression of Cre recombinase in embryonic stem cells [9][10][11]. However, it is difficult to distinguish loxP sites integrated into the same autosome from those integrated into separate autosomes, and Cre-mediated recombination has relatively low efficiency when the distance between loxP sites is great [9][10][11]. Here we describe a method of joining BACs with prokaryotic integrases to create a deletion replica in E. coli that is subsequently used to target the murine locus. CYP2J2 products elicit a variety of effects including fibrinolysis, vasodilation, and inhibition of inflammation [12][13][14]. However, a definitive identification of the contributions of Cyp2j genes in the cardiovascular system has remained challenging due to the expansion of the locus in mice.
Murine Cyp2j isoforms may act as epoxygenases and hydroxylases to metabolize AA into EETs and HETEs [15]. It has been shown that the pulmonary vasoconstrictor response to alveolar hypoxia is ablated in mice deficient in cytosolic phospholipase A2α (cPLA2α), an enzyme that lies upstream of Cyp2j and is responsible for liberating AA from esterified phospholipids in the cell membrane [16]. Four pathways downstream of cPLA2α mediate the metabolism of AA: the cyclooxygenase (COX), lipoxygenase (LO), epoxygenase, and ω-hydroxylase pathways. It has been shown that inhibition of the COX or 5-LO pathways does not impair hypoxic pulmonary vasoconstriction (HPV) [17,18]. Previous studies have demonstrated that products of CYP epoxygenases and hydroxylases can produce pulmonary vasoconstriction and vasodilation, respectively. However, it is unknown which cytochrome P450 is the major contributor to the regulation of HPV, a mechanism unique to the pulmonary vasculature that diverts blood flow away from poorly ventilated lung regions, thereby preserving the oxygenation of systemic blood [4,16,19]. The pulmonary vasoconstrictor response to alveolar hypoxia is crucial for maintaining arterial oxygenation during acute respiratory failure and lung injury. Owing to the diversity of potential metabolites and the challenges associated with their measurement, which stem from their propensity for rapid metabolism and their multiple functionally relevant isomeric forms, it has been difficult to identify precisely which gene family and which eicosanoids are the most relevant modulators of HPV. In this study, we describe a strategy to engineer large DNA fragments in bacteria and mammalian cells. We performed large-scale ablation and human allelic complementation of the Cyp2j locus in mice using E. coli genetic techniques and bacterial artificial chromosome technology. Phenotypic characterization of the resulting Cyp2j-null and complemented mice showed that disruption of the mouse Cyp2j genes did not alter pulmonary or systemic hemodynamic parameters at baseline. However, the increase in left lung pulmonary vascular resistance induced by selective left lung hypoxia was impaired in Cyp2j-null mice, but not in complemented mice, demonstrating that the Cyp2j genes contribute to hypoxic pulmonary vasoconstriction.

Fusion of BACs using TP901 integrase in E. coli

Because no single BAC has been reported to span the entire mouse Cyp2j locus, two BACs containing the termini of the Cyp2j gene cluster were separately modified to permit joining by site-specific recombination in E. coli (Figure 1A, 1B). Homologous arms were amplified from the BAC clones and subcloned into one targeting vector containing a kanamycin resistance element and the TP901-1 integrase attB site, and into another targeting vector containing an ampicillin resistance element and a TP901-1 attP site. Following homologous recombination, the selectable markers and integrase sites were integrated into the two BACs, forming MT5′BAC and MT3′BAC (Figure 1B), as identified using four PCR amplifications (primers P1 to P8; Figure S1A). The PCR products were sequenced to confirm that the correct recombination products had formed. The MT5′BAC and MT3′BAC containing the termini of the mouse Cyp2j locus were then fused in E. coli by site-specific recombination.
A plasmid expressing TP901 integrase under the control of the araBAD promoter was introduced into a bacterial strain harboring the BAC containing the kanamycin resistance element and the TP901 attB site. TP901 expression was induced prior to the creation of electrocompetent cells, and the modified BAC bearing ampicillin resistance was introduced. Following selection for resistance to both kanamycin and ampicillin, the fused BAC resulting from integrase action was identified by PCR. Two pairs of primers (P9 to P12, shown in Figure 1B) were used to identify the integration events (PCR results shown in Figure S1B). PCR products were sequenced to confirm that the expected TP901 attL and attR sites had formed (representative sequences shown in Figure S1B). The correctly fused BAC (FS BAC) was digested with SpeI and BamHI to confirm that no unwanted rearrangements had occurred (Figure S1C).

Excision of the mouse Cyp2j gene cluster using a deletion replica in mouse ES cells

The fused BAC was electroporated into mouse ES cells, and geneticin-resistant clones were screened by multiplex ligation-dependent probe amplification (MLPA) using five wild-type probes (5′wt, 3′wt, wtM1, wtM2, and wtM3) located within the region targeted for deletion of the Cyp2j gene locus, and three mutant probes (5′m, 3′m, and neo) located within the engineered Cyp2j gene locus and recognizing vector sequences and the selectable marker (Figure 2A). Two internal control probes, HP1 and ITGB3, which detect genes located on chromosomes 8 and 11, respectively, were used to normalize the signal intensities of the Cyp2j wild-type and mutant probes. The desired clones showed the expected pattern, in which the wild-type signal intensity (5′wt, 3′wt, wtM1, wtM2, and wtM3) was decreased approximately by half, indicating disruption of one allele (Figure 2B, C). The areas of the mutant signal intensities (5′m, 3′m, and neo) reflect the integrated copy numbers; clones showing the lowest mutant signal intensity among the screened clones were considered likely to carry single-copy integrations (representative data shown in Figure 2B). The selectable markers and vector sequences were removed from two ES clones, 1C04 and 4G02, using R4 integrase. A plasmid expressing the integrase under the control of the chicken actin-CMV hybrid promoter was transfected into the two ES cell lines. The action of R4 integrase excised the sequences located between the R4 attB and attP sites, as illustrated in Figure S2. Recombination between the attB and attP sites gives rise to a chromosomal R4 attL site (sequences shown in Figure S2) and a circularized attR remnant that has no mechanism for persistence during cell division. Deletion events were initially identified by PCR using primers P13 and P14. Thirty of 43 clones for 1C04 and 29 of 37 clones for 4G02 showed the expected 561 bp PCR fragment (data not shown). Successful R4 integrase-mediated recombination was confirmed by sequencing the PCR products to detect the presence of the R4 attL site and by MLPA to confirm loss of the geneticin resistance allele (data not shown).
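The MLPA screen described above reduces to a simple copy-number ratio: each probe's peak area is normalized to the chromosome 8/11 control probes and then compared with a wild-type reference sample, with a ratio near 0.5 indicating loss of one allele. The short Python sketch below illustrates this calculation; the probe names follow the paper, but the peak areas and the exact normalization details are illustrative assumptions rather than the authors' analysis pipeline.

import numpy as np

def mlpa_ratios(sample_peaks, control_peaks, ref_probes=("HP1", "ITGB3")):
    """Normalize MLPA peak areas to the control probes, then express each
    probe as a ratio to a wild-type reference sample. A ratio near 0.5 for
    the wild-type Cyp2j probes suggests loss of one allele; near 0 suggests
    homozygous deletion."""
    norm = lambda peaks: {p: a / np.mean([peaks[r] for r in ref_probes])
                          for p, a in peaks.items()}
    s, c = norm(sample_peaks), norm(control_peaks)
    return {p: s[p] / c[p] for p in sample_peaks if p not in ref_probes}

sample  = {"5'wt": 48, "3'wt": 51, "wtM1": 47, "HP1": 100, "ITGB3": 98}
control = {"5'wt": 99, "3'wt": 101, "wtM1": 100, "HP1": 100, "ITGB3": 102}
print(mlpa_ratios(sample, control))  # Cyp2j probes near 0.5 -> heterozygous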
Author Summary

In mice and humans, the CYP2J class of cytochrome P450 epoxygenases metabolizes arachidonic acid (AA) to epoxyeicosatrienoic acids (EETs), short-lived mediators with effects on both the pulmonary and systemic vasculature. Genetic dissection of CYP2J function to date has been complicated by allelic expansion in the rodent genome. In this study, the mouse chromosomal locus syntenic to human CYP2J2, containing eight presumed genes and two pseudogenes, was deleted via generation of a recombinant template created by homologous and site-specific recombination steps that joined two precursor bacterial artificial chromosomes (BACs). The Cyp2j-null mice were subsequently complemented by transgenic delivery of BAC sequences encoding human CYP2J2. Hypoxic pulmonary vasoconstriction (HPV) and systemic oxygenation during regional alveolar hypoxia were unexpectedly found to be impaired in null mice, but not in null mice bearing the transgenic human allele, suggesting that Cyp2j products contribute to the pulmonary vascular response to hypoxia.

Generation of Cyp2j-null mice using mouse Cyp2j targeted ES cells

Four clones of mouse Cyp2j targeted ES cells were microinjected into C57BL/6 blastocysts. Four chimeric mice were born from the 4 clones, of which one, from B6-white ES cells, showed germ line transmission: among 20 litters, 20 pups from 148 offspring (13%) were derived from ES cells. Heterozygous mice were mated to generate homozygous mutant mice (Cyp2j−/−) and wild-type littermates (Cyp2j+/+). Genotyping by MLPA showed the absence of all internal regions located within the deleted region in homozygotes (Figure S3A). Mouse genotypes were also tested by PCR, as shown in Figure S3B.

Generation of human CYP2J2 (hCYP2J2) transgenic mice using a modified BAC

A BAC clone was selected to provide the human CYP2J2 gene. A schematic diagram shows the procedure used to modify the BAC (Figure 3A). The E. coli recombination events were identified by four PCR amplifications using primers P15 to P22, shown in Figure 3A (Figure S4A). The PCR products were sequenced to confirm the anticipated recombination events. The correct recombinants were digested with EcoRI, NcoI, and HindIII, separately, to confirm the identity of the sequence (Figure S4B). The recombinant BAC DNA was used to produce hCYP2J2 transgenic mice (C57BL/6 background). Four founders were identified from 35 offspring by DNA blotting and PCR (Figure 3B). Three founders supported germ line transmission. These mice were backcrossed with Cyp2j−/− mice for more than 6 generations to derive Cyp2j−/− mice on the B6-white background carrying a transgene specifying human CYP2J2 (Cyp2j−/−-Tg).

Cyp2j gene expression in Cyp2j−/− and Cyp2j−/−-Tg mice

Human CYP2J2 and the eight mouse Cyp2js share 66-83% similarity in protein sequence and 55-88% identity in mRNA sequence (Figure S5). RT-MLPA [20], a technology that allows detection and quantitation of nucleic acids differing by a single nucleotide, as well as measurement of the expression of multiple genes in a single tube, was used to examine the expression of the eight Cyp2j genes in wild-type mice. Expression of 3 internal control genes, Tbp, Hprt, and Gapdh, was used to normalize the data. Each Cyp2j gene has a distinct expression pattern, as shown in Figure 4A. Kidney, liver, and gastrointestinal tissues are the major sites of Cyp2j isoform gene expression. Expression of six Cyp2j genes (2j5, 2j6, 2j8, 2j11, and 2j13) is detectable in liver and kidney. Cyp2j7 is expressed at low levels in the liver. Cyp2j13 is highly expressed in the kidney. Cyp2j9 shows expression in small intestine, liver, and brain. Cyp2j6 is broadly expressed: small intestine > stomach > thyroid > liver > large intestine > kidney > brain.

Figure 1 (caption): Construction strategy. The WT 5′BAC and WT 3′BAC (locations shown in A) were modified using the 5′ and 3′ targeting vectors, respectively, through homologous recombination in E. coli.
The resultant MT5′BAC and MT3′BAC lack sequences between the recombination arms (HR) and have acquired selectable markers (neomycin resistance, thymidine kinase sensitivity, and blasticidin resistance) and integrase sites (attB1 and attP1 of TP901, attB2 and attP2 of R4). The fused BAC (FS BAC) was generated through site-specific recombination between the attB3 and attP3 sites carried by the MT5′BAC and MT3′BAC, respectively, mediated by TP901 integrase. Ampr, ampicillin resistance; Bsrr, blasticidin resistance; TK, herpes simplex thymidine kinase; neor, kanamycin/geneticin resistance. doi:10.1371/journal.pgen.1003950.g001

Only low levels of expression of Cyp2j5, 2j6, 2j9, 2j11, and 2j13 were detected in lung. RT-MLPA was also applied to RNA prepared from the liver and kidney of Cyp2j−/− mice; no transcripts from Cyp2j genes were detected (Figure 4B). To evaluate potential species differences in lineage-dependent expression, CYP2J2 gene expression was examined by quantitative reverse-transcriptase PCR (RT-PCR) using commercial pooled human cDNA preparations. The human gene is highly expressed in liver, and the abundance of transcripts in heart exceeds that in lung (Figure 4C). In contrast, in RNA prepared from Cyp2j−/−-Tg mice, CYP2J2 mRNA levels were substantially higher in lung than in heart, an observation made in Cyp2j−/−-Tg but not in Cyp2j+/+-Tg mice (Figure 4D) and verified in mice derived from two independent founders (data not shown).

Effects of Cyp2j deletion on hemodynamic parameters

To investigate whether Cyp2j deficiency affects systemic hemodynamic measurements, blood pressure (BP) and heart rate (HR) were measured in conscious male and female Cyp2j+/+ and Cyp2j−/− mice by tail cuff plethysmography. Systemic blood pressure and HR did not differ between genotypes (Table 1). Invasive hemodynamic measurements in anesthetized Cyp2j+/+ and Cyp2j−/− mice of both sexes also did not reveal differences in HR, BP, cardiac output, systemic vascular resistance, or left ventricular systolic or diastolic function (Table 2).

Contribution of Cyp2j to hypoxic pulmonary vasoconstriction

To assess the contribution of Cyp2j to HPV, we measured changes in the left pulmonary vascular resistance (LPVR) in response to left mainstem bronchial occlusion (LMBO) in Cyp2j+/+ and Cyp2j−/− mice. We used dynamic measurements of pulmonary pressure and flow in the left pulmonary artery during transient inferior vena cava occlusion to estimate the LPVR [16]. Before LMBO, LPVR was similar in Cyp2j+/+ (80 ± 5 mmHg·ml^-1·min·g) and Cyp2j−/− mice (88 ± 6 mmHg·ml^-1·min·g). In Cyp2j+/+ mice, LMBO decreased the left pulmonary arterial blood flow (QLPA) without changing the pulmonary arterial pressure (PAP), doubling the LPVR (Figure 5A, Table S3). In contrast, LMBO did not change LPVR in Cyp2j−/− mice (Figure 5A, Table S3), consistent with an absence of HPV. To estimate the impact of impaired HPV on systemic arterial oxygenation, we measured arterial blood gas tensions 5 minutes after LMBO, while the right lung was ventilated at an inspired oxygen fraction (FIO2) of 1. The arterial oxygen partial pressure (PaO2) was higher in Cyp2j+/+ than in Cyp2j−/− mice (247 ± 36 vs. 153 ± 9 mmHg, respectively; P < 0.05; Table S3). However, there was no difference in arterial blood pH, the arterial partial pressure of carbon dioxide, or the concentration of HCO3− (data not shown).
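The slope-based LPVR estimate used above (and detailed in the Methods) is, computationally, a linear fit of PAP against QLPA over the samples recorded during the occlusion. A minimal Python sketch with made-up traces, rather than the study's measured data, might look like:

import numpy as np

def lpvr_from_occlusion(pap, q_lpa):
    """Estimate left pulmonary vascular resistance as the slope of the
    PAP vs. Q_LPA relationship recorded during transient IVC occlusion.
    Inputs are paired samples of pulmonary arterial pressure (mmHg) and
    left pulmonary arterial flow."""
    slope, _intercept = np.polyfit(np.asarray(q_lpa), np.asarray(pap), 1)
    return slope

# Illustrative traces only: flow falls as the IVC is transiently occluded.
q   = [1.00, 0.85, 0.70, 0.60, 0.50]   # ml/min/g, decreasing during occlusion
pap = [14.0, 12.8, 11.6, 10.8, 10.0]   # mmHg
print(f"LPVR ~ {lpvr_from_occlusion(pap, q):.1f} mmHg*ml^-1*min*g")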
Systemic oxygenation during LMBO was further assessed using an intra-arterial PaO2 probe in a subset of Cyp2j+/+ and Cyp2j−/− mice. No difference in PaO2 before LMBO was detected between Cyp2j+/+ and Cyp2j−/− mice (Figure 5B). After LMBO, PaO2 decreased in both genotypes to a new steady state within 2 min; however, Cyp2j−/− mice had a lower PaO2 than Cyp2j+/+ mice during LMBO (Figure 5B). These observations confirm the presence of increased intrapulmonary shunting during LMBO in Cyp2j−/− mice and are consistent with absent HPV.

Human CYP2J2 restores HPV in Cyp2j−/− mice

Since CYP2J2 is the only human gene homologous or paralogous to the multiple murine Cyp2j genes, we investigated whether complementation with CYP2J2 could restore HPV in Cyp2j−/− mice. At baseline, hemodynamic parameters did not differ between Cyp2j+/+ and Cyp2j−/−-Tg mice (Table S3). LMBO increased LPVR in Cyp2j+/+ and Cyp2j−/−-Tg mice to a similar extent (Figure 5C), indicating that HPV is preserved in Cyp2j−/−-Tg mice. Furthermore, during LMBO, the arterial oxygen partial pressure (PaO2) did not differ between Cyp2j−/−-Tg and Cyp2j+/+ mice (Table S3), consistent with preserved HPV. These results suggest that the presence of the single human CYP2J isoform in mice in which the entire Cyp2j locus is deleted is sufficient to permit pulmonary vasoconstriction.

Inhibition of cytochrome P450 epoxygenase activity attenuates HPV

To exclude the possibility that lifelong Cyp2j deficiency might lead to unanticipated compensatory mechanisms that could impair HPV in mice, we assessed HPV in Cyp2j+/+ mice treated with the epoxygenase inhibitor N-methylsulfonyl-6-(2-propargyloxyphenyl)hexanamide (MS-PPOH). The LMBO-induced increase in LPVR was markedly attenuated in a dose-dependent manner when mice were studied 90 minutes after treatment with MS-PPOH (Figure 5D). The PaO2 during LMBO was lower in MS-PPOH-treated than in vehicle-treated mice. These results further confirm that cytochrome P450 epoxygenase enzymatic activity contributes to HPV in mice.

L-NAME restores HPV in Cyp2j−/− mice

To examine the possibility that HPV is impaired in Cyp2j−/− mice because of an alteration in the balance of vasoconstrictors and vasodilators, we studied the effects on HPV of enhancing pulmonary vascular tone by inhibiting nitric oxide (NO) production in Cyp2j−/− mice. Thirty minutes after administration of NG-nitro-L-arginine methyl ester (L-NAME, an inhibitor of NO synthases), and before LMBO, hemodynamic parameters did not differ between Cyp2j+/+ and Cyp2j−/− mice (Table S3). During LMBO, inhibition of NO synthesis with L-NAME augmented the increase in LPVR in Cyp2j+/+ mice and restored the ability of LMBO to increase LPVR in Cyp2j−/− mice (Figure 5E, Table S3). These findings demonstrate that Cyp2j−/− mice retain the mechanisms necessary for the pulmonary vascular response to hypoxia and that HPV can be restored in these mice by enhancing vasoconstriction.

EET levels in bronchoalveolar lavage fluid and oxygenase activity in pulmonary microsomes

Plasma and urine EET levels did not differ between Cyp2j+/+ and Cyp2j−/− mice (data not shown). Levels of 11,12- and 14,15-EET, as reflected by the difference in 11,12- and 14,15-DHET levels before and after hydrolysis of EETs (measured by ELISA), were evaluated in bronchoalveolar lavage fluid (BALF) from Cyp2j+/+, Cyp2j−/−, and Cyp2j−/−-Tg mice. BALF EET levels did not differ among the genotypes (Figure 6A, B).
Moreover, the generation of EETs and DHETs by pulmonary microsomes from Cyp2j+/+, Cyp2j−/−, and Cyp2j−/−-Tg mice did not differ (Figure 6C).

Cyp2c44, 2c38, and 2c29 gene expression in lung and heart of Cyp2j+/+, Cyp2j−/−, and Cyp2j−/−-Tg mice

In addition to members of the Cyp2j subfamily, members of the Cyp2c family of cytochrome P450 enzymes, including Cyp2c44, 2c38, and 2c29, are able to metabolize AA to EETs in endothelial cells [21,22]. Expression of these three Cyp2c family members in the lung and heart of Cyp2j+/+, Cyp2j−/−, and Cyp2j−/−-Tg mice was measured using quantitative RT-PCR. Pulmonary expression of the Cyp2c genes did not differ among genotypes, but deletion of the Cyp2j locus led to increased expression of the three Cyp2c genes in the heart (Figure 7).

Discussion

Allelic expansion in the rodent genome is a commonly encountered phenomenon that has the potential to reduce the utility of rodent models for understanding human gene function. At a minimum, the presence of multiple functional murine paralogs confounds the extrapolation to the human context of the results of single-gene ablations in mice. An alternative to single-gene analysis is the inactivation of an entire locus syntenic to the human gene of interest. For the most part, the genetic tools to undertake such inactivations have been relatively underdeveloped. In this report, we demonstrate the feasibility of joining multiple bacterial artificial chromosomes using site-specific recombination to form a deletion replica that can be used to induce genomic rearrangement in mice. Previous approaches for the deletion of large DNA fragments required two targeting vectors harboring loxP sites [9][10][11] and sequential gene targeting. The fused-BAC targeting approach represents a powerful and efficient method for developing genetically modified mice for the purpose of characterizing the function of gene clusters or studying genetic diseases associated with large chromosomal DNA deletions. Using this technology, we generated Cyp2j-null mice in which the 626-kb Cyp2j locus is deleted, as well as mice carrying a transgene specifying the human CYP2J2 allele in the context of a Cyp2j-null background. It has previously been shown that overexpression of human CYP2J2 has cardiovascular protective effects in mice [13,14,23]. CYP2J synthesizes EETs in vascular endothelial cells [12,13]. Epoxygenase-derived EETs hyperpolarize vascular smooth muscle cells in kidney, brain, and heart, resulting in vasorelaxation [24][25][26][27]. Athirakul et al. previously reported that female mice deficient in the Cyp2j5 gene on a 129/SvEv background exhibit increased systemic blood pressure [28]. We therefore expected that mice lacking the entire Cyp2j gene family would show systemic vascular effects. However, deletion of the Cyp2j locus did not affect baseline systemic hemodynamic parameters or left ventricular contractile function in either sex. The variance with previous observations might be attributable to differences in strain background or to the actions of other Cyp2j enzymes that are present in Cyp2j5−/− mice but absent in the Cyp2j−/− mice. Effects of Cyp2j5 deletion on pulmonary vascular function have not been reported for Cyp2j5−/− mice. In the pulmonary circulation, EETs enhance vasoconstrictor tone [3,19,29,30], likely via activation of TRPC6 channels in vascular smooth muscle cells [30]. Epoxygenase-derived EETs are reported to contribute to the pulmonary vascular response to hypoxia [30].
Moreover, 11,12- and 14,15-EET levels were recently reported to be increased in isolated perfused murine lungs following exposure to hypoxia (FIO2 0.01) for 10 minutes [31]. Previous studies of the roles of epoxygenases in the regulation of HPV have relied on chemically synthesized EETs, pharmacological activators or inhibitors of cytochrome P450 enzymes, or lineage-restricted overexpression of CYPs [3,30], none of which can distinguish between the contributions of CYP2J and CYP2C. We did not detect an effect of deleting the Cyp2j locus on pulmonary vascular tone at baseline, but the pulmonary vasoconstrictor response to hypoxia was absent in Cyp2j−/− mice. There are several possible mechanisms by which the Cyp2j subfamily might regulate HPV. It is probable that Cyp2j−/− mice have a reduced ability to generate pulmonary vasoconstricting EETs. Alternatively, it is possible that deletion of the Cyp2j subfamily shunts arachidonic acid into other pathways, leading to increased synthesis of cyclooxygenase, lipoxygenase, and ω-hydroxylase products. Some of these products, such as prostacyclin or 20-HETE, are known pulmonary vasodilators [32,33] and could impair HPV. We were unable to detect differences in EET levels in the plasma, urine, and BALF of wild-type and Cyp2j−/− mice. Moreover, we did not observe differences in the generation of EETs and DHETs by microsomes extracted from the lungs of wild-type, Cyp2j−/−, or Cyp2j−/−-Tg mice. Previous studies have shown that multiple cytochrome P450 enzymes, including members of the CYP1A, CYP2B, CYP2C, CYP2D, CYP2G, CYP2J, CYP2N, and CYP4A subfamilies, are capable of EET biosynthesis in vitro [34]. It is conceivable that EET production by CYP isoforms other than Cyp2j could obscure the impact of Cyp2j deficiency on pulmonary and systemic EET generation. In lung tissue, immunohistochemistry studies have detected CYP2C proteins exclusively in the serous cells of bronchial glands, whereas CYP2J proteins were detected in a variety of cell types, including pulmonary vascular smooth muscle and endothelial cells [1,35]. We observed that Cyp2c genes were expressed in the lungs of wild-type and Cyp2j−/− mice with or without the human CYP2J2 transgene. In addition to actions mediated by their enzymatic activity, Cyp2j isoforms could function in signaling circuits via other mechanisms, for example serving as scaffold proteins or mediators in signal transduction complexes that regulate HPV via mechanisms not dependent on epoxygenases or hydroxylases. The proposal that the catalytic activity of Cyp2j family members regulates HPV is supported by the finding of the present study that administration of an epoxygenase inhibitor, MS-PPOH, to wild-type mice impaired HPV in a dose-dependent manner. Transgenic introduction of human CYP2J2 into Cyp2j-deficient mice did not affect baseline hemodynamic parameters but restored the pulmonary vasoconstrictor response to LMBO. These results suggest that human CYP2J2 functions in a manner similar to one or more of the murine Cyp2j isoforms in the regulation of pulmonary vascular tone by hypoxia. It is of note that expression of the human CYP2J2 transgene was greater in the lungs of Cyp2j−/− mice than in wild-type mice, suggesting the existence of a feedback loop that maintains Cyp2j expression. Administration of the NO synthase inhibitor L-NAME restored HPV in Cyp2j-null mice, indicating that Cyp2j-deficient mice retain the ability to constrict their pulmonary vasculature in response to alveolar hypoxia.
This result is in agreement with the observations of Ichinose et al., who reported that L-NAME restores HPV in cPLA2α-deficient mice [16]. These observations suggest that HPV is highly sensitive to the balance of vasoconstrictors and vasodilators in the lung; enhancing vasoconstrictor tone or reducing vasodilation restores HPV in a variety of settings [36]. Taken together, these findings suggest that, although EET biosynthesis potentially increases in response to hypoxia [31] and can enhance HPV, cPLA2/CYP2J signaling is not required for the pulmonary vasculature to sense and respond to regional hypoxia. After genes duplicate, they often diverge in ways that can lead to new functions not exhibited by the parental gene. Mice have evolved eight Cyp2j genes and two pseudogenes, and the results of this study show that the expression profile of each Cyp2j gene is distinct. CYP genes may also become specialized with respect to substrate specificity or product distribution. Cyp2j5 shares the highest nucleic acid sequence similarity with the human CYP2J2 gene, whereas Cyp2j6 and Cyp2j9 have the highest similarity to the sequence of the human protein (Figure S5). It is conceivable that one or more of the mouse Cyp2j isoforms have functions that differ from those of the single human CYP2J2 enzyme. However, our observations that both human and mouse CYP2J enzymes contribute to HPV suggest that the function of human CYP2J2 and of one or more mouse Cyp2j isoforms has been conserved as the two genomes diverged during evolution. In conclusion, ablation of a large gene family through fused BAC-mediated homologous recombination in ES cells has generated mice in which the 626-kb murine Cyp2j gene cluster was deleted. The single human ortholog/paralog CYP2J2 was introduced transgenically to complement the deleted mouse locus. Surprisingly, genetically modulating Cyp2j activity did not affect baseline vascular function; however, deletion of the Cyp2j gene locus compromised the pulmonary vasoconstrictor response to hypoxia.

BAC clones

Mouse BAC clones RP23-24J24 and RP23-70M4 and the human BAC clone RP11-163O24 were obtained from the Children's Hospital Oakland Research Institute.

Sequences of oligonucleotides for PCR and probes for multiplex ligation-dependent probe amplification

Primers P1 through P22, mouse genotyping primers, and RT-PCR primer sequences are shown in Table S1. The MLPA probe sequences are shown in Table S2.

Integrase

A codon-optimized TP901 integrase was designed by Dr. Changhong Pang and placed in an E. coli expression vector (pacycTP901_ermb) under the control of the araBAD promoter. A codon-optimized R4 integrase was inserted into the mammalian expression vector pEAK15 under the control of the chicken actin-CMV hybrid promoter.

Animals

All animal studies conform to the Guide for the Care and Use of Laboratory Animals published by the National Institutes of Health and were approved by the Subcommittee on Research Animal Care of the Massachusetts General Hospital.

Modification of BACs flanking the Cyp2j locus or containing human CYP2J2

The mouse Cyp2j and human CYP2J2 gene structures are shown in Figure 1A. Target vectors were endowed with four restriction enzyme sites to allow insertion of the recombination homology arms (Figure 1B). Arms, 200-2000 bp in length, were amplified from BAC DNA and subcloned into the desired target vector.
The target vector was cut with PI-SceI, and a fragment containing the homology arms and the selection cassette was recovered by gel purification and electroporated into competent cells containing the target BAC clone and a recombinase expression vector (pacycre-dabsce_ermb) bearing the bacteriophage λ redα and β genes under the control of the araBAD promoter [6]. Electrocompetent cells were prepared by growing the BAC strain bearing the recombinase expression vector in LB medium containing 0.1% arabinose. Candidate recombinant clones were identified by growth on selective medium (kanamycin or ampicillin, plus chloramphenicol) and screened by PCR using primers flanking the arms (P1-8) (Figure 1B). The authenticity of candidate clones was confirmed by sequencing the resulting PCR products. DNA from a verified clone was electroporated into DH10B competent cells, and individual colonies were re-streaked on different selection plates to confirm removal of the recombinase plasmid. The resulting recombinant BACs, MT5′BAC and MT3′BAC, were digested with restriction enzymes chosen to distinguish the recombinant from the original BAC sequences (data not shown). A similar process was followed to trim the sequences flanking the human CYP2J2 gene from a human BAC (Figure 3A). Homologous arms were amplified from the wild-type BAC and subcloned into two target vectors containing two different selectable markers, conferring trimethoprim and ampicillin resistance.

Integrase-mediated fusion of two modified BACs

A plasmid expressing TP901 integrase under araBAD promoter control was introduced into bacteria harboring the recombinant BAC bearing the TP901 attB site. Electrocompetent cells were prepared from cells propagated in 0.1% arabinose. The recombinant BAC bearing the TP901 attP site and expressing ampicillin resistance was electroporated into these cells, which were then spread on plates containing kanamycin, ampicillin, and chloramphenicol. The fused BAC clones were screened by PCR, and the PCR products were sequenced to confirm the formation of the TP901 attL and attR sites. DNA was prepared from a correctly fused clone and used for electroporation into mouse ES cells, as described above.

Removal of the vector and selectable sequence by R4 integrase action in ES cells

A transfection mixture containing a plasmid capable of expressing R4 integrase and Lipofectamine 2000 (Invitrogen) was prepared according to the manufacturer's instructions and incubated for 20-30 min at room temperature. Targeted ES cells at 80-90% confluence were trypsinized using 0.1% trypsin in 10 mM EDTA and resuspended in fresh ES cell culture medium with low (2%) serum at 3 × 10^5 cells per mL. The ES cell suspension (10 mL) was mixed with the transfection mixture and replated on a 10 cm feeder plate. After 24 h, the negative selection agent ganciclovir (2.5 μM) was added. Transformant colonies were visible after 8-10 days of culture.

ES clone screening by MLPA

MLPA was performed as described previously [37]. Fragment analysis was carried out on an ABI 3730XL DNA analyzer.

Blastocyst injection and testing for germ line transmission

ES cells were injected into C57BL/6 mouse blastocysts to generate chimeric mice. Chimeras from ES cell clones derived from the B6-white ES cell line were mated with wild-type B6-white mice (B6(Cg)-Tyrc-2J/J, Jackson Laboratory, Bar Harbor, Maine, USA) to test germ line transmission, identifiable by coat color difference. Heterozygous mice (Cyp2j+/−) were identified by MLPA.
Cyp2j−/− and littermate-matched wild-type (Cyp2j+/+) mice were obtained by mating pairs of Cyp2j+/− mice.

Generation of human CYP2J2 transgenic mice

The recombinant human CYP2J2 BAC DNA was linearized by PI-SceI digestion. The purified DNA was microinjected into pronuclear zygotes from (C57BL/6 × DBA) F1 mice, and the embryos were transplanted into recipients for the generation of transgenic mice. The resulting transgenic mice (Cyp2j−/−-Tg) were backcrossed with B6-white Cyp2j−/− mice for more than 6 generations prior to molecular and physiological characterization.

RT-MLPA

The RT-MLPA probes used in this study are shown in Table S2. The RT-MLPA procedure was performed as described previously [20].

RNA preparation and real-time quantitative PCR

Total RNA was extracted and purified using an RNeasy kit (Qiagen) from tissues of 6-8-week-old mice. The primers used are detailed in Table S1. Reverse transcription and real-time quantitative PCR (qPCR) reactions were prepared with SuperScript II Reverse Transcriptase (Invitrogen) and iQ SYBR Green Supermix (Bio-Rad) and run in triplicate on a Bio-Rad iQ5. The cDNA panels of human adult tissue were obtained from Clontech.

Hemodynamic studies

We studied mice of both sexes with an age range of 2-5 months, weighing 20-30 g. Animals in each experimental group were matched for body weight.

Hemodynamic measurements in anesthetized Cyp2j+/+ and Cyp2j−/− mice

Invasive hemodynamic measurements were performed in anesthetized Cyp2j+/+ and Cyp2j−/− mice, as described previously [39]. Briefly, mice were anesthetized by intraperitoneal injection of ketamine (120 mg·kg^-1), fentanyl (0.05 mg·kg^-1), and pancuronium (2 mg·kg^-1). After intubation, animals were mechanically ventilated (inspired oxygen fraction (FIO2) 1.0, tidal volume 10 μL·g^-1, respiratory rate 120 breaths·min^-1), and a fluid-filled catheter was inserted into the left carotid artery for infusion of saline (0.5 μL·g^-1·min^-1). A second fluid-filled catheter was inserted into the right jugular vein for measurement of central venous pressure (CVP). A thoracotomy was performed, and a pressure-volume conductance catheter (size 1F, model PVR-1030, Millar Instruments, Inc., Houston, TX) was inserted via the apex into the left ventricle. Systemic vascular resistance (SVR) was calculated from the mean arterial pressure (MAP), CVP, and cardiac output (CO) using the formula SVR = (MAP − CVP)/CO. The following parameters were derived from left ventricular pressure-volume curves: LVESP, left ventricular end-systolic pressure; LVEDP, left ventricular end-diastolic pressure; EF, ejection fraction; SV, stroke volume; dP/dtmax, maximum rate of developed left ventricular pressure; dP/dtmin, minimum rate of developed left ventricular pressure; τ, time constant of isovolumic relaxation; SW, stroke work; Ea, arterial elastance.

Measurement of HPV and arterial blood gases in mice

To assess HPV, the left lung pulmonary vascular resistance (LPVR) was estimated before and during left mainstem bronchial occlusion (LMBO) in Cyp2j+/+, Cyp2j−/−, and Cyp2j−/−-Tg mice (n = 10, 10, and 5, respectively), using methods described previously [16]. Briefly, mice were anesthetized, mechanically ventilated at an FIO2 of 1.0, and then subjected to a thoracotomy. An arterial line was inserted into the right carotid artery, a custom-made catheter was placed into the main pulmonary artery, and a flow probe was positioned around the left pulmonary artery.
MAP, pulmonary arterial pressure (PAP), and left pulmonary arterial blood flow (QLPA) were continuously measured and recorded before and during LMBO. To estimate the LPVR, the inferior vena cava was partially occluded to transiently reduce CO until QLPA fell by approximately 50%. LPVR was calculated from the slope of the PAP/QLPA relationship. The increase in LPVR induced by LMBO (ΔLPVR) was obtained by calculating the change in the mean value of the PAP/QLPA slopes for each mouse. Five minutes after LMBO, arterial blood was sampled from the right carotid artery. Blood gas tensions were measured using an ABL800 FLEX analyzer (Radiometer America, Inc., Westlake, USA). To further assess the impact of Cyp2j deficiency on systemic oxygenation during LMBO in 4 Cyp2j+/+ and 3 Cyp2j−/− mice, a flexible polarographic Clark-type oxygen microprobe (0.5 mm OD; LICOX CC1.R, GMS, Kiel-Mielkendorf, Germany) was advanced into the aortic arch via the right carotid artery. The arterial oxygen partial pressure (PaO2) was measured in real time and recorded continuously. The PaO2 electrodes were calibrated before and after each experiment in air at ambient pressure according to the manufacturer's instructions.

Measurements of the effects of NOS inhibition on HPV

ΔLPVR was measured 30 minutes after intravenous administration of L-NAME, dissolved in 50 μL of vehicle (normal saline), at a dose of 100 mg·kg^-1 in Cyp2j+/+ mice (n = 5) and Cyp2j−/− mice (n = 6). The dose was chosen based on the results of a previous study [40].

Preparation of pulmonary microsomal fractions and quantification of microsomal EET and DHET synthesis

Microsomal fractions were prepared from both lungs of Cyp2j+/+, Cyp2j−/−, and Cyp2j−/−-Tg mice (n = 4 per group) as described previously [42,43]. To determine the eicosanoids generated, samples containing microsomal fractions were incubated with 1 mg of arachidonic acid and 1 mM NADPH for 1 hour at 37 °C in a shaking water bath. Reactions were terminated by adding 2 volumes of HPLC-grade methanol to each sample, and samples were stored at −80 °C prior to analysis. The eicosanoid profiles generated were determined by LC-MS/MS, as previously described [44].

Statistical analysis

All data are expressed as means ± SEM. P values < 0.05 were considered statistically significant. Statistical analyses were performed using Prism 5 software (GraphPad Software Inc., La Jolla, CA). For hemodynamic experiments, a two-way ANOVA with repeated measures was used to compare differences between groups. When the interaction P value between time and condition was significant, a one-way ANOVA with post hoc Bonferroni tests (two-tailed) was used for normally distributed data, or a Kruskal-Wallis test (two-tailed) with a post hoc Dunn's test for data that were not normally distributed. Measurements within the same experimental group were compared by a paired t-test; if the normality test failed, the Mann-Whitney rank sum test was applied.
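The analysis plan above branches on normality. A minimal Python sketch of that branching logic, using SciPy and statsmodels with invented group values (these are not the study's data; the Bonferroni, paired t-test, and Dunn's post hoc branches are omitted because Dunn's test is not part of SciPy), might look like:

import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

groups = {
    "wt":   np.array([80.0, 85.0, 78.0, 82.0, 88.0]),
    "null": np.array([86.0, 90.0, 84.0, 88.0, 92.0]),
    "tg":   np.array([81.0, 84.0, 79.0, 83.0, 87.0]),
}
values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])

# Choose the parametric or non-parametric path based on a normality check.
normal = all(stats.shapiro(v).pvalue > 0.05 for v in groups.values())
if normal:
    f, p = stats.f_oneway(*groups.values())
    print(f"one-way ANOVA: F={f:.2f}, p={p:.3f}")
    print(pairwise_tukeyhsd(values, labels))  # pairwise post hoc comparisons
else:
    h, p = stats.kruskal(*groups.values())
    print(f"Kruskal-Wallis: H={h:.2f}, p={p:.3f}")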
Figure S1. PCR identification of recombinant BACs and enzymatic digestion of the fused BAC. (A) The recombinant MT5′BAC and MT3′BAC candidates were screened by amplifying a homology region using primers P1-8 positioned outside of the homologous recombination region (HR) (shown in Figure 1B). 1-5: recombinant clones; −: PCR control without primers; +: PCR control to amplify HR. (B) The fused BACs (FS BAC) formed by TP901 integrase action were screened by amplifying the attL1 and attR1 regions using primers P9-12 (shown in Figure 1B) and validated by sequencing of the PCR products. TP901 recognizes the attB1 and attP1 sites (DNA sequences shown below) on the MT5′BAC and MT3′BAC, respectively, and mediates recombination between the consensus sequences (shown in red) of attB1 and attP1. The left side of attB1 joins the right side of attP1 to form attL1, and the left side of attP1 joins the right side of attB1 to form attR1 (sequences shown below). 1-4: FS BAC clones; −: negative controls amplifying MT5′BAC and MT3′BAC. Representative sequencing data showing attL and attR are presented below the sequences. (C) The FS BAC was digested with SpeI and BamHI and fractionated on a 1% agarose gel. Fragment sizes were predicted with the web map program (http://pga.mgh.harvard.edu/web_apps/web_map/start). FS BAC clones with correct restriction fragments were selected. M1: λ DNA/HindIII marker; M2: 1 kb DNA ladder. (TIF)

Figure S2. Selectable markers and other extraneous sequences were eliminated from targeted ES clones using R4 integrase. R4 integrase recognizes the attB2 and attP2 sites (DNA sequences shown below) on the engineered allele and mediates recombination between the consensus sequences (boxed) of attB2 and attP2. The left side of attB2 joins the right side of attP2, forming attL2 and deleting the sequence between attB2 and attP2. Deleted ES clones were screened by PCR using primers P13 and P14 and validated by sequence verification of the formation of the R4 attL2 site. (TIF)

Figure S3. Genotypes of wild-type, Cyp2j+/−, and Cyp2j−/− mice were identified by MLPA (A) and PCR (B). The neo resistance element is lost in Cyp2j+/− and Cyp2j−/− mice, and all the wild-type sequences (5′wt, 3′wt, wtM1, wtM2, wtM3) are absent from
Physician preparedness for resource allocation decisions under pandemic conditions: A cross-sectional survey of Canadian physicians, April 2020

Background: Under the pandemic conditions created by the novel coronavirus of 2019 (COVID-19), physicians have faced difficult choices allocating scarce resources, including but not limited to critical care beds and ventilators. Past experiences with severe acute respiratory syndrome (SARS) and current reports suggest that making these decisions carries a heavy emotional toll for physicians around the world. We sought to explore Canadian physicians' preparedness and attitudes regarding resource allocation decisions.

Methods: From April 3 to April 13, 2020, we conducted an 8-question online survey of physicians practicing in the region of Ottawa, Ontario, Canada, organized around 4 themes: physician preparedness for resource rationing, physician preparedness to offer palliative care, attitudes towards resource allocation policy, and approaches to resource allocation decision-making.

Results: We collected 219 responses, of which 165 were used for analysis. The majority (78%) of respondents felt "somewhat" or "a little prepared" to make resource allocation decisions, and 13% felt "not at all prepared." A majority of respondents (63%) expected the provision of palliative care to be "very" or "somewhat difficult." Most respondents (83%) either strongly or somewhat agreed that there should be policy to guide resource allocation. Physicians overwhelmingly agreed on certain factors that would be important in resource allocation, including whether patients were likely to survive and whether they had dementia or other significant comorbidities. Respondents generally did not feel confident that they would have the social support they needed at the time of making resource allocation decisions.

Interpretation: This rapidly implemented survey suggests that a sample of Canadian physicians feel underprepared to make resource allocation decisions and desire both more emotional support and clear, transparent, evidence-based policy.

Introduction

Under the pandemic conditions created by COVID-19, physicians around the world have faced difficult choices around the allocation of scarce resources, including but not limited to critical care beds and ventilators [1,2]. Making these decisions has left Italian physicians "weeping in hospital hallways" [3], and emerging reports suggest that American physicians have been similarly affected [4] by the emotional toll of the "toughest triage" [5]. One forecast specific to this region predicted a worst-case scenario of more than 13,000 Ontario patients left to die because of insufficient resources [6], and another predicted a possible insufficiency of intensive care unit (ICU) beds [7]. While many ethical frameworks exist for allocating resources in pandemics [8][9][10][11], we have few empirically based insights into physicians' attitudes and beliefs surrounding resource allocation decisions in the era of COVID-19 [12]. In early April 2020, we launched a survey of physicians practicing in the region of Ottawa, Ontario, Canada. Ottawa is the national capital, a cosmopolitan and bilingual city of approximately 1 million. We sought to capture physician preparedness to make resource allocation decisions, their anticipated approach to these decisions, their awareness of existing guidelines, their comfort with the provision of palliative care under pandemic circumstances, and their desire for services to support their decision-making. We hypothesized that our respondents would not feel prepared to make these decisions, would be unaware of any specific guidelines, and would use commonly cited factors such as age and comorbidities to make resource allocation decisions. We timed our survey to predate the expected surge of COVID-related hospitalizations in our region; at the time the survey was launched, approximately a dozen patients with COVID-19 were admitted to The Ottawa Hospital, the academic tertiary care centre in our region. We kept the survey open for only 10 days so as to capture a specific moment in time when resource allocation was a foremost concern. As a reflection of how quickly the field was moving, the province of Ontario issued a guideline on triaging access to critical care beds while we were designing the survey [13]. Under hypothetical "surge" conditions that were ultimately never met, patients would be excluded from critical care according to disease-specific mortality thresholds that become increasingly stringent as resources become more limited.

Methods

Data for this study were collected via an online survey (S1 Appendix) administered via the Qualtrics XM survey platform (Denver, Colorado) [14] between April 3 and 13, 2020. Approval from the Ottawa Health Sciences Network Research Ethics Board (application 20200208-01H (2118)) was sought and obtained. Prior to fielding, the survey was piloted on a convenience sample to determine its length and resolve areas of ambiguity. Survey respondents did not receive any incentive, and participation was voluntary. The survey included eight questions on four main themes: physician preparedness for resource rationing, physician preparedness for palliative care, attitudes towards resource allocation policy, and approaches to resource allocation decision-making. Responses were entirely anonymous, and no identifying information such as IP addresses or emails was collected. The risk of repeat participation was minimized in two ways: first, using an option within the survey platform that prevents duplicate participation with a browser cookie; and second, by removing responses that were less than 40% complete.
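The completeness rule above maps onto a one-line filter on the exported response table. The Python sketch below is a hedged illustration: the `progress` column name and the 0-1 completion fraction are assumptions about the platform export, not details reported in the paper.

import pandas as pd

def clean_responses(df, completeness_threshold=0.4):
    """Drop survey responses that are less than 40% complete, mirroring the
    exclusion rule described above. `progress` is assumed to be a 0-1
    completion fraction exported by the survey platform."""
    return df[df["progress"] >= completeness_threshold].reset_index(drop=True)

raw = pd.DataFrame({"respondent": [1, 2, 3, 4], "progress": [1.0, 0.35, 0.8, 0.39]})
print(clean_responses(raw))  # keeps respondents 1 and 3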
Making these decisions has left Italian physicians "weeping in hospital hallways," [3] and emerging reports suggest that American physicians have been similarly affected [4] by the emotional toll of the "toughest triage" [5]. One such resource specific to this region, predicted a worst-case scenario of more than 13,000 Ontario patients left to die due to insufficient resources [6] and another predicted the possibility of an insufficiency of intensive care unit (ICU) beds [7]. While many ethical frameworks exist for allocating resources in pandemics [8][9][10][11], we have few empirically-based insights into physicians' attitudes and beliefs surrounding resource allocation decisions in the era of COVID-19 [12]. In early April 2020, we launched a survey of physicians practicing in the region of Ottawa, Ontario, Canada. Ottawa is the national capital, a cosmopolitan and bilingual city of approximately 1 million. We sought to capture physician preparedness to make resource allocation decisions, their anticipated approach to these decisions, their awareness of existing guidelines, their comfort with the provision of palliative care under pandemic circumstances, and their desire for services to support their decision-making. We hypothesized that our respondents would not feel prepared to make these decisions, would be unaware of any specific guidelines, and would use commonly cited factors such as age and comorbidities to make resource allocation decisions. We timed our survey to predate the expected surge of COVID-related hospitalizations in our region. At the time the survey was launched, there were approximately a dozen patients with COVID-19 admitted to The Ottawa Hospital, the academic tertiary care centre in our region. We kept the survey open only for only 10 days so as to capture a specific moment in time when resource allocation was a foremost concern. As a reflection of how quickly the field was moving, the province of Ontario issued a guideline on triaging access to critical care beds during the time we were designing the survey [13]. Under hypothetical "surge" conditions that were ultimately never met, patients would be excluded from critical care according to diseasespecific mortality thresholds that become increasingly more stringent as resources become more limited. Methods Data for this study were collected via an online survey (S1 Appendix) administered via the Qualtrics XM survey platform (Denver, Colorado) [14] between April 3 and 13, 2020. Approval from the Ottawa Health Sciences Network Research Ethics Board (application 20200208-01H (2118)) was sought and obtained. Prior to fielding, the survey was piloted on a convenience sample to determine its length and resolve areas of ambiguity. Survey respondents did not receive any incentive and participation was voluntary. The survey included eight questions on four main themes: physician preparedness for resource rationing, physician preparedness for palliative care, attitudes towards resource allocation policy, and approaches to resource allocation decision-making. Responses were entirely anonymous and no identifying information such as IP addresses or emails was collected. The risk of repeat participation was minimized in two ways: First, using an option within the survey platform that prevents duplicate participants with a browser cookie; and second, by removing responses that were less than 40% complete. 
The population of interest included staff physicians in the Ottawa region, with a sample frame defined by membership in a Facebook group for local physicians and/or mailing lists belonging to the following groups: the Departments of Medicine, Anesthesia, Critical Care and Emergency Medicine at The University of Ottawa / Ottawa Hospital; the Divisions of Neurology and Palliative Care; the Regional Ethics program; and the Ottawa Hospital Research Institute. This group was selected to capture a broad cross-section of physicians within a defined geographic area. Analysis was primarily descriptive, but appropriate inferential statistics were performed where comparisons between groups or responses were indicated. Pairwise between-group comparisons were corrected using Tukey's honest significant difference test. For more information about survey procedures, please see S1 Checklist, the completed Checklist for Reporting Results of Internet E-Surveys (CHERRIES) checklist [15]. Responses regarding the content of guidelines were thematically analyzed by two independent reviewers. Results The initial sample included 219 partial and complete responses. Of these, 54 were less than 40% complete and were removed prior to analysis to minimize data duplication. This left a final sample of 165 for analysis (Table 1). The majority of responses (70.3%) came from departmental mailing lists, 29.1% came from the Facebook group and 0.6% from a link for participants referred to the survey by previous respondents. Physician preparedness for resource rationing Respondents were asked, "Imagine that you have two patients who require a ventilator but only one ventilator is available. How prepared do you feel to determine who will receive the ventilator?" (Fig 1). The majority of respondents endorsed being "somewhat" or "a little prepared" (78%). However, more than one in ten (13%) described themselves as "not at all prepared." When analyzed by specialty groups with at least 10 respondents (Fig 2), critical care/anaesthesia physicians described themselves as being significantly more prepared to make decisions on ventilator allocation than all other specialties (all ps ≤ .05). Family medicine practitioners described themselves as less prepared than all other specialties with the exception of surgery (vs. surgery: p = .94; vs. others: all ps < .05). No other statistically significant correlations were found with regards to specialty. Physician preparedness for palliative care Respondents were asked how they expected to feel about providing palliative care to a patient who had been denied life-saving treatment because of resource allocation (Fig 3). A substantial majority (63%) expected this situation to be "very" or "somewhat difficult." Most respondents (55%) described being "somewhat" or "very comfortable" with having goals of care conversations, though respondents were substantially less comfortable with having a goals of care conversation with the patient's family via telephone or videolink, with 22% of respondents describing themselves as "not at all comfortable" doing this. Physicians and resource allocation policy A slight majority (53%) of respondents were aware of an existing policy about resource allocation in pandemics; of those, 61% were aware of the provincial triage policy that had been released the week the survey was launched. Most respondents (83%) either strongly or somewhat agreed that there should be policy to determine who should receive critical care resources in the event of scarcity.
Virtually all participants (96%) stated that they would follow such a policy if it aligned with their own values. However, in the hypothetical case that a policy did not align with their own values, the percentage of respondents who stated that they would follow the policy in all circumstances decreased from 32% to 9%, though the majority (65%) would still follow the policy in "all" or "most circumstances" (Fig 4). Policy recommendations When asked to define what would be most important to include in a policy on resource allocation, explicitness on how to follow it (20%) and transparency in how the policy was designed (18%) emerged as the most common themes. Other frequent responses included some statement on what would be expected of physicians when enacting these policies (9%), the importance of including an evidence-base in the policy (8%), and the importance of addressing issues of legal culpability (3%) (Table 2). Some respondents (20%) wanted to remove decision-making responsibility (and thereby guilt and emotional responsibility) from individual physicians, while other respondents (9%) preferred a more flexible policy that would allow physicians the leeway to make choices in line with their individual values. Physician decision-making during resource scarcity To assess which factors would most strongly influence resource allocation decisions, we asked respondents to make a series of choices about which of two patients should receive a ventilator using a 5-point Likert-type scale with the points Definitely Patient A, Probably Patient A, Unsure, Probably Patient B, and Definitely Patient B. Each choice differentiated the two patients on just one or two factors (e.g., age, comorbidities), and stated that all else should be considered equal between them. This choice paradigm was intended to reveal the importance of different patient characteristics for the average respondent. However, we also directly asked respondents to rate the importance of key factors. Thus, these two questions provide us with both revealed and stated importance ratings. Fig 5 shows the average likelihood of choosing the patient on the right for each of the choices. Note that 3 was the midpoint ("unsure") of the scale. In all cases, physicians were significantly more likely, on average, to choose the patient characteristics described on the right (all |t| > 3.3, ps < .001), with one exception: gender (t[163] = 1.9, p = .06). However, as the figure makes clear, physicians prioritized survival, cognitive status, comorbidity severity and age when making resource allocation decisions. The mean score for likelihood of survival is close to the maximum value of "definitely Patient B [who has a higher survival chance]", while the age-related item (age 72 versus 40), at 4.0, averages exactly on the scale point of "probably Patient B [the younger patient]." In an attempt to provide a closer behavioural test of how physicians prioritize between some of the key factors such as age and comorbidities, pairings combining these factors (Fig 6) were included. In these cases, respondents tended to prioritize both survival and presence of comorbidities over age.
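As a rough illustration of the analysis just described, comparing the mean choice score for a single patient-comparison item against the scale midpoint of 3 ("unsure") amounts to a one-sample t-test. The response values in the sketch below are invented for illustration; this is not the authors' code or data.

```python
# Minimal sketch: one-sample t-test of Likert-type choice scores against the
# scale midpoint of 3 ("unsure"); 1 = definitely Patient A ... 5 = definitely Patient B.
import numpy as np
from scipy import stats

scores = np.array([4, 5, 3, 4, 5, 4, 2, 5, 4, 3, 5, 4])   # hypothetical responses for one item

t_stat, p_value = stats.ttest_1samp(scores, popmean=3.0)
print(f"mean = {scores.mean():.2f}, t = {t_stat:.2f}, p = {p_value:.4f}")
```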
Finally, participants directly rated the importance of various patient characteristics (Fig 7). Respondents' stated order of importance reflected their approach to the patient comparisons in that likelihood of survival, comorbidities and dementia were the strongest determinants. Social support and moral injury Respondents to this survey were asked if they felt confident that they could access necessary mental health resources at the time of making a resource allocation decision. Among respondents, 38% felt not at all confident, while only 13% felt very confident that they would be able to access these resources. However, a majority (64%) felt either "somewhat confident" or "very confident" that they would be able to access mental health resources after making an allocation decision. Physicians reported being most likely to turn to other physicians (44%), followed by family members (28%), professional counsellors (16%), religious advisors (7%) and no one (4%). The most common volunteered responses were turning to hospital ethics support and non-medical friends, both with 4 responses (3%). Discussion This rapidly implemented survey provides insights into physician preparedness to make resource allocation decisions under a series of hypothetical conditions related to COVID-19, with implications for understanding the effects of resource insufficiency on physician decision-making and for healthcare policy-making in pandemics. While this topic has engendered significant debate in the medical literature and popular press in recent weeks, little empirical data has been available to provide context for this debate. Our results provide some insights, albeit from a single Canadian region, and acquired in a context of anticipation for the COVID surge. We present three major takeaways from this survey: 1. A majority of surveyed physicians reported feeling under-prepared, described themselves as anxious or sad when imagining themselves making tough decisions around allocating resources, and anticipated guilt afterwards. Participants were generally not confident in their ability to access mental health support at the time of making these decisions, though a majority believed they would be able to access support afterwards. Our survey suggests that Canadian physicians are likely to experience similar mental health challenges to colleagues in New York, Italy and China [16] due to COVID-19. Clear guidelines on resource allocation, acknowledgement of the emotional toll of making these decisions, and robust support systems are needed and may help alleviate these pressures for physicians worldwide. We suggest that local and national physician organizations should play an important role in supporting physician mental health during this difficult time [17]. 2. Respondents demonstrated a strong appetite for transparently-developed, evidence-based, and clear-to-follow guidelines to inform resource allocation decisions. Based on our results, we would encourage institutions to develop documents with a solid, evolving evidence-base that consider implementation, address legal issues, and provide guidance on how to communicate with patients and families. 3. A consistent set of factors emerged as being important to most physicians' decision-making around resource allocation: survival likelihood, cognitive function, comorbidities and daily function.
However, significant disagreement existed around other factors including age and citizenship, and our ability to draw conclusions about these factors is limited. In the absence of agreement and standard practices, physicians are at risk of being influenced by unconscious biases in the way resources are rationed [18]. This tendency may be exacerbated when information about the patient or the clinical scenario is limited. The largest limitation of this work was the fact that participants self-selected into the study rather than being recruited via probability-based sampling. The survey was distributed through several email lists as well as through two Ottawa physician Facebook groups, whose memberships overlap. As such, we are also unable to report an exact response rate, though we estimate that 10-15% of physicians employed by The Ottawa Hospital may have responded within the brief administration period. The goal of the survey was not to provide a perfectly representative picture of staff physicians in Ottawa, but to obtain input that will be useful for the creation of policy and practice. In addition, the sample was regionally limited; however, the Ottawa area is multicultural and has a strong medical academic program. As such, we posit that our results are reasonably generalizable to other regions. Finally, because of the fast-moving nature of the COVID-19 pandemic, policies on resource allocation were being written and released while this survey was recruiting participants, and this may have affected respondent knowledge of extant policies. Future directions of research raised by this work include systematically reviewing existing frameworks for resource allocation decisions under pandemic conditions, developing methods to raise awareness of existing policies so that physicians are as well-informed as possible, and examining interventions to provide support to physicians at the time of decision-making. Supporting information S1 Appendix. Preparing for resource rationing under pandemic conditions. (DOCX)
2020-10-24T13:05:44.544Z
2020-10-22T00:00:00.000
{ "year": 2020, "sha1": "c9d1ae58e9a053a939c4f673a67b664f92801bec", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0238842&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "b9c68da3453c51096093d7d273d5a5bf16a9f5ff", "s2fieldsofstudy": [ "Medicine", "Political Science" ], "extfieldsofstudy": [ "Medicine", "Psychology" ] }
251504493
pes2o/s2orc
v3-fos-license
Cross-Sectional Study of University Students' Attitudes to 'On Campus' Delivery of COVID-19, MenACWY and MMR Vaccines and Future-Proofing Vaccine Roll-Out Strategies University students are a critical group for vaccination programmes against COVID-19, meningococcal disease (MenACWY) and measles, mumps and rubella (MMR). We aimed to evaluate risk factors for vaccine hesitancy and views about on-campus vaccine delivery among university students. Data were obtained through a cross-sectional anonymous online questionnaire study of undergraduate students in June 2021 and analysed by univariate and multivariate tests to detect associations. Complete data were obtained from 827 participants (7.6% response rate). Self-reporting of COVID-19 vaccine status indicated uptake by two-thirds (64%; 527/827), willingness for 23% (194/827), refusal by 5% (40/827) and uncertain results for 8% (66/827). Hesitancy for COVID-19 vaccines was 5% (40/761). COVID-19 vaccine hesitancy was associated with Black ethnicity (aOR, 7.01, 95% CI, 1.8–27.3) and concerns about vaccine side-effects (aOR, 1.72; 95% CI, 1.23–2.39). Uncertainty about vaccine status was frequently observed for MMR (11%) and MenACWY (26%) vaccines. Campus-associated COVID-19 vaccine campaigns were favoured by UK-based students (definitely, 45%; somewhat, 16%) and UK-based international students (definitely, 62%; somewhat, 12%). Limitations of this study were use of a cross-sectional approach, self-selection of the response cohort, slight biases in the demographics and a strict definition of vaccine hesitancy. Vaccine hesitancy and uncertainty about vaccine status are concerns for effective vaccine programmes. Extending capabilities of digital platforms for accessing vaccine information and sector-wide implementation of on-campus vaccine delivery are strategies for improving vaccine uptake among students. Future studies of vaccine hesitancy among students should aim to extend our observations to student populations in a wider range of university settings and with broader definitions of vaccine hesitancy. Introduction Young people are an important risk group for vaccination programmes due to their high mobility, inexperience of accessing medical systems and relatively high levels of vaccine hesitancy compared to older populations [1,2]. Within this group, university students are particularly at risk of contracting and transmitting infectious diseases because of their high levels of transmission-associated behaviours and mixing of geographically diverse intakes [3][4][5]. These risks have been exemplified by outbreaks of COVID-19 on university campuses as students returned to campus-based activities after initial lockdowns [6,7]. Facilitating access of university students to vaccines is a key mechanism for enhancing vaccine uptake and preventing infectious disease outbreaks while minimising the need for highly restrictive measures such as lockdowns, social distancing and online learning. Multiple previous studies have been conducted to determine the reasons for COVID-19 vaccine hesitancy in the general population. Vaccine hesitancy or acceptance has been assessed by a range of measures including use of the SAGE guidelines, Likert-scaled acceptance questions, attitudinal measures and actual uptake or intention to uptake (as utilised herein) [8][9][10][11].
A recent meta-analysis of vaccine acceptance in higher income countries reported vaccine hesitancy rates of at least 30% in half the studies (n = 97) with lower socioeconomic status being the most impactful contributory factor in lower-middle income countries/regions and perceived vaccine safety in more affluent countries/regions [8]. Common demographics for COVID-19 vaccine hesitancy in the reported literature include females, younger age groups, being from a minority ethnic group and lower education or income levels [12][13][14]. Studies of student populations have yielded a range of findings. Factors associated with higher vaccine acceptance are knowledge of COVID-19 vaccines, trust in authorities and high perceived vaccine safety and effectiveness and uptake of vaccination in the family [10,11,[15][16][17][18][19][20]. Conversely, perceived accessibility barriers (physical or financial), concerns about vaccine side-effects, speed of development and previous COVID-19 infection have been associated with vaccine hesitancy [11,18,[20][21][22][23]. A potentially important issue is whether news about the rare but serious side-effect of blood clots associated with the licensed AstraZeneca COVID-19 vaccine might have affected uptake among students [24,25]. Prior to the COVID-19 pandemic, a major concern for student populations was the prevention of cases and outbreaks of meningococcal disease, measles and mumps. Rising levels of infections due to a MenW cc11-lineage strain led to introduction of the MenACWY vaccine into the UK school-age vaccination programme and for new university entrants from August 2015 [26,27]. Outbreaks of mumps among students were also observed in 2019, leading to student-focussed information campaigns to encourage uptake of the MMR vaccine [28]. The MenACWY and MMR vaccines are currently offered free of charge to all university students in the UK, including overseas students. National lockdowns to contain the rapid spread of SARS-CoV-2 in 2020 led to major reductions in cases of meningococcal disease, measles and mumps, but there is now a concern that ending lockdowns and increased social mixing could lead to rises in these serious vaccine-preventable diseases [29,30]. These effects may be compounded by disruption of school-based immunisation programmes during the pandemic, which may have resulted in a serious risk of long-term weakening of individual and herd (population) protection. Studies prior to the COVID-19 pandemic reported uptake rates of the MenACWY vaccines among students at 68-71% [9,27,31]. In general, students are expected to obtain their vaccines prior to arrival at university. However, uptake can be enhanced by 'on campus' vaccine campaigns as exemplified by the University of Nottingham's highly effective delivery of the MenACWY vaccine for incoming university entrants [27]. Vaccine hesitancy has been examined for the MenACWY vaccine. Blagden et al. [9] reported that vaccine uptake was strongly associated with a high perceived effectiveness of the vaccine but did not find any barriers, such as vaccine side-effects or inconvenience. A meta-study by Wishnant et al. [32] found that the only factors strongly associated with uptake of meningococcal vaccines among students were perceived risks of contracting meningococcal disease and the severity of meningococcal infections. Overall, these studies indicate that vaccine hesitancy is not a major barrier to meningococcal vaccine uptake but are equivocal about how vaccine uptake can be increased.
To evaluate the barriers to uptake of vaccines among students and to inform university vaccination policies, we assessed the attitudes, knowledge, perceived vaccine status and willingness for uptake of COVID-19, MMR and MenACWY vaccines among university students during the roll-out of COVID-19 vaccines to 18-year-olds and above in the UK. Ethics All study participants gave their informed consent for inclusion before participation in the study. The study was conducted in accordance with the Helsinki Declaration and ethical approval was given by the University of Leicester (UoL) Medicine and Biological Sciences Research Ethics Committee (reference number 29522). Context of Questionnaire Delivery and Derivation The questionnaire was emailed to students on three occasions between the 1st and 21st of June 2021. Access to COVID-19 vaccines in the UK was extended to the 25-29, 23-24, 21-22 and 18-20 age brackets on the 7th, 15th, 16th and 17th June 2021, respectively (https://www.england.nhs.uk/2021/06/21-and-22-year-olds-to-be-offered-covid-19-jab-from-today/, accessed on 9 August 2022 and https://www.england.nhs.uk/2021/06/nhs-invites-all-adults-to-get-a-covid-jab-in-final-push/, accessed on 9 August 2022). Prior to these dates, only healthcare workers and medical students as well as individuals in vulnerable categories were eligible for COVID-19 vaccines in the <30-year-old age bracket. At the time the questionnaire was designed, we assumed most students would be unvaccinated when they completed the questionnaire. Questionnaire Delivery, Structure and Content The questionnaire was administered via Online Surveys. Between 1 and 21 June 2021, the research team sent an invitation email, and two reminder emails, to all 10,869 campus-based University of Leicester undergraduate students (Figure S1). Each invitation email contained a unique link to the questionnaire that could not be reused. Completed questionnaires were de-identified by automatic assignment of another unique identifier by the software, thereby uncoupling the questionnaires from the original email address. The initial email included an invitation to voluntarily participate in follow-up interviews and a prize draw with five prizes of GBP 200 being offered and subsequently delivered. The questionnaire consisted of a participant information sheet followed by three informed consent questions. Remaining parts of the questionnaire were only accessible if approval to all three consent questions was granted. Apart from Question 28, all questions were compulsory; 'Prefer not to answer' or 'Don't know' responses were included for most questions so that participants could opt not to provide specific responses (Table S1). The questionnaire consisted of 29 questions split into four sections: demographics, vaccines, experiences of COVID-19 disease and other pandemic experiences (e.g., harassment). Questions were multiple choice or scaled answers with one free text box (Question 28) and three questions with answer-dependent questions (Table S1). Questions 2, 6, 7, 10, 12, 17, 19, 20 and 26 were identical to or modifications of questions utilised in UK-REACH questionnaire 2_ver_1.2 (23 March 2021). Questions 13-16 were written by the authors and piloted with University of Leicester students prior to the pandemic as part of another study [31]. Questions 8, 9, 11, 21-25 and 27-29 were written by the authors for this study. Question 26 is the self-determination scale. Question 17 utilised four statements from the VAX scale of Martin and Petrie [33].
A VAX score was derived for each participant by reversing the scores for statements 17.2, 17.3 and 17.5 followed by rescaling the sum of all four scores on a 0 to 1 scale that represents maximum to minimum hesitancy, respectively. Internal consistency of the VAX score was tested with Cronbach's alpha. Definitions of Primary Endpoints Primary endpoints in our analyses were vaccine hesitancy, VAX scores and willingness for on-campus vaccination programmes. Vaccine hesitancy was defined as providing a response to question 10 that included the phrase 'have decided not to have the vaccine'. Vaccine-willing students were those whose response included either 'I have already had' or 'intend to have the vaccine'. VAX scores have been utilised as predictors of vaccine hesitancy [33]. VAX scores were derived for all students and analysed for differences between ethnic groups and term time residence locations as an alternate measure of vaccine hesitancy. The potential utilisation of on-campus vaccination programmes was defined based on responses to question 15 split between those in favour (definitely increase, somewhat increase) and those who were ambivalent (neither, somewhat decrease, definitely decrease). Statistical Analysis Sample size was determined by the total number of surveys completed with all responses being considered in the analysis. De-identified survey responses were analysed using R version 4.0.3 with the tidyverse (data handling), jsonlite 1.7.1 (data extraction), ggplot2 3.32 (general graphing), gtsummary 1.4.2 (tabulation), UpSetR 1.4.0 (graphing of sets) and likert 1.3.5 (graphing of Likert-style responses) packages [34][35][36][37][38][39][40] all downloaded via the Comprehensive R Archive Network (CRAN; https://cran.r-project.org). In order to determine if the demographics of our study participants were similar to those of other UK universities and for weighting of the multivariable analyses, we obtained demographic data from HESA (Higher Education Statistics Agency, Cheltenham, UK; https://hesa.ac.uk, accessed on 2 August 2021). Similarly, we compared vaccination rates in our study with local, regional and national vaccination rates obtained from the UK Coronavirus Dashboard (https://coronavirus.data.gov.uk, accessed on 6 July 2021). Univariable analyses were performed on unweighted survey results using chi-squared, Fisher's exact or Wilcoxon's rank sum tests. False discovery rate (FDR) correction was used to adjust p-values for multiple testing. COVID-19 vaccine hesitancy and preference for on-campus vaccinations (COVID-19 and MMR) were dichotomised and used as dependent variables. Multivariable analysis was performed using logistic regression (glm; binomial link function) on both unweighted and weighted survey results, with vaccine hesitancy and preference for on-campus vaccinations (COVID-19 and MMR) as dependent variables. Predictors included gender, ethnic group, age group, course studied, year of study, experience of harassment, experience of COVID-19-related death, concern over side-effects from the Oxford/AstraZeneca vaccine, concern over hospitalisation from COVID-19, concern over spreading COVID-19, home area (local, national, international), residence while studying (home, halls of residence, private accommodation, other) and a psychometric score on self-determination/fatalism.
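The authors report performing these analyses in R; the Python sketch below is only an illustration of the VAX-score derivation and the Cronbach's alpha check described above. The 1-5 Likert response range and the column names are assumptions made for the example, not details taken from the questionnaire.

```python
# Minimal sketch (assumptions: 1-5 Likert responses, hypothetical column names):
# reverse-score the three negatively keyed VAX statements, sum all four items,
# and min-max rescale the sum to 0-1 (0 = maximum hesitancy, 1 = minimum hesitancy).
import pandas as pd

LIKERT_MIN, LIKERT_MAX = 1, 5
ITEMS = ["q17_1", "q17_2", "q17_3", "q17_5"]   # hypothetical column names for the four statements
REVERSED = ["q17_2", "q17_3", "q17_5"]

def score_items(responses: pd.DataFrame) -> pd.DataFrame:
    scored = responses[ITEMS].copy()
    scored[REVERSED] = (LIKERT_MIN + LIKERT_MAX) - scored[REVERSED]  # reverse-code
    return scored

def vax_score(responses: pd.DataFrame) -> pd.Series:
    total = score_items(responses).sum(axis=1)
    lo, hi = len(ITEMS) * LIKERT_MIN, len(ITEMS) * LIKERT_MAX
    return (total - lo) / (hi - lo)

def cronbach_alpha(items: pd.DataFrame) -> float:
    # alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

# Usage, given a DataFrame `df` of questionnaire responses:
#   df["vax"] = vax_score(df)
#   alpha = cronbach_alpha(score_items(df))
```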
Home area was derived using information on postcodes and international status with students being classified as local if they came from either Leicester or the wider county of Leicestershire, 'national' for students from the rest of the UK and 'international' for students ordinarily living overseas. Experiences of COVID-19-related deaths were classified into a Yes or No category according to responses to question 21 (Table S1) with the Yes responses including family members, friends or others. Survey data were weighted using the raking method in the survey package version 4.0 [41] for R based on national student distributions for ethnic group (White, Asian, Black and other) and gender (Tables S2-S4). Constraints in the national data made it necessary to remove students of unknown gender (n = 6). Given the low rate of vaccine hesitancy and the inherent limitations on creating meaningful and representative training and test data subsets, no attempt was made to assess model performance (e.g., using ROC AUC). Multivariate models were used to identify significant factors influencing vaccine hesitancy rather than for the purpose of generating an effective predictive model of hesitancy. Statistical differences between the distributions of VAX scores for different groups were determined using pairwise Dunn tests with FDR correction. All tests were two-sided with a corrected p-value (FDR correction) of <0.05 considered significant. Response Rate, Sample Characteristics and COVID-19 Vaccination Uptake In June 2021, all University of Leicester (UoL) undergraduate students were invited to participate in a study of the uptake and attitudes to COVID-19 vaccines. Complete answers were provided by 7.6% (827/10,869) of invited students. Respondents were young (94% 18-25-year-olds), ethnically diverse (25% Asian, 8% Black, 58% White, 9% other) and included 10% (n = 86) international students. Response rates were higher among females (11% above the level for UK universities and 14% above the UoL level), and respondents had an ethnicity profile intermediate between UoL and UK university undergraduate populations (Tables S3 and S4). The distribution among year of study was, however, strongly representative of the UoL population (Table S5). Two thirds (64%) of students (527/827) reported having had a COVID-19 vaccine at the time of questionnaire completion: 74% (390/527) had Pfizer/BioNTech, 23% (121/527) AstraZeneca and 3% (16/527) another vaccine. A further 194 students (23%) expressed a willingness to become vaccinated, giving a total of 85% who had been or were willing to be vaccinated. Results for 66 students were excluded from further analysis of vaccine hesitancy due to uncertainty about their intention to vaccinate (the selected response was 'I have not had a vaccination but have been told that I will be offered a vaccination in the near future'). Removing these students from the denominator gave an overall willingness rate of 95% (721/761). There were 40 students (5%) who indicated that they had refused or would refuse a COVID-19 vaccine. Univariable Analysis of Vaccine Hesitancy The results of the univariable analysis for vaccine hesitancy are based on these 40 students (5%) who indicated that they had refused or would refuse a COVID-19 vaccine. Ethnicity, concerns around side-effects (particularly the AstraZeneca vaccine), concerns around spreading COVID-19 to others, place of residence while studying and VAX score were all found to be significantly associated with hesitancy after correcting for multiple testing.
There was a weak trend for an association of age with vaccine hesitancy; this could not, however, be explored further due to banded collection of age data and the narrow age range of this cohort. Analysis of the individual VAX scale questions (see Figure S2) showed that only 29% of hesitant students disagreed that natural exposure to a disease was safer than vaccination compared to 80% of vaccine-willing students. By contrast, 70% of hesitant students, but also 54% of willing students, had concerns about the safety of vaccines (the statement was 'Although most vaccines appear to be safe, there may be problems that we have not yet discovered'). Approximately half (49%) of the hesitant students and 82% of willing students agreed that >95% vaccine coverage was required to prevent the spread of infectious diseases. Multivariable Analysis of Vaccine Hesitancy The multivariable analysis identified associations between vaccine hesitancy and ethnicity, course of study, side-effects and place of term-time residence, as found in the univariate analysis, and additionally with experiences of death among contacts (Table 2). For course studied, those studying medicine and allied professions (e.g., midwifery, nursing and physiotherapy) had a significantly lower likelihood of being vaccine-hesitant (OR 0.1, 95% CI 0.02-0.5, adjusted p = 0.021) compared to humanities, law and social sciences. For ethnicity, hesitancy among Black students had a high odds ratio (OR 7.01, 95% CI 1.81-27.3, adjusted p-value = 0.021) as compared to White students (Table 2). Students living in private accommodation (OR 0.13, 95% CI 0.04-0.38, adjusted p = 0.004) were less vaccine-hesitant than students living at home. Hesitancy was strongly associated with concerns over side-effects from the AstraZeneca (Table 2) and Pfizer/BioNTech vaccines (OR 2.1, 95% CI 1.5-3.0, adjusted p < 0.001; data not shown). Concerns about side-effects were, however, lower for the Pfizer/BioNTech vaccine (Figure S4). Surprisingly, the multivariate analysis detected a positive association between experiences of a COVID-19-related death in a family member, friend or other contact and vaccine hesitancy (Table 2). This association remained even when only close contacts (friends; family) were considered (odds ratio 6.4, 95% CI 1.9-21.6, adjusted p = 0.02; data not shown). Analysis of VAX Scores The Asian ethnic group had significantly lower VAX scores than both White and other ethnic groups, while the Black ethnic group had significantly lower VAX scores than the White ethnic group, indicating a higher level of vaccine hesitancy in Asian and Black groups (Figure 1). Similarly, we observed that home students had significantly lower VAX scores than students living in private or other accommodation (Figure 1). The mean VAX score for students living in halls was higher than but not significantly different from those living at home, indicating a trend for home students to be more vaccine-hesitant than those who lived in other locations during this academic year. VAX scores in our sample had a low but acceptable internal consistency score (Cronbach's alpha = 0.62; 95% CI 0.58-0.66) and a negative association with our independent measure of vaccine hesitancy (rank biserial correlation r = 0.63; Wilcoxon rank sum test p < 0.0001), as expected.
(Figure 1 caption: VAX scores, derived as described above (see Table S1 and Figure S2), range from 0 to 1 representing high to low vaccine hesitancy. The VAX scores were determined for all individuals in four broad ethnic groups (a) or places of residence during the university term (b). Graphs show violin plots with the median scores indicated by a red circle. Box, IQR; line, IQR + 1.5 times IQR; line within box, median. p-values were derived using pairwise Dunn tests with FDR correction.) Knowledge of MMR and MenACWY Vaccine Status among Students Views on MMR and MenACWY vaccines are shown in Table 3. Very few UK students (2-4%) self-reported not having had the MMR or MenACWY vaccines, but an additional 8% did not know if they had had their MMR vaccine and 23% did not know if they had had their MenACWY vaccine (Table 3). International students were more likely not to know their vaccination status compared to local students (Table 3). Additionally, 15% of UK students did not know that the MMR and MenACWY were available free of charge in the UK and 6% reported not knowing that COVID-19 vaccines were also available free of charge. Again, these proportions were significantly higher among international students. More than half of students favoured on-campus MMR/MenACWY (57%) and COVID-19 (61%) vaccine provision. UK-based international students were also highly supportive of this provision with 52-62% selecting a 'definitely increase' response for these vaccines (Table 3). Univariable and Multivariable Analyses of Attitudes to On-Campus Vaccinations The univariable analysis of on-campus MMR/MenACWY vaccine programmes identified a significant association with MMR vaccine status indicating that those who responded with either a 'Yes' or 'Don't Know' response for their vaccine status were in favour of these programmes (Table S7). However, these responses were not significant in the multivariate analysis after correction for multiple testing (Table S8). The univariable analysis of on-campus COVID-19 vaccine provision found significant associations with COVID-19 vaccine hesitancy and term-time residence (Table S9). In the multivariable analysis, vaccine hesitancy was negatively correlated with on-campus provision (Table S8). Multivariable logistic regression of term-time residence indicated that students studying in halls (OR 3.5, 95% CI 1.6-7.6, adjusted p = 0.021) or private accommodation (OR 2.6, 95% CI 1.3-4.99, adjusted p = 0.03) were in favour of this provision.
Discussion University students are a critical group for illness and spread of infectious diseases and hence are an important target for vaccination programmes. Our survey of University of Leicester students was unique in that we evaluated both attitudes to and mechanisms for uptake of the three major vaccines targeted to this population group in the UK. Our study indicates that ethnicity, concerns over side-effects and place of residence are key determinants of COVID-19 vaccine hesitancy. We also found high levels of uncertainty among students about their MenACWY and MMR vaccine status. As an approach to facilitating vaccine uptake, students were asked about on-campus provision of vaccines and reported being in favour of this approach. Based on our findings we elaborate key recommendations for improved vaccine delivery to this population sector. Our study observed a high level of uptake (64%) despite this age group only becoming eligible for COVID-19 vaccines during the data collection period. These uptake levels were significantly higher than the wider young adult population at that time (p < 0.0001 as compared to Leicester 18-24-year-olds; Figure S1), suggesting that these students were more proactive about accessing COVID-19 vaccines than their peers. High uptake may be partially attributable to a bias for pro-vaccine students to participate in the study and/or to surge vaccinations in the Leicester COVID-19 hotspot just prior to initiation of the survey. Our observation of a high willingness for uptake of COVID-19 vaccines (95%) was similar to the rates reported in an ONS study of UK university students [42] and may reflect the effectiveness of vaccine delivery and information campaigns targeted to students. Intriguingly, 93% (37/40) of the COVID-19 vaccine-hesitant individuals reported having had at least one of the MMR and MenACWY vaccines, suggesting that these individuals are either specifically concerned about the COVID-19 vaccines or that vaccine hesitancy has developed during their transition to adulthood. Determinants of Vaccine Hesitancy among University Students Identifying vaccine-hesitant individuals is a key concern for vaccination programmes. In our study, uptake of the COVID-19 vaccine or intention to vaccinate was found to be relatively high concurring with findings from other studies conducted with student groups. For example, Di Giuseppe et al. [19] performed a study among university students and employees in an Italian university in 2020 and reported that the willingness to obtain a COVID-19 vaccine was 84.1%. Similarly, research conducted among university students in the UK has also reported high vaccine uptake among this group, with uptake increasing over time [43,44]. Although vaccine hesitancy was low, we found that hesitancy was strongly linked to ethnicity and more specifically to Black ethnicity in our univariate and multivariate analyses, respectively. Furthermore, analysis of VAX scores for all individuals showed that Asian and Black ethnic groups had significantly lower VAX scores indicating a general trend towards hesitancy among the minority ethnic groups (Figure 1). Other studies have also found evidence of vaccine hesitancy associated with ethnicity [45][46][47][48][49] and specifically with students of Black ethnicity [48]. Hesitancy in these groups has been linked to discrimination, mistrust of healthcare organisations, misinformation, lower perceived vaccine efficacy/safety [45,50]. 
A substantial proportion of vaccine-hesitant individuals (37.5%; 15/40) in our study agreed with a statement that COVID-19 vaccines had not been thoroughly tested in different ethnic groups (Table S9), suggestive of the element of mistrust that was previously shown to significantly influence vaccine-uptake decision-making [51]. A novel finding was an association between vaccine hesitancy and students who lived at home, and significantly lower average VAX scores for students living at home as compared to those living in private accommodation or other accommodation types (Figure 1). While the home student group was small (i.e., 7% of national and 19% of international students), higher vaccine hesitancy among these students may be due to these students being less concerned about spreading COVID-19 than those living away from home (50% and 62%, respectively). Conversely, the multivariate regression showed lower levels of vaccine hesitancy among students concerned about spreading COVID-19 (OR 0.5, 95% CI 0.3-0.81, p = 0.024) as also observed by Szmyd et al. [52] for a cohort of Polish students. This attitude of vaccine hesitancy among home students may have arisen as a result of reduced day-to-day social interactions leading to a lower perceived risk of the potential for spreading COVID-19. Our study was performed a few months after concerns about side-effects of the AstraZeneca vaccine were widely publicised. Associations between these concerns and vaccine hesitancy were detected in both our univariate and multivariate models, indicating that this important factor could reduce vaccine uptake. Nevertheless, uptake and willingness levels were high, suggesting that other factors may override the perceived risks of side-effects. MMR and MenACWY Vaccine Status and On-Campus Vaccination Preferences An important strategy for increasing MMR and MenACWY vaccine uptake among young adults is making them aware of their vaccination status [53]. This is now demonstrably possible via digital applications such as the NHS App and EU Digital COVID certificate. Our survey found high levels of uncertainty among students about their MMR (11% did not know) and MenACWY (26% did not know) vaccine status and 3-4% who reported no uptake (Table 3). In a 2019/2020 questionnaire performed just prior to the pandemic, 16% and 54% of University of Leicester students reported not knowing their MMR and MenACWY vaccine status, respectively [31]. Estimates of actual vaccine uptake in England indicate that uptake is <90% for both the MMR and MenACWY vaccines (last reported in July 2019 and 2017/2018, respectively) with the MMR vaccination levels being below the >95% coverage recommended by the World Health Organisation for preventing measles and mumps outbreaks [28,30,54]. Many students who are uncertain about their vaccine status may not have had these vaccines, particularly international students (who reported high rates of uncertainty about their MMR and MenACWY vaccine status). These students will be at a higher potential risk of contracting and/or spreading the diseases targeted by these vaccines. Public Health England recommends that anyone who is uncertain about their vaccine status or has missed a vaccine dose should be offered these vaccines [55]. Most students will be unaware of this recommendation; indeed, 20% of UoL students did not know that these vaccines are free (Table 3).
High proportions of UK students and UK-based international students were in favour of on-campus provision of MenACWY, MMR and COVID-19 vaccines, with the latter group potentially reflecting difficulties in understanding how to access the UK medical system. The statistically significant evidence of support for provision of on-campus COVID-19 vaccinations indicates that students value easy access and that this strategy could help to address deficits in vaccine uptake of all vaccines relevant to this age group. Recommendations Harnessing new approaches developed during the UK COVID-19 vaccine roll-out is a potential positive legacy of the pandemic to build back better for future generations. Empowering digitally aware young people to take responsibility for their own health and to engage in community health policies is an achievable, cost-efficient outcome with far-reaching personal and population benefits. A key recommendation is for provision of vaccine status and access information for all vaccines on digital platforms (e.g., the NHS App in the UK) so that individuals can make informed decisions about taking up missed vaccinations. Specific delivery of vaccines to international students should be a gold standard for the university sector, combined with wide adoption of on-campus vaccination programmes. The benefits of implementation of these recommendations would be protection of more individuals and improved population protection through reduced transmission. Strengths and Limitations A strength of this study is that it is the first to simultaneously evaluate uptake, knowledge and attitudes to COVID-19, MMR and MenACWY vaccines among university students. A further strength is the high number and ethnic diversity of the participant population. The use of multivariable regression was a strength that allowed for adjustment for confounders and for identification of significant associations between variables and vaccination parameters with the potential to inform vaccination policies. Findings from this study may, however, be affected by the inherent limitations of cross-sectional studies. Our approach of utilising emails to send out invitations and an online survey form may have limited access for some potential participants. The self-selecting nature of the response cohort and a response rate of 8%, despite incentives, indicate that there may be bias due to demographics. The strengths and limitations arising from a range of demographics were considered above, but it is possible that biases from other unaccounted demographics (e.g., socioeconomic status) may confound generalisability of our data to the wider UK student population. We note that the distributions of ethnic group and gender in our response cohort differ from both the University of Leicester and the wider UK student demographic, and we have attempted to mitigate these effects by using a weighted multivariate analysis. A further limitation is that a pilot study was not completed with the entire questionnaire. A significant potential limitation of our study, and inherent in many studies of vaccination, is enhanced participation by individuals with pro-vaccine attitudes and reduced participation by vaccine-hesitant individuals. Our study may also have been subject to social desirability bias due to the study survey being delivered through the University of Leicester email system and being promoted by the senior university team.
Our fully anonymised survey system and online format were designed to counter both of these biases, while inducements to participate were intended to minimise the former bias. Our level of vaccine hesitancy as determined by vaccine uptake is similar to that reported in other studies. However, this strict determination of vaccine hesitancy may have missed the full range of hesitancy and excluded students who obtained the vaccine despite having a degree of vaccine hesitancy. Conclusions The findings from this study indicate that there may be differences in uptake and access to the COVID-19, MenACWY and MMR vaccines among university students. Students of Black ethnicity and those residing at home were less likely to be vaccinated with COVID-19 vaccines. Further research on the reasons for hesitancy may be required in order to deliver more effective, 'tailored' vaccine information and to develop methods for enhancing trust and acceptance of vaccines in these groups. High levels of uncertainty about personal vaccine status and availability of the MMR and MenACWY vaccines were observed and are likely to impact vaccine uptake. On-campus vaccination delivery was found to be widely favoured, particularly by on-campus and international students. These knowledge gaps and delivery approaches should be considered in future student-focussed vaccination campaigns and explored through additional research. Our findings indicate that adopting 'best practices' of easy access and digital vaccine information within the university sector may break down barriers and future-proof uptake of all required vaccines among students. Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/vaccines10081287/s1, Figure S1: Comparison of the vaccination levels of UniCoVac participants to comparator populations; Figure S2: Comparison of vaccine confidence scores for vaccine-hesitant and non-hesitant participants; Figure S3: Experiences of deaths among friends, family or others; Figure S4: Concerns about the side-effects of COVID-19 vaccines; Table S1: UniCoVac Questionnaire; Table S2: Weighting values for the weighted multivariate analysis; Table S3: Comparison of ethnicity demographics of UniCoVac questionnaire participants to University of Leicester and UK universities; Table S4: Comparison of gender demographics of UniCoVac questionnaire participants to University of Leicester and UK universities; Table S5: Comparison of demographics for Year groups between UniCoVac questionnaire participants and University of Leicester undergraduate population; Table S6: Classification of questionnaire respondents for vaccine hesitancy (dichotomous); Table S7: Univariable analysis of attitudes to on campus delivery of MMR and MenACWY vaccines; Table S8: Multivariate analysis of attitudes to on campus delivery of MMR and MenACWY vaccines; Table S9: Reasons for hesitancy of vaccine-hesitant individuals.
2022-08-12T15:10:42.598Z
2022-08-01T00:00:00.000
{ "year": 2022, "sha1": "c1bf55c8b11937561fc50ccec7a66ec296a12db4", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2076-393X/10/8/1287/pdf?version=1660117372", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "d0c55d439c842608995fdf3a183be28a24360077", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
219091797
pes2o/s2orc
v3-fos-license
Critical state desiccation induced shrinkage of biomass treated compacted soil as pavement foundation The critical-state volumetric shrinkage induced by desiccation of compacted earth treated and improved with Rice Husk Ash (RHA) and utilized as pavement foundation has been investigated in the laboratory. The critical state study was important to determine at what level of earth improvement with the biomass material the treated soft soil can be considered safe in terms of shrinkage behavior when utilized in pavement construction. Rice Husk (RH) is an agricultural waste discharged during rice production and is disposed of in landfills. It causes land pollution in places like Abakaliki and Ebonyi State, Nigeria as a whole, where rice farming is the predominant occupation. Ground and soil improvement with the use of rice husk ash has contributed to the management of solid waste in the developing countries where rice production is a common occupation with its attendant solid waste generation. Moreover, the use of biomass-based binders like RHA has reduced the rate at which oxides of carbon, which contribute to greenhouse emission effects, are released into the atmosphere. This exercise is an environmentally conscious procedure that keeps the environment safe from the hazards of carbon and its oxides. The preliminary investigation on the test soil showed that it is classified as an A-7-6 soil group according to the AASHTO classification. The index properties showed that the soil was highly plastic with high clay content. The RHA was utilized at the rate of 2%, 4%, 6%, 8% and 10% by weight of treated solid to improve the desiccation-induced volumetric shrinkage of the soil at molding moisture conditions of 2% dry, 0%, 2% wet, and 4% wet of optimum moisture. This was necessary to establish the behavior of hydraulically bound foundation materials subjected to dry and wet conditions during the rise and fall of water tables. Results of the laboratory investigation have shown that at 2% dry of optimum moisture, the volumetric shrinkage (VS) behavior performed best and reduced below the critical line of 4% at 8% by weight addition of RHA and maintained that consistent reduction at 10% addition of RHA. Generally, the critical point was achieved at 8% by weight addition of RHA at all the molding moisture conditions. This is an indication that at molding moisture conditions between 2% dry and 4% wet of optimum, an 8% by weight addition of the admixture (RHA) can be utilized as an environmentally conscious construction material to improve the VS of soils for use as hydraulically bound materials in pavement foundation construction. Michael E. ONYIA is a senior lecturer and current head, department of civil engineering at the University of Nigeria, Nsukka. Favour Adaugo ONYELOWE is an undergraduate student of the department of biotechnology, Ebonyi State University, Abakaliki, Nigeria. Duc BUI VAN is a lecturer at the Hanoi University of Mining and Geology, Hanoi, Vietnam and a member of the research group of Geotechnical Engineering, Construction Materials and Sustainability, HUMG, Vietnam. A. Bunyamin SALAHUDEEN is a senior lecturer and researcher at the University of Jos, Nigeria. Adrian O. EBEREMU is an Associate Professor at the Ahmadu Bello University, Zaria, Nigeria with research experience in geotechnical and geoenvironmental engineering. Kolawole J. OSINUBI is an astute professor of geoenvironmental engineering at the Ahmadu Bello University, Zaria, Nigeria. Agapitus A.
AMADI is an Associate Professor of soil mechanics and foundation engineering at the Federal University of Technology, Akure. Eze ONUKWUGHA is a senior academic with teaching, research and management experiences at the Federal Polytechnic, Nekede, Nigeria. Adegboyega O. ODUMADE is a doctoral research student at the University of Nigeria, Nsukka while his main research base is the Alex Ekwueme Federal University Ndufu Alike Ikwo, Nigeria. Ikechukwu C. CHIGBO is an academic at the Alex Ekwueme Federal University Ndufu Alike Ikwo, Nigeria with research and teaching interest in geotechnical engineering. Zubair SAING is an Associate Professor at the Universitas Muhammadiyah Maluku Utara, Ternate, Indonesia. Chidozie IKPA is currently a technologist in the department of civil engineering laboratory of the Alex Ekwueme University, Ndufu Alike Ikwo, Nigeria. Talal AMHADI is a doctoral research student at the Department of Construction and Civil Engineering, Ecole de Technologie Superieure (ETS), University of Quebec, Montreal, Canada. Introduction Solid waste handling and management around the world, and especially in third world countries, have contributed to the alarming amounts of carbon oxide emissions and to the threat of global warming. On the other hand, laterite has served as a foundation material for pavements for as long as roads have existed, and the need to analyze this material and subsequently improve the quality of roads has been a serious issue for highway engineers. One critical concern in the use of laterite is the swell-shrink property of the lateritic material. Expansive soils (lateritic soils with high plasticity index above 17) are prone to large volume changes that are directly proportional to moisture exposure. Therefore, the low bearing strength and compressibility behavior resulting from constant exposure to moisture under hydraulically bound conditions cause severe damage and deterioration to the subgrade. Due to the rise and fall of the water table, which is triggered by rainfall percolation and subsequent capillary action or suction, the compacted sub-grade layer is constantly subjected to swelling and shrinkage cycles [1][2][3][4]. The road embankment is regularly under hydraulically bound condition, and the compacted earth serves as hydraulic barrier in waste containment facilities [5] and also as subgrade/embankment layer of the road. Swelling and shrinkage properties of laterite have contributed immensely to road failures, and therefore measures to ameliorate these would be of great importance to the engineer. The swelling and shrinkage properties of soil are not eliminated in stabilized soil but can be minimized; this is due to the physical and chemical properties of the soil, which swells during water absorption and shrinks when water dries up. Generally, swelling and shrinking are major factors which affect the development of fissures in lateritic soil [6]. These fissures separate the soil surface and gradually close down into the deeper soil and in turn give rise to different problems of stability. Clayey soil is observed to give different characteristics in wet and dry conditions: it possesses desirable sorption characteristics when wet and cracks, with dust emitted, when dry [7]. These cracks break down the continuity of the soil mass, thereby reducing its strength, and soil stability is affected. They grant surface water easy infiltration into the soil. When laterite is soaked with infiltrated water, it collapses and loses strength. Structures built on a soil with stability problems such as this, especially pavements, will in turn collapse with the soil. To a large extent, fissures are closely related to the swell-shrink characteristics. Modification and stabilization are two basic soil improvement techniques [8]; these enhance the mechanical behavior to suit construction requirements. Additives such as cement, lime, fly ash and bituminous materials have been used in common practice to improve the strength and other characteristics of laterite [9]. This has kept the cost of road construction increasingly high due to the increasing cost of the stabilizing agents. Thus, the use of agro-industrial wastes (such as rice husk ash) will lead to a considerably reduced cost of construction, minimize environmental hazards and enhance the economic value chain of the farmers [10]. Rice husk ash has been validated through several researches as an effective partial replacement for stabilizing agents such as cement, lime etc. Globally, it is estimated that an average of about 160 million tons of rice husk ash are produced annually [10]. The addition of RHA reduces the plasticity and increases volume stability as well as the strength of the soil [11]. Combined application of RHA and lime can modify the expansive soil by reducing the swelling index and improve its strength and bearing capacity. From the above background, it can be observed that a relatively new and sustainable approach, environmentally conscious soil improvement, has emerged. It is also important to note at this stage that for materials to be considered for utilization in geotechnical engineering works, a maximum volumetric shrinkage strain of 4% is recommended [12]. This paper seeks to study the critical state desiccation-induced shrinkage of rice husk ash treated compacted earth when used as pavement foundation. Soil: The soil utilized in the laboratory experimentation was obtained by the disturbed sampling method at depths of 1 to 3 meters from a borrow site located in Amaoba, Ikwuano Local Government Area, Abia State, Nigeria, where rice farming and production are prevalent and common. The map location of the soil sample source is presented in Fig. 1. Rice Husk Ash (RHA), in turn, is derived from the direct combustion of the above lignocellulosic biomass released from rice production. Rice Husk Combustion: Rice Husk (RH) is a biofuel that combusts releasing a high amount of carbon and its oxides. The method of combustion employed here is the controlled incineration method developed by K. C. Onyelowe et al. [13] known as the Solid Waste Incineration NaOH Oxides of Carbon Entrapment Model. In this method, the oxides of carbon released are entrapped in a reaction jar containing 40 v/v NaOH solution, which has a very strong affinity with oxides of carbon. At the end of the process, ash is derived and soda ash, baking soda and hydrogen gas are released through the outlet, as presented in the waste valorization and gas sequestration cycle (see Fig. 2). Fig. 2 Biomass valorization, carbon sequestration and hydrogen gas separation and capture cycle. Preparation of Specimens: 2000 g of crushed, open-air dried soil sample passing through sieve number 4 (aperture of 4.76 mm) was secured and readied for the experimentation.
Structures built on a soil with stability problems such as this, especially pavements, will in turn collapse with the soil. To a large extent, fissures are closely related to the swell-shrink characteristics. Modification and stabilization are two basic soil improvement techniques [8]; these enhance the mechanical behavior to suit construction requirements. Additives such as cement, lime, fly ash and bituminous materials have been used in common practice to improve the strength and other characteristics of laterite [9]. This has kept the cost of road construction increasingly high due to the increasing cost of the stabilizing agents. Thus, the use of agro-industrial wastes (such as rice husk ash) will lead to considerably reduced construction cost, minimize environmental hazards and enhance the economic value chain of the farmers [10]. Rice husk ash has been validated through several studies as an effective partial replacement for stabilizing agents such as cement and lime. Globally, it is estimated that an average of about 160 million tons of rice husk ash is produced annually [10]. The addition of RHA reduces the plasticity and increases the volume stability as well as the strength of the soil [11]. The combined application of RHA and lime can modify expansive soil by reducing the swelling index and improving its strength and bearing capacity. From the above background analysis, it can be observed that a relatively new, sustainable and environmentally conscious approach to soil improvement has emerged. It is also important to note at this stage that for materials to be considered for use in geotechnical engineering works, a maximum of 4% volumetric shrinkage strain is recommended [12]. This paper seeks to study the critical state desiccation behavior of compacted earth treated with rice husk ash when used as pavement foundation. Soil: The soil utilized in the laboratory experimentation was obtained by the disturbed sampling method at depths of 1 to 3 meters from a borrow site located in Amaoba, Ikwuano Local Government Area, Abia State, Nigeria, where rice farming and production are prevalent and common. The map location of the soil sample source is presented in Fig. 1. In turn, Rice Husk Ash (RHA) is derived from the direct combustion of the above lignocellulosic biomass released from rice production. Rice Husk Combustion: Rice Husk (RH) is a biofuel whose combustion releases high amounts of carbon and its oxides. The method of combustion employed here is the controlled incineration method developed by K. C. Onyelowe et al. [13], known as the Solid Waste Incineration NaOH Oxides of Carbon Entrapment Model. In this method, the oxides of carbon released are entrapped in a reaction jar containing 40 v/v NaOH solution, which has a very strong affinity for oxides of carbon. At the end of the process, ash is derived, and soda ash, baking soda and hydrogen gas are released through the outlet, as presented in the waste valorization and gas sequestration cycle (Fig. 2: biomass valorization, carbon sequestration and hydrogen gas separation and capture cycle). Preparation of Specimens: 2000 g of crushed, open-air dried soil sample passing through sieve number 4 (aperture 4.76 mm) was secured and readied for the experimentation. The specimens were prepared by deep and thorough blending and mixing of the soil, at molding moisture contents of 2% dry, 0%, 2% wet and 4% wet of optimum moisture content, with varying percentages of RHA in proportions of 2%, 4%, 6%, 8% and 10% by weight of treated soil. The standard Proctor mold was used in this exercise to compact the specimens in accordance with appropriate standards.
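As an aside, the batch quantities implied by this preparation procedure are easy to tabulate. The following is a minimal sketch (not from the paper) that assumes the RHA dosage is a percentage of the dry soil mass, that the target water content is the OMC plus the dry/wet offset, and that the water is dosed on the total dry solids; the 12% OMC used in the example is the natural soil's value reported later, taken here as a simplification since the OMC shifts with RHA content.

```python
# Minimal sketch (assumptions noted above, not the paper's procedure):
# batch quantities for one specimen blend.

def batch_masses(dry_soil_g: float, rha_pct: float, omc_pct: float, offset_pct: float):
    """Return (rha_g, water_g) for one blend.

    dry_soil_g : mass of air-dried soil, e.g. 2000 g
    rha_pct    : RHA dosage by weight of treated soil, e.g. 8 (%)
    omc_pct    : optimum moisture content, e.g. 12 (%)
    offset_pct : molding moisture offset, e.g. -2 (2% dry) or +4 (4% wet)
    """
    rha_g = dry_soil_g * rha_pct / 100.0
    solids_g = dry_soil_g + rha_g          # total dry solids in the blend
    water_g = solids_g * (omc_pct + offset_pct) / 100.0
    return rha_g, water_g

if __name__ == "__main__":
    for rha in (2, 4, 6, 8, 10):
        for off in (-2, 0, 2, 4):
            rha_g, water_g = batch_masses(2000, rha, 12, off)
            print(f"RHA {rha:>2}% | offset {off:+}% -> RHA {rha_g:6.1f} g, water {water_g:6.1f} g")
```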
Index Properties: The moisture content of the natural soil, the specific gravity, and the gradation characteristics were determined in accordance with the experimental protocols presented in British Standard International [14]. Atterberg Limits: The consistency limits (liquid limit, plastic limit and plasticity index) were determined for both the natural and treated soils in accordance with British Standard International [14; 15]. It was observed that the liquid limit decreased from 54% for the natural soil to 31% at the RHA treatment of 8% and increased to 36% at the RHA treatment of 10%. The plastic limit showed similar behavior, decreasing from 26% for the natural soil to 17% at 8% treatment with RHA and increasing again (to 20%) at 10% treatment with RHA. Consequently, the plasticity index showed the same trend and reduced to a medium plastic condition of 14% at 8% treatment with RHA from a highly plastic condition of 28% for the natural soil. Compaction Properties: The treated specimens were compacted using the standard Proctor mold in accordance with British Standard International requirements presented in BS 1377 [14] and BS 1924 [15] to determine the maximum dry density (MDD) and the optimum moisture content (OMC). Volumetric Shrinkage: To determine the volumetric shrinkage (%) of the specimens, the compacted materials were extruded from the standard Proctor compaction mold and air dried for 12 days under temperature conditions of 24 ± 2 °C, and measurements of diameters and heights were taken every three (3) days with the aid of a Vernier caliper of 0.01 mm precision. The changes in the dimensions of the specimens over the drying days were observed and recorded. Table 1 presents the index and preliminary properties of the natural and treated soil. From Table 1, the following can be observed and deduced. Soil Classification using the AASHTO System: The AASHTO soil classification system classifies soils in accordance with their performance as subgrade. To classify the soil, laboratory tests including sieve analysis, hydrometer analysis, and Atterberg limits are used to determine the group of the soil. In the AASHTO system [16], soil is classified into seven major groups, A-1 through A-7, and the soil under investigation is classified as A-7-6, as more than 35% of the sample passed through sieve No. 200 (i.e., 38%), and its liquid limit and plasticity index are 54% and 28%, which exceed 41% and 11% respectively, the minimum requirements for the liquid limit and plasticity index of soils in this category. However, soils in this category are generally rated fair to poor for subgrade use.
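The classification logic just described is mechanical enough to express in a few lines. Below is a minimal sketch (not from the paper) of the A-7 check as stated above: more than 35% fines passing sieve No. 200, liquid limit of at least 41, plasticity index of at least 11; the A-7-5/A-7-6 split on PI relative to LL - 30 follows the usual AASHTO convention, and the function name is illustrative.

```python
# Minimal sketch of the AASHTO A-7 check described above; the A-7-5/A-7-6
# split (PI <= LL - 30 vs PI > LL - 30) follows the usual AASHTO convention.

def classify_a7(fines_pct: float, ll: float, pl: float) -> str:
    """Classify a soil against the A-7 criteria.

    fines_pct : % passing sieve No. 200 (0.075 mm)
    ll        : liquid limit (%)
    pl        : plastic limit (%)
    """
    pi = ll - pl  # plasticity index
    if fines_pct <= 35:
        return "granular group (A-1 to A-3); not A-7"
    if ll >= 41 and pi >= 11:
        return "A-7-6" if pi > ll - 30 else "A-7-5"
    return "other silt-clay group (A-4 to A-6)"

# Test soil from Table 1: 38% fines, LL = 54%, PL = 26% -> PI = 28 > 54 - 30
print(classify_a7(38, 54, 26))  # -> "A-7-6"
```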
Unified Soil Classification System: On the basis of the Unified Soil Classification System (USCS), soil with > 50% of the sample mass retained on the 0.074 mm sieve is termed coarse-grained; if > 50% of the coarse fraction is retained on the 4.76 mm sieve, the soil is classified as gravel, but if ≥ 50% of the coarse fraction passes the 4.76 mm sieve, such soil is sandy soil. The gravelly or sandy soil is classified further as well-graded gravel/sand (GW or SW) or poorly graded gravel/sand (GP or SP) if the percentage of soil fines is < 5%; but if the percentage of soil fines is > 12%, the soil plasticity index is plotted against its liquid limit and the soil is classified as clayey gravel/sand (GC or SC) or silty gravel/sand (GM or SM), depending on its position on the chart. On the other hand, if ≥ 50% of the sample mass passes the 0.074 mm sieve, the soil is classified as fine-grained soil. The plasticity index of the fine-grained soil is then plotted against its liquid limit on the plasticity chart to further distinguish the soil as silt or clay of low, medium, or high plasticity. Based on the Unified Soil Classification System (USCS) results shown in Table 1, the soil is poorly graded (GP) and is a clayey soil of high plasticity (CH). However, it was observed that the addition of RHA at 4%, 6%, 8% and 10% improved the soil to a clayey soil of medium plasticity (CL). Sieve Analysis: The grain size analysis test results for the soil sample are summarized in Table 1. The values obtained from the gradation tests were analyzed with respect to the effect of pre-treatment and soil variation both laterally and with depth. According to British Standard [14], the % passing the 0.075 mm BS sieve should be < 35% for subgrade. From the sieve analysis carried out, 38% of the particles passed through the 0.075 mm sieve. Hence, the soil is not suitable for use as pavement subgrade. Soil Consistency Index: The soil consistency indices examined were the Atterberg limits, which are widely used to differentiate soil types and states. The liquid limit (wL), plastic limit (wP), plasticity index (IP), and linear shrinkage of the soil were determined in order to examine the influence of the RHA contents used in its treatment. The Atterberg limit results in Table 1 clearly indicate that the values of the liquid limit wL (54%) and plastic limit wP (26%) decreased with increasing proportion of RHA: for the liquid and plastic limits respectively, from 54% to 48% and 26% to 23% at 2% RHA, from 48% to 42% and 23% to 20% at 4%, from 42% to 36% and 20% to 19% at 6%, and from 36% to 31% and 19% to 17% at 8%. An increase was also observed in the values of the liquid and plastic limits at 10% addition of RHA, from 31% to 36% and 17% to 20% respectively. According to BS 1377 [14], subgrade/fill material should have a liquid limit ≤ 50% and plasticity index ≤ 30%, while for sub-base the liquid limit should be ≤ 30% and plasticity index ≤ 12%. It can thus be observed that this soil is unsuitable for use as a pavement subgrade in its original state, but stabilizing it using RHA reduced the liquid limit and plasticity index, making it suitable for use as pavement subgrade. Specific Gravity: The specific gravity of the soil samples under investigation was determined using AASHTO T22-03 and T85-91 procedures [17]. The specific gravity of the soil sample from the result above was 2.05, which is characteristic of poorly graded soils. It was also observed that the addition of RHA at 2%, 4%, 6% and 8% increased the specific gravity of the soil sample to 2.08, 2.25, 2.42 and 2.70 respectively. At 10% addition of RHA, a decrease in the specific gravity of the soil from 2.70 to 2.50 was observed. This shows that RHA is a very good stabilizer for poorly graded soils.
MDD (Mg/m³) and OMC (%): A compaction test was carried out and tabulated to determine the compaction properties, i.e., the maximum dry density and optimum moisture content, of the studied soil. The compaction results show that the maximum dry density (MDD) of the sample has a value of 1.85 Mg/m³ and the optimum moisture content (OMC) is 12%. The ranges of values that may be anticipated when using the standard Proctor test methods are: for clay, MDD may fall between 1.44 Mg/m³ and 1.685 Mg/m³ and OMC between 20-30%; for silty clay, MDD is usually between 1.6 Mg/m³ and 1.845 Mg/m³ and OMC between 15-25%; and for sandy clay, MDD usually ranges between 1.76 Mg/m³ and 2.165 Mg/m³ and OMC between 8 and 15%. Thus, looking at the results of the soil samples, it is observed that the sample is a sandy clay. The low values of the dry density indicate that the natural deposits are loose, which accounts for the high void ratio. However, the addition of RHA increased the value of the MDD while decreasing the value of the OMC, respectively from 1.85 Mg/m³ and 12% to 2.00 Mg/m³ and 11% at 2%, 2.30 Mg/m³ and 9% at 4%, 2.70 Mg/m³ and 8% at 6%, and 2.85 Mg/m³ and 8% at 8%. A decrease in MDD and an increase in OMC were also observed at 10% addition of RHA, from 2.85 Mg/m³ and 8% to 2.75 Mg/m³ and 10% respectively. These substantial improvements recorded on the addition of RHA are due to the high binding strength and the aluminosilicate composition of the additive (see Table 2). pH Value: Soil pH is a measure of the acidity or alkalinity of the water held in its pores. The pH scale goes from 0 to 14, with 7 representing neutral. From pH 7 to 0 the soil is increasingly acidic, while from 7 to 14 it is increasingly alkaline. Table 1 shows that the soil has a pH value of 7.2, and the addition of RHA at higher proportions further increases the soil's alkalinity. Effect of Rice Husk Ash (RHA), Drying Time (DT) and Molding Moisture (MM) on the Volumetric Shrinkage of Treated Soil: Up to 10% by weight of RHA was utilized in the treatment of the soft soil, and the volumetric shrinkage (VS) behavior resulting from this treatment procedure was observed. The blending and mixing were carried out at four molding moisture conditions of -2%, 0%, 2% and 4% of optimum moisture, and the specimens were subjected to different drying periods up to a maximum of 12 days, with measurements recorded on days 0, 3, 6, 9 and 12. Tables 3, 4, 5 and 6 present the tabulated behavior of the soil treated with RHA, molded under the different moisture conditions and dried over the different days, together with the graphical behavior of VS against percentage RHA treatment. It can be observed that the VS reduced consistently with increased RHA where the soil was treated at 2% dry of optimum moisture (Fig. 3). Throughout the curing period, the VS also reduced considerably, and all values fell below a VS of 4%, which according to the standard requirement is the critical point above which a material cannot be considered good for use as a subgrade foundation under hydraulically bound conditions. This shows that RHA can be used to treat soft soils of similar properties under 2% dry of optimum molding conditions up to and beyond 8% by weight.
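Since the critical-state judgment above reduces to comparing a measured volumetric shrinkage strain against the 4% line, a minimal sketch of that reduction is given below (assumed, not the paper's code). It takes VS as the percentage loss of cylinder volume between the as-compacted and dried dimensions, with diameters and heights as read from the caliper; the numeric readings in the example are hypothetical.

```python
import math

# Minimal sketch (assumed reduction, not the paper's code): volumetric
# shrinkage of a cylindrical specimen from caliper readings, checked
# against the 4% critical line for hydraulically bound materials [12].

def cylinder_volume(diameter_mm: float, height_mm: float) -> float:
    return math.pi * (diameter_mm / 2.0) ** 2 * height_mm

def volumetric_shrinkage(d0: float, h0: float, d1: float, h1: float) -> float:
    """VS (%) between initial (d0, h0) and dried (d1, h1) dimensions."""
    v0, v1 = cylinder_volume(d0, h0), cylinder_volume(d1, h1)
    return (v0 - v1) / v0 * 100.0

def passes_critical_line(vs_pct: float, limit_pct: float = 4.0) -> bool:
    return vs_pct < limit_pct

# Illustrative (hypothetical) readings for one specimen:
vs = volumetric_shrinkage(d0=105.0, h0=115.5, d1=103.9, h1=114.2)
print(f"VS = {vs:.2f}% -> {'OK' if passes_critical_line(vs) else 'above critical line'}")
```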
This behavior is attributed to the highly pozzolanic properties of RHA, which acted as an environmentally conscious binder improving hydration reaction, cementation, calcination, and flocculation, and consistently reduced the tendency of the treated material to be affected by shrinkage [18; 19]. Another reason for this behavior was the cation exchange that took place at the interface between the polarized soil ions and those of the admixture within the diffused layer. Thirdly, the fineness of the ash material contributed to the filling process within the voids, which improved the space-mass index of the treated material. In Fig. 4, the treatment process was carried out under optimum moisture molding conditions, i.e., 0% of optimum moisture. This condition is hardly experienced in the field, where foundations are subjected to the rise and fall of the water table, as with pavement foundations. This is due to the fact that the water table rises when it rains and drops during dry seasons. In Nigeria, for example, during spring and fall in many parts of the country there is frequent rain, which brings about a rise in the water table, while during winter the water table falls considerably; so pavement foundations, which are hydraulically bound structures, suffer as a result. It can be observed that at 8% by weight addition of RHA, the VS fell below the critical point (4%), which is safe for treated soil materials to be utilized as subgrade foundation materials. In Fig. 5, the wetting moisture condition of molding was introduced, where the treated material was molded at 2% wet of optimum moisture. This happens when the water table rises and exposes foundation materials to moisture ingress or migration through poorly compacted or cracked layers. It can also be observed that at 8% by weight addition of RHA the VS dropped below the 4% point, safe for materials to be used, but beyond that, at 10% addition, the VS abruptly increased again beyond the critical line. This shows that, at the molding moisture condition of 2% wet of optimum, RHA cannot be utilized beyond 8% (see Table 3). This behavior was due to the added moisture, which restarted the hydration reaction and created space for volume changes, thereby increasing the shrinkage properties again in a renewed cycle. In Fig. 6, the molding moisture was increased further, and the condition was not encouraging, as the VS behavior resided above the critical line except, again, at 8% by weight addition of RHA and at 12 days drying time. This is due to excessive molding moisture within the clayey mass of the treated soil, which increases the swelling and VS properties. On the other hand, Figs. 7, 8, 9 and 10 show the response of the VS with respect to the various drying periods in days. It can be observed that the VS reduced with increased exposure to drying conditions, corresponding also to increased addition of RHA. The best result was obtained at the 2% dry of optimum molding moisture condition and at the RHA treatment rate of 8% by weight. This shows that the longer the foundation materials treated with RHA stay through the drying period, the better and more reliable the strength development, with a corresponding improvement in the volumetric shrinkage (VS) [20]. It is also important to remark that the VS obtained at the optimum moisture condition is reliable, but beyond that line, on the wet molding moisture side, the treated material starts to suffer the effect of moisture exposure. It is obvious that at a molding moisture of 4% wet of optimum, the treated soil fails to meet the VS requirements.

Conclusion
The following remarks can be made from the laboratory investigation of the critical state volumetric shrinkage of compacted earth treated and improved with rice husk ash: a.
The index properties of the test soil showed that it is an A-7-6 soil group according to the AASHTO method; it is highly plastic with IP above 17%, poorly graded, and expansive with high shrinkage properties, which makes it undesirable for use as a pavement foundation soil subjected to hydraulically bound conditions. b. The RHA, as a lignocellulosic and amorphous material, has a high pozzolanic reaction with the treated soil, thereby improving the shrinkage properties under different molding moisture conditions. c. The VS of the treated soil reduced consistently with the addition of RHA and with the days of exposure (drying); importantly, the VS reduced below the critical line of 4% at 8% by weight addition of RHA, which was established as the proportion that met the condition for soil material improvement under varying molding moisture conditions between -2% and 4%. Meanwhile, the drying period proved that the longer the material stays within the drying conditions, the better the VS. This shows that water migration and ingress during a rise in the water table in hydraulically bound structures like the pavement foundation cause the VS to increase above the critical level, thereby causing failures of the pavement facilities. d. Finally, and once again, Rice Husk Ash (RHA) has proven to serve as an environmentally conscious construction material with the potential to improve soils and the ground for use as foundation materials. With the above results, it can comfortably replace conventional cement and help rid our planet of the sources of greenhouse emissions for a healthier world of construction activities.
A Study on Comparative Analysis of Major Stock Indices of the World

Experts talk a great deal about the integration of the major stock indices of the world. In this research paper the researcher has tried to establish the integration between major stock indices of the world by calculating correlations and applying ANOVA to the daily returns of 16 major stock indices of the world. It was found that the times preceding and succeeding the opening of a stock market play a vital role in terms of the effect the markets have on each other. To achieve the objectives of the research, the last 5 years of daily closing prices of these 16 indices were collected and analyzed to quantify the level of correlation between the different stock indices. As a sufficient time period is taken and daily closing prices are analyzed, it is found that there is no significant difference in the daily returns of these stock indices.

I. Introduction
A stock market or share market is the aggregation of buyers and sellers of stocks (also called shares), which represent ownership claims on businesses; these may include securities listed on a public stock exchange, as well as stock that is only traded privately, such as shares of private companies which are sold to investors through equity crowdfunding platforms. Investment in the stock market is most often done via brokerage houses and electronic trading platforms. Investors make investments as per their investment strategies. This research focused on the analysis of the major stock indices of the world. The researcher has taken the previous five years' data of sixteen stock indices of the world. The analysis includes data from sixteen different stock indices of countries from continents including Asia, Europe, America, Africa, and Australia. Six stock indices are from Asia and five are from Europe. In data collection, the researcher has taken the opening and closing values of all sixteen stock indices for the last five years (1st Jan 2015 to 31st Dec 2019). The researcher has calculated the returns given by these stock indices on a day-to-day basis and also for the period of five years as a whole. The researcher has also stated the opening and closing times of all the selected stock indices as per Indian Standard Time. This played a very important role in drawing meaningful findings from the report: it helped identify which stock indices open earlier than others, and the movement of an index can be predicted to some extent by studying the movement of other indices which open earlier.

II. Literature Review
Jayshree, 2014: The research gap of this study was identified by conducting a detailed literature review of studies in different countries during recent years. It found support for the popular belief that markets in general, and the Indian market in particular, have been more integrated with other global exchanges from 2002-03 onwards. This can be seen in the fact that the South Asian crisis of the mid-to-late nineties barely affected India, particularly because it was insulated by government policies and was just making the transition. However, in later time periods, the influence of other stock markets on the BSE and NSE increased, but at a very low, almost insignificant level.
Swetadri Samadder, 2018: This study investigated stock market integration amongst major global stock markets, namely Australia, Canada, France, Germany, India, the UK and the USA, to examine the short-run and long-run relationships between the Indian stock market and selected developed stock markets based on time series data for the period between 2001 (January 2) and 2016 (December 31). The study also examined the possibility of portfolio diversification between the Indian stock market and the developed stock markets. Low correlation was observed between the Indian and French stock markets, which indicates possible gains from international diversification. Granger causality test results based on VECM show that the Indian and US stock markets are associated in the long run, though it would take a long time to return to equilibrium, and that the Indian stock market is associated with the French, German and US stock markets in the short run. This entails that investors can earn reasonable benefits from international portfolio diversification in the short run, but benefits from international portfolio diversification in the long run are restricted. A further reviewed study used descriptive statistics and the runs test to analyze the data and reach its results. The results derived by using various parametric and non-parametric tests clearly reject the null hypothesis that the stock markets of India and Pakistan are efficient in weak form. The study provides vital indications to investors, hedgers, arbitragers and speculators, as well as on the relevance of fundamental and technical analysis as far as trading/investing in the capital markets of India and Pakistan is concerned.

III. Research Objectives
➢ To find out the returns given by the selected stock exchanges.
➢ To compare the returns given by the selected stock indices with each other.
➢ To get an idea about the direction of movement of the stock indices.
➢ To find out the correlation between all the selected stock indices, to measure the strength of their co-movement.
➢ To find out whether the difference between the returns of the stock indices is significant or not by performing single-factor ANOVA.

IV. Research Design
This study uses quantitative research, which is an organized way of gathering and analyzing data obtained from diverse sources. It includes the use of computational, statistical and mathematical tools to derive results.

B. Data Collection
This research report is based entirely on secondary data. The required data were collected from the websites and journals included in the literature review.

C. Data Analysis Tools
➢ CAGR formula
➢ ANOVA: Single Factor
(A short computational sketch of these tools is given after the correlation interpretation below.)

V. Data Analysis and Interpretation
A. Returns of stock indices
The American stock exchange (New York Stock Exchange) has given the second highest return among the selected stock exchanges. The National Stock Exchange of India is in third place, with a 46.89% return. The Indian stock exchange has given higher returns than most of the stock exchanges over the 5-year period; it has given the 3rd highest return among the selected stock exchanges. Stock indices with zero or negative returns: 1. Spain (-7.10%), 2. Shanghai (-5.71%), 3. Singapore (-4.23%).

B. Correlation of stock indices
Interpretation: From the correlation analysis of the stock indices, we can say that all the stock indices are positively correlated with each other. That means the stock indices are moving in the same direction: if one index moves in a positive direction, the others will also move in a positive direction.
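To make the computations behind Sections C and V concrete, here is a minimal sketch (not the study's code) of how daily returns, the period CAGR, and a pairwise correlation matrix could be computed from closing prices with pandas; the index names and prices are illustrative placeholders, not the study's data, and a real run would span the full five years of closes.

```python
import pandas as pd

# Illustrative closing prices (the study used 16 indices over 2015-2019);
# index names and values here are placeholders, not the paper's data.
prices = pd.DataFrame(
    {"Nifty": [100.0, 101.2, 100.8, 102.5],
     "NYSE":  [200.0, 201.0, 202.4, 201.9],
     "DAX":   [300.0, 298.5, 299.7, 301.2]},
    index=pd.date_range("2015-01-01", periods=4, freq="B"),
)

# Daily returns: percentage change of closing price from one day to the next.
daily_returns = prices.pct_change().dropna()

# CAGR over the whole period: (end/start)^(1/years) - 1.
# (With only a few illustrative rows this number is not meaningful.)
years = (prices.index[-1] - prices.index[0]).days / 365.25
cagr = (prices.iloc[-1] / prices.iloc[0]) ** (1 / years) - 1

# Pairwise correlation matrix of daily returns (Pearson by default).
corr = daily_returns.corr()

print(cagr.round(4))
print(corr.round(2))
```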
The Asian markets have good correlation with each other, between 20% and 60%. This means the Asian markets are moving in the same direction with good strength. (Table: correlation coefficients of returns between the major stock indices of the world.) The European markets have the highest correlation between them, between 50% and 90%; the European markets are moving in the same direction with high strength. The American markets also have good correlation between them, from 40% to 67%; the American markets are also moving in the same direction, but with comparatively less strength than the European markets.

C. ANOVA Test
H0 - There is no significant difference in the daily returns of the selected major stock indices of the world.
H1 - There is a significant difference in the daily returns of the selected major stock indices of the world.
In our analysis, the F value is lower than the F-crit value, which means the null hypothesis (there is no significant difference in the daily returns of the selected major stock indices of the world) is accepted. From the analysis of variance we found that there is no significant difference in the daily returns of the selected major stock indices of the world. So the null hypothesis is accepted and the alternative hypothesis is rejected.

D. Analysis on a yearly basis
A two-way ANOVA will be performed in which two factors will be studied: 1. the average return given by the individual stock indices over the five years, and 2. the yearly return from all stock indices over the five years on a year-to-year basis.

E. Hypothesis for Stock Indices (Columns)
The table below shows the average return given by the selected major stock indices every year.
H0 - There is no significant difference in the yearly returns of the selected major stock indices of the world.
H1 - There is a significant difference in the yearly returns of the selected major stock indices of the world.

F. Hypothesis for Year (Rows)
H0 - There is no significant difference in the returns of the selected major stock indices of the world on a year-to-year basis.
H1 - There is a significant difference in the returns of the selected major stock indices of the world on a year-to-year basis.
According to this two-way ANOVA, there is a significant difference in the yearly returns from the selected stock indices, and there is no significant difference in the total return from all stock indices on a year-to-year basis.
➢ Rows (yearly return from individual stock indices): Here the F value is bigger than F-crit (F > F-crit), so the null hypothesis H0 (there is no significant difference in the returns of the selected major stock indices of the world on a year-to-year basis) is rejected. This means that there is a significant difference in the total yearly returns from all stock indices.
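As a companion to the hypothesis tests above, the single-factor ANOVA on daily returns can be reproduced in a few lines. This is a minimal sketch (not the study's spreadsheet computation) using scipy, with synthetic return series standing in for the sixteen indices; the means, volatilities and group count are illustrative assumptions.

```python
import numpy as np
from scipy import stats

# Minimal sketch of the single-factor ANOVA described above: one group of
# daily returns per index; H0 = all groups share the same mean daily return.
rng = np.random.default_rng(0)

# Synthetic stand-ins for the daily-return series of three indices
# (the study used sixteen); parameters are illustrative only.
returns_by_index = [
    rng.normal(loc=0.0004, scale=0.010, size=1250),  # ~5 years of trading days
    rng.normal(loc=0.0003, scale=0.012, size=1250),
    rng.normal(loc=0.0005, scale=0.011, size=1250),
]

f_stat, p_value = stats.f_oneway(*returns_by_index)
alpha = 0.05
print(f"F = {f_stat:.3f}, p = {p_value:.3f}")
print("Reject H0" if p_value < alpha else "Fail to reject H0 (no significant difference)")
```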
VI. Findings
➢ The Brazilian, New York and Indian stock exchanges have given the highest returns, respectively, over the five-year period.
➢ Spain, Shanghai and Singapore have given the lowest returns, respectively, over the five-year period; all of them have given negative returns.
➢ The Asian markets open earlier than the other stock markets of the world.
➢ The Indian stock index has given a higher return than the other five Asian stock indices over the last five years.
➢ All of the selected major stock indices in our study are positively correlated, which means they are moving in the same direction.
➢ The European stock markets are highly correlated, with the correlation between them reaching up to 90 percent.
➢ The Asian and American stock indices are moderately correlated, up to 60 and 67 percent respectively.
➢ According to the single-factor analysis of variance, there is no significant difference between the returns of the selected major stock indices of the world.
➢ The highest correlation in our study is between the stock indices of Germany and France, at 92 percent on the basis of the analysis of daily returns.
➢ The lowest correlation in our study is between the stock indices of Brazil and Shanghai, at 11.09 percent.
➢ There is no significant difference in the yearly returns of the selected major stock indices of the world.
➢ There is a significant difference in the returns of the selected major stock indices of the world on a year-to-year basis.

VII. Conclusion
A stock market participant can, to some extent, predict the movement of stock indices that open later based on the performance of earlier-opening stock indices. There is a high probability of a high correlation between stock indices belonging to the same continent. The European stock indices are the most correlated of the selected major stock indices of the world. All sixteen stock indices in our study are positively correlated, which means they are all moving in the same direction. Lower correlation is seen between stock indices belonging to different continents compared with those belonging to the same continent. Positive correlation is seen between all the stock indices, which means all are moving in the same direction; the strength of the co-movement depends upon the level of correlation. Good performance by the major stock indices of a country reflects the good economic health and progress of that country. For example, the stock markets of most countries are facing decreases in value because of the coronavirus pandemic.
Low-Cost Air Quality Sensors: One-Year Field Comparative Measurement of Different Gas Sensors and Particle Counters with Reference Monitors at Tušimice Observatory

With increasing attention on the level of air pollution in different metropolitan and industrial areas worldwide, interest in expanding monitoring networks with low-cost air quality sensors is also increasing. Although the role of these small and affordable sensors is rather supplementary, determination of the measurement uncertainty is one of the main questions of their applicability, because there is no certificate for quality assurance of these non-reference technologies. This paper presents the results of almost one year of field testing measurements, in which the data from different low-cost sensors (for SO2, NO2, O3, and CO: Cairclip, Envea, FR; for PM1, PM2.5, and PM10: PMS7003, Plantower, CHN, and OPC-N2, Alphasense, UK) were compared with co-located reference monitors used within the Czech national ambient air quality monitoring network. The results showed that, in addition to the sensors' inherently reduced measurement accuracy, the data quality depends on the early detection of defective units and of changes caused by the effect of meteorological conditions (the effect of air temperature and humidity on gas sensors, and the effect of air humidity under condensation conditions on particle counters) or by the interference of different pollutants (especially in gas sensors). Comparative measurement is necessary prior to each sensor's field application.

Introduction
Similarly to other countries, in the Czech Republic the public's interest in the current state of ambient air quality is increasing, especially in cities and locations exposed to industrial sources of pollution. Although the national air quality network is representatively deployed over the entire territory, covering all types of monitoring sites (urban, industrial, and background) and potential air pollution sources (traffic, agricultural, and industrial), requests to widen the spatial resolution of the measurement network (to almost personal exposure) are still increasing in the public sector [1][2][3]. During the last three years, the Czech Hydrometeorological Institute (CHMI) has recorded several requests for assistance in processing data from public projects that applied sensors in cities or other places of interest. Most of these projects suffered from severe shortcomings in the following points: appropriate sensor placement (study design) to monitor the given target; the indicated order of these points is very important, because effective and successful sensor application depends mainly on the first four points. Unfortunately, most of the applicants for assistance turn to the experts only at points five or six, but it should be emphasized that even professionally processed data cannot save a poorly designed project. This paper deals with the two main points highlighted above (sensor type selection and measurement quality control). Among all the issues, the selection of appropriate and reliable sensors is always a challenging goal [4]. There is a wide range of air quality sensors available on the market, which grows every year, while no regulatory legislation or standards for quality control yet exist.
Although some activities aimed at creating international standards for air quality sensor evaluation have already started (e.g., by the European Committee for Standardization [5]), in the meantime different specialized institutions are trying to objectively evaluate and report the measurement quality of recently produced sensor units (by performing standard statistical procedures including descriptive statistics and the calculation of correlation coefficients, coefficients of determination, or measurement errors [4,6-9]). A common conclusion of all these independent evaluations is that this miniaturized technology has some limits, manifested primarily by different performance in real outdoor (uncontrolled) conditions than in laboratory evaluations under controlled conditions (e.g., [4,10]). The measurement quality of electrochemical gas sensors is usually susceptible to changes in ambient air temperature (T) and relative humidity (RH) [4] (for the effect of increasing sensor unit temperature on sensor sensitivity or zero offset, see Mead et al. [11]), and to cross-interference of various gases (especially between O3 and NO2 [11,12]), which often leads to overestimation of the real concentrations [7,13]. Miniaturized optical particle counters are susceptible especially to high RH conditions (close to water condensing conditions), which may lead to erroneous estimation of particle size and mass concentration due to potential particle hygroscopic growth (the particles' ability to bind water) [14]. All of the above-mentioned negative effects on sensor measurement quality can be filtered out of the measured datasets using different correction procedures (e.g., [7,9,13,14]) if they are clearly defined at least at the beginning of the measurement (or also during the comparative measurement, ideally applied as a daily correction routine [15]). Some sensor manufacturers state that they have already implemented certain correction algorithms (for the elimination of cross-sensitivity or T and RH effects) in the sensors' processing units. However, the algorithms used are usually not published, so verification of the effectiveness of the applied methods is again impossible without performing comparative control measurement of the given sensor unit. Given that the measurement quality of both gas and aerosol sensors can be affected by changes in ambient meteorological conditions (T, RH), it is evident that the effect of seasonality (the effect of particular months) plays an important role [4,13,16]. The different timing of short-term comparative measurement tests (within days) may, therefore, be one of the main reasons that mixed information about the performance quality of particular sensor types is reported across the literature [4,6,9,14]. Although this is known as one of the shortcomings of this topic [14], there are (so far) only very few studies providing results from long-term field comparisons (lasting at least three months or more) of different types of air quality sensors [13,16-18]. The aim of this study is to show the performance of different Cairpol gas sensor pairs (Cairclip for SO2, NO2, O3, and CO) and miniature Plantower (PMS7003) and Alphasense (OPC-N2) particle counter pairs (for PM1, PM2.5, and PM10 mass concentrations) over almost one year of continuous field comparative measurement against the corresponding reference monitors and an equivalent optical particle monitor used in the CHMI ambient air quality network (all data used are available in Supplement 2).
This paper follows up on recently published studies [13,17] and complements and extends the results obtained.

Study Area and Experimental Design
Field testing of the different kinds of air quality sensors took place at Tušimice Observatory (in the northwest of the Czech Republic; GPS: 50°22′35.59″ N, 13°19′39.76″ E), a professional station of the CHMI focused on the integration of ground-based and remote sensing methods in meteorology and air quality measurement. The station is located in a semi-agricultural and semi-industrial background, surrounded by three brown-coal-fired power plants and a spacious open-pit brown coal mine. All the sensor types were tested in pairs (to control intra-sensor variability) and installed in appropriate housings (ventilated boxes; Figure 1) on the roof of the automatic ambient air quality monitoring reference station. Comparative testing measurement was carried out continuously between the end of 2017 and the beginning of 2019 (the Cairpol gas sensors measured from November 2017 until September 2018, the Plantower particle counters from March 2018 until December 2018, and the Alphasense from September 2018 until January 2019).

Cairpol Gas Sensors
The detailed specifications of the Cairclip SO2, NO2, O3, and CO electrochemical sensors (Cairpol, Envea, France (FR); Figure 1a) have been described in our previous study [13], so here we cover them only briefly. Cairclip sensors are small, tube-shaped, autonomous measuring units (weight 55 g). Each sensor unit has its own battery with an operating time of 24 to 36 h (or it can be connected directly to a 5 V DC power supply with a current demand of 500 mA) and a small screen, where the measured values and device status are displayed. The fundamental technical parameters of the particular sensors are listed in Table 1. All the units have an optionally adjustable measuring period from 1 min to 15 or 60 min. The operating conditions stated by the manufacturer are temperatures (T) from −20 °C to 50 °C and relative humidity (RH) from 15% to 90% (non-condensing conditions). Special attention should be given to the O3 Cairpol sensor, which is actually a combined type of sensor for O3/NO2. It has the same limit of detection, range of measurement, uncertainty, and admitted effect of temperature on zero value drift as the NO2 Cairclip sensor itself [19,20]. Although the exact algorithm for the sensor's response to O3 separately is not known [19,21,22], given the strong positive correlation of the concentrations measured by the combined O3 sensor with the concentrations measured by the Cairpol sensors for SO2, NO2, or CO (see the Results section (Section 3.1)), we assume that it indicates a modified sum of O3, NO2, and possibly other oxidants' values [11-13,22,23].
Plantower and Alphasense Miniature Particle Counters
The PMS7003 optical particle counters (Plantower, China (CHN); Figure 1b) for measuring the mass concentrations of PM1, PM2.5, and PM10 (by converting the particle number concentration in an air volume of 0.1 L) are small boxes with dimensions of 48 × 37 × 12 mm. This particle analyzer is not a fully autonomous measurement unit, because it is powered externally (power supply 4.5-5.5 V DC, current demand 100 mA) and needs to be connected to a processing unit (Figure 1b). The fundamental technical parameters of the PMS7003 analyzers are listed in Table 2. The sampling frequency is 1 s. The operating conditions stated by the manufacturer are T from −10 °C to 60 °C and RH from 0% to 99% [26]. The OPC-N2 (Alphasense, United Kingdom (UK); Figure 1c) optical particle counters for measuring particle number concentration and the mass concentrations of PM1, PM2.5, and PM10 (conversion to an air volume of 1.2 L) are small measuring units with dimensions of 75 × 60 × 65 mm. Similarly to the previous analyzers (PMS7003 from Plantower), these units also need to be powered externally (power supply 4.8-5.2 V DC, current demand 175 mA) and connected to a processing unit (Figure 1c). For technical specifications, see Table 2 again. The sampling interval is optional, from 1 to 10 s. The operating conditions are stated as T from −20 °C to 50 °C and RH from 0% to 95% (under non-condensing conditions) [27].

Table 2. Technical parameters of the Plantower and Alphasense optical particle counters as specified by the manufacturers [26,27].

Reference Monitors and Other Equivalent Methods
During the testing measurement, all the above-mentioned sensors were compared to the appropriate reference monitors (RMs) or to other equivalent analyzers (the Fidas200 particle analyzer) currently used in the CHMI ambient air quality monitoring network. For monitoring gaseous pollutants, RMs from the Teledyne API company (San Diego, CA, USA) were used: the SO2 analyzer T100 (UV fluorescence method with minimum measurement range 0-50 ppb, maximum range 0-20 ppm, and limit of detection 0.4 ppb), the NO2 analyzer T200 (chemiluminescence detection method with the same measurement ranges and limit of detection as the aforementioned T100), and the O3 analyzer T400 (UV absorption method with minimum range 0-100 ppb, maximum range 0-10 ppm, and limit of detection <0.4 ppb) [28].
For monitoring aerosol concentrations in the PM2.5 and PM10 fractions, reference monitors MP101M (Environment SA, Envea, FR) based on radiometry (beta ray absorption) were used (measurement range 0-10,000 µg/m³, limit of detection 0.5 µg/m³) [29]. Given the similarity of the measurement technology, we also used for the sensor comparison the Fidas200 (Palas, Germany (DE)) optical particle counter for PM1, PM2.5, and PM10 (measuring particle number concentrations in up to 64 size channels with a range of 1-20,000 particles/cm³ and mass concentrations with a range of 0-1500 µg/m³) [30]. The Fidas200 optical counter is equipped with an Intelligent Aerosol Drying System (IADS), which ensures water removal before particle measurement. During the last year, the Fidas200 was found to be a suitable equivalent monitor to an RM for the determination of PM mass concentrations in ambient air in the Czech Republic (according to successful tests of equivalence, published only within the CHMI [31]).

Data Analysis and Data Control
The measured data from all tested sensors were cleaned of any outages and negative values (treated as missing values, referred to below as not available (NA)) before processing. However, to show the real sensor performance, all the other measured values (even possible outliers) were left in the dataset for basic statistical processing in the first stage. The hourly averages of all measured concentrations were calculated from the 10-min data (gas pollutants in ppb or ppm, aerosol particles in µg/m³). In the event that more than 40% of the values were missing in any particular hour, the whole hourly average was considered to be NA. Firstly, summary statistics of all values measured during the field-testing period were computed (mean values and standard deviations (SDs) of the concentrations measured by the sensor pairs and by the RMs), including the intra-sensor correlation within pairs of identical sensor types. Given the non-normal distribution of most of the measured values (Figures S1-S4 in Supplement 1), non-parametric Spearman's rank correlation coefficients (rS) were used in this study (similarly to Bauerová et al. [13] and Fishbain et al. [32]). Further summary statistics of the different sensors' performance were calculated, including the presence (indicating the sensors' availability over time as a percentage [32]), the correlation with the RMs and other equivalent monitors (rS), and measurement errors indicating the differences between the sensor and RM measurements (calculated as the mean bias error (MBE), mean absolute error (MAE) and root mean square error (RMSE); see, e.g., Feenstra et al. [6]). Finally, in the second stage, identification of significant outliers (defined as values higher than 3× the maximum hourly average RM concentration reached during the testing period [13,33]) was performed and their share of the dataset was expressed as a percentage. In the case of suitable sensors (with correlation coefficients rS from the comparison with the RMs of at least >0.50), the coefficients of determination (R²) were determined according to the best-fitting regression equation in comparison with the RM (not only linear relationships). In addition, the potential effect of ambient T or RH on the sensors' measurement quality was assessed according to the values of rS and R².
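The averaging rule and error metrics described here are straightforward to implement. Below is a minimal sketch (assumed, not the authors' code) using pandas and scipy, with a synthetic 10-min series standing in for one sensor/RM pair; the numbers are illustrative only.

```python
import numpy as np
import pandas as pd
from scipy.stats import spearmanr

# Synthetic 10-min concentrations for one sensor/RM pair (illustrative only).
rng = np.random.default_rng(1)
t = pd.date_range("2018-03-01", periods=6 * 24 * 7, freq="10min")
rm = pd.Series(np.clip(rng.normal(20, 8, t.size), 0, None), index=t)
sensor = rm * 1.1 + rng.normal(0, 3, t.size)  # biased, noisy "sensor"

def hourly_mean(series: pd.Series, max_missing: float = 0.4) -> pd.Series:
    """Hourly mean; hours with more than 40% of 10-min values missing -> NA."""
    grouped = series.resample("1h")
    means = grouped.mean()
    coverage = grouped.count() / 6.0  # six 10-min slots per hour
    return means.where(coverage >= 1 - max_missing)

s_h, rm_h = hourly_mean(sensor), hourly_mean(rm)
ok = s_h.notna() & rm_h.notna()
diff = s_h[ok] - rm_h[ok]

mbe = diff.mean()                      # mean bias error
mae = diff.abs().mean()                # mean absolute error
rmse = np.sqrt((diff ** 2).mean())     # root mean square error
rs, _ = spearmanr(s_h[ok], rm_h[ok])   # Spearman rank correlation

print(f"MBE={mbe:.2f}, MAE={mae:.2f}, RMSE={rmse:.2f}, rS={rs:.2f}")
```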
Cairpol Gas Sensors
The summary statistics of the concentrations measured by the different Cairclip gas sensors and by the corresponding RMs are listed in Table 3 (except for an RM for CO, which is not available at the testing station). The results showed that, despite the significant strong correlations (for all sensor types rS > 0.80) of the measured concentrations within the pairs of particular sensors (intra-sensor correlation), significant data drifts were also found within the pairs of SO2 and CO sensors (in the case of the SO2 sensors, a difference in mean concentration values of about 60 ppb from the beginning of the measurement, i.e., in 100% of the data; in the case of the CO sensors, a difference of about 10 ppm after three months of measurement, i.e., in 75.2% of the data; see Figure 2 or Figures S1 and S4 in Supplement 1). In the case of the NO2 and O3 sensors, such data drifts within the pairs did not appear during the entire testing measurement (only for a selected time period; see Figure 2). The percentage of valid data (hourly average concentrations of the different gases) measured by the Cairclip sensors in particular months of the testing period is shown in Table S1 in Supplement 1. The comparison with the RMs showed very weak measurement quality in the case of the SO2 and NO2 sensors. There were large differences in the measured concentrations (Table 3, Figures S1 and S2 in Supplement 1), and the calculated measurement errors were very high (Table 4). In the case of the SO2 Cairclip sensors, no correlation with the RM was detected (rS around zero; Figure S5); in the case of the NO2 sensors, the correlation with the RM was significant but negative (rS = −0.26 in both tested units; Table 4, Figure S6). The strongest correlations (rS = 0.68 for both units) and the lowest measurement errors in the inter-comparison with the RM were detected for the combined O3 Cairclip sensors (Table 4, Figures S3 and S7; compared with the concentrations measured by the O3 RM). The concentrations measured by these combined sensors were, however, also significantly positively inter-correlated with the concentrations measured by all the other types of Cairclip gas sensors (SO2 sensors rS > 0.98, NO2 sensors rS = 1.00, CO sensors rS > 0.79; see Table S2 in Supplement 1 and Figure 2). In all the Cairclip sensors, significant correlations of the measured gas concentrations with the ambient air T (rS > 0.79) and RH (rS < −0.50) were found (Table S2 in Supplement 1). Even in the case of the best-performing combined O3 sensors, the measurement quality changed significantly during the testing period (see the differences between the cold period from November to March and the warm period from April to September in Figure 3). Better sensor performance was reached during the warmer months, when the real O3 concentrations reached values over 40 ppb (Figure S8). The presence of significant outliers (values > 3× the maximum RM hourly average concentration) was 51% and 10% in the case of the SO2 Cairclip sensors (SO2_Cair1 and SO2_Cair2, respectively), and 0.01% in the case of the NO2 Cairclip sensors (both units). In the combined O3 Cairclip sensors, no significant outliers were detected during the testing period.
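The outlier rule applied above (a value counts as a significant outlier when it exceeds 3× the maximum hourly average RM concentration of the testing period) is simple to express. Here is a minimal sketch (assumed, not the authors' code) that flags such values and reports their share, reusing the hourly series s_h and rm_h from the previous sketch.

```python
import pandas as pd

def outlier_share(sensor_hourly: pd.Series, rm_hourly: pd.Series) -> float:
    """Percentage of valid sensor values exceeding 3x the max hourly RM value."""
    threshold = 3.0 * rm_hourly.max()
    valid = sensor_hourly.dropna()
    return (valid > threshold).mean() * 100.0

# s_h and rm_h are the hourly series computed in the previous sketch.
share = outlier_share(s_h, rm_h)
print(f"significant outliers: {share:.2f}% of valid hourly values")
```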
Plantower and Alphasense Particle Counters
The summary statistics of the PM1, PM2.5, and PM10 mass concentrations measured by the two tested types of particle counters (PMS7003 Plantower and OPC-N2 Alphasense) and by the RMs or a Fidas200 monitor are listed in Table 5 (in the case of the RM, no PM1 mass concentrations are available). The intra-sensor comparison within pairs showed highly significant correlations in the measured PM concentrations for both the Plantower and Alphasense sensors (both types rS > 0.95 in all PM fractions; see Table 5). In both sensor types, no significant data drifts were found within the sensor pairs, although the OPC-N2 sensors had a tendency to differ in mean and standard deviation (SD) concentration values (especially in the case of the PM2.5 and PM10 fractions, which had a higher occurrence of outlying values; see Table 5). The percentage of valid data (hourly average PM concentrations) measured by the Plantower and Alphasense particle counters in particular months of the testing period is shown in Tables S3 and S4, respectively, in Supplement 1. The comparison with the RMs and the equivalent Fidas200 showed very good measurement quality of the PMS7003 Plantower particle counters. The means and SDs of all measured PM fraction concentrations corresponded very well with both control monitors, and no significant outliers appeared (Table 5, Figures S9-S11 in Supplement 1). This also resulted in a significant positive correlation of the measured data (with the optical Fidas200, rS > 0.70 for all fractions measured within the sensor pair; with the radiometric RM, rS > 0.62 for the PM2.5 and PM10 fractions; Table 6, Figure 4 and Figure S12) and in a low measurement error of these sensing units (Table 6). In the case of the OPC-N2 Alphasense particle counters, the measurement quality was considerably weaker in comparison with the Fidas200 or RM. The mean values and SDs of the concentrations measured by the sensors and by the control monitors differed significantly in all PM fractions (Table 5, Figures S13-S15). Despite a strong positive correlation with both control monitors (with the Fidas200, rS > 0.75 for all fractions; with the RM, rS > 0.63 for PM2.5 and PM10; Table 6, Figure 5, and Figure S16 in Supplement 1), the high values of the measurement errors showed the presence of extreme outliers in all PM fractions analyzed by the OPC-N2 sensors (Table 6; the maximum PM1 concentration measured by the OPC-N2 was 256.6 µg/m³, the maximum PM2.5 concentration was 569.8 µg/m³, and the maximum PM10 concentration was 9036.7 µg/m³). In both tested particle counter types, the concentrations of all measured fractions correlated weakly negatively (yet statistically significantly) with ambient T (Plantower sensors rS < −0.24, Alphasense sensors rS < −0.16) and significantly positively with RH (Plantower sensors rS > 0.46, Alphasense sensors rS > 0.57; see Tables S5 and S6 in Supplement 1). In the case of the PMS7003 particle counters, no extreme outliers in the measured concentrations were detected in relation to the effect of changing T and RH.
Conversely, in the OPC-N2 particle counters, 5.4% of the PM2.5 data and 6.2% of the PM10 data were determined to be extreme outliers (>3× the maximum RM hourly average concentration); most of them were detected at times of high ambient RH (RH > 90%; see Figure 6).

Table 5. Summary statistics of particulate matter (PM1, PM2.5, and PM10) concentrations (µg/m³) measured by pairs of Plantower (PMS7003) and Alphasense (OPC-N2) particle counters (ID 1 and 2) and by a corresponding Fidas200 optical particle counter and reference monitors (RMs).

Table 6. Summary of Plantower (PMS7003) and Alphasense (OPC-N2) particle counters' performance statistics in comparison with a Fidas200 optical particle counter and the reference monitors (RMs).

Figure 5. The relationship between the PM2.5 and PM10 concentrations (µg/m³) measured by Alphasense OPC-N2 particle counters (Alpha1 in gray, Alpha2 in blue; including outliers) and by control monitors: (a) PM2.5 concentration comparison with the equivalent optical Fidas200 monitor; (b) PM2.5 concentration comparison with the radiometric RM; (c) PM10 concentration comparison with the equivalent optical Fidas200 monitor; (d) PM10 concentration comparison with the radiometric RM. In the case of PM2.5, the coefficients of determination (R²) were estimated from the linear best-fit regression lines. In the case of PM10, the y-axis is converted to a logarithmic scale and the R² values were estimated from the power best-fit regression lines.

Discussion
Small sensors can undoubtedly serve as an affordable and easy-to-use complementary solution for the further development of ambient air quality monitoring networks. Nevertheless, due to the limits of this miniaturized technology, special attention should be given to the selection of suitable sensor types during the planning of a specific deployment and to the subsequent data control and verification before and continuously during each application. This study was performed to show the individual measurement quality of different gas sensors and particle counters and the possible changes during long-lasting outdoor measurement when compared to corresponding reference monitors.

Cairpol Gas Sensors
The results for the Cairpol Cairclip gas sensors showed a highly unsatisfactory performance of the SO2, NO2, and CO sensor units.
The measured SO2 and CO concentrations drifted significantly within pairs of identical sensor types. Such data drifts in electrochemical gas sensors can have several causes [34]; one that is often discussed is the aging of the sensor unit [35][36][37]. In our case, the data drift of SO2 concentrations was observed in the first sensor (SO2_Cair1) right from the beginning of the testing measurement (see Figure S1 in Supplement 1). Therefore, we assumed the presence of a defective or poorly calibrated unit (from the manufacturer). Conversely, in the case of CO, the data drift appeared in the second sensor unit (CO_Cair2) after three months of measurement. Given that it occurred at exactly the same time as a certain increase in the concentrations measured by the NO2 and O3 sensors (Figure 2), an erroneous measurement caused by sudden interference with other gases cannot be ruled out.

Furthermore, the comparison of the Cairpol gas sensors with the corresponding RMs showed very weak measurement quality in the case of the SO2 and NO2 Cairclip sensors (no relationship and a significantly negative relationship with the RM concentrations, respectively). Although we found no other study describing the field performance of SO2 Cairclip sensors (except our previous study [13]), weak results in SO2 measurement were also recorded for other sensors from different manufacturers [9,22]. In the case of the NO2 Cairclip sensors, some comparative field studies are available, but the reported measurement quality varies widely [7,18,22]. Our results showed that, in both cases, the concentrations measured by the Cairclip sensors substantially overestimated the real SO2 and NO2 concentrations, and, therefore, these sensor units were again evaluated as non-compliant and probably defective.

The best performance was observed in the combined O3 Cairclip sensors, where both the intra-sensor comparison in pairs and the comparison with the RM achieved very satisfactory results (similarly to Jiao et al. [22]). However, it should be pointed out that the quality of the O3 sensor measurement changed significantly during the year: in the warmer months (from April to September), the sensors' performance was significantly better (R² up to 0.79) than in the colder months (when almost no relationship with the RM was observed, R² < 0.34 [13]; Figure 3). This can be explained by the lowered reactivity of this sensor at the low ambient O3 concentrations of the colder months and, on the other hand, by its better reactivity during the warmer months, when O3 concentrations are naturally higher. Given the improvement of the mutual relationship between the sensor and RM O3 measurements at real concentrations over 40 ppb (see Figure S8 in Supplement 1), we assume that the effective limit of detection of the combined O3 Cairclip sensor may actually be at least 50% higher than the 20 ppb stated by the manufacturer. At the same time, given the strong correlation of the combined O3 sensors with all the other Cairpol sensors (Figure 2), we cannot rule out a certain effect of interference with other gases. Continuous data control and post-measurement data validation (by the application of correction indices [7,9,13,14]) should, therefore, always be considered here.
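To make the recommended data control concrete, the following is a minimal Python sketch (not the pipeline used in this study) of how a gas sensor's hourly series could be checked against a collocated RM month by month and then linearly corrected; the DataFrame layout and the column names sensor_ppb and rm_ppb are hypothetical.

```python
import numpy as np
import pandas as pd
from scipy.stats import spearmanr

def validate_and_correct(df: pd.DataFrame) -> pd.DataFrame:
    """Check a low-cost gas sensor against a collocated reference monitor.

    df: hourly averages indexed by timestamp, with hypothetical columns
    'sensor_ppb' (low-cost sensor) and 'rm_ppb' (reference monitor).
    """
    # Month-by-month Spearman correlation: a drop toward zero flags periods
    # of weak sensor response (e.g., low winter O3 concentrations) or the
    # onset of drift within the series.
    for month, g in df.groupby(df.index.to_period("M")):
        rho, p = spearmanr(g["sensor_ppb"], g["rm_ppb"], nan_policy="omit")
        print(f"{month}: Spearman rho = {rho:.2f} (p = {p:.3g}, n = {len(g)})")

    # Simple post-measurement correction: least-squares linear fit of the
    # sensor readings to the RM readings, applied back to the sensor series.
    valid = df.dropna(subset=["sensor_ppb", "rm_ppb"])
    slope, intercept = np.polyfit(valid["sensor_ppb"], valid["rm_ppb"], deg=1)
    df["sensor_corrected_ppb"] = slope * df["sensor_ppb"] + intercept
    return df
```

More elaborate correction indices could additionally account for T, RH, and cross-interfering gases, all of which correlated with the sensor readings in this study.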
It should also be mentioned that, during our long-term field testing, we reached the maximum lifetime of the electrochemical Cairclip sensors after 11 months of continuous measurement. After this period, all gas concentrations measured by all sensor units drifted significantly to unrealistically stable values and, therefore, the sensors were dismounted. To our knowledge, there is no other study with which to compare the operational lifetime of these sensors.

Plantower and Alphasense Particle Counters

Similarly to some other studies focused on field comparative measurement of Plantower and Alphasense miniature particle counters [14,16,38], we found very good results in the intra-sensor comparison of measured concentrations within pairs of identical sensor types. In both cases, no significant data drifts appeared during the testing period, although for the Alphasense OPC-N2 particle counters, a higher variability in PM concentrations measured within the sensor pair was recorded (especially in maximum concentrations; see also Feenstra et al. [6] or Bulot et al. [16]).

The comparison with the RMs and the Fidas200 monitor showed very satisfactory results in the case of the Plantower PMS7003 sensors (see also the course of hourly concentrations in Figure S17 in Supplement 1). In all PM fractions, the concentrations measured by the Plantower sensors were systematically slightly overestimated relative to the concentrations measured by both control monitors (similarly to the studies by Zheng et al. [38] and Bulot et al. [16]). Naturally, better performance was found in the comparison with the optical Fidas200 monitor (given the similarity of the measuring method) than with the radiometric RMs. Unlike with the Alphasense sensors, we did not detect any extreme outliers for the Plantower sensors during the entire testing period (lasting 10 months). With regard to the very good performance of the Plantower sensors, we assume that the manufacturer may have applied a very effective correction algorithm in the sensor processing unit.

In the case of the Alphasense OPC-N2 sensors, we found, similarly to Crilley et al. [14] and Feinberg et al. [18], data artifacts (outliers in the form of extremely high concentrations) in all of the aerosol fractions (PM1, PM2.5, and PM10) under high ambient relative humidity conditions (RH > 80%; Figure 6). The most noticeable effect of high air humidity on measurement error was seen in the case of the PM10 fraction, where the maximum hourly average concentrations even reached above 9000 µg/m³. We assume that such extreme measurement errors were caused by the high particle hygroscopicity under condensation conditions (increased particle water content), which resulted in incorrect detection of particle size and, consequently, of its mass concentration [14]. Overall, the OPC-N2 counters tended to significantly overestimate the real PM concentrations (Figure S17 in Supplement 1), which is reflected in weaker and not always linear relationships with the control monitors (Figure 5; similarly to other studies [6,14,18]). Therefore, we join in the recommendation for continuous data control and post-measurement data validation when using Alphasense particle counters for ambient air monitoring [14]. In contrast to the Cairpol gas sensors, the maximum operational lifetime was not reached in any of the tested particle counters before completion of the comparative measurement (the Plantower sensors were tested over a 10-month period, the Alphasense sensors only for five months).
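As a concrete illustration of the outlier screening described above, here is a minimal Python sketch that applies the >3× maximum RM hourly concentration rule and tags high-RH hours; the DataFrame layout and column names are hypothetical, and the 80% RH limit reflects the artifact range reported above.

```python
import pandas as pd

RH_LIMIT = 80.0  # % relative humidity; OPC-N2 artifacts clustered above this

def flag_outliers(df: pd.DataFrame, pm_col: str, rm_col: str,
                  rh_col: str) -> pd.DataFrame:
    """Flag extreme particle-counter outliers and relate them to ambient RH.

    df: hourly averages with sensor PM, reference-monitor PM, and RH columns
    (the column names are illustrative, not from the original study).
    """
    # Rule used above: a sensor hour is an extreme outlier when it exceeds
    # three times the maximum hourly concentration seen by the RM.
    threshold = 3.0 * df[rm_col].max()
    df["extreme_outlier"] = df[pm_col] > threshold
    df["high_rh"] = df[rh_col] > RH_LIMIT
    share = 100.0 * df["extreme_outlier"].mean()
    at_high_rh = 100.0 * df.loc[df["extreme_outlier"], "high_rh"].mean()
    print(f"{pm_col}: {share:.1f}% extreme outliers; "
          f"{at_high_rh:.1f}% of them at RH > {RH_LIMIT:.0f}%")
    return df
```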
Conclusions

Four miniaturized Cairpol gas sensors and two different types of miniaturized particle counters (Plantower and Alphasense) were tested in duplicate against collocated reference and other control monitors in a long-lasting comparative measurement at the Tušimice Observatory. The SO2, NO2, and CO Cairclip sensors were identified as inappropriate due to their weak measurement quality or data inconsistency within sensor pairs. The combined O3 Cairpol sensor achieved satisfactory results, but interference with meteorological conditions and other gases definitely needs to be considered during data processing and interpretation. Among the particle counters, the Plantower PMS7003 clearly showed a higher measurement quality than the Alphasense OPC-N2 counters, which were more strongly affected by high relative humidity and had a higher occurrence of measurement outliers.

We believe that this paper can help to clarify real (outdoor) sensor performance and the possible changes in measurement quality over time. The results of this study show that long-term comparison studies are of great importance and should be further supported and developed by scientists. They can serve the general public as important material for decision making when purchasing suitable sensor types and for avoiding common mistakes during their application, and the manufacturers for identifying the main issues and for further product development. Finally, we still believe that comparative measurements with RMs must be implemented at least before each field application (and ideally also at given intervals during the measurement), because this is the only way to detect possible sensor failures or systematic and random measurement deviations of the sensors.
How using smart buildings technology can improve indoor environmental quality in educational buildings

An educational building must integrate smart building strategies to ensure indoor environmental quality. Thermal, acoustic, and visual comfort and indoor air quality must be considered; otherwise, the building risks developing sick building syndrome. Smart buildings solve this potential problem by providing a highly efficient living ambience that includes safety, comfort, and a good quality of living, learning, and working experience, helping users achieve their best possible performance. These buildings should integrate advanced technologies such as automated systems and architectural skins, together with well-designed, functional spaces and architectural features that act as active bioclimatic solutions. The following is a case study of an architectural project for an elementary and junior high school academic campus in the state of Nuevo León, Mexico, which has to deal with the extreme climate conditions of the location while applying the best alternative and bioclimatic strategies through the implementation of inmotics, a responsive architectural skin, sustainable construction systems, and native vegetation. In doing so, a comprehensive, environmentally friendly building is created, taking advantage of the surrounding natural conditions and using the latest environmentally oriented systems and technologies. The result is a healthy, safe, and productive space for its users that greatly benefits the teaching-learning process.

Introduction

As Francisco Vargas and Isabel Gallego [1] mention, the different ways in which we interpret environmental conditions have created concepts such as the sick building, in relation to air or interior environmental quality, which seeks to understand the complexity of an indoor microclimate, the possible contaminants in closed environments, and their implications for the health of a building's users. The optimal conditions of interior environments must generate comfort and well-being for people, promoting a healthy lifestyle at work, at school, and in leisure and entertainment areas. Society needs clean, safe, and well-ventilated spaces that allow people to carry out their daily activities adequately. This is why it is necessary to "achieve a balance between social standards, the use of energy and sustainable development, seeking comfort without polluting and without increasing the consumption of energy sources that degrade the environment" [1].

According to the National Institute for Occupational Safety and Health (2013) [2], Indoor Environmental Quality (IEQ) is the quality of a building's environment in relation to the health and wellbeing of its occupants. Many people have noticed that they develop symptoms that cannot be linked to a specific illness or cause, or that their health and performance are affected, when they spend time in certain buildings. The particular conditions of these buildings give rise to sick building syndrome, observed in many contemporary cities. According to Lili Rodríguez and Jorge Alonzo [3], this syndrome is characterized by a series of discomforts experienced by occupants inside buildings, caused by physical, chemical, microbiological, and outdoor contaminants, and even by psychosocial factors. Sick building syndrome arises "when the users of a building experience discomfort in their general health conditions, causing poor comfort, absenteeism and low productivity" [3].
Understanding and applying strategies that generate indoor environmental quality in buildings is of crucial importance, especially in buildings for educational purposes. According to Brink, Loomans, Hobach, and Kort, these optimal conditions in the classrooms can contribute to the quality of the teaching-learning process: "Thermal, acoustic, visual and indoor air quality conditions can be of great support for tasks in the classroom of teachers and students" [4].

The buildings that best respond to environmental design are smart buildings. Their goal is to provide greater efficiency and control in the building's operation, delivering a better quality of life to users inside the building [5]. One active strategy used in these buildings is the implementation of an architectural skin, an advanced technology that protects the facades from climatic constraints and regulates the temperature within the building. A responsive architectural skin allows the building to respond to ongoing needs through passive and active strategies that have a characteristic impact on building performance. This can be achieved by control strategies and renewable energy systems that improve the energy consumption of the building, its comfort, and its functional characteristics for its occupants [6].

This paper explains the problems and effects generated by contemporary buildings. Strategies that address the environmental conditions of buildings are presented through the concepts of intelligent buildings and responsive architecture, which help reduce a building's energy consumption and maintain an ideal temperature in its interior spaces. We seek to highlight the strategies established to create optimal indoor environmental conditions in a Montessori school project as a case study. The school is located in San Pedro Garza García, a municipality of the great metropolitan area of the city of Monterrey, Nuevo León, a region of extreme climate conditions in northeast Mexico (semi-arid climate with hot summers, mild winters, and little rain year-round), in which technology will be implemented through the incorporation of a responsive architectural skin and an artisanal construction system.

The problem with contemporary buildings

Nowadays there is a trend toward buildings with facades designed entirely as curtain walls. However, these designs present a serious energy problem in cities with a Mediterranean climate: "These glass buildings consume more energy than others because in summer there is a huge demand for air conditioning, while in winter, on the coldest days, there is a loss of heat that has to be compensated," says Antonio Cerrillo [7]. Although this can be used as a strategy to capture light and heat in Nordic areas, these types of buildings become the worst possible construction in Mediterranean cities from an environmental perspective.

Glass creates a good aesthetic, since it gives a sense of spaciousness and allows spaces to be well lit. For this reason, this practice is often observed in office buildings, where large windows are used. However, these buildings generate a large energy expenditure due to their high cooling demand caused by internal heat generation. There are insulating types of glass, but their energy-efficiency benefits are smaller compared with opaque walls.
When designing, it is necessary to take the external environment into account and not to rely on air conditioning systems, which consume a great deal of electricity. As mentioned by María Ángela Daza, Diana Ximena Martínez, and Paola Andrea Caro [8], "over time, changes in the designs of buildings designed to improve energy efficiency have made homes and offices more and more hermetic". Therefore, these types of buildings, which are called "sick buildings", have deficiencies in their indoor climate that affect people's health, causing symptoms such as headaches, irritation of the mucous membranes, and a feeling of fatigue. In addition, technological advances in construction have led to the use of synthetic materials that can easily accumulate pollutants such as volatile compounds, which come from different sources such as heating systems, paints and varnishes in furniture, construction materials, or cleaning supplies. Derek Clements-Croome and Li Bazhan [9] confirm that low indoor air quality and pollution are the factors that contribute the most to sick building syndrome, in addition to extreme temperature variations and very humid, stale, or dry air, that is to say, poor-quality conditions of an indoor environment. This creates undesirable environments within buildings, which lowers the productivity of their occupants.

It is necessary to create building designs that are well integrated with the surrounding environment and climate in order to improve their energy performance. It is important to acknowledge that glass and steel facades behave differently depending on the orientation, time, and specific climatic conditions of the place. In Mexico, due to its geographic position, if these types of facades are located on the west face of a building, they receive the highest solar incidence of the day, overheating the entire building. In contrast, if they are located on the north facade, they capture indirect light, which can be used as a lighting strategy. Therefore, the use of glass facades should not be ruled out; rather, designers should learn how, where, and when to use them in order to provide a feeling of comfort inside the building.

3 Opting for an environmental design

As Randal Thomas [10] mentions, environmental design is not a new concept but has been applied since the Industrial Revolution, when engineers became able to construct buildings in almost any climate around the world. However, to achieve this, high-grade energy has been used to address problems caused by weather conditions and problems resulting from building design. Therefore, it is necessary to reduce the dependency of buildings on fossil fuels while still achieving the environmental quality that provides comfort to users inside the building. Thomas advises taking advantage of natural sources of energy, lighting, and ventilation, such as the sun and winds. We must stop relying on fuel reserves that turn into carbon dioxide when we generate energy with them; in doing so, our activity also releases other pollutants into the environment, with catastrophic effects on the ozone layer. It is necessary to redesign the way we construct buildings to improve environmental quality while efficiently reducing the negative impact on the global environment.

According to Rafael Serra and Helena Coch [11], humans are related to the environment through energy exchanges of all kinds. The human body seeks to maintain stable internal conditions in a fluctuating environment. This is known as homeostasis.
We have specific mechanisms, known as homeostatic mechanisms, that regulate the body's response to environmental loads, such as climatic, lighting, acoustic, and even psychological ones. It is necessary to analyze the quality of the environment to establish the conditions needed to provide adequate, sound surroundings. To achieve this, the architecture should adapt accordingly to provide comfort to the users. A good design of habitable environments should integrate environmental parameters, "energetic manifestations, which express the physical and environmental characteristics in a living space, regardless of the use of the space and its occupants", as well as user comfort factors, "conditions outside the environment that influence its appreciation" [11].

Thermal comfort

In addition to what was mentioned by Rafael Serra and Helena Coch [11], one of the general principles of environmental comfort is thermal comfort, which considers the thermal parameters of the environment that influence the human body: air temperature, radiant temperature, relative air humidity, and air velocity. According to Dima Stouhi [12], one of the first elements for providing thermal comfort is to create an envelope in the building that acts as a filter between the outdoor climate and the indoor environment. In this way, an interior microclimate is established and stabilized, reducing the use of energy-consuming mechanical systems. It is necessary to consider insulation, solar gain, thermal inertia, and air ventilation. Solar gain is the "amount of heat that is generated as the rays are absorbed by the building" [12]. It is controlled by the orientation of the volumes, the wall-window ratio, the level of insulation, and the amount of shade from nearby elements. Thermal inertia is "how slowly the temperature of a building reaches that of its surroundings" and is controlled by the materials and type of structure of the building. Likewise, comprehensive and sound management of indoor air circulation is necessary so that ventilation efficiently removes environmental humidity.

Acoustic comfort

According to Lindsey Leardi [13], "acoustic quality depends on how well sound sources are controlled". Unwanted noises are taken into account, whether they are external, internal, caused by impact, or emitted by equipment. "The way the human ear perceives sound depends directly on the levels of reverberation and absorption within the building". To achieve acoustic comfort, one must first produce a good design based on a complete site analysis. However, in the end, the acoustic performance is in the hands of the finishes. In order to obtain a quality acoustic space, special care must be taken when selecting technologically advanced materials in order to moderate noise levels. Acoustics is a key element in all types of buildings, such as hospitals, theatres, cultural centres, housing, workplaces, and, above all, educational buildings.

Visual comfort

José Tomás Franco [14] mentions that this comfort has to do with the correct balance between natural and artificial lighting. Factors taken into account for the environmental quality of a space include the level of flickering, glare, blindness caused by lighting, and the uniform distribution of light. "Too much and too little light can cause visual discomfort. Large changes in light levels or sharp contrast (perceived as glare) can cause stress and fatigue" [15]. Natural light should always be prioritized in any design.
This is the most comfortable lighting for the human eye, which adapts to it naturally. Taking advantage of natural lighting has a positive impact on our health and well-being, increasing our lucidity and productivity during the day, which are necessary factors in an optimal educational setting. It also generates significant energy savings by decreasing the excessive use of artificial light. According to Tomás Franco [14], when designing a new project, the orientation of the building on the site must be carefully considered in order to make the most and best possible use of natural light through the correct design of its openings.

Smart buildings

Buildings that pursue environmental quality are called smart buildings. Monteiro, Kowal, Azevedo, Naked, Hammad, and Pereira [16] explain that there are different interpretations of these buildings among researchers. Moten [17] mentions that a smart building is the incorporation of technologies and systems to improve the quality of life of users and the operation of the building. Likewise, Sinopoli [18] explains that an intelligent building is crafted through the use of constructive and automated technological systems that provide safety and comfort in people's lifestyles. Buckman, Mayfield, and Beck also state that "intelligent buildings are constructions that integrate and account for intelligence, operation, control and materials and construction as an entire building system with adaptability, not reactivity, at the core, in order to meet the drivers for building progression: energy and efficiency, longevity, and comfort and satisfaction" [19].

Some researchers have focused on the term intelligent buildings, while others speak of smart buildings; however, there is a difference between these two terms [16]. Hoy and Tara [20] explain the evolution of construction technologies from primitive to simple, automated, intelligent, and finally smart buildings. A simple building is one whose systems are controlled manually, and an automated building is one whose systems are controlled automatically through the use of sensors that adapt to the needs of the users. Smart buildings, on the other hand, collect information about how the building is working in real time, using a series of sensors and systems that record the number of people in each of the spaces in any given period of time. The term smart building has been known since 1980, in an era characterized by telecommunications and computer industry technologies [18]. In addition, this concept has evolved through the development of new technologies that have generated smarter processes to be used in a more flexible, interactive, and sustainable way [16].

Characteristics of smart buildings

According to Sinopoli [18], intelligent buildings are characterized by the integration of technological breakthroughs and the union of building systems, providing the most useful approach to designing and implementing technological construction systems. Similarly, smart buildings are a critical component of the energy use and sustainability of buildings and of the smart grid. Smart buildings differ from previous generations in their ability to respond and adapt and in the flexibility of their resources. Thus, adaptability is a fundamental feature of smart buildings.
It is the ability to use internal and external information from various sources to prepare the building for specific events before they occur, such as people's perceptions of comfort at different times of the year, changes in the use of the building, and the variable characteristics of the occupancy data. Besides adaptability, which is the fundamental piece of an intelligent building, there are four other factors to consider in defining a building as an intelligent system: intelligence, operation, materials and design, and control.

Positive impact of smart buildings on society and architecture

Sinopoli [18] mentions that smart buildings use technology to make the building more efficient, create a healthy and productive environment for users, and promote a safe environment with efficient and sustainable energy. According to Ochoa and Guedi [21], the design of smart buildings has created a new expectation for the architectural design solution, which makes it possible to harness intelligence while using the least amount of energy possible in the service of sustainability and environmental design. This can be achieved through the design of passive and active strategies in an intelligent building, delivering the best comfort to the user with the least possible use of energy. Active strategies are those that adjust the building according to the internal or external conditions of its environment, achieving comfort and reducing the energy consumption of the building through the most advanced technologies, while passive strategies are those employed by the designer with the aim of responding to the climatic conditions or needs of the environment.

Architectural skin

According to Urquiza [22], one of the ideas that has changed with technological advances is the concept of the facade, an element that is no longer identified only as the separation between the interior and exterior space of the building, but that also seeks to interact with the environment around it. In this way, the facade becomes the envelope that covers the exterior of a building, also known as the architectural skin. In addition, Anwar and Wu [23] state that the skin in architectural design is a metaphor for the envelopes of buildings: the layer that protects them from the external factors of the environment and helps keep an adequate interior temperature for users. Likewise, the architectural skin aims to establish a dialogue with the user, who can interact with it through its technological elements or unique characteristics. The design of an architectural skin should have a sustainable objective, considering energy management and the minimization of energy consumption, and using materials and construction details that allow the transfer of energy through the envelope of the building [22].

Responsive architecture

The skin of a traditional building provides stability, regulates air pressure, and protects the interior of the building from external environmental factors such as radiation, rain, and wind. The architectural skin is an essential component for solving the problems of responsive architecture. Intelligence can be implemented in the architectural skin as a solution for the environmental elements that affect the building [24]. Barozzi, Lienhard, Zanelli, and Monticelli mention the following concept: "Responsive: the term refers to a reactive system, i.e. a system that moves and is opposed to the term manipulate, which means that the system is moved and controlled from outside and refers to passive environments" [25].
Responsive architecture thus measures environmental factors so that buildings can adapt their shape, color, or character accordingly. The goal is to promote the energy efficiency of buildings through the implementation of technologies such as control systems and sensors [24].

Responsive architectural skin

As mentioned by Urquiza [22], at the beginning of the technological era, energy was not directed toward preserving the environment but was used to maintain the interior comfort of the building, increasing machine consumption and pollution; an efficient interaction with nature was therefore sought later. As stated by Faviono, Goia, Perino, and Serra [26], the technologies that promote the relationship between renewable energy production and the building load profile are fundamental, since they make use of the responsive elements of buildings. These are building elements with a dynamic and active behaviour that improve the exploitation of available natural energy sources. In this technological context, Advanced Integrated Facades (AIF) are a responsive building element that controls the energy flowing from the outside to the inside. This facade is an interesting application of advanced solutions for building envelopes, which have been incorporated into research activities and industrial developments. Likewise, Multifunctional Facade Modules have been developed: architectural skins that present a dynamic behaviour and incorporate different systems that modulate energy and mass flows to reduce energy demand and maximize the quality of the environment. The authors mention some characteristics of AIFs, such as the ability to transmit, reject, and moderate solar radiation and heat losses inside the building, and the ability to preserve and redirect thermal energy.

6 Case study

The project is a school based on the Montessori method [27]: "It is characterized by providing a prepared environment: orderly, aesthetic, simple, real". These prepared or planned environments give children opportunities to engage in work that is interesting to each of them. They are free to focus on, discover, and investigate what interests them most at any given time. They work in large, open, and well-illuminated spaces, adapted to the size of children and young people, which promote their independence in the exploration and learning process.

The educational compound is made up of three main areas. First, the administrative area: management offices, admissions, a teacher training center, meeting rooms, and filing areas. Second, academic spaces, including classrooms, a library, music, art, dance, and computer rooms, as well as a cafeteria, theatre, and gym. Finally, the sports fields, which include basketball, soccer, and volleyball courts, as well as a recreation space for parents.

The objective of this project is to develop an architectural proposal that introduces an innovative solution to fulfil the needs of a Montessori-method-based school, adapted to present conditions and to those of future generations, for a basic education campus (pre-elementary, elementary, and junior high school) in the metropolitan area of Monterrey, Nuevo León, Mexico. School should be an ideal space for children to learn and develop their cognitive, motor, and verbal skills, as well as an interest in discovering the world around them. Our project is developed north of the municipality of San Pedro Garza García in Nuevo León, Mexico.
According to data from INEGI (Instituto Nacional de Estadística, Geografía e Informática de México; National Institute of Statistics, Geography, and Informatics) [28], 68% of the territory of Nuevo León has an extremely dry and semi-dry climate, especially towards the Sierra Madre Oriental. This project is located on the slopes of this mountain chain, bordering the Santa Catarina River, which has a basin area of 1,804.7 km² and rises more than 2,200 m above sea level. According to the Comisión Nacional del Agua (National Water Commission) [29], the riverbed is mostly dry, and Servicios de Agua y Drenaje de Monterrey (Monterrey Water and Sewage Services) have located around 30 water wells with depths between 80 and 114 meters. The annual temperature of Nuevo León ranges on average between 9 °C and 34 °C, and rainfall is, in general, quite scarce, although some regions can register annual rainfall greater than 800 mm. The general annual average of the State oscillates between 300 and 600 mm, with rains occurring mainly during the summer, making it a very humid place. According to data from the Secretaría de Desarrollo Sustentable de Nuevo León (Secretariat of Sustainable Development of Nuevo León) [30], the average relative humidity of the San Pedro Garza García area is 59.9%, the ideal being between 40% and 60%; wind is also considered an important factor in the variation of temperature. Within the metropolitan area, this specific location reports one of the highest wind speeds, rising above 22 km/h from the east and southeast. According to the Red de Monitoreo del Gobierno del Estado de Nuevo León (Monitoring Network of the Government of the State of Nuevo León) [31], the property is located within an area that has an acceptable, regular air and health index of 65 µg/m³ of PM10 (airborne particles smaller than 10 micrometers), the ideal being less than 50 µg/m³.

Strategies to generate indoor environmental quality

Considering the extreme climatic conditions of the area, the design sought to implement strategies for capturing solar heat and lighting, using the winds, and catching rainwater, in order to protect the building from the environment and generate comfort and indoor environmental quality.

Rammed earth construction system

To build this project, it was decided to apply rammed earth, a construction system in which the transformation of the soil and the building are part of the same process. It consists of filling a formwork with layers of soil and compacting each of them with a tamper until the desired height of the wall is reached. The formwork is made up of two parallel wooden planks joined by a crossbar, producing walls of considerable thickness, varying between 50 and 70 cm. As Byron Febres explains [32], the rammed earth technique has been used for millennia in regions such as India, China, Egypt, Bolivia, and Peru. It is known that this building technique was used as early as 1700 BC by the Chinese to create fortifications and palaces, and even in the construction of some sections of the Great Wall. Since then, it has been used by many civilizations. This building method gained popularity and mass application in the 19th century, when French builders developed manuals, translated into different languages, explaining this construction process, thus spreading the technique around the world. Rammed earth is a method that requires highly specialized labour, and self-construction is not encouraged. However, rammed earth is an excellent material.
According to the engineer Byron Febres [32], it breathes like clay, is hygroscopic (meaning it can absorb moisture from the environment), and has the ability to expand. It can store cold and heat, making it a great insulator, and it has low thermal expansion as well as great acoustic behaviour due to its thickness. Also, once the material has hardened, it has good resistance against wear and tear. These characteristics make this technique an excellent bioclimatic strategy that helps maintain a stable temperature inside the building throughout the year, regardless of the intense cold or extreme heat experienced in Nuevo León. In addition, the rammed earth wall provides a characteristic aesthetic that generates in users a sense of appreciation for the materials of the earth and the natural environment. It is used on the exterior walls of the buildings, and the facades include large windows to take advantage of natural light. As it is not possible to have these glass facades on all sides of each building, they were implemented only on the main facade of each one, thus solving the lighting issue.

Native Vegetation

To counteract the solar incidence on the most vulnerable facades, native vegetation is used as protection. Special attention is paid to the green areas of the site because of the way they behave with respect to the climate. Green areas cool down spaces and increase the humidity of the surrounding environment. In this way, the predominant presence of trees and shrubs of different and varied dimensions will lower the temperature of the place, generating a microclimate. The vegetation is used as a passive cooling strategy, since it makes it possible to manipulate the winds by taking advantage of the volumetric shape of the complex. In the same way, Zupancic, Westmacott, and Buthius [33] mention that this strategy has the ability to filter pollutants in the air and mitigate the temperature of both the air and the soil, providing health benefits for users. Users benefit directly from the vegetation through the shade it produces, as well as indirectly through the phenomenon of evapotranspiration: "the water is returned from the earth's surface to the atmosphere in form of water vapor" [34].

According to the Plan Municipal de Desarrollo Urbano (Municipal Plan of Urban Development) [35], the city of San Pedro Garza García is in close proximity to the natural areas of the Sierra Madre Oriental and Cerro de Las Mitras (Mitras Mountain). The project seeks to highlight the privileged location of the property and the high quality of its native trees, since the area where it is built preserves its original vegetation. With the help of these strategies, the goal is to generate a microclimate that provides environmental quality within the complex. A microclimate is the set of climatic conditions typical of a geographic point or reduced area and represents a local modification of the general climate of the region due to the influence of different ecological factors [36]. To create it, in addition to the use of earth (the rammed earth technique) and the vast native trees and vegetation within the property, a responsive architectural skin is used.

Responsive Skin

The responsive skin that protects the building is made up of a self-supporting structure in the shape of two intersecting octagonal domes of approximately 150 meters in diameter, which encompass all of the project's buildings.
This structure is made up of modules in the form of trapezoidal prisms; because these are adapted to the curved shape of the dome, their measurements vary widely, from 2.5 m x 4 m to 0.9 m x 3.25 m. The modules were generated parametrically, so they are more open in their upper part, depending on the entry of the prevailing winds. That is why the largest modules are located in the southeast part of the structure.

Fig. 2. Module structure.

A responsive skin is made up of smart materials that respond to changing environmental conditions. In the upper part of each module, an ETFE (ethylene tetrafluoroethylene) element is used, which reacts to temperature variations by rotating, allowing or preventing the passage of ultraviolet rays through the structure. This material has been increasingly used in architecture due to its bioclimatic qualities. According to Patrick Lynch [37], it is a film made of a heat-resistant polymer with a great ability to regulate the environmental conditions inside buildings. It also has a low coefficient of friction that prevents dust and dirt from sticking to its surface, thus reducing maintenance requirements. It provides high light transmission, letting in natural sunlight and allowing visibility through the structure, and it has high chemical resistance, remaining stable under changes in temperature and ultraviolet rays. In short, it is a robust thermal insulator and excellent solar protection for the project.

Likewise, Urquiza [22] mentions that the responsive architectural skin behaves as an active bioclimatic strategy that acts according to external environmental circumstances to maintain an ideal thermal condition in the interior spaces of the building, without using an additional energy system such as heating or air conditioning. The author explains that there are two main factors that influence human bioclimatic comfort: air temperature and relative humidity. These factors can be modified by three parameters: air flow velocity, solar radiation, and evaporation. These parameters are used in the project by controlling the speed of the air flow through natural cross ventilation, which generates a cool environment in each of the buildings. Solar radiation is controlled by regulating the temperature throughout the buildings with the help of the architectural skin provided by the modules and by offsetting the solar incidence on the most vulnerable facades. Finally, the integration of native vegetation in the project helps supply the humid air needed to control evaporation.

Inmotics

Smart buildings have a wide variety of automated and efficient technologies that reduce energy consumption, improve the quality of life of users, and record necessary information through different sensors and control systems. One of these technologies is inmotics. Antonio Núñez [38] mentions that inmotics "encompasses the integration of the facilities of tertiary sector buildings", like hotels, hospitals, office buildings, universities, or large areas such as parking lots and sports fields. Inmotics, as building automation, is an active control system that allows the building to function intelligently, just as home automation does with homes. "Its systems allow an efficient management of energy consumption, security, accessibility and comfort of the building" [39]. The project has automated systems that respond efficiently to user needs. It has a hybrid ventilation system that makes use of natural and mechanical ventilation.
The automated ventilation system detects when spaces need ventilation and can also be programmed to detect levels of contaminants, such as CO2. This mechanical ventilation is used on top of natural ventilation, as the latter is not a constant and steady stream [40]. Energy-saving elements such as dimmers, motion sensors, and control systems are also added to help reduce the energy consumed by lighting, heating, and cooling in each building. To enhance the interior comfort of the building, systems are used to regulate the temperature and the intensity of the light, making the most of natural light with the help of automated curtains and blinds, as well as sound and video systems operated through applications on cell phones and screens. Accessibility and security features are built into the automated systems, allocating customized buzzers so users can communicate internally or externally. Access control is also applied by means of high-security systems that authenticate the identity of visitors, as well as alarms and surveillance cameras to supervise the safety of the smallest children. Likewise, the project incorporates a fire protection system that not only activates the alarms in the event of a gas leak, but also shuts down the power to prevent electrical sparks from triggering fires and opens the windows automatically to ventilate the affected spaces.
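To make the automation rules just described more concrete, the following is a minimal Python sketch of one pass of such an inmotics rule set; the thresholds, sensor fields, and returned actuator states are illustrative assumptions, since the project describes the behavior only qualitatively.

```python
from dataclasses import dataclass

# Hypothetical thresholds; the project describes the behavior qualitatively.
CO2_LIMIT_PPM = 1000      # above this, run mechanical ventilation
GAS_LEAK_LIMIT = 0.10     # arbitrary normalized leak-detector reading

@dataclass
class Readings:
    co2_ppm: float
    occupied: bool
    gas_leak: float

def control_step(r: Readings) -> dict:
    """One pass of a simple inmotics rule set for a classroom.

    Returns desired actuator states instead of driving real hardware.
    """
    out = {
        # Mechanical ventilation only tops up natural cross ventilation.
        "mech_ventilation": r.co2_ppm > CO2_LIMIT_PPM,
        # Motion-sensor lighting: lights off in unoccupied rooms.
        "lights": r.occupied,
        "alarm": False,
        "mains_power": True,
        "windows_open": False,
    }
    if r.gas_leak > GAS_LEAK_LIMIT:
        out["alarm"] = True          # sound the alarm
        out["mains_power"] = False   # cut power to avoid electrical sparks
        out["windows_open"] = True   # ventilate the affected space
    return out

print(control_step(Readings(co2_ppm=1250.0, occupied=True, gas_leak=0.02)))
```

In a real installation, such rules would run on the building management system and drive actual actuators rather than returning a dictionary.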
Conclusion

The bibliography analyzed for this approach confirms the importance of having optimal indoor environmental quality in an educational building and supports the fact that it has a positive impact on user productivity. According to Jungsoo Kim and Richard de Dear [41], temperature, noise volume, the size of spaces, visual perception, colors, and textures are some basic factors that need to stay at acceptable levels, as they greatly influence the wellbeing of people within a building. Likewise, air quality, the amount of light, and visual comfort are secondary factors that proportionally decrease or increase the level of user satisfaction. This is essential in educational buildings, since, according to Montiel Vaquer [42], there is a correlation between the architectural environment and the children or young people who live, develop, or interact within it. A quality environment can help meet the objectives of an educational project. An educational project should not be conceptualized only around protection; it must be designed with a view to how its future users will develop optimally. Constructions that are more affective and sensitive to the specific needs of children and young people are needed. According to Mora [43], mental performance deteriorates when people do not feel comfortable in specific spaces or when the conditions of the place are not the most suitable for a specific mental activity.

The implementation of the rammed earth wall technique is ideal for the construction of educational buildings in this region of Nuevo León in northeastern Mexico, due to its thermal and acoustic characteristics. It transforms educational sites into optimal spaces where children and young people experience a quality, fulfilling learning process. The use of native trees and vegetation around the property, along with the artisanal construction system of rammed earth, generates a sense of belonging in the users, making them feel connected with nature. This feeling of connection is important for awakening interest in and respect for their environment and will encourage them to care for and protect it, just as the space itself does for them.

Likewise, the responsive architectural skin serves as an environmentally oriented technology, efficiently using climatic elements such as wind and sunlight to regulate the climatic characteristics inside the premises, creating an ideal microclimate for users. It works as an active bioclimatic strategy that adapts to the climatic conditions of the site and has positive impacts on the interior and exterior spaces of the project. The architectural skin is large enough to protect the buildings and is fully customized, allowing convenient levels of light and ventilation to flow through the building and contributing to the environmental quality needed for the educational complex. The building incorporates other technological alternatives, such as inmotics, which adapts to the current rhythm of life of children, teachers, and collaborators in the different spaces of the project, as well as technological advances that contribute to making it a more human, flexible, and multifunctional project. In addition to providing energy efficiency, comfort, and environmental benefits, the project has a strong social impact, as mentioned by Juan Carlos Sarasúa [44]: people with disabilities and the elderly are provided with greater autonomy, empowering them to control the facilities through a remote control system adapted to their specific needs. The combination of environmental design strategies implemented in this project responds to the current problems of the site, provides better comfort to users, reduces energy consumption, and takes advantage of the natural factors of the environment, such as ventilation, natural lighting, and sunlight. This project becomes a meeting point between technology, education, architecture, and nature.
Proposal for an Empirical Japanese Diet Score and the Japanese Diet Pyramid

A traditional Japanese diet (JD) has been widely regarded as healthy, contributing to longevity. The modern Japanese lifestyle has become markedly westernized, and it is speculated that the number of people who eat a JD is decreasing. A simple way to identify people with low adherence to the JD would help improve dietary habits. We developed a simple assessment tool that can capture the JD and examined factors associated with low adherence to it. A total of 1458 subjects aged 18 to 84 years completed a brief self-administered diet history questionnaire. We constructed an empirical Japanese diet score (eJDS) consisting of 12 items drawn from the common characteristics of a JD. Among our participants, 47.7% of subjects reported low adherence to the JD and only 11.1% demonstrated high adherence. In multivariate logistic regression analysis, younger persons, physically inactive persons, and heavy drinkers were associated with low adherence to the JD. Based on the cutoff values of the eJDS, we propose a Japanese diet pyramid that is visually easy to use. In conclusion, the eJDS and the Japanese diet pyramid will be useful tools for nutrition education and dietary guidance.

Introduction

The traditional Japanese diet (JD) has been widely regarded as healthy, contributing to longevity and protecting against several noncommunicable diseases (NCD) [1,2]. However, the modern Japanese lifestyle has become markedly westernized, and it is speculated that the number of people who eat a traditional JD is decreasing [3]. Several studies have examined factors associated with low adherence to healthy diets, such as the Mediterranean diet [4][5][6], because finding a low-adherence group and improving their dietary habits seems to contribute to reducing the risk of NCD. Similarly, the detection of people with low adherence to a traditional JD would help improve dietary life, but this has been studied very little in Japan [7]. In addition, the association between JD adherence and nutrient intakes is interesting but has not been considered in detail, so further examination is needed.

In order to solve the above problems, a method to detect JD adherence is important. Previous studies have derived traditional JD patterns by factor or principal component analysis. Characteristic components include rice [8], miso soup [8][9][10], soybean products [7][8][9][10][11][12][13][14][15], vegetables [7][8][9][10][11][12][13][14][15], fruits [7,[12][13][14][15], fish [9][10][11][12][13]15], Japanese pickles [11,12,15], seaweed [9][10][11][12][13][14][15], mushrooms [7,12,14,15], and green tea [7,[9][10][11][12][13]. However, approaches using a posteriori dietary pattern analysis involve critical methodological issues [16]. The statistical analyses needed for a posteriori dietary pattern analysis are complicated. Additionally, generalizability to other study populations may be limited, because dietary patterns defined by factor or principal component analysis are extracted from a selected study population. Therefore, it is important to develop a hypothesis-driven dietary score that can capture a traditional JD and can be applied across different study populations. Although some scores exist to identify a JD [17][18][19][20], they focus on low-salt diets, the overall balance of meals, and main and side dishes. In addition, there is no visual representation of a traditional JD pattern like the Mediterranean diet pyramid [21][22][23].
The aim of this study is to develop an empirical Japanese diet score (eJDS) as a simple assessment tool, to investigate the relationship between adherence to the Japanese diet and nutritional intake, to build a visually comprehensible JD pyramid, and to examine factors associated with low adherence to a traditional JD.

Participants

A total of 1607 adults, aged 18 to 84 years, were invited to participate in this study. Participants were recruited from eight workplaces, one local college, and four different areas in central Kinki, Japan. To encourage participation in this study, we took posters or recruitment forms to the college, workplaces, and community-based health classes. Of those invited, 105 persons refused to participate. The remaining 1502 eligible subjects were given a diet history questionnaire. Among them, we excluded 9 subjects who did not complete the questionnaire, 26 subjects who had implausibly low or high estimated caloric intakes (<600 or >4000 kcal per day), and 9 subjects who had missing information for factors needed for statistical adjustment. A total of 1458 participants (781 men, 677 women) were included in this analysis. Participants included 967 working professionals, including industrial workers, office workers, formal caregivers, and nursing staff; 233 college students; and 258 community-dwelling adults and elderly (retired, unemployed, or housewives). This study was performed in accordance with the Helsinki Declaration. Study protocols were approved by the Institutional Review Board of Kio University (H26-10), and written informed consent was obtained from each participant.

Assessment of Nutrient Intake

Using the brief-type self-administered diet history questionnaire (BDHQ), nutritional values were estimated. Combined with standard serving sizes, the intake frequencies were converted into the average daily intake for each food item. Estimates of nutrients were calculated using an ad hoc computer algorithm for the BDHQ based on the corresponding food composition list in the Standard Tables of Food Composition in Japan [27]. Based on this, the associations between adherence to the JD and nutrient intake were examined.

The Japanese Diet Pyramid

Using the eJDS cut-off values, a visually easy-to-understand pyramid was constructed with reference to the Mediterranean diet pyramid [21][22][23].

Other Variables

Body mass index (BMI) was calculated as weight in kilograms divided by the square of height in meters. Subjects were classified by BMI category using the Asian standard (BMI <18.5, 18.5-22.9, 23.0-27.4, ≥27.5 kg/m²) [28]. A self-reported questionnaire assessed current smoking status (yes, no) and physical activity (active, sedentary). Alcohol consumption was evaluated from BDHQ information and was categorized as none/low (men, <10 g per day; women, <5 g per day), moderate (men, 10-30 g per day; women, 5-15 g per day), or high (men, >30 g per day; women, >15 g per day).

Statistical Analysis

Statistical analysis was performed using SPSS Statistics version 21.0 (IBM Corp., Armonk, NY, USA). Continuous variables were described as means ± standard deviation (SD), and differences between two groups were compared using Student's t-test. Differences among three or more groups were compared using one-way analysis of variance. The sample distribution between genders was compared using the chi-squared test with the standardized residual method. The sample distributions among the low, moderate, and high adherence groups were also compared using the chi-squared test.
To identify factors associated with low adherence to the JD, crude and adjusted (for sex, age class, BMI level, smoking status, alcohol intake, physical activity, and recruitment background) odds ratios and 95% confidence intervals were calculated using logistic regression models. p-values < 0.05 were considered statistically significant.

Results

The eJDS of all subjects ranged from 0 to 10, and no subject had a score ≥11; 11.1% of subjects reported high adherence to the JD, whereas 41.2% reported moderate adherence and 47.7% reported low adherence. The sample distribution of the eJDS between genders was not different (p = 0.587). Low adherence rates in men and women were 47.9% and 47.6%, respectively. Moderate adherence rates in men and women were 42.8% and 38.4%, respectively. High adherence rates in men and women were 9.3% and 13.0%, respectively. The distribution of low, moderate, and high adherence between genders was also not different (p = 0.069) (Table 1).

Younger subjects had significantly lower scores than older subjects. Smokers had significantly lower scores than nonsmokers, and physically inactive people had significantly lower scores than active subjects. Moderate or high drinkers tended to have low scores. College students had the lowest eJDS, whereas community-dwelling adults and elderly had the highest scores (Table 2). In addition, the sample distributions among the low, moderate, and high adherence groups differed significantly by age, smoking habit, physical activity status, alcohol drinking class, and background (Table 2).

A higher adherence to the JD was significantly associated with many nutrient intakes and inversely correlated with saturated fat. Although protein intake was correlated with adherence to the JD, this was due to fish and soy consumption rather than meat. Salt intake increased as adherence increased. On the other hand, JD adherence was positively correlated with potassium intake (Table 3).

Based on the cutoffs of the eJDS, we propose a Japanese diet pyramid (Figure 1). At the base of our pyramid, green tea should be drunk every day. Rice, miso soup, vegetables, and fruits are also core elements to be consumed every day. The weekly inclusion of fish, soy products, pickles, seaweed, mushrooms, and Japanese-style confectionery (wagashi) is recommended, as these foods have many healthful effects. At the top of the pyramid, meat and meat products are foods to limit.
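As an illustration of how the crude and adjusted odds ratios reported below could be computed, here is a minimal Python sketch using statsmodels; the data frame and all column names are hypothetical stand-ins rather than the study's actual variables, and the synthetic data exist only to make the sketch run.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic placeholder data, one row per participant; all names illustrative.
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "low_adherence": rng.integers(0, 2, n),  # 1 = low eJDS adherence
    "sex": rng.choice(["M", "F"], n),
    "age_class": rng.choice(["18-34", "35-49", "50-64", "65+"], n),
    "bmi_class": rng.choice(["<18.5", "18.5-22.9", "23.0-27.4", ">=27.5"], n),
    "smoker": rng.integers(0, 2, n),
    "alcohol": rng.choice(["none/low", "moderate", "high"], n),
    "inactive": rng.integers(0, 2, n),
    "background": rng.choice(["worker", "student", "community"], n),
})

# Multivariate-adjusted model with the covariates listed in the Methods.
model = smf.logit(
    "low_adherence ~ C(sex) + C(age_class) + C(bmi_class)"
    " + smoker + C(alcohol) + inactive + C(background)",
    data=df,
).fit()

# Adjusted odds ratios with 95% confidence intervals (cf. Table 4).
ci = model.conf_int()
or_table = pd.DataFrame({
    "OR": np.exp(model.params),
    "CI_low": np.exp(ci[0]),
    "CI_high": np.exp(ci[1]),
})
print(or_table.round(2))
```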
Crude logistic regression analysis revealed that the following factors were associated with low adherence to JD: age between 18 and 34 years, between 35 and 49 years, and between 50 and 64 years, compared with age ≥65; smoking, compared with nonsmoking; high alcohol drinking, compared with no/low alcohol drinking; physical inactivity, compared with being active; and being a college student or worker, compared with community-dwelling adults and elderly. Multivariate adjusted logistic regression analysis revealed that the following factors were associated with low adherence to JD: age between 18 and 34 years (OR 4.49, 95%CI 2.10-9.61), between 35 and 49 years (OR 4.15, 95%CI 2.02-8.50), and between 50 and 64 years (OR 2.36, 95%CI 1.18-4.74), compared with age ≥65; physical inactivity (OR 1.31, 95%CI 1.05-1.64), compared with being active; and high alcohol drinking (OR 1.88, 95%CI 1.40-2.52), compared with no/low alcohol drinking. In the multivariate adjusted analyses, background and smoking habits were no longer significantly associated with low adherence risk (Table 4).

Discussion
Our study developed a simple assessment tool (eJDS) that can capture a traditional JD pattern. It draws upon the common components of traditional JD foods as indicated by a literature review [7][8][9][10][11][12][13][14][15]. It should be noted that we incorporated Japanese-style confections (wagashi) into our score. The beneficial effects of wagashi may be attributable to the use of adzuki beans (red beans). Adzuki beans contain relatively high amounts of plant protein, fiber, and polyphenols, and a lower fat content compared with western-style confectionaries. Because low consumption of meat was a traditional dietary habit before the 1960s in Japan [25,26], we also added this component. Regarding the score cutoffs, many other diet scores (e.g., the Mediterranean diet score) have used the median intake of the people in their sample [29]. However, this is not always universal, so we used z-scores, which rescale each intake distribution, whatever its standard deviation, to the standard normal distribution. Intakes were rated as follows: z ≥ 0.5, eaten often; −0.5 ≤ z < 0.5, moderate; z < −0.5, rarely eaten. In general, this may be considered a legitimate approach. We set the cut-off for each food intake at a z-score of 0.5 or higher, except for meat (z < −0.5).
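A minimal sketch of this scoring rule is shown below. The 12-item component list is an assumption inferred from the pyramid description; the paper's exact item definitions may differ:

```python
# Hypothetical component list inferred from the pyramid (an assumption).
TRADITIONAL = ["green_tea", "rice", "miso_soup", "vegetables", "fruits",
               "fish", "soy_products", "pickles", "seaweed",
               "mushrooms", "wagashi"]

def ejds(intakes: dict, means: dict, sds: dict) -> int:
    """Score one subject: +1 per traditional food with z >= 0.5,
    plus +1 if meat intake is low (z < -0.5), per the stated cutoffs."""
    score = 0
    for food in TRADITIONAL:
        z = (intakes[food] - means[food]) / sds[food]
        score += int(z >= 0.5)
    z_meat = (intakes["meat"] - means["meat"]) / sds["meat"]
    score += int(z_meat < -0.5)
    return score
```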
Using this score, we examined the associations between adherence to JD and nutrient intake. A higher adherence to JD was significantly associated with many nutrient intakes (protein, carbohydrate, fiber, salt, and potassium) and inversely correlated with saturated fat. In a study of elderly Japanese [20], the Japanese Diet Index was positively correlated with the intakes of many nutrients (protein, fiber, vitamins A, C, and E, calcium, iron, sodium, potassium, and magnesium) and was negatively correlated with sugar and saturated fat. Another study has also reported that the Japanese Diet Index was positively correlated with the intakes of nutrients (protein, fiber, vitamins A, C, and E, calcium, iron, sodium, potassium, and magnesium) and was negatively correlated with saturated fat [30]. These results suggest that high adherence to JD indicates better overall nutrient intake, except for sodium. Indeed, high salt content is said to be the disadvantage of the JD. It is well known that salt intake is positively associated with high blood pressure [31]. Therefore, we recommend using low-salt soy sauce and cooking methods that do not use salt (e.g., spices and sourness). On the other hand, potassium intake was highly correlated with JD adherence. Potassium intake from the JD may protect against hypertension. Over the past few decades, the Japanese lifestyle has become markedly more westernized [3]. Unhealthy dietary habits have become more prevalent, and switching from a traditional JD to a westernized dietary pattern may play an important role in increases in obesity, metabolic diseases, and cardiovascular disease [8,[10][11][12][13]25]. A notable finding of our study was that only 11.1% of subjects reported high adherence to a JD, whereas nearly half reported low adherence. Several studies have examined factors associated with low adherence to healthy diets such as the Mediterranean diet [4][5][6]. A study of adults in Morocco reported that obesity, being single, and rural residence were associated with low adherence to the Mediterranean diet [4]. In Spanish women, young age, smoking, and a sedentary lifestyle were associated with low adherence to the Mediterranean diet [5]. Furthermore, in the PREDIMED trials (a primary prevention study conducted in Spain), abdominal obesity, smoking, and being single were associated with low adherence to the Mediterranean diet [6]. On the other hand, to our knowledge, there are no studies using JD scores to examine factors related to low adherence. However, a study using principal component analysis reported that subjects with a lower JD pattern score are more likely to be young, male, physically inactive, and current smokers [7]. We evaluated the factors associated with low adherence to JD and found that younger age, physical inactivity, and high alcohol intake were associated with low adherence to the JD. From these perspectives, strategies to encourage a JD should target younger persons, persons with sedentary behavior, and alcohol drinkers. In the crude logistic regression model, we also found that smoking was a factor in low adherence to JD. Combining previous reports with our results, smokers may also be targeted. Visual and easy-to-understand tools are necessary for dietary guidance and nutrition education. Using the eJDS cut-off value, we created a visual Japanese diet pyramid with reference to the Mediterranean diet pyramid [21][22][23].
Previously, an upside-down Japanese food pyramid (the Spinning Top) has been proposed, but it focuses on the overall balance of meals [18]. Other Japanese diet scores [17,19,20,30] do not offer graphics like the Mediterranean food pyramid. It is desirable to build a healthy weekly menu by using our pyramid. This study has some limitations. The first to be considered is the setting of a threshold for implausible caloric intake. There are two ways of handling underreporting and overreporting of calorie intake. One is to calculate the estimated energy requirement (EER) for each subject and include subjects whose BDHQ-estimated energy intake is between 0.5 and 1.5 times the EER [32]. The other is to exclude only clearly implausible intakes, specifically those below 600 kcal or above 4000 kcal per day [33]. Underreporting is said to be more common in persons with severe obesity, but severe obesity is rare among the Japanese; we therefore adopted the latter method. Second, the upper limit of rice consumption must be better defined, as higher consumption of white rice has been reported to correlate with type 2 diabetes [34]. Unfortunately, an upper limit was not set in our study. Third, this study was conducted among a sample population living in urban, suburban, and rural settings, but we did not include subjects such as fishermen or residents of mountain villages. Finally, educational history and economic conditions are important factors related to eating habits, but we do not have data on them.

Conclusions
The empirical Japanese diet score (eJDS) is a simple assessment tool that captures adherence to a traditional JD, and a higher eJDS was positively correlated with better overall nutrient intake. Only 11.1% of subjects reported high adherence to a JD, whereas nearly half reported low adherence. Younger persons, physically inactive persons, and heavy drinkers were associated with low adherence to the JD. Additionally, the Japanese diet pyramid created from the eJDS cutoffs is a visual and easy-to-understand tool. Further research is needed to demonstrate the effectiveness of dietary guidance and nutrition education using these tools.
The Klein-Gordon equation in Machian model
The non-local Machian model is regarded as an alternative theory of gravitation which treats all particles in the Universe as a 'gravitationally entangled' statistical ensemble. It is shown that the Klein-Gordon equation can be derived within this Machian model of the universe. The crucial point of the derivation is the activity of the Machian energy background field, which causes a fluctuation about the average momentum of a particle; the non-locality problem in quantum theory is addressed in this framework.

I. INTRODUCTION
It has been demonstrated during the last decade that a wide class of field theoretical models can be given a thermodynamic interpretation. For instance, both the Newton law and the Einstein equation can be derived by using thermodynamical methods [1][2][3]. Similar investigations showed that the Coulomb law and the Maxwell equations could also be derived in this way [4]. Furthermore, the analogies between classical statistical mechanics and quantum mechanics are also well studied [5][6][7]. Since the notion that classical field theory can be explained by the laws of thermodynamics and statistical physics raises many questions, these surprising relationships are still regarded as mere analogies. The central problem is the necessity of modifying the notion of locality for the underlying theories when considering physical theories as emergent phenomena from some thermodynamical processes. It is known that at least quantum mechanical models with hidden variables lead to problems of locality [8]. In fact, the search for an acceptable model of quantum gravity has already motivated several authors to introduce non-local corrections to special [9] and general [10] relativity theories. One example of a thermodynamical approach is the Machian model of gravity [11]. The new feature of this model is the non-locality, which assumes that all particles of the universe are involved in the gravitational interaction between any two particles. In this thermodynamical-statistical formulation, the rest mass of a particle is a measure of its long-range collective gravitational interactions with all other particles inside the horizon [12]. In ref. [13], the Machian model was shown to imitate basic features of the special and general relativity theories. Ref. [14] has shown that both the Schrödinger equation and the Planck constant can be derived within this Machian model of the universe. Non-locality is well known as an essential feature of quantum mechanics [15]. It makes sense to suppose that the non-locality in the Machian model of gravity and in quantum mechanics have the same origin. Many efforts have been made to derive the quantum equations from a thermodynamical-statistical approach, for example: using nonequilibrium thermodynamics [16], within the hydrodynamical interpretation of quantum mechanics [17], in Nelson's stochastic mechanics [18], from the 'exact uncertainty principle' [19], or using the hidden-variable theory of de Broglie and Bohm [20]. In this paper, by using classical statistical mechanics for the Machian universe, the Klein-Gordon equation is obtained. In fact, the equation can be derived directly by means of an action principle from the non-local Machian energy of a particle. The only input is the existence of a space-pervading field associated with the non-local Machian energy; i.e., in addition to the usual action function describing a classical physical system, the interaction from all particles of the universe is considered.
It is thus assumed that the energy of the non-local gravitational interactions of the particle pervades all of the universe. Due to the notion of the Machian energy background field, the classical Lagrangian should be modified, i.e., there should be a fluctuation term about the average momentum of a particle. This is the key point of deriving the Klein-Gordon equation from the variational principle, as shall be shown. The structure of this paper is as follows. First, we review the idea of the Machian model. Then the energy balance equation is given, and the standard relativistic formulas are proposed by adopting Mach's interpretation of inertia [12][13]. Second, due to the assumed activity of the Machian energy background field, the complete action integral of a particle is obtained. Third, from the formalism of classical statistical mechanics, we derive the equation of motion of the particle and show that it is equivalent to the Klein-Gordon equation. Finally, the paper ends with a brief summary.

II. THE MODEL OF MACHIAN UNIVERSE
In the Machian model, all the particles in the Universe interact non-locally with each other, and the universe can be considered as a statistical ensemble of 'gravitationally entangled' particles [12][13][14]. When considering the gravitational interaction of close objects, one can replace the distant universe by a spherical shell of effective mass M and effective radius R, consisting of an ensemble of N uniformly distributed identical particles of gravitational mass m_g. One of the main assumptions of the model is that the 'universal' gravitational potential of any particle from the ensemble is [21]: The non-local Machian interactions between particles reveal the main differences from standard physics, for each particle in the 'gravitationally entangled' universe interacts with all the other (N − 1) particles. So the number of interacting pairs is N(N − 1)/2, and the Machian energy of a single particle takes the form [12]: It is shown that the assumption that the whole Universe is involved in local interactions effectively weakens the observed strength of gravity by a factor related to the number of particles in the Universe. Adopting energy additivity as in the Newtonian realm, the energy balance equation for a particle is given as: Here T is its kinetic energy with respect to the preferred frame, and U corresponds to all local interactions causing an acceleration of the particle. Aspects of relativity can be explained in this model with a preferred frame using Mach's principle [13]. Because, for any kinetic energy T, the value of E_0 should be the same for all inertial observers (U = 0) in the homogeneous universe, the relativity principle is recovered: Here m_0 is the inertial mass, and the definition of inertia in this case reduces to the equivalence principle m_0 = m_g. The mass transformation has been found, and the well-known 'relativistic' effect of the growth of the particle's inertia with its velocity has a dynamical nature, being a consequence of the energy balance condition (3). All other 'relativistic' effects can be obtained as well, including the formula for the linear momentum: So the formulas of special relativity and the justification of the 4-dimensional geometrical formulation appear naturally in this framework. III.
THE COMPLETE ACTION INTEGRAL OF A PARTICLE
In this section, based on the assumed activity of the Machian energy background field, an additional fluctuation about the average momentum of a particle is introduced, and the complete action integral is obtained. In ref. [14], the description of physical systems by ensembles is introduced at quite a fundamental level using the notions of probability and the action principle. We start with a particle: Furthermore, the portion of the action of the universe for a single member of the ensemble should be of the same order as the Planck constant (A ∼ −2πħ). As in the models [16][17][18][19][20], we assume that the probability of finding the particle in configuration space is described by a probability density P(x, t). It obeys the normalization condition: By means of P(x, t), the action integral (6) for a particle in the ensemble can be written as: Now consider the case when only one 'free' particle moves with respect to the preferred frame, and local potentials acting on the particle are not considered (U = 0). We introduce the action function S(x, t) [14]: S = p_μ x^μ + const. Here S(x, t) is related to the particle velocity v(x, t) and momentum p_μ via: The action integral (8) in this case has the form: For the case when all N particles in the universe are at rest with respect to each other, we get: Here the particle's energy (3) coincides with the energy of its non-local gravitational interactions with the universe and takes the form E = E_0 = −m_g φ. In other words, the action function S_0 represents the activity of the Machian energy background field. We shall assume in the following that this energy causes an additional velocity fluctuation field relative to the original state of motion of a particle. Due to the assumed Machian energy, which contributes a non-classical dynamics, p_μ = ∂_μ S is only an average momentum, with momentum fluctuations f_μ around ∂_μ S. Thus, the physical momentum is: p_μ = ∂_μ S + f_μ. No particular underlying physical model is assumed for the fundamentally non-analyzable fluctuations [20]; instead, one simply regards f_μ as fluctuations. We can retain the notion of particle trajectories and consider a physically motivated proposal of the form (14). It is natural to assume that the momentum fluctuations f_μ are linearly uncorrelated with the average momentum ∂_μ S. Hence: Thus, with p given by equation (14), it is proposed that the complete action integral of a particle immersed in the Machian energy background field is given by: where (Δf)² is the average squared momentum fluctuation, given by: Based on the above discussion, the additional momentum fluctuation f_μ is due to the activity of the Machian energy background field; in this case, f_μ can be described by the action function S_0 in Eq. (12). Combining Eqs. (16) and (18), we get: This is the action integral for a 'free' particle involving a fluctuation field in a four-dimensional volume, in accordance with relativistic kinematics.

IV. DERIVATION OF KLEIN-GORDON EQUATION
Formula (19) is written for a particle moving with respect to the Universe. Moreover, the Lagrangian becomes: Upon the principle of least action, we set δP = δS = 0 at the boundaries.
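The averaging step implicit in Eqs. (14)-(17) above can be written compactly. The following is a sketch under the stated assumptions (zero-mean fluctuations, linearly uncorrelated with the gradient of the action), not a quotation from the paper:

```latex
% Sketch of the averaging behind the modified action (notation as in the text).
\begin{align}
  p_\mu &= \partial_\mu S + f_\mu ,\qquad
  \langle f_\mu \rangle = 0 ,\qquad
  \langle \partial_\mu S \, f^\mu \rangle = 0 ,\\
  \langle p_\mu p^\mu \rangle
    &= \partial_\mu S \,\partial^\mu S + \langle f_\mu f^\mu \rangle
     \equiv \partial_\mu S \,\partial^\mu S + (\Delta f)^2 .
\end{align}
```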
First, we perform the fixed end-point variation in S; the Euler-Lagrange equation then provides the covariant continuity equation: In the simplest case, when a typical particle of the 'universal' ensemble is at rest with respect to the Universe, we have: Since P_0 is time-independent, this equation shows that all the other (N − 1) particles of the ensemble are at rest with respect to the preferred frame. In another case, when only one particle moves but all other (N − 1) particles are at rest, we have a stationary distribution function as well. Hence, the continuity equation (22) reduces to: Motivated by ref. [16], the solution of the above equation can be written in the form: As to this probability density, we note that any movement of the particle deviating from the classical path must be due to the activity of the Machian energy background, as given in Eqs. (12) and (18), respectively. Moreover, we obtain from equation (25) that: In terms of the definition of the velocity fluctuation field (13), we have: and this relation leads to the momentum fluctuation. Now, performing the fixed end-point variation in P, the Euler-Lagrange equation together with Eq. (27) provides the Hamilton-Jacobi equation: The last two terms on the LHS are identical to the relativistic expression for the "quantum potential" term [19]. This potential is usually interpreted as the source of uncertainty and non-locality, which make quantum systems different from classical ones described by the pure Hamilton-Jacobi equation. In this approach, however, the 'gravitationally entangled' particles are correlated at any distance. We have thus obtained the relativistic Hamilton-Jacobi-Bohm equation. It is known that a justification of the quantization condition is essential to derive the quantum equations using concepts from classical physics. As is well known, the continuity and Hamilton-Jacobi-Bohm equations together are equivalent to the Klein-Gordon equation. We have thus succeeded in deriving the Klein-Gordon equation from the action, with the assumptions relating to the Machian energy background field associated with each particle.

V. CONCLUSION
In this paper, the Machian model, an alternative theory of gravitation, has been introduced; it leads to the main features of special relativity. Then the quantum behavior of matter in the Machian model of the universe was considered. The notion of non-locality, that all particles in the universe interact with each other via a long-range gravitational force, allows us to treat all the particles in the universe as members of a 'universal' statistical ensemble. Using classical statistical mechanics, the Klein-Gordon equation is derived from the system of non-linear continuity and Hamilton-Jacobi equations, together with the relativity principle in the Machian model with a preferred frame. The crucial point of the derivation is the activity of the Machian energy background field, which causes a fluctuation about the average momentum of a particle; the non-locality problem in quantum theory is addressed in this framework.
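For reference, the standard final step combining the two equations of motion can be stated explicitly; this is textbook material (a Madelung-type ansatz), not quoted from the paper:

```latex
% The continuity and Hamilton-Jacobi-Bohm equations, combined via
% psi = sqrt(P) exp(iS/hbar), reproduce the Klein-Gordon equation.
\begin{align}
  &\partial_\mu\!\left(P\,\partial^\mu S\right) = 0 ,\\
  &\partial_\mu S\,\partial^\mu S
     = m^2 c^2 + \hbar^2\,\frac{\Box\sqrt{P}}{\sqrt{P}} ,\\
  &\psi = \sqrt{P}\,e^{iS/\hbar}
   \;\Longrightarrow\;
   \left(\hbar^2 \Box + m^2 c^2\right)\psi = 0 .
\end{align}
```

Substituting the ansatz into the Klein-Gordon equation and separating real and imaginary parts recovers exactly the two equations above, which is the sense in which the pair is equivalent to the Klein-Gordon equation.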
Elaboration, characterization and color stability of an isotonic beverage based on whey permeate with carotenoid powder from pequi
The aim of our research was to elaborate, characterize, and evaluate the color stability under light/darkness and different temperatures (4 and 25 ºC) of an isotonic beverage based on whey permeate with carotenoid extract powder from pequi, and to verify its microbiological safety and sensory acceptance. The 3% (w/v) concentration of powdered carotenoid from pequi was chosen because the resulting beverage has an osmolality (314.89 mOsmol/L) in the range of hydroelectrolytic beverages and a light-yellow tint. The beverage was evaluated for the minerals Na (662 mg/L) and K (1363.73 mg/L), total carotenoids (75.9 mg/L), and antioxidant capacity by the ABTS (10.79 μmol Trolox equivalent/100 mL) and DPPH (73.38 μmol Trolox equivalent/100 mL) radical assays; it had good sensory acceptance by athletes and remained within microbiological criteria during the stability study. The color coordinate L* underwent the least change and C* the greatest. Storage in darkness at 4 ºC showed the least change in yellow tint over more than 30 days (t1/2 = 121.6 days). Due to its characteristics, the beverage has potential benefits for consumption by athletes, because besides being isotonic, it has the bioactive properties of pequi carotenoids and the natural constituents of whey permeate. The use of permeate, often discarded as effluent, also has benefits for the environment.

Introduction
The growing appeal of a healthy life among consumers has increased the demand for natural products with added value, such as beverages with health-promoting properties, which has led to the development of isotonic beverages enriched with fruits as a source of nutrients and bioactive compounds (Ferreira et al., 2020; Gironés-Vilaplana, Mena, Moreno, & García-Viguera, 2014; Porfírio et al., 2019; Raman, Ambalam, & Doble, 2019). Whey permeate is a co-product of the production of concentrated and isolated protein from whey by ultrafiltration. It is obtained in large volume and has limited applications, being discarded, in most cases, as effluent. Its chemical composition includes mainly water, lactose, minerals, non-protein nitrogen, and low molar mass compounds such as water-soluble vitamins (riboflavin, pantothenic and nicotinic acid) (Parashar, Jin, Mason, Chae, & Bressler, 2016), in addition to minerals such as calcium, sodium, magnesium, and potassium that confer electrolytic characteristics (Fontes, Alves, Fontes, & Minim, 2015) and exert activity in biological processes (Petrus, Assis, & Faria, 2005). Thus, whey permeate has osmolyte characteristics and natural nutrients, which makes its use promising as a base in the formulation of isotonic beverages designed to assist hydration after physical activity (Fontes et al., 2015). Natural dyes, extracted from fruits and vegetables, can be used in the formulation of beverages with isotonic characteristics, providing color and bioactive properties to the final product. Pequi (Caryocar brasiliense Camb.) is a typical Brazilian Cerrado fruit that has bioactive compounds with antioxidant capacity, such as phenolic compounds and carotenoids, molecules associated with a reduced risk of developing chronic diseases such as cancer, cataracts, age-related macular degeneration, and cardiovascular diseases. However, carotenoids are sensitive to heat and insoluble in water (de Mendonça et al., 2017; Machado, Mello, & Hubinger, 2013; Nascimento-Silva & Naves, 2019).
According to Schweikert (2017), the major forms in which carotenoids are solubilized nowadays are oily suspensions, oil-in-water emulsions, and water-dispersible powders. There are some studies in the literature on ways to increase the solubility and stability of colored beverages with natural carotenoids (Kispert, 2019). Another study, by Pinto et al. (2018), reports on extracts rich in pequi carotenoids encapsulated by foam-mat drying to protect against the deleterious effects of exposure to light and emulsified to be solubilized in an aqueous base. In this context, the use of natural fruit extracts such as pequi as dye, together with whey permeate, ingredients with bioactive properties, for the production of isotonic beverages, beverages associated with sports activities, becomes advantageous for the food industries due to the growing demand for health-promoting foods for athletes. However, there are many limitations to the commercial use of natural dyes due to their low stability, making the application of natural extracts dependent on factors such as pH, temperature, presence of oxygen, light, ascorbic acid, cofactors, and chemical structure. Thus, the aim of this study was to elaborate an isotonic beverage based on permeate from the ultrafiltration of whey with carotenoid extract powder from pequi, and to study the color stability of this product under light and different temperatures (4 and 25 ºC), in addition to conducting microbiological and sensory analyses on the product to verify its safety and acceptance.

Permeate of whey ultrafiltration
The permeate used was obtained by the ultrafiltration of whey using the membrane system of the Innovative Laboratory of the Federal University of Viçosa (UFV, Viçosa, Minas Gerais, Brazil). A WGM Systems pilot-scale ultrafiltration plant was used, equipped with a spiral polysulfone/polyamide membrane with a molecular weight cut-off of 10 kDa. A temperature between 45 and 50 ºC was used to maximize the permeate flow during the concentration of the milk phase; this is necessary because milk has lower viscosity in this interval, which minimizes the precipitation of calcium salts. The pressure variation applied to the milk was 0.99 atm, with an inlet pressure of 2.96 atm and an outlet pressure of 1.97 atm. The membrane filtration area was 6 m² (Ferreira et al., 2020).

Elaboration of isotonic beverage
To define the formulation of the isotonic beverage, preliminary tests were carried out with different concentrations of carotenoid extract powder from pequi (1%, 2%, 3%, and 4% (w/v)) so that the final beverage would present an osmolality between 270 and 330 mOsmol/L and the yellow color characteristic of the powdered carotenoid added to the formulations. The beverage made with 3% carotenoid powder from pequi was the one that met the requirements described previously (data not shown). To elaborate the isotonic beverage according to Fontes, Alves, Fontes, & Minim (2015), 0.0075% (w/v) sucralose was added to the permeate of whey ultrafiltration. Then, it was acidified to pH 3.5 using a 2% (w/v) citric acid solution. The mixture underwent a pasteurization process (62.8-65 °C/30 min) and was then cooled to 40 °C for the addition of the preservatives (0.01% (w/v) potassium sorbate and 0.05% (w/v) sodium benzoate).
We then cooled it to 20 °C again to add the pequi carotenoid powder and a nature-identical aroma (0.01% (w/v)). The formulated beverages were distributed (50 mL) into transparent and amber 60 mL bottles previously sterilized at 121 ºC/15 min. Having defined the formulation that met the osmolality requirement with the greatest intensity of yellow color, the beverage was prepared for physical, chemical, and sensory characterization (with the addition of a pineapple or passion fruit flavor), in addition to microbiological analyses during the color stability study.

Centesimal composition
The determinations of moisture, total nitrogen, and fixed mineral residue were performed as described by AOAC (2006). The total carbohydrate content was obtained by difference from the percentage sum of the other nutrients (protein, fixed mineral residue, and water), and lactose was determined by a titrimetric analytical procedure (ISO/IDF, 2007).

Titratable acidity, pH and total soluble solids
The beverages were characterized in relation to pH, titratable acidity, and total soluble solids according to the methods described by AOAC (2006).

Osmolality
To estimate the osmolality from the freezing point depression, Eq. (1) was used:

Osmolality = −Tf / Kf (1)

where Kf is the cryoscopic constant of water (1.86 °C·kg/mol) and Tf is the freezing point temperature of the beverage samples, in degrees Celsius.

Minerals
To determine sodium, potassium, calcium, magnesium, and phosphorus, the isotonic beverage was digested according to the methodology described by Gomes & Oliveira (2011). Sodium and potassium concentrations were determined by flame photometry (Celm FC-280), while atomic absorption spectrometry (SpectrAA 220FS) was used for calcium and magnesium concentrations. Spectrophotometry (Femto 600S) was used for phosphorus, with analytical curves for the quantification of each mineral.

Color
The color parameters lightness (L*), redness (a*), and yellowness (b*) of the isotonic beverage were measured using a ColorQuest XE (HunterLab, Reston, VA) colorimeter. The equipment, connected to a computer provided with the Universal Software system, was duly calibrated for specular-included reflectance, a 10º observer angle, and the D65 illuminant. From these parameters, the cylindrical coordinate h* (hue angle) was calculated according to Pinto et al. (2018), and C* (chroma) and the total color difference (ΔE) according to Mutsokoti et al. (2017); a sketch of these standard relations is given below.

Total Carotenoids Content
Total carotenoid content was determined according to the method described by Rodriguez-Amaya (2001). The carotenoids were extracted with acetone, transferred to petroleum ether, diluted in a volumetric flask, and subsequently read in a spectrophotometer (UV BEL Photonics SP 1105) at a wavelength of 450 nm to determine the total carotenoid content, expressed as β-carotene.

Antioxidant capacity
The in vitro antioxidant capacity was analyzed using spectrophotometric methods (BEL Photonics UV-M51 spectrophotometer) in a low-light environment. The antioxidant capacity was determined by capture of the free radical ABTS, according to the methodology described by Re et al. (1999) with modifications. In an amber bottle, equal volumes of ABTS solution (7.0 mmol/L) and potassium persulfate solution (2.45 mmol/L) were mixed and kept in the dark for 12-16 h for the generation of the ABTS•+ chromophore cation. After this period, the radical solution was diluted in 80% ethanol until it reached an absorbance of 0.700 ± 0.005 at a wavelength of 734 nm, in a spectrophotometer previously calibrated with 80% ethanol (blank).
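The derived color quantities in the Color subsection follow from standard CIELAB relations. A minimal sketch (standard formulas, not the authors' code):

```python
import math

def chroma_hue_deltaE(L, a, b, L0, a0, b0):
    """Standard CIELAB derived quantities: chroma C*, hue angle h* (degrees),
    and total color difference dE relative to a reference (time-zero) sample."""
    C = math.hypot(a, b)                       # C* = sqrt(a*^2 + b*^2)
    h = math.degrees(math.atan2(b, a)) % 360   # h* in [0, 360)
    dE = math.sqrt((L - L0)**2 + (a - a0)**2 + (b - b0)**2)
    return C, h, dE

# Illustrative values only:
print(chroma_hue_deltaE(85.0, -2.0, 20.0, 88.0, -1.5, 24.0))
```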
Then, 0.5 mL of the aqueous beverage dilutions and 3.5 mL of the ABTS•+ radical solution were added to test tubes, followed by homogenization in a tube shaker. After 6 min of reaction, the absorbance of the samples was read at the same wavelength. As a control, 80% ethanol was used instead of the sample. The antioxidant capacity was also determined by the DPPH assay, following the methodology described by Brand-Williams, Cuvelier, and Berset (1995), with modifications. Aliquots of 0.1 mL of aqueous beverage dilutions were transferred to test tubes, followed by the addition of 2.9 mL of a 60 µmol/L methanolic solution of the DPPH•+ radical prepared previously. After homogenization in a tube shaker, the tubes were left to rest, in the dark, for 25 min. The absorbance at 515 nm was read in a spectrophotometer previously calibrated with methanol (blank). As a control, methanol was used instead of the sample. Analytical Trolox curves were constructed for each method, and the results were expressed in μmol Trolox equivalent/100 mL of the isotonic beverage.

Microbiological analysis
Microbiological analyses were carried out according to the methodology standardized by the American Public Health Association (APHA, 2001), evaluating the Most Probable Number (MPN) of coliforms at 35 ºC, total counts of aerobic mesophilic microorganisms, filamentous fungi and yeasts, and psychrotrophic microorganisms, and the presence or absence of Salmonella spp.

Color stability study of isotonic beverage
To assess the effect of light on color stability, beverage samples packaged in transparent flasks were stored in a light cabinet at 25 ºC containing lamps in the color temperature range corresponding to daylight, which best approaches sunlight (~6500 K), kept at a distance of about 4 cm from each other and protected from any other light source, or in a display refrigerator at 4 ºC adapted with fluorescent lamps. The beverages in amber bottles were placed in closed cardboard boxes and stored in a display refrigerator (4 ºC) or at room temperature (25 ºC), in darkness, for 30 days. The kinetics of change of the color coordinates a*, b*, C*, and h* were evaluated to determine the rate of change (k) and the half-life (t1/2, the time required for a given color coordinate to change by 50%). For the calculation of the rate of change of the color coordinates, first-order kinetics was assumed, according to Eq. (2):

ln(C/C0) = −k·t (2)

where C is the value of the color coordinate after the storage time under the controlled light and temperature conditions, C0 is the value of the color coordinate at time zero, k is the rate of change, and t is the time. The rates of change were estimated by linear regression analysis. From the k values, the half-life values were calculated using the first-order relation in Eq. (3) (a minimal computational sketch of this fit is shown after the next subsection):

t1/2 = ln(2)/k (3)

Sensory acceptance test
After approval by the Ethics Committee (CAAE: 56368416.7.0000.5153), the acceptance of the isotonic beverages formulated with pineapple or passion fruit aroma was assessed by 100 athletes (untrained tasters) from the Athletic Association LUVE/UFV in the Sensory Analysis laboratory at UFV. Acceptance was assessed for the attributes color, flavor, aroma, and global impression, using the nine-point hedonic scale according to Lawless & Heymann (2010).
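Eqs. (2) and (3) above amount to a straight-line fit of ln(C/C0) against time. A minimal sketch with made-up illustrative values (not the measured data):

```python
import numpy as np

def first_order_fit(t_days, values):
    """Estimate the first-order rate constant k and half-life t1/2 by
    linear regression of ln(C/C0) against time, as in Eqs. (2)-(3)."""
    t = np.asarray(t_days, dtype=float)
    y = np.log(np.asarray(values, dtype=float) / values[0])
    k = -np.polyfit(t, y, 1)[0]     # slope of ln(C/C0) = -k t
    return k, np.log(2) / k         # t1/2 = ln 2 / k

# Illustrative numbers only:
k, t_half = first_order_fit([0, 6, 12, 18, 30], [20.0, 19.3, 18.7, 18.1, 16.9])
print(f"k = {k:.4f} 1/day, t1/2 = {t_half:.1f} days")
```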
The samples were presented refrigerated, in transparent 50 mL disposable cups identified with three-digit codes, monadically, in a sequential and randomized order, to the evaluators. At each evaluation, the evaluators were instructed to rinse their mouths with water to remove possible residues.

Statistical analysis
The experiment to study the color stability of the isotonic beverage formulated with carotenoid powder from pequi on a permeate base was conducted in a completely randomized design in a split-plot scheme. The controlled light and temperature conditions constituted the plots, totaling 4 storage conditions: light at 25 ºC, light at 4 ºC, dark at 25 ºC, and dark at 4 ºC. In the subplot, storage time (zero to 30 days) was the factor. The data were interpreted by analysis of variance (ANOVA), using the t-test (sensory analysis), the Tukey test for comparison of means, and regression analysis, adopting a 5% probability level. The Statistical Analysis System (SAS), version 9.4, licensed for use by UFV, was used to carry out the statistical analysis.

Beverage characterization
Table 1 shows the characterization results for the isotonic beverage made with whey ultrafiltration permeate with 3% carotenoid powder from pequi added. The total carbohydrate content obtained (7.88% (w/v)) is within the range that defines isotonic beverages (6% to 8%) and is also adequate according to Resolution RDC 18 (ANVISA, 2010), which allows up to 8% in the ready-to-drink beverage. The total soluble solids content of the isotonic beverage analyzed in the present study was 7.9 ºBrix, a value higher than that found by Ferreira et al. (2020) (5.83 ºBrix) in an isotonic beverage with anthocyanin extract from jabuticaba. During foam-mat drying, the carotenoids, the natural dyes of the present work, received wall material (maltodextrin) and an emulsifier (soy lecithin), which increased the soluble solids content of the isotonic beverage. Isotonic beverages must have high acidity (pH < 4.6) to limit the growth of microorganisms and guarantee their microbiological stability and safety; in the elaboration of the beverage, citric acid was therefore added to the permeate. The pH value of the isotonic beverage ready for consumption was 3.66, and the total titratable acidity was 0.51% (w/v) (expressed as citric acid) (Table 1). Fontes et al. (2015), studying an electrolyte beverage based on skimmed milk permeate, found a pH value of 3.42 and an acidity of 0.66% (expressed as citric acid). These results demonstrate small variations in the same characteristics for the same type of beverage. Regarding the mineral content of the elaborated beverage, the Na, K, and Ca concentrations are within the values required by the legislation (ANVISA, 2010). The concentrations of these minerals and of P were similar to those obtained in the characterization of an isotonic beverage made with ultrafiltration permeate with added anthocyanin extract from jabuticaba in the work of Ferreira et al. (2020). When compared to commercial isotonics of different brands, which have standardized mineral concentrations, the elaborated beverage presented higher mineral concentrations than those reported by Coombes. The color of the beverage is provided by carotenoids, with a total carotenoid content equal to 77.95 µg/100 g (Table 1).
Carotenoids, in addition to providing color to the beverage, also confer antioxidant characteristics, as they are able to eliminate reactive oxygen species (ROS) and inhibit the formation of oxidant species (Mariutti, Chisté, & Mercadante, 2018). The permeate, the base of the elaborated isotonic beverage, is in itself a practically sterile product. Even so, a slow pasteurization was performed on the permeate, chemical preservatives were added, and the beverages were poured into previously sterilized glass bottles. The beverage had relatively low microbiological counts, with filamentous fungi and yeasts and aerobic mesophilic and psychrotrophic microorganisms below 10¹ CFU/mL, a Most Probable Number (MPN) of coliforms at 35 ºC below 3 MPN/mL, and absence of Salmonella spp. throughout the color stability study, under the different storage conditions, indicating the microbiological safety of the beverage.

Effect of storage conditions
Effects of exposure to light or darkness and of temperature, refrigeration (4 ºC) or ambient (25 ºC), on the color coordinates were observed in samples of the isotonic beverages during 30 days of storage (Table 2). Visually, beverages stored in the presence of light became whitish after the storage period (Figure 2), due to ingredients added to the formulation, such as maltodextrin and emulsifiers, which influenced the final color of the light-exposed samples. It should be noted that at 40 days and 60 days of storage, the color of the beverages was still preserved in the conditions of darkness at 25 ºC and 4 ºC, respectively. The loss of yellow tint is directly related to the degradation of carotenoids. Because carotenoids are molecules with a highly unsaturated structure, a characteristic that gives them the property of being antioxidants by quenching singlet oxygen and interacting with free radicals, they are more susceptible to isomerization and oxidation, being unstable at high temperatures and under light (Maiani et al., 2009; Rodriguez-Amaya, 2001). The color of the beverages stored in darkness was preserved when compared to the beverages stored in the presence of light. Storage under refrigeration also preserved the color better than storage at room temperature, even in darkness. According to Boon et al. (2010), exposure to light degrades carotenoids by producing radical cations, leading to rapid degradation of carotenoids and consequent loss of yellow color. Similar light-induced degradation of carotenoids was observed in the work of Chen, Peng, & Chen (1996). Regarding the variable ΔE (global color difference), the higher its value, the greater the total color difference of the product over time in relation to the product at time zero, on the day it was produced. The values found for ΔE showed that the isotonic beverages stored in the presence of light reached the detection limit of the global color difference more quickly, and the change would be easily perceived by the consumer. According to Schubring (2009), differences of the order of 3 can be described as "very pronounced", while limits below 1 (Schubring, 2009) and 2 (Mesnier et al., 2014) are not perceptible by human eyes.

Effect of time on storage conditions
The storage time of 30 days affected the color coordinates of the beverages under the studied conditions (p < 0.05).
It can be seen in Figure 3 that the lightness (L*) was little influenced by storage time, regardless of the storage conditions, because the beverage samples initially presented a light color. The change in coordinate a* occurs quickly up to 12 h under light at 25 ºC and 4 ºC and then slows down through 30 days of storage (Figure 3). For the condition without light at 25 ºC, the value of a* keeps increasing until the 7th day and, from that point, decelerates until the 30th day. However, in the condition without light at 4 ºC, there was no change until the 6th day (Figure 3), and from then on a slight decrease was observed up to 30 days of storage. The b* coordinate, which represents the yellow chromaticity parameter, and the cylindrical coordinates C* and h* also underwent similar changes under light at 25 ºC and 4 ºC, with increasing changes up to the 7th day and deceleration curves up to the 30th day of storage (Figure 3). In the darkness conditions at 25 ºC and 4 ºC, a decrease in the values of b*, C*, and h* was observed during the 30 days of storage, although with less change in the condition without light at 4 ºC. Therefore, the first-order kinetic model was applied, and a fit to the experimental data (p < 0.05) was observed for the color coordinates (a*, b*, C*, and h*) in samples of isotonic beverages stored under light and darkness at 4 ºC and 25 ºC, to represent the changes over a given period of time. The coordinate a* (green chromaticity) presented very similar rates of change under light at 25 ºC and 4 ºC and was the most affected under these conditions, over a short period of time. Probably, the rate of change of this coordinate is due to the degradation of riboflavin (vitamin B2), which is greenish yellow, is responsible for the color of whey and of the ultrafiltration permeate, and is bleached by oxidation, being degraded under these study conditions (Goulding, Fox, & O'Mahony, 2020). The change in the b* coordinate (yellow chromaticity) is related to the degradation of the carotenoid powder from pequi that was added to the formulation and has a characteristic yellow color. It can be seen in Table 3 that the rates of change for the b* coordinate were higher under light conditions, indicating intense oxidation of the carotenoids. On the other hand, the rate of change of the b* coordinate in darkness was lower, with the lowest rate at refrigeration temperature (k = 0.0066 day⁻¹). As expected, the saturation (C*) and hue (h*) of the yellow color of the isotonic beverages stored under light at 25 ºC and 4 ºC underwent significant changes up to 7 days, according to the rates of change found (Table 3). As the samples of isotonic beverages stored in darkness at 25 ºC and 4 ºC showed the best color retention (Figure 2), the studies were extended to 40 and 60 days, respectively.

Sensory acceptance test
The averages and standard deviations of the scores attributed by the tasters to the attributes of the isotonic beverage samples formulated with whey ultrafiltration permeate, added pequi carotenoid extract powder, and natural pineapple or passion fruit aroma are shown in Table 4. The values presented represent the average of 100 judges ± standard deviation; averages followed by the same letters, in the same line, do not differ from each other by the t-test (p > 0.05). Source: Authors.
The analysis of variance (ANOVA) showed that the averages of all evaluated attributes did not differ significantly (p > 0.05) between the formulated beverages. Both beverages were equally accepted by the tasters with respect to all attributes. The color attribute was the one with the highest average, 6, which on the nine-point hedonic scale indicates that consumers liked it slightly. The other attributes obtained averages above 5, indicating that consumers were indifferent in terms of flavor, aroma, and overall impression. The ricotta cheese whey-based sports beverage prepared by Valadao and Geremias de Andrade (2015) had an acceptance equal to 6.3, a value close to that found for the whey permeate-based beverage of this work. Comparable acceptance values were also reported in the work of Martins, Chiapetta, and co-workers.

Conclusion
The storage condition in the presence of light at 25 ºC showed the greatest and fastest change in color coordinates and, consequently, degradation of the carotenoids in the isotonic beverage. Storage in darkness under refrigeration (4 ºC) proved to be the condition with the least degradation (the yellow color remained most stable) for the storage of the beverage. The isotonic beverage was accepted by athletes, the consumers for whom the beverage is intended, and is considered a promising supplement for obtaining carotenoids from pequi, which give the product a natural yellow tint and bioactive properties with antioxidant and pro-vitamin A activity, in addition to vitamin B2 and natural minerals from the whey ultrafiltration permeate, which assist in the replacement of electrolytes lost during physical activity. The beverage can also bring benefits to the environment, as it makes possible the use of permeate, which is often discarded as effluent. New isotonic beverages can be elaborated with natural sources of carotenoids, and further studies can be carried out on the stability of the color and of these carotenoids under different storage conditions.
Surfactant Therapy for Respiratory Distress Syndrome in High- and Ultra-High-Altitude Settings
Objective: The objective of this study is to investigate the therapeutic effect of surfactant replacement therapy (SRT) on respiratory distress syndrome (RDS) in premature infants in the Qinghai-Tibet Plateau. Materials and Methods: This multi-center retrospective cohort study collected and screened clinical data of 337 premature infants with RDS from 10 hospitals in the Qinghai-Tibet Plateau from 2015 to 2017. We grouped the cases by analyzing their baseline characteristics, used logistic analysis to evaluate each factor's effect on the prognosis of the infants, and compared the short-term improvement in blood gas and the mortality after SRT treatment at different altitudes, in high-altitude (1,500-3,500 m) and ultra-high-altitude (3,500-5,500 m) groups. Results: Independent of altitude, the mortality rate of children with RDS in the SRT group was significantly lower than that of children in the non-SRT group (both P < 0.05). The effect of SRT on preterm infants with RDS in the high-altitude group [odds ratio (OR) = 0.44, 95% confidence interval (CI) = 0.22-0.87, P = 0.02] was better than that in the infants in the ultra-high-altitude group (OR = 0.26, 95% CI = 0.13-0.58, P < 0.01), with death rates of 34.34% and 49.71%, respectively. Similarly, after SRT, the improvement of PaO2/FiO2 and pH in children at high altitude was significantly better than in children at ultra-high altitude (all P < 0.01). Conclusions: SRT plays a prominent role in treating infants with RDS in both high- and ultra-high-altitude regions, although with better effects at high rather than ultra-high altitude. This study provides a basis for further large-scale studies on SRT for RDS treatment at high altitudes.

INTRODUCTION
Respiratory distress syndrome (RDS) due to surfactant deficiency is a major cause of morbidity and mortality due to respiratory failure in newborn infants, especially those born prematurely (1,2). Since the 1990s, surfactant replacement therapy (SRT) has been effective in treating RDS (3) and has been confirmed to effectively reduce RDS-related mortality, pneumothorax incidence, and the risk of chronic lung disease (4)(5)(6)(7)(8).
Nowadays, SRT is part of the core treatment strategy for RDS; it prevents the collapse of alveoli and increases lung compliance, thereby improving survival and reducing respiratory morbidities (9). Additionally, the development of neonatal care and less-invasive methods of surfactant delivery has further promoted the widespread clinical use of SRT (10,11). High-altitude regions are regions at an altitude >1,500 m; globally, about 2% of the population lives in these regions (12). Three altitude ranges can be defined according to the Society of Mountain Medicine: high altitude (1,500-3,500 m above sea level), ultra-high altitude (3,500-5,500 m above sea level), and extreme altitude (above 5,500 m above sea level) (13). In high-altitude regions, the atmospheric pressure decreases with increasing altitude. While the proportion of oxygen in the atmosphere remains unchanged, the oxygen partial pressure and, hence, the driving pressure for gas exchange in the lungs decrease, and a hypoxic environment is formed (14). This determines the uniqueness of various pulmonary diseases and their therapy in high-altitude regions. Since the 20th century, SRT has been used in the therapy of neonatal RDS in the Qinghai-Tibet Plateau. Recently, as medical technology has gradually developed in high-altitude regions, the use of technologies such as ventilation support has increased as well. Additionally, in high-altitude areas, mechanical ventilation has been shown to effectively improve the arterial partial oxygen pressure in newborns with RDS and reduce their mortality (15). However, most clinical research data on SRT treatment for RDS so far come from plains or hilly areas, and relevant research in high-altitude regions is non-existent. For various reasons, such as lagging medical and economic progress and differences in language and customs, collecting clinical case data in high-altitude regions is very difficult. Although SRT is now more commonly used in RDS treatment in high-altitude regions, no specific study has confirmed the efficacy of this therapy under such conditions or the effects of SRT at different altitudes. Therefore, clarifying the effect of SRT on RDS in premature infants in high-altitude regions is important to improve the quality of perinatal healthcare in these regions. This study aimed to investigate the important role of SRT in premature infants with RDS in the Qinghai-Tibet Plateau.

Subjects
This was a multi-center retrospective cohort study conducted in 10 hospitals (including Qinghai Women and Children's Hospital) in the Tibetan Plateau from January 2015 to December 2017. We reviewed 632 cases of preterm infants with RDS and finally included 337 cases according to the inclusion criteria (Figure 1). The study was approved by the Ethics Committee of Shengjing Hospital Affiliated to China Medical University and registered in ClinicalTrials.gov (NCT03440333). Written informed consent from the participants' legal guardian/next of kin was not required to participate in this study, in accordance with the national legislation and the institutional requirements.

Sample Collection
A total of 337 clinical cases were grouped and evaluated based on the use of SRT and the altitude of the infants' birth area. We collected other basic information that may affect the children's outcomes. In addition, we recorded SRT use, the altitude of the infant's birthplace, and indicators of treatment effect, such as mortality and blood gas results.
Then, we compared the various indicators under different treatment methods and at different altitudes to determine whether there was any difference in the therapeutic effect.

Patient Identification The diagnosis of RDS was based on clinical manifestations (dyspnea, nasal flaring, groaning, and cyanosis after birth) and chest X-ray (CR) findings. Based on the characteristics of the infants' CR, RDS can be classified into four degrees: grade 1 (ground glass shadowing), grade 2 (ground glass shadowing with air bronchograms), grade 3 (confluent alveolar shadowing), and grade 4 (complete white lungs obscuring the cardiac border) (16). In this study, we set grades 1 and 2 as mild RDS and grades 3 and 4 as severe RDS. There were no obvious differences in the methods and technology used for RDS therapy among the neonatal intensive care units (NICUs) of the 10 hospitals. Furthermore, all infants were intubated to receive a surfactant (Curosurf, Chiesi Pharmaceuticals, Parma, Italy), which was given only once at a dose of 200 mg/kg.

Data Analysis The Wilcoxon test (Mann-Whitney U test) was used to compare categorical indicators between the subgroups. The Fisher exact test was adopted to analyze the relationships between the characteristics. Multivariate analysis via logistic regression was performed to quantify the factors' effects. For detailed comparisons, each altitude category was modeled twice: a model of patients without SRT treatment at high altitude (Model I), a model of the whole population at high altitude (Model II), a model of the population without SRT at ultra-high altitude (Model III), and a model of the whole population at ultra-high altitude (Model IV). Redundancy analysis (RDA) was performed to remove redundant information in the blood gas targets. The significance level (type I error, α) was uniformly set to 0.05 in this study. All analyses were performed in R (version 3.6.3, 64-bit, 2020-02-29).

General Information We identified 632 infants in the Tibetan Plateau diagnosed with RDS during the study period. Of these, we excluded 215 patients as they were not hospitalized after diagnosis or had incomplete clinical data; 35 for having congenital malformations, genetic metabolic abnormalities, intrauterine infections, and chromosomal diseases; 29 whose gestational ages were outside the study range; and 16 infants whose birthplace altitudes were <1,500 m, leaving 337 patients for analysis (Figure 1). The study population characteristics are shown in Table 1. Most subjects were male (209, 60.2%), and the median birth weight was 1.7 kg. The Apgar score at 5 min varied between 7 and 8 points. Over 80% of subjects had a gestational age between 28 weeks and 33 weeks + 6 days. Overall, 174 (51.6%) were judged as having severe RDS. SRT was used in 150 patients (44.5%), and the rest were treated without SRT. Only the Apgar score at 5 min and RDS severity were significantly different at baseline between the high- and ultra-high-altitude groups. In particular, the change in health status of patients differed significantly between the high-altitude and ultra-high-altitude groups, with death rates of 34.34% and 49.71%, respectively (P = 0.006; Figure 2A). At the same time, SRT significantly reduced mortality in both the high-altitude and ultra-high-altitude groups (P = 0.013 and 0.047; Figures 2B,C).
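To make the modeling strategy above concrete, here is a minimal, hedged sketch of this kind of analysis. The authors worked in R 3.6.3; the Python translation below, with a hypothetical file name and hypothetical column names (death, srt, severe_rds, birth_weight_kg, gest_age_wk), is an illustration of the approach, not the study's actual code.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.stats import fisher_exact

# Hypothetical cohort table: one row per infant (column names are assumptions)
df = pd.read_csv("rds_cohort.csv")

# Fisher exact test: mortality vs. SRT use (2x2 contingency table)
table = pd.crosstab(df["srt"], df["death"]).to_numpy()
odds, p = fisher_exact(table)
print(f"Fisher exact: OR = {odds:.2f}, P = {p:.3f}")

# Binary logistic regression: death ~ SRT + severity + birth weight + GA,
# analogous to Models I-IV (fit separately per altitude subgroup)
X = sm.add_constant(df[["srt", "severe_rds", "birth_weight_kg", "gest_age_wk"]])
fit = sm.Logit(df["death"], X).fit(disp=0)

# Exponentiate coefficients to report odds ratios with 95% CIs;
# OR < 1 marks a protective factor, OR > 1 a risk factor
or_table = np.exp(pd.concat([fit.params, fit.conf_int()], axis=1))
or_table.columns = ["OR", "CI 2.5%", "CI 97.5%"]
print(or_table)
```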
Multivariate Analysis of Factor Contributions to RDS Treatment Binary logistic regression analysis based on altitude and therapeutic intervention was used to analyze the risk factors related to the change in status. The results showed that RDS severity [odds ratio (OR) = 3.76, P = 0.03] was a risk factor and birth weight (OR = 0.22, P = 0.03) was a protective factor in the non-SRT subset of the high-altitude group (Figure 4A). Nevertheless, SRT (OR = 0.44, P = 0.02) emerged as a new protective factor in the model of the overall high-altitude sample (Figure 4B). Further correlation analysis in the ultra-high-altitude group showed that RDS severity (OR = 6.42, P = 0.00) was still a risk factor, while gestational age (OR = 0.16, P = 0.01) was a protective factor in the non-SRT subset (Figure 4C). Similar to the model of the overall high-altitude sample, SRT (OR = 0.28, P < 0.01) still appeared protective in the model of the overall ultra-high-altitude sample (Figure 4D). Notably, the OR of risk factors decreased and that of protective factors increased after introducing SRT at both altitude levels. To compare the role of indicators in the models, we calculated each hotspot coding feature and summarized the importance of variables. Here, the scores represent the similarity between the model and the variables. Interestingly, after introducing SRT, although the SRT score in the overall high-altitude group model (score = 0.4) was slightly lower than in the overall ultra-high-altitude group model (score = 0.64), there was a greater decrease in the protective factors' scores (birth weight: 0.65 to 0.38; gestational age: 0.86 to 0.55) (Table 2).

Synthesis Evaluation by Blood Gas Targets Blood gas analyses were used to demonstrate the diverse improvements in patients at various altitudes. To avoid amplifying redundant information from the various time points, pH, PaCO2, and PaO2/FiO2 (p/f) were tested; RDA was conducted to reveal the comprehensive effect of variables in the high-altitude and ultra-high-altitude groups, with pH, PaCO2, and p/f at 6, 12, and 24 h after SRT regarded as "outcome variables" and sex, Apgar score at 5 min, RDS severity, birth weight, and gestational age as "environmental variables". The results showed that the highest RDA1 loading among the variables was 0.9188, for RDS severity, indicating that RDS severity was the most influential factor in the high-altitude group, while in the ultra-high-altitude group it was the Apgar score at 5 min (|RDA1| = 0.6581). Conversely, the lowest RDA1 loading among the variables was 0.1703, for gestational age (GA), indicating that GA was the least influential factor in the high-altitude group, while in the ultra-high-altitude group it was sex (|RDA1| = 0.1093) (Figures 5A,B). We chose pH, PaCO2, and p/f and examined their changes with SRT at different altitudes. The temporal dynamics curves showed clear differences in pH and p/f at the various altitudes over time. In summary, compared with patients at ultra-high altitude, patients at high altitude entered normal, or almost normal, ranges faster after SRT (Figures 5C-E).

DISCUSSION To our knowledge, this multi-center retrospective study is the first to focus on the efficacy of SRT in the treatment of RDS in premature infants in high-altitude regions. In this study, we provide a theoretical basis for continuing to use exogenous pulmonary surfactant to treat RDS in premature infants in high-altitude regions.
There were significant differences between patients who were and were not treated using SRT in both high- and ultra-high-altitude regions. In fact, after strictly controlled grouping the other indicators did not differ; patients were regarded as unmatched and were therefore not excluded beyond the subgroup classification. This shows that SRT can effectively reduce alveolar surface tension and increase lung compliance in preterm infants with RDS in high-altitude regions. SRT can achieve better lung dynamics and oxygenation, reduce hypoxic damage in various organs and systems, and improve the prognosis. Based on the early improvement in the infants' blood gas analysis results and the improved final prognosis, the efficacy of SRT is better in high-altitude regions than in ultra-high-altitude regions. This result may be related to the unique natural environmental characteristics of high-altitude areas, mainly the low atmospheric pressure. Preterm infants fail to fully expand the lungs after birth due to RDS, and a vicious cycle of worsening hypoxemia can result from pulmonary vasoconstriction, pulmonary hypoperfusion, increased right heart pressure, and right-to-left shunting across the foramen ovale and the ductus arteriosus (17). As a result of the worsening oxygenation, preterm infants with RDS born at high altitude might have a greater demand for SRT after early mechanical or non-invasive ventilation than those born at lower altitudes (18). Under hypoxia, pulmonary capillary damage and changes in membrane permeability function occur (19, 20); aquaporin expression in lung tissue is inhibited (21), and the activity of Na+-K+-ATPase on the lung basal membrane is inhibited (22). All these conditions could cause severe pulmonary edema in infants, in turn influencing the effect of SRT on RDS in preterm infants. In addition, the exposure of the lung surfactant to high-altitude-induced oxidative stress may result in the peroxidation of unsaturated phospholipids, surfactant inactivation, airspace collapse, and impaired gas exchange, which would reduce SRT's curative effects (23). Furthermore, the hypoxic environment in high-altitude regions appears to increase the incidence of intrauterine growth retardation (24). Compared to areas at sea level, the development of the various organs of fetuses of the same gestational age in high-altitude regions is slower. Usually, when the fetus reaches 35 weeks of gestation, the synthesis and secretion of pulmonary surfactant by alveolar type II epithelial cells rapidly increase, and the surfactant transfers to the surface of the alveoli. However, in high-altitude regions, fetuses at 35 weeks of gestation may not have reached this peak period; thus, they may have less lung surfactant.

Fig. 4. (A) Model I for non-SRT samples in the high-altitude group, (B) Model II for whole samples in the high-altitude group, (C) Model III for non-SRT samples in the ultra-high-altitude group, and (D) Model IV for whole samples in the ultra-high-altitude group. OR, odds ratio; 95% CI, 95% confidence interval; RDS, respiratory distress syndrome; SRT, surfactant replacement therapy. A P value displayed as 0 represents P < 0.01.

SRT administration time and dosage also have an impact on its therapeutic effect in infants with RDS (25, 26). It is believed that early administration after RDS diagnosis is more conducive to reducing mortality. Based on the above analysis, symptoms of hypoxia could be reduced by increasing the duration of exogenous pulmonary surfactant use or by the early use of non-invasive ventilation such as continuous positive airway pressure (CPAP) to ensure effective exogenous SRT in infants with RDS.
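As a side illustration of the low-pressure argument above, the sketch below estimates how inspired oxygen pressure falls across the altitude bands used in this study. It relies on textbook physiology and the International Standard Atmosphere pressure formula, not on data from this paper; the 47 mmHg water vapour correction assumes fully humidified airway gas at 37 °C.

```python
def barometric_pressure_mmhg(alt_m: float) -> float:
    """Tropospheric barometric pressure (International Standard Atmosphere)."""
    p_pa = 101325.0 * (1.0 - 2.25577e-5 * alt_m) ** 5.25588
    return p_pa / 133.322  # Pa -> mmHg

def inspired_po2_mmhg(alt_m: float, fio2: float = 0.209) -> float:
    """PiO2 = FiO2 * (Pb - 47 mmHg): the O2 fraction is constant, pressure is not."""
    return fio2 * (barometric_pressure_mmhg(alt_m) - 47.0)

# Sea level vs. the study's altitude band boundaries
for h in (0, 1500, 3500, 5500):
    print(f"{h:>5} m: Pb ~ {barometric_pressure_mmhg(h):5.0f} mmHg, "
          f"PiO2 ~ {inspired_po2_mmhg(h):4.0f} mmHg")
# ~149 mmHg at sea level falls to roughly 90-95 mmHg at 3,500 m and
# ~70 mmHg at 5,500 m -- the hypoxic background discussed above.
```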
However, due to the limited economic and medical conditions in high-altitude regions, few infants with RDS can receive multiple doses of SRT to meet their physiological needs, and most receive it either once or not at all. In the future, developing economic and medical conditions in high-altitude regions may allow comparison of the efficacy of SRT with different drug dosages for neonatal or preterm infants with RDS.

Table 2 (caption): Model I to Model IV are consistent with Figure 4. RDS, respiratory distress syndrome; SRT, surfactant replacement therapy.

Factors such as sex, gestational age, RDS severity, and birth weight affect SRT efficacy. Therefore, we conducted a multifactor analysis to compare their effects on SRT efficacy in high-altitude regions. RDS severity was a risk factor in all groups. Even with SRT, the prognosis of preterm infants with RDS was largely affected by RDS severity in both the high-altitude and ultra-high-altitude groups. This means that in high-altitude regions, for infants with severe RDS, exogenous SRT may not achieve the expected results, which needs to be considered in advance. Some maternal conditions during pregnancy, such as hypertension and diabetes, will also affect the maturity of the infant's lungs, which may affect the efficacy of SRT. Various studies on SRT have shown that the effect of a first dose of pulmonary surfactant of 200 mg/kg is better than 100 mg/kg (10) and that early initiation of CPAP with subsequent selective surfactant replacement is superior to prophylactic surfactant therapy (27, 28). However, these studies only included infants at sea level or in non-high-altitude areas. In high-altitude areas, due to factors such as hypoxia, the response of infants with RDS to various treatments might differ, and this requires further study. Our research proves the value of SRT for RDS in high-altitude regions. In addition, on the basis of this research, more in-depth research can be carried out, for example, studies on immature or ultra-low birth weight infants. Our study had some limitations. First, some objective conditions in the high-altitude regions had a great impact on our data collection. However, this also confirms the importance of the data collected for this study. Second, due to medical conditions, economic conditions, and other restrictions, many mothers did not undergo systematic examinations during pregnancy, which has led to a regrettable lack of some baseline characteristics in our data collection. With the continued development of high-altitude areas, we will further expand the sample size and add to the various indicators for children and pregnant women in future studies. Third, other complications of preterm infants, such as patent ductus arteriosus (PDA), will also affect the treatment of RDS. However, due to objective medical conditions, we have not been able to collect data related to PDA in the hospitals of the Qinghai-Tibet Plateau, which is quite regrettable. In the future, we will try our best to promote the application of new medical technologies (such as bedside ultrasound), continue to work on neonatal diseases and treatment-related research in high-altitude areas, and further improve these contents. Our research was limited to the Qinghai-Tibet Plateau, and the characteristics of other high-altitude areas worldwide might differ; therefore, our results need to be validated by studies in other high-altitude areas. We hope that our findings can provide insights for the treatment of RDS in other regions.
We also look forward to cooperating with other regions to conduct comprehensive large-scale multi-center research.

CONCLUSION In conclusion, this multi-center retrospective study confirms that SRT is effective for the treatment of RDS in premature infants in high-altitude regions, but its therapeutic effect was affected by the plateau environment. In high-altitude regions, SRT efficacy decreases with increasing altitude. The present results serve as initial evidence for SRT use at high altitude and reflect the need for standardized guidelines for SRT in high-altitude regions.

DATA AVAILABILITY STATEMENT The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found in the article/supplementary material.

ETHICS STATEMENT The studies involving human participants were reviewed and approved by the Ethics Committee of Shengjing Hospital Affiliated to China Medical University. Written informed consent from the participants' legal guardian/next of kin was not required to participate in this study in accordance with the national legislation and the institutional requirements.
δ13C of terrestrial vegetation records Toarcian CO2 and climate gradients

Throughout Earth's history, variations in atmospheric CO2 concentration modulated climate. Understanding changes in the atmospheric carbon cycle is therefore pivotal in predicting consequences of recent global warming. Here, we report stable carbon isotopes (δ13C) of molecular land plant fossils complemented by bulk organic and inorganic carbon fractions for early Toarcian (Early Jurassic) sediments that coincided with global warming and a carbon cycle perturbation. The carbon cycle perturbation is expressed by a negative excursion in the δ13C records established for the different substrates. Based on differences in the magnitude of the carbon isotope excursion recorded in land plants and marine substrates we infer that the early Toarcian warming was paralleled by an increase in atmospheric CO2 levels from ~500 ppmv to ~1000 ppmv. Our data suggest that rising atmospheric CO2 levels resulted from the injection of 12C-enriched methane and its subsequent oxidation to CO2. Based on the cyclic nature of the CIE we conclude that methane was released from climate-sensitive reservoirs, in particular permafrost areas. Moderate volcanic CO2 emissions led to a destabilization of the labile permafrost carbon pool, triggering only the onset of Toarcian climate change. The main carbon cycle perturbation was subsequently driven by a self-sustained demise of a carbon-rich cryosphere progressing from mid to high latitudes, as reflected by latitudinal climate gradients recorded in land plant carbon isotopes.

Anthropogenic fossil carbon emissions steadily increase atmospheric CO2 levels and thereby impact Earth's climate and carbon cycle 1. As a consequence, rising global temperatures can lead to a reactivation of carbon stored in permafrost regions that, upon its release to the atmosphere, will further accelerate global warming 2. Melting polar ice caps and sea level rise, climate extremes, and enhanced stress for marine and continental ecosystems have been proven to be direct consequences of global warming 3-5.
Predictions on the evolution of Earth's climate system, the carbon cycle, and the response of ecosystems are, however, problematic. Thus, sediment archives that record ancient climate perturbations can serve as analogues for recent climate change and can thereby guide predictions of the consequences of global warming and its cascade of effects. Here, we address changes in Earth's climate and carbon cycle that occurred in conjunction with the early Toarcian Oceanic Anoxic Event (Early Jurassic; ~183 Ma). This study utilizes stable carbon isotopes recorded in different substrates, facilitating the reconstruction of changes in the global carbon cycle, atmospheric CO2 levels, and latitudinal climate gradients during the early Toarcian global warming.

Background Around the globe, sediment archives that span the early Toarcian record profound environmental changes. A rapid high-amplitude sea level rise, paralleled by a decline in oxygen isotope values of macrofossil calcite, has been interpreted to reflect a rise in sea water temperatures that was potentially accompanied by a reduction in the volume of land-based ice caps 6-9. Rising global temperatures evolved parallel to an increase in atmospheric CO2 levels inferred from stomata data 10. In the marine realm, global warming led to an expansion of marine death zones and triggered the genesis of the Toarcian Oceanic Anoxic Event (T-OAE) 11, whereas on land it caused substantial shifts in floral assemblages 10,12,13. A hallmark of the early Toarcian is a negative carbon isotope excursion (CIE) that is interpreted to reflect a global carbon cycle perturbation caused by injections of 12C-enriched carbon into Earth's hydro-atmosphere system 9,14-18. Carbon sources are debated controversially and comprise volcanic CO2 and/or thermogenic CH4 associated with the emplacement of the Karoo-Ferrar Large Igneous Province of southern Gondwana 10,19, destabilization of methane hydrates 14,16, increased rates of wetland methanogenesis 17, or permafrost decay and thermokarst blowout events during global warming 9. The CIE has been reported in marine and terrestrial organic matter as well as in marine carbonates 10,12,14,15, suggesting that the carbon cycle perturbation affected the entire exchangeable carbon reservoir. A decline in δ13C documented in land plant-derived lipids indicates atmospheric 13C depletion and substantiates a perturbation of the atmospheric carbon cycle 18,20,21. However, current δ13C records of land plant-derived lipids cover only a brief stratigraphic interval and provide no information on the recovery phase of the CIE or on the long-term evolution of the atmospheric carbon reservoir. Moreover, information on the atmospheric CO2 concentration and its absolute change during the early Toarcian warming event is based on stomata data from a single section only and spans just the onset of the CIE 10. Reconstruction of the atmospheric CO2 concentration may further be complicated by stratigraphic gaps and methodological limitations 10,22. Here we utilize compound-specific carbon isotope data of land plant wax lipids to reconstruct changes in the atmospheric carbon reservoir across the early Toarcian carbon cycle perturbation and the associated climate event. The δ13C analysis of land plant-derived wax lipids, compounds not affected by the differential preservation of fossilized wood fragments 15, provides a robust method for reconstructing changes in the isotopic composition of the atmospheric carbon reservoir.
The compound-specific δ13C record is complemented by δ13C data from marine calcite that reflect changes in the oceanic carbon reservoir. The reconciliation of δ13C excursions in land plant and marine substrates allows reconstruction of changes in the entire exchangeable carbon reservoir. Moreover, the parallel evaluation of marine and terrestrial carbon isotope excursions provides information not only on changes in atmospheric CO2 concentration but also on absolute atmospheric CO2 levels prior to and during the early Toarcian carbon cycle perturbation 23,24.

Study site. In this study we investigated upper Pliensbachian to lower Toarcian sediments, represented by the Emaciatum to Serpentinum ammonite zones and the NJT5b to NJT6 nannofossil zones, cropping out at La Cerradura (Subbetic, southern Spain) 25. Ammonite assemblages in combination with coccolithophore-based biostratigraphic data indicate that the sediments can be correlated with the T-OAE 26, which is further supported by paleontological and geochemical data 25. The sediments, mainly marlstone-limestone alternations, were deposited on a fragmented marine platform with hemipelagic sedimentation at a paleolatitude of about 26°N at the southern Iberian paleomargin. Floral assemblages suggest that during the Early Jurassic (183 Ma) the study site was located in the semi-arid climate belt 27 (Fig. 1).

Results and Discussion An atmospheric record of the Toarcian carbon cycle perturbation. The early Toarcian carbon cycle perturbation is expressed in the δ13Corg and δ13Ccarb data by negative excursions of −3.4‰ and −1.2‰, respectively (Table 1 in the SI). A shift towards lower δ13C values occurred in a stepwise manner at the Polymorphum-Serpentinum zonal transition (Fig. 2). The stratigraphic position as well as the pattern and pacing of the CIE at La Cerradura match trends from other locations documenting a multiphasic carbon cycle perturbation 9,15,16. The δ13C signatures of terrestrial n-alkanes record a negative CIE with a magnitude of −3.7‰ (−3.1‰ on average) (Fig. 2) and parallel the high-resolution δ13C bulk data (Table 2 in the SI). The stepped CIE character is documented in the δ13C n-alkane record, confirming that the CIE reflects multiple re-occurring carbon injections into the Earth's ocean-atmosphere system 9,16. Moreover, our data unequivocally demonstrate that the Toarcian carbon cycle perturbation affected not only the marine but also the atmospheric carbon reservoir, as previously shown by Pienkowski et al. (ref. 12) and Hesselbo et al. (refs. 14,15). The −3.7‰ magnitude of the CIE at La Cerradura is similar to that reported in long-chain n-alkanes from the Sichuan Basin (China) 18, but is slightly lower than the −5.3‰ CIE (−4.2‰ on average) determined for terrigenic n-alkanes from the Cleveland Basin (UK) 20 (Fig. 2). Differences in the magnitude may originate from the low stratigraphic coverage of compound-specific δ13C values and/or stratigraphic incompleteness of the sections. Higher and more variable magnitudes in the range from −3.5 to −8.0‰ reported in δ13Cwood (Table 3 in the SI) 12,14,15,21 can be attributed to differential preservation states (e.g. jet, charcoal), molecular heterogeneity, or taxonomic impacts on the isotopic signature of fossil wood 15. Moreover, when preserved as jet (degraded wood), microbial reworking and impregnation by marine taxa during exposure to seawater can alter the initial δ13C signature 15.
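As a brief aside on how such excursion magnitudes are quantified: one simple convention, used here purely for illustration (it is not necessarily the authors' exact procedure), subtracts a pre-excursion baseline mean from the excursion minimum of a stratigraphically ordered δ13C series.

```python
import numpy as np

# Hypothetical delta13C values (permil), ordered by stratigraphic height
d13c = np.array([-27.0, -27.2, -26.9, -29.5, -30.6, -30.2, -28.0, -27.1])

baseline = d13c[:3].mean()          # mean of an assumed pre-excursion interval
magnitude = d13c.min() - baseline   # a negative CIE yields a negative number
print(f"CIE magnitude: {magnitude:.1f} permil")  # -> -3.6 permil
```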
While the δ13C n-alkane records for the different basins show similar trends and magnitudes of the CIE, their absolute δ13C values differ. The δ13C n-alkane records from the Cleveland and Sichuan basins both show base values more depleted in 13C by about 4 to 5‰ when compared with base values from La Cerradura (Fig. 2). This offset relates to latitudinal climate gradients associated with different floral assemblages and precipitation rates impacting on the δ13C of land plants 23,28-30 (Fig. S4 in the SI). During the Early Jurassic, the Cleveland and Sichuan basins were located in a winter-wet temperate climate belt, while southern Iberia was situated in the winter-wet to semi-arid climate belt 27 (Fig. 1). Lower precipitation rates in the latter are expressed in a dominance of xerophytic flora 31,32 and are evident in clay mineral assemblages 33. Accordingly, the differences in the δ13C n-alkane values from the different basins reflect a strong latitudinal climate gradient. A dominance of exceptionally long n-alkanes in samples from La Cerradura (Figs. S1, S2 in the SI) confirms organic matter contributions from xerophytic flora. Therefore, δ13C n-alkane at La Cerradura records the terrestrial δ13C pool as part of the global carbon cycle.

Fig. 2. Stable carbon isotopes determined on fossilized land plant lipids, bulk organic and carbonate carbon from the La Cerradura section (southern Spain) show a stepped negative CIE at the Polymorphum-Serpentinum zonal transition, confirming that the early Toarcian carbon cycle perturbation affected the entire exchangeable carbon reservoir. At La Cerradura terrigenic lipids record a magnitude in Δ13C n-alkane of −3.7‰, which is comparable to the magnitude documented for the CIE recorded in plant wax alkanes from the Cleveland (UK) 20 and Sichuan Basins (China) 18. Differences in the absolute values of plant wax alkane δ13C reflect environmental conditions in different climate belts (see Fig. 1).

Quantifying atmospheric CO2 levels across the early Toarcian CIE. The early Toarcian CIE was associated not only with changes in the isotopic composition of the exchangeable C-reservoir, but also with changes in atmospheric pCO2. Stomata-based estimates suggest pCO2 of up to 1200 ppmv and of 250 to 1800 ppmv in the pre-CIE and CIE intervals, respectively. However, fragmentary deposition, stratigraphic incompleteness, and a very low number of data points complicate robust stomatal pCO2 estimates. Moreover, the stomata proxy is poorly calibrated and can also respond to environmental factors other than atmospheric CO2 22,34. An alternative approach for determining ancient pCO2 levels is based on the observation that the isotopic fractionation of C3 land plants varies not only with precipitation rates but also with pCO2 23,24,35. This CO2 effect results in a higher isotopic fractionation when pCO2 levels increase and thereby causes higher CIE magnitudes in terrigenic than in marine substrates 24. Offsets in the CIE magnitude of terrigenic versus marine substrates thus facilitate the determination of absolute atmospheric pCO2 levels 24,35 (for details we refer to the supplementary information). However, as pointed out by Schubert & Jahren (ref. 23) and Lomax et al. (ref. 36), under enhanced water stress the carbon isotopic signatures of C3 plants vary as a function of precipitation rates and then do not unambiguously reflect past atmospheric CO2 concentration.
According to recent observations, a strong impact of precipitation rates on the δ13C of land plant biomass has been documented for vegetation in areas with mean annual precipitation rates < 2200 mm/year. On the contrary, precipitation seems to have no significant impact on land plant δ13C in areas with high mean annual precipitation rates 23. The dominance of xerophytic flora at the southern Iberian paleomargin, which here is represented by the La Cerradura section, suggests low paleo-precipitation rates and eventually enhanced paleo-water stress 31,32. When compared to localities at higher latitudes, lower paleo-precipitation rates also manifested themselves in the 13C-enrichment of the land plant biomass. We can, however, only speculate about absolute paleo-precipitation rates at the southern Iberian paleomargin, which complicates evaluating the impact of water stress on the δ13C of land plant biomass. In order to calculate pCO2 levels and to minimize the effect of different paleo-precipitation rates, we compared data from the La Cerradura section, located in a semi-arid climate belt, with data from Yorkshire (UK) 20 and from China 18 that were both situated in a humid climate belt (Fig. 1). In particular, the δ13C n-alkane data from sites situated in a humid climate are expected to vary as a function of changing atmospheric CO2 levels 36. Moreover, a CO2 dependence of land plant δ13C has also been documented for vegetation growing under low water treatment 29. It is therefore reasonable to assume that changes in δ13C n-alkane at all sites will also vary as a function of changes in the atmospheric CO2 concentration. This assumption is underpinned by the consistent evolution and similar magnitudes of the CIE seen in the δ13C n-alkane records at all sites investigated (Fig. 2). Based on the δ13C n-alkane data from La Cerradura (this study), Yorkshire 20 and China 18 we calculated a maximal magnitude of the CIE terrigenic of −4.2‰ (−3.1‰ on average). A higher CIE terrigenic of −5.4‰ is achieved when including δ13C data of fossil wood and phytoclasts (Table 3 in the SI). Following the approach of Schubert & Jahren (ref. 24), we determined the magnitude of the CIE in marine substrates (CIE marine) by using δ13Ccarb data from oxygenated marine basins only. This includes data from organic matter-lean sediments deposited at the southern part of the West Tethys Shelf. In these areas the seafloor preferentially remained oxygenated throughout the early Toarcian 37. For such settings, organic matter-induced carbonate diagenesis and/or CO2 recycling in stratified water bodies that may alter the δ13C signature can be assumed to be minimal or can even be excluded 38,39. Carbon isotope data from marine organic matter are not included in our calculation, as δ13Corg values can be affected by mixing of organic matter from marine phototrophic and non-phototrophic organisms or land plants 24. We calculated an average CIE marine of −2‰ (Table 3 in the SI), which is similar to the −2 to −3‰ estimate by Suan et al. (ref. 40). Using the δ13C n-alkane based CIE terrigenic and the CIE marine we calculated a ΔCIE (ΔCIE = CIE terrigenic − CIE marine) of −1.5 and −2.2‰ for the average and maximal values of the CIE terrigenic, respectively. Including the δ13C data from fossil wood yields a ΔCIE of about −3.4‰.
Calculation of pCO2 levels prior to the CIE (pCO2(init)) and during the climax of the CIE (pCO2(CIE)) further requires an estimate of the ΔpCO2, which here is derived from mass balance calculations as a function of the CIE marine and the isotopic signature of the respective carbon source. We calculated ΔpCO2 values for carbon sources with isotopic signatures characteristic of: i) biogenic methane emissions (δ13C: −70‰ 41,42), ii) gas hydrates (δ13C: −60‰ 43), iii) thermogenic methane (δ13C: −35‰ 43), and iv) a source dominated by volcanogenic CO2 (δ13C: > −10‰ 42) (for details see the supplementary information). For an isotopically light carbon source (−70 to −50‰) and a ΔCIE of −2.2‰ and −3.4‰, we calculated values for pCO2(init) of ~600 ppmv and ~400 ppmv, respectively, whereas for pCO2(CIE) we obtained 1200 and 850 ppmv, respectively (Fig. 3). The initially low pre-CIE CO2 estimates will be affected by a maximum uncertainty of about +350/−100 ppmv, while a higher maximum uncertainty of about +1000/−400 ppmv must be assumed for the CO2 estimates during the CIE 44. Errors result from uncertainties in the model-curve fit of the experimental data 23 and from uncertainties in the input parameters used to calculate pCO2 44 (Fig. S3 in the SI). The error range also includes uncertainties arising from the unknown paleoenvironmental conditions under which the fossil plants grew 44. The uncertainty can be assumed to be comparable to that associated with other methods for past pCO2 reconstruction 22,44. Our isotope-based estimates are close to the stomata-based pCO2 assessment 10. However, in contrast to McElwain et al. (ref. 10), our data attest to a doubling in pCO2 instead of a threefold increase (Fig. 3). Our results strongly suggest that the early Toarcian carbon cycle perturbation was caused by carbon released in the form of 12C-enriched methane from a cryosphere collapse 9 or, alternatively, from marine gas hydrates 14 or wetlands 17. With respect to the uncertainties in the ΔCIE value and in the δ13C-based CO2 reconstruction 44, thermogenic methane release from the thermal alteration of organic matter-rich sediments during the Karoo-Ferrar emplacement 10,19 would be plausible as well. Such a scenario is, however, not supported by geochemical data 45,46 and is further difficult to reconcile with the orbitally-forced cyclic pattern of the CIE, which is only explained by carbon release from climate-sensitive reservoirs responding to changes in Earth's solar orbit 9,16. On the contrary, the release of biogenic and thermogenic methane from glacier- and permafrost-capped reservoirs would be a plausible scenario 9 that is supported by recent observations 41. Assuming volcanic CO2 emissions as the major driver of the early Toarcian climate change would require the release of enormous amounts of CO2 that would have shifted pCO2 levels from about 1000 ppmv during pre-event times to more than 4000 ppmv during the CIE (Fig. 3). Thus, direct volcanic CO2 emissions fail to explain both the magnitude of the CIE and that of climate change (Fig. 3). A plausible scenario would be that the emplacement of the Karoo-Ferrar Large Igneous Province released small quantities of volcanic CO2 and eventually some thermogenic methane from Gondwana coals.
Both initiated a moderate rise in global temperatures, triggering the release of 12C-enriched carbon from mid-latitudinal climate-sensitive reservoirs. In combination with changes in Earth's solar orbit, this atmospheric carbon increase stimulated a self-sustaining cryosphere demise prograding to higher latitudes and thereby releasing even more cryosphere-stored carbon, a process assumed to be the major driver of the early Toarcian climate and environmental change 9. Our results allow us to postulate that the early Toarcian carbon cycle perturbation and associated climate changes were driven primarily by the release of huge quantities of 12C-enriched methane from climate-sensitive cryosphere reservoirs.

Conclusions The compound-specific carbon isotope record for land plant-derived long-chain n-alkanes from Iberia provides a robust long-term record of changes in the atmospheric carbon reservoir that occurred in concert with the early Toarcian global warming. The presence of a negative CIE in long-chain n-alkanes that parallels the bulk organic and inorganic δ13C trends confirms 13C-depletion of the entire exchangeable carbon reservoir, in particular atmospheric 13C-depletion. Based on offsets in the magnitude of the CIE reported in terrigenic and marine substrates, we calculated that a doubling in atmospheric CO2 levels paralleled the carbon cycle perturbation and global warming. Carbon added to the ocean-atmosphere system was strongly enriched in 12C and derived from climate-sensitive cryosphere reservoirs. Karoo-Ferrar volcanism may have triggered global warming, but volcanic CO2 emissions fail to explain the magnitude of the carbon cycle perturbation. Accordingly, volcanic CO2 was only a trigger but not the driver of the early Toarcian climate change, which was caused by a successive and self-sustaining cryosphere collapse. Our data suggest that the environmental changes that occurred concomitant to the T-CIE were linked to the release of huge amounts of cryosphere methane into the Earth's ocean-atmosphere system.

Material and Methods Sampling. Geochemical analyses were performed on sample material taken at the La Cerradura section after removal of surface rocks that potentially experienced alteration due to weathering. All samples were taken at least 30 cm below the surface. Rock samples were crushed and powdered in order to obtain a homogenous and representative sample. Prior to geochemical analysis, the powdered sample material was dried in an oven at 40 °C for 48 h.

Stable carbon isotope analysis of the bulk organic matter and carbonate. Stable carbon isotope analyses of bulk organic carbon (δ13Corg) were performed on decalcified sample material 9. Decalcification was achieved by treating the sample material with hydrochloric acid (HCl, 10% and 25%) to remove carbonate-bound and, if present, dolomite-bound carbon. Afterwards, samples were washed, neutralized with deionized water, and dried in an oven at 40 °C for 48 h. Stable carbon isotope analysis was performed using a Thermo Finnigan Delta V isotope ratio mass spectrometer coupled to a Flash EA via a ConFlo III interface.

Fig. 3. In dependency of the carbon source and its isotopic signature, different and partly contrasting CO2 scenarios can be proposed. The best-fit scenario is achieved for carbon sources enriched in 12C, suggesting that the CIE and climate change were driven by carbon injections from cryosphere collapse 9, or gas hydrates and wetlands 17 (low pCO2 scenario, blue and orange asterisks).
Such a scenario agrees with stomata-based pCO2 estimates 10. A contribution from thermogenic methane released from fossil hydrocarbon sources would be plausible as well (moderate pCO2 scenario, green and red asterisks). On the contrary, scenarios invoking volcanic CO2 emissions as the primary driver of the early Toarcian carbon cycle perturbation are not supported by our data (high pCO2 scenario).

The carbonate fraction was measured for its carbon isotopes using a Kiel III carbonate preparation line connected to a Thermo Fisher 252 mass spectrometer. Powdered and homogenized samples were treated with 103% phosphoric acid at 70 °C 47. Carbon isotope ratios of the organic matter and the carbonate are expressed in conventional delta notation: δsample (‰) = [(Rsample/Rstandard) − 1] × 1000, where R is the 13C/12C ratio of the sample and of the V-PDB standard, respectively. Reproducibility and accuracy were monitored by replicate standard and sample analyses and are better than 0.1‰.

Stable carbon isotope analysis of land plant n-alkanes. Total lipid extracts for selected samples were obtained by solvent extraction using a Soxhlet apparatus. As extraction solvent we used a mixture of dichloromethane (DCM) and methanol (MeOH) (9:1, v/v). Similar to the method applied by Ruebsam et al. (ref. 48), total bitumen extracts were separated into aliphatic, aromatic, and polar hydrocarbon fractions by silica gel column chromatography (8 ml SPE column, 2.8 g Silica 60, 25-40 μm) using solvents of increasing polarity in an LCTECH automated SPE system. The aliphatic hydrocarbon fractions were treated with activated copper turnings in order to remove elemental sulfur. GC-MS measurements of the aliphatic hydrocarbon fractions were performed on an Agilent 5975B MSD interfaced to an Agilent 7890A GC equipped with a quartz capillary column (Agilent DB1-HT; 60 m length, 0.25 mm inner diameter, 0.25 μm film thickness). The temperature program of the GC oven was: 70 °C (5 min isothermal) to 140 °C at 10 °C/min, then to 325 °C at 3 °C/min (held for 7 min). The quadrupole MS was operated in scan mode over the m/z 50 to 750 range. Compounds of interest were identified via their characteristic mass spectra and were integrated manually using the GC/MSD MassHunter software (Agilent Technologies) 48. The aliphatic hydrocarbon fractions of all samples analyzed are clearly dominated by odd-numbered long-chain n-alkanes (Fig. S1 in the SI), originating from land plants 49. Cyclic aliphatic hydrocarbons (steroids, hopanoids) are present as well, but occur at very low abundances (acyclic/cyclic > 10; Figs. S1 and S2 in the SI). Moreover, the temperature program of the GC oven was modified to minimize co-elution of the odd-numbered n-alkanes with cyclic aliphatic hydrocarbons (Fig. S2 in the SI). Due to the clear dominance of long-chain n-alkanes and the absence of co-elution with cyclic aliphatic hydrocarbons, compound-specific δ13C analysis of the long-chain n-alkanes was performed on untreated aliphatic hydrocarbon fractions, without the molecular sieving that is commonly applied 50. Gas chromatography-isotope ratio mass spectrometry (GC-irMS) was performed following the methodology described in Plet et al. (ref. 50) using a Thermo Scientific Trace GC Ultra interfaced to a Thermo Scientific Delta V Advantage mass spectrometer via a GC IsoLink and a ConFlo IV.
The δ13C values of the compounds were determined by integrating the ion currents of masses 44, 45, and 46, and are reported in permil (‰) relative to the VPDB standard. Reported values are the average of at least two analyses with a standard deviation of <0.5‰.

Calculation of pCO2 levels. The calculation of pCO2 levels follows the approach by Schubert & Jahren (ref. 24) and is based on the differences in the magnitude of a CIE recorded in land plant organic matter and marine substrates. The assessment of methodological uncertainties is based on the work by Cui and Schubert (ref. 44) and varies as a function of the absolute pCO2 concentration. Details on the calculations are provided in the supplementary information.
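To make the pCO2 logic above tangible, here is a minimal sketch. It assumes the hyperbolic Δ13C(pCO2) relation of Schubert & Jahren with the coefficients published in their calibration (A = 28.26‰, B = 0.22, C = 23.9 ppmv) and, following the light-carbon-source case above, that pCO2 roughly doubled across the event; it then solves for the pre-event pCO2 that reproduces a ΔCIE of −2.2‰. This is an illustration of the approach, not the authors' supplementary-information implementation.

```python
from scipy.optimize import brentq

A, B, C = 28.26, 0.22, 23.9  # Schubert & Jahren curve-fit coefficients (assumed)

def frac(pco2: float) -> float:
    """C3 land plant fractionation Delta13C (permil) vs. pCO2 (ppmv)."""
    return A * B * (pco2 + C) / (A + B * (pco2 + C))

# Delta-CIE = CIE_terrigenic - CIE_marine = -2.2 permil means land plant
# fractionation increased by 2.2 permil across the CIE.
extra_fractionation = 2.2

# Solve frac(2p) - frac(p) = 2.2 for p, i.e. find the pre-event pCO2 that,
# when doubled, yields the observed excess terrestrial excursion.
p_init = brentq(lambda p: frac(2.0 * p) - frac(p) - extra_fractionation,
                100.0, 5000.0)
print(round(p_init), round(2.0 * p_init))
# -> roughly 590 and 1180 ppmv, close to the ~600/~1200 ppmv quoted above
```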
In Vivo Measurement of Conduction Velocities in Afferent and Efferent Nerve Fibre Groups in Mice

Electrophysiological investigations in mice, particularly those with altered myelination, require reference data on nerve conduction velocity (CV). CVs of different fibre groups were determined in the hindlimb of anaesthetized adult mice. Differentiation between afferent and efferent fibres was performed by recording at dorsal roots and stimulating at ventral roots, respectively. Correspondingly, recording or stimulation was performed at peripheral hindlimb nerves. Stimulation was performed with graded strength to differentiate between fibre groups. CVs of the same fibre groups differed between different nerves of the hindlimb; CVs of motor fibres were determined for the tibial, peroneal, and sural nerves, and afferent CVs additionally for the pure muscle nerve to the posterior biceps (PB). CVs of higher threshold afferents, presumably muscle and cutaneous, cover a broad range and do not really exhibit nerve-specific differences. Ranges are 22-38 m/s for group II, 9-19 m/s for group III, and 0.8-0.9 m/s for group IV. Incontrovertible evidence was found for the presence of motor fibres in the sural nerve. The results are useful as references for further electrophysiological investigations, particularly in genetically modified mice with myelination changes.

Introduction In recent years, the mouse, with its high reproduction rates and short gestation times, has gained a lot of scientific attention within the field of neuroscience, especially with the use of transgenic varieties. In addition, mouse models exist for a variety of human diseases with hereditary backgrounds. In principle, these models for neurological diseases are an ideal tool for in vivo neurological screening and for investigations on the phenotypic appearance of the disease. However, in vivo electrophysiological studies in mice are rare, because of the delicate preparations required and the difficulties in keeping anaesthetized mice alive long enough. Often, the accuracy of the measurements may be compromised by gross simplification of the methods. For example, the range of the fastest conduction velocities (CV) of afferents from the hindlimb was stated to be 10-100 m/s (Biscoe et al. 1977), down to 11.2±5.3 m/s (Yoshida and Matsuda 1979). Even though the CV of the fastest motor efferents (Aα fibres) in different mouse strains should not vary too much (Tabata et al. 2000), measurements of the CV of motor efferents of the hindlimb of wild type mice have generated a wide range of different values (Toyoshima et al. 1986, Huxley et al. 1998, Ng et al. 1998, Wallace et al. 1999, Tabata et al. 2000, Weiss et al. 2001, Song et al. 2003, Haupt and Stoffel 2004). Diverging data for afferent fibre groups have also been published (Biscoe et al. 1977, von Burg et al. 1979, Yoshida and Matsuda 1979, Norido et al. 1984, Koltzenburg et al. 1997, Cain et al. 2001, Nakagawa et al. 2001), most of which have been collected in vitro. When whole nerves were tested for their conduction velocity, the CV measurements concern the fastest fibres, except in a few cases where, e.g., cutaneous fibres were identified by natural stimulation (Koltzenburg et al. 1997). A few measurements have also been made on C-fibres (Koltzenburg et al. 1997, Cain et al. 2001) or on slow unmyelinated fibres (Yoshida et al. 1979). We have now measured for the first time CVs for different fibre groups (Aα, Aγ, different muscle and cutaneous afferents) and for different hindlimb nerves. This differentiation has not been published or investigated before.
Methods The experiments were carried out on 26 adult male wild type mice (C57BL/6IN), weighing 19-34 g (age: 104-151 days; mean 124 days). The set-up is based on a further development of the experimental design published by Ellrich and Wesselak (2003). Nearly the whole preparation has to be done under a binocular microscope. Anaesthesia was induced with 70 mg/kg pentobarbital i.p. (dissolved in 0.9 % NaCl). After about 5 min, when the anaesthesia was deep enough to abolish the pinna reflex or the blink reflex, the animals were fixed on an adjustable heat pad. The temperature was controlled by a rectally inserted probe, thus keeping the body temperature close to 37 °C throughout the experiment. When needed, additional heating was provided from above by a lamp (70 W) positioned over the mouse. The jugular vein was cannulated and anaesthesia was continued by injection of methohexital (Brevimytal, Lilly, 0.5 % in NaCl, 40-60 mg/kg/h). Then, a tracheotomy was performed with insertion of a tracheal tube for later artificial ventilation. The ECG was recorded by two needles inserted into the forelimbs. Changes in heart rate and temperature were used to monitor the anaesthetic state. A laminectomy was performed from vertebrae L1 to L5 to expose the lower spinal cord segments up to L4 and to expose the dorsal roots L4 and L5 in full length (cf. Dibaj et al. 2010, 2011). For the preparation of hindlimb nerves, the triceps surae and the posterior biceps muscle were exposed from the dorsal side and, in general, 4 nerves were prepared and mounted for stimulation or recording: the nerve to the posterior biceps (PB), the distal tibial nerve (Tib, excluding the nerves to gastrocnemius-soleus and flexor digitorum longus), the common peroneal nerve (Per), and the sural nerve (Sur). Additionally, in some experiments the common sciatic nerve (Isch), which was always left in continuity, was used for stimulation or recording. PB is a pure muscle nerve, while Tib and Per are mixed cutaneous and muscle nerves. Sur, which in other species (cat, rat) is a pure cutaneous nerve, seems to include a small muscle nerve fraction in mice (see below). The nerves were always prepared submerged in Ringer's solution to avoid desiccation. After this procedure, the mice were connected to artificial ventilation with a gas mixture of CO2 (2.5 %), O2 (47.5 %), and N2 (50 %) at 120 strokes/min (160-200 μl/stroke). These values turned out to be adequate for sufficient respiration with positive pressure inspiration and passive expiration (cf. Dibaj et al. 2010, 2011). For recording, the spine was fixed by clamps, and the prepared nerves and the spinal cord were covered with mineral oil in pools formed by skin flaps. The dorsal and/or ventral roots L4 were cut and mounted for stimulation and/or recording. The fibres of the investigated nerves mostly arise from L4. In a few experiments, L5 was prepared for stimulation and/or recording, too. Recording and stimulation were performed with bipolar platinum hook electrodes. The distance between the hooks was 1.5-2 mm. Rectangular, constant voltage stimulation pulses were used with a duration of 0.1 ms. The interval between two successive stimuli was mostly 1.8 s. For high threshold fibres (groups III and IV), the stimulation interval was extended to 2.8 or 3.8 s, and the pulse duration was extended to 0.2 or 0.5 ms. Stimulation strength is always given in multiples of the threshold strength for the lowest threshold fibres at the stimulation site ("T"; e.g.
the threshold for group II fibres is about 1.6 times the threshold for group I fibres, i.e. 1.6T). Recording was done with a sampling rate of 40-50 kHz. The signal was appropriately pre-amplified and filtered. The bandwidths of the filters were 10 Hz to 100 kHz for responses of myelinated fibres and 0.1 Hz to 10 kHz for C-fibres. Generally, 8 responses were averaged for more accurate measurements. In a few experiments the temperature was measured by a small thermoprobe (diameter less than 1 mm) in the oil-filled pools at the spinal cord (32-34 °C) and at the hindlimb (31-33 °C). In the hindlimb pool, the distance from the cathode of the stimulating electrode, or from the recording electrode, respectively, to the proximal entrance of the nerves into the tissue between the muscles was ≤3 mm. In the spinal cord pool the corresponding distance was ≤5 mm. This means that maximally 8 mm, i.e. maximally about 20 %, of the investigated nerves ran free through the oil pools. To verify the presence of motor efferents in the sural nerve, in some experiments the initially intact sural nerve was prepared, mounted, and stimulated as proximally as possible. A second surface electrode was placed more distally on the sural nerve for recording, and a third electrode was located at the lateral edge of the foot on the exposed muscle for EMG recording. Then, the sural nerve was cut proximal to the stimulation electrode; afterwards, first the lateral and then the medial branch of the nerve were cut, and the persistence of the toe twitch and EMG potential was tested in each case. At the end of the experiment, the lengths of the nerves and roots were measured in situ. For this purpose the sciatic nerve and the nerve plexus near the spine were exposed at full length, and a thin cotton thread was laid on the nerve, up to the L4 root, to measure the length. This method ensured that neither shrinkage nor lengthening of the nerve, as can occur after complete dissection, affected the measurement. To calculate CV, the latency from stimulus onset to the beginning of the potential of the measured fibre group was determined, ensuring that the CV was calculated for the fastest fibres of the corresponding group. The calculation was always based on the distance between the stimulating cathode and the first electrode of the recording pair. The set-up developed for mice turned out to be very reliable. The experiments lasted up to 10 h (with a preparation time of about 2 h). The experiments comply with the guide of the National Research Council for the care and use of laboratory animals and passed the ethics commission of the Medical Faculty of the University of Göttingen.

General findings CV was measured with the above-mentioned techniques for motor efferents and muscle and cutaneous afferents, with different fibre groups from low threshold myelinated to high threshold unmyelinated fibres. Regularly, the thresholds of the nerves and the spinal cord roots were between 50 and 100 mV, and the time difference between the front of the response potential and the rising edge of the stimulus was taken as the conduction time. The utilization time of the stimulus was not considered; however, measurements were always done on responses elicited by a stimulus 1.5-2 times higher than the threshold of the corresponding fibre group. Measurement of the response latency with stimulation above 1.5-2 times the threshold of the corresponding fibre group was avoided to ensure stimulation of the fibre group only at or very close to the cathode.
Below 1.5 T for a fibre group, the response latency for that fibre group was increased, while above 5 T the response latency decreased distinctly. Between 1.5 and 5 T, the response latency was quite constant. Because of the short distances (37-42 mm) between the stimulating cathode and the first recording electrode, this kind of approach was extremely important for high conduction velocities, i.e. low threshold fibres. For CVs below 10 m/s, the above-mentioned factors do not lead to large discrepancies in the CV calculation. Table 1 gives a list of the results obtained on efferents and afferents of the different nerves. The results are described in detail below. The left column gives the corresponding nerve, which was stimulated or recorded from, and the upper row gives the fibre group. Afferent and efferent groups are indicated by abbreviations (aff., eff.). No values were obtained for the efferents of PB. Group I always refers to the fastest fibres of a nerve. Due to the short distance between the stimulation and recording electrodes and the very short PB nerve, large stimulus artefacts prevented a systematic investigation of the slower afferents of this nerve with higher stimulus strengths. Only in one experiment could these afferents be clearly recorded. The p-values in columns 2 and 3 give the significance of the difference of these CV values compared to the always higher values of Per.

Motor efferents To get information about motor efferents, either ventral root L4 was cut at its spinal cord entry and electrically stimulated while a muscle nerve was recorded from, or a muscle nerve was electrically stimulated and we recorded from the cut end of ventral root L4. Figure 1 shows examples of responses of α- and γ-efferents of Tib, Per and Sur. Responses of efferents were always found in all 3 nerves, indicating that the sural nerve is not a pure cutaneous nerve as it is in the cat (see below). In many cases, especially for Tib efferents, stimulating with graded strength and recording from efferents gave some indication of two groups of α-efferents (Fig. 1A), reflecting a faster (possibly phasic) and a slower (possibly tonic) group of motor efferents. With stimulation strengths of 5 T (Fig. 1Ac) and higher, slower motor efferents could be activated, with responses that were small compared to those of α-fibres and that conduct in the range of γ-fibres. In this experiment, the conduction lengths were 37 mm for Tib, 35 mm for Per, and 33 mm for Sur. Accordingly, the maximal efferent conduction velocities (MECV) for the fast α-fibres were 38.5±4.0 m/s (Tib), 46.7±4.7 m/s (Per), and 39.3±3.1 m/s (Sur), and for the slower γ-fibres 16.7±3.0 m/s (Tib), 22.2±4.4 m/s (Per), and 12.0±0.8 m/s (Sur) (see Table 1). Strikingly, the velocities for Per were higher than those for Tib and Sur for both α- and γ-fibres in every experiment. We were not able to measure the MECV for PB efferents. Although we performed measurements of PB afferents with recordings from dorsal roots, reliable recordings from ventral roots while stimulating the PB failed.

Efferent motor fibres in the sural nerve (Fig. 2 and Supplementary Video) The existence of efferent motor fibres in the sural nerve was confirmed by stimulation of the sural nerve, which induced a ventrally directed twitch of the small toe (Supplementary Video), as verified by the EMG (Fig. 2C). This twitch persisted after cutting the sural nerve centrally (Fig. 2C) and after peripheral cutting of the smaller lateral branch of the sural nerve (Fig. 2D), but disappeared after the medial branch had been cut (Fig. 2E).
In the experiment shown in Figure 2 the latency between proximal stimulation of the sural nerve and the response at the distal ENG recording electrode at the nerve was 0.22 ms; the distance between the stimulating cathode and the ENG electrode was 6 mm (cf. Fig. 2A). This means a conduction velocity of the verified efferent fibres in the sural nerve of 37 m/s. This is in the range determined before (see above). The distance between the stimulating cathode and the entrance of the nerve into the M. flexor digiti minimi was 24 mm (cf. Fig. 2A), and the distance from the entrance to the EMG electrode was approximately 3 mm. The latency between the stimulus and the beginning of the EMG wave was 1.82 ms. With a conduction velocity of 37 m/s (see above), the conduction time for the 24 mm to the muscle would be 0.65 ms. Thus, the residual latency, which includes the synaptic delay and the time for the conduction with reduced velocity in the fine nerve fibre branches in the muscle and over the muscle fibres, amounts to 1.17 ms. This value is well within the range which has been determined before (0.726-1.375 ms, Reed 1984).

Fig. 1. Tib (A), Per (B) and Sur (C); responses from the centrally cut VRL4. Stimulation near threshold for α-efferents (a) up to stimulation adequate for γ-efferents (d). Thresholds were calibrated for the lowest-threshold afferents of the corresponding nerve. Latency measurements were performed with stimulation strengths as in (b). α-fibre responses show a double peak (Ab-d) or a shoulder (Bb, Cc), indicating a faster and a slower α-fibre group. Stimulation strengths in the upper range elicit responses of fibres with velocities in the range of γ-fibres (*). The latter responses are often also double-peaked. Note that in mice the sural nerve contains efferents, too. Arrows indicate stimulation time. Records were averaged (8 consecutive responses each); sampling rate during recording was 50 kHz. Rectangular stimulation pulses, onset marked by an up-arrow, duration 0.1 ms (stimulation interval 1.84 s for all), except for Ad, where the stimulation pulse was 0.2 ms wide.

Muscle afferents

The common Per, the Tib and the Sur contain afferents of both muscle and skin. It is generally impossible to get pure responses of muscle afferents from these nerves with electrical stimulation. When those nerves were stimulated with graded stimulation, responses from the dorsal root L4 always led to potentials with multiple shoulders and peaks representing the different fibre groups. The same was true when the dorsal root L4 was stimulated and the antidromic potential was recorded from the nerves. The only pure muscle nerve we prepared was the PB, and its responses had distinct peaks for group I to group IV afferents (Fig. 3). The fastest group I afferents in the shown example had a MACV of 45.3 m/s (Fig. 3A) and may be attributed to group Ia afferents and possibly some group Ib fibres (cf. Discussion). The whole range of MACV of PB was 39-46 m/s over all experiments. Single shoulders of the peaks may represent single fibres of each group. The fastest MACV observed when stimulating the dorsal root and recording from the sciatic nerve was 47 m/s. In the example of Figure 3B, presumed group II afferents (cf. Discussion) appeared with a threshold of about 1.6 T and a latency of 1.32 ms and had a MACV of 23.1 m/s.

Fig. 3. Afferent fibre groups of the PB. Stimulation of the PB and recording of DRL4 with different stimulation strengths.
The threshold stimulation strength was adjusted to the threshold of the PB afferents. The threshold for PB afferents was about 1.2 times the threshold for the lowest-threshold afferents in the sciatic nerve. All responses were averaged (8 responses each). Sampling rate in A-C was 50 kHz, in D 10 kHz. Rectangular stimulation pulses of 0.1 ms duration for all stimulations except for D, where 0.2 ms were used, onset marked by an up-arrow. The interval between stimulations was 1.84 s for all, except for C and D, where the interval was 3.84 s.

Responses of a further group of muscle afferents appeared after a latency of 3.44 ms (Fig. 3C); the threshold of this peak was around 7 T, and the MACV of the fastest fibres of this group was 8.9 m/s. Therefore, these fibres may be assumed to belong to group III. This fibre group did not follow stimulation at higher frequencies, and therefore the peak of this group appeared more clearly with lower stimulation rates (here a 3.84 s interval). Unmyelinated afferents can often hardly be detected. In our experiments, however, we were successful in most cases for muscle and cutaneous afferents when the duration of the stimulus was increased to at least 0.2 ms and the inter-stimulus interval to at least 3 s. Additionally, the short distances kept the dispersion of the potentials small. The example in Figure 3D shows the response of group IV muscle afferents. It appeared after a latency of about 38 ms; thus the MACV was 0.8 m/s, and the corresponding fibres can be assumed to belong to group IV. Although group IV was much slower than group III, and the dispersion of the signal from different fibres was much higher, the amplitude was in the range of that of group III fibres. This is probably due to the fact that (at least in the cat) group IV afferents considerably outnumber group III afferents (Boyd and Davey 1968). This may compensate for the fact that action potentials of unmyelinated fibres are small compared to action potentials of group III fibres. A further analysis of the MACV of group I muscle afferents of Tib, Per and Sur was performed and compared with that of PB. Regarding the mixed responses from cutaneous and muscle afferents, the first deflections of the potentials, as shown for example in Figure 4, may be assumed to originate from group I muscle afferents. In our example, MACV for the Tib group I afferents was 47. .1 m/s) for Sur. Strikingly, group I afferents of Per (from mainly phasic muscles) were significantly faster than the afferents from the other nerves (see Table 1; mean values are given ± standard deviation (S.D.), and statistical significance (p<0.05) was determined using ANOVA followed by the Tukey test).

Afferents of mixed nerves

In mixed cutaneous/muscle nerves, our experiments do not allow a distinction to be made between muscle afferents slower than group I and cutaneous afferents. As mentioned above, Sur is a mixed nerve in mice. A large spectrum of conduction velocities was found between the group I range and the typical group III range. Because the single peaks were not very prominent and often provided only a shoulder within the declining potential, a continuum of conduction velocities formed the main potential, exposing some not very prominent focal points. Potentials appearing at thresholds of about 1.8-2 T and higher may be associated with group II fibres (cf. Discussion). As shown in Figure 3, the group II peak in a muscle nerve is a distinct one, but small compared to the group I peak.
We may therefore assume that the large potential appearing after the group I peak in mixed nerves (Fig. 4) is mainly of cutaneous group II origin, associated with CACV of 30.3±4.9 m/s for Tib, 28.8±4.0 m/s for Sur, and 27.6±4.6 m/s for Per. CV between 10 and 15 m/s was always associated with thresholds of 7.5 T or higher. This group of afferents may be assigned to group III. However, in mixed nerves like the tibial nerve a differentiation between muscle and cutaneous afferents was impossible, and, moreover, there was no real separation between peaks of group II and group III origin. Below 10 m/s there was a large gap down to about 1 m/s and less. The response patterns shown in Figures 4A and 4B show some differences due to the different stimulus/recording conditions. In B, the only fibres recorded from were Tib fibres, irrespective of the stimulation strength. In A, with higher stimulation strength, the stimulation possibly also spread to branches of the nerve not on the electrodes, including fibres of the triceps surae and of the flexor digitorum longus. These added to the response of the root, as it also contains all or at least most afferents of these nerves. Thus, the latency of the beginning of the response potential originating from group I fibres shortened, and the amplitude of this response still increased with increasing stimulus strength well above the group I maximum (cf. Fig. 4A, 20 T to 50 T). This effect would not affect the latency of responses of higher-threshold afferents (or at least not to the same extent), since the current spreading centrally to the cathode is probably not strong enough to exceed their threshold. As mentioned above, the possible effect of current spread would also not affect the determination of the CV of fast fibres, because it was done with low stimulus strength (≤2 T). The group IV (group C) afferents, with conduction velocities of 1 m/s and less, were well circumscribed (Fig. 4A and B, 110 T and 200 T), although a differentiation between muscular and cutaneous afferents is impossible here. The average CV of this group is 0.87 m/s (n=28, S.D. ±0.05 m/s, range 0.78-1.01 m/s). This CV was reduced when the dorsal root was blocked by TTX (0.540-0.544 m/s), in accordance with earlier findings in the rat (Steffens et al. 2001). Generally, the better method to record the compound action potential of unmyelinated fibre groups of mixed nerves is the antidromic method, with the nerves peripherally cut and with stimulation of the dorsal root and recording from the peripheral nerve, because the potentials from the whole dorsal root are always contaminated by tonic spindle afferent activity from intact nerves coming through this dorsal root. This tonic activity also partly accounts for the noisy recording line in Figure 4A at 110 T.

Discussion

We have determined the CV of afferent and efferent nerve fibres of the hindlimb of the mouse in vivo. For that purpose a set-up was developed to keep a laminectomised mouse, with dorsal and/or ventral roots and hindlimb nerves prepared for electrophysiology, in a stable state for several hours, and to stimulate and record different parts of the exposed nervous structures. To get reliable data, it was necessary to keep the mouse in a reasonable state for a longer period (several hours). Okada et al. (2001) as well as Ellrich and Wesselak (2003) touch on this problem, and particularly the latter publication provides valuable information on this point.
In mice, the distances used to determine CV are up to 42 mm for the longest nerves (Tib or Per). An error of at most 4 mm in the in situ measurement of the distance between stimulating and recording electrodes would lead to errors of at most ±10 % of the CV. This error adds to the inaccuracies introduced by the recording device: when the sampling rate is chosen too low, e.g. 20 kHz, which is sufficient for humans or cats, a further ±6 % inaccuracy is added for a CV of about 50 m/s over a 40 mm distance. The inaccuracy increases with shorter nerves like PB. The problem of the short distances has already been addressed (Biscoe et al. 1977). Some differences found in the literature for afferents and efferents of mice may possibly be ascribed to this problem. In particular, if, as often done, the CV of the fastest motor efferents in vivo was determined by electrical stimulation with percutaneously inserted needle electrodes at two points (the sciatic nerve and the distal tibial nerve) and EMG recording from the foot, the exact site of stimulation is not quite clear. If, in addition, the distance is only measured at the surface of the leg, the results may differ to some extent. This difference may become crucial if pure transcutaneous stimulation is performed with surface electrodes (26 m/s for the fastest motor efferents, Haupt and Stoffel 2004). Moreover, the technique of two-point nerve stimulation with EMG recording only allowed determination of the fastest motor efferents. Concerning the accuracy of length measurements, uncovering the nerve and roots while leaving them in situ after death of the animal seems to be the more adequate method to get reliable results. The method used in in vitro studies, where the roots and/or nerves are completely dissected, carries the inherent problem of possible shrinkage (von Burg et al. 1979) or elongation of the nerve, particularly of very thin nerves like Sur. Some problem of current spread with higher stimulus strengths from bipolar stimulation at the nerve, as done in our experiments, cannot be excluded. However, such an effect on the determination of the CV of a fibre group could be excluded, since the CVs were determined with stimulus strengths close to the threshold of the fibre groups. The method of measuring time differences between either two stimulation sites (for EMG recording) or two recording sites has the advantage of eliminating the inaccuracy arising from the unknown utilization time of the stimulus. However, with stimulus strengths at least 1.5 times higher than the threshold and with 0.1 ms stimulus duration the stimulation is well above threshold. Therefore, in our experiments a utilization time of less than 0.05 ms (for the fast fibres) can be assumed. For an example of 40 mm nerve length and 1 ms conduction time this would mean an underestimation of the conduction velocity of less than 5 %. Despite the different stimulation and recording techniques, the values for the fastest motor efferents determined in our experiments are roughly in a range comparable to those of most former investigators (45.9±7.21 m/s, Low and McLeod 1975; 46±5 m/s, Sima and Robertson 1978; about 38 m/s, Toyoshima et al. 1986; 38.2±6.3 m/s, Huxley et al. 1998; 46.5±2.5 m/s, Ng et al. 1998; 40 m/s, Wallace et al. 1999; 41.5-50.5 m/s, Tabata et al. 2000; 53±8 m/s, Weiss et al. 2001; 47.13±3.28 m/s, Song et al. 2003; see, however, also Hirst et al. 1979, up to 70 m/s, and Biscoe et al.
1977, 10-100 m/s in dorsal roots, 50-70 m/s in ventral roots), whereby the more recent investigations demonstrated that transgenic mice or pathological mouse strains with myelin or metabolic disorders may develop a distinct reduction of the nervous CV, partly even without distinct behavioural disorders. Most of those investigations concentrated on the fastest motor efferents, while our technique also enabled the determination of the CV of afferent fibres and of slower fibre groups, which could be compared to the CV of fast efferent motor fibres in the same experiment. Some of the discrepancies between the values of the mentioned investigations and our results may probably be explained by differences of the methods, as mentioned above. Whether there might have been an influence of the age of adult mice on the MCV is not quite clear. A significantly slower MCV for mice aged 69-90 days compared to mice over 100 days old has been claimed (Huizar et al. 1975, Robertson and Sima 1980). Other authors did not find a correlation between age and MCV (Huxley et al. 1998). Mice under the age of 30 days seem to show clearly slower CACV of the peripheral nerve fibres than their adult counterparts (Koltzenburg et al. 1997, Weiss et al. 2001). It can be presumed that group Ia fibres contribute to the group of the fastest and lowest-threshold afferent fibres. However, a contribution of group Ib fibres cannot be excluded. At least, even in the cat with its longer conduction distances, a certain differentiation between groups Ia and Ib by threshold and conduction velocity is only possible in the proximal muscle nerves, e.g. PBSt, but not in the more distal muscle nerves, e.g. GS (Bradley and Eccles 1953, Eccles et al. 1957, Laporte and Bessou 1957). If the fibre groups of muscle afferents in the mouse are not completely different from those in the cat, it can be assumed that the second wave, occurring with increasing stimulus strength and somewhat longer latency after group I, represents group II afferents. As in the cat, the presumed group II afferents in mice (1) had a threshold around the group I maximum, (2) had their maximum between around 5 and 10 T, and (3) had a conduction velocity of around 60 % of that of the group I afferents (Eccles and Lundberg 1959). In this context it should be mentioned that, particularly in the higher-threshold group II range, a certain fraction of group II muscle afferents may originate not from muscle spindles but from free nerve endings (for a review see Schomburg 1990). At least in the rat, higher numbers of group II afferents and even some group I afferents from muscles may originate from free nerve endings (Payne 1965, 1966). Corresponding to group II muscle afferents, group III and group IV muscle afferents were classified by their threshold and conduction velocity (Eccles and Lundberg 1959; for a review see Schomburg 1990). Because the temperature in the oil pools of the hindlimb and at the spinal cord was in the lower thirties, it cannot be excluded that the determined CVs are somewhat underestimated. However, since at least 80 % of the length of the measured nerves ran in situ deep in the tissue between the muscles, i.e. in an environment with a temperature close to the body core temperature of 37 °C, it can be assumed that the determined values are close to the normal CVs. At least, even in the living animal the temperature in the more distal parts of the limb is lower than the core temperature.
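Before turning to the temperature corrections in detail, the measurement-error terms discussed above can be pulled together in a small Python sketch (our own illustration, not code from the study; the default values follow the worked numbers in the text):

```python
def cv_error_budget(distance_mm=40.0, cv_m_per_s=50.0,
                    distance_err_mm=4.0, sampling_rate_hz=20_000,
                    utilization_ms=0.05):
    """Rough relative errors of a CV measurement, as fractions of CV."""
    conduction_ms = distance_mm / cv_m_per_s  # mm/(m/s) equals ms
    distance_err = distance_err_mm / distance_mm            # in situ length
    sampling_err = (1e3 / sampling_rate_hz) / conduction_ms  # one sample
    utilization_err = utilization_ms / conduction_ms         # underestimate
    return distance_err, sampling_err, utilization_err

# 40 mm at 50 m/s gives 0.8 ms conduction time: ~10 % distance error,
# ~6 % sampling error at 20 kHz, and ~6 % from the utilization time.
print([round(100 * e, 1) for e in cv_error_budget()])  # [10.0, 6.2, 6.2]
```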
According to investigations in the cat (Paintal 1965), the reduction of the conduction velocity of myelinated nerve fibres induced by a temperature reduction from 36 °C to 32 °C would be about 15 %. This means that a conduction velocity of 40 m/s measured at 32 °C would be about 46 m/s at 36 °C. However, since in our experiments only 20 % of the nerve length of about 40 mm was exposed to lower temperatures, it can be assumed that the calculated conduction velocities in our experiments were underestimated by less than 1.2-1.5 m/s. Another error could arise from the fact that in our experiments, compared to most others, part of the conduction is via the spinal roots and, for the afferents, additionally via the spinal ganglion. In the dorsal roots central to the root ganglia, the conduction velocity was observed to be reduced to between 43 % (Loeb 1976) and 82 % (Czéh et al. 1977) of the peripheral conduction velocity (both values determined in the cat). Assuming a reduction to about 60 % (the mean of those two values) and about 5 mm of conduction in the roots (about 12.5 % of the conduction distance), this would mean an underestimation of the peripheral conduction velocity of roughly 2 m/s for the fastest afferents and of less than 0.15 m/s for the slowest fibres. Additionally, the conduction via the spinal root ganglion is delayed by approximately 0.1 ms (0.07-0.11 ms at 5 °C in the frog, Dun 1955; 0.1 ms in the rabbit, Mac Leod 1958), which would result in another underestimation of the calculated peripheral conduction velocity of afferent fibres of about 9 %. Owing to the absence of an intercalated ganglion, the determination of the conduction velocity of motor efferents was less affected by the central part included in the conduction distance. The present data indicate that for the motor efferents as well as for the afferents it is necessary to distinguish between the different hindlimb nerves. This differentiation, together with the discrimination between α-efferents and γ-efferents, and the finding that in mice the Sur contains fibres of the ventral root, presumably α- and γ-efferents, which may be species-specific, are new. Concerning the afferents, the discrimination between cutaneous and muscle afferents is not possible with our method, unless the nerve tested is a pure muscle nerve or a pure cutaneous nerve. As mentioned, Sur has to be taken as a mixed nerve; however, PB can be taken as a pure muscle nerve.

Conflict of Interest

There is no conflict of interest.
Reducing the Delay for Decoding Instructions by Predicting Their Source Register Operands

Fetched instructions can have data dependencies on in-flight ones during the pipelined execution of a processor, and such a dependency prevents the processor from executing the incoming instruction, in order to guarantee the program's correctness. Register and memory dependencies are detected in the decode and memory stages, respectively. In a small embedded processor that supports as many ISAs as possible to reduce code size, the instruction decoding needed to identify register usage, together with the dependence check, generally results in a long delay and sometimes forms a critical path in the implementation. To reduce the delay, this paper proposes two methods. One method assumes the widely used source register operand bit-fields without fully decoding the instructions. However, this assumption would cause additional stalls due to incorrect predictions and thus degrade the performance. To solve this problem, as the other method, we adopt a table-based way to store the dependence history and later use this information for more precisely predicting the dependency. We applied our methods to the commercial EISC embedded processor with the Samsung 65 nm process; we thereby reduced the critical path delay, increased the maximum operating frequency by 12.5%, and achieved an average 11.4% speed-up in the execution time of the EEMBC applications. We also improved the static power consumption, the dynamic power consumption, and the EDP by 7.2%, 8.5%, and 13.6%, respectively, despite an implementation area overhead of 2.5%.

Introduction

Most processors [1-5] adopt a pipeline architecture to increase their operating frequency by dividing the execution flow into multiple stages and to improve execution throughput by executing the stages independently. However, during pipelined execution, control and data dependencies between newly fetched and in-flight instructions may occur, which make the instruction scheduler insert stalls into the pipeline to guarantee program correctness and therefore degrade the execution throughput. To minimize the control hazard, a branch predictor [6] has been widely adopted to predict the target address of a branch instruction from the previous branch history, and the processor fetches instructions from the predicted target address. If the branch predictor mispredicts, the processor flushes the speculatively executed instructions and starts to fetch new instructions from the correct target address. The prediction accuracy is critical to performance due to the flush overhead. On the other hand, to resolve data dependencies, the processor identifies the register names or memory addresses of the source and destination operands of the in-flight instructions. An in-order processor inserts stalls into the pipeline if a data dependency is detected, whereas an out-of-order processor would issue independent instructions instead of pipeline stalls to deliver higher performance.

• To the best of our knowledge, our study is the first to predict source and destination operands of ISAs in the decode stage to reduce the decoder delay with minimal overhead, instead of adding pre-decoding logic to the fetch stage as in previous works.

• For this purpose, we propose the pre-decoder, which predicts the positions of the source and destination operands in the ISA, and the table-based data hazard predictor, which removes the effects of incorrect pre-decoding, thus minimizing unnecessary data hazards.
• We applied our schemes to the commercially available EISC embedded processor and showed the applicability of our study to real products.

This paper is organized as follows: Section 2 introduces prior studies related to this work. Section 3 describes the motivation of our work, and Section 4 presents our two schemes with their implementation. Section 5 evaluates their performance regarding speedup, power, implementation cost, and so on. Finally, the conclusion of this paper is given in Section 6.

Related Work

In general, a method to increase the operating frequency of a processor is to add more pipeline stages [9]. However, this method would consume more power and complicate the implementations for removing or hiding stalls caused by dependences. A typical technique for hiding the latency of a data dependency is register forwarding [10], whose propagation logic passes execution or memory-access results directly to the functional units that need them. There are several studies on pre-decoding for reducing the decoding latency; most of them placed a pre-decoding unit in the fetch stage instead of the decode stage, to support the decoder rather than to perform the actual decoding. The authors of Reference [11] place a pre-decoding unit between the main memory and an instruction cache and identify the instruction type. The pre-decoded type information allows the instructions to be quickly delivered to the associated functional units, resulting in an improved operating frequency of the processor. The MIPS R10000 pre-decoded instructions and rearranged the source and destination fields to be in the same position for every instruction, for easy decoding in the decode stage [12]. Similarly, x86 processors have used pre-decoding, which assesses the length of an instruction so the subsequent decoders can handle it efficiently, since x86 instructions can vary from one to multiple bytes in length [13]. The branch predictor needs to determine whether an instruction is a branch and, if so, its branch type, for selecting the corresponding predicted target. This decision makes the decode logic lie on the critical path, so the predictor pre-decodes the instructions before storing them in an instruction cache [14]. All of these approaches needed an additional decoder for the pre-decoding in the fetch stage. They also needed extra memory space for storing special bits per instruction for the decoded information and the aligned instructions. Therefore, they resulted in a high area and power overhead. Santana et al. [15] stored the decoded instructions in the memory hierarchy instead of using an extra buffer. Another use of pre-decoding is to avoid repeatedly decoding the same instruction. The authors of Reference [16] proposed a hardware folding technique that dynamically transforms groups of Java bytecodes into RISC instructions, storing them in a cache to enable reuse. In our study, we take a hint from the instruction structure and predict the data dependency between instructions, thus reducing the latency of the decoder without a large amount of additional circuitry or memory.

EISC Processor Architecture

Figure 1 shows the commercially available simple embedded EISC processor architecture used in this study [7]. The processor supports single-issue in-order execution with 16-bit instructions and a 32-bit data width and consists of a four-stage pipeline: FE (Fetch), DEC (Decode), EX/WB (Execution/Write-Back), and MEM/WB (Memory/Write-Back).
The FE stage predicts a target address in an "always not taken" manner for fetching instructions from the instruction memory, without using an advanced branch predictor, to minimize the implementation cost. If a branch misprediction occurs, the pipeline controller flushes the speculatively executed instructions and causes the FE stage to fetch the instruction from the correct target address. The DEC stage decodes the fetched instruction, and its register dependence unit checks whether there is a RAW register dependence between the decoded instruction and the instructions being executed in the EX and MEM stages. If there is no dependence, the pipeline controller issues the decoded instruction to the EX stage; otherwise, the controller inserts a stall into the EX stage to ensure execution correctness. The processor does not support data forwarding logic due to its design complexity. The EX stage supports 32-bit integer ALUs, including a multi-cycle multiplier and divider. Finally, the MEM stage contains logic for accessing data memory through load and store commands. An instruction can be retired at either the EX or the MEM stage. One of the most critical factors in determining the implementation cost of a small embedded system is memory; therefore, an embedded processor in general supports complex instructions to keep the code size small. As a consequence, the decode stage becomes more complicated and lengthy in time, and its logic is identified as a critical path in our target EISC core [17].

Figure 1. The embedded EISC processor architecture. The dark-shaded components, "instruction decoder" and "history table", were modified and added for our study, respectively.

EISC Instruction Set Architecture

We analyzed the EISC ISA into the following three categories, according to the characteristics of the bit-field that refers to a source register operand: Source Field (SF), Source-Less (SL), and Encoded Field (EF). Table 1 shows the classification results, the number of bit-fields, and the corresponding number of instructions. The Source Field (SF) contains instructions whose one or more source register operands can be identified from particular bit-field positions, [7:4], [8:5], [3:1], and [3:0]. In our proposal, we do not fully interpret them at the decode stage but assume that all of these specific bit-fields represent source register operands. Then, we compare the assumed source register values with the destination values of the in-flight instructions to detect data dependencies during pipelined execution. More than half of the total ISA belongs to this category. The Source-Less (SL) category contains instructions which do not have any source register operands, typically including direct branches and the LERI instructions [18]. Therefore, the SL instructions do not involve any data dependency, so dependence detection is not required. The Encoded Field (EF) includes instructions where multiple source register operands are encoded and described as a bit vector. The EF contains multiple-memory-access instructions with auto-increment/decrement operations, which can use up to eight source register operands. That is, the bit vector must be decoded to know the source registers. However, in our approach, we assume that such instructions always have a data dependency on in-flight instructions, to eliminate the delay of decoding the bit vector. Figure 2 shows the dynamic instructions regarding the three categories when we ran the EEMBC benchmarks [8].
Most dynamic instructions, about 73.5%, belong to the SF category, so the processor needs to compare the values from the four predetermined bit-field positions with the destination register operands. The SL instructions occupied about 25.4% of the total executed instructions. Fortunately, the processor does not need to be concerned about any data dependency for them if it knows at decode time that they belong to the SL. For the EF instructions, we assumed that they always have a data dependency on in-flight instructions, to avoid complex decoding. However, since their occurrence is infrequent, about 1.1%, the performance degradation due to this assumption is insignificant. To support our motivation further, we also analyzed the core instruction set of the MIPS 32-bit integer ISA [19] and the RISC-V 32-bit integer base ISA [20], and Table 2 shows the analysis results. In the MIPS and RISC-V ISAs, as in EISC, most instructions take their source register operands from specific bit-fields of the instruction (SF). For MIPS, these bit-fields are [25:21] and [20:16], and for RISC-V, they are [27:24], [24:20], [23:20], and [19:15]. The proportion of instructions that do not have a source register operand (SL) is 6.5% and 13.6% for MIPS and RISC-V, respectively. Finally, instructions which require the decoding of specific bit-fields to find the source register operands do not exist in MIPS and account for 13.6% of the total ISA in RISC-V. Therefore, most instructions in MIPS and RISC-V also belong to the SF, and since the number of their bit-fields is less than four, MIPS and RISC-V, as well as EISC, can fully support our motivation.

Opportunity to Reduce Incorrect Pre-Decoding

Generally, in embedded applications, instructions are executed repeatedly; as a result, if a data dependency associated with an instruction is mistakenly guessed, unnecessary stalls occur frequently. Therefore, to eliminate such occurrences, we considered a method of storing the data dependence history in a table and using the history information later. We traced the Program Counter (PC) of the instructions while executing the EEMBC programs and measured the hit ratio of the table accesses with different table sizes, assuming a fully-associative table. Figure 3 shows that the hit ratio of the 128-entry table was about 67.2% on average, which was within about 13.6% of an unlimited table. We decided that this size was the most appropriate for our design. In applications such as canrdr01, rspeed01, tblook01, a2time01, and puwmod01, the hit ratios were low, since 32.2%, 79.8%, 12.1%, 37.5%, and 100.0% of the total dynamic instructions, respectively, were executed only once, as shown in Figure 4.

Pre-Decoding Scheme

The existing decoder decodes instructions completely at the decode stage; thus, it identifies their source register operands and compares them with the destination register operands of the in-flight instructions to detect data dependencies. Figure 5a shows a code example involving a data dependency: I3 is decoded fully at the decode stage. After that, its source register operand, r1, is identified and compared in sequence with the destination register operands of I2 and I1 (the in-flight instructions), r1 and r4, respectively. The decoder matches r1 between I2 and I3, so the pipeline inserts stalls due to their data dependency.
Our pre-decoding scheme does not fully decode instructions to identify the source register operands; instead, it obtains them from all the predetermined bit-field positions, that is, [7:4], [8:5], [3:1], and [3:0] of the instructions, as shown in Table 1. Then, the pipeline compares all of the pre-decoded source register operands with the destination register operands of the in-flight instructions to detect data dependencies, as shown in Figure 5b. When decoding I3 (0xf023), we assume source register operands from all the predetermined bit-fields, thus obtaining four possible source operands (2, 1, 3, 3). Then, the pipeline compares them in sequence with the destination register operands of I2 and I1 (the in-flight instructions), r1 and r4, respectively. Since the number of instructions belonging to the SF category is large, the source register operands are recognized in the predetermined bit-fields at the decode stage. The SL instructions are interpreted as early as possible, that is, at the fetch stage. Since the number of EF instructions is minimal, it is not difficult to recognize this type of instruction at the decode stage. Our scheme does not affect the program's correctness, because the true source register operands are always included among the four predetermined bit-field values. Also, since we do not apply full instruction decoding, we can reduce the associated circuit delay. In a typical processor, the number of bit-fields used for the source register operands would not be significant, because the decoding logic must be simple to implement. However, our method interprets more source register operands than one instruction actually has; thus, it can incur unnecessary stalls and consequently increase the total execution cycles. The instructions belonging to the SL category do not have any source register operand. Therefore, if the pipeline knows that an instruction at the decode stage belongs to the SL, it can issue the instruction without detecting a data dependency. To apply this to the EISC processor, we made the fetch stage identify whether or not an instruction belongs to the SL and pass the identified information to the decode stage. The added SL identification unit does not affect the critical path. An EF instruction has an encoded bit vector describing several source register operands at one time. In this case, particular bit-fields cannot be regarded as a source register operand, and the bit vector must be decoded to find the source register operands, which would forfeit the delay gain of the simplified SF and SL handling. Therefore, to maximize the gain, we suppose that the EF instructions always have a dependence on the in-flight instructions. However, since EF execution is rare, the performance degradation due to this very conservative assumption can be ignored, as discussed in Section 3. Since the number of EF instructions is small and we can find out whether an instruction belongs to the EF category with almost no implementation overhead, we implemented the EF-related logic in the decode stage. Figure 6 shows the hardware logic of the pre-decoding scheme. For the SF instructions, the existing hardware was modified to compare all four bit-fields of possible source register operands with the destination register operands of the in-flight instructions at the same time. If at least one pair matches, a data hazard is signalled to the pipeline.
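As an illustration, a behavioural Python sketch of this check follows (ours, not the authors' RTL; the exact candidate values depend on the bit-field definitions). It extracts the four candidate source fields of a 16-bit instruction and conservatively signals a hazard if any candidate matches a destination register of an in-flight instruction:

```python
# Candidate source-register bit-fields of a 16-bit EISC instruction,
# given as (msb, lsb) pairs: [7:4], [8:5], [3:1], and [3:0].
SF_FIELDS = [(7, 4), (8, 5), (3, 1), (3, 0)]

def bits(word: int, msb: int, lsb: int) -> int:
    """Extract the bit-field [msb:lsb] of a 16-bit word."""
    return (word >> lsb) & ((1 << (msb - lsb + 1)) - 1)

def predecode_sources(instr: int) -> list[int]:
    """All candidate source register numbers, without full decoding."""
    return [bits(instr, msb, lsb) for msb, lsb in SF_FIELDS]

def raw_hazard(instr: int, inflight_dests: set[int]) -> bool:
    """Conservative RAW check: hazard if any candidate field matches
    the destination register of an in-flight instruction."""
    return any(src in inflight_dests for src in predecode_sources(instr))

# In the spirit of the Figure 5b example: with in-flight destination
# registers r1 and r4, instruction 0xF023 triggers a hazard, because
# one of its candidate fields equals 1.
print(raw_hazard(0xF023, {1, 4}))  # True
```

Because every real source register lies in one of the four fields, a missed hazard is impossible; the cost is the occasional false hazard, which the history table below is designed to filter out.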
Figure 4 showed that instructions are executed repeatedly, which would result in many unnecessarily repeated stalls due to incorrect assumptions by the pre-decoding. To overcome this problem, we propose a table-based scheme to store the incorrect predictions and use this information during later executions. We do not record the dependence history of every instruction, because that would require a large history table. Because our pre-decoding accuracy is high, we manage only the wrong dependence history and can achieve significant performance benefits even with a small table.

Table-Based Scheme

We built the history table as a fully-associative cache, but its complexity is minimal, as shown in Figure 7. The table consists of 4 entries, each of which has an 11-bit tag, a 1-bit valid flag, and a 32-bit pattern history. Therefore, each entry can record the unnecessary-data-hazard history of 32 instructions, so the table covers up to 128 instructions in total. We mark the occurrence of an unnecessary data hazard with 1'b1 per instruction, and otherwise with 1'b0. Because the program memory size of the EISC processor is 128 KB and the instruction size is 16 bits, the processor uses a 17-bit PC whose LSB is always 0. Since the pattern history of each entry is 32 bits, a 5-bit field of the PC is used as an index within the entry. Therefore, the top 11 bits of the PC are used as the tag. Each entry thus takes 44 bits, combining the tag, the valid flag, and the pattern history, and the whole history table is implemented with a total of 176 bits. Algorithm 1 shows the insert and lookup functions for the history table. Whenever the pre-decoding scheme has issued an unnecessary data hazard, the corresponding bit-field in the history table is set by calling insertHistoryTable. If the access to the table misses, a new entry overwrites the LRU entry (Lines 5-6). For example, in Figure 7, the pre-decoding scheme predicted incorrectly during the previous execution and assumed that there was a RAW dependency between the instructions at PC i + 2 and PC i + 4. Thus, to indicate the incorrect prediction, the function insertHistoryTable is called, and the corresponding bit-field is set. There is also a RAW dependency between the instructions at PC i + 4 and PC i + 6; in this case, however, the pre-decoding scheme predicted the dependency correctly during the previous execution. Thus, the history bit must remain 0 even though there is a RAW dependency, so the function is not called, the default value being 0. During the history table lookup, if the target PC hits (Lines 9-10) and the returned value is 1, we know that the pre-decoding scheme has issued an unnecessary data hazard before due to misprediction; therefore, we ignore the current dependence prediction to avoid performance degradation. If the returned value is 0, it implies either no history or a correct prediction from the previous execution. If the history table access misses, the prediction result from the pre-decoding scheme is used (Line 12). When a branch instruction is executing at the execution stage, we do not record its related dependence, to guarantee the program's correctness.
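A behavioural Python sketch of this insert/lookup logic follows (our reconstruction of Algorithm 1 from the description above, not the authors' code; the LRU bookkeeping is simplified with an ordered dictionary):

```python
from collections import OrderedDict

class HistoryTable:
    """4-entry fully-associative table of mispredicted-hazard bits.

    A 17-bit PC with LSB always 0 is split into an 11-bit tag
    (PC[16:6]) and a 5-bit index (PC[5:1]) into a 32-bit pattern.
    """
    def __init__(self, entries: int = 4):
        self.entries = entries
        self.table = OrderedDict()   # tag -> 32-bit pattern, LRU order

    @staticmethod
    def _split(pc: int) -> tuple[int, int]:
        return pc >> 6, (pc >> 1) & 0x1F   # (tag, bit index)

    def insert(self, pc: int) -> None:
        """Record that pre-decoding raised an unnecessary hazard at pc."""
        tag, idx = self._split(pc)
        if tag not in self.table:
            if len(self.table) == self.entries:
                self.table.popitem(last=False)   # evict the LRU entry
            self.table[tag] = 0
        self.table[tag] |= 1 << idx
        self.table.move_to_end(tag)

    def lookup(self, pc: int, predicted_hazard: bool) -> bool:
        """Final hazard decision: suppress previously wrong hazards."""
        tag, idx = self._split(pc)
        if tag in self.table:                     # hit
            self.table.move_to_end(tag)
            if (self.table[tag] >> idx) & 1:      # was wrong before
                return False                      # ignore the prediction
        return predicted_hazard                   # miss: trust pre-decode

# An unnecessary hazard at PC 0x0102 is recorded once and then
# suppressed on the next execution of the same instruction.
ht = HistoryTable()
ht.insert(0x0102)
print(ht.lookup(0x0102, predicted_hazard=True))   # False
```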
Experimental Methodology

We implemented and verified the single-issue, in-order, 4-stage pipelined EISC processor with the proposed schemes on the Xilinx Kintex-7 evaluation platform [21]. Since the EISC processor targets embedded systems, we measured its performance using the EEMBC benchmark suite [8], excluding applications whose code and data sizes exceeded the memory sizes shown in Figure 1. We also implemented hardware performance counters (HPCs) inside the EISC processor to measure the execution and stall cycles of the applications in detail. In addition to the functional verification, we used the Samsung 65 nm process standard cells [22] and the Design Compiler [23] to measure the maximum operating frequency of the processor and its implementation cost.

Performance Evaluation

We measured the execution and stall cycles using the following three configurations: (1) the baseline EISC processor, (2) the baseline with the pre-decoding scheme, and (3) the baseline with both the pre-decoding and the history schemes. Figure 8 shows the total execution cycles of our approaches, normalized to the baseline configuration. In our approaches, the total cycle count increases due to stalls from operand misprediction, and Figure 9 shows the increment ratio of the stall cycles in each instruction category. It should be noted that although the total cycle count increases in our approaches, the execution time decreases thanks to the increased operating frequency, as described in Section 5.3.

Figure 8. The total cycle overhead of our approaches when running the EEMBC applications. All measurements are normalized to the baseline EISC processor.

The average execution cycles with the pre-decoding scheme increased by about 2.7% compared to the baseline EISC processor, because we incorrectly predicted the source register fields and assumed a dependence for all EF instructions. As a result, we could not issue instructions independent of the in-flight ones, and they incurred unnecessary data hazards in the pipeline. However, pre-decoding with the history table significantly reduced this overhead, increasing the execution cycles by only about 1.0% on average. We also found that the increment ratio of the stall cycles was 1.6% with the pre-decoding scheme and 0.2% with the pre-decoding plus the history table. Especially in autcor00, routelookup, fft00, idctrn01, and bezier01fixed, the increment of the execution cycles seen with the pre-decoding scheme alone was significantly reduced. These applications include many loop iterations and incurred many unnecessary data hazards, so there were many opportunities for the history table to mitigate the increment of the stall cycles; they showed a 98.6% hit ratio in the table. However, canrdr01, rspeed01, tblook01, a2time01, and puwmod01 had poor temporal locality of instructions, and their hit ratio was only 11.2%. Therefore, the predictor could not mitigate the overhead of the increased stall cycles. In fbital01, conven00, bitmnp01, dither01, rotate01, and ospf, although they exploited high temporal locality, there were few additional data hazards; thus, the opportunity for the history table was also insufficient. From Figure 9, we found that, for the SF instructions, the pre-decoding scheme caused unnecessary data hazards across the EEMBC applications, which was particularly noticeable in autcor00, routelookup, fft00, idctrn01, and bezier01fixed. The overhead of the SF instructions due to the pre-decoding scheme was largest in idctrn01, at about 6.6%.
However, these applications exploited high temporal locality; thus, the related overhead decreased remarkably after applying the history table. We pre-decoded the SL instructions without causing any data hazard in the pipeline; thus, there was no increment due to the SL instructions. In the case of the EF instructions, however, we always issued a data hazard; therefore, their execution overhead was significant, noticeably in applications with a large proportion of EF instructions such as fft00, idctrn01, bezier01fixed, tblook01, and a2time01. However, autcor00 showed no increment from the EF instructions despite the large proportion of EF instructions among its total dynamic instructions. This is because most of its EF instructions already had a data dependency on in-flight instructions; the gain from the history table was therefore low there, about 0.1%. For the SF instructions, the autcor00, routelookup, fft00, idctrn01 and bezier01fixed applications exploited high temporal locality; thus, the related overhead decreased remarkably after applying the history table.

Operational Frequency and Speedup

In the baseline EISC processor, we identified the path that performs the data dependency detection after instruction decoding as the critical path. Therefore, by pre-decoding the source register fields, we could remove the detection logic from the critical path of the EISC processor, which increased the maximum operating frequency by 12.5%, that is, from 370 MHz to 416.6 MHz. Figure 10 shows the speedup of the EEMBC applications obtained by improving the frequency. Since the history table is not on the critical path, only one bar is presented per application. We achieved an average speedup of about 11.4%, and especially in autcor00, fbital00, conven00, and bitmnp01 we achieved the maximum speedup of 12.5%, since the unnecessary stall cycles were almost completely removed.

Figure 10. Speedup of the EEMBC applications with respect to the baseline EISC processor, obtained by improving the maximum operating frequency by 12.5%.

Area and Power

To analyze the hardware overhead of the proposed EISC processor, we synthesized the baseline and our EISC processors with the same timing constraint to evaluate their area and their static and dynamic power consumption. We incurred an insignificant area overhead of 2.5% due to the pre-decoding and history-table logic. However, our proposed schemes have an advantage in power consumption over the baseline processor. Figure 11 shows the static power per unit area of a NAND gate with respect to its driving strength; the NAND gate tends to consume more static power per unit area as its driving strength becomes stronger. This means that when a design with strong-driving-strength gates and a design with weak-driving-strength gates occupy the same logic area, the latter consumes less static power than the former. In the case of our EISC processor, we had a better timing margin than the baseline EISC processor, since we improved its critical path delay. Therefore, the EDA tool used weak-driving-strength gates to synthesize our EISC processor, and the static power consumption of the proposed approach was lower than that of the baseline EISC processor by about 7.2%. Our EISC processor searches for the unnecessary data hazards by using the history table in the fetch stage. Therefore, the dynamic power consumption of the fetch stage increased by 8.2% due to the table implementation.
However, the dynamic power consumption of the decode stage decreased by 16.7%, thanks to the simplification of the decode stage through the pre-decoding scheme and the use of the weak-driving-strength gates. Consequently, the total dynamic power consumption was reduced by 8.5%.

Energy and EDP

The operating frequency improved by 12.5%, which reduced the execution time. Also, the synthesis tool used the weak-driving-strength gates, thus reducing the power consumption. Therefore, our approach could reduce energy consumption and improve the EDP. Figure 12 shows the energy consumption of the EEMBC applications on our proposed EISC processor, normalized to the baseline result. The energy consumption of our EISC processor improved by about 4% on average compared to the baseline. The figure shows that the higher the speedup ratio of an EEMBC application, the higher its energy improvement ratio. Figure 13 shows the normalized EDP of our EISC processor with respect to the baseline one; the EDP of our processor improved by about 13.6% on average. Considering Figure 10, we found that the more significant the speedup of an EEMBC application, the higher its EDP improvement ratio. The highest EDP improvement ratio, 15.3%, was achieved in autcor00, fbital00, conven00, and bitmnp01, which had the highest speedup gain. On the other hand, the lowest EDP improvement ratio was 10.3%, in bezier01fixed, which had the smallest speedup gain. The reciprocal of the EDP corresponds to the metric performance per power. Since our EDP is lower than that of the original design in all cases, our design is well suited for low-power IoT and embedded systems.

Figure 13. EDP with our proposals, normalized to the baseline EISC processor.

Conclusions

Data dependencies can occur between in-flight instructions in the pipeline stages of a processor. It is therefore necessary to detect the dependences between them by identifying their source and destination operands. The existing decoding method, which requires complete decoding, increases the decoding latency when supporting complex ISAs and can affect the critical path of the pipeline. To solve this problem, we proposed a pre-decoding method that performs data dependency detection by assuming the source register operand bit-fields, based on an analysis of the ISA, without completely decoding the instructions. We analyzed the EISC ISA and the dynamic instructions of the EEMBC applications to justify our motivation. However, the pre-decoding method caused unnecessary stalls due to its incorrect assumptions, which degraded the performance. To solve this problem, we adopted a table-based way to store the dependence history and later used this information for more precisely predicting the dependence. We modified the data dependence logic and the pipeline stages of the EISC processor to apply our schemes. We measured the performance of the EISC processor on the FPGA platform, and we synthesized the EISC processor using the standard cells of the 65 nm process to measure the maximum operating frequency of the processor and to analyze the hardware overhead of the proposed scheme. As a result of applying the proposed methods to the EISC processor, we improved the critical path delay of the processor by about 12.5% and thus achieved an average 11.4% speedup with a small hardware area overhead of 2.5%. The static power consumption, dynamic power consumption, and EDP of the processor were improved by 7.2%, 8.5%, and 13.6%, respectively.
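As a rough cross-check (our own back-of-the-envelope arithmetic, not a computation from the paper), the reported average energy gain and speedup combine into an EDP improvement close to this reported figure:

```python
# EDP = energy x delay. With ~4% lower energy and an ~11.4% average
# speedup, the expected EDP improvement is close to the reported 13.6%.
energy_ratio = 0.96          # ~4% energy reduction vs. baseline
delay_ratio = 1 / 1.114      # ~11.4% average speedup
edp_ratio = energy_ratio * delay_ratio
print(f"EDP improvement: {(1 - edp_ratio) * 100:.1f}%")  # ~13.8%
```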
Our study was the first to predict and interpret source and destination operands of ISAs for reducing the decoding delay while minimizing the related overhead. As we discussed in the motivation section, the ISAs of other processors have structural features similar to those of the EISC ISA; therefore, our technique could be applied to other processors as well. Also, the pattern history works well for runs of 32 statically consecutive instructions; otherwise, the miss ratio of the history table increases, incurring more stalls. To solve this problem, the hardware structure would have to be modified at an increased hardware cost, for example by increasing the table size and the set associativity. Instead, we plan to use a software method, that is, tracing the executed instructions at runtime and using the profiled result for the compiler's jump optimization. This plan is left as our future work.

Author Contributions: S.P. mainly performed the research for this paper; J.J., C.K., G.I.M., H.J.L. and S.W.K. supported his research in different ways. Also, S.W.K. led this project, and therefore he is responsible for this research and publication. All authors have read and agreed to the published version of the manuscript.

Funding: Ministry of Trade, Industry and Energy: 10052716 and N0001883.
The Pickands–Balkema–de Haan theorem for intuitionistic fuzzy events

In this paper the space of observables with respect to a family of intuitionistic fuzzy events is considered. In [3] we proved a modification of the Fisher–Tippett–Gnedenko theorem for sequences of independent intuitionistic fuzzy observables. Now we prove a modification of the Pickands–Balkema–de Haan theorem. Both theorems belong to the part of statistics called extreme value theory.

Introduction

Extreme value theory is the part of statistics that deals with the probability of extreme and rare events with a large impact; it studies the endpoints of distributions. The Fisher–Tippett–Gnedenko theorem describes the convergence in distribution of maxima of independent, identically distributed random variables. An alternative to the method of maximal observations is the method that models all observations exceeding a predefined boundary (i.e. a threshold). This method is used in the Pickands–Balkema–de Haan theorem. In [3] a modification of the Fisher–Tippett–Gnedenko theorem for sequences of independent intuitionistic fuzzy observables was proved. Now we prove a modification of the Pickands–Balkema–de Haan theorem for such sequences. One of the advantages of the Kolmogorov concept of probability is the identification of the notion of an event with the notion of a set. Therefore it seems important also in intuitionistic fuzzy probability theory to work with the notion of an intuitionistic fuzzy event as an intuitionistic fuzzy set. In intuitionistic fuzzy probability theory, instead of the probability P : S → [0, 1], an intuitionistic fuzzy state m : F → [0, 1] is considered, where F is a family of intuitionistic fuzzy subsets of Ω. And instead of a random variable ξ : Ω → R, an intuitionistic fuzzy observable x : B(R) → F is considered. Our main idea is the representation of a given sequence (y_n)_n of intuitionistic fuzzy observables y_n : B(R) → F by a probability space (Ω, S, P) and a sequence (η_n)_n of random variables η_n : Ω → R. Then from the convergence in distribution of (η_n)_n the convergence in distribution of (y_n)_n follows. Of course, different sequences (y_n)_n may lead to different probability spaces. Nevertheless, the transformation can be used to obtain new results about intuitionistic fuzzy states on F. Note that the Atanassov concept of intuitionistic fuzzy sets [1,2] used here is more general than the Zadeh notion of fuzzy sets [15,16]. Therefore, in Section 2 some basic information about intuitionistic fuzzy states and intuitionistic fuzzy observables on families of intuitionistic fuzzy sets is presented [13]. Further, in Section 3 the independence of intuitionistic fuzzy observables is studied. In Section 4 the basic notions of extreme value theory are recalled. Finally, in Section 5 the intuitionistic fuzzy excess distribution F_u is studied and the Pickands–Balkema–de Haan theorem for the intuitionistic fuzzy case is proved. Throughout the text we use the abbreviation "IF" for "intuitionistic fuzzy".

IF-events, IF-states and IF-observables

Our main notion in the paper is the notion of an IF-event, which is a pair of fuzzy events.
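For orientation, the standard convention for the family F in the IF-probability literature (see [13]) is, up to notation:

$$
\mathcal{F} = \bigl\{ A = (\mu_A, \nu_A) \;\big|\; \mu_A, \nu_A : \Omega \to [0,1],\ \mu_A + \nu_A \le 1 \bigr\},
$$

where μ_A is the membership function and ν_A the non-membership function of the IF-event A.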
If A = (μ_A, ν_A) ∈ F and B = (μ_B, ν_B) ∈ F, then we define the Lukasiewicz binary operations ⊕, ⊙ on F by

A ⊕ B = ((μ_A + μ_B) ∧ 1, (ν_A + ν_B − 1) ∨ 0),
A ⊙ B = ((μ_A + μ_B − 1) ∨ 0, (ν_A + ν_B) ∧ 1),

and the partial ordering is given by

A ≤ B ⟺ μ_A ≤ μ_B and ν_A ≥ ν_B.

If f = χ_A, then the corresponding IF-event has the form (χ_A, 1 − χ_A). In this case A ⊕ B corresponds to the union of sets, A ⊙ B to the product of sets, and ≤ to the set inclusion. In IF-probability theory ([13]), instead of the notion of probability we use the notion of a state. Probably the most useful result of the IF-state theory is the following representation theorem (see [11]). The third basic notion of probability theory is the notion of an observable. Let J be a suitable family of intervals in R. Then the σ-algebra σ(J) is denoted by B(R) and is called the σ-algebra of Borel sets; its elements are called Borel sets.

Definition 2.6. By an IF-observable on F we understand each mapping x : B(R) → F satisfying the usual conditions of an observable (see [13]). Similarly as in the classical case, the following two theorems can be proved ([13]). Then E(x) = ∫_R t dF(t). Moreover, if there exists ∫_R t² dF(t), then we define the IF-dispersion D²(x) by the formula D²(x) = ∫_R t² dF(t) − (E(x))².

Independence

In the paper we shall work only with independent IF-observables. Of course, we first need the existence of the joint IF-observable. For this reason we define the product of IF-events ([9]). The next important notion is the notion of a joint IF-observable and its existence (see [12]).

Definition 3.2. Let x, y : B(R) → F be two IF-observables. The joint IF-observable of the IF-observables x, y is a mapping h : B(R²) → F satisfying the corresponding boundary, additivity and continuity conditions (see [12]).

Proof. See [12].

Definition 3.4. Let m be an IF-state. IF-observables x_1, . . . , x_n are called independent (with respect to m) if m(h_n(A_1 × · · · × A_n)) = m(x_1(A_1)) · · · m(x_n(A_n)) for all A_1, . . . , A_n ∈ B(R), where h_n is their joint IF-observable.

Theorem 3.5. Let R^N be the set of all sequences (t_i)_i of real numbers. Let (x_n)_n be a sequence of independent IF-observables in (F, m) with the same IF-distribution function. Then there exists a probability space (R^N, σ(C), P) with the following property: define for each n ∈ N the mapping ξ_n : R^N → R by ξ_n((t_i)_i) = t_n. Then (ξ_n)_n is a sequence of independent random variables in the space (R^N, σ(C), P).

Proof. Notation: a set C ⊂ R^N is called a cylinder if there exist n ∈ N and D ∈ B(R^n) such that C = {(t_i)_i : (t_1, ..., t_n) ∈ D}. By C we denote the family of all cylinders in R^N, and by σ(C) the σ-algebra generated by C. Construction: consider the measurable space (R^N, σ(C)), a sequence (x_n)_n of independent IF-observables x_n : B(R) → F (i.e. x_1, . . . , x_n are independent for each n ∈ N), and the states m_n : B(R^n) → [0, 1]. The states m_n are consistent, i.e. m_{n+1}(B × R) = m_n(B) for each B ∈ B(R^n). Therefore, by the Kolmogorov consistency theorem (see [14]), there exists a probability measure P : σ(C) → [0, 1] such that P(C) = m_n(B) for each cylinder C = π_n^{-1}(B) ∈ C, where C is the family of all cylinders in R^N and π_n : R^N → R^n is the projection defined by π_n((t_i)_{i=1}^∞) = (t_1, . . . , t_n). Let n ∈ N and A_1, ..., A_n ∈ B(R); then the independence of ξ_1, ..., ξ_n follows. If there exists the IF-mean value E(x_n), then E(ξ_n) = E(x_n). Similarly, the equality D²(ξ_n) = D²(x_n) can be proved. We also need the notion of convergence of IF-observables (see [8]).

Definition 3.6. Let x_1, . . . , x_n : B(R) → F be independent IF-observables and g_n : R^n → R a Borel measurable function. Then the IF-observable y_n = g_n(x_1, . . . , x_n) : B(R) → F is defined by the equality y_n = h_n ∘ g_n^{-1}, where h_n : B(R^n) → F is the n-dimensional IF-observable (the joint IF-observable of x_1, . . . , x_n).

4. the IF-observable y_n = (1/a_n) max(x_1, . . . , x_n) − b_n is defined by the equality y_n = h_n ∘ g_n^{-1}, where g_n(u_1, . . . , u_n) = (1/a_n) max(u_1, . . . , u_n) − b_n.
Definition 3.8. Let (y_n)_n be a sequence of IF-observables in the IF-space (F, m). We say that (y_n)_n converges in distribution to a function Ψ : R → [0, 1] if for each t ∈ R, lim_{n→∞} m(y_n((−∞, t))) = Ψ(t).

Basic notions from extreme value theory 4.1 The Fisher-Tippett-Gnedenko theorem The following notions of extreme value theory on real numbers can be found in [4][5][6] and [7]. Let X_1, X_2, ... be independent, identically distributed real-valued random variables with distribution function F. Denote by M_n = max(X_1, ..., X_n) the maximum of the first n random variables, n ≥ 2. Theorem 4.1. (Fisher-Tippett-Gnedenko) Let X_1, X_2, ... be a sequence of independent, identically distributed random variables. If there exist sequences of real constants a_n > 0, b_n and a non-degenerate distribution function H such that lim_{n→∞} P((M_n − b_n)/a_n < x) = H(x), then H is the distribution function of one of the following three types of distributions: Gumbel, Frechet or Weibull. A parameter µ ∈ R is the location parameter and a parameter σ > 0 is the scale parameter. The Gumbel, Frechet and Weibull distributions from Theorem 4.1 can be described using the generalized extreme value distribution (GEV); the parameter ε is called the shape parameter.

The Pickands-Balkema-de Haan theorem In Section 4.1 the Fisher-Tippett-Gnedenko theorem concerns convergence in distribution of maxima of independent, identically distributed random variables. An alternative to the block-maxima method is the method that models all observations that exceed a predefined boundary (i.e., a threshold). Such extremes occur "near" the upper end of the distribution support; hence, intuitively, the asymptotic behaviour of M_n must be related to the distribution function F in its right tail near the right endpoint. We denote by x_F the right endpoint of F (see [4][5][6] and [7]). Definition 4.2. (Maximum domain of attraction, MDA) We say that the distribution function F of X_i belongs to the maximum domain of attraction of the extreme value distribution H if there exist constants a_n > 0, b_n ∈ R such that lim_{n→∞} P((M_n − b_n)/a_n < x) = H(x) holds. We write F ∈ MDA(H). Definition 4.3. (Excess distribution function) Let X be a random variable with distribution function F and right endpoint x_F. For fixed u < x_F, u > 0, the distribution of X − u conditional on X > u is the excess distribution function F_u of the random variable X (of the distribution function F) over the threshold u. Remark 4.4. The excess distribution function F_u can be expressed in terms of F. Definition 4.5. (Generalized Pareto distribution, GPD) Define the distribution function G_{ε,β}, where β > 0 is the scale parameter; G_{ε,β} is called the generalised Pareto distribution. We can extend the family by adding a location parameter ν ∈ R. Then we get the function G_{ε,ν,β} by replacing the argument x above by x − ν in G_{ε,β}. The support has to be adjusted accordingly. Remark 4.6. The GPD transforms into a number of other distributions depending on the value of ε. When ε > 0, it takes the form of the ordinary Pareto distribution; this case is most relevant for financial time series data, as it has a heavy tail. If ε = 0, the GPD corresponds to the exponential distribution, and it is called a short-tailed, Pareto type II distribution for ε < 0. Theorem 4.7. (Pickands-Balkema-de Haan) For every ε ∈ R, F ∈ MDA(H_ε) if and only if the excess distribution F_u converges to the generalised Pareto distribution G_{ε,β(u)} as u → x_F, for some positive function β. Remark 4.8. Theorem 4.7 says that, for some function β to be estimated from the data, the excess distribution F_u converges to the generalised Pareto distribution G_{ε,β} for large u. Remark 4.9.
The GEV H_ε, ε ∈ R, describes the limit distribution of normalised maxima. The GPD G_{ε,β}, ε ∈ R, β > 0, appears as the limit distribution of scaled excesses over high thresholds.

The Pickands-Balkema-de Haan theorem for the IF-case Now we return to the IF-case. First we recall the Fisher-Tippett-Gnedenko theorem for a sequence of independent, identically distributed IF-observables, see [3]. Theorem 5.1. (Fisher-Tippett-Gnedenko) Let x_1, x_2, ... be a sequence of independent, identically distributed IF-observables such that D²(x_n) = σ², E(x_n) = µ (n = 1, 2, ...). If there exist sequences of real constants a_n > 0, b_n and a non-degenerate distribution function H such that the corresponding limit relation for the normalised maxima observables holds, then H is the distribution function of one of the following three types of distributions: Gumbel, Frechet or Weibull. Here a parameter µ ∈ R is the location parameter and a parameter σ > 0 is the scale parameter. Let x be an IF-observable on F and let F be the IF-distribution function of x. We denote by t_F the right endpoint of the IF-distribution function F. Definition 5.2. (Maximum domain of attraction) We say that F belongs to the maximum domain of attraction of the extreme value distribution H if the analogue of the limit relation in Definition 4.2 holds. We write F ∈ MDA(H). Definition 5.3. (Excess IF-distribution function) Let F be an IF-distribution function with right endpoint t_F. For fixed u < t_F, u > 0, the excess IF-distribution function F_u of F over the threshold u is defined analogously to Definition 4.3. Theorem 5.4. (Pickands-Balkema-de Haan for IF-observables) For every ε ∈ R, F ∈ MDA(H_ε) if and only if the excess IF-distribution F_u converges to the generalised Pareto distribution G_{ε,β(u)} as u → t_F, for some positive function β. Proof. Let (x_n)_n be a sequence of independent IF-observables in (F, m) with the same IF-distribution function F. Consider the measure space (R^N, σ(C), P) and the random variables ξ_n((t_i)_i) = t_n (n = 1, 2, ...). Then by Theorem 3.5 the random variables ξ_n are independent. Denote by F* the distribution function of the random variable ξ_n. We can see that F* = F and t_{F*} = t_F because of the construction in Theorem 3.5. Hence F*_u = F_u. Therefore, for every ε ∈ R, the classical conditions are satisfied by F*, and the limit relation holds for some positive function β. Finally, from the classical Pickands-Balkema-de Haan theorem (see Theorem 4.7) we obtain the assertion. Remark 5.5. Theorem 5.4 says that, for some function β to be estimated from the data, the excess IF-distribution F_u converges to the generalised Pareto distribution G_{ε,β} for large u.

Conclusion We have proved an important assertion of mathematical statistics for IF-observables in IF-theory. Evidently the results can also be applied to fuzzy set theory. On the other hand, families of IF-events may be embedded into suitable MV-algebras. Therefore it would be useful to try to extend the Pickands-Balkema-de Haan theorem to probability MV-algebras.
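For the reader's convenience, the classical expressions behind Definitions 4.3-4.5 and Theorems 4.1 and 4.7, whose displayed forms are not reproduced above, can be sketched in their standard textbook parametrisation; these are the usual forms from the extreme value literature cited in Section 4, not a quotation of the paper's own displays, whose parametrisation may differ:

\[
H_{\varepsilon}(x) = \exp\!\Big(-\big(1 + \varepsilon\,\tfrac{x-\mu}{\sigma}\big)^{-1/\varepsilon}\Big), \quad 1 + \varepsilon\,\tfrac{x-\mu}{\sigma} > 0, \qquad H_{0}(x) = \exp\!\big(-e^{-(x-\mu)/\sigma}\big),
\]
\[
F_u(x) = P(X - u \le x \mid X > u) = \frac{F(u+x) - F(u)}{1 - F(u)}, \qquad 0 \le x < x_F - u,
\]
\[
G_{\varepsilon,\beta}(x) = 1 - \Big(1 + \frac{\varepsilon x}{\beta}\Big)^{-1/\varepsilon} \ (\varepsilon \neq 0), \qquad G_{0,\beta}(x) = 1 - e^{-x/\beta},
\]
and the Pickands-Balkema-de Haan statement
\[
F \in \mathrm{MDA}(H_\varepsilon) \iff \lim_{u \to x_F}\ \sup_{0 \le x < x_F - u}\big|F_u(x) - G_{\varepsilon,\beta(u)}(x)\big| = 0
\]
for some positive function \(\beta\).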
Microglial and Neuronal Cell Pyroptosis Induced by Oxygen–Glucose Deprivation/Reoxygenation Aggravates Cell Injury via Activation of the Caspase-1/GSDMD Signaling Pathway Pyroptosis is a new type of programmed cell death that induces a strong pro-inflammatory reaction. However, the mechanism of pyroptosis after brain ischemia/reperfusion (I/R) and the interaction between different neural cell types are still unclear. This study comprehensively explored the mechanisms and interactions of microglial and neuronal pyroptosis in a simulated I/R environment in vitro. BV2 (microglial) and HT22 (neuronal) cells were treated with oxygen–glucose deprivation/reoxygenation (OGD/R). Both BV2 and HT22 cells underwent pyroptosis after OGD/R, and pyroptosis occurred at an earlier time point in HT22 cells than in BV2 cells. Caspase-11 and Gasdermin E expression in BV2 and HT22 cells did not change significantly after OGD/R. Inhibition of caspase-1 or GSDMD activity, or down-regulation of GSDMD expression, alleviated pyroptosis in both BV2 and HT22 cells after OGD/R. Transwell studies further showed that OGD/R-treated HT22 or BV2 cells aggravated pyroptosis of adjacent non-OGD/R-treated cells, which could be relieved by inhibition of caspase-1 or GSDMD. These results suggest that OGD/R induces pyroptosis of microglia and neuronal cells and aggravates cell injury via activation of the caspase-1/GSDMD signaling pathway. Our findings indicate that caspase-1 and GSDMD may be therapeutic targets after cerebral I/R.

Introduction Stroke is a disease with high morbidity and mortality worldwide. Ischemic stroke accounts for about 80% of neurovascular injury [1]. At present, the treatment strategy for acute ischemic stroke is to restore blood flow as soon as possible, using pharmacological thrombolysis and mechanical thrombectomy [2,3]. However, the time frame for safe intervention is still limited for these treatments. It is estimated that less than 10% of patients with acute stroke benefit from reperfusion treatments, because delayed treatment can lead to a worsened outcome [4] caused by reperfusion injury. An important mechanism of reperfusion injury is the inflammatory response following cerebral ischemia/reperfusion (I/R) [5]. However, the inflammatory response after cerebral I/R is a very complex process that still needs to be explored. Recent studies have shown that the inflammasomes formed within brain cells (including neurons, microglia and astroglia) play an important role in the inflammatory response to brain I/R injury [6][7][8][9]. Activation of caspase-1, a component of inflammasomes, induces the processing of the pro-inflammatory cytokines IL-1β and IL-18. Although some studies have suggested the activation of GSDMD and pyroptosis in microglia and neurons under simulated ischemic conditions in vitro [21][22][23], it is still unclear whether there is interplay between microglial and neuronal pyroptosis after I/R that results in inflammatory propagation. Moreover, there are no reports about activation of GSDME following cerebral I/R. This study comprehensively explores the mechanisms and interactions of microglial and neuronal pyroptosis during mimic I/R injury in vitro.

Experimental Design This study consisted of three parts. In the Part 1 experiment, the proportion of pyroptotic BV2 (microglial) and HT22 (neuronal) cells, the expression of pyroptosis-related proteins and cell viability were observed during OGD/R (supplemental Fig. 4A).
Part 2 experiment explored the effect of inhibition of caspase-1 or GSDMD on BV2 and HT22 cell pyroptosis following OGD/R (supplemental Fig. 4B). Part 3 experiment demonstrated the interaction of BV2 and HT22 cell pyroptosis after OGD/R by the transwell co-culture system (supplemental Fig. 4C, D). Oxygen-Glucose Deprivation/Reoxygenation (OGD/R) Model In order to mimic cerebral I/R an OGD/R in vitro model was used, the OGD/R model was constructed by replacing normal culture medium with glucose-free DMEM medium (Gibco, Invitrogen, USA), and then the cells was placed in a hypoxic incubator (Anoxomat Mark II, Mart Microbiology B.V, Netherlands) that contained a gas mixture of 1% O 2 , 5% CO 2 and 94% N 2 for 4 h (BV2 cells) or 2 h (HT22 cells) at 37 °C and then recovering normal gas and culture medium at the optimal time. Samples collected at various time points after OGD/R for subsequent experiments. Normal cultured cells were used as the control group. Detection of Cell Pyroptosis by Dye Uptake The dye uptake method was carried as previously reported [12]. The formation of discrete pores in the plasma membrane of cells underwent pyroptosis resulted in increased uptake of YO-PRO-1 iodide, a small (629 Da) membrane impermeable dye, with exclusion of a larger (1293 Da) membrane impermeable dye, ethidium homodimer-2 (Eth-D2). The cells at various time points after OGD/R in the culture dish of 10 cm in diameter was added to 1 µM YO-PRO-1 iodide (Invitrogen, Shanghai, China), 1 µM Eth-D2 (Invitrogen, Shanghai, China) and 2 µM Hoechst 33,342 (Invitrogen, Shanghai, China) for 10 min. Then wash the cells with PBS twice, 5 min each. Triton X-100 detergent (0.1%) was used as a positive control for dye uptake. Normally cultured cells are treated as controls. Images were captured using a ZEISS A2 inverted microscope (ZEISS, Germany). Dye-positive cells were counted in a total of 18 fields per group (6 wells per group, 3 fields per well). Immunofluorescence (IF) Analysis The cells were cultured in confocal dish. Then, the cells were fixed with 4% paraformaldehyde for 30 min, permeabilized for 10 min in 0.3% Triton X-100 and blocked using 5% BSA for 1 h. The cells were incubated with primary antibodies against caspase-1 (Abcam, ab179515, 1:200) or GSDMD (Abcam, ab209845, 1:200) at 4 °C overnight. The cells were washed three times for 10 min each with PBST (PBS tween-20, 0.05%), and then the secondary anti-Rabbit Alexa Fluor-488-conjugated antibody (Zsbio,1:300) was added for 1 h at room temperature on the dark. Nuclear were stained with 4′, 6-diamidino-2-phenylindole (DAPI). The images were taken with ZEISS confocal microscopy system and fluorescence intensity analysis was performed with ImageJ software. Enzyme Linked Immunosorbent Assay (ELISA) The levels of the inflammatory cytokines IL-1β, IL-18 and the Lactate dehydrogenase (LDH), an indicator for cell injury/death, were measured in the cell supernatants by ELISA according to the manufacturer's instructions (eBioscience, Thermo Fisher). Cell Viability The cell viability was assessed using MTT assay Kit (Promega, USA) according to manufacturer's instructions. Approximate 4.0 × 10 4 cells were plated in 96-well plates overnight. The cells were collected at various time points after carrying out OGD/R. Ten microlitre MTT solution was added to 100 µL of culture medium in each well and incubated for 4 h at 37 °C. Then the solution was removed and 100 µL of DMSO was added to each well. 
After mixed by oscillation 10 min, the optical density was measured at 570 nm by using the enzyme standard instrument. The normal cultured cells were as the control. Inhibition of Caspase-1 and GSDMD According to the results of our preliminary experiments, caspase-1 inhibitor AC-YVAD-CMK at 10 µM (supplemental Fig. 1), chemical inhibitor of GSDMD Necrosulfonamide (NSA) [24] at 20 µM (supplemental Fig. 2), and a siRNA of GSDMD (siGSDMD fragment 829 (supplemental Fig. 3) was selected for their optimal inhibition effect. The peak time point of pyroptosis was subjected to the following inhibition experiments. AC-YVAD-CMK (Sigma-Aldrich, Darmstadt, Germany) or NSA (MedChemExpress, USA) dissolved in dimethylsulphoxide (DMSO) was added to the culture medium 4 h before OGD treatment [24,25]. siGS-DMD-829 was transfected 48 h before OGD/R, and siGS-DMD-NC was transfected as the control (siGSDMD-NC does not inhibit GSDMD). RNA oligonucleotides purchased from GenePharma (Shanghai, China) (supplemental Table) were transfected into cells using lipofectamine RNAiMax (Invitrogen, USA) according to the manufacturer instructions. Then, samples at the peak time point of cell pyroptosis were collected for subsequent experiments (Table 1). Transwell Co-Culture System for BV2 and HT22 Cells The transwell co-culture system was designed as previous reported [26]. To observe the effect of HT22 cells on BV2 cells after OGD/R, the upper layer was plated with HT22 cells and BV2 cells were cultured on the bottom well of the chamber. HT22 cells were treated with OGD/R, with or without pretreatment of AC-YVAD-CMK or NSA or siGSDMD-829 before OGD/R (as described previously). Similarly, in order to observe the effect of BV2 cells on HT22 cells after OGD/R, the BV2 cells were seeded on the upper layer, treated with OGD/R with or without pretreatment of AC-YVAD-CMK or NSA or siGSDMD-829 before OGD/R, and HT22 cells were plated on the bottom layer of the chamber. The semi-permeable membrane allows sharing of culture medium and its components between cells grown in the upper and lower chambers of the same transwell. The selection of time points for samples harvested from the lower chamber was in accordance with the peak time point of the corresponding cell pyroptosis observed in the foregoing experiment. The samples of the lower cells were collected for subsequent experiments. Statistical Analysis Statistical analysis was performed using the GraphPad PrismSoftware. The ImageJ software was used to analyze the optical density of the western blot results and to calculate the number of pyroptotic cells. Values were presented as the means ± SEM with the homogeneity of variance. The Student t-test was used for the difference between the two groups of independent data. The comparison between multiple groups of samples was performed by one-way ANOVA. Sample sizes were chosen based on previous literature. Differences were considered statistically significant when P < 0.05 (*P < 0.05; **P < 0.01). BV2 and HT22 Cells Underwent Cell Pyroptosis After OGD/R The experimental results of the BV2 cell pyroptosis after OGD/R were as followed. The dye uptake method showed the highest proportion of pyroptotic cells appeared at 12 h after OGD/R (Fig. 1A, B). Western blot results demonstrated that GSDMD-N and cleaved-caspase-1 expression increased at 12 h after OGD/R, but the levels of GSDME-N and cleaved-caspase-11 were not upregulated in all time points after OGD/R ( Fig. 1C-K). 
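As a minimal sketch of the quantification workflow described in the Methods above (dye-uptake counting, MTT viability relative to normal controls, and two-group or multi-group comparisons), the calculations can be expressed as follows. The study itself used ImageJ and GraphPad Prism; the scipy-based stand-in below, the function names and the example numbers are purely illustrative and not taken from the study's data.

```python
import numpy as np
from scipy import stats

def percent_pyroptotic(yo_pro_positive, hoechst_total):
    """Percent of YO-PRO-1-positive (pore-forming) cells per field,
    relative to the total number of Hoechst-stained nuclei."""
    return 100.0 * np.asarray(yo_pro_positive) / np.asarray(hoechst_total)

def mtt_viability(od570_treated, od570_control):
    """Cell viability (%) as the ratio of OD570 in treated wells to the
    mean OD570 of normally cultured control wells."""
    return 100.0 * np.asarray(od570_treated) / np.mean(od570_control)

# Illustrative counts only (fields per group as in the Methods: several fields per well).
control_pct = percent_pyroptotic([3, 2, 4], [210, 198, 205])
ogd_6h_pct  = percent_pyroptotic([20, 18, 22], [200, 205, 199])
ogd_12h_pct = percent_pyroptotic([41, 38, 45], [202, 190, 197])

# Two-group comparison (Student t-test), significance threshold P < 0.05.
t_stat, p_ttest = stats.ttest_ind(control_pct, ogd_12h_pct)

# Multi-group comparison (one-way ANOVA) across OGD/R time points.
f_stat, p_anova = stats.f_oneway(control_pct, ogd_6h_pct, ogd_12h_pct)

# Viability example (illustrative OD570 values).
viability = mtt_viability([0.41, 0.39, 0.44], [0.82, 0.80, 0.85])

print(f"t-test P = {p_ttest:.4f}, ANOVA P = {p_anova:.4f}")
print(f"Mean viability at 12 h: {viability.mean():.1f} % of control")
```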
Immunofluorescence results showed that caspase-1 and GSDMD expression increased at 12 h after OGD/R ( Fig. 1L-O). In addition, IL-1β, IL-18 and LDH in cell supernatant were significantly increased ( Fig. 1P-R) and the cell viability was significantly decreased at 6 h, 12 h (most obvious) and 24 h after OGD/R (Fig. 1S). Similar trend in changes of pyroptosis markers was also observed in neuronal cell line HT22 after OGD/R, which reached the peak at 6 h time point after OGD/R, a . The highest proportion of pyroptotic BV2 cells appeared at 12 h after OGD/R. 0.1% Triton X-100 was used as a positive control. Magnification, ×100. Scale bar, 100 µM. C-K Representative immunoblot and quantification results of pyroptosis-related proteins in BV2 cells at the indicated time points during OGD/R. GSDMD-N and cleaved-caspase-1 expression increased at 12 h after OGD/R. β-actin was used as a loading control. L-O Representative immunofluorescence staining (L, N) and histogram of the relative expression (M, O) of caspase-1 and GSDMD in BV2 cells. Caspase-1 and GSDMD expression increased at 12 h after OGD/R. Magnification, × 600. Scale bar, 16 µm. P-R ELISA results showed that the highest level of IL-1β, IL-18and LDH in cell supernatant of BV2 cells appeared at 12 h after OGD/R. S The cell viability of BV2 cells was detected by MTT assay kit. The minimum cell proliferative ability appeared at 12 h after OGD/R. Data were represented as mean ± SEM. n = 3, *P < 0.05, **P < 0.01. N.S no significant difference ▸ phenomenon that is 6 h earlier than BV2 cells (supplemental Fig. 5). Inhibition of Caspase-1 Alleviated BV2 and HT22 Cell Pyroptosis and GSDMD Activation After OGD/R Based on the above experimental results, BV2 cells at 12 h and HT22 cells at 6 h after OGD/R were used to observe the inhibitory effects of caspase-1 and GSDMD. After pretreatment of AC-YVAD-CMK, a caspase-1 inhibitor, the proportion of pyroptotic BV2 and HT22 cells was significantly decreased ( Fig. 2A, B, I, J). AC-YVAD-CMK reduced caspase-1 cleavage and activation (supplemental Fig. 1), and down-regulated the processing of GSDMD-N in cells after OGD/R ( Fig. 2C-E, K-M). Production of IL-1β, IL-18, and LDH in cell supernatant of BV2 cells was also significantly decreased in cells pre-treated with AC-YVAD-CMK (Fig. 2F-H) and production of IL-1β, IL-18, and LDH in cell supernatant of HT22 cells was also significantly decreased in cells pre-treated with AC-YVAD-CMK ( Fig. 2N-P). Notably, the levels of IL-1β, IL-18 and LDH in HT22 cells were less than that in BV2 cells. Inhibition of GSDMD Alleviated BV2 and HT22 Cell Pyroptosis After OGD/R After pretreatment of NSA, an inhibitor of GSDMD, which knocked down cellular GSDMD expression ( Fig. 3C-E, K-M), the proportion of pyroptotic BV2 and HT22 cells Fig. 2 Inhibition of caspase-1 alleviated BV2 and HT22 cell pyroptosis and GSDMD activation after OGD/R. A, B Representative immunofluorescence staining of BV2 cells by dye uptake method (A) and histogram of the percentage of pyroptotic BV2 cells (B). After BV2 cells were pretreated with AC-YVAD-CMK (YVAD), a specific inhibitor of caspase-1, the proportion of pyroptotic BV2 cells was decreased at 12 h after OGD/R. 0.1%Triton X-100 was used as a positive control. Magnification, ×100. Scale bar, 100 µM. C-E Representative immunoblot and its quantification of GSDMD protein in BV2 cell. After BV2 cells were pretreated with YVAD, the GSDMD-N expression was downregulated at 12 h after OGD/R. β-actin was used as a loading control. 
F-H ELISA results showed that the level of IL-1β, IL-18 and LDH in cell supernatant of BV2 cells pretreated with YVAD was significantly decreased at 12 h after OGD/R. I, J Representative immunofluorescence staining of HT22 cells by dye uptake method (I) and histogram of the percentage of pyroptotic HT22 cells (J). After HT22 cells were pretreated with YVAD, the proportion of pyroptotic HT22 cells was decreased at 6 h after OGD/R. 0.1%Triton X-100 was used as a positive control. Magnification, ×100. Scale bar, 100 µM. K-M Representative immunoblot and its quantification of GSDMD protein in HT22 cell. After HT22 cells were pretreated with YVAD, the GSDMD-N expression was downregulated at 6 h after OGD/R. β-actin was used as a loading control. N-P ELISA results showed that the level of IL-1β, IL-18 and LDH in cell supernatant of HT22 cells pretreated with YVAD was significantly decreased at 6 h after OGD/R. Data were represented as mean ± SEM. n = 3, *P < 0.05, **P < 0.01. N.S. no significant difference ▸ was significantly decreased (Fig. 3A, B, I-J). Production of IL-1β, IL-18, and LDH in cell supernatant was also significantly decreased in cells pre-treated with NSA ( Fig. 3F-H, N-P). In addition, cells were transfected with siGSDMD-829, which knocked down cellular GSDMD expression ( Fig. 4C-E, K-M), the proportion of pyroptotic BV2 and HT22 cells was significantly decreased (Fig. 4A, B, I-J). Production of IL-1β, IL-18, and LDH in cell supernatant was also significantly decreased in cells pre-treated with NSA ( Fig. 4F-H, N-P). Notably, the levels of IL-1β, IL-18 and LDH in HT22 cells were less than that in BV2 cells. HT22 and BV2 Cells Undergoing OGD/R Aggravated Pyroptosis of Adjacent Non-OGD/R-Treated Cells Using the transwell co-culture model, HT22 cells were seeded in the upper layer of the chamber and underwent OGD/R treatment for 6 h, then HT22 cells were co-cultured with BV2 cells for additional 12 h. Greater proportion of pyroptotic cells (Fig. 5A, B), increased GSDMD-N expression (Fig. 5C-E), higher level of IL-1β, IL-18 and LDH (Fig. 5F-H) and decreased cell viability (Fig. 5I) were observed in the co-cultured BV2 cells as compared with BV2 cells cultured alone and were treated by OGD/R for 12 h. Similar results were found in the transwell co-culturing experiments, in which BV2 cells were seeded in the upper layer and underwent OGD/R for 12 h, then co-cultured with HT22 cells in the lower layer for additional 6 h. Compared After BV2 cells were pretreated with NSA, a chemical inhibitor of GSDMD, the proportion of pyroptotic BV2 cells was decreased at 12 h after OGD/R. 0.1%Triton X-100 was used as a positive control. Magnification, ×100. Scale bar, 100 µM. C-E Representative immunoblot and its quantification of GSDMD protein in BV2 cell. After BV2 cells were pretreated with NSA, the GSDMD-N expression was downregulated at 12 h after OGD/R. β-actin was used as a loading control. F-H ELISA results showed that the level of IL-1β, IL-18 and LDH in cell supernatant of BV2 cells pretreated with NSA was significantly decreased at 12 h after OGD/R. I, J Representative immunofluorescence staining of HT22 cells by dye uptake method (I) and histogram of the percentage of pyroptotic HT22 cells (J). After HT22 cells were pretreated with NSA, the proportion of pyroptotic HT22 cells was decreased at 6 h after OGD/R. 0.1%Triton X-100 was used as a positive control. Magnification, ×100. Scale bar, 100 µM. K-M Representative immunoblot and its quantification of GSDMD protein in HT22 cell. 
After HT22 cells pretreated with NSA, the GSDMD-N expression was downregulated at 6 h after OGD/R. β-actin was used as a loading control. N-P ELISA results showed that the level of IL-1β, IL-18 and LDH in cell supernatant of HT22 cells pretreated with NSA was significantly decreased at 6 h after OGD/R. Data were represented as mean ± SEM. n = 3, *P < 0.05, **P < 0.01. N.S no significant difference ▸ with HT22 cells treated by OGD/R for 6 h, the HT22 cells co-cultured with OGD/R-treated BV2 cells showed greater proportion of pyroptotic cells (Fig. 5J-K), increased GSDMD-N expression (Fig. 5L-N), higher level of IL-1β, IL-18 and LDH (Fig. 5O-Q) and decreased cell viability (Fig. 5R). However, levels of secreted IL-1β, IL-18 and LDH found in HT22 culture medium could be contributed from co-cultured BV2 cells and therefore may not necessarily reflect the changes of HT22. Inhibition of Caspase-1 in HT22 Cells Before OGD/R Alleviated Pyroptosis of Adjacent BV2 Cells, and Vice Versa Using the transwell model, HT22 cells were pretreated with AC-YVAD-CMK before OGD/R, followed by co-culturing with BV2 cells seeded in the lower layer of the chamber for additional 12 h. Reduction in pyroptotic cells (Fig. 6A, B), decreased GSDMD-N expression (Fig. 6C-E), lowered levels of IL-1β, IL-18 and LDH (Fig. 6F-H) were observed in the co-cultured BV2 cells as compared with BV2 cells that were co-cultured with OGD/R-treated HT22 without AC-YVAD-CMK pretreatment. Similar phenomenon was found in the transwell co-culture model in which BV2 cells were pretreated with AC-YVAD-CMK before OGD/R, followed by co-culturing with HT22 cells in the same chamber for additional 6 h. Compared with HT22 cells in the transwell co-culture model, in which BV2 cells were not pretreated with AC-YVAD-CMK Fig. 4 Knockdown GSDMD with siRNA alleviated BV2 and HT22 cell pyroptosis after OGD/R. A, B Representative immunofluorescence staining of BV2 cells by dye uptake method (A) and histogram of the percentage of pyroptotic BV2 cells (B). After BV2 cells were pretreated with siRNA of GSDMD(siGSDMD, the proportion of pyroptotic BV2 cells was decreased at 12 h after OGD/R. 0.1%Triton X-100 was used as a positive control. Magnification, ×100. Scale bar, 100 µM. C-E Representative immunoblot and its quantification of GSDMD protein in BV2 cell. After BV2 cells were pretreated with siGSDMD, the GSDMD-N and GSDMD-F expression were downregulated at 12 h after OGD/R. β-actin was used as a loading control. F-H ELISA results showed that the level of IL-1β, IL-18 and LDH in cell supernatant of BV2 cells pretreated with siGSDMD was significantly decreased at 12 h after OGD/R. I, J Representative immunofluorescence staining of HT22 cells by dye uptake method (I) and histogram of the percentage of pyroptotic HT22 cells (J). After HT22 cells were pretreated with siGSDMD, the proportion of pyroptotic HT22 cells was decreased at 6 h after OGD/R. 0.1%Triton X-100 was used as a positive control. Magnification, ×100. Scale bar, 100 µM. K-M Representative immunoblot and its quantification of GSDMD protein in HT22 cell. After HT22 cells were pretreated with siGSDMD, the GSDMD-N and GSDMD-F expression were downregulated at 6 h after OGD/R. β-actin was used as a loading control. N-P ELISA results showed that the level of IL-1β, IL-18 and LDH in cell supernatant of HT22 cells pretreated with siGSDMD was significantly decreased at 6 h after OGD/R. Data were represented as mean ± SEM. n = 3, *P < 0.05, **P < 0.01. 
N.S no significant difference ▸ before OGD/R, the lower HT22 cells showed fewer proportion of pyroptotic cells (Fig. 6I, J), decreased GSDMD-N production ( Fig. 6K-M) and lowered level of IL-1β, IL-18 and LDH in the culture medium ( Fig. 6N-P). Similarly, levels of secreted IL-1β, IL-18 and LDH found in HT22 culture medium could be contributed from co-cultured BV2 cells and therefore may not necessarily reflect the changes of HT22. Inhibition of GSDMD in HT22 Cells Before OGD/R Alleviated Pyroptosis of Adjacent BV2 Cells, and Vice Versa Using the transwell model, HT22 cells were pretreated with NSA before OGD/R, followed by co-culturing with BV2 cells seeded in the lower layer of the chamber for additional 12 h. Reduction in pyroptotic cells (Fig. 7A, B), decreased GSDMD-N expression (Fig. 7C-E), lowered levels of IL-1β, IL-18 and LDH (Fig. 7F-H) were observed in the co-cultured BV2 cells as compared with BV2 cells that were co-cultured with OGD/R-treated HT22 without NSA pretreatment. Similar phenomenon was found in the transwell Fig. 4), and vice versa (A, B). Representative immunofluorescence staining of BV2 cells by dye uptake method (A) and histogram of the percentage of pyroptotic BV2 cells (B). Compared with BV2 cells treated by OGD/R for 12 h, there was greater pyroptotic proportion in BV2 cells which were plated in the lower layer of chamber and cultured with the solution of upper HT22 cells for 12 h (indicated as Transwell in Figure). 0.1% Triton X-100 was used as a positive control. Magnification, ×100. Scale bar, 100 µM. C-E Representative immunoblot and its quantification of GSDMD protein in BV2 cells. The GSDMD-N expression in Transwell BV2 cells was increased compared with that in BV2 cells treated by OGD/R for 12 h. β-actin was used as a loading control. F-H ELISA results showed that the level of IL-1β, IL-18and LDH in cell supernatant of Transwell BV2 cells was increased compared with that in BV2 cells treated with OGD/R for 12 h. I The cell viability of TranswellBV2 cells was decreased compared with that of BV2 cells treated by OGD/R for 12 h. J, K Representative immunofluorescence staining of HT22 cells by dye uptake method (J) and histogram of the percentage of pyroptotic HT22 cells (K). Compared with HT22 cells treated by OGD/R for 6 h, there was greater pyroptotic proportion in HT22 cells which were layed in the lower layer and cultured with the solution of upper BV2 cells for 6 h (indicated as Transwell in figure). 0.1% Triton X-100 was used as a positive control.Magnification, ×100. Scale bar, 100 µM. L-N Representative immunoblot and its quantification of GSDMD protein in HT22 cells. The GSDMD-N expression in Transwell HT22 cells was increased compared with that in HT22 cells treated by OGD/R for 6 h. β-actin was used as a loading control. O-Q ELISA results showed that the level of IL-1β, IL-18 and LDH in cell supernatant of Transwell HT22 cells was increased compared with that in HT22 cells treated by OGD/R for 6 h. R The cell viability of Transwell HT22 cells was decreased compared with that in HT22 cells treated by OGD/R for 6 h. Data were represented as mean ± SEM. n = 3, *P < 0.05, **P < 0.01. N.S no significant difference ▸ co-culture model in which BV2 cells were pretreated with NSA before OGD/R, followed by co-culturing with HT22 cells in the same chamber for additional 6 h. 
Compared with HT22 cells in the transwellco-culture model, in which BV2 cells were not pretreated with NSA before OGD/R, the lower HT22 cells showed fewer proportion of pyroptotic cells (Fig. 7I, J), decreased GSDMD-N production ( Fig. 7K-M) and lowered level of IL-1β, IL-18 and LDH in the culture medium ( Fig. 7N-P). Similarly, levels of secreted IL-1β, IL-18 and LDH found in HT22 culture medium could be contributed from co-cultured BV2 cells and therefore may not necessarily reflect the changes of HT22. Using the transwell model, HT22 cells were pretreated with siGSDMD-829 before OGD/R, followed by co-culturing with BV2 cells seeded in the lower layer of the chamber for additional 12 h. Reduction in pyroptotic cells (Fig. 8A, B), decreased GSDMD-N expression (Fig. 8C-E), lowered levels of IL-1β, IL-18 and LDH (Fig. 8F-H) were observed in the co-cultured BV2 cells as compared with BV2 cells that were co-cultured with OGD/R-treated HT22 without siGS-DMD-829 pretreatment. Similar phenomenon was found in the transwell co-culture model in which BV2 cells were pretreated with siGSDMD-829 before OGD/R, followed by Fig. 6 Inhibition of caspase-1 in HT22 cells before OGD/R alleviated pyroptosis of adjacent BV2 cells in the transwell co-culture model (as displayed in supplemental Fig. 4), and vice versa. A, B Representative immunofluorescence staining of BV2 cells by dye uptake method (A) and histogram of the percentage of pyroptotic BV2 cells (B). Compared with Transwell BV2 cells (as described in Fig. 7), there was less pyroptotic proportion in BV2 cells which were plated in the lower layer and cultured for 12 h with the solution of upper HT22 cells pretreated with AC-YVAD-CMK (YVAD) before OGD/R (indicated as YVAD + DMSO + Transwell in Figure). 0.1% Triton X-100 was used as a positive control. Magnification, ×100. Scale bar, 100 µM. C-E Representative immunoblot and its quantification of GSDMD protein in BV2 cells. The GSDMD-N expression in YVAD + DMSO + Transwell BV2 cells was decreased compared with that in Transwell BV2 cells. β-actin was used as a loading control. F-H ELISA results showed that the level of IL-1β, IL-18 and LDH in cell supernatant of YVAD + DMSO + Transwell BV2 cells was decreased compared with Transwell BV2 cells. I, J Representative immunofluorescence staining of HT22 cells by dye uptake method (I) and histogram of the percentage of pyroptotic BV2 cells (J). Compared with Transwell HT22 cells (as described in Fig. 7), there was less pyroptotic proportion in HT22 cells which were layed in the lower layer and cultured for 6 h with the solution of upper BV2 cells pretreated with YVAD before OGD/R (indicated as YVAD + DMSO + Transwell in Figure). 0.1%Triton X-100 was used as a positive control. Magnification, ×100. Scale bar, 100 µM. K-M Representative immunoblot and its quantification of GSDMD protein in HT22 cells. The GSDMD-N expression in YVAD + DMSO + Transwell HT22 cells was decreased compared with that in Transwell HT22 cells. β-actin was used as a loading control. N-P ELISA results showed that the level of IL-1β, IL-18 and LDH in cell supernatant of YVAD + DMSO + Transwell HT22 cells was decreased compared with that in Transwell HT22 cells. Data were represented as mean ± SEM. n = 3, *P < 0.05, **P < 0.01. N.S no significant difference ▸ co-culturing with HT22 cells in the same chamber for additional 6 h. 
Compared with HT22 cells in the transwell coculture model, in which BV2 cells were not pretreated with siGSDMD-829 before OGD/R, the lower HT22 cells showed fewer proportion of pyroptotic cells (Fig. 8I, J), decreased GSDMD-N production ( Fig. 8K-M) and lowered level of IL-1β, IL-18 and LDH in the culture medium ( Fig. 8N-P). Similarly, levels of secreted IL-1β, IL-18 and LDH found in HT22 culture medium could be contributed from co-cultured BV2 cells and therefore may not necessarily reflect the changes of HT22. Discussion To our knowledge, it is first study to comprehensively explore the mechanisms and interactions of microglial and neuronal cell pyroptosis after I/R. The results here showed that in the simulated I/R environment in vitro (the OGD/R model), both BV2 and HT22 cells underwent pyroptosis, and the onset of pyroptosis of HT22 cells was 6 h earlier than that of BV2 cells. Our data also showed no significant changes in caspase-11 and GSDME expression in BV2 and HT22 cells after OGD/R. Inhibition of caspase-1 or GSDMD Fig. 7 Inhibition of GSDMD by necrosulfonamide (NSA) in HT22 cells before OGD/R alleviated pyroptosis of adjacent BV2 cells in the transwell co-culture model (as displayed in supplemental Fig. 4), and vice versa. A, B Representative immunofluorescence staining of BV2 cells by dye uptake method (A) and histogram of the percentage of pyroptotic BV2 cells (B). Compared with Transwell BV2 cells (as described in Fig. 7), there was less pyroptotic proportion in BV2 cells which were layed in the lower layer and cultured for 12 h with the solution of upper HT22 cells pretreated with NSA before OGD/R (indicated as NSA + DMSO + Transwell in Figure). 0.1%Triton-X-100 was used as a positive control. Magnification, ×100. Scale bar, 100 µM. C-E Representative immunoblot and its quantification of GSDMD protein in BV2 cells. The GSDMD-N expression in NSA + DMSO + Transwell BV2 cells was decreased compared with that in Transwell BV2 cells. β-actin was used as a loading control. F-H ELISA results showed that the level of IL-1β, IL-18 and LDH in cell supernatant of NSA + DMSO + Transwell BV2 cells was decreased compared with Transwell BV2 cells. I, J Representative immunofluorescence staining of HT22 cells by dye uptake method (I) and histogram of the percentage of pyroptotic HT22 cells (J). Compared with Transwell HT22 cells (as described in Fig. 7), there was less pyroptotic proportion in TH22 cells which were plated in the lower layer and cultured for 6 h with the solution of upper BV2 cells pretreated with NSA before OGD/R (indicated as NSA + DMSO + Transwell in Figure). 0.1%Triton X-100 was used as a positive control. Magnification, ×100. Scale bar, 100 µM. K-M Representative immunoblot and its quantification of GSDMD protein in HT22 cells. The GSDMD-N expression in NSA + DMSO + Transwell HT22 cells was decreased compared with that in Transwell HT22 cells. β-actin was used as a loading control. N-P ELISA results showed that the level of IL-1β, IL-18 and LDH in cell supernatant of NSA + DMSO + Transwell HT22 cells was decreased compared with that in Transwell HT22 cells. Data were represented as mean ± SEM. n = 3, *P < 0.05, **P < 0.01. N.S no significant difference ▸ alleviated BV2 and HT22 cell pyroptosis after OGD/R, and GSDMD activation was suppressed by caspase-1 inhibitor. HT22 or BV2 cells undergoing OGD/R aggravated pyroptosis of adjacent BV2 or HT22 cells, respectively, which was relieved by inhibition of caspase-1 or GSDMD. 
A number of studies have identified that the activation of the Gasdermin family proteins is the biological marker for pyroptosis [27][28][29][30] and the role of pyroptosis is important during ischemic injury of the brain [21][22][23]. Few studies have observed that GSDMD was activated in primary cultured cortical neurons or microglia or BV2 cells undergoing OGD or OGD/R [21][22][23], but the time point of GSDMD activation in these studies is different from that in our study, which may be due to the difference in cell type or culture condition. Our study observed the activation of GSDMD in BV2 and HT22 cells after OGD/R. The results showed that HT22 cells developed cell pyroptosis earlier than BV2 cells, which may help explain the cell type-specific susceptibility in vivo because neurons are more vulnerable to ischemiainduced injury than microglia [31][32][33]. In our study, we also found no changes in GSDME activation in both neuronal cells and microglia after OGD/R. The mechanisms of BV2 and HT22 cell pyroptosis after OGD/R were further explored in our study. The results Fig. 8 Knockdown GSDMD by siRNA in HT22 cells before OGD/R alleviated pyroptosis of adjacent BV2 cells in the transwell co-culture model (as displayed in supplemental Fig. 4), and vice versa. A, B Representative immunofluorescence staining of BV2 cells by dye uptake method (A) and histogram of the percentage of pyroptotic BV2 cells (B). Compared with Transwell BV2 cells (as described in Fig. 7), there was less pyroptotic proportion in BV2 cells which were plated in the lower layer and cultured for 12 h with the solution of upper HT22 cells pretreated with siRNA of GSDMD (siGSDMD) before OGD/R (indicated as siGSDMD + Transwell in Figure). 0.1% Triton X-100 was used as a positive control. Magnification, ×100. Scale bar, 100 µM. C-E Representative immunoblot and its quantification of GSDMD protein in BV2 cells. The GSDMD-N expression in siGSDMD + Transwell BV2 cells was decreased compared with that in Transwell BV2 cells. β-actin was used as a loading control. F-H ELISA results showed that the level of IL-1β, IL-18 and LDH in cell supernatant of siGSDMD + Transwell BV2 cells was decreased compared with that in Transwell BV2 cells. I, J Representative immunofluorescence staining of HT22 cells by dye uptake method (I) and histogram of the percentage of pyroptotic HT22 cells (J). Compared with Transwell HT22 cells (as described in Fig. 7), there was less pyroptotic proportion in TH22 cells which were layed in the lower layer and cultured for 6 h with the solution of upper BV2 cells pretreated with siGSDMD (indicated as siGSDMD + Transwell in Figure). 0.1% Triton X-100 was used as a positive control. Magnification, ×100. Scale bar, 100 µM. K-M Representative immunoblot and its quantification of GSDMD protein in HT22 cells. The GSDMD-N expression in siGSDMD + Transwell HT22 cells was decreased compared with that in Transwell HT22 cells. β-actin was used as a loading control. N-P ELISA results showed that the level of IL-1β, IL-18and LDH in cell supernatant of siGSDMD + Transwell HT22 cells was decreased compared with that in Transwell HT22 cells. Data were represented as mean ± SEM. n = 3, *P < 0.05, **P < 0.01. N.S no significant difference ▸ demonstrated that inhibition of caspase-1 could alleviate GSDMD activation, and inhibition of caspase-1 or GSDMD could reduce the level of IL-1β, IL-18 and LDH in both cell types after OGD/R, which is similar to the results reported by Poh et al. [21] and Tang et al. [23]. 
However, caspase-11 and GSDME activation in BV2 and HT22 cells following OGD/R were not found in our study. Fann et al. [34] reported that caspase-11 expression was increased in primary cultured cortical neurons after OGD/R, which is different from our results. As described above, the difference may be due to cell type or culture condition. Based on our findings, we speculate that activation of the canonical inflammasome pathway (caspase-1/GSDMD) is crucial for cell pyroptosis but caspase-3/GSDME pathway may not be involved in pyroptotic cell death following brain I/R. Whether the non-canonical inflammasome pathway (caspase-11/GSDMD) play a role in cell pyroptosis following brain I/R remains to be determined. In addition, in our study, for the first time we showed that necrosulfonamide (NSA) was able to inhibit GSDMDinduced pyroptosis following I/R. The results showed here strongly suggest that NSA could be an effective suppressor for the activation of GSDMD induced by OGD/R. Further investigation on the protective effect of NSA in I/R animal model is needed. After cell pyroptosis, the release of cellular contents, including damage associated molecular patterns (DAMPs), and their binding to pattern recognition receptors (PRRs) on the adjacent cells result in inflammatory propagation [35,36]. In addition, studies of inflammatory diseases have demonstrated that inflammasome components released from the pyroptotic cells could re-assemble into functional complexes or could be engulfed by neighboring phagocytic cells to further activate caspase-1, which helps amplify and prolong the inflammatory response [37,38]. However, it is not clear yet whether the cell pyroptosis induced by I/R propagate inflammation. In this study, by using the transwell co-culture system, we observed that HT22 or BV2 cells undergoing OGD/R can not only produce pyroptosis phenotype, but also induce pyroptotic death of their neighboring BV2 or HT22 cells with more severe cell death phenotype than that of those cells treated with OGD/R alone. After inhibition of caspase-1 or GSDMD of HT22 or BV2 cells undergoing OGD/R, pyroptotic death of the neighboring BV2 or HT22 cells was alleviated. These results imply that the sequential induction of microglial and neuronal pyroptosis after cerebral I/R may result in an amplifying cascade of inflammatory response and prolonged tissue damage. In this study the transwell experiments were chosen to verify the mechanism of cell injury caused by the inflammation propagation after cell pyroptosis. The purpose could be achieved by the current experimental methods, so we did not observe the phenomena that the adjacent cells were the same as inhibition cells under the condition of OGD/R. This article has some limitations. First, we only arrived at our conclusion by knockout method, we should add gain-of-function study to help confirm the mechanistic link between caspase-1 and GSDMD in OGD/R-induced pyroptosis in both neurons and microglia. Second, cell apoptosis or necroptosis induced by OGD/R has also been reported [39]. In our cell model, we only focused on the occurrence of pyroptosis and did not investigate whether or not apoptosis or necroptosis was also involved, which is warranted for further investigation in the future. In summary, our study demonstrated that microglia and neurons underwent pyroptosis during mimic I/R injury in vitro, which was attributed to the activation of caspase-1/ GSDMD pathway without the involvement of other known pyroptosis pathways. 
Moreover, the cell pyroptosis that occurred in neurons and microglia after I/R could trigger pyroptosis in adjacent cells not exposed to OGD/R. Our findings suggest that caspase-1 and GSDMD are important therapeutic targets for cerebral I/R-induced tissue damage and inflammation (supplemental Fig. 6).
Brazil's sugarcane embitters the EU-Mercosur trade talks The Brazilian government's decision to open the Amazon biome to sugarcane expansion reignited EU concerns regarding the sustainability of Brazil's sugar sector, hindering the ratification of the EU-Mercosur trade agreement. Meanwhile, in the EU, certain conventional biofuels face stricter controls, whilst uncertainty surrounding the commercialisation of more sustainable advanced biofuels renders bioethanol a short- to medium-term fix. This paper examines Brazil's land-use changes and associated greenhouse gas emissions arising from an EU-driven ethanol import policy and projections for 13 other biocommodities. Results suggest that Brazil's sugarcane could satisfy growing ethanol demand and comply with EU environmental criteria, since almost all sugarcane expansion is expected to occur on long-established pasturelands in the South and Midwest. However, expansion of sugarcane is also driven by competition for viable lands with other relevant commodities, mainly soy and beef. As a result, deforestation trends in the Amazon and Cerrado biomes linked to soy and beef production could jeopardize Brazil's contribution to the Paris agreement with an additional 1 ± 0.3 billion CO2eq tonnes above its First NDC target by 2030. Trade talks with a narrow focus on a single commodity could thus risk unsustainable outcomes, calling for systemic sustainability benchmarks, should the deal be ratified.

Zoning laws banning subsidies to sugar and ethanol production and a deforestation moratorium for soy in the Amazon have sent strong signals that Brazil was on the right track to become an important source of sustainable biofuels. However, since 2012 deforestation has been on the rise again, with the 2020 rate 140% higher than that of 2012 11 . In 2019 Brazil unexpectedly revoked the Agro-Ecological Zoning (AEZ) decree for sugarcane, which forbade its expansion into the Amazon and other sensitive biomes 12 , thereby reigniting EU concerns regarding the sustainability of Brazil's sugarcane production. In addition, the controversial dismantling of Brazil's environmental policies, together with the revelation that a large share of EU imports of beef and soy from the country were produced on illegally deforested lands 13 , has complicated the ratification process of the EU-Mercosur trade deal 14 . Given uncertainties surrounding the potential mass-scale commercialisation of more sustainable advanced-generation biofuels, there is renewed interest in first-generation bioethanol as a short- to medium-term fix to achieve EU decarbonisation targets. Nonetheless, public policy support for conventional liquid biofuels has also courted considerable controversy over GHG emission leakages from direct (LUC) and indirect land-use change (iLUC), as well as over feed and food security. To tackle these concerns, the EU launched a series of measures and initiatives. The Fuel Quality Directive laid out a roadmap for a set of credible criteria for the exclusive adoption of sustainable biofuel usage 15 . Subsequently, the EU revised the Renewable Energy Directive (REDII) and devised new environmental criteria for biofuel feedstock, limiting the use of high iLUC-risk biofuels, specifically those linked to conversion of native vegetation to croplands 8 . Most recently, under the auspices of the Green Deal, the EU plans to bind its sustainability criteria to its trade relations with external partners.
As part of this process the REDII will set the EU-wide renewable energy target to a minimum of 32%, while imposing restrictions on the use of palm-oil-based biofuels. In particular, after identifying that between 2008-2015, 45% of the expansion of palm oil took place in areas of high carbon stocks 16 , the EU is phasing-out biofuels linked to deforestation by 2030. Hence, palm oil biofuels (with some exceptions) will be disqualified as eligible for EU subsidy support and so treated as a regular fossil fuel 17 . As a key player in global bioethanol markets, Brazil could feature as a major replacement supplier. The compliance of Brazilian bioethanol with REDII criteria is therefore the prerequisite for allowing Brazilian producers to take full advantage of the EU-Mercosur trade rate quotas and a significant step forward in the ratification process of the trade agreement. To shed light on this issue, here we examine the future sustainability of EU imports of Brazilian bioethanol by quantitatively assessing the impacts of increased EU demand for bioethanol in terms of its implications for sugarcane expansion and associated land-use change in Brazil. A key issue is whether the direct and indirect land-use changes arising from such a demand increase would comply with EU environmental and sustainability criteria. Our study assesses a scenario in which to meet its first-generation biofuel mandate, the EU substitutes all biodiesel with bioethanol by 2030. To do so, we employ a state-of-the-art global trade simulation market model with a biobased focus, called MAGNET, to estimate the EU import demand for Brazilian bioethanol. The import demand trends from MAGNET are then inputted into a spatially-explicit land-use model of Brazil (Otimizagro) to forecast the resulting land-use changes. The use of a national model at a high spatial resolution (6.25 ha) is key to properly represent the diversity and complexity of Brazil's territory, including climates, socioeconomic conditions and regional governance systems. With a coverage of fourteen main crops, Otimizagro 18 simulates detailed land-use spatial patterns resulting from the expansion of sugarcane and other crops, forest plantation, secondary vegetation regrowth and deforestation trends together with resultant GHG emissions, thus reducing uncertainties surrounding potential LUC and iLUC 19 in Brazil. Results EU demand for Brazilian ethanol by 2030. The MAGNET model is used to simulate by 2030 the assumed phasing out of EU biodiesel production (POB-Phase Out of Biodiesel scenario) with compensating rises in its bioethanol capacity in order to hit first-generation biofuel mandate targets (Supplementary Table S1). The POB scenario is built directly upon the bioeconomy-baseline in MAGNET, as described in the Supplementary Information (Supplementary Information S1). The main model drivers behind this medium-term scenario are worldwide country projections of economic growth and population, biophysical (land productivities) and energy related drivers (fossil fuel prices, energy consumption and production trends) and the progressive expected implementation of EU first-and advanced-generation biofuel mandates 20,21 . In accordance with this scenario, EU imports of ethanol rise rapidly after 2020, leading to a larger EU reliance on imports from Brazil 22 . By 2030, the EU share of Brazilian bioethanol exports is expected to be 30% (1.13 billion litres), well above the 0.18 billion litres projected from a baseline scenario (Supplementary Figure S1). 
Total POB ethanol production reaches 52.24 billion litres in 2030. From a trade policy perspective, EU bioethanol imports rise above the TRQ limit set by the EU-Mercosur deal (650 thousand tonnes) in 2027. Brazilian sugar production reaches 52.1 million tonnes in 2030 23 . The total area of sugarcane needed to meet the demand for ethanol and sugar is 14.8 million hectares (Supplementary Figure S2), an increase of 45% (4.6 million hectares). Our figures are derived from projections of sugarcane productivity from the Brazilian Ministry of Agriculture 23 and ethanol/sugar conversion factors from the National Supply Company 24-31 (Section S2).

Countrywide land-use changes and sugarcane expansion. Land-use changes due to sugarcane production are also driven by competition with other commodities for viable agricultural lands. Therefore, the allocation of sugarcane areas takes place simultaneously with the expansion (or reduction) of the other croplands and forest plantations, along with the forest restoration needed to attain compliance with the Forest Code, the principal law regulating forest conservation on private properties 33 . As a result, Otimizagro fully represents direct and indirect land-use changes due to sugarcane expansion, including the displacement of marginal farming and ranching systems in favour of more lucrative crops (Fig. 1). The projections to 2030 for the main crops (Supplementary Table S2) follow the official estimates of the Brazilian Ministry of Agriculture 23 . Soybean production rises rapidly from 114 million tonnes in 2019 to 163 million tonnes by 2030, with exports representing more than 60% of total production. Double-cropping systems that combine first-crop soybeans and second-crop corn account for the total corn expansion, with a gradual reduction of first-crop corn. Soybean, sugarcane and second-crop corn areas, which represented more than 60% of the country's total cropland in 2019, are responsible for the largest increments by 2030, i.e., 34% (12 million hectares), 44% (4.5 million hectares) and 62% (8 million hectares), respectively. Deforestation is projected under a scenario (Supplementary Figure S3) that considers growing political support for predatory agriculture practices, land-grabbing and a progressive dismantling of the country's environmental legislation, including the Forest Code. This scenario follows closely the rising deforestation trend observed since 2012 (Supplementary Figure S4). Nevertheless, we also compare GHG emissions from the former scenario with those from a worst-case governance scenario that models the full reversal of Brazil's past environmental achievements 10 . Regarding forest restoration, we included the targets of the National Plan for Native Vegetation Recovery, which aims at 12.5 million ha of forest restoration by 2035 34 . As a result, there is a gradual increase of secondary forests from 2.6 million hectares in 2019 to 7 million hectares in 2030. Land-use conversions to new sugarcane areas from 2019 to 2030 mainly occur in the Southeast and Midwest of the country (Fig. 2). The largest sugarcane expansion in absolute terms is expected to occur in the State of Sao Paulo (2 million ha), followed by the states of Mato Grosso do Sul (0.9 million ha) and Minas Gerais (0.7 million ha). Mato Grosso do Sul (115%), Minas Gerais (80%) and Goiás (63.5%) also show the highest rates of increase (Supplementary Table S3). Most of the sugarcane croplands existing in 2019 remain productive in 2030, representing 69% of the total sugarcane area (Supplementary Table S4).
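As a rough back-of-the-envelope check on how the ethanol and sugar volumes reported above translate into an area of roughly 14.8 million hectares, the arithmetic can be sketched as follows. The cane yield and conversion factors used here are illustrative assumptions rather than the study's official coefficients (which come from the Ministry of Agriculture and National Supply Company projections), so the sketch only approximates the reported figure.

```python
# Back-of-the-envelope conversion of 2030 ethanol and sugar demand into
# sugarcane area. Yield and conversion factors are assumed values.
ETHANOL_DEMAND_L   = 52.24e9   # litres of ethanol in 2030 (from the text)
SUGAR_DEMAND_T     = 52.1e6    # tonnes of sugar in 2030 (from the text)

ETHANOL_PER_T_CANE = 80.0      # litres of ethanol per tonne of cane (assumed)
SUGAR_PER_T_CANE   = 0.135     # tonnes of sugar per tonne of cane (assumed)
CANE_YIELD_T_HA    = 72.0      # tonnes of cane per hectare (assumed)

cane_for_ethanol_t = ETHANOL_DEMAND_L / ETHANOL_PER_T_CANE
cane_for_sugar_t   = SUGAR_DEMAND_T / SUGAR_PER_T_CANE
required_area_ha   = (cane_for_ethanol_t + cane_for_sugar_t) / CANE_YIELD_T_HA

print(f"Required sugarcane area: {required_area_ha / 1e6:.1f} million ha")
# With these assumed factors the result is on the order of 14-15 million ha,
# broadly consistent with the 14.8 million ha reported in the study.
```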
The conversion of native vegetation and other croplands (including food crops) to sugarcane is limited to less than 1% of the total area, resulting in a small loss of forested lands and displacement of other crops. Sugarcane expansion onto pasture accounts for more than 30% of the cumulative expansion of the country's agriculture from 2019 to 2030 (Supplementary Table S5). The decision of the Brazilian government to revoke the sugarcane zoning decree does not appear to influence sugarcane expansion into the Amazon. Indeed, the results show that only 2% of the total sugarcane area in 2030 (307 thousand ha) is within the AEZ restricted zone, most of which (74%) was already sugarcane in 2019. Similarly, new sugarcane croplands from forest clearance are marginal (Fig. 3).

GHG emissions from land use, land-use change and forestry (LULUCF). Roughly 75% of current agricultural land remains so in 2030. The need for new cropland (14 million hectares) is met mainly by expansion onto current pastureland (91% of expansion), whilst only 5% and 4% come from conversion of forest and savannah, respectively (Supplementary Table S5). Clearance of forests and savannahs (39 million ha) is largely linked to land speculation via predatory land-grabbing with subsequent cattle-ranching occupation 35 . New land conversion to soybean mainly takes place in the Midwest and northern states (Fig. 1a), with about 7% of total expansion into high-carbon forested lands with resultant high GHG emissions, hence a potential threat to the Amazon forest and Cerrado native vegetation 13 . Yet, the extent to which deforestation is due to pasture displacement as a result of large-scale expansion of soybean remains uncertain given the complexity of iLUC domino effects 19 . The difference between the LULUCF emissions stipulated in the country's First NDC 36 (−131 million CO2eq tonnes) 37 and our results is an additional 1 ± 0.3 billion tonnes. But this gap could be even larger, reaching 1.7 ± 0.4 billion tonnes, if environmental governance in Brazil further wanes (Supplementary Table S6). With most sugarcane cropland expansion expected to occur on long-established pasturelands, the GHG emissions from land-use change are limited (48% of total emissions), due to the low carbon content of pasture 38 . However, cultivating degraded pasture requires the use of fertilizer (90 kg N/ha, on average) and limestone (2 tonnes/ha) to achieve the expected sugarcane productivity per hectare 34 , representing an additional source of GHG emissions (46% of total emissions). In addition, some regions of Brazil, notably the northern states, rely on burning sugarcane straw to facilitate manual harvesting (5% of total emissions). Even though the AEZ was envisaged to moderate this practice, only the state of Sao Paulo enacted a law, in 2002, that aims to completely phase out the burning of sugarcane straw by 2021. Figure 4 and Supplementary Table S7 show the total sugarcane area and associated GHG emissions from 2019 to 2030. On average, the GHG emission is ca. 1.7 ± 0.17 tonnes of CO2eq ha−1 year−1 (24.7 ± 2.3 Mt CO2eq year−1 in total).

Discussion Our study shows that sugarcane croplands could meet domestic and international bioethanol demand without further deforestation. Indeed, most sugarcane expansion would occur at the expense of pasturelands in the Southeast and Midwest regions, given the concentration of sugar and ethanol mills, especially in Sao Paulo State, together with the well-developed system for the transportation of ethanol, thereby reducing transportation and production costs.
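To make the emission bookkeeping above concrete, a minimal sketch of the per-hectare accounting is given below. The average intensity of about 1.7 t CO2eq per hectare per year and the source shares (roughly 48% land-use change, 46% fertilizer and limestone, 5% straw burning) are taken from the figures reported in this section, while the code structure itself is only illustrative and is not the model's actual routine.

```python
# Minimal sketch of the sugarcane GHG bookkeeping reported above.
AVG_EMISSION_T_CO2EQ_PER_HA_YR = 1.7        # reported average intensity
SUGARCANE_AREA_HA              = 14.8e6     # projected 2030 sugarcane area

SOURCE_SHARES = {                            # reported shares of total emissions
    "land_use_change": 0.48,
    "fertilizer_lime": 0.46,
    "straw_burning":   0.05,
}

total_emissions_mt = AVG_EMISSION_T_CO2EQ_PER_HA_YR * SUGARCANE_AREA_HA / 1e6
print(f"Total sugarcane emissions: ~{total_emissions_mt:.1f} Mt CO2eq per year")

for source, share in SOURCE_SHARES.items():
    print(f"  {source}: ~{total_emissions_mt * share:.1f} Mt CO2eq per year")

# With the reported intensity this gives roughly 25 Mt CO2eq/yr, in line with
# the 24.7 +/- 2.3 Mt CO2eq/yr quoted in the text; the small difference comes
# from using the end-of-period area rather than the period average.
```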
In addition, in this region ranching is, in general, economically less competitive than sugarcane. Converting pasture to sugarcane and achieving commercially viable yields require a substantial application of lime and fertilizers, which represents about 50% of GHG emissions from sugarcane cultivation. However, these emissions are far lower than those from deforestation linked to crop expansion. Consequently, this strategy could represent an opportunity for the Brazilian sugarcane industry to meet the rising demand for ethanol and sugar while achieving the country's sectoral mitigation objectives (i.e., the NDC and Low Carbon Agriculture 39 targets) along with compliance with national and international environmental standards (i.e., the REDII and RenovaBio environmental criteria). To this end, taking a long-term perspective, the amendment of soils with more sustainable inputs, such as biochar, could further improve soil properties and the agricultural productivity of degraded pasturelands, while contributing to lower emissions 40,41. The potential conversion of the Amazon and Cerrado native vegetation to sugarcane could be marginal, resulting in limited LULUCF emissions. Indirect land-use changes are also far from certain, since cattle-ranching intensification has been the most cost-effective solution to yield land for sugarcane expansion in the southeast of Brazil, especially in regions with easy access to grain production [42][43][44]. Even though revoking the AEZ for sugarcane meant a further step toward the weakening of environmental governance in Brazil, its consequence could manifest itself more in tarnishing the image of Brazilian ethanol than in a real expansion of sugarcane crops into the Amazon and Cerrado native vegetation. The 2018/2019 sugarcane production in the Amazonian states was less than 1% of country production 45, and in the absence of restrictions, the sugarcane area is likely to double within the biome by 2030. Nevertheless, more than 97% of production is poised to occur in the mid- and southern Cerrado and Atlantic Forest biomes, with potentially little direct conversion from forests and savannah. This trend is expected to continue in the near future, since most of the projects for new ethanol plants are located near road infrastructure in the southern regions 32. Moreover, the RenovaBio programme already incorporates sustainability criteria to avoid the use of biofuels grown on lands deforested after December 2017. Together with the Forest Code, these measures, if properly enforced, represent an effective legal tool to ensure that the ethanol supply chain remains deforestation-free. The displacement of other crops could be negligible, thereby avoiding potential concerns about regional food security and market stability. On the other hand, the country's agricultural expansion as a whole raises sustainability concerns. Although Brazil's need for new cropland (14 million ha) could be met almost entirely (91%) on existing pastureland, deforestation continues, with significant losses of native forest and savannah vegetation (39 million ha) driven by land speculation via predatory land-grabbing, with subsequent cattle-ranching occupation 46. The extent to which this is due to pasture displacement as a result of large-scale expansion of soybean onto already cleared areas remains uncertain given the complexity of iLUC domino effects 19.
However, there exists evidence that a share of agricultural commodities employing illegally deforested land is exported from Brazil to the EU market 13. All of this not only tarnishes the reputation of Brazil's agribusiness, it also places an additional burden on other countries to mitigate climate change if Brazil ultimately fails to fulfil its NDC contribution to the Paris Agreement 10. Conclusion Access to the EU single market for third-country commodities is subject to compliance with the EU sustainability criteria. However, to date, only the imports of a few commodities have been clearly regulated, for which compliance can be assessed. Among them, biofuels must comply with the environmental standards set by the updated EU Renewable Energy Directive (REDII), which limits biofuel feedstock expansion onto lands with high carbon stocks. The decision of the Brazilian government to open the Amazon and Pantanal biomes to sugarcane plantations thus reignited EU concerns about the sustainability of Brazilian ethanol, which has long been a sticking point in the 20-year trade negotiations, complicating the ratification of the EU-Mercosur deal. Our study shows that the Brazilian sugarcane sector could meet the soaring domestic and international demand for ethanol without further deforestation. Nevertheless, this will require proper agricultural practices along with a sustainable intensification of ranching to free up land for agricultural expansion and, as a result, avoid iLUC in the form of pasture displacement into distant forest areas. Although the increase of Brazilian ethanol production would still comply with the REDII environmental criteria, the recent high deforestation rates in the Amazon and Cerrado biomes could further undermine the ratification of the EU-Mercosur trade deal. The difference between the country's First NDC stipulated LULUCF emissions by 2030 and our results is an additional 1 ± 0.3 billion CO2 eq tonnes, placing Brazil's contribution to the Paris Agreement at risk. Deforestation linked to the production of other commodities exported to the EU, such as soybeans or meat, is not regulated by clear EU sustainability criteria, leading to potential disputes between the parties. Trade policy based on narrow attention to single commodities therefore risks unsustainable outcomes and could aim at the wrong target. The EU should negotiate responsive international agreements based on enforceable environmental criteria for all traded key commodities within a systemic, science-based understanding to halt EU-driven deforestation 47 and meet the Green Deal objectives of promoting sustainability across the whole supply chain, whose effectiveness has recently been questioned 2. This must be bolstered, in parallel, by diplomatic efforts to support socioeconomic growth built upon Brazil's past history of strong environmental achievements 48. Methods Modelling framework. Global Computable General Equilibrium (CGE) models have emerged as a tool for international impact assessment. Due to their considerable geographical coverage and trade-connected macroeconomic systems, they are suitable for assessing the synergies and trade-offs, both domestic and international, arising from public policy. In this context, various CGE studies have examined the impacts of direct and indirect land-use change due to biofuel policies 49,50.
On the other hand, the lack of a fine spatial resolution in multiregion CGE modelling justifies a soft-coupling with a spatially explicit land-use model. The high spatial resolution of land-use models allows the inclusion of detailed geographic features, such as terrain, soils, land tenure, land-use zoning and other features, in order to provide a realistic picture of land-use trends across the country as well as associated GHG emissions. Thus, we established a comprehensive methodological procedure by loosely coupling the global economic market simulation model, MAGNET, and the spatially explicit land-use model Otimizagro. This allows moving from a regional outlook on socioeconomic trends to a subnational analysis of land-use patterns, with the proper spatial resolution (6.25 ha) to assess compliance with EU environmental criteria for biofuel production. Otimizagro model. Otimizagro is a nationwide, spatially explicit model that simulates land use, land-use change, forestry, deforestation, regrowth, and associated GHG emissions under various scenarios of agricultural land demand and deforestation policies for Brazil 10,13,18. Otimizagro simulates nine annual crops (i.e., soy, sugarcane, corn, cotton, wheat, beans, rice, manioc, and tobacco), including single and double cropping; five perennial crops (i.e., arabica coffee, robusta coffee, oranges, bananas, and cocoa); and plantation forests. The model framework, developed using the Dinamica EGO platform 54, is structured in four spatial levels: (i) Brazil's biomes, (ii) IBGE micro-regions, (iii) Brazilian municipalities, and (iv) a raster grid with 6.25 ha spatial resolution. Concurrent allocation of crops at raster cell resolution is a function of crop aptitude and profitability, calculated using regional selling prices, production and transportation costs. When the available land in a given micro-region (or other specified spatial unit) is insufficient to meet the specified land allocation, Otimizagro reallocates the distribution of remaining land demands to neighbouring regions, creating a spillover effect.
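The concurrent allocation and spillover mechanism just described can be illustrated with a toy example: demand is placed in the best-scoring free cells of a region, and any unmet remainder spills over to neighbouring regions. This is a simplified sketch of the behaviour, not Otimizagro code; the real model computes cell scores from crop aptitude and profitability on the Dinamica EGO platform, with far richer spatial determinants.

```python
# Minimal illustration of demand allocation with regional spillover, in the
# spirit of the behaviour described in the text. Scores, regions and demands
# are invented for demonstration purposes.

REGIONS = {
    "A": {"cells": [0.9, 0.8, 0.4], "neighbours": ["B"]},
    "B": {"cells": [0.7, 0.6], "neighbours": ["A"]},
}
CELL_AREA_HA = 6.25  # raster resolution used by the model

def allocate(region, demand_ha, allocated):
    """Fill the best-scoring free cells of a region; return unmet demand."""
    free = sorted(
        ((score, i) for i, score in enumerate(REGIONS[region]["cells"])
         if (region, i) not in allocated),
        reverse=True,
    )
    for score, i in free:
        if demand_ha <= 0:
            break
        allocated[(region, i)] = score
        demand_ha -= CELL_AREA_HA
    return max(demand_ha, 0.0)

def allocate_with_spillover(region, demand_ha):
    allocated = {}
    unmet = allocate(region, demand_ha, allocated)
    for nb in REGIONS[region]["neighbours"]:     # spillover effect
        if unmet <= 0:
            break
        unmet = allocate(nb, unmet, allocated)
    return allocated, unmet

cells, unmet = allocate_with_spillover("A", demand_ha=25.0)
print(cells, "unmet:", unmet)   # region A fills up, remainder spills to B
```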
Future demand for crops, and deforestation and regrowth rates, are exogenous to the model 10,23,34 (Supplementary Information S2). The probability of deforestation is a function of spatial determinants, such as distances to roads and previously deforested areas. To account for GHG emissions from land use, land-use change and forestry (LULUCF), Otimizagro calculates emissions and removals from biomass and soil according to the Third National Communication (TNC) of Brazil to the United Nations Framework Convention on Climate Change 55,56. The TNC database includes a biomass map (Supplementary Figure S5), a reference soil carbon stock map (Supplementary Figure S6) and carbon emission/removal rates (Supplementary Tables S8, S9 and S10). Biomass parameters include live (aboveground and belowground) and dead carbon pools. In comparison with other biomass maps available for Brazil, the aboveground pool has intermediate average values 57. For biomass calculation, in the initial year, native vegetation categories assume the values of the biomass map. Regrowth is assumed to stabilize at 44% of the original vegetation biomass. Biomass values are assigned to anthropic land-use categories according to Supplementary Table S9. For soil carbon, the model assumes that the stocks begin in equilibrium; thenceforth, the reference soil carbon stock is multiplied by soil carbon stock change factors. Annual carbon emissions are calculated cell by cell, attributing carbon stock changes according to a set of conditions. Soil carbon stock change follows equation Eq. S1. The stabilization threshold is the IPCC default.
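Since Eq. S1 itself sits in the supplement, the sketch below shows only the generic IPCC-style stock-change pattern that such equations typically follow: the reference soil carbon stock is scaled by land-use, management and input factors, with a linear transition toward the new equilibrium over the common 20-year default. The factor values here are assumed placeholders; this is not the model's exact formulation.

```python
# Generic IPCC-style soil organic carbon (SOC) stock change for one cell.
# The stock-change factors are placeholders; the model multiplies a reference
# SOC map by factors analogous to these (see Eq. S1 in the supplement).

SOC_REF = 47.0                    # t C/ha, reference stock from the soil map
F_LU, F_MG, F_I = 0.8, 1.0, 1.0   # assumed land-use/management/input factors
TRANSITION_YEARS = 20             # common IPCC default transition period

soc_equilibrium = SOC_REF * F_LU * F_MG * F_I

def annual_soc_change(soc_initial: float) -> float:
    """Annual C stock change (t C/ha/yr), linear over the transition period."""
    return (soc_equilibrium - soc_initial) / TRANSITION_YEARS

delta = annual_soc_change(SOC_REF)          # negative = emission to atmosphere
print(f"Annual SOC change: {delta:+.2f} t C/ha, "
      f"i.e. {-delta * 44 / 12:+.2f} t CO2/ha emitted")
```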
Clinicopathological and Prognostic Value of Programmed Cell Death 1 Expression in Hepatitis B Virus-related Hepatocellular Carcinoma: A Meta-analysis Background and Aims The efficacy of targeted programmed cell death 1/programmed death ligand 1 (PD-1/PD-L1) monoclonal antibodies (mAbs) has been confirmed in many solid malignant tumors. The overexpression of PD-1/PD-L1 serves as a biomarker to predict prognosis and clinical progression. However, the role of PD-1 in patients with hepatitis B virus-related hepatocellular carcinoma (HBV-HCC) remains indeterminate. Given that HBV is the most important cause of HCC, this study aimed to investigate the prognostic and clinicopathological value of PD-1 in HBV-HCC via a meta-analysis. Methods We searched PubMed, Embase, Scopus, the Cochrane Library, Web of Science and Google Scholar up to January 2021 for studies on the correlation between clinicopathology/prognosis and PD-1 in patients with HBV-HCC. The pooled hazard ratios (HRs) and 95% confidence intervals (CIs) were calculated to investigate the prognostic significance of PD-1 expression. The odds ratios (ORs) and 95% CIs were determined to explore the association between PD-1 expression and clinicopathological features. Results Our analysis included seven studies with 658 patients, which showed that high PD-1 expression was statistically correlated with poorer overall survival (HR=2.188, 95% CI: [1.262–3.115], p<0.001) and disease-free survival (HR=2.743, 95% CI: [1.980–3.506], p<0.001). PD-1 overexpression was correlated with multiple tumors (OR=2.268, 95% CI: [1.209–4.257], p=0.011), high level of alpha fetoprotein (AFP; OR=1.495, 95% CI: [1.005–2.223], p=0.047) and advanced Barcelona Clinic Liver Cancer (BCLC) stage (OR=3.738, 95% CI: [2.101–6.651], p<0.001). Conclusions Our meta-analysis revealed that a high level of PD-1 expression was associated with multiple tumors, high level of AFP and advanced BCLC stage. It significantly predicted a poor prognosis of HBV-HCC, which suggests that anti-PD-1 therapy for HBV-HCC patients is plausible. Introduction Hepatocellular carcinoma (HCC) is one of the most common cancers and the sixth leading cause of cancer-related deaths worldwide. 1 Although the development of healthcare and living standards has altered the etiology of HCC, most existing HCC cases are still associated with chronic hepatitis B virus (HBV) infection, especially in Asia and Africa. 2,3 More than half of patients have missed the opportunity for surgical resection at first diagnosis, which makes locoregional therapies and adjuvant therapies necessary. 4,5 Immunotherapy is a promising treatment for malignant tumors, and programmed cell death 1/programmed death ligand 1 (PD-1/PD-L1) is the most commonly used target. 6,7 Belonging to the B7-CD28 family of the immunoglobulin superfamily, PD-1 is a type I transmembrane glycoprotein with a molecular weight of 50∼55 kDa. It is an important immunosuppressive receptor, mainly expressed on activated T cells, B cells, natural killer cells, monocytes and mesenchymal stem cells. Under physiological conditions, T cells recognize antigens via their T cell receptors, and PD-1 modulates the function of peripheral T cells as part of the immune response to allogenic materials or autoantigens, preventing immune-related diseases. 8 However, the tumor environment induces the up-regulation of PD-1 molecules in infiltrating T cells.
Tumor cells demonstrate a high expression of the PD-1 ligands PD-L1 and PD-L2, which may lead to the continuous activation of the PD-1 pathway in the tumor microenvironment (TME) and the inhibition of T cell function, so that tumor cells can escape immune surveillance. 9,10 PD-L1 monoclonal antibodies (mAbs) can block this pathway and partially restore T cell function, allowing T cells to play defensive roles in eliminating tumor cells. 11 The efficacy of targeted PD-L1 mAbs has been confirmed in breast cancer, melanoma, lung cancer and gastric cancer, wherein they stimulate inherent immune function, prolong survival time and stabilize disease progression. 12,13 Additionally, the overexpression of PD-1/PD-L1 has been found in the tumors mentioned above and has served as a biomarker to predict tumor prognosis and clinical progression. [14][15][16] In recent years, clinical trials of immunotherapy in HCC have been in full swing, but the effects are still controversial. 17 This difference might lie in the complex microenvironment of HCC and the presence of multiple antigens. [18][19][20] Considering the high infection rate of HBV among HCC patients and the interaction between HBV and the immune system, it is necessary to analyze the relationship between the PD-1/PD-L1 level and the progression of HBV-HCC. Moreover, some studies have already suggested that the level of PD-1/PD-L1 could predict the prognosis and clinicopathological characteristics of HBV-HCC patients, while others have argued that there is no correlation. [21][22][23][24][25][26][27] Thus, we conducted this analysis to explore the predictive value of PD-1 in HBV-HCC, to better understand the role of the PD-1/PD-L1 pathway with regard to immunotherapy in HBV-HCC patients, and to design a more reasonable treatment plan for these patients. Literature search strategy We searched the literature databases to obtain as many related research articles as possible, including PubMed, Embase, Scopus, the Cochrane Library, Web of Science and Google Scholar. "Hepatitis B virus infection and hepatocellular carcinoma" and "PD-1 or programmed cell death 1 receptor" and "survival or prognosis or clinicopathology" were used as keywords for the search. All articles identified were reported in English and were published before January 2021. Eligibility criteria (1) All articles reviewed were reported in English with full texts. (2) Studies used humans diagnosed with HBV-HCC as subjects. (3) Articles reported the PD-1 level, either in clinical HCC tissues or in serum. (4) The correlation between PD-1 expression and prognosis was examined, including overall survival/disease-free survival (OS/DFS), as well as clinicopathological features. (5) Studies supplied hazard ratios (HRs), odds ratios (ORs) and their 95% confidence intervals (CIs), or sufficient data to calculate them. For duplicate studies, we analyzed the latest or the most comprehensive data. Data extraction Two authors performed the data extraction separately; disagreements over the information were resolved by a third reviewer. The following data were extracted: name of first author, country, year of publication, method to detect PD-1 expression, cut-off value, survival, and clinicopathological parameters (including age, sex, number of tumors, tumor size, liver cirrhosis, and alpha fetoprotein (AFP) level). Quality assessment Using the 9-score system of the Newcastle-Ottawa quality assessment scale (NOS), 28 two authors evaluated the quality of articles separately.
When it came to discrepancies in the score, a third reviewer was involved to settle the differences through discussion and analysis. We judged articles according to three aspects: selection, comparability, and outcome assessment. Each article was given a score between 0 and 9 based on these parameters. Statistical analysis High or low PD-1 expression was defined according to the cut-off values provided by the original articles. Because of the differences in detection methods, the random effects model was used regardless of whether the heterogeneity between studies was statistically significant. The correlation between PD-1 expression and survival was evaluated by combining HRs and their 95% CIs. An HR greater than 1 indicated that higher PD-1 expression correlated with poorer survival, whereas an HR less than 1 indicated that higher PD-1 expression was a protective parameter. ORs and 95% CIs were used to assess the correlation between PD-1 expression and clinicopathological features; in the same vein, an OR greater than 1 meant that high PD-1 expression was more frequent among patients with adverse clinicopathological characteristics than among those without, i.e., that higher PD-1 expression was correlated with higher malignancy. Finally, HRs and ORs were pooled in a meta-analysis. Sensitivity analysis was performed to evaluate the stability of the results: at each turn, one study was deleted to observe its influence on the overall results. Potential publication bias was evaluated using Begg's test, also known as the rank correlation test, which uses Kendall's tau to test the correlation between the standardized effects and the effect variances, namely between the testing effect and the sample size. Under the null hypothesis of no publication bias, the standardized effects can be considered independently distributed, with no correlation among them. All data in the meta-analysis were synthesized using Stata 15.0 software (Stata MP). A p-value less than 0.05 was considered statistically significant. All p-values and 95% CIs were two-sided.
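The random-effects pooling and Begg's rank-correlation test described above can be sketched compactly. The example below implements the standard DerSimonian-Laird estimator on log-transformed HRs and Kendall's tau on the standardized effects; the input HRs and CIs are invented for illustration and are not the extracted study data.

```python
# DerSimonian-Laird random-effects pooling of log hazard ratios, plus a
# Begg-style rank correlation check. Input HRs/CIs are illustrative only.
import numpy as np
from scipy import stats

hr = np.array([2.1, 1.8, 2.9])            # hypothetical study HRs
ci_lo = np.array([1.2, 1.0, 1.5])
ci_hi = np.array([3.7, 3.2, 5.6])

y = np.log(hr)                             # effect sizes on the log scale
se = (np.log(ci_hi) - np.log(ci_lo)) / (2 * 1.96)
w = 1 / se**2                              # fixed-effect weights

# Between-study variance (DerSimonian-Laird)
y_fixed = np.sum(w * y) / np.sum(w)
Q = np.sum(w * (y - y_fixed) ** 2)
tau2 = max(0.0, (Q - (len(y) - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

w_re = 1 / (se**2 + tau2)                  # random-effects weights
y_re = np.sum(w_re * y) / np.sum(w_re)
se_re = np.sqrt(1 / np.sum(w_re))

print(f"Pooled HR = {np.exp(y_re):.2f} "
      f"(95% CI {np.exp(y_re - 1.96*se_re):.2f}-{np.exp(y_re + 1.96*se_re):.2f})")

# Begg's test: rank correlation between standardized effects and variances
z = (y - y_re) / se
tau, p = stats.kendalltau(z, se**2)
print(f"Begg's test: tau = {tau:.2f}, p = {p:.3f}")
```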
Search selection In the present study, we identified 1,691 articles with the initial search strategy. Of these, we excluded 560 duplicates and deleted another 1,109 records after screening titles or abstracts. After thoroughly reviewing the full texts of 22 potentially eligible articles, 7 studies meeting the inclusion criteria were included in the final analysis. Figure 1 shows the detailed selection process. Table 1 lists the principal features of the included studies. All studies were published in the last 20 years. The number of patients was 658 in total, ranging from 40 to 171 per study. Two were cross-sectional studies and the rest were prospective studies. All seven included studies were conducted in China. Characteristics of the included studies Among the seven studies, five reported the correlation between survival and PD-1 expression, and some of them also explored the correlation between clinicopathological features and PD-1 expression, while two only examined the latter connection. Some did not directly report HRs and 95% CIs; hence, we calculated these statistics from the published Kaplan-Meier curves. Heterogeneity existed in the methods for detecting PD-1, the cut-off points for high PD-1, and the criteria for other clinical characteristics. We evaluated the study quality using the NOS. The scores ranged from 6 to 8 (Table 2), suggesting that the methodology of the studies was relatively reliable. Survival analysis OS was measured in three of the seven included studies. A total of 314 patients from these three studies were evaluated to examine the correlation between PD-1 expression and OS. No significant heterogeneity existed among the included studies (χ2=2.24; p=0.327; I2=10.6%). Pooled results from random-effects modeling revealed that a high level of circulating PD-1 was related to poor prognosis in terms of shorter OS (HR=2.19, 95% CI: 1.26-3.12, p<0.001; Fig. 2A). Publication bias and sensitivity analysis We used Begg's funnel plot to test potential publication bias. Sensitivity analysis was executed by sequentially omitting each trial one at a time. Supplementary Figure 1 shows the potential publication bias and sensitivity analysis results among the studies involved in the survival analysis. No apparent publication bias existed (Egger's test: p=0.153 for OS and p=0.202 for DFS; Begg's test: p=0.296 for OS and p=0.452 for DFS). The sensitivity analysis showed that no single trial remarkably altered the pooled results for OS and DFS, indicating that our estimates were robust and reliable. Besides, there was no significant publication bias in the analysis of clinical features either (Table 3). Sensitivity analysis demonstrated that deleting any single study did not remarkably affect the pooled ORs for the clinical parameters with significant differences (Supplementary Fig. 2). Discussion HCC, a common and highly aggressive tumor, has long burdened the global public health system owing to its dismal prognosis. Elevated serum HBV DNA level serves as a reliable risk predictor independent of hepatitis B e antigen, serum alanine aminotransferase level, and liver cirrhosis. 29 HBV-HCC has been the focus of research on HCC. Given the high prevalence of HBV in China, it is particularly valuable to provide insight into the relationship between HBV-HCC and cutting-edge immunotherapy. Since the first PD-1/PD-L1 inhibitors, nivolumab and pembrolizumab, were approved by the Food and Drug Administration (FDA) in 2014, this class has been rapidly developed and approved for several solid tumors, such as melanoma and Hodgkin lymphoma. 7 PD-1 is a negative regulator of T-cell activation, suppressing T-cell activity at different stages of the immune response when interacting with its two ligands, PD-L1 and PD-L2. When engaged by its ligands, PD-1 inhibits kinase signaling pathways through phosphatase activity, rather than promoting T-cell activation as in the physiological condition. 30,31 PD-1/PD-L1 overexpression has been noticed in various solid tumors, and several studies have concluded that the overexpression of PD-1/PD-L1 plays an important role in regulating the T-cell-mediated antitumor response, leading to poor prognosis. [32][33][34] Although the earliest and most widely recognized predictive biomarker was PD-L1, with several assays approved by the FDA, it has not been proven to be the definitive biomarker.
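As noted above, HRs that were not reported directly were estimated from published Kaplan-Meier curves. One common shortcut, valid under the proportional-hazards assumption, derives the HR from the two arms' survival probabilities at a common time point; the sketch below illustrates the idea with invented survival values (dedicated methods such as Tierney et al.'s use richer curve and number-at-risk data).

```python
# Estimating a hazard ratio from Kaplan-Meier survival probabilities under
# the proportional-hazards assumption: HR = ln(S_high) / ln(S_low) at time t.
# The survival values below are invented for illustration.
import math

s_high_pd1 = 0.35   # survival at time t in the high PD-1 expression group
s_low_pd1 = 0.60    # survival at time t in the low PD-1 expression group

hr = math.log(s_high_pd1) / math.log(s_low_pd1)
print(f"Approximate HR (high vs low PD-1): {hr:.2f}")   # ~2.06
```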
In the TME with a high density of CD8+ tumor-infiltrating lymphocytes, PD-1, PD-L1/PD-L2 and cytotoxic T-lymphocyte antigen 4 (CTLA-4) might predict the prognosis and response to PD-1/PD-L1 blockade as well. 35 In our subgroup analysis, PD-1 expressed by TILs predicted DFS better than circulating PD-1 in the serum. This may be because the PD-1 expressed by TILs can interact with PD-L1 secreted by tumor cells, leading to immune escape of tumor cells, whereas PD-1 is secreted into serum only after being expressed in cells. The PD-1 in TILs has a higher predictive efficiency, and it is recommended to routinely detect PD-1 in TILs. Several reviews and meta-analyses have indicated a relationship between a high level of PD-L1 and worse prognosis in patients with HCC. [36][37][38] However, two meta-analyses published last year found that high expression of PD-1 predicted a better prognosis in HCC patients. 39,40 We had reviewed literature suggesting that a high level of PD-1 is associated with a poor prognosis in HBV-HCC patients, and we therefore conducted a meta-analysis to explore the relationship between PD-1 expression and prognosis in HBV-HCC. This meta-analysis, based on seven studies with 658 patients, showed that high expression of PD-1 statistically indicated poor OS and DFS, whether the sample originated from blood or tumors. As for the clinicopathological parameters, our findings suggested that overexpression of PD-1 was significantly associated with multiple tumors, a higher level of AFP and advanced BCLC stages of HCC. Chronic liver diseases with longstanding inflammation often induce T cell exhaustion and the appearance of regulatory T cells (Tregs). PD-1 is an important molecule in the pathway of tumor immune escape; its increased expression can lead to immune escape of the tumor, leading to an increase of AFP and advanced BCLC stages. A tumor secreting AFP in large amounts is a prognostic sign of a larger focus or multifocality, extrahepatic spread, and poor survival. Critelli et al. 41 found that fast-growing HCC shows poor differentiation. Among the included studies, two showed that high PD-1 expression was associated with larger tumor size, one showed that high PD-1 expression was associated with smaller tumor size, and two others showed that PD-1 expression and tumor size were not statistically correlated (Supplementary Fig. 3). Two of the included studies reported the relationship between PD-1 expression and TNM stage, and both showed that PD-1 expression and TNM stage were not statistically correlated (Supplementary Fig. 4). Since some studies confirmed that the outcome of PD-1 inhibitors may be correlated with tumor size and TNM stage 42,43, and two of the included studies indicated that high PD-1 expression was associated with larger tumor size, we speculated that these results may not be conclusive owing to the insufficient sample size; a larger sample is required to draw an accurate conclusion. The same was true for liver cirrhosis: neither of the two articles that mentioned the relationship between cirrhosis and PD-1 expression reported the severity of cirrhosis. Most of the patients had low HBV DNA replication (<100 IU/mL). Liu et al. 27 analyzed the PD-1 level in peripheral blood mononuclear cells rather than TILs, which might not reflect the exact situation of immune infiltration in HCC tissue. Hsu et al. 21 provided no information on antiviral therapy or details of cirrhosis. This could explain why such a vital factor of hepatitis B did not significantly correlate with PD-1 expression levels.
The incidence of HCC increased with serum HBV DNA level in a dose-response relationship, and participants with persistent elevation of serum HBV DNA level during follow-up had the highest HCC risk. 29 Regular antiviral treatment in some patients may not necessarily increase the expression of PD-1. Grouping HBV-HCC patients according to the amount of HBV replication to analyze the relationship between PD-1 and prognosis could define the predictive value of PD-1 in such patients more clearly. The expression of PD-1 in T cells can be induced by upregulated PD-L1 in tumor cells and by other molecules. Some studies have suggested that PD-L1-positive tumor cells show prominent immune cell infiltration in HCC, such as CD3+ TILs (representing overall T cells), CD8+ TILs (representing cytotoxic T cells), and tumor-associated macrophages (TAMs). 27,37,44,45 This result may support the possibility of an adaptive immune resistance mechanism. Other research has pointed out that PD-1 expression is related to T cell exhaustion; blocking the PD-1 pathway could reverse this phenotype and restore anti-tumor immunity. [46][47][48] In HBV-HCC, the reactivation of the oncofetal gene SALL4 by HBV counteracts miR-200c in PD-L1-induced T cell exhaustion. Overexpression of miR-200c antagonizes HBV-mediated PD-L1 expression by directly targeting the 3′-UTR of the CD274 gene (encoding PD-L1), which reverses antiviral CD8+ T cell exhaustion. 49 Through this analysis, we also found that the positive rate of PD-1 was relatively higher than that of PD-L1 in HCC tissue. In view of its high positive ratio, the diversity of detection methods and the stability of the results on tumor prognosis, we believe that PD-1 can be a marker to predict HBV-HCC prognosis. A meta-analysis in patients with pretreated advanced non-small-cell lung cancer indicated a slight benefit from anti-PD-1 compared to anti-PD-L1 inhibitors. 50 An indirect comparison in advanced squamous non-small-cell lung cancer showed that, for PD-L1 low/negative patients, pembrolizumab had superior OS (HR=0.43, range: 0.24-0.76; p<0.01/HR=0.74, range: 0.40-1.38; p=0.35) and better progression-free survival (HR=0.80, range: 0.51-1.26; p=0.33/HR=0.46, range: 0.28-0.75; p<0.01) than atezolizumab. There has been no study comparing the efficacy of PD-1 inhibitors and PD-L1 inhibitors in HCC patients thus far. More clinical trials are required to determine which target, PD-1 or PD-L1, is better for HBV-HCC.
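Indirect comparisons such as the pembrolizumab-versus-atezolizumab analysis cited above are commonly built with the Bucher method, which contrasts each drug's HR against a shared comparator on the log scale. The sketch below shows the mechanics with invented inputs; it does not reproduce the data behind the cited analysis.

```python
# Bucher-style adjusted indirect comparison: HR(A vs B) via a common
# comparator C, i.e. log HR_AB = log HR_AC - log HR_BC. Inputs are invented.
import math

def log_hr_and_se(hr, lo, hi):
    """Log HR and its standard error from a reported HR with 95% CI."""
    return math.log(hr), (math.log(hi) - math.log(lo)) / (2 * 1.96)

y_ac, se_ac = log_hr_and_se(0.70, 0.55, 0.89)  # drug A vs chemo (hypothetical)
y_bc, se_bc = log_hr_and_se(0.85, 0.70, 1.03)  # drug B vs chemo (hypothetical)

y_ab = y_ac - y_bc
se_ab = math.sqrt(se_ac**2 + se_bc**2)         # variances add
lo, hi = math.exp(y_ab - 1.96 * se_ab), math.exp(y_ab + 1.96 * se_ab)
print(f"Indirect HR (A vs B): {math.exp(y_ab):.2f} (95% CI {lo:.2f}-{hi:.2f})")
```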
In our analysis, PD-1 showed good predictive efficacy. Han et al. 44 pointed out that PD-1 expression in tumors was statistically related to the level in serum: the higher the expression of PD-1 in tumors, the higher the concentration of PD-1 in serum. The results of our analysis support this view. The PD-1 expression levels in tumor and in serum were consistent in predicting prognosis and clinical parameters. However, due to different detection methods and grouping levels, the relationship between PD-1 and survival may be inaccurate. Considering this point, although the heterogeneity was low, we still carried out the subgroup analysis to ascertain whether both detection methods could predict the prognosis. To our knowledge, our study is the first meta-analysis focused on the prognostic value of PD-1 expression in HBV-HCC patients specifically. The results are different from those in all HCC patients regardless of HBV. However, there are several limitations inherent to our study's design. First, for some studies, we used the Engauge Digitizer to calculate HRs from published curves, which may introduce bias. Second, the number of studies was insufficient: only three studies investigated the correlation between PD-1 expression and OS, and four focused on the correlation between PD-1 and DFS. All studies originated from China, which may limit the extrapolation of the data. Third, the grouping criteria for the PD-1 level and other parameters varied across the included studies; different standards may increase bias. Finally, although our analysis of prognosis in HBV-HCC showed the opposite result from studies in HCC overall, a direct association between PD-1 expression and HBV infection was not established; the mechanism linking PD-1 and HBV needs further study. Conclusions Our meta-analysis revealed that PD-1 expression was significantly correlated with shorter OS and DFS, higher level of AFP, multiple tumors and advanced BCLC stage of HBV-HCC. Based on the included studies, we found that PD-1 expressed in tumors and blood could reflect the immune status of patients, thereby increasing the reliability of the results. We can preliminarily assume that HBV enhances the expression of PD-1 through certain mechanisms, leading to a poor prognosis in HBV-HCC patients. The prognostic role of PD-1 in HBV-HCC and the mechanism by which HBV mediates PD-1 expression still demand further investigation.
Successful Disinfection of a New Healthcare Facility Contaminated with Pseudomonas aeruginosa Contamination of water use points in health establishments is a frequent and concerning problem. Maintenance and disinfection of water systems can be inefficient. Sterilizing filters are commonly used at selected taps. We report diagnostic and corrective approaches that succeeded in making a contaminated health facility sustainably compatible with its activity, without restriction on tap use. The zones contaminated with pseudomonas, as well as those along the water networks at risk of biofilm development, were identified. Corrective measures on the network and various types of decontamination were carried out. At the end of this work, the bacterial load in the water significantly decreased, and 219 out of 223 controls were negative for P. aeruginosa over 3 years of follow-up. Four positive results were linked to three taps not used for care, which were satisfactorily treated locally. Errors at the design and setup phases of health facilities may result in resistant bacterial contamination. P. aeruginosa contamination of newly built healthcare facilities is an underreported problem. Guidelines on the design, disinfection, and monitoring procedures of water networks of healthcare facilities should be adapted accordingly; this would certainly improve the care offered, limit patients' risk, and avoid many unwanted financial situations for providers. Introduction Despite hygiene measures, microorganisms present in healthcare facilities may be responsible for nosocomial infections. Among encountered pathogenic microorganisms, Pseudomonas aeruginosa (P. aeruginosa) is one of the most frequently identified. Carried by water, it can have harmful consequences for patients and remains a major concern for healthcare professionals [1]. Being highly mobile, this bacterium is widespread in the environment. It naturally lives in fresh water, the sea, moist soils, and on the surface of plants. It can survive and multiply in a large variety of liquids, on any type of medium and wet material, preferably between 4 °C and 45 °C. In water distribution systems, P. aeruginosa develops in biofilms [2,3], which constitute a physical barrier preventing antimicrobial agents from reaching all microorganisms. For this reason, commonly recommended doses of disinfectants are ineffective against P. aeruginosa [4,5]. Biofilms gradually release P. aeruginosa, which circulates in the water distribution network and colonizes points where circulation is lower (large tubing volumes, dead or unused points, solenoid valve cores, and taps). P. aeruginosa present in water distribution systems has been shown to infect vulnerable patients [6,7]. From the medical point of view, P. aeruginosa is feared because it is a multi-resistant microorganism with known induced morbidity [8] and directly linked mortality [9]. The water supply of healthcare facilities must be free of P. aeruginosa. Much effort has been dedicated to identifying the origin of contamination in this context. Many reports have focused on manual or automatic faucets, the distance from water mixers to the points of use, as well as drains [6,10,11]. Despite these efforts, P. aeruginosa contamination is very common and, surprisingly, occurs even in newly built edifices. A report from Dijon (France) [12] showed that 4% of newly built healthcare facilities had positive samples for P. aeruginosa, an improvement compared with 19% for old ones, but also showing that the problem of P.
aeruginosa contamination of the water distribution networks still remains. The contamination of points of use in intensive care units by P. aeruginosa is very frequent, sometimes affecting more than 60% of points [13], and Pseudomonas species have been reported to account for 88.8% of the total bacterial strains identified in dialysis units [13]. However, the presence of P. aeruginosa seems to be very seldom observed in the public water supply, and it is generally accepted that the contamination of the distribution networks in health facilities is due to a retrograde effect coming from the patients or the users [3]. However, a German study showed that almost 3% of the public buildings studied had P. aeruginosa in drinking water [14]. To improve the bacteriological quality of the water, many healthcare facilities use disposable sterilizing filters at the points of use (generally 0.2 µm filters). Indeed, the resistance of P. aeruginosa to disinfectants has forced the closure of many healthcare facilities and led others to use water filters at points of use, effectively accepting permanent contamination of the water distribution network [15]. This approach seems effective [16] but requires appropriate maintenance and control of the structural characteristics of the filters to ensure their effectiveness [17], and it must be combined with more general prevention measures [18]. Because the purchase and maintenance costs of disposable filters are very high, their use is often limited to care areas [19]. The remaining points of use accessible to less frail patients or visitors on the periphery of the care zone (public toilets, patient bathrooms, and hand washing basin taps) are frequently not protected. We were requested to disinfect a new dialysis building that could not be opened to the public because of persistent P. aeruginosa contamination despite several previous decontamination attempts. We aimed at removing P. aeruginosa from the complete water distribution networks in order to subsequently guarantee normal use of the building in its healthcare activities, rather than being limited to the use of the protected points of the local water distribution and supply. In addition to the water distribution systems, P. aeruginosa may contaminate very precise sites or medical material or devices. Removing P. aeruginosa is often very difficult but achievable for medical devices, as has been reported with repeated disinfection procedures and replacement of infected parts [20] or using specific antibacterial tools applied to selected points [21]. Such techniques are unfortunately not of use to disinfect the complete water distribution system of a building. To the best of our knowledge, no successful or satisfactory eradication procedure of P. aeruginosa from the entire water distribution network of a healthcare building has been previously reported. We report the method and actions that succeeded in eradicating the infection and allowed a dialysis centre to open without restriction on the use of the water distribution system. Our general approach might help when fighting bacterial contamination of other healthcare buildings, eradicating the infection without the use of filters and making it possible to dispense care without restrictions.
Materials and Methods A multi-background working group involving architects, engineers responsible for the design of the water network, builders, plumbers, maintenance technicians, water quality engineers, pharmacists, hygienists and physicians was set up to decontaminate the water distribution network. The action plan was organised as follows: Inspection and Evaluation of the Water Distribution System The water distribution system was analysed on floor layouts and engineering drawings. Risk areas of bacteriological development were pinpointed and visually identified in the building. The risk areas were identified considering the guidelines for water distribution networks for healthcare facilities issued by the French and British governments [22,23]. Practical guidelines and advice on how to reduce bacterial contamination in vitro and in experimental settings were taken from the available literature [24,25]. Bacterial Charge Quantification by ATPmetry Bacteriological load was quantified by ATPmetry [26]. Briefly, after a one-night period of stagnation during which no water was used, samples (60 mL) of water were taken directly (first jet) or after 2 min of running water (second jet) to assess whether the contamination was rather related to the point of use or to the upstream network. Adenosine triphosphate (ATP) was used as a direct quantification of the biomass to identify zones sensitive to biofilm formation. Water Sampling Water samples were taken and analysed by an approved laboratory (French Committee for Accreditation, COFRAC) [27]. Water sampling was performed according to the ISO 19458 [28] and FDT 90-520 standards [29]. Colony counts of aerobic bacteria at 22 °C and 36 °C were performed according to the ISO 6222 standard [30], and that of P. aeruginosa according to the ISO 16266 standard [31]. Two hundred and fifty mL were sampled and one hundred mL cultured to check for the presence of P. aeruginosa. Demonstration of P. aeruginosa Contamination in Specific Segments of the Water Distribution System Network parts (metal filters at the inlet of the network) and specialized handwash solenoid valves (Anios, Lille-Hellemmes, France) were dismantled under sterile conditions, swabbed, and cultured to detect and locate P. aeruginosa. Some PVC pipes were cut into thin strips under aseptic conditions, fixed in 2.5% glutaraldehyde in PHEM buffer, pH 7.2, for 1 h at room temperature, followed by washing in PHEM buffer. Fixed samples were dehydrated using a graded ethanol series (30-100%), followed by 10 min in graded ethanol-hexamethyldisilazane, and then hexamethyldisilazane alone. Subsequently, the samples were sputter coated with an approximately 10 nm thick gold film and examined under a scanning electron microscope (Hitachi S4000, at CoMET, INM Montpellier France, MRI-RIO Imaging facilities, Biocampus) using a lens detector with an acceleration voltage of 10 kV at calibrated magnifications. Corrective Measures Removal or replacement of major contaminated and contaminating sites was performed. Analysis of Previous Disinfection Procedures The first three disinfections were carried out by specialized companies with their own procedures. The first two disinfection procedures were examined on poorly detailed execution reports. The third procedure was observed while performed and evaluated upon completion. The fourth disinfection took into account the results of the previous disinfection audit. Hereafter, a description of the different disinfection procedures is provided.
When the building was filled with water, a chemical disinfection of the sanitary water network using a chlorinated product (Ferrocid 5280 S, Kurita Europe APW GmbH, Ludwigshafen, Germany) was carried out by a specialized company. The second disinfection was carried out by the same specialized company using Ferrocid 8591 (BK Giulini GmbH, Ludwigshafen, Germany), a blend of acetic acid, hydrogen peroxide, and peroxyacetic acid in water, not suitable for drinking water (registered by the German CA BAuA under No.-N-16308 for a product-type (PT11) for the preservation of water or other liquids used in cooling and processing systems [32]). The third disinfection was ordered from another specialized company. An audit was carried out and several critical points were identified: (1) the concentrated product used to disinfect was DYESE PL (Dyese, Aix en Provence, France), based on hydrogen peroxide and silver ions. Its bactericidal activity has been tested according to standards for the evaluation of the bactericidal activity of antiseptics and chemical disinfectants (EN 1040 [33], EN 1276 [34]). These standards indicate that the product is tested in laboratory tubes, but not in real conditions in distribution networks with controlled contamination levels (and presence of biofilm), the so-called "dirty conditions". (2) The target level of hydrogen peroxide in the network was 1000 mg/L and the contact time was 4 h, in line with the recommendations of the Scientific and Technical Centre for Buildings [35]. However, a pulse pump was used to introduce the disinfectant into the network without a device to regulate its flow as a function of the total water flow (no pulse water meter or flow meter). The water flow was variable depending on the number of open taps and their settings. Thus, with this method, even with a constant flow of disinfectant, the final concentration could not be uniform, and the desired minimum concentration could not be guaranteed. (3) The level of hydrogen peroxide (H2O2) was controlled at points of use using strip tests in which the maximum level indicator (100 mg/L H2O2) was below the desired concentration. The samples were not diluted to adapt the test concentration ranges. Therefore, the target concentration (1000 mg/L H2O2) could not be guaranteed throughout the circuit. The results showed a clear improvement of the bacteriological quality. Of the 17 samples, only 6% contained over 100 CFU/mL (max 620 CFU/mL) at 22 °C, and 24% contained over 10 CFU/mL (max 440 CFU/mL) at 36 °C. However, P. aeruginosa was still present in 4 samples. Therefore, an additional decontamination procedure was needed. Proposed Disinfection Procedure The fourth disinfection took into account the results of the previous disinfection audit. It used a proportional pump (DOSATRON ® INTERNATIONAL S.A., Tresses, France) and a disinfectant, Dialox ® (S&M FRANCE-BIOXAL, Chalon sur Saône, France), tested according to application standards in "dirty conditions" (NF 13727). Dialox ® is a disinfectant based on hydrogen peroxide and peracetic acid which is adapted for medical and water treatment devices. Water and disinfectant flows were adjusted to obtain 1000 mg/L of H2O2 directly after the main water inlet and uniformly throughout the network using a proportional pump. The presence and concentration of the disinfectant were checked in samples diluted 1/20 using a strip with a concentration range of 0-100 mg/L of H2O2.
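The verification step rests on simple dilution arithmetic: a 1000 mg/L target diluted 1/20 should read about 50 mg/L, comfortably within the 0-100 mg/L strip range. The sketch below checks this and the proportional-pump injection ratio; the concentrate strength used is an assumed value, not the Dialox specification.

```python
# Dosing and strip-test arithmetic for the fourth disinfection.
# The concentrate strength is an assumption for illustration only.

TARGET_MG_L = 1000.0          # target H2O2 in the network (from the text)
STRIP_MAX_MG_L = 100.0        # strip range 0-100 mg/L (from the text)
DILUTION = 20.0               # 1/20 sample dilution (from the text)
CONCENTRATE_MG_L = 50_000.0   # assumed H2O2 content of the concentrate

# Proportional pump setting: fraction of concentrate per volume of water
injection_ratio = TARGET_MG_L / CONCENTRATE_MG_L
print(f"Injection ratio: {injection_ratio:.1%} (i.e. 20 mL per litre)")

# Strip verification after dilution
expected_reading = TARGET_MG_L / DILUTION
assert expected_reading <= STRIP_MAX_MG_L, "reading would saturate the strip"
print(f"Expected strip reading: {expected_reading:.0f} mg/L")
```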
The network was filled up and checked to contain 1000 mg/L of H2O2 at all sampling and user points. The contact time was more than 8 h. Water samples for analysis were taken at least 72 h after decontamination and rinsing, and repeated more than one week apart. These samples were taken in the first jet after a minimum period of stagnation of one night. Then, bacteriologic analyses at 22 °C and 36 °C were conducted in compliance with regulations. Maintenance Measures In order to avoid bacterial proliferation after disinfection, regular flushes (two min daily, weekly, or monthly according to risk level) of all points of use were introduced. Periodic sampling for laboratory analysis of selected points of use was performed to monitor the bacterial charge and water stability. A range of 10 to 20 water samples was taken quarterly. Inspection and Evaluation of the Water Distribution System The building was connected to the public water distribution network by an external pipe approximately 50 m long. This pipe fed a garden water tap, as well as two separate pipes in the basement level: one going to the water treatment unit for dialysis and three outdoor taps, and the other going to the sanitary water network (Figure 1). Figure 1. Schematic water distribution network. Original setting when the building was delivered. A garden water tap was connected to the main pipe. Three taps for outdoor use were connected to the pipe specifically dedicated to the treatment of dialysis water. A 50 L ion-exchange softener supplied 10 hot water tanks: two of 300 L and eight of 30 L capacity for the most distant points of use in the building.
These hot water tanks had a mixing valve downstream to limit the temperature of hot water outputs to 50 °C to avoid the risk of burns for users. The recorded temperature at the entrance of the building at the time of the procedures was 15-16 °C, and inside the building it was 17-18 °C. The water treatment unit for dialysis includes filtration, softening, and de-chlorination steps before a double osmosis, and supplies 34 points of use (dialysis treatment points). The sanitary water network supplied 56 points of use: 28 cold water outlets (5 mechanical taps and 23 contactless taps) and 28 mixing taps (cold and hot water). Hot water (60 °C) was obtained, after softening with a 50 L ion-exchange softener, from two 300 L hot water tanks each supplying eight water mixers. The first tank supplies the area used by patients and visitors on levels 0 and 1; the second one supplies the area reserved for the staff on level 2. Eight 30 L hot water tanks individually supplied one to four water mixing taps each. Cooling of the water from 60 °C to a maximum of 50 °C was obtained by mixing valves placed between the hot water tanks and the water mixing taps; the distance between the water mixing taps and the mixing valves was between 3 m and 10 m. For the control of the different types of water (cold, softened, hot, mixed, and dialysis water), 12 sampling sites were installed on the different pipes. Inspection and evaluation of the water distribution system showed poor separation of water distribution networks (external taps connected to the pipe dedicated to dialysis water (Figure 1)), dead legs (Figure 2), unfavourable equipment (automatic taps), and significant lengths of piping after the large hot water tanks, resulting in possible water temperature drops below 50 °C (Figure 1).
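Findings of this kind can be screened systematically by checking each network feature against simple thresholds. The sketch below illustrates such a rule-based audit; the thresholds are illustrative stand-ins, not the exact values of the French or British guidelines cited in the methods.

```python
# Rule-based screening of water-network features for contamination risk.
# Thresholds are illustrative; consult the applicable national guidance.

MAX_RUN_AFTER_MIXING_M = 3.0    # assumed max pipe run after a mixing valve
MIN_HOT_WATER_C = 50.0          # hot water should stay at or above 50 C

points = [                       # hypothetical audit records
    {"id": "mixer_07", "run_m": 10.0, "hot_c": 47.0, "dead_leg": False},
    {"id": "tap_02", "run_m": 2.0, "hot_c": 55.0, "dead_leg": True},
]

for p in points:
    flags = []
    if p["run_m"] > MAX_RUN_AFTER_MIXING_M:
        flags.append("long pipe run after mixing valve")
    if p["hot_c"] < MIN_HOT_WATER_C:
        flags.append("hot water below target temperature")
    if p["dead_leg"]:
        flags.append("dead leg")
    print(p["id"], "->", "; ".join(flags) or "no flag")
```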
ATPmetry Results Mapping of the water distribution networks by ATPmetry made it possible to draw up a microbiological intensity map of the installation to identify the zones and points of use likely to create the conditions for biofilm formation. The majority of points of use and sampling taps on the different networks were checked (n = 40, Figure 3). A large majority of samples (71%) were over the recommended threshold of 2 log equivalent bact/mL, and a considerable proportion (38%) had a very high content (≥3 log equivalent bact/mL). To better identify the sites at risk of bacterial contamination, a comparison between the bacterial content found in the first and second jets was carried out at different relevant points of use. Since the first jet is related to the local bacterial status and the second jet is rather related to the bacterial status of the water in the system, a high first/second jet bacterial number reduction (from 5.2 ± 0.4 to 2.4 ± 0.1 log equiv bact/mL) is strongly suggestive of local bacterial production. Thereby, significant contamination levels were observed and located on (i) automatic faucets, (ii) the softener, (iii) the network portion from the softener to the hot water tanks, (iv) the observed dead legs, and (v) showers equipped with thermostatic faucets.
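The first/second jet comparison amounts to a simple decision rule: a large drop between jets points to a source at the point of use, while similar loads in both jets point to the upstream network. The snippet below sketches that rule; the threshold and readings are illustrative and are not the study's data.

```python
# Classify sampling points from first/second jet ATPmetry readings
# (log equivalent bacteria per mL). Threshold and data are illustrative.

LOCAL_DROP_THRESHOLD = 1.0   # assumed log10 drop suggesting a local source

readings = {                 # point: (first jet, second jet), hypothetical
    "tap_12": (5.2, 2.4),
    "shower_03": (3.1, 2.9),
}

for point, (first, second) in readings.items():
    if first - second >= LOCAL_DROP_THRESHOLD:
        origin = "point of use (local biofilm likely)"
    else:
        origin = "upstream network"
    print(f"{point}: first={first}, second={second} -> {origin}")
```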
Water Sampling
Water samples from the sampling taps located on the pipes at the beginning of the sanitary water network and of the water treatment unit for dialysis were found negative for P. aeruginosa. On the other hand, most of the samples in the sites identified as at risk (20 out of 26, 77%) were positive for P. aeruginosa: dead legs, automatic cold-water taps, and sites after the sanitary water softener (sampling sites and mixed water taps).

Demonstration of P. aeruginosa Contamination in Specific Segments of the Water Distribution System
• Filters of the main pipes. The filters were opened in sterile conditions and parts were swabbed and cultured. The presence of P. aeruginosa was demonstrated in the filters at the entry of the water distribution network, whilst there was no P. aeruginosa in the sample taken before the filter to test the conformity of the water supply from the city provider. The presence of pebbles was observed in these filters. It is noteworthy that comparison samples were obtained in the old dialysis building at the same location, which, importantly, shared the same water source and had been negative for P. aeruginosa during its 10-year exploitation period. Likewise, the filters in the old building were never found to contain any visible gravel. The origin of the contamination of the new building is therefore likely to be related to the connection to the municipal water distribution system. The large number of bacteria along the network also shows that the first disinfection performed immediately after connecting the water network failed, as did the regular purging of water at points of use performed before opening the building to patients. We disassembled the solenoid valves from our six automated hand washing points in the care zone and cultured samples from the internal parts of these solenoid valves. The presence of P. aeruginosa on the solenoid valve nucleus was confirmed in those already found positive in previous analyses. All solenoid valves were changed and the internal circuit was disinfected. The analyses that followed were no longer positive for the presence of P. aeruginosa.
• Water treatment unit for dialysis. P. aeruginosa was also identified in two sampling sites located after the water softener of the treatment unit for dialysis and after the activated carbon column (before reverse osmosis). Investigations showed that the softener as well as the activated charcoal were free of P. aeruginosa whilst the sampling sites were infected. The tubing of these sampling sites was analysed (Figure 4A). The piping was cut in sterile conditions and prepared for electron microscopy analysis. It showed the presence of bacteria with size and shape similar to that of P. aeruginosa (Figure 4B). The setting of the sampling sites was modified to reduce the dead legs and avoid stagnation (Figure 4C). Finally, a new disinfection of the water treatment unit was performed, which repeatedly demonstrated the absence of P. aeruginosa.

Corrective Measures in the Water Distribution Network
Several corrective and preventive measures were carried out to secure the network. The main ones are described in Table 1 and plotted in the engineering drawings of the supplementary figures (Supplementary Materials Figures S1-S4). The resulting network is schematized in Figure 5.

Table 1. Facility features increasing the risk of contamination and proposed corrections.
Feature type: Missing equipment. Feature enhancing contamination: on the main water supply, there was no possibility of carrying out controlled chemical decontamination. Correction: installation of connectors to allow a volume-controlled mixing pump to be fitted for chemical disinfection.
Feature type: Network. Feature enhancing contamination: outdoor taps were connected to a pipe dedicated to the treatment of water for dialysis.

Disinfection
Eight days after the fourth disinfection specific to P. aeruginosa, the cultures performed returned negative. A new control, eight days later, was also negative and the healthcare facility could be opened to the public. Three months later, three points of use were positive for P. aeruginosa. A supplementary disinfection procedure was carried out, which was successful. Subsequent controls were negative for P. aeruginosa.

Maintenance and Controls
Periodic network rinsing and monitoring of the presence of P. aeruginosa were pursued during use. Over the following 3 years, 219 out of 223 (98%) analyses were negative (<1 CFU/100 mL). Four positive results were linked to three taps not used for care, which were satisfactorily treated locally. No recurrence of P. aeruginosa infection has been observed in the water treatment unit for dialysis.
Discussion
Previous reports on P. aeruginosa contamination mainly focus on retrograde contamination from the patients or users to the water network. There is scant information on the origin of a water contamination, the locations where biofilms develop, or the methods to remove them [36]. A first successful attempt to decrease bacterial contamination of a multiple sclerosis centre has been reported [37]. In that study, the authors applied an electrochemical disinfection in the hot water network before the heater with a recirculation system. It represents an interesting approach that deserves to be tested in cold water used for care. To the best of our knowledge, our report is the first case illustrating a contamination of a new health care building, describing the measures applied to eliminate it, and showing the unsatisfactory treatments as well as the finally successful approach. In addition, our experience underlines the importance of preventive measures, not only in the design of the internal water distribution, but also and above all in the execution of the construction phases, in particular at the time of the arrival of water in the building, the disinfection of the networks, and the waiting period before opening to the public. Our approach, through a careful visual inspection of all water systems guided by the recommendations of the ministries of health and the scientific literature, and by an ATPmetry mapping coupled with bacteriological results, allowed us to identify areas and points of use at risk of biofilm development. Design errors were identified and corrective measures were applied. We also observed that the sizing of the network was not adapted to the facility. The estimated water consumption was significantly higher than that actually used, and the planned number of points of use exceeded that required for healthcare activity. The first favoured a slow speed of water circulation in the pipes and the second allowed the appearance of dead and unused legs, favouring bacterial growth and biofilm formation [38]. The capacity of the water softener protecting the hot water piping from limestone had been estimated to be one hundred times greater than that actually used. As a consequence, the water flow in the softener was very low, again allowing bacterial proliferation to occur. In addition, water at the main inlet was only moderately loaded with calcium, rendering the necessity of the softener questionable. For hand washing, the choice to generalize automatic non-touch sensor taps was not suitable because these taps, especially those with core solenoid valves, are known to be a trap for germs [39]. In the present experience, although the quality of the water supplied by the municipality is acceptable, the complete water network of the facility was shown to be contaminated shortly after connecting the main water inlet. It appears that the moment of connecting and filling up the distribution system is crucial, although there is no established protocol for its implementation. During the 3 months spent between the end of the construction works of the building and its equipment for healthcare activities, no particular measures, except periodic flushing, were scheduled to check and maintain the quality of water in the distribution networks.
Nevertheless, air-tightness checking of the water distribution networks and maintenance with super-chlorinated water or other recommended solutions are advised by good practice recommendations [40]. None of these actions were undertaken in our case. Indeed, several important problems were observed in the curative disinfection practices used. The disinfectant products used had not been validated in real conditions and the implementation was poorly controlled. Instead, disinfectants that are efficient for biofilm destruction, compatible with the network materials, and without health risk should have been favoured. Thermal disinfection [41], if properly performed, is effective but not always technically feasible on the entire water network of a building. From a more general point of view, the design of sanitary networks must comply with numerous documents, national regulations, technical design documents, or technical guides. All parts of the network must also comply with national or international standards. These texts include different requirements, the first of which is that the quality of the water must not deteriorate in the network and must remain in conformity with the quality standards at any point of use. Amongst the additional, more technical requirements are that the water flow at all points should conform with reference values without pressure drop; there should be no return of polluted water to the drinking water system; there should be protection against calcareous deposits; and hot water temperatures should not reach a threshold considered to increase the risk of injury and burning at the point of use. In addition, the design of a water distribution network is limited by budget constraints, which are frequently at odds with the primary goal of good water quality for health facilities. Engineers and workers responsible for the implementation of the water networks are often better trained on the technical aspects of proper functioning than on the risks of water quality degradation described in the scientific literature and in the health guidelines. Water networks of new buildings can therefore be affected by microorganisms which can sometimes be pathogenic. Therefore, the creation of a multidisciplinary working group to design a water network, monitor the progress of the construction, and control the intermediate phase between the water connection and the opening is of utmost importance.

Conclusions
In conclusion, P. aeruginosa contamination of newly constructed health facilities is very difficult to resolve. There is no effective method applicable to all buildings because network configurations, materials, water quality, and uses differ. However, with a good methodology it may be possible to decrease the contamination and possibly eradicate it. Proper diagnostic methods, corrective measures, and disinfection procedures, carefully chosen and followed in their implementation, should be used. Misunderstanding of, or non-compliance with, regulatory texts or guidelines for the design, operation, and maintenance of the internal water distribution networks of new health care facilities can lead to poor water quality, sometimes incompatible with the function the edifice has been built for. The health risk linked to bacterial infections carried by the water distribution network in health care facilities may be unacceptably high [9], and the financial consequences of poor water quality can be substantial.
Correcting design-based problems and mishandlings of the setup phase can represent a high cost, especially as it postpones the start of healthcare activity. International guidelines for the design, setup phase, and maintenance of water distribution networks of healthcare facilities should be developed in collaboration with builders, health authorities, and water quality specialists. Their implementation in new buildings should be specifically supervised by trained specialists at all stages, from design to start of activity. Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/hygiene2010001/s1, Figure S1: Network corrections on basement (−1 level); Figure S2: Network corrections on the ground floor; Figure S3: Network corrections on the first floor; Figure S4: Network modifications on the second floor.
Using social media in Kenya to quantify road safety: an analysis of novel data

Background: Road traffic injuries are a large and growing cause of morbidity and mortality in low- and middle-income countries, especially in Africa. Systematic data collection for traffic incidents in Kenya is lacking, and in many low- and middle-income countries available data sources are disparate or missing altogether. Many Kenyans use social media platforms, including Twitter, and many road traffic incidents are publicly reported on the microblog platform. This study is a prospective cohort analysis of all tweets related to road traffic incidents in Kenya over a 24-month period (February 2019 to January 2021). Results: A substantial number of unique road incidents (3882) from across Kenya were recorded during the 24-month study period. The details available for each incident are widely variable, as reported and posted on Twitter. Particular times of day and days of the week had a higher incidence of reported road traffic incidents. A total of 2043 injuries and 1503 fatalities were recorded. Conclusions: Twitter and other digital social media platforms can provide a novel source for road traffic incident and injury data in a low- and middle-income country. The data collected allows for the potential identification of local and national trends and provides opportunities to advocate for improved roadways and health systems for the emergent care of road traffic incidents and associated traumatic injuries.

Background
Road traffic injuries are a growing cause of morbidity and mortality in low- and middle-income countries (LMICs), with a particularly high burden in Africa, including in Kenya [1-3]. In Kenya, road traffic incidents are among the leading causes of both morbidity and mortality [4-9]. Road traffic-associated injuries are particularly common among users of motorcycles and public transportation (matatus/mini-buses), as well as among pedestrians [4,8,10]. Broadly, across age groups, Kenyans have poor seatbelt and helmet utilization [11]. Among a cohort of head-injured patients in a Kenyan emergency department (ED), none reported seatbelt or helmet use at the time of injury [12]. Furthermore, the Kenyan public has recognized the key role vehicles and overcrowded public transit play in increasing the risk of injury on Kenyan roadways [13,14]. Yet, systematic data collection for traffic incidents is broadly lacking across Africa, as well as in Kenya. The ability to target road and traffic safety improvements with tailored solutions requires an understanding of the burden of disease and the current strengths and weaknesses. Robust and accurate statistics around road traffic deaths, hospital data, population surveys, or police reports are often not available [1,2]. Prior work in Kenya has identified the need to "improve the collection and availability of accurate [road traffic injury] data" in the country [15]. There is recognized underreporting, variance from international standards, and overall inadequate data collection around road traffic incidents in Kenya [16]. Previously, the World Health Organization estimated that nearly 80% of road traffic fatalities were unreported in prior Kenyan government data [17].
The majority of Kenyans have access to mobile telephones (91% penetration per capita, compared to 80% across Africa) and many Kenyans use social media (an average of nearly 3 hours per day), including an estimated 50% of Kenyans who use the popular microblog and social media platform Twitter [18]. Prior research has evaluated crowdsourced data from mobile phone data and social media platforms such as Twitter to provide timely incident detection [19]. However, the preponderance of prior work in this space has been performed in high-income countries with robust and publicly available crash data, utilized real-time GPS or accelerometer data, or evaluated unique newsworthy events, as opposed to routine social media postings [20-22]. This study aims to evaluate publicly available social media posts regarding road traffic incidents in Kenya. We hypothesize that the data from Twitter can be used as one of the few publicly available sources of road traffic incident data in Kenya, given the paucity of available government or other reliable data sources. Furthermore, we detail the epidemiology of Kenyan road traffic incidents using information contained in Twitter posts.

Study design and setting
This was an observational, prospective cohort study, wherein all identified road traffic incidents from February 1, 2019, to January 31, 2021, were included for analysis. Research staff from the Emergency Medicine Kenya Foundation's (EMKF) The Injury Prevention and Safety Initiative (TIPSI) actively monitor Twitter daily for reports of road traffic incidents in Kenya (Fig. 1; these examples are from outside the study period). TIPSI staff then retweet each unique incident onto their Twitter feed (@TIPSIKenya) and take care to ensure no duplicate events are included. Subsequently, research staff abstract key data for each traffic incident from the TIPSI Twitter timeline. Events involving non-motor-vehicle incidents, such as boats and planes, were excluded from the analysis.

Outcomes
Data regarding time of day and extent of injuries were included for analysis when available, based on the details included in a tweet. Information regarding casualties (the number involved in an incident), injuries (the number reported to have been injured), and fatalities (those reported to have died) was captured when available and generally reflected what was known or available on scene. Data determined some time after an incident, such as eventual outcomes like death for individuals who received hospitalization, were not available and as such were not included. Casualties were further divided by whether or not the incident involved vulnerable roadway users (such as motorcyclists, cyclists, and pedestrians).

Analysis
EMKF TIPSI staff monitor Twitter and abstract data into a Microsoft Excel database. All statistical analyses were performed using STATA Statistical Software Release 15 (StataCorp LLC, College Station, TX).
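The abstraction step described above lends itself to a simple structured record. The Python sketch below is illustrative only: the field names and example rows are our own, not the TIPSI team's actual Excel columns.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative record for one abstracted incident; every field may be missing
# in a given tweet, hence the Optional types.
@dataclass
class RoadIncident:
    tweet_id: str
    date: str                      # ISO date of the report
    hour: Optional[int] = None     # hour of day, if stated in the tweet
    vehicles: Optional[int] = None
    casualties: Optional[int] = None
    injuries: Optional[int] = None
    fatalities: Optional[int] = None
    vulnerable_user: bool = False  # motorcyclist, cyclist, or pedestrian involved

def totals(incidents):
    """Cumulative injuries and fatalities, ignoring tweets without the data."""
    injuries = sum(i.injuries for i in incidents if i.injuries is not None)
    fatalities = sum(i.fatalities for i in incidents if i.fatalities is not None)
    return injuries, fatalities

sample = [
    RoadIncident("t1", "2019-02-03", hour=18, vehicles=1, injuries=2),
    RoadIncident("t2", "2019-02-04", hour=7, vehicles=2, fatalities=1, vulnerable_user=True),
]
print(totals(sample))  # (2, 1)
```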
Results
Twitter reports included information about the number of vehicles involved in each incident, revealing that single-vehicle incidents were most common (64.47%). There were 1278 (32.9%) tweets that contained information about the casualties from (the number of individuals involved in) an incident. Of the tweets reporting types of casualties, 441 (34.51%) involved at least one vulnerable road user. Additionally, a total of 510 tweets (13.1%) reported known human injuries, with a cumulative total of 2043 injuries identified during the data collection period. A further 696 (17.9%) tweets reported at least one fatality, and a cumulative total of 1503 fatalities were recorded.

Discussion
The methods outlined here allowed for the creation of a dataset with information about unique road incidents in Kenya, a topic of public health interest that has historically lacked sufficient reporting mechanisms. The work by the EMKF TIPSI team to monitor Twitter for unique road traffic incidents in Kenya indicates that this methodology of tracking is a viable source of epidemiologic data in a lower-resource setting. It is of particular value given the lack of other publicly available data sources. Available data from the Kenyan National Transport and Safety Authority indicate higher numbers of fatalities from road traffic incidents than were identified through this dataset [23]. This lower fatality count is somewhat expected, given that information gleaned from Twitter presumably only contains information reported from the scene and does not capture subsequent hospital or longer-term morbidity/mortality. Furthermore, though many are, certainly not all road traffic incidents in Kenya are documented by Twitter users. A previous analysis of a single Kenyan Twitter user (@Ma3route) focused on geospatial analysis of road traffic incidents and showed significant clustering of incidents at several high-risk locations and intersections [24]. This Twitter user is frequently found in the @TIPSIKenya feed. Interestingly, one analysis of a small subset of @Ma3route data showed that the majority of tweets with geolocation data were geographically accurate when verified in person, though the authors noted that most tweets about road traffic incidents do not have any associated geolocation data [25].

Limitations
Several limitations are noted. There are challenges due to the manual process of monitoring Twitter for information and incidents, as well as the manual data abstraction process, which leave room for the possibility of missed incidents. Additionally, many Kenyans are active users of Twitter, but many more Kenyans do not use the platform at all. Furthermore, the TIPSI team does not have strict incident inclusion/exclusion criteria. Rather, the goal is to catalog any incident and all available data without duplicating previously noted incidents. The initial data sources (tweets) are microblogs not meant to be complete or comprehensive public health data. As such, there is no standard format for road traffic data reported on the platform; a tweet may include (or be missing) any number of pieces of information, including a location (or a reference to a nearby landmark or intersection), the number and types of vehicles, the occupants involved, and the extent of injuries or fatalities, and may or may not include photographs of the scene. The disparate information is unique to every tweet, as the source social media users provide data input on an ad hoc basis.

Implications
Road traffic accidents are a major cause of morbidity and mortality in Kenya, and while Twitter can provide some data for context, there is an urgent need to improve the quantification of road traffic incidents in the country. Improved data would comprehensively include geolocation, types and number of vehicles, numbers of occupants, types of injuries, and the time of occurrence.
Such improved data collection will make it possible to better assess the health and economic losses from road traffic incidents, and will allow for improved advocacy and a more robust financial argument for improved roadways and road traffic safety. Future work should evaluate the ability of the existing Kenyan emergency medical care systems to adequately treat the injuries and longer-term morbidity that result from road traffic incidents.

Conclusion
Twitter and other digital media platforms can provide a novel source for road traffic incident and injury data. In Kenya, the EMKF TIPSI team has been able to collate data through routine social media monitoring. In Kenya, and in other LMICs, cell phones are pervasive, and access to social media platforms such as Twitter provides a data source that can be used for public health epidemiology and advocacy. In countries or settings with inadequate systematic data collection of road traffic incidents, Twitter and other social media platforms may provide the best available data source and can serve as an important tool for advocacy and public health improvement. Furthermore, cell phones and social media platforms are well suited to support and enhance timely trauma care. In Kenya, the ability to use a cell phone or social media platform to quickly identify health facilities capable of providing trauma care and to orient individuals on how to quickly get to such facilities is of urgent importance. Twitter can provide a novel method for the identification and collation of road traffic incidents in Kenya and other LMICs. The data collected allows for the identification of local and national trends and provides potential opportunities to advocate for improved roadways and health systems for the emergent care of road traffic incidents and associated traumatic injuries.

Abbreviations: LMICs: low- and middle-income countries; ED: emergency department; EMKF: Emergency Medicine Kenya Foundation; TIPSI: The Injury Prevention and Safety Initiative.
Comment on"Quantitative x-ray photoelectron spectroscopy: Quadrupole effects, shake up, Shirley background, and relative sensitivity factors from a database of true x-ray photoelectron spectra" This Comment demonstrates that a comparison analysis by Seah and Gilmore between experimental data on the X-ray photoelectron spectroscopy intensities and theoretical data by Trzhaskovskaya et al. is misleading due to a number of serious errors made by Seah and Gilmore (Phys. Rev. B, 73, 174113). This Comment demonstrates that a comparison analysis by Seah and Gilmore between experimental data on the X-ray photoelectron spectroscopy intensities and theoretical data by Trzhaskovskaya et al. is misleading due to a number of serious errors made by Seah and Gilmore (PRB 73 174113). PACS number(s): 33.60.Fy In a recent publication by Seah and Gilmore [1], a comparison analysis is provided between experimental X-ray photoelectron spectroscopy intensities measured at the National Physical Laboratory and theoretical data by Scofield [2,3] and by Trzhaskovskaya et al. [4,5]. Seah and Gilmore claim in the abstract that there is "excellent correlation between experimental intensities... and the theoretical intensities involving the dipole approximation using Scofield's cross sections. Here, more recent calculations for cross sections by Trzhaskovskaya et al. involving quadrupole terms are evaluated and it is shown that their cross sections diverge from the experimental database results by up to a factor of 5". Another conclusion in [1] is concerned with the photoionization cross section σ as well as the photoelectron angular distribution parameters β (the dipole parameter), γ, and δ (quadrupole ones) obtained by Scofield [3] for Ne and Ba at the photon energy k = 3 keV: "If these data are compared with the data of Trzhaskovskaya et al. [4,5], good agreement is found for β and δ, whereas γ is generally between 1.01 and 1.45 times greater and σ is between 0.44 and 0.94 of these earlier values". Note that here Seah and Gilmore [1] have given the wrong reference to one of another Scofield's papers instead of [3]. We contend that the overall comparison and conclusions concerning values of the photoion-2 ization cross section σ and the photoelectron angular distribution parameters β, γ, and δ presented in our papers [4,5], are invalid due to serious errors and shortcomings made in [1]: (i) Calculations by Scofield and by Trzhaskovskaya et al. are compared in [1] for several values of the photon energy k, in particular, for the K α line of magnesium k = 1.254 keV and for k = 3.0 keV. Photoionization cross sections [2,3] and the photoelectron angular distribution parameters [3] are presented by Scofield for these values of the PHOTON ENERGY k. In our papers [4,5], we give cross sections and angular parameters for nine values of the is the binding energy of the electron. This is pointed out everywhere in the text of the papers from the title to the Section "Explanation of Tables". Nevertheless Seah and Gilmore [1] determine interpolated values of σ, β, γ and δ from [4,5] using photoelectron energies E as though they were photon energies k. (ii) Comparing the photoionization cross sections for an open atomic subshell, one should take into consideration that values of σ are given in [4,5,6,7] for the completely filled subshells even though a subshell is an open one. This is always pointed out in Section "Explanation of Tables" of the papers. 
(ii) When comparing the photoionization cross sections for an open atomic subshell, one should take into consideration that the values of σ are given in [4,5,6,7] for the completely filled subshells even when a subshell is an open one. This is always pointed out in the Section "Explanation of Tables" of the papers. In contrast, Scofield [2] has not clearly indicated the manner in which cross sections for the open relativistic doublet subshells have been obtained. However, analysis of the σ values from [2] leads to the suggestion that he calculated a combined photoionization cross section σ_nℓ for the real number of electrons in the two subshells with total momenta j1 = ℓ − 1/2 and j2 = ℓ + 1/2, where n is the principal quantum number and ℓ is the orbital momentum. Then σ_nℓ has been spread between the two relativistic subshells in accordance with their approximate statistical weights. Because of this, only the comparison of σ_nℓ = σ_nℓj1 + σ_nℓj2 for a specific number of electrons is meaningful. This fact is disregarded in [1]. For an open subshell, Seah and Gilmore compare σ_nℓj(S) of Scofield for the real number of electrons, as mentioned above, with σ_nℓj(T) given in our tables [4,5,7] for the completely filled subshell, σ_nℓj(T) being found mistakenly (see point (i)).

(iii) Besides, it is necessary to bear in mind that different theoretical assumptions may give rise to a difference in the results obtained. In specific cases, the difference may be of great importance. In their comparison of the results of the two calculations, Seah and Gilmore do not consider the difference in the models used by Scofield and by Trzhaskovskaya et al. [4,5]. Scofield assumed the electrons in the initial and final states to be moving in the same central Hartree-Dirac-Slater potential of the neutral atom (the so-called "no hole" model). By contrast, we have taken into account the hole in the atomic shell produced after ionization (the "hole" model) [4,5,6]. The hole has been considered in the framework of the frozen orbital approximation [4]. This is the only difference between the theoretical models used by Scofield and by Trzhaskovskaya et al.; otherwise the atomic models are identical. In particular, both calculations of photoionization cross sections have been performed with allowance made for all multipole orders of the photon field. As to our calculations, the subshell cross sections are calculated with a numerical accuracy of 0.1%. The accuracy has been verified [8] by comparing our results with benchmark relativistic calculations for one-electron systems by Ichihara and Eichler [9].

In Fig. 1, the proper ratio of the cross sections calculated by us and by Scofield [2], R_σ = σ(T)/σ(S), is presented versus the atomic number Z for the 1s and 3d5/2 shells at the photon energy k = 1.254 keV. Our calculations have been performed in two different ways: using exactly the same model as Scofield (see [7,10]), that is, the "no hole" model (solid lines), and using the "hole" model [4,5,6] (dashed lines). Dark circles refer to the wrong ratio R_σ shown by Seah and Gilmore in Fig. 4(a) of [1]. As evident from Fig. 1, the solid lines practically coincide with the value R_σ = 1.0 because, as has been shown earlier, our calculations [7,10] using the "no hole" model agree with those by Scofield [2] within ∼1%. The dashed lines show that taking the hole into account results in a difference ≲12% in the values of σ for the cases under consideration. Dark circles located below R_σ = 1.0 demonstrate that the erroneous values of the ratio presented by Seah and Gilmore diverge from the correct values by up to a factor of 5. Dark circles located above R_σ = 1.0 demonstrate the invalid comparison (see point (ii)) between σ(T) and σ(S) for the 3d5/2 subshell, which is an open one for elements with Z ≤ 28.
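As an aside on point (ii), the statistical-weight splitting inferred for Scofield's open-subshell treatment can be written down in a few lines; the weights follow from the subshell degeneracies 2j + 1, and the input value below is a placeholder, not a number from either calculation.

```python
# Split a combined cross section sigma_nl between the relativistic doublet
# subshells j = l - 1/2 and j = l + 1/2 using statistical weights
# (2j + 1) / (4l + 2). Only the recombined sum sigma_nl is meaningful when
# comparing against tables computed for completely filled subshells.
def split_cross_section(sigma_nl, l):
    w_minus = (2 * (l - 0.5) + 1) / (4 * l + 2)  # weight of j = l - 1/2
    w_plus = (2 * (l + 0.5) + 1) / (4 * l + 2)   # weight of j = l + 1/2
    return sigma_nl * w_minus, sigma_nl * w_plus

# For a d shell (l = 2), the weights are 4/10 and 6/10.
sigma_d32, sigma_d52 = split_cross_section(1.0, l=2)
print(sigma_d32, sigma_d52)   # 0.4 0.6
print(sigma_d32 + sigma_d52)  # 1.0, the only quantity comparable across models
```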
We have also checked that the cross sections σ for all appropriate shells of Ne and Ba at the photon energy k = 3.0 keV calculated by us using the "no hole" model agree with the results by Scofield [3], also within ∼1%. The deviation of our values of σ obtained with the "hole" model from the data of [3] does not exceed 8%, rather than the 56% claimed in [1]. Values of the photoelectron angular distribution parameters presented in [4,5] have also been extracted by Seah and Gilmore erroneously (see point (i)). We show in Fig. 2 our correct calculations of the parameter γ (solid lines) and the erroneous values presented in Fig. 1(a) of [1] (dashed lines). The Z-dependence of γ is given for the photon energy k = 1.254 keV and for the 2s, 2p1/2, and 3p1/2 shells. As is seen in Fig. 2, for a specific shell, the solid and dashed lines coincide for low Z, when the binding energy is small compared with the photon energy. As the binding energy increases and the photoelectron energy decreases, the correct and erroneous curves become widely separated. The maximum discrepancy may be much more than the factor of 1.45 pointed out in [1]. The erroneous value of γ may differ from the correct γ by up to many times and may even change sign, as in the cases of the 2s and 2p1/2 shells. It is obvious that the results presented in Figs. 1(b), 4(b), 4(c), 5, 11, and 12 of paper [1] are erroneous for the same reason. A comparison between our calculations of the non-dipole parameters γ and δ and the calculations by Scofield [3] for several subshells of neon, copper, and barium is presented in Table I. We list the ratios R_γ = γ(T)/γ(S) and R_δ = δ(T)/δ(S), where our calculations (T) have been performed using two models: "no hole" and "hole". We omit cases where the magnitudes of γ and δ are very close to zero. The photon energy is equal to 3.0 keV. The difference observed there is associated with the fact that the k-dependence of γ has a minimum not far from k = 3.0 keV, so the curves γ(k) obtained with and without regard for the hole are shifted relative to each other, as is seen in Fig. 3 for the 3p3/2 shell of barium. Non-monotonic behaviour of the parameters, where the curves β(k), γ(k), and δ(k) may take the form of oscillations, has been discussed at length in our paper [11]. In such cases, all assumptions underlying the calculation, a minor difference in binding energies, and other calculational details may have a great impact on the values of the parameters. In summary, it should be emphasized that we have clearly demonstrated that the dramatic deviations of the results by Trzhaskovskaya et al. from those by Scofield, and consequently the deviation of our results from the experimental data reported by Seah and Gilmore, do not actually take place and are due to errors (i) and (ii) and shortcoming (iii) in paper [1]. The reasonable deviations between the two calculations are associated with the somewhat different atomic models used in [2,3] and in [4,5,6].
Synthesis of (E)-2-(1H-Tetrazol-5-yl)-3-phenylacrylonitrile Derivatives Under Green Conditions Catalyzed by a New Thermally Stable Magnetic Periodic Mesoporous Organosilica Embedded With ZnO Nanoparticles

ZnO nanoparticles embedded on the surface of a magnetic isocyanurate-based periodic mesoporous organosilica (Fe3O4@PMO-ICS-ZnO) were prepared through a modified, environmentally benign procedure for the first time and were properly characterized by appropriate spectroscopic and analytical methods and techniques used for mesoporous materials. The new thermally stable Fe3O4@PMO-ICS-ZnO material, with proper active sites, uniform particle size and surface area, was investigated for the synthesis of medicinally important tetrazole derivatives through cascade condensation and concerted reactions as a representative of the Click Chemistry concept. The desired 5-substituted 1H-tetrazole derivatives were smoothly prepared in high to quantitative yields and good purity under green conditions. Low catalyst loading, very short reaction times and the use of green solvents such as EtOH and water instead of carcinogenic DMF, as well as the easy separation and recyclability of the catalyst for at least five consecutive runs without significant loss of its activity, are notable advantages of this new protocol compared to other recently introduced procedures.

Introduction
Since its first introduction by Barry Sharpless in 1999, "Click Chemistry" has been a very popular topic in the synthesis of heterocyclic compounds. In 2001, Sharpless defined "Click Chemistry" as a set of reactions with specific characteristics: modular, wide in scope and high-yielding, producing only harmless byproducts that can be removed by non-chromatographic separation techniques 1-6. The synthesis of tetrazole derivatives can be achieved in an eco-friendly approach using green protocols, in particular water as solvent, room temperature, easy separation, low cost and good to excellent yields 7-11. 1,3-Dipolar cycloaddition reactions are among the most popular "click chemistry" reactions. When such concerted reactions are performed through a multicomponent reaction (MCR) strategy, they can be widely used in the synthesis of heterocyclic compounds, including tetrazole derivatives 12-14. The presence of four nitrogen atoms in the heteroaromatic five-membered ring of tetrazole gives rise to electron-rich, planar structural features 15-20. Furthermore, the acidic nature of tetrazoles is due to the presence of a free N-H in their structure, and this property can lead to the formation of more complex aliphatic and aromatic heterocyclic compounds 21-23. On the other hand, the heterocyclic tetrazole moiety can stabilize a negative charge by charge delocalization and shows pKa values similar to those of the corresponding carboxylic acids 7. As a result, tetrazoles can be used as metabolic substitutes (bioisosteres) for carboxylate groups. Indeed, these two groups of compounds are similar in acidity (pKa ≈ 4.9) and become deprotonated at physiological pH. Also, tetrazoles have a higher nitrogen content than other heterocycles and require almost the same electronic space as carboxylates. Consequently, these features have broadened their use in a wide range of applications, including pharmaceuticals and drug design, explosives, agrochemicals, materials science, etc. 24-30.
In particular, the tetrazole structure resembles the pharmacological core of the sartan family, which are in fact angiotensin II receptor blockers. These drugs are used to treat high blood pressure and heart failure. Angiotensin II is a bioactive peptide that narrows the vessels through contraction of the surrounding muscles 31. Among the most important drugs in this category are losartan and valsartan (Figure 1). Indeed, these two active pharmaceutical ingredients (APIs) were the first of a new class of drugs to be introduced for clinical use in hypertension 32,33. Many chemical studies on other analogous compounds have shown preserved antibacterial and antifungal properties. Tetrazole derivatives also show anti-inflammatory, analgesic, anti-cancer and anticonvulsant activities, as well as activity against diabetic kidney disease 26. In the more than one hundred years since the preparation of the first tetrazole compound, many scientists have devised methods to prepare tetrazole compounds faster, more easily and less dangerously. The most common of these methods is the 1,3-dipolar cycloaddition reaction between nitriles and azides, first reported by Hantzsch and Vagt in 1901 in the reaction of azide ions and hydrazoic acid under pressure, a [3+2] cycloaddition reaction 34-36. Among the available methods, Sharpless and his colleagues reported a [3+2] cycloaddition between an azide and p-toluenesulfonyl cyanide (p-TSCN) under solvent-free conditions with good yields and easy separation 37-40. All these protocols have disadvantages that have limited their application: the high toxicity of the materials, the explosive nature and low boiling point of hydrazoic acid (37 °C), and the use of solvents that contaminate the environment. Hence, the use of high-pressure systems has been shown to be an alternative to these methods. The most recent method for the synthesis of tetrazoles involves a multicomponent reaction (MCR) strategy between aldehydes, sodium azide and nitriles. One of the advantages of this strategy is the use of available, easy-to-handle and inexpensive basic compounds, namely aldehydes and sodium azide, along with relatively expensive nitriles, which makes these reactions economically viable and applicable on an industrial scale 41-45. Periodic mesoporous organosilicas (PMOs) have emerged as one of the important topics of research in recent years. PMO materials, which were reported for the first time in 1999, are a new branch of mesoporous materials: organic-inorganic hybrids with highly ordered structures and uniform pore sizes 9,46-59. PMOs are essentially unique because of the advantage of combining a strong organic-inorganic porous framework with the inherent properties of the organic components 49,60-63. In this regard, precursors of bridged organosilica having heteroaromatic isocyanurate moieties with high thermal stability and low toxicity are very desirable 59,62-65. On the other hand, PMOs demonstrate unique characteristics such as large and hollow spaces, high surface area, regular cavity wall structure, low density, good membrane permeability and material loading in large quantities 62,63,66-81. Therefore, PMOs have been used effectively in many applications such as catalysis, drug and gene delivery, gas and molecule absorption, and sensors.
Hence, we decided to use this well-dispersed material as an efficient catalyst for the three-component synthesis of functionalized tetrazoles in water as a green solvent and with very short reaction times. To address the above challenges, we explored the catalytic activity of the new ZnO nanoparticles embedded on the surface of the magnetic isocyanurate-based periodic mesoporous organosilica (Fe3O4@PMO-ICS-ZnO) for the cascade reaction of different aromatic aldehydes with malononitrile to afford the corresponding Knoevenagel intermediate and its subsequent [3+2] cycloaddition with sodium azide 82 (Scheme 1). Recovery and reuse of catalysts is an important aspect of catalytic reactions, especially in organic synthesis. Compared to conventional methods for catalyst separation from reactants, the use of a magnetic catalyst that is easily separated by a magnet placed near the reaction solution is an intelligent approach that increases separation speed and efficiency and reduces loss of catalyst and product during separation 64,83,84.

Results and Discussion
After preparation of Fe3O4@PMO-ZnO, the FT-IR spectrum used to confirm the synthesis of the as-made catalyst is presented in Figure 2. In this analysis, the O-H bond peak of the silanol groups present on the PMO surface was observed at 3415 cm-1. Also, due to the presence of the heterocyclic isocyanurate within the framework of the prepared nanocatalyst, sharp absorption peaks were observed at 1471 and 1689 cm-1, related to the C-N and carbonyl vibrations, respectively, of the isocyanurate ring. A vibrating sample magnetometer was used to measure the magnetism of the as-made catalytic sample at 300 K. As can be seen in Figure 3, no hysteresis is found, and the "S"-like curve is also proof of the superparamagnetic behaviour of the synthesized compound at room temperature. The magnetization was strongly enhanced by the external magnetic field strength in the low-field region, with a saturation magnetization of 50 emu/g for Fe3O4@PMO-ZnO between -15 kOe and 15 kOe. Thermogravimetric analysis (TGA) of the prepared catalyst was performed at temperatures between 40 °C and 800 °C. Figure 4 shows the weight loss of the Fe3O4-magnetized PMO with ZnO nanoparticles embedded on the surface. Due to the presence of the isocyanurate bridge incorporated into the silica framework containing organic parts, the composite showed high thermal stability up to about 480 °C. The mass loss observed at high temperature is due to the organic nature of the PMO components. The structure of the Fe3O4@PMO-ZnO nanoparticles was analyzed by XRD; the wide-angle XRD patterns are shown in Figure 5. High-resolution field emission scanning electron microscopy (FE-SEM) is an appropriate method for characterizing sample sizes on the nano- to micron scale. Due to the different imaging modes of this system, the size of the synthesized particles can be completely characterized, and the morphology and distribution of the particles can be analyzed. Figure 6c shows particle sizes in the range of 45 to 60 nm, indicating homogeneity, a well-defined distribution of particles and a suitable morphology for the Fe3O4@PMO-ZnO nanocatalyst. The energy dispersive X-ray (EDX) spectrum of the catalyst is shown in Figure 7. This analysis was carried out to verify the presence of the expected elements in the material framework. Signals of iron, nitrogen, silicon, carbon, oxygen and zinc were observed in the sample.
It is well known that for magnetic iron oxides three peaks are observed for iron in the EDX spectrum. These analyses successfully confirm that the ZnO nanoparticles are well embedded onto the magnetic catalyst. Furthermore, nitrogen adsorption and desorption isotherms were determined, using the BET and BJH methods, respectively, to obtain the specific surface area of catalyst 1 and the mesopore size of the sample (Figure 8). The results demonstrated that this material has a typical mesoporous structure and a type IV isotherm, which indicates the presence of cylindrical pores on the mesoporous scale. The specific surface area is approximately 194.88 m2/g, the average pore size is 7.312 nm and the pore volume is 0.3564 cm3/g.

Optimization of conditions for the synthesis of tetrazole derivatives in the presence of the magnetic Fe3O4@PMO-ZnO nanocatalyst (1)
The effects of catalyst loading, solvent and temperature on reaction time and yield were systematically investigated in this step. The results are summarized in Table 1. The amount of catalyst, the temperature and the type of solvent, as well as the use or avoidance of solvent, were investigated in order to obtain optimal conditions for the synthesis of 1H-tetrazole derivatives. Hence, the reaction of 4-chlorobenzaldehyde (4a, 1 mmol), malononitrile (2, 1 mmol) and sodium azide (3, 1.2 mmol) was selected as the model reaction. Initially, the model reaction was examined in the absence of catalyst under various conditions, such as in EtOH at ambient temperature and under reflux conditions. The dependence of the model reaction on catalyst and temperature was evident from the very low yield of the desired product, (E)-3-(4-chlorophenyl)-2-(1H-tetrazol-5-yl)acrylonitrile (5a). Higher temperature and the use of EtOH under reflux conditions afforded a much better result (entries 1 and 2, Table 1). We then investigated the effect of the catalyst loading on reaction time and product yield (entries 3-5, Table 1). Increasing the catalyst loading from 10 mg to 15 and 20 mg had very little effect on the product yield. This scrutiny demonstrates the high potential of the catalyst in the synthesis of the expected compounds, which originates from the very large surface area of the PMOs. Next, the effect of solvents such as water, a water/EtOH mixture, toluene, DMF, ethyl acetate and acetonitrile on the reaction was examined. Remarkably, the reaction in water is, in terms of time and efficiency, very close to reflux in EtOH. However, because of the high solubility of the desired product in EtOH and its formation as a precipitate in water, reflux in EtOH was taken as the optimum condition. Compared with catalyst 1, the use of non-functionalized PMO and magnetic PMO gave less efficient reaction times, which shows the effective role of the ZnO nanoparticles in the progress of the reaction under ideal conditions (entries 17 and 18, Table 1).

Extending the optimum conditions to the preparation of 1H-tetrazole derivatives in the presence of the magnetic PMO with ZnO nanoparticles
Excellent yields were obtained from a variety of aromatic, heterocyclic and aliphatic aldehydes within a few minutes at the specified temperature.
As the data in Table 2 show, high to excellent yields were achieved in very short times for the desired products 5a-v. In this regard, after the model reaction, other aromatic aldehydes with electron-withdrawing groups were tested, such as 2-chlorobenzaldehyde (entry 9, Table 2), 3-bromobenzaldehyde (entry 13) and 4-nitrobenzaldehyde (entry 4, Table 2). Aldehyde derivatives with electron-donating groups, such as 4-methoxybenzaldehyde, 2-hydroxybenzaldehyde, 4-methylbenzaldehyde and 2,4-dimethylaminobenzaldehyde (entries 6, 8, 3 and 15 in Table 2, respectively), also underwent the reaction in very good time and yield. The reaction times of these aldehydes are generally shorter than those of aldehydes bearing a strongly electron-withdrawing group such as NO2. Heterocyclic aldehydes, like those with donor groups, also complete the reaction in an appropriate time, due to the presence of an atom with a free electron pair that increases the electron density of the aldehyde and increases the rate for these derivatives (entries 10 and 11, Table 2).

Proposed mechanism for the preparation of 5-substituted 1H-tetrazole derivatives
A synthetic pathway for the one-pot preparation of (E)-2-(1H-tetrazol-5-yl)-3-phenylacrylonitrile derivatives using the acidic ZnO nanoparticles embedded on the surface of the magnetic isocyanurate-based periodic mesoporous organosilica (Fe3O4@PMO-ICS-ZnO, 1) is shown in Scheme 2. In the first reaction step, the aromatic aldehyde (4) is activated by Fe3O4@PMO-ICS-ZnO to condense with the malononitrile C−H acid (2), finally affording the Knoevenagel arylidene/heteroarylidene malononitrile intermediate (III). This intermediate is subsequently involved in a cascade reaction with sodium azide (3) to produce the five-membered tetrazole ring through a concerted [3+2] cycloaddition. The desired products 5a-v are separated easily from the reaction mixture after completion using an external magnet.

Comparison of the catalytic activity of the nano-ordered Fe3O4@PMO-ICS-ZnO (1) in the synthesis of tetrazole derivatives with other catalysts
Table 3 compares previously reported methods from the scientific literature for the synthesis of tetrazole derivatives with the present method. In the past, the use of solvents harmful to environmental health and extremely long reaction times have made those methods less attractive for the synthesis of these biologically active compounds. The proposed method, in contrast, uses EtOH as a green solvent, features very short reaction times and excellent yields, allows easy separation of the catalyst thanks to its magnetization, and gives a pure product without the use of any dangerous or toxic reagents; these are the advantages of the present strategy. One of the advantages of using heterogeneous catalysts is their easy separation from the reaction mixture and their reuse in catalytic systems. Heterogeneous catalysts with magnetic properties are the easiest to separate. The catalytic activity and efficiency of the periodic mesoporous organosilica for the preparation of 5-substituted 1H-tetrazoles were evaluated with the model reaction. Over four consecutive reuses of the catalyst, only a slight decrease in reaction yield and no noticeable change in reaction time were observed. This indicates the reusability of the catalyst, with structural stability and high efficiency. The diagram of the catalyst recycling is given below (Figure 9).
Reagents and instruments
Chemical reagents of high purity were purchased from Merck and Aldrich, and only liquid aldehydes were distilled before use. All reactions and the purity of the products were monitored by thin-layer chromatography (TLC) using aluminium plates coated with a 0.2 mm layer of silica gel F254 (Merck); a UV lamp with a wavelength of 254 nm was used for visualization. Product identification was performed using a Shimadzu infrared spectrometer with potassium bromide pellets, and 500 MHz Bruker Avance nuclear magnetic resonance spectroscopy was performed for hydrogen and carbon nuclei in DMSO at room temperature. The BET specific surface area analysis was performed with an ASAP 2020 instrument, and thermogravimetric analysis was obtained with a Bahr STA 504 instrument. X-ray diffraction patterns were recorded with a STOE apparatus, and field emission scanning electron microscopy images were recorded with a Zeiss (EM10C) device. Melting points were measured using an Electrothermal 9100 device. Typical isocyanurate (ICS, Aldrich) and 0.03 mol tetraethyl orthosilicate (TEOS, 3.12 g) were added dropwise into that solution.

General procedure for the synthesis of 5-substituted 1H-tetrazole derivatives
A mixture of 4-chlorobenzaldehyde (1 mmol), malononitrile (1 mmol) and sodium azide (1.2 mmol) was heated in the presence of the nanoporous catalyst (1) at 80 °C under EtOH reflux conditions (Table 1). Under the conditions mentioned, the reaction progress was monitored by TLC (EtOAc/n-hexane, 1:3). After completion of the reaction in a very short time, the catalyst was easily separated using an external magnet, and the desired product was obtained by adding distilled water to the bottom of the container. The organic layer obtained was washed several times with distilled water to remove impurities until the washings were colourless. The solid product was easily recrystallized from EtOH for purification. The formation of the desired product was confirmed by 1H NMR and 13C NMR measurements, melting point, FT-IR, and comparison with reported data.

Conclusions
ZnO nanoparticles embedded on the surface of a magnetic isocyanurate-based periodic mesoporous organosilica (Fe3O4@PMO-ICS-ZnO) have been prepared through a modified, environmentally benign procedure for the first time and properly characterized by appropriate spectroscopic and analytical methods and techniques used for mesoporous materials. The new thermally stable Fe3O4@PMO-ICS-ZnO material, with proper active sites, uniform particle size and surface area, was found to be a superior magnetic catalyst for the synthesis of medicinally important tetrazole derivatives through cascade condensation and concerted reactions as a representative of the Click Chemistry concept. The desired 5-substituted 1H-tetrazole derivatives were smoothly prepared in high to quantitative yields and good purity under green conditions. Low catalyst loading, very short reaction times and the use of green solvents such as EtOH and water instead of carcinogenic DMF, as well as the easy separation and recyclability of the catalyst for at least five consecutive runs without significant loss of its activity, are notable advantages of this new protocol compared to other recently introduced procedures.
Rare functional missense variants in CACNA1H: What can we learn from Writer's cramp?

Writer's cramp (WC) is a task-specific focal dystonia that occurs selectively in the hand and arm during writing. Previous studies have shown a role for genetics in the pathology of task-specific focal dystonia. However, to date, no causal gene has been reported for task-specific focal dystonia, including WC. In this study, we investigated the genetic background of a large Dutch family with autosomal dominantly inherited WC that was negative for mutations in known dystonia genes. Whole exome sequencing identified 4 rare variants of unknown significance that segregated in the family. One candidate gene was selected for follow-up, Calcium Voltage-Gated Channel Subunit Alpha1 H, CACNA1H, due to its links with the known dystonia gene Potassium Channel Tetramerization Domain Containing 17, KCTD17, and with paroxysmal movement disorders. Targeted resequencing of CACNA1H in 82 WC cases identified another rare, putatively damaging variant in a familial WC case that did not segregate. Using structural modelling and functional studies in vitro, we show that both the segregating p.Arg481Cys variant and the non-segregating p.Glu1881Lys variant very likely cause structural changes to the Cav3.2 protein and lead to similar gains of function, as seen in an accelerated recovery from inactivation. Both mutant channels are thus available for re-activation earlier, which may lead to an increase in intracellular calcium and increased neuronal excitability. Overall, we conclude that rare functional variants in CACNA1H need to be interpreted very carefully, and additional studies are needed to prove that the p.Arg481Cys variant is the cause of WC in the large Dutch family.

Writer's cramp (WC) is a task-specific focal dystonia that occurs selectively in the hand and arm during writing [1]. WC mainly affects the distal muscles of the arm but may spread to more proximal muscles and even to the nondominant hand over time. The prevalence of WC, the most common form of task-specific dystonia, is estimated at 2.7:100,000 [2]. Task-specific focal dystonia is thought to have a multifactorial aetiology, given its increased familial occurrence, but no clear family history is present in the majority of cases [3]. A few genes have been associated with either WC or focal dystonia [4], verifying a role for genetics in the pathology of task-specific focal dystonia. In the present study, we aimed to identify the underlying cause in a Dutch family with genetically unexplained (no mutations found in known dystonia genes), dominantly inherited WC. The index patient (II-3; Fig. 1a) developed WC in his early twenties. At 50 years of age, he showed severe mobile flexion dystonia in the thumb of the right hand combined with extension in the wrist during writing, with an Arm Dystonia Disability Scale (ADDS) score of 3. His mother (I-2, Fig. 1a) noticed difficulties with writing from the age of 54. At examination at age 88, she showed a mobile, predominant flexion dystonia with tremor of the right hand (ADDS 3) during writing. The sister of the index patient (II-6, Fig. 1a) exhibited right-sided WC characterized by a tremulous writing pattern (ADDS 2) from the age of 36 years. Her son (III-7) suffered from WC from the age of 18 years. He showed dystonic posturing of the right thumb during writing. The daughter of patient II-3 is also reported to have difficulties with writing but has not been examined nor included in the genetic analysis. After performing whole exome sequencing (WES) in II:3 and III:7, as described before [5], we discovered several rare missense variants shared between the two affected cases, but only 4 variants segregated with the disease phenotype after Sanger sequencing (Table 1). All 4 variants exhibited Combined Annotation Dependent Depletion (CADD) Phred scores higher than 10 and were predicted to be probably damaging by MutationTaster and/or PolyPhen-2. Based on these data, the variants are classified as variants of unknown significance, and thus we could not define any of them as likely benign or likely pathogenic.
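The filtering criteria just described (rare, shared by both affected relatives, CADD Phred > 10) can be expressed as a short sketch. This is illustrative only: the rows, field names, and the exact rarity cutoff are invented, not the pipeline used in the study.

```python
# Toy candidate-variant filter mirroring the criteria in the text above.
variants = [
    {"gene": "CACNA1H", "cadd_phred": 23.4, "maf": 0.0001, "segregates": True},
    {"gene": "GENE_X", "cadd_phred": 8.9, "maf": 0.0003, "segregates": True},
    {"gene": "GENE_Y", "cadd_phred": 15.2, "maf": 0.02, "segregates": False},
]

MAF_CUTOFF = 0.001  # "rare"; the study's exact cutoff is not stated here

candidates = [
    v for v in variants
    if v["maf"] < MAF_CUTOFF and v["cadd_phred"] > 10 and v["segregates"]
]
print([v["gene"] for v in candidates])  # ['CACNA1H']
```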
Notably, an association between CACNA1H, which encodes a subunit of the neuronal voltage-gated T-type calcium channel Calcium Voltage-Gated Channel Subunit Alpha1 H, and dystonia has been proposed because a weighted dystonia gene co-expression network [6] directly connected CACNA1H to the known dystonia gene KCTD17, which encodes the protein Potassium Channel Tetramerization Domain Containing 17, leading to the assumption that both proteins function in the same signalling pathway. This was not the case for the other three candidate genes. Additionally, novel and rare variants in CACNA1H have been linked to childhood absence and idiopathic generalized epilepsy, familial developed WC in his early twenties. At 50 years of age, he showed severe mobile flexion dystonia in the thumb of the right hand combined with extension in the wrist during writing, with an Arm Dystonia Disability Scale (ADDS) score of 3. His mother (I-2, Fig. 1a) noticed difficulties with writing from the age of 54. At examination at age 88, she showed a mobile, predominant flexion dystonia with tremor of the right hand (ADDS 3) during writing. The sister of the index patient (II-6, Fig. 1a) exhibited right-sided WC characterized by a tremulous writing pattern (ADDS 2) from the age of 36 years. Her son (III-7) suffered from WC from the age of 18 years. He showed dystonic posturing of the right thumb during ◂ hyperaldosteronism, amyotrophic lateral sclerosis and severe congenital amyotrophy [7][8][9][10]. Given that epilepsy overlaps with paroxysmal movement disorders such as focal dystonia [11], and the observation that CACNA1H functions in similar biological pathways as other known dystonia genes, we attempted to validate a role for CAC-NA1H in WC by screening the complete coding region of CACNA1H using a targeted array in a cohort of 82 genetically undiagnosed WC cases (both sporadic and familial). We identified 3 additional rare missense variants in CACNA1H in 3 WC cases: the c.5989G > A p.Ala1997Tyr variant predicted to be benign by various programs, the c.314T > G p.Val105Gly variant that was also detected in a patient with spinocerebellar ataxia type 3, and variant c.5641G > A p.Glu1881Lys, which was predicted to be damaging but did not segregate (Fig. 1b). This data reinforces that CACNA1H is relatively tolerant for rare missense variants, as confirmed by its gene constraint score of 1.17 (gnomADv3.1) [12]. To further investigate the consequence of rare missense variants in CACNA1H, we performed structural and functional analysis of the two putative damaging variants, p.Arg481Cys and p.Glu1881Lys. Structural analysis using the Protein Data Bank (PDB) entry 5GJW, showed that the p.Arg481Cys caused a likely loss of stability of an α-helix bundle and likely affects the α-helix bundle interactions in the interface with the main domain (Fig. 1c). Additionally, the presence of a cysteine at position 481 could lead to the formation of a disulphide bond with a native cysteine at position 847, which is located within the bundle, and this may cause conformational restraints that influence protein folding, stability and function. The introduction of the positively charged lysine at position 1881 due to the p.Glu1881Lys variant is likely to cause movement of the positively charged arginines at positions 1596 and 1597, changing the protein structure in this interface (Fig. 1d). 
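The triage logic described above (a CADD Phred score above 10 plus a damaging prediction from Mutation Taster and/or PolyPhen, followed by segregation testing) is straightforward to express in code. The sketch below is illustrative only, not the authors' pipeline; the variant record fields and the example values are hypothetical.

```python
# Illustrative sketch (not the study's pipeline): triaging rare missense
# variants with the criteria described above, i.e. CADD Phred > 10 and a
# "damaging" call from Mutation Taster and/or PolyPhen, restricted to
# variants that segregate in the family. All field names and example
# values are hypothetical.

from dataclasses import dataclass

@dataclass
class Variant:
    gene: str
    hgvs_p: str
    cadd_phred: float      # Combined Annotation Dependent Depletion score
    mutation_taster: str   # e.g. "disease_causing" or "polymorphism"
    polyphen: str          # e.g. "probably_damaging" or "benign"
    segregates: bool       # confirmed by Sanger sequencing in the family

def is_candidate(v: Variant) -> bool:
    """Flag variants of unknown significance worth functional follow-up."""
    predicted_damaging = (
        v.mutation_taster == "disease_causing"
        or v.polyphen == "probably_damaging"
    )
    return v.cadd_phred > 10 and predicted_damaging and v.segregates

variants = [
    Variant("CACNA1H", "p.Arg481Cys", 24.0, "disease_causing",
            "probably_damaging", True),  # example values, not measured data
]
print([f"{v.gene}:{v.hgvs_p}" for v in variants if is_candidate(v)])
```

Such filters are deliberately permissive: as the paper stresses, variants that pass them remain of unknown significance until structural and functional data are available.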
Furthermore, we performed functional analysis of the mutant and wild type (WT) Cav3.2 channels in transiently transfected HEK tsA-201 cells, as done before [13]. Neither variant changed the conductance of the channel, as we observed a similar current density compared to WT Cav3.2 (Fig. 1e, f). However, the p.Glu1881Lys variant did cause a small, significant shift in the mean half-inactivation potential toward more positive potentials, and both variants led to an accelerated recovery from inactivation compared to WT Cav3.2 (Fig. 1g-j). This implies that Cav3.2 channels carrying the p.Arg481Cys and p.Glu1881Lys variants are less likely to inactivate and are available for re-activation earlier. This gain of function may lead to an increase in intracellular calcium and increased neuronal excitability [14,15]. In summary, using WES, we identified 4 rare variants of unknown significance that segregated with WC in the family. Given the established link between CACNA1H and the previously reported dystonia gene KCTD17 and its link with paroxysmal movement disorders, we focused our additional studies on a putative role of CACNA1H in WC. Our follow-up work highlights the need for caution when interpreting in silico predictions that rare missense variants in large genes like CACNA1H are damaging. We show that both the segregating p.Arg481Cys variant and the non-segregating p.Glu1881Lys variant very likely cause structural changes to the protein and lead to a similar gain of function of the Cav3.2 channel. Whether the p.Arg481Cys variant is the cause of disease in the large Dutch family remains to be proven, but our study corroborates that rare, functional missense variants in CACNA1H are quite common and may associate with numerous disorders, including WC.
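For readers unfamiliar with the electrophysiological read-outs above, the following is a minimal sketch, on synthetic data, of how a half-inactivation potential and a recovery-from-inactivation time constant are commonly estimated (a Boltzmann fit and a single-exponential fit, respectively). It does not reproduce the study's recording protocol or its measurements.

```python
# Minimal sketch on synthetic data (not the study's recordings): estimating
# the half-inactivation potential from a Boltzmann fit of steady-state
# inactivation, and the recovery-from-inactivation time constant from a
# single-exponential fit. Voltages in mV, times in ms; all values invented.

import numpy as np
from scipy.optimize import curve_fit

def boltzmann(v, v_half, k):
    """Available channel fraction vs. prepulse voltage."""
    return 1.0 / (1.0 + np.exp((v - v_half) / k))

def recovery(t, tau):
    """Fraction of channels recovered t ms after an inactivating pulse."""
    return 1.0 - np.exp(-t / tau)

rng = np.random.default_rng(0)
v = np.arange(-110, -40, 10.0)
avail = boltzmann(v, -78.0, 6.0) + rng.normal(0, 0.02, v.size)
t = np.array([10, 25, 50, 100, 200, 400, 800], dtype=float)
frac = recovery(t, 120.0) + rng.normal(0, 0.02, t.size)

(v_half, k), _ = curve_fit(boltzmann, v, avail, p0=(-80.0, 7.0))
(tau,), _ = curve_fit(recovery, t, frac, p0=(100.0,))

# A depolarizing shift in v_half and a smaller tau would correspond to the
# gain-of-function pattern reported above for the two Cav3.2 variants.
print(f"V1/2 = {v_half:.1f} mV, slope k = {k:.1f} mV, tau_rec = {tau:.0f} ms")
```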
2021-01-22T14:38:39.303Z
2021-01-21T00:00:00.000
{ "year": 2021, "sha1": "c5cf3beb876bbf319cc638e08a4d5bcf3e8828e0", "oa_license": "CCBY", "oa_url": "https://molecularbrain.biomedcentral.com/track/pdf/10.1186/s13041-021-00736-3", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "a6b3c6ac95b84840721f835d5152f9e3c8e96784", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
264554889
pes2o/s2orc
v3-fos-license
SPIRE: a Searchable, Planetary-scale mIcrobiome REsource Abstract Meta’omic data on microbial diversity and function accrue exponentially in public repositories, but derived information is often siloed according to data type, study or sampled microbial environment. Here we present SPIRE, a Searchable Planetary-scale mIcrobiome REsource that integrates various consistently processed metagenome-derived microbial data modalities across habitats, geography and phylogeny. SPIRE encompasses 99 146 metagenomic samples from 739 studies covering a wide array of microbial environments and augmented with manually-curated contextual data. Across a total metagenomic assembly of 16 Tbp, SPIRE comprises 35 billion predicted protein sequences and 1.16 million newly constructed metagenome-assembled genomes (MAGs) of medium or high quality. Beyond mapping to the high-quality genome reference provided by proGenomes3 (http://progenomes.embl.de), these novel MAGs form 92 134 novel species-level clusters, the majority of which are unclassified at species level using current tools. SPIRE enables taxonomic profiling of these species clusters via an updated, custom mOTUs database (https://motu-tool.org/) and includes several layers of functional annotation, as well as crosslinks to several (micro-)biological databases. The resource is accessible, searchable and browsable via http://spire.embl.de. Introduction Life on Earth is dominated by microbes: bacteria, archaea and small eukaryotes shape our world by driving biogeochemical cycles across ecosystems (1), they enable macroscopic life as plant and animal symbionts (2), and they represent by far the greatest biodiversity among known life (3). Yet most of this diversity remains biological 'dark matter' (4): although meta'omic techniques enable their study directly from sequencing data, the vast majority of microbes eludes laboratory cultivation and only a small fraction of the functional space encoded by microbial genes has been characterized (5,6). While sampling efforts have increased exponentially and generated petabytes of data in recent years (7), most major microbial habitats remain understudied to the extent that almost every newly sequenced metagenome adds 'novel' species (as inferred from metagenome-assembled genomes, MAGs) and thousands of 'novel' genes of unknown function to the census (8).
The bulk of metagenomic data is generated in individual studies to address specific research questions. Heterogeneity in sample preparation (9), sequencing protocols and bioinformatic processing workflows (10,11) complicate comparisons of findings across studies. Several initiatives have sought to integrate and consolidate datasets by re-processing them using consistent pipelines. For example, QIITA (12), MGnify (7) or the Microbe Atlas Project (https://microbeatlas.org/) host millions of amplicon samples, whereas other projects, such as curatedMetagenomicData (13), GMrepo (14) and the Ocean Microbiomics Database (15), focus on taxonomic and functional profiles of human-associated or ocean metagenomes. Large MAG catalogs for multiple biomes are hosted online as part of the DOE's IMG/M (16) and EBI's MGnify (7) resources. Moreover, the Genome Taxonomy Database (GTDB, 17) has advanced the field by consistently organizing both isolate genomes and quality-filtered MAGs into a common prokaryotic reference tree that guides standardized, phylogeny-informed taxonomies (18)(19)(20). The GTDB encompasses 85 205 species-level genome clusters across 181 phyla (as of release r214, April 2023), two thirds of which are represented only by MAGs, while also providing widely used tools for genome quality control (21) and taxonomic classification (22). Overall, existing resources focus on either providing large gene or genome catalogs, on functional and taxonomic profiling, or on harmonizing contextual data given heterogeneous data submission and annotation practices, and are often restricted to individual microbial habitats or cordon data on different habitats off into distinct subsets. Here we introduce SPIRE, a Searchable, Planetary-scale, Integrated mIcrobiome REsource to study microbial diversity and function at global habitat, geographical and phylogenetic scales. As detailed below, SPIRE version 1 encompasses 99 146 consistently processed whole-genome shotgun metagenomic samples from 739 distinct studies, integrated across environments and amended with manually curated contextual data, based on a newly developed lightweight 'microntology' of 92 terms describing microbial habitats and lifestyles. SPIRE combines 1.16 million newly constructed MAGs of medium or high quality (23) with the 907k high-quality reference genomes in proGenomes3 (24), clustered into 133 402 species-level genome clusters, 78 804 of which are unclassifiable at species level using current tools (22). Species clusters are profilable using mOTUs (25) via an updated custom database, and pre-computed taxonomic profiles across all 99k metagenomic samples will be released as part of the resource. SPIRE further comprises 35 billion metagenomically called open reading frames (ORFs) with various layers of functional annotation, linked to clusters in the Global Microbial Gene Catalogue (GMGC, 8). SPIRE provides consistent integration of these heterogeneous data modalities and is designed to interoperate with other (micro-)biological resources, such as proGenomes (24, https://progenomes.embl.de), the GMGC (8, https://gmgc.embl.de), eggNOG (26, http://eggnog6.embl.de) and metaMap (https://metamap.biobyte.de/), among others. The resource can be accessed, browsed, and searched via https://spire.embl.de.
Figure 1. Overview of sampled habitats in SPIRE, as a subset of annotated 'microntology' terms (see Table S1). Microntology terms are assigned using a 'multi-tag' system, meaning that individual samples can be annotated with multiple terms of varying granularity and redundantly within a flat hierarchy (e.g. a human fecal metagenome will be annotated as 'host-associated, animal host, mammalian host, human host, digestive tract, intestine', whereas a mangrove-associated sample carries tags from both the 'aquatic' and 'terrestrial' term space, while moreover possibly being annotated as 'host-associated, plant host'). Shown is the total number of samples annotated to a subset of microntology terms under this system.
Metagenome collection and dataset curation The core dataset underlying SPIRE was defined using a semiautomatic process, combining three data sources: (i) samples in the European Nucleotide Archive (ENA) meeting the criteria 'library_source = METAGENOMIC AND library_strategy = WGS AND instrument_platform = ILLUMINA AND base_count >= 10^9 AND average read length >= 100' were selected from all projects where >= 20 samples satisfied the above criteria as of Sep 30th 2022; (ii) metagenomic samples available via the JGI's IMG/M resource (27) on Sep 30th 2019 (to comply with JGI data policies and embargo periods); (iii) manually selected 'allowlisted' studies of particular interest (e.g. providing data on exotic environments). For the resulting list, ENA project accessions were manually matched to publications where possible; in case of data submitted by the JGI, where each sample is associated with a distinct project accession, 'studies' were defined based on matched publications and as consistent groups based on sample metadata provided via IMG/M. The metagenomic sample set was further filtered and curated by (i) removing amplicon and isolate genome sequencing datasets erroneously annotated as shotgun metagenomes; (ii) identifying and removing erroneously submitted datasets (e.g. where both mates in 'paired end' data were identical); (iii) identifying and removing duplicates (submitted under distinct project or sample accessions); (iv) removing samples from controlled experimental setups (e.g. laboratory mice, pathogen challenges or defined in vitro communities); (v) flagging special cases such as microcosms, paleobiological samples or pre-enriched samples; (vi) resolving misfits with the European Nucleotide Archive (ENA) and Sequence Read Archive (SRA) data model, e.g. if distinct biological samples were erroneously submitted under the same biosample accession, but distinct experiment or run accessions; (vii) identifying and combining technical replicates (distinct experiment accessions) for the same biological sample. For the resulting list, raw sequencing data was downloaded from the ENA. Following these steps, the final dataset in SPIRE comprises 99 146 metagenomic samples across 739 distinct studies.
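As a rough illustration of the ENA selection step quoted above, the criteria translate into a simple metadata filter followed by a per-project count. The sketch below is not SPIRE's actual pipeline; the DataFrame and its column names are hypothetical stand-ins for fields retrieved from ENA.

```python
# Illustrative sketch only: applying the ENA selection criteria quoted above
# to a run-metadata table, then keeping projects with >= 20 qualifying runs.
# The DataFrame `runs` and its column names are hypothetical.

import pandas as pd

runs = pd.DataFrame({
    "study_accession":     ["PRJX1"] * 25 + ["PRJX2"] * 5,
    "library_source":      ["METAGENOMIC"] * 30,
    "library_strategy":    ["WGS"] * 30,
    "instrument_platform": ["ILLUMINA"] * 30,
    "base_count":          [2e9] * 30,
    "avg_read_length":     [150] * 30,
})

mask = (
    (runs["library_source"] == "METAGENOMIC")
    & (runs["library_strategy"] == "WGS")
    & (runs["instrument_platform"] == "ILLUMINA")
    & (runs["base_count"] >= 1e9)          # base_count >= 10^9
    & (runs["avg_read_length"] >= 100)     # average read length >= 100
)
eligible = runs[mask]

# Keep only projects where >= 20 samples satisfy all criteria.
counts = eligible.groupby("study_accession").size()
selected = counts[counts >= 20].index
final = eligible[eligible["study_accession"].isin(selected)]
print(final["study_accession"].value_counts())  # only PRJX1 survives here
```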
Curation of contextual data and overview of sampled environments Contextual data for each metagenomic sample was sourced (i) via annotation fields in ENA, (ii) via IMG/M metadata tables where applicable and (iii) directly from matched publications. Information was consolidated into common fields (e.g. latitude and longitude data were manually harmonized across different submitted formats). All samples were manually annotated against a newly developed 'microntology' (see Table S1), a shallow and lightweight ontology of 92 terms to describe microbial habitats and lifestyles, crosslinked to terms in established resources such as the EnvO (28) or UBERON (29) ontologies. SPIRE sample annotation uses a 'multiple tag' system, meaning that each sample is described using a combination of concurrent tags, rather than one specific term in a (deep) hierarchy, allowing an annotation with increased flexibility, yet compatibility with established ontologies. As a result, for example, 68% of the ∼100k samples in SPIRE are annotated as 'host-associated' (66.5% as animal-associated, 56% as human-associated, 1.5% as plant-associated); 17% are aquatic samples (including 7.6% marine and 5.5% fresh water); 13.5% are terrestrial (including 6.4% soil samples); 10.3% are from anthropogenic or human-impacted environments (including 6.6% from built environments); see Figure 1 for details. Moreover, data included in SPIRE cover pole-to-pole latitudes, with samples from ∼200 countries and territories. All SPIRE MAGs were taxonomically classified using gtdb-tk v2.11 against release r207 (22), and consensus taxonomy for species clusters at each taxonomic level was assigned based on a majority vote, with manual resolution of a few remaining conflicting labels. As noted above, 78 804 of the 133 402 species-level clusters could not be classified at species level (Figure 2). This large proportion of 'novel' species relative to the GTDB may in part be due to a conservative parametrization of the gtdb-tk classifier (favoring specificity over sensitivity), but it indicates that SPIRE covers a vast amount of previously uncharacterized and undescribed microbial diversity. Notably, 28 856 SPIRE clusters unclassified at species level contain more than a single genome. Functional annotation Detection of orthologs and inference of putative function for metagenomically-called ORFs (see above) were performed using eggNOG-mapper v2 (44,45). ORFs were further annotated for putative roles in antibiotics resistance using DeepARG (46). Database design SPIRE relies on a mongoDB database as its foundation. Within this system, a repository of samples/MAGs and their attributes is stored. This data can be conveniently accessed through the web-based interface. Structured data such as annotation of genes and genomes is stored in a relational database management system to allow complex and time-efficient queries.
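The majority-vote consensus taxonomy described above can be sketched as follows; this is a minimal illustration under stated assumptions, not the SPIRE code, and the input labels are invented examples. Per rank, the most common non-missing gtdb-tk label wins, and ranks with no votes are left unclassified.

```python
# Minimal sketch of a per-rank majority vote over the gtdb-tk labels of all
# genomes in one species cluster. Input records are hypothetical; ranks with
# no non-missing label stay None (unclassified), as for many SPIRE clusters.

from collections import Counter

RANKS = ["domain", "phylum", "class", "order", "family", "genus", "species"]

def consensus_taxonomy(genome_labels: list[dict]) -> dict:
    """Majority vote per rank across all genomes of one species cluster."""
    consensus = {}
    for rank in RANKS:
        votes = Counter(
            g[rank] for g in genome_labels if g.get(rank)  # skip missing
        )
        consensus[rank] = votes.most_common(1)[0][0] if votes else None
    return consensus

cluster = [
    {"domain": "Bacteria", "phylum": "Bacteroidota", "genus": "Prevotella",
     "species": None},  # a MAG unclassified at species level
    {"domain": "Bacteria", "phylum": "Bacteroidota", "genus": "Prevotella",
     "species": "Prevotella copri"},
]
print(consensus_taxonomy(cluster))
```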
Website SPIRE is accessible, browsable, searchable and downloadable via spire.embl.de. The main access modes are by habitat/sample (searching based on accessions or metadata tags), by taxon (based on clade names and species-level clusters) and by genome (individual genomes within clusters). These modes are inter-accessible (e.g. browsing from a sample to a specific taxon observed therein, for which then multiple genomes can be accessed) and at each level, link-outs to relevant independent or third-party databases are provided. We invite user contributions, suggestions for improvements and bug reports under spire.embl.de/contribute. Outlook Given the exponential growth of publicly available metagenomic data, we anticipate biennial updates of the underlying data for SPIRE. We will continue to develop and update the processing pipeline to address rising computational demands and integrate novel or improved tools. Moreover, we will seek to extend the range of available functional annotations at gene and genome level, within the limits of computational scalability. Finally, and most importantly, we will continue to further integrate SPIRE with other resources such as proGenomes (24), eggNOG (26), the GMGC (8) and other ongoing efforts. Discussion SPIRE provides the largest sets of consistently processed metagenomes, newly generated MAGs and profilable microbial species clusters to date. Combined with a high degree of curation and integration of various data modalities (MAGs, contigs, genes, profiles, etc.), SPIRE is the most comprehensive resource available to study microbial diversity and function. Covering a broad range of habitats and geography, SPIRE enables true 'planetary-scale' analyses of microbiomes across various environments, including so far understudied ones. At the same time, SPIRE encompasses large amounts of 'novel', previously undescribed microbial diversity both at the gene and genome level. We are confident that SPIRE will enable and simplify a wide range of analyses for end users, ranging from the characterization of individual taxa or gene clusters of interest against a global data canvas, to truly 'planetary-scale' studies of microbial life across habitats and phylogeny.
Figure 2. Representation of taxonomic groups covered in SPIRE. Shown are the total number of species clusters (top) and total number of genomes (bottom) for the largest 25 bacterial and largest 15 archaeal phyla represented in SPIRE. Orange hues indicate clusters and genomes of isolates, as downloaded from proGenomes3 (progenomes.embl.de; 'isolate only'). Blue hues indicate clusters and genomes introduced in SPIRE ('MAGs only'). Green indicates species clusters that contain both isolate genomes and MAGs. See Supplementary Table S2 for taxonomic classifications of all species clusters included in SPIRE.
2023-10-30T06:17:18.455Z
2023-10-28T00:00:00.000
{ "year": 2023, "sha1": "a6d65f5a3a81dd4023fdbc73d2f06c70fce4d42b", "oa_license": "CCBY", "oa_url": "https://academic.oup.com/nar/advance-article-pdf/doi/10.1093/nar/gkad943/52660738/gkad943.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "874a596684f06c1bced9d08cdf4c9f42a3837abd", "s2fieldsofstudy": [ "Environmental Science", "Biology", "Computer Science" ], "extfieldsofstudy": [ "Medicine" ] }
268521542
pes2o/s2orc
v3-fos-license
Connectedness to the young adult cancer community and post‐traumatic growth: A young adults with cancer in their prime study For young adults (YAs) with cancer, connecting with peer cancer survivors can provide a unique sense of community and may enhance post‐traumatic growth (PTG). This study examined the relationship between connectedness to the YA cancer community and PTG among YAs, independent of overall social support. Each year, approximately 7600 young adults (YAs) aged 20-39 are diagnosed with cancer in Canada.1,2 Although cancer remains a leading cause of death in YAs, survival rates are increasing, resulting in a growing population of long-term YA survivors.1,2 Despite the difficult experience of cancer, some individuals experience personal growth.7 The positive changes that result from highly challenging life experiences are defined as post-traumatic growth (PTG).8 According to the model proposed by Tedeschi and Calhoun,8 PTG results from cognitive processing that occurs during self-disclosure and persistent rumination about a traumatic event. Tedeschi and Calhoun explain that PTG develops as individuals attempt to manage their distress, make sense of their experience, and adapt to their new circumstances. PTG can manifest as an increased appreciation for life, more meaningful interpersonal relationships, a greater sense of personal strength, reorganization of priorities, and a richer existential experience. A 2019 systematic review reported that moderate-to-high PTG is experienced by 52% of individuals following a traumatic life event (including cancer)9 and is more commonly experienced in younger individuals. Individuals who experience moderate-to-high PTG report lower levels of distress10 and greater quality of life11 over time. Social support is an integral aspect of PTG.8 Social support directly influences PTG by providing a supportive environment to facilitate disclosure and cognitive reappraisal, and by offering new perspectives. Social support from people with similar experiences is particularly valuable because individuals are more likely to incorporate new perspectives from people who have "been there."12 Social support indirectly facilitates PTG by providing a buffer against distress, which can lead to negative rumination about an event.8 Two recent systematic reviews on the correlates of PTG, one involving adult cancer survivors13 and the other involving pediatric cancer survivors,14 both found significant positive correlations between social support and PTG. However, both reviews defined social support as the functional provisions of relationships.15 This neglects the important effect that simply belonging and feeling connected to a group or community can have on a person's health, regardless of whether any social support is explicitly exchanged.16 Social connectedness is a basic need and type of belonging.17,18 It has been defined as an attribute of the self that reflects a subjective sense of enduring interpersonal closeness with people and society.17 Social connectedness is related to, but distinct from, perceived social support. Social support focuses on the availability of support in one's social environment, whereas social connectedness focuses on the emotional closeness or distance between the self and others.17 Although they are different constructs, connectedness is likewise associated with psychological adjustment.
17 Social connectedness gives life meaning and purpose, and is achieved when people feel seen, heard, and valued.19 YAs may be vulnerable to feelings of low connectedness as they often report feeling isolated and disconnected from family and friends who are unable to relate to their experiences.20 Connectedness to cancer peers can provide a unique sense of community21 and may contribute to PTG. However, limited research has examined social connectedness among YAs and its impact on PTG. The purpose of this study was to examine the relationship between connectedness and PTG among YAs. Our objectives were: (1) To examine the association between feeling connected to the YA community and PTG; and (2) To determine whether overall social support and feeling connected to the YA community independently contribute to PTG. We hypothesized that greater feelings of connectedness to the YA community would be associated with higher PTG, and that connectedness to the YA community and overall social support would independently contribute to PTG. | Methods Data were obtained from the Young Adults with Cancer in their Prime (YACPRIME) study. This study was a collaborative patient-oriented research project conducted in partnership with the nonprofit organization Young Adult Cancer Canada (YACC). The YACPRIME study consisted of a cross-sectional survey of the needs of YAs across Canada. It was conducted in accordance with the Declaration of Helsinki, and received ethical approval from Memorial University's Interdisciplinary Committee on Ethics in Human Research (ICEHR) #20170502-SC. | Participants To be eligible to participate, YAs must have been diagnosed with cancer between the ages of 15 and 39, and be 19 or older and residing in Canada at the time of the study. | Survey procedures The recruitment strategy involved a media campaign led by YACC that included paid recruitment ads targeting YAs on social media, radio, television, and print media. The survey was also distributed to YACC's national network of contacts including YAs impacted by cancer, clinicians, and other cancer support services. The online survey was launched in June 2017 and closed in March 2018. All participants provided informed consent as outlined in the digital consent form. | Post-traumatic growth PTG was measured using Tedeschi and Calhoun's post-traumatic growth inventory (PTGI).8 The PTGI is a 21-item instrument that assesses five dimensions of PTG: New Possibilities, Relating to Others, Personal Strength, Appreciation of Life and Spiritual Change. Responses were measured on a six-point Likert scale, with higher scores representing greater PTG. The PTGI has been extensively used among cancer survivors and in studies of YA cancer survivors specifically.22 Similar to prior research,9 we defined moderate-to-high PTG as the upper 60% of the PTGI distribution, which corresponded to a score of 51 or greater in this sample. | Connectedness to the YA community We measured feelings of connectedness to the YA community with the following one-item question that was created for the YACPRIME survey: "In general, how connected do you feel to the YA cancer community?" Responses were measured using a five-point Likert scale, ranging from "Not at All" to "Extremely." For the analysis, responses were categorized into: "Not connected" (Not at all) versus "Connected" (A little bit to Extremely Connected). | Social support Social support was measured using the Medical Outcomes Study Social Support Survey (MOS-SSS).23
The MOS-SSS is a 19-item instrument that assesses the perceived availability of four dimensions of social support: emotional/informational support; tangible support; affectionate support; and positive social interaction. It is not designed to assess social support from a particular source, but rather how often each kind of support is available from someone in your social network if needed. The survey instructions read: "How often is each of the following kinds of support available to you if you need it?" and an example survey item is: "Someone you can count on to listen to you when you need to talk." The responses were scored on a five-point Likert scale ranging from 1-None of the time to 5-All of the time. The scores were summed and scaled to an overall score ranging from 0 to 100, with higher scores indicating greater support. The MOS-SSS has been shown to be valid and reliable in a range of cancer populations.24,25 Like other studies,26 we used the lower third of the sample distribution (score = 56.6/100.00) as the cut-off for low versus high social support. | Demographic and clinical data Demographic data collected included: age at the time of the survey, sex, race, and relationship status. Age was treated as a continuous variable. Relationship status was dichotomized into "in a relationship" versus single. Sex consisted of two options: male or female. Race was collapsed into broader categories and dichotomized as White or racial or ethnic minority for analysis purposes. Clinical data included cancer type and time-since-diagnosis. Time-since-diagnosis was treated as a continuous variable. | Statistical analysis Participant data and study measures were summarized using descriptive statistics and compared by social support group using independent t-tests for continuous variables and Chi-square tests for categorical variables. To evaluate factors associated with PTG, we used univariable and multivariable logistic regression to assess the unadjusted and adjusted effects of social connectedness to the YA community. Participant characteristics were included in the multivariable model as potential confounding variables. Separate models were evaluated for the full sample and each of the social support groups. To investigate whether social support and connectedness were distinct constructs, we compared level of connectedness among the low and high social support groups using a Chi-square test. Subsequently, we tested the interaction of social support and connectedness in the multivariable logistic regression model of PTG for the full sample. We controlled for age, time-since-diagnosis, sex, relationship status, and race/ethnicity in all multivariable models. All tests were two-sided and the level of significance was set at 0.05. Analysis was conducted using R Statistical Software version 4.2.2. | Participant characteristics Demographic and clinical characteristics of the full sample and each social support group are presented in Table 1. In total, 444 individuals from the YACPRIME study were included in the analysis.
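To make the scoring rules and cut-offs above concrete, here is a small illustrative sketch. The study's own analysis was done in R; the Python below is ours, and the 0-5 coding of the six-point PTGI scale and the min-max rescaling of the MOS-SSS are assumptions rather than details given in the text.

```python
# Illustrative scoring sketch (not the authors' code). Assumptions: PTGI
# items are coded 0-5 on the six-point scale; the MOS-SSS 0-100 scaling is
# a min-max rescale of the summed raw score. Item values are hypothetical.

def ptgi_total(items: list[int]) -> int:
    """Sum of the 21 PTGI items; possible range 0-105 under our coding."""
    assert len(items) == 21 and all(0 <= i <= 5 for i in items)
    return sum(items)

def mos_sss_score(items: list[int]) -> float:
    """19 items rated 1-5, summed and rescaled to 0-100."""
    assert len(items) == 19 and all(1 <= i <= 5 for i in items)
    raw = sum(items)                       # raw range: 19-95
    return (raw - 19) / (95 - 19) * 100

ptg = ptgi_total([3] * 21)                 # = 63
support = mos_sss_score([4] * 19)          # = 75.0
moderate_to_high_ptg = ptg >= 51           # upper 60% of PTGI in this sample
high_support = support > 56.6              # lower-third cut-off reported above
print(moderate_to_high_ptg, high_support)
```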
Respondents were on average 34 (SD = 6.0) years of age and 4.8 (SD = 5.4) years since their diagnosis. The most common diagnosis was breast cancer (28.4%). Most respondents were female (86.7%), in a relationship (70.3%) and identified as White (87.4%). Respondents who identified as White were more likely to report high social support compared to those who identified as a racial or ethnic minority (p = 0.01). Those in a relationship were more likely to report high social support compared to those who were single (p < 0.001). No other differences were observed in level of overall social support based on participant characteristics. | Levels of connectedness and post-traumatic growth In the full sample, 28.8% (n = 128) of individuals reported no connection, 27.9% (n = 124) reported feeling a little bit connected, and 43.3% (n = 192) reported being somewhat to extremely connected to the YA community (Table 2). Level of connectedness to the YA community did not differ by social support group. However, as shown in Table 2, individuals in the high social support group had significantly higher total PTG scores and higher "relating to others" sub-scale scores than those in the low social support group. | Factors associated with PTG The results of the univariable analysis are shown in Table 3. In the full sample, factors associated with moderate-to-high PTG were connectedness to the YA community, high social support, greater time-since-diagnosis, and female sex. In the high social support group, connectedness to the YA community and greater time-since-diagnosis were associated with moderate-to-high PTG. In the low social support group, the association between connectedness to the YA community and moderate-to-high PTG was not statistically significant. However, identifying as female or a racial or ethnic minority was associated with moderate-to-high PTG. In the multivariable analysis (Table 4), connectedness to the YA community and greater time-since-diagnosis were associated with moderate-to-high PTG in all three models. Of note, the magnitude of the association between connectedness to the YA community and moderate-to-high PTG was higher in the low social support group (OR = 2.56, 95% CI: 1.19, 5.76) than the high social support group (OR = 1.75, 95% CI: 1.02, 3.01). In the full sample, high social support and female sex were also associated with higher odds of moderate-to-high PTG. In the low social support group, identifying as female or as a racial or ethnic minority was also associated with higher odds of moderate-to-high PTG. The interaction term between social support and connectedness was not significant. | DISCUSSION Given their developmental stage, YA cancer survivors may be uniquely positioned to experience and benefit from PTG following a cancer diagnosis. This study revealed that YAs who feel connected to a community of YA cancer peers are more likely to experience moderate-to-high PTG, regardless of level of overall social support. In addition, YAs who have high perceived social support, are farther from diagnosis, and are female have higher odds of experiencing moderate-to-high PTG. However, YAs with low levels of social support are likely to benefit most from connection to the YA community. In particular, YAs with low perceived social support but who feel connected to the YA community, and are female or a racial or ethnic minority, have greater odds of experiencing moderate-to-high PTG. Thus, identifying interventions to boost feelings of connectedness to the YA community among YAs can contribute to positive psychosocial outcomes.
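The modelling strategy behind the odds ratios quoted above (univariable and multivariable logistic regression, with a social support by connectedness interaction tested in the full-sample model) might look roughly as follows. This is a hedged Python/statsmodels sketch rather than the authors' R code; the synthetic data frame and its column names are hypothetical.

```python
# Sketch of the multivariable logistic regression strategy described in the
# Statistical analysis section. Synthetic data stand in for the YACPRIME
# dataset; all column names are hypothetical.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 444  # matches the analytic sample size reported above
df = pd.DataFrame({
    "ptg_mod_high":    rng.integers(0, 2, n),
    "connected":       rng.integers(0, 2, n),
    "high_support":    rng.integers(0, 2, n),
    "age":             rng.normal(34, 6, n),
    "years_since_dx":  rng.normal(4.8, 5.4, n).clip(0),
    "female":          rng.integers(0, 2, n),
    "in_relationship": rng.integers(0, 2, n),
    "white":           rng.integers(0, 2, n),
})

adjusted = smf.logit(
    "ptg_mod_high ~ connected + high_support + age + years_since_dx"
    " + female + in_relationship + white", data=df).fit(disp=0)

# Full-sample model with the social support x connectedness interaction.
with_interaction = smf.logit(
    "ptg_mod_high ~ connected * high_support + age + years_since_dx"
    " + female + in_relationship + white", data=df).fit(disp=0)

print(np.exp(adjusted.params))  # odds ratios, analogous to Table 4
```

Stratified models would simply refit the adjusted formula (without the social support terms) on the high- and low-support subsets.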
While it has been established that social support is integral to PTG,8 to our knowledge, this is the first study to demonstrate a relationship between connectedness and PTG. Prior research (in the non-cancer community) has shown that social connectedness contributes to psychosocial outcomes that are related to PTG, such as self-esteem and well-being. For example, Lee and Robbins demonstrated that social connectedness was positively related to self-esteem and negatively related to trait anxiety among women.27 Our study demonstrates that connectedness and perceived social support are unique constructs that independently contribute to PTG. This finding aligns with Lee and Draper's conceptualization of social connectedness.17 As mentioned earlier, they proposed that social connectedness is a dimension of belongingness that is distinct from perceived social support. However, Lee and Draper contend that certain dimensions of social support correspond with social connectedness, such as the social integration sub-scale of the Social Provisions Scale. Further research is warranted to investigate these relationships. Being female was associated with higher odds of moderate-to-high PTG in the full sample and in the low social support sample. This finding aligns with a meta-analysis of PTG in 70 studies,31 which found a small to moderate effect of gender on PTG. Proposed reasons for this effect include differences in rumination and coping styles.31 Studies have shown that females tend to engage in more positive and negative rumination than males.31,32 Rumination on constructive issues such as an awareness of personal strengths or the importance of social connections may be the mechanism leading to greater PTG in females.12,31 Females are also more likely to utilize emotion-focused coping strategies to manage stressors and related emotions,33 which is proposed to be the core mechanism involved in the development of PTG.12 Collectively these studies suggest that there are differences in how females and males cope with stress, as well as how they grow following traumatic events. Identifying as a racial or ethnic minority was also associated with higher odds of moderate-to-high PTG, but only in the low social support model. Other studies have shown that African-American breast cancer survivors report greater PTG compared to other groups. For example, a study of PTG in attendees of a breast cancer survivorship program in the United States found significantly higher PTG in African American women compared to White women.34 Likewise, a cross-sectional survey of PTG in over 800 breast cancer survivors in the United States found higher levels of PTG among African-American women compared to White or Hispanic women. Further, a sub-group analysis revealed that the relationship between race and PTG was mediated by religiosity.35 The authors hypothesized that the higher levels of PTG in African-American breast cancer survivors could be due to religious coping strategies or greater social support from the Church community.35 This underscores the importance of a wide social network and religious or spiritual community connectedness as a potential facilitator of PTG for some racial or ethnic minority individuals.
| Study limitations This study has some limitations. Given the cross-sectional survey design, the timing of social support and PTG was not possible to discern. Self-reported information may be subject to social desirability bias; however, the anonymity of the survey likely reduced this possibility. The sample was predominantly female and White, which does not reflect the diversity of the Canadian population of YAs. This could be due to the use of online recruitment, which has been shown to yield a less sociodemographically diverse sample of YAs in Canada than in-person, hospital-based recruitment in urban centers.36 In addition, the sample was in their mid-thirties and farther from diagnosis, which may limit generalizability. Connectedness to the YA community was measured using a single item; a more robust multi-item measure of connectedness may more fully capture the nature and extent of connectedness for future study. Lastly, recruitment was conducted by YACC through its existing members and would presumably include many people who were connected to the YA community based on their use of YACC services. Hence, studies conducted with different samples of YAs not connected to a cancer support organization may yield lower levels of connectedness to the YA community. | Clinical implications The results of this study illustrate that connectedness to the YA community is a unique source of support that contributes to PTG, and the impact of feeling connected to the YA community may be particularly strong for YAs with low social support. The data also
show that there is a significant proportion of YAs who do not feel connected to the YA community, despite the existence of community organizations for YAs with cancer. Future work should overcome barriers to accessing community support organizations among YAs and examine the support preferences of YAs not adequately represented in this research. This may include YAs who are male and who identify as a racial or ethnic minority. Health care systems and community programs should expand support offerings that target the different mechanisms by which social relationships can enhance wellbeing. | CONCLUSION Connectedness to a community of YA cancer peers was associated with moderate-to-high PTG, regardless of level of existing social support. In particular, YAs who had low perceived social support but felt connected to the YA community and identified as female or a racial or ethnic minority had the greatest odds of experiencing moderate-to-high PTG. Future research should examine how to increase access to YA cancer communities and foster a sense of connectedness among YAs with cancer.
Table 1. Descriptive characteristics of the sample.
Table 2. Levels of connectedness to the YA community and post-traumatic growth, stratified by social support.
Table 3. Univariable logistic regression of factors associated with post-traumatic growth.
Table 4. Multivariable logistic regression of factors associated with post-traumatic growth (full-sample model adjusted for connectedness to the YA community, social support, the social support x connectedness interaction, relationship status, time-since-diagnosis, age, sex, and race; stratified models adjusted for connectedness to the YA community, relationship status, time-since-diagnosis, age, sex, and race/ethnicity).
2024-03-20T06:17:36.242Z
2024-03-01T00:00:00.000
{ "year": 2024, "sha1": "c72da44b9882e4e2963831730f8e18578162ec2d", "oa_license": "CCBY", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/pon.6325", "oa_status": "HYBRID", "pdf_src": "Wiley", "pdf_hash": "51557e077e9defa853a28bec37c775ea4c8515e6", "s2fieldsofstudy": [ "Medicine", "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
258055764
pes2o/s2orc
v3-fos-license
Institutionalization of corrupt activities among Indonesian rural household farmers in surround marginal teak forest areas Abstract Research on the institutionalization of petty corruption among farmer households in the villages around the forest has focused on large and productive state forest areas managed by the State Forest Company (SFC). The activities occurred simultaneously, yet there are no data to reveal them. This research aimed to analyze how the corrupt mechanism transpired. A qualitative method was used to detail the occurrence of petty corruption in the teak stands. In-depth interviews with key informants, participatory observation, and Focus Group Discussion (FGD) with local communities demonstrated that marginal teak forests in Java, or at least in Lemah Gurih, are not accidental but seem “intentional” to support illegal logging practices. SFC’s conservatism in setting wood standards oriented to the global market’s needs, without adjusting to the increasing local market demand for teak wood, triggered corruption. As a result, local forest officials can trade those wood products for personal income without violating company regulations. Simultaneously, together with the illegal wood collectors (IWCs), they also manipulate the calculation of timber yields to get unilateral benefits. The corrupt behavior is continuously institutionalized over generations through patron-client relationships and has become an integral part of the lives of local communities. To reduce corrupt behavior, improved forest management practices, a review of wood harvesting administration and wood quality standards, and a shift in the social relations between SFC and local communities toward a more formal relationship are critical. Introduction "go there (to the forest) if you want pocket money, stop playing around!" 1 Corrupt practices, certainly petty corruption, then give birth to illegal logging practices by the community and the elites within a system (Robbins, 2000). However, it is the poor who are accused of damaging the productive forests, even when they only take a small amount of forest yield compared to the elites who take so much and destroy larger areas of the forests (Sunderlin et al., 2003;Van Khuc et al., 2018). Although the debate continues, the research confirms that population growth and poverty around the forest are the leading causes of forest degradation (Kissinger et al., 2012;Tsujino et al., 2016;Duguma et al., 2019).
At a certain level, poverty is a driving factor for forest degradation, yet evidence also shows that high income increases forestland demand for agriculture, causing more significant forest damage (Fisher, 2004;Santika et al., 2019;Wunder, 2001). For Indonesia, current livelihood activities to improve welfare and food availability are the endogenous factors for forest degradation, while increasing forest access, commodity values, and rainfall are the exogenous factors (Medrilzam et al., 2014). The previous research on forest corruption has focused more on changes in agrarian structures, particularly the role of forests as a source of income for poor people and whether or not forests remain relevant to economic development and urbanization (Ali & Bahadur, 2018;Moeliono et al., 2017;Peluso, 2011;Rasmussen et al., 2017;Santika et al., 2019;Setiawan & Sunarto, 2019;Sunderlin et al., 2003;Thompson, 1999). Other research focuses more on the forms of corruption (Meehan & Tacconi, 2017;Palmer, 2005;Robbins, 2000;Sundstrom, 2016b) that cause forest degradation, their effects on the environment (Allnutt et al., 2013;Sinha et al., 2019;Van Khuc et al., 2018), and on the socio-economy of local communities and the country's economy (Z. Wang et al., 2018;S. Wang et al., 2020). Researchers have also highlighted corrupt practices in local governments that facilitate illegal activities during the implementation of various global initiative projects, including the certification of forest products (Enrici, 2019;Kissinger et al., 2012;Maryudi & Myers, 2018;Setyowati & McDermott, 2017;Sheng et al., 2016;Williams et al., 2018). Studies on corrupt processes in forest management, such as bribery and levies as causes of illegality and forest degradation, remain minimal, particularly in developing countries (Meehan & Tacconi, 2017;Palmer, 2005;Sommer, 2017;Sundstrom, 2016b). Additionally, former research focuses on significant cases and abundant forests, where illegality facilitated by petty corruption involving local communities and field officials is an inherent part of forest management policies (Bratamihardja et al., 2005); marginal forests have received less attention. Research on how forests are systematically maintained to remain marginal, as a method for deriving unilateral benefits, has also not received sufficient attention. Thus, by revealing the illegal activities driven by corrupt actors, this research will provide information on how illegal and corrupt practices happen in marginal teak forests. At the same time, although the forms and patterns of forest corruption have been explained comprehensively, few studies have examined how to eradicate those corrupt practices from a sociological perspective. Thus, we would also explore the normalization process of corruption-through which the deviated behavior has gradually become an integral part of society-including how actors build legitimacy so that corruption is considered to constitute fairness in society. The micro-sociology approach would clarify how actors-IWCs, local communities, state forest company officials, and furniture entrepreneurs-jointly anticipate formal rules to avoid legal penalties and build legitimacy (Robbins, 2000;Urinboyev, 2019). In the final section, we formulated practical recommendations for curbing this corrupt behavior to improve teak forest management and sustainability, particularly in East Java's marginal forests.
Theoretical Underpinning The corrupt practices of field forest officials are best characterized as petty corruption since they involve small resources and bribery, so that the actors can carry out their corrupt practices freely through illegal activities (Rose-Ackerman, 1999;Callister, 1999;Rowley, 2008). The illegality concept describes forest management that does not follow national standards and current regulations, from licensing, logging, and transporting to auctioning, and so on (Sundstrom, 2016b;Kleinschmit et al., 2016). By definition, illegality is the result of corrupt activities of forest managers, which may happen in allocating forest land, during forest operations, and during law enforcement activities, all of which open up opportunities to break the law and to gain personal benefit unilaterally (Callister, 1999;Tacconi et al., 2003;Kleinschmit et al., 2016). Illegal activities occur on a large scale, such as illegal logging, and on a small scale, through bribery, also known as rotten institutions (Robbins, 2000). Corrupt behaviors such as bribery, or letting forest degradation happen at a certain level, frequently have local cultural legitimacy (Palmer, 2005;Robbins, 2000;Williams et al., 2018), in which such practices have already become a daily habit and include clientelism (Sundstrom, 2016b). In the present study, the illegal activities of illegal wood collectors (IWCs) represented their corrupt practices of bribing forest officials to get logs outside the standard mechanisms. We emphasize this fact since not all illegal activities are corruption, particularly those related to the management of customary forests and other traditional accesses that are sometimes categorized as illegal by state law. Tacconi and Williams (2020) confirm that small illegal activities in forest management happen in many developing countries, including Indonesia, and are facilitated by forest officials' corrupt practices. Girdling or ring-barking to kill trees, so that the trees become legal to cut down (Søreide, 2007), has the same pattern as damaging logs so that they do not fulfill the auction quality standard. Within this context, we use the concept of corruption that facilitates illegal activities to analyze how actors work together at the operational level to commit illegal activities (Palmer, 2005;Robbins, 2000;Sundstrom, 2016a). Because it involves local communities and becomes an inseparable part of their livelihoods, there will be a process of social institutionalization involving traditional leadership, which Gore et al. (2013) call the development of "alternative norms" or rotten institutions (Robbins, 2000). According to this definition, such corrupt practices will no longer be seen as deviant but as commonplace, although, according to positive law, they cheat the prevailing regulations. Within this context, we translate corruption as activities that "outsmart" the law to gain unilateral benefits, such that illegal activities become normal. Meanwhile, the principle of unilateral profit-taking must be accompanied by the ability to build practical legitimacy so that the community accepts the activity no matter how distorted it is (Sadigov, 2017;Urinboyev, 2019). As we know, the normalization process of corrupt activities, particularly petty corruption, makes them acceptable to local communities or even a way of illegally accessing forest resources.
From this perspective, corruption, usually discussed under a political ecology approach, is here discussed from a sociological perspective. As such, we focused on the social process of how corruption grows to become a behavior generally accepted by society. We used this sociological perspective to complement the ongoing research on illegality, which has focused on structural studies, such as motives, patterns, sanctions, and punishments, with a behavioral approach, especially political ecology. The "normalization" theory was chosen because it is one of the concepts in the latest criminal sociology, so it is expected to explain how deviant behavior is done, becomes part of people's lives, and is seen as normal (Prabowo & Cooper, 2016). Two processes can explain the normalization of corrupt activities, especially petty corruption, as a natural phenomenon in the community's social life: avoidance of stigma and normalization (Schoeneborn & Homberg, 2018). The former refers to the actors' attempts to avoid the stigma of their illegal activity by blurring the line between a "tip" and a "bribe." The latter means building continuous legitimacy that there is a strong reason for their activity. Avoiding stigma is done by making particular rationalizations, such as norms that corrupt practices are done under specific considerations, for example, poverty. The actors may also use a comparison to defend their corrupt practices-that their activities are small and do not cause much loss compared to the ones by the elites, so the activities cannot be seen as destructive (Abood et al., 2015). Normalization refers to the process by which everyone recognizes corrupt activities as a typical form of tolerance for non-perpetrators, called an "open secret." The discussion will include the normalization process of how actors developed social legitimacy and normalized their corrupt activities as part of community social institutions (Prabowo & Cooper, 2016;Fleming et al., 2020).
Petty corruption will obtain legitimacy more quickly, or be "internalized" and "normalized" more rapidly into normal activities and more quickly accepted, especially in the global south (Eckel and Eckel, 2012;Lindner, 2014;Sadigov, 2019;Suhardiman and Mollinga, 2017). Normalization consists of three pillars: institutionalization, rationalization, and socialization. Institutionalization refers to habit-forming. Rationalization refers to efforts to defend the (wrong) actions. Socialization refers to actions that spread the deviated behavior to the public or between generations so that it is accepted as normal behavior. Since they are not aspects but pillars, the three processes must happen simultaneously. In other words, if one or two pillars do not take place, then the normalization process of corruption will not occur. Illegal activities in the form of corruption in the study site have the same characteristic: the stolen logs were not of significant volume, yet theft happens quite often with low-quality or small logs. Our study was conducted in the marginal teak forest by analyzing the small yet intense illegal activities agreed upon by the local community (Maryudi and Krott 2012;Sahide et al., 2016), even though they know that those activities violate the law. Based on the definition above, the present study focused on illegal activities involving bribery paid by IWCs to forest officials-practices that had been going on for a long time and have become an integral part of the communities around the forests. We will also discuss how corrupt practices are normalized and become a regular activity of local communities and state forest managers. This approach is expected to provide a new explanation of how values and norms are built in such a way in the community as to normalize illegal actions that violate the prevailing norms and laws. By analyzing several illegal activities in the marginal forest, we hope to add new knowledge that illegal activities also occur in marginal forests, whereas previously illegalities and corruption were mostly exposed in rich forests. In addition, this research also complements the previous analysis of petty corruption, which emphasizes individual motivation and behavioristic actions, such as responding to sanctions, and political ecology, which focuses on the power structure of corruption. We expected that this sociological approach could answer why something that violates formal regulations is still being done, not only from a "sanction" or "punishment" point of view but also from the complexity of the social norm formation around these violations. Therefore, the concept of normalization, or the process by which a norm is introduced and repeated, was accepted as an integral part of society-the community considers corrupt behavior normal. Materials and Method Lemah Gurih village is located in Lembu Peteng District, East Java Province. Lemah Gurih is approximately 10 km from the nearest village and 25 km from its downtown area. Lemah Gurih village is situated in the middle of the state's forests, where 60% of the population relies on the forests for their livelihood. The limited legal access to the forests provides insight into how illegality facilitated by corruption affects the villagers' daily life. It is one of the villages that has grown and developed amidst forests controlled by the SFC since the colonial era.
Lembu Peteng is a famous teak-producing region in Java because, geographically, its forest forms an ecological unit with the primary teak-producing forests in Java, namely Blora and Bojonegoro. Illegality is a sensitive issue, and researching the topic is tricky. We used a participatory observation approach to obtain accurate data (Routlet, 2017; Shah, 2017). We did not collect much data in the first three months; we merely strengthened our social relations with the community by participating in various social activities. We conducted interviews on sensitive issues individually at informants' homes, after they fully trusted that we would not report their activities to the authorities. We used pseudonyms, including changing the names of villages and districts, to respect and protect informants. This approach is vital for obtaining sensitive information that standard data collection cannot capture, which is why participatory observation is needed (Pfadenhauer & Grenz, 2015). Based on that situation, purposive sampling was chosen in Lemah Gurih village. The research was conducted over 12 months, divided into three stages: July to October 2018, January to March 2019, and July to October 2019. The informants comprised four key informants (FSO-4), 62 participatory-observation informants (SFC-F, SFC-O, D-SFC, FR, Pol, CT, SFE, and BFE), and 50 members of local communities (IWCs), a total of 114 informants. The detailed list of informants (114) and their roles in forest governance is given in Table 1. Data were collected through short visits for unstructured observations and interviews within these intervals, especially to cross-check our data. Through in-depth interviews with key informants, including forest rangers and local communities, we deepened our knowledge of how legitimacy was built technically and socially and how these views were disseminated. Meanwhile, the snowball technique was used to trace the flow of teak wood to ascertain who the actors were, what their interests were, and how they built cooperation (Sadigov, 2019). Content analysis was conducted to illustrate the institutionalization of corruption and the view that corruption is considered fair (Bengtsson, 2016). The descriptive analysis involved village community opinions, key informants, and legal reviews, including reviews of wood quality standards and auction mechanisms, and interviews with the forestry service and SFC officials, from field to provincial level. Triangulation was done to ensure information validity and avoid bias before proceeding to the next stage. Then, we organized focus group discussions (FGDs) to attain and confirm shared perceptions (Alejandro & Rodr, 2008) on the amount of bribe to be paid, the actors involved, and the views and perceptions they held. Depending on the theme, each FGD involved nine to ten people from forest farmers, local communities, young people, and community leaders. There were three stages of FGD to focus our discussion: (1) how forest resources are accessed, discussing perceptions of what is legal and illegal, including historical aspects of the community's relationship with local forest officials; (2) exploring the prices of various types of teak and the actors involved in trafficking; and (3) exploring the normative views of local communities on these corrupt practices. The FGDs revealed how the actors established justification or legitimacy, allowing "internalization" and "normalization" to occur gradually and be accepted as fair by the community.
Meanwhile, to ascertain whether corruption in the forest was genuinely institutionalized and normalized, we explored police officers' views on these corruption cases, including which categories of corruption would be investigated if there were public reports.

Result and Discussion

Since the 1990s, the teak wood processing industry, particularly furniture, has been mushrooming, marked by the emergence of hundreds of small and medium furniture entrepreneurs as the primary buyers of illegal wood from the teak forests around the area. Lembu Peteng district plays a significant role as the leading supplier of furniture to Surabaya and Malang, two major cities in East Java. Cheap labor and proximity to big cities make the furniture industry relatively competitive compared to other regions. Figure 1 shows the growth of the Lembu Peteng District furniture industry from 2015 to 2019. There were three critical points. First, we discuss how the marginal forest is maintained as it is, so that logs from this forest do not go to auction and corrupt officials and the local community can take advantage of them. Second, we discuss the forms of illegal activities involving many teak forest management actors seeking unilateral advantages. Third, we discuss how deviant behavior is normalized and accepted in the community, and how the actors rationalize corrupt activities so that these illegal activities become an integral part of community life.

Forest Marginalization: Accidental or Intentional?

We do not definitively state whether SFC caused the forest to become marginal intentionally or unintentionally. Instead, we illustrate how the marginal forest has provided many actors with wider opportunities to gain unilateral benefits, especially bribery in small amounts yet quite often. It is logical to ask why SFC, a big company, left thousands of hectares of its forest marginal for such a long time while other parts of its forests are so productive. Logically, the marginal forest condition produces low-quality logs that do not meet auction standards. The logs are considered waste, even though they are highly valued in the local market. This creates an opportunity for corruption, where officials deliberately downgrade the quality classification of logs so that the logs can be taken illegally in exchange for a certain amount of money paid to the managers. People might think this situation of marginal forests was normal due to unfavorable ecological conditions and poverty. Lembu Peteng District is geographically located in a wind channel, making it windier than most neighboring regions. Strong winds render good-quality teak trees (straight, large, and tall) prone to collapse, and only teak with small diameters and short trunks can survive. Thus, good-quality teak is very vulnerable to collapse, and planting local (low-quality) seedlings is therefore considered the safest management approach, even though it is less productive. At a glance, this is very logical: the condition of marginal teak forests is ecologically reasonable. In reality, however, it is naïve, since trees planted outside the state's forestland grow very well. Good-quality teak flourishes and is not prone to collapse despite strong winds. Besides the location in a wind channel, the forestland is considered arid, so that growing good-quality teak is supposedly challenging and only local varieties can survive.
However, although the forest area may look poor in nutrients and is dominated by bushes, the area is in fact very fertile and technically qualifies as productive agricultural land. The annual burning of forests was another reason forest lands in the area were classified as critical. The condition of such land may be deliberately maintained as a basis for justifying the forest's unproductiveness. In the meantime, SFC does not apply any fertilizer, even during the first planting. Weeds thrive in the Sumber Gurih forests, which is taken to signify an arid area. In reality, however, weeds grow because the forest areas are left open and planted only with small teak. The canopy is also not very shady, so the land is exposed to the sun for long periods. This intention is almost the same as the behavior of companies that girdle or ring-bark trees to kill them so that they become legal to harvest (Søreide, 2007). Apart from the two natural conditions above, acute poverty is used to legitimize forest marginalization. The reason is that giving the poor almost complete access to use the area and extract forest products is legal and is the obligation of the authorized parties. However, if we examine the twists and turns behind this community access, there is a substantial collusive relationship involving the poor. The situation gives poor people access to illegal logs and involves them as partners in corruption by using them as illegal teak wood collectors (IWCs). The poor communities around the forest become social forestry partners (Peluso, 1992b; Afroz et al., 2016; Rakatama and Pandit, 2020), but they are also positioned as the black sheep blamed for the forest's marginalization. They are accused of disobeying the agreement because they destroy teak to slow the canopy's growth, in order to use the land under the stands for food crops requiring plenty of sunlight. Under the agreement, farmers may use the land under the stands for two to three years, before the canopy covers the entire land. In practice, farmers may cultivate the land for five or even ten years, especially on fertile lands with adequate infrastructure to transport their crops. SFC officials allow this condition for two main reasons: (1) to make it easier to find cheap labor; and (2) to secure an annual budget. In-depth interviews with SFC officials revealed that fallen wood was managed under the auspices of local officials, with the proceeds used as compensation to provide various services both to internal company staff and to higher-level SFC officials visiting or conducting official activities in the forest area. Initially, it was used as additional income for local SFC officials because their wages were relatively low, although the company was rich. Thus, within the company, such corrupt practices are a routine, open secret known by local and higher-level officials. Corruption has become an integral part of social interaction, in which the understanding that corruption violates the norm has been lost. Clearing fallen trees from roads, extinguishing fires, and theft patrols often require fast cash, while the disbursement of company operational funds is relatively tight and slow. It is common for field officials to be paid for their work three months later. Even the company's routine activities, such as planting and plant care, are funded using loans to pay workers, based on local customs. The situation has forced the officials to manage the available resources so that they at least have some money to fund operational activities at the beginning of each fiscal year while waiting for the company's money.
As such, corrupt practices seem to gain moral and technical legitimacy within the company. Individual motives and a work environment that allows such practices to thrive therefore drive the institutionalization of corruption. SFC field officials also find it difficult to get reimbursement for some expenses they pay; they must be financially independent. Usually, such expenses are borne entirely by field staff, so they have to look for other income sources, and the easiest is corruption. Field-level SFC officials often have to provide direct social services, such as cash donations for community activities like holiday celebrations, and payments to village governments, police, and various other parties, to ensure smooth operations. These funds cannot all be claimed as company expenses, especially when the amounts and frequency are large, so they are borne by field officials. The unexpected operational costs borne by the field officials make the higher-level officials lose their authority when dealing with the corrupt practices of the field officials. Besides, the field officials have good social skills in the community because they are in daily contact with it. From the explanation above, marginal forests existing amid fertile forests have opened up many petty corruption opportunities. Thus, the marginal situation seems to be "maintained" because it involves many parties systemically. Nevertheless, to conclude straightforwardly that this condition is "intentional" would be too speculative, yet the findings tend to direct our conclusions that way.

Manipulating Wood Quality Standards

SFC has different marketing schemes for different quality classes, based on the lengths and diameters regulated in national standard SNI Number 7535:1:2010. Logs larger or smaller than these standards do not go to auction; they are sold directly at prices SFC officials determine. The lowest standard to meet the SFC criteria for auctions is specified at 1.8 to 2 meters. However, this does not fit what is required by local furniture industries producing chairs, tables, and other household items, which only require shorter lengths of 50 to 100 cm. Using SFC timber with the company specification would be costly for the local industries. In order to get SFC timber at lower prices, IWCs bribe the officials so that they can damage the teak without facing criminal charges. The bribes, locally referred to as "uang rokok" or "cigarette money," are usually IDR 20,000-30,000 (approximately 1.5-2 USD) per log.
For a bundle that fits on a motorbike, the officials obtain a sum of IDR 100,000-150,000. Besides furniture, illegal logs are also used for wood flooring materials that do not require long timbers, so small and big furniture businessmen prefer buying from IWCs rather than from SFC auctions. Small-diameter old teak is also an excellent raw material for carved doors, frames, and crafts of high artistic value. Corruption is triggered by the high wood standards, while the local market needs both high-quality and low-quality wood. The situation opens up corruption opportunities because timber from marginal forests is classified as low-quality logs not feasible for auction. As such, SFC officials can sell the logs themselves, and the money goes into their pockets. According to Ali and Bahadur (2018), this strategy obscures the gravity of the violations by these actors because there is only a very slight difference between "uang rokok" and bribery. An example of side products below the SFC quality standards sold as firewood is presented in Figure 2. There are two mechanisms by which IWCs obtain timber. First, they buy the available "low-quality" timber regardless of whether they have orders; in this case, they have to keep the timber in storage. Second, IWCs buy timber from SFC officials when they receive orders from artisans. They ask the SFC officials to be allowed to enter the forest to find the required wood. Usually, before cutting, IWCs inform the officials of their activities: they give details of the number of trees cut, where they cut the trees, and the number of motorcycles or personnel involved. Sometimes, SFC officials tell IWCs the patrol area and when patrols will take place so that the IWCs can avoid them. When demand is high, IWCs need help to fulfill the orders and invite friends to help transport the timber on motorbikes. The use of motorbikes is crucial as it camouflages illegal activities: those transporting timber by truck are easily detected and considered illegal loggers or criminals. In reality, however, the many motorbikes going back and forth multiple times can also damage the forest. Using traditional tools is an alibi to show that the timber taken is only for household needs, such as firewood. Their skills are excellent, so cutting trees with machetes can be as precise as using a saw. Additionally, IWCs said that SFC officials did not limit the time and amount of wood taken, as long as it comprised fallen wood or old and dead wood with a diameter of no more than 20 cm and a length of no more than 90 cm. Teak sold for furniture is cut into pieces to give the impression that it will be used as firewood. IWCs can easily take the logs illegally because they help the company in teak management, such as planting, logging, repairing damaged roads during the rainy season, or cutting fallen trees blocking the roads. SFC officials may ask residents to plant trees for free or for low wages as compensation from IWCs. The involvement of IWCs in repairing forest roads for free is also a form of compensation, because they are allowed to access the forest at any time. The officials argued that the IWCs, who are also community members, were involved in improving road access because they also used the roads for transportation, even though these roads were often purely forest roads, not community roads. Another form of compensation provided by IWCs for being allowed to access the forest illegally is working as loggers for low wages.
IWCs, as members of the community, also join forest farmers' groups. This is why they are also SFC's partners in social forest management and have a close relationship with SFC officials.

Manipulating Wood Taxation

Another corrupt practice by SFC officials is manipulating wood quantities: they report fewer logs going to the auctions. The rest is sold on the local market, and the proceeds do not need to be sent to the company because the wood is considered waste. This practice involves at least a logging official, an IWC, a local SFC official, a district-level SFC official, and small and big furniture businessmen. The counted logs are sent to the auction office to be analyzed by the officials. Then, an order to cut the logs is sent to the field officials. As requested by the field officials, the transportation officials leave logs behind to be taken by IWCs once the target is fulfilled. IWCs are used as camouflage, a strategy to disguise logging as mere firewood collecting so that the officials can escape criminal charges. In reality, these officials sell the logs to IWCs or furniture businessmen. Besides getting money from selling the logs, IWCs are given easier access to other forest areas and are prioritized in farming area distribution and land tenure extension. Land tenure formally lasts three years but can be extended to five years if the tree canopy has not met the standard and the cultivated plants under the stands can still grow well. IWCs also find it easy to access forest yields, legally and illegally, with the help of SFC officials. Teak without marks (200 cm long and 20 cm in diameter, sold to local markets) is shown in Figure 3. The unrecorded logs have a diameter of 10 to 20 cm and a length of 2 to 3 meters, so they are relatively unnoticeable. We saw logs of 10 to 20 cm in diameter at furniture businessmen's workshops. However, these logs differed from those taken from community forests, as signaled by their uniform age and diameter: teak from community forests is rarely uniform in size, since farmers rarely sell large quantities of teak at the same time. Furniture businessmen buy the wood from forest officials to reduce production costs and sell their products at affordable prices. They must do this because they compete with big companies selling furniture made from wood and iron. A teak log 320 cm long and 10 cm in diameter that passes the SFC standard is shown in Figure 4. Besides sales to other regions with the help of IWCs as locals, record manipulation is also used to fulfill on-site direct purchases without going through the auction process. IWCs must bribe the field officials; they have to pay double, the wood price plus the bribe. SFC officials and IWCs confirmed that the activity was allowed if the wood was for personal use and was not sold to other regions. In fact, the wood is kept to be sold the following year, either as logs or as furniture. This type of wood is not recorded, and the money from selling it is not paid to the company. The wood price is relatively lower: buyers can obtain the best-quality teak at below the market price in small quantities, a maximum of 5 to 10 trees from one felling location. IWCs can even take the smaller logs as bonuses. Buyers must be ready with cash after receiving the invoice before felling; usually, they are informed two to three weeks in advance.
Since the best-quality teak is costly and can only be sold after it is turned into furniture, only the rich can afford it. This means only entrepreneurs buy such teak, and only three or four people in Sumber Gurih can do so. The law enforcement efforts by the government within the last five years are depicted in Table 2.

Table 2. Law enforcement on illegal forest activities within the last five years

Activity | Fine | Imprisonment | Note
Cutting trees with a length of less than 100 cm for firewood | 0 | 3 | Considered a minor violation and tends to be ignored. Those caught by the SFC patrol were released; imprisonment was given because the people were caught by a joint patrol between SFC and the police.
Taking teak stumps after felling | 0 | 2 | The officials caught them, and one person is still in jail.
Cutting the whole teak | 0 | 2 | They were caught because they used a truck to transport the logs.
Helping officials take logs meeting auction standards but not marked during harvest | 0 | 0 | No one was caught, since it was seen as compensation for low-paid lumberjacks and bribery for SFC officials.
Cutting down living trees for building houses | 1 | 2 | Perpetrators will be arrested if they keep doing the same thing after being warned three times.

Institutionalization of Illegal Practices

The first pillar of the normalization of corrupt practices is institutionalization, through which new and old actors start by introducing the activities, making them part of forest management, and making them routine. IWCs introduce the activities by involving other people, including children, in felling, cutting, or transporting logs so that they understand the forest management system. Through this process, IWCs inculcate the value that what they do is normal and acceptable. Every new resident is introduced to this value, including new SFC officials, who learn from their seniors. It took only five to six months to convince new officials, especially those from other districts, that what they did was acceptable. If SFC had a new leader, they would reduce the frequency of taking wood from the forest until they were sure that the new leader could accept what they did. The next pillar is rationalization. Rationalization refers to efforts to defend the (wrong) actions, or at least to make other parties see the actions as acceptable or not so harmful. One of the crucial justifications is community empowerment under social forest management: it is framed as a form of compensation to forest farmer group members whom SFC employs in forest management at low wages. Community involvement strengthens the poverty-alleviation justification, so corrupt practices can be socially accepted. The labourers who work in forest management are part-time rather than full-time workers, so their wages are below the government's minimum standard. Compared to other sectors, such as farm labour, the relatively low wages in the forest labour sector are expected to be compensated by the right to take forest products below the auction standard, especially firewood. In practice, however, the workers take firewood and other wood to sell to furniture entrepreneurs, in return for a certain amount of tribute to the officials. The descriptive expression used to validate such corrupt activities is "manggulo jati sak kuatmu nek mung kanggo kayu bakar," or "take teak as much as you can if you only use it for firewood," as if corrupt behavior were part of daily life and routine. It is an elegant practice, since local officials rarely communicate directly with workers, yet each side knows the other's rights and obligations once the practice of taking wood is completed.
Once institutionalization and rationalization have occurred, socialization follows. Socialization refers to actions that spread the deviant behavior to the public or between generations, so that it is accepted as normal behavior; it is the system that guarantees values and beliefs are transferred well among the people. Socialization consists of three stages: cooptation, incrementalism, and compromise. In the cooptation stage, new members are continuously instilled with corrupt values by other group members, with no competing values offered: norms, values, and beliefs are introduced and inculcated into new members of the IWCs and SFC. Besides vertical communication on forest management, corrupt activities are also socialized between generations in the households of farmer group members, from childhood. With farmer group membership inherited from generation to generation, values are transmitted through this organization, allowing corrupt habits to be naturally socialized between generations. Since individuals can access forest resources only if they join farmer groups, they are inevitably co-opted by the situation, known as the "cooptation stage" (Prabowo & Cooper, 2016), in which corrupt values are passed on in such a way that all group members become accustomed to them and the corruption becomes sustainable. The next stage is incrementalism. The instilled values attached to the forest farmer groups' daily activities become more entrenched. Whereas cooptation deals with accepting existing values, in the incrementalism stage the corruption begins to take new forms. The community even encourages young people to enter the forest if they need money for snacks or cigarettes. They say, "kono mlebu kono nek pingin dapat uang ojo dolan ae," or "go there (to the forest) if you want pocket money, stop playing around." In the context of normalizing corruption, this is an effort to integrate corrupt values into daily activities so that the local communities' illegal access to teak wood appears normal. Once a person has been directed toward corruption, the final stage of social compromise is reached, with corruption repeated continuously on a broader scale. At the research location, local communities are usually involved in almost all forms of corruption, ranging from wood destruction and illegal wood harvesting to under-reporting the number of logs. Table 3 illustrates the legitimacy and types of illegality in teak forests.

Conclusion and Recommendation

Marginal teak forests in Java, or at least in Lemah Gurih, were not accidental but seem to be "intentional," in order to support illegal practices. Although we cannot conclude definitively that the situation is intentional, the findings direct us that way. Marginal forests produce low-quality logs that nevertheless fetch high prices in the local market, leading to corrupt practices by forest managers. The corruption started with SFC officials abusing their authority by providing opportunities for the poor, who then became IWCs, to access forest wood and sell it to furniture businessmen. Simultaneously, SFC officials take advantage of the permits they give to the community to take wood products from the forest illegally, in various forms of circumvention to avoid formal legal snares. Community empowerment is the stated main reason for allowing people to access the forest, even though these activities are invisibly manipulated to benefit forestry officials.
The reason is that field staff are often burdened with various operational costs, and treating these payments as formal or informal financing is also a way of avoiding stigma, so that internal corrupt practices are tolerated by the structure above (Gorsira et al., 2018; Prabowo & Cooper, 2016). The poor, positioned as the black sheep blamed for the forests becoming marginal, are therefore invited by forest managers to become partners in taking unilateral advantage. Poor people are more vulnerable to bribery because they are highly dependent on the government's various social services (Peiffer & Rose, 2018). Illegal practices caused by corruption vary. The most common is damaging logs so that they do not fulfill the auction standard. This finding supports Søreide (2007), who stated that harvesters intentionally girdled or ring-barked trees in order to harvest them once they were dead. Another pattern is that field forest officials record fewer logs at harvest time so that the wood does not go to auction. The case in Sumber Gurih showed that trees that did not fulfill the auction standard could be sold on the public market; as such, the officials gained personal benefit from the money received from selling the logs. This finding supports Huberts et al. (2006), who observed that corruption comprises various patterns of misuse and manipulation of information, with officials under-reporting the amount of wood and claiming the remaining wood as their personal property. Although this corruption is only small-scale and involves only field officials, the pattern is so pervasive and widespread that it can no longer be categorized as petty corruption but is better described as rooted, systematic corruption. Sundstrom (2016a) and Palmer (2005) call this situation institutionalized forest corruption, where corruption is considered normal within the social system: what Prabowo and Cooper (2016) refer to as a rationalization process to gain legitimacy, or what Robbins (2000) calls rotten institutions. The community considered the corrupt practices of IWCs normal and acceptable. Although some villagers were caught by the police and imprisoned, and one person was still in jail when this article was written, their families did not consider it shameful. There is a growing perception that income from forests is only additional income, even though it contributes a considerable amount to the household economy; thus, these activities are eventually considered normal. Even collecting wood in the forest is considered work training for young people before they enter the real world of work; this activity is seen as entirely normal by the people around the forest. The perpetrators build collective acceptance around a process that violates the law, so that it comes to be considered reasonable, or what Schoeneborn and Homberg (2018) call "avoiding stigma" by normalizing the activity. Illegal practices have become an integral part of the community, and being an IWC is seen as a regular profession, just like being a farmer, a teacher, or a driver. This may be because the community sees the corrupt practices as minor, with little harmful impact, since they merely meet firewood needs. Hence, the behavior seems to attain strong legitimacy. The findings support Palmer (2015), Sundstrom (2016b), and Sundstrom (2016a) in that continuous illegal practices will gradually be accepted as normal by the community.
The phenomenon of corruption in the marginal teak forest was complex, involving actors ranging from field officials and SFC officials to local communities, stolen wood collectors, and furniture businessmen; it cannot be understood with an economic or political approach alone. By keeping the forest marginal, the forest products enter the informal trade system, and the income does not go directly into the company's account. The practices are rooted in communities and involve various parties, complete with almost perfect legitimacy, ongoing intergenerational socialization, and institutionalization. Prabowo and Cooper (2016) refer to this as normalization: the process whereby corrupt activities become socialized in the community, gain strong legitimacy, and are institutionalized through continuous habituation until they become normal activities. Simultaneously, the practice of leaving teak forests marginal can be seen from the ecological side as stemming from the limitations of natural resources, and as a condition that is "maintained" so that poor forests can be used as a source of corruption (Peluso, 1992a; Rahut, 2016; Widianingsih, 2016). Unlike abundant forests, poor forests produce a great deal of wood that falls below company standards, and this yield is considered waste. It is not included in the company's account but ends up in the field officials' accounts. SFC's conservatism in setting wood standards oriented to the global market's needs, without adjusting to the increasing local market demand for teak wood, triggered corruption. The most reasonable way to reduce corrupt behavior is to change the teak wood standards to accommodate local markets, so that low-quality teak must also be recorded in the auction process. This would lead to more complete wood standards based on local market needs, not just global markets. Besides, to cut the dependency cycle, the social forestry relationship should no longer be one of interdependence but rather a professional relationship in which forest workers are paid according to the labor market, not compensated with access to forest resources. Thus far, this dependency relationship has given the SFC a reason to employ workers with illegal compensation, i.e., by providing access to take teak from the forest illegally. Further research and management recommendations to reduce corrupt behavior concern improving forest management in wood administration practices: a review of wood harvesting administration, implementation of wood quality standards, and a change in the social relations between SFC and local communities toward a more formal relationship.
Sample size re-estimation in crossover trials: application to the AIM HY-INFORM study

Background

Crossover designs are commonly utilised in randomised controlled trials investigating treatments for long-term chronic illnesses. One problem with this design is that its inherent repeated measures necessitate an estimate of the within-person standard deviation (SD) to perform a sample size calculation, and such an estimate is rarely available at the design stage of a trial. Interim sample size re-estimation designs can help alleviate this issue by adapting the sample size mid-way through the trial, using accrued information in a statistically robust way.

Methods

The AIM HY-INFORM study is part of the Informative Markers in Hypertension (AIM HY) Programme and comprises two crossover trials, each with a planned recruitment of 600 participants. The objective of the study is to test whether blood pressure response to first-line antihypertensive treatment depends on ethnicity. An interim analysis is planned to reassess the assumptions of the planned sample size for the study. The aims of this paper are: (1) to provide a formula for sample size re-estimation in both crossover trials; and (2) to present a simulation study of the planned interim analysis to investigate alternative within-person SDs to that assumed.

Results

The AIM HY-INFORM protocol sample size calculation fixes the within-person SD at 8 mmHg, giving > 90% power for a primary treatment effect of 4 mmHg. Using the method developed here and simulating the interim sample size reassessment, if we were to see a larger within-person SD of 9 mmHg at interim, 640 participants would be required to achieve 90% power 90% of the time in the three-period three-treatment design. Similarly, in the four-period four-treatment crossover design, 602 participants would be required.

Conclusions

The formulas presented here provide a method for re-estimating the sample size in crossover trials. In the context of the AIM HY-INFORM study, simulating the interim analysis allows us to explore the consequences of a possible increase in the within-person SD from that assumed. Simulations show that without increasing the planned sample size of 600 participants, we can reasonably still expect to achieve 80% power with a small increase in the within-person SD from that assumed.

Trial registration

ClinicalTrials.gov, NCT02847338. Registered on 28 July 2016.

Background

Randomised crossover trials are a well-established design for long-term chronic illnesses such as hypertension [1]. The UK hypertension NICE guidance (CG127) stratifies hypertension treatment by age and self-defined ethnicity (SDE), with guidelines adopting a 'black versus white' approach [2]. The problems with this stratification include a lack of data from UK populations supporting the current SDE stratification and no reference to South Asians, the largest ethnic minority group in the UK [2]. Consequently, the primary objective of the AIM HY-INFORM study is to determine whether the response to existing first-line antihypertensive drugs differs by ethnic group (white British, black African/African Caribbean or South Asian) for participants who are newly diagnosed or established hypertensives. The AIM HY-INFORM study comprises two open-label, randomised crossover trials: one three-period three-treatment (monotherapy) trial for participants newly diagnosed with hypertension and one four-period four-treatment (dual therapy) trial for participants with existing hypertension.
The primary outcome is systolic blood pressure (SBP) in mmHg; linear mixed models are the preferred method of analysis for a crossover design with a continuous outcome variable [1]. Repeated measurements of SBP taken from the same participant are correlated, and this correlation needs to be accounted for in sample size calculations. That is, sample size estimation for any repeated measures design requires an estimate of the within-person standard deviation (SD). Taking estimates of the within-person SD from other studies can be unreliable due to differences in the study population and participant attributes, instruments and measurement techniques, or other background conditions, which can result in trials that are either under- or over-powered [3, 4]. With no reliable prior estimates of the within-person SD available for the AIM HY-INFORM study, in particular for South Asian ethnicities, calculating the sample size required to ensure the desired power to detect a single treatment-by-ethnic interaction is challenging. To address this issue in many repeated measures contexts, sample size re-estimation designs have been considered [3, 5-7]. However, there is little directly applicable work on sample size re-estimation at interim for crossover designs. Zucker and Denne describe a strategy to deal with the unknown within-person SD in a two-stage, repeated measures design that examines the accrued data at an interim point, obtaining an estimate of the within-person SD. They then use this estimate to update the covariance parameter in the linear mixed model and modify the sample size required to ensure the trial has sufficient power [3]. Moreover, a collection of papers has addressed sample size re-estimation in bioequivalence trials using a two-period two-treatment crossover design [8-11], while more recently, methodology for blinded and unblinded sample size re-estimation in multi-treatment crossover trials balanced for period was described [12]. None of these papers, however, directly allows for re-estimation in the context of the AIM HY-INFORM study, for reasons that will be described shortly. Here, we present the framework for sample size re-estimation in both our 3 × 3 (monotherapy) and 4 × 4 (dual therapy) settings. Precisely, the study design and models used are described, along with the methods developed for re-estimation of the required sample size. The AIM HY-INFORM protocol specifies a planned interim analysis: after 50 individuals have completed their treatment sequence, the sample size calculations for both the mono- and dual-therapy treatment rotations will be recalculated using a mid-trial estimate of the within-person SD. Therefore, following our initial descriptions, we present results from a simulation study carried out ahead of trial recruitment, with the aim of simulating the interim analysis to explore the effect of a larger within-person SD than that assumed in the protocol.

Study design

AIM HY-INFORM is a multicentre, prospective study comprising two randomised, open-label crossover trials (three-period three-treatment monotherapy and four-period four-treatment dual therapy) in a multi-ethnic cohort of hypertensive participants, where each study requires separate randomisation to treatment sequences [2].
Participants in both trials self-identify into one of the following three ethnic groups (SDE): White (white British, white Irish or any other white background); Black or black British (black Caribbean, black African or any other black background); Asian or Asian British (Asian Indian, Asian Pakistani, Asian Bangladeshi or any other South Asian background). The monotherapy study is a 24-week three-period three-treatment crossover trial of newly diagnosed hypertensives with Ambulatory Blood Pressure Monitoring (ABPM) ≥ 135/85 mmHg. After initial screening and baseline measurement collection, participants are randomised with equal allocation to one of six sequences of treatments from two three-period three-treatment Latin square designs: ABC; ACB; BAC; BCA; CAB; and CBA [13]. Here, treatment A is 1-2 weeks of Amlodipine 5 mg followed by 6-7 weeks of Amlodipine 10 mg, treatment B is 1-2 weeks of Lisinopril 10 mg followed by 6-7 weeks of Lisinopril 20 mg, and treatment C is approximately eight weeks of 25 mg Chlortalidone (Fig. 1). The dual-therapy study is a 32-week four-period four-treatment crossover trial of established hypertensives with ABPM > 135/85 and < 200/110. After initial screening and baseline measurement collection, participants are randomised with equal allocation to one of four sequences of treatments from a four-treatment four-period Williams square design: ABDC; BCAD; CDBA; and DACB. There are 24 possible Latin squares for a four-treatment crossover design; the design used here is one of six special cases of the Latin square design which are balanced for first-order carry-over and are known as Williams squares [13, 14]. For participants on dual therapy, treatment A is eight weeks of Amlodipine 5 mg and Lisinopril 20 mg, treatment B is eight weeks of Amlodipine 5 mg and Chlortalidone 25 mg, treatment C is eight weeks of Lisinopril 20 mg and Chlortalidone 25 mg, and treatment D is eight weeks of Amiloride 10 mg and Chlortalidone 25 mg (Fig. 2). The two randomised crossover trials (monotherapy and dual therapy) are open-label and require separate randomisation to treatment sequence. To control for balance, permuted blocks within strata are implemented with allocation ratios of 1:1:1:1:1:1 for the monotherapy trial and 1:1:1:1 for the dual-therapy trial. The monotherapy and dual-therapy trials are distinct and analysed using separate linear mixed models. The primary outcome of both trials is seated automated office SBP in mmHg, as measured eight weeks (± 4 days) after receiving each treatment. The primary objective of the trials is to test for a significant treatment-by-ethnic group interaction. In the absence of an available estimate of the within-participant SD for the trial participants, the protocol estimates a sample size of 600 participants assuming a fixed within-person SD of 8 mmHg, based on previous clinical trial data in people representative of the general population with either high normal blood pressure or mild hypertension [2]. More precisely, for a within-person SD of 8 mmHg and a single treatment-by-ethnic interaction of 3 mmHg, with all other interactions being 0 mmHg, the protocol outlines that a sample size of 600 participants produces 81.3% power to detect treatment-by-ethnic interactions using a global test of interaction at a 5% significance level. For the same fixed within-person SD of 8 mmHg and a larger single treatment-by-ethnic interaction of 4 mmHg, a sample size of 600 participants would give 98% power to detect a single treatment-by-ethnic interaction.
A 10% over-recruitment allows for loss to follow-up, resulting in 220 planned enrolments from each ethnic group, for each trial [3]. To check the assumption that the within-person SD is 8 mmHg, an interim analysis is planned after approximately 50 participants have completed their treatment sequence. The aim of the interim analysis is twofold: (1) to obtain an estimate of the within-participant SD from trial participants; and (2) to use this estimate to calculate the sample size required for either 80% or 90% power to detect a treatment-by-ethnic interaction using a global test of interaction at a 5% significance level. The aim here is to describe the method for the sample size re-estimation and present results from a simulation study carried out ahead of the interim analysis. Sample size calculations for the protocol, along with the simulations and the sample size and power calculations for the interim analysis, were carried out using Stata Statistical Software: Release 14 (StataCorp LP, College Station, TX, USA).

Models and sample size calculations

Sample size re-estimation in the three-period three-treatment monotherapy crossover

The linear mixed effects model for the three-period three-treatment crossover study has fixed effects for treatment, period, ethnic group and treatment-by-ethnic group interactions. Additionally, subject is included as a random effect. This is our unrestricted model. We compare this unrestricted model with a nested restricted model that does not contain the treatment-by-ethnic group interaction terms. Restricted model, assuming n participants are recruited in total:

$$ y_{ijkl} = \mu + \tau_{d(j,k)} + \pi_j + e_l + s_{ikl} + \epsilon_{ijkl}. $$

Unrestricted model:

$$ y_{ijkl} = \mu + \tau_{d(j,k)} + \pi_j + e_l + \theta_{d(j,k)l} + s_{ikl} + \epsilon_{ijkl}. $$

Here i = 1, …, n/18 indicates a particular individual, j = 1, 2, 3 indicates the time period, k = 1, …, 6 indicates the sequence to which a particular individual was allocated, and l = 1, 2, 3 indicates which of the three ethnicities a particular individual self-defined as. That is, i, k and l together completely prescribe a particular individual in the trial (there are n unique combinations of these three indices); s_{ikl} is a random subject effect and ε_{ijkl} a residual (within-person) error term; μ is an intercept term (the mean of the values y_{i1k1}); τ_{d(j,k)} is the direct effect of the treatment administered to a participant on sequence k in period j, that is, d(j, k) ∈ {A, B, C}. For identifiability purposes, we set τ_A = 0; π_j is a fixed effect for period, with π_1 = 0 for identifiability; e_l is a fixed effect for ethnic group, with e_1 = 0 for identifiability; θ_{d(j,k)l} is a fixed interaction effect for treatment d(j, k) by ethnic group l. For identifiability, we set θ_{Al} = 0 for l = 1, 2, 3 and θ_{B1} = θ_{C1} = 0. We perform a global Wald test to see if the unrestricted model gives a significantly better fit than the restricted model, in order to test the primary hypothesis of whether there is an interaction between treatment and ethnic group. To perform sample size re-estimation, we require the sampling distribution of a suitable test statistic under the null and alternative hypotheses.
Precisely, setting θ = (θ_B2, θ_B3, θ_C2, θ_C3)ᵀ, we test the following null hypothesis of no treatment-by-ethnic interactions against the following alternative:

$$ H_0: \theta = 0; \quad H_1: \theta \neq 0. $$

With θ̂ denoting our estimates of the interaction terms, computed using conventional maximum likelihood estimation, we can test H_0 using the following test statistic:

$$ t = \tfrac{1}{4}\, \hat{\theta}^\top \widehat{\mathrm{Cov}}(\hat{\theta})^{-1} \hat{\theta}. $$

Moreover, we power our trial for an alternative in which there is a single non-zero treatment-by-ethnic interaction, θ = δ = (δ, 0, 0, 0)ᵀ. The complexity of performing a hypothesis test in this setting, either in a fixed design or following sample size re-estimation, arises from the fact that the sampling distribution of T, the random unknown value of t, is in general complex to compute. Explicitly, while the numerator degrees of freedom in a suitable F-test would be 4, the denominator degrees of freedom are difficult to assign. Kenward and Roger [15] provided a comprehensive solution to this problem for fixed sample designs by computing the exact denominator degrees of freedom, but unfortunately their approach does not lend itself naturally to sample size re-estimation, as the degrees of freedom specification procedure is data-dependent. Accordingly, Grayling et al. [12] specified the denominator degrees of freedom as that of a corresponding multi-level single-stage ANOVA design. For our unrestricted model this would designate the denominator degrees of freedom, ν, for sample size n, as ν = 3n − n − 10 = 2n − 10. Here, the 3n term arises as the total number of measurements accrued, while 1 degree of freedom is subtracted for each participant, and for each fixed effect in the model. Thus, at the interim we suppose that we would reject H_0 at the end of the trial when t > F⁻¹(1 − α, 0, 4, 2n − 10), where F⁻¹(q, 0, a, b) is the 100q-th quantile of an F(0, a, b)-distribution (a central F-distribution on a and b degrees of freedom). Combining this with a suitable non-central F-distribution under the chosen alternative at which the trial is to be powered, interim re-estimation can then be performed. Such an approach, however, was found by Grayling et al. [12] to, in many circumstances, lead to a notable inflation of the type-I error rate, and to frequently provide power slightly below the desired level under the specified alternative. Therefore, they discussed the utility of a potential α-adjustment procedure, of using the above methodology for interim re-estimation but performing the final hypothesis test using the method of Kenward and Roger [15], and they explored the advantages of a sample size inflation factor. A problem with these adjustments as individual amendments to the basic re-estimation procedure described above, however, is that predicting their impact on both the type-I error rate and the power can be difficult. Consequently, here, desiring to accurately control the type-I error rate while maintaining a high level of power, we make a heuristic conservative adjustment to the above degrees of freedom based on Hotelling's T² distribution [16]. Precisely, for x_1, …, x_n ~ N_p(φ, Ω), the test statistic

$$ t^2 = (\bar{x} - \phi)^\top \widehat{\mathrm{Cov}}(\bar{x})^{-1} (\bar{x} - \phi) \sim T^2(p, n - 1), $$

Hotelling's T² distribution on p and n − 1 degrees of freedom. We can work with this distribution using standard statistical software via the following relationship: if t² ~ T²(p, m), then

$$ \frac{m - p + 1}{pm}\, t^2 \sim F(p, m - p + 1). $$

In our case, we therefore suppose H_0 will be rejected when

$$ t > \frac{\nu}{\nu - 3} F^{-1}(1 - \alpha, 0, 4, \nu - 3), $$

for ν = 2n − 10, and choose the required sample size as the minimal n at which the power under the chosen alternative, evaluated using the corresponding non-central F-distribution, reaches the desired level. Such a search is easy to perform using interval bisection over the discrete n. The final problem is therefore to specify Ĉov(θ̂); as outlined in the Appendix, this is taken as the θ̂ block of the model-implied Cov(β̂), with σ̂²_e, the interim estimate of the within-person variance computed using REML (for the reasons outlined below), in place of σ²_e.
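To make the search concrete, below is a minimal Python sketch of the re-estimation rule just described (the trial's production code is in Stata and Matlab; see the Additional files). It builds per-subject design matrices in the spirit of the Appendix, computes the model-implied Cov(θ̂), and returns the smallest n, at or above the protocol minimum of 600, meeting a power target under the Hotelling-adjusted rejection rule. The column ordering, the assumption of exactly n/18 subjects per sequence-by-ethnicity cell, and the non-central F approximation to the power of the Wald test are our illustrative assumptions, not specifications taken from the protocol.

```python
import numpy as np
from scipy.stats import f, ncf

SEQS = ["ABC", "ACB", "BAC", "BCA", "CAB", "CBA"]  # six monotherapy sequences

def design_block(seq, eth):
    # 3 x 11 fixed-effects design matrix for one subject on sequence `seq`,
    # ethnicity `eth` in {0, 1, 2}. Assumed column order: mu, tau_B, tau_C,
    # pi_2, pi_3, e_2, e_3, theta_B2, theta_B3, theta_C2, theta_C3.
    X = np.zeros((3, 11))
    for j, trt in enumerate(seq):
        X[j, 0] = 1.0                              # intercept mu
        if trt == "B": X[j, 1] = 1.0               # tau_B
        if trt == "C": X[j, 2] = 1.0               # tau_C
        if j >= 1: X[j, 2 + j] = 1.0               # pi_2 or pi_3
        if eth >= 1:
            X[j, 4 + eth] = 1.0                    # e_2 or e_3
            if trt == "B": X[j, 6 + eth] = 1.0     # theta_B2 or theta_B3
            if trt == "C": X[j, 8 + eth] = 1.0     # theta_C2 or theta_C3
    return X

def cov_theta(n, sigma_e, sigma_b):
    # Model-implied Cov(theta_hat), assuming n/18 subjects per
    # sequence-by-ethnicity cell; V is one subject's 3 x 3 covariance.
    V = sigma_e ** 2 * np.eye(3) + sigma_b ** 2 * np.ones((3, 3))
    Vinv = np.linalg.inv(V)
    info = np.zeros((11, 11))
    for seq in SEQS:
        for eth in range(3):
            X = design_block(seq, eth)
            info += (n / 18) * X.T @ Vinv @ X
    return np.linalg.inv(info)[7:, 7:]             # 4 x 4 block for theta

def required_n(sigma_e_hat, sigma_b=10.0, delta=4.0, alpha=0.05,
               power=0.9, n_min=600):
    # Smallest n >= n_min such that the Hotelling-adjusted global Wald test,
    # rejecting when t > (nu / (nu - 3)) F^{-1}(1 - alpha; 4, nu - 3) with
    # nu = 2n - 10, achieves the target power at theta = (delta, 0, 0, 0).
    d = np.array([delta, 0.0, 0.0, 0.0])
    n = n_min
    while True:
        nu = 2 * n - 10
        lam = d @ np.linalg.inv(cov_theta(n, sigma_e_hat, sigma_b)) @ d
        # Approximate the scaled statistic by non-central F(4, nu - 3, lam)
        pwr = ncf.sf(f.ppf(1 - alpha, 4, nu - 3), 4, nu - 3, lam)
        if pwr >= power:
            return n
        n += 1

print(required_n(sigma_e_hat=9.0))  # e.g. an interim SD estimate of 9 mmHg
```

Feeding in repeated draws of the interim estimate σ̂_e, rather than a single value, yields a distribution of re-estimated sample sizes of the kind summarised in the simulation results below. Note also that the θ̂ block returned by cov_theta should be unaffected by σ_b here, consistent with the invariance to the between-person SD anticipated in the simulation design.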
With this, our methodology for re-estimating the required sample size in the 3 × 3 trial is complete. However, we must still specify how the final analysis will be performed. Here, we use REML estimation, as it takes the uncertainty in the model's fixed parameters into account when estimating the random parameters, in theory leading to better estimates of the variance components, with reduced bias when the number of groups is small [17]. Additionally, we use the Kenward-Roger approximation [15] to compute the denominator degrees of freedom in the final F-test. These choices are again made in order to limit the possibility of observing inflation of the type-I error rate. As AIM HY-INFORM is a confirmatory trial of treatment differences between different ethnic groups, inflation of the type-I error rate should be avoided.

Sample size re-estimation in the four-period four-treatment dual-therapy crossover

The dual-therapy trial is a four-period four-treatment crossover. It will compare the same restricted and unrestricted models as for the three-period three-treatment monotherapy trial. However, we now have i = 1, …, n/12, j = 1, …, 4, k = 1, …, 4, and l = 1, 2, 3, with d(j, k) ∈ {A, B, C, D}. For identifiability, we set τ_A = π_1 = e_1 = 0, θ_{Al} = 0 for l = 1, 2, 3, and θ_{B1} = θ_{C1} = θ_{D1} = 0. Here, setting θ = (θ_B2, θ_B3, θ_C2, θ_C3, θ_D2, θ_D3)ᵀ, we again test the hypotheses H_0: θ = 0 against H_1: θ ≠ 0. Applying the Hotelling's T²-based adjustment described above, in this case at interim we suppose H_0 will be rejected when

$$ t > \frac{\nu}{\nu - 5} F^{-1}(1 - \alpha, 0, 6, \nu - 5), $$

for ν = 4n − n − 14 = 3n − 14, and choose the required sample size by determining the minimal n such that the desired power is attained under the alternative with δ = (δ, 0, 0, 0, 0, 0)ᵀ; the specification of Ĉov(θ̂) is justified in the Appendix. Finally, as for the 3 × 3 trial, the final analysis is performed using REML estimation and the Kenward-Roger correction. At the interim, 50 participants have completed either three treatment periods, if they are on the monotherapy trial, or four treatment periods, if they are on the dual-therapy trial. Inherently, we have more data from the four-treatment crossover and may anticipate better performance in this setting. It is important to note that at this stage we are not concerned with estimating treatment effects; we simply want the estimate, σ̂²_e, of the within-person variance required for the sample size calculations outlined above.

Simulation design

To consider the variation in the estimate of the within-person SD, a simulation study was carried out to investigate the effect on the requisite final sample size of different treatment effect and within-person SD scenarios, based on the sample size re-estimation calculations described above. In the protocol, the sample size calculation fixes the SD at 8 mmHg, giving power equal to 98% for a single 4 mmHg interaction effect with 600 participants, and 81.3% power with 600 participants and a single 3 mmHg interaction effect. Two separate simulation studies were carried out, assuming participants are on either the monotherapy or the dual-therapy treatment regime, in all cases assuming that each participant has received three (monotherapy) or four (dual-therapy) treatments and that there are no missing values. Participants are randomly generated with approximately equal numbers on each of the six monotherapy treatment sequences (ABC, ACB, BAC, BCA, CAB and CBA) and on each of the four dual-therapy treatment sequences (ABDC, BCAD, CDBA and DACB).
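As a sketch of the data-generation and interim-estimation steps just described, the following Python code simulates roughly 50 monotherapy completers and extracts σ̂²_e from a REML mixed-model fit. This is illustrative only: the trial's simulations were run in Stata, the mean and the treatment and period effects below are hypothetical placeholders, the data are generated with no treatment-by-ethnic interaction, and statsmodels does not offer the Kenward-Roger correction, which is immaterial here because only the residual variance estimate is needed at interim.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2016)

SEQS = ["ABC", "ACB", "BAC", "BCA", "CAB", "CBA"]
SIGMA_E, SIGMA_B = 8.0, 10.0              # true within/between-person SDs
MU = 140.0                                # hypothetical mean SBP (mmHg)
TAU = {"A": 0.0, "B": -2.0, "C": -1.0}    # hypothetical treatment effects
PI = [0.0, 1.0, 0.5]                      # hypothetical period effects

# Roughly 50 interim completers, cycling sequences and ethnic groups
rows = []
for i in range(50):
    seq, eth = SEQS[i % 6], (i // 6) % 3
    b = rng.normal(0.0, SIGMA_B)          # random subject effect
    for j, trt in enumerate(seq):
        y = MU + TAU[trt] + PI[j] + b + rng.normal(0.0, SIGMA_E)
        rows.append(dict(subject=i, period=j + 1, treat=trt, eth=eth, sbp=y))
df = pd.DataFrame(rows)

# REML fit of the unrestricted model with a random intercept per subject
fit = smf.mixedlm("sbp ~ C(treat) * C(eth) + C(period)",
                  df, groups=df["subject"]).fit(reml=True)
sigma_e_hat = np.sqrt(fit.scale)          # interim within-person SD estimate
print(f"Interim within-person SD estimate: {sigma_e_hat:.2f} mmHg")
```

The resulting σ̂_e would then be passed to the sample size search sketched earlier; repeating the whole procedure over many replicates gives the sampling distribution of the re-estimated n.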
We consider scenarios in which 80% or 90% power to detect treatment-by-ethnic interactions is desired, using our outlined global test of interaction at a 5% significance level. What is important to realise is that in sample size calculations the within-person SD is typically fixed. In our simulation study, we set the desired power, the strength of a single treatment-by-ethnic interaction and a within-person SD, and solve for the sample size across numerous replicates. Accordingly, of principal interest are the distribution of the interim estimated values of the within-participant SD and the distribution of the final required sample sizes. The factors varied and held constant in these simulations are outlined in Table 1, resulting in 2 × 2 = 4 scenarios to assess for each of the two trials, with 1000 simulations carried out for each scenario-trial combination. We assume that there is no carryover effect and no treatment-by-period interaction. The between-person SD is held constant at 10 mmHg; we do not need to consider varying this, as the procedures should be invariant to σ_b with a sample size of 50. Our methodology for re-estimating the sample size requires a minimum acceptable sample size. In the protocol, a sample size of 600 is estimated for each of the monotherapy and dual-therapy trials, based on a within-person SD of 8 mmHg. As there is no plan to reduce the sample size at interim, we therefore set a minimum sample size of 600 for all simulations and scenarios investigated. Matlab and Stata code for the sample size calculations can be found in Additional files 1, 2, 3 and 4.

Simulation of the interim analysis and re-estimation of the sample size: monotherapy

In the trial protocol, the sample size calculation fixed the SD at 8 mmHg, giving > 80% power with 600 participants. In the simulations, sampling variation means that we have a range of possible estimates of the within-person SD: for a within-person SD of 9 mmHg and a single treatment-by-ethnic interaction of 4 mmHg, 640 participants would be sufficient 90% of the time to give 90% power to detect the primary effect (Fig. 3). A larger within-person SD of 10 mmHg would mean that 705 participants would be sufficient 75% of the time to give 90% power to detect the same planned treatment effect (Fig. 3). Assuming a smaller single treatment-by-ethnic interaction of 3 mmHg and a 1 mmHg increase in the assumed within-person SD, a sample size of 797 participants would give 80% power to detect the primary hypothesis 75% of the time. A 2 mmHg increase in the within-person SD from that assumed in the protocol means that a sample size of 979 participants would give 80% power 75% of the time to detect a treatment effect of 3 mmHg (Fig. 3). As would be expected, smaller values of δ and larger values of σ_e imply larger required sample sizes. Figure 3 shows that for the scenario with σ_e = 9 mmHg and δ = 4 mmHg, more than the planned 600 participants are required because of the nature of sample size re-estimation and the methodology used here. The conservative approach adopted here to try to control the type-I error rate means we have to push the sample size up a little to keep the type-II error rate down. That is, the use of the Kenward-Roger and Hotelling adjustments pushes the power down compared to the methodology used for the initial estimate of 600, so it is inevitable that the simulations here produce more than 600 for the sample size re-estimation designs.
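This conservatism can be seen directly by comparing, at a candidate final sample size, the naive ANOVA-style F critical value with the Hotelling-adjusted one, as in the short Python sketch below (we assume n = 600 and the degrees-of-freedom formulas given earlier).

```python
from scipy.stats import f

alpha, n = 0.05, 600
for label, p, nu in [("3x3 monotherapy", 4, 2 * n - 10),
                     ("4x4 dual therapy", 6, 3 * n - 14)]:
    naive = f.ppf(1 - alpha, p, nu)                  # F(p, nu) critical value
    adjusted = nu / (nu - p + 1) * f.ppf(1 - alpha, p, nu - p + 1)
    print(f"{label}: naive {naive:.4f}, Hotelling-adjusted {adjusted:.4f}")
```

At these sample sizes the inflation of the critical value is small, but it acts in the conservative direction, which, together with the Kenward-Roger final analysis, is why the re-estimated sample sizes sit above the original 600.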
Simulation of the interim analysis and re-estimation of the sample size: dual therapy

For the dual-therapy trial, the sample sizes required for the different simulation scenarios are similar to those estimated for the monotherapy trial. For 90% power to detect a treatment effect of 4 mmHg, 90% of the time 602 participants would be sufficient when the within-person SD is 9 mmHg. A larger within-person SD of 10 mmHg would mean that 75% of the time 692 participants would be sufficient to give 90% power to detect the same treatment effect (Fig. 4). In both scenarios, the dual-therapy trial requires slightly fewer participants. Assuming a smaller single treatment-by-ethnic interaction of 3 mmHg, with a 1 mmHg increase in the assumed within-person SD, 75% of the time a sample size of 782 participants would give 80% power to detect the primary hypothesis. A 2 mmHg increase in within-person SD from that assumed in the protocol means that 75% of trials with a sample size of 966 participants would give 80% power to detect a planned treatment effect of 3 mmHg; again, in both scenarios, fewer participants are required than in the monotherapy trial, as a result of having more measurements overall in the dual-therapy trial.

In summary, for the same simulated treatment-by-ethnic interaction, an increase in within-person SD requires a larger sample size. For the same simulated within-person SD, a smaller planned treatment effect also requires a larger sample size, which is what would be expected from sample size calculation theory. The fact that the dual-therapy trial requires slightly fewer participants than the monotherapy trial when comparing like-for-like is a result of the increased number of denominator degrees of freedom in the sample size calculation for the four-period four-treatment compared with the three-period three-treatment crossover: 3n − 14 = 136 compared with 2n − 10 = 90 when n = 50. The increased number of denominator degrees of freedom in the four-period four-treatment compared with the three-period three-treatment crossover is in turn due to the increased number of observations per participant in the four-by-four crossover trial.

Discussion

A novel method for sample size re-estimation has been described for three-period three-treatment and four-period four-treatment crossover trials. Here we have dealt with a more complicated covariance matrix in a 3 × 3 and 4 × 4 randomised crossover setting that incorporates both a global test and allows for interaction terms in the linear mixed model. The simulation study allowed us to explore the outcome of a possible increase in the within-person SD from that assumed and used for the sample size calculations in the protocol ahead of trial recruitment and the interim analysis. As would be expected, an increase in the within-person SD or a smaller primary treatment effect would require a larger sample size. The fact that the dual-therapy trial requires slightly fewer participants than the monotherapy trial when all variables are like for like is a consequence of the increased degrees of freedom in the denominator of the F-test which is used in the sample size calculation.
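The degrees-of-freedom point can be illustrated directly: with the same number of participants, the 4 × 4 design's larger denominator df yields a smaller critical F value. A short sketch using the df formulas quoted in the text (note the two designs also differ in numerator df, 6 versus 4 interaction parameters):

```python
# Critical F values for the two designs at n = 50 participants.
from scipy.stats import f

n, alpha = 50, 0.05
print(f.ppf(1 - alpha, 4, 2 * n - 10))   # 3 x 3 trial: df2 = 90
print(f.ppf(1 - alpha, 6, 3 * n - 14))   # 4 x 4 trial: df2 = 136, smaller crit
```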
Fig. 4 Cumulative density function for re-estimated sample size assuming 80% or 90% power to detect treatment-by-ethnic interactions using a global test of interaction at the 5% significance level for participants on the dual-therapy treatment regime

The simulation of the interim sample size indicates that we can only realistically aim for 80% power in these scenarios without increasing the sample size above 600 participants.

Conclusions

The formulas presented here provide a means for re-estimating the sample size in both three-period three-treatment and four-period four-treatment crossover trials. In the context of the AIM HY-INFORM study, simulating the planned interim analysis allows us to explore the outcome of a possible increase in the within-person SD from that assumed in the protocol. Simulations show that, without increasing the planned sample size of 600 participants on each crossover trial, we can reasonably still expect to achieve 80% power with a small increase in the within-person SD from that assumed.

Additional file 2. Stata do file.
Additional file 3. Three-period three-treatment sample size calculation code (Stata ado file).
Additional file 4. Four-period four-treatment sample size calculation code (Stata ado file).

Appendix

Here, X_kl is the design matrix for a single individual on treatment k, of ethnicity l. Thus Cov(β̂) can be found by evaluating the above expression, which can be readily achieved computationally using a symbolic algebra package. Performing these calculations in Matlab, and extracting the subcomponent corresponding to θ̂, we identified the covariance expression used in our re-estimation procedure. Equivalent calculations for the 4 × 4 trial demonstrate the analogous result.
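The Appendix computation of Cov(β̂) via a symbolic algebra package can be mirrored outside Matlab. Below is a minimal sympy sketch for a single participant with compound-symmetric covariance V = σ²_e I + σ²_b J; the toy three-column design matrix is illustrative only, as the trial's full design also includes period, ethnicity and interaction columns.

```python
# Symbolic sketch of (X' V^{-1} X)^{-1} for one participant; the design
# matrix X (intercept plus two treatment contrasts) is a toy example.
import sympy as sp

se2, sb2 = sp.symbols('sigma_e^2 sigma_b^2', positive=True)
V = se2 * sp.eye(3) + sb2 * sp.ones(3, 3)       # compound-symmetric covariance
X = sp.Matrix([[1, 1, 0],                        # period 1: treatment B
               [1, 0, 1],                        # period 2: treatment C
               [1, 0, 0]])                       # period 3: treatment A (ref)
info = X.T * V.inv() * X                         # single-participant information
cov = info.inv().applyfunc(sp.simplify)
sp.pprint(cov)   # extract the subcomponent for the contrasts of interest
```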
Anti-smoking initiatives and current smoking among 19,643 adolescents in South Asia: findings from the Global Youth Tobacco Survey

Background: The cigarette smoking habit usually begins in adolescence. The developing countries in South Asia, like Pakistan, India, Bangladesh, and Nepal, where the largest segment of the population is comprised of adolescents, are more susceptible to the smoking epidemic and its consequences. Therefore, it is important to identify the association between anti-smoking initiatives and current smoking status in order to design effective interventions to curtail the smoking epidemic in this region.

Methods: This is a secondary analysis of national data from the Global Youth Tobacco Survey (GYTS) conducted in Pakistan (year 2003), India (year 2006), Bangladesh (year 2007), and Nepal (year 2007). GYTS is a school-based survey of students targeting adolescents of age 13-15 years. We examined the association of different ways of delivering anti-smoking messages with students' current smoking status.

Results: A total of 19,643 schoolchildren were included in this study. The prevalence of current smoking was 5.4%, with male predominance. No exposure to school teachings, family discussions regarding smoking hazards, or anti-smoking media messages was significantly associated with current smoking among male students. Participants who were deprived of family discussion regarding smoking hazards (girls: odds ratio (OR) 1.56, 95% confidence interval (CI) 0.84-2.89, p value 0.152; boys: OR 1.37, 95% CI 1.04-1.80, p value 0.025), those who had not seen media messages (girls: OR 2.89, 95% CI 1.58-5.28, p value <0.001; boys: OR 1.32, 95% CI 0.91-1.88, p value 0.134), and those who were not taught the harmful effects of smoking at school (girls: OR 2.00, 95% CI 0.95-4.21, p value 0.066; boys: OR 1.89, 95% CI 1.44-2.48, p value <0.001) had higher odds of being current smokers after multivariate adjustment.

Conclusion: School-going adolescents in South Asia (Pakistan, India, Nepal, and Bangladesh) who were not exposed to anti-tobacco media messages or were not taught about the harmful effects of smoking in school or at home had higher odds of being current smokers than their counterparts.

Background

Evidence suggests that the cigarette smoking habit usually begins before the attainment of adulthood [1], and adolescents, in particular, are more prone to develop nicotine dependence [2]. This finding is of immense concern for countries like Pakistan, India, Bangladesh, and Nepal, where cigarette smoking is highly prevalent among adolescents [3][4][5] and 26%-29% of national populations are composed of individuals aged 14 years or younger [6]. Here, the largest segment of the population encompasses school-going children, who are most susceptible to experimentation with smoking. In countries with such demographic patterns, future trends of smoking-attributable morbidity and mortality should be determined accounting for the current inclination towards cigarette smoking among adolescents [3]. The emerging smoking epidemic, along with its social, economic, and health consequences, needs to be controlled in order to achieve tobacco elimination among school-going teenagers. Although several studies have evaluated the effectiveness of various tobacco control initiatives, only a few have taken into account the efficiency of such programs in smoking initiation and control among schoolchildren. Evidence suggests that a targeted approach is more efficient in controlling this tobacco pandemic [7].
It is essential to determine which initiatives or programs are most appreciated by adolescents. In a resource-limited country, evidence is needed to prove the effectiveness of any anti-tobacco initiative before its implementation. Moreover, it is imperative to understand the particular socio-cultural context in this region, which may influence the youth's smoking behavior and hence may eventually affect their perception of and response to anti-smoking initiatives. Although South Asia is the most densely populated region of the globe, health provisions, and primary preventive efforts in particular, are neglected, and government-sponsored anti-smoking initiatives are scarce [8]. Unfortunately, limited research has focused on this issue in the South Asian region. Therefore, the aim of this study was to determine the association of different anti-smoking initiatives with current smoking patterns among school-going children in South Asia. We also aimed to examine any gender difference in relation to anti-smoking campaigns and their impact on smoking patterns. This will guide both amendments and formulation of health policy and legislation for tobacco control in the region.

South Asian region

South Asia is the most densely populated geographical region in the world, harboring about a quarter of the world's population. According to the United Nations geographical region classification, South Asia comprises Afghanistan, Bangladesh, Bhutan, India, Iran, Maldives, Nepal, Pakistan, and Sri Lanka. Prior to 1947, India, Pakistan, and Bangladesh were a single country, British India. Furthermore, the spoken language Hindi is well understood in these countries, leading to the dominance of Indian television and film media in this region. These factors, among various others, have resulted in shared societal norms in these countries. Moreover, they face common health challenges [9].

Study design and participants

The Global Youth Tobacco Survey (GYTS) is a globally standardized cross-sectional survey to monitor youth tobacco use and track key tobacco control indicators in order to assist countries in the design, implementation, and assessment of tobacco control initiatives. This is a secondary analysis of national data from GYTS conducted in Pakistan (year 2003), India (year 2006), Bangladesh (year 2007), and Nepal (year 2007). The GYTS was a school-based survey targeting 13- to 15-year-old students. A multistage sample design was used to obtain a representative sample of students: at the first stage, schools were selected with probability proportional to enrollment size; at the second stage, classes were randomly chosen and all students in the selected classes were eligible to participate. The country coordinators followed local procedures for obtaining consent and ethical review. Details about GYTS can be found at http://www.cdc.gov/tobacco/global/.
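As an illustration of the two-stage design just described, here is a hypothetical Python sketch: schools are drawn with probability proportional to enrollment, then whole classes are drawn at random within each selected school. The data structure and parameters are invented for the example, and the sketch draws schools with replacement, whereas real PPS designs typically sample without replacement.

```python
# Illustrative two-stage school/class selection (hypothetical structures).
import random

random.seed(0)

def select_sample(schools, n_schools=25, classes_per_school=2):
    """schools: list of dicts like {'name': ..., 'enrollment': int,
    'classes': [class IDs]} -- a made-up structure for this sketch."""
    weights = [s['enrollment'] for s in schools]           # PPS first stage
    chosen = random.choices(schools, weights=weights, k=n_schools)
    sample = []
    for school in chosen:
        k = min(classes_per_school, len(school['classes']))
        for cls in random.sample(school['classes'], k):    # second stage
            sample.append((school['name'], cls))
    return sample   # every student in each selected class is eligible
```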
Response rate

The response rates of schools and students were as follows: Pakistan (no response rates are provided on the Centers for Disease Control and Prevention (CDC) official website), India (96.7%, 82.3%), Bangladesh (100%, 88.9%), and Nepal (98.0%, 96.6%), respectively.

Questionnaire

A self-administered questionnaire containing both a standard set of survey questions (for all countries) and additional questions (country specific) was used with computer-scannable answer sheets. The GYTS questionnaire included data on the prevalence of cigarette and other tobacco use, initiation, susceptibility, perceptions, attitudes, access to tobacco products, exposure to secondhand smoke, school and media anti-smoking initiatives, advertisement, as well as basic demographic information. We included only those variables which were identical in the four selected countries. The following questions were asked to assess anti-smoking initiatives in different domains:

Home or family: 'Has anyone in your family discussed the harmful effects of smoking with you?' The responses for this question were 'Yes' or 'No'.

Media: 'During the past 30 days (one month), how many anti-smoking media messages (e.g., television, radio, billboards, posters, newspapers, magazines, movies) have you seen?' The responses for this question were 'A lot', 'Few', or 'None'.

Events: 'When you go to sports events, fairs, concerts, community events, or social gatherings, how often do you see anti-smoking messages?' The responses for this question were 'I never go to sports events', 'A lot', 'Sometimes', and 'Never'.

School: 'During this school year, did you discuss in any of your classes the reasons why people your age smoke?' The responses for this question were 'Yes', 'No', and 'Not sure'. 'During this school year, were you taught in any of your classes about the dangers of smoking?' The responses for this question were 'Yes', 'No', and 'Not sure'. 'How long ago did you last discuss smoking and health as a part of a lesson?' The responses for this question were 'Never', 'This year' (recoded as 'This term'), 'Last year', 'Two years ago', 'Three years ago', and 'More than three years ago' (the latter four all recoded as 'Previous terms'). The question 'During this school year, were you taught in any of your classes about the effects of smoking like it makes your teeth yellow, causes wrinkles, or makes you smell bad?' could not be included as it was not asked in Bangladesh.

We defined 'current smoker' as an adolescent who smoked cigarettes on 1 or more days in the past 30 days (1 month) and 'non-smoker' as one who did not smoke in the past 30 days. Responses to the questions were all precoded.

Data analysis

We obtained the dataset from the CDC website and analyzed it using SPSS version 16.0 (Chicago, IL, USA). Age was categorized into two groups (≤14 years and 15-17 years) based on UNICEF early and late adolescent age limits [6]. The association between each explanatory variable and smoking was explored initially through chi-square tests. Gender-stratified odds ratios (ORs) were calculated for exposure to different anti-smoking initiatives in relation to smoking status. Gender-based stratification was done for the logistic regression analysis because the literature reports gender-based differences in smoking prevalence [10]. We used the 'complex samples' option to carry out the analysis in SPSS for the multistage cluster sampling used in GYTS, accounting for country-specific PSU, stratum, and sample weight. A binary logistic regression model was used to estimate the odds ratios for the association between current smoking and anti-smoking messages and other explanatory variables. An alpha level of 0.05 was established as the criterion for statistical significance in all analyses.
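A rough Python analogue of this analysis is sketched below using statsmodels. The column names are hypothetical, and plain frequency-weighted GLM does not fully replicate SPSS's 'complex samples' variance estimation, which also accounts for strata and PSUs.

```python
# Sketch of gender-stratified, weighted logistic regression (hypothetical
# DataFrame columns); reports odds ratios with 95% CIs on the OR scale.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def weighted_logit_or(df, outcome='current_smoker',
                      predictors=('no_family_discussion',
                                  'no_media_messages',
                                  'not_taught_at_school'),
                      weight='sample_weight'):
    X = sm.add_constant(df[list(predictors)].astype(float))
    model = sm.GLM(df[outcome], X, family=sm.families.Binomial(),
                   freq_weights=df[weight])
    res = model.fit()
    ors = np.exp(res.params).rename('OR')    # exponentiate log-odds
    ci = np.exp(res.conf_int())              # 95% CI, OR scale
    return pd.concat([ors, ci], axis=1)

# boys = df[df.sex == 'male']; print(weighted_logit_or(boys))
```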
Total participants numbered 21,327; however, 1,684 participants were excluded due to missing data for one or more of the following variables (smoking status, 723; family discussion, 160; anti-smoking media messages, 177; anti-smoking messages at social gatherings, 89; taught about dangers of smoking, 160; why people of respondent's age smoke, 135; last discussed smoking as a part of a lesson, 102; age, 53; and gender, 85).

Results

The final analysis included the data of 19,643 individuals: Pakistan (3,455), India (11,157), Bangladesh (2,830), and Nepal (2,201). Overall, the median age of the sample was 14 years. The percentage of males was 51.2%, 57.4%, 50.6%, and 54.2% for Bangladesh, India, Nepal, and Pakistan, respectively. Of these, 1,056 (5.4%) were current cigarette smokers and 18,587 (94.6%) were non-smokers. Male gender and the age group of 15 to 17 years were significantly associated with current smoking (p values <0.001). In addition, no family discussion about the harmful effects of cigarette smoking (p value <0.001), seeing only a few anti-smoking media messages (p value <0.001), sometimes seeing anti-smoking messages at social gatherings or events (p value <0.001), no teaching about the dangers of smoking at school (p value <0.001), and having discussed smoking in previous terms (p value <0.001) were significantly associated with current cigarette smoking. Demographic characteristics of the study sample are described in Table 1.

On unadjusted logistic regression stratified analysis for females (Table 2), those who had seen no anti-smoking media messages as compared to those who had seen a lot (OR 2.99, 95% confidence interval (CI) 1.71-5.22, p value <0.001), those who had sometimes seen anti-smoking messages at social gatherings/events as compared to those who had seen a lot (OR 1.80, 95% CI 1.00-3.23, p value 0.047), those who were not taught the dangers of smoking at school as compared to those who were taught (OR 1.94, 95% CI 1.01-3.72, p value 0.045), and those with whom smoking was discussed as a part of a lesson in previous terms as compared to those with whom it was discussed in this term (OR 1.88, 95% CI 1.15-3.07, p value 0.011) had significantly higher odds of being current smokers. Moreover, current smoking was associated with increased but non-significant odds for age (OR 1.43, 95% CI 0.74-2.76, p value 0.275), no family discussion about the harmful effects of smoking (OR 1.46, 95% CI 0.84-2.56, p value 0.176), and seeing only a few anti-smoking media messages (OR 1.76, 95% CI 0.91-3.42, p value 0.092).

Unadjusted logistic regression stratified analysis for males (Table 2) shows that boys who had no family discussion about the harmful effects of smoking as compared to those who had such discussion (OR 1.39, 95% CI 1.04-1.88, p value 0.025), those who had seen only a few anti-smoking media messages as compared to those who had seen a lot (OR 1.97, 95% CI 1.42-2.74, p value <0.001), those who were not taught the dangers of smoking at school as compared to those who were taught (OR 1.77, 95% CI 1.30-2.40, p value <0.001), and those with whom smoking was discussed as a part of a lesson in previous terms as compared to those with whom it was discussed in this term (OR 1.66, 95% CI 1.20-2.30, p value 0.002) had significantly higher odds of being current smokers. Moreover, current smoking was associated with increased but non-significant odds for age (OR 1.21, 95% CI 0.81-1.78, p value 0.347) and not seeing anti-smoking media messages (OR 1.24, 95% CI 0.88-1.76, p value 0.210).
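As an aside on how such estimates are reported: an OR and its 95% CI map back to the fitted log-odds scale via OR = exp(b) and CI = exp(b ± 1.96·SE). A short sketch recovering the implied standard error from the boys' unadjusted family-discussion estimate above (agreement with the printed interval is up to rounding):

```python
# Recover the implied SE from a reported OR and 95% CI (boys, no family
# discussion: OR 1.39, 95% CI 1.04-1.88), then reconstruct the interval.
import math

or_, lo, hi = 1.39, 1.04, 1.88
b = math.log(or_)
se = (math.log(hi) - math.log(lo)) / (2 * 1.96)
print(f"b = {b:.3f}, SE = {se:.3f}")
print(f"CI: ({math.exp(b - 1.96 * se):.2f}, {math.exp(b + 1.96 * se):.2f})")
```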
Multivariate logistic regression analysis for females, after adjustment for age, family discussion about the harmful effects of smoking, frequency of anti-smoking messages seen on media, frequency of anti-smoking messages seen at social gatherings/events, being taught the dangers of smoking at school, and when smoking and health were last discussed as part of a lesson, showed that those with whom family did not discuss the harmful effects of smoking (OR 1.56, 95% CI 0.84-2.89, p value 0.152), those who had not seen anti-smoking media messages (OR 2.89, 95% CI 1.58-5.28, p value <0.001), and those who were not taught the dangers of smoking at school (OR 2.00, 95% CI 0.95-4.21, p value 0.066) had higher odds of being current smokers. Similarly, multivariate logistic regression analysis after adjustment for male students shows that those with whom family did not discuss the harmful effects of smoking as compared to those with whom it was discussed (OR 1.37, 95% CI 1.04-1.80, p value 0.025), those who had seen only a few anti-smoking media messages as compared to those who had seen a lot (OR 1.86, 95% CI 1.31-2.64, p value <0.001), those who were not taught about the dangers of smoking at school as compared to those who were taught (OR 1.89, 95% CI 1.44-2.48, p value <0.001), and those with whom smoking was discussed in previous terms as compared to those with whom it was discussed in this term had higher odds of being current smokers.

Discussion

In our study, the prevalence of current smoking was 5.4% in this South Asian region (Pakistan, India, Bangladesh, and Nepal), with male predominance. This estimate is lower than that in the African (9.2%), Western Pacific (6.5%), and European (17.9%) regions and the region of the Americas (17.5%), and comparable with that in the Eastern Mediterranean region (5.0%) and the previously reported prevalence for Southeast Asia (4.3%) [11]. The lower prevalence observed in our study could be due to underreporting of smoking by young people in this region, which has been indicated by a previous study [5]. Students' lack of awareness regarding tobacco-related health risks was associated with current smoking. Direct communication of smoking hazards to adolescents at school was more strongly associated with lower odds of current smoking than family discussions regarding the same at home. Exposure to anti-smoking media messages on television, radio, billboards, posters, newspapers, magazines, and movies was also associated with non-smoking. It has been reported that anti-smoking media messages, specifically on television, proved to be very effective in hindering smoking among young adolescents (12-13 years), whereas exposure to radio or outdoor advertisements did not produce any significant effect [12], but our data did not allow us to examine media channels separately. In contrast to anti-smoking media messages, exposure to anti-smoking messages at social gatherings/events such as fairs, concerts, sports, and community occasions showed no protective effect against smoking. Comparable results were observed in the Somaliland GYTS survey, which proposed that the content and mode of delivery of anti-smoking messages could impact the desired results [13]. Additionally, adolescents who never go to social gatherings or events were even less likely to smoke. This might be because students who were not socializing had less opportunity to smoke or be influenced by their peers. However, we are unable to provide any explanation of this finding due to methodological constraints, as we do not have information on family practices and parental supervision. The male predominance has also been observed in other studies conducted on school teenagers in this region [10]. Moreover, this difference continues at higher education levels such as colleges and universities [14] and even in the general population, irrespective of urban or rural areas of inhabitation [15].
This finding is in contrast with studies conducted outside South Asia, which either observed no gender difference or found a female preponderance [16]. However, it is important to note that the effect of gender on smoking predisposition has been unclear [13], and the socio-cultural factors associated with smoking may differ among regions. Moreover, females may be reluctant to reveal their smoking habit in our region due to cultural prohibition. In Western countries, smoking among females has become as acceptable as among males, whereas in this region it is still considered an objectionable practice for females [17]. Due to the strong social stigma attached to this habit, females may have underreported their smoking status. Previous studies have also reported that family-based prevention programs were inefficient in reducing smoking susceptibility rates among 11- to 14-year-olds, which indicates the need for further research into the dynamics of parent-child communication about smoking issues [18]. On the contrary, school-based anti-tobacco initiatives have been shown to be very effective, especially if teachers, who serve as role models, are discouraged from smoking within school premises [19]. Additionally, it was observed that school-based anti-smoking programs were ineffective if teachers continued to smoke in the presence of students [20]. Although tobacco control strategies need more improved approaches, such as involvement of the community and renewed policy-level interventions, in countries with several other competing health priorities and resource limitations, integration of tobacco control programs into the educational system could be the best way to achieve the desired results with minimal financial and infrastructural burden [21]. According to the Health Belief Model, teaching youth about the dangers of smoking may reinforce their perception of its harmful effects and result in a reduced risk of becoming smokers [13]. In addition, incorporation of education about specific resistance skills would be beneficial, as adolescents often do not know how to resist peer pressure [20]. Moreover, the role of parents and family in planning anti-smoking initiatives cannot be undermined, especially for non-school-going adolescents. We also found that family discussions about smoking hazards were associated with non-smoking among males only. We emphasize that our results are generalizable only to the school-going population, which constitutes only a portion of the adolescent group in the South Asian region, since school enrolment rates have been reported to be low [22].

Strengths and limitations

This study examines a number of anti-smoking initiatives in the region that could be targeted in prevention programs. The questionnaire is standardized; hence, cross-country analysis was attainable. The large sample size ensured high statistical power and precise estimates. We included national datasets; thus, the findings are fairly generalizable. Apart from the above-mentioned strengths, this study has some limitations. Firstly, the data were collected through self-reporting. Therefore, participants might have misreported their smoking status, which was not confirmed through any biomarker; however, evidence suggests that health risk behaviors are correctly and reliably reported by adolescents [23].
Secondly, there could be an element of recall bias, especially regarding the frequency of anti-smoking messages, which may have led to misclassification bias in this study; however, such misclassification is likely to be non-differential, which should bias the association between exposure to anti-smoking messages and smoking towards the null. Since we observed a significant positive association, it is unlikely to be due to misclassification bias. Thirdly, out-of-school adolescents could not be represented in this study, which remains a major limitation. Fourthly, some of the factors showed higher odds for both genders but were non-significant in females. This could be due to the smaller proportion of smoking females in our sample.

Conclusion

We found that school-going adolescents, particularly males, who were exposed to only a few anti-tobacco media messages or were not taught about the harmful effects of smoking in school or at home were more likely to be current smokers than their counterparts. A combination of school- and home-based anti-smoking interventions may be effective in the control of adolescent smoking in this region; however, further interventional studies are required in the regional population.
Rational Design of Phe-BODIPY Amino Acids as Fluorogenic Building Blocks for Peptide-Based Detection of Urinary Tract Candida Infections

Abstract: Fungal infections caused by Candida species are among the most prevalent in hospitalized patients. However, current methods for the detection of Candida fungal cells in clinical samples rely on time-consuming assays that hamper rapid and reliable diagnosis. Herein, we describe the rational development of new Phe-BODIPY amino acids as small fluorogenic building blocks and their application to generate fluorescent antimicrobial peptides for rapid labelling of Candida cells in urine. We have used computational methods to analyse the fluorogenic behaviour of BODIPY-substituted aromatic amino acids and performed bioactivity and confocal microscopy experiments in different strains to confirm the utility and versatility of peptides incorporating Phe-BODIPYs. Finally, we have designed a simple and sensitive fluorescence-based assay for the detection of Candida albicans in human urine samples.

Coupling was carried out using Fmoc-AA-OH (4 eq.), coupling reagent (4 eq.), OxymaPure (4 eq.) and DIPEA (8 eq.) in DMF for 1 h. The resin was then washed with DMF (5 × 1 min) and DCM (5 × 1 min) and filtered. Completion of the coupling step was confirmed using the Kaiser test. Before the next coupling cycle, the Fmoc group was removed as described above.

Cleavage from resin for compounds 12 and 14-17: The peptide was cleaved from the resin using 2% TFA, 2.5% TIS in DCM (5 × 1 min) (12, 15 and 17) or 2% TFA/DCM (14 and 16) and washed with DCM (2 × 1 min). The combined filtrates were collected into a round-bottom flask containing DCM (10 mL) and concentrated under reduced pressure.

Cleavage from resin for compound 13: The peptide was cleaved from the resin using 95% TFA, 2.5% TIS in DCM (1 h) and washed with DCM (4 × 1 min). The combined filtrates were collected into a round-bottom flask and concentrated under reduced pressure.

After cleavage as described above, the crude peptide was precipitated by adding cold Et2O (dropwise) and the resulting precipitate was decanted and dried (×2), affording 29 mg of the benzyloxycarbonyl (Z) lysine-protected peptide. The crude peptide (24 mg, 0.013 mmol) was deprotected by means of hydrogenation. The peptide was dissolved in HCOOH/DMF/MeOH (0.05:3.3:1) (1.9 mL), followed by addition of 20% Pd(OH)2/C (6.1 mg, ca. a quarter of the peptide mass). The reaction flask was then flushed with N2/vacuum cycles (×3) and filled with H2. The reaction mixture was stirred under H2 at r.t. for 1 h (monitored by HPLC-MS). The catalyst was removed by filtration through Celite and washed with MeOH. The filtrate was collected in a round-bottom flask and the solvent was removed under reduced pressure. The residue was dissolved in CH3CN:H2O and lyophilised. Purification was conducted by semi-preparative HPLC using a 0-50% gradient over 25 min, with detection at 220 and 280 nm. Pure fractions were collected and lyophilised to afford pure peptide 12 as a white solid (2.3 mg, 10% yield).

Fmoc-Pro-OH, Fmoc-Leu-OH, Fmoc-His(Trt)-OH, Fmoc-Ile-OH, Fmoc-Ser(Trt)-OH, Fmoc-Lys(Boc)-OH and Fmoc-Phe-OH were used as building blocks. After cleavage as described above, the crude peptide was precipitated by adding cold Et2O (dropwise) and the resulting precipitate was decanted and dried to afford 36 mg of a white solid corresponding to the protected peptide. Then, 16 mg of the cleaved peptide (1.0 eq.), PyOxim (1.5 eq.) and OxymaPure (1.5 eq.) were dissolved in DMF:ACN (1:1, 0.001 M).
After cooling the mixture to −10 °C using a salted ice bath, DIPEA (3.0 eq.) was added, and the mixture was stirred overnight at r.t. After solvent removal under reduced pressure, the crude peptide was dissolved in TFA:TIS:H2O (95:2.5:2.5) for 1 h to remove the side-chain protecting groups. The crude was then concentrated under reduced pressure, followed by precipitation in cold Et2O (dropwise). Purification was conducted by semi-preparative HPLC using a 0-60% gradient over 25 min, with detection at 220 and 260 nm. Pure fractions were collected and lyophilised to afford pure peptide 14 as a white solid (3.0 mg, 26% yield from the cyclization step).

Fmoc-Lys(Boc)-OH, Fmoc-Ile-OH, Fmoc-Phe-OH and Fmoc-Trp-OH were used as building blocks. After cleavage as described above, the crude peptide was precipitated by adding cold Et2O (dropwise) and the resulting precipitate was decanted and dried to afford 36 mg of a white solid corresponding to the protected peptide. Then, 16 mg of the cleaved peptide (1.0 eq.), PyOxim (1.5 eq.) and OxymaPure (1.5 eq.) were dissolved in DMF:ACN (1:1, 0.001 M). After cooling the mixture to −10 °C using a salted ice bath, DIPEA (3.0 eq.) was added and the mixture was stirred for 2 days at r.t. After solvent removal under reduced pressure, the crude peptide was dissolved in TFA:H2O:DCM (30:2.5:67.5) for 40 min to remove the side-chain protecting groups. The crude was then concentrated under reduced pressure, followed by precipitation in cold Et2O (dropwise). Purification was conducted by semi-preparative HPLC using a 0-50% gradient over 25 min, with detection at 220 and 280 nm. Pure fractions were collected and lyophilised to afford pure peptide 16 as a white solid (1.9 mg, 16% yield from the cyclization step). Pure fractions were collected and lyophilised to afford pure peptide 18 as an orange solid (3.0 mg, 27% yield).

Computational details

DFT and TD-DFT calculations were performed with the M06-2X hybrid exchange-correlation functional [3] and the 6-311+G(2d,p) Pople basis set as implemented in the Gaussian 09 package [4]. This choice is supported by previous benchmarks performed on aza-BODIPY and BODIPY dyes [5], which demonstrate that this level of theory provides good consistency with experimental trends for optical spectra, yet a systematic overshooting of the transition energies (by ca. 0.4 eV). However, this systematic error is not a concern for the present study, as we are interested not in theoretically reproducing the experimental spectra but in comparing the different molecules studied (e.g., transition state barriers) on the same footing. Numerical frequency calculations were used to ascertain the nature of the stationary points, and an increased integration grid (i.e., ultrafine (99,590)) with respect to the default setting was used in all computations, as this is recommended for describing very low frequency modes correctly.

Experimental protocols for spectroscopic and biological assays

Fluorescence spectra and intensity acquisition. Absorbance and emission spectra were determined in the range of 400-700 nm (every 2 nm) at the indicated concentrations in 96- or 384-well plates using a BioTek Cytation 3 spectrophotometer. Environmental sensitivity was measured by comparing the fluorescence emission in MeOH vs glycerol (compounds 4-7) and in the presence of liposome suspensions in PBS or PBS alone (compound 17).
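Returning briefly to the computational details above: given the systematic overshoot of ca. 0.4 eV noted for this level of theory, computed transition energies are often compared with experiment after a uniform empirical shift. A minimal sketch of the shift-and-convert step; the example energies are hypothetical placeholders, not values from this work.

```python
# Apply a uniform empirical redshift to TD-DFT transition energies and
# convert to wavelengths via lambda(nm) = hc / E = 1239.84 / E(eV).
HC_EV_NM = 1239.84  # hc in eV*nm

def shifted_lambda_nm(e_td_dft_ev, shift_ev=0.4):
    """Empirically corrected absorption wavelength from a TD-DFT energy."""
    return HC_EV_NM / (e_td_dft_ev - shift_ev)

for e in (2.8, 3.0, 3.2):            # hypothetical transition energies (eV)
    print(f"{e:.2f} eV -> {shifted_lambda_nm(e):.0f} nm")
```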
Measurement of extinction coefficients. For extinction coefficient measurements, the absorbance of each sample at the maximum excitation wavelength was recorded, and the extinction coefficient was then determined by fitting the data to Beer's law.

Measurement of quantum yields. Quantum yields were determined by measuring the integrated emission area of the fluorescence spectra in PBS or in the presence of phosphatidylcholine:cholesterol liposome suspensions in PBS and comparing it to the area measured for rhodamine 101 in MeOH as the reference compound [6]. Different working solutions of the compounds and the reference were prepared over a range of concentrations.

Stability tests of peptide 17 in urine samples. To assess photo- and urine stability, probe 17 was added to diluted urine samples (urine:water, 1:6) and incubated in 384-well plates at 37 °C. For photostability determination, the fluorescence emission (644 nm) of probe 17 (30 µM) was monitored using a BioTek Cytation 3 spectrophotometer. Values were obtained as means from three independent experiments with n = 3. To determine chemical integrity, the absorbance (570 nm) of probe 17 (100 µM) was monitored by HPLC-MS at different timepoints.

Culture of fungal strains. All strains used in this experiment were grown on SAB agar at 37 °C for 3 days. Cells were harvested using a sterile inoculation loop by taking a single colony and resuspending it in PBS supplemented with 0.1% Tween-20 (PBST). The concentration of cells was then quantified with a haemocytometer. For the determination of the minimum inhibitory concentration, the cell density was adjusted to 10⁶ cells mL⁻¹ with 20% liquid Vogel's medium.

Culture of E. coli. E. coli was grown on Lysogeny Broth (LB) agar at 37 °C for 1 day and harvested using a sterile inoculation loop by taking a single colony and resuspending it in PBST. The number of cells was quantified using a haemocytometer. For the determination of the minimum inhibitory concentration, the cell density was adjusted to 10⁶ cells mL⁻¹ with 20% liquid LB medium.

In vitro measurements of minimum inhibitory concentrations. Minimum inhibitory concentration (MIC) measurements were performed as described previously with minor changes [7]. Each compound was dissolved in DMSO at a concentration of 100 mM; this was used as the stock solution. For testing the MIC, the stock solution was further diluted in water to reach a concentration of 1 mM and added to a 96-well cell culture plate. A serial dilution was then performed within the 96-well plate, and the compound solutions at different concentrations were then mixed with conidia suspended in 20% Vogel's medium to reach a final volume of 100 µL per well. The final conidia concentration was 5 × 10⁵ cells mL⁻¹ in 20% Vogel's medium, and the highest tested concentration of each compound was 50 mg mL⁻¹. After 48 h of incubation at 37 °C, the MIC was determined by brightfield microscopy from three independent experiments (n = 3). For testing the MIC against E. coli, the same protocol was followed apart from using liquid LB as the medium at a final concentration of 10%.
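The relative quantum-yield determination described above follows the standard single-point comparison Φ = Φ_ref · (I/I_ref) · (A_ref/A) · (n/n_ref)², where I is the integrated emission area and A the absorbance at the excitation wavelength. A sketch with placeholder numbers; Φ_ref = 1.0 for rhodamine 101 and the refractive indices are assumed literature values rather than figures taken from this paper.

```python
# Relative quantum yield against a reference fluorophore (single-point form).
def rel_quantum_yield(area, absorbance, area_ref, abs_ref,
                      phi_ref=1.0, n=1.333, n_ref=1.328):
    # n: refractive index of PBS (approx. water); n_ref: MeOH (assumed values)
    return (phi_ref * (area / area_ref) * (abs_ref / absorbance)
            * (n / n_ref) ** 2)

# Placeholder inputs, not measured data:
print(rel_quantum_yield(area=1.2e6, absorbance=0.05,
                        area_ref=5.0e6, abs_ref=0.04))
```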
Confocal live-cell microscopy. Probe 17 was mixed with Candida spp. to reach a final concentration of 10 µM and a final cell concentration of 5 × 10⁵ cells mL⁻¹ in PBS. The cells combined with the peptide were dispensed into the wells of an Ibidi µ-Slide 8-well (Ibidi GmbH, Germany) and incubated for 10 min at r.t. Live-cell imaging of the germinated spores was performed using a Leica TCS SP8 confocal laser scanning microscope equipped with photomultiplier tubes, hybrid GaAsP detectors, a 63× water immersion objective and a white light laser (575 nm was used as the excitation wavelength and 600-650 nm as the emission window). Images were taken at 15, 30 and 60 min timepoints. Images were processed using Imaris software 8.0 developed by Bitplane (Zurich, Switzerland).
Determinants of Use of Household-level Water Chlorination Products in Rural Kenya, 2003-2005

Household-level water treatment products provide safe drinking water to at-risk populations, but relatively few people use them regularly; little is known about factors that influence uptake of this proven health intervention. We assessed uptake of these water treatments in Nyanza Province, Kenya, November 2003-February 2005. We interviewed users and non-user controls of a new household water treatment product regarding drinking water and socioeconomic factors. We calculated regional use-prevalence of these products based on 10 randomly selected villages in the Asembo region of Nyanza Province, Kenya. Thirty-eight percent of respondents reported ever using household-level treatment products. Initial use of a household-level product was associated with having a turbid water source (adjusted odds ratio [AOR] = 16.6, p = 0.007), but consistent usage was more common for a less costly and more accessible product that did not address turbidity. A combination of social marketing, retail marketing, and donor subsidies may be necessary to extend the health benefits of household-level water treatment to populations most at risk.

Introduction

According to the United Nations Children's Fund [1], only 46% of the population of Kenya has access to improved water sources. Since not all water from improved sources meets World Health Organization (WHO) guidelines for potable water, and since access to improved water may be intermittent, an even higher percentage do not have consistent access to safe water [2,3]. In rural Kenya, where there has been slow progress toward improved water systems [4,5], people have another option for obtaining safe water. Household-level water treatment products offer an immediate, affordable alternative to resource-intensive networked systems for providing safe drinking water for Kenyans and millions of others throughout the developing world. While the health benefits of household-level products are well documented [6][7][8][9], motivating consistent use remains a significant challenge [10]. Nyanza Province in western Kenya is among the poorest regions in Kenya, with 2.4 million people, or 64% of the population, living below the poverty line [11]. The vast majority of homes are not equipped with electricity, few communities have even public taps, and all lack sewerage systems. Water is often collected daily from ponds and rivers where livestock also drink and is stored in the family compounds in 10-20 liter clay or plastic containers. Water from these sources is often highly turbid due to organic sediments and contaminated with enteric pathogens. Products for household-level treatment of drinking water are available in the area. Locally produced sodium hypochlorite solution (Jet Chemicals, Ltd, Kenya) has been socially marketed since May 2003, and a flocculent-disinfectant product was introduced later that year. The flocculent-disinfectant was developed and manufactured by the Procter & Gamble Company (Ohio, United States of America); when it is mixed with highly turbid water, debris quickly settles and the water becomes visibly clear and disinfected. Highly turbid water has a high chlorine demand, and previous research has demonstrated the ability of the flocculent-disinfectant to render such water potable [12].
The locally produced sodium hypochlorite solution is a highly effective disinfectant under most conditions, but the functional chlorine concentration, and the odor and taste of treated water, can be compromised by the organic materials in highly turbid water. Unlike the flocculent-disinfectant product, sodium hypochlorite solution does not improve the clarity of highly turbid water. A health outcomes study in Kenya demonstrated a general reduction in diarrhea for households using either household-level product versus traditional untreated water handling methods, and a statistically significant 25% reduction in diarrhea among children < 2 years in compounds using flocculent-disinfectant compared to traditional untreated water handling methods [13]. Despite the benefits [9], studies have demonstrated that even the experience of decreased diarrheal disease burden is not adequate to motivate consistent behavior change [10]; clearly there are other factors at play. We hypothesized that the immediate reinforcement of visibly clear water would be a strong motivator for use of the flocculent-disinfectant product. Understanding motivators and identifying successful distribution models for water treatment products could enhance uptake of household-level water treatment and increase the number of persons receiving the benefits of safe water worldwide.

In 2003, a local non-governmental organization, the Society for Women and AIDS in Kenya (SWAK, now known as the Safe Water and AIDS Project, or SWAP), began selling the flocculent-disinfectant in Asembo and Gem (subdistricts of Bondo and Siaya Districts, respectively). Social marketing of sodium hypochlorite solution began in the area in 2000, and SWAP began campaigns for the flocculent-disinfectant in 2003 after the health outcomes study introduced the product to the community. Campaigns included training and mobilization with community groups, presentations, distribution of educational materials, and micro-finance projects. In one pilot micro-enterprise project, local individuals and community groups had the opportunity to purchase quantities of the product and sell it in their villages at a small margin over wholesale cost. SWAP intensified activities starting in May 2003 and integrated the flocculent-disinfectant into its community education campaigns that already promoted the sodium hypochlorite solution along with safe water storage and other health-related behaviors. From November 2003 to February 2005, we conducted three studies to assess usage patterns of these two water treatment products and document use-prevalence in Asembo, Kenya. Figure 1 provides details on the sales volume of the two products during the study period. The three studies included: (1) a baseline utilization study (November 2003); (2) a follow-up utilization study (January 2005); and (3) a use-prevalence survey for water treatment products in the study area (February 2005).

Study Design

The study assessed characteristics of persons who used the newly available flocculent-disinfectant water treatment product. We defined a user as a person living in Asembo who purchased and used any quantity of the flocculent-disinfectant product to treat water for the family compound. Users were identified by review of records from SWAK and flocculent-disinfectant vendors. Campaigns related to household water treatment products were ongoing throughout this time period.
Non-user controls were randomly selected from family compounds in Asembo within 1 kilometer of one of the eight flocculent-disinfectant vendors, using spatial mapping and census data from the CDC/KEMRI Demographic Surveillance System (DSS). After consent was obtained from the head of the family compound, interviews were conducted at each compound with the mother of the youngest child in the compound. All respondents answered questions about beliefs concerning water and diarrheal diseases, drinking water sources, water treatment and storage practices, familiarity with water treatment products, and indicators of socioeconomic status such as educational level, cash spending on hygiene products, housing characteristics and household goods. Researchers documented the presence or absence of soap, toothpaste, and water treatment products in each compound. Stored household water was tested for residual free chlorine using the N,N-diethyl-p-phenylenediamine colorimetric method (Colorwheel Chlorine Test Kit, Hach® Company, Loveland, CO).

Analysis

Data were entered into an Access® database with the Cardiff TELEform® image scanning system (Autonomy Cardiff Corporation, Vista, CA). Analysis was performed using STATA10 (Stata Corporation, College Station, TX). A socioeconomic status index was constructed using principal components analysis in the manner described by Vyas and Kumaranayake [14]; a brief sketch of this construction appears below. Bivariate analysis included two-sided Student's t-tests of means for continuous variables and Fisher's exact test for categorical variables. Factors that were statistically significant at p < 0.05 were included in a multivariate model. We developed a multivariate logistic regression model to identify independent associations with use of the flocculent-disinfectant. Covariates and interaction terms were tested for significance and goodness-of-fit. Model checking was performed using likelihood ratio testing.

Study Design

Family compounds of flocculent-disinfectant users who participated in the 2003 utilization study were revisited. Efforts were made to locate the same person who was interviewed in 2003. Participants answered 64 standard questions regarding drinking water sources, water storage and treatment, and socioeconomic indicators for the household. Observers documented the presence of nine water treatment and hygiene items, such as soap. Stored household water was tested for the presence of chlorine using a standard pool test kit (Aquality Professional Duo-Test, STA-RITE Industries, Delavan, WI). We defined reported consistent use based on the number of sachets purchased relative to water consumption and conducted a separate analysis on the sub-group with confirmed use, based on the presence of chlorine in the household drinking water at the time of the interview.

Analysis

Statistical analysis was completed using STATA10 and the methods described for the baseline study.
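The socioeconomic index construction referenced in the baseline analysis (the first principal component of asset and housing indicators, after Vyas and Kumaranayake) can be sketched as follows; the indicator column names are hypothetical.

```python
# Wealth index as the first principal component of standardized indicators.
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def ses_index(df, asset_cols=('iron_roof', 'bicycle', 'radio',
                              'cement_floor', 'hygiene_spending')):
    X = StandardScaler().fit_transform(df[list(asset_cols)])
    pca = PCA(n_components=1)
    scores = pca.fit_transform(X)[:, 0]      # first component = wealth index
    return pd.Series(scores, index=df.index, name='ses_index')

# compounds['ses'] = ses_index(compounds); then compare means across groups
```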
Study Design

This study documented use-prevalence for household-level water treatment products in the study area. We randomly selected 10 villages from Asembo. Flocculent-disinfectant had been available for sale since November 2003 in each of these villages and the surrounding area. An interviewer and a village health worker, using the most recent DSS census, visited all compounds in these villages. A person in the compound with responsibility for water handling answered four questions regarding household-level water treatments in the previous 7 days and since the short rains of 2003 (November-December 2003).

Analysis

Use-prevalence was calculated for flocculent-disinfectant and sodium hypochlorite during the two time periods.

Baseline Utilization Study

We enrolled 117 persons who met the definition of flocculent-disinfectant user and 193 control persons who had never used the flocculent-disinfectant (Table 1). Flocculent-disinfectant users were more likely to use a turbid water source (odds ratio [OR] = 19.7, 95% confidence interval [CI] = 3.1-812) and to attribute diarrhea to their drinking water (OR = 2.5, CI = 1.4-4.6). Users were less likely to express the belief that diarrhea is a serious problem in the community (OR = 0.4, CI = 0.3-0.7). Mean spending on soap and toothpaste was significantly higher for users versus non-users (46.5 versus 37.2 Ksh, p = 0.02). The mean socioeconomic status index was significantly higher for users than non-users (p = 0.001). After adjustment for socioeconomic status index, spending on soap and toothpaste, and knowledge of the previous CDC/KEMRI study, two factors remained significantly associated with flocculent-disinfectant use. Use of turbid water sources was strongly associated with flocculent-disinfectant use (adjusted odds ratio [AOR] = 19.7, CI = 2.5-153; p = 0.004). Those who used flocculent-disinfectant remained less likely to express the belief that diarrhea is a serious problem in the community (AOR = 0.4, CI = 0.3-0.7; p = 0.001).

Follow-up Utilization Study

Of the 117 users in the baseline utilization study, 104 (89%) completed questionnaires for the follow-up study (Table 2). Of those interviewed, eight (8%) reported using flocculent-disinfectant in the past 7 days. Twenty-six (25%) had not used the flocculent-disinfectant since the time of the baseline study. Overall, 50 (48%) reported treating their water by some method in the past 7 days. Of those who did not use the flocculent-disinfectant consistently, the most commonly cited reasons were lack of availability (66%) and expense (20%). Of the 78 (75%) respondents reporting flocculent-disinfectant use since the study period in December 2003, 65 (83%) purchased the flocculent-disinfectant directly from a SWAP representative and only 11 (14%) reported purchase from a duka (small shop). In contrast, of the 74 (71%) respondents who reported use of sodium hypochlorite in that same time period, 37 (47%) reported purchase from a duka and 20 (26%) reported purchase from a market. Although 18 (17%) respondents reported daily use of either the flocculent-disinfectant or the sodium hypochlorite solution on the questionnaire, only 14 reported chlorinating the water stored in their home at the time of the interview; 11 of these 14 (11% of 104 total respondents) had free chlorine present in their stored water. On bivariate analysis, drinking water from turbid sources at least 4 months per year was reported by 96 (92%) respondents. Socioeconomic status was not associated with reported consistent use (OR = 0.9, CI = 0.7-1.3; p = 0.6). On multivariate analysis, after adjusting for economic status and awareness of the previous CDC/KEMRI study, respondents who reported consistent use were less likely than reported sporadic users to express the belief that their drinking water made their family sick (AOR = 0.34, CI = 0.1-0.9; p = 0.03). Socioeconomic status was not significantly associated with reported consistent use after adjustment for these other factors.
These associations were essentially unchanged regardless of whether reported consistent use was defined by the reported volume of flocculent-disinfectant used or by confirmation of the presence of free chlorine in the household water at the time of the interview.

Use-Prevalence Survey

Of the 1,530 compounds listed in the most recent DSS census of Asembo, a total of 1,452 (95%) were included in the survey. Five hundred thirty-one (37%) compounds reported ever using the sodium hypochlorite solution, compared with 105 (7%) who reported ever using the flocculent-disinfectant. Two hundred twenty-four (15%) reported use of the sodium hypochlorite in the past 7 days, while 14 (1%) reported using the flocculent-disinfectant in that time period. Overall, 549 (38%) compounds reported ever using some form of household-level water treatment and 231 (16%) reported household-level water treatment in the past 7 days. Village-specific rates for ever using the flocculent-disinfectant varied from 0.7% to 16%. Rates for use of flocculent-disinfectant in the past 7 days ranged from 0% (6 villages) to 13%. Reports of ever using the sodium hypochlorite solution ranged from 21% to 59%, while rates of use in the past 7 days ranged from 7% to 27%.

Discussion

These studies demonstrate a complex array of issues contributing to use of household-level water treatment products in western Kenya. While initial use of the flocculent-disinfectant was strongly associated with having turbid drinking water, this association did not persist in the study of reported consistent use. Although cost is often cited anecdotally as a reason for lack of use of household-level water treatment products, in our study economic status was not associated with reported consistent use among early users. Improvements in health do not seem to definitively influence use either: Luby et al. have demonstrated that even the experience of decreased diarrheal disease burden among residents of rural Guatemala was not adequate to motivate consistent use [10]. Dependence on a turbid water source emerged as the strongest motivator for flocculent-disinfectant use in this setting. The association with turbidity persisted after adjusting for socioeconomic status, spending on personal care items, and beliefs about the relationship between water and health. This result supports the hypothesis that the ability of flocculent-disinfectant to visibly clear turbid water is a compelling impetus to initial use. However, the allure of clearer water was not associated with reported consistent use based on the data from the follow-up survey. In this cohort with prior experience with flocculent-disinfectant and a high dependence on turbid water, sporadic use of sodium hypochlorite solution was comparable to use of the flocculent-disinfectant (71% versus 75%). The relatively high use of sodium hypochlorite despite the turbid water burden may be a reflection of familiarity with the sodium hypochlorite solution. Lower cost or greater ease of use for sodium hypochlorite may also have been determinants of use despite the advantages of flocculent-disinfectant for those using turbid water. Our data suggest that consumers often tried both locally available products but reported using sodium hypochlorite more consistently than flocculent-disinfectant, for both the past year and the past week. Seventy-five percent of those who used the flocculent-disinfectant since 2003 also used sodium hypochlorite solution in that time period.
In both the community as a whole and among those who used flocculent-disinfectant at baseline, the prevalence of sodium hypochlorite use eventually surpassed flocculent-disinfectant use. Thus, although dependence on turbid water correlated with trying flocculent-disinfectant, other factors appear to influence the decision to treat household water consistently and which product to use for this treatment. Since the time of the study, flocculent-disinfectant has expanded to national distribution networks in Kenya; this expansion may increase use by addressing the issues of availability that we found in our study. Economic factors clearly influenced usage patterns. The choice of sodium hypochlorite over flocculent-disinfectant may largely be a function of the difference in retail cost, as sodium hypochlorite solution cost less than 1 US cent per 20 L of water treated while flocculent-disinfectant cost 12 US cents per 20 L treated. Use did in fact decline remarkably in the follow-up survey, with 25% of initial users reporting they never used the flocculent-disinfectant product again; however, the lack of a statistically significant relationship between reported consistent use and socioeconomic status in the follow-up survey suggests that something besides finances also affects usage patterns. The manufacturer is undertaking price-reduction studies in rural Kenya for the flocculent-disinfectant; these may provide a sense of how much affordability ultimately impacts use. Lack of availability emerged as an important determinant of flocculent-disinfectant use based on data from the cohort of prior users. Based on qualitative data from interviews, problems with flocculent-disinfectant distribution caused gaps in availability that in turn pre-empted use. Availability of flocculent-disinfectant in the local market decreased dramatically after the change in credit policy at SWAP. Rural community groups who served as vendors during the initial phase of sales did not have adequate cash resources to purchase flocculent-disinfectant in bulk. Without income generation from the wholesale purchases, these groups could not sustain the retail market. Those who reported buying sodium hypochlorite reported purchases from multiple sources, including dukas, markets and chemist shops. These locales are part of the indigenous consumer culture, and availability there made sodium hypochlorite much more accessible than flocculent-disinfectant, which had minimal penetration into these venues. Further penetration into the conventional retail sector may contribute to increased use of the flocculent-disinfectant through more consistent availability. The documented prevalence of nearly 40% for ever using household-level water treatment products in this rural Kenyan setting demonstrates their potential as a way for even severely economically disadvantaged persons to benefit from safe water. The challenge lies in getting households to adopt this proven intervention. Behavior change communication can help; teaching safe water handling in elementary schools and clinics has been shown in pilot studies to increase household use of water treatment products [15,16]. These measures may not be sufficient motivation if prices are too high. In our context, micro-credit programs through an NGO made it possible for communities to purchase stock at wholesale prices, thus making the products accessible to more people.
found that although awareness of household-level water treatment products was high across wealth quintiles, use dropped precipitously in the lowest quintile [17]. In the poorest segments of the population, where morbidity and mortality from waterborne diseases are highest, consistent use of either household-level water treatment product may require subsidies outside of the retail market for the foreseeable future. Inspiring sustained use will require consistent availability, affordability in the local context, and a more comprehensive understanding of the factors that motivate those who consistently treat their water. This understanding will require further research and data-driven implementation strategies that address the behavioral and economic issues along with the public health issues. Such strategies could be informed by more in-depth behavioral research to further explain the behaviors and choices documented in our studies and specifically assess the relationships between use and social marketing activities. The World Health Organization's International Network to Promote Household Water Treatment and Safe Storage, a collaboration of UN agencies, bilateral development agencies, international non-governmental organizations, research institutions, international professional associations, the private sector, and industry associations provides an integrated forum for identifying research needs on household-level water treatment and informing policies and programs [18]. The study was limited by several factors. Low prevalence of usage in the community made it difficult to determine robust statistical associations for factors affecting flocculent-disinfectant use. Small sample size also prevented comparisons between those who used various combinations of water treatments, and analysis of seasonality of use; however, the sample sizes were sufficient to identify major factors associated with use. Courtesy bias likely resulted in some over-reporting of use, based on the results of the utilization study in which 16% of those reporting chlorination did not test positive for residual free chlorine. In addition, cross-sectional studies do not permit an objective assessment of consistent use. Conclusions Household-level water treatment offers an immediate method for providing safe water to millions of people who will not have access to improved water delivery systems in the foreseeable future. These benefits cannot be realized without a better understanding of factors motivating use of the products. To increase usage of household-level water treatment in western Kenya, treatment products must first be consistently available at prices at risk-populations can afford. Availability in the traditional retail sector and through non-traditional vendors will maximize consumer access. NGOs play an important role in generating a consumer impulse for household-level water treatment products through community education and social marketing, but consistent use may require ongoing cost subsidies if the products are to reach those who need them most. The target population without access to adequate water infrastructure is generally the population with minimal financial resources. Visible clearing of turbid water and concern about waterborne diseases drive usage to some extent, but more complex factors appear to ultimately determine selection and consistent use of household-level water treatment products. 
If household-level treatment products are to fulfill their potential for improved health through safe water, multi-disciplinary implementation programs will need to address both the key barriers of access and affordability and the more nuanced challenge of positive behavior change.
Fluorescence Bioanalysis of Bevacizumab Using Pre-Column and Post-Column Derivatization – Liquid Chromatography After Immunoaffinity Magnetic Purification This report presents two fluorescence labeling methods for therapeutic monoclonal antibody, bevacizumab, to increase its detection sensitivity for fluorescence detection. One method is high-temperature reversed-phase LC (HT-RPLC) following post-column fluorogenic derivatization using o -phthalaldehyde with thiol. Another method is pre-column derivatization using Zenon Alexa Fluor 488 protein-tag following size-exclusion chromatography (SEC). The calibration curves of bevacizumab were 1–50 μg/mL (post-column method) and 0.1–10 μg/mL (pre-column method). Both methods showed good correlation coefficients (r 2 > 0.991). The LOD and the LOQ of bevacizumab were, respectively, 0.13 and 0.43 μg/mL (post-column method) and 0.03 and 0.1 μg/mL (pre-column method). The sensitivities were about 2 and 10 times higher than that of native fluorescence detection. The proposed methods were applied to bevacizumab spiked human plasma samples. The bevacizumab in plasma samples was purified selectively with immunoaffinity beads and detected as a single peak using HT-RPLC or SEC with fluorescence detection. Introduction Therapeutic monoclonal antibodies (mAbs) typically possess long drug efficacy and few side effects. However, their pharmacokinetics (PK) and pharmacodynamics (PD) are much more complicated than those of low-molecular-weight pharmaceuticals [1,2]. Recently, PK and PD analyses of bevacizumab have attracted attention for planning optimal therapeutic programs in combination dose therapy such as FORFIRI plus bevacizumab [3,4], and for the evaluation of biological equivalencies in biosimilar development. To date, PK and PD analyses of bevacizumab has primarily used the ligand binding assay (LBA) [5][6][7]. While LBA allows for high sensitivity and high throughput analysis, there are several potential cross-reactivity and low accuracy [8]. In contrast, various liquid chromatographytandem mass spectrometry (LC-MS/MS) methods have been applied to analysis of therapeutic mAbs in serum or plasma samples [9][10][11][12][13][14][15]. These methods enable sensitive bioanalysis of therapeutic mAbs, but these methods present several limitations such as time-consuming trypsin digestion and manual purification process for tryptic peptides using solid-phase extraction cartridges. Against this background, we recently developed simple, sensitive, accurate and rapid quantification of therapeutic mAbs, bevacizumab and infliximab, in cancer and rheumatoid arthritis (RA) patient plasma using a combination of immunoaffinity magnetic purification and high-temperature reversed-phase LC (HT-RPLC) with fluorescence detection [16]. In this method, target drugs in blood samples are purified using immunoaffinity magnetic beads immobilized with anti-idiotype mAbs. The purified drugs are separated further using HT-RPLC [17][18][19][20], which is suitable for excellent separation among IgGs, using a large pore-size octyl column. The separated drugs are detected sensitively with their own fluorescence. This method does not require tryptic digestion or expensive LC-MS/MS instruments. We applied both methods to the bioanalyses of plasma samples obtained from cancer and RA patients who had been administered each drug. The recommended dosages of most therapeutic mAbs are of milligram per kilogram of body mass. Their blood concentrations are also of microgram per milliliter order. 
Therefore, similar methodology is expected to be applicable for the bioanalysis of many commercial therapeutic mAbs. However, applying this method to the bioanalysis of next-generation therapeutic mAbs such as antibody-drug-conjugate [21,22], recycling antibodies [23,24], and potelligent antibodies [25,26], which would be marketed as low-dose mAbs in the near future, is expected to be difficult because of its low detection sensitivity. Consequently, more sensitive detection methods than native fluorescence detection are expected to be necessary. Fluorescence labeling methods of proteins in LC or capillary electrophoresis analysis are roughly classifiable into two groups: chemical and biochemical modifications. Chemical modification targets functional groups such as amines and thiols of amino acid residues in proteins. For this purpose, low-molecular fluorescent probes represented by fluorescein, tetramethyl rhodamine, Alexa Fluor dyes, and CyDyes have generally been used [27]. However, for fluorescence derivatization targeting amines in proteins, control of the quantities of labeled fluorophores in a molecule is difficult. Furthermore, multiple introduction of fluorophores can induce a decrease of solubility and aggregation of analytes. Therefore, for high-sensitivity and high-resolution analysis of proteins, post-column fluorescence derivatization methods are expected to be preferred. Biochemical modification using a protein-tag such as Protein G [28] and Zenon labeling technology [29][30][31] has been used widely in fluorescence imaging and immunostaining. For this study, we developed two fluorescence labeling methods to enhance therapeutic mAbs bevacizumab detection sensitivity for fluorescence detection. The one is HT-RPLC following post-column fluorogenic derivatization using o-phthalaldehyde with thiol. These reagents react with ε-amino groups of lysine residue in bevacizumab under alkaline conditions to form highly fluorescent isoindole derivatives (Fig. 1a). The other is pre-column derivatization using Zenon Alexa Fluor 488 protein-tag followed by size-exclusion chromatography (SEC) (Fig. 1b). The post-column derivatization enables automatic fluorescence labeling in the LC system, and the pre-column derivatization allows IgG-specific fluorescence labeling. Both are expected to detect fluorescence in longer wavelength region than native fluorescence and increase detection sensitivity. The developed methods were validated in terms of sensitivity, linearity, and precision. Both methods were applied to bevacizumab containing human plasma samples. The bevacizumab in plasma samples was purified selectively using immunoaffinity beads immobilized with anti-idiotype mAbs. Our literature review indicates that this report is the first to describe sensitive LC analysis of commercial therapeutic mAb by pre-column and post-column fluorescence derivatization. Schematic diagram of (a) post-column and (b) pre-column fluorescence derivatization of bevacizumab. Reagents and solutions Deionized and distilled water that had been purified using purelab flex system (ELGA, Marlow, UK), was used to prepare all aqueous solutions. LC grade acetonitrile was purchased from Honeywell (Morris Plains, NJ, USA). Isopropanol was purchased from Kanto Chemical Co. Inc. (Tokyo, Japan). OPA and 2-mercaptoethanol were purchased from Wako Pure Chemical Industries Ltd. (Osaka, Japan). Bevacizumab (Avastin® 400 mg/16 mL Intravenous Infusion) was produced by Chugai Pharmaceutical Co. Ltd. (Tokyo, Japan). 
Zenon Alexa Fluor 488 Human IgG Labeling kit, Dynabeads M-280 Tosyl Activated (2.8 μm particle size) and DynaMag Spin were obtained from Thermo Fisher Scientific Inc. (Waltham, MA, USA). Anti-bevacizumab idiotype antibody was ELISA grade, obtained from Abnova Corp. (Taipei, Taiwan). Control human plasma was obtained from healthy volunteers. These volunteers understood the purpose and significance of this experiment and donated blood after providing written consent. All other chemicals were of the highest purity available and were used as received. Pre-column fluorescence derivatization-SEC A mixture of bevacizumab solution (37 μL) and 2 M aqueous sodium hydroxide (2 μL) was added to 200 μg/mL Zenon Alexa Fluor 488 Human IgG Labeling Kit (1 μL) and was then let to stand at ambient conditions for 10 min. The resulting solution (2 μL) was injected into the SEC system. The Nexera an ultra-high-performance liquid chromatography system (Shimadzu Corp.), consisted of a system controller (CBM-20A), an AC autosampler (SIL-30), a pump (LC-30AD), an online degasser (DGU-20A), a column oven (CTO-30A), and a fluorescence spectrometer (RF-20Axs) equipped with 12-μL flow cell. After the data were processed (Lab Solutions LC ver. 1.21), peak areas were estimated using the baseline-to-baseline method for quantification. An SEC column was used (300 × 4.6 mm I.D., 3 μm, KW402.5-4F; Shodex / Showa Denko K.K., Tokyo, Japan) for isocratic elution using 100 mM phosphate buffer (pH 7.0). The flow rate of the mobile phase and the column temperature were set, respectively, at 0.35 mL/min and ambient. The UV detection was monitored at 210 nm. The fluorescence was monitored, respectively, at excitation and emission wavelengths of 278 and 343 nm (native fluorescence) and 490 and 525 nm (Alexa Fluor 488 fluorescence). The collected data were analyzed (Lab Solutions LC v. 1.21). Peak areas and heights were estimated using the baseline-to-baseline method. Fluorescence spectroscopic analysis Fluorescence spectra were measured using a spectrofluorometer (FP-8300; Jasco Corp. Tokyo, Japan) in 10 mm × 10 mm quartz cells. Spectral bandwidth of 5 nm was used for both excitation and emission monochrometers. OPA in methanol (100 μL, 5 mM) and 5 mM 2-mercaptoethanol in 0.1 M borate buffer (pH 9.1) were added to an aliquot of aqueous bevacizumab solution (100 μL, 100 μg/mL). After vortex mixing, the mixture was left to stand at room temperature for 10 min, then it was mixed with 300 μL of isopropanol-acetonitrile-0.1% aqueous TFA (7:2:1, v/v). The resulting solution was subjected to fluorescence spectroscopic analysis. Fluorescence emission spectrum of OPA-bevacizumab derivative was recorded for 350-600 nm with excitation wavelength of 340 nm. Preparation of immunoaffinity magnetic beads Anti-bevacizumab idiotype antibody was coupled to tosyl-activated magnetic beads as described in our previous report [16]. A suspension of the dynabeads (33 μL, 1 mg) was placed in a 1.5-mL polypropylene tube. After removal of the solvent, the idiotype antibody (30 μg) dissolved in 100 μL of 100 mM sodium phosphate buffer (pH 7.4) containing 1.5 M ammonium sulfate was added. This mixture was vortex-mixed at room temperature for 20 hr using a microtube mixer MT-360 (Tomy Seiko Co. Ltd., Tokyo, Japan). After addition of 50 μL of 1 M tris-HCl buffer (pH 7.4) containing 1% peptone, the mixture was further vortex-mixed at room temperature for 20 hr. 
After removal of the solution, the remaining beads were washed with 100 mM sodium phosphate buffer (pH 7.4) containing 0.1% Tween 20 (washing buffer). After washing, beads were dispersed in 100 μL of 100 mM sodium phosphate buffer (pH 7.4) and stored as a suspension at 4°C. Immunoaffinity purification of bevacizumab from human plasma samples Affinity purification was done using 1 mg immunoaffinity magnetic beads per sample. After removal of the solvent, the immunoaffinity beads were added to 100 μL of bevacizumab-spiked plasma sample diluted 10 times with 100 mM phosphate buffer (pH 7.4). The mixture was incubated with vortex-mixing at room temperature for 1 hr. After incubation, the beads were washed twice with 100 μL of the washing buffer. Bevacizumab was then eluted once with 100 μL of 100 mM citrate buffer (pH 3.1). For HT-RPLCpost-column derivatization system, volumes of 2 μL aliquots were subjected. For pre-column derivatization -SEC analysis, volumes of 37 μL aliquots were subjected to the derivatization procedure described in Section 2.3. Following elution, the remaining beads were reused after equilibration with 100 μL of 100 mM sodium phosphate buffer (pH 7.4). Method validation To ascertain the validation parameters, peak areas were estimated using LabSolution LC. The baseline-to-baseline method was used for quantification. The assay precision was determined from repeated measurements of six concentrations of 1-50 μg/mL (1, 2, 5, 10, 20 and 50 μg /mL) for HT-RPLCpost-column fluorescence derivatization, and from 0.1-10 μg/mL (0.1, 0.5, 1, 2, 5, and 10 μg/mL) for pre-column fluorescence derivatization-SEC analyses, respectively. To ascertain the intra-day precision, these levels were analyzed five times daily. For inter-day precision, these levels were analyzed three times per day for five days (n = 15). These samples were subjected to the respective analytical procedures. For the quantitative analysis, calibration standard solutions (n = 5) were prepared by diluting the stock solutions with six concentration ranges of 1-50 μg/mL (1, 2, 5, 10, 20, and 50 μg/mL) for HT-RPLCpost-column fluorescence derivatization, and from 0.1 to 10 μg/mL (0.1, 0.5, 1, 2, 5, and 10 μg/mL) for pre-column fluorescence derivatization-SEC analyses, respectively. The equations of the calibration curves were determined using least squares linear prediction. The limit of detection (LOD) and the limit of quantitation (LOQ) were determined, respectively, from signal-to-noise ratios of 3 and 10. Concentrations of bevacizumab were calculated using the standard addition method. HT-RPLCpost-column fluorescence derivatization method For HT-RPLC analysis, a widepore Zorbax-300SB C8 column is used at high temperatures (>70°C) with a mobile phase containing solvents of high eluotropic strength, such as isopropanol and acetonitrile. Our previous study [16] revealed that HT-RPLC can achieve good separation with sharper peaks for four therapeutic mAbs (trastuzumab, infliximab, tocilizumab, and bevacizumab), which have very similar molecular weights, within 20 min. This result implies that the identification and quantification of target mAbs is possible even though immunoaffinity purification is insufficient. To optimize the LC-fluorescence detector conditions, fluorescence emission spectra of OPA derivative were measured (data not shown). As shown in the experimental section, reaction conditions for bevacizumab were set to be same as those of post-column derivatization. 
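Returning to the method-validation procedure described earlier in this section: the calibration relies on a least-squares line, and LOD/LOQ are defined from signal-to-noise ratios of 3 and 10. A minimal sketch of that workflow is given below. The peak-area values are placeholders, and the baseline-noise figure is an assumption chosen only to be consistent with the reported LOD of 0.13 μg/mL; neither is taken from the paper's raw data.

```python
import numpy as np

# Hypothetical calibration data for the post-column method:
# bevacizumab standards (ug/mL) vs. peak area (arbitrary units).
conc = np.array([1, 2, 5, 10, 20, 50], dtype=float)
area = np.array([0.9, 2.1, 5.2, 9.8, 20.5, 49.0])   # placeholder values

# Least-squares calibration line: area = slope * conc + intercept
slope, intercept = np.polyfit(conc, area, 1)
pred = slope * conc + intercept
r2 = 1 - np.sum((area - pred) ** 2) / np.sum((area - area.mean()) ** 2)

# LOD/LOQ from signal-to-noise ratios of 3 and 10, where 'noise' is the
# baseline noise expressed in the same units as the peak signal.
noise = 0.13 * slope / 3            # assumed, consistent with LOD = 0.13 ug/mL
lod = 3 * noise / slope
loq = 10 * noise / slope

print(f"calibration: area = {slope:.3f} * conc + {intercept:.3f}, r^2 = {r2:.4f}")
print(f"LOD = {lod:.2f} ug/mL, LOQ = {loq:.2f} ug/mL")
```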
The emission maximum was approximately 450 nm, as reported for other amine-OPA derivatives [32,33]. The post-column derivatization reaction for bevacizumab proceeded successively in the presence of OPA and thiol in alkaline buffer with heating at 75°C. We optimized several factors related to the post-column fluorescence derivatization of bevacizumab (Fig. 2). Among three thiols we examined (2-mercaptoethanol, N-acetylcysteine, sodium 2-mercaptoethanesulfonate), 2-mercaptoethanol gave the greatest peak area (Fig. 2a). By varying the borate buffer pH in the range of 8.2-9.7, we found the maximum peak area to be 9.1 (Fig. 2b). We examined the effect of OPA concentration on fluorescence development for 5-50 mM (Fig. 2c). The fluorescence development was maximized at 30 mM. We also examined the reaction coil length of 25-200 cm on fluorescence development (Fig. 2d). Product formation increased to a maximum at 50 cm, after which constant peak area of the derivative was observed. We selected 100 cm as the optimum coil length. The total volume inside the coil was about 133 μL, and the reaction time calculated from the flow rate of the mixed reaction solution was about 0.30 minutes. It was shown that the post-column derivatization reaction using OPA proceeds rapidly even in such a short time. Figure 3 portrays a chromatogram of 50 μg/mL of bevacizumab analyzed using HT-RPLCpost-column fluorescence derivatization (red line). For comparison, the chromatogram of bevacizumab analyzed by HT-RPLC with native fluorescence detection is also shown as a black line. By post-column derivatization, the fluorescence peak intensity of bevacizumab was increased about twice despite having twice the volume of column elution into the fluorescence detector. As the chromatogram shows, immunoaffinity-purification was able to collect bevacizumab selectively from drug-spiked human plasma samples, and subsequent HT-RPLC post-column fluorescence derivatization helped to analyze bevacizumab with high sensitivity (Fig. 4). Pre-column fluorescence derivatization -SEC method For pre-column fluorescence derivatization of bevacizumab, we adopted Zenon Human IgG Labeling technology. In this technology, Zenon protein fragment binds specifically to Fc regions of IgGs with a ratio of 2:1 [29]. This technology is widely used in fluorescence and enzyme labeling of antibodies [30,31]. Because pre-column fluorescence derivatization using Zenon technology uses protein-protein interaction for labeling, HT-RPLC separation of the resulting derivatives performed with high eluotropic strength organic solvents, which are isopropanol and acetonitrile, and acidic solution as mobile phases, was impossible. Consequently, these resulting derivatives were separated by SEC mode. For pre-column fluorescence derivatization, bevacizumab was reacted with a Zenon labeling kit according to the manufacturer's instructions. Figure 5 shows SEC chromatograms of 100 μg/mL of bevacizumab or its fluorescence derivative with native fluorescence detection. The pH value represents the pH at which bevacizumab reacts with the Zenon labeling kit. The chromatograms respectively show (a) Zenon Alexa Fluor-bevacizumab derivative (pH 7.5), (b) Zenon Alexa Fluor-bevacizumab derivative (pH 3.5), (c) only Zenon reagent, and (d) only bevacizumab. From Fig. 5a, a peak corresponding to the complex of bevacizumab and Zenon reagent was detected around 6 min and was clearly discriminated with peaks of the Zenon reagent (Fig. 5c) and bevacizumab itself (Fig. 5d). 
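Earlier in this section, the reaction time in the post-column coil (~0.30 min) was derived from the coil volume and the flow rate of the mixed stream. A short sketch of that arithmetic follows; the combined flow rate is back-calculated here and should be read as an assumption, since the individual pump rates are not restated in this passage.

```python
import math

# Post-column reaction coil: 100 cm long, ~133 uL internal volume (from text).
coil_length_cm = 100.0
coil_volume_uL = 133.0

# Implied inner diameter of the coil tubing.
inner_radius_cm = math.sqrt(coil_volume_uL * 1e-3 / (math.pi * coil_length_cm))
print(f"implied coil ID ~ {2 * inner_radius_cm * 10:.2f} mm")

# Residence (reaction) time = coil volume / combined flow of eluent + reagents.
# The combined flow rate below is an assumption chosen to match ~0.30 min.
combined_flow_uL_per_min = 443.0
print(f"residence time ~ {coil_volume_uL / combined_flow_uL_per_min:.2f} min")
```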
Furthermore, formation of the complex produced a significant decrease of peak of the Zenon reagent itself at around 7.5 min by complexation. Without pH adjustment of reaction solution, insufficient formation of the complex was confirmed (Fig. 5b). These results suggest that reaction pH is an important factor affecting the complex formation. Therefore, pH adjustment of the eluate should be neutralized with a strong base such as sodium hydroxide. Fig. 6a and 6b, the pre-column fluorescence derivatization enhanced its fluorescence peak intensity by about 10 times. Figure 7 shows a chromatogram of 5 μg/mL of bevacizumab added to human plasma analyzed by pre-column fluorescence derivatization -SEC. The immunoaffinity purification described in Section 3.2 was sufficient to collect bevacizumab from human plasma samples selectively. Then the purified bevacizumab was converted to highly fluorescent Zenon complex and was detected as a single peak even in the SEC mode without interfering peaks. Pre-column fluorescence derivatization -SEC method The relations between the peak area and the bevacizumab concentration were all linear (correlation coefficients; 0.991, n = 5) over the concentrations of 0.1-10 μg/mL per 2-μL injection. The LOD and LOQ of the bevacizumab by pre-column derivatization were, respectively, 0.03 μg/mL and 0.1 μg/mL. These values were almost 2 times lower than those given by HT-RPLC with native fluorescence detection. The intra-day and inter-day precisions were, respectively, 1.3%-6.2% and 3.0%-10.9%. Conclusion For this study, we developed two fluorescence derivatization -LC methods for bevacizumab to increase its detection sensitivity for fluorescence detection. Bevacizumab was purified selectively from plasma using magnetic beads immobilized with commercial anti-idiotype antibody. In both methods, a peak corresponding to bevacizumab was detected within 10 min and detected as a single peak without interfering peaks. By pre-column and post-column fluorescence derivatization, detection sensitivities increased respectively about 10 and 2 times compared to native fluorescence detection. The intra-day and inter-day precisions showed good results for use in bioanalysis. Proposed methods are highly diverse. They are expected to be theoretically applicable to various therapeutic mAbs. Our methods enabled selective bioanalysis of bevacizumab without LC-MS/MS instruments. Therefore, these methods are expected to become general-purpose analytical methods that are capable of complementing results obtained using conventional LBA or LC-MS/MS.
Action of antimicrobial photodynamic therapy with red leds in microorganisms related to halitose Abstract Introduction: Halitosis is the term used to describe any unpleasant odor relative to expired air regardless of its source. The prevalence of halitosis in the population is approximately 30%, of which 80 to 90% of the cases originate in the oral cavity resulting from proteolytic degradation by gram negative anaerobic bacteria. Antimicrobial photodynamic therapy (aPDT) has been widely used with very satisfactory results in the health sciences. It involves the use of a non-toxic dye, called photosensitizer (FS), and a light source of a specific wavelength in the presence of the environmental oxygen. This interaction is capable of creating toxic species that generate cell death. The objective of this controlled clinical study is to verify the effect of aPDT in the treatment of halitosis by evaluating the formation of volatile sulphur compounds with gas chromatography and microbiological analysis before and after treatment. Materials and Methods: Young adults in the age group between 18 and 25 years with diagnosis of halitosis will be included in this research. The selected subjects will be divided into 3 groups: G1: aPDT; G2: scraper, and G3: aPDT and scraper. All subjects will be submitted to microbiological analysis and evaluation with Oral ChromaTM before, immediately after treatment, 7, 14, and 30 days after treatment. For the evaluation of the association of the categorical variables the Chi-square test and Fisher's Exact Test will be used. To compare the means the student t test and analysis of variance (ANOVA) will be used and to analyse the correlation between the continuous variables the correlation test by Pearson will be applied. In the analyses of the experimental differences in each group the Wilcoxon test will be used. For all analyses a level of significance of 95% (P < .05) will be considered. Discussion: Halitosis treatment is a topic that still needs attention. The results of this trial could support decision-making by clinicians regarding aPDT using aPDT for treating halitosis. Introduction Halitosis is the term used to describe any unpleasant odor related to expired air regardless of its origin. [1] The chemical components related to halitosis are volatile sulfur compounds (VSCs) such as hydrogen sulfide (H2S), methylmercaptanes (CH3SH), and dimethyl sulphide (CH3SCH3). [2][3][4][5] The prevalence of halitosis in the population is approximately 30%, of which 80 to 90% of the cases originate in the oral cavity resulting from proteolytic degradation by gram negative anaerobic bacteria from sulfurcontaining substrates in saliva, epithelial cells, blood, and food debris. [6] The dorsum of the tongue and periodontal pockets are related as the major niches of bacteria that are responsible for the emission of VSCs. Some methods are used to diagnose halitosis: the organoleptic method, which is a subjective evaluation of the exhaled air of the mouth and nose, and a scale, is used to quantify this odor, depending greatly on the olfactory capacity of the evaluator. Sulphide monitors are devices that quantify the total value of VSC exhaled from the oral cavity. Gas chromatography is the most appropriate method to detect halitosis of different origins because it performs individual measurement of the three main gases (sulfhydride, methylmercaptanes, and dimethyl sulphide), allowing to assess the intensity of the breath and its origin. 
However, the lack of standardization in the protocol for diagnosis and treatment of halitosis makes it difficult to compare the epidemiological data obtained in different countries. [7,8] Currently the treatment of halitosis is related to the masking of the odor or the decrease of the number of bacteria, through chemical and mechanical removal with the use of tongue scrapers and mouthwashes. [5,[9][10][11][12][13] Antimicrobial photodynamic therapy (aPDT) has been widely used with very satisfactory results in the health sciences. It involves the use of a non-toxic dye, called photosensitizer (FS), and a light source of a specific wavelength in the presence of the environmental oxygen. This interaction can create reactive oxygen species (ROC) that generate cell death. [14][15][16][17] The advantages of this approach are to avoid resistance to target bacteria and damage to adjacent tissues as the antimicrobial effect is confined only to the areas covered by the photosensitizer and irradiated by light acting on the target organism rapidly, depending on the dose of light energy and power output. In addition, according to Wainwright, M. [18] bacterial resistance to aPDT is unlikely, since the singlet oxygen and free radicals formed interact with various bacterial cell structures and different metabolic pathways. Halitosis is considered one of the problems related to the impact of oral health on quality of life and is enough to affect the individual's perception about his life. [19] Therefore, there is a need for new treatment alternatives for the patient, especially in the face of the technological advances in dentistry, so the patient's expectation regarding the final results can be met. The present study proposes to conduct a controlled clinical trial to evaluate the effectiveness of the application of aPDT in the tongue coating as a new way to control halitosis, mainly because it is a fast, noninvasive procedure and does not generate bacterial resistance. Materials and methods The study will follow the regulatory norms of research in humans with favourable opinion of the Research Ethics Committee of the University Nove de Julho number, and the participants will sign the informed consent form after clarifications for authorization of the participation in the research, according to Resolution 466/12 of the National Health Council. Type of study: controlled, quantitative, cross-sectional clinical study. Hypothesis Null hypothesis: There is no alteration of halitosis after the use of photodynamic therapy employing the use of blue dye and red LED. There is no microbiological alteration after antimicrobial photodynamic therapy. Experimental hypothesis: there is a decrease in halitosis after the use of photodynamic therapy using blue dye and red LED associated or not with the tongue scraper. There is microbiological alteration after antimicrobial photodynamic therapy. Research subjects For this study, young adults of both sexes, students and employees of the Universidade Nove de Julho, São Paulo will be evaluated. Inclusion criteria Young adults between the ages of 18 and 25, with an informed consent form and authorization for the diagnosis and treatment of halitosis. Young adults diagnosed with halitosis presenting Oralchroma S2H ≥ 112 ppb and/or CH3SH ≥ 26. 
Exclusion criteria Individuals will be excluded from the study: With dentofacial anomalies, in orthodontic and/or orthopedic treatment, Using a removable device, implant and/or prosthesis, With periodontal disease, With carious lesions, In cancer treatment, On antibiotic treatment up to 1 month before the survey, Pregnant, With hypersensitivity to the photosensitizer to be used. As the research is a randomized clinical study and seeking greater transparency and quality of this research, we will use the CONSORT (Consolidated Standards of Reporting Trials) recommendations (Fig. 1). The protocol is in accordance with the 2013 Standard Protocol Items: Recommendations for Interventional Trials (SPIRIT) Statement. The selected subjects will be divides by block randomization into 3 groups (n per group = 13), according to the treatment to be performed. Opaque envelopes will be identified, and a sheet containing the information of the corresponding experimental group will be inserted. There will be a 1 session for each group (Table 1). All samples from the subjects will be submitted to microbiological analysis and evaluation with Oral ChromaTM before and after treatment, followed by controls of 7, 14, and 30 days. Microbiological analysis Samples of tongue coating will be collected using a sterile swab that will be passed on the surface of the tongue dorsum with a single movement and light pressure. Samples will be deposited in sterile tubes that will be identified and stored at -80°C until analysed. After thawing, the samples will be vortexed for 1 minute. For extraction of the bacterial DNA samples will be boiled for 10 minutes and then centrifuged at 10,000 rpm for 10 minutes. The supernatant will be placed in a new microtube containing 100 mL phenol/chloroform/isoamyl alcohol (25: 24: 1), followed by ethanol precipitation. The purified DNA will be resuspended in TE buffers. The levels of P. gingivalis, T. forsythia, and T. denticola will be analysed by quantitative PCR. The quantitative analysis will be performed using real-time PCR using the Step One Plus Real-Time PCR System (Applied Biosystem, Foster City, CA) and fluorescently detected products using the Quantimix Easy SYG Kit (Biotools, Madrid, Spain), following the protocol recommended by the manufacturer. To the reaction 10l will be used SYBR Green 0.5 ul DNA template, 200 mM of each primer (P. gingivalis CATAGATATCACGAGGAACT CCGA TT and AAACTGTTAGCAACTACCGATGTGG; T. forsythia GGGTGAGTAACGCGTATGTAACCT and ACC-CAT CCGCAACCAATAAA, T. denticola CGTTCCTGGGCC TTGTACA and TAGCGACTTCAGGTACCCTCG; Universal bacteria CCATGAAGTCGGAATCGCTAG and GCTTGACGG GCGGTGT) in total volume of 20 ml. For the standard curve, reactions containing template DNA 2 to 2X105 copies of the analyzed gene (16S rRNA) will be performed using pTOPO plasmids in which the 16S genes of the 14 different organisms will be cloned. As a negative control sterile milliQ water will be added instead of DNA template. Reactions to 16S rRNA will be performed with initial denaturation at 95°C for 2 minutes followed by 36 cycles of 94°C for 30 seconds, 55°C for 1 minute and 72°C for 2 minutes and final extension at 72°C for 10 minutes 46. Fluorescence will be detected after each cycle and plotted using Step One Plus Real-Time PCR System software (Applied Biosystem, Foster City, CA). To ensure the specificity of the products detected by fluorescence and to avoid detection of primer dimers, the detection will be performed to a degree below the dissociation temperature of the amplicons. 
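The bacterial loads will be read off a 16S rRNA standard curve built from plasmid dilutions spanning 2 to 2 × 10^5 copies. As an illustration of how copy numbers are typically derived from such a curve, the sketch below fits Ct against log10(copies) and inverts the fit for unknown samples. All Ct values shown are hypothetical, and the efficiency calculation from the slope is standard qPCR practice rather than a step spelled out in this protocol.

```python
import numpy as np

# Hypothetical standard curve: plasmid copies per reaction vs. measured Ct.
std_copies = np.array([2e0, 2e1, 2e2, 2e3, 2e4, 2e5])
std_ct     = np.array([35.1, 31.8, 28.4, 25.1, 21.7, 18.3])   # placeholder values

# Linear fit: Ct = slope * log10(copies) + intercept
slope, intercept = np.polyfit(np.log10(std_copies), std_ct, 1)
efficiency = 10 ** (-1 / slope) - 1       # ~1.0 corresponds to 100% efficiency
print(f"slope = {slope:.2f}, efficiency = {efficiency:.0%}")

# Quantify unknowns (e.g., P. gingivalis in tongue-coating samples) from their Ct.
unknown_ct = np.array([27.2, 30.5, 24.9])                      # placeholder values
copies = 10 ** ((unknown_ct - intercept) / slope)
for ct, c in zip(unknown_ct, copies):
    print(f"Ct {ct:.1f} -> ~{c:,.0f} 16S copies per reaction")
```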
All samples will be analyzed in duplicate and each dilution of the plasmids to the standard curve in triplicate. The purpose of the microbiological evaluation will be to verify the effectiveness of the photodynamic Assessment of halitosis level The literature describes some methods for measuring halitosis, such as the organoleptic assessment of air emanated from the oral cavity, [21,22] by sulfide monitor [21,23,24] and by gas chromatography. [24][25][26] However, it has already been demonstrated that the organoleptic test can be influenced by the olfactory ability, the emotional state of the examiner and by climatic conditions. Therefore, the Oral Chroma portable device (Abilit, Japan) (Fig. 2), using a highly sensitive semiconductor gas sensor, will be used for this study. The air collection will follow the OralChromaTM Manual Instruction, in which the participant is instructed to make a mouthwash with cysteine (10 mM) for 1 minute to differentiate the VSCs origin and remain mouth closed for 1 minute. A manufacturer's own syringe for collection of mouth air will be introduced into the patient's mouth with the plunger fully inserted. The patient closes his mouth, breathes through his nose and waits with his mouth closed for 1 minute. They will be asked not to touch the tip of the syringe with their tongue. The plunger will be pulled out, re-emptied into the patient's mouth and pulled out again to fill the syringe with the breath sample. The tip of the syringe will be cleaned to remove moisture from the saliva, the gas injection needle will be placed in the syringe and the plunger will be adjusted to 0.5 ml. The air is injected into the door of the appliance in a single movement (Fig. 3). OralChromaTM allows the capture of gas concentration values by measuring the VSC thresholds (from 0 to 1000 ppb), very accurately after 8 minutes. The results are stored on the device itself and can be retrieved and viewed at any time for comparison before and after treatment. From the analysis of VSC captured by the system, we have: To avoid changes in halimetry, the examination will be conducted in the morning [26] and participants will be instructed to follow the following guidelines: 48 hours before the evaluation, avoid ingesting food with garlic, onion and strong seasoning, alcohol consumption and use of oral antiseptic. On the day of the evaluation, in the morning, you can eat up to 2 hours before the examination, abstain from coffee, candies, chewing gum, oral hygiene products, and perfumed personnel (aftershave, deodorant, perfume, creams and/or tonic) and the tooth brushing should be done only with water. [27] Application of aPDT For the photodynamic therapy, an equipment developed for this project will be used, with emission of red LED (660 nm) and tip of 2.84 cm 2 in diameter. At the moment of application of the aPDT, only the volunteer to be treated and the professional responsible will be present, both using specific eye protection glasses. The active tip of the laser will be coated with clear disposable plastic (PVC) (avoiding cross contamination) and the professional will be properly dressed (Fig. 4). Methylene blue will be used as the photosensitizing agent, at a concentration of 0.005% (165 mm), to be applied in enough quantity to cover the middle third and back of the tongue for 2 minutes for incubation. The excess will be removed with a sucker in order to maintain the surface wet with the photosensitizer itself, without the use of water. 
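The irradiation parameters themselves come from a dose table (Table 2, ref. [28]) that is not reproduced in this excerpt, so the numbers below are purely illustrative. The sketch only shows how radiant exposure (fluence) and energy per irradiated point follow from output power, irradiation time, and the 2.84 cm² tip area quoted above; the power and time values are hypothetical placeholders.

```python
# Illustrative aPDT dosimetry; power and time are placeholders, NOT values
# from Table 2 of the protocol.
tip_area_cm2 = 2.84          # LED tip area stated in the protocol
output_power_mW = 400.0      # hypothetical output power
irradiation_time_s = 90.0    # hypothetical time per point

irradiance_mW_cm2 = output_power_mW / tip_area_cm2
energy_J = output_power_mW / 1000.0 * irradiation_time_s
fluence_J_cm2 = energy_J / tip_area_cm2

print(f"irradiance ~ {irradiance_mW_cm2:.0f} mW/cm^2")
print(f"energy per point ~ {energy_J:.1f} J, fluence ~ {fluence_J_cm2:.1f} J/cm^2")
```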
Four points will be irradiated with 1 cm between them, considering the scattering halo and the effectiveness of aPDT (Fig. 5). Based on previous studies carried out with aPDT for the treatment of halitosis [21][22][23][24] the apparatus will be previously calibrated with Table 2). [28] The method of point application will be used in direct contact with the tongue. Sample calculation For the calculation of the sample size, the data of the work by A. Costa Costa and Mota et al was used. Effect of photodynamic therapy for the treatment of halitosis in adolescents-a controlled, microbiological, clinical trial. [29] Initially we established an error err ¼ jðx 1Þ À Àðx 2Þ À j, where ðx 1Þ À and ðx 2Þ À are the baseline values for periodontal treatment with PDT. From this error, the effect size, given by err/ p (s_1^2 + s_2^2) where s_1^2 and s_2^2 are the variances of groups 1 and 2, respectively. Assuming that the studied groups have a normal or approximately normal distribution, that the sample size will be sufficiently large and that a 2-tailed test will be used, for a significance level a = 0.05 and maintaining the power of the test 1-b = 0.90 we have: In Figure 5 it is observed that with a total sample size of 39 subjects, that is, 3 groups with 13 samples each, the statistical difference should be demonstrated, keeping the power of the test greater than or equal to 0.90. If the normal distribution hypothesis is rejected, the sample size should be corrected by approximately 5%. Organization and statistical processing of data The data will be tabulated and treated in the program Bioestat 5.0. The descriptive statistics of the data will be performed. For the evaluation of the association of categorical variables, the Chisquare test and Fisher's Exact Test will be used. To compare the means, the Student's t test and Analysis of variance (ANOVA) will be used and to analyze the correlation between the continuous variables the test of Pearson correlation will be applied. In the analyzes of the experimental differences in each group the Wilcoxon test will be used. For all analyzes a level of significance of 95% (P < .05) will be considered. Discussion Halitosis plays an important role in communication and is related to quality of life and social interaction. Quality of life refers to a person's perception of their position in the system they live in relation to their goals, standard expectations and concerns. It is a comprehensive concept, which is affected by the physical, psychological and social relationship, and relationship with the environment in which it lives. And halitosis is considered one of the problems related to the impact of oral health on quality of life and is enough to affect the individual's perception about his life. [19] Therefore, the need for new treatment alternatives for the patient, especially in the face of the technological advances in dentistry, can work better with the patient's expectation regarding the results. Because it is a condition that involves mainly anaerobic bacteria, which release the SVC in the exhaled air and are responsible for the bad smell. Its main cause is related to the presence of these microorganisms in the lingual flap. And so, it must work to reduce the number of microorganisms responsible for this problem. 
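To make the sample-size reasoning described above reproducible, the sketch below runs an analogous two-sample power calculation (two-tailed, α = 0.05, power = 0.90) with statsmodels. The standardized effect size is a placeholder: the protocol derives it from the baseline means and variances of the reference study (and normalizes by √(s₁² + s₂²) rather than a pooled standard deviation), whereas the sketch simply uses a Cohen's d-style value chosen for illustration.

```python
from statsmodels.stats.power import TTestIndPower

# Placeholder standardized effect size; the protocol computes its own value
# from the means and variances of the reference study.
effect_size = 1.3

analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=effect_size,
                                   alpha=0.05,
                                   power=0.90,
                                   alternative="two-sided")
print(f"required n per group ~ {n_per_group:.1f}")
# With an effect size around 1.3 this lands near the 13 subjects per group
# planned in the protocol; smaller effects would require larger groups.
```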
There is still considerable divergence in the literature regarding the epidemiological data on this condition, largely because of the lack of standardization in diagnosis, and the treatments proposed in the literature broadly consist of chemical and mechanical reduction of the tongue coating. [9][10][11][12][13] Initial studies in this area observed an immediate result, with elimination of bad odor through a reduction in VSC concentration, specifically in H2S levels, when photodynamic therapy was applied to the tongue dorsum of adolescents with halitosis. [29][30][31][32][33] The present study proposes a controlled clinical trial to evaluate the effectiveness of applying aPDT to the tongue coating as a new approach to the treatment and control of halitosis, mainly because it is a fast, non-invasive procedure that does not generate bacterial resistance, as evidenced in the published studies cited above. The absence of induced bacterial resistance can be considered one of the great advantages of aPDT, given the current problem of multidrug-resistant bacteria that no longer respond to broad-spectrum antibiotics.
Towards synthetic cells using peptide-based reaction compartments Membrane compartmentalization and growth are central aspects of living cells, and are thus encoded in every cell’s genome. For the creation of artificial cellular systems, genetic information and production of membrane building blocks will need to be coupled in a similar manner. However, natural biochemical reaction networks and membrane building blocks are notoriously difficult to implement in vitro. Here, we utilized amphiphilic elastin-like peptides (ELP) to create self-assembled vesicular structures of about 200 nm diameter. In order to genetically encode the growth of these vesicles, we encapsulate a cell-free transcription-translation system together with the DNA template inside the peptide vesicles. We show in vesiculo production of a functioning fluorescent RNA aptamer and a fluorescent protein. Furthermore, we implement in situ expression of the membrane peptide itself and finally demonstrate autonomous vesicle growth due to the incorporation of this ELP into the membrane. L ife is based on the complex interaction of numerous molecular components, which self-assemble and self-organize into higher ordered structures. Inspired by natural systems, several aspects of living cells have already been recapitulated in vitro, such as DNA replication, protein expression or compartmentalization of molecular reactions 1,2 . Compartmentalized protein expression and DNA replication inside vesicles have been studied as well 3,4 . For bottom up approaches towards the creation of protocellular compartments, these aspects need to be coupled and coordinated. For instance, DNA amplification was coupled with vesicle selfreproduction 3 , or the genetically encoded synthesis of phospholipids inside lipid vesicles from precursor molecules such as acylcoenzyme A and glycerol-3-phosphate 5 . Here, mainly phospholipids or fatty acids are used for compartmentalization [5][6][7][8] . However, in principle also other suitable amphiphilic building blocks can serve as membrane material, such as peptides or synthetic block copolymers. For instance, Huber et al. 9 used a specifically designed amphiphilic elastin-like peptide (ELP) to form vesicles inside E. coli. The genetic template for the production of such peptides can be enclosed inside an in vitro reaction compartment and therefore directly linked to the expression machinery. In order to implement this in a cell-free context 10,11 , transcription-translation systems based on purified components, e.g., the PURE system 7,12 , or on bacterial cell extracts, e.g., the TX-TL system 13,14 , can be used. These systems employ the multicomponent bacterial translation machinery, to express proteins from externally added DNA in a one pot reaction. For the transcription the T7 polymerase can be used as well as the present constitutive based transcription system in the case of the E. coli TX-TL system. Such in vitro systems have been successfully used to implement and to study gene expression, gene circuits, expression and DNA self-replication 12,[15][16][17][18] . For instance, cell-free systems were encapsulated into phospholipid vesicles to facilitate compartmentalized protein expression for several hours 15 . In the present work, we encapsulate the TX-TL system in peptidosomes made of amphiphilic elastin-like peptides (ELP). These peptides can be easily expressed in cell-free systems 10,11 and thus they simplify the synthesis of the membrane material in comparison to lipid synthesis. 
We show that biomolecules can be easily enclosed in ELP-based vesicles, and we further demonstrate in vesiculo transcription of an RNA aptamer and translation of a fluorescent protein. We finally, succeed in the expression of the membrane-constituting peptides inside the vesicles themselves and demonstrate their incorporation into the membrane and thus inherent vesicle growth. Results The synthetic cell model. The molecular building block of our membrane was a synthetic peptide derived from the protein tropoelastin, which commonly comprises the sequence motif (GαGVP) n , where α can be any natural amino acid except for proline and n is the number of pentapeptide repeats. ELPs are stimulus-sensitive peptides and undergo a fully reversible phase transition from a hydrophilic to a hydrophobic state when the sample temperature exceeds the specific transition temperature T t 19 . The latter depends on several parameters such as the amino acid used for α, concentration, salt conditions, pH, etc. Unlike other hydrophobic molecules such as lipids, ELPs in a hydrophobic coacervate state can still exhibit a water content of about 63% by weight 20 . In order to create an amphiphilic peptide capable of membrane formation, we produced a diblock copolymer with the sequence MGH-GVGVP((GEGVP) 4 (GVGVP)) 4 -((GFGVP) 4 (GVGVP)) 3 (GFGVP) 4 -GWP abbreviated as EF. At physiological pH, this peptide has a charged hydrophilic E-rich block (mainly GEGVP pentapeptides) with a specific T t,E below sample temperature T and a hydrophobic F-rich block (mainly GFGVP pentapeptides) with a specific T t,F above T (Fig. 1a). The peptide was expressed using E. coli cells carrying a plasmid coding for EF and purified using inverse transition cycling 21 at pH 2 and 7 (see Methods). The purity of the protein was confirmed by sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) with Roti®-blue staining ( Supplementary Fig. 1) and the concentration was determined using spectroscopic methods (see Methods). For controlled formation of vesicles, the peptides were dissolved in a chloroform-methanol mixture together with spherical glass beads (see Methods). We assume that the vesicle formation process is similar to the liposome formation process (Fig. 1b) 7 . After fast evaporation of the organic solvent the glass beads are coated with EF. Due to the addition of the swelling solution (initially 1x phosphate-buffered saline (PBS)) the EF film rehydrates through budding and the swelling solution is encapsulated. Using dynamic light scattering (DLS) we determined the diameter distributions for 110, 180, 220, and 440 pM EF. The corresponding peak values were 87, 178, 220, and 250 nm with dispersion (sample standard deviation) values of 47, 67, 111, and 415 nm (Supplementary Table 1). For further experiments we used 180 pM EF since it resulted in the lowest relative dispersion (dispersion divided by the peak value). Using transmission electron microscopy (TEM) we determined the peak value of the diameter distribution for 180 pM EF (Fig. 1c) to 176 nm with a dispersion of 68 nm, which is in good agreement with the DLS data ( Supplementary Fig. 2). The size distribution of the TEM and DLS data can be described by a Weibull extremal probability distribution 22 (Supplementary Note 1), and all given dispersion values are determined from a Weibull fit to the data. Membrane formation and its stability depend on the osmotic pressure, the critical aggregation concentration and the chemical potentials in general. 
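The peak diameters and dispersions quoted above are obtained from Weibull fits to the measured size distributions (Supplementary Note 1). As an aside, a minimal version of such a fit with SciPy is sketched below; the diameter sample is synthetic, and reporting the fitted mode as the "peak value" and the fitted standard deviation as the "dispersion" is our reading of that procedure.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic vesicle diameters (nm), standing in for measured TEM/DLS data.
diameters = stats.weibull_min.rvs(c=2.8, scale=200, size=500, random_state=rng)

# Fit a two-parameter Weibull distribution (location fixed at zero).
shape, loc, scale = stats.weibull_min.fit(diameters, floc=0)

# Mode ("peak value") and standard deviation ("dispersion") of the fitted law.
mode = scale * ((shape - 1) / shape) ** (1 / shape) if shape > 1 else 0.0
dispersion = stats.weibull_min.std(shape, loc=loc, scale=scale)
print(f"shape = {shape:.2f}, scale = {scale:.0f} nm")
print(f"peak (mode) = {mode:.0f} nm, dispersion = {dispersion:.0f} nm")
```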
Therefore, the samples were not diluted if applicable or purified. We also verified the size stability of the vesicles over 18 h using DLS with a mean of 179 nm and a standard deviation of 10 nm ( Supplementary Fig. 3). We found consistent mean diameters in DLS measurements from five repetitive rehydration experiments indicating no stochastic influence on the vesicle formation process with a student-t corrected standard deviation of 12.4 nm for a P-value of 0.95. For a swelling solution containing 1x PBS, the mean membrane thickness was roughly determined to be 4.9 nm with a standard deviation of 0.5 nm using TEM (Supplementary Table 2 and Supplementary Fig. 8 Fig. 5 and 6). In order to show encapsulation using the glass beads method, we utilized fluorescently-labeled DNA in the swelling solution. The resulting vesicle sample was diluted and subsequently measured with a flow cytometer. The fluorescence intensities of the vesicles scaled with the DNA concentrations used ( Fig. 1d and Supplementary Fig. 7). In a control experiment without vesicles the fluorescence determined for a sample of FAM-labeled DNA was reduced due to dilution of the DNA, but it also scaled with the concentration (Supplementary Fig. 20). Transcription in vesicles. We next studied the efficacy of in vitro transcription reactions encapsulated inside of the peptide vesicles. To this end, we transcribed the fluorogenic dBroccoli RNA aptamer 23 in the presence of its cognate fluorophore DFHBI (3,5difluoro-4-hydroxybenzylidene imidazolinone) and monitored the increasing fluorescence of the vesicles over time. dBroccoli is the dimeric version of the Broccoli aptamer, which exhibits robust folding, even in low magnesium concentrations, and it's optimized for usage in living cells. DFHBI is a small nonfluorescent molecule that gets into a highly fluorescent state upon binding to its aptamers such as Broccoli. The encapsulated transcription mix (TX) consisted of ribonucleoside tri-phosphates (rNTPs), electrolytes, DFHBI (dimethyl sulfoxide (DMSO) stock), T7 RNA polymerase, and the DNA template for the dBroccoli aptamer. Encapsulation was performed with the glass beads method described above, whereby 10% DMSO in solution did not affect vesicle formation. To suppress transcription outside of the vesicles DNase I was added to the outer solution after formation of the vesicles, which digested any non-encapsulated DNA template. As a negative control, the DNase I was added before the vesicle formation was done. As expected, transcription inside the vesicles led to an increase in the fluorescence signal, which reached its maximum after ≈50 min. The plateau phase was most probably caused by the depletion of resources such as the rNTPs, the formation of pyrophosphates or the exhaustion of the polymerase. To test the hypothesis of resource depletion the rNTP concentration was altered. The measured intensity values always reached their plateau phase after the same time but depending on the rNTP concentration the maximum fluorescence level increased ( Fig. 1e and Supplementary Fig. 9). This increase was not linear with the rNTP concentration, and hence the rNTP depletion was not solely responsible for the limitation of the transcription reaction. Protein expression in peptide vesicles. In order to demonstrate a compartmentalized transcription-translation process, we expressed the fluorescent protein mVenus (Fig. 2a) and YPet (Supplemental Figs. 10 and 11) inside of the vesicles. 
Encapsulation again was carried out using the glass beads method. Upon gene expression, mVenus fluorescence increased and reached a plateau phase after 180 min ( Fig. 2 and Supplementary Fig. 10). Expression of proteins occurring outside of the vesicles was suppressed using the antibiotic kanamycin, which blocks the 30S-subunit of the ribosome and thus prevents translation. As an alternative, we used EDTA to chelate magnesium ions in the buffer and thereby also suppress gene expression (Supplementary Fig. 11). Neither the presence of kanamycin nor EDTA outside of the vesicles compromised expression of mVenus inside of the peptidosomes, which indicates that the peptide membrane is not permeable for these small molecules on the time-scale of our experiments. When kanamycin or EDTA was encapsulated together with the expression mix as a control, protein expression was successfully suppressed (Fig. 2b red curve and Supplementary Figs. 10 and 11). The TX-TL system also contains cofactors, such as NAD + or FAD, which change their autofluorescence upon reduction to NADH or FADH 2 and thus create an additional change in the fluorescence signal. Therefore, we used a background correction with a sample containing the cell extract, but without plasmid. By assuming a Poisson distribution we can calculate the probability to find the TX-TL components inside the vesicles (Supplementary Note 2). For concentrations of about 1 µM and above (e.g., the proteins) the probability is close to 100%, whereas for the plasmid (50 nM) the probability is 35%. Vesicle growth caused by compartmentalized peptide synthesis. As a proof of principle, we measured the growth of the peptide vesicles, when monomers are added to the outer solution. In DLS measurements we could measure a size change from 192 nm (dispersion 74 nm) to 234 nm (dispersion 101 nm) after the addition of 50 µM ELP ( Supplementary Fig. 4). Given the capacity for protein expression within the peptidosomes, we finally proceeded to synthesize the membraneconstituting peptide itself inside of the vesicles (Fig. 3a). To verify peptide expression inside the vesicles containing the TX-TL system, we equipped EF with a His-tag. After vesicle formation and EF expression we analyzed this sample for His-tagged peptides. External expression was suppressed using kanamycin. Using western blotting the peptide was clearly identified after 240 min of expression, whereas in the initial sample (before expression) no His-tagged peptide was found (Fig. 3b bottom and Supplementary Fig. 18). The peptide band was shifted upward with respect to the expected position at about 18 kDa, which is a well-known effect for elastin-like polypeptides 24,25 . For clarification we confirmed the expected weight of our non-tagged EF construct by mass spectrometry. We found a peptide mass of 18180 Da (Fig. 3b top and Supplementary Fig. 21), which is the exact ELP mass reduced by the mass of the methionine at the beginning of the peptide sequence, which most probably has been removed through posttranslational modification 26 . The two peaks next to the parent mass most likely represent the peptide mass with Na + and the peptide mass with acetonitrile. In the next step, we investigated the incorporation of the internally generated EF peptides into the vesicle bilayer and thus the growth of the membrane from within. The vesicles were produced as before and kanamycin was utilized to suppress outside expression. 
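Returning to the encapsulation statistics mentioned above: the quoted probabilities follow from Poisson loading of molecules into the vesicle volume. The sketch below implements that estimate. The vesicle diameter is a free parameter, so the exact numbers depend on the diameter assumed in Supplementary Note 2; with ~200 nm vesicles the plasmid value comes out below the quoted 35%, which is reached for vesicles of roughly 300 nm.

```python
import math

AVOGADRO = 6.022e23  # 1/mol

def encapsulation_probability(conc_molar: float, diameter_nm: float) -> float:
    """P(at least one molecule in a vesicle), assuming Poisson loading."""
    radius_dm = diameter_nm * 1e-8 / 2           # nm -> dm, so volume is in liters
    volume_L = 4 / 3 * math.pi * radius_dm ** 3
    expected = conc_molar * AVOGADRO * volume_L   # mean copies per vesicle
    return 1 - math.exp(-expected)

for name, conc in [("protein component (1 uM)", 1e-6), ("plasmid (50 nM)", 50e-9)]:
    for d in (200, 300):
        p = encapsulation_probability(conc, d)
        print(f"{name}, d = {d} nm: P(>=1 copy) = {p:.0%}")
```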
After 8 h of incubation, flow cytometry measurements indeed showed an increase of the forward scattering signal. This indicates a growth in vesicle size with respect to a reference sample in which kanamycin was present inside the vesicles (Supplementary Fig. 19). Using TEM, we statistically verified the relative size change of the vesicles (Supplementary Figs. 12 and 13). The freshly prepared vesicles, which were able to grow, were divided into two batches. One was immediately flash-frozen to suppress peptide expression, whereas the second was incubated at 29°C for 240 min and then flash-frozen to stop expression. At the beginning of the expression (before incubation of the sample), the peak of the size distribution was found at a diameter of 157 nm with a dispersion of 104 nm, while peptide synthesis for 240 min resulted in a diameter of 330 nm and a dispersion of 83 nm (Fig. 3c, Supplementary Fig. 16, and Supplementary Table 3). As mentioned before, only 35% of the vesicles should contain a plasmid and are therefore able to express EF and to grow. Figure 3c shows no indication of a non-growing subpopulation at t = 240 min. We assume that the vesicles exchange membrane peptides, which makes the whole population grow; perhaps this also indicates the existence of flip-flop between the leaflets. As a negative control, the hydrophilic ELP (GVGVP)40 (further denoted as V40) was expressed to keep the load on the expression system similar. Neither flow cytometry nor TEM measurements (Supplementary Figs. 14 and 15) showed a measurable change in vesicle size in this case, which is in agreement with the fact that V40 is not able to incorporate into the membrane. The peak value determined from these TEM measurements was 149 nm with a dispersion of 102 nm at the beginning of V40 expression and 145 nm with a dispersion of 128 nm after 240 min.

To further examine vesicle growth, we utilized a Förster resonance energy transfer (FRET) assay to monitor the incorporation of internally expressed EF into the membrane. To this end, we prepared two batches of EF, which were modified with either the fluorophore Cy3 or the fluorophore Cy5 via copper-catalyzed azide-alkyne Huisgen cycloaddition (see Methods). Vesicles were then formed using a 1:1 mixture of Cy5-EF and Cy3-EF. As a result, the dyes were randomly and homogeneously distributed within the vesicle membrane after formation with the glass beads method. As Cy3 and Cy5 constitute a FRET pair, excitation of the Cy3 fluorophore led to fluorescence emission of the Cy5 acceptor via FRET. In a bulk experiment, we found that expression of non-labeled EF within the vesicles led to a decrease in the acceptor signal accompanied by an increase in donor fluorescence. This indicated an increasing average distance between the dyes inside the membrane (Fig. 3d, e) and thus an incorporation of new EF. The vesicles were produced as before and kanamycin was utilized to suppress outside expression. In control measurements with kanamycin inside the vesicles, the FRET signal stayed constant (Supplementary Fig. 17). This clearly demonstrates that EF peptides expressed in the interior of peptide vesicles incorporate into the membrane and cause vesicle growth. From the measured vesicle diameters, we could estimate the relative number of peptides expressed inside the vesicles.
The membrane volume at times 0 min and 240 min after the start of the expression reaction can be calculated from the volume of a single ELP, V_ELP, and the number of peptides N_0 and N_240 at these times. The relative volume increase of the membrane, ξ = N_240·V_ELP / (N_0·V_ELP) = N_240/N_0, is related to the vesicle radii at times 0 min and 240 min via ξ ≈ (R_240/R_0)² (Supplementary Note 3), since the membrane is a thin shell whose volume scales with the squared radius. From our experiments, we found ξ ≈ 4.4, which means a 4.4-fold increase of the initially present number of ELPs due to peptide production inside the vesicles; this is consistent with the TEM size distributions reported above, since (330 nm / 157 nm)² ≈ 4.4.

Discussion

In conclusion, our results demonstrate that peptide vesicles are promising candidates for the generation of artificial cell-like compartments. Their fabrication is relatively straightforward, and the encapsulation of biochemical reaction mixtures is currently limited by the vesicle size only for molecules present at low concentrations. We successfully showed transcription of an RNA aptamer and the expression of fluorescent proteins inside our peptide vesicles. Most importantly, we demonstrated vesicle growth through expression of the membrane peptide in vesiculo and its incorporation into the membrane. It is conceivable that in future work peptide vesicle growth and replication of the encapsulated genetic templates could be coupled, which would be a major step towards the generation of self-replicating protocellular compartments.

Methods

Expression of elastin-like peptides. For bacterial ELP expression, the peptide gene was cloned into a pET20b(+) expression vector (Novagen) and transformed into the BL21(DE3)pLysS strain of E. coli. As confirmed by Sanger sequencing, the gene encoded the polypeptide sequence MGHGVGVP((GEGVP)4(GVGVP))4((GFGVP)4(GVGVP))3(GFGVP)4GWP (abbreviated as EF). ELP expression was performed in a culture flask shaker at 37°C in a 1 L culture of LB (Luria/Miller) medium (10 g of tryptone, 5 g of yeast extract, 10 g of NaCl, and 100 mg of carbenicillin), induced with 240 mg of IPTG (isopropyl β-D-1-thiogalactopyranoside) when the optical density at 600 nm reached approximately 0.8. After 16 h of incubation at 16°C, the bacteria were harvested through centrifugation. The bacteria were lysed by sonication in phosphate-buffered saline (PBS, pH 7.4) supplemented with lysozyme (1 mg/mL), 1 mM PMSF, 1 mM benzamidine, and 0.5 U of DNase I. After lysis, 2 mL of 10% (w/v) PEI was added per 1 L of original cell culture. The samples were incubated at 60°C for 10 min and afterwards at 4°C for 10 min, followed by centrifugation at 16,000 x g at 4°C for 10 min. The ELPs were purified through sequential centrifugations under acidic (pH 2) and neutral (pH 7) conditions. For the pH adjustment, phosphoric acid and sodium hydroxide were used. During centrifugations at pH 7, the ELPs remained in the supernatant, while during centrifugations at pH 2 they phase-separated into the pellet, which was re-suspended in water. After three cycles of centrifugation, the ELPs were dissolved in water at a concentration of 700 μM. The concentration of the peptides was measured using absorption spectrometry (NanoPhotometer).

Glass beads method. Two hundred microliters of concentrated 1.1 mM ELP solution was mixed with 1250 µL of a 2:1 chloroform/methanol mixture for fast evaporation. A total of 1.5 g of spherical glass beads (212 µm to 300 µm in size) was added to a round-bottom flask. Using a rotary evaporator, the solvent was evaporated, resulting in a peptide film on the glass beads.
For further experiments, 100 mg of the glass beads was mixed with 60 µL of the swelling solution containing the molecules to be encapsulated. After an incubation for 5 min at 25°C, the vesicles had formed and the sample was centrifuged to sediment the glass beads. The vesicle solution was removed using a pipette.

Western blotting. Samples were mixed with 2x Laemmli buffer and heated to denature the peptide structure. SDS-PAGE (12%) was used for separation of the sample components. For further analysis, the peptides were fixed on a PVDF (polyvinylidene difluoride) membrane by transferring the content of the SDS gel to the membrane using a semi-dry blotter. The peptide-free areas of the membrane were blocked by incubation in a blocking solution containing bovine serum albumin (BSA). Afterwards, the membrane was rinsed several times with PBST (phosphate-buffered saline with Tween 20) to remove residual BSA. The detection of the immobilized peptides was carried out by incubating the membrane with a specific anti-His antibody (6×-His epitope tag antibody, mouse, purchased from Life Technologies GmbH, Darmstadt, Germany; catalog number MA1135, clone 4E3D10H2/E3) at 4°C overnight. Residual antibodies were removed by washing with PBST. For visualization, secondary antibodies (anti-mouse Alexa Fluor 680, goat, purchased from Life Technologies; catalog number A28183) were added onto the membrane and incubated for 1 h at room temperature. Residual secondary antibodies were removed through washing with PBST before the membrane was imaged using a fluorescence scanner (Typhoon FLA 9500, GE Healthcare Life Science).

Transcription-translation reaction (TX-TL). For the generation of crude S30 cell extract, a BL21-Rosetta 2(DE3) mid-log phase culture was bead-beaten with 0.1 mm glass beads in a Minilys homogenizer (Peqlab, Germany) as described in ref. 27. The composite buffer contained 50 mM Hepes (pH 8), 1.5 mM ATP and GTP, 0.9 mM CTP and UTP, 0.2 mg/mL tRNA, 26 mM coenzyme A, 0.33 mM NAD, 0.75 mM cAMP, 68 mM folinic acid, 1 mM spermidine, 30 mM PEP, 1 mM DTT, and 2% PEG-8000. As an energy source in this buffer, phosphoenolpyruvate (PEP) was utilized instead of 3-phosphoglyceric acid (3-PGA). All components were stored at −80°C before usage. A single cell-free reaction consisted of 42% (v/v) composite buffer, 25% (v/v) DNA plus additives, and 33% (v/v) S30 cell extract. For ATP regeneration, 13.3 mM maltose and 1 U of T7 RNA polymerase (NEB, M0251S) were added to the reaction mix 2. All measurements took place at 29°C with 50 nM of plasmid unless indicated otherwise.

Dynamic light scattering. For the DLS experiments, the instrument DynaPro NanoStar (Wyatt Technology Corporation) was used. The buffers were sterile-filtered before usage and the samples were measured in a disposable cuvette. For one distribution, a set of 50 single measurements was performed for 2 s and averaged afterwards. The values were processed with the DYNAMICS software using a CONTIN-like algorithm.

Transmission electron microscopy. The vesicle solution was adsorbed on glow-discharged formvar-supported carbon-coated Cu400 TEM grids (FCF400-CU, Science Services, Munich, Germany) for 2 min, followed by a negative stain using a 2% aqueous uranyl formate solution with 25 mM sodium hydroxide for 45 s. Afterwards the grid was dried and stored under vacuum for 30 min. Imaging was carried out using a Philips CM100 transmission electron microscope at 100 kV.
For acquiring images, an AMT 4-megapixel CCD camera was used, and imaging was performed at magnifications between ×8,900 and ×15,500. For image processing, the plugin Scale Bar Tools for Microscopes for the Java-based software ImageJ was used.

Flow cytometry. The flow cytometer measurements were performed using a CyFlow Cube 8 cytometer (Sysmex Partec GmbH, Germany) equipped with a blue laser emitting at 488 nm. The measured signals were the forward scattering signal (FSC), the side scattering signal (SSC), and the fluorescence signal, which was bandpass-filtered at 536 nm ± 40 nm. The buffers were sterile-filtered and degassed before usage. For a measurement, 100 µL of the sample was diluted with 500 µL 1x PBS (8 g/L NaCl, 2 g/L KCl, 1.42 g/L Na2HPO4, 0.27 g/L K2HPO4, pH 6.8-7.0) and measured immediately. The analysis of the data was performed with the FlowJo v10 software (FlowJo LLC, USA).

Fluorescence measurements. Cell-free expression and transcription were characterized via plate reader measurements with the corresponding filter sets for the fluorescence (BMG FLUOstar Optima), using 15 µL reaction volumes in 384-well plates.

FRET measurements. The vesicles were prepared according to the glass beads method, with a mixture of 100 µL of 1.1 mM Cy3-labeled ELPs and 100 µL of 1.1 mM Cy5-labeled ELPs. For rehydration, the TX-TL solution was used with the plasmid containing the EF gene (50 nM). Expression of EF occurring outside of the vesicles was suppressed using the antibiotic kanamycin. For the reference, kanamycin was added before the vesicle formation. Cell-free expression was characterized via plate reader measurements in bulk, with the corresponding filter sets for the FRET dyes (BMG FLUOstar Optima), using 15 µL reaction volumes in 384-well plates.

Mass spectrometry. Full-length protein mass spectrometry was performed on a Dionex Ultimate 3000 HPLC system coupled to a Thermo LTQ-FT Ultra mass spectrometer with an electrospray ionization source (spray voltage 4.2 kV, tube lens 120 V, capillary voltage 48 V, sheath gas 60 arb, aux gas 10 arb, sweep gas off). In all, 2.5 µL of sample, corresponding to 1.64 nmol of peptide, was separated on-line using a BioBasic-4 column (Thermo; 150 mm × 1 mm, 5 µm) by applying a multistep gradient from 2% to 20% eluent B over 6 min, 20% to 25% B over 1 min, and 25% to 85% B over 14 min (eluent A: water with 0.1% (v/v) formic acid; eluent B: 90% (v/v) water, 10% (v/v) acetonitrile with 0.1% (v/v) formic acid; flow: 0.2 mL/min). All solvents were of liquid chromatography-mass spectrometry grade. The mass spectrometer was operated in positive mode collecting full scans at R = 50,000 from m/z 400 to m/z 2000. Collected data were deconvoluted using the Thermo Xcalibur Xtract algorithm.

Click chemistry. Initially, elastin-like peptides were activated with an azide group and then conjugated to dyes via copper-catalyzed azide-alkyne Huisgen cycloaddition (denoted as click chemistry). The NHS-azide linker used (γ-azidobutyric acid oxysuccinimide ester) was diluted in DMSO to a final concentration of 20 mM. ELPs were dissolved in 1x PBS (8 g/L NaCl, 2 g/L KCl, 1.42 g/L Na2HPO4, 0.27 g/L K2HPO4, pH 6.8-7.0). The peptides were mixed with a 2-fold excess of NHS-azide and incubated for 12 h at room temperature. To remove residual NHS-azide, the sample was loaded into a 10 kDa dialysis cassette and stored at 4°C for 12 h. In the next step, the activated EF and dye were conjugated using the aforementioned click chemistry.
The alkyne-modified dye was mixed with activated EF at a molar ratio of 1:1 (dye:ELP). Afterwards, 1 mM TBTA (tris(benzyltriazolylmethyl)amine), 10 mM TCEP (tris(2-carboxyethyl)phosphine hydrochloride), and 10 mM CuSO4 were added; all given concentrations are final concentrations. The mixture was incubated at 4°C for 12 h. Remaining linker was removed by dialysis in a 10 kDa dialysis cassette at 4°C for 12 h.
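The cell-free reaction composition given in the TX-TL section above (42% composite buffer, 25% DNA plus additives, 33% S30 cell extract, all v/v) lends itself to a small pipetting helper. The sketch below merely restates those proportions; the function name, the chosen example volume, and the rounding are our own and not part of the published protocol.

```python
def txtl_volumes(total_ul: float) -> dict:
    """Split a TX-TL reaction of the given total volume into its v/v components
    (42% composite buffer, 25% DNA plus additives, 33% S30 cell extract)."""
    fractions = {"composite buffer": 0.42, "DNA + additives": 0.25, "S30 extract": 0.33}
    return {name: round(total_ul * frac, 2) for name, frac in fractions.items()}

if __name__ == "__main__":
    # e.g., a 60 uL batch, matching the swelling-solution volume used per 100 mg of beads
    for name, vol in txtl_volumes(60.0).items():
        print(f"{name}: {vol} uL")
    # composite buffer: 25.2 uL
    # DNA + additives: 15.0 uL
    # S30 extract: 19.8 uL
```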
2018-09-22T13:27:56.864Z
2018-09-21T00:00:00.000
{ "year": 2018, "sha1": "7b4499ec341fa5e720f185f1fdc5ebe5c7cf3540", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41467-018-06379-8.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "7b4499ec341fa5e720f185f1fdc5ebe5c7cf3540", "s2fieldsofstudy": [ "Materials Science", "Biology", "Engineering" ], "extfieldsofstudy": [ "Medicine", "Chemistry" ] }
264662009
pes2o/s2orc
v3-fos-license
Partial nephrectomy of a horseshoe kidney associated with renal cell carcinoma and ureteral stone: A clinical case report

Key Clinical Message: Although anatomical and vascular abnormalities of the horseshoe kidney might be challenging, complete preoperative imaging evaluations and accurate organ-sparing surgical planning can lead to much lower complications.

Abstract: Horseshoe kidney (HK) is one of the most common renal fusion anomalies. Renal carcinoids are rarely reported in HK patients. Here, we describe a rare case of advanced right renal cell carcinoma (RCC) along with a proximal left ureter stone in a 41-year-old man who presented with a complaint of turbid urine. Early blood tests revealed a blood urea nitrogen of 44 mg/dL and a serum creatinine of 1.35 mg/dL. The urine analysis showed microscopic hematuria (6-8 RBCs) and a few calcium oxalate crystals. The imaging evaluations revealed an HK anomaly with a solid mass on the right side and a 4 mm stone in the proximal left ureter. The findings suggested RCC, which was confirmed by histopathology examination. Consequently, the patient was scheduled for organ-preserving open surgery of the right kidney tumor with concomitant left ureterolithotomy. The 16-month follow-up showed no urological complications, metastasis, or tumor proliferation. Although the anatomical and vascular abnormalities of HK might be challenging, organ-sparing surgical treatment should be considered in feasibly resectable tumors. Complete preoperative imaging evaluations to identify the characteristics of HK, as well as accurate surgical planning, can lead to much lower complications.

KEYWORDS: horseshoe kidney, organ-sparing surgery, partial nephrectomy, renal cell carcinoma, ureteral stone

Obstructions of the urinary tract commonly occur due to anatomical variation of the HK, causing hydronephrosis [3]. Renal carcinoids have rarely been reported in HK patients. However, the risk of developing Wilms tumor in HK has been described as double [4]. Renal infection, obstruction, and stone disease in patients with HK may lead to renal pelvic tumors [5]. Currently, surgery, including partial nephrectomy, is the standard treatment for renal neoplasms, but it is challenging for tumors developed in HK because of its complex anatomy and abnormal vascularization. Here, we describe a rare case of advanced right renal cell carcinoma (RCC) occurring in HK simultaneously with a proximal left ureteral stone.
CASE PRESENTATION

A 41-year-old male was admitted to Mostafa Khomeini Hospital, Tehran, Iran, with a complaint of turbid urine. The patient did not complain of pain. His medical history included migraine, bipolar disorder, and hyperlipidemia. On examination, the patient was alert and oriented, and his vital signs were normal. Initial blood tests revealed a blood urea nitrogen (BUN) of 44 mg/dL and a serum creatinine (SCr) of 1.35 mg/dL. Urine analysis revealed yellow, semi-clear urine with a trace of blood, 0-2 WBCs, 6-8 RBCs, and a few calcium oxalate crystals. The urine culture was negative.

Furthermore, kidney ultrasonography was performed, which showed a well-defined, round, hetero-echoic solid mass (52 × 51 mm) with extrarenal extension in the middle pole of the right kidney (103 mm), with a pressure effect on the renal sinus. The left kidney (120 mm) appeared to be malrotated and located caudal to the normal anatomical site (Figure 1). Subsequently, abdominal and pelvic spiral computed tomography (CT) and abdominal magnetic resonance imaging (MRI) demonstrated an exophytic, well-defined 50 × 53 × 48 mm mass lesion in the middle pole of the right kidney, with a mass effect on the pyelocaliceal system, adjacent to the right liver lobe, suggesting renal cell carcinoma (RCC), along with a 4 mm stone in the proximal left ureter at a distance of 21 mm from the left ureteropelvic junction (UPJ) of the HK (Figures 2 and 3). The type of kidney stone was calcium oxalate. Until then, the patient had been unaware of his kidney abnormalities and had not observed any symptoms of urological disorders.

Based on these findings, the patient was a candidate for organ-preserving open surgery for the right kidney tumor with concomitant left ureterolithotomy. Initially, renal angiography was performed to reveal an aberrant pattern of vascularization, which showed two arteries in the right kidney. We initiated our surgical approach via a midline abdominal incision. The ascending colon and ileum mesenterium were then dissected, which was essential for complete exposure of the HK. The lower pole of the right kidney was occupied by a large tumor. Due to the large size of the tumor, renal ischemia was established by clamping the three arteries of the right kidney using the bulldog clamp technique. The tumor was completely excised using the polar resection technique, which provided good visualization of the tissue layers. Then, the visible vessels and the collecting system were sutured (figure-of-eight stitch and running sutures, respectively). After unclamping, the renal defect was assessed for bleeding, and selective single-layer renorrhaphy was done. The ischemia time was 11 min. Fortunately, most of the right renal bulk tissue was successfully preserved. The upper left ureter was then explored and released. Once the ureteral stone was identified, ureterolithotomy was performed. A double-J ureteral stent was inserted and the ureter was sutured.

Pathological examination of the tumor (Figure 4) confirmed Grade 2 RCC with a clear cell subtype, 6.0 cm in diameter and adherent to 0.5 cm of surrounding tissue. The surgery lasted 215 min, with an estimated blood loss of 200 mL. The patient was discharged after 7 days of hospitalization. During the 16-month follow-up period, the patient reported no urological symptoms. Serial imaging studies after surgery showed no metastasis or growth of another tumor. His SCr level was stable at approximately 2 mg/dL.
DISCUSSION

HK is one of the most common fusion anomalies of the kidneys, accounting for 0.15%-0.25% of renal developmental disorders in the general population [6]. Almost 30% of all HK are asymptomatic and are discovered incidentally by physical examination or imaging modalities such as CT and ultrasound during the management of other diseases [7]. Renal tumors of HK have been identified in almost 12% of patients with these anomalies [8-11]. Previous studies have reported flank pain, hematuria, and abdominal masses as the most common symptoms in patients with kidney tumors [8,10]. Similarly, our patient had a hitherto unknown HK with microscopic hematuria and developed RCC confined to the kidney.

CT is the gold standard for identifying renal tumors. However, in the case of a solitary kidney, bilateral tumors, or in patients with HK, CT angiography or MRI is also required because of the unusual anatomy [12]. Abnormal vascularization is a common finding in patients with fusion anomalies of the kidney. Graves [13] described six blood supply patterns to HK. However, nonconformity with Graves' classification has been identified, and arteries may be asymmetrically distributed [14]. Clodney et al., in a study of 209 cases with fusion anomalies, reported that the average numbers of arteries of the left and right kidneys were 1.9 and 2.4, respectively, and only 5% of patients with HK had one artery for each kidney. In this study, CT angiography revealed that accessory blood vessels (two arteries) supplied the right hemi-kidney of the HK; however, we encountered three vessels when the HK was exposed.

Surgery is the best treatment for HK cancers in resectable cases. Since Vermooten presented the clinical and technical aspects of organ-sparing tumor resection in 1950 [15], many surgeons have developed techniques based on various surgical experiences. Three main factors resulting from the nature of the HK defect should be considered when deciding to use an organ-sparing surgical method: the anatomical position of the kidney, variations in renal vascularization, and the different structures of the isthmus [16]. Partial nephrectomy for the treatment of HK tumors has been demonstrated to be a practical method. Considering surgical complications, including perioperative blood loss, urine leakage, peri- and postoperative renal function, and survival rate, partial nephrectomy has led to an excellent oncological and functional prognosis. Furthermore, in several case series, the average estimated blood loss was approximately 400 mL [11]. Our patient underwent a successful organ-preserving partial nephrectomy for a tumor with a diameter of 6 cm, and the estimated blood loss was 200 mL.

Partial nephrectomy of HK can be challenging due to the complex anatomy and abnormal vasculature. In this case, although the angiography showed two supplying arteries, we encountered three vessels when the HK was exposed. The uncertainty of the collecting system, the stone in the opposite ureter, and the lack of previous experience due to the rarity of such conditions were other challenges of our patient's surgery.
CONCLUSION

The current case report presents a case of incidentally diagnosed RCC simultaneous with a proximal left ureteral stone in a patient with HK who underwent complex organ-preserving open surgery (partial nephrectomy), resulting in a good prognosis during a 16-month follow-up. This case demonstrates that although anatomical and vascular abnormalities of HK might be challenging, organ-sparing surgical treatment should be considered in feasibly resectable tumors. Thorough preoperative imaging evaluations to identify the characteristics of HK, as well as accurate surgical planning, can lead to fewer complications.

FIGURE 1: Kidney ultrasonography; left renal malrotation with pelvicalyceal fullness due to a stone in the proximal left ureter.
FIGURE 2: Magnetic resonance imaging revealed a horseshoe kidney with an exophytic mass lesion in the right middle pole, adjacent to the right liver lobe.
FIGURE 3: Abdominal and pelvic spiral CT scan with and without contrast; (A) a well-defined, hypodense, round-shaped mass with heterogeneous enhancement (53 × 51 mm) in the middle pole of the right kidney, in near approximation with segment VI of the liver and with a pressure effect on the renal sinus; (B) horseshoe kidney and a 4 mm stone in the proximal left ureter, 21 mm from the left UPJ.
FIGURE 4: Microscopic examination of the tumor (Van Gieson stain) showed Grade 2 renal cell carcinoma with a clear cell subtype.
2023-11-01T05:05:25.589Z
2023-10-29T00:00:00.000
{ "year": 2023, "sha1": "a1b14a7439bb504e75023904e5d0f5bfe65e402d", "oa_license": "CCBYNCND", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "a1b14a7439bb504e75023904e5d0f5bfe65e402d", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
259782211
pes2o/s2orc
v3-fos-license
Factors associated with deep sternal wound infection after open-heart surgery in a Danish registry

Objective: To conduct a comprehensive multivariate analysis of variables associated with deep sternal wound infection after open-heart surgery via median sternotomy.

Method: A retrospective cohort of all adult patients who underwent open-heart surgery at Odense University Hospital between 01-01-2000 and 31-12-2020 was extracted from the West Danish Heart Registry. Data were analyzed using maximum likelihood logistic regression.

Results: A total of 15,424 patients underwent open-heart surgery and 244 developed a deep sternal wound infection, equivalent to 1.58%. After data review, 11,182 entries were included in the final analysis, of which 189 developed DSWI, equivalent to 1.69%. Multivariate analysis found the following variables to be associated with the development of deep sternal wound infection (odds ratios and 95% confidence intervals in parentheses): Known arrhythmia (1.70; 1.16–2.44), Left Ventricular Ejection Fraction (1.66; 1.02–2.58), Body Mass Index 25–30 (1.66; 1.12–2.52), Body Mass Index 30–35 (2.35; 1.50–3.71), Body Mass Index 35–40 (3.61; 2.01–6.33), Body Mass Index 40+ (3.70; 1.03–10.20), Age 60–69 (1.64; 1.04–2.67), Age 70–79 (1.95; 1.23–3.19), Chronic Obstructive Pulmonary Disease (1.77; 1.21–2.54), Reoperation (1.63; 1.06–2.45), Blood transfusion in surgery (1.09; 1.01–1.17), Blood transfusion in intensive care unit (1.03; 1.01–1.06), Known peripheral atherosclerosis (1.82; 1.25–2.61), Current smoking (1.69; 1.20–2.35), Duration of intubation (1.33; 1.12–1.57).

Conclusion: Increased risk of deep sternal wound infection after open-heart surgery is a multifactorial problem; while some variables are unchangeable, others are not. Focus should be on optimizing the condition of the patient prior to surgery (e.g., weight loss and smoking cessation), but also on factors surrounding the patient (e.g., preventing blood loss and minimizing intubation time).
Several studies have sought to identify risk factors related to the development of DSWI after OHS. Amongst the identified risk factors are chronic obstructive pulmonary disease (COPD), diabetes, body mass index (BMI), reoperation for bleeding, and procedure type [2,8,10,11,13]. Other factors found to be associated with increased risk of DSWI are prolonged cardiopulmonary bypass (CPB) time, previous heart surgery, hypertension, male gender, mechanical ventilation over 72 h, age, transfusions, NYHA class, and peripheral vascular disease [1,9,12,14]. The aim of this study was to investigate risk factors associated with the development of DSWI in a cohort of patients who underwent OHS at Odense University Hospital, Denmark (OUH). In previous studies, the number of included patients has often been small, resulting in a low number of patients with DSWI and leading to limitations in the statistical analysis. Furthermore, studies have often concentrated on one procedure type, for example coronary artery bypass grafting (CABG) or aortic valve replacement (AVR). In this study we have included a historically large cohort, comprising all the surgeries performed at our department, and within this a reasonably large number of patients with DSWI. This allows us to conduct a more comprehensive multivariate analysis, which prior studies lack.

Abbreviations: DSWI, deep sternal wound infection; OHS, open-heart surgery; VAC, vacuum-assisted closure; COPD, chronic obstructive pulmonary disease; BMI, body mass index; CPB, cardiopulmonary bypass; CABG, coronary artery bypass grafting; AVR, aortic valve replacement; WDHR, West Danish Heart Registry; CI, confidence interval; ECC, extracorporeal circulation; OR, odds ratio.

Study design

This study was performed as a retrospective cohort study with data extracted from the West Danish Heart Registry (WDHR). This registry holds data on every patient who has undergone open-heart surgery in the Region of Western Denmark. The extracted data include all adult patients (≥18 years) who underwent OHS at the Department of Cardiothoracic and Vascular Surgery at OUH between 01-01-2000 and 31-12-2020. Surgery types primarily comprised CABG and AVR, but also included procedures such as mitral- and tricuspid-valve surgeries, double and triple procedures, surgery for aortic dissections, tumor removals, and traumatic cardiac injuries, amongst others. To be defined as having a DSWI, patients had to have undergone re-sternotomy with verified bacterial growth in cultures from the mediastinum. To make sure all patients with a DSWI were registered correctly, the database was checked against the ICD-10 codes for DSWI in the electronic patient record, and when discrepancies were detected the electronic record was reviewed to confirm or rule out a DSWI. Variables for the statistical analysis of risk factors were chosen based on a review of previous literature.
Statistical analysis

Data were analyzed using maximum likelihood logistic regression, and prior to analysis the data were reviewed with regard to missing and outlying values. Errors in the database documentation were evaluated, and if an obvious correction was not possible, the entry was removed from the dataset. Univariate analysis was performed for each variable before selecting variables for multivariate analysis. Multicollinearity was tested when appropriate, linearity of the numerical variables was tested, and log transformation was applied when deemed necessary. Furthermore, data were tested for outliers, and a multivariate regression without the outliers was performed to evaluate the effect of the outliers on the statistical outcome. Data are presented as odds ratios and 95% confidence intervals (CI). A p-value of 0.05 was regarded as significant. The statistical analysis was performed using CRAN R version 4.0.3.

Results

A total of 15,424 adult patients underwent OHS at OUH between 01-01-2000 and 31-12-2020. DSWI developed postoperatively in 244 patients, equivalent to 1.58%. To calculate incidence development over time, patients from the total cohort were subdivided into 4 groups with 5-year intervals according to surgery date; 319 patients were excluded due to missing data. Calculations showed a significant increase in the incidence of DSWI over time for the groups 2010-2014 (p = 0.005) and 2015-2020 (p = 0.012); see Table 1. After exclusion of missing values and unreliable entries, a total of 11,182 entries were included in the final analysis, of which 189 developed DSWI, equivalent to 1.69%.

The patient characteristics and the results of the univariate analysis can be found in Table 2. A total of 17 variables from the univariate analysis showed an association with the development of DSWI. The variables from the univariate analysis can be divided into three categories, beginning with baseline factors: treatment for high cholesterol (p = 0.0064), treatment for high blood pressure (p = 0. From the univariate analysis, variables with p-values <0.05 were chosen for the multivariate analysis. Results of the analysis can be seen in Table 3. In the multivariate analysis, 11 variables showed an association with the development of DSWI: known arrhythmia (OR = 1.70; CI: 1.68-2.44), left ventricular ejection fraction (OR = 1.66; CI: 1.02-2.58), body mass index 25-30 (OR = 1.66; CI: 1.12-2.52). A test for outliers revealed 41 outliers with possible influence on the analysis. These were removed from the dataset and a new multivariate analysis was performed on a dataset with a total of 11,141 patients and 148 cases of DSWI. This analysis was performed to evaluate the influence of outliers on the results of the multivariate analysis. When comparing the results from the two multivariate analyses, we found that diabetes was now a significant risk factor after removal of outliers, with OR = 1.55 and CI: 1.05-2.26; all other variables remained unchanged in terms of significance after the removal of outliers.

To evaluate the difference in survival between patients with and without DSWI, a Kaplan-Meier plot was produced, which can be seen in Fig. 1. A log-rank test was performed, and it showed a significant difference (p = 1 × 10⁻¹⁵) in survival between the two groups. Next, patients were divided into two new groups according to the type of surgery they underwent. The two groups comprised patients who underwent either a surgery that included a CABG or a non-CABG surgery. The incidence of DSWI in the two groups can be found in Table 4.
Table 4. Incidence of infection in patients that have undergone a procedure including a CABG and for patients with a non-CABG procedure.

Multivariate analyses were performed on the two groups with a reduced number of variables. The results can be found in Tables 5 and 6.

Discussion

The aim of this study was to identify risk factors associated with the development of DSWI after OHS. Previous studies have sought to do the same, with varying results. The relatively low incidence of DSWI makes it hard to accumulate enough cases to conduct advanced analyses using an observational study design. Furthermore, the nature of DSWI makes it impossible to conduct randomized controlled studies. Studies are therefore mainly based on retrospective cohorts. This specific study was conducted with data drawn from the West Danish Heart Registry, which introduces a potentially large margin for error, since data are mostly reported manually, resulting in a risk of errors when entering information into the database. In our study this resulted in a large number of entries having to be removed before analysis because of missing entries or unreliable data. However, after the careful removal of irregular entries we consider our data to be of generally high quality, as a previous study evaluated the WDHR and concluded that the registry was valid with an overall low error rate and would therefore be well suited for use in epidemiological studies [16].

The majority of the current literature identifies a few significant risk factors in multivariate analyses, from a large pool of variables, such as diabetes, COPD, obesity, prolonged intubation time, and CABG. In our study, we managed to confirm many of the previously found risk factors in our multivariate analysis of the full cohort, with a total of 11 risk factors associated with the development of DSWI after OHS. Our more extensive findings are probably due to the larger number of patients with DSWI in a historically large cohort, making the statistical power of the study greater than that of previous studies on the subject. Dividing our cohort into subgroups according to procedure type significantly reduces our statistical power. Although we still obtain statistically significant results for some of the known risk factors, the decreasing number of patients in the two groups affects the analysis substantially and widens the confidence intervals for the significant results; these results should therefore be interpreted with care.

Although no new associations were found, our study was able to confirm almost all the previously detected factors in a single heart-surgery population. Only a few previously detected risk factors were not significant in our study, such as male gender and extracorporeal circulation (ECC) time. This could be because these are indirect markers of the patient's risk, since male gender is known to be associated with diabetes, COPD, and arteriosclerosis, while increased ECC time may be an indirect marker of more severe disease in the patient population, such as multiprocedural surgeries and endocarditis. Therefore, it is not surprising that ECC time was a significant risk factor for DSWI in the univariate analysis but not in the multivariate analysis. Further, prolonged ECC time often results in increased transfusion requirements [17,18], which were found to be a significant risk factor for DSWI in both the univariate and the multivariate analysis. ECC time may therefore present as a confounder in the univariate analysis.
Transfusions, both during and after surgery, were associated with an increased risk of DSWI, and since this is a factor that, to some extent, is amenable, it should be a focus point when dealing with patients undergoing OHS. We found an increased risk of DSWI for the age groups 60-69 and 70-79 years, but not for the 80+ group. This discrepancy is most likely related to the low number of patients in the 80+ group, and we expect that this group would have shown an increased risk of DSWI if the sample size had been bigger.

Surprisingly, we initially found no association between diabetes and DSWI in the multivariate analysis, even though diabetes is a well-known risk factor for surgical infections [19]. A limitation of this study is the rather large number of outliers that affected the final analysis. A second multivariate analysis was therefore performed to investigate the influence of the outliers on the result. After removal of the outliers, the multivariate analysis revealed an association between diabetes and DSWI, while all other factors remained unchanged with regard to significance. Therefore, we find that the lack of significance is probably due to suppression by the outliers, and we recommend relying on the conclusions of previous studies that diabetes is a risk factor.

Regarding survival after a DSWI, as expected we see a lower survival rate in patients who developed a DSWI. The significant difference in survival develops during the first few months after the primary surgery and then levels out to a point where the two curves develop a parallel trend. Thus, our results suggest that if a patient survives the initial infection, its complications, and its treatment, the survival rate is no different from that of patients without a DSWI.

When considering the relative impact of the individual risk factors on the development of DSWI, it is difficult to identify prime movers, due to relatively similar effect sizes and overlapping CIs. However, when looking at baseline characteristics, BMI shows the highest point estimate (OR for BMI > 40 compared to reference = 3.6993), but other factors such as smoking, peripheral atherosclerosis, COPD, and diabetes are also important in the development of DSWI. Although a postoperative factor such as reoperation is statistically indistinguishable from BMI due to overlapping CIs, practical considerations concerning reoperation may make BMI the superior choice for intervention.

Conclusion

The overall conclusion of the study is that the development of DSWI is multifactorial, and pinpointing specific variables as the main cause is difficult. This study can point out areas that need consideration in the quest to eliminate DSWI. Though some baseline variables such as diabetes, COPD, and age cannot be changed, others such as BMI, arrhythmia, and smoking are factors that can be improved or prevented. Reductions in BMI could be a point of focus when trying to reduce the risk of infection.

Fig. 1. Kaplan-Meier plot of the difference in survival between patients with and without a DSWI.
Table 1. Incidence of DSWI in the total cohort.
Table 2. Univariate analyses of risk factors for deep sternal wound infection. (Footnotes: b, left ventricular ejection fraction prior to surgery; c, body mass index; d, chronic obstructive pulmonary disease; e, extracorporeal circulation; f, acute reoperation caused by bleeding, tamponade, or ischemia; g, coronary artery bypass grafting.)
Table 3. Multivariate analysis of risk factors for deep sternal wound infection.
Footnotes to Table 3: a, the dominating arrhythmia type was atrial fibrillation, but other types of arrhythmia also occurred; b, left ventricular ejection fraction prior to surgery; c, body mass index; d, chronic obstructive pulmonary disease; e, extracorporeal circulation; f, acute reoperation caused by bleeding, tamponade, or ischemia; g, coronary artery bypass grafting; h, log transformed.
Table 5. Multivariate analysis of risk factors for deep sternal wound infection for patients undergoing a procedure that includes CABG. (Footnotes: c, body mass index; d, chronic obstructive pulmonary disease; e, acute reoperation caused by bleeding, tamponade, or ischemia; f, log transformed.)
Table 6. Multivariate analysis of risk factors for deep sternal wound infection for patients undergoing a non-CABG procedure. (Footnotes: a, the dominating arrhythmia type was atrial fibrillation, but other types of arrhythmia also occurred; b, left ventricular ejection fraction prior to surgery; c, body mass index; d, chronic obstructive pulmonary disease; e, acute reoperation caused by bleeding, tamponade, or ischemia; f, log transformed.)
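The odds ratios and 95% confidence intervals reported in Tables 3, 5, and 6 follow directly from the fitted logistic-regression coefficients (OR = exp(beta), CI = exp(beta ± 1.96·SE)). The study itself was analyzed in R 4.0.3; the sketch below is an illustrative Python equivalent on synthetic data, and the variable names and simulated dataset are assumptions rather than registry data.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000

# Synthetic stand-ins for two of the studied risk factors (not registry data).
bmi_over_30 = rng.binomial(1, 0.25, n)
current_smoker = rng.binomial(1, 0.30, n)

# Simulate DSWI with a low baseline risk and positive effects for both factors.
logit = -4.3 + 0.85 * bmi_over_30 + 0.55 * current_smoker
dswi = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(np.column_stack([bmi_over_30, current_smoker]))
fit = sm.Logit(dswi, X).fit(disp=False)  # maximum-likelihood logistic regression

for name, beta, se in zip(["intercept", "BMI > 30", "current smoker"],
                          fit.params, fit.bse):
    or_, lo, hi = np.exp(beta), np.exp(beta - 1.96 * se), np.exp(beta + 1.96 * se)
    print(f"{name}: OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

A univariate screen followed by a multivariate model, as in the paper, simply repeats this fit first with one predictor at a time and then with all retained predictors together.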
2023-07-12T16:42:37.504Z
2023-06-06T00:00:00.000
{ "year": 2023, "sha1": "7e1509e3f0bb368dcfab2b9a3e14573d773abff3", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1016/j.ahjo.2023.100307", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "66845338be45727c69f8bfa0bb937e702af6b207", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
243040409
pes2o/s2orc
v3-fos-license
Biological Profile of Gout in Obese and Non-Obese Subjects in Yaoundé, Cameroon

INTRODUCTION

Gout is the leading cause of arthritis in adults [1]. It is characterized by an ever-increasing frequency and high morbidity [2]. Its global prevalence varies between 1% and 4%, while in Cameroon a study carried out in 2007 found a prevalence of 5.9% [3]. Gout is thus a frequent pathology in our context, responsible for joint pain but also for premature death [2]. This significant morbidity is explained by a strong association between hyperuricemia and cardiovascular and renal diseases, whose relationship increases with the duration of hyperuricemia. Hyperuricemia is responsible for activation of the renin-angiotensin-aldosterone system, decrease in nitric oxide activity, proliferation of vascular myocytes, and insulin resistance. Through these mechanisms, hyperuricemia and gout promote the occurrence of cardiometabolic pathologies, thus worsening the prognosis of these patients [4,5].

Obesity is the most important comorbidity of gout compared to hypertension, diabetes, and dyslipidemia [6,7]. In Cameroon, it is present in 78% to 84% of subjects suffering from gout [8,9]. Obesity is associated with gout in a dose-dependent manner, as it is responsible for an exponential increase in uricemia in gouty patients, in particular through the secretion of pro-inflammatory cytokines [10,11]. These two mechanisms suggest that the biological profile of gout could vary depending on the body mass index of patients, which would explain the higher morbidity of obese gouty patients. The metabolic alterations of gout depend on hyperuricemia and could be amplified by the accentuation of the inflammatory process caused by obesity [12]. Therefore, this study aimed to compare the biological profile of gout in obese and non-obese patients.

Study Framework

We conducted a cross-sectional study from January to April 2019 in six hospitals in the city of Yaoundé: Yaoundé Central Hospital, Hôpital de la Caisse Nationale de Prévoyances Sociales de Yaoundé (HCNPS), Biyem-Assi District Hospital, Yaoundé University Hospital, and Efoulan District Hospital.

Participants

Included were all patients over the age of 18 who met the diagnostic criteria for gout according to ACR/EULAR 2015 or who were being followed for documented chronic gout [13]. Patients with intensive physical activity, on corticosteroids or hormonal contraceptives, or with muscle wasting, ascites, or edematous syndrome were not included. All patients with another etiology of arthritis associated with gout were excluded. The participants were then divided into two groups. Group 1: obese patients, defined as BMI ≥ 30 kg/m² or waist circumference > 102 cm for men or > 88 cm for women [14].

Ethical Considerations

The study was approved by the institutional research ethics committee of the Faculty of Medicine and Biomedical Sciences of the University of Yaoundé 1.
It was conducted in strict accordance with the principles of the Declaration of Helsinki.

Data Collection

Participants were recruited from internal medicine consultations and hospitalization. Data were collected by the principal investigator using a data collection sheet comprising the following information: socio-demographic and anthropometric characteristics and the history of gout. Then a venous sample was taken to measure C-reactive protein, uric acid, creatinine, total cholesterol, HDL cholesterol, triglycerides, and LDL cholesterol, the latter calculated by the Friedewald equation when the triglyceride concentration was less than 3.4 g/L.

Statistical Analysis

We built a database using Excel 2013 software and performed the analyses using SPSS software version 22. Data were reported as medians and interquartile ranges for continuous variables and as proportions for categorical variables. The Mann-Whitney test was used to compare medians between the groups. The significance threshold was set at 0.05.

General Characteristics of the Study Population

We included 101 subjects, including 57 non-obese with a BMI of 26.80 [24.14 -

DISCUSSION

We conducted a cross-sectional study to compare the biological profiles of gout in obese and non-obese patients. It appears that serum uric acid and serum creatinine levels are similar in the two groups. In contrast, obese patients had lower CRP and HDL cholesterol levels and higher total cholesterol and LDL cholesterol levels than non-obese patients.

The main biological marker of gout is uricemia. Its serum levels were similar in our study between the obese and the non-obese. These results are in contrast to those found in the literature showing a strong and positive correlation between body mass index and uricemia, making the latter one of the main risk factors for obesity [15,16]. Our observation suggests that obesity plays a less important role than other factors such as diet and genetics in our population. However, the size of our sample does not allow us to confirm this hypothesis. In addition, CRP, the inflammatory marker evaluated in this study, was higher in non-obese patients; obesity therefore seems to be a factor in controlling inflammation in gouty patients. This observation, similar to that made with uricemia, diverges from numerous studies which report that obesity, which itself causes systemic inflammation, aggravates the inflammation caused by deposits of uric acid crystals [17,18]. However, a study from Taiwan on recent changes in clinical manifestations and risk factors associated with gout showed that, despite an increase in body mass index after 1992, the frequency of episodes and the severity of inflammatory signs significantly decreased [19]. In addition, obesity is associated with hyperuricemia, and the latter can constitute a protective factor against the oxidative stress found during gout. In fact, obesity is associated with a predominant production of superoxide dismutase (SOD), an antioxidant enzyme which seems to have a negative correlation with IL-6 and TNF-α, thus reducing the chronic inflammatory process which leads to the formation of tophi during gout [20,21]. Obesity could therefore currently have a protective effect on the severity of gout, contrary to what has been observed in the past. The lipid profile assessment found that obese patients had higher levels of total cholesterol and LDL cholesterol and lower levels of triglycerides and HDL cholesterol.
These results, although not significant, agree with those found in the literature and are attributable to the presence of other cardiovascular risk factors, such as physical inactivity and an unbalanced diet, in obese patients. We found a predominance of the metabolic syndrome in obese subjects. This predominance of metabolic syndrome in obese subjects was also noted by Moulin et al. in Angola in 2016, who found metabolic syndrome in an average of 24.5% of subjects [22]. In Cameroon, Doualla et al. found a higher prevalence of the metabolic syndrome, 54.6%, which can be explained by the size of their sample, which was larger than ours [9]. The predominance of metabolic syndrome in obese subjects that we observed can be explained by the fact that obesity is a major determinant of metabolic syndrome, as described by many studies, including that of Sherling et al. in 2017 in the United States [22].

CONCLUSION

Our results suggest that obesity appears to reduce the inflammatory response during gout and that it has no effect on creatinine or uric acid levels.
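The LDL-cholesterol values compared between the two groups were derived with the Friedewald equation mentioned in the Methods, which was applied only when triglycerides were below 3.4 g/L. The following is a minimal sketch of that calculation, assuming all concentrations are expressed in g/L (so the conventional TG/5 term applies); the example values are invented for illustration and are not data from the study.

```python
from typing import Optional

def friedewald_ldl(total_chol: float, hdl: float, triglycerides: float) -> Optional[float]:
    """Estimate LDL cholesterol (g/L) from total cholesterol, HDL, and triglycerides (all g/L).
    Returns None when triglycerides >= 3.4 g/L, where the formula is not applied."""
    if triglycerides >= 3.4:
        return None
    return total_chol - hdl - triglycerides / 5.0

if __name__ == "__main__":
    # Invented example values (g/L), not measurements from the study.
    print(friedewald_ldl(total_chol=2.10, hdl=0.45, triglycerides=1.20))  # 1.41
```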
2020-06-18T09:09:55.769Z
2020-01-01T00:00:00.000
{ "year": 2020, "sha1": "3b9de29cc7ca54ee1301541edade6dc0e10f6c6a", "oa_license": null, "oa_url": "https://doi.org/10.20431/2455-7153.0601005", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "c65d2685d7f3fe7cda5df38cc8c8485c4b648c71", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [] }
238859424
pes2o/s2orc
v3-fos-license
Intramuscular Bleeding Triggered by Disseminated Intravascular Coagulation with Enhanced Fibrinolysis in a Patient with Prostate Cancer

Disseminated intravascular coagulation (DIC) is the most frequent coagulation disorder associated with solid tumors, including prostate cancer. We herein report a 76-year-old man who suffered from intramuscular bleeding of the right gluteus maximus. Laboratory data showed a pattern of DIC with enhanced fibrinolysis, and a general examination led to the diagnosis of advanced prostate cancer with multiple bone metastases. To our knowledge, this is the first report describing intramuscular bleeding as an initial manifestation of prostate cancer with DIC with enhanced fibrinolysis.

Introduction

Disseminated intravascular coagulation (DIC) is a serious complication in the presence of underlying disease, including solid tumors. There are three types of DIC, which are categorized according to the degree of fibrinolytic activation: DIC with suppressed fibrinolysis, DIC with enhanced fibrinolysis, and an intermediate pathogenesis between the two entities, DIC with balanced fibrinolysis (1). The appropriate diagnosis of DIC and successful treatment of the underlying malignancy can result in an improvement of the survival rate. We herein report a case in which intramuscular bleeding of the buttocks was triggered by DIC with enhanced fibrinolysis in a patient with progressive prostate cancer.

Case Report

A 76-year-old man was admitted to our hospital due to continuous right buttock pain that had persisted for 2 weeks. Extensive subcutaneous bleeding was observed from his hip to his back. Computed tomography (CT) revealed an intramuscular mass of the right gluteus maximus. Contrast-enhanced CT on admission showed irregular enhancement of the prostate, and his prostate-specific antigen (PSA) level was 1,827 ng/mL. Therefore, a prostate biopsy was performed, and he was diagnosed with adenocarcinoma (Gleason score: 9 points). Bone scintigraphy revealed widespread bone metastases. Based on these findings, he was initially administered both tranexamic acid (TA; 1,000 mg/day for 3 days) and carbazochrome sodium sulfonate hydrate (50 mg/day for 3 days) against the severe intramuscular bleeding. He was then administered a transfusion of red blood cells (4 units in total) as well as fresh-frozen plasma (480 mL in total), since his hemoglobin level had decreased to 5.7 g/dL. Subsequently, his DIC score began to improve, and the subcutaneous bleeding gradually disappeared. He also started to take bicalutamide for the treatment of prostate cancer, and his PSA level had decreased to 1,003 ng/mL at 1 month after the onset. In the meantime, the DIC remained controllable (Figure).

Discussion

DIC is a major hemostatic complication in patients with solid tumors (2). In a previous study that analyzed a total of 1,117 patients with solid tumors, 76 were diagnosed with DIC (6.8%), 50 (4.5%) showed bleeding episodes, and 31 (2.8%) showed clotting episodes (2). The distribution of hemorrhage included oozing from venipuncture or vascular access sites, skin and mucosal surface bleeding, hematuria, gastrointestinal bleeding, hemoptysis, and central nervous system bleeding (2). In particular, hyperfibrinolysis has been reported in patients with acute promyelocytic leukemia (APL), abdominal aortic aneurysm, and urothelial or prostate cancer (1), leading to DIC with marked fibrinolytic activation corresponding to the activation of coagulation (1,3).
Regarding prostate cancer, DIC is the most frequent coagulopathy, with an incidence ranging from 13% to 30%; however, most patients are subclinical, and only 0.4-1.65% of these patients are symptomatic (4,5). In the present case, we observed intramuscular bleeding, which is well known to be a main symptom of hemophilia but is very rare in solid tumors with DIC. Our investigation failed to identify any other such cases in the English literature. These findings underscore the importance of considering the existence of a serious illness, such as a solid tumor, when a patient is diagnosed with DIC. Furthermore, it is necessary to take the presence of urothelial or prostate cancer into consideration when diagnosing DIC with enhanced fibrinolysis in a patient with bleeding symptoms, with reference to the remarkable elevation of both TAT and PIC as well as the elevation of FDP (more dominant than D-dimer) and/or the marked decrease of fibrinogen (1).

The underlying mechanisms of DIC in patients with solid cancer are not fully understood. In general, tumor cells express various procoagulant molecules, including tissue factors, which initiate the extrinsic coagulation pathway and lead to the activation of the hemostatic system (6-9). In addition, urinary-type plasminogen activator (u-PA) produced by tumor tissues, or the secondary depletion of fibrinolytic inhibitors such as α2-plasmin inhibitor, is thought to play an important role in the coagulation and fibrinolytic pathways, which explains the severe bleeding symptoms (10). In fact, u-PA levels have been reported to be elevated in patients with prostate cancer (11), and increased u-PA can generate plasmin, leading to hypofibrinogenemia, which leads to a bleeding tendency (12). In addition, Annexin II is well known to be a cell-surface receptor for both plasminogen and its activator, t-PA, catalyzing the conversion of plasminogen to plasmin. Furthermore, these molecules are reportedly expressed by some tumor cells (13) as well as APL cells (14) and are correlated with the onset of DIC with enhanced fibrinolysis.

The basis of DIC treatment is the removal of the underlying causative factor (6). Castration therapy is indispensable for hormone-sensitive prostate cancer, as in this case; however, it is probably not sufficient to stop life-threatening episodes of bleeding in an emergency situation. TA is a well-established anti-fibrinolytic agent for bleeding in DIC with enhanced fibrinolysis (1,15,16). It has previously been reported that TA was effective for relieving the sudden onset of massive epistaxis and remarkable hematuria in metastatic prostate cancer with DIC (16). In the present case, TA was administered to control bleeding and resulted in an improvement of the DIC score. However, TA may cause thromboembolic events in patients with enhanced fibrinolysis. Indeed, it is contraindicated in patients with APL when all-trans retinoic acid is used (1). The present patient was using TA alone, without the simultaneous receipt of anticoagulant drugs, as the emergency doctor who first examined the patient was not familiar with the treatment of DIC. Although, fortunately, no thromboembolic events were observed, TA should be used together with anticoagulant drugs, such as nafamostat mesilate or heparin (1,17).
2021-10-15T06:16:47.768Z
2021-10-12T00:00:00.000
{ "year": 2021, "sha1": "42bd8301fc1882f1142688a4c7916f3f23b6f35b", "oa_license": "CCBYNCND", "oa_url": "https://www.jstage.jst.go.jp/article/internalmedicine/advpub/0/advpub_7697-21/_pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "1247861c77171494e1753ac4c4561a521cc1916e", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
168731452
pes2o/s2orc
v3-fos-license
Knowledge Network of Toyota: Creation, Diffusion, and Standardization of Knowledge : Knowledge is a source of firm’s competitiveness and is created, diffused, and standardized within a company’s knowledge network. The knowledge network of Toyota Motor Corporation in Japan comprises multiple automotive plants, the Operation Management Consulting Division (OMCD), and the Global Production Center (GPC) as nodes on that network. Knowledge is created on a manufacturing plant floor and diffused between multiple automotive plants through a direct interacting network without standardization. The OMCD diffuses both standardized and unstandardized knowledge. The GPC’s important function is knowledge standardization. In conclusion, Toyota’s domestic knowledge network maintains a balance between the diversification and standardization of knowledge created on the production floor through a mix of nodes at various standardization levels. Introduction Many studies have emphasized the idea that knowledge of a firm is a source of competitive advantage (Chini, 2004;Grant, 1996;Kogut & Zander, 1992). To acquire knowledge that can become a source of competitiveness, there is a method of creating knowledge within an organization and gaining knowledge from external sources. Nonaka and Takeuchi (1995) stated that knowledge creation is a dynamic circulation process of tacit knowledge and explicit knowledge through four conversion modes. Cohen and Levinthal (1990) used the concept of absorptive capacity to explain ways in which companies evaluate, interpret, and apply external knowledge. While transferring knowledge to the required location and using it is important, knowledge transfer has cost issues, namely its stickiness (Szulanski, 1996(Szulanski, , 2000von Hippel, 1994). Knowledge network theories have developed to explain the process of gaining, diffusing, and transferring knowledge. Within the field of knowledge network theory, research has focused on the strength or weakness of ties (Hansen, 1999(Hansen, , 2002, network scope (Ernst & Kim, 2002), center of network (Ernst & Kim, 2002;Tsai, 2001), and the directionality of transfer (Chini, 2004;Kim, 2015). However, few detailed studies have explored how knowledge within networks is created, diffused, and standardized. This study comprehensively analyzes the case of Toyota Motor Corporation (Toyota) to investigate the creation, diffusion, and standardization of knowledge within a corporation. In particular, the analysis is on the function of domestic plants, the Operation Management Consulting Division (OMCD), and the Global Production Center (GPC) as nodes on Toyota's domestic knowledge network. Creation of Knowledge Diversity Toyota has four domestic vehicle production plants: Motomachi, Takaoka, Tsutsumi, and Tahara. These four plants have slightly different production systems based on conditions such as car model, the ratio of exports, the number of production options, supplier relationships, and plant location constraints. One characteristic of the Toyota Production System (TPS) is kaizen activities that occur on the production floor. If the results of kaizen activities have positive effect for productivity, it becomes new work standards. These activities are conducted on the production floors of each plant and each operate under different conditions. These activities are reflected in the work standards of each plant, with the production system of each developing over time. This is how diversity of the TPS is initiated. 
Kaizen ideas are generated by individual workers or by small groups called QC circles. When a problem is identified on the production floor, a production floor leader confirms the problem as it occurs, investigating the circumstances surrounding the problem in detail and determining the cause. The leader then encourages the worker to generate ideas that will resolve the problem. These ideas are then compiled, and a solution is submitted (Monden, 2006). When submitting the solution, the production floor leader primarily makes a specific determination regarding the various factors involved in work standards. The production floor leader determines the cycle time required to produce each unit of product as well as the order of manufacturing operations. Production floor leaders in each plant are responsible for creating and revising work standards; this bottom-up organizational culture is characteristic of Toyota. It is one of the sources of Toyota's knowledge diversity. In labor-intensive processes such as final assembly, there is no set of universal engineering principles as can be found in other processes; thus, a chief leader (CL) and group leader (GL) have been influential in developing not only kaizen on a process but also the design of the process (Fujimoto, 1997). In this manner, unique production systems were created on the production floors of Toyota in response to unique conditions on each production floor. New system concepts are, at times, implemented when building new plants or refurbishing old plants. Knowledge with a high level of diversification is thus created within the same TPS (Higuma & Suh, 2017). Knowledge Diffusion by the OMCD The OMCD is a division that belongs to the Production Management Division. The TPS is not a single system per se but takes on various forms depending on the plant and personnel. The OMCD was created in 1970 with TPS specialists to systemize and diffuse TPS both within and outside Toyota (Dyer & Nobeoka, 2000;Fujimoto, 1997;Higuma & Suh, 2017;Satake, 1998). Knowledge Diffusion by the GPC The GPC sets the most fundamental skills required in automaking and develops tools to teach these skills with clarity to workers on the production floor. Elemental work refers to the individual jobs that comprise standard work, and fundamental skills are the skills needed to perform elemental work. There are minor differences in fundamental skills between plants, and the GPC surveys fundamental skills to set the most efficient best practices for production floors. The GPC creates standard visual manuals used when teaching best practices for these fundamental skills. Visual manuals explain fundamental skills in the form of videos, computer-based video, animation, and other visuals. Using video and animation, they are able to explain the instinctive aspects. Workers first gain an understanding of these basic skills using the video manuals and subsequently use training facilities for the development of fundamental skills. In other words, in the past, many aspects were taught tacitly in on-the-job training on the shop floor, but fundamental skills have now been standardized by the GPC's visual manuals. Trainers are responsible for training at each plant. The GPC has a master trainer who trains these trainers and sends them to each plant. Two Cases of Knowledge Diffusion Through two case studies, this section explains knowledge diffusion within Toyota's knowledge network (Higuma & Suh, 2017).
Case 2: The GPC gathers best practices from each plant to create best practices in visual manuals. Doing so, they notice disparities between plants for even the same work. For example, the method for holding a paint gun in the painting process may differ by plant. Many such differences appear to be trivial at first, for example, the number of fingers used to hold the spray gun or where the thumb is placed. The GPC analyzes them to find the various merits and demerits and to determine the most efficient best practices with the most merits and the fewest demerits (Suh, 2012). Knowledge Network of Toyota The domestic knowledge network of Toyota repeatedly creates diversity of knowledge and standardizes it. The knowledge system of TPS is not uniform even among Toyota's domestic production sites, with each plant having individuality with regard to detailed operations. Plants directly gain knowledge from each other through liaison meetings although these are not strong ties, as already explained in the case of the yamazumi table. Kono (2016) argued that weak ties promote the acquisition of new, non-redundant knowledge and that it contributes to diversification of knowledge. Furthermore, standardized knowledge is transferred by both the OMCD and the GPC. In other words, Toyota's domestic knowledge network has nodes for standardization at various levels and balances the contradicting goals of diversification and standardization of knowledge created on production floors (gemba). Fujimoto (1997) noted how diversification in Toyota's production system was converged and established. For its certain core values and philosophy, Toyota is exceedingly homogeneous, though on other levels and domains, particularly when the system is changing, many internal discrepancies have been observed. The diversification created within Toyota is converged through a convergence mechanism. Fujimoto (2012) explains this as the evolution of TPS. Toyota's domestic knowledge network has been extended overseas. Dyer and Nobeoka (2000) analyzed how Toyota's learning network was created and evolved in the US. This can be seen as a case of domestic knowledge network expanding overseas. Moreover, studies on Toyota's global knowledge network (Suh, 2012(Suh, , 2015(Suh, , 2016 have clarified that the role of domestic plants, the OMCD, and the GPC in knowledge transfer to overseas is essential. Conclusion This paper surveyed the case of Toyota in detail to show a knowledge network that creates, diffuses, and standardizes knowledge that is the source of corporate competitiveness. Within Toyota's knowledge network, production floors in each plant take the role of knowledge creation. Knowledge created on these production floors is spread throughout the network via three routes: diffusion through direct interaction between plants; diffusion through the OMCD; and diffusion through the GPC. The OMCD and GPC diffuse knowledge through standardization. Knowledge created in Toyota's plants has the same direction as a part of TPS although they create diversification of knowledge through their differing management environments. The Toyota knowledge network is a source of competitiveness through its balance of knowledge diversification and knowledge standardization.
2019-05-30T13:17:50.494Z
2017-04-15T00:00:00.000
{ "year": 2017, "sha1": "2f7d6d227c1d39ad2b086fa7b4f8093a5edd92c9", "oa_license": "CCBY", "oa_url": "https://www.jstage.jst.go.jp/article/abas/16/2/16_0170126a/_pdf", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "20158860856a48e3b5cfcb33e6257cfef9260888", "s2fieldsofstudy": [ "Business" ], "extfieldsofstudy": [ "Business" ] }
118476274
pes2o/s2orc
v3-fos-license
Determining Reflectance Spectra of Surfaces and Clouds on Exoplanets Planned missions will spatially resolve temperate terrestrial planets from their host star. Although reflected light from such a planet encodes information about its surface, it has not been shown how to establish surface characteristics of a planet without assuming known surfaces to begin with. We present a re-analysis of disk-integrated, time-resolved, multiband photometry of Earth obtained by the Deep Impact spacecraft as part of the EPOXI Mission of Opportunity. We extract reflectance spectra of clouds, ocean and land without a priori knowledge of the numbers or colors of these surfaces. We show that the inverse problem of extracting surface spectra from such data is a novel and extreme instance of spectral unmixing, a well-studied problem in remote sensing. Principal component analysis is used to determine an appropriate number of model surfaces with which to interpret the data. Shrink-wrapping a simplex to the color excursions of the planet yields a conservative estimate of the planet's endmember spectra. The resulting surface maps are unphysical, however, requiring negative or larger-than-unity surface coverage at certain locations. Our "rotational unmixing" supersedes the endmember analysis by simultaneously solving for the surface spectra and their geographical distributions on the planet, under the assumption of diffuse reflection and known viewing geometry. We use a Markov Chain Monte Carlo to determine best-fit parameters and their uncertainties. The resulting albedo spectra are similar to clouds, ocean and land seen through a Rayleigh-scattering atmosphere. This study suggests that future direct-imaging efforts could identify and map unknown surfaces and clouds on exoplanets. INTRODUCTION Next-generation space telescopes will spatially resolve terrestrial exoplanets from their host star (Traub & Oppenheimer 2011). The rotational color variations of a "pale blue dot" might betray the presence of landforms rotating in and out of view (Ford et al. 2001). Ground truth is critical to our understanding of planetary climate because surface liquid water is the definition of habitability (Abe 1993;Kasting et al. 1993) and long-term habitability may require exposed continents. Previous research has shown that rotational color variations of a planet can yield rotation rate (Pallé et al. 2008;Oakley & Cash 2009), coarse planetary maps under the assumption of known surfaces (Fujii et al. 2011), single-band maps (Fujii & Kawahara 2012), and maps of eigencolors (Cowan et al. 2009, 2011). We cannot assume, however, that terrestrial exoplanets will have the same surface types as Earth, and there is no obvious relation between single-band albedo or eigencolors and surface features, so neither of these "exocartographic" strategies is entirely satisfactory. Three methods have been proposed for using orbital phase variations to detect surface liquid water on exoplanets, but they have challenges of their own: thermal inertia (thermal infrared; Gaidos & Williams 2004), glint (visible-near infrared; Williams & Gaidos 2008;Robinson et al. 2010), and polarization (visible-near infrared; Zugger et al. 2010, 2011).
Significantly, these methods leverage variability on the planet's orbital, rather than rotational, timescale. While this makes the signal-to-noise requirements less stringent than exocartography, none of these methods would be able to distinguish between partial vs. complete ocean coverage. In this Letter, we show how rotational color variations can be used to retrieve reflectance spectra of a planet's dominant surfaces. Our method does not require a priori knowledge of the number of surfaces, let alone their colors or geographical distributions. ANALYSIS We re-analyze disk-integrated Earth observations obtained by the Deep Impact spacecraft as part of the EPOXI Mission of Opportunity (Livengood et al. 2011). Our data consist of hourly measurements of apparent albedo, A*, in seven wavebands spanning a single rotation of Earth (the Earth5 time-series). The first and last observations occur exactly 24 hours apart but have slightly different apparent albedos because of day-to-day changes in cloud cover (Goode et al. 2001;Pallé et al. 2004;Cowan et al. 2009, 2011). Specifically, given the orbital phase near quadrature, the final 6 observations probe the same regions as the early part of the time-series. We "correct" the final 6 observations so that the lightcurves may be fit with a static model. At each wavelength, we subtract the last datum from the first to estimate the effect of changing cloud cover. We then correct the final 6 observations according to this change: the 19th observation is unaffected, and the correction increases linearly from the 20th to the 25th observation. After the application of this correction, the last observation is identical to the first, by construction. In order to avoid giving this point too much weight, we remove the final datum, resulting in 24 observations populating a 7-dimensional color space (Figure 1). Principal Component Analysis We center the data by subtracting the planet's time-averaged color and then perform principal component analysis (PCA) using the covariance of the data (Cowan et al. 2009, 2011). The first two principal components have nearly equal power and account for 99% of the color variance in our data (90% of the variability). There is no one-to-one correspondence between surface types and principal components, however: the latter are normalized and orthogonal and therefore unphysical (Cowan et al. 2011). Furthermore, the number of dominant principal components does not even correspond to the minimum number of surfaces for reflected-light surface retrieval; rather, n_surf ≥ n_dom + 1 (Cowan et al. 2011). Nevertheless, it is useful to reduce the dimensionality of the data by projecting them onto the principal component plane (PCP). In the present case, the PCP is defined by the first two principal components (Figure 2); in general it is a hyper-plane. Simplex Shrink-Wrapping Insofar as the color excursions of Earth lie in the PCP, then so must its dominant surfaces. Since three points uniquely define a plane, Occam's razor dictates that we try a three-surface solution. If one assumes that different surfaces and regions combine linearly to determine the planet's overall colors, then the three pure surface spectra must define a triangle that encloses the data. Estimating broadband endmember spectra from our locus of data is isomorphic to unsupervised spectral unmixing of multi-spectral satellite images (Sabol et al. 1992;Keshava 2003).
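Because the centering-and-PCA stage described above is purely algebraic, it can help to see it spelled out. The following is a minimal NumPy sketch of that stage only; the 24 x 7 apparent-albedo matrix is a random stand-in rather than the actual EPOXI photometry, and the variable names are illustrative assumptions, not the authors' code.

import numpy as np

# Illustrative stand-in for the 24 x 7 apparent-albedo matrix (n_t observations, n_lambda bands);
# the real analysis would use the EPOXI Earth5 photometry instead.
rng = np.random.default_rng(0)
A_star = 0.30 + 0.05 * rng.standard_normal((24, 7))

# Center the lightcurves by subtracting the planet's time-averaged color.
A_centered = A_star - A_star.mean(axis=0)

# PCA via eigendecomposition of the band-band covariance matrix.
cov = np.cov(A_centered, rowvar=False)            # shape (n_lambda, n_lambda)
eigvals, eigvecs = np.linalg.eigh(cov)            # ascending eigenvalues
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Fraction of the color variance carried by each principal component.
variance_fraction = eigvals / eigvals.sum()

# Project the centered colors onto the plane spanned by the first two PCs (the "PCP").
pcp_coords = A_centered @ eigvecs[:, :2]          # shape (n_t, 2)

print(variance_fraction[:3])
print(pcp_coords.shape)

With the real data, the first two entries of variance_fraction would together approach 0.99, and pcp_coords would be the 24-point locus that the shrink-wrapping step, described next, operates on.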
In traditional remote sensing, the data consist of colors at each pixel in a spatially resolved image; in the present exoplanet application, the data consist of colors at each point in time, where there is a nontrivial convolution tying the time-series to spatial inhomogeneities on the surface of the planet. We adopt a remote sensing algorithm: simplex shrink-wrapping (Fuhrmann 1999). Recall that a simplex in N-dimensional space is a polyhedron with N + 1 vertices: on a line the simplex is a line segment, on a plane the simplex is a triangle, in three dimensions the simplex is a tetrahedron, etc. Simplex shrink-wrapping entails finding the simplex with the smallest volume (length in 1D; area in 2D) that encloses all the data. Since only the most extreme colors constrain the shrink-wrapping, it is expedient to first determine the convex hull of the locus in the PCP (the dotted black line in the left panel of Figure 2). This step is not computationally necessary for our small (24 × 7) data matrix, but may be necessary for longer observations and/or higher spectral resolution. The apparent albedo can be modeled as the matrix product of the apparent covering fractions, f*, and the surface spectra, S: A*[n_t, n_λ] = f*[n_t, n_surf] × S[n_surf, n_λ], where n_t is the number of observations and n_λ is the number of wavebands. Insofar as the observations do not lie perfectly in the principal component plane, this model cannot perfectly match the data. Instead, the object is to match the projected albedo in the PCP. Simplex shrink-wrapping amounts to solving for f* and S given A* and the constraint of minimum volume. The vertices of our shrink-wrapped triangle are denoted by triangles in Figure 2. [Figure 2 caption — Left: the colored triangles show the endmember spectra obtained by shrink-wrapping a simplex (a triangle, in this case) onto the convex hull; the green line shows the color excursions of our best-fit model obtained by rotational unmixing. Right: the black points are the EPOXI data after zooming out; the colored triangles are the same endmember spectra as on the left; the colored squares with error bars are the surface spectra retrieved from the data by rotational unmixing; the colored asterisks show predicted spectra of ocean, land and cloud combined with Rayleigh scattering (Robinson et al. 2011;Cowan et al. 2011).] The shrink-wrap endmembers are more likely to be unique if the data locus is non-spherical. If one knew nothing about the planet's orbital plane or viewing geometry, endmembers would be the most conservative estimate of surface spectra. The apparent covering fraction is related to the planetary covering fraction, f, by the convolution f*[n_t, n_surf] = W[n_t, n_slice] × f[n_slice, n_surf], where n_slice is the number of longitudinal slices on the planet. The weight, W, quantifies the visibility and illumination of a given longitudinal slice on the planet at a given point in time. It is computed given the known sub-observer and sub-stellar positions at the time of each observation (Cowan et al. 2011). Although we adopt a spatial resolution of n_slice = 9, the W and f arrays are oversampled by a factor of 100 in order to reduce numerical integration error. Since W is a known low-pass filter, acceptable f* (between zero and unity, and summing to unity at each point in time) may require unphysical f. Indeed, if we adopt the shrink-wrap endmember spectra as bona fide surface spectra, the deconvolution, f* → f, produces surface maps with covering fractions greater than unity, or negative. Since there is no possible surface map of the endmembers that matches the data, the endmembers must not correspond to the colors of actual surfaces.
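A compact way to make the enclosing-triangle constraint concrete is sketched below. This is not the Fuhrmann (1999) algorithm itself; it is a toy, assumption-laden stand-in that (i) takes the convex hull of the projected colors and (ii) stochastically shrinks an enclosing triangle, accepting only perturbations that keep every point inside while reducing the area. The synthetic locus and all names are invented for illustration.

import numpy as np
from scipy.spatial import ConvexHull

# Synthetic stand-in for the projected colors from the PCA step (24 points in the PCP).
rng = np.random.default_rng(1)
pcp_coords = 0.02 * rng.standard_normal((24, 2))

# Only the most extreme colors constrain the shrink-wrap, so work with the convex hull.
hull_points = pcp_coords[ConvexHull(pcp_coords).vertices]

def triangle_area(v):
    # Area of the triangle whose vertices are the rows of v (shape (3, 2)).
    return 0.5 * abs((v[1, 0] - v[0, 0]) * (v[2, 1] - v[0, 1])
                     - (v[1, 1] - v[0, 1]) * (v[2, 0] - v[0, 0]))

def encloses(v, pts, tol=1e-12):
    # Barycentric test: every point must lie inside (or on) the triangle with vertices v.
    T = np.column_stack((v[1] - v[0], v[2] - v[0]))
    lam = np.linalg.solve(T, (pts - v[0]).T).T
    return bool(np.all(lam >= -tol) and np.all(lam.sum(axis=1) <= 1 + tol))

# Start from a triangle guaranteed to enclose the locus, then shrink it stochastically,
# keeping only trial triangles that still enclose all hull points and have smaller area.
span = np.ptp(hull_points, axis=0).max()
center = hull_points.mean(axis=0)
vertices = center + span * np.array([[0.0, 6.0], [-6.0, -4.0], [6.0, -4.0]])
area = triangle_area(vertices)
for _ in range(20000):
    trial = center + 0.98 * (vertices - center) + 0.02 * span * rng.standard_normal((3, 2))
    if encloses(trial, hull_points):
        trial_area = triangle_area(trial)
        if trial_area < area:
            vertices, area = trial, trial_area

# The surviving vertices play the role of the endmember colors in the PCP.
print(area, vertices)

On the real locus, the minimum-area triangle found by a proper minimum-volume solver corresponds to the endmember spectra; as argued above, deconvolving the resulting apparent covering fractions through W yields unphysical maps, which is what motivates the rotational unmixing step discussed next.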
In other words, the shrink-wrap produces albedo spectra that are too conservative, and geographies that are too liberal. Rotational Unmixing The essential difference between the present application and traditional spectral unmixing is that while the surface coverage, and hence colors, of a pixel may be arbitrarily different from that of its neighbor, the colors of a disk-integrated planet cannot change arbitrarily fast. The third and final step of our analysis, after PCA and shrink-wrapping, is to solve for surface spectra, S, and geographies, f , given A * and W : A * [n t , n λ ] = W [n t , n slice ] × f [n slice , n surf ] × S[n surf , n λ ]. This amounts to fitting the path of the planet in color-space by simultaneously varying the three surface spectra and their planetary geography. While simplex shrink-wrapping only uses the convex hull of the data, rotational unmixing makes use of all the observations. We use a 3,000,000-step Markov Chain Monte Carlo (MCMC) to determine the best parameters and their uncertainties (Ford et al. 2001). We initialize the surface spectra at the endmember spectra from the shrinkwrapping to speed up convergence, and begin with a uniform surface map with equal amounts of the three surfaces. Before evaluating a step in the MCMC, we ensure that the surface spectra have albedos between zero and unity at all wavelengths 3 , and that the covering fractions are between zero and unity at every location on the planet. The MCMC sequentially takes seven 1D steps along each principal color for each surface spectrum (n surf × n λ = 21 steps), and a 3D step in map space for each longitudinal slice of the planet, normalized such that the sum of covering fractions is unity at each slice (n slice = 9 steps). Step sizes are tuned to get an acceptance ratio of ∼ 0.25 (Gelman 2003). We adopt Gaussian uncertainties of 0.0033 on the apparent albedo measurements, which produce a best-fit reduced χ 2 of unity (essentially the same 1% uncertainties as in Cowan et al. 2009). The surface spectra are not constrained to lie in the PCP, but the best fit solutions are close to the plane, as expected from PCA. Quantitatively, the surface spectra have 5-dimensional Euclidean distances from the principal component plane of 0.008, 0.007, and 0.009, for an aspect ratio of ∼ 2% (cf. right panel of Figure 2). Following Cowan et al. (2009) and as dictated by the orbital phase, we use 9 longitudinal slices on the planet (40 • spatial resolution, with fixed longitudes for the slices), convolving the surface map with the instantaneous illumination and visibility functions assuming diffuse (Lambertian) reflection and known viewing geometry (rotation rate, obliquity, orbital and seasonal phases). Since the covering fractions for the three surfaces must sum to unity at each location on the planet, the model has (n surf − 1)n slice + n surf n λ = 39 free parameters. The green lines in Figure 1 show the lightcurves for our best-fit model. The green line in the left panel of Figure 2 shows the centered color variations of the model projected on the PCP. The gray, red, and blue colored squares with error bars in Figure 2 denote our best-fit surface spectra projected onto the PCP. The three spectra correspond roughly to clouds, land, and ocean; their deprojected albedo spectra are shown in Figure 3. In order to gauge the accuracy of the retrieval, we take empirical reflectance spectra of clouds, ocean and land from Robinson et al. 
In order to gauge the accuracy of the retrieval, we take empirical reflectance spectra of clouds, ocean and land from Robinson et al. (2011) and combine them with an empirical model of disk-integrated Rayleigh scattering (Cowan et al. 2011). The predicted surface spectra are the colored asterisks in Figure 2, and the colored dashed lines in Figure 3. The qualitative agreement between the retrieved and predicted surface spectra is remarkable when one considers that the surface spectra are moving targets. Ocean, land and clouds on Earth are hardly uniform, and the path-length of Rayleigh scattering is a function of location on the disk of the planet. We have incorporated Rayleigh scattering in a simple way and have made no attempt to account for water vapor absorption at 950 nm or any other atmospheric effects (Shaw & Burke 2003). The longitudinal maps of the three model surfaces (Figure 4) are not a good match to a cloud-free map of Earth. This is not surprising given our reliance on data from a single day, and the prevalence of obscuring clouds. The large land content retrieved for the middle of the Pacific Ocean is spurious and may be due to our correction for changeable cloud cover (the observations begin and end with the spacecraft over the Pacific). The right panel of Figure 2 hints at why the rotational map of Principal Component 1 was remarkably faithful to the actual surface geography of Earth (Figure 10 of Cowan et al. 2009): the land spectrum is pure PC1. As noted in this paper, however, there is no reason to believe that "maps" of principal components should accurately reflect physical conditions on the planet. DISCUSSION The surface retrieval scheme presented here should be generally applicable, provided sufficient Euclidean distance between surfaces in color space (surfaces should look different), and large-amplitude variations in apparent covering fractions (large, longitudinally distinct geographical features). There are a number of well-justified assumptions made in the present work that should eventually be relaxed, however: 1) We assume Lambertian reflection for the purposes of convolving the planetary map of covering fractions with the visibility and illumination to obtain apparent covering fractions. This assumption should be correct for the present data since they were obtained with Earth at slightly gibbous phase (Williams & Gaidos 2008;Robinson et al. 2010). In order to properly interpret data at crescent phases, however, it may be necessary to relax this assumption. 2) We adopt the known viewing geometry of Earth at the time of the observations. Any observations able to measure the rotational color variations of an exoplanet would be more than adequate for estimating orbital phase and rotation period. The planetary obliquity and seasonal phase will not be known a priori, but simulations of full-orbit multiband lightcurves indicate that these geometrical parameters should be retrievable (Kawahara & Fujii 2010, 2011;Fujii & Kawahara 2012). 3) We assume a static surface map for the planet, but in order to properly interpret weeks to months of data it would be imperative to allow the cloud cover to evolve in time. Clouds completely obscure the underlying surface in our linear model. For example, 33% cloud coverage at a given location means that one-third of that pixel is covered in completely impenetrable clouds, while the remaining 67% is perfectly cloud-free. Given this parametrization, changes in cloud cover necessarily involve changes in ocean and land coverage, and a significant increase in model complexity, all else being equal. 4) We assume that clouds combine linearly with actual surfaces.
Optically thin clouds, however, obscure underlying surfaces while contributing to the reflectance spectrum, a non-linear effect (Sabol et al. 1992). Numerical experiments indicate that the dimensionality of the apparent albedo locus is still n_surf − 1 for realistic, non-linear, radiative transfer (Cowan et al. 2011). Our experiments with a simple non-linear three-surface toy model further indicate that the apparent albedo locus is amenable to simplex shrink-wrapping, though none of the endmembers may correspond to a pure cloud spectrum (Figure 5). In addition to being physically motivated, a non-linear cloud model would allow surface covering fractions to remain fixed despite changing cloud cover, enabling accurate surface maps with data spanning many planetary rotations. A major challenge with this approach would be determining, a priori, which surfaces should combine convexly and which should not. 5) Although in the present case the surface spectra were allowed to leave the principal component plane, it may be computationally necessary to constrain them to the PCP for larger n_λ and/or n_surf. 6) We adopt three surfaces because the power spectrum of the albedo variations is dominated by the first two principal components. [Figure 5 caption (fragment): ...and cloud (0.6, 0.6); the red circles show the colors of individual pixels, while green circles show the planet's apparent albedo path over a single planetary rotation; the covering fractions of land and ocean must sum to unity for every pixel, but for clouds a simple parameterization of the non-linear behavior of atmospheric scattering is used, in terms of the cloud and surface albedos A_cloud and A_surf (Cowan et al. 2011); note that the pixel colors do not all lie within the triangle defined by the surface spectra, but are still amenable to endmember analysis.] It is possible, however, that Earth has four or more surfaces that all happen to lie very close to the PCP. These surfaces might only betray themselves at higher spectral resolutions. Putting aside that pathological case, it is plausible that some terrestrial exoplanets will have more than three major surface/cloud types, leading to a higher-dimensional locus in color space. This would make the convex-hull and shrink-wrapping more computationally intensive, but existing algorithms are routinely applied to higher dimensional data (Keshava 2003;Shaw & Burke 2003). It is likely that the morphology of color variations in a higher-dimensional color space will still provide enough leverage to identify surface spectra, as was the case in the present study. CONCLUSIONS Our three-step surface-retrieval scheme consists of 1) performing principal component analysis on the multispectral reflectance matrix and projecting the data onto the principal component plane, 2) shrink-wrapping a simplex onto the projected data, and 3) relaxing the simplex vertices in order to match the time-variations in disk-integrated color. Although we developed the method for directly-imaged terrestrial planets, there is no reason this method could not be used to determine the colors of clouds on directly-imaged gas giants, or the colors of albedo markings on unresolved Solar System bodies.
2013-01-31T21:00:01.000Z
2013-01-31T00:00:00.000
{ "year": 2013, "sha1": "f56833ce8a5c8012f758ac403f24889d18e2eb56", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1302.0006", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "f56833ce8a5c8012f758ac403f24889d18e2eb56", "s2fieldsofstudy": [ "Physics", "Environmental Science" ], "extfieldsofstudy": [ "Physics" ] }
210944829
pes2o/s2orc
v3-fos-license
Inhibiting extracellular vesicles formation and release: a review of EV inhibitors ABSTRACT It is now becoming well established that vesicles are released from a broad range of cell types and are involved in cell-to-cell communication, both in physiological and pathological conditions. Once outside the cell, these vesicles are termed extracellular vesicles (EVs). The cellular origin (cell type), subcellular origin (through the endosomal pathway or pinched from the cell membrane) and content (what proteins, glycoproteins, lipids, nucleic acids, metabolites) are transported by the EVs, and their size, all seem to be contributing factors to their overall heterogeneity. Efforts are being invested into attempting to block the release of subpopulations of EVs or, indeed, all EVs. Some such studies are focussed on investigating EV inhibitors as research tools; others are interested in the longerterm potential of using such inhibitors in pathological conditions such as cancer. This review, intended to be of relevance to both researchers already well established in the EV field and newcomers to this field, provides an outline of the compounds that have been most extensively studied for this purpose, their proposed mechanisms of actions and the findings of these studies. Introduction Extracellular vesicles (EVs) are a heterogeneous group of vesicles released by cells under physiological and pathological conditions [1][2][3]. EVs are typically classified according to a range of properties including density, dimension and the biogenesis processes from which they originate. Two subclasses of EVs with different biogenesis pathways have been extensively reported on over the past number of years. These are exosomes, deriving from multi-vesicular bodies (MVB), and microvesicles (MVs), which bud directly from the cell membrane. Typically, exosomes are considered to be vesicles that fall within the size range of 30-150 nm, while MVs are often described as 100-1000 nm. However, it is evident that this classification is flawed, as these size ranges overlap and evidence suggests that vesicles in the size range of exosomes can also bud directly from the cell membrane rather than originate via the endosomal pathway. Despite these limitations of the terminology, to adhere to the doctrine that those EVs derived through the endosomal pathway are exosomes and those pinching from the cell membrane are MVs, until such times as the majority of the EV research community agrees on other guidelines for nomenclature, here we refer to exosomes and MVs, respectively, according to their modes of biogenesis. Exosomes biogenesis Biogenesis of the EV subpopulation that are later termed exosomes occurs inside endosomes. Early endosomes, during their maturation towards late endosomes or MVBs, start to accumulate intraluminal vesicles (ILV) through endosomal membrane invaginations. These MVBs subsequently fuse with lysosomes and thus promote ILV destruction or, alternatively, they can either fuse with the cell membrane, releasing ILVs in the extracellular space. These vesicles are then termed exosomes [4]. MVBs biogenesis is driven by at least two distinct mechanisms, i.e.. endosomal sorting complexes required for transport machinery (ESCRT)-dependent and ESCRT-independent pathways. Both pathways are schematised in Figure 1 which also includes drugs (manumycin A and GW4869, respectively) potentially able to block these mechanisms. 
ESCRT consists of a multi-molecular machinery constituted by four main complexes (ESCRT-0, -I, -II and -III) and some additional associated proteins, such as those of the AAA ATPase Vps4 complex. These complexes orchestrate the ILVs formation in a stepwise fashion. ESCRT-0 and -I, interacting with ubiquitylated transmembrane cargos, form microdomain in the endosomal membrane and recruit ESCRT- III. This last complex subsequently interacts with the ESCRT-II complex, which is responsible for the vesicles abscission and release. In fact, the latter complex is able to form spirals that promote the inward budding and fission of vesicles in order to form MVBs [5][6][7][8][9]. The ESCRT pathway can also be recruited by other exosomes biogenesis mechanisms. Involvement of the axis syndecan-syntenin-ALIX in exosomes generation has been reported which, through the recruitment of the ESCRT accessory protein ALIX, is able to enrol the ESCRT-III complex and orchestrate ILVs formation. Based on this model, heparanases trim the heparin sulphate side chains of syndecans, facilitating the formation of syndecan microdomains able to interact with syntenin and, thus, draft the ESCRT machinery using ALIX as an intermediate [10][11][12]. MVBs production can also be achieved by ESCRTindependent processes, based on the presence of lipid rafts inside endosomal membranes. These lipid rafts are present at high levels of cholesterol and, most importantly, sphingolipids that can be subjected to the activity of the neutral sphingomyelinase family. These sphingomyelinase enzymes convert sphingolipids to ceramide, a cone-shaped rigid lipid that coalesces to form ever-larger microdomains inducing budding and formation of ILVs [13][14][15][16][17]. When MVBs are formed, as mentioned above, they can either fuse with lysosomes or result in exosomes release via fusion with the cell membrane. Their transport within cells and towards the cell membrane is dependent on interaction with actin and microtubules of the cytoskeleton and is regulated by many proteins. Particularly important here is the GTPase family of Rab proteins, although their involvement seems to be somewhat cell specific. For example, both Rab27a and Rab27b have been shown to be involved in exosomes release from HeLa cells, i.e. Rab27b regulates the motility of MVBs towards the cell membrane and Rab27a promotes their fusion. However, Rab27 isoforms are not ubiquitous. Thus, other cell types seem to regulate exosomes release using different Rab proteins, such as Rab11 in K562 cells (bone marrow chronic ESCRT is a multi-molecular machinery composed of different proteins able to interact with ubiquitylated cargos and promote the formation of Intra-Luminal Vesicles (ILVs). The ESCRT pathways seems to be associated to RAS, a small GTPases able to orchestrate different cellular processes including exosomes production and cell proliferation, differentiation, adhesion and migration. In order to function properly, RAS must be linked to a lipid chain, achieved by the activity of specific enzymes called farnesyltransferases. Manumycin A is proposed to be able to inhibit these latter enzymes; thus, in turn, it is able to prevent RAS activation and function, preventing the formation of exosomes through the ESCRT pathway. The ESCRT-independent pathway is orchestrated by neutral sphingomyelinases, a family of enzymes able to convert sphingomyelin, present inside lipid raft, in ceramide. 
The resulting ceramides, interacting with each other, form large microdomains inducing the budding and the formation of ILVs into MVBs. GW4869 is a potent neutral sphingomyelinases inhibitor that preventing the formation of ILVs is able to block exosomes production. myelogenous leukaemia cells) and Rab35 in Oli-neu cells (oligodendroglial cell lines) [18][19][20]. SNARE (Soluble NSF Attachment Protein REceptor) protein family members are also involved in orchestrating the final exosomes release in response to membrane fusion. This is achieved through the formation of a SNARE complex composed of three-fourth coiled-coil helices proteins. SNARE activity is partially controlled by the phosphorylation state of these proteins, which influences their localisation and their interaction with SNARE partners, thus contributing to regulated exosomes release [21][22][23]. Microvesicles biogenesis MVs biogenesis is modulated by lipid composition and organisation of the peripheral cytoskeleton, both of which are able to alter membrane fluidity and deformability [24]. As for exosomes, mechanisms that generate or alter asymmetry of the cell membrane appear to be important during MVs generation. In fact, lipid rafts are fundamental to the budding of the cell membrane and cholesterol seems to play key roles, as cholesterol depletion reduces MVs release [14][15][16][17]25]. Interestingly, MVs present unique lipid characteristics, such as externalisation of phosphatidylserine (a lipid normally contained in the internal leaflet of the membrane) and an enrichment of the sphingolipid sphingomyelin [25,26]. Enzyme involved in transferring lipids from one leaflet of the cell membrane to the other, such as calpain, scramblases, flippases and floppases, has been reported as important during MVs formation [3,[27][28][29][30][31][32]. Of note, the activity of acidic sphingomyelinases and the conversion of sphingomyelin to ceramide are also involved in MVs production (in a process similar to the one performed by neutral sphingomyelinases during exosomes biogenesis). Specifically, ceramide, being a cone-shaped lipid, induces membrane curvature and triggers MV release [33][34][35][36][37][38]. In addition to lipids, other cytoskeletal elements and their regulators are required for MVs biogenesis. Due to their involvement in regulating actin reorganisation and mediating the cytoskeleton contractility, Rho family members are fundamental players in this process. RhoA activity has been reported to promote MVs release from cancer cells, through its activity on ROCK and ERK [39][40][41]. Additionally, RhoA, ARF6 and ARF1, phosphorylating the myosin light chain at the MV necks in order to enhance myosin contractility, favour the fission and the pinching-off of the resulting MVs [42][43][44][45]. Interestingly, a partial overlapping mechanism for MVs with exosomes biogenesis has been proposed. Specifically, it seems that the ESCRT machinery may be involved in the production of nano-sized vesicles that are enriched in cell surface proteins, reflecting their cell membrane origin. Thus, consistent with the complexity of the EV world, MVs release may also be supported by this mechanism, although further research is needed to investigate this [46][47][48]. Pharmacological inhibitors of EVs release EVs are involved in many pathophysiological conditions, and in some cases, their involvement relates to disease development and progression [1,3,49]. 
Thus, many pharmacological agents are being explored to investigate how best to inhibit EV release, both as research tools and potentially as therapeutic approaches, if means can be identified that selectively affect EVs involved in pathology but not those performing necessary physiological roles. However, the complexity and heterogeneity of EV biogenesis is challenging the development of a single drug that would be able to completely block EVs production. Nevertheless, identifying and blocking the predominant EV subpopulation(s) associated with a particular disease, if such exists and if feasible, could be clinically useful. In this review, the main drugs evaluated thus far to block EV release (listed in Table 1) are discussed, considering their proposed abilities to inhibit one EV population instead of another, such as GW4869 for exosomes or Y27632 for MVs. Thus, we have broadly categorised the compounds into two main groups according to their mechanisms-of-action, i.e. those that particularly affect EV trafficking (calpeptin, manumycin A and Y27632) and those that particularly affect lipid metabolism (pantethine, imipramine and GW4869). Inhibition of EV trafficking Compounds that are associated particularly with affecting EV trafficking include calpeptin, manumycin A and Y27632. Calpeptin Calpain comprises a family of calcium-dependent neutral, cytosolic cysteine proteases. The most studied isoforms appear to be the µ-and m-calpains, which are ubiquitously expressed in cells and activated with calcium concentrations of µM to mM. µ-and m-calpains are hetero-dimers consisting of a large catalytic subunit (80 kDa) and a regulatory subunit (28 kDa) containing the calcium-binding site. When calcium levels rise, calcium binds and promotes a conformational change, allowing self-cleavage and activation of the pro-enzyme. Once activated, calpains can engage with numerous protein substrates, including G-proteins and cytoskeletal proteins, that contribute to the many processes in which calpains are involved. These activities include apoptosis, cell proliferation, migration, tumour invasiveness and cancer progression, as calpains are de-regulated in cancer cells [50,51]. Due to their activity on cytoskeleton remodelling, these proteins can apparently promote MVs shedding. Conversely, their inhibition, using calpain inhibitors, is reported to reduce the amount of MVs released by cells. The most studied calpain inhibitor is calpeptin, a reversible semi-synthetic peptidomimetic aldehyde inhibitor developed by the chemical modification of a Streptomyces peptide aldehyde in which the N-terminal has been substituted with a hydrophobic cap group. Due to its inhibitory activity on calpain (see Figure 2), calpeptin has been quite extensively used as micro-vesiculation inhibitor. In resting platelets, compared to some other cell types, only a small percentage of MVs are produced, but this percentage increases upon platelet stimulation by agonists such as collagen, thrombin or the synthetic calcium ionophore A23187. Additionally, it has been demonstrated that prion protein fragment (106-126) (PrP) can activate platelets, favouring an increase of cytosolic calcium levels and leading to a concentrationdependent increased release of MVs. Specifically, PrP was shown to induce an 8-10 fold increase in calpain activity in the presence of 1 mM extracellular calcium, while pre-incubation of platelets with 80 µM calpeptin prevented calpain activation and MVs release [52,53]. Prior to these studies, Fox et al. 
[54] demonstrated the involvement of calpain in the shedding of pro-coagulantcontaining MVs released from activated platelets. Specifically, using a low concentration of calpeptin (10-20 µg/mL), calpain-mediated hydrolysis of the actinbinding protein was prevented, leading to reduced shedding of MVs. Pre-treatment of platelets with calpeptin substantially inhibited MVs formation (70% reduction), even after platelets activation with collagen, thrombin or a combination of these two agonists. However, increasing calpeptin concentration up to 200 µg/mL did not have any additional effect. In a subsequent experiment, Yano et al. [55] showed the calpeptin activity to be concentration dependent (0-300 µM) and that the reduction of MVs shedding is mostly due to the reduced activity of calpain on actin-binding proteins, as the inhibition of MVs formation was observed 10 min after platelets incubation with thrombin and collagen. (Continued ) Calpeptin effects have also been evaluated on other cell types. To investigate the involvement of MVs release during cell membrane repair, human embryonic kidney (HEK293) cells were treated with streptolysin-O (an agent known to damage cell membrane) and then treated with 60 µM calpeptin. Calpeptin substantially reduced MVs release and, consequently, streptolysin-O accumulated inside the cells and induced an increased lysis rate on calpeptin-treated cells compared to its effect on untreated cells [56]. MVs have been implicated in anti-cancer drugresistance [57]. Using a prostate cancer cell line model, PC3, Jorfi et al. [58] demonstrated that calpeptin's inhibition of MVs release allowed docetaxel and methotrexate to accumulate inside the cells, resulting in significantly reduced cell proliferation and more cell death than that observed in the absence of calpeptin treatment. Subsequent pre-clinical in vivo studies showed reduced PC3 tumour growth in nude mice following combined therapy of either docetaxel or methotrexate and calpeptin (injected, 1 h before the chemotherapeutics, directly in the tumour mass at a dose of 10 mg/Kg) when compared to docetaxel or methotrexate treatment alone. This phenomenon was proposed to be the result of decreased intra-tumoural vascularisation, decreased proliferation and enhanced apoptosis of PC3 cells, which was enhanced by calpeptin's inhibition of micro-vesiculation of the tumour cells. Brought together, these studies support the hypothesis that calpeptin can inhibit shedding of particles/ vesicles, described by the authors as microparticles in platelets studies and as microvesicles in the studies of HEK293, SH-SY5Y, and PC3 cell lines. However, as with many studies of EVs, there are challenges in metaanalysing the data and in drawing conclusions from the data as a whole, as different methods for EV separation were used for different studies. In fact, as summarised in Table 2, one study did not separate EVs from medium at all and instead evaluated treated platelets' supernatant (after low-speed centrifugation) by FACS, counting the particles by comparing FITC-CD41a labelled events with bead events. It would have built confidence if the particles/vesicles had been characterised by more than one method, but this was not always the case. It should be noted that about half of these studies preceded the 2014 position statement from the International Society for Extracellular Vesicles (ISEV) that described minimal experimental requirements for the definition of EVs and their functions [59]. 
Therefore, understandably there may not have been an awareness of what the ISEV community considers as necessary. Interestingly, unique amongst Manumycin A Ras is a family of small GTPases able to regulate the function of other proteins and involved in many different cellular processes including cell differentiation, cytoskeletal integrity, cell adhesion and migration, cell proliferation, exosomes release and apoptosis. Due to their range of functions, it is not surprising that their de-regulationwhich can occur in cancer cellscontributes to increased invasiveness, metastasise and reduced apoptosis. Structurally, Ras is composed of two domains: the G domain that binds the GTP and the C-terminal membrane-targeting domain. To function properly, the C-terminal domain must be modified by the addition of a lipid chain, a modification mediated by the activity of a farnesyltransferase enzyme. Inhibition of this modification by farnesyltransferases inhibition, in turn, inhibits Ras activity and so prevents its down-stream effects. One of the most commonly used farnesyltransferases inhibitors is manumycin A, a cell-permeable antibiotic extracted from Streptomyces parvulus that is described as acting as a selective and potent inhibitor of Ras farnesyltransferases. Due to the involvement of Ras during exosomes release, manumycin A has been investigated as an inhibitor of exosomes secretion. Using manumycin A, Hyun Jeong Oh et al. [60] reported the involvement of exosomes in neuronal differentiation. Specifically, this study showed that exosomes carrying miRNA-193a and released from F11 . Rho-associated protein kinases (ROCK) are serine-threonine kinases involved in cytoskeleton re-organisation. Once activated by multiple stimuli, ROCK regulate the shape and the movement of the cells, through the activation of Adducin or ERM (ezrin, radixin and moesin), but they can also interact with MLC (myosin light chain) and LIMK (LIM kinases, able to inactivate cofilin, fundamental for actin filament stabilisation) both involved in MVs release. Y27632 is a competitive inhibitor of both ROCK1 and ROCK2 and by blocking these proteins it can inhibit MVs release. cells (rat dorsal root ganglion cells), when in co-culture with undifferentiated F11 cells, stimulate neurogenesis of these previously undifferentiated recipient cells. This differentiation was inhibited by pre-treating the donor cells with 5 µM manumycin A and was linked to a reduction in the amount of released CD63-bearing exosomes. Datta et al. [61] reported the inhibitory activity of manumycin A specifically on the ESCRT-dependent exosomes biogenesis. Here, in prostate cancer cell lines (C4-2B, PC3 and 22Rv1) the activity of 250 nM manumycin A led to a reduction in exosomes production by 50-60% depending on the particular cell line; the effect was even more substantial when manumycin A was used in combination the nSMase inhibitor GW4869. This study also reported manumycin A's mechanisms-of-action as not only inhibitory to Ras and its farnesyltransferases, but also to hnRNP H1, a protein belonging to the heterogeneous nuclear ribonucleoproteins and involved in the regulation of pre-mRNA biogenesis, metabolism and transport. Apparently, the additional inhibition of this protein enhances the inhibitory effect on exosomes biogenesis. Furthermore, using manumycin A, Zhou et al. [62] reported a role of exosomes in the wound-healing process. 
Their study initially correlated exosomes secreted by scratched BUMPT cells with a negative effect on the wound-healing process. To further support these findings, they progressed to reducing exosomes release. The results showed an increased wounded area after BUMPT cells scratch upon 10 µM GW4869 or 1 µM manumycin A treatment, confirming the deleterious effect of exosomes on this process. Overall, as summarised in Table 2, these three studies reported in 2017 (so probably informed by the 2014 position statement from ISEV [59]) used the commonly selected ultracentrifugation methods for EV separation and, to varying degrees, characterised the resulting vesicles. Hyun Jeong Oh et al. [60] also investigated and confirmed the absence of the proposed negative marker, calnexin. However, as for the studies with calpeptin, not unexpectedly, not all the same characterising was done in all studies, which can make generalisation and overall conclusions challenging. Favourably, the limited number of studies of manumycin A reported thus far show relevance to progressing to further investigate the ability of this compound to inhibit EV release. Prior to such future studies, it would be necessary to do as Datta et al. [61] did and establish concentrations of the drug that are non-toxic to the cells of interest to give confidence that any reduction in EV release is not simply as a consequence of the cells being destroyed by manumycin A. Y27632 Rho-associated protein kinases (ROCK) are a family of serine-threonine kinases belonging to the PKA/PKG/ PKC family and involved in regulating the shape and the movement of cells, by acting on the cytoskeleton (Figure 3). This is relevant as re-organising the cytoskeleton and mediating cellular contractility through activity on actin filaments is important for MVs shedding. Y27632 is a commonly used cell-permeable, highly potent, competitive inhibitor of both ROCK1 and ROCK2 which is able to compete with ATP in binding the catalytic binding sites of these kinases. The involvement of ROCK kinases in MVs formation was investigating in triple-negative breast cancer (MDA-MB-231) and ovarian cancer (HeLa) cells with up-regulated levels of these kinases [39]. Firstly, it was established that these cells, expressing the constitutively active RhoA mutant, had substantial MVs formation. This MVs release was suppressed using RhoA siRNA, which abolished not only the effect of the RhoA mutant but also the effect due to an external cell activation by epidermal growth factor (EGF). To confirm proteins causally involved, 5 µM Y27632 was used to block the activity of ROCK1 and ROCK2 and, subsequently, MVs were no longer visible at the cell membrane. Consequently, the medium conditioned by these cells contained a markedly reduced amount of MVs. Finally, investigating which enzymes were activated by ROCK1 and ROCK2, the main effect on MVs shedding was found to be due to the activation of LIMK and MYLK, which act on cofilin and myosin, thus favouring cytoskeleton re-organisation and filaments contraction. Including Y27632 to elucidate the major player between ROCK1 and ROCK2 in promoting MVs release, Sapet et al [63] worked with a microvascular endothelial cell line (HMEC-1) activated by thrombin. Here, thrombin was shown to stimulate an increase in MVs shedding from HMEC-1 cells, with substantially increased transcription of ROCK2 following 4-h treatment with thrombin. Pre-treatment of HMEC-1 cells with 1 µM Y27632 prevented the thrombin-induced MVs release. 
When specific siRNA targeting ROCK1 and ROCK2, respectively, were used, siRNA against ROCK1 had no effect on MVs shedding, while the siRNA against ROCK2 extensively suppressed the thrombin effect. This indicated the involvement of ROCK2, but not ROCK1, in MVs production. Further using Y27632 to investigate and control the mechanisms by which MVs are released, Lathan et al. [64] performed experiments with hCMEC/D3 cells (immortalised primary microvascular brain endothelial cells) after stimulation with tumour necrosis factor (TNF). Y27632 (5 µM) causes a reduction in the release of MVs in response to TNF occurred, as well as a change in cell surface morphology, with cells adapting a more cobblestone shape and presenting fewer fibrous protrusions. This effect correlated with the activity of the inhibitor which sustains activation of proteolytic enzymes, such as Stathmin and Calpain, that destabilised the cell membrane. This study also proposed a fundamental role for βand γ-actin in facilitating membrane protrusion and MVs release. The ability of Y27632 to efficiently inhibit MVs formation has been confirmed by many research groups. Tramontano et al. [65] reported a reduction in CD105 + and CD31 + MVs release from human coronary artery endothelial cells (HCAEC) upon treatment with 10 µM Y27632 or 0.1 µM/mL statin (a well know anti-inflammatory molecule). Kim et al. [66] highlighted the ability of isoflurane to activate endothelial cell Rho kinases, promoting an increase in CD73bearing MVs production in vitro using immortalised human umbilical vein endothelial cells (EAhy9262) and immortalised mouse glomerular endothelial cells (GENC cells). Y27632 (10 µM) prevented this response to isoflurane in vitro. Additionally, Hussein et al. [67] reported a correlation between inhibition of MVs release, using either 200 µM calpeptin, 30 µM Y27632 or the combination of the two drugs (Y27632 added after 1 h of pre-incubation with calpeptin), and increased endothelial cell (EC) apoptosis and detachment in response to external stress induced by staurosporine and IL-1α. This effect correlated with the inability of the cells to rid themselves of caspase 3, a pro-apoptotic protein that typically accumulates in MVs and so is removed from these cells. The, instead, accumulation of caspase 3 in the cytoplasm upon treatment with calpeptin and/or Y27632 led to cell detachment and apoptosis. As concluded for the studies with other inhibitors, not all studies with Y27632 (and as summarised in Table 2) actually include separating EVs from conditioned medium (CM) at all, i.e. Sapet et al. [63] centrifuged their CM at 5000 g for 5 min and then proceeded with flow cytometry. This makes drawing collective conclusions very challenging, but it could be argued that it adds a different type of information. Even when EVs were separated in the studies involving Y27632, their characterisation was often limited when compared to the ISEV position paper [59] and more recent MISEV2018 guidelines [68] (but it is noteworthy that the Y27632 studies preceded the 2014 ISEV position paper). For example, the mainand sometimes the onlycharacterisation technique used was FACS, sometimes for annexin V alone. Techniques such as immunoblotting for EV markers and/or advanced microscopy were rarely used. 
So, here again, our conclusion is that although such studies are important to our fundamental understanding of how Y27632 might affect EVs, their interpretation can only be made in the context that preceded the ISEV position paper and the MISEV2018 guidelines, and not in the context of what the ISEV community now regards as ideal for EV characterisation. Thus, future studies should determine concentrations of Y27632 that are non-toxic to the cells being investigated to ensure that the effects claimed on EVs are not due to cellular cytotoxicity, and due consideration should then be given to characterising the EVs in line with MISEV2018.

Figure 4. MVs present unique lipid characteristics, including an enrichment of sphingomyelin and ceramide; thus, every enzyme able to interfere with membrane composition, i.e. calpains, scramblases and acid sphingomyelinases, plays a key role in MV biogenesis. Acid sphingomyelinases convert sphingomyelin into ceramide, a cone-shaped rigid lipid that forms micro-domains inside the cell membrane, inducing the budding of MVs. Imipramine, a well-known anti-depressant, can promote membrane fluidity by acting on aSMases, thus preventing MV generation.

Inhibition of lipid metabolism

Compounds that are associated particularly with affecting lipid metabolism include pantethine, imipramine and GW4869.

Pantethine

Pantethine is a pantothenic acid (vitamin B5) derivative used as an intermediate in the production of coenzyme A, and it plays a role in the metabolism of lipids and the reduction of total cholesterol levels. In fact, pantethine has been shown to inhibit (by 80%) cholesterol synthesis in cultured skin fibroblasts (GM0043), as well as total fatty acid synthesis [69]. As the fluidity of the cell membrane is fundamental during membrane lipid bilayer re-organisation and thus MV formation, pantethine may be used to impair MV shedding. Specifically, this drug has been shown to block translocation of phosphatidylserine to the outer membrane leaflet, a process that is fundamental for MV production. The effect of pantethine has been investigated using breast cancer MCF-7 cell variants that are resistant or sensitive to doxorubicin [70]. After stimulation of these cells with the agonist A23187, the resistant cells, but not the doxorubicin-sensitive MCF-7 cells, released a higher amount of MVs. This effect was reduced upon pre-treatment with pantethine, giving an overall MV reduction of 24%. Using pantethine, Kavian et al. [71] investigated the role of MVs in systemic sclerosis (SSc). The main dysfunctions in SSc involve damage to ECs and fibroblasts, and so the effect of pantethine on ECs was investigated. Following addition of either 50 µM or 100 µM pantethine, the release of MVs from ECs decreased in a concentration-dependent manner. The same effect was observed in a pre-clinical in vivo model where mice develop SSc following daily intradermal injections of 200 µL of hypochlorous acid (HOCl). In these mice, pantethine substantially reduced the quantities of circulating EC MVs. Mice treated orally with 150 mg/kg of D-pantethine showed an attenuation of fibrosis and vascular alteration, and this protection was associated with reduced quantities of circulating CD144+ MVs of endothelial origin, reflecting the level of endothelial damage. With pantethine, Penet et al. [72] explored the involvement of MVs in cerebral malaria.
In vitro, using mouse brain endothelial cells (MBECs) and human umbilical vein endothelial cells (HUVECs) as model systems, a 51% reduction in the quantities of MVs released was observed after pre-treatment with 1 mM pantethine and subsequent stimulation with TNF, when compared to treatment with TNF alone. Advancing to in vivo studies, mice received intraperitoneal injections of red blood cells infected with Plasmodium berghei ANKA, and the cerebral syndrome occurred on days 7-8 after infection. However, administration of pantethine (30 mg for 5 days) prevented the occurrence of cerebral malaria, associated with the ability of this drug to prevent EC MV release. It should also be noted that MV release was inhibited only after administration of pantethine specifically, i.e. pantethine constituents, such as cystamine, cysteamine and pantothenic acid, did not affect MVs. This highlighted the importance of the disulphide bridge present in the pantethine molecular structure for exerting this functional activity on MV release. Again, as for the other compounds reported to reduce EV release, collectively there is substantial evidence from these studies that pantethine has shown beneficial results in the model systems evaluated. However, minimal characterisation (typically by FACS) was done in the context of both the ISEV position paper and MISEV2018. Furthermore, prior checks to ensure that the concentrations of pantethine used were non-toxic to the cells were not always included. Thus, building on the strengths of the earlier work, more extensive studies of pantethine, with the released EVs characterised in line with MISEV2018, are now warranted to fully understand its potential as a selective inhibitor of EVs.

Imipramine

Imipramine is a tricyclic anti-depressant that has drawn attention due to its inhibitory activity on acid sphingomyelinase (aSMase). aSMase enzymes catalyse the hydrolysis of sphingomyelin to ceramide [73], a process involved in both exosome and MV formation as it increases membrane fluidity, exosome release and MV generation (Figure 4). Upon activation of the ATP receptor P2X7, MV shedding is associated with a rapid activation and translocation of aSMase to the outer leaflet of the cell membrane. After its internalisation inside the cells (especially inside endosomes and lysosomes), the basic imipramine molecule becomes protonated. In this form, it stimulates the proteolytic degradation of aSMase which, once it loses its negative charge, detaches from the membrane. Thus, imipramine is reported to prevent the translocation of aSMase, inhibiting MV and exosome secretion. An early report on imipramine [35] highlighted the ability of astrocytes and microglia cells to produce MVs in response to agonist activation of the receptor P2X7. This ability was blocked using different MV inhibitors, such as BAPTA (equivalent of EGTA), cytochalasin D (an actin polymerisation inhibitor), and Y27632. However, the most substantial inhibitory effect on aSMase was achieved with 10 µM imipramine treatment or gene knock-out of aSMase. With the latter, MV shedding was completely abrogated. Using imipramine to investigate a potential correlation between osteoblast-derived MVs and bone loss, Deng et al. [74] demonstrated that osteoblast (UAMS-32P) cells treated with this drug release a lower amount of MVs than do untreated cells.
These cells were co-cultured with a macrophage cell line able to differentiate into osteoclasts upon RANKL stimulation; after 10 µM imipramine treatment, the number of osteoclasts recovered was lower than in the imipramine-untreated co-culture. Advancing on this, imipramine was administered to ovariectomised mice for 45 days, and femur densitometry results showed improved mineral density compared to mice not treated with imipramine, supporting the involvement of MVs in osteoclast differentiation and thus bone resorption. Studies involving the prostate cancer cell line PC3 reported that imipramine blocked MVs and exosomes but, of note, these vesicle types were not analysed separately [75]. In this study, 25 µM imipramine treatment resulted in a 77% reduction in total EV release, including both <150 nm and >150 nm vesicles. Incidentally, pantethine showed similar results to imipramine, although the effect on EV release was less pronounced. The latter two studies involving imipramine were reported in 2017, so after the ISEV position paper but prior to the publication of the MISEV2018 guidelines. However, as for some studies with other compounds reported to inhibit EVs, the characterisation of EVs typically falls somewhat short of that outlined in the ISEV position paper. To advance on these findings, more extensive studies of imipramine at proven subtoxic concentrations and with extensive characterisation of the EVs could be very useful.

GW4869

GW4869 is a cell-permeable, symmetrical dihydroimidazolo-amide compound that acts as a potent, specific, non-competitive inhibitor of membrane neutral sphingomyelinase (nSMase). nSMase is a ubiquitous enzyme that generates the bioactive lipid ceramide through the hydrolysis of the membrane lipid sphingomyelin [33]. Quantitative analysis of lipid molecular species in the exosome bilayer highlights an extensive enrichment in cholesterol, sphingolipids and ceramide, with lower amounts of phosphatidylcholine than in the cellular membrane. This lipid composition is remarkably like that of lipid rafts, and treatment with exogenous SMase or application of C6-ceramide can induce vesicle formation, underlining the importance of ceramide in ESCRT-independent exosome generation. Conversely, GW4869 has been reported to markedly reduce exosome release. The cone-shaped structure of ceramide, released upon SMase activity, is thus important during the formation of spontaneous curvature, creating large lipid-raft domains involved in exosome shedding [13]. SMase is found in a range of compartments in all cells, including the Golgi apparatus, endosomes and the cell membrane, so its activity is linked not only to exosome generation but also to MV shedding. Specifically, using both the pharmacological inhibitor GW4869 and siRNAs against sphingomyelin phosphodiesterase 2/3 (SMPD2/3), a correlation between apparent exosome inhibition and MV release has been demonstrated. Treatment of SKBR-3 cells with 5 µM GW4869 (or the siRNA) resulted in increased quantities of vesicles with a size range of 100-200 nm, while the quantities of smaller vesicles released were much reduced. In keeping with this, overexpression of SMPD2 or SMPD3 decreased the quantities of larger (100-200 nm) vesicles that were released and, instead, increased the release of smaller vesicles.
This effect was also associated with a distinct lipid composition of the MV bilayer, showing a higher content of sphingomyelin with respect to the overall cell membrane lipid composition, which promotes budding of the cell membrane. Thus, deregulation of SMPD2/3 activity, by interfering with lipid composition, may shift the secretion of cell cargo from exosomes to MVs or vice versa [34]. GW4869 activity on vesicles has been extensively studied using fibroblasts. Lyu et al. [76] reported that exosomes released from cardiac fibroblasts carry pro-hypertrophic molecules and that hypertrophy induced by cardiac fibroblast CM was dramatically attenuated when its exosomes were depleted. Exosome release was increased following cardiac fibroblast activation using angiotensin II (AngII), and exosome uptake led to up-regulation of renin, angiotensin I, and angiotensin receptors 1 and 2 (AT1R and AT2R) in receiving cardiomyocytes. These effects of cardiac fibroblast exosomes were substantially inhibited when neonatal rat cardiomyocytes co-cultured with neonatal rat cardiac fibroblasts were treated with 40 µM GW4869. These findings were also supported by in vivo experiments with C57BL/6N mice, where AngII-induced myocardial hypertrophy and cardiac fibrosis were markedly reduced upon treatment with GW4869 (via intraperitoneal injection of 2.5 mg/kg body weight), supporting a causal role for exosomes during cardiac pathologic hypertrophy. Hu et al. [77] reported that cancer-associated fibroblasts (CAFs) prime cancer stem cells (CSCs) in colorectal cancer, promoting their chemo-resistance. In these studies, exosomes separated from 18Co fibroblasts stimulated SW620 CSCs to form larger spheroids than those of CSCs treated with the vehicle control (DMSO). Subsequently, mice implanted with CSCs and treated subcutaneously with 18Co cell CM before intraperitoneal injection of chemotherapeutic agents (1 µM 5-fluorouracil or oxaliplatin) had higher tumour incidence, faster tumour growth and larger tumours, demonstrating that 18Co cell CM protects xenograft tumours from chemotherapy, priming CSCs to become more resistant. However, when 18Co and CAF cells were treated with 10 µM GW4869 in vitro, the CM effect on CSCs was markedly reduced, supporting the relevance of exosomes in the CSC priming process. Similarly, Richards et al. [78] reported that CAFs exposed to 100 nM gemcitabine dramatically increased exosome release which, in turn, up-regulated cell proliferation and survival of chemo-sensitive recipient pancreatic epithelial cancer cells (PDACs). However, removing exosomes from the CAF CM reduced its ability to confer gemcitabine resistance. Additionally, when CAFs were co-cultured with chemo-sensitive pancreatic cancer cells (L3.6 cells), gemcitabine had reduced efficacy; conversely, blocking exosome secretion using 20 µM GW4869 significantly reduced these survival benefits. In vivo, in NOD/SCID mice bearing CAFs and ASPC1 cells, co-treatment with gemcitabine and GW4869 (100 mg/kg body weight) resulted in a remarkably decreased tumour growth rate 10 days after treatment, when compared to those treated with gemcitabine alone. While these studies of raw CM do not confirm that GW4869 inhibits exosome release, it certainly seems to be producing benefit that could be attributed to blocking EVs. Similarly using GW4869, the role of exosomes released by hepatic stellate cells (HSCs) has been investigated in relation to chronic liver injury [79].
Here, pro-fibrogenic connective tissue growth factor (CCN2)-GFP transfected donor LX-2 hepatic stellate cells were incubated with control non-transfected LX-2 cells. Fluorescence was detected in the control cells after 48 h; this fluorescence was reduced by 55% upon pre-treatment of the donor cells with 10 µg/mL GW4869. This was proposed to show the ability of exosomes to transport CCN2 to HSCs, which may result in the amplification of fibrogenic signalling in response to chronic liver injury. In another model system, Zhou et al. [62] reported negative influences of exosomes released during wound healing in renal tubular cells. Specifically, when exosome production from mouse proximal tubular cells (BUMPT) was inhibited using 10 µM GW4869 or 1 µM manumycin A, the wound healing process was promoted, suggesting that the released exosomes reduced the rate of healing. EGF was reported to be causally involved in exosome release, i.e. EGF-activated BUMPT cells had dramatically decreased exosome release, while suppression of EGFR activity using gefitinib promoted exosome release. Similar results have been obtained with lung adenocarcinoma (PC9) cells [80]. Here, an increase in exosome production occurred in response to gefitinib; conversely, pre-incubating cells with 0.5-20 µM GW4869 prior to gefitinib addition prevented gefitinib's stimulatory effect on exosome production. Cao et al. [81] reported the involvement of exosomal DNA methyltransferases (DNMTs) in transmitting cisplatin resistance to SKOV3 ovarian cancer cells. Specifically, exosomes released from SKOV3 cells were found to carry a significantly higher amount of DNMTs than exosomes derived from normal endometrial stromal cell lines (ESCs). When SKOV3 cells were treated with these exosomes, the cells showed significant resistance to 3 µM cisplatin when compared to SKOV3 cells that were not treated with exosomes. The resistance to cisplatin, which was proposed to be due to exosome-carried DNMTs, was remarkably reduced by GW4869. Exosomes (and other EVs) have been implicated in tumour progression. In melanoma, Matsumoto et al. [82] reported that GW4869 (5 µg/mL) significantly decreased the growth of B16BL6 cells compared with that of untreated or vehicle-treated (DMSO) cells, which they proposed to show that GW4869 inhibition of exosome secretion suppresses autocrine-regulated proliferation of murine B16BL6 cells. The same effect was observed in vivo in C57BL/6J mice subcutaneously inoculated with B16BL6 cells. Here, B16BL6-derived exosomes injected into mice promoted tumour growth compared to those mice treated with PBS, while GW4869 treatment (1 µg of drug injected inside the tumour mass every day) was associated with significantly reduced tumour growth and thus improved murine survival. In a hypoxia study including GW4869, Panigrahi et al. [83] reported that prostate cancer cells (LNCaP, 22Rv1, PC3 and E006AA-hT) secrete more exosomes when cultured under hypoxic conditions, compared to normoxic conditions, as a survival mechanism to remove metabolic waste, including lactic acid encapsulated in the exosomes. Conversely, preventing exosome production using the GW4869 inhibitor (10-20 µM) reduced cell viability compared to the untreated cells, supporting a fundamental role of exosomes in the survival of these cancer cells. GW4869 has been used to show that exosomes may play an essential role in immune regulation. For instance, exosomes have been associated with house dust mite allergen-induced airway inflammation [84].
In a C57BL/6J mouse model, Th2 immune-targeted miRNAs were found to be selectively released into the airway lumen via exosomes during allergic inflammation. Pre-treatment of mice with intraperitoneally injected GW4869 (1.25 mg/kg, with DMSO as the vehicle) decreased exosome release in vivo and, in turn, decreased the numbers of eosinophils recruited to and accumulating in the mucosa. Additionally, this treatment suppressed the production of IL-4 and IL-13, thus reducing pro-inflammatory signals. Exosomal miRNA content was further connected with different immune system-modulating pathways, including T-helper cell differentiation, granulocyte adhesion, diapedesis and leukocyte extravasation. Similarly, with the help of GW4869, Essandoh et al. [85] reported on negative effects of exosomes during sepsis-induced inflammation and cardiac dysfunction. Here, an in vitro study showed that RAW264.7 macrophages challenged with LPS increased exosome release and that these exosomes carried pro-inflammatory cytokines, including TNF-α, IL-1β and IL-6. However, when these cells were treated with 10-20 µM GW4869, the quantity of exosomes released, and thus of pro-inflammatory cytokines present in their CM, was substantially decreased. A similar effect was also observed in vivo when C57BL/6 mice were intraperitoneally injected with 2.5 µg/g GW4869 for 1 h, followed by an intraperitoneal injection of 25 µg LPS. Those pretreated with GW4869 presented a significant decrease in endotoxin-triggered release of exosomes and pro-inflammatory cytokines into their serum. As a consequence, sepsis-induced cardiomyopathy was prevented and animal survival was improved in the GW4869 pre-treated cohort of mice. The neutral sphingomyelinase pathway is reported to be involved in prion protein packaging into exosomes, based on studies including GW4869. In a mouse immortalised hypothalamic cell line, GT1-7, whether infected or not with a mouse-adapted strain of human prion (M1000), the quantity of exosomes released was inhibited upon treatment with 4 µM GW4869. Interestingly, a decrease of both total cellular PrP (PrPTOT) and PrPSc was observed. The lowered PrP levels in the cells, in turn, were also observed in their exosomes, indicating that packaging of PrP into exosomes is regulated by the nSMase pathway [38]. In ovRK13 cells infected with the ovine 127S prion strain, 10 µM GW4869 treatment did not affect infectivity levels in ovRK13 recipient cells. However, it did dramatically inhibit the infectivity released into the CM, although GW4869 only marginally inhibited the release of other exosomal proteins such as Alix, Flotillin-1 and Tsg101. This suggests the involvement of exosomes in PrP release. The same effects were observed using shRNA against Tsg101 of the ESCRT-dependent pathway. Together, these data support both ESCRT-dependent and ESCRT-independent pathways participating in the extracellular release of actively multiplying prions, and suggest that inhibiting these pathways could be beneficial to prevent prion spreading [86]. While GW4869 has been relatively extensively studied and reported to inhibit exosomes, here again collating the data is complicated by the fact that some authors have not reported how exosomes were separated (if they were), some have used the well-accepted ultracentrifugation methods, while others have used less favoured kit-based approaches. Similarly, some studies have included quite extensive characterisation to support the authors' claim of working with EVs, while in other studies the characterisation used is substantially lacking.
As for all other agents/drugs detailed in this review and summarised in Table 2, we strongly advise that efforts should initially be invested in determining concentrations of GW4869 that are non-toxic to the cells being investigated, and that due consideration should be given to methodologies for EV separation and characterisation that are in line with the MISEV2018 guidelines.

Other drugs used

Due to the complexity of the EV release mechanisms, it is not surprising that the same production pathway can be impaired using different drugs acting on diverse targets involved in the same signalling cascade. For this reason, although the drugs mentioned above are the most commonly used, other drugs have been investigated for their ability to inhibit EV release. These include bisindolylmaleimide I, U0126, clopidogrel, imatinib, NSC23766, dimethyl amiloride, glibenclamide, indomethacin, chloramidine, cytochalasin D and sulphisoxazole. Bisindolylmaleimide I is a highly selective, cell-permeable, reversible inhibitor of protein kinase C (PKC). It acts as a competitive inhibitor of the PKC ATP-binding site and can inhibit different PKC isoforms, including the α, βI, βII, γ, δ and ε isozymes. MV release is dependent upon calcium release and externalisation of phosphatidylserine, both of which are involved in PKC activation with diacylglycerol. Additionally, as outlined above, MV release is dependent upon ROCK family activation and, as PKC belongs to the same kinase family, its inhibition can block MV release. Stratton and co-workers [87] reported MV release to be inhibited by bisindolylmaleimide I in prostate cancer cells (PC3). Specifically, they treated the cells with the sublytic membrane attack complex (MAC), used to increase the intracellular calcium level, which is known to favour MV release. However, bisindolylmaleimide I prevented MV shedding, independently of the increased intracellular calcium level. U0126 is a potent, specific, non-competitive inhibitor of MEK 1 and MEK 2, two protein kinases belonging to the mitogen-activated protein kinase kinase (MAPKK) family. As one of the mechanisms involved in MV generation requires the activation of ERK, a protein downstream of MEK, MEK inhibition has been associated with MV reduction. Here, Mingzhen and co-workers [88] demonstrated that treating the THP-1 monocytic cell line and primary human monocyte-derived macrophages (hMDMs) with tobacco smoke extract increased the release of pro-coagulant MVs, an activity subsequently associated with overactivation of the MAPK cascade. Subsequently, using 10 µM U0126 to inhibit MEK activity, the MV release and the pro-coagulant activity derived from the stimulated monocytes and macrophages were dramatically decreased. Clopidogrel is used clinically, alone or in combination with aspirin, as an anti-platelet anti-coagulant to reduce the risk of heart disease and stroke. It is administered as a pro-drug, activated through its metabolism by CYP450 enzymes, and, once processed, it selectively inhibits the binding of ADP to its receptor P2Y12 present on the platelet surface. Clopidogrel apparently provides a protective effect on ECs, although the mechanism involved is poorly understood. Ryu et al. [89] investigated clopidogrel's effect on HUVEC cells that were stimulated with indoxyl sulphate. Here, indoxyl sulphate overactivated the MAPK signalling cascade, leading to a massive production of MVs that was efficiently blocked by 10-50 µM clopidogrel, possibly via its inhibitory activity on p38 MAP kinase, downstream in the MAP kinase pathway.
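The working concentrations quoted throughout this review (typically in the low-micromolar range) are usually prepared by diluting a concentrated stock, often in DMSO, into culture medium. As a minimal sketch of that routine arithmetic, the Python snippet below applies C1*V1 = C2*V2; the stock concentration and volumes are hypothetical examples and are not taken from any of the studies cited here.

# Minimal sketch (hypothetical values): volume of a concentrated DMSO stock
# needed to reach a low-micromolar working concentration, via C1*V1 = C2*V2.

def stock_volume_ul(stock_mM, target_uM, final_volume_ml):
    """Return the stock volume (µL) giving target_uM in final_volume_ml."""
    stock_uM = stock_mM * 1000.0                               # mM -> µM
    return (target_uM / stock_uM) * final_volume_ml * 1000.0   # mL -> µL

# Example: 10 µM GW4869 in 10 mL of medium from a hypothetical 5 mM stock
v = stock_volume_ul(stock_mM=5.0, target_uM=10.0, final_volume_ml=10.0)
print(f"Add {v:.1f} µL of stock")                              # 20.0 µL
# The same numbers give the final vehicle (DMSO) percentage, which should be
# kept low and matched in vehicle controls, as in the DMSO controls above:
print(f"Final DMSO: {v / (10.0 * 1000.0) * 100.0:.2f}%")       # 0.20%

Keeping the vehicle percentage identical between treated and control conditions avoids attributing vehicle effects to the inhibitor itself.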
Imatinib is clinically used to treat chronic myeloid leukaemia, due to its ability to interact with and inhibit the BCR-ABL tyrosine kinase deriving from the Philadelphia chromosome. This drug, interacting with the ATP-binding site present in the catalytic site of the enzyme, locks the protein in the closed conformation of the catalytic site, resulting in inactivation of the catalytic activity. Unfortunately, myeloid leukaemia cells develop resistance to imatinib, a reason that prompted the development of a new drug, dasatinib, which has a similar effect on BCR-ABL kinase and is apparently effective on all the BCR-ABL mutants. Mineo et al. [90] reported that these drugs prevent exosome production by K562 chronic myeloid leukaemia cells, with reductions of 58% and 56% for imatinib and dasatinib, respectively. NSC23766 is an inhibitor of Rac1, an enzyme belonging to the Rho small GTPase family and involved in many physiological processes, including cell growth, motility, cell-to-cell adhesion and cytoskeleton re-organisation. NSC23766 can block Rac1 activation by the guanine nucleotide exchange factors (GEFs) Trio and Tiam1, without affecting its interactions with Cdc42 or RhoA. Wang et al. [91] correlated the activity of Rac1 with MV generation in sepsis-activated platelets. Specifically, their pre-clinical in vivo study demonstrated that platelets produce a higher amount of MVs during sepsis than under healthy conditions, and the use of the Rac1 inhibitor NSC23766 (5 mg/kg body weight) reduced MV release by 87%, confirming the involvement of this small GTPase in MV production. Dimethyl amiloride (DMA) is a derivative of amiloride, a drug used to treat high blood pressure based on its ability to inhibit H+/Na+ exchangers (NHE1, 2 and 3) and Na+/Ca2+ channels [92]. As exosome release is connected to intracellular calcium levels, and DMA impairs the function of channels involved in calcium homoeostasis, this drug has been proposed as a possible exosome blocker. In both in vitro (using mouse colon carcinoma CT26, lymphoma EL4 and human lung adenocarcinoma H23 cell lines) and in vivo (using a panel of different mice including female C57BL/6, BALB/c and nude mice) studies, Chalmin et al. [93] reported that DMA reduced exosome release into culture medium and blood serum, respectively. The in vivo studies indicated that DMA (1 µmol/kg body weight) enhances the anti-tumour efficacy of cyclophosphamide, suggesting it should be considered as part of a combination therapy for cancer. Glibenclamide (glyburide) belongs to the sulphonylurea family and inhibits an ATP-binding cassette transporter involved in MV release. Glibenclamide has been used as an anti-diabetic drug because of its activity on the SUR receptor (a member of the ABC family), which is involved in insulin release via its ability to regulate potassium channels. However, this drug is not specific to SUR/Kir6.2 and interacts with other proteins including ABCA1, which is implicated in the transfer and recycling of cholesterol and phospholipids from cells to circulating apolipoprotein A1 (apoA-I) during its maturation into high-density lipoprotein (HDL) [94]. As cholesterol seems to have a fundamental role in MV and exosome production, glyburide has been proposed as a possible EV inhibitor. Henriksson et al. [95] suggested glyburide to have an anti-coagulant effect on monocytes in vitro, reducing tissue factor expression and reducing the number of MVs released.
This has been proposed to be due to inhibition of ABCA1, supporting the relevance of this drug to controlling MV release. Conversely, however, Kosgodage et al. [75] found glibenclamide to have no effect on MV release from prostate cancer (PC3) and breast cancer (MCF7) cells. Indomethacin belongs to the non-steroidal anti-inflammatory drug (NSAID) family and is used to decrease prostaglandin production during inflammation, due to its ability to non-selectively inhibit cyclooxygenase I and II. Indomethacin has also been shown to downregulate transcription of the ABCA3 transporter, an intracellular protein involved in lipid transport [96]. As lipids are major players in exosome biogenesis, inhibition of ABCA3 may have an impairing effect on exosome release. Koch et al. [97] illustrated an effect of indomethacin on exosome release using lymphoma cell lines (DLBCL cell lines SU-DHL-4, OCI-Ly1 and OCI-Ly3). Specifically, these cells were reported to encapsulate doxorubicin and pixantrone in their released exosomes, resulting in cancer cell survival. Preventing exosome export using 10 µM indomethacin maintained the cytotoxic effect of these chemotherapeutics, as they were then able to accumulate inside the cells' nuclei. Chloramidine is a cell-permeable compound that irreversibly binds the calcium-bound form of peptidylarginine deiminase (PAD) enzymes, leading to their inactivation. Six PAD isoforms are known to exist in humans, expressed in different cells and with different cellular localisation. These enzymes are involved in many cellular processes, including chromatin rearrangement and protein deimination [98]. PAD enzymes are overexpressed in many different pathological processes including cancer and, due to their ability to interact with cytoskeletal proteins, they have been proposed as possible players in MV release. Kholia and co-workers [99] demonstrated that promotion of micro-vesiculation in prostate cells leads to increased levels of deiminated cytoskeletal proteins (deimination being the removal of an amine group), particularly β-actin and α1-actin. These deiminations were found to be performed by PAD enzymes, and pre-incubation of cells with chloramidine substantially prevented actin deimination and, thus, MV release. Cytochalasin D is an alkaloid produced as a toxin by many fungi. Cytochalasin D can bind the ends of actin filaments, preventing subunit association or dissociation and thus inhibiting actin polymerisation. Given the importance of actin and cytoskeleton reorganisation during MV budding and release, and during MVB trafficking towards the cell membrane, cytochalasin D may possibly be used to prevent EV release. Using a range of cancer cell lines including cervical carcinoma (HeLaS), pancreatic carcinoma (Panc1) and prostate carcinoma (PC3), Khan et al. [100] reported that cancer cells can release exosomes carrying the anti-apoptotic protein survivin. Cytochalasin D treatment reduced exosome release, in turn reducing the amount of survivin present in the tumour environment. Finally, sulphisoxazole, a short-acting sulphonamide exerting antibiotic activity against a wide range of gram-positive and gram-negative bacteria, has shown anti-cancer activity in breast cancer (in a study including MCF10A, MCF7 and MDA-MB-231 cells). Using both in vitro and pre-clinical in vivo studies, Im et al. [101] reported the ability of this drug to efficiently reduce the release of small vesicles through its activity on the ESCRT-dependent pathway.
This drug proved to inhibit the expression of several RABs and ESCRT-related components such as VPS4B and Alix, without affecting other vesicle-producing pathways such as the ESCRT-independent pathway (neutral sphingomyelinases proved not to be affected) or modifying intracellular calcium levels. Sulphisoxazole efficiently inhibited endothelin receptor type A, suggesting this receptor to be causally involved in small vesicle release.

Considering side-effects

As outlined earlier, even if some of these drugs (those that are already formulated and used as therapeutic agents) are found to reliably, robustly and reproducibly inhibit release of EVs, substantial efforts would still be needed to investigate their influence on EV release from healthy cells. Approaches to selectively deliver them to cancer cells may be required. Of course, the drugs that are already approved for use in humans, for some indication(s), would likely have a more straightforward pathway to utility than those molecules that have never been developed as therapeutics. Notwithstanding that, even for those currently used as therapeutics, their side-effects (whether or not related to their influence on EVs) must also be considered. For example, known side-effects of imipramine include blood disorders/suppression of immune cells and associated infections, disorientation, dizziness, tiredness, nausea and vomiting, and low blood pressure, among others. Side-effects of pantethine include, but are not limited to, nausea, diarrhoea and possible impaired blood clotting. Similarly, the side-effects of others of these drugs, such as imatinib, glibenclamide, and indomethacin, are well established. However, it must also be remembered that no drugs in clinical use are without some side-effects, and so decisions must ultimately be made on the benefit/risk ratio to decide upon the appropriateness of use.

Critical considerations in the context of MISEV guidelines

As indicated in Table 2, in the studies of inhibitors reported to date, a relatively broad range of methodologies have been used to separate and characterise vesicles. It is evident that some studies reported do not comply with the MISEV guidelines released in 2014 or 2018 [59,68]. However, as the scope here was to review the proposed abilities of compounds thus far reported to decrease the amount of EVs released, we elected not to bias the review by eliminating studies that did not fully comply with MISEV guidelines, particularly as some such studies were performed prior to the MISEV2018 guidelines being published. However, we believe that some observations in this regard should be considered. Firstly, on the methodologies used to recover and subsequently characterise EVs: many studies in which, for example, calpeptin and Y27632 activity were evaluated used the Annexin V test as the characterisation method. While analysis of Annexin V can help to identify the presence of some vesicle subpopulations, it would be unreliable for identifying all MVs and for confidently distinguishing MVs from apoptotic bodies, should the latter be present. This could then lead to potentially under- or overestimating MV presence. Furthermore, where Annexin V immunoaffinity is used as an EV separation method, procedural bias may be unavoidable. The same applies to some studies using ultracentrifugation without density gradients for EV separation: here small, medium and large EVs may be pooled, masking influences of potential inhibitors on a specific subpopulation.
Furthermore, studies in which 0.2 µm filters were used and the resulting vesicles are described as MVs should consider that this filtration would likely have eliminated MVs. Secondly, considering the presence of all EV subpopulations would be ideal. However, some studies claim a reduction in MVs or microparticles released, but without considering if there were any associated changes in other EV subpopulations, such as small EVs. It would be useful to consider if blocking MV release in turn either positively or negatively affects the release of exosomes, and vice versa. Thirdly, we would advise that consideration always be given to the concentrations of inhibitors used, and to whether the quantities of EVs released are also evaluated in the context of the number of cells releasing those EVs. As these studies include inhibitors interacting with fundamental cellular events, it is reasonable to propose that these compounds, depending on the concentrations used, will affect cell viability and proliferation. Although some studies report the use of "safe" concentrations that do not affect cellular viability and proliferation (e.g. Datta et al. and Panigrahi et al. [61,83]), the majority have given no consideration to this. So, it is impossible to determine whether or not the associated changes in EV amounts reported may be at least partly associated with a cytotoxic effect of the compounds on cells.

Conclusion

Many in vitro studies including cell lines and a limited number of pre-clinical in vivo studies indicate that many compounds have the ability to block, or at least limit, the formation and release of exosomes and/or microvesicles. These include calpeptin, Y27632, pantethine, imipramine, GW4869, manumycin A, bisindolylmaleimide I, U0126, clopidogrel, imatinib, NSC23766, dimethyl amiloride, glibenclamide, indomethacin, chloramidine, cytochalasin D and sulphisoxazole. More extensive studies are now warranted to investigate the activities of these drugs, singly and in combination, on a much broader range of cell line and pre-clinical in vivo models. Being mindful of how we now define EVs, in the context of the recently published minimal information for studies of EVs [68], such studies should give due consideration to concentration-response effects of these compounds on vesicle release; their off-target effects, if any; and their potential to somehow selectively block vesicles from cells representing disease (such as cancer) rather than vesicles from normal healthy cells, in order to truly understand their potential uses as research tools and as future therapeutics for controlling EV release.
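As a concrete illustration of the advice above on cell numbers and cytotoxicity, the short Python sketch below, using invented particle and cell counts rather than data from any cited study, expresses EV output per viable cell so that an apparent drop in EV release can be distinguished from a simple loss of viable, EV-releasing cells.

# Minimal sketch (invented numbers): normalising EV output to viable cells
# to separate genuine inhibition of release from drug cytotoxicity.
# Particle counts might come from e.g. NTA; viable cells from trypan blue.

def evs_per_viable_cell(total_particles, viable_cells):
    return total_particles / viable_cells

control = evs_per_viable_cell(total_particles=2.0e9, viable_cells=1.0e6)
treated = evs_per_viable_cell(total_particles=0.8e9, viable_cells=0.9e6)

per_cell_inhibition = 1.0 - treated / control      # about 56% in this example
viability_drop = 1.0 - 0.9e6 / 1.0e6               # 10% in this example

print(f"EVs per viable cell - control: {control:.0f}, treated: {treated:.0f}")
print(f"Per-cell inhibition: {per_cell_inhibition:.0%}; viability drop: {viability_drop:.0%}")
# If per-cell release were unchanged while total particle counts fell in step
# with viability, the apparent 'inhibition' would be a cytotoxic artefact
# rather than a genuine effect on EV biogenesis or release.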
Fertility following uterine torsion in dairy cows: A cross-sectional study

Background and Aim: Dairy cows with uterine torsion are often susceptible to reduced fertility, resulting in greater costs and effort to restore the economic value of those cows. The aim of our study was to examine and evaluate the possible associations between uterine torsion and consequent uterine involution disturbances, on the one hand, and between the degree and duration of uterine torsion and fertility parameters, on the other hand. Materials and Methods: Within 1.5 years, 115 dairy cows (German Browns, German Holsteins, and German Fleckvieh) that were suffering from uterine torsion were examined to evaluate the incidence of involution disturbances of the uterus and to examine fertility after calving. Statistical analysis included correlation analyses between the degree and duration of torsion and fertility parameters (days open, days to conception, conception rate and services per conception, and intercalving interval), as well as the incidence of involution disturbances. Results: The study revealed no statistically significant correlation between uterine involution and the degree of uterine torsion. However, involution processes were significantly correlated with the time of expulsion of the fetal membranes. Days to conception and intercalving intervals were significantly influenced by the presence of uterine torsion. Conclusion: Concerning fertility after uterine torsion, it was shown that reduced fertility is associated with the duration of uterine torsion (p=0.02) and the time to expulsion of the fetal membranes (p=0.02), but not with the degree of torsion (p=0.27).

Introduction

Uterine torsion is a well-known cause of dystocia in dairy cattle that often results in the necessity to consult the local veterinarian [1,2]. Torsion of the organ leads to local ischemia and fetal death, and may even result in the death of the cow [2]. Published literature mainly deals with the incidence, pathogenesis, and therapeutic possibilities, including prognostic parameters. These prognostic parameters are used to evaluate the outcome after uterine torsion for the calves on the one hand and for the cows on the other hand [2][3][4][5]. In buffaloes, uterine torsion is also described in detail [6]. According to these authors, uterine torsion is seen in 0.5-1% of all births and represents 2.7-65% of all reasons for dystocia that are presented to the local veterinarian. The pathogenesis is still not completely clear, but several risk factors have been suggested. Among these are age, breed, anatomical reasons, electrolyte imbalances, and abrupt movements of the pregnant cow, as well as the weight and gender of the calf [1]. The therapy of uterine torsion mainly consists of retorsion of the organ and development of the calf. To perform retorsion, several conservative methods as well as laparotomy have been described. The prognosis for the calf and the cow depends on the onset of the disease as well as on the degree of torsion of the organ. The puerperium and subsequent fertility are presumed to be influenced by uterine torsion [4]. To the owner, a positive outcome includes a vital calf and a high-yielding and fertile dairy cow that becomes pregnant at the first service. To realize an immediate onset of the reproductive cycle, the uterine involution and the ovarian function have to be undisturbed [7]. Dystocia, in general, is associated with the retention of fetal membranes [8].
The aim of our study was to examine and evaluate the possible associations between uterine torsion and consequent uterine involution disturbances, on the one hand, and between the degree and duration of uterine torsion and fertility parameters, on the other hand.

Ethical approval

Examinations and treatments were performed according to the standard therapeutic measures, without any unnecessary harm to the animals. Approval from the Institutional Animal Ethics Committee was not required; the study did not affect the animals in excess of therapy.

Animals and study design

The present study included 115 dairy cows with uterine torsion of different breeds (German Browns, German Holsteins, and German Fleckvieh). All animals were housed on farms in the south-east of Germany. The diagnosis of uterine torsion was confirmed by the local veterinarian through vaginal and transrectal examination. The duration of uterine torsion was defined as the time from the detection of stagnant calving to the retorsion of the organ, whereas the degree of torsion was determined during therapy. To evaluate the general condition of the patients, the animals were examined clinically and gynecologically through manual palpation of the vagina and cervix. Therapeutic measures (transcervical retorsion of the uterus or cesarean section) depended on the degree of cervical opening. To evaluate the postpartum period, the cows were examined on days 10 and 15 after calving. The general condition was classified as "undisturbed," "slightly disturbed," or "severely disturbed" based on the vital parameters and the behavior of the animals. For a description of the postpartum period, the time to expulsion of the fetal membranes (normal: 12 h), food intake after calving, milk yield, and general condition of the cows were recorded. On days 10 and 15 after calving, vaginoscopic and manual examinations were performed to assess the cervix and vaginal canal for trauma, color of the mucous membranes, opening of the cervix, and quality of vaginal discharge. Uterine size and filling (if present) were evaluated through transrectal palpation. Based on these examinations, the following involution statuses were classified: physiological involution of the uterus (uterus transrectally definable, and unremarkable vaginal discharge), slight disturbance of uterine involution (uterus not definable, but tensed, and purulent vaginal discharge), and severe disturbance of uterine involution (uterus not definable and relaxed, and severe watery vaginal discharge). Fertility assessment included observation of the cows until they were diagnosed pregnant or until they left the farm. To evaluate fertility, artificial insemination data including days to the first service (days between calving and first insemination), days to conception (days between calving and the 1st day of pregnancy), days between calvings (days from calving to calving), insemination index (number of pregnant cows per number of all inseminated cows), and services per conception (number of inseminations per number of pregnant cows) were documented. Days to the first service and days to conception were compared with a small control group of 3-5 cows of the same farm that calved within the same time frame. The time between calvings was related to the data of the individual farms.
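To make the index definitions above concrete, the short Python sketch below computes them from invented counts and, using hypothetical day counts, illustrates the kind of one-sided two-sample comparison applied to days to conception in the following section; none of the numbers are from this study.

# Minimal sketch (invented numbers): the fertility indices defined above and
# a one-sided two-sample t-test of the type used for days to conception.
from scipy import stats

inseminated_cows, pregnant_cows, total_inseminations = 110, 88, 194

insemination_index = pregnant_cows / inseminated_cows          # pregnant / inseminated
services_per_conception = total_inseminations / pregnant_cows  # inseminations / pregnant

print(f"Insemination index: {insemination_index:.2f}")
print(f"Services per conception: {services_per_conception:.2f}")

# Hypothetical days-to-conception values; the one-sided alternative is that
# cows with uterine torsion take longer to conceive than controls.
torsion = [128, 141, 110, 155, 132, 149]
control = [95, 102, 118, 99, 108, 112]
t, p = stats.ttest_ind(torsion, control, alternative="greater")
print(f"t = {t:.2f}, one-sided p = {p:.4f}")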
Statistical analysis

Statistical analysis to evaluate possible statistical differences between cows with and without uterine torsion incorporated Fisher's exact test, the Wilcoxon Mann-Whitney test in its exact form, and correlation analyses. All tests were performed with the software package Testimate [9]. For the fertility data (days to the first service, days to conception, and days between calvings), the one-sided t-test was used. In all tests, results with p<0.05 were regarded as statistically significant.

Results

The present results indicate that the duration of uterine torsion was not longer than 6 h in most cases (49.6%), with a degree of torsion of 270° being the most frequent (47.7%). The percentages of cows that showed physiological, accelerated, or delayed expulsion of the fetal membranes are shown in Figure-1. Concerning fertility parameters, we determined an overall pregnancy rate after uterine torsion of 80.3%, with no statistically significant difference in days open. In contrast, the time from calving to the 1st day of pregnancy (days to conception) was significantly prolonged in cows with uterine torsion. Those animals were pregnant 27.6 days later than the healthy controls. Details are given in Table-1. The index of inseminations was recorded as 2.2 and did not differ significantly between diseased and healthy animals. In contrast, the time between calvings differed statistically significantly between cows with uterine torsion and the healthy controls, resulting in a 25.5-day longer time from calving to calving. In this context, our statistical analyses showed that there is no statistically significant relationship between the degree of uterine torsion and fertility (p=0.27). However, there was a significant correlation between duration of torsion and fertility. With prolonged time of persistent torsion of the uterus, the probability of infertility rose (p=0.02). Infertility also resulted from retention as well as from preterm detachment of the fetal membranes (p<0.0001). Cows with expulsion of the fetal membranes simultaneously with calving seem to be more prone to infertility than cows suffering from retained fetal membranes (RFM), because excluding the cases with preterm detachment markedly reduced the significance (p=0.05). The examination concerning the influence of uterine torsion on involution processes showed a statistically significant increase in expulsion of the fetal membranes simultaneously with calving in correlation with the degree of uterine torsion (p=0.03). In severe cases of uterine torsion (>360°), the fetal membranes could always be removed together with the dead calf. Retention of the fetal membranes, on the other hand, developed independently of the degree of torsion (p=0.83). The examination of the relationship between degree of torsion and involution of the uterus showed no significant correlation between these parameters (p=0.45 with R=0.076). A subsequent disturbance of uterine involution is statistically more probable after retention (p<0.0001) as well as after simultaneous expulsion of the fetal membranes at calving (p=0.0001).

Discussion

Knowledge concerning the restoration of the reproductive cycle after uterine torsion is extremely scarce and either restricted to the breed Brown Swiss or to a somewhat biased collective of animals within a clinic for obstetrics [3,10].
Our results represent the first attempt to examine fertility parameters and the incidence of uterine involution disturbances after uterine torsion in a controlled prospective clinical study including Brown Swiss, German Holstein, and German Fleckvieh cows. We compared the results of our evaluation of fertility data with control cows from the same farm. This intra-farm comparison of diseased and control cows enables an unbiased data analysis concerning housing- or management-related risk factors for the development of uterine torsion, calving management, and fertility status within the different farms. In doing so, we could show that days to conception and intercalving intervals were significantly influenced by the presence of uterine torsion. The reason for this finding is supposed to relate to uterine tissue damage rather than to a delayed onset of ovarian activity, because only days to conception and intercalving intervals, but not days open, were found to differ between cows with and without uterine torsion. In general, the postpartum period is of extraordinary importance for undisturbed fertility. One of the main adverse factors resulting in impaired fertility is RFM with subsequent metritis [11]. In the present study, animals suffered from this disease in 20.8% of all examined cases. This observation is in line with a retrospective study of Brown Swiss cows in Switzerland, where an incidence of RFM of 21.2% was shown for this breed. The physiology of placental detachment is a complex, multifactorial system with many influencing factors as to its impairment [11]. The detachment of the fetal membranes may not only be delayed but, in severe cases of uterine torsion, may also be preterm. According to Schonfelder and Sobiraj [3], such a detachment takes place in 81.8% of all cows undergoing cesarean section for the correction of uterine torsion. The authors suggest that this very high rate of detached fetal membranes may be correlated with the high rate of fetal deaths during torsion of the uterus. The present study revealed a much lower rate of detached fetal membranes, at 14.2% of all observations, but we agree with the observation that this detachment was related to fetal death. One possible explanation for this incongruity might be that Schonfelder and Sobiraj [3] refer to a collective of clinical cases, whereas our own study represents a controlled prospective clinical field study with an unbiased population of animals. In contrast to the assumption that ischemia of the uterine artery may induce loosening of the cotyledon-caruncle interface [3], other authors have suggested that a delayed detachment of the fetal membranes occurs after uterine torsion [12]. Non-inflammatory edema of this interface, because of compression of the venous uterine outflow, is proposed as the reason for a delayed loosening of the fetal membranes. Whereas the involution of the uterus was undisturbed in only 48.4% of all cows in the present study, we also diagnosed a remarkable number of mild (31.6%) to severe (20%) impairments of uterine involution. The overall ratio of uterine involution disturbances in the present study seems comparatively high at 51.6%, but is in accordance with the results of Schonfelder and Sobiraj [3], who showed involution disturbances in 63% of all cases after uterine torsion and cesarean section. Klein and Wehrend [12] explain this postpartum delay of uterine involution by severe uterine tissue damage in cows with uterine torsion.
As this tissue damage occurs independently of RFM, and because our results present a statistically significant impairment of uterine involution in cows with preterm detachment of the fetal membranes, this symptom may serve as an in-field prognostic parameter for fertility. In addition, the results of Klein and Wehrend [12] also explain why fertility is reduced after uterine torsion. These researchers showed histologically that, in cases of uterine torsion with a degree of torsion of 360°, massive edema of the uterine wall with loosening of the endometrium takes place. Combining our results with the results of Klein and Wehrend [12] leads to the assumption that fertility impairment after uterine torsion is solely related to the described damage of the uterine wall. This hypothesis is based on the fact that we could show a significantly prolonged time to conception, but no prolongation of the time to first service, in cows with uterine torsion in comparison to healthy animals from the same farm. This means that ovarian function seems unaffected after uterine torsion, in contrast to the integrity of the uterine wall and mucous membranes, which is essential for implantation of the embryo. As a consequence, a 27.6-day longer period to conception could be shown in those animals with uterine torsion. A meta-analysis of the effects of disease on reproduction [13] showed a similar result concerning the general effect of dystocia on reproduction. The authors state that cows after dystocia and RFM need 8 and 11 more days to conception, respectively, than healthy controls. As the reason for dystocia is not clarified in that study, we suppose that uterine torsion is underrepresented in this meta-analysis, resulting in a shorter delay to conception; in other words, we think that the impact of uterine torsion on fertility might be more severe than that of dystocia because of the narrowness of the vagina in calving cows.

Conclusion

The present study suggests that impairment of uterine involution and reduced fertility occur after uterine torsion. Involution disturbances are associated with unphysiological detachment processes of the fetal membranes (expulsion simultaneous with calving, or delayed expulsion). The probability of infertility appears to rise with the duration of uterine torsion and seems to be due to tissue damage of the uterine wall, resulting in a prolonged time to conception and time between calvings.
Independent Evaluation of the Integrated Community Case Management of Childhood Illness Strategy in Malawi Using a National Evaluation Platform Design

We evaluated the impact of integrated community case management of childhood illness (iCCM) on careseeking for childhood illness and child mortality in Malawi, using a National Evaluation Platform dose-response design with 27 districts as units of analysis. "Dose" variables included density of iCCM providers, drug availability, and supervision, measured through a cross-sectional cellular telephone survey of all iCCM-trained providers. "Response" variables were changes between 2010 and 2014 in careseeking and mortality in children aged 2–59 months, measured through household surveys. iCCM implementation strength was not associated with changes in careseeking or mortality. There was fewer than one iCCM-ready provider per 1,000 under-five children per district. About 70% of sick children were taken outside the home for care in both 2010 and 2014. Careseeking from iCCM providers increased over time from about 2% to 10%; careseeking from other providers fell by a similar amount. Likely contributors to the failure to find impact include the low density of iCCM providers, geographic targeting of iCCM to "hard-to-reach" areas although women did not identify distance from a provider as a barrier to health care, and displacement of facility careseeking by iCCM careseeking. This suggests that targeting iCCM solely based on geographic barriers may need to be reconsidered.

In Response

Independent Evaluation of the Integrated Community Case Management of Childhood Illness Strategy in Malawi Using a National Evaluation Platform Design

Dear Sir: Doherty and others suggest that our evaluation of the impact of the integrated community case management of childhood illness (iCCM) in Malawi did not take into account spatial variations in exposure to iCCM within districts, thereby diluting the measured impact.1 We find this critique unfounded given the purpose of our evaluation. The iCCM strategy as implemented by the Ministry of Health and partners in Malawi aimed to improve intervention coverage and reduce under-five mortality among all children. Central components of the strategy were to complement existing facility-based health services with trained community health workers in district-defined "hard-to-reach areas" (HTRAs), while simultaneously strengthening the quality of facility-based child health care, referral, commodities distribution, and supervision. An evaluation focused only on the effects of deploying community-based workers would miss the potential contribution of these broader health system interventions, as well as any negative side effects of the iCCM strategy such as the displacement of program effort from easily accessible areas to HTRAs. The appropriate unit of analysis for the evaluation is therefore the district, with all under-five children included in the analysis. In addition, Doherty and others' recommendations for alternative evaluation designs are flawed, and suggest an unfamiliarity with the realities of large-scale evaluations. Their suggestion to stratify the analysis by a measure of spatial exposure, such as distance from a household to a health surveillance assistant (HSA), or to estimate a ratio of HSAs per population in a specified area, ignores the fact that neither these data nor the geospatial boundaries of district-defined HTRAs are available in Malawi.
Our evaluation did examine the effects of iCCM among rural populations and the poorest 20%, and found no substantial change in care seeking over time (see Figure 3 and Supplemental Webannex 1, Part 8 in the article). The authors of the letter also suggest that using change in care seeking from HSAs as the dependent variable would produce "more accurate results" relative to the measure of overall care seeking used in our evaluation. We not only report these results (an increase in care seeking from iCCM-HSAs from 2% to 10% during the implementation period), but also document that this increase was largely as a result of mothers changing their source of care from a fixed health facility to an iCCM HSA. Limiting the analysis to those seeking care from HSAs would miss this replacement effect and also produce unstable results due to small sample sizes. We agree that it would be preferable to use correct treatment rather than care seeking as an intermediate measure. Unfortunately, there is now convincing evidence that household surveys cannot produce accurate measures of treatment of childhood pneumonia, and care seeking provides the best available measure until new methods are developed and tested. [2][3][4] In sum, an evaluation approach that defines exposure at HTRAs level would not measure the impact of roll-out of the iCCM at population level in districts or nationally. Such approach would answer a different and much more limited question-that is, what are the effects of iCCM in HTRAs as compared with non-HTRAs? Although answering this question might demonstrate the effectiveness of iCCM as a stand-alone service delivery approach, it does not provide a test of the full program strategy and Malawi's intent to reduce child mortality at district and national levels. Our rigorous approach has provided important evidence to the government of Malawi that they are using to revise and strengthen implementation of iCCM strategy in the context of their overall child survival strategy. Doherty and others' concern that the evaluation results may lead to reductions in funding for iCCM and demotivation of program managers is unjustified and inappropriate, implying that the reporting of evaluation results should be tempered to reinforce the status quo.
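To make the dose-response logic concrete, the following is a purely hypothetical sketch of how a district-level analysis of this kind could be set up; the variable names and synthetic numbers are illustrative assumptions, not data or code from the Malawi evaluation.

```python
# Hypothetical sketch of a district-level "dose-response" analysis with the
# 27 districts as units of analysis. All values below are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n_districts = 27

# "Dose": iCCM-ready providers per 1,000 under-five children (illustrative values)
provider_density = rng.uniform(0.2, 1.0, n_districts)

# "Response": percentage-point change in careseeking, 2010-2014 (illustrative values)
careseeking_change = rng.normal(0.0, 5.0, n_districts)

# Ordinary least squares of response on dose, with an intercept
X = np.column_stack([np.ones(n_districts), provider_density])
coef, *_ = np.linalg.lstsq(X, careseeking_change, rcond=None)
intercept, slope = coef

print(f"intercept = {intercept:.2f}, slope per provider/1,000 = {slope:.2f}")
```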
Adding α-pinene as a novel application for sulfur dioxide-free in red wine Sulfur dioxide (SO2) is an efficient additive that is used during winemaking to prevent microbial contamination as well as the oxidation and color changes caused by enzymatic and nonenzymatic reactions. However, this compound can cause allergic reactions to humans. In this study, α-pinene, a monomeric compound derived from fruits, was used to be an alternative for sulfur dioxide in red wine fermentation. Three concentrations of α-pinene, 0.125%, 0.25%, or 0.5%, were used. The redwinewithout additive or SO2 was used as controls. No significant differences were found in total soluble solids, pH values, titratable acidity, and ethanol content (n = 3, p < .05) though lower ethanol content was observed in the group at 0.5% α-pinene. The antibacterial results were the same between adding α-pinene and SO2. In the α-pinene group, significantly higher L and a values as well as transmittance were obtained than in the control and SO2 groups. Higher free radical scavenging ability was also obtained in the groups of α-pinene. However, the group with the highest concentration, 0.5%, showed negative effects to ethanol production and sensory performance. Better physicochemical and sensory characteristics were obtained in the groups at 0.125% or 0.25% of α-pinene, particularly, when 0.125% of α-pinene was added. Thus, α-pinene at 0.125% should be the optimal concentration for a potential alternative for SO2 in winemaking. ARTICLE HISTORY Received 25 October 2019 Revised 4 January 2020 Accepted 12 January 2020 Introduction Wine fermentation is a complex biochemical process based on the microbial conversion of sugar to ethanol. Carbon dioxide, fermentation by-products, and various microbes found on the surface of grape skin and the microbiota associated with winery surfaces participate in natural wine fermentation. [1] The involvement of microorganisms is essential for the winemaking process and for determining wine quality. Sulfur dioxide (SO 2 ) is a highly versatile and efficient additive included during winemaking to prevent microbial contamination, oxidation, and color changes related to undesirable enzymatic and nonenzymatic reactions. [2] Although this compound inhibits the development of yeast, lactic acid bacteria, and acetic acid bacteria during winemaking, it has been found to be associated with allergic reactions or intolerance in some consumers. Most nonasthmatic individuals can tolerate up to 5 ppm, whereas sulfite-sensitive individuals react negatively to its ingestion and may experience a range of symptoms, including dermatitis, urticaria, angioedema, abdominal pain, diarrhea, bronchoconstriction, and anaphylaxis [3] , which have been associated with the presence of sulfite in wine. Forbidding the use of SO 2 as an antimicrobial agent without an alternative would increase the risk of wine spoilage by yeast and bacteria. [4] Therefore, there is a great deal of interest in finding alternatives to SO 2 in winemaking. Raposo et al. [5] demonstrated the use of a grapevine shoot extract as a sustainable alternative to SO2 and were able to preserve wine composition without loss of quality. Additionally, the use of lysozyme has been proposed to control malolactic fermentation in winemaking to support sulfur dioxide substitution. [5] Recently, some emerging technologies, also called green technologies, have been proposed as alternatives to SO2. 
Pulsed electric field, ultrasound, high pressure, and ultraviolet light have been tested in wine. [6] Santos et al. [7] demonstrated that moderate high hydrostatic pressure treatments, i.e., 425 and 500 MPa, for 5 min affect the long-term physiochemical characteristics of red and white wines, namely increasing orange-red color, reducing antioxidant activity, and enhancing total phenolic content. These studies have demonstrated the importance of preparing SO2-free wine for the winemaking industry. According to Zacchino et al. [8] , of 100 papers reviewed on natural products that showed the antibacterial inhibitory capacity of adjuvants, approximately 73% dealt with polyphenols (48%) and terpenoids (25%). Terpenes are a class of compounds found as constituents of essential oils; these compounds are mostly hydrocarbons, which are classified according to the number of isoprene units. [9] The simplest terpenes are monoterpenes that contain two isoprene molecules. Monoterpenes, such as ocimene, α-pinene, and limonene, are volatile molecules that evaporate quickly and are called top notes by the perfume industry. Some monoterpenes, such as α-pinene and limonene, exhibit antitumor properties. [10,11] The same amount of α-pinene with different enantiomeric compositions can have diverse antimicrobial potential owing to different specific interactions with other chemical compounds of essential oils. [12] Moreover, grape skin contains a large number of terpenoids, which are extracted during fermentation. [13] Among the specific aroma sources, terpenes are the main components of wine-specific aromas owing to their high concentration and low aroma threshold. [14] Terpenes, a source of volatile aromas in wine, are present in the form of free volatiles or in combination, usually in the form of glycosides [15] , whereas in grapes, they are synthesized during ripening and are subject to the qualitative and quantitative effects of variety, soil, climate, and viticulture, strongly affecting the aromatic content of the wine. Comprehensive studies have demonstrated the development of alternatives to SO2; however, these studies have not revealed other methods to effectively replace SO2 in winemaking. α-Pinene is one of the main components in the terpene family and is widely present in a variety of plant aroma compounds, including grapes and blueberries. Moreover, α-pinene shows antibacterial and antioxidant properties. [16] Therefore, in this study, we aimed to evaluate the applicability of additional commercial α-pinene as an alternative to SO2 in winemaking, which was studied using three different doses of α-pinene. Quality and sensory parameters were also analyzed. Winemaking Fresh black queen grapes grown in the Houli area (Longitude: 120E42'00", Latitude:24N18'00") (Taichung city, Taiwan) were harvested manually in July 2018 by Song Hen Farm Produce (Taichung, Taiwan) and transported to laboratory on the same day of harvest. Based on the protocols of Song Hen Farm Produce, around 25 kg of black queen grapes were harvested, destemmed, and crushed. The initial Brix was~12 degree, then sucrose was added to the grape juice to increase reach~25°B rix. The yeast, Saccharomyces cerevisiae (BC103, France), supplied by Song Hen Farm Produce was activated by adding into 25 mL warm water at 40°C to 45°C at the concentration of 0.25 g/L. After activation for 15 min, the yeast suspension was added into the juice and fermented at 25 ± 2°C for 14 d. 
There are five different groups: (1) control group, without added SO2; (2) SO2 group, (SO2 was added at 55 ppm); (3) 0.125% α-pinene group; (4) 0.25% α-pinene group; and (5) 0.5% α-pinene group. Three bottles were used for each group and each bottle contained 1 L. After fermentation, the wine was filtered. Suitable amount for different tests was sampled and centrifuged at 13,000 g for 20 min at 4°C to determine the characteristics in physicochemical properties. Microbial analysis Since antibacterial effects were confirmed by the previous study (Wang et al., 2019) [16] , only the standard total plate count (TPC) was conducted to monitor the microbiological quality of the white wine samples. The pour plate method was adopted for this TPC analyses [17] which were conducted before and the end of fermentation. Physicochemical analysis of the wine (total soluble solids, pH, titratable acidity, and transmittance) The physicochemical analysis of the wine including total soluble solids (TSS), specific gravity, pH value, titratable acidity, pectin, degree of ester (DE), phenolic acid, ethanol, and color (L, a, and b) values, was determined. A hand-held refractometer (Atago, Tokyo, Japan) was used to determine the TSS (as°Brix) of wine. [17] The refractometer was calibrated with distilled water each time before use. The specific gravity was determined by hydrometer. Changes in pH value of wines after fermentation were determined by a pH meter (pH/ION METER SP-2500) at room temperature (25 ± 2°C). Distilled water (25 mL) was added to wine (5 mL), and the pH value was adjusted to 8.1 by adding 0.1 N NaOH. The volume of NaOH used was recorded and calculated as percent tartaric acid. Tartaric acid was used as a standard. [17] The percent tartaric acid was calculated as follows: TA (%) = (V × 0.1 M NaOH × 0.075 × 100)/M (V: mls NaOH used; 0.075: milliequivalent factor; M: grams of sample). The transmittance was determined by spectrophotometer (HITACHI U-2000, JAPAN). Pectin Pectin was analyzed in wines after fermentation according to the methods of Hou et al. [18] Precooled H 2 SO 4 (2 mL) and distilled water (15 mL) were slowly added to 5 mg alcohol insoluble solid of wines. The mixture was filtered with Whatman number 1 filter paper to obtain pectin solution after magnetic stirring for 1 h in an ice bath and then heated in a boiling water bath for 5 min. After cooling, the reaction mixture was mixed well with 0.05 mL of 0.15% m-hydroxydiphenyl solution in 0.5% NaOH and then allowed to rest for 5 min. Absorbance at 520 nm was recorded. d-Galacturonic acid was used to construct the standard curve for calculation in the samples. [18] Dextrose equivalent (DE) The DE of commercial pectin samples was determined by the titration method of Warnakulasuriya et al. [18] Briefly, 5 g pectin was mixed with 5 mL HCl (2.7 M), and 100 mL of 60% (v/v) ethanol was added and stirred (900 rpm) for 10 min at room temperature. The mixture was further washed with the same solvent mixture (6 × 15 mL) and then with 60% ethanol. The mixture was then rinsed with 20 mL anhydrous ethanol and oven dried for 1 h at 105°C. To the dried pectin sample (0.5 g) in a beaker, we added 2 mL ethanol and 100 mL distilled water. The resulting solution was titrated against 0.1 M NaOH using phenolphthalein indicator. The initial titer value was recorded as V1 (mL). To the titrated reaction mixture, a 20.0-mL aliquot of 0.5 M NaOH was added, and the mixture was then stirred vigorously for 15 min. 
Subsequently, 20.0 mL of 0.5 M HCl acid was added and stirred until the pink color disappeared. The resulting solution after the addition of 0.5 M HCl was then titrated against 0.1 N NaOH to obtain the titer value Vs (mL). DE was calculated using Eq. (1) [19] : Total phenolic acids Gallic acid (dissolved in methanol at concentrations ranging from 0.125 to 2 mg/mL) was used as a standard. Briefly, grape wine was diluted to 1 mg/mL with methanol, and 30 μL of each diluted extract with 150 μL Folin-Ciocalteu reagent was transferred to a 96-well plate. An aqueous 7.5% Na 2 CO 3 solution (120 μL) was added, and the absorbance of the solution was measured at 765 nm after 1 h of incubation. [20] Ethanol content A gas chromatography system with a flame ionization detector was employed. The temperature was set at 180°C, the split ratio was set as 1:50, and the injection pore temperature was set at 180°C. The column temperature elevation condition was an initial temperature of 40°C that was maintained for 3 min before the temperature was increased to 180°C at a rate of 40°C/min; the final temperature was maintained for 1 min. [18] Wine color The measurement of wine color was carried out using an SA2000 spectrophotometer (NDK NIPPON DENSHOKU, Japan). The L, a, and b parameters were determined according to the regulations of the International Commission on Illumination: red/green color (a * ) and yellow/blue color (b * ) components and luminosity (L * ). [21] Color differences (ΔE) were calculated as the Euclidean distance between two points in the three-dimensional space defined by L, a, and b, as follows: 2,2-diphenyl-1-picrylhydrazyl (DPPH) free radical scavenging activity The free radical scavenging activity was determined using the DPPH method. The hydrogen atom or electron donation ability of the sample was measured using a methanol solution of DPPH. A 160-µL aliquot of sample solution was mixed with 20 µL DPPH solution (180 μM) in methanol in a 96-well plate. DPPH solution was used as the blank, and ascorbic acid was used as the positive control. After 40 min of incubation in the dark at room temperature (25 ± 2°C), the samples were observed using a 96-well plate reader (HT 340 ELISA reader; Bio Tec) to monitor the decrease in absorbance at 517 nm. The free radical scavenging activity was expressed as the percent inhibition of DPPH and was calculated using the following formula: Scanning electron microscopy (SEM) of the grape must In the freeze drying process [22] , grape pomace was frozen at −20°C and then lyophilized using a condenser temperature of −110°C and a pressure of 10 Pa. The working pressure was lowered using a rotary pump below the triple point of water (607 Pa), to allow ice sublimation. For SEM micrographs, an FEI Quanta 200 (FEI, USA) was used with a 5-mm working distance and accelerating voltage of 5-10 kV in secondary electron detection mode. [23] Sensory analysis Fifteen panelists were standardized by tasting with the commercial red wine provided by the Song Hen Farm Produce. The samples were evaluated using seven descriptors including color, aroma, flavor, continuing taste (long-lasting), the balance taste of sweetness, acidity and bitterness, the complexity of wine flavor and aroma, and overall quality. Totally, three sessions were taken place before formal sensory evaluation. The commercial red wine was used as a standard and tasted simultaneously throughout the study. The protocols were conducted in accordance with the implementation of the sensory evaluation guidelines. 
[24] The sample red wine was stored at 4°C until tasting. Wine was placed in a clear glass cup under regular fluorescent lights. The serving temperature of the wine samples was between 14°C and 18°C, and the room temperature was ~25°C. The descriptors were scored on a continuous scale from 1 to 9 (1 = minimum intensity, 9 = maximum intensity). One cup (about 20 mL) was tasted for each group, and the panelists rinsed with plain drinking water twice between groups.

Statistical analysis
Three replicate samples were prepared for each group and the tests were repeated three times, so nine replicates were analyzed in total. Data were expressed as the mean ± standard deviation and evaluated using one-way analysis of variance to determine the significance of differences among the mean values (least-squares difference test). Statistical analysis was performed using SPSS for Windows (v.14.0; SPSS Inc., Chicago, IL, USA), and significance was determined at p < .05.

Results and discussion
Examination of the bacteriostatic ability of α-pinene during fermentation from bacterial counts before and after fermentation
Bacterial counts in wine were determined before and after fermentation. As shown in Table 1, the total plate count of the puree before fermentation was 3.98 log10 CFU/mL, demonstrating a risk of bacterial contamination if SO2 was not used. During fermentation, the inoculated yeast gradually became the dominant organism. After fermentation, no microorganisms were detected in any of the treated groups, including the SO2 group and all concentrations of α-pinene. In the control group, the bacterial population was 2.46 log10 CFU/mL, about a 1 log reduction from before fermentation. These results show that SO2 and all concentrations of α-pinene effectively inhibited the growth of undesired bacteria. In contrast, the control group still carried a bacterial population of 2.46 log10 CFU/mL, which could potentially cause undesired flavor and physicochemical characteristics. This suggests that α-pinene possessed an ability to inhibit bacterial growth similar to that of SO2. Therefore, adding α-pinene could effectively prevent the risk of bacterial contamination during fermentation and storage.

Effects of α-pinene on the physicochemical characteristics of the black queen wine
The quality characteristics of the black queen wine were analyzed. All results are shown in Table 2, which includes the specific gravity, TSS, ethanol, titratable acidity, total phenol, DE, and pH values of the different treatment groups. Specific gravity is one of the markers used to assess fermentation and gradually decreases as fermentation proceeds. As shown in Table 2, the specific gravity of all groups approached 1.00, with no significant differences among groups, indicating that all groups achieved a similar degree of fermentation. Ethanol concentration is usually around 12-14% in wine. [25] In this study, the black queen grapes harvested in Taiwan in 2018 had a sugar content of 12°Brix, which was adjusted to 25°Brix. As shown in Table 2, there was no significant difference in ethanol content among the groups, although ethanol content gradually decreased as the concentration of α-pinene increased. In addition, the residual sugar content in the groups with the higher concentrations of α-pinene (0.25%, 0.5%) was significantly higher than in the other groups.
These results suggest that α-pinene at 0.25% and 0.5% may have been concentrated enough to inhibit not only the growth of undesired bacteria but also the yeast. In addition, the residual sugar contents in the groups with high concentrations of α-pinene (0.25%, 0.5%) were higher than in the control, SO2, and 0.125% α-pinene groups. The higher residual sugar also indicates that α-pinene at 0.25% and 0.5% negatively affected the conversion of sugar to alcohol. Therefore, 0.125% should be the suitable concentration to balance antimicrobial ability and fermentation.

Acids are important components of wine and affect its physicochemical and sensory characteristics. As shown in Table 2, the pH values of all groups were around 3.5, within the range of commercial products, and there was no significant difference among the groups (p > .05). These results indicate that adding α-pinene did not alter the pH of the wines. (Table footnote: all values are expressed as the mean ± standard deviation, n = 3; ND, not determined; TPC, total plate count; CFU, colony-forming unit; the detection limit was 1 log10 CFU/mL.)

Pectin can cause high viscosity and turbidity in wine. The degree of pectin esterification or methylation, defined as the percentage of methylated galacturonic acid, is known as DE. In this study, the addition of α-pinene significantly decreased DE relative to the control and SO2 groups, and higher concentrations of α-pinene caused greater decreases in DE. Even at the lowest concentration, 0.125%, the DE was still significantly lower than in the control and SO2 groups. The mechanism of the DE decrease could be interference of α-pinene with pectinase activity; further study is needed.

Color is an essential quality in winemaking and provides useful information regarding the concentration of phenols and the oxidation tendency of the wine. [5] As shown in Table 3, adding α-pinene significantly increased the L and a values of the wines but not the b values. Thus, wines with added α-pinene showed higher brightness and redness even at the lowest concentration, so adding α-pinene improved the wine's appearance and its visual appeal to consumers. Oxidative browning is a well-known problem in winemaking, and SO2 has been used as an antioxidant to control it. [26] In our study, the increase in brightness and redness indicates that α-pinene potentially inhibited the browning reaction in wine and could be an effective alternative to SO2.

High transmittance means high transparency and is the key factor for the clarity of wine. As shown in Figure 1, the transmittance of the five groups ranged from 50 to 60, and adding α-pinene at all concentrations gave higher values than the control and SO2 groups. At the highest concentration (0.5%), lower transparency values were obtained than at 0.125% and 0.25%. This could be because, with a greater decrease in DE, small-molecule pectin may link together in the wine and reduce transparency. Nevertheless, adding α-pinene generally increased the transparency of the wine. These results are consistent with the earlier findings, in which lower DE and higher L and a values were obtained in the α-pinene groups. (Table footnote: all values are expressed as the mean ± standard deviation, n = 3; lowercase letters indicate significant differences, p < 0.05.)
Effects of α-pinene addition on other sensory characteristics in black queen wine
As shown in Figure 2, the groups with 0.125% and 0.25% α-pinene demonstrated sensory performance similar to that of the SO2 group. Overall, sensory performance decreased in the following order: control > 0.125% α-pinene > 0.25% α-pinene > SO2 > 0.5% α-pinene. In terms of overall performance, the 0.125% and 0.5% α-pinene groups had the highest and lowest scores, respectively, of all treatment groups. As a terpene compound, α-pinene possesses a strong odor, which negatively affects the sensory quality of wine even at very low concentrations. The GC-MS results also showed that residual α-pinene was detected after fermentation in the group with 0.5% α-pinene but not in the groups with 0.125% and 0.25% α-pinene (Table S1). Thus, the low sensory scores of the 0.5% α-pinene group should be due to residual α-pinene, whereas α-pinene was degraded during fermentation when low concentrations such as 0.125% and 0.25% were used. Therefore, adding α-pinene at low concentrations could be commercially practical, since it offered sensory performance better than or similar to that obtained with SO2.

Effects of α-pinene addition on free radical scavenging abilities in black queen wine
Wines contain large amounts of anthocyanins and other phenolic compounds, which provide strong antioxidative activity. As shown in Figure 3, the addition of α-pinene enhanced scavenging abilities significantly: the scavenging abilities of all α-pinene groups were significantly higher than those of the control and SO2 groups (p < .05). The free radical scavenging abilities were also correlated with the concentration of α-pinene, with higher abilities at higher concentrations. In contrast, significantly lower total phenolic contents were obtained in the α-pinene groups than in the control and SO2 groups, and phenolic contents were lower at higher α-pinene concentrations (Table 2). Therefore, α-pinene addition affected the release of total phenolic substances during winemaking. However, a previous study in our laboratory demonstrated that α-pinene possesses strong DPPH free radical scavenging activity (half-maximal inhibitory concentration = 12.57 ± 0.18 mg/mL) and reducing power (213.7 ± 5.27 μg/mL of L-ascorbic acid equivalents). [16] This could explain why higher scavenging abilities were still obtained in the α-pinene groups despite their lower phenolic contents.

Effects of α-pinene addition on SEM images of black queen grape pomace
SEM was used to observe the degradation of grape pomace after fermentation (Figure 4: (a) control, (b) SO2 at 55 ppm, (c) 0.125% α-pinene, (d) 0.25% α-pinene, (e) 0.5% α-pinene). The pomace was not completely degraded in the control group, whereas degradation was complete in the SO2 and α-pinene groups. In particular, degradation in the 0.125% and 0.25% α-pinene groups was more complete than in the 0.5% group. These results correspond to the higher L and a values and higher transparency of the 0.125% and 0.25% α-pinene groups compared with the 0.5% group.

Conclusion
The results showed that adding α-pinene effectively inhibited microbial growth and reduced the total plate count. Higher transparency and free radical scavenging ability were also obtained in the α-pinene groups.
However, the highest concentration, 0.5%, showed negative effects on ethanol production and sensory performance. Therefore, adding α-pinene at a lower concentration, 0.125% or 0.25%, should be more suitable. In particular, when the lowest concentration, 0.125%, was added, better physicochemical and sensory characteristics were obtained. Thus, 0.125% should be the optimal α-pinene concentration for a potential alternative to SO2 in winemaking. Further studies are needed to evaluate the addition of α-pinene to other types of wine and at larger production scales.
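As a compact illustration of the calculations described in the analytical methods above, the following sketch implements the titratable-acidity formula as stated in the text; the degree-of-esterification, color-difference (ΔE), and DPPH-inhibition expressions are the standard forms assumed here, since the equations themselves are not reproduced in the text, and all numerical inputs are hypothetical.

```python
import math

def titratable_acidity_percent(v_naoh_ml, sample_g, naoh_molarity=0.1, meq_factor=0.075):
    """TA (%) = (V x 0.1 M NaOH x 0.075 x 100) / M, as given in the Methods."""
    return (v_naoh_ml * naoh_molarity * meq_factor * 100.0) / sample_g

def delta_e(lab1, lab2):
    """Color difference as the Euclidean distance in (L, a, b) space (standard form)."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(lab1, lab2)))

def degree_of_esterification(v1_ml, vs_ml):
    """DE (%) = Vs / (V1 + Vs) x 100 (standard titrimetric form, assumed)."""
    return 100.0 * vs_ml / (v1_ml + vs_ml)

def dpph_inhibition(a_blank, a_sample):
    """DPPH scavenging (%) = (A_blank - A_sample) / A_blank x 100 (standard form, assumed)."""
    return 100.0 * (a_blank - a_sample) / a_blank

# Hypothetical example values, for illustration only
print(f"TA   = {titratable_acidity_percent(4.2, 5.0):.2f} % tartaric acid")
print(f"dE   = {delta_e((55.0, 12.0, 3.0), (60.0, 15.0, 2.5)):.2f}")
print(f"DE   = {degree_of_esterification(6.0, 9.0):.1f} %")
print(f"DPPH = {dpph_inhibition(0.80, 0.35):.1f} % inhibition")
```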
The homology of defective crystal lattices and the continuum limit The problem of extending fields that are defined on lattices to fields defined on the continua that they become in the continuum limit is basically one of continuous extension from the 0-skeleton of a simplicial complex to its higher-dimensional skeletons. If the lattice in question has defects, as well as the order parameter space of the field, then this process might be obstructed by characteristic cohomology classes on the lattice with values in the homotopy groups of the order parameter space. The examples from solid-state physics that are discussed are quantum spin fields on planar lattices with point defects or orientable space lattices, vorticial flows or director fields on lattices with dislocations or disclinations, and monopole fields on lattices with point defects. Introduction The crystal lattice plays roughly the same role in solid-state physics that linear spaces do in mathematics (some excellent references on the physics of crystal lattices are [1][2][3][4]). In either case, one is dealing with an idealization of what one finds in the realm of natural phenomena, although it is still a useful generalization in that the phenomena that one can describe by means of it are still approximately valid within some regime of system parameters. In both cases, one then proceeds into the more realistic natural phenomena by corrupting the idealization with successive levels of defects of manageable types. For instance, Hooke's law can be corrupted by a cubic term in displacement to produce the anharmonic oscillator or the linear law can be replaced with a sinusoidal function to produce the physical pendulum. The ideal crystal lattice represents a high degree of symmetry as a geometrical structure and, as a result, one finds that the dynamics and physical properties of crystal lattices and their excitations can involve all of the main branches of pure mathematics. In particular, algebra -and mostly the theory of finite groups − bears upon the very definition of the symmetries of a crystal lattice, which then influence the physical properties of the material in a fundamental way. Because the groups in question act as transformation groups by way of symmetries, geometry plays a fundamental role, especially when one introduces dislocations and disclinations into the lattice. Since a lattice is associated with an elaborate system of polyhedra, one can also naturally consider the topological aspects of the lattice by using the methods of simplicial homology, which we will be the focus of the present study. Moreover, the order parameters that one introduces on the lattice often take their values in order parameter spaces that have non-trivial topological properties in their own right. Finally, many of the questions that one asks about crystalline matter are concerned with their physical properties and excitations. Such problems introduce fields of various types, such as tensor fields, wavefunctions, and order parameters that might obey systems of differential equations, and one is ultimately led to the introduction of mathematical analysis into the study, as well. Fourier analysis is a particularly powerful tool in the experimental determination of the structure of crystal lattices. 
The subject of the present study is the role of the topology of lattices that contain defects in obstructing the extension of fields that are defined on lattices to fields that are defined on the spaces that these lattices "occupy" when one passes to the continuum limit. One finds that this problem is directly addressable as an elementary problem in the well-established branch of topology that is concerned with topological obstructions to the continuous extension of continuous functions defined on subsets of topological spaces to functions defined on the entire space. Furthermore, it suggests various practical interpretations in terms of familiar models of solid-state and condensed-matter physics. By now, the role of topology in condensed matter physics has been primarily confined to the study of topological defects in ordered media [4][5][6][7][8][9][10][11][12][13][14]. In particular, one mostly considers the homotopy groups of the order parameter space in various dimensions. By comparison, the topology of the space on which the order parameter is defined is usually assumed to be something topologically elementary, such as a vector space or its one-point compactification into a sphere. As a result, the homotopy classes of order parameters that one deals with can be represented as elements of the homotopy groups of the space in which the order fields take their values. However, solid-state physics also considers defects in the crystal lattice, which are distinct from the topological defects that originate in the homotopy type of the order parameter space. The fact that lattice defects are also a type of topological defect that leads naturally into the study of homology, rather than homotopy, is rarely discussed. When one defines an order field on a defective lattice that takes its values in an order parameter space with non-trivial homotopy, it is then just as natural to define (co-) homology modules for the lattice with coefficients in the homotopy groups of the order parameter space. Furthermore, that is precisely where one finds the obstruction cocyles that tell one whether it is possible to find a continuous extension of an order field on the lattice to an order field on the space it occupies when one passes to the continuum limit. Actually, the branch of topology known as obstruction theory usually gets more application in the context of extending partial sections of fiber bundles to global sections [15]. Although some order fields fall within this purview, when one is concerned with lattices in R n , whose tangent bundle is trivial, many of the bundle-theoretic aspects of the problem introduce gratuitous generality. Hence, since the methods of homology and obstructions are apparently not common knowledge to the solid-state and condensed matter community, we shall first introduce them in an elementary setting that includes much of the familiar phenomena, and then treat the introduction of non-triviality into the bundle as a further topic to be addressed by a later study. In the next section of this paper, we define lattices in R n in a manner that leads naturally to the usual definitions of solid-state physics, as well as those of simplicial homology. We then briefly review the most common types of lattice defects that solidstate physics deals with and show how they affect the homology of the lattice. In Section 3, we then review the notions of order fields and topological defects in the order parameter spaces, as they are usually considered. 
In Section 4, we discuss the passage from a crystal lattice to a continuum and the extension of fields defined on lattices to fields defined on continua. Since this suggests the classic formulation of the most elementary problem of obstruction theory, in Section 5 we then recall the basic algorithm that one uses for solving it and apply it to various special cases of interest to solid-state physics. Lattices in R n Although it is straightforward to mathematically generalize the basic definition of a lattice in R n to a lattice in a more topologically interesting manifold, and is probably unavoidable if one is to address "space-time defects [12][13][14]" nevertheless, in the spirit of Occam's Razor, which is one of the foundations of the scientific method, we shall first introduce only as much mathematical generality as is necessary to pose the problem at hand, which only involves lattices of a more prosaic sort, for which the topology is carried by the lattice defects, and not also the space in which they are embedded. The role of the topology of the ambient space can then be introduced later when one wishes to consider its effect on the rest of the theory. Since there is another usage of the term "lattice" that is quite established in mathematics (for instance, as in [16]), namely, a partially-ordered set (S, ≤) that is given two binary operations in the form of glb (greatest lower bound) and lub (least upper bound), we first point out that the "lattice" Z m , which consists of all ordered m-tuples I = (i 1 , …, i m ) of integers, and which can also be regarded as the set of all multi-indices, can be given a partial ordering that makes the definition of a lattice quite natural. It is simply the "product ordering" that it inherits from the total ordering one defines on Z; that is, I ≤ J iff i k ≤ j k for all k = 1, …, m. One then sees that this ordering is not a total ordering anymore, since some m-tuples cannot be compared to each other; for instance, when some pairs of indices are greater and others are lesser. The natural definitions of glb and lub are also inherited from the concepts of minimum and maximum from the total ordering on Z: glb(I, J) = (min{i 1 , j 1 }, …, min{i n , j m }) lub(I, J) = (max{i 1 , j 1 }, …, max{i n , j m }). We can illustrate this schematically as follows: Our first definition of a lattice in R n is more general than we shall need for the remainder of the study, but still quite useful. Let S be a finite (but possibly large) subset of Z m . We define a lattice in R n to be a one-to-one map L: S → R n , I ֏ 1 ( , , ) . Thus, we are making it into a finite subset of R n that is parameterized by a finite set of multi-indices in Z m . Some of the useful general features of this definition are: 1. We have yet to impose any actual symmetry considerations on the resulting subset in R n . Hence, in addition to the ions of a crystal lattice, it can just as well describe the instantaneous state of molecules in a fluid or a liquid crystal, the ions of a plasma, the vortices of an Abrikosov flux lattice, or the finite meshes used in the numerical models for systems of differential equations. 2. It is not necessary to enumerate the elements of the lattice into a total ordering; indeed, such an ordering seems rather arbitrary and unnatural when the lattice is not onedimensional. 3. One can consider most of the common examples from solid-state physics by setting n = 1, 2, 3, and introduce special relativistic considerations by going to n = 4. 4. 
We are leaving open the possibility that the dimension of the lattice is strictly less than the dimension of the space that it lives in. (This can be useful when one considers the motion of strings or membranes in higher-dimensional spaces.) Note that the finite lattice that we have defined will always have a boundary since S will. One simply looks at "maximal chains in S" of the partial ordering on Z m and defines the boundary ∂S to consist of all their endpoints. However, we shall leave the consideration of the boundary out of the immediate discussion, as it rapidly expands the scope of homology to relative homology, and will regard that as an extension of the more elementary analysis of the problem at hand. The traditional terminology for the points of L(S) is lattice sites or lattice points. In the case of fluids and plasmas, it is reasonable to identify them with positions of the individual molecules or ions, but for crystal lattices, that does not have to be the case. Basic notions of crystal lattices In order to define how crystal lattices differ from the more amorphous kind, one must introduce the kind of homogeneity that comes from group actions. Actually, we shall really be using the multiplicative semi-group of Z, rather than its additive group. This is because Z is a sub-ring of the ring of real numbers R, which act on R by multiplication; hence, so do the integers. Now, let {a 1 , …, a m } be an m-frame in R n ; i.e., a linearly independent set of m vectors in R n . The product ring Z m acts on {a 1 , …, a m } in the obvious -viz., componentwise -way. The product of the multi-index I = (i 1 , …, i m ) and the m-frame {a 1 , …, a m } is the vector: Hence, every multi-index I in some subset S ⊂ Z m is associated with a distinct vector v I in R n , so we have defined a lattice in R n according to our previous definition. We have also defined what is usually referred to as a Bravais lattice. Such a lattice automatically has m spatial periodicities of period || a k ||, where the norm used is simply the Euclidian norm on R n . Similarly, the additive group Z acts on each a k by scalar multiplication. More generally, if A(n) is the affine group of R n , which is then the semi-direct product of GL(n) -i.e., all invertible real n×n matrices -with the translation group R n , then A(n) acts on v ∈ R n , in the natural way: as well as on any subset of R n , such as the Bravais lattice generated by {a 1 , …, a m }, which we shall denote by L(a). A subgroup G of A(n) is said to preserve the lattice L(a) if any time v ∈ L(a) and g ∈ G the vector gv is also an element of L(a). The largest such subgroup, under the partial ordering of inclusion, is called the space group of the lattice. More generally, the crystallographic groups are the groups that can be the space group for some Bravais lattice. There are two isomorphism classes of crystallographic groups for linear lattices, seventeen for planar lattices, and 230 for space lattices. The point group at v ∈ L(a) is the subgroup of the space group of L(a) that fixes v; that also makes a point group the isotropy subgroup of the action of the space group on the lattice. It will then be a finite subgroup of the linear subgroup of A(n), namely, GL(n); usually, it consists of powers of some basic rotation, along with reflections. For planar lattices, the only discrete rotation groups that can appear as point groups have order 1, 2, 3, 4, and 6. 
Other possibilities, such as orders 5 and 7, assert themselves when one is dealing with "quasi-crystals," which are closely related to Penrose tilings, and whose existence in nature was first established experimentally as recently as 1984. Although we will not actually need to consider the symmetry of a lattice in what follows, we will use one key notion that it implies: that of the unit cell. One starts with the fact that the m-frame {a 1 , …, a m } in R m can be regarded as spanning an m-dimensional parallelepiped by means of all vectors of the form (2.1) when the coefficients i k now range from 0 to 1 without restriction. The volume of this parallelepiped is then: (2.3) in which the matrix is defined by the components of the vectors a i , i = 1, …, m with respect to the canonical frame. This parallelepiped then represents the minimum rectangular volume that can be spanned by the basic frame, so one calls the interior of that region of space a unit -or primitive -cell of the lattice. By definition, each primitive cell contains only one lattice point. Another way of defining a primitive cell with volume V c produces the Wigner-Seitz cell. First, one connects a chosen lattice point to all of its nearest neighbors by means of line segments. (The nearest neighbors of a given lattice point I = (i 1 , …, i m ) are characterized by those points J of the lattice whose coordinates differ from those of I by either −1, 0, or + 1.) One then intersects a normal hyperplane through the midpoint of each such line segment. The smallest-volume polyhedron that these hyperplanes bound is then the Wigner-Seitz cell. The significance of primitive cells for us is that they give us an indication of what sort of geometric building blocks we should use to define a simplicial complex that would represent the homology of the crystal lattice. Although there are five basic types of planar Bravais lattices and fourteen types of Bravais space lattices, one finds, upon perusing the illustrations (cf., e.g., Kittel [1]), that they tend to suggest two natural types of building blocks: parallelepipeds, which one can regard as cubic simplexes, and triangular simplexes, which we will refer to as cubes and simplexes, respectively. If the lattice is of the type that suggests the use of parallelepipeds then a 1-cube will be a line segment between two distinct lattice sites, a 2-cube will be the parallelogram spanned by four distinct lattice sites when two pairs of them generate distinct parallel lines, and a 3-cube is the three-dimensional parallelepiped that is generated by eight distinct lattice sites, two quadruples of which generate distinct parallel planes. In general, the m-frame {a 1 , …, a m } spans an m-dimensional simplex by letting the coordinates x k , k = 1, …, m of a point x = x k a k range from 0 to 1 under the restriction that their sum must always be 1: One sees that volume of the region that is spanned by an m-simplex is one-half the volume of corresponding primitive cell. Hence, a 1-simplex is a line segment in R n that connects two distinct lattice sites, a 2simplex is a triangle spanned by three distinct, non-collinear lattice sites (including its interior points), and a 3-simplex is a tetrahedron spanned by four non-coplanar lattice sites. One can continue the process into higher dimensions, but for the present purposes that will not be necessary. One notices that as far as topology is concerned a k-simplex and a k-cube are the same thing; i.e., they are homeomorphic. 
Moreover, they are both homeomorphic to a closed k-dimensional ball B k ; generically, we shall refer to all three of them as k-cells. They will define the basic building blocks for the homological structure of the lattice. The choice of one representation or the other for a k-cell then amounts to whichever representation is most natural to the problem. Homology of lattices in R n The set S can define what one calls the vertex system for an abstract simplicial complex [17][18][19]; we shall also refer to it as the 0-skeleton of that complex. The corresponding points v I of the lattice L: S → R n then define the geometric realization of the vertex system and we denote it by L 0 . The 1-simplexes of this abstract simplicial complex are then a specified subset S 1 of S × S; that is, a specified collection of pairs (I, J) of multi-indices in S. For an ideal crystal, there are two possibilities for this specified subset: If the lattice suggests cubic simplexes as its building blocks then one includes all pairs (I, J) of vertices such that one pair of lattice coordinates differs by ± 1. We denote this in the usual multi-indicial way by: Otherwise, when the basic building blocks are triangular simplexes, one includes pairs such that I and J are nearest neighbors in S, although not necessarily all such pairs. We illustrate some of these possibilities for planar lattices: (1, −1) The geometric realization of S 1 then consists of all line segments in R n that connect the vertex pairs of L(S) when the multi-index pair is in S 1 ; we denote that set by L 1 . As far as topology is concerned, the use of a straight line segment is not necessary, and one can just as well ise a continuous non-self-intersecting curve. It might be physically reasonable to say that when lattice sites represent atomic ions the inclusion of the 1-simplex IJ defined by two vertices I and J in S 1 would correspond to their being bound by some relevant crystal binding force, such as ionic forces, Van der Waals forces, or covalent bonding. The 2-simplexes are a specified subset S 2 of either triples (I, J, K) of distinct, noncollinear multi-indices, in the case of hexagonal lattices, or quadruples (I, J, K, L) of distinct multi-indices, two pairs of which generate distinct, parallel lines, in the cubic case. For an ideal lattice, all such triples or quadruples are included in S 2 . The geometric realization of S 2 then consists of all triangles or parallelograms in R n that are spanned by the vertices that correspond to the multi-indices of the triples or quadruples of S 2 , respectively; we denote that set by L 2 . By the words "triangle" or "parallelogram," we are intending that one includes all of their interior points as surface segments, as well as the points of their edges. One way of defining these interior points is to form all convex combinations λx i + (1 -λ)y i , 0 ≤ λ ≤ 1, of point pairs {x i , y i } taken from the edge points. The 3-simplexes are then a specified subset S 3 of either quadruples of multi-indices that generate tetrahedral in Z m or octuples that generated parallelepipeds. Their geometric realizations are the corresponding tetrahedral or parallelepiped regions generated by the convex closure of the corresponding vertices in R n ; we denote that set by L 3 . Ultimately, one concludes with L m as the geometric realization of the lattice in R n , since simplexes of dimension higher than m cannot exist; of course, for us, m will general be 1, 2, or 3. 
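To make these constructions concrete, the following is a minimal Python sketch, with an illustrative frame and index range rather than values from the text: it generates a small planar Bravais lattice of sites v_I = Σ_k i_k a_k, builds the 1-skeleton with the cubic rule (an edge IJ is included when I and J differ by ±1 in exactly one coordinate), and computes the primitive-cell volume as the absolute determinant of the frame.

```python
import itertools
import numpy as np

# An m-frame {a_1, ..., a_m} in R^n (here m = n = 2; the values are illustrative)
a = np.array([[1.0, 0.0],
              [0.4, 0.9]])

# 0-skeleton: lattice sites v_I = sum_k i_k a_k for multi-indices I in a finite S in Z^m
S = list(itertools.product(range(3), repeat=2))
sites = {I: np.array(I, dtype=float) @ a for I in S}

# 1-skeleton (cubic-type rule): include the edge IJ when I and J differ by 1 in exactly one slot
def is_edge(I, J):
    return sorted(abs(i - j) for i, j in zip(I, J)) == [0, 1]

edges = [(I, J) for I, J in itertools.combinations(S, 2) if is_edge(I, J)]

# Volume of the primitive cell spanned by the frame (m = n): |det a|
cell_volume = abs(np.linalg.det(a))

print(len(S), "sites,", len(edges), "edges,", f"cell volume = {cell_volume:.2f}")  # 9 sites, 12 edges, 0.90
```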
In general, we shall refer to the simplexes that pertain to the hexagonal lattices -viz., nodes, branches, triangles, and tetrahedral -as triangular simplexes, while the one that pertain to cubic lattices -viz., nodes, branches, rectangles, and parallelepipeds -as cubic simplexes. In one sense, the triangular simplexes are the most fundamental, since any polyhedral region can ultimately be subdivided into triangular subregions. One of the early advances of simplicial homology was the proof that subdividing a given triangular simplicial complex into a finer complex by means of a "barycentric" subdivision, which adds points at the centers of "mass" of the simplexes in each dimension and connects them with additional simplexes in each dimension, does not affect the homological information that it contains. Previously, we called S the 0-skeleton of the abstract simplicial complex and L(S) the 0-skeleton of its geometric realization. More generally, the k-skeleton Σ k of the abstract simplicial complex is the union S 0 ∪ …∪ S k , while that of the geometric realization is Λ k = L 0 ∪ …∪ L k . In either case, the entire complex coincides with its m-skeleton. One then refers to the dimension of the complex as being the highest dimension of its simplexes. For the sake of brevity, it is convenient to denote k-simplexes as "words" formed from the "letters" of the "alphabet" S. For instance, the 1-simplex (I, J) is denoted by IJ, the 2-simplex (I, J, K) by IJK, and the 3-simplex (I, J, K, L) by IJKL, in the event that the unit cell is a tetrahedron; the corresponding expressions for the cubic case are analogous. The (abstract) boundary of a 1-simplex IJ is the set of its vertices: ∂(IJ) = {I, J}. The boundary of a 2-simplex consists of all of its consecutive vertex pairs -i.e., edges − taken in the sequence defined by the word, and including the last letter followed by the first one. For the hexagonal lattice: ∂(IJK) = {IJ, JK, KI}, while for the cubic lattice: The boundary of a 3-simplex is defined differently for the triangular and cubic cases. In the triangular case, one considers its four consecutive vertex triples -or faces − when one repeats the first two letters after the last one: For the cubic case, one must decompose the parallelepiped into its six rectangular faces: ∂(I 1 I 2 I 3 I 4 I 5 I 6 I 7 I 8 ) = {I 1 I 2 I 3 I 4 , I 1 I 5 I 6 I 2 , I 1 I 5 I 8 I 4 , I 2 I 6 I 7 I 3 , I 3 I 7 I 8 I 4 , I 5 I 6 I 7 I 8 }. In general, ∂σ k consists of all of its k−1-faces. Naturally, one must add the restriction on the definition of an abstract simplicial complex that L k must always contain the boundaries of all simplexes in L k+1 . If one regards all permutations of letters in a word that represents a simplex as equivalent then one thinks of that simplex as being unoriented. Hence, one then thinks of a choice of permutation for the sequence of letters for a k-simplex as defining an orientation for that k-simplex. More generally, one can think of any permutation of the letters of σ k = IJ…K as being one or the other orientation for σ k according to the sign of the permutation (+ = even, − = odd). One arbitrarily assigns a + or − sign to σ k according to which type of permutation of IJ…K defines it. In the case of line segments, one can think of orientation as defining a sense of motion from one vertex to the other, while for a triangle or parallelogram an orientation is a sense of rotation for a circuit around the vertices by way of the edges. 
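A small sketch of the word notation just described, in the triangular case and ignoring orientation: the faces of a simplex are obtained by deleting one letter at a time from its word.

```python
def faces(word: str):
    """Unoriented boundary of a triangular simplex written as a word: delete one letter at a time."""
    return {word[:i] + word[i + 1:] for i in range(len(word))}

print(faces("IJ"))    # {'J', 'I'}                    : the two vertices of the 1-simplex IJ
print(faces("IJK"))   # {'JK', 'IK', 'IJ'}            : the three edges of the 2-simplex IJK
print(faces("IJKL"))  # {'JKL', 'IKL', 'IJL', 'IJK'}  : the four faces of the tetrahedron IJKL
```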
What makes triangular simplexes particularly convenient is that one can obtain the faces of σ k by successively deleting each letter from the word IJ…K and multiplying by the sign of the permutation that takes the resulting word to the oriented word that is included in the set of abstract k−1-simplexes. The faces of a cubic simplex are not as concisely described. A k-chain c k is defined to be a finite formal sum of oriented k-simplexes σ k (i), i = 1, …, N with integer coefficients m i : Although the concept of a formal sum can be made mathematically rigorous, for the sake of computation, it is usually sufficient to treat it as a set of rules for symbol manipulation. Hence, to be consistent, one must have: As a consequence: (1)σ k = σ k . In order to interpret (0)σ k , one must think of multiplication by 0 as eliminating that term from the sum, and the "empty sum" is represented by 0. Since the empty set is a subset of any set, 0 is a k-chain for every k. Hence, one can define the addition and subtraction of k-chains by implicitly including summands of the form (0)σ k = 0 if necessary. If c k is as in (2.6) and c′ k is defined analogously then: which then makes: The set C k (Σ m ) or C k (Λ m ) of all k-chains formed from the k-simplexes of either the abstract simplicial complex generated by S or its geometric realization L in R n , respectively, when given the scalar multiplication by integers (2.7) and the addition (2.8), defines what is called a free Z-module (for more details on modules, see, e.g., MacLane and Birkhoff [20]) Such an algebraic structure is essentially a vector space whose vectors are the k-chains, but whose scalars come from a ring that is not a field, namely, Z; hence, multiplicative inverses to scalars do not usually exist (except for 1). The rank of C k (Σ m ) or C k (Λ m ) is the analogue of the dimension of a vector space and equals the number r k of k-simplexes in S k , in this case. Since the dimensions of C k (Σ m ) or C k (Λ m ) are the same, they are isomorphic as free Z-modules, and we shall henceforth deal with only C k (Λ m ). The fact that we are dealing with finite simplicial complexes implies that each C k (Λ m ) has a finite rank, and since it is a free module, one can treat the generating set {σ k (i), i = 1, …, r k } as a generalized basis for the module, although one only forms linear combinations with integer coefficients. Since non-zero k-chains do not exist for k > m one already has C k (Λ m ) = 0, k > m. If one now regards an abstract triangular k-simplex σ k = I 0 …I k as a k-chain then one can associate its abstract boundary, in the sense above, with a k−1-chain: in which the caret over I i means that the letter in question is deleted from the word in that summand. In particular: (2.13) If one uses cubic k-chains instead of triangular ones then the analogue of (2.10) is not as simple to write down, but we can say that if the edges of a square are oriented in a counterclockwise circular manner and the faces of a parallelepiped are oriented positive outward then the analogues of (2.12) and (2.13) are: One can then extend this definition of the boundary operator by linearity to a linear operator ∂ k : That is, if c k is as in (2.6), with N = r k , by allowing some m i to be zero, then: in which r k−1 represents the total number of all k−1-simplexes. The integer matrix [∂ k ] ji is r k−1 × r k and when the simplexes are unoriented, its elements equal 1 if σ k−1 (j) is a face of σ k (i) and 0 if it is not one of the faces. 
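The alternating-sign deletion rule for triangular simplexes is easy to implement directly. The following minimal sketch, not taken from the text, computes the signed boundary of a simplex as a formal sum and checks that applying the boundary twice gives zero; the same ±1 signs are what populate the incidence matrices of oriented simplexes discussed next.

```python
from collections import defaultdict

def boundary(simplex: tuple) -> dict:
    """Signed boundary of a triangular simplex I0...Ik as a formal sum:
    sum_i (-1)^i (I0 ... Ii-hat ... Ik), returned as {face: coefficient}."""
    out = defaultdict(int)
    for i in range(len(simplex)):
        face = simplex[:i] + simplex[i + 1:]
        out[face] += (-1) ** i
    return dict(out)

def boundary_of_chain(chain: dict) -> dict:
    """Extend the boundary operator by linearity to formal sums of simplexes."""
    out = defaultdict(int)
    for simplex, coeff in chain.items():
        for face, sign in boundary(simplex).items():
            out[face] += coeff * sign
    return {s: m for s, m in out.items() if m != 0}

tri = ("I", "J", "K")
print(boundary(tri))                      # {('J','K'): 1, ('I','K'): -1, ('I','J'): 1}
print(boundary_of_chain(boundary(tri)))   # {} : the boundary of a boundary vanishes
```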
When the simplexes have been given specific orientations, the 1's become + 1 or -1, depending upon whether ± σ k−1 (j) is an oriented face of σ k (i), respectively. One calls [∂ k ] ji the incidence matrix of σ k (i), and it basically represents the matrix of the linear map ∂ k with respect to the bases for the modules C k (Λ m ) and C k−1 (Λ m ) defined by σ k (i) and σ k−1 (j). If Sometimes the elements of the incidence matrix are expressed as simply the incidence , especially when one is dealing with modules that are not finitely generated. For instance, a triangulation 1 of a non-compact topological space will not have finitely many simplexes to it, and when one goes to singular homology the number of singular simplexes in each dimension will generally be uncountably infinite. Hence, one of the advantages of specializing the tools of homology to spaces that admit finite triangulations is that one can define bases for the modules of chains and treat the boundary operator as an integer matrix with a finite number of rows and columns. By definition, the boundary of any 0-simplex, and therefore any 0-chain is always zero; i.e., ∂ 0 : A physically interesting way of interpreting the boundary of a 1-chain c 1 is to think of c 1 as representing an electrical network whose branches are the 1-simplexes and whose currents I i are the coefficients in the formal sum. If one writes c 1 in the form (r 1 = number of branches in the network) then its boundary is: We shall use the word "triangulation" to mean a decomposition of a topological space into the geometric realization of an abstract simplicial complex regardless of whether the basic building blocks are triangular or cubic simplexes. in which r 0 represents the number of nodes in the network. Thus, the coefficient of each node of the network -i.e., each 0-simplex σ 0 (j) − in ∂ 1 c 1 , which represents the sum of the (signed) currents in the branches that have that node for a boundary component, and if ∂c 1 vanishes then one has a homological statement of Kirchhoff's law of currents: The current in a network is a 1-chain with real coefficients, and in the equilibrium state (constant currents, no accumulation of charge at any node) it has boundary zero 2 ; i.e.: A basic property of the boundary operator, which can be verified directly from (2.10), is that its "square" always vanishes: Since ∂ is linear, its kernel (i.e., the set of all k-chains with boundary zero) is also a Zmodule that is a submodule of C k (Λ m ), and we denote this kernel by Z k (Λ m ); an element of this Z-module is called a k-cycle. Similarly, the image of ∂ will also be a Z- From (2.20), one sees that every k-boundary is also a k-cycle, so B k (Λ m ) is a Zsubmodule of Z k (Λ m ) . The crucial question at the root of homology is whether the converse statement is true. In general, there might be k-cycles that do not bound k+1chains. One can think of a triangle or square minus its interior points as examples of 1cycles (the sequence of oriented edges) that do not bound a 2-chain. Hence, a nonbounding k-cycle represents a sort of "k+1-dimensional hole" in the space described by a chain complex. One then defines the quotient module H k (Λ m ) = Z k (Λ m ) / B k (Λ m ) to be the (integer) homology module in dimension k for either the abstract simplicial complex or its geometric realization in R n . 
An element of H_k(Λ_m) is thus an equivalence class of k-cycles under homology: Two k-cycles z_k and z′_k are said to be homologous if their difference is the boundary of a k+1-chain:

z_k ~ z′_k iff z_k − z′_k = ∂c_{k+1} for some k+1-chain c_{k+1}.

One then sees that when a k-cycle is a boundary it is "homologous to zero." In particular, two vertices v_1 and v_2 are homologous iff there is some connected 1-chain from one to the other, which then amounts to a continuous path in the geometric realization. Hence, the zero-dimensional homology module H_0(Λ_m) has a basis that is defined by the path-connected components of Λ_m, so if Λ_m is path-connected then H_0(Λ_m) is Z. Once again, since there are no (m+1)-chains but 0, one always has H_m(Λ_m) = Z_m(Λ_m).

Homology is related to homotopy, but not equivalent. In particular, although two homotopic k-chains will be homologous, the converse is not generally true. We illustrate some possibilities for homologies using 1-cycles that we represent as circles in Figure 3.

Figure 3. Examples of homologies of 1-cycles. The (a) illustration depicts a 1-cycle z_1 that is homologous to 0 (z_1 = ∂c_2); when z_1 is a loop this also makes it homotopic to a point (i.e., to a constant map). In the (b) illustration, we see a case of two homologous 1-cycles that are homotopic in a non-trivial way, and in (c) the 1-cycle z_1 is homologous to the 1-chain z′_1 + z″_1, which does not represent a homotopic image of z_1.

One finds that the first (i.e., one-dimensional) homology module H_1(Λ_m), when regarded as an Abelian group (i.e., ignoring the scalar multiplication), is isomorphic to the "Abelianization" of the fundamental group π_1(Λ_m); hence, when π_1(Λ_m) is Abelian to begin with, they are isomorphic. Thus, the first homology module of a figure eight is the same as the homology of two disjoint circles, although their fundamental groups differ. (For more details on the computation of homotopy groups, one might confer Massey [22].)

It is important to note that whereas the module C_k(Λ_m) was free, so it took the form of Z^{r_k}, where r_k is the number of k-simplexes, the same is not true for H_k(Λ_m), in general. It will generally be the direct sum of a free module Z^{b_k}, where b_k is called the k-th Betti number of Λ_m, and a finite number of finite cyclic groups, which one refers to as the torsion part of H_k(Λ_m). That means that for some k-cycles z_k there will be a non-zero integer N such that N z_k = ∂c_{k+1} for some c_{k+1}. An elementary example of a topological space in which torsion appears in the integer homology is any real projective space RP^n. In fact, when n > 1, if one represents RP^n as the quotient of the n-sphere S^n by the identification of antipodal points then the homology modules do not change, except in dimension one, where a torsion summand of Z_2 appears in place of 0. Thus, not only does RP^n become multiply-connected as a result of the identification, when n is even it also becomes non-orientable, as a manifold; we shall not go into that here, but refer the curious to the chapters in Vick [23] or Greenberg [24] that are concerned with the topology of manifolds. Note that RP^3 is not only orientable, it is parallelizable, in the sense that it admits a global, continuous field of tangent linear 3-frames. Generally, the Betti number in each dimension is equal to the number of "holes" of that dimension (plus one).
For instance, one can say that an n-sphere has an (n+1)-dimensional hole in it, unless the interior points are included, in which case it becomes contractible to a point, or homologous to 0. When one forms the alternating sum of the Betti numbers the resulting integer is called the Euler-Poincaré characteristic of the topological space in question:

χ = b_0 − b_1 + b_2 − … = Σ_k (−1)^k b_k.    (2.22)

Interestingly, this number also equals the alternating sum of the ranks of the modules C_k(Λ_m), even though the boundary operator has not been introduced at that point. For instance, in the case of a triangulated compact surface Σ:

χ[Σ] = V − E + F,

where V, E, and F are the numbers of vertices, edges, and faces, resp., which is the form that Euler himself used in his treatment of the Königsberg Bridges problem.

Another important point to comprehend about homology is that the only structure that the simplexes of each dimension contribute to the modules of chains in those dimensions is due to their raw number; i.e., the free Z-module generated by a set of N apples is isomorphic to the free Z-module generated by a set of N oranges, or anything else. The only way that topology enters into the picture is by way of the definition of the boundary operator itself, which is then tailored to the demands of the specific problem. For instance, we illustrate this process in the elementary cases of simplicial complexes that represent a circle and a disc in Figure 4.

If we let σ_0(i), i = 1, 2, 3 be A, B, and C, resp., while σ_1(j), j = 1, 2, 3 represent AB, AC, and BC, resp., then the boundary operators ∂_1 and ∂_0 for the circle have the incidence matrices:

[∂_1] = [ −1 −1 0 ; 1 0 −1 ; 0 1 1 ]    (rows A, B, C; columns AB, AC, BC),    [∂_0] = 0.

If we let σ_0(i), i = 1, 2, 3, 4 be A, B, C, and D, resp., while σ_1(j), j = 1, …, 6 represent AB, AC, AD, BC, BD, CD, resp., and σ_2(k), k = 1, 2, 3 represent ABC, BCD, ADC, resp., then the boundary operators ∂_2 and ∂_1 for the disc have the incidence matrices:

[∂_2] = [ 1 0 0 ; −1 0 −1 ; 0 0 1 ; 1 1 0 ; 0 −1 0 ; 0 1 −1 ]    (rows AB, AC, AD, BC, BD, CD; columns ABC, BCD, ADC),

[∂_1] = [ −1 −1 −1 0 0 0 ; 1 0 0 −1 −1 0 ; 0 1 0 1 0 −1 ; 0 0 1 0 1 1 ]    (rows A, B, C, D; columns AB, AC, AD, BC, BD, CD).

One can represent all of the compact surfaces without boundary by means of identifications of the edges of a square, which then leads to triangulations of them that allow one to also express their homology modules (see Massey [22]).

2.3 Lattice defects

Having already introduced more mathematical abstraction than most solid-state physicists feel comfortable with, we now return to the realm of real-world crystal lattices to apply those abstractions to the homology of defective lattices. If one were dealing with an ideal crystal lattice then every pair of lattice sites would be connected by some 1-chain, every 1-cycle would bound some 2-chain, and every 2-cycle would bound some 3-chain. Hence, the homology modules of an ideal lattice all vanish, making homology a pointless complication to introduce in that context. However, in the real world crystal lattices are never ideal, but contain lattice defects of various types.

The three basic types of lattice defects are point defects, linear defects, and surface defects. Point defects take the form of lattice vacancies, interstitial inclusions of ions that ordinarily belong to the lattice, and interstitial inclusions or lattice substitutions of impurities. We illustrate these possibilities (interstitial ions, interstitial impurities, substitution impurities, and lattice vacancies) in Figure 5.

Figure 5. Types of lattice point defects.

Linear defects generally take the form of dislocations and disclinations. Both of them were collectively referred to as "distortions" by Volterra [25], who first established their role in the equilibrium state of elastic media by contributing generators to the fundamental group.
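Before turning to how such defects show up in homology, the circle-versus-disc example above can be checked numerically, since the Betti numbers are determined by the ranks of the incidence matrices. The sketch below (illustrative Python using numpy; rational ranks are used, so torsion is ignored, which is harmless here) computes b_k = dim C_k − rank ∂_k − rank ∂_{k+1} for both complexes; the circle exhibits exactly the kind of non-bounding 1-cycle that a deleted 2-cell leaves behind.

```python
import numpy as np

def betti_numbers(boundary_maps, n_simplexes):
    """b_k = dim C_k - rank(d_k) - rank(d_{k+1}); torsion is ignored (ranks over Q)."""
    ranks = [np.linalg.matrix_rank(d) if d.size else 0 for d in boundary_maps]
    betti = []
    for k, n_k in enumerate(n_simplexes):
        r_k = ranks[k] if k < len(ranks) else 0           # rank of d_k
        r_k1 = ranks[k + 1] if k + 1 < len(ranks) else 0  # rank of d_{k+1}
        betti.append(n_k - r_k - r_k1)
    return betti

# Circle: vertices A, B, C and edges AB, AC, BC (d_0 = 0).
d1_circle = np.array([[-1, -1,  0],
                      [ 1,  0, -1],
                      [ 0,  1,  1]])
b_circle = betti_numbers([np.zeros((0, 3)), d1_circle], n_simplexes=[3, 3])
print("circle:", b_circle, "chi =", b_circle[0] - b_circle[1])            # [1, 1], chi = 0

# Disc: vertices A, B, C, D; edges AB, AC, AD, BC, BD, CD; faces ABC, BCD, ADC.
d1_disc = np.array([[-1, -1, -1,  0,  0,  0],
                    [ 1,  0,  0, -1, -1,  0],
                    [ 0,  1,  0,  1,  0, -1],
                    [ 0,  0,  1,  0,  1,  1]])
d2_disc = np.array([[ 1,  0,  0],
                    [-1,  0, -1],
                    [ 0,  0,  1],
                    [ 1,  1,  0],
                    [ 0, -1,  0],
                    [ 0,  1, -1]])
assert np.all(d1_disc @ d2_disc == 0)
b_disc = betti_numbers([np.zeros((0, 4)), d1_disc, d2_disc], n_simplexes=[4, 6, 3])
print("disc:", b_disc, "chi =", b_disc[0] - b_disc[1] + b_disc[2])         # [1, 0, 0], chi = 1
# The same Euler characteristics follow from the simplex counts alone: 3 - 3 = 0 and 4 - 6 + 3 = 1.
```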
The dislocations amount to translations of linear frames when they are parallel-translated around a (Burgers) loop that encircles a dislocation line and can be of the edge or screw (i.e., helical) type. Hence, they have been associated with torsion 3 in the connection that defines that parallel translation, at least when one goes to a continuous distribution of dislocations 4 . Disclinations introduce a rotation of the frame around a (Frank) loop, and therefore relate to the curvature of the connection when they are continuously distributed. Surface defects usually take the form of grain boundaries, or "walls." That is, the lattice is defined only up to that boundary surface. One can treat the lattice defects as all defining "deletions" from the abstract simplicial complex associated with a lattice S. Except for the interstitial lattice ions, which will not affect the homology, a point defect will imply deleting the vertex, in the case of vacancies and substitutions, as well as the cells that surround it when m > 1. In the case of a linear lattice (m = 1), the deletion disconnects the lattice and introduces a generator to H 0 (Λ 1 ). In the case of planar lattices (m = 2), a point defect mostly introduces a generator of H 1 (Λ 2 ), since it will be surrounded by a 1-cycle that no longer bounds the deleted 2-cell. In the case of a space lattice (m = 3), a point defect will introduce a generator of H 2 (Λ 3 ), since it will be surrounded by a 2-cycle that no longer bounds the missing 3-cell. A line defect, which can only exist when m > 1, will disconnect a planar lattice and therefore introduce a generator of H 0 (Λ 2 ). When m = 3, it introduces a generator of both the fundamental group π 1 (Λ 3 ) and H 1 (Λ 3 ). A surface defect, which can only exist when m > 2, disconnects a spatial lattice and thus introduces a generator of H 0 (Λ 3 ). Lattice defects are not only unavoidable in real crystals, but also sometimes essential. For instance, the manufacture of semiconductors for electronics applications generally involves the intentional introduction of lattice impurities in a controlled manner, because without the impurities one would not produce the desired effect. Dislocations were introduced into the theory of the yield strength at which elastic materials give way to plastic deformation in order to account for a major discrepancy in the earlier theory. Lattice defects also play an important role in the propagation of waves of various types throughout a solid medium, such as phonons, photons, and solitons, and can lead to scattering and diffraction. 2.4 Reciprocal lattices When one embarks upon the study of wavelike excitations of crystal lattices, one finds that it is just as important to consider not only a lattice in R n , but also its reciprocal lattice in the dual space R n* . Recall that the dual vector space V * to any vector space V is composed of all linear functionals on V. If α ∈ V * is one such functional and v ∈ V is any vector then one can denote the application of α to v by either the scalar α(v) or <α; v>. If {e i , i = 1, …, n} is a basis for V then one can define a unique reciprocal basis {θ i , i = 1, …, n} by specifying that: Geometrically, one can think of the linear functional α as being associated with the linear hyperplane in V that is defined by all vectors v that are annihilated by α: i.e., all v such that α(v) = 0. 
In the case of V = R n , whereas the elements of V are usually represented as column vectors of real numbers, the elements of V * are represented as row matrices of real numbers. One of the most physically important inhabitants of R 3* is the wave covector k = (k 1 , k 2 , k 3 ) that is associated with any wave motion in R 3 . Its components k i = 2π /λ i represent the wave numbers in each direction defined by the canonical basis {(1, 0, 0), (0, 1, 0), (0, 0, 1)}, expressed in units of radians per unit distance. Hence, when one evaluates k on a position vector x = (x 1 , x 2 , x 3 ) the resulting number is: which then represents the differential increment of phase change in the direction from the origin to the point x, as long as the k i 's are constant. The (tangent) plane annihilated by k at each point is the tangent space to the constantphase surface (φ = const.) through that point if we let: The vector spaces V and V * are always linearly isomorphic, at least in the finitedimensional case. However, the choice of isomorphism is not always defined uniquely. Although this fact is crucial in geometrical matters, in which one considers tensor fields whose components must transform covariantly or contravariantly under changes of basis, it is not as essential in the name of homology. In particular, one can use the canonical basis for R n and its reciprocal basis for R n* with impunity, as well as the linear isomorphism of R n with R n* that this defines. This effectively amounts to transposing the column vector [v 1 , …, v n ] T to the row vector [v 1 , …, v n ], so the components of the dual vector v* to v with respect to the reciprocal basis θ i will be the same as the components of v with respect to e i . This association does not, however, behave well under basis changes. Whenever V is associated with a scalar product <.,.>, one can also define an isomorphism of V with V* by associating each v ∈ V with the linear functional <v, .>, which takes any vector w to the number <v, w>. In the case of the Euclidian scalar product on R n , the canonical basis is orthonormal, so <e i , e j > = δ ij and the components v i go to v i = δ ij v j = v i . Thus, in this case, the isomorphism of R n with R n* that is defined by the scalar product is the same as the one obtained from associating the canonical basis with its reciprocal basis. The only difference is in the fact that any change of basis that takes the canonical basis to another orthonormal basis -i.e., any Euclidian rotation -will preserve the isomorphism of R n with R n* . For crystal lattices, the most natural basis to choose in R m is the basis {a 1 , …, a m } that one uses to generate the Bravais lattice. For the case of m = 3, one can take advantage of the vector cross product on R 3 to define the reciprocal basis {a 1 , a 2 , a 3 } by way of 5 : Thus, the linear functional a i annihilates the plane spanned by a j × a k , while a i (a i ) = 1, in any case, since we have normalized the reciprocal basis by the volume V of the parallelepiped spanned by {a 1 , a 2 , a 3 }. When one has settled on a linear isomorphism of R n with R n* , any lattice L: S → R n is automatically associated with a lattice in R n* , which one then calls the reciprocal lattice. That is, each lattice site corresponds to a linear functional. 
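A quick numerical check of the reciprocal-basis construction is to build a^i = (a_j × a_k)/V with numpy and verify the duality relations a^i(a_j) = δ^i_j. The sketch below is illustrative only; the generator vectors are an arbitrary non-orthogonal example, not taken from the text.

```python
import numpy as np

# An arbitrary (non-orthogonal) set of Bravais generators in R^3.
a1 = np.array([1.0, 0.0, 0.0])
a2 = np.array([0.3, 1.2, 0.0])
a3 = np.array([0.1, 0.4, 0.9])

V = np.dot(a1, np.cross(a2, a3))   # volume of the spanned parallelepiped

# Reciprocal basis: a^1 = (a2 x a3)/V, and cyclically.
b1 = np.cross(a2, a3) / V
b2 = np.cross(a3, a1) / V
b3 = np.cross(a1, a2) / V

# Duality relations a^i(a_j) = delta^i_j.
G = np.array([[np.dot(bi, aj) for aj in (a1, a2, a3)] for bi in (b1, b2, b3)])
assert np.allclose(G, np.eye(3))

# A reciprocal lattice site n_k a^k pairs with a direct lattice site m^k a_k to give
# the integer n_k m^k.  (Solid-state texts often put an extra factor of 2*pi into
# the reciprocal vectors; that convention is omitted here.)
n, m = np.array([2, -1, 3]), np.array([1, 4, 0])
site_recip = n[0] * b1 + n[1] * b2 + n[2] * b3
site_direct = m[0] * a1 + m[1] * a2 + m[2] * a3
print(np.dot(site_recip, site_direct))   # -2.0  (= 2*1 + (-1)*4 + 3*0)
```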
In particular, a Bravais lattice in R m that is generated by {a 1 , …, a m } corresponds to a reciprocal lattice in R m* that is generated by {α 1 , …, α m }, so the typical reciprocal lattice site takes the form: α = n k α k , (k = 1, …, m), (2.31) in which n k are the integers that generate the original lattice in R m . The reciprocal unit cell that corresponds to a Wigner-Seitz cell in R m is referred to as the first Brillouin zone. It plays an important role in the study of wave excitations in crystal lattices. 2.5 Cohomology and lattices One can define a corresponding notion of linear functionals on Z-modules, except that the scalars that one produces are integers, not real numbers. This allows one to also define the dual Z-module C k (Λ m ) to the Z-module C k (Λ m ) and its elements are called kcochains on Λ m . The reciprocal basis to the set {σ k (i), i = 1, …, N k } of k-simplexes in Λ m is simply the set {σ k (i), i = 1, …, N k } of Z-linear functionals σ k (i) that take each simplex σ k (j) to the integer: When one uses the reciprocal bases σ k (i) and σ k+1 (j) for C k (Λ m ) and C k+1 (Λ m ), one finds that the coboundary operator can be expressed by a matrix [δ k ] ij that is simply the transpose of the incidence matrix [∂ k+1 ] ji that is associated with the boundary operator, since the definition (2.35) makes δ k the transpose -or adjoint -of the operator ∂ k+1 . One has analogous concepts to those of k-cycle and k-boundary in the form of kcocyles and k-coboundaries. From (2.35), one can characterize a k-coboundary by the property that whenever it is evaluated on a k-cycle the result is zero, while a k-cycle has the property that whenever it is evaluated on a k-boundary it gives zero. The Z-module of k-cocycles is denoted by Z k (Λ m ), the k-coboundaries by B k (Λ m ) and the resulting quotient module H k (Λ m ) = Z k (Λ m ) / B k (Λ m ) is called the k th cohomology module of Λ m . Hence, the elements of H k (Λ m ) are equivalence classes of k-cocycles under the equivalence relation of cohomology. Two k-cocycles c k and c′ k are cohomologous if their difference is the coboundary δ k−1 c k−1 of some k−1-cochain c k−1 : i.e., c k ~ c′ k iff c k − c′ k = δ k−1 c k−1 . Just as Kirchhoff's law of currents found an interpretation in terms of the homology of a network, similarly, his law of voltages has an expression in terms of cohomology. If each branch σ 1 (i) of the network is associated with a potential difference ∆V i , and σ 1 (i) is its reciprocal co-branch then the total potential difference ∆V = for any branch σ 1 (i) . Hence, saying that ∆V is a coboundary is the same thing as saying that it really does represent the difference between two potentials defined at each node. One aspect of this latter example is that we are using cohomology with coefficients in a field, namely, the real numbers. This has a big advantage in practical terms because even though the free Z-module C k (Λ m ) is isomorphic to the free Z-module C k (Λ m ) it is not generally true that the Z-module H k (Λ m ) is isomorphic to the Z-module H k (Λ m ) since they do not have to be free; however, their free summands are isomorphic. Matters simplify considerably when one goes to homology and cohomology with real coefficients, for which the Z-modules become R-vector spaces, because one is only left with the free summands, so H k (Λ m ; R) = H k (Λ m ; R) * . 
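The statement that the coboundary matrix is the transpose of the incidence matrix is just the adjointness relation <δ_k c^k; c_{k+1}> = <c^k; ∂_{k+1} c_{k+1}>, and the Kirchhoff voltage law is the statement that potential drops derived from node potentials form a coboundary, which therefore evaluates to zero on any 1-cycle. The sketch below (illustrative; it reuses the filled-triangle incidence matrices from the earlier example and assumes numpy) checks both statements numerically.

```python
import numpy as np

rng = np.random.default_rng(0)

# Filled triangle ABC: d1 maps edges (AB, AC, BC) to vertices (A, B, C),
# d2 maps the face (ABC) to edges.
d1 = np.array([[-1, -1,  0],
               [ 1,  0, -1],
               [ 0,  1,  1]])
d2 = np.array([[ 1],
               [-1],
               [ 1]])

# Coboundary operators are the transposes of the next boundary operator.
delta0 = d1.T    # 0-cochains -> 1-cochains
delta1 = d2.T    # 1-cochains -> 2-cochains

for _ in range(5):
    c0 = rng.integers(-3, 4, size=3)   # a 0-cochain (one integer per vertex)
    c1 = rng.integers(-3, 4, size=3)   # a 1-chain   (one integer per edge)
    # Adjointness: <delta0 c0; c1> = <c0; d1 c1>
    assert (delta0 @ c0) @ c1 == c0 @ (d1 @ c1)

# Kirchhoff's voltage law: branch potential differences that come from node
# potentials form a coboundary, and a coboundary evaluates to zero on any 1-cycle.
node_potentials = np.array([5, 2, -1])       # V_A, V_B, V_C
branch_drops = delta0 @ node_potentials      # (V_B - V_A, V_C - V_A, V_C - V_B)
loop = np.array([1, -1, 1])                  # the 1-cycle AB - AC + BC
assert branch_drops @ loop == 0
```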
2.6 Orientation and the Poincaré isomorphism

First, let us recall some manifold terminology: A topological manifold of dimension n is a topological space such that every point has at least one neighborhood that is topologically equivalent to R^n, or - equivalently - an open n-ball. The interiors of the supports of all geometrical simplexes fall into this category. A topological manifold with boundary, by comparison, is one such that every point has a neighborhood that is homeomorphic to either an open n-ball or its intersection with the closed half-space R^{n+} in R^n that is defined by - say - x^n ≥ 0.

However, one must be careful about distinguishing the boundary of a topological manifold and the boundary of a triangulation of that manifold. In the case of any simplex they are the same thing, but that is only because any simplex is orientable, not only in the sense that we introduced above, but also in the following sense: Suppose the n-dimensional topological manifold M is triangulated by the n-chain:

[M] = σ_n(1) + … + σ_n(r_n).

One says that M is orientable when the orientations of the σ_n(i) can be chosen coherently, so that ∂[M] is a chain that triangulates the boundary of M, and vanishes when M has no boundary. In Figure 6 we illustrate triangulations of a cylinder, which is an orientable 2-manifold with boundary, and a Möbius band, which is non-orientable, to show how these conditions apply. One confirms that in the case of the cylinder the boundary of [M] represents the top and bottom triangles that remain when the two edges AB are identified, but in the case of the Möbius band the boundary of the manifold is triangulated by the 1-cycle (BC + CD + DA) + (AE + EF + FB). However, if one passes to homology "modulo 2" by replacing the coefficients of the simplexes by 0 if the coefficient is even and 1 if it is odd - keeping in mind that −n is also n modulo 2 - then one sees that ∂[M] for the Möbius band agrees with the boundary of the Möbius band modulo 2, which one can then write:

∂[M] ≡ (BC + CD + DA) + (AE + EF + FB)    (mod 2),

which also means that the difference between the 1-cycles on each side of the congruence is a 1-chain with even coefficients. This is actually a special case of a more general statement: whether an n-dimensional manifold with boundary M is or is not orientable, if it is expressible as the n-chain [M] then ∂[M] agrees with a triangulation of the boundary of M modulo 2. This is related to the fact that a non-orientable topological manifold has a two-fold covering manifold that is orientable.

As an example of a triangulation of an orientable manifold without boundary we give the 2-sphere, which we express as its homeomorphic image, the surface of a tetrahedron with vertices A, B, C, D; for instance, with the coherent orientation:

[S^2] = BCD − ACD + ABD − ABC.

By direct computation, one sees that ∂_2[S^2] is zero, which is consistent with one's expectation that a 2-sphere is orientable. By contrast, the projective plane RP^2, which is obtained from the 2-sphere by identifying antipodal points, is not orientable, although verifying that ∂_2[RP^2] consists of a sum of 1-simplexes with even coefficients first requires a triangulation, which can be involved, so we refer the reader to the discussion in Alexandroff [17]. Of course, the two-fold covering of RP^2 with an orientable manifold is by way of the map S^2 → RP^2 that identifies antipodal points.

The practical significance of orientability for us in what follows is the fact that a compact, orientable n-dimensional topological manifold M - i.e., one that can be given a finite triangulation - will always have a generator of H_n(M) in the form of the fundamental n-cycle [M]. (Recall that if M is n-dimensional then the only n-boundary is 0.)
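The claim that ∂_2[S^2] = 0 for the tetrahedral triangulation can be checked mechanically. The following sketch (illustrative Python, not from the original text, using the same alternating-sign boundary rule as before) verifies that the 2-chain BCD − ACD + ABD − ABC has vanishing boundary, and shows that dropping one face leaves a non-zero, non-bounding 1-cycle around the resulting hole.

```python
def boundary(chain):
    """Boundary of a chain {simplex word (letters already sorted): coefficient}."""
    out = {}
    for simplex, coeff in chain.items():
        for i in range(len(simplex)):
            face = simplex[:i] + simplex[i + 1:]
            out[face] = out.get(face, 0) + ((-1) ** i) * coeff
    return {s: c for s, c in out.items() if c != 0}

# Fundamental 2-cycle of the tetrahedron surface (a triangulated 2-sphere):
sphere = {"BCD": 1, "ACD": -1, "ABD": 1, "ABC": -1}
print(boundary(sphere))     # {} : the coherently oriented faces cancel, so d[S^2] = 0

# Removing one face leaves a 2-chain whose boundary is the 1-cycle around the hole,
# which no longer bounds -- the simplicial picture of a deleted 2-cell.
punctured = dict(sphere)
del punctured["ABC"]
print(boundary(punctured))  # {'BC': 1, 'AC': -1, 'AB': 1}, i.e. the 1-cycle BC - AC + AB
```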
Furthermore, a compact orientable n-dimensional topological manifold M also has a canonical set of isomorphisms H k (M) → H n−k (M) for each dimension k that are defined by means of the fundamental n-cycle, or really, its dual fundamental n-cocycle in H n (M); these isomorphisms are called the Poincaré isomorphisms. Consequently, the free and torsion parts of both modules involved have just as many generators. In particular, each path component of M is associated with a generator of H n (M) in the form of the fundamental n-cycle that triangulates it. In the event that M is not path-connected the fundamental n-cycle then decomposes into essentially independent sub-chains as a sum over all n-simplexes. Closely related to the Poincaré isomorphisms, but not precisely identical to that concept, is the notion of Poincaré duality, which takes the form of isomorphisms H k (M) → H n−k (M) or H k (M) → H n−k (M) for each k. One needs to have isomorphisms H k (M) → H n−k (M), in addition to the aforementioned Poincaré isomorphisms, and for this, one generally needs a so-called "intersection form" on the homology modules, which works somewhat like a scalar product in the way that it defines the desired isomorphism. We shall not go into the details of the precise definition of the Poincaré isomorphisms and duality at this point, since we shall only use them casually in the sequel. The curious are then referred to the cited literature on the topology of manifolds (e.g., [23,24]). A consequence of Poincaré duality is that a compact, orientable, n-dimensional topological manifold M (without boundary) has its k th Betti number b k equal to its (n−k) th Betti number b n−k . Hence, the Euler-Poincaré characteristic takes the form: In particular, for n-spheres, χ[S n ] vanishes when n is odd and equals 2 when n is even. The 2 comes about because S n is path-connected, so b 0 = 1 and orientable, so b n = 1, by Poincaré duality, the intermediate Betti numbers vanish for spheres. More generally, for a compact, orientable surfaces S, χ[M] = 2(1 -p), where the number p, which represents the number of "2-holes" is called the genus of the surface. For a sphere p is then 0, while for a torus, p is 1. Order parameters The concept of an order field in a material medium plays the same fundamental role in condensed-matter physics that electromagnetic fields do in the theory of electromagnetism or gravitational fields in the theory of gravitation. One should not think of the use of the word "order" as necessarily being related with the entropy density in the matter, although that concept might define a useful example of an order parameter. Other examples include the electric polarization vector field P, the magnetization vector field M, and the director line fields of nematic liquid crystals that define their axis of alignment without also defining the choice of two orientations that line can have. Order parameter spaces If the material medium is described by a differentiable manifold M of dimension n, which we shall call the configuration manifold of the medium, then the order field is described by a differentiable map φ: M → R, x ֏ φ(x), where R is an r-dimensional manifold called the order parameter space, so its elements are the order parameters; one also encounters the term space of internal states for R. 
Although nowadays the legacy of gauge field theories is that one tends to always start with physical fields as sections of appropriate fiber bundles and regard the present construction as only being of interest locally or when the bundle is trivial, one finds that for many of the practical examples in solid-state and condensed matter physics, which tend to involve manifolds that are open subsets of R^n, the restriction to trivial bundles is perfectly natural. One can then introduce the complicating factors that arise from the non-triviality of fiber bundles as an advanced topic, much as one starts with ideal lattices and introduces defects, or starts with linear constitutive laws and introduces nonlinearities. Hence, for our present purposes, we shall discuss the simpler definition of an order field, as we have already introduced considerable complexity by way of the homology of lattices, at least as far as solid-state physics is concerned.

The introduction of homotopy into the study of order fields is associated with the concept of topological stability. An order field is called topologically unstable iff it is homotopic to a constant field. Hence, topological stability can only come about when there is more than one homotopy class [φ] of maps from M to R. As long as M has the homotopy type of a sphere of some dimension - say, S^n - the homotopy classes of order fields on M can be identified with the elements of π_n(R). This restriction is reasonable when M is a vector space minus a point (the "defect") or if the order field is required to be constant at infinity, but when one has more than one defect, it becomes necessary to replace the global homotopy classes of φ with local ones - à la Poincaré-Hopf - that are associated with a triangulation of M. This then becomes the basis for obstruction theory, as we shall see.

Although many common order parameters are scalars, vectors, and tensors, which suggests that one can often use vector spaces as one's order parameter spaces, nevertheless, there are enough established examples of order parameters that belong to more general manifolds than vector spaces to justify the generalization in the eyes of physics. For instance, when the order parameter is a quantum spin state one can use a finite set, such as {−(ℏ/2)j, −(ℏ/2)(j−1), …, +(ℏ/2)(j−1), +(ℏ/2)j}, for R. The homotopy groups of such a set vanish except for π_0(R), which has 2j + 1 elements to it. Of course, π_0(R) does not usually admit a natural group structure and must be treated as only a set.

When the order parameter is a scalar, vector, or tensor that is allowed to be zero, the order parameter space is simply a vector space, which is contractible. Hence, all of its homotopy groups vanish, and many of the following topological considerations are irrelevant, even for defective lattices. However, when the order parameter is a non-zero vector field on the medium, one finds that, since deleting the origin of any vector space of dimension r+1 leaves a topological space that is homotopically equivalent to a sphere of dimension r (by radial contraction), one can think of any non-zero vector field as taking its values in S^r, at least for the purposes of homotopy. As mentioned above, when r > 0, π_k(S^r) vanishes for k < r, and then π_r(S^r) = Z. However, although the homotopy groups of a circle then vanish for dimensions above one, the analogous statement is not true for higher-dimensional spheres, even though it is true for homology and cohomology modules.
This is one more example of how homotopy theory can be more complicated than homology. Similarly, when the order parameter is a unit vector field, one can also use S r for the order parameter space, as when one is specifying merely the oriented spatial direction of something. It is also sometimes necessary to specify the non-oriented spatial direction of something as an order parameter, as with nematic liquid crystals. Instead of a unit vector, one must consider the entire line that it generates, which is homotopically equivalent to replacing the unit vector u with the pair {−u, +u}, which is equivalent to identifying antipodal points on the unit r-sphere, which defines the real projective space of r dimensions RP r . The homotopy groups of RP r are then isomorphic to those of S r , except for the fundamental group, which is Z 2 . Interestingly, as far as homotopy is concerned, defining a Lorentzian metric on a manifold, such as the spacetime manifold in general relativity, is equivalent to defining a tangent line at each point of it, so the order parameter space at each point is the "projectived" tangent space -viz., the projective space of all tangent lines through the origin. Thus, the introduction of a Lorentzian metric on the spacetime manifold has much in common with the study of uniaxial nematic liquid crystals, although in the case of spacetime, it is usually harder to justify the triviality of the bundle in question, since that would make spacetime parallelizable, which is a harder condition to impose than it sounds, topologically. Another common order parameter space with non-trivial homotopy is the rdimensional torus T r = S 1 × … × S 1 (p factors) = R r / Z r . Its coordinates can usually be regarded as r-tuples (θ 1 , …, θ r ) of rotation angles, such as phases. The homotopy groups are then isomorphic to the direct sum of r copies of π 1 (S 1 ) = Z -i.e., Z p -in dimension one and vanish in all higher dimensions. The examples S r , RP r , and T r have in common the fact all of them can be represented as homogeneous spaces. That is, there is some Lie group G with a closed subgroup H that makes R into the coset space G/H. We have already represented T r in the form R r / Z r , and one can always represent S r as the homogeneous space SO(r+1) / SO(r), at least when r > 1. For instance, S 2 is diffeomorphic to SO(3) / SO(2), while RP r = S r / Z 2 can be represented as SO(r+1) / O(r), since the only difference between the rotations of O(r) and those of SO(r) is in the sign of the determinant. The computation of the homotopy groups for a given homogeneous space usually follows from considering the long exact sequence of homotopy groups that follows from the short exact sequence of pointed spaces H → G → G/H, namely: one calls the homotopy exact sequence of the fibration of G over G/H (see [30] for more details). This sequence simplifies considerably when H is a discrete subgroup, since in that event only π 0 (H) is non-trivial, and in fact consists of the group H, itself. As a result the long exact sequence breaks up into sequences of the form: 0 → π k (G) → π k (G/H) → 0 for k > 1 and the final sequence: The advantage of the former type of short exact sequence is that it implies that π k (G) is isomorphic to π k (G/H) as a group. One can also show that the latter sequence implies that π 1 (G/H) is isomorphic to H. Thus, the only difference between the homotopy type of G and that of G/H is in dimensions zero and one. 
For instance, since RP 3 is SU(2)/Z 2 , and SU(2) is homotopically equivalent to S 3 , the only thing that changes by the identification of antipodal points on the 3-sphere is that the resulting homogeneous space is no longer simply connected, but doubly connected, since π 1 (SU(2)/Z 2 ) is isomorphic to π 0 (Z 2 ), which is the group Z 2 . It is no coincidence that homogeneous spaces play such a key role as order parameter spaces, since order parameter spaces usually come about as a result of spontaneous symmetry breaking. One starts with a space X of "field values," such as a vector space, and a real-valued function F: X → R that generally represents either the free energy in the macroscopic state of the system of atoms or molecules in the material or some tensor field that describes some physical property of the medium, such as a constitutive law. One then assumes that F is symmetric under the action on X of a Lie group G of internal symmetries, such as rotational symmetries. Hence F is constant on the orbits of that action and any orbit through a point a ∈ X will take the form G / G a , where the subgroup G a in G is called the isotropy subgroup of the group action at a. In the case of the action of SO(2) on R 2 by matrix multiplication there are two possible isotropy subgroups, namely, the identity group I and SO (2) itself. The orbits corresponding to the former subgroup look like SO(2) -i.e., the concentric circles around the origin -and the orbit whose isotropy subgroup is SO(2) becomes the fixed point at the origin. Spontaneous symmetry breaking of the ground state -viz., the minimal orbit of F -then takes the form of requiring that F take its minimum on one of the circles, instead of the fixed point. Hence, the ground state is no longer a unique point in the internal state space, but a subset of generally higher dimension than zero. By now, a consistent vocabulary has emerged for the topological defects in ordered media that are due to the homotopy type of R, and, in fact, the terms overlap the language of lattice defects somewhat. A topological defect, in any form, defines a generator of π k (R) in some dimension k. For k = 0, 1, 2, 3 the corresponding terms are wall, vortex (or string), hedgehog (or monopole), and texture, resp. We summarize some of the most commonly studied media, their order parameters, and the first three homotopy groups in Table 1. In this table, the group D 2 represents the dihedral group of a rectangle, which is an Abelian group that consists of the identity transformation of R 3 and the three inversions of each coordinate. The group Q is a non-Abelian group of four elements and is isomorphic to the group of unit quaternions {1, i, j, k} under multiplication. E(3) represents the three-dimensional Euclidian group O(3) □ R 3 , where the symbol □ represents the semi-direct product. It is then important point to note that the case of non-Abelian fundamental groups is not merely a mathematical pathology in the eyes of physical examples, since it pertains to the examples in the table above of biaxial nematics and cholesterics. One finds that the non-Abelian composition of homotopy classes of loops is closely related to the issue of whether those loops are linked or not. (There is considerable discussion of the issues associated with the linking of loops in Monastrysky [11]). Table 1. Common ordered media and their homotopy groups. 
Biaxial nematic Orthonormal 3-frame (modulo discrete transformations) Cholesteric (same as biaxial nematic) Although 4 He (in its II phase) has a relatively elementary structure in terms of the homotopy groups of its order parameter space, 3 He has a very involved order parameter space, whose homotopy type depends upon various other phase parameters, one can confer the table in Appendix B of the book [10] by Mine'ev. Since smectic liquid crystals have a layered structure, there has been some investigation of the possibility that one might usefully employ the methods of foliations to the modeling of their topological defects (see, e.g., [6]). Since the application of homotopy methods to topological defects has been established for some time, we move on to more immediate concerns that involve the application of homology methods to defective lattices, while summarizing some of the known examples of ordered parameter spaces and their homotopy groups in Table 1. Boundary-value conditions on order fields In addition to the presence of defects in crystal lattices, generators of homology modules in various dimensions can be introduced by imposing common boundary-value conditions on the order field itself. Suppose L 0 is an m-dimensional lattice in R n that has no defects, but has a boundary ∂L 0 . Hence, the set L m of m-dimensional simplexes that are generated by the vertices of L 0 represents a region in R n that is topologically equivalent to a closed m-dimensional ball B m , whose boundary ∂L m is topologically an m−1-sphere S m−1 . If one defines an order field φ: L m → R that must satisfy the condition that φ is constant on ∂L m then one can effectively treat ∂L m as a single point. Equivalently, when one identifies the boundary of B m to a point the resulting quotient space B m / ∂B m , whose points consist of either the interior points of B m or the boundary points, regarded as a single point, becomes topologically an m-sphere. This is easiest to visualize in the cases m = 1, 2, when one identifies the endpoints of, say, [0, 2π] to produce a circle or one identifies the boundary of a closed disc to produce a 2-sphere, resp. The effect of this identification is then to introduce a generator for the homology module H m (L m ). Hence, constant boundary conditions on an order field are equivalent to identifying the boundary of a lattice to a single point. Another common boundary condition on an order field is periodic boundary conditions. For instance, if a Bravais lattice is generated by linear m-frame {a 1 , …, a m }, whose lengths are a i , i = 1, …, m, resp., then one might specify that φ be periodic on the parallelepiped that they generate with spatial periods (i.e., wavelengths) in each direction that are equal to a i : The effect of periodic boundary conditions is then to identify the endpoints of each interval (x 1 , …, [0, a i ], …, x m ) of R m into a circle, which then makes the resulting quotient space into an m-dimensional torus T m = R m / Z m = S 1 ×…× S 1 . Topologically, this has the effect of introducing m independent generators for H 1 (L m ). The continuum limit Admittedly, in this day and age it is more physically fundamental to regard any realworld material as ultimately composed of atoms, whether isolated or bound together into molecules or lattices of ions. 
Nevertheless, the samples of materials that are used in laboratory experiments generally involve enormous numbers (> 10 23 ) of atoms that only resolve to a discrete structure at some scale of distance that might lie beyond direct measurement, except when one goes to quantum techniques such as X-ray, neutron, or electron diffraction. Thus, it is often still useful to model the macrostates of physical systems by means of distributions and fields defined on continuous regions of space. This process of passing from the discrete system of microstates, such as a volume of gas molecules or a crystal lattice, to a corresponding system of macrostates that refer to a continuous region of space is generally referred to as passing to the continuum limit. In order to make it a limit in the analytical sense of the word, one can think of the process as involving increasing the total number N of material points in the region without bound, while maintaining a constant average density of the quantity in question over the total volume V. This is distinct from the thermodynamic limit, in which one also allows V to become infinite, as well. The case at hand begins with a lattice L: S → R n , as above, whose image then becomes a finite subset of R n that we denote by L 0 . We shall be interested in the process of extending an order field φ: L 0 → Π to a continuous field defined on "the space spanned by L 0 ." Of course, the latter phrase in quotes needs to be made more rigorous, so we treat it as meaning that if S is associated with an abstract simplicial complex whose vertex scheme is defined by the points of S themselves then L 0 , which we now denote by Λ 0 , is the 0-skeleton of its geometrical realization by points in R n , L k is its set of geometrical realizations of the k-simplexes, and, in general, Λ k is the k-skeleton of that geometrical realization. Ultimately, if S ⊂ Z m then the m-skeleton Λ m of the complex is a polyhedral region in R n that will serve to give a precise meaning to the phrase "the space spanned by the lattice." In one sense, the problem of extending a field defined on a lattice to a field defined on a continuous region that contains it is a problem of interpolation. For some intriguing insights into how one might approach the problem from that angle in practice, one might confer the paper of Krumhansl [31], in which he applies a technique that is borrowed from communications theory and is based on a theorem of Claude Shannon. Increasing N to infinity can take the form of making successively finer subdivisions of the lattice by the interpolation of more lattice sites. This implies corresponding subdivisions of the associated simplicial complex, which must be of the type that preserves the homology modules, up to isomorphism, such as barycentric subdivisions of triangular complexes. In the context of topology, the problem of extending the order field φ : Λ 0 → R from a lattice to a continuous order field φ m : Λ m → R falls within the purview of the most elementary form of obstruction theory, which we now discuss. Obstructions to continuous extensions This problem that we just posed is usually first discussed in algebraic topology in the more general form of finding topological obstructions to the continuous extension of a map f: A → Y, when A ⊂ X is a subset of one topological space X and Y is another topological space, to a map f: X → Y (see [15,30,32,33] on this). 
The usual approach to the continuous extension problem is to first assume that X is represented as the geometrical realization of a simplicial complex K (actually, one usually encounters CW-complexes, whose basic building blocks are based on balls, but the effect on homology is the same) and A is a subcomplex of that complex. One begins with f being defined on the 0-skeleton K 0 of the complex, extending to the other points outside of A using an arbitrary constant element of Y, if necessary. One then proceeds stepwise by examining the extension to the 1-skeleton, 2-skeleton, etc. In general, if b k is the boundary ∂σ k+1 of a k+1-simplex σ k+1 then since σ k+1 is contractible to a point an extension of f from ∂σ k+1 to σ k+1 is possible iff the homotopy class [f(∂σ k+1 )] of f: ∂σ k+1 → Y is trivial. Since ∂σ k+1 is homeomorphic to a a k-sphere, [f(∂σ k+1 )] ∈ π k (Y). This association of each σ k+1 with [f(∂σ k+1 )] ∈ π k (Y) is then extended by linearity to a k+1-chain c f,k on X with values in the group π k (Y). By this, we mean that if: This makes c f,k an element of C k+1 (X; π k (Y)), so <c f,k ; c k+1 > is an element of π k (Y). In fact, c f,k is a k+1-cocycle c f,k ∈ H k+1 (X; π k (Y)), although in the case of k = 0, one must recall that π 0 (Y) generally has no natural group structure and for k = 1, one might have to use the Abelianization of π 1 (Y). The fact that c f,k is a cocycle follows from the fact that if b k+1 is k+1-boundary then it can be expressed in the form: Thus, when c f,k is evaluated on any boundary, it vanishes, and this, as we pointed out above, is one way of characterizing cocycles. The necessary and sufficient condition for the continuous extension of f from K k to K k+1 is then the vanishing of c f,k . The first non-vanishing c f,k as one goes from k = 0 upwards is referred to as the (primary) obstruction cocycle of f. One sees that H k+1 (X; π k (Y)) can be non-zero iff both H k+1 (X) and π k (Y) are non-vanishing at the same time. Thus, c f,k will automatically vanish whenever either H k+1 (X) or π k (Y) are trivial. For instance, if V is a vector space then any f: A ⊂ X → V can be continuously extended to X since π k (V) = 0 for all k. Similarly, any constant map on A can be continuously extended to a constant map on X. Physical examples Let us now apply this process to the case at hand of extending an order field φ: Λ 0 → R from a lattice Λ 0 , which we regard as the 0-skeleton of an m-dimensional polyhedral complex Λ m , to a continuous map φ: Λ m → R. Thus, we shall be concerned with elements c φ,k of H k+1 (Λ m ; π k (R)), k = 0, …, m, which makes it clear that when the lattice and the order parameter space are both topologically defective there could be topological obstructions to this extension. Since the dimension m of the lattice is usually 1, 2, or 3, in practice, this means that we shall need to consider only H 1 (Λ m ; π 0 (R)), H 2 (Λ m ; π 1 (R)), and H 3 (Λ m ; π 2 (R)). Therefore, textures (which belong to π 3 (R)) would not be an issue until one considered a four-dimensional lattice. In each case, we will examine how the obstruction cocycle in that dimension works for various physical examples of order fields on defective linear, planar, and space lattices when the order parameter space also has topological defects of the appropriate dimension. 6.1 1-cocycle on a lattice with values in π 0 (R) As mentioned above, the case of k = 0 requires special attention, since π 0 (R) is a usually a set, not a group. 
Of course, when R = G/H is a homogeneous space with a discrete subgroup H, π_0(R) will indeed be a group. Also, since the ultimate result of the analysis is merely that an order field on the 0-skeleton of a lattice can be continuously extended to the 1-skeleton iff it is constant on the path components of the 0-skeleton, one sees that the only benefit to going through the argument naively is simply to illustrate the significance of obstruction cocycles in an elementary setting.

The non-vanishing of H^1(Λ_m; π_0(R)) would imply that the lattice was multiply connected while the order parameter space was not path-connected. Thus, a linear lattice would have to be a finite set of disconnected non-simply-connected sub-lattices, a planar lattice would have to have point defects or be orientable and without boundary, while a space lattice would have to have dislocations. The appearance of 1-cycles in a linear lattice might follow from imposing boundary-value conditions on the order field. A typical example of an order parameter space that is not path-connected might be the quantum spin space R = {−ℏ/2, +ℏ/2}, for which we shall denote the elements of π_0(R) by −1 and +1.

First, consider a path-connected linear lattice Λ_1 for which H_1(Λ_1) is non-trivial; when there is more than one path component to Λ_1, one simply repeats the argument for each separate component. Let the fundamental 1-cycle on Λ_1 be represented as:

[Λ_1] = σ_1(1) + … + σ_1(N_1),    (5.5)

where N_1 = N_0 + 1 represents the number of branches that connect the vertices. Now, let φ: Λ_0 → R be an order field that is initially defined on the vertices of the lattice and takes its values in an order parameter space R with π_0(R) non-trivial. The value of the obstruction cochain on the fundamental 1-cycle is then the formal sum:

<c_{φ,0}; [Λ_1]> = [φ(∂σ_1(1))] + … + [φ(∂σ_1(N_1))].    (5.6)

Of course, when π_0(R) is not a group the addition of terms is not rigorously defined. Hence, we interpret the vanishing of the expression (5.6) to mean that all individual terms are zero. Thus, since [Λ_1] is a cycle, each vertex that ends one 1-simplex begins another one, and if <c_{φ,0}; [Λ_1]> vanishes then φ must be constant over all vertices. The condition for extending a quantum spin field from the vertices to the loop is then the constancy of its values. Similar arguments apply to the case of planar lattices with point defects and space lattices with dislocations. The loop then becomes a non-bounding 1-cycle.

6.2 2-cocycle on a lattice with values in π_1(R)

Now, let us look at the obstruction cocycle in H^2(Λ_m; π_1(R)). The non-vanishing of that cohomology module would imply that H_2(Λ_m) and π_1(R) were both non-vanishing. In the former case, one could be either dealing with an orientable planar lattice or a space lattice with point defects, while the order parameter space would have to have a vortex defect. Possible candidates for R might be S^1 or RP^n, for which π_1(R) would be isomorphic to Z and Z_2, respectively. If we represent a typical 2-cycle in the form:

z_2 = n_1 σ_2(1) + … + n_{r_2} σ_2(r_2),

then the evaluation of the obstruction cochain on it takes the form:

<c_{φ,1}; z_2> = n_1 [φ(∂σ_2(1))] + … + n_{r_2} [φ(∂σ_2(r_2))].    (5.8)

In the case of R = S^1, for which π_1(R) = Z, this expression will be a sum of integers [φ(∂σ_2(i))] that represent the winding numbers of the maps φ: ∂σ_2(i) → R that are defined by the restriction of φ to the boundary of each 2-simplex. In the case of R = RP^n, for which π_1(R) = Z_2, each [φ(∂σ_2(i))] will be 0 or 1 depending upon whether the winding number is even or odd, respectively. Similarly, the sum will be 0 or 1 depending upon whether it is even or odd, as well.
Thus, in either case, one sees that the vanishing of c_{φ,1} does not have to imply that the winding numbers of φ on each face ∂σ_2(i) vanish, but only their sum over all faces.

A historically important interpretation of this situation in obstruction theory is the closely-related problem of finding non-zero tangent vector fields on compact differentiable manifolds. Although this actually requires passing to "local systems of homotopy groups," because the space R is now a tangent space to a potentially non-parallelizable manifold, many of the same ideas that we have been dealing with all along assert themselves. If a tangent vector field v on a manifold M has a zero at a point x_0 then one defines the index Ind[x_0] of that zero to be the winding number [v(∂σ_n(x_0))] of the map v: ∂σ_n(x_0) → R^n − {0}, where σ_n(x_0) is a sufficiently small disc surrounding x_0 and, as one recalls, R^n − {0} is homotopically equivalent to an n−1-sphere. Thus, as long as σ_n(x_0) is part of a larger triangulation of M into the fundamental n-cycle, we can say that:

<c_{v,n−1}; [M]> = Σ Ind[x_0],

where the sum is taken over the zeroes of v. A deep and ground-breaking theorem of topology, which was first proved for surfaces by Poincaré and later extended to compact differentiable manifolds by Hopf, is that this sum also equals χ[M]. Thus, the obstruction n-cocycle c_{v,n−1}, which has its values in Z, is also referred to as the Euler class of M, or really, of its tangent bundle. A compact differentiable manifold admits a global non-zero vector field iff its Euler-Poincaré characteristic vanishes, so, in particular, all compact, orientable manifolds of odd dimension admit non-zero vector fields, while many even-dimensional manifolds do not.

The canonical example of a compact surface that does not admit a global non-zero tangent vector field is the 2-sphere, for which H_0(S^2) = Z, since it is path-connected, H_2(S^2) = Z, since it is compact and orientable (which is also consistent with Poincaré duality), and H_1(S^2) = 0, since it is simply connected. The obstruction to continuously extending a non-zero tangent vector field that is defined on the 1-skeleton of a triangulation of S^2 to a non-zero tangent vector field on S^2 is then a 2-cocycle c_{v,1} with values in the integers, which, when evaluated on the fundamental 2-cycle [S^2], gives χ[S^2] = 1 − 0 + 1 = 2. An example of a vector field on the 2-sphere is any vector field that is tangent to the longitude circles and points south, which then has zeroes at the poles. Despite the fact that the vector field is radial outward at the North pole and radial inward at the South pole, its index at each is +1. One can define an example of a tangent vector field with a single zero of index 2 by intersecting the sphere with all planes through a fixed line tangent to the North pole. Note that the Poincaré-Hopf theorem does not generally apply in the case of vector fields that are not necessarily tangent to submanifolds in R^n, since a 2-sphere can admit a constant vector field when one drops the restriction that the vectors be tangent to it.

As a first physical example of the previous remarks, let us consider a non-zero quantum wave function φ: Λ_2 → C − {0} on an orientable planar lattice Λ_2 that we assume to close into a spherical lattice. We might simply be imposing constant boundary conditions, or perhaps a "Bucky-ball" might serve to illustrate such a lattice in material reality.
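Since everything in this example reduces to winding numbers of phase loops, it may help to make that computation concrete before continuing. The following sketch (an illustrative fragment, not part of the original text; numpy assumed) recovers the winding number of a discrete loop of non-zero complex values by summing the wrapped phase increments between consecutive samples; for a loop sampled from e^{ikθ} it returns k.

```python
import numpy as np

def winding_number(values):
    """Winding number about 0 of a closed discrete loop of non-zero complex values.

    Each phase increment is wrapped into (-pi, pi], which is reliable as long as
    the loop is sampled finely enough that no single step advances the phase by
    pi or more.
    """
    values = np.asarray(values, dtype=complex)
    closed = np.append(values, values[0])          # close the loop
    steps = np.angle(closed[1:] / closed[:-1])     # wrapped phase increments
    return int(round(steps.sum() / (2 * np.pi)))

theta = np.linspace(0.0, 2 * np.pi, 200, endpoint=False)

print(winding_number(np.exp(1j * theta)))                # 1
print(winding_number(np.exp(3j * theta)))                # 3
print(winding_number(2.0 + 0.5 * np.exp(1j * theta)))    # 0: the loop does not encircle 0
# For a director (line) field, only the parity of the winding number survives in pi_1(RP^2):
print(winding_number(np.exp(3j * theta)) % 2)            # 1, i.e. the non-trivial class
```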
Thus, H 2 (Λ 2 ) will have one generator, in the form of the fundamental 2cycle [Λ 2 ], while π 1 (R) is Z since the homotopy of the punctured plane is carried by the unit circle. If the values of φ are represented in polar form as Re iθ then it is sufficient consider the circular part e iθ as far as homotopy is concerned. Note that since we are not requiring that the plane of C be tangent to the sphere in question, the Poincaré-Hopf theorem cannot be used. Since winding numbers can take on all positive and negative values, it is clear that the vanishing of the right-hand side of (5.8) does not have to imply that all of the terms vanish, only that the sum of the positive contributions equals the sum of the negative contributions. Obviously, if φ is constant on the 1-skeleton Λ 1 then all of the winding numbers vanish and one can extend φ continuously to Λ 2 . However, non-constant fields can still have non-zero winding numbers that sum to zero. The same considerations apply to the case of a non-zero quantum wave function on a planar lattice with periodic boundary conditions or a space lattice with point defects. In the case of a director field on a space lattice with point defects, we shall assume that the directors are lines in the ambient space R 3 , so as to avoid the consideration of non-trivial bundles. Thus H 2 (Λ 3 ) has as many generators as defects and π 1 (R) = π 1 (RP 2 ) = Z 2 , which we regard as composed of the set {even, odd} when given addition. In the case of loops in RP 2 , it is convenient to imagine them as consisting of rotations of lines through the origin of R 3 around a circle in some plane through the unit sphere, such as the equatorial plane or a polar plane. One can then illustrate the two possible loops as follows: Odd Even Figure 8. Schematic depiction of the winding number for loops in projective spaces. In interpreting this diagram, we intend that rotating a horizontal line through the origin through an odd multiple of π will close a loop in RP 2 into one homotopy class, while rotating it through an even multiple of π will close it into a loop of the other class. One might note that since the homotopy groups of spheres and projective spaces differ only in dimension one, the only time the continuous extensions of non-zero vector fields and line fields (i.e., director fields) are obstructed by different cocycles is in the case where H 2 (Λ m ) is non-trivial. 6.3 3-cocycle on a lattice with values in π 2 (R) In order for H 3 (Λ m ) to have a non-zero generator, first, we would clearly have to have a lattice whose dimension was at least three. If we were indeed concerned with a space lattice then since there are no 3-boundaries in three dimensions H 3 (Λ 3 ) = Z 3 (Λ 3 ) and a non-zero generator of H 3 (Λ 3 ) would take the form of a non-zero 3-cycle. Since the lattices that we have in mind are generated by finite sets of vertices their k-skeletons are all compact. As a result, the only way that H 3 (Λ 3 ) can vanish is if Λ 3 is not orientable, such as one finds with a "Möbius crystal." However, if Λ 3 is orientable then one always finds that each path component of it is associated with a non-vanishing 3-cocycle, as a result of Poincaré isomorphism, in the form of the fundamental 3-cocycle: For π 2 (R) to be non-trivial, one would have to be considering an order parameter space with point defects, such as S 2 or RP 2 , which have the same π 2 (R), namely, Z. 
The obstruction to extending an order field φ from Λ 2 to Λ 3 is a 3-cocycle c φ,2 with values in the integers, which, when evaluated on the fundamental 3-cycle, gives: Again, if the order field is a non-zero tangent vector field on a compact 3-manifold M, then since χ[M] = 0 in any case, the obstruction to continuously extending a non-zero tangent vector field from Λ 2 to Λ 3 is the Euler class, and its value on the fundamental 3cycle always vanishes. Similarly, if the vector field is not required to be tangent then Poincaré-Hopf does not apply. Discussion To summarize what we established in the preceding study: 1. A crystal lattice defines an abstract simplicial complex and its geometric realization in the space of the crystal. 2. Lattice defects introduce generators of the integer homology modules in each dimension. 3. When an order field takes its values in an order parameter space with non-trivial homotopy groups, it is possible that there are non-vanishing cocycles in the cohomology of the lattice that take their values in the homotopy group of one lower dimension. 4. These cocycles represent obstructions to the continuous extension of order fields defined on lower-dimensional skeletons of the simplicial complex of the lattice to the skeleton of next higher dimension. 5. Passing to the "continuum limit" of an infinitude of infinitesimally close lattice sites will then be obstructed by these cocycles. Thus, the continuum limit of the order field will not be defined at some set of "singular" points. One of the topics we implicitly left out above was based in the fact that we were using coefficient groups, namely, the groups π k (R), that were Abelian. Since non-Abelian π 1 (R) play an important role in the established theory of topological defects in order media, this must be addressed eventually. One gets into the deep issues of homology theory involving the consideration of "linking coefficients." For instance, suppose one thinks of two loops in M as two 1-cycles. If one of them bounds a 2-chain then one can assign an integer to the number of times the other intersects that 2-chainup to orientation -as a measure of the extent to which the loops are linked together. In the foregoing discussion, we also summarily ignored the consideration of the boundary to the lattice by considering only order fields that obeyed appropriate boundary conditions that would make such a consideration unnecessary. Since there are such things as topological defects in ordered media that can manifest themselves on surfaces, such as "boojums" in superfluid helium, one then must move to the study of the relative homology modules of the lattice. A k-chain in a topological space X is called a relative k-cycle relative to a closed subspace A iff its boundary is a k-chain in A. Two relative kcycles in X are homologous relative to A iff their difference is a relative k-boundary; that is, a k-chain in X that differs from a boundary in X by a chain in A. One then must consider topological obstructions as relative cocycles in M (modulo ∂M) with values in the homotopy groups of R. As mentioned above, if one wishes to generalize the order fields to sections of nontrivial fiber bundles whose fibers are homotopically equivalent to R then one must also replace the homotopy groups of R with a "local system" of homotopy groups for the fibers of the bundle. This gets one into the much-discussed realm of characteristic classes for fiber bundles, which has been a common focus in gauge field theories. 
For some remarks on how that would apply to spacetime defects, one might consult Delphenich [33]. One of the most fundamental topics in solid-state and condensed-matter physics is the propagation of waves through such media, in the form of acoustic, electromagnetic, and matter waves. As we saw above, the concept of a reciprocal lattice relates to wave covectors, on the physics side of the issue, and to cohomology, on the mathematical side. One then suspects that cohomology might have much to say about the propagation of waves in defective lattices. Indeed, in practice the very existence of lattice defects is often deduced from the way that waves propagate through them. Other directions of extension for the methods discussed above might include lattices of dimension higher than three, such as one encounters in lattice gauge theories (which also involve a continuum limit), and the meshes used in numerical models for the solution of systems of differential equations, such as in finite-element analysis or numerical relativity.
Discovery of DNA Viruses in Wild-Caught Mosquitoes Using Small RNA High Throughput Sequencing

Background

Mosquito-borne infectious diseases pose a severe threat to public health in many areas of the world. Current methods for pathogen detection and surveillance are usually dependent on prior knowledge of the etiologic agents involved. Hence, efficient approaches are required for screening wild mosquito populations to detect known and unknown pathogens.

Methodology/principal findings

In this study, we explored the use of Next Generation Sequencing to identify viral agents in wild-caught mosquitoes. We extracted total RNA from different mosquito species from South China. Small RNA molecules 18–30 bp in length were purified, reverse-transcribed into cDNA and sequenced using Illumina GAIIx instrumentation. Bioinformatic analyses to identify putative viral agents were conducted and the results confirmed by PCR. We identified a non-enveloped single-stranded DNA densovirus in the wild-caught Culex pipiens molestus mosquitoes. The majority of the viral transcripts (>80% of the region) were covered by the small viral RNAs, with a few peaks of very high coverage obtained. The +/− strand sequence ratio of the small RNAs was approximately 7:1, indicating that the molecules were mainly derived from the viral RNA transcripts. The small viral RNAs overlapped, enabling contig assembly of the viral genome sequence. We identified some small RNAs in the reverse repeat regions of the viral 5′- and 3′-untranslated regions, where no transcripts were expected.

Conclusions/significance

Our results demonstrate for the first time that high throughput sequencing of small RNA is feasible for identifying viral agents in wild-caught mosquitoes. Our results show that it is possible to detect DNA viruses by sequencing the small RNAs obtained from insects, although the underlying mechanism of small viral RNA biogenesis is unclear. Our data and those of other researchers show that high throughput small RNA sequencing can be used for pathogen surveillance in wild mosquito vectors.

Introduction

Emerging infectious diseases (EIDs) have exerted a significant burden on public health and global economies [1,2]. During the past decade, novel viruses, particularly those causing severe acute respiratory syndrome (SARS) and avian influenza A H5N1, have attracted international concern. These diseases represent only part of a rich tapestry of pathogens that have emerged to pose public health threats in recent years. Clearly, there is a pressing need for rapid and accurate identification of viral etiological agents. The development of Next Generation Sequencing (high throughput sequencing) technology provides a possible solution to this problem; indeed, several recent studies have used these techniques to identify novel viral agents [3,4,5,6,7]. Palacios et al. identified a novel and deadly arenavirus by employing 454-pyrosequencing technology, the results of which were later confirmed by PCR [4]. Recent studies have identified a novel strain of Ebola virus which caused a hemorrhagic fever epidemic in Uganda [6], and dengue virus type 1 (DENV-1) sequences in laboratory-reared mosquitoes experimentally infected with DENV-1 [7]. Using de novo next generation sequencing, Makoto Kuroda et al. showed that the etiologic agent in a deceased pneumonia patient was, in fact, the pandemic influenza A H1N1 virus, rather than the pneumococcus originally assumed [8].
These studies highlight the power and feasibility of high throughput sequencing techniques for detection of unsuspected or novel etiologic agents. The sequencing technologies offer distinct advantages over traditional viral detection and surveillance methods, which generally require prior knowledge of the etiologic agents and depend on virus-specific primers, probes or antibodies. These traditional techniques are, therefore, unsuitable in situations where the causative agent of an outbreak is entirely novel, or is a pathogen variant with several mutations in key priming regions. Hence, high throughput sequencing techniques provide a powerful new opportunity for surveillance and discovery of novel pathogens. The techniques provide a cost-effective mechanism for massively parallel sequencing with extreme sequencing depth, whilst allowing multiplexed analyses for etiologic agent identification.

Mosquito-borne infectious diseases have been emerging and re-emerging in many areas of the world, especially in tropical and subtropical areas where agents such as West Nile virus (WNV), dengue virus (DENV), chikungunya virus (CHIKV) and yellow fever virus (YFV) are present. Surveillance of infectious agents carried by mosquitoes is important for predicting the risk of vector-borne infectious disease outbreaks. Recently, a new strategy based on small interfering RNA (siRNA) immunity to virus infection was proposed for detecting novel RNA viruses in laboratory-reared drosophilae and mosquitoes, as well as RNA and DNA viruses in plants, using high throughput sequencing techniques [9,10]. Prompted by these results (obtained in laboratory-reared insects and plants by deep sequencing and assembly of small RNAs isolated from the host organisms), we explored the feasibility of using this approach to identify viruses from wild-caught mosquitoes. Our findings show for the first time that high throughput sequencing of small RNAs can detect both RNA and DNA viruses in wild-caught insects, thus supporting the feasibility of employing this approach for surveillance purposes.

Standard small RNA analysis

For each mosquito species, Solexa high throughput sequencing generated about 40 million individual sequencing reads with base quality scores. After removing the sequencing adaptor and artificial junk sequences containing simple repeats of nucleotides (i.e., AAAAA…, GCGCGC…) or multiple unresolved nucleotides, which result from the sequencing procedure, mappable sequences were generated. By mapping to the miRNA database, we identified about 200 known miRNAs for each mosquito species. Using miRNA prediction software, one to two thousand miRNA candidates were predicted (Table 1).

Virus sequence detection

We performed BLAST analysis (using the blastn program) to identify potential viral sequences among the cleaned unique sequences. Preliminary results revealed that a large number of unique sequences in the Culex pipiens molestus sample shared identity with three known viruses, namely Aedes albopictus parvovirus (GenBank accession: X74945), Anopheles gambiae densonucleosis virus (GenBank accession: EU233812), and Aedes aegypti densovirus strain 0814616 (GenBank accession: FJ360744). Further analysis demonstrated that the matched A. albopictus parvovirus sequences were also present in the A. gambiae densonucleosis virus genome and the A. aegypti densovirus strain 0814616 genome. The A. gambiae densonucleosis virus and A. aegypti densovirus strain 0814616 shared most of their matched sequences.
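A minimal sketch of this screening step, written as it might be scripted rather than as the authors' actual pipeline: the cleaned reads are formatted as a nucleotide BLAST database and queried with the downloaded viral sequences using the legacy blast-2.2.22 command-line tools, which are assumed to be installed and on the PATH. The file names and the E-value cutoff are illustrative.

```python
import subprocess
from collections import Counter

READS_FASTA = "unique_reads.fasta"      # cleaned, collapsed small-RNA reads (hypothetical name)
VIRUS_FASTA = "viral_sequences.fasta"   # viral sequences downloaded from EMBL (hypothetical name)
DB_NAME = "smallrna_db"

# Build a nucleotide BLAST database from the reads (-p F means "nucleotide, not protein").
subprocess.run(["formatdb", "-i", READS_FASTA, "-p", "F", "-n", DB_NAME], check=True)

# Query the read database with the viral sequences; -m 8 requests tabular output.
subprocess.run(["blastall", "-p", "blastn", "-d", DB_NAME, "-i", VIRUS_FASTA,
                "-e", "1e-5", "-m", "8", "-o", "virus_vs_reads.tab"], check=True)

# Count how many reads hit each viral query, to flag candidate viruses for manual review.
hits_per_virus = Counter()
with open("virus_vs_reads.tab") as handle:
    for line in handle:
        query_id = line.split("\t")[0]   # the viral sequence that matched a read
        hits_per_virus[query_id] += 1
for virus, n_hits in hits_per_virus.most_common(5):
    print(virus, n_hits)
```

Making the reads the database and the handful of viral sequences the query keeps the number of BLAST searches small, which is the speed-up described in the methods.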
Sequence alignment showed that these three viruses exhibited more than 80 percent sequence identity, indicating that a virus with sequences homologous to these three viruses was present in the C. pipiens molestus sample. For the other two samples (C. tritaeniorhynchus and A. sinensis), no significant amount of sequence was found that corresponded to any specific virus. To discover potential novel viruses that may be only remotely related to known viruses, a BLAST strategy proposed in the literature [9] was adopted. This strategy employs the tblastx search to ensure identification of viruses based on amino acid sequences. However, this analysis did not reveal any additional viral sequences in any of the three mosquito samples tested.

Small RNA sequence analysis of the newly identified virus

To characterize small RNA sequences with homology to the viral genomic sequences, the mappable sequence reads from the three samples (C. pipiens molestus, C. tritaeniorhynchus and A. sinensis) were assembled against the three viral genomes as references using CLC Bio (Katrinebjerg, Denmark) with the default parameters. The results showed that the small RNA reads overlapped, which allowed contig assembly. Of the three viruses, A. gambiae densonucleosis virus had the most mapped reads (4481) and the longest assembled consensus sequence (3248 bp), which covered 78.5% of the whole genome (4139 bp) (Figure 1 and File S1). The overall similarity between the newly identified virus and the A. gambiae densonucleosis virus was about 98% (3182/3248). The length distribution of the matched small viral RNAs showed that the majority (>60%) of them were 20–24 nt in length, with a peak at 21 nt, while the total library small RNA (the majority of which were endogenous siRNA) displayed a peak at 22 nt (Figure 2). This is consistent with the findings in Sindbis virus-infected mosquitoes [11] and virus-infected Drosophila OSS cells [9], where the matched viral siRNAs had a peak distribution at 21 nt, while the endogenous siRNA in Drosophila [12,13] and the total library small RNA in mosquitoes [11] had a peak length of 21–22 nt. Most of the small RNAs were distributed along the three viral transcripts (NS1, NS2 and Capsid), with more than 80% of the transcript length covered. The viral small RNA +/− strand ratio was approximately 7:1 (3933/548), indicating that these molecules were largely derived from the viral RNA transcripts. The mechanism for biogenesis of the minus-strand small RNAs in mosquitoes is currently unknown.

Characterization of high frequency small viral RNA sequences

Although most of the viral coding transcripts (NS1, NS2, and Capsid) were covered by the small viral RNA sequences, the small RNAs were not evenly distributed along the transcripts. There were 10–20 sites with relatively high coverage, and 4 of these had very high coverage indeed (greater than 320×, compared with the average coverage of 24×) (Figure 1). The core sequences of the high frequency reads were 20–22 nt in length, with 3 or 4 adenosine bases located at the 5′ terminus (Table 2). The two most frequently occurring sequences (with greater than 600× coverage) were in the coding region of the viral capsid protein gene, while the other two medium-high copy number sequences were located in the NS1 and NS2 genes, respectively. The biological relevance as well as the biogenesis of these high frequency reads requires further investigation.
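The summary statistics quoted above (a length distribution peaking at 21 nt, a +/− strand ratio near 7:1) can be tallied directly from the read alignment. Here is a minimal sketch that assumes the mapped small RNAs are available as a standard SAM file; the file name is hypothetical and only the standard SAM flag bits are used.

```python
from collections import Counter

def small_rna_stats(sam_path):
    """Tally read-length distribution and +/- strand counts from a SAM alignment.

    Assumes standard SAM flag bits: 0x4 = unmapped, 0x10 = reverse strand.
    """
    lengths, strands = Counter(), Counter()
    with open(sam_path) as handle:
        for line in handle:
            if line.startswith("@"):          # skip header lines
                continue
            fields = line.rstrip("\n").split("\t")
            flag, seq = int(fields[1]), fields[9]
            if flag & 0x4 or seq == "*":      # skip unmapped or sequence-less records
                continue
            lengths[len(seq)] += 1
            strands["-" if flag & 0x10 else "+"] += 1
    return lengths, strands

# Example usage (hypothetical file name):
# lengths, strands = small_rna_stats("culex_vs_densovirus.sam")
# ratio = strands["+"] / strands["-"]   # the text above reports roughly 7:1
```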
Identification of small RNA sequences in the direct repeat region within the 5′ and 3′ UTRs

The densovirus genome contains two pairs of inverted repeats, which constitute two stem-loop structures at the 5′ and 3′ untranslated regions of the genome termini (Figure 1). It also contains two pairs of direct repeats in close proximity to those inverted repeats. All of these repeats are located in the untranscribed regions at the genome termini. It is interesting to note that no small RNAs mapped to the untranscribed regions except for the four inverted repeat sequences, to which large numbers of reads mapped (with a coverage greater than 50×); none mapped to the direct repeats (Figure 1, Table 3). Sequencing was performed on small RNA fragments; therefore, the fact that no reads mapped to the untranscribed regions in general was not unexpected. The fact that reads did map to the untranscribed 5′ and 3′ inverted repeat regions indicates that those inverted repeat regions may be transcribed by an unknown mechanism. Since the terminal stem-loop structures are usually involved in viral genome replication, it is possible that transcripts from the stem-loop regions are involved in virus replication (e.g. as primers for genomic DNA synthesis). It is notable that the high coverage small RNAs in the coding regions and the small RNAs mapping to the non-coding stem-loop regions are both highly conserved (0/385 nucleotide differences), compared to the genome as a whole, which differs by roughly 2% from the reference densovirus (EU233812). Such high evolutionary conservation suggests that these sequences are of functional importance.

Validation of viral infection by polymerase chain reaction

To validate the presence of a viral infection, a standard PCR was conducted using total DNA extracted from samples of the three mosquito species. Gel electrophoresis demonstrated that a DNA band of the appropriate size had been amplified from the C. pipiens molestus mosquito sample, but not from C. tritaeniorhynchus or A. sinensis (Figure 3). Sequence analysis of the PCR product revealed the same sequence as that assembled from the small RNAs. These results, therefore, confirm the existence of a densovirus in C. pipiens molestus, but not in C. tritaeniorhynchus and A. sinensis. We have called this densovirus Culex tritaeniorhynchus densovirus YN2009.

Phylogenetic analysis of the newly identified densovirus

To understand the evolutionary status of the densovirus identified here, a phylogenetic tree was generated with Mega 4.0 using the maximum parsimony method with 500 bootstrap replicates (Figure 4). The reference densovirus strains [14,15,16,17,18,19,20,21,22] were downloaded from GenBank after blasting the NT database with a 398 bp segment assembled from the small RNAs. The phylogenetic tree obtained indicates that the newly identified Culex tritaeniorhynchus densovirus YN2009 is a close relative of the mosquito densoviruses prevalent in South and Southwest China.

Discussion

High throughput sequencing, as a next generation sequencing technology, has been developing rapidly during the last few years and has found various applications in different biological and medical research fields. Recent advances in this technology have made its application easier, cheaper, more convenient and more efficient, allowing it to evolve into a powerful tool for identification of novel human pathogens [3,4,5,6,7]. High throughput sequencing of small RNAs (especially miRNAs) has become routine practice, with reliable protocols and readily available reagents.
Due to the short length of the small RNA molecules, sequencing is even faster and cheaper than standard high throughput sequencing using longer DNA or RNA fragments. This makes high throughput sequencing of small RNA an attractive method for pathogen detection in plants and insects based on siRNA, an innate defense mechanism of plants and insects [9,10]. Detection of viruses in laboratory-reared insects [9] or experimentally infected mosquitoes [7] has been reported. Our work shows that high throughput sequencing is also suitable for detecting viral agents in wild-caught insects. Since siRNA defense mechanisms are triggered by double-stranded RNA (dsRNA) sequences (and the mature siRNA forms generated from dsRNA), it is reasonable to expect that only RNA viruses which contain dsRNA as genomic RNA, or which replicate via a dsRNA intermediate, can be identified using this strategy. This view is consistent with previous reports in which only RNA viruses were identified using small RNA sequencing techniques [7,9]. However, our work clearly demonstrates that small RNA sequencing can also detect DNA viruses in insects, although the underlying mechanism of the biogenesis of these small RNAs is unclear. Similar findings have been reported in plants [10], but again the mechanism has not been defined. It is possible that plants and insects generate small RNAs from infecting DNA viruses differently. Possible mechanisms for small RNA biogenesis from DNA viruses include, for example, local dsRNA formed in stem-loop structures of the viral transcripts, or overlapping convergent viral transcripts [23,24]. In the case of densoviruses, there appear to be no overlapping convergent transcripts [25,26], and no obvious stem-loop structure has been identified in densovirus transcripts. An alternative explanation for the small RNAs derived from the DNA virus may be degradation of virus transcripts. However, this hypothesis cannot explain at least two things: one is the very high incidence of some small RNAs that have 3–4 adenines at the 5′ terminus; the other is the biogenesis of the small RNAs that map to the inverted repeat regions of the genomic termini. These are predicted to form a T- or Y-shaped structure that may participate in genome packaging signaling or replication initiation [26]. It is interesting to note that a longer direct repeat is located very close to each inverted repeat (Figure 1), but no small RNA mapped onto the direct repeats themselves. The function of the inverted and direct repeats, and how the small RNAs are generated from the inverted repeats but not from the direct repeats, remain interesting questions to be answered. To this end, we provide all the original data containing the read sequences of the viral small RNAs as a supplementary file to this paper (File S1).

Mosquitoes are the most important vectors of WNV, DENV, CHIKV and YFV, and controlling mosquito populations is an important way of preventing epidemics of these life-threatening diseases. Among the many approaches to mosquito control [27], environmentally friendly densoviruses have been considered as biological control agents [20,26,27,28]. Field trials using a densovirus that infects A. aegypti mosquitoes showed that the virus had significant efficacy, although most densoviruses take 2–20 days to kill their insect hosts [29], making this agent unsuitable for commercial use.
However, with the advent of genetic engineering, it might be possible to generate genetically modified densoviruses that could be effective mosquito control agents. A better understanding of the biology of densoviruses and their relationship with mosquito host immunity could therefore be of practical importance for addressing disease control.

Figure 4. Phylogenetic analysis of the isolated densovirus. The phylogenetic tree was generated using Mega 4.0 with maximum parsimony and 500 bootstrap replicates. Reference densovirus strains were selected after blasting the NCBI NT database with a 398 bp fragment assembled from the small RNAs. Numbers in parentheses indicate the reference for the particular virus strain [14,15,16,17,18,19,20,21,22]. The virus strain identified in this work has been assigned the name Culex tritaeniorhynchus densovirus YN2009 (indicated by a solid arrow). The scale bar represents the number of nucleotide substitutions. doi:10.1371/journal.pone.0024758.g004

Traditional generic methods for identifying and characterizing novel viral diseases have included electron microscopy, virus isolation in cell culture, immunological approaches and PCR. Recently, technologies such as diagnostic microarrays and mass spectrometry have been proposed as generic tools for identifying viruses [30], but all of these methods require some prior knowledge of the agents to be identified. With the advent of next generation high throughput parallel sequencing platforms, the possibility of random metagenomic sequencing of diseased samples with the object of identifying new putative pathogens has emerged [6,31]. However, elimination of host nucleic acid is critical to boost any pathogen signal toward the detection threshold. In addition, the danger of missing extremely low-titer viruses remains a possibility with these systems. By comparison, small RNA sequencing requires neither viral particle purification nor viral nucleic acid sequence amplification. With the advantages of high throughput, high speed, low cost and greatly simplified methodology, small RNA sequencing can now be used more widely to identify known viruses as well as for novel virus discovery. Although the densovirus identified here is not a significant etiologic agent, this discovery shows that the approach is applicable not only for discovery of RNA viruses, but also of DNA viruses in mosquitoes. Currently all known human pathogenic viruses found in mosquitoes are RNA viruses, but this does not preclude DNA viruses from using mosquitoes as vectors for human, animal or plant diseases. Indeed, the African swine fever virus is an arthropod-borne double-stranded DNA virus [32] which causes a lethal hemorrhagic disease in domestic pigs.

In conclusion, our study is the first to explore the application of convenient small RNA high throughput sequencing for virus discovery in wild-caught vectors. Our results suggest that small RNA sequencing is able to identify not only RNA viruses, but also DNA viruses in wild-caught mosquitoes, obviating the need for culture-based virus isolation or for prior knowledge of the etiologic agent. These results suggest that small RNA high throughput sequencing could be an ideal tool for surveillance of novel emerging viral diseases, or even non-viral infectious diseases.

Mosquito collection

The mosquitoes, including Culex tritaeniorhynchus, Culex pipiens molestus, and Anopheles sinensis, were collected from Yunnan province, China, in 2009. The samples were stored in liquid nitrogen until RNA extraction.
No specific permits were required for the described field studies; the samples collected were not privately owned or protected and did not involve endangered or protected species.

Small RNA library preparation and sequencing

Prior to RNA extraction, mosquitoes were cleaned in sterilized water and dried with hygroscopic filter paper. Mosquitoes of the same species were pooled together. Total RNA was extracted separately from the different mosquito species using the Total RNA Purification Kit (LC Sciences, Houston, USA), according to the manufacturer's instructions. The quality of total RNA was analyzed on an Agilent 2100 Bioanalyzer system and by denaturing polyacrylamide gel electrophoresis. A small RNA library was generated according to the Illumina sample preparation instructions [33]. Briefly, total RNA samples were size-fractionated on a 15% tris-borate-EDTA-urea polyacrylamide gel. RNA fragments 15–50 nt long were isolated, quantified, and ethanol precipitated. The SRA 5′ adapter (Illumina) was ligated to the RNA fragments with T4 RNA ligase (Promega). The ligated RNAs were size-fractionated on a 15% tris-borate-EDTA-urea polyacrylamide gel and 41–76 nt long RNA fragments were isolated. Next, the SRA 3′ adapter (Illumina) ligation was performed, followed by a second size-fractionation using the same gel conditions as described above. The 64–99 nt long RNA fragments were isolated by gel elution and ethanol precipitation. The ligated RNA fragments were reverse transcribed to single-stranded cDNAs using M-MuLV (Invitrogen) with RT primers (as recommended by Illumina). The cDNAs were amplified with pfx DNA polymerase (Invitrogen) using 20 PCR cycles and the Illumina small RNA primer set. PCR products were purified on a 12% tris-borate-EDTA polyacrylamide gel and a slice of gel containing cDNAs of 80–115 bp was excised. This fraction was eluted and the recovered cDNAs were precipitated and quantified on the NanoDrop (Thermo Scientific) and on the TBS-380 mini-fluorometer (Turner Biosystems) using PicoGreen dsDNA quantitation reagent (Invitrogen). The concentration of the sample was adjusted to 10 nM and 10 µL was used for the sequencing reaction. The purified cDNA library was used for cluster generation (on the Illumina Cluster Station), and then sequenced on the Illumina GAIIx machine, following the manufacturer's instructions. Raw sequencing reads were obtained using the Illumina Pipeline v1.5 software following sequencing image analysis by the Pipeline Firecrest Module and base-calling by the Pipeline Bustard Module.

Standard small RNA analysis

Clean-up of the raw data and subsequent small RNA mapping and prediction were performed with a proprietary software package, ACGT101-miR v3.5 (LC Sciences, Houston, Texas). First, low-quality reads were removed from the raw reads. After removal of the adaptor sequences and filtering of the low-quality reads and simple artificial sequences, the mappable reads were extracted and the unique sequences generated by collapsing the identical sequences, with the occurrence count of each unique sequence as its unique sequence tag. These unique sequences were compared with the sequences of non-coding RNAs (rRNA, tRNA, snRNA, snoRNA) available in Rfam (http://www.sanger.ac.uk/software/Rfam) and in the GenBank non-coding RNA database (http://www.ncbi.nlm.nih.gov/) to identify degradation fragments of non-coding RNA. In addition, all sequences were mapped to miRNA sequences from the miRNA database, miRBase 16.0 (http://www.mirbase.org/).
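To make the collapsing step concrete, the following is a rough stand-in for what the proprietary ACGT101-miR clean-up does at this stage. The length window matches the 18–30 nt target quoted earlier, but the simple-repeat filter and the function name are ours, not the package's.

```python
from collections import Counter
import re

# Crude filter for runs of a single base or AT/GC dinucleotide repeats (illustrative only).
SIMPLE_REPEAT = re.compile(r"^(A+|T+|G+|C+|(AT)+A?|(GC)+G?)$")

def collapse_reads(reads, min_len=18, max_len=30):
    """Collapse identical cleaned reads into unique sequence tags with occurrence counts."""
    counts = Counter()
    for seq in reads:
        seq = seq.strip().upper()
        if not (min_len <= len(seq) <= max_len):
            continue
        if "N" in seq or SIMPLE_REPEAT.match(seq):
            continue
        counts[seq] += 1
    return counts

# Example usage (hypothetical input file of one adaptor-trimmed read per line):
# unique_tags = collapse_reads(open("cleaned_reads.txt"))
```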
Viral sequence detection using the BLAST program

BLAST searches were conducted to identify virus sequences in the cleaned unique reads using the blast-2.2.22 package [34]. Due to the large amount of high throughput sequencing data, we formatted the sequencing reads (using the formatdb command included in the BLAST package) as a BLAST database and used the viral sequences downloaded from the EMBL website (http://www.ebi.ac.uk/embl/) as the query, in order to expedite the BLAST process. BLAST results were then analyzed manually to screen for potential virus sequences.

PCR confirmation of viral infection

Total mosquito DNA was extracted with TRIzol reagent (Invitrogen, Carlsbad, CA) according to the manufacturer's instructions. A pair of primers (forward primer: 5′-ATA AAT TGA TCA GTC GTC CTC CAA C-3′; reverse primer: 5′-CTT GGG ATC ATT TCG GTC ATA T-3′) was selected from the viral sequence assembled from the mappable reads. The PCR was conducted in a 50 µl reaction mixture containing 1× Easy Taq PCR SuperMix (TransGen Biotech, Beijing, China), 1 µM each of the forward and reverse primers and 10 ng of template DNA. After pre-denaturation at 94 °C for 3 minutes, 35 cycles of amplification (30 sec denaturation at 94 °C, 30 sec annealing at 55 °C, and 60 sec polymerization at 72 °C) were performed, followed by a final incubation at 72 °C for 5 min. PCR products were visualized on a 1% agarose gel stained with ethidium bromide.

Supporting Information

File S1. Reference assembly of the small RNAs with the densovirus genome (GenBank accession number NC_011317) as the reference sequence. The read alignment is saved in the ACE file format and can be viewed with common read-alignment programs such as Tablet (freely available at http://bioinf.scri.ac.uk/tablet/). All the read sequences can be retrieved from the ACE file with any text-editing program. (ACE)

Author Contributions
String Dynamics at Strong Coupling The dynamics of superstring, supergravity and M theories and their compactifications are probed by studying the various perturbation theories that emerge in the strong and weak coupling limits for various directions in coupling constant space. The results support the picture of an underlying non-perturbative theory that, when expanded perturbatively in different coupling constants, gives different perturbation theories, which can be perturbative superstring theories or superparticle theories. The $p$-brane spectrum is considered in detail and a criterion found to establish which $p$-branes govern the strong coupling dynamics. In many cases there are competing conjectures in the literature, and this analysis decides between them. In other cases, new results are found. The chiral six-dimensional theory resulting from compactifying the type IIB string on $K_3$ is studied in detail and it is found that certain strong coupling limits appear to give new theories, some of which hint at the possibility of a twelve-dimensional origin. Introduction String theory is defined perturbatively through a set of rules for calculating scattering amplitudes. However, recent progress has led to some striking conjectures regarding the non-perturbative structure of the theory that have passed many tests . The picture that seems to be emerging is that there is some as yet unknown theory that, when expanded perturbatively, looks like a perturbative string theory, but which has a surprisingly simple structure at the non-perturbative level which includes U or S duality symmetries relating perturbative states to solitons, and weak coupling to strong. Moreover, there are a number of different coupling constants corresponding to the expectation values of various scalars, and the perturbation expansions with respect to some of these define string theories, but different string theories arise for different coupling constants. This leads to unexpected equivalences between string theories that look very different in perturbation theory: they result from different perturbation expansions of the same theory. In many cases, the strong coupling limit of a given theory with respect to a particular coupling constant is described by the weak coupling expansion of a dual theory, which is sometimes another string theory and sometimes a field theory. An example which illustrates many of these points is the one obtained from the toroidal compactification of the heterotic string to four dimensions on T 6 . When the full non-perturbative theory, including solitons, is considered, there is strong evidence that the theory has an SL(2, Z) S-duality symmetry relating strong to weak coupling and interchanging electric and magnetic charges [11,12,13]. The theory is then self-dual: the strong coupling limit is described by the weak-coupling expansion of a dual heterotic string theory, which is of exactly the same form, but with magnetic charges arising in the perturbative spectrum while electric ones arise as solitons. Expanding the same theory in other directions in coupling constant space can give the perturbative expansion of the type IIA string or of the type IIB string compactified on K 3 × T 2 [1], leading to the conjectured equivalence of the type II and heterotic strings. The expansion with respect to other coupling constants of the theory has been considered in [5]. The p-brane states of the theory play a crucial role in understanding the nonperturbative structure [1]. 
These are associated with p-brane solutions of the effective low-energy supergravity theory that saturate a Bogomolnyi bound. As some of these solutions are singular, the question arises as to whether they should be associated with states in the quantum spectrum. For example, type II superstring theories in ten dimensions have a string and a 5-brane coupling to the 2-form in the NS-NS (Neveu-Schwarz/Neveu-Schwarz) sector together with various p-branes coupling to the antisymmetric tensor gauge fields in the RR (Ramond-Ramond) sector [28,31]. The 5-brane is non-singular and can be regarded as a soliton of weakly-coupled string theory [29,30]. The NS-NS 1-brane solution [27] is singular, but should be regarded as the field configuration outside a fundamental string source. Many of the RR p-brane solutions are singular, but can each be regarded as the field configuration outside a D-brane source [23]. The NS-NS solitons arise as conventional conformally invariant sigma-models, while the RR branes arise as D-branes [23]. Alternatively, the singular p-brane solutions of the IIA theory have a non-singular origin in the 11-dimensional theory, so that the 11-dimensional picture allows all p-branes to be included in the theory [4]. However, although an attractive picture is emerging, there remain many open questions. For example, the strong coupling limit of the SO(32) heterotic string in 10 dimensions has been variously conjectured to be a five-brane theory [32,30] or a type I superstring theory [3,33] or 11-dimensional supergravity compactified on a one-dimensional degeneration of K 3 [5], and the question arises as to which, if any, of these is correct. The first two appear at first to be on a very similar footing. A change of variables in the heterotic string low-energy effective action gives the low-energy effective action of the type I string [3] while a different change of variables gives a supergravity theory with a 6-form gauge potential (instead of a 2-form potential) that has been proposed as the low-energy effective action of some as yet unknown five-brane theory [32,30], which might bear a similar relation to the 6-form version of the supergravity theory as M-theory does to 11-dimensional supergravity. In each case, the change of variables includes a change in the sign of the dilaton, so that the strong coupling limit of the heterotic string could agree with the weak coupling limit of either the type I string or five-brane. In [9,10,25] it was argued that the heterotic string arises as a soliton of the type I theory, but it is straightforward to see that it also emerges as a soliton of the proposed five-brane effective action. String/five-brane duality in D = 10 is supported by its relation to string/string duality in D = 6 [8] (further evidence for which is proposed in [6,7]), while the heterotic/type I duality is supported by its relation to the SL(2, Z) duality of the type IIB string [9], and the evidence presented in [25]. So how are we to choose between these two conjectures, and how are they related to that of [5]? Another issue is that of the strong coupling limit of the type IIA theory. Similar arguments to those used in the heterotic string suggest that this might be a type II five-brane theory, while those of [2,3] suggest an 11-dimensional supergravity, supermembrane or M theory. In [3] it was argued that the 0-brane solitons or extreme black holes of the type IIA theory in D = 10 have masses that scale as g −1 where g is the string coupling constant. 
These become massless in the strong coupling limit and can be identified with the Kaluza-Klein modes of 11dimensional supergravity compactified on a circle of radius R ∼ g 2/3 [3], leading to the conjecture that the strong coupling limit is 11-dimensional supergravity. These 'extreme black holes' are in fact singular solutions of the D = 10 supergravity theory, but can be associated with p = 0 D-branes. Moreover, the theory also has RR p-branes with p = 2, 4, 6 (and 8 [23,34]) whose mass per unit p-volume also scales as g −1 , together with a NS-NS five-brane whose density scales as g −2 so that these all appear to become 'massless' in the strong coupling limit. More properly, they appear to become null p-branes with vanishing density and zero tension whose world-volume is a null (p + 1)-surface. These extra massless solitons could spoil the interpretation as an 11-dimensional field theory; indeed, the fact that the fivebrane appears to become massless faster than the other p-branes might be taken as evidence in favour of a dual five-brane theory. Even if the 11-dimensional conjecture is accepted, there remains the question as to whether the strong coupling limit is a supergravity theory as suggested in [3] or whether it is some supermembrane or M theory, since the analysis of [3] only addresses a particular class of particle states. Similar remarks apply to the type IIB theory, which again could be dual to a type IIB 5-brane theory. On the other hand, the type IIB string is conjectured to have an SL(2, Z) U-duality symmetry [1], which would imply that it is self-dual: the strong coupling limit is again a type IIB superstring theory [3] in which the RR string of the original theory becomes fundamental, through a ten-dimensional string-string duality [9]. In addition to the NS-NS string which is fundamental at weak coupling, the theory has a NS-NS 5-brane whose density scales as g −2 and RR p-branes for p = 1, 3, 5, 7, 9 whose densities scale as g −1 [23]. String/5brane duality would require that the NS-NS 5-brane governs the strong coupling dynamics, while string-string duality would require that it is the RR string that does so. However, all the p-branes have densities that become small at strong coupling and in particular the NS-NS 5-brane is the one whose density (with respect to the string-metric) tends to zero 'fastest'. It is important to understand the strong coupling dynamics in this case, and in particular to test the predictions of U-duality. Conjectures for the strong coupling limits of many string theories are to be found in [3], but a number of gaps remain. One of the most interesting concerns the strong coupling behaviour of the type IIB theory compactified on K 3 . As will be seen, going to infinity in certain directions in the coupling constant space of this theory gives limits that do not seem to correspond to any known theories and might arise from a new theory in twelve dimensions. The purpose of this paper is to address these and related issues. In [3] the strong coupling limit of various string theories was investigated by seeking the 0brane or particle states that became massless fastest as one went to infinity in a particular direction in coupling constant space. Here, this will be generalised to a study of the p-brane states (which may be represented at weak coupling by solitons or fundamental branes or D-branes) that become massless or null in superstring or supergravity or M theories in certain weak or strong coupling limits. 
It does not make sense to ask which of these p-branes become massless 'fastest', since the masses of p-branes with different values of p cannot be compared. ⋆ Instead, the question as to which p-branes govern the weak or strong coupling behaviour is addressed by finding a criterion to establish which correspond to perturbative states at weak coupling and which to perturbative states of the strong coupling theory, when considered as a perturbation theory in the inverse coupling. If the perturbative states are one-branes, then the perturbative theory is a string theory, while if they are 0-branes it is a field theory, and so on. The analysis of [3,5] gave the perturbative particle states of the strong coupling limit of various string theories and hence identified the supergravity theory that described the low-energy effective dynamics at strong coupling, the analysis presented here goes some way to identifying the cases in which the perturbation theory that arises in the strong coupling limit is a superstring theory and the cases in which it is a superparticle theory. The key to the analysis is to study the coupling constant dependence of the mass/p-volume of the p-branes. For example, in N = 4 supersymmetric Yang-Mills in D = 4 with the gauge group spontaneously broken to an abelian subgroup, the masses are conventionally defined so that particles carrying electric charge have masses that are independent of the coupling constant g, while those carrying magnetic charge have M ∼ 1/g 2 , so that in the weak coupling limit g → 0 the magnetic charges have infinite mass and only the electric charges remain. Perturbation theory is then based on these electric charges (together with neutral particles) and it is these that propagate in Feynman diagrams, while the magnetic charges are treated ⋆ If there are some compact dimensions, one can compare the masses of particle states in the compactified theory arising from p-branes wrapping around homology cycles, but it is clear from [3] that the relationship between the strong coupling dynamics of compactified theories and that of uncompactified ones can be subtle, and it is desirable to give an analysis that does not rely on compactification, but which gives the expected results if some dimensions are compactified. as solitons. To study strong coupling, it is convenient to make certain rescalings so that the magnetic charges have M ∼ 1 while the electric charges have M ∼ g 2 . Then the electric charges decouple in the strong coupling limit g → ∞ and are treated as solitons, while the magnetic charges (plus neutral states) are perturbative states for the perturbation theory arising from the expansion inĝ = 1/g, which is small for large g. There is perhaps an underlying theory in which electric and magnetic charges appear symmetrically, but when treated perturbatively in g, the electric charges are perturbative states and the magnetic charges are solitonic, while when treated perturbatively inĝ, the magnetic charges are perturbative states and the electric charges are solitonic. The 'perturbative states' are the fundamental objects of the usual field theory description, and the strong coupling field theory is again an N = 4 super-Yang-Mills theory as expected from the Montonen-Olive-Osborne duality conjecture. However, the conventional field theory picture is perhaps an artifice of using perturbation theory and at intermediate values of g field theory is not so useful. 
Indeed, at enhanced symmetry points both electric and magnetic charges become massless [4], so that a conventional local field theory description cannot be applied. However, until the underlying theory with electric and magnetic charges treated 'democratically' is understood, important progress can be made by studying the effective perturbation theory in g orĝ. As another example, consider the type IIA or IIB string in ten dimensions. In the string metric, the NS-NS string has density M 1 that is independent of the string coupling g, while the RR p-branes have density M p ∼ 1/g and the NS-NS 5-brane has density M 5 ∼ 1/g 2 . This is consistent with treating the NS-NS string as the perturbative state in an expansion in g, so that a conventional string perturbation theory emerges, while the other p-branes decouple as g → 0 and should be treated as solitons. A similar picture extends to other theories with local supersymmetry, such as superstrings, M-theory or supergravities. The p-brane spectrum can be studied by consideration of the low-energy effective supergravity theory. In cases with enough unbroken supersymmetry, the densities of BPS-saturated p-branes, and in can be used, and gives a consistent picture in other cases. In the cases examined here, the perturbative theory that emerges is almost always either a supergravity theory or a superstring theory. Moreover, a given theory can look like a superstring theory in the perturbation theory for one coupling constant, or like a supergravity theory when expanded with respect to another. Perturbative p-brane theories with p > 1 do not seem to emerge. However, in each case the analysis depends on knowing the relevant parts of the p-brane spectrum, and in some cases our knowledge is incomplete and so the results presented are provisional. In particular, while our knowledge of BPS p-brane states is fairly complete, we know very little about non-BPS states, which may also exist in a perturbative spectrum. In some cases, these are metastable in some weak-coupling regime, but cannot be extrapolated to states for other values of the coupling. In most cases, a satisfactory picture emerges in terms of BPS states alone, but in some cases it appears that it is the metastable non-BPS states that are perturbative and govern the dynamics in some coupling constant regime. The results suggest that M-theory or p-brane theories such as the 11dimensional supermembrane, if they exist, cannot be treated perturbatively as p-brane theories (indeed, the uncompactified 11-dimensional supermembrane has no coupling constant arising from a scalar expectation value) but that when expanded with respect to a coupling constant such as a compactification modulus, the perturbative theory that emerges is a superstring or supergravity theory. Remarkably, in many supergravity theories, we learn that the perturbative spectrum with respect to certain coupling constants includes strings, and so consistent perturbative quantization requires string theory. Conversely, expanding a superstring theory in a compactification modulus typically leads to a perturbative spectrum which is that of a supergravity theory rather than a superstring theory. These arguments identify some of the states which should be used in trying to construct a perturbation theory and which should propagate in loops. Whether or not the quantum perturbation theory 'exists' is another matter. 
For superstrings, a consistent finite perturbation theory is expected to exist, while for supergravity, it exists at best in the presence of a cut-off. At present, we do not have any good definition of what a perturbative p-brane theory might be, so it is just as well that no such theory appears to arise in any strong coupling limit. We also do not have any good definition of M-theory, but in the strong coupling limits that are naturally formulated in 11 dimensions, the perturbative states are just the supergravity 0brane multiples and all other states of M-theory appear non-perturbatively. These results would be consistent with the existence of a non-perturbative quantum Mtheory which, when expanded perturbatively with respect to a compactification modulus, gives a perturbative superstring or superparticle theory. Many of the results presented here can be viewed as giving further evidence supporting the conjectures made in [3] for the strong-coupling behaviour of various string theories: the evidence given in [3] was based on studying particle states, and here it is shown that when p-brane states are also considered, the conclusions remain the same. Low-Energy Effective Actions The low-energy effective action for the heterotic string in D dimensions includes the bosonic terms where Φ is the dilaton, χ i are the remaining scalar fields with sigma-model action g ≡ e Φ and we wish to find the dynamics for strong coupling. One approach is to seek a change of variables to an action that might be interpreted as the lowenergy effective action of a dual string or p-brane theory with coupling constant g ′ = g −β for some constant β. If β > 0, this would be consistent with the conjecture that the strong coupling limit of the original theory is the weak coupling limit of the dual theory, and vice versa. As we shall see, this process can lead to more than one candidate for a dual theory, but it is nonetheless useful in locating possible dual theories. The rescaling leads to the dual action The rescaling (2.2) with gives the action and gives AsH couples naturally to a p-brane with p = D −5, it has been suggested that this action be interpreted as the low-energy action for a p-brane theory with p = D − 5 [30,32]. This suggests a duality between the heterotic string and a p-brane theory for D > 4 (so that the strong coupling limit of one is the weak coupling limit of the other) [30,32]. For D = 6, this would be a string/string duality, while for D = 10 this would be a string/five-brane duality. Thus for D = 10 we have (at least) two candidates for the dual of the heterotic string: a type I theory and a five-brane theory. Finally, the rescaling (2.2) with gives the Einstein frame action Similar results apply for the low-energy effective actions of type II and type I superstrings. In particular, for type II strings the 2-form potential can be dualised to a 6-form potential [35], so that a string theory effective action is replaced by a '5-brane effective action', and various other anti-symmetric tensor gauge fields can be dualised [35]. Coupling Constant Dependence in Four Dimensions In four dimensions, the toroidally compactified heterotic string at a generic point in its moduli space has gauge group U(1) 28 so that there are 28 electric charges q I and 28 magnetic charges p I (as defined in [1]). 
The supergravity field The N = 4 Bogomolnyi mass formula for BPS saturated states (preserving half the supersymmetry) is [12,13] where M is the ADM mass in the Einstein frame and is an SL(2, R) matrix depending on λ = a + ie −2Φ = λ 1 + iλ 2 , where a is the axion and Φ is the dilaton. (In the notation of [12,13], K t K = 1 16 (M + L).) For vanishing axion expectation value λ 1 , the mass is given by where g = e Φ is the string coupling constant. For the mass M s measured with respect to the stringy metricg µν , which is given in terms of the Einstein metric g µν byg µν = e 2Φ g µν , the formula (3.3) becomes This form was to be expected, since the electric charges are carried by perturbative string states while the magnetic ones arise from solitons, and the mass of a magnetically charged state has the standard 1/g 2 coupling constant dependence of a soliton. The SL(2, R) symmetry acts as where Λ is a 2 × 2 matrix in SL(2, R), and (p, q) transforms in the same way as (p,q). The Einstein metric g µν is invariant, and the mass formula (3.1) is manifestly invariant. The SL(2, Z) transformation given by (3.5) with interchanges electric and magnetic charges while λ → −1/λ. If a = 0, then the coupling constant is inverted, g → 1/g, and weak and strong coupling regimes are interchanged. Thus the theory is self-dual: the strongly coupled regime can be treated using perturbation theory in the small coupling constantĝ = 1/g and this gives a dual heterotic string theory. In the weakly coupled theory, the electric charges q were carried by g-perturbative states (i.e. ones that arise in the perturbation theory with respect to g) and the magnetic ones p by solitons, while in the dual theory the electric charges q are carried by solitons and the magnetic ones p are carried by states that arise asĝ-perturbative states. For the dual theory, it is appropriate to use the dual stringy metricg µν , which is given in terms of the Einstein metric g µν byg µν = e −2Φ g µν . The mass M d measured with respect to this metric is then, from (3.3), This is consistent with the fact that p is carried byĝ-perturbative states and q by solitons for g >> 1 (ĝ << 1). Consider now the type II theory toroidaly compactified to four dimensions. The gauge group is again U(1) 28 , so that there are again 28 + 28 electric and magnetic charges that form a 56-vector Z which in the quantum theory must take values in a self-dual lattice. The low-energy effective action is that of N = 8 supergravity [36], which has an E 7(7) symmetry of the equations of motion which is conjectured to lead to a discrete E 7 (Z) U-duality symmetry of the string theory [1]. V is the asymptotic value of V. The Lie algebra of E 7 can be decomposed into that of SL(2; R) × SO(6, 6) and its orthogonal complement X, so that V can be written as V = ST R where S ∈ SL(2; R), T ∈ SO(6, 6) and R is the exponential of an element of X. Then the dressed charge vectorZ = T RZ decomposes into 12 doublets of SL(2, R), consisting of 12 + 12 'dressed' electric and magnetic charges (p I ,q I ), together with 32 singlets of SL(2; R), the 'dressed' RR chargesr a . The ADM mass formula for Bogomolnyi states in the Einstein-frame is then where S is given in terms of λ = a + ie −2Φ by (3.2). The dependence on λ can be understood from group theory, and in particular the fact that the RR charges r occur without any dependence on λ follows from the fact that they are SL(2, R) singlets. 
For vanishing λ 1 , the mass is given by For the mass M s measured with respect to the stringy metric e 2Φ g µν , this becomes while for mass M d corresponding to the dual stringy metric e −2Φ g µν the Bogomolnyi mass formula is Thus, as in the N = 4 case, NS-NS electric and magnetic charges are interchanged under strong/weak duality, but the states carrying RR charges are non-perturbative at both weak and strong coupling. Whereas magnetic charges are associated with effects with the usual non-perturbative coupling dependence of e −1/g 2 , the RR charges are associated with ones with the stringy dependence e −1/g , similar to that found in matrix models [37]. Bogomolnyi States and p-branes The particle states with a given charge that saturate a Bogomolnyi bound are This gives the same result as the Bogomolnyi formula, but can also be applied to non-Bogomolnyi solitons, although in the latter case the lack of supersymmetry can allow quantum corrections to the masses and charges. Consider an action of the form where G is the (n+1)-form field strength for an n-form gauge potential C (the case n = 0 is a scalar field) and γ is a constant. Such actions arise as part of superstring effective actions. The constant γ takes the values γ = 2 for the heterotic string or for a NS-NS field in a type II string, while γ = 0 for RR fields of type I and II strings and γ = 1 for the type I vector fields (and type I scalars in D < 10) whose kinetic terms arise from a disk diagram. Such theories have p-brane solitons which couple to C and carry corresponding charges [28,30]. These consist of electrically charged (n−1)-brane solitons and magnetically charged (D −n−3)-brane solitons. It is straightforward to check (cf [30]) that the mass per unit p-volume M p of a p-brane soliton of this type has the following coupling constant dependence: Here the coupling constant g is given by the asymptotic value of e Φ (which is assumed constant) and T p is a dimensionful constant that can be thought of as an effective brane tension, and which contains the dependence on any coupling constants other than g. In the case n = 0, C is a scalar field which formally appears to couple to an electric p-brane with p = −1. This can be interpreted as an instanton in the Euclidean version of the theory [23,38], but this case will not be considered further here as we will be interested in the p-brane spectrum for Lorentzian signature. The scalar also couples to a magnetic (D − 3)-brane (e.g. a string in D = 4 or a 7-brane in D = 10 [38]) and these will be considered here. Configurations satisfying p-brane boundary conditions typically satisfy a Bogomolnyi bound of the form M p ≥ |Z| where M p is the mass per unit p-volume and |Z| is the electric or magnetic central charge, which is given in terms of the asymptotic values of the scalar fields and the magnetic charge P = S n+1 G (given by an integral over an (n + 1)-sphere surrounding the (n − 1)-brane) or the electric The precise coupling constant dependence of the relation between Z and Q, P is determined by supersymmetry, and the dependence on scalars other than Φ has been suppressed. Of particular interest are those p-brane solitons that break half the supersymmetry, and consequently saturate the Bogomolnyi bound so that the density is given by M p = |Z| [27,30]. This takes the form (4.2), but in this case the relation is expected to be exact so that the density of the corresponding quantum state is given by (4.2). 
The mass formula (4.2) is valid for the string metric g µν , but it is straightforward to generalise to an arbitrary metricg µν given bỹ Lengths and masses as measured in the asymptotic region (where e Φ ∼ g) with respect to the two metrics are related byL = g α L,M = g −α M so that the masses per unit p-volume are related byM Then for a general metricg µν (4.2) becomes For α = −2/(D − 2),g µν is the Einstein metric and (4.5) reproduces the results of [30]. The g-dependence of the masses clearly depends on the choice of metric. In the string metric α = 0, the ratio M p /T p is independent of g for the fundamental string of some string theory (ǫ p = 0 and γ = 2), while for the corresponding magnetic (D − 5)-brane soliton the ratio M p /T p is proportional to g −2 and for RR branes it is proportional to g −1 . Thus in the limit g → 0, the solitons and D-branes become infinitely massive and decouple, leaving the fundamental string in the perturbative spectrum. This is precisely what is required for a sensible perturbation theory in some small coupling constant λ (which might be the string coupling or be associated with the expectation of some other scalar): some states with masses that are independent of λ and which are the perturbative states of the theory, with all other states having masses that tend to infinity as λ → 0 so that they are non-perturbative states. This can now be applied to the strong coupling limit. As we shall see, for any supergravity or superstring theory there is a unique choice of rescaled metric (4.3) such that the ratio M p /T p in that metric is independent of g for some states and tends to zero as g → ∞ for all other states, so that they decouple in the strong coupling limit. There is then a sensible perturbation theory inĝ = 1/g with the perturbative states consisting of those with g-independent masses in the 'dual metric'. Note that the set ofĝ-perturbative states identified in this way is not the same as the set of non-perturbative states of the original string theory, i.e. those states for which the string-metric ratio M p /T p tends to infinity asĝ → 0. More generally, as we shall see, given any supergravity or superstring theory and any choice of coupling constant λ, there are precisely two preferred rescaled metrics, one which leads to a sensible non-trivial weak-coupling perturbation theory in λ of the type described above, and one which leads to a sensible non-trivial strong-coupling perturbation theory in 1/λ. Of course the theory can be studied in whichever metric one chooses. The point here is that the preferred metrics are the ones in which the perturbation theory looks most natural, with the perturbative states having λ-independent masses. It is conceivable that other choices of metric might lead to consistent perturbation theories in this way, but these would be non-maximal, as the perturbative spectrum would be strictly smaller than that arising from one of the two preferred metrics, as will be seen in the examples below. In this way, it is possible to identify the part of the perturbative spectrum consisting of BPS-saturated p-branes that emerges at strong coupling and at weak coupling in lambda for any given theory. We shall first check that this gives the expected results in cases where U-duality gives an alternative derivation of strongcoupling dynamics. 
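The scaling content of (4.2) and (4.5) can be summarized as follows. This is a reconstruction from the surrounding statements rather than a quotation, with ε_p = 0 for the fundamental (γ = 2) electric brane, ε_p = 2 for the corresponding γ = 2 magnetic soliton, ε_p = 1 for RR branes, and the rescaled metric taken to be g̃_{µν} = e^{2αΦ} g_{µν}:

```latex
\frac{M_p}{T_p}\;\sim\; g^{-\epsilon_p}
\quad\text{(string metric)},
\qquad
\frac{M_p}{T_p}\;\sim\; g^{-\left[\epsilon_p+(p+1)\alpha\right]}
\quad\text{(metric rescaled by } e^{2\alpha\Phi}\text{)},
```

which is g-independent precisely when α = −ρ_p with ρ_p = ε_p/(p + 1), reproducing the statement that each p-brane singles out one preferred rescaling in which its tension is coupling-independent.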
It will then be applied to other cases, and in particular to those theories described in the introduction, where it will help decide between different conjectures for the strong-coupling dynamics. Type II in Four Dimensions Consider the example of the four-dimensional type II string (compactified on a torus) for which the masses of Bogomolnyi 0-branes is given by (3.12) in the stringy metric. The g-perturbative states are those carrying NS-NS electric chargeq and have g-independent masses M s =q, while the non-perturbative states carrying NS-NS magnetic or RR charges have masses M s = g −2p and M s = g −1r respectively and become massless at strong coupling, M s → 0 as g → ∞. However, the NS-NS magnetic states become massless fastest and it is these that become thê g-perturbative states at strong coupling (ĝ = 1/g). For a general metric (4.3), (3.12) becomes so that for states carrying only one type of charge, the masses are M = g −αq , There are two preferred values of α which give good perturbation theories. The stringy metric α = 0 gives the perturbative spectrum described above. The value α = −2 gives the 'dual stringy metric' for which NS-NS magnetically charged states have M =p and are perturbative, while electric states have M =ĝ −2q and RR ones have M =ĝ −1r . States carryingq orr charge (including those with more than one type of charge) have masses that diverge in the strong-coupling limitĝ = 1/g → 0 and so are non-perturbative in the strong-coupling perturbation theory. For metrics with −2 < α < 0 there are states that become massless as either g → 0 or g → ∞ so that there is no consistent weak or strong coupling perturbation theory. In particular, for the Einstein metric α = −1 the RR states have g-independent masses but there is no consistent weak or strong coupling perturbation theory based on the RR states and neutral states alone. For α > 0, the Bogomolnyi formula (4.6) implies that all charged states have masses that depend on a negative power of g, so that they decouple in the g → 0 limit and any states with g-independent masses must be uncharged. Thus a weak-coupling perturbation theory, if consistent, would be non-maximal as it would contain no perturbative charged states, and would correspond to the limit of the usual perturbation theory in which the unit of charge is taken to infinity. In the strong coupling limit with α > 0, all charged states become massless so that it would not make sense to base a perturbation theory on uncharged states alone. Similar considerations apply for α < −2, for which there is a strong coupling perturbation theory involving no charged states. This leads us to the two preferred metrics with α = 0, −2 and the corresponding weak and strong coupling perturbation theories. This is precisely the result predicted by U-duality: the theory is self-dual, with the strong coupling theory given by the a dual version of the original theory with NS-NS electric and magnetic charges interchanged. Type IIB in D = 10 Type IIB supergravity and the type IIB superstring in D = 10 both have two 1-brane solutions, two 5-brane solutions and a self-dual 3-brane solution [28], together with a 7-brane and a 9-brane [23,38]. (We will only be interested in the Lorentzian signature solutions here and so will not consider the instantonic −1 brane [38].) The values of p, γ, ǫ p and ρ p ≡ ǫ p /p + 1 are given in the following table: Table 1 Type IIB Brane-Scan. 
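The entries of the IIB brane-scan can be assembled from the values quoted in the surrounding paragraphs (ǫ p = 0 for the NS-NS string, ǫ p = 2 for the NS-NS 5-brane, ǫ p = 1 for the RR branes, with ρ p = ǫ p /(p + 1)); the following is such a reconstruction, not a copy of the original table:
\[
\begin{array}{lcccc}
\text{brane} & p & \gamma & \epsilon_p & \rho_p \\
\hline
\text{NS-NS string} & 1 & 2 & 0 & 0 \\
\text{RR string} & 1' & 0 & 1 & 1/2 \\
\text{self-dual 3-brane} & 3 & 0 & 1 & 1/4 \\
\text{RR 5-brane} & 5 & 0 & 1 & 1/6 \\
\text{NS-NS 5-brane} & 5' & 2 & 2 & 1/3 \\
\text{7-brane} & 7 & 0 & 1 & 1/8 \\
\text{9-brane} & 9 & 0 & 1 & 1/10
\end{array}
\]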
The 1-branes are electric, the 5-branes are magnetic and the self-dual 3-brane has equal electric and magnetic charges. Those with γ = 2 couple to the NS-NS 2-form potential while those with γ = 0 couple to RR gauge fields. (There is no action of the form (4.1) for the 4-form potential with self-dual field strength, but the mass formula (4.2) can nonetheless be applied to the 3-brane with γ = 0 [30].) This can be extended to include the 7 and 9 branes. It follows from [23] that the RR p-branes with p = 1, 3, 5, 7, 9 all have M p /T p given by g −1 in the string metric and so by g −[1+(p+1)α] with respect to the general metric, corresponding to ρ p = 1/p + 1. In each case, the p-brane mass satisfies and for a given p-brane, this is g-independent for the metric (4.3) with α = −ρ p . For each such p-brane metric except those the preferred cases α = 0, −1/2, some of the soliton masses grow with g and others grow with 1/g, so that there is no sensible weak or strong coupling perturbation theory; this is seen explicitly in the table for the Einstein metric (α = −1/4) for which the self-dual 3-brane has gindependent mass. The special cases α = 0, −1/2 are the minimum and maximum values of ρ p and give masses that depend on g raised to a non-positive power or to a non-negative one, respectively. For α = 0, the g-perturbative Bogomolnyi spectrum consists of the 1-brane with γ = 2, which is the fundamental string so that we learn that the perturbation theory is that of a string theory. Note that even if we had started with the IIB supergravity theory (including p-brane solitons), we would have learned that the weakly coupled theory is a perturbative superstring theory, in which the γ = 2 soliton is identified with the fundamental string. All other p-brane densities depend on g to a negative power and so correspond to non-perturbative states whose (string metric) densities tend to infinity at weak coupling. For strong coupling, we learn that the appropriate metric for describing the dual theory is (4.3) with α = −1/2, and that the 1 ′ -brane which couples to the γ = 0 2-form becomesĝ-perturbative and is the fundamental string for the dual theory, while the other p-branes becomeĝ-non-perturbative states of the dual theory. Thus the dual theory is again a string theory. This is in complete agreement with the conjecture that the type IIB string has an SL(2; Z) duality symmetry [1] and so is self-dual: the strong coupling limit is a dual type IIB string theory in which NS-NS and RR charges have been interchanged. Thus although in the original string-metric, the NS-NS 5-brane has a mass which tends to zero as g −2 at strong coupling and the RR branes have masses that tend to zero as g −1 , it is the RR string that here wins out over the others and dominates the strong coupling theory, so that the theory is self-dual rather than dual to e.g. a 5-brane theory. Mass formulae similar to those given in the last section for the D = 4 heterotic string, which is also self-dual with an SL(2; Z) symmetry, can be written down for the type IIB theory. A 1-brane can carry an NS-NS charge q (coupling to the NS-NS 2-form) or a RR charge r (coupling to the RR 2-form) or both, in which case it is 'dyonic'. (Dyonic solutions can be constructed from the NS-NS string and RR string given in [9] by acting with SL(2; Z).) 
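As a quick worked check of the statement about the Einstein metric, using the exponent convention sketched above (so this is illustrative rather than the paper's own table), take α = −1/4:
\[
\epsilon_p + (p+1)\alpha =
\begin{cases}
0 + 2\cdot(-\tfrac14) = -\tfrac12 & \text{NS-NS string}, \\
1 + 4\cdot(-\tfrac14) = 0 & \text{self-dual 3-brane}, \\
2 + 6\cdot(-\tfrac14) = +\tfrac12 & \text{NS-NS 5-brane},
\end{cases}
\]
so in the Einstein metric the 3-brane mass is g-independent while other branes become light either as g → 0 or as g → ∞, which is why no consistent weak- or strong-coupling perturbation theory can be based on that metric.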
The Einstein frame mass per unit length M 1 satisfies a Bogomolnyi bound which, for vanishing pseudo-scalar, is Strings saturating this bound have masses saturating this bound have masses sat- in the string frame or in the dual string frame. The General Case Consider now an arbitrary supersymmetric theory in D dimensions, which might be a supergravity, superstring or super-p-brane theory. Usually, we will restrict ourselves to situations in which there is enough supersymmetry for the classical Bogomolnyi formulae to be reliable in the quantum theory. Let φ be some scalar field, which might be the dilaton Φ, or −Φ, or some compactification modulus. Then g = e φ is one of the coupling constants of the theory and one can attempt to define perturbation theory with respect to g. There is a spectrum of p-branes, and for each p-brane there is a constantρ p such that the mass per unit p-volume has a dependence on the coupling constant g that is given with respect to the Einstein metric by The dependence on all other coupling constants is absorbed into the effective tension T p . In string theory, the usual weak coupling perturbation expansion is that in e Φ , but expanding in other couplings such as e −Φ or one of the compactification moduli can give new insights and can lead to a new perturbative theory, which might be another string theory or a supergravity theory or perhaps something more exotic such as a p-brane theory. (The 11-dimensional supergravity or supermembrane theory has no scalars or coupling constants and so this analysis does not apply. If the 11-dimensional theory is compactified, however, one of the compactification moduli can be used as a coupling constant and a mass formula (4.11) can be found.) If the Einstein metric is conformally rescaled to g ′ µν = e 2αφ g µν (4.12) then since the conformal factor tends to g 2α in the asymptotic region, the mass formula becomes with respect to the rescaled metric. The weak coupling regime g << 1 is dominated by the brane or branes for whichρ p takes its minimum valueρ min and the appropriate metric is that given by (4.12) withα = −ρ min . The branes withρ p =ρ min are then the g-perturbative states, while the others are g-non-perturbative. For example, if the only brane withρ p =ρ min is a 1-brane, then the weakly coupled theory is a string theory, while if the only such states are 0-branes, then the perturbative theory is a supergravity theory. Similarly, the strong coupling regimê g << 1 withĝ = 1/g is dominated by the brane or branes for whichρ p takes its maximum valueρ max and the appropriate metric is that given byα = −ρ max , and again this might be a string theory or a p-brane theory or a field theory or a theory of a set of coupled p-branes with various values of p. It will be useful to define so that (4.11) becomes In the remainder of this paper, this structure will be explored in various theories. In each case, there are two preferred metrics corresponding toα = −ρ min andα = −ρ max (or equivalently α = −ρ min and α = −ρ max ) for weak and strong coupling respectively and the corresponding perturbative spectrum can be read off in each case. This will enable us to decide between various alternative duality conjectures, such as those discussed in the introduction. Before proceeding, however, a number of remarks are in order. Firstly, the p-branes fit into supermultiplets and the formula (4.15) applies to the whole supermultiplet, although we shall usually refer explicitly to only the member of the supermultiplet with lowest spin. 
Secondly, given a p-brane with M p ∼ g a T p for some a, there will be further states with M p ∼ ng a T p for integers n and these are degenerate in mass with a configuration of n p-branes which each have M p ∼ g a T p . Often in what follows, we will take T p to be the minimum value, corresponding to elementary solitons, and not consider the 'bound states' with M p = nT p explicitly. Thirdly, the argument given above will identify only the charged massive BPS states in the perturbative spectrum. There will also be neutral states in the perturbative spectrum, and in particular these can include particles with zero mass and p-branes with zero density. In some cases, there are non-BPS states that are metastable perturbative states in some coupling constant regime, even though they cannot be continued to states at other values of the coupling. When these are included in the p-brane spectrum, they can 'win' over other p-branes and so need to be taken into account. However, in most cases, they lose out to BPS states and in those cases will not be considered explicitly. Ten and Eleven Dimensional Examples As discussed in the introduction, there are at least two rival proposals for the strong coupling limit of the type IIA string, a 5-brane theory or an 11-dimensional theory, and the analysis here will enable us to decide between them. Type IIA supergravity and the type IIA superstring both have a 1-brane and a 5-brane Value of p In addition, there is an 8-brane solution of the massive type IIA theory [34], which formally has M p /T p = g −1 in the string metric [23], corresponding to ρ 8 = 1/9. The minimum value of ρ is ρ min = 0, corresponding to the 1-brane which becomes the fundamental string of the weakly coupled perturbative theory, for which the appropriate metric is the stringy metric with α = 0. This is just as expected. The other p-branes all have densities that tend to zero as g → ∞, so that arguments similar to those of [3] might suggest that all could be important for the strong coupling dynamics. However, as ρ max = 1, the 0-brane multiplet dominates at strong coupling, which is then aĝ-perturbative supersymmetric field theory. The BPS-saturated p-branes occuring in the strong coupling perturbative spectrum are then the the 0-branes, which fit into short massive IIA supermultiplets with spins ranging from 0 to 2. There is such a multiplet with electric charge nq 0 and mass M = |nq 0 |/g 1+α for each integer n = 0, where q 0 is some unit of charge [3]. In addition, there is the neutral massless type IIA supergravity multiplet (corresponding to n = 0) and these constitute the fundamental excitations of the dual theory. Thus the states that emerge in strong coupling perturbation theory are those of the D = 10 type IIA supergravity theory, plus an infinite tower of short massive multiplets. As we shall see in the next section, this is the perturbative spectrum that emerges from 11-dimensional supergravity compactified on a circle, when the radius of the compact dimension is treated as a coupling constant. 11-Dimensional Supergravity, M-Theory and Supermembranes 11-dimensional supergravity has a 2-brane [39] and a 5-brane soliton [40], but there are no scalar fields whose expectation values can be used as coupling constants. Thus there is no perturbation theory in 11 dimensions that can be used to apply the analysis developed above. However, if the theory is compactified, then the moduli of the compactification space can be used as coupling constants. 
For example, consider compactification on a circle. The radius R can be used as a coupling constant: the massless spectrum consists of a D=10 type IIA supergravity multiplet, and there are in addition Kaluza-Klein momentum modes with mass M ∼ q 0 n/R for integers n and some constant q 0 (with respect to the Kaluza-Klein metric [3]). These fit into massive supermultiplets that saturate a Bogomolnyi bound and have spins ranging from 0 to 2, and have masses that tend to zero in the large R limit. These have magnetic partners, which arise as generalised Kaluza-Klein monopoles in 11 dimensions and lead to 6-brane solutions on reducing to the 10-dimensional theory [2]. In addition, the 11-dimensional 2- and 5-branes give rise to further p-brane states in ten dimensions. When these branes are included in the spectrum, we see that at small R the 1-brane dominates and the perturbation theory is that of the type IIA string theory, so that the 11-dimensional theory compactified on a small circle should be treated as a string theory, not a field theory. If there is an 11-dimensional supermembrane or M theory whose low energy effective action is 11-dimensional supergravity, it should also have a 2-brane and a 5-brane soliton. On compactifying to 10 dimensions, the brane-scan would then again be that of table 2, so that the small R limit would give the perturbative states of type IIA string theory, while the large R limit would give type IIA supergravity. If instead the theory is compactified on an n-torus of volume V , the perturbative BPS states that emerge in perturbation theory in 1/V consist of n 0-brane multiplets in 11 − n dimensions, which transform as an n under the action of SL(n). Each of these multiplets is the base of a Kaluza-Klein tower. The mapping class group of the torus is the discrete subgroup SL(n; Z) and this is part of the U-duality group of the compactified theory.
The Type I String and Heterotic Strings in D = 10
Consider the ten-dimensional N = 1 supergravity theory coupled to SO(32) super-Yang-Mills, which is the low-energy effective field theory of both the type I and the SO(32) heterotic string theories. Type I strings can break and so are not stable and should not be expected to saturate any BPS bound. However, these give metastable states that should be included in the spectrum at weak coupling. A macroscopic type I string would break, although an unstable solution of the supergravity theory representing the field configuration outside a macroscopic type I string could still exist, at least in the weak coupling limit. However, as the type I string is not BPS saturated, there is no reason to expect that such states could be extrapolated to strong coupling. Indeed, type I strings become more likely to break as the coupling increases, so that such states should not exist at strong coupling, and there should be no corresponding states in the weakly coupled heterotic string. Presumably, such a fundamental type I string should have mass/length that is independent of g in the type I string metric. The mass/length of this type I string would then be M 1 ∼ g −1 T 1 in the heterotic string metric and M 1 ∼ g −1−2α T 1 in a general metric given by scaling the heterotic metric by e 2αΦ . The brane-scan for all these p-branes, including the type I string, is given in table 3. This table also includes the coupling constant dependence in the so-called 5-brane metric given in terms of the heterotic string metric by (4.3) with α = −1/3. Note the unusual g-dependence of the 0 and 6 branes in the type I metric.
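The relation between the heterotic and type I descriptions quoted above can be made slightly more explicit. The following sketch uses the same rescaling convention as before; the identification of α = −1/2 with a 'type I metric' is an inference from the g-dependence just quoted:
\[
M_1^{\text{het}} \sim g^{-2\alpha}\, T_1 , \qquad M_1^{\text{type I}} \sim g^{-1-2\alpha}\, T_1 ,
\]
so the heterotic string is g-independent for α = 0 (the heterotic string metric), while the conjectured type I string is g-independent for α = −1/2. In the latter metric the heterotic string has mass per unit length ∼ ĝ −1 T 1 with ĝ = 1/g, i.e. it appears as a non-perturbative state of the weakly coupled type I theory.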
The maximum and minimum values of ρ are indeed those corresponding to the type I and heterotic strings and the perturbative heterotic and type I strings are a dual pair with the strong coupling regime of one corresponding to the weak coupling regime of the other. There may be other p-branes of the theory, but it seems unlikely that they will have ρ < 0 or ρ > 1/2. Indeed, if there were pbranes with such values of ρ, then it seems likely that there would be problems for the perturbative formulation of either the heterotic or type I string, and no such difficulties are apparent. If the metastable type I string were not included in the brane-scan in this way for weak type-I coupling, then it would be hard to find a plausible duality picture unless there were some other as yet unsuspected p-brane states of the theory. For example, if there was no type I string states in the heterotic string at strong coupling and no other states with ρ < 1/6, then table 3 would support the conjecture that the strong coupling limit of the heterotic string is a perturbative 5-brane theory, and presumably the weakly coupled string would correspond to the strongly coupled 5-brane [30,32]. However, table 3 would still support the conjecture that the strong coupling limit of the type I string is a weakly coupled heterotic string, and this would then imply the implausible statement that the weakly coupled type I string is equivalent to weakly coupled perturbative 5-brane theory. One way out of this would be if the heterotic string solution of the type I theory [9,10] were not to be regarded as an acceptable soliton solution (perhaps because of its singularity structure [25]) but it has recently been shown that the heterotic string indeed arises as a D-brane in the type I string theory, making the conjecture of the existence of a type I soliton of the heterotic string all the more attractive. Compactified Type II String Theories Toroidally compactified type II superstring or supergravity theories have a number of massless scalar fields and their expectation values can all be regarded as coupling constants. Any of these can be used to define a perturbation theory and one can ask which of the p-branes are perturbative states for a given coupling constant. In this section we will consider this for the scalar field corresponding to the dilaton in ten dimensions, while the other coupling constants will be discussed in the next section. In each case, we will examine the p-brane spectrum of the appropriate supergravity theory and look for the perturbative states. We will find that for the 'stringy' coupling constant corresponding to the dilaton in ten dimensions, the perturbative spectrum at weak coupling is that of a string theory, while in almost all other cases (for strong stringy coupling or for other coupling constants) the perturbative states are those of a field theory. The main exception is in six dimensions, where the perturbative spectrum at strong string coupling is again that of a string theory. This then supports string/string duality in D = 6 dimensions, but not string/(D − 5)-brane duality in D > 6 dimensions. More precisely, the stringy strong coupling limit in D > 6 is some theory whose perturbative states are particles, not strings or p-branes. It will have p-brane solitons, but is not a perturbative p-brane theory involving the sum over p-brane world-volume topologies etc. 
In D dimensions, a (p + 1)-form gauge field couples to an electric p-brane, while the RR branes (γ = 0) have a corresponding brane-scan. The compactified theory has a number of scalar fields whose expectation values act as coupling constants, and in principle any of them can be given the preferential treatment usually afforded to the dilaton in string theory. A given coupling constant g is e T , where T is a noncompact element of the Lie algebra of G and is associated with the expectation value of a scalar φ ∈ G/G c . We will be interested in the dependence of masses on the coupling g and in particular in the strong coupling limit g → ∞ and the weak coupling limit g → 0. Our aim will be to find those states that are in the perturbative spectrum at weak coupling and at strong coupling.
Compactified Type II Theories and U Duality
The curve e tT parameterised by t defines an R + subgroup of G and induces a maximal embedding (7.1) for some subgroup H. Factoring both sides by their maximal compact subgroup induces a decomposition of the moduli space of the form G/G c ∼ H/H c × R + × R M , where H c is the maximal compact subgroup of H and R M is a vector space equipped with a representation of H [5]. A boundary of the moduli space is then given by going to infinity in the R + direction, i.e. by taking t → ∞. It was argued in [3,5] that it is sufficient to consider those subgroups H that can be obtained by removing from the Dynkin diagram of G a single vertex representing a non-compact simple root. In a quantum theory for which the supergravity is a low-energy effective action (as in string theory), the classical supergravity duality group is broken to a discrete subgroup G(Z), which is the conjectured U-duality symmetry of the theory [1]. The n-form gauge fields of the supergravity theory formulated with respect to the Einstein metric transform as a representation R n of G and this implies that the magnetic p-branes with p = D − n − 3 that couple to these have charges transforming as the R n representation of G while the corresponding electric p-branes with p = n − 1 have charges that transform according to the contragredient representation R ′ n . In either case, the embedding (7.1) induces a decomposition of the G representation R into terms R H i with definite R + weight, where the R H i are representations of H and the ǫ̃ i are the corresponding R + weights. This implies that the mass per unit p-volume of the p-branes transforming as the R H i of H has a definite g-dependence with respect to the Einstein metric g µν , where the effective tension T p can depend on the other coupling constants in H/H c × R M . Then for the general metric e 2αφ g µν , the mass per unit volume acquires a corresponding α-dependent power of g. The p-branes can then be assigned to representations (R H i , ǫ̃ i ) with H representation R H i and R + weight ǫ̃ i , while ρ̃ i is defined by ρ̃ i = ǫ̃ i /(p + 1). In this way the parameters ρ̃ can be found for all p-branes and all choices of H. For a given H the p-branes corresponding to the maximum and minimum values of ρ̃ are the perturbative states of the strong and weak coupling limits, respectively. We shall now examine the consequences of this in a number of examples, finding broad agreement with the conjectures of [3,5]. This gives an important check on these conjectures, which were based on the 0-brane spectrum only. It is a nontrivial result that the behaviour of the p-brane spectrum supports these conjectures, rather than e.g. the conjectured string/(D-5)-brane duality. Furthermore, this analysis will enable us to treat cases that could not be analysed in [3,5].
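Schematically, the g-dependence described in the previous paragraph takes the form below; the sign convention for the R + weight is an assumption, chosen so that the statements about the weak and strong coupling limits come out as described in the text:
\[
M_p \;\sim\; g^{-\tilde\epsilon_i}\, T_p \quad \text{(Einstein metric)}, \qquad M_p \;\sim\; g^{-\left[\,\tilde\epsilon_i + (p+1)\alpha\,\right]}\, T_p \quad \text{(metric } e^{2\alpha\phi} g_{\mu\nu}\text{)}, \qquad \tilde\rho_i = \frac{\tilde\epsilon_i}{p+1} .
\]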
We shall consider first the maximal supergravity theories in D dimensions, given by compactifying 11-dimensional supergravity on an 11 − D torus or 10-dimensional type IIA or IIB supergravity on a 10 − D torus, which arise from compactified M-theory or compactified type II superstrings.
Type II in D=7, Duality Group G = SL(5)
The maximal supergravity in 7 dimensions has duality group SL(5). The relevant representations decompose under SL(4) × R + , where the exponent is the R + weight ǫ̃. This gives the brane-scan for the 0-branes: the maximum value of ρ̃ is ρ̃ max = 3 and is carried by four 0-branes, fitting into four D = 7 supergravity multiplets that transform as a 4 ′ under SL(4). This gives the same set of perturbative states that emerged in section 4 from 11-dimensional supergravity compactified on a 4-torus of volume V in the large V limit, where the group SL(4) acted as the mapping class group of the T 4 . This is consistent with the proposal [3] that the theory that emerges in the strong coupling limit of the seven-dimensional type II string theory is an 11-dimensional theory compactified on T 4 . Moreover, we learn that the perturbative theory that emerges is a particle theory, not a supermembrane theory. Other decompositions lead to further brane-scans; for one of these, both strong and weak coupling limits give the same perturbative string theory, which is that of a compactified type II string. Similar considerations apply to the maximal theory in D = 6, with duality group O(5, 5).
The SL(5) × R + Decomposition
The relevant representations decompose under SL(5) × R + , and the resulting brane-scan for the 0-branes is as follows: the maximum value of ρ̃ is 3 and is carried by 5 0-branes, fitting into 5 D = 6 supergravity multiplets that transform as a 5 ′ under SL(5). This gives the same set of perturbative states that emerged in section 4 from 11-dimensional supergravity compactified on a 5-torus of volume V in the large V limit, where the group SL(5) acted as the mapping class group of the T 5 . The minimum value of ρ̃ is −5 and is carried by the SL(5) singlet 0-brane, and the perturbative theory is that of 7-dimensional type II supergravity compactified on a circle, in the large radius limit. For another decomposition the weak and strong coupling limits are related by an SL(2), so that the theory is self-dual and the same perturbative theory as arises in both these limits also emerges from the type IIB theory compactified on T 4 in the limit of large or small torus volume, with SL(2) the type IIB duality group and SL(4) the T 4 mapping class group.
Heterotic Theories and K 3 Compactifications
In the previous sections we have considered theories with maximal supersymmetry whose low energy supergravity effective action comes from toroidally compactifying N = 2 supergravity theories from ten dimensions. In this section, we will turn to theories with half the maximal number of supersymmetries - N = 1 in D = 10 or N = 4 in D = 4 - which can result from toroidal compactification of heterotic or type I theories, or from compactification of type II theories on K 3 . In each case, the brane-scan is again determined by low-energy supersymmetry, so we shall start from the brane-scan of the supergravity theory, find which are the perturbative states for the various coupling constants, and then seek an interpretation of the resulting perturbation theory. In D > 4 dimensions, the half-maximal supergravity theory coupled to n abelian vector multiplets (resulting from toroidal compactification of N = 1 theories in D = 10) has scalar fields lying in the coset space (8.1), and as yet no extra exotic branes are suspected in these theories.
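For orientation, the standard form of the coset space (8.1) for the half-maximal theory in D > 4 coupled to n abelian vector multiplets, assumed here, is
\[
\frac{O(10-D,\,n)}{O(10-D)\times O(n)} \times R^{+} ,
\]
with the R + factor corresponding to the dilaton, which is the coupling constant studied first below.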
We shall start by studying the known branes and investigating their dependence on the various coupling constants in (8.1). The Heterotic Dilaton In this section, we shall investigate perturbation theory in the coupling constant g in the R + factor of (8.1), which corresponds to the heterotic string dilaton Φ, The brane scan is In section 5.3, however, reasons were given for expecting new type I branes with a higher value of ρ in D = 10, and the conjectured type-I/heterotic duality in D ≥ 8 dimensions [3] suggests that the highest value of ρ should be carried by such branes in D = 8, 9, 10. We shall therefore concentrate on D < 8 dimensions here. In D = 6, the strong coupling perturbative spectrum consists of the (D − 5)-brane, which in this case is a string. There are no 0-branes, so this doesn't correspond to the spectrum of a toroidally compactified heterotic string, but is precisely the weak-coupling perturbative spectrum of the type IIA string theory compactified on K 3 , with type IIA coupling constant g ′ = 1/g. Thus the strongly coupled heterotic string in D = 6 is described by a weakly coupled type IIA string theory compactified on K 3 , as expected [1,3]. Consider an 11-dimensional theory (supergravity or M-theory) compactified on K 3 × S 1 . Large and small values of g correspond to large and small values of the radius of the circle, respectively. Thus, we learn that in the large radius limit, the perturbation theory arising from expanding in the inverse radius is a string theory, as expected from the corresponding results for compactifying from 11 dimensions to 10 on S 1 . In D = 5 dimensions, the strong coupling spectrum consists of a single 0-brane supermultiplet, and this is described by the supergravity theory resulting from type IIB compactified on K 3 × S 1 , with the g → ∞ limit corresponding to that in which the radius of the circle becomes infinite [3]. In D = 7 dimensions, however, the BPS-saturated brane with the highest value of ρ is a supermembrane. Precisely the same brane-scan arises from 11-dimensional supergravity compactified on a K 3 with radius g 1/3 , and it was suggested in [3] that the strongly coupled D = 7 heterotic string should correspond to K 3 -compactified 11-dimensional supergravity in the large radius limit. However, if this were the case, then one would expect the perturbative spectrum to consist of 0-branes instead of membranes; this was certainly the way things worked so far. There seem to be two possibilities. The first is that this is indeed the correct perturbative spectrum and that the large g behaviour is described by a perturbative supermembrane theory in D = 7, as considered in [41]; it is remarkable that in contrast to the case of 11- in D = 7 has such non-BPS 0-brane states that are metastable at strong coupling but which may not continue to states at weak coupling, then these would be the perturbative states at strong coupling and this is what must happen if the strong coupling limit is to be K 3 -compactified 11-dimensional supergravity. Other Coupling Constants We now turn to the coupling constants in the factor in (8.1). From [5], the distinct possibilities are labelled by an integer n and constitute going to a boundary in the moduli space in which for some M. We will be interested in the dependence on the coupling constant g The resulting brane-scan for the 0, 1 branes and their magnetic duals is Table 11 Heterotic Brane-Scan. 
In D ≥ 4 the maximum and minimum values of ρ are ±1 and are carried by n 0-branes, so that the perturbative spectrum for both g and 1/g consists of n 0-brane multiplets in D > 4 and 2n 0-brane multiplets in D = 4. We now turn to the interpretation of these results, using the discussion of [5]. The factor SL(n,R) SO(n) in (8.2) is the moduli space of fixed-volume metrics on an ntorus. From earlier discussions, the perturbative spectrum of a supergravity theory is the moduli space of conformal field theories on K 3 [42]. The spaces for n = 1, 2, 3 were identified in [5] as the moduli spaces of certain degenerations of K 3 down to 4 − n dimensions. It will be convenient to denote the m = 4 − n dimensional squashed K 3 as Ξ m , with Ξ 4 = K 3 . There are in fact two distinct 1-dimensional degenerations [5], which we will denote Ξ 1 1 and Ξ 1 2 . It was argued in [5] that taking the strong coupling limit in certain directions in coupling constant space corresponds to going to boundaries of the K 3 moduli space in which K 3 degenerates to one of the Ξ m . As in [5], it will be assumed here that taking this limit makes sense and corresponds to a compactification of M-theory on the Ξ m . Consider the case of the six-dimensional theory with scalar coset space which arises from the toroidally compactified heterotic string, type IIA compactified on K 3 and the 11-dimensional theory compactified on K 3 × S 1 . For the coupling constant in the R + factor, the weak coupling theory is the perturbative heterotic string and the strong coupling is described by the perturbative type IIA string, as discussed in section 8.1. Consider first the degeneration The charged perturbative spectrum in g at weak coupling and in 1/g at strong coupling both consist of a single six-dimensional 0-brane supermultiplet, so that both the weak and strong coupling limits are field theories. The limit g → ∞ for the coupling constant g corresponding to the R + factor in (8.5) corresponds, for the heterotic string, to a boundary of the T 4 moduli space in which one of the circles in T 4 becomes large, so that in the limit the theory decompactifies to the 7-dimensional heterotic string, which has moduli space The limit g → 0 corresponds to one of the circles becoming small, but this is related by T-duality to the large radius limit of the same theory, so that the theory is selfdual and the weak and strong coupling limits (with respect to g) define equivalent theories. In the limit g → ∞ (or g → 0), one recovers 7-dimensional Lorentz invariance. Consider now the interpretation of these limits in terms of the 11-dimensional M theory. The limits g → 0 and g → ∞ correspond to boundaries of the moduli space of compactifications of the 11-dimensional theory on K 3 × S 1 which lead to 7-dimensional theories and so must be the ones in which the circle becomes large or small. The large radius limit decompactifies to the 7-dimensional theory resulting from compactifying from 11-dimensions on K 3 , which is expected to correspond to the 7-dimensional heterotic string [3]. We also learn that the small radius limit is equivalent to the large radius limit (using the heterotic equivalence), so that the 11-dimensional M-theory must also exhibit some form of T-duality. 
Thus the perturbation theory of the D = 6 heterotic string with respect to the coupling constant that vanishes when the theory decompactifies to D = 7 is described by the field theory arising from compactifying the 11-dimensional theory on K 3 × S 1 and expanding in the inverse circle radius, as expected from [3]. Similar remarks apply to other degenerations. For the degeneration (8.6) we obtain two 0-brane supermultiplets (transforming as a 2 of SL(2, R)) as the perturbative spectrum both at weak and strong coupling for the coupling constant corresponding to the R + factor in (8.6). This corresponds to (i) the 8-dimensional heterotic string on T 2 and (ii) the 11-dimensional theory on Ξ 3 × T 2 , assuming this limit makes sense [5]. In each case g is related to the volume of the T 2 , so that g → ∞ corresponds to decompactification to 8 dimensions, while the g → 0 limit defines an equivalent theory, related by T-duality. For the next degeneration we obtain three 0-brane supermultiplets as the perturbative spectrum both at weak and strong coupling. This corresponds to (i) the 9-dimensional heterotic string on T 3 and (ii) the 11-dimensional theory on Ξ 2 × T 3 . Finally, for the last degeneration we obtain four 0-brane supermultiplets as the perturbative spectrum both at weak and strong coupling. This corresponds to (i) the 10-dimensional heterotic string on T 4 and (ii) the 11-dimensional theory on Ξ 1 × T 4 . Similar results apply in other dimensions D ≤ 6 (cf. [5]) with a perturbative field theory emerging in each case when expanding in couplings corresponding to scalars other than the heterotic dilaton. The results are consistent with the conjecture that the heterotic string in D dimensions is equivalent to 11-dimensional M-theory compactified on Ξ 11−D and to the type IIA string compactified on Ξ 10−D , with Ξ n defined to be K 3 × T n−4 for n ≥ 4. This gives table 12, whose columns list the dimension D, the symmetry group G, and the space on which M-theory is compactified.
The Chiral Theory in Six Dimensions
The N=2 supersymmetric theories in six dimensions considered so far have had (1,1) supersymmetry, i.e. the two supersymmetries have opposite chirality. There are also (2, 0) theories, in which the two supersymmetries have the same chirality. Such a theory has 5 self-dual field strengths together with n anti-self-dual ones; pairing the self-dual field strengths with 5 of the anti-self-dual ones and defining suitable combinations, we obtain 5 unconstrained field strengths G + a , their duals G − a = * G + a , and n − 5 anti-self-dual field strengths H u . The theory with n = 19 emerges as the low-energy effective field theory for the type IIB superstring compactified on K 3 , with U-duality group O(5, 21; Z) [5]. The p-brane spectrum for general n consists of 5 self-dual strings and n anti-self-dual ones, or equivalently 5 ordinary (non-chiral) strings and n − 5 anti-self-dual ones. Consider the compactification of this theory to D = 5. If the type IIB theory is compactified on K 3 × S 1 with circle of radius R, it is equivalent to the type IIA theory compactified on K 3 × S 1 with circle of radius 1/R, and this is in turn related to the D = 5 heterotic string [3] and the 11-dimensional theory on K 3 × T 2 . However, in D = 6 the chiral supergravity theory is not related to any of the theories in table 12. The general decomposition is given in (9.4), where the superscript is the R + charge. The corresponding values of ρ for the three terms on the right hand side of (9.4) are 0, 1/2 and −1/2, respectively. If there are no metastable non-BPS states with higher or lower values of ρ in the spectrum, then for the coupling constant corresponding to the R + factor in (9.3), the perturbative states at strong coupling are n non-chiral strings (i.e.
ordinary strings satisfying no self-duality condition) coupling to G + a and transforming as an n of SL(n, R) while those at weak coupling are n non-chiral strings coupling to G − a and transforming as an n ′ of SL(n, R). The case n = 1 is straightforward to interpret: the decomposition corresponds to the weak (or strong) coupling limit of the type IIB string [5]; the R + factor corresponds to the type IIB string coupling constant, given by the exponential of the dilaton expectation value, while O(4,20)/(O(4) × O(20)) is the moduli space of conformal field theories on K 3 [42]. At weak coupling, the perturbative states correspond to the fundamental type IIB string, while at strong coupling we obtain the dual RR string, which is the fundamental string of the dual theory [9]. The decomposition appears to correspond to expanding in the volume of the K 3 surface [5]: the first factor is the moduli space of fixed volume K 3 metrics, SL(2,R)/U(1) is the moduli space of type IIB string theories in D = 10 and R + is the K 3 volume. However, the perturbative spectrum consists of two strings, which is unlike any conventional string theory. (The only way out would be if there were some metastable states dominating the perturbation theory, such as Kaluza-Klein modes on K 3 . Note however that no such states should occur in the previous example of n = 1 if the type IIB string interpretation is to hold.) The perturbative spectra arising for n > 1 are remarkable in that they are quite unlike anything that has been seen in the cases considered so far (again, unless some metastable states dominate). Recall that for a supergravity theory compactified on an n-torus, the perturbative spectrum for the coupling constant corresponding to the inverse volume of the torus consists of n charged 0-brane multiplets that become massless supergravity multiplets in the large volume limit and which transform as an n under the SL(n) torus mapping class group. Each of these n multiplets carries a minimal charge e 0 with respect to a corresponding gauge field and has partners with charges me 0 for all integers m, so that the spectrum includes n Kaluza-Klein towers. The same perturbative spectrum emerged from superstring theories compactified on an n-torus, when expanded with respect to the inverse volume of the torus. Here we are finding something rather different: n superstrings instead of n super-0-branes. Furthermore, each of these has a charge q 0 with respect to a particular 2-form G + and has partners of charge mq 0 for all integers m, so that we are obtaining Kaluza-Klein-like towers of superstrings. These facts, together with the presence of the SL(n, R)/O(n) factor in (9.3) and comparison with the various limits of the non-chiral supergravities considered in the last section, suggest that the theory could have an interpretation in terms of a compactification of some theory on an n-torus (or perhaps on some space that is locally T n , such as an orbifold of T n ). However, the required theory cannot be any known theory compactified on a torus in the conventional way from n + 6 dimensions, as the wrong spectrum would emerge. For example, the case n = 5 would require reduction from 11 dimensions. Reducing the 11-dimensional M-theory on T 5 gives the usual type II string theory in D = 6 which is rather different from what we want, although it is possible that reduction of M-theory on some orbifold of T 5 could work here.
To get more insight, consider the compactification of the chiral theory to 5dimensions on a circle or radius R, as this gives the 5-dimensional heterotic theory [3], with moduli space Consider the coupling constant g such that the limit g → ∞ corresponds to the degeneration (9.3). Taking g → ∞ corresponds to the decompactification to 5 + n dimensions:-the heterotic string in 5-dimensions arises from the heterotic string in 5 + n dimensions on compactification on a torus T n of volume V , and the coupling constant g of the 5-dimensional theory corresponds to the volume V , so that g → ∞ corresponds to V → ∞. Thus taking g → ∞, we recover Lorentz invariance in 5 + n dimensions. On the other hand, taking R → ∞ for fixed g we recover 6-dimensional Lorentz invariance. This suggests taking both decompactification limits together, g → ∞, R → ∞ might give a theory a (6 + n)-dimensional theory, and in particular an 11-dimensional theory for n = 5. Unfortunately, the situation is not quite so simple as one has to be careful as to how the limits are taken. Consider the case n = 2 for example. The string coupling constant g 7 of the D = 7 heterotic string is given by g 2 7 = g 3 R 2 , so that on taking the limit g → ∞ one must take R → 0 holding g 3 R 2 constant if one is to obtain the heterotic string with finite coupling g 7 . However, the heterotic string in D = 7 is conjectured to be equivalent to M-theory compactified on a K 3 of volume V = g 4/3 7 [3], so that the limit g 7 → ∞ corresponds to the limit in which the volume of the K 3 becomes infinite, so that the theory decompactifies and 11-dimensional Lorentz invariance is regained. It is not clear whether this is related to the limit we are interested in here, which is given by first taking R → ∞ to regain the type IIB string compactified on K 3 and then taking g → ∞ so that the moduli space decomposes as in (9.6). However, it is plausible that the latter limit might define a theory in more than 6 dimensions. Thus the strong coupling limit corresponding to (9.3) of the chiral d = 6 theory, if it exists, appears to be a theory in at least six dimensions and probably more, which has an n-dimensional lattice of superstrings in the perturbative spectrum, acted on by SL(n, Z). If, as suggested above, it arises from some theory compactified on T n in the limit of large torus volume, then the limiting theory should live in (at least) 6 + n-dimensions and should have a moduli space Note the absence of any dilaton-like R + factor. For n = 5 there are two distinct limits (9.3) (cf [5]), so there could be two distinct theories in (at least) 11 dimensions in this case, and these would have no scalars and so no coupling constants and no perturbation theory. It will be convenient to refer to these theories as N-theories. There do not appear to be any known supergravity theories that could serve as the low-energy limits of these N-theories, so if these N-theories do exist, then either they do not have a low-energy effective field theory, or there are some new supergravity-type theories that are yet to be found. So what could these N-theories be? One possibility is that they arise from and which when compactified on S 1 should give the d = 7 heterotic string, which is conjectured to be equivalent to M-theory on K 3 [3]. However, (9.9) is the moduli space for fixed-volume Ricci-flat metrics on K 3 and this suggests that d = 8 Ntheory might arise from some theory in 12 dimensions compactified on K 3 . 
Let us refer to this conjectured 12-dimensional theory as Y-theory. (⋆ After this paper appeared, it was argued by Dasgupta and Mukhi and by Witten that the chiral theory in six dimensions given by compactifying the type IIB string on K 3 can also be obtained from M-theory by compactifying on the orbifold T 5 /Z 2 .) This can be extended to the other degenerations. The degeneration (9.3) with n ≥ 2 would correspond to Y-theory compactified on Ξ 6−n and for n ≤ 2 perhaps to Y-theory on K 3 × T 2−n . Then a consistent picture seems to emerge which is similar to that proposed for the non-chiral D = 6 theory and its limits in the previous section. It is conceivable that both the Y-theory picture and the orbifold picture are correct and that there is a relation between them, which would be similar to the relation between the Aspinwall-Morrison picture [5] and the Horava-Witten description [26] of the theories described in the last section. The type IIB theory might also be obtained from 12 dimensions, which would afford a geometrical interpretation of the SL(2) symmetry, without needing to go through 9-dimensional theories, as in [19,21]. Needless to say, this is rather speculative, but if M-theory, why not Y-theory?
Characteristics of Low-Temperature Polyvinyl Chloride Carbonization by Catalytic CuAl Layered Double Hydroxide : A good way to make carbon materials was presented in low-temperature polyvinyl chloride (PVC) carbonization by catalysis. The process of low-temperature PVC carbonization by CuAl-layered double hydroxide (CuAl-LDH) was investigated by thermogravimetric analysis (TGA) and tubular furnace. The results show that CuAl-LDH accounting for 5% of PVC mass enabled acceleration of the dehydrochlorination in PVC as soon as possible and maximized the yield of the PVC carbonized product. The vacuum with 0.08 MPa, 20 ◦ C / min heating rate and 90 min carbonized maintenance time were optimal for PVC carbonization. Moreover, the best morphology and yield of the carbonized product was provided at a carbonization temperature of 300 ◦ C. Introduction Polyvinylchloride (PVC) is a general-purpose resin, ranked second to polyolefins amongst thermoplastics in total world production volume [1]. Due to its properties such as excellent acid and alkali resistance, wear resistance, flame resistance, insulation, excellent processing performance, cost-effectiveness and good compatibility with plasticisers, PVC can greatly improve the mechanical properties of most materials. With the rapid increase in the use of plastics, plastic waste accounts for a large proportion of municipal solid waste and as such, has gained more attention regarding the inefficient methods for its disposal [2][3][4]. Amongst all types of plastics, PVC is one of the most important but also a potentially hazardous polymer material in the environment [5,6]. Current disposal methods include landfill, incineration, chemical recovery, etc. Traditional landfill accumulation and other treatment methods are non-biodegradable approaches. Incineration disposal has a large amount of treatment, good reduction and can recover heat energy, but it also leads to serious environmental pollution. When PVC is incinerated, it generates a considerable amount of smoke as well as potentially harmful volatile organic compounds [7][8][9], such HCl. PVC contains chlorine as high as 56.7 wt % [10,11] and when burnt, releases a class of highly toxic substances such as dioxins. Additionally, PVC can generate chlorinated hydrocarbons during pyrolysis, which act as precursors for toxic emissions of substances such as polychlorinated dibenzo-p-dioxine and dibenzofurane [12,13]. The usual approach for chemical recycling of PVC is the cracking/pyrolysis method, which includes hydrocracking, thermal cracking and catalytic cracking [14,15]. Feed-stock recycling pyrolysis is capable of converting mixed, unwashed plastic waste into fuels, monomers or other valuable materials [16]. Qiao et al. indicated that the pyrolysis and carbonization of two forms of waste PVCs were examined to understand the carbonization process of waste PVC and thus develop mesoporous structures in carbonized products [17]. This is also a new research direction for the reuse of waste PVC materials. In its initial phase, the thermal degradation of PVC is primarily a process of sequential dehydrochlorination that forms conjugated polyene sequences. Chlorine in PVC leaves pyrolysis units mainly as hydrogen chloride [18,19], and HCl is released in large amounts at a relatively lower temperature (≤300 • C) [20]. 
Limitations in the solid waste combustion or energization processes (pyrolysis or gasification) are due to the chlorine compounds in PVC and to the corrosive effect of the generated acid gas (HCl) on the inner wall of the equipment; as a result, the difficulty of the subsequent purification treatment is increased. Therefore, the presence of a chlorine source causes potential danger in the generation of highly toxic dioxin compounds [21,22]. The prior removal of chlorine from PVC in the form of hydrogen chloride at a relatively low temperature is vital for inhibiting the generation of hazardous dioxins, followed by stabilization of the dechlorinated polyene fractions so that they can be converted into char at elevated temperatures. Hydrotalcite is a non-toxic, environmentally friendly compound which can be commercially produced by the co-precipitation method. The use of hydrotalcite as a PVC thermal stabilizer was initially proposed and tested by Kyowa Chemical Industries, Japan [23]. The stabilization mechanism was further studied by van der Ven et al., who reported that the process involves the following steps: the interlayer counter ions initially react with HCl and, secondly, the layers react with HCl to form metal chlorides with loss of the layered structure [24]. Pike et al. showed that copper compounds greatly suppress benzene production during PVC pyrolysis [25]. Copper-containing layered double hydroxides (LDHs), with the extremely high catalytic activity of copper ions, show excellent catalytic performance for volatile organic compound (VOC) oxidation and organic dye degradation, as well as phenol hydroxylation, as described by Fan et al. [26,27]. Yang et al. showed that the CuAl-layered double hydroxide (CuAl-LDH) additive has a strong charring effect on PVC/CuAl-LDH nanocomposites. This result demonstrates that the CuAl-LDH in the PVC matrix has further favored the oxidative dehydrogenation-crosslinking-charring process and increased the char yield [28]. Chen et al. indicated that CuAl-LDH exhibited extraordinary catalysis in the pyrolysis and combustion of PVC. The addition of CuAl-LDH significantly decreased the release of aromatics and increased the yield of solid char [29]. In this study, the characteristics of low-temperature (about 300 °C) PVC carbonization catalysed by CuAl-LDH were investigated. Many factors affect PVC pyrolysis and carbonization; the effects of the content of CuAl-LDH, the pyrolysis atmosphere, the carbonization temperature, the heating rate and the carbonization maintenance time on the yield of the carbonized PVC product were tested by thermogravimetric analysis (TGA) and in a tubular furnace. The morphology and composition of the carbonized products were examined by scanning electron microscopy (SEM) and elemental analysis.
Materials
PVC resin was kindly provided by Shanghai Chlor-Alkali Chemical Industry Stock Co., Ltd. (Shanghai, China). Plasticizer (Tri-Octyl Tri-Meta-Benzoate, abbreviated TOTM) was supplied by Aladdin Chemical Co., Ltd. (Shanghai, China). The preparation procedure and structural characterization of CuAl-LDH have been reported in [28].
Preparation of Sample
PVC/CuAl-LDH composites were directly prepared via melt blending of the desired amount of CuAl-LDH with 100 g PVC. The formulations were mixed on a two-roll mill at 105-110 °C for 30 min, then the resulting mixtures were cooled to room temperature for testing. Table 1 shows the mixing mass ratio of the PVC formulations.
Methods of Characterization and Instruments
Thermogravimetric analysis was performed using a thermogravimetric analyzer (TGA, SII-TGA/DTA 6300, SII Nano Technology Inc., Northridge, CA, USA). The heating rate was 20 °C/min from 50 °C to 600 °C under a nitrogen atmosphere. The pyrolysis experiment was carried out in a tubular furnace (SG-GS1400, Shanghai Jiejie Electric Furnace Co., Ltd., Shanghai, China). The temperature, heating rate and maintenance time were regulated by a computer (Lenovo B5040, Beijing Lenovo Co., Ltd., Beijing, China). The varying atmosphere was provided by the TGA and a vacuum pump (SHB-LL1, Zhengzhou Changcheng Branch Industry and Trade Co., Ltd., Zhengzhou, China). The experimental apparatus is shown in Figure 1. The microstructure of the carbonized product was observed by scanning electron microscopy (SEM, Hitachi S-3400N, Hitachi, Japan) at magnifications of 10,000 and 20,000 times. The elemental analysis of the carbonized product was carried out with an element analyzer (Vario EL III).
Thermogravimetric Analysis of Catalytic Pyrolysis of PVC
The pyrolysis processes of PVC and its composites were investigated by TGA in the temperature range of 50 °C to 600 °C. The DTG and TG curves are presented in Figure 2a,b. The pyrolysis trend of PVC/CuAl-LDH-4% is similar to that of PVC, and both present two different mass loss stages. The first region ranges from 50 °C to 400 °C, and the second ranges between 400 °C and 600 °C. The first stage is attributed to the dehydrochlorination of PVC and volatilization of the plasticizer, and the second is due to the further degradation of the molecular chains after dehydrochlorination [29]. From the DTG curves shown in Figure 2a, at the first stage, the temperature at the maximum mass loss rate of PVC/CuAl-LDH-4%, appearing at 305 °C, is lower than that of the PVC resin. Meanwhile, the degradation in the first mass loss stage spans a narrower temperature range, from 241 °C to 352 °C. This is because the released hydrogen chloride is absorbed by CuAl-LDH to form many Lewis acid sites that can combine with electrons and hydrogen radicals, which gives rise to the further dehydrochlorination of PVC. This indicates that the CuAl-LDH can significantly accelerate the degradation of PVC and reduce the pyrolysis temperature. According to the TG curves shown in Figure 2b, from the beginning of the second stage to the end of thermal degradation, PVC/CuAl-LDH-4% always displays a higher residue than PVC. This may be attributed to the fact that a number of the Lewis acid sites formed during the first stage combine first with hydrogen radicals, impeding the reaction of hydrogen radicals with the conjugated alkene and decreasing the release of low-molecular-weight hydrocarbons. These results indicate that CuAl-LDH not only inhibits molecular chain fracture, but also increases the residue yield [9,30].
Thermogravimetric Analysis of Different Contents of CuAl-LDH on the Carbonization of PVC
The pyrolysis processes of PVC and its composites were investigated by TG in the temperature range between 50-600 °C under a nitrogen atmosphere. From Figure 3, the TG curves of PVC/CuAl-LDH are similar to that of PVC. It is clearly observed that the degradation of PVC is accelerated after the addition of CuAl-LDH. According to the TG curves, the onset degradation temperature (T onset ), the temperature of the maximum mass loss rate of the first stage (T max1 ) and the yield of residue at the end temperature of 600 °C (W end ) can be determined. The determined values are given in Table 2. From Figure 3, the onset degradation temperature of PVC/CuAl-LDH-5%, appearing at 238 °C, and the temperature at the maximum mass loss rate of PVC/CuAl-LDH-5%, appearing at 300 °C, were both the lowest compared to those of the other PVC composites. This indicates that the addition of CuAl-LDH-5% can most effectively stimulate the dehydrochlorination of PVC and significantly accelerate the degradation of PVC. Additionally, from Table 2, at the maximum temperature of 600 °C, the yield of residue of PVC/CuAl-LDH-5% (20.48%) was more than three times that of PVC without CuAl-LDH. It may be concluded that the addition of CuAl-LDH promotes charring during the pyrolysis process, and the charring amount of PVC/CuAl-LDH-5% was larger than that of PVC and its other composites. Thus, CuAl-LDH at 5% of PVC mass is the best choice for PVC carbonization.
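As a practical illustration of how T onset , T max1 and W end can be read off from a TG curve, the following Python sketch assumes the curve is available as arrays of temperature and residual mass; the function name, the 2% mass-loss definition of the onset temperature and the 400 °C cut for the first stage are illustrative assumptions rather than values taken from the paper.

```python
import numpy as np

def tga_characteristics(temp_C, mass_pct, onset_loss_pct=2.0, stage1_max_C=400.0):
    """Read T_onset, T_max1 and W_end off a TG curve (illustrative sketch).

    temp_C   : increasing 1-D array of temperatures (degrees C)
    mass_pct : 1-D array of residual mass (% of initial mass), same length
    """
    temp_C = np.asarray(temp_C, dtype=float)
    mass_pct = np.asarray(mass_pct, dtype=float)

    # Onset temperature: here defined as the first temperature at which the
    # chosen mass loss (2% by default, an assumption) has been reached.
    t_onset = temp_C[np.argmax(mass_pct <= 100.0 - onset_loss_pct)]

    # DTG curve: derivative of residual mass with respect to temperature.
    dtg = np.gradient(mass_pct, temp_C)

    # T_max1: temperature of the largest mass-loss rate within the first stage
    # (taken to end at ~400 C, as in the two-stage description above).
    stage1 = temp_C <= stage1_max_C
    t_max1 = temp_C[stage1][np.argmin(dtg[stage1])]

    # W_end: residue (% of initial mass) at the final temperature (600 C here).
    w_end = mass_pct[-1]
    return t_onset, t_max1, w_end
```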
Effect of Varying Pyrolysis Atmosphere on the Carbonization of PVC Composites The carbonized product of PVC/CuAl-LDH-5% was obtained using the tubular furnace. The carbonization temperature was 300 °C, the heating rate was 20 °C/min and the carbonization maintenance time was 30 min. The process was examined under three atmospheres: nitrogen, vacuum and air. The morphologies of the carbonized products are shown in Figure 4. The surfaces in Figure 4a,b show a black lustre before grinding and turn into dry, black solid particles after grinding. In contrast, the product in Figure 4c is a light gray powder whether ground or not. There is no doubt that the carbon in PVC/CuAl-LDH-5% combines with oxygen in the air to form carbon dioxide, which is released quickly; the carbon content of the product decreases sharply, so hardly any carbonized product morphology can be seen. Moreover, at the end of pyrolysis, these products have a light green powder appearance due to the presence of copper ions. From Figure 5, the yield of the carbonized product was 33.5%, 33.3% and 1.4% in nitrogen, vacuum and air, respectively. The yield of the carbonized product is virtually identical under the nitrogen and vacuum atmospheres.
However, the nitrogen condition is difficult to realize, as it needs additional auxiliary equipment, which indirectly increases the energy consumption, whereas the vacuum condition is easily realized with less energy consumption. Considering the above, the best pyrolysis atmosphere for the yield of the carbonized PVC product is a vacuum of 0.08 MPa.
Effect of Carbonization Temperature on the Carbonization of PVC Composites The carbonized product of PVC/CuAl-LDH-5% was obtained using the tubular furnace under different pyrolysis conditions in a vacuum. The vacuum was set at 0.08 MPa, and the carbonization temperature, heating rate and carbonization maintenance time ranged over 250-600 °C, 10-40 °C/min and 10-120 min, respectively. The morphology of the carbonized product is shown in Figure 6. The temperature of the red region was between 250 °C and 280 °C; the carbonized products in this range were irregular blocks with a black lustre and a soft texture, impossible to grind. The products in the blue and green regions were irregular blocks with a black lustre before grinding, turning into black solid particles after grinding. The temperature of the blue region was 290 °C, and the carbonized product had a hard texture before grinding; after grinding, the surface of the solid particles was moist and an oily substance was found in the grinder. The temperature of the green region was between 300 °C and 600 °C; the carbonized product was crispy before grinding, the ground solid particles were dry, and no oily substance was left in the grinder. Moreover, whether carbonized at low or high temperatures, the morphology of the carbonized product was not influenced by the heating rate or the carbonization maintenance time. At a carbonization temperature of 300 °C the morphology of the carbonized product changes drastically, but not above 300 °C. This illustrates that the carbonization temperature, rather than the heating rate or the maintenance time, is the primary factor governing the morphology of the carbonized product; 300 °C is thus a significant temperature for the change in carbonized product morphology.
The yield of the PVC/CuAl-LDH-5% carbonized product was then obtained using the tubular furnace at different carbonization temperatures. The pyrolysis conditions were: vacuum (0.08 MPa), heating rate of 20 °C/min and carbonization maintenance time of 30 min. The curves for the yield of the carbonized products are shown in Figure 7. The yield of the carbonized product declines rapidly below 300 °C, but from 300 °C to 600 °C it declines slowly; it reaches 33.3% at 300 °C, the highest yield among the tested temperatures. Based on the earlier results, the temperature at the maximum mass loss rate of PVC/CuAl-LDH-5% is exactly 300 °C, and hydrogen chloride is released rapidly at 300 °C; for this reason, the yield of the carbonized product changes drastically at 300 °C.
After 600 °C, the yield of the carbonized product remains constant; it can be concluded that the pyrolysis of PVC/CuAl-LDH-5% is essentially complete in the tubular furnace at 600 °C. Considering both the temperature needed for the change in carbonized product morphology and the yield of carbonized product generated, 300 °C is tentatively selected as the carbonization temperature. Effect of Heating Rate on the Carbonization of PVC Composites PVC/CuAl-LDH-5% was investigated by TG at different heating rates in the temperature range of 50-600 °C under a nitrogen atmosphere. The TG curves are presented in Figure 8. The pyrolysis trends of PVC/CuAl-LDH-5% at the different heating rates are similar to each other, and at the termination pyrolysis temperature the yield of the PVC/CuAl-LDH-5% carbonized product is virtually identical at all heating rates. These results suggest that the yield of the carbonized product is unaffected by the heating rate in TG.
The carbonized product of PVC/CuAl-LDH-5% was also obtained in the tubular furnace at different heating rates. The pyrolysis conditions were: vacuum (0.08 MPa), carbonization temperature of 300 °C, heating rates of 10-40 °C/min and carbonization maintenance time of 30 min. The yields of the carbonized product are shown in Table 3. The yield at a heating rate of 20 °C/min is the highest, reaching 33.3%. We therefore propose that the yield of the carbonized product is affected by the heating rate in the tubular furnace. Although the conclusions on the yield of the carbonized product at different heating rates from the TGA and from the tubular furnace contradict each other, it is the tubular furnace result that is worth adopting. Thus, a heating rate of 20 °C/min is one of the optimum experimental conditions for PVC carbonization. Effect of Carbonization Maintenance Time on the Carbonization of PVC Composites The PVC/CuAl-LDH-5% carbonized product was obtained in the tubular furnace under different carbonization maintenance times. The carbonization conditions were: vacuum (0.08 MPa), carbonization temperature of 300 °C, heating rate of 20 °C/min and carbonization maintenance times ranging from 10 to 150 min. The curves for the yield of the carbonized product at the different maintenance times are shown in Figure 9. Clearly, as the carbonization maintenance time increases, the yield of the carbonized product gradually decreases. The carbonization temperature used, 300 °C, is the temperature at the maximum mass loss rate of PVC/CuAl-LDH-5% in the first degradation stage; the volatilization of the plasticizer accompanying the release of hydrogen chloride (HCl) explains the carbonization process. Before 90 min, the yield of the carbonized product stabilizes at approximately 28% after its initial reduction.
Analysis of the Carbonized Product of PVC/CuAl-LDH-5% by Elemental Analysis The elements in the carbonized product were measured using an element analyzer. The carbonized product of PVC/CuAl-LDH-5% was obtained in the tubular furnace. The carbonization conditions were: vacuum (0.08 MPa), carbonization temperature of 300 °C, heating rate of 20 °C/min and carbonization maintenance time of 90 min. In the acid-washing process, a 3:1 volume of aqua regia was added to the carbonized product, which was then soaked at 25 °C for 24 h. As listed in Table 4, the carbon content of the acid-washed PVC/CuAl-LDH-5% carbonized product reaches 66.91%. These results show that it has the potential to be a source of carbon material owing to its high carbon content. The microstructure of the carbonized product was observed by SEM. The carbonized product of PVC/CuAl-LDH-5% was treated with aqua regia after the tubular furnace tests; detailed images are shown in Figure 10. It can be seen from Figure 10a that the carbonized product has a rough and porous surface. Moreover, cavities were found at the surface, derived from the dissolution of the solid particles by the acid.
These solid particles are dispersed uniformly on the surface of the carbonized product and bond to each other to form larger aggregates, which contribute to the improvement of the material's adsorption properties. Thus, the structure of the carbonized product is improved by these cavities. In Figure 10b, however, the cavities appear blocked. This may be attributed to the fact that the carbonized product was generated at a low temperature at which tar was produced as well, so the cavities were filled with the tar. Strictly speaking, these are the precursors of the porous carbon material. Conclusions This paper proposes a method for preparing carbon materials by low-temperature catalytic pyrolysis of PVC. The carbonization process is affected by many factors, such as the catalyst, carbonization atmosphere, carbonization temperature, heating rate and carbonization maintenance time. This experimental research has reached the following conclusions: (1) CuAl-LDH acts as a catalyst for pyrolysis, and at the right carbonization temperature (about 300 °C), carbon material can be prepared by the low-temperature carbonization of PVC. (2) In the double-stage process of low-temperature catalytic pyrolysis and carbonization of PVC, CuAl-LDH not only reduces the pyrolysis temperature and accelerates the dehydrochlorination of PVC in the pyrolysis stage, but also impedes the release of hydrocarbons in the carbonization stage.
(3) For the process of low-temperature carbonization of PVC by CuAl-LDH catalysis, the optimum conditions are CuAl-LDH at 5% of the PVC mass under a vacuum atmosphere (vacuum degree ≤0.08 MPa), a carbonization temperature of ≥300 °C, a heating rate of ≥20 °C/min and a carbonization maintenance time of 90 min. (4) The carbonized product prepared by low-temperature carbonization of PVC displays a cellular structure on its surface and has a high carbon content (66.91%) and hydrogen content (5.16%).
2020-01-23T09:08:23.149Z
2020-01-17T00:00:00.000
{ "year": 2020, "sha1": "7142d75c72a2cc36eadc3b2ab16a76c84c89f261", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2227-9717/8/1/120/pdf", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "05806941f92c248c7d983600ae3b49c01fa9fa36", "s2fieldsofstudy": [ "Materials Science", "Chemistry" ], "extfieldsofstudy": [ "Materials Science" ] }
31558634
pes2o/s2orc
v3-fos-license
High Intensity Focused Ultrasound Optimal Device Design for Targeted Prostate Cancer Treatment: A Numerical Study A parametric study was performed to design a device capable of treating small targeted regions within the prostate using high intensity focused ultrasound, while sparing the surrounding organs and minimizing the number of elements. The optimal focal length (L), operating frequency (f), element size (a) and central opening radius for lodging an imaging probe (r) of a device that would safely treat tissue within the prostate were obtained. Images from the Visible Human Project were used to determine simulated organ sizes and treatment locations. Elliptical tumors were placed throughout the simulated prostate and their lateral and axial limits were selected as test locations. Using graphics processors, the acoustic field and the Bio-Heat Transfer Equation were solved to calculate the heating produced during a simulated treatment. L, f, a and r were varied from 45 to 75 mm, 2.25 to 3 MHz, 1.5 to 8 times the wavelength and 9 to 12.5 mm, respectively. The resulting optimal device was a 761-element concentric-ring transducer with L = 68 mm, f = 2.75 MHz, a = 2.05λ and r = 9 mm. Simulated thermal lesions showed that it was possible to treat target tumors consistent with reported locations and sizes for prostate cancer. Introduction Trans-rectal high intensity focused ultrasound (HIFU) is currently used in the clinic as a minimally invasive approach for prostate cancer treatment [1]. Two trans-rectal HIFU clinical commercial systems are currently available: Ablatherm™ (EDAP-TMS, Vaulx-en-Velin, France) and Sonablate® (Focus Surgery, Inc., IN, USA). The standard treatments performed using these systems ablate the entire prostate. This causes some associated morbidity, resulting in partial or total loss of potency (43.2% of patients), stress incontinence (5.7%), urinary tract infections (7.1%), pelvic pain (5.7%) and, rarely, rectourethral fistula (2.2%) [2-6]. This morbidity is usually attributed to the ablation of nerves and structures within the prostate, and it could be significantly reduced by treating only the affected areas and therefore reducing the amount of ablated tissue. This can be achieved by using a phased-array device capable of dynamically focusing the energy to limit the treatment to target tumor masses within the prostate.
The design of such an array can be done using numerical tools. This kind of design usually starts with a proposed geometry; simulations are then run to predict the behavior and analyze the feasibility of treating the desired regions [7-10]. The process is iterated until a satisfactory compromise is obtained. This process can be time consuming and is not guaranteed to give the optimal result, since it requires analyzing multiple configurations and parameters. An alternative to this process is presented in this paper. A numerical method that allowed us to obtain an optimal geometry for a multi-element device with specific ergonomic constraints was used. By combining analytical and numerical platforms with parallel processing using graphics processing units, we were able to perform a large number of simulations in manageable execution times, allowing for optimization studies in order to obtain a successful configuration. Thermal lesions were finally simulated for the optimal configuration and confirmed that the targets could be reached. This method and platform can shorten device development time and cost, and it can be used as a treatment planning tool. The device geometry was based on the treatment probe in use for the Ablatherm™ machine [7] (a 50-mm spherical cap truncated at 31 mm, with a central opening for lodging an imaging transducer). The array configurations were then constructed as described in [11]. Two array configurations were analyzed: a concentric ring array with independent sectional elements and a circular-element array with a pseudorandom pattern (Figure 1). A simulation environment was created from full-body histological images taken from the Visible Human Male Server Project (VHSMP, Ecole Polytechnique Federale de Lausanne, Switzerland) [12]. One hundred transverse images with a resolution of 0.2 × 0.2 × 1 mm were analyzed using an in-house developed interface in order to obtain a 3D structure of the prostate and neighboring structures (bladder, rectum, muscular tissue, ductus deferens, seminal vesicles, rectal wall, rectal muscle and nerve bundle). Parametric Study A parametric study was performed to determine the optimal focal length (L), operating frequency (f), element size (a) and central opening (r) for a device capable of treating different locations within the prostate without secondary lesions while minimizing the number of elements. Using an in-house interface and the images from the VHSMP, the device was positioned at the center of the rectum and the maximum and minimum distances necessary to achieve a treatment throughout the organ were defined. Figure 2 shows the closest and farthest distances from the centre of the device needed to treat the prostate at different anatomical locations. This determined the minimal and maximal focusing locations as 31 and 65 mm.
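As a rough sketch of how such an element layout can be generated, the Python snippet below builds a concentric-ring layout of sectional elements on a spherical cap with a central opening. It uses the optimal parameters quoted later (L = 68 mm, f = 2.75 MHz, a = 2.05λ, r = 9 mm) but ignores the 31-mm lateral truncation, so the element count it prints exceeds the paper's 761; the actual construction follows [11] and is not reproduced here.

```python
import numpy as np

WAVELEN = 1482.0 / 2.75e6 * 1e3      # mm, wavelength in water at 2.75 MHz

def concentric_ring_array(focal_len=68.0, aperture_r=25.0, hole_r=9.0,
                          elem_w=2.05 * WAVELEN):
    """Element centre positions (mm) on a spherical cap of radius `focal_len`
    (geometric focus at (0, 0, focal_len)), truncated to `aperture_r`, with a
    central opening of radius `hole_r`. Rings are cut into ~square sectors."""
    elems = []
    n_rings = int((aperture_r - hole_r) / elem_w)
    for i in range(n_rings):
        r = hole_r + (i + 0.5) * elem_w              # mid-radius of ring i
        n_sectors = max(1, int(2 * np.pi * r / elem_w))
        for j in range(n_sectors):
            phi = 2 * np.pi * j / n_sectors
            z = focal_len - np.sqrt(focal_len**2 - r**2)   # sag of the cap
            elems.append((r * np.cos(phi), r * np.sin(phi), z))
    return np.array(elems)

elements = concentric_ring_array()
print(len(elements))   # > 761 here, since the 31-mm lateral truncation is ignored
```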
The acoustic field was simulated in a model of the pelvic region. The propagation media considered for the simulation were: coolant liquid (water) between the transducer and the tissues, the rectum and then soft tissue. The geometry of the coolant was modeled as an elliptical cylinder with a major diameter of 39 mm and a minor diameter of 26 mm [7]. The rectal wall was modeled as an elliptical cylinder with a major diameter of 45 mm, a minor diameter of 32 mm [7] and a thickness of 6 mm (obtained from the VHSMP images). The acoustic field simulation was performed using the Rayleigh-Sommerfeld diffraction integral given by [13]

p = (jkZ/2π) ∫_S ν_s e^(−(jk + α)R) / R dS (1)

where ν_s is the particle velocity at the surface element dS, R = R_1 + R_2 + R_3 is the distance sound propagates, with R_1 through the coolant liquid, R_2 through the rectal wall and R_3 through the prostatic tissue, the attenuation term αR is obtained as R_1α_1 + R_2α_2 + R_3α_3, dS is the surface area, k = 2π/λ, λ is the wavelength and Z is the acoustic impedance of water. The focal length was varied from 45 to 60 mm in 1-mm steps and the element size from 1λ to 8λ in steps of 0.05λ. The operating frequency was varied from 2.25 to 3 MHz in steps of 0.25 MHz. The size of the central opening was varied from a radius of 1 to 11 mm in steps of 2 mm. For each combination of parameters we evaluated the degree of focusing by the ratio [14]

η = q_2/q_1 (2)

where q_2 and q_1 are the acoustic pressures at the secondary and primary lobes, respectively. The goal was set as the configuration with the fewest elements and a value of η ≤ 0.5. The acoustic attenuation was 2.9 × 10^−4 Np/m/MHz for the coolant, 4.1 Np/m/MHz for the rectal wall and 9 Np/m/MHz for the prostate [7]. Simulations were performed using in-house developed and optimized code on two Tesla C1060 (NVIDIA, Santa Clara, CA, USA) graphics processors. Steering Capabilities For the optimal configuration obtained from the parametric study, the steering capabilities were evaluated to ensure that tumors at different locations within the prostate could be treated. We first determined the maximum lateral steering possible at the anterior and posterior of the prostate with the device located in front of the apex, center and base of the prostate (Figure 2). Then the maximum vertical steering obtained at the center of the prostate was evaluated. The value of η (Equation (2)) was used to evaluate the degree of focusing. Focusing was tested at off-axis distances from 1 mm to 16 mm in steps of 0.25 mm. Thermal Simulation The temperature increase obtained with the optimal configuration was estimated using the Bio-Heat Transfer Equation (BHTE) [15] and the thermal dose concept [16] at different relevant treatment locations. These locations were chosen using clinical MR images and pathology from 37 patients reported by Coakley et al.
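A minimal numerical sketch of this field calculation: each element is collapsed to a point source driven with a phase that compensates its path length to the focus, and the integral in Equation (1) becomes a sum over sources at every field point (the GPU code parallelizes exactly this kind of sum). A single homogeneous attenuation stands in for the three-layer path, and all constants are illustrative, not the paper's implementation; the parametric study then simply nests loops over (L, f, a, r) around calls like these.

```python
import numpy as np

F = 2.75e6                # Hz, operating frequency (illustrative)
C = 1500.0                # m/s, speed of sound in water
K = 2 * np.pi * F / C     # wavenumber (rad/m)
Z = 1.5e6                 # Pa*s/m, acoustic impedance of water
ALPHA = 9.0 * (F / 1e6)   # Np/m, single homogeneous stand-in for the layered path

def focus_phases(elem_pts, focus):
    """Drive phases (rad) that align every element's contribution at `focus` (m)."""
    return K * np.linalg.norm(elem_pts - focus, axis=1)

def pressure(field_pts, elem_pts, phases, v_s=1.0):
    """Discretized Rayleigh-Sommerfeld sum (Equation (1)) with point-like elements."""
    p = np.zeros(len(field_pts), dtype=complex)
    for e, ph in zip(elem_pts, phases):
        R = np.linalg.norm(field_pts - e, axis=1)       # element-to-field distances
        p += (1j * K * Z / (2 * np.pi)) * v_s * np.exp(1j * ph) \
             * np.exp(-(ALPHA + 1j * K) * R) / R
    return p

def eta(p_field, pts, main_lobe_radius=2e-3):
    """Equation (2): secondary-to-primary lobe pressure ratio on a sampled grid.
    Crudely masks out points near the peak, then takes the remaining maximum."""
    mag = np.abs(p_field)
    i_peak = np.argmax(mag)
    outside = np.linalg.norm(pts - pts[i_peak], axis=1) > main_lobe_radius
    return mag[outside].max() / mag[i_peak]
```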
[17]. Virtual tumors with sizes equivalent to the minimum (0.02 cc), average (0.79 cc) and maximum (3.7 cc) volumes reported in [17] were modeled as ellipsoids and located close to the prostatic capsule. The BHTE was then used to calculate the temperature obtained with the optimal configuration focused at the centre and edges of the target virtual tumor [15]:

ρ_t c_t ∂T_P/∂t = K_t ∇²T_P − ω_b c_b (T_P − T_b) + χ(E_0 − E)/V (3)

where ρ_t, c_t and K_t are the density, specific heat and thermal conductivity of the tissue, respectively, T_P is the temperature at a point P(x, y, z) and time t, ω_b, c_b and T_b are the perfusion rate, specific heat and temperature of blood, respectively, E_0 and E are the acoustic energy entering and exiting a volume V, and χ is the acoustic absorption coefficient. The BHTE was implemented using the numerical Finite Difference Time Domain technique [18], with boundary conditions at the borders of the tissue volume set to body temperature (37 °C) and the thermo-mechanical properties of tissues as defined in [8]. The cooling effect of the coolant liquid was simulated as a boundary condition using a Newtonian flux convention [19] and a temperature of 13 °C. Tissues that reached a thermal dose of 240 minutes or greater were considered ablated [20]. If any of the tested sites within each virtual tumor formed a secondary lesion (outside of the target tumor), it was considered a failed treatment. Parametric Study The optimal configuration was a concentric ring array with a focal length of 68 mm, an operating frequency of 2.75 MHz, a diameter of 50 mm, a truncated width of 31 mm, a central opening 9 mm in radius and an element width of 2.05λ, for a total of 761 elements. Steering Capabilities Table 1 shows the steering capabilities reached by the optimal concentric ring and random arrays. The random array was capable of steering farther in the lateral direction when focusing close to the transducer, but this deteriorated when the focusing was done at deeper locations. Overall, the deeper the focal region the more difficult it was to achieve steering, and the random array was unable to steer at the deeper locations. Thermal Simulations Adequate focusing and lesion formation were possible at all limits of the virtual tumors using the concentric ring device, except for the two largest located in the prostate peripheral zone (closest to the device). Figure 3 shows the simulated lesions (thermal dose above 14,400 s) obtained when electronically steering the optimal configuration to reach the edges of the largest virtually defined tumor. When treating tumors in the peripheral zone, secondary lesions were observed when targeting the posterior limit of the tumors (closest to the device). These secondary lesions appeared deeper than the targeted focus, at approximately twice the focal length (towards the anterior region of the prostate). They were caused by grating lobes produced when the dynamic focus was located out of the steering range of the elements on the outer rings of the device (Table 1) [14].
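To make the thermal step described above concrete, here is a minimal explicit finite-difference sketch of Equation (3) together with the thermal-dose accumulation behind the 240-min (14,400 s) lesion threshold, following the standard Sapareto-Dewey CEM43 formulation. Grid spacing, time step, tissue constants and the toy heating term are illustrative stand-ins, not the paper's values.

```python
import numpy as np

# Illustrative soft-tissue constants (not the values of [8])
RHO, CP, KT = 1050.0, 3600.0, 0.5      # kg/m^3, J/(kg*K), W/(m*K)
WB, CB, TB = 5.0, 3800.0, 37.0         # perfusion kg/(m^3*s), blood c_b, blood T_b
DX, DT = 1e-3, 0.05                    # m, s; explicit scheme needs DT < DX^2*RHO*CP/(6*KT)

def bhte_step(T, Q):
    """One explicit finite-difference update of Equation (3).
    T: 3-D temperature grid (degC); Q: absorbed acoustic power density (W/m^3)."""
    lap = (np.roll(T, 1, 0) + np.roll(T, -1, 0) +
           np.roll(T, 1, 1) + np.roll(T, -1, 1) +
           np.roll(T, 1, 2) + np.roll(T, -1, 2) - 6.0 * T) / DX**2
    Tn = T + DT * (KT * lap - WB * CB * (T - TB) + Q) / (RHO * CP)
    for axis in range(3):               # body-temperature boundary condition
        idx = [slice(None)] * 3
        for edge in (0, -1):
            idx[axis] = edge
            Tn[tuple(idx)] = 37.0
    return Tn

def cem43_minutes(T, dt_s):
    """Thermal-dose increment in equivalent minutes at 43 degC (CEM43)."""
    R = np.where(T >= 43.0, 0.5, 0.25)
    return (dt_s / 60.0) * R ** (43.0 - T)

# Usage: accumulate dose during heating; lesion wherever dose >= 240 min [20]
T = np.full((40, 40, 40), 37.0)
Q = np.zeros_like(T)
Q[20, 20, 20] = 5e7                     # toy focal heating term, W/m^3
dose = np.zeros_like(T)
for _ in range(600):                    # 30 s of sonication
    T = bhte_step(T, Q)
    dose += cem43_minutes(T, DT)
lesion = dose >= 240.0
```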
Table 2 shows the results of the thermal simulations at all the defined targets. The sizes of the target tumors in the peripheral and transitional prostate zones were defined based on MR images [17]. The size of the calculated thermal lesion when targeting the tumors varied depending on the location and the size of the target tumor, since the maximum steering required to cover the entire tumor differed. Lesions were generally smaller for medium-sized tumors, since reaching these tumors requires less steering and the focus stays closer to the natural focus of the device. The lesions were generally larger for targets in the transitional region. The acoustic power required to obtain a lesion also increases for target lesions that require steering (larger tumors located outside of the central axis of the device). For the peripheral zone targets where the secondary lesions appeared, two solutions were proposed to avoid them: partial excitation of the elements and mechanical device rotation. By exciting only the elements closest to the center of the device, we could decrease the phase difference between elements and eliminate the secondary lesion while minimizing the acoustic intensity at the surface. A series of simulations was performed with 50% to 75% of the elements excited, and it was found that secondary lesions were eliminated with 56% of active elements when targeting a large peripheral tumor (3700 mm³) and with 71% when targeting a medium peripheral tumor (790 mm³). The device could also be rotated in the left-right direction by ±π/8 in order to eliminate the secondary lesions. The overall maximum acoustic intensity at the surface of the transducer for treatment was 22.74 W/cm². This maximum occurred when focusing at the lateral edge of large tumors in the peripheral zone with only 56% of the elements active. This intensity was high as a result of the reduced surface area caused by elements being off.
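The partial-excitation strategy amounts to masking the drive of the outer elements before recomputing the field. A hedged sketch, reusing the illustrative `elements`, `pressure` and `focus_phases` helpers from the earlier snippets (these are this review's sketches, not the authors' code):

```python
import numpy as np

def central_fraction_mask(elem_pts, active_frac):
    """Boolean mask keeping the fraction of elements closest to the array axis,
    which narrows the effective aperture and suppresses grating lobes."""
    radial = np.linalg.norm(elem_pts[:, :2], axis=1)   # distance from device axis
    keep = np.argsort(radial)[:int(round(active_frac * len(elem_pts)))]
    mask = np.zeros(len(elem_pts), dtype=bool)
    mask[keep] = True
    return mask

# e.g. 56% of elements active for the large peripheral target (positions in m;
# the geometry sketch above returned mm, so scale by 1e-3):
# pts_m = elements * 1e-3
# mask = central_fraction_mask(pts_m, 0.56)
# p = pressure(grid_pts, pts_m[mask], focus_phases(pts_m[mask], focus))
```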
The rectum distention due to the transducer and the circulation of coolant liquid was accurately incorporated into the simulation environment.This consideration was important for the validity of the study because the transducer causes the rectal wall to be closer to the prostate increasing the risk of rectal wall damage causing rectourethral fistula [4].All observed thermal doses at the rectal wall remained below the lesion threshold and as a result no rectal wall damage was predicted by the simulations. The focal length of the optimal device is located deeper than the target depth (33 to 63 mm).Similar results have been reported for HIFU prostate treatment devices [7] and when simulating the treatment of a trial fibrillation in the heart [19]. The random circular element array configuration was tested on this study to determine if secondary lobes could be reduced with this approach.However, this type of device did not perform as well as the concentric ring array since the active surface area was too low and the acoustic field could not be successfully steered.This is an important consideration to make for HIFU devices that need to fit constrained spaces such as transrectal approaches. The thermal simulation showed that even though the device was able to treat all locations in the target tumors, different strategies are needed during the treatment: partial excitation or rotating the device.The proposed simulation can be used to determine these strategies as a tool for treatment planning.The ability of conducting fast calculations is then an important asset of this tool. The size of individual simulated lesions varied as a function of the distance to the center of the device, with lesions farther from the transducer being larger.This effect is in direct relationship to the elliptical focus of the device.The platform can also be used for accurate treatment planning that takes into account these variations. Finally, the numerical approach used in the study involved the use of algorithms and techniques that utilize graphic processing units (GPUs).This allowed for reduced the computation times and allowed exploring large variations of the parameters.Also, the cost of a GPU system is considerably lower than other alternatives for fast calculation, making this methodology an attractive fast and low cost tool for device design and treatment planning [21]. Figure 1 . Figure 1.Schematic for the simulated arrays: (a) concentric ring and (b) random.FH: foot head; LR: left right direction for treatment. Figure 2 . Figure 2. Transducer positioning to treat the prostate and limits for treatment at different anatomical locations.FH: foot head; AP: anterior posterior. Figure 3 . Figure 3. Simulated thermal lesions at four extremes of the larger virtual target tumor.
2017-10-23T14:28:10.479Z
2013-02-05T00:00:00.000
{ "year": 2013, "sha1": "81a85237d6f5bff57a34eddf36d418cfb8ca605a", "oa_license": "CCBY", "oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=28326", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "81a85237d6f5bff57a34eddf36d418cfb8ca605a", "s2fieldsofstudy": [ "Engineering", "Medicine" ], "extfieldsofstudy": [ "Materials Science" ] }
212880331
pes2o/s2orc
v3-fos-license
Consumer behaviour and new consumer trends vis-à-vis the ICTs Objectives. In this paper, consumer behaviour is analysed following the criteria of behavioural economics, together with how digitalisation affects it and the current change in consumer trends. Methodology. It is based on a search of behavioural economics studies and of theories arising within this field, such as the Nudge Theory. Having gained an understanding of consumer behaviour, this was taken to the digital arena, and the different ways of obtaining information from users to learn about their behaviour in this area, and its impact, were analysed. Results. By studying behavioural economics, the conclusion was that there are differences with classical economics, which asserts that people's decision making is rational, while behavioural economics argues that we are influenced by many aspects. Taken to the world of digitalisation, this translates as the great importance to companies of collecting data from users in order to learn about them and be able to predict their behaviour. Limitations. There are few studies on behavioural economics within the digital field. Introduction As a result of phenomena such as globalisation and digitalisation, as well as the economic crisis that began in late 2007, companies and organisations have undergone profound internal changes in order to adapt to changing environments in which there is a great deal of uncertainty. New environments are now being observed and new solutions demanded, which require professionals capable of facing these new challenges. Information and communication technologies are part of our current technological culture. In the last ten years, the increased use of devices such as mobile phones, computers and tablets, together with e-commerce and the development of apps, has brought about a societal change in how we work, purchase products, gain information and relate with one another (Relaño, 2011). In addition to e-commerce, ICTs have facilitated the processes of many other activities. For example, it is possible to file an income tax return from home, study for a degree online without having to attend classes in person, and pay by mobile phone in many establishments without the need to carry cash (Díez, 2018). [Graph 1: use of the internet in the past three months, weekly internet use and online purchases in the past three months.] As Graph 1 shows, the use of ICTs has more than doubled since 2010. Consumer behaviour has changed; consumers have now become minor specialists through the many information sources available to them. Thanks to ICTs, the producer and the consumer have become closer and more interconnected, hence the importance of studying and finding out about the consumer and the new technologies. It is very important to determine the variables that affect the consumer's purchasing process, in the environment as well as in each individual (Barrullas, 2016). The study of consumer behaviour changes as that behaviour itself changes. The act of shopping is no longer the only important thing; everything surrounding it is also important. Schiffman (2010) defined consumer behaviour as the behaviour exhibited by consumers when seeking, buying, using, evaluating and rejecting the products and services that they expect to meet their needs. Study of consumer behaviour In the 1960s, the study of consumer behaviour was innovative in the field of marketing and there was very little research; it therefore started to borrow concepts developed in other areas such as psychology, sociology or anthropology.
Many of the first theories of consumer behaviour were based on the idea that individuals act rationally to maximise the benefits to them. However, it was later shown that individuals also tend to buy on impulse or be influenced by their environment (Schiffman, 2010). Consumer research is of great importance, since it allows market specialists to anticipate or form intuitions about consumer needs, in order to better satisfy the consumer (Schiffman, 2010). Behavioural economics Traditional economics has always been based on the idea that people follow a strictly rational model in making their decisions, but it has not been able to fully understand human behaviour. It has therefore progressively integrated aspects from other branches, psychology in the main and lately even neuroscience, into studies on economic behaviour and consumer attitudes, in order to better understand people's decision processes in situations of uncertainty (Kahneman and Tversky, 1979). Behavioural economics explains that the decisions people make are influenced by the cognitive, social and emotional aspects that condition them (Kahneman and Tversky, 1979). In the field of behavioural economics, the studies by Daniel Kahneman and Amos Tversky (1979-2000) are worthy of mention. Their work initially focused on the field of psychology but over time contributed to economic research. This school of economic thinking argues that humans are not well described by the rational agent model; however, it considers it wrong to classify them as irrational, since our behaviour does not fit a definition of irrational as such (Kahneman, 2012). Studies on behavioural economics focus mainly on understanding the functioning of human thinking, in order to better understand how people's brains work in decision making (Thaler, 2018, p. 23). Becker (2012) noted, regarding the study of individual behaviour, that on certain occasions it is not the individual response that matters, but rather aggregate behaviour, since there can be differences in how Germans and Americans respond, or between people who attended one university compared to others who attended a different one. For Becker, this was a fundamental difference between psychology and economics. Two systems have been differentiated that are involved in people's mental processes: System 1, which corresponds to fast and intuitive thinking, and System 2, which would be slow and deliberate thinking (terms originally proposed by psychologists Keith Stanovich and Richard West, 2000). System 1 acts fast and automatically and includes the innate skills that we share with other animals, which are instinctive and often totally involuntary; for example, turning around when we hear a loud noise behind our back. System 2, however, is related to making decisions where an effort to pay attention is required, otherwise the activity will be a failure. For example, looking for a relative in a crowd outside a train station requires more specific concentration (Kahneman, 2003). Both systems interact with each other: "when System 1 runs into difficulty, it calls on System 2 to support more detailed and specific processing that may solve the problem of the moment. System 2 is mobilised when a question arises for which System 1 does not offer an answer" (Kahneman, 2012, p. 25). This is shown in Figure 1 below, where we see the difference between the two systems in a schematic and more visual form.
With this, the attempt was to give a sense of "the complexity and richness of the automatic and often unconscious processes that underlie intuitive thinking, and of how these automatic processes explain the heuristics of judgment" (Kahneman, 2012, p. 16). The figure of the so-called Homo economicus, born from the traditional economic models, was questioned; it refers to people as impartial beings who always make optimal decisions, as if they were economic experts (Thaler, 2015). The popularity that behavioural economics has gained in recent times can be interpreted as a revolution against the ideals of classical economics. But in reality, behavioural economics is taking economic thinking back to its origins (Thaler, 2018). Many authors have already taken the fundamental role of psychology in economics into account. Clark (1918) wrote 100 years ago: "The economist may try to ignore psychology, but it is sheer impossibility for him to ignore human nature. If the economist borrows his conception of man from the psychologist, his constructive work may have some chance of remaining purely economic in character. But if he does not, he will not thereby avoid psychology. Rather he will force himself to make his own, and it will be bad psychology" (1918, p. 4). These words remain valid to this day (Kahneman, 2003). One of the conclusions of Falah Mohammad (2019), in his paper on factors affecting consumer decision-making behaviour when purchasing organic products, was that psychological factors, along with the social factor, are the most influential in the buying decision-making process. Wanting to use one theory as the basis for achieving two different objectives, on the one hand to find out what optimal behaviour is, and on the other to predict real behaviour, is a problem. The ideal is to consider theories that are based on data, not axioms, but without giving up the first kind of theory, since it is also necessary in economic analysis (Thaler, 2018). "If the economy does develop along these lines the term 'behavioural economics' will eventually disappear from our lexicon. All economics will be as behavioural as the topic requires, and as a result we will have an approach to economics that yields a higher R²" (Thaler, 2018, p. 32). Prospect Theory The prospect theory was developed by Kahneman and Tversky in 1979, in their study of decision making, with the aim of understanding human attitudes to risky decisions. The study of the attitudes that human beings take in situations of uncertainty has always been a topic of maximum interest for economists (Tversky and Kahneman, 2007; Levin, 2006; Kreps, 1995). In all the choices we make throughout our lives there is a degree of uncertainty, and the study of human behaviour seeks to optimise the decision process (Kahneman, 2012). For economists, the purpose of this theory was to determine how decisions should be made and to explain how the econ's decision-making process worked (Kahneman, 2012). "Econ" comes from the term Homo economicus, i.e., "economic man" in Latin; it was first used in the 19th century by critics of the work of John Stuart Mill (Persky, 1995). The econ is defined as an individual who always makes optimal decisions, as if they were an expert economist, and is never influenced by the temptations of the environment. This was the human behaviour described in neoclassical economics (Thaler, 2018).
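The contrast with the econ can be made quantitative. The best-known element of prospect theory is its value function, which is concave for gains, convex for losses and steeper for losses (loss aversion):

v(x) = x^α if x ≥ 0, and v(x) = −λ(−x)^β if x < 0.

With the median parameter estimates from Tversky and Kahneman's later (1992) cumulative version, α = β = 0.88 and λ = 2.25 (values cited here only as an illustration; they are not discussed in this paper), a gain of 100 is worth v(100) = 100^0.88 ≈ 57.5, while a loss of 100 is worth v(−100) = −2.25 × 100^0.88 ≈ −129.4: the loss looms roughly 2.25 times larger than the equivalent gain, something an econ would never exhibit.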
The authors Kahneman and Tversky, as psychologists, did not manage to understand the bases of a theory built on human rationality, and therefore began to conduct different studies with which to understand intuitive choices, far from attempting to explain rational or more correct choices. This resulted in the Prospect Theory. Nudge Theory The term "nudge" was defined by Thaler and Sunstein (2008) as "any aspect of the choice architecture that alters people's behaviour in a predictable way without forbidding any options or significantly changing their economic incentives. (...) Nudges are not mandates. Putting the fruit at eye level counts as a nudge. Banning junk food does not" (Thaler and Sunstein, 2008). A nudge translates into Spanish as a "little push", referring to the small impulses that make us choose one option or another in our day-to-day choices (Thaler and Sunstein, 2008). This theory is the result of the ideas discussed above on behavioural economics, and of studies on how different factors influence consumer behaviour. The decisions that people make are often triggered by inertia, and there are several causes that trigger such inertia towards certain choices. One such cause is "status quo" bias (Samuelson and Zeckhauser, 1988), which explains that people tend to remain in their current situation and often find it difficult to change. For example, if you start watching a programme on a particular TV channel, it is very likely that when the programme ends you will continue to watch the same channel, even though you only need to press a button to change it. In conclusion, we humans really are "creatures of habit". One of the main causes of status quo bias is lack of attention, and therefore the choices we make "by default" act as important nudges. Framing is another cause of selection inertia in people; in other words, the way people are told about problems influences their decision. A good example is the situation of a teacher placing foods in the school canteen. From her experience, she knew that the students' food choices varied according to where the food was placed: if she put the fruit in a more visible place, she observed that its consumption was higher than when she placed it further away. In this case, she would be what is termed the "decision architect", i.e., the person who constructs the context in which decisions are made, and who can therefore influence them in some way. Here lies the dilemma of what the teacher should do: placing fruit in such a way that the children are encouraged to eat it, and placing junk food further away where it is less easy for them to choose it, can be justified as beneficial for them. Even so, this still influences her students' choices to an extent, which could seem unethical. But the decision architect will always have to choose how to frame a decision; even if she decided to place the food at random, she would still be taking the decision not to encourage healthy eating in her students when it was within her reach (Thaler and Sunstein, 2008). Decision architects can intervene in small details to which we do not attach importance, but which have a great impact on people's subconscious; this is the idea that "everything matters" (Thaler and Sunstein, 2008). "They have shown that people can be fooled by how a question is framed. And they have shown that similar people in different contexts may behave differently" (Becker, 2012, p. 78).
In our daily lives we meet many decision architects, from the person who chooses the design of a restaurant menu to the person who decides how an office is laid out. These are details that affect people's behaviour without their really being aware that their choice has been shaped in some way. In other words, nudges help people make the decisions that are best for them and that will make their lives much better.

Libertarian paternalism

Although decision architects promote an environment where people make the best and most beneficial decisions, is it ethical to influence people's decision making? It is difficult to classify nudges as manipulative, since the concept of manipulation is very complex and can take many forms (Wilkinson, 2013). The libertarian paternalism movement argues that people must be completely free in their decision making, but at the same time considers it legitimate for decision architects to influence people's behaviour in order to make their lives better. The concept of paternalism does not seek to impose on or prohibit the free choice of people, but to guide them towards making the best decision; this is termed giving a "nudge" (Thaler and Sunstein, 2008). However, many people believe that choices should not be influenced even when it would be beneficial, and argue that people must be allowed to make mistakes and take risks (Sunstein, 2017). In our daily lives we are constantly subject to decision architecture, from going to a restaurant and choosing a dish from the menu to our choice of bank account. The question is whether decision architecture affects us in a positive way or, conversely, is harmful and exploits us.

The digital consumer and new consumer trends

People are the focus of digital evolution; the habits of digital life go hand-in-hand with technological development. In order to understand what is happening in digital development, we must find out about the lifestyles of people and the use they make of technology (Fundación Telefónica, 2018). We have experienced a process of gradual adaptation to the use of the Internet and e-commerce, and progress has been quite fast. At first, we started buying things like air tickets or booking hotel stays, and gradually made the leap to buying more personal or everyday objects (Puromarketing, 2019). The arrival of the new information technologies (ICTs) has led to a change in consumer behaviour. We are now faced with new consumers, who are much better informed and who, thanks to the Internet and social networks, can find a multitude of opinions on the products they want to buy and offer their own (Barrullas, 2016).

Behavioural economics in the digital world

Today's increased use of information technology (ICT) means that we make many of our decisions in a digital environment. We can now perform hundreds of actions through websites or mobile apps, from buying clothes to opening a bank account; and in this digital context it is easier to make poor decisions, since there is a great deal of information that we may ignore or not pay sufficient attention to (Weinmann, Schneider and Vom Brocke, 2016). Digital nudging is defined as an approach that applies user interface design elements to affect decision making and guide people's behaviour in digital choice environments (Weinmann, Schneider and Vom Brocke, 2016). In the physical environment, many cases have been studied of how people react to these stimuli, but the psychological impact that they have in the digital environment is less well known.
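As a concrete, deliberately simple illustration of how a digital nudge can be embodied in interface code, consider the default-selection sketch below; the scenario, the field names and the green-tariff example are hypothetical and are not drawn from any system discussed above.

# A minimal sketch of choice architecture in a digital form.
# The nudge: pre-selecting the option the designer wants to encourage,
# while leaving every alternative freely available (no option is forbidden).

from dataclasses import dataclass

@dataclass
class ChoiceOption:
    label: str
    preselected: bool = False  # the "default" acts as the nudge

def render_energy_plan_form() -> list:
    # Defaulting to the green tariff nudges users toward it; switching to
    # the conventional tariff costs only one click, so freedom of choice
    # is preserved (the libertarian-paternalist requirement).
    return [
        ChoiceOption("Green energy tariff", preselected=True),
        ChoiceOption("Conventional tariff", preselected=False),
    ]

if __name__ == "__main__":
    for option in render_energy_plan_form():
        mark = "[x]" if option.preselected else "[ ]"
        print(mark, option.label)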
Analysing the impact that nudges have on people within digital contexts will determine whether they show predictable effects similar to those in physical contexts. Furthermore, knowing the effects that nudges have on decision making in this environment makes it possible to adapt them to the user, making use of the data that users provide. This is very important today: many lines of study on nudges and behavioural economics as we have always known them have been opened, and it is essential that all present knowledge and research on consumers is carried forward and adapted to the digital environment (Mirsch, Lehrer and Reinhard, 2017). A fundamental issue is the importance of ethics in establishing these nudges, following the abovementioned ideals of libertarian paternalism. They should lead consumers to the best option, but this is not always the case. The original purpose of the digital nudge is to simplify and reduce obstacles in order to guide user behaviour in a desirable way (Weinmann, Schneider and Vom Brocke, 2016). Mirsch, Lehrer and Reinhard (2017) gave the Amazon.com application as a practical example: on the product pages Amazon emphasises product-related elements. In doing so, choice architecture intervenes by drawing the user's attention to related articles. This emphasis can lead to an additional purchase, which was not originally planned by the user.

Data analysis

Information technologies are part of our lives. With the development of e-commerce and the various applications that it provides for our daily lives, their use has become essential. This means that through these technologies we are constantly generating data about everything we do, what interests us and what does not, and even where we move, which translates into valuable information for companies to analyse. This massive data set generated by each user is called Big Data, a term popularised in the scientific field by Mashey (1998) that became widespread in the 2000s. These data are analysed and organised in order eventually to be interpreted by companies, to learn much more about consumer behaviour. The social networks play a fundamental role in datafication, as they collect large amounts of information on users. For example, when we "like" something that appears on apps such as Instagram or Facebook, this is recorded; even the length of time we have spent looking at an item, the sort of people we relate to, and the brands we follow are recorded (Sánchez, 2013).

Datafication

Datafication is the name given to the collection and subsequent transformation of information into data for analysis. The IT (information technology) revolution is evident in all aspects of our lives, and although it is true that the technical part has been given more weight, we have recently focussed on the information branch (Mayer-Schonberger and Cukier, 2013). Some of the most frequently generated data is from geolocation, information that was much more difficult to acquire in the past. Many of the objects we use on a daily basis now have chips incorporated, through which companies can monitor their location and consequently that of their users. Mobile phone apps are an example of this. They collect the location of users and use this information to recommend nearby restaurants, hotels or whatever users might be interested in at the time. This type of data is very valuable to companies, as they can analyse people's behaviour and tastes and thus improve their services or products, focusing and personalising them much more precisely thanks to all the information they have about the consumer.
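To make the notion of datafication concrete, the following sketch turns a raw log of app interactions into a structured, per-user interest profile of the kind companies analyse; the event format, the weights and the categories are invented for the example.

# A minimal sketch of datafication: raw interaction events become
# a structured, analysable record per user.

from collections import Counter, defaultdict

# Hypothetical raw events as an app backend might log them:
# (user_id, event_type, subject)
events = [
    ("u1", "like", "running_shoes"),
    ("u1", "view", "running_shoes"),
    ("u1", "like", "energy_bars"),
    ("u2", "view", "novels"),
]

def datafy(event_log):
    """Aggregate raw events into a per-user interest profile."""
    profiles = defaultdict(Counter)
    for user_id, event_type, subject in event_log:
        # Weight explicit signals ("like") more than passive ones ("view").
        weight = 2 if event_type == "like" else 1
        profiles[user_id][subject] += weight
    return profiles

if __name__ == "__main__":
    for user, interests in datafy(events).items():
        print(user, interests.most_common(2))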
Location data can give information on many things, from a traffic jam on a particular road to the nightlife in a specific area, or even how many people have attended a demonstration. However, the commercial use of geolocation could be considered the most important. The datafication of location has given rise to new uses, from which new value can be created (Mayer-Schonberger and Cukier, 2013). Datafication is, indeed, the objective of many of the social networks that we use today: datafying our relationships, moods and tastes. They take intangible information from our lives, transfer it into data, and use it. "Facebook datified relationships, Twitter enabled the datafication of sentiment by creating an easy way for people to record and share their stray thoughts. LinkedIn datified our long-past professional experiences (...) turning that information into predictions about our present and future" (Mayer-Schonberger and Cukier, 2013, p. 62).

All the immense amount of data that is currently recorded provides us with a new panorama of reality. But the simple mass collection of data on people's behaviour is not enough to identify opportunities. Companies have to observe, analyse and discover, to achieve what is known in marketing as insights (Antevenio, 2017). "Insights are human truths that allow us to understand the deep emotional, symbolic and profound relationship between the consumer and a product. (...) It has the capacity to connect a brand and a consumer in a way that goes beyond the obvious and not just sell" (Quiñones, 2013, p. 34). Such insights help to provide a more humane vision of the consumer, giving brands and products intangible attributes that are very valuable for consumers (Quiñones, 2013). Behavioural economics also plays a role in aspects of the digital economy. Jonathan Hilton Stahl Ducker, founder of Edufintech, highlighted at a BBVA conference in 2018 the transition to the digital economy with the help of behavioural economics: it is no longer useful simply to analyse data that show how a person behaves. "You also need the 'behavioural' part and to understand the symbolism of the person; and also the part of how I present the product to them so that they take it and it does not remain in a bias alone" (BBVA Open Innovation, 2018).

Reality Mining

Reality mining is defined as: "quantifying and modelling long-term human behaviour and social interactions, by using mobile phones and wearable badges as sensors that capture real world face-to-face interactions" (Madan, Waber, Ding, Kominers and Pentland, 2009, p. 1). In other words, this refers to processing the large amounts of data that come from mobile phones with the aim of extracting predictions about human behaviour. In one of their studies, the authors were able to identify people who had contracted influenza before they had even noticed it themselves, by tracking their movements and call patterns. This ability to detect epidemics could save millions of lives in the future (Mayer-Schonberger and Cukier, 2013). The aim of reality mining is to obtain characteristic information about people's attitudes and social relationships from so-called "honest signals" that reflect unconscious behaviour. The study of these behaviours can serve to obtain sociological information non-intrusively (Andradas and Ju, 2010). Although many people do not like the idea of leaving a digital trail through their activities, the objective of reality mining is the opposite of hiding that trail, as the sketch below illustrates.
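As a hedged illustration of reality mining, the sketch below flags days on which a person's movement and call activity drop far below their personal baseline, the kind of "honest signal" the influenza study above relied on; the data, the features and the threshold are invented for the example.

# A minimal reality-mining sketch: flag days on which a person's
# movement and call activity fall far below their personal baseline,
# an "honest signal" that may precede self-reported illness.

from statistics import mean

# Hypothetical per-day sensor summaries: (metres moved, calls made)
days = [(5200, 9), (4800, 11), (5100, 8), (4900, 10), (1200, 2)]

def flag_anomalies(daily, threshold=0.5):
    """Return indices of days below `threshold` of the baseline mean."""
    base_move = mean(d[0] for d in daily)
    base_calls = mean(d[1] for d in daily)
    return [
        i for i, (move, calls) in enumerate(daily)
        if move < threshold * base_move and calls < threshold * base_calls
    ]

if __name__ == "__main__":
    print("Anomalous days:", flag_anomalies(days))  # -> [4]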
More and more actions are recorded, from physical activity to interactions with other people, and the aim is to predict, thanks to algorithms, aspects such as the course of diseases, and to help in human health. Google Dodgeball, for instance, collects the information provided by terminals to determine the geographical location of its users and give information by SMS on the location of possible friends who are nearby. And beyond this example, there are many more from a multitude of companies (Andradas and Ju, 2010). But the ideas that focus on social issues must be highlighted, such as the use of mobile phones to monitor epidemiology and consequently the spread of disease through human relationships, the detection of severe psychological disorders through conversational analysis, or even the detection of diseases such as Parkinson's thanks to movement sensors (Andradas and Ju, 2010).

User privacy

Information is power, and much of the data collected contains personal information. Increased concern about data privacy has been one of the most controversial issues of the past year (Fundación Telefónica, 2018). The question is whether data protection laws and regulations are still relevant in an era of mass data: if the problem has changed, so should the laws that regulate it. The Internet has enabled the expansion of rights such as freedom of expression and freedom of information. But in turn, it has brought risks to fundamental rights such as data protection and the right to privacy. The social networks are double-edged swords: they have a very positive side that allows us to connect with people from all over the world and exchange all kinds of information, photos and videos, but all the information that we put onto networks can be counterproductive in the long run (Alcó, 2015). Massive data has rendered ineffective the main technical and legal mechanisms that existed to protect the privacy of individuals. In the past it was easy to determine which data were considered personal, and it was easier to protect them. Today, even the most insignificant data collected can reveal an individual's identity. The danger lies not only in the vulnerability of privacy, but in the information that can be deduced from such data (Mayer-Schonberger and Cukier, 2013). On May 25, 2016 the General Data Protection Regulation (GDPR) came into force, which considers that "the protection of natural persons with regard to the processing of personal data is a fundamental right" (Regulation (EU) 2016/679). However, it was not until 2018 that the GDPR became mandatory. From then on, companies began to send thousands of emails asking us to renew our consent, which we were not aware that we had given, and informing us of changes in privacy policies (Fundación Telefónica, 2018). Closely related to this, the right to be forgotten is based on two main rights: the right to data protection and the right to privacy. This right arises as a response to the problem of the existence of personal information on the web with no expiry limit, which constitutes a threat to the free development of the personality (Mieres, 2014).

Experience economy

In 1998, the authors Pine and Gilmore, in their paper "Experience Economy", spoke of the emergence of a new type of economy, the experience economy, "characterised by a type of consumer focused on the search for and experience of a series of sensations, memories and moments qualified as extraordinary and memorable" (Moral, 2012, p. 5).
The main idea of the experience economy is that a series of emotions and feelings are related to a product, increasing the value of the product or service and giving it a more personalised and differentiated character. These experiences can be physical, emotional, intellectual or spiritual. It is crucial to know how to develop them, since companies try not only to sell a product but also to generate a sentiment and link the consumer to an experience, as a key factor of competitiveness.

Experience marketing in the digital era

The concept of experience marketing has its origin in Schmitt's papers (1999, 2003) "Experiential Marketing" and "Customer Experience Management (CEM)". Both papers deal with the importance of involving the customer emotionally. Consumer experience is the interaction between the consumer and the product, such that it impacts on the subject in a pleasant way. It constitutes a personal experience, in which customers evaluate their experience by comparing their expectations with the stimuli received (Moral and Fernández, 2012). Marketing is undergoing a transformation due to the new digital environment in which customers move. We are in a new era in which companies must adapt as quickly as possible if they want to remain competitive, and digital marketing has a fundamental role to play in this task (Sainz de Vicuña, 2008). "The new communication strategies are changing the way we reach younger consumers. In the end, researchers are suggesting the need to extend additional research on new communication models that impact on the behaviour of millennial consumers" (Dones, Flecha, Santos and López, 2018, p. 527). The objective of digital marketing does not differ from that of analogical (offline) marketing: "Its contribution remains the same: helping the company to be customer-oriented and trying to satisfy them in what they really value" (Sainz de Vicuña, 2008, p. 75). Experiential marketing has also entered the digital world. It uses new technologies to offer new types of experiences that will appeal to customers (ZenithBlog, 2017). I will highlight the main trends found in this type of marketing in the current year, 2019 (Gregorio, 2018):

1. Video marketing: the presence of videos in marketing campaigns will intensify, providing benefits such as enhanced visibility, increased impact and added value for the user, facilitating user interactivity and improving engagement between brand and customer. In this area, new developments are emerging such as 360° videos and virtual reality.

2. The incorporation of IGTV into Instagram: incorporating this new video format, exclusive to mobile users and allowing longer videos to be viewed, has relaunched the application as one of the most powerful marketing channels on the Internet.

3. Chatbots: these are computer systems with artificial intelligence that make it possible to simulate conversations with the customer, so that customers can raise queries and resolve complaints. They allow savings in personnel costs, more efficient and faster interaction with the customer, and even greater convenience for some users. This is a system that is increasingly being implemented on websites and will be further refined. Chatbots are becoming a part of the sales world. In the area of education chatbots also have great growth potential, such as the UNED (Spain's open university) chatbot, which serves as a virtual teacher to support students (Fundación Telefónica, 2018).
4. Voice search: being able to interact by voice with different devices is a trend that companies are increasingly considering. In addition, new devices have appeared, such as Google Home and Amazon's Alexa, which extend the use of voice to a great variety of possibilities.

5. Influencer marketing: influencers are well-known people with great numbers of followers on their social networks, whose endorsements have 60% more impact than traditional ads (Del Castillo, 2019). Therefore, many brands use them to promote products and services through social networks.

Companies' knowledge of new trends places them at an advantage over their competitors (Gregorio, 2018).

Conclusions

The study of consumer behaviour from a psychological perspective, one that accepts the reality of people as beings who make mistakes when choosing and that clearly distinguishes optimal behaviour from real behaviour, seems essential to understand human attitudes to choices made in situations of uncertainty. It breaks with the classical theory, which was based on rational behaviour by people and which produced erroneous theories because it failed to consider the intuitive factor in choices. This factor, which belongs to System 1, is key, since we are starting to consider the complexity of unconscious actions and their importance in decision making. In my opinion, behavioural economics has become the main form of economics because it most closely matches reality. Habitual behaviours detected in people, such as risk aversion or risk seeking depending on the choice context, evidence the accuracy of behavioural economics. This leads us to consider the importance of the environment in which decisions are made, and of the factors that surround it.

The possibility of setting "traps" for our brain's System 1 to encourage us to make better choices flags up the importance of the libertarian paternalism movement. It seems very necessary to use these nudges as an aid to better decision making, since a completely neutral environment without influences is impossible. The role of the decision architect is always going to involve how to design the framework in which a decision is made. Even if the stockists of large stores decided at random where to place products on a shelf, they would be making the decision not to put the best quality or the cheapest products in the places where they know people look most closely and pause longest, and therefore they would already have decided not to encourage people passing through that aisle to buy the best options.

However, with the advent of the ICTs a new consumer has emerged, with different behaviours and needs. Today, all companies are in a race to be pioneers in offering their customers the most personalised and up-to-date services. Just as behavioural economics used to rely on studies conducted on groups of people to detect their behaviour, with the new technologies it is now much easier to gather information on consumers, because users leave behind data on their lifestyle, which companies then use to personalise and improve their products and to conduct and perfect their marketing strategies. For example, Amazon suggests products that may be of interest based on previous searches. The ability to influence has been taken to the highest level: we are now interconnected, and the opinions we pour into networks can reach anybody anywhere in the world.
In other words, although the way in which brands interact with the consumer has evolved dramatically in recent years, being able to find out details about consumers, in order to develop an environment conducive to customer acquisition, remains the objective of brands. Today it is not enough to know which place on the supermarket shelves is most looked at by customers in order to place products there. Now, for example, consumers' tastes must be discovered through their social networks, their lifestyles and the places they regularly visit, using the information provided through the apps that they use on their mobile phones. This change in the ICT society can be double-edged. Although the fact that an application can track our location is an intrusion into privacy, by contrast, most people find it very useful to have a GPS on their mobiles to show them how to get somewhere, to warn them about traffic and redirect them to a faster route, to recommend a restaurant with their favourite food near home, or to tell them about special ticket promotions for an event they regularly attend. Therefore, the use of our data is often necessary for us to receive personalised services. The question is how legitimate it is to gather our information and how it is used. In my opinion, we are in a stage of "adapting" to the technological era, and gradually a balance will be reached between the benefits provided by the analysis of our data and the consequences this entails. This starts with the legal updating of these aspects, which is already beginning to take place, since many laws have become outdated vis-à-vis this new digital reality.
2020-02-13T09:13:46.819Z
2020-02-04T00:00:00.000
{ "year": 2020, "sha1": "102678ac9f831f33f228f2be56f62ec75b1ae2cc", "oa_license": null, "oa_url": "https://doi.org/10.7200/esicm.164.0503.4", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "c02de6667e4809cb098dacef9d3435e11933dc37", "s2fieldsofstudy": [ "Economics", "Computer Science", "Business" ], "extfieldsofstudy": [ "Business" ] }
119313498
pes2o/s2orc
v3-fos-license
Finite groups of diffeomorphisms are topologically determined by a vector field

In a previous work it is shown that every finite group $G$ of diffeomorphisms of a connected smooth manifold $M$ of dimension $\geq 2$ equals, up to quotient by the flow, the centralizer of the group of smooth automorphisms of a $G$-invariant complete vector field $X$ (shortly $X$ describes $G$). Here the foregoing result is extended to show that every finite group of diffeomorphisms of $M$ is described, within the group of all homeomorphisms of $M$, by a vector field. As a consequence, it is proved that a finite group of homeomorphisms of a compact connected topological $4$-manifold, whose action is free, is described by a continuous flow.

Introduction

The study of the automorphism group, or centralizer, of a complete vector field X of class C^r, r ≥ 1, or more precisely that of its quotient by the flow of X, is a classical question with a great number of interesting results. Often these quotient groups are trivial or almost trivial if a reasonable hypothesis of transversality is imposed. Therefore it is natural to consider the inverse point of view (the inverse Galois problem): given a group of diffeomorphisms G, to construct a complete vector field X whose automorphism group, up to quotient by the flow, equals G (shortly, one will say that X determines or describes G). Notice that this last problem can be addressed in topological manifolds and homeomorphisms by replacing the vector field by a continuous flow. In [8] it is shown that every finite group of diffeomorphisms of a smooth connected manifold is determined, within the group of all diffeomorphisms, by a vector field. Here the foregoing result is extended to show that every finite group of diffeomorphisms can be described, within the group of all homeomorphisms, by a vector field: there exists a G-invariant complete vector field X on M such that the map of Theorem 1.1 (see the sketch after this introduction) is a group isomorphism, where Φ and Aut_0(X) denote the flow and the group of continuous automorphisms of X respectively.

An immediate consequence of Theorem 1.1 is that any smoothable finite group of homeomorphisms of a connected topological manifold is determined by a continuous flow: the corresponding map of Corollary 1.2 is a group isomorphism, where Aut_0(ψ) denotes the group of continuous automorphisms of ψ.

Remark 1.3. While it is a classical result that any finite group of homeomorphisms of a compact surface is smoothable, the situation in dimension three is not straightforward: not every finite group G of homeomorphisms of a 3-dimensional topological compact manifold is smoothable [2]. Indeed, G is smoothable if and only if it is locally linear. Therefore the corollary above applies to connected compact surfaces and, in the case of topological connected compact 3-manifolds, if G is locally linear. For a generalization of Corollary 1.2 to some cases of non-smoothable actions see Example 6.4 and Theorem 6.5. In this last one we show that a finite group of homeomorphisms of a compact connected topological 4-manifold, whose action is free, is described by a continuous flow.

Terminology: We assume the reader is familiarized with our previous paper [8]. All structures and objects considered in this work are smooth, i.e. real C^∞, and manifolds are without boundary, unless another thing is stated. Whenever we say a set is countable we mean the set is either a finite set or a countably infinite set. For the general questions on Differential Geometry the reader is referred to [5], while we refer to [3] for basic facts on Differential Topology.
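The displayed map of Theorem 1.1 did not survive extraction. Judging from Sections 3 and 4 below, where every continuous automorphism f is shown to equal g ∘ Φ_t for a unique pair (g, t), the intended map is presumably the following; this is a reconstruction, not a verbatim quotation of the theorem.

% Hedged reconstruction of the lost displayed map of Theorem 1.1:
% the evaluation of the pair (g, t) on the flow of X.
\[
  G \times \mathbb{R} \;\longrightarrow\; \operatorname{Aut}_0(X),
  \qquad (g, t) \;\longmapsto\; g \circ \Phi_t .
\]
% Injectivity: g \circ \Phi_t = \mathrm{Id} forces g = e and t = 0;
% surjectivity is the content of Sections 3 and 4, where every
% continuous automorphism f is written as f = g \circ \Phi_t.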
Some preliminary notions

Given a vector field Z on an m-manifold M, a continuous automorphism of Z is a homeomorphism f : M → M which maps integral curves of Z into integral curves of Z (i.e. if γ(t) is an integral curve of Z then f(γ(t)) is so). The set Aut_0(Z) of all continuous automorphisms of Z is a subgroup of the group of homeomorphisms of M. If Z is complete and Φ_t denotes its flow, then f ∈ Aut_0(Z) if and only if f ∘ Φ_t = Φ_t ∘ f for any t ∈ R. In a more general setting, given a topological space E, a continuous flow is a continuous map ψ : R × E → E such that ψ_0 = Id and ψ_{t+s} = ψ_t ∘ ψ_s for each t, s ∈ R. As before, Aut_0(ψ) is the group of all homeomorphisms f : E → E such that f ∘ ψ_t = ψ_t ∘ f, t ∈ R. We say that a subset S of Homeo(E) is smoothable if there exists a structure of smooth manifold on E which is compatible with the preexisting topology and makes every element of S a diffeomorphism.

Returning to the smooth framework again, given a vector field Z on an m-manifold M, a pseudo-circle of Z is a subspace of M which is homeomorphic to S^1 and consists of a regular trajectory of Z and an isolated singularity. In this case the α-limit and the ω-limit of the regular trajectory is the singular point. Let B(r) be the open ball in R^m centered at the origin and with radius r > 0. For the purpose of this work, we will say that p ∈ M is a source of Z if there exists an open neighborhood of this point which is diffeomorphic to an open ball B(r), with p ≡ 0, such that Z takes a radial form in the coordinates given by the diffeomorphism (see the sketch at the end of this section), where (1) φ is a non-negative function and φ^{-1}(0) is countable, and (2) on each ray issuing from the origin there are at most a finite number of zeros of φ. By condition (2), for every ray issuing from the origin there exists just one regular trajectory whose α-limit is p ≡ 0 and that near the origin lies along this ray. A point q ∈ M is called a rivet if the following hold: (a) q is an isolated singularity of Z, (b) around q one has Z = ψZ̃ where ψ is a function and Z̃ a vector field with Z̃(q) ≠ 0, and (c) no trajectory has q as α-limit and ω-limit at the same time. Note that by (b) and (c), any rivet is the ω-limit of exactly one regular trajectory, the α-limit of another different one and, moreover, it is an isolated singularity of index zero. A topological rivet means an isolated singularity of Z that is the α-limit of a single regular trajectory, the ω-limit of another single regular trajectory, and both trajectories are different. As one would expect, any rivet is a topological rivet. By definition, a chain of Z is a finite and ordered sequence of three or more different regular trajectories, each of them called a link, such that: (a) The α-limit of the first link is a source or empty. (b) The ω-limit of the last link is a pseudo-circle. (c) Between two consecutive links, the ω-limit of the first one equals the α-limit of the second one; moreover, this set consists of a rivet. The number of links defining a chain is called the order of the chain. The ω-limit of a chain is that of its last link.

Given a subset Q ⊂ M, we say that the dimension of Q does not exceed ℓ, or Q can be enclosed in dimension ℓ, if there exists a countable collection {N_λ}_{λ∈L} of submanifolds of M, all of them of dimension ≤ ℓ, such that Q ⊂ ∪_{λ∈L} N_λ. Note that the countable union of sets whose dimension does not exceed ℓ does not exceed dimension ℓ either. On the other hand, if the dimension of Q does not exceed ℓ < m then Q has measure zero and therefore Q has empty interior.
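The displayed local expression of Z in the definition of a source above is also lost in this extraction. A natural candidate consistent with conditions (1) and (2), offered here purely as a reconstruction and not recovered from the paper, is the rescaled radial field:

% Hedged reconstruction: on the coordinate ball B(r), a source may be
% written as a non-negative multiple of the radial vector field,
\[
  Z \;=\; \varphi(x)\,\sum_{j=1}^{m} x_j\,\frac{\partial}{\partial x_j},
\]
% so that trajectories leave the origin along rays; condition (2)
% (finitely many zeros of \varphi on each ray) then yields exactly one
% regular trajectory with \alpha-limit p along each ray, as stated.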
Let us give the last definition of this section. A vector field Z on M is called limit (abbreviation of "with an almost controlled ω-limit") if the following conditions hold: (i) The set of zeros of Z is discrete (that is, with no accumulation point). (ii) Z has exactly one pseudo-circle. (iii) There exists a set Q ⊂ M whose dimension does not exceed m − 1 such that the trajectory of any point of M − Q has the pseudo-circle as ω-limit. (iv) Z has no chain and no periodic regular trajectory.

Proposition 2.1. Each sphere S^k, k ≥ 2, supports a limit vector field.

Proof. On S^k ⊂ R^{k+1} consider the vector field ξ, the orthogonal projection of the vector field ∂/∂x_{k+1} onto the sphere, whose trajectories go from the south pole to the north one. Since ξ is transverse to the equator E = {x ∈ S^k : x_{k+1} = 0}, one may consider a tubular neighborhood A of E, endowed with coordinates (t, y), in such a way that E corresponds to {0} × S^{k−1} and ξ to ∂/∂t.

First assume k = 2. On A consider the vector field Z' = φ(t) ∂/∂t + (1 − φ²(t)) (−y_2 ∂/∂y_1 + y_1 ∂/∂y_2) and extend it outside A by −ξ on the north part and by ξ on the south one. Fixing a point p_0 ∈ E, consider a function ψ : S² → R vanishing at p_0 and positive on S² − {p_0}. It is easily checked that Z = ψZ' is a limit vector field (here Q is the equator plus both poles, thus the dimension of Q does not exceed 1; observe that the only sources are the poles).

Now assume k ≥ 3; let Z̃ be a limit vector field on S^{k−1} constructed by induction. On A consider the vector field Z = φ(t) ∂/∂t + (1 − φ²(t)) Z̃, where Z̃ is regarded as a vector field on A in the obvious way, that is, tangent to the second factor, and extend it outside of A by −ξ on the north part and by ξ on the south one. We now prove that Z is a limit vector field on S^k. First note that the only sources of Z are the poles. Moreover, Z does not have any rivet, which implies that Z has no chain. Indeed, clearly no point of S^k − E is a rivet; on the other hand, if (0, q) were a rivet, as Z is tangent to {0} × S^{k−1} the only trajectory whose ω-limit is this point would be included in {0} × S^{k−1}. But clearly (−δ, 0) × {q}, for a δ > 0 sufficiently small, is included in a trajectory with ω-limit (0, q), which leads to a contradiction. By construction, no trajectory in S^k − E is regular and periodic, so Z does not possess any periodic regular trajectory. On the other hand, if Z̃ is regarded as a vector field on E and Q̃ ⊂ E satisfies condition (iii) for Z̃, it suffices to take as Q the union of all trajectories of ξ passing through Q̃ plus both poles. For if p ∈ (S^k − E) and q ∈ E belong to the same ξ-trajectory, then the ω-limit of the Z-trajectory of p equals the ω-limit of the Z-trajectory of q.

where the positive isotropy I^+_φ is the set of positive points and the negative isotropy I^-_φ that of negative ones. If φ ≠ Id has finite order, that is to say φ spans a finite subgroup of diffeomorphisms, then φ is an isometry for some Riemannian metric. Thus if p ∈ I_φ, the use of normal coordinates with origin p allows us to identify the diffeomorphism φ with an element of O(m) different from the identity. Therefore, locally, I^-_φ is a regular submanifold of codimension ≥ 1 and I^+_φ a regular submanifold of codimension ≥ 2. By definition the maximal isotropy I^max_φ is the set of those points p ∈ I^-_φ such that the codimension of I^-_φ at p equals 1. It is easily seen that I^max_φ is either empty or a closed regular submanifold of codimension 1. Notice that if p ∈ I^max_φ then in normal coordinates with origin p the diffeomorphism φ is a symmetry with respect to a hyperplane (the trace of I^max_φ).
Let G be a finite group of diffeomorphisms of M. Let e ∈ G be the identity element and ℓ be the order of G. By the isotropy, the positive isotropy, the negative isotropy and the maximal isotropy of G we mean I^*_G := ∪_{g∈G−{e}} I^*_g, where * equals nothing, +, − or max respectively. For the purpose of this work one will say that the action of G is almost free if I_G = I^max_G.

Proof. (a) If p ∈ I^-_g ∩ I^-_h then p belongs to the positive isotropy of gh^{-1}, so gh^{-1} = e. (b) If p ∈ I^-_g then p belongs to I^+_{g²}, hence g² = e.

In the remainder of this section the action of G is assumed to be almost free. Our goal will be to prove the main theorem under this supplementary hypothesis. The proof consists of four steps. In the first one, we construct a vector field Z as the gradient of a suitable G-invariant Morse function µ. In a second step, one modifies Z to obtain a new G-invariant vector field Y with as many pseudo-circles as (local) minima of µ. The third part is the construction from Y of a G-invariant vector field X that possesses a countable family of chains. These chains are topological invariants of X and allow us to control its continuous automorphisms. Finally, the fourth step is devoted to determining these automorphisms.

3.1. The gradient vector field. Let µ : M → R be a Morse function that is G-invariant, proper and non-negative, whose existence is assured by a result of Wasserman [9]. Let C denote the set of critical points of µ, which is closed, discrete and countable. As M is paracompact, there exists a locally finite family of disjoint open sets {A_p : p ∈ A_p}_{p∈C} which is G-invariant, i.e. A_{g·p} = g · A_p for any p ∈ C and any g ∈ G. By shrinking each A_p if necessary, one constructs a collection of charts {(A_p, ρ_p)}_{p∈C} such that: (Of course k and ε depend on p, and x = (x_1, . . . , x_m) are the coordinates associated to the chart (A_p, ρ_p). Nevertheless, in order to avoid an over-elaborated notation, these facts are not indicated unless it is completely necessary.)

If |O| < ℓ then p ∈ I_G and, by Lemma 3.1, |O| = ℓ/2 and there exists just one element h ∈ G − {e} such that h · p = p; besides, there are coordinates (y_1, . . . , y_m) around p ≡ 0 such that h is given by the symmetry Γ(y) = (y_1, . . . , y_{m−1}, −y_m). By Lemma 7.1 applied to the coordinates (y_1, . . . , y_m) (observe that now x and y have exchanged their roles) there exist coordinates (x_1, . . . , x_m) around p ≡ 0 in which h is still given by Γ. Finally, if our p is a minimum of µ, always with a G-orbit of ℓ/2 elements, by applying Proposition 7.2 to a 0 < r'_p < min{1, r_p} we may modify µ inside ρ_p^{-1}(B(r_p)) to construct a new h-invariant Morse function, still called µ, such that each one of its minima in A_p is not h-invariant and, as before, transfer this modification to every A_q, q ∈ O − {p}, by means of a g ∈ G − {e} such that g · p = q.

On M there always exists a Riemannian metric g' that on each ρ_p^{-1}(B(r_p)) is written as 2∑_{j=1}^{m} dx_j ⊗ dx_j. Therefore shrinking every A_p allows us to assume g' = 2∑_{j=1}^{m} dx_j ⊗ dx_j on the whole A_p. Moreover, taking into account Property (C.3) of the collection {(A_p, ρ_p)}_{p∈C}, we may assume, without losing the property above, that g' is G-invariant, by considering ℓ^{-1}∑_{g∈G} g^*g' instead of g'. As every minimum in

Let Z' be the gradient vector field of µ with respect to g' and let φ : M → R be a G-invariant proper function that is constant around every p ∈ C. As before, φ can be supposed constant on each A_p by shrinking these open sets if necessary.
It is well known that the vector field Z := grad_g̃ µ, the gradient of µ with respect to the metric g̃ obtained from g' by means of φ, is complete. On the other hand g̃ = g' on every A_p, p ∈ C, since φ is constant on these sets. Hence Z = Z' on each A_p.

3.2. Construction of pseudo-circles. Since µ is non-negative and proper, the α-limit of any regular trajectory of Z is a (local) minimum or a saddle of µ, whereas its ω-limit is empty, a (local) maximum or a saddle of µ. Moreover Z does not possess any pseudo-circle because no trajectory of a gradient vector field has its α-limit equal to its ω-limit. Clearly Z has no rivets nor topological rivets. Now, by modifying Z, we will construct a new vector field with as many pseudo-circles as minima of µ. Let I be the set of minima of µ and Ĩ be that of maxima. For the sake of simplicity let us

, endowed with coordinates (t, y), in such a way that E_i corresponds to {0} × S^{m−1} and Z to ∂/∂t. construct Y on A_i as before. Then by means of the G-action construct Y on every A_j, j ∈ O. As the action of G on ∪_{j∈O} A_j is free by property (C.4) of the family {(A_p, ρ_p)}_{p∈C}, this construction is coherent. Therefore from now on Y will be assumed G-invariant. Notice that the singularities of Y in M − ∪_{i∈I} E_i are saddles or sources. The singularities of Y in ∪_{i∈I} E_i are never sources nor rivets, since each of them is the ω-limit of two or more regular trajectories traced in M − ∪_{i∈I} E_i. Thus Y has no topological rivet and, consequently, no chain. Besides, every E_i contains a single pseudo-circle of Y, denoted by P_i henceforth; this vector field does not possess any other pseudo-circle. It is easily checked that Y is complete with no regular periodic trajectories. On the other hand, the set Y^{-1}(0) of singularities of Y consists of C plus the singularities in each E_i (a finite number for every E_i). Since the family {E_i}_{i∈I} is locally finite because {A_p}_{p∈C} is, it follows that Y^{-1}(0) is discrete and countable. Moreover the set of sources of Y equals I ∪ Ĩ.

Lemma 3.2. There exists a subset Q ⊂ M, which does not exceed dimension m − 1, such that for every point q ∈ (M − Q) the Y-trajectory of q is regular and included in M − Q, its α-limit is a source or empty, and its ω-limit a pseudo-circle.

Proof. As the set of zeros of Y is countable and ∪_{i∈I} E_i can be enclosed in dimension m − 1, it suffices to consider the points q of M − ∪_{i∈I} E_i such that Y(q) ≠ 0. On the other hand, since the outset and the inset of any saddle are enclosed in dimension m − 1, the set of points whose α-limit or whose ω-limit is a saddle is enclosed in dimension m − 1. Thus it suffices to study those points q in M − ∪_{i∈I} E_i whose trajectory is regular and

By construction Y is tangent to E_i ≡ {0} × S^{m−1} and a limit vector field on this submanifold. Therefore there exists {0} × Q̃_i ⊂ E_i, which can be enclosed in dimension m − 2, such that for any q ∈ (E_i − {0} × Q̃_i) its Y-trajectory has the pseudo-circle P_i as ω-limit. Consequently, the pseudo-circle P_i is the ω-limit of the Y-trajectory of each point of (−ε_i, ε_i) × (S^{m−1} − Q̃_i). Let Φ_t be the flow of Y. The set (−ε_i, ε_i) × Q̃_i does not exceed dimension m − 1 and, since ℚ is countable, neither does ∪_{t∈ℚ} Φ_t((−ε_i, ε_i) × Q̃_i). In other words, taking into account that I is countable, it follows that the set of points q ∈ (M − ∪_{j∈I} E_j), with Y(q) ≠ 0, whose Y-trajectory intersects some A_i but whose ω-limit is not a pseudo-circle may be enclosed in dimension m − 1.
Finally, if M − Q is not Y-saturated, since all points of the trajectory of q ∈ M − Q have the same properties, we may replace Q by its saturation under the flow of Y.

Consider a set Q as in Lemma 3.2, which is Y-saturated since M − Q is so. On the other hand, as G is finite, the set G · Q still has the properties of Lemma 3.2. In short, we may suppose that Q is G- and Y-saturated. Since

Let U be a G-invariant vector field on M, let ϕ_t be its flow and let q be a point whose U-trajectory is regular and non-periodic; if g · ϕ_t(q) = h · ϕ_s(q) for some g, h ∈ G and s, t ∈ R, then t = s.

Proof. From the hypotheses it immediately follows that ((h^{-1}g) ∘ ϕ_{t−s})(q) = q. Therefore, since U is G-invariant and ℓ is the order of G, ((h^{-1}g) ∘ ϕ_{t−s})^ℓ = (h^{-1}g)^ℓ ∘ ϕ_{ℓ(t−s)} = ϕ_{ℓ(t−s)}. Hence ϕ_{ℓ(t−s)}(q) = q, which implies ℓ(t − s) = 0 and t = s.

Corollary 3.4. The natural action of G on F is free.

Proof. Assume g · T = T for some g ∈ G and T ∈ F. Then given q ∈ T there exists t ∈ R such that Φ_t(q) = g · q and, applying Lemma 3.3 to Y and q, it follows that t = 0 and g · q = q. As q ∉ I_G, then g = e must hold.

The set F is a disjoint union of G-orbits, say F_n, n ≥ 3 (for technical reasons we start at three). Let N' = N − {0, 1, 2}. Since by Corollary 3.4 each F_n consists of ℓ different trajectories, one sets F = {T_{nk} : n ∈ N', k = 1, . . . , ℓ}, where F_n = {T_{nk} : k = 1, . . . , ℓ}, in such a way that T_{nk} ≠ T_{n'k'} if (n, k) ≠ (n', k'). (That is to say, first one numbers the G-orbits in F and then, with a second subindex, the elements of every orbit.) Consider a sequence of G-invariant compact sets {K_n}_{n∈N} such that K_n ⊂ Int K_{n+1} and ∪_{n∈N} K_n = M. For every trajectory T_{nk} let W_{nk} be a set of n − 1 different points of T_{nk} in such a way that:

(2) X is complete and has no periodic regular trajectory. (3) {P_i}_{i∈I} is the family of all pseudo-circles of X. (4) Let C_{nk} be the family of X-trajectories of T_{nk} − W_{nk}, endowed with the order induced by that of T_{nk} as a Y-trajectory. Then C_{nk} is a chain of X of order n whose rivets are the points of W_{nk}. Besides, C_{n1}, . . . , C_{nℓ} are the only chains of X of order n and hence {C_{nk}}, k = 1, . . . , ℓ, n ∈ N', is the set of all the chains of X. Denote by H_{nk} the last link of C_{nk} and by P_{λ(n,k)} the ω-limit of H_{nk} (therefore λ is a map from N' × {1, . . . , ℓ} to I).

Remark 3.5. Notice that the chains C_{nk} given by (4) can be described in topological terms as finite sequences of X-trajectories such that: (i) Between two consecutive links, the ω-limit of the first one equals the α-limit of the second one; moreover, this set consists of a topological rivet. (ii) The α-limit of the first link is an accumulation point of X^{-1}(0) or empty. (iii) The ω-limit of the last link is a pseudo-circle. Observe that {C_{nk}}, k = 1, . . . , ℓ, n ∈ N', is the set of all the objects satisfying (i), (ii) and (iii) above because W is the set of topological rivets of X (Y has no rivet). Therefore any continuous automorphism of X maps chains to chains. By definition the roll R_i, i ∈ I, is the union of all H_{nk} whose ω-limit equals P_i.

X is a suitable vector field. In this subsection Φ_t will be the flow of X. Consider a continuous automorphism f of X.

Proof. Fixed an R_i, consider a chain C_{nk} whose last link H_{nk} has P_i as ω-limit. By Remark 3.5, f(C_{nk}) is a chain of order n, so f(C_{nk}) = C_{nk'} and f(H_{nk}) = H_{nk'} for some k' ∈ {1, . . . , ℓ}. Moreover f(P_i) is the ω-limit of H_{nk'}. As F_n = {T_{n1}, . . . , T_{nℓ}} is an orbit of the action of G on F, there exists h ∈ G such that h · T_{nk} = T_{nk'}, hence h · H_{nk} = H_{nk'}. Now, by composing f on the left with h^{-1}, we may assume f(H_{nk}) = H_{nk}. Since f commutes with any Φ_t and the regular trajectory of P_i is not periodic, there exists t_i ∈ R such that f = Φ_{t_i} on P_i. Consider now any other chain C_{ab}, a ∈ N', b ∈ {1, . . . , ℓ},
with ω-limit P_i. Then f(H_{ab}), which is the last link of f(C_{ab}), has P_i as ω-limit too. Recall that the G-orbit of i possesses ℓ elements, that is to say if i ∈ I_G. Therefore there is a single H_{ab'} with ω-limit P_i: H_{ab} itself. Indeed, there are only ℓ chains of order a and the ω-limits of their last links are included in the disjoint union ∪_{g∈G}(g · A_i). In other words

But H_{ab} is a non-periodic regular trajectory and f commutes with the flow Φ_t, so there exists t' ∈ R such that f = Φ_{t'} on H_{ab}. As P_i is the ω-limit of H_{ab}, one has f = Φ_{t'} on P_i too,

From Proposition 3.6 it immediately follows:

By construction of X (see Property (5)), ∪_{n∈N', k=1,...,ℓ} H_{nk} is dense in M. On the other hand ∪_{i∈I} R_i is closed because {R_i}_{i∈I} is locally finite, so ∪_{i∈I} R_i is a closed set that includes ∪_{n∈N', k=1,...,ℓ} H_{nk}. On the other hand, if p ∈ D_1 ∩ D_2 then there exist R_i ⊂ D_1 and R_j ⊂ D_2 such that p lies in the closures of R_i and R_j, whence f(p) = Φ_{t_i}(p) = Φ_{t_j}(p). As t_i ≠ t_j, from Lemma 3.3 applied to X and p it follows that the X-orbit of p is periodic. Hence X(p) = 0, since X has no periodic regular trajectories, which implies that

where the terms of this union are non-empty, disjoint and closed in M − D_1 ∩ D_2, a contradiction.

Now, composing f with Φ_{−t}, where t is the scalar given by Corollary 3.7 and Lemma 3.9, we may assume, without loss of generality, that f(x) = g_x · x for any x ∈ M, where g_x ∈ G. For finishing the proof of the existence of (g, t) ∈ G × R such that f = g ∘ Φ_t it suffices to apply the following result: if a map τ : M → M is such that for every x ∈ M there exists g_x ∈ G such that τ(x) = g_x · x, then τ = g for some g ∈ G.

Then φX is a complete vector field. Besides, X and φX have the same trajectories (with different speeds but the same orientation in time), α- and ω-limits, pseudo-circles, rolls, rivets and chains. Therefore, reasoning as before, but this time with φX, shows that (g, t) ∈

In other words, φX is a suitable vector field too.

The general case

In this section the main result will be proved in the general case by reducing it to the almost free one. Given g ∈ G − {e} let J_g be the set of those points p ∈ I_g such that the dimension of I_g at p is ≤ m − 2. Set J_G := ∪_{g∈G−{e}} J_g. It is easily seen that J_G is a G-invariant closed set and M − J_G a G-invariant dense open set. One will say that the dimension of a point p ∈ J_G is zero if the dimension at p of every I_g, such that p ∈ J_g, is zero. Let S_0 be the set of all points of J_G of dimension zero. Clearly S_0 is G-invariant and S_0 ⊂ J_G. By making use of normal coordinates with respect to a G-invariant Riemannian metric, centered at points of J_G, it is easily checked that: (1) S_0 has no accumulation point, so it is countable and closed. (2) Every q ∈ S_0 possesses a neighborhood whose intersection with J_G equals {q}. (3) For any q ∈ J_G − S_0 and any neighborhood A of q, the set A ∩ J_G is uncountable.

Since M is paracompact (even more, σ-compact), from (1) and (2)

is G-invariant and G is finite, shrinking the elements of Ã allows us to assume that this family is G-invariant. Even more, one can suppose that each Ã_q is a domain of normal coordinates centered at q of some G-invariant Riemannian metric. For every q ∈ S_0 consider a set q ∈ C_q ⊂ Ã_q that in the normal coordinates mentioned before is a sufficiently small closed non-trivial segment. Set D_q := {(g, p) ∈ G × S_0 : g · p = q} and E_q := ∪_{(g,p)∈D_q} g · C_p. Then E_q ⊂ Ã_q, so the family {E_q}_{q∈S_0} is locally finite, hence

On the other hand, the action of G on M̃ is almost free.
Indeed, if q ∈ I_g ∩ M̃ for some g ∈ G − {e} then q ∉ J_g, so the dimension of I_g at q equals m − 1 and q ∈ I^max_g.

Notice that g^{-1}_*(X̃) equals (φ ∘ g)X on M̃ and zero on M − M̃. Therefore, by taking ℓ^{-1}∑_{g∈G}(φ ∘ g) instead of φ, we may assume that φ is G-invariant, which implies that X̃ is G-invariant too. Given a singularity p of X̃ one has two possibilities: (a) Any neighborhood of p contains uncountably many zeros of X̃; that is to say, p ∈ M − M̃. (b) There is a neighborhood of p that only includes countably many zeros of X̃; that is to say, p ∈ M̃ and X(p) = 0.

Consider f ∈ Aut_0(X̃). From (a) and (b) it follows that f(M̃) = M̃. Thus the restriction of f to M̃ is a continuous automorphism of X̃|_{M̃} = φX and, by Section 3, there exist g ∈ G and t ∈ R such that f = g ∘ Φ̃_t on M̃, where Φ̃_t is the flow of X̃. By continuity f = g ∘ Φ̃_t everywhere. The uniqueness of g and t is obvious. In short, the main result is proved in the general case.

Actions on manifolds with boundary

Let P be an m-manifold with nonempty boundary ∂P. Then each homeomorphism f : P → P induces a homeomorphism of ∂P. Therefore the same reasoning as in Section 4 of [8] shows that the main result of the present paper also holds for a connected manifold P, of dimension m ≥ 2, with nonempty boundary, and a finite subgroup G of Diff(P).

Examples

On R² consider the group G = {e, g}, where g(x_1, x_2) = (−x_1, x_2). Then the action of G on R² is almost free and I^max_G = {0} × R. For constructing a suitable vector field X as in Section 3, one can start with the Morse function µ = (x_1² − 1)² + x_2², which has two minima at (1, 0) and (−1, 0) respectively and a saddle at the origin. Therefore at the end of the process X has two pseudo-circles, around (1, 0) and (−1, 0) respectively. Moreover the set of singularities of X is countable and accumulates towards (1, 0), (−1, 0) and infinity. Observe that I^max_G consists of a singular point and two regular trajectories with the singular point as α-limit and empty ω-limit.

In a similar way, on S² one may consider the group G = {e, g} where now g(x) = (x_1, x_2, −x_3) and the Morse function µ = 2x_1² + x_2², which has two minima at (0, 0, ±1), two maxima at (±1, 0, 0) and two saddles at (0, ±1, 0). The action of G is almost free and there is no minimum on I^max_G.

Example 6.2. Let M be a connected compact manifold of dimension m ≥ 2. Given G, a finite group of diffeomorphisms of M, and a G-invariant Morse function µ, let X be the gradient vector field of µ with respect to a G-invariant Riemannian metric. Then, although the group of smooth automorphisms of X, namely Aut(X), may equal G × R (e.g. in [8, Example 5.2]), the group Aut_0(X) is strictly greater than G × R. Indeed, first note that there always exist a minimum and a maximum of µ, which we denote by p and q, and a trajectory γ of X whose α-limit and ω-limit are p and q respectively. Consider a sufficiently small closed (m − 1)-disk D transverse to X and intersecting γ just once. We may suppose, without loss of generality, that every trajectory of X intersects D at most once and, if so, that its α-limit equals p and its ω-limit q. Let E be the set of those points of M whose trajectory meets D. Then E is diffeomorphic to D × R in such a way that X becomes ∂/∂s, where D × R is endowed with coordinates (x, s) = (x_1, . . . , x_{m−1}, s). For each continuous function λ : D → R vanishing on ∂D, set f(x, s) = (x, s + λ(x)) on E and f = Id on (M − E) ∪ {p, q}; our f is continuous. Its inverse is given by −λ, and obviously f is an automorphism of X, which in general does not belong to G × R. Another way of constructing such an f is to consider a homeomorphism τ : D → D with τ|_{∂D} = Id|_{∂D} and set f(x, s) = (τ(x), s) on E, f = Id elsewhere.
Observe that an analogous construction can be done if the gradient field is slightly modified, namely if one adds a finite number of new singularities of index zero. Thus, in general, the group of continuous automorphisms of the vector fields constructed in [8] is strictly greater than G × R (for the non-compact case the reasoning above can easily be adapted if there is at least a maximum). In other words, these vector fields determine G in the smooth category but not in the continuous one.

is a group isomorphism.

Proof. Consider G, under restriction, as a group of diffeomorphisms of A and define J_G and S_0 as in Section 4 (for A, of course). Then the action of G on M̃ is almost free, which gives rise to a suitable vector field X on M̃.

Example 6.4. The (topological) suspension of S^5 and that of β give rise to a homeomorphism f : S^6 → S^6 of order three, whose set of fixed points is (homeomorphic to) the suspension of RP^3, in such a way that the vertices are the poles. Therefore f cannot be smoothed, otherwise the suspension of RP^3 would have to be a differentiable manifold, which is not the case. Let G be the group of homeomorphisms of S^6 spanned by f, whose order equals three. Clearly G cannot be smoothed. However, away from the poles G is a group of diffeomorphisms. Consider a meridian (that is, the intersection of S^6 ⊂ R^7 with a plane passing through the origin and the poles) and saturate it under the action of G to construct a G-invariant compact set C. Finally set A = S^6 − C and apply Proposition 6.3 to conclude that, even if G cannot be smoothed, there exists a differentiable flow on S^6 that determines G.

Theorem 6.5. Let G be a finite group of homeomorphisms of a connected compact topological 4-manifold M with no boundary. Assume that the action of G is free. Then there exists a continuous flow Φ that determines G.

We devote the rest of this section to the proof of Theorem 6.5. Let P = M/G be the topological quotient manifold and π : M → P the canonical projection, which is a covering. Fix a point a of P. Then P' := P − {a} has a structure of smooth manifold (Quinn [7]). The pull-back of this structure defines a smooth structure on M' := M − π^{-1}(a), in such a way that the natural action of G on M' is smooth and π : M' → P' is a smooth covering. Consider a set a ∈ C ⊂ P that with respect to some topological coordinates centered at a is a sufficiently small closed non-trivial segment; assume that a is one of its vertices. Then π^{-1}(C) is a disjoint union of compact sets C_1 ∪ . . . ∪ C_ℓ, where ℓ is the order of G and each π : C_j → C is a homeomorphism. Notice that P − C is connected and dense in P, and M − π^{-1}(C) is connected, dense in M and G-invariant. Now construct a suitable vector field X on M − π^{-1}(C). Since π^{-1}(C − {a}) is closed in M', a vector field X̃ that vanishes on π^{-1}(C − {a}) and equals φX on M − π^{-1}(C), for a suitable function φ : M' → R, can be constructed as in Section 4. On the other hand, the flow Φ̃ of X̃ can be extended into a continuous flow Φ̄ : R × M → M by setting Φ̄(R × {b}) = b for every b ∈ π^{-1}(a). Accept this fact for the moment; we will prove it later on. If f : M → M is a continuous automorphism of Φ̄, then f(π^{-1}(C)) = π^{-1}(C) for the same reason as in Section 4 (replace singularities of X̃ by stationary points of the flow Φ̄ and take into account that any neighborhood of any point of π^{-1}(C) contains uncountably many points of π^{-1}(C)).
Therefore f : M − π^{-1}(C) → M − π^{-1}(C) is a continuous automorphism of X̃ and hence f = g ∘ Φ̃_t = g ∘ Φ̄_t on M − π^{-1}(C) for some g ∈ G and t ∈ R. Since M − π^{-1}(C) is dense in M, by continuity f = g ∘ Φ̄_t everywhere. Clearly if g ∘ Φ̄_t = Id then g = e and t = 0. In short, even if M has no smooth structure, the continuous flow Φ̄ determines G.

Let us prove that Φ̄ is a continuous flow. The only difficult point is the continuity. Note that, as Φ̃ is G-invariant, there is a smooth flow ψ : R × P' → P' (it is the flow associated to the projection of X̃ onto P'). Denote by ψ̄ : R × P → P the extension of ψ defined by setting ψ̄(R × {a}) = a. Observe that π ∘ Φ̄ = ψ̄ ∘ (Id × π).

Proof. As before, the difficult point is the continuity. For checking it, one will show that ψ̄ : [a, b] × P → P is continuous for any a < b belonging to R. Consider a map s : E_1 → E_2 between locally compact but not compact topological spaces. Finally, if one identifies P with A(P') by regarding a as the infinity point, then ψ̄ = A(π_2) ∘ A(h) ∘ g.

Proof. Consider the map l : R × M → P given by l(t, x) = ψ̄(t, π(x)) and a point v ∈ M'. As R is contractible, l is homotopic to the map (t, x) ↦ π(x). Therefore l_♯ π_1(R × M, (0, v)) = π_♯ π_1(M, v) and hence, with respect to the covering π : M → P, there exists a lift L : R × M → M of l, with initial condition L(0, v) = v. Moreover L = Φ̄ on R × M' since one knows that Φ̄ is continuous on R × M' and π ∘ Φ̄ = ψ̄ ∘ (Id × π).

Proof of Proposition 7.2. First observe that if τ is as in Proposition 7.2 for r = 1, then for any other r > 0 it suffices to take τ_r(x) = r² τ(r^{-1} x). Consider a function ρ as in Lemma 7.3 and set τ(x) = x_1² + · · · + x_{m−1}² + x_m² ρ(‖x‖²). Then |τ(x)| ≤ ‖x‖² everywhere and τ(x) = ‖x‖² if ‖x‖ ≥ 1. On the other hand, an elementary computation making use of Corollary 7.4 shows that the singularities of τ are always nondegenerate and belong to the last axis, while the origin is a saddle.

Remark 7.5. As in one variable between two consecutive minima there always exists a maximum, the function λ of Corollary 7.4 has a single singularity, which is a minimum. A more careful computation shows that the function τ of the proof of Proposition 7.2 has just three singular points: a saddle and two minima.
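Although the statements of Lemma 7.3 and Corollary 7.4 are not recoverable from this extraction, the "elementary computation" invoked in the proof of Proposition 7.2 can be sketched independently of the precise properties of ρ; it is assumed here, plausibly but without the lost statement to confirm it, that the function λ of Corollary 7.4 is λ(t) = tρ(t).

% For \tau(x) = x_1^2 + \dots + x_{m-1}^2 + x_m^2\,\rho(\lVert x\rVert^2):
\[
  \frac{\partial \tau}{\partial x_j}
    = 2x_j\bigl(1 + x_m^2\,\rho'(\lVert x\rVert^2)\bigr),\ j < m,
  \qquad
  \frac{\partial \tau}{\partial x_m}
    = 2x_m\bigl(\rho(\lVert x\rVert^2) + x_m^2\,\rho'(\lVert x\rVert^2)\bigr).
\]
% On the last axis (x_1 = \dots = x_{m-1} = 0, so \lVert x\rVert^2 = x_m^2)
% the first block of equations vanishes identically and, with
% \lambda(t) = t\,\rho(t), the remaining one becomes
% \partial\tau/\partial x_m = 2x_m\,\lambda'(x_m^2); hence the critical
% points of \tau on that axis correspond to x_m = 0 and to the critical
% points of \lambda, matching Remark 7.5.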
Usage of different vessel models in a flow-through cell: in vitro study of a novel coated balloon catheter

Drug-coated balloon catheters are a novel clinical treatment alternative for coronary and peripheral artery diseases. Calcium alginate, poly(vinylethylimidazolium bromide) and polyacrylamide hydrogels were used as vessel models in this in vitro study. In comparison to a simple silicone tube, their properties can be easily modified to simulate different types of tissue. Local drug delivery after balloon dilation in the first crucial minute was determined in a vessel-simulating flow-through cell under a simulated blood stream. Balloon catheters were coated with paclitaxel using the ionic liquid cetylpyridinium salicylate as a novel carrier. Drug transfer from coated balloon catheters to different simulated vessel walls was evaluated and compared to a silicone tube. The highest paclitaxel delivery upon dilation was achieved with calcium alginate as the vessel model (60%), compared to polyacrylamide with 20% drug transfer. The silicone tube showed the least wash-off (<1%) from the vessel wall by a simulated blood stream after one minute. The vessel-simulating flow-through cell was combined with a model coronary artery pathway to estimate drug loss during simulated use in an in vitro model. Calcium alginate and polyacrylamide hydrogels were used as tissue models for the simulated anatomic implantation process. In both cases, similar transfer rates for paclitaxel upon dilation were detected.

Introduction
Drug-coated balloon (DCB) technologies have emerged as a potential alternative to drug-eluting stents (DES) to minimize restenosis [1]. The applied drug should exhibit specific chemical properties and mechanism of action as well as suitable pharmacokinetics and a fast transfer, so that it is quickly absorbed by the vessel wall [2]. Paclitaxel (PTX), a cytotoxic agent, was determined to be the primary drug for DCB due to its efficient uptake as well as its extended retention [3]. The cytotoxic, anti-proliferative effect of DES on the vessel wall has been widely explored [2,4]. Preclinical studies with DCB have shown that 3 µg mm⁻² paclitaxel is the effective dose to achieve an efficient, long-term, antiproliferative effect on the vessel wall [2,5]. Drug delivery during angioplasty depends on drug dose, transfer system, dilation time, release pattern and appropriate balloon coating [6-8]. In addition to pure PTX balloon catheters, different PTX formulations with additives such as urea, butyryl-tri-hexyl citrate (BTHC), iopromide and Shellac (aleuritic and shellolic acid) are commercially available [6]. Kleber et al. summarized the clinical evidence for these different DCBs with CE mark (Conformité Européenne) in coronary arteries [7]. The Paccocath technology, with PTX embedded in a hydrophilic iopromide coating, increases the solubility and thus the transfer of PTX to the vessel wall. More than 80% of the drug is retained during balloon implantation to the target tissue (lesion) and 10-15% of the drug is released in the vessel wall upon 60 s balloon inflation [3].
FreePac technology uses the natural additive urea as a carrier, which should enhance drug release as well as absorption and thereby reduce total drug elution times (30-60 s). During balloon inflation, the blood flow in the vessel is interrupted and therefore expansion can only be maintained for up to one minute. Microporous balloon surfaces with the Shellac coating technology can be inflated for up to one minute and achieve total drug release; a shorter dilation time results in partial drug release [2,3]. In a porcine model, Scheller et al. demonstrated a drug release of approx. 90% after one minute of inflation, and 40 to 60 minutes later they could detect about 10% PTX in the vessel wall. Thus, PTX is transferred into and retained by the pig tissue for a certain time [9]. The previously used models [10-12] are far from the physiological properties of the tissue; e.g., a silicone tube is made to act like an artery. We have been working on polymerized ionic liquids (PILs), which are able to form hydrogels. Depending on the type of ionic liquid and the degree of cross-linking, the mechanical properties can be modified [13]. In the present study, these hydrogels were evaluated as vessel models and compared to known hydrogels. Next to calcium alginate as a natural hydrogel, synthetic polymers with good mechanical and long-term stability were also used [13]. PTX-coated balloon catheters using the ionic liquid cetylpyridinium salicylate (Cetpyrsal) as a novel, innovative additive were studied [11]. Local drug delivery within a vessel-simulating flow-through cell under physiological conditions during the first crucial minute was investigated. For an assessment of this study, the total drug delivery upon dilation (retention in the hydrogel and wash-off (release) from the hydrogel compartment by a simulated blood stream) and the residual load on the balloon were analyzed. Furthermore, the drug loss during a simulated insertion was estimated by combining the flow-through cell with a model coronary artery pathway.

Balloon coating
A pipetting technique was used for coating the inflated balloon catheters according to Petersen et al. [11]. Briefly, PTX and Cetpyrsal were separately dissolved in methanol to yield concentrations of 4.72 mg mL⁻¹ (both stock solutions). Following this, a Cetpyrsal-PTX solution (50%, w/w) was mixed from both stock solutions. 100 µL of the Cetpyrsal-PTX solution was then slowly pipetted onto each balloon catheter, resulting in a PTX surface load of approx. 3 µg mm⁻², i.e. a total of 659.73 µg (balloon 1: 3.5 mm diameter, 20 mm length), 753.99 µg (balloon 2: 4.0 mm diameter, 20 mm length) or 1130.97 µg (balloon 3: 4.0 mm diameter, 30 mm length). During the pipetting process, the balloon was rotated and evaporation of methanol was ensured by a gentle stream of air. Finally, all balloon catheters were dried at 23 ± 2 °C overnight [11].

[Table: balloons used during the in vitro study. Silicone tube (only balloon type 2), calcium alginate (trial 1: balloon type 2, trials 2-6: balloon type 1), poly(VEImBr) hydrogel (only balloon type 2), PAAm (trials 1-3: balloon type 2, trials 4-6: balloon type 3). For the ASTM F2394-07 model, only balloon type 2 was used.]
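As a quick plausibility check on the quoted loads, one can recompute them from the nominal dose. The sketch below (Python) models the balloon as a plain cylinder of the stated diameter and length, which is an assumption made only for illustration, since the actual coated geometry is not described here.

```python
import math

def paclitaxel_load_ug(diameter_mm: float, length_mm: float,
                       dose_ug_per_mm2: float = 3.0) -> float:
    """Total PTX load for a cylindrical balloon surface at a given dose."""
    surface_mm2 = math.pi * diameter_mm * length_mm  # lateral cylinder area
    return dose_ug_per_mm2 * surface_mm2

# Reproduces the loads quoted in the text (to rounding):
print(round(paclitaxel_load_ug(3.5, 20), 2))  # balloon 1 -> 659.73
print(round(paclitaxel_load_ug(4.0, 20), 2))  # balloon 2 -> 753.98 (text: 753.99)
print(round(paclitaxel_load_ug(4.0, 30), 2))  # balloon 3 -> 1130.97
```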
Hydrogel preparation
Calcium alginate hydrogel. Sodium alginate (3%, w/w) was dissolved in de-ionized water. To 0.15 g CaSO₄·2H₂O, 1500 µL de-ionized water was added, and to the resulting suspension 500 µL of a 10% (w) Na₃PO₄·12H₂O solution was added. 16.5 g alginate sol (3%, w/w) was mixed with this fresh calcium-containing suspension using a sheet of strong paper and filled into the vessel-simulating flow-through cell. After complete gelation, the metal rod was removed from the flow-through cell and a simulated artificial vessel wall was obtained.

Polyacrylamide (PAAm) hydrogel. PAAm was synthesized by radical polymerization. Rotiphorese® Gel 30 (3.975 mL) was added to 10.794 mL de-ionized water. Polymerization was initiated by adding 210 µL fresh APS solution (10%, w/w) and TEMED (21 µL). After a short reaction time (2-3 min), the flow-through cell containing the metal rod was filled with the polymerizing solution. Following full gelation, the metal rod was removed from the system.

Poly(vinylethylimidazolium bromide) hydrogel (poly(VEImBr)). 7.5 g of 1-vinyl-3-ethyl-imidazolium bromide (36.93 mmol, [VEIm][Br]), whose synthesis was previously described by Bandomir et al. [13], was dissolved in 11.675 mL de-ionized water. Rotiphorese® Gel B (4.925 mL), 750 µL APS solution (10%, w/w) and 150 µL of TEMED were added. The solution was mixed using a Vortex and, after a short reaction time (1-2 min), the flow-through cell containing the metal rod was filled with the polymerizing solution [13]. The metal rod was removed following complete hardening of the hydrogel.

Simulated use of DCB in a flow-through cell with different vessel models
An adapted vessel-simulating flow-through cell was chosen, which is described in detail by Seidlitz et al. [15]. Instead of the acrylic glass disc, a metal disc was used. Calcium alginate, PAAm and poly(VEImBr) hydrogels were inserted as hydrogel compartments. The DCB was placed in the simulated vessel wall and dilated for 60 s at a nominal pressure of 7 bar. The flow-through cell with the inflated balloon catheter is shown in Fig. 1 with PAAm hydrogel as the vessel model. After expansion, the balloon was removed and isotonic sodium chloride (NaCl, 0.9%) as perfusion medium was circulated along the simulated vessel wall for 1 min at a flow rate of 35 mL min⁻¹. Pumping of the medium was managed with a gear pump (Ismatec MCP-Z ISM 405A, pump head model 186-000, Germany; Tygon® tube R 3607, 3.17 mm ID, VWR International GmbH, Germany), and the set flow rate was adjusted to the blood flow velocity in coronaries [16].
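As a rough sanity check on the chosen perfusion rate, the mean velocity corresponding to 35 mL min⁻¹ can be estimated. The sketch below (Python) assumes the perfusate crosses a lumen comparable to the 3.17 mm ID of the Tygon tubing; this is an illustrative assumption, since the exact cross-section at the gel compartment is not given here.

```python
import math

flow_ml_per_min = 35.0
inner_diameter_mm = 3.17  # Tygon R 3607 ID, taken as the reference lumen

area_mm2 = math.pi * (inner_diameter_mm / 2) ** 2             # ~7.89 mm^2
velocity_mm_per_s = (flow_ml_per_min * 1000 / 60) / area_mm2  # mm^3/s over mm^2
print(f"mean velocity ~ {velocity_mm_per_s:.0f} mm/s")        # ~74 mm/s
```

The result, several centimeters per second, is the order of magnitude of coronary blood flow velocities that the authors cite [16], which is consistent with their statement that the flow rate was adjusted accordingly.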
Aer balloon deation, the pump was started (ow rate 35 mL min À1 ).The PTX concentration simulating drug wash-off from the vessel model within the rst crucial minute was determined by HPLC measurements.The guiding catheter was then ushed with 20 mL methanol.The balloon catheter was extracted in 10 mL methanol for 30 min at 23 AE 2 C and then the residue on the balloon was analyzed.The used hydrogel aer cutting into small pieces was also extracted with methanol (20 mL) for 30 min at 23 AE 2 C to detect the amount of transferred drug into the vessel model.The entire guiding catheter was then ushed with 20 mL of 0.9% NaCl-solution in preparation of the next experiment.In summary, the total PTX delivery upon dilation composed of drug transfer into the hydrogel and drug wash-off from the hydrogel compartment aer 1 min by a simulated blood stream.All samples were quantied by means of HPLC aer a 1 : 2 dilution with methanol. Simulated use of DCB in the vessel-simulating owthrough cell aer passage through an in vitro vessel model according to ASTM F2394-07 A standard anatomic model adapted from ASTM F2394-07, recently described in the literature as a standard procedure, was applied to simulate the implantation process of DCB. 17 The model consisted of polymethacrylate plates forming a simulated course of a coronary artery.The used guiding catheter (Cordis® Vista Brite Tip®; 6F; 1.75 mm ID; 90 cm) with a guide wire (Biotronik SE & Co KG, Galeo M 014) and the tortuous path equipped with a PTFE tube was placed in a 37 AE 2 C heated water bath (Fig. 2).The model was ushed with 30 mL 0.9% NaCl-solution.A DCB was introduced into the guiding catheter of the model and initially placed at the end of the PTFE tube.The guiding catheter was then ushed with 30 mL 0.9% NaClsolution to recover particles and PTX released during tracking.At the distal end of the test path, a hydrogel vessel model (calcium alginate or PAAm) was placed and the balloon was dilated to 7 bar and held for 1 min.The balloon was removed aer deation and extracted in 20 mL methanol for 10 min (residual PTX load on the balloon) at 23 AE 2 C. The pump was then started (ow rate 35 mL min À1 ) and the PTX concentration simulating the drug wash-off in the rst crucial minute was determined.Then the used hydrogel aer cutting was also extracted with methanol (20 mL) for 30 min at 23 AE 2 C (drug transfer into the vessel model).Aer balloon extraction (10 min) in methanol, the balloon was removed and the entire pathway was then nally ushed with 30 mL methanol.Subsequently, the test path was ushed with 0.9% NaCl-solution in preparation of the next balloon dilation. The total PTX delivery upon dilation composed of drug transfer into the vessel model (hydrogel) and drug wash-off from the hydrogel compartment aer 1 min by a simulated blood stream.All samples were quantied by means of HPLC aer a 1 : 2 dilution with methanol. 
Comparison of different hydrogels in the flow-through cell
The first set of DCB experiments compared different hydrogels as tissue models to evaluate the drug release of PTX. Drug transfer, i.e. the retention of PTX in three different hydrogels serving as tissue models (vessel walls), as well as the wash-off (release) from the hydrogel compartment within a vessel-simulating flow-through cell, were investigated during balloon dilation. PTX transfer was examined using different hydrogel compartments to determine the influence of the tissue model on the PTX transfer upon dilation. Certain properties of the hydrogels used to simulate a vessel wall, such as permeability, flexibility and the long-term stability of the synthetic polymers (poly(VEImBr) and PAAm), are of particular importance. Calcium alginate as a natural polymer is easily accessible but has limited long-term stability: monovalent cations such as Na⁺ dissolve the network within a short time. In addition, alginate hydrogels are prone to microbial contamination. Results for the various vessel models are depicted in Fig. 3. The total PTX delivery upon dilation was composed of the drug transfer into the hydrogel and the drug wash-off from the hydrogel compartment after 1 min by a simulated blood stream. In the following, the results from the balloon dilations are discussed.

Drug transfer into the vessel model (Fig. 3, entry 1). The PTX transfer rates into the vessel models are listed in Table 1. Drug delivered into the silicone tube was extracted with methanol (38.6 ± 3.4%, 1.02 ± 0.03 µg mm⁻²). Considerably less PTX was delivered into the hydrogel-based vessel models. In the case of calcium alginate as the vessel wall, a PTX content in the hydrogel of 21.4 ± 10.7% (0.53 ± 0.23 µg mm⁻²) was detected. With PAAm as the vessel wall, only 2.8 ± 1.8% (<0.1 µg mm⁻²) of the PTX was transferred into the hydrogel. There are different possible interpretations of the observed results, because drug transfer from coated balloons to the simulated vessel wall could occur in different ways. Paclitaxel may dissolve on contact with the hydrogel compartment and diffuse into the gel; thus, solubility is very important for drug release and delivery. Dissolution depends on the solubility of the drug in 0.9% NaCl solution. Liggins et al. published a maximum solubility of anhydrous PTX of 3.59 ± 0.41 µg mL⁻¹ in water after 3 h at 37 °C [18]. Another report described a solubility of <0.1 µg mL⁻¹ in aqueous medium, which is quite low [19]. Water solubility could be increased by synthesis of derivatives, at the risk of changing the pharmaceutical characteristics [20]. Due to the very poor solubility of PTX in water, transport via dissolution and diffusion into the hydrogel is not responsible for the main transfer. Another drug transfer pathway may be particle transfer of PTX due to the mechanical forces prevailing during balloon expansion against the vessel wall. Over a period of one minute, contact between the expanded balloon and the simulated vessel wall is established, thus allowing transfer of PTX particles. The contact time was the same in every case, but the inner diameter of the silicone tube (3.0 mm) differed from that of the artificial vessel walls (3.14 mm). Hence, the mechanical forces prevailing during balloon expansion in the silicone tube were stronger and more PTX could be transferred. To conclude, the main PTX transfer during balloon expansion occurred due to the prevailing mechanical forces.
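A back-of-the-envelope bound supports this conclusion: even if the entire one-minute perfusate were saturated at the solubility cited above, dissolution alone could account for only a modest fraction of the load. The Python sketch below makes that bound explicit; the saturation assumption deliberately overestimates what dissolution can do.

```python
# Upper bound on PTX that could dissolve in the 1 min perfusate,
# taking the solubility figure cited above at face value.
solubility_ug_per_ml = 3.59
perfusate_ml = 35.0          # 35 mL/min for 1 min
balloon_load_ug = 753.98     # balloon type 2 at ~3 ug/mm^2

dissolved_max_ug = solubility_ug_per_ml * perfusate_ml  # ~125.7 ug
print(f"dissolution bound: at most "
      f"{100 * dissolved_max_ug / balloon_load_ug:.0f} % of the load")  # ~17 %
```

Roughly 17% at most, well below the approximately 60% total delivery observed with the alginate model, which is consistent with mechanical particle transfer dominating.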
Furthermore, the hydrogel characteristics were important for PTX transfer and diffusion into the hydrogels [21]. PAAm and poly(VEImBr) are synthetic polymers with a specific cross-linker content (poly(VEImBr): 1.7% vs. PAAm: 0.8% cross-linker content) [13]. By contrast, the calcium alginate hydrogel is a natural polymer with variability in its properties. In addition to the mechanical properties (flexibility) of the vessel models, different adhesion properties were present. This corresponds to the different amounts of PTX wash-off from the vessel models after 1 min by a simulated blood stream (see Table 1 or Fig. 3). Moreover, the diffusion of PTX into the vessel wall occurs at various rates, which may be related to the cross-linker content. This leads to a PTX diffusion into the synthetic polymers of <5% (poly(VEImBr) and PAAm), compared to 21.4 ± 10.7% for the natural polymer.

Drug wash-off from the various vessel models after 1 min (Fig. 3, entry 2). A drug release time of only one minute was chosen to simulate a very fast PTX transfer and wash-off from the vessel model. The silicone tube is a hydrophobic material and showed the least wash-off (<1%) from the vessel model (see Table 1). Silicone tubes as vessel models are not very suitable because they do not resemble physiological uptake behavior. A hydrogel is a network of hydrophilic polymer chains and should be more appropriate [22,23]. Among the hydrogel compartments used as vessel walls, PAAm achieved the lowest wash-off quantities (17.8 ± 5.3%, 0.40 ± 0.14 µg mm⁻²), compared to poly(VEImBr) (28.7 ± 26.2%) and calcium alginate (41.2 ± 14.2%, 1.15 ± 0.58 µg mm⁻²). Thus, the highest drug wash-off after 1 min was achieved in the case of calcium alginate as the vessel model. The choice of simulated vessel model is important for an effective drug transfer, as the drug delivery characteristic depends on the hydrogel compartment. With poly(VEImBr) as the hydrogel compartment, some analytical problems occurred, so its potential could not be fully explored. The poly(VEImBr) hydrogel shows strong swelling in methanol, which was used to extract the drug from the hydrogel; most of the solvent diffused into the polymer and the hydrogel rapidly swelled. In addition, the high salinity compromised the HPLC analysis of PTX (value for drug wash-off, see Table 1), and the resulting elongated peaks in the chromatogram were difficult to integrate, together with a low reproducibility of the data. This could be overcome by using other drug candidates or model compounds showing, for example, fluorescence. In summary, the hydrogel material was crucial for the total drug delivery upon dilation (Fig. 3). Since the drug is poorly soluble in water and binds to tissue structures, PTX may persist longer in the vessel wall. Calculated curves for the PTX tissue concentration as a function of time are provided in the literature; within the first hour, the concentration decreases dramatically [24].

Drug residue on the balloon. The residual loads of PTX on the balloon catheters were also determined
(Fig. 3, entry 4). Extraction of the balloon in methanol resulted in the highest PTX concentration for the silicone tube (59.5 ± 4.6%, 1.6 ± 0.2 µg mm⁻²) as the vessel model, meaning that most of the PTX remained on the balloon surface; only 40% of the drug could be transferred during balloon dilation. However, considerably less drug was found on the balloon catheter surface after dilation in calcium alginate (30.8 ± 7.6%, 0.8 ± 0.2 µg mm⁻²) and PAAm (33.2 ± 15.3%, 0.8 ± 0.4 µg mm⁻²) as vessel models. Consequently, in both cases about 70% of the drug was removed from the balloon catheter.

As already mentioned, PTX is characterized by its very low solubility. The balloon catheters used here exhibit a homogeneous coating due to the use of an IL as a novel additive (Cetpyrsal/PTX, 50/50, w/w); there are no needle-like crystals present on the balloon surface [11]. Previous experiments showed that the novel additive reduced the drug loss compared to a commercially available DCB with a urea-based coating [11]. For this reason, there is the possibility to deliver (transfer) more PTX during balloon expansion, and therefore we concentrated on this novel DCB. The degree of crystallization is important; Afari et al. published that more crystalline coatings yield higher tissue levels and biological efficacy [25]. In contrast, less crystalline coatings resulted in improved uniformity and less particle formation [25]. Heilmann et al. found (via an in vivo study) that the advantageous effect of a hydrophilic additive such as iopromide on tissue concentrations was antagonized by increased wash-off of the coatings [26]. Drug loss is a process consisting of mechanical loss by sheath passage and collisions with the vessel wall, and of dissolution of the coating in the blood stream [26]. This process was simulated using a standard anatomic model adapted from ASTM F2394-07 (described in the next section). Drug adherence and loss on the way to the vessel was tested in vitro by Kelsch et al. [8]. They investigated drug loss upon passage through a blood-filled hemostatic valve and guiding catheter for one minute in stirred blood at 37 °C: urea-based DCB lost 26 ± 3% and iopromide-based DCB lost 36 ± 11% of the total amount on the balloon [8].

In conclusion for the simulated use of DCB, the total drug delivery upon dilation differs between the hydrogels used to simulate the vessel wall. Calcium alginate hydrogel as the vessel model showed the highest PTX delivery upon dilation. The wash-off from the alginate hydrogel was high (drug release after 1 min by a simulated blood stream: 41.2 ± 14.2%), yet 21.4 ± 10.7% of the drug diffused into the hydrogel compartment. The silicone tube showed the least wash-off (<1%) from the vessel model after 1 min, but it is quite different from natural vessels. Poly(VEImBr) hydrogels as vessel models were difficult to analyze. In the case of PAAm as the vessel model, only 20% of the PTX could be delivered upon dilation.
Simulated use of DCB in the vessel-simulating flow-through cell after passage through an in vitro vessel model according to ASTM F2394-07
In order to simulate the implantation process, the vessel-simulating flow-through cell was combined with a model coronary artery pathway to estimate drug loss and transfer as well as particle release. Cetpyrsal-based DCBs were manually advanced through a tortuous vessel path consisting of a guiding catheter with a guide wire. Calcium alginate and polyacrylamide hydrogels were used as tissue models for the simulated use in an in vitro model (Fig. 4). The obtained results can be compared with the data from Petersen et al. [11]. In their study, they also used the anatomic model according to ASTM F2394-07, with a silicone tube as the vessel model.

Total PTX delivery upon dilation (Fig. 4, entry 1). Only small transferred fractions were observed for both vessel models after passage of the balloon catheter through the simulated anatomic model. In the case of PAAm, a total PTX delivery upon dilation of 5.1 ± 2.1% (0.14 ± 0.06 µg mm⁻²) was achieved. Similar transfer rates for PTX upon dilation were detected with calcium alginate as the vessel model (6.4 ± 3.8%, 0.13 ± 0.07 µg mm⁻²). As before, a short wash-off time (drug release after one minute) was chosen to simulate the drug behavior after passage through the tracking model. With PAAm as the vessel model, a PTX content of 1.7 ± 0.7% could be detected in the wash-off solution. A similar value was found for calcium alginate as the vessel model (PTX content of 2.0 ± 1.1%) as wash-off from the hydrogel compartment in the first minute. The PTX transfer into the hydrogel compartment was slightly higher (PAAm: 3.4 ± 1.9%; calcium alginate: 4.3 ± 2.8%). Thus, the drug diffused into the vessel model or adhered to the vessel wall and was not released into the medium within one minute. Overall, the total PTX delivery upon dilation was similar for the two different vessel models after the simulated implantation process. Petersen et al. transferred more PTX into the silicone tube (up to 40%) with a PTX-Cetpyrsal balloon catheter (50:50, w/w) coated in the folded condition; with the balloon coated in the expanded condition, the PTX transfer into the silicone tube was lower (5-15%) [11]. Here, the balloon catheters used were coated in the expanded condition. Seidlitz et al. used pure PTX-coated balloons and showed PTX transfer rates into the gel of below 1% (calcium alginate as vessel model) [12]. In their study, they also used a model of a coronary artery pathway to investigate drug loss and drug transfer to the gel. However, in our study with the novel DCB coating, more PTX was delivered upon dilation (calcium alginate: 6.4 ± 3.8% compared to below 1%). In conclusion, the PTX transfer upon dilation depends on the coating of the balloon and on the vessel model used to simulate the vessel wall.

Drug residue on the balloon (Fig. 4, entry 2). Extraction of the balloon catheter in methanol yielded a PTX content of 1.38 ± 0.46 µg mm⁻² with PAAm as the vessel model (Fig. 4). Consequently, 51.4 ± 15.7% of the PTX remained on the balloon surface, i.e. about 50% of the drug was removed from the balloon catheter. However, expansion of the balloon in calcium alginate yielded only 0.27 ± 0.14 µg mm⁻² PTX residue on the balloon (13.3 ± 8.3%); the balloon was almost completely unloaded.
Particle quantification. In addition to the total drug delivery upon dilation, particle measurements (>10 µm, >25 µm) were performed after tracking and dilation of the balloon (Table 2). These size limits (>10 µm, >25 µm) are adopted from the evaluation of surface and coating damage of stent delivery catheters. The mechanism estimated for DCB involves the delivery of particles to the inner lumen of the coronary arteries, i.e. the release of particles or coating fragments in the coronary arteries; possible complications are occlusions of small vessels or capillaries [17,27]. The quantified particles are mainly PTX particles, because Cetpyrsal does not form any detectable particles in aqueous solution under the conditions used.

Using calcium alginate as the vessel model, a total of 589 ± 309 particles (>10 µm) per mm² was analyzed. Particles >25 µm per mm² were detected in a ratio of 1:10 (57 ± 28). In the second test series using PAAm, the number of particles was lower (234 ± 127 (>10 µm) per mm², 34 ± 7 (>25 µm) per mm² balloon surface). Petersen et al. described that DCB based on Cetpyrsal generated a lower quantity of particles (expanded condition: 280 ± 91 particles (>10 µm) per mm² balloon surface) compared to commercially available DCB using a urea-based coating (329 ± 161 particles (>10 µm) per mm² balloon surface) [11]. The amounts of particles generated from PTCA balloon catheters, comparing two modified lubricious polymeric hydrogel coatings applied at various thicknesses, were demonstrated by Babcock et al. [28]. In their study, a submicron coating (dry thickness of 0.5 µm) generated far fewer particulates than the micron coating (dry thickness of 2 µm) on the same substrate in a standard anatomic model adapted from ASTM F2394-07 [28].

Conclusions
Drug-coated balloon catheters are an alternative for coronary and peripheral artery disease. Given the limited number of published results on the in vitro characterization of drug-coated balloons, there is a need for further research. Novel PTX-coated balloons using the ionic liquid Cetpyrsal as an additive were applied in this in vitro study. Drug delivery upon dilation in different tissue models (calcium alginate, poly(VEImBr) and PAAm) using a vessel-simulating flow-through cell was investigated and compared to a silicone tube as the tissue model. The highest PTX delivery upon dilation was achieved with calcium alginate as the vessel model (about 60%); by contrast, a total PTX delivery upon dilation of 20% was determined with polyacrylamide as the vessel model. The vessel models used showed seemingly various adhesion properties, so the PTX wash-off quantities during simulated blood flow differed. The silicone tube showed the lowest wash-off (<1%) from the vessel model after 1 min of simulated blood stream; the highest drug wash-off (release) was achieved with calcium alginate as the vessel model. Moreover, the diffusion of PTX into the vessel wall occurs at various rates, which may be related to the cross-linker content of the hydrogels. In addition to the solubility and thus diffusion of PTX, the hydrogel material as well as the coating was crucial for the drug transfer from the balloon into the vessel wall when compared to reported data. Furthermore, the vessel-simulating flow-through cell was combined with a model coronary artery pathway to simulate the anatomic implantation process. Vast amounts of the coated drug were lost along the simulated artery pathway; only a small fraction of the total PTX load was delivered upon dilation.
Similar transfer rates for PTX upon dilation were achieved with calcium alginate and PAAm as vessel models. The crucial drug delivery upon dilation was examined with the aid of different hydrogel materials to evaluate the in vitro research. These are important data for the in vivo application.

[Fig. 3: Drug transfer rate for PTX in different in vitro vessel models.]
[Fig. 4: Total drug transfer rate upon dilation for PTX after simulated anatomic passage.]
[Table 1: Total PTX delivery upon dilation in different vessel models after simulated use in an in vitro vessel model (n.a.: not available).]
[Table 2: Particle quantification after simulated anatomic passage.]
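For a sense of absolute scale, the particle densities reported in Table 2 can be converted to counts per balloon. The Python sketch below assumes the lateral-cylinder area of balloon type 2 (4.0 mm × 20 mm), the type stated to be used in the ASTM model; the resulting totals are illustrative, not values reported by the study.

```python
import math

# Scale the reported particle densities (per mm^2 of balloon surface)
# to an absolute count, assuming balloon type 2 (4.0 mm x 20 mm).
surface_mm2 = math.pi * 4.0 * 20.0  # ~251.3 mm^2 lateral cylinder area
for model, density_per_mm2 in [("calcium alginate", 589), ("PAAm", 234)]:
    total = density_per_mm2 * surface_mm2
    print(f"{model}: ~{total:,.0f} particles >10 um per balloon")
```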
Development of the General Self-regulation Scale for a Healthy Lifestyle (GSRSHL) among Japanese University Students

Background: A healthy lifestyle of students is an important contributor to both quality of life and longevity. The objective of this study was to develop a new measurement scale, the General Self-regulation Scale for a Healthy Lifestyle (GSRSHL), for university students. Methods: A total of 434 university students (281 male, 153 female) participated in this study. To confirm the validity of the new scale, we examined the relationships among the GSRSHL, the Japanese version of the Brief Self-control Scale, perceived stress, and life satisfaction. Results: The exploratory factor analysis of 15 items yielded two correlated factors: "planning achievement" and "emotional control." The new scale showed good internal consistency. Confirmatory factor analysis with covariate analysis demonstrated that "planning achievement" and "emotional control" were positively associated with self-regulation and life satisfaction. "Emotional control" was negatively related to perceived stress. "Planning achievement" increased the odds of adequate sleep, a balanced diet, and physical exercise. "Emotional control" increased the odds of consuming breakfast, having adequate sleep, following a balanced diet, and having less stress. Conclusions: Our study provided evidence of the validity and applicability of the GSRSHL in Japanese students.

Background
Recently, chronic somatic diseases (e.g., diabetes, cardiovascular disease, respiratory disease, and cancer) have become the most common causes of death worldwide, and nearly all such diseases emerge following years of undesirable, unhealthy lifestyles (e.g., Matsui, 2019; Miller et al., 2018). With the continued aging of Japanese society, the annual mortality rate from chronic somatic diseases is over 60%, and Japanese health organizations and jurisdictions have increasingly implemented health-promotion programs aimed at the prevention, diagnosis, and treatment of lifestyle-related diseases (Ministry of Health, Labour and Welfare, 2016; 2019). However, their effectiveness has recently aroused widespread doubt (World Health Organization, 2010), because participants do not appear able to maintain the long-term behavioral changes that support more healthful lifestyles, despite having received health education from an early age. These ineffective efforts toward long-term change can result from any number of factors, such as bad influences in one's environment or a loss of interest in one's health care over time (Darviri et al., 2014; Yamamoto et al., 2018). In Japan as elsewhere, young people who graduate from high school and leave for university gain newfound independence from their parents and establish drastically different lifestyles. Young people on their own must construct their new lives according to their priorities and values, including how they maintain their health. Yet young adulthood is the time when many undesirable lifestyle habits form, such as smoking, drinking, skipping meals, overeating, and ceasing or failing to exercise. Indeed, many previous researchers have reported that increasingly unhealthy lifestyles in adolescence are associated with lifestyle-related diseases in adulthood (Matsui, 2019; Miller et al., 2018).
The implication of these various findings is that, for people to enjoy healthy lives and prevent lifestyle-related diseases, they must be made aware of the importance of "self-regulation" from an early age, and efforts are needed to improve this ability, especially among university students. Future research can be expected to provide individualized health education tailored to clients' individual levels of self-regulation ability. So what is self-regulation? In the early stages of research on self-regulation, the definition focused mainly on "impulse control/emotional control" (e.g., Mischel et al., 1989; Thoresen & Mahoney, 1974). Impulse behavior control has long been a concern in fields of research related to the self-regulation of eating behaviors, alcohol abuse, and other dangerous behaviors (e.g., Atalayer, 2018; Stephan et al., 2017). In addition, it has been reported that emotional control and adjustment are related to psychosomatic symptoms in children (Sato et al., 2016). Recently, research has focused on other aspects of self-regulation, such as "achieving goals/solving problems/adapting to social life" (e.g., Miyazaki et al., 2007; Sugiwaka, 1995; Takahashi & Miura, 2016). Self-regulation, composed of achieving goals and solving problems, can help people curb relatively undesirable goals and, at the same time, pursue more desirable goals over the long term by realizing one's own social value (Ozaki et al., 2016). Individual differences in lifestyle self-regulation are precisely what is associated with a variety of healthy habits. Therefore, self-regulation in daily life, if it can be measured, will play a prominent role in improving unhealthy lifestyles and preventing lifestyle-related diseases. In Japan, previous studies on self-regulation have focused on learning psychology, neurology, sociology, and criminology (e.g., Arai & Hishiki, 2019; Harada & Tsuchiya, 2019; Kiuchi et al., 2017; Umeno et al., 2017), and scales have been developed to measure these aspects (learning, interpersonal, etc.) of self-regulation (e.g., Ozaki et al., 2016), but few studies have been conducted on self-regulation relating to lifestyle changes. Some sporadic studies on health-related self-regulation ability have been conducted among patients, but not the general public (Fukada et al., 2012; Ogasa, 2018). Additionally, the only measurement available for health-related self-regulation targets the elderly, focusing mainly on the health behaviors of the elderly, with questions regarding dental checkups, regular meal intake, exercise, and whether they have physical examinations (Fukada et al., 2012). For the Japanese population, the Self-control Scale Japanese Version (Miyazaki et al., 2007) was created based on the Self-control Scale (Tangney et al., 2004). This scale was developed to measure self-control behaviors in various situations of daily life. It focuses on "control" and "regulation", and the items cover control over thoughts, emotional control, impulse control, performance regulation, and habit breaking. Even though this scale uses items that measure habits like "keeping the time" and "eating healthy food," there are only a few of them, because the scale is not intended solely for health-related lifestyle habits. Additionally, in the results of an exploratory factor analysis (EFA) of the Japanese version, the items measuring interpersonal control, emotional control, and impulse control were mixed in one subscale.
The Brief Self-control Scale Japanese Version (BSCS-J) (Ozaki et al., 2016) was created based on the short version of the Self-control Scale. It is a good scale for assessing individual differences in several different aspects of self-control, but it contains only two items that measure healthy habits. In order to address the above problems, it is necessary to design a new measurement scale to evaluate the degree of control exerted by an individual specifically over daily life habits, especially for university students. Thus, in this study, based on the two important aspects of the definition of self-regulation, "control" and "achievement," we created a new General Self-regulation Scale for a Healthy Lifestyle (GSRSHL). This scale records several specific health behaviors in relation to daily life, rather than vaguely measuring several aspects. In addition, in further studies, this measurement may be used to investigate the lifestyles and behavior characteristics shown by individuals with high self-regulation ability, allowing us to focus on strengthening self-regulation ability in terms of lifestyle habits. Thus, the objective of this study was to create and assess the new GSRSHL. Concurrent validation of the scale was based primarily on a similar self-control scale, perceived stress, and healthy lifestyle questionnaires.

Methods
Design and procedure
A total of 434 Japanese university students (281 male, 153 female, aged 17 to 25 years old, M = 20.0, SD = 1.7) were asked to participate in this study. Excluding missing values, the sample for analysis comprised 411 students (265 male, 146 female, aged 18 to 25 years old, M = 20.0, SD = 1.7), and the effective answer rate was 89.3%. Participants completed the questionnaire either before or after classes with the permission of teachers. Participants were given an outline of this research before answering the questions. The research protocol was approved by the Human Ethics Committee of the Faculty of Human Development of K University of Japan.

Development of the item pool for the self-regulation for lifestyle scale
To collect items assessing self-regulation ability in daily life, 20 open questions like "What are you doing to protect your healthy lifestyle?" and "What kinds of behaviors do you think will damage your daily routine?" were designed. Twenty-two university students (12 male, 10 female) were asked to complete the questionnaire. Finally, 15 items were gathered based on the qualitative data obtained from the participants' answers in order to measure general self-regulation for a healthy lifestyle (GSRSHL), with a five-point Likert scale (1 = not at all; 5 = very much) used in the questionnaire.

Self-control
Self-control was assessed with the BSCS-J (Ozaki et al., 2016), which, more specifically, was used to assess individual differences in self-control. This scale was translated based on the Brief Self-control Scale English version (Tangney et al., 2004). This one-dimensional scale measures control over thoughts, emotional control, impulse control, performance regulation, and habit breaking with 13 items. Participants were instructed to indicate how they saw themselves on a five-point Likert scale. Cronbach's alpha was .72.

Perceived stress
The degree to which an individual's life is considered stressful was assessed with the Perceived Stress Scale (PSS) Japanese Version (Sumi, 2006). This scale was translated based on the PSS English version (Cohen et al., 1983).
This is a two-factor scale that measures positive (7 items) and negative (7 items) aspects of stress control using a five-point Likert scale (0-4) for 14 items. Higher scores indicate a higher level of perceived stress. Cronbach's alpha indicated good internal consistency (α = .83).

Lifestyle
Items can be answered as yes (= 1) or no (= 0). "Yes" indicates a healthy habit, and "no" an unhealthy habit. A higher total score means a healthier daily routine. Based on the total score, the sample can be divided into three groups: unhealthy group (0-4 points), moderate group (5-6 points), and healthy group (7-8 points). The Cronbach's alpha was .74.

Data analysis
EFA was used to identify the factor structure of the 15 GSRSHL items. Loadings of each item were maximized using the principal factor method with promax rotation. Items with a factor loading below .3 would be deleted (Oshio, 2018) and the EFA repeated. Cronbach's alpha for each subscale was calculated to assess the internal consistency of the identified factors, along with t-tests to describe gender differences. Confirmatory factor analysis was performed to identify significant determinants of the subscales of the GSRSHL for healthy lifestyles. In this way, the convergent and discriminant validity of the GSRSHL can be inferred using several different validating variables. We then applied binary logistic regression analysis to predict the healthy behaviors of the lifestyle scale from each subscale of the GSRSHL. In addition, based on the scoring rules of the lifestyle scale, we grouped lifestyles into three groups: healthy, moderate, and unhealthy. Analysis of variance (ANOVA) of the GSRSHL was conducted across these groups. The Statistical Package for the Social Sciences (SPSS) version 22 and Amos version 21 were used for all analyses.
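For readers who want to reproduce the internal-consistency figures on their own data, Cronbach's alpha is straightforward to compute from the standard formula. A minimal Python sketch on invented Likert responses (not the study data) follows.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a respondents x items matrix of Likert scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of the total score
    return k / (k - 1) * (1 - item_vars / total_var)

# Toy data: 6 respondents answering a 7-item subscale on a 1-5 scale.
rng = np.random.default_rng(0)
base = rng.integers(1, 6, size=(6, 1))                       # per-person level
toy = np.clip(base + rng.integers(-1, 2, size=(6, 7)), 1, 5) # noisy item scores
print(f"alpha = {cronbach_alpha(toy):.2f}")
```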
Results
First, we checked the 15 items for ceiling and floor effects; none were found. Table 1 shows the results of the EFA of the GSRSHL, using the principal factor method with promax rotation. We deleted one item because its factor loading was below .30 and performed the analysis again. As a result, two factors were identified based on the scree plot and the interpretability of the factors. "Emotional control" (7 items) assessed a general capacity for emotional self-discipline, and "planning achievement" (7 items) assessed a range of plan-achievement performance. Factor loadings for these two factors were above .45, and Cronbach's alphas were .86 and .85. The two factors were positively correlated with each other (r = .503, p < .001). Table 2 presents the mean scores for each subscale of the GSRSHL. It also shows the meaningful associations between the subscales of the GSRSHL and the other study variables. In the t-test of the GSRSHL, we did not find a significant difference between men and women. The significant associations with the subscale scores of the GSRSHL can be summarized as follows: all subscales were significantly positively correlated with the BSCS-J and the lifestyle scale, and "emotional control" was negatively associated with the PSS. Table 3 shows the results of the logistic regression analysis. Older age was significantly associated with sleep duration (p < .001), and compared with men, women had healthier eating behaviors (breakfast and nutrition, p < .001 and p < .01). The higher planning achievement group made healthy choices in sleep duration, nutrition, and exercise habits (p < .01, p < .001, and p < .001, respectively). The higher emotional control group was significantly associated with breakfast, sleep duration, nutrition, and less perceived stress (p < .01, p < .05, p < .05, and p < .01, respectively). Age, gender, planning achievement, and emotional control were not significantly associated with non-smoking, not overdrinking, or learning hours. Table 4 shows the "planning achievement" and "emotional control" average scores of the unhealthy, moderate, and healthy groups of the lifestyle scale. It also shows the results of the ANOVA among the three groups. In both subscales of the GSRSHL, there were significant differences among the three groups (F = 10.9, p < .001, df = 409; F = 30.1, p < .001, df = 409).

Discussion
The objective of this study was to develop a new questionnaire to examine the self-regulation ability involved in daily lifestyle (the General Self-regulation Scale for a Healthy Lifestyle, GSRSHL) and to test its factorial and construct validity. Fifteen items of this new measurement were derived from a descriptive questionnaire, and EFA was used to clarify the factor structure of these items. After deleting one item (factor loading < .3), two factors were obtained from the analysis, "planning achievement" and "emotional control," which correspond to the two aspects of the self-regulation definition: "achievement" and "control" (e.g., Baumeister et al., 1998; Miyazaki et al., 2007; Sugiwaka, 1995; Thoresen & Mahoney, 1974). "Emotional control" (7 items) assessed a general capacity for emotional self-restraint, while "planning achievement" (7 items) assessed a range of performances of regulation. Both factors describe the principal regular behaviors that are crucial for maintaining a routine and can contribute to keeping an efficient and healthy life. As the Cronbach's alphas were above 0.70 (Oshio, 2018), both factors showed satisfactory internal consistency, and the scores showed adequate variances relative to the theoretical ranges. The Pearson correlation indicated that the two factors were significantly positively related to each other, which implies that they collectively represent the degree of people's self-regulation ability. To test validity, a similar questionnaire (the BSCS-J) was used, which measures individual differences in self-control. The results showed a positive association of the GSRSHL subscales "planning achievement" and "emotional control" with the BSCS-J. Therefore, as presented in the Results section, a high GSRSHL score reflects a self-efficacious lifestyle pattern. Further, the new measurement was developed as a broad-based questionnaire of self-regulation. According to previous studies, a scale for the general public needs to show no gender difference between men and women, and the gender comparison in the present study supports this (Ozaki et al., 2016; Tangney et al., 2004). Compared with Fukada et al.'s (2012) health-related self-regulation scale, the GSRSHL targeted younger people (university students). The items not only measured regular daily activities (like whether one eats on time every day), but also recorded specific healthy habits (a planned and targeted lifestyle, negative emotional control) that relate to one's daily routine. In comparison with the Self-control Scale and the BSCS-J Japanese Version (Miyazaki et al., 2007; Ozaki et al., 2016), the GSRSHL measured "planning achievement" in daily life in more detail, as well as the control of emotions that have negative effects on daily life.
As the results presented in Table 2 show, both subscales of the GSRSHL were positively related to the BSCS-J. However, "planning achievement" was more strongly associated with the BSCS-J than "emotional control"; it is conceivable that the BSCS-J is more focused on achievement and planning. As this new measurement scale assesses the self-regulation ability that can help keep one's life healthy, we proposed that the GSRSHL would be positively correlated with a healthy daily routine. The positive association between the GSRSHL and the lifestyle scale confirmed this (Table 2). Moreover, the results of the ANOVA showed that the higher people scored on "planning achievement" and "emotional control," the healthier the choices they made (Table 4). Previous studies have shown an association between emotion regulation and the promotion of mental health (e.g., Sakakibara, 2017). Therefore, in support of the concurrent validity of our new measurement scale, the PSS was negatively associated with the "emotional control" subscale of the GSRSHL (Table 2), and less perceived stress on the lifestyle scale was positively associated with "emotional control" (Table 3). The results of the logistic regression analysis showed that age and gender were significantly associated with sleep duration and eating behaviors: the older a person is, the better his or her sleeping habits, and the more they care about their diet (Kato et al., 2019; Takaizumi et al., 2016). However, this result has not been verified for university students specifically, making it essential to consider the particular situation of university students separately in future studies. Additionally, other studies have found that women, rather than men, were more aware of healthy eating habits, similar to what was found in the current study (Nakao et al., 2016; Namiwa et al., 2010). "Planning achievement" and "emotional control" were significantly associated with better choices in eating habits (breakfast and nutrition) and sleep habits. These results are in line with those obtained in previous studies (e.g., Starcke & Brand, 2012; Tangney et al., 2004). Although several studies have verified the association between emotional control and healthy eating habits, no study directly explains the relationship between emotional control and "not skipping breakfast" (Shibatsuji & Fumiko, 2003; Shimizu, 2013). Additionally, making plans is especially important for exercising in daily life, and in Japan several school programs such as "Let's make a plan" have been carried out to increase the physical strength of students (Ministry of Education, 2008). In this study, the result of the logistic regression analysis also revealed the importance of planning. On the other hand, the binary logistic regression analysis showed no contribution of the GSRSHL to risk behaviors (smoking habits, drinking habits). The reason might be that the subjects of this study were university students with an average age of 19.9 years, who are not allowed to buy or drink alcohol or to buy cigarettes in Japan. Therefore, for university students, aside from overdrinking and smoking, there are several other common risk behaviors (Internet addiction, traffic-related risk behaviors, etc.) that need to be examined in future studies (Nozu et al., 2006). Moreover, because our new scale focused only on daily lifestyle, not the regulation of learning, we did not find an association between the GSRSHL and learning hours.
From this result, we can infer the discriminant validity of this new measurement scale. This study also has a number of limitations. First, the items were selected based on the personal experience of 22 randomly selected university students, which does not cover all aspects of a daily routine. Moreover, only self-regulation ability in a general situation is assessed through this new measurement scale. However, self-regulation is not restricted to general situations, but is also reflected in various specific situations we did not examine, such as eating, exercising, and resting. Healthy habits in these specific situations are also related to disease prevention, especially for youth, making it essential to focus on these aspects of daily living in future studies. Second, in this study, several analyses were performed to confirm the reliability and construct validity; however, to develop a better measurement, more analyses are needed to verify the validity and associated factors. Lastly, in the future, it is necessary to expand the scope of the study with more subjects in order to generalize our results.

Conclusions
To summarize, the GSRSHL, which includes the two aspects "planning achievement" and "emotional control," is a new measurement to assess the degree of one's self-regulation ability to keep a daily routine for a healthy lifestyle. As the GSRSHL is positively related to a healthy lifestyle and negatively associated with perceived stress, its reliability and validity were supported in this study. It is our hope that this measurement can be used in more studies related to the promotion of healthy behavior in order to reduce the risks of lifestyle-related disease.

Abbreviations
GSRSHL: General Self-regulation Scale for a Healthy Lifestyle

Ethics approval and consent to participate
On the cover page of the survey, we provided information about the contents of the survey, precautions, and how personal information would be protected. At the end of this information, we included the sentence "By answering and submitting this survey, it is assumed that you consent to participation in this survey." to obtain the consent of participants. We also explained this information verbally before the survey, but did not require a signature on a consent form, because a signature could make a respondent identifiable. The above-mentioned method was approved by the Human Ethics Committee of the Faculty of Human Development of Kobe University.

Consent for publication
Not applicable.

Availability of data and materials
The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.

Competing interests
The authors declare that they have no competing interests.

Funding
The present research was financially supported by JSPS KAKENHI Grant Numbers 15K00871, 17K12537, 18KK0055, and 19K11666. The funding supported the analysis software and equipment for writing. The authors thank the participants in this study.

Authors' contributions
YR and YK conceived of the study, participated in its design and coordination, and helped to interpret and draft the manuscript. RU and AR contributed to the statistical analyses, writing, and interpretation of the results. All authors have read and approved the manuscript.
The challenges of handling deferasirox in sickle cell disease patients older than 40 years.

ABSTRACT
Objectives: Deferasirox is an oral iron chelator with established dose-dependent efficacy for the treatment of iron overload secondary to transfusion. However, there are few data reporting the use of deferasirox in adult patients with sickle cell disease (SCD) and transfusional iron overload. Methods: We conducted a prospective, single-center, nonrandomized study from January 2014 to March 2015 in Campinas, Brazil. Seven patients (five women, median age 50 y.o.) who were followed on a regular transfusion program were treated with a single daily dose of deferasirox (median dose 20 mg/kg). They were monitored for clinical symptoms, renal function and hepatotoxicity. Results: One patient discontinued the study due to lack of compliance. Two patients reported mild to moderate adverse events (gastrointestinal disturbances). Five patients had the drug discontinued due to worsening of renal function. One patient had the drug discontinued due to severe hepatotoxicity that evolved to death; no patient finished the study. Discussion and conclusions: Deferasirox does not appear to be well tolerated in SCD patients older than 40 years, in whom complications of the underlying disease are already fully installed. The choice of the ideal iron chelator for this population should include an evaluation of comorbidities and organ dysfunction, as well as the need to find pharmacogenetic safety markers in this group of patients.

Introduction
Patients with sickle cell disease (SCD) often require blood transfusions for the treatment of both acute and chronic complications associated with the disease. Despite the undeniable benefit of transfusion for these patients, transfusion-associated iron overload is a common complication and, if left untreated, can result in severe impairment of various organs (mainly the liver, heart and endocrine glands), significantly elevating the morbidity and mortality of this disease [1,2]. Clinical experience with iron chelators in patients with transfusional hemochromatosis shows that, when performed correctly, this therapeutic modality is able to reduce the incidence of these complications, improving quality of life and overall survival [3,4]. The efficacy of chelation therapy, however, is often limited by the route of administration of the drug of choice. Deferoxamine, although efficient in the removal of body iron, must be administered subcutaneously or intravenously in prolonged infusions of 8-12 h, 5-7 days per week, which significantly decreases patients' adherence to this regimen [5,6]. In the RELATH study (Registry of Latin Americans with Transfusional Hemosiderosis), which included 646 patients with sickle cell anemia, only 46.3% of the patients were observed to have received chelation therapy. In this study, in which deferoxamine was the only available chelator, 20% of patients abandoned its use, primarily due to adherence issues [7]. The availability of oral deferasirox (Exjade®) in a single daily dose makes this option a very attractive alternative to other iron chelators, favoring patients' adherence to chelation therapy. The safety profile of this drug has already been demonstrated in patients with several types of transfusion-dependent anemias, especially in patients with ß-thalassemia [8-11].
In addition to the possibility of oral administration in a single daily dose, studies reporting the use of Deferasirox in patients with sickle cell disease showed a significant reduction in ferritinemia. However, in all these studies, the preferential inclusion was that of young patients, with no significant comorbidities [12][13][14]. After the advent of chronic therapy with Hydroxyurea and other improvements in the management of these patients, individuals with sickle cell disease currently present an expressive increase in survival, with a consequent longer exposure to transfusions and onset of comorbidities and target organ damage. Considering the importance of adequate iron chelation therapy in patients with sickle cell disease, this study was performed to evaluate the efficacy and tolerability of using Deferasirox specifically in patients with sickle cell disease with iron overload secondary to transfusion and over 40 years of age. Methods This was a prospective, single center, non-randomized study conducted from January 2014 to March 2015 at the Hematology and Transfusion Medicine Center, University of Campinas, Brazil. Seven patients (2 male and 5 females) were included in the study. All patients had SCD (6 homozygous and 1 SC hemoglobinopathy). The median age was 50 (43-67) years. Patients received a median dose of 20 (12-20) mg/kg/day Deferasirox. Patients had documented iron overload: serum ferritin > 1000 ng/ml or transferrin saturation > 50%, secondary to blood transfusion (all patients had received more than 20 units of packed red blood cells throughout life). One patient had no previous use of iron chelators, and the others had their current drug(s) switched to Deferasirox (3 of them used Deferoxamine alone, 2 used Deferoxamine associated to Deferiprone and 1 of them used Deferiprone alone). Initial assessment included serum creatinine, creatinine clearance calculated using the Cockcroft-Gault formula and documented by Chromium EDTA, proteinuria and microalbuminuria, sodium and potassium serum levels, AST, ALT and bilirubin, serological markers for Hepatites B and C viruses, blood counts, reticulocyte count, serum ferritin and iron concentration, total iron binding capacity (TIBC), transferrin saturation index, cardiac and hepatic magnetic resonance imaging and echocardiogram. These tests were repeated periodically to assess the safety, efficacy, and tolerability of deferasirox. Monitoring was carried out as follows: (i) every 7 days: creatinine dosage during the first 8 weeks or until dose adjustment; (ii) every 30 days: blood counts, microalbuminuria, Na, K, AST, ALT and bilirubin; (iii) every 90 days: ferritin, serum iron, TIBC, transferrin saturation; (iv) every 180 days: MRI, echocardiogram, renal scintigraphy with Cr EDTA. The present study was conducted in accordance with the Declaration of Helsinki and was approved by the Institution's Medical Ethics Committee (protocol No. 478.921). Informed written consent was obtained from all patients prior to participating in any of the study procedures. Criteria for suspension of the drug included tolerability failure and mainly progressive changes in renal or hepatic function. In the case of 2 consecutive increases of AST and/or ALT greater than 5 times the baseline values and 2 consecutive increases greater than 33% of baseline creatinine, the dose would initially be reduced by half; in the case that these changes persisted for more than 3 weeks, the medication would be discontinued [15]. 
The data were analyzed under the supervision of a statistics professional and were reviewed by the researchers. Safety and efficacy data are reported for all patients who received at least one dose of deferasirox throughout the study. The safety assessment was based primarily on the frequency of adverse events and laboratory values that exceeded the predetermined intervals. The main efficacy parameter was the change in the last available serum ferritin relative to baseline ferritin. The association between variables was calculated using Wilcoxon (median difference) and Spearman non-parametric (correlation) tests. The percentage difference used in the Spearman test was calculated by subtracting the initial value from the final value of the analyzed variable, divided by the initial value. P values were considered statistically significant when < 0.05. Results Patients' clinical and laboratory data are depicted in Table 1. Adherence to treatment was checked at each visit by the attending physician or nurse through interviews and checking of medication withdrawal records at the pharmacy. Liver iron concentration assessed by MRI before initiation of treatment showed a mean T2* of 1.8 (0.7-6.4) msec and LIC of 15.11 (3.92-39.57) mg/g. The median serum ferritin at the beginning of treatment was 2,971 (1,453-13,969) ng/mL and the median transferrin saturation was 73.57 (22-100)%. The high levels of ferritin prior to treatment with Deferasirox in some of these patients were due to lack of adherence to the subcutaneous chelation regimen or intolerance to other chelators. After the initiation of Deferasirox, the median ferritin was 2,383 (284-12,383; p = 0.297) and the median transferrin saturation was 62.28 (44-83; p = 0.042)%. The correlation of the dose of Deferasirox with the percentage reduction of serum ferritin in relation to basal levels showed rho = 0.359 (p = 0.43). One patient was withdrawn from the study due to a violation of the protocol, since he did not adhere properly to the administration of the drug and failed to attend consultations. The adverse events reported by the patients were predominantly gastrointestinal disorders (nausea, diarrhea and epigastralgia in 25% of patients) ranging from mild to moderate intensity. There was complete improvement of the symptoms after continuation of the treatment, with no need to suspend the drug, as also reported in previous studies evaluating Deferasirox [8,9]. The median baseline creatinine was 0.96 mg/dL (0.69-1.24) and the median creatinine after the introduction of Deferasirox was 1. The median ALT before drug introduction was 35 U/L (12-68) and the median ALT after initiation of treatment and during the study period was 62 U/L (17-463) (p = 0.016). Correlation of the dose of Deferasirox with the percentage increase of serum ALT in relation to basal levels showed rho = 0.418 (p = 0.35). One patient developed severe hepatotoxicity (increase of AST, ALT and bilirubin > 5 x ULN) after administration of deferasirox. This was a 43-year-old female who started 20 mg/kg/day oral DFX therapy in January 2014. She had no signs of chronic liver disease on abdominal ultrasound assessment prior to study enrolment and she initially showed good tolerance to the medication (including normal renal and liver function). Fifty-six days after the beginning of therapy, transaminase levels increased to 4X the ULN; the drug was immediately discontinued but AST and ALT levels showed progressive elevation, and bilirubin levels were also markedly increased. 
A presumable diagnosis of hepatocellular drug-induced liver injury (DILI) was performed, and the injury was initially classified as moderate (score 2) according to the Drug-Induced Liver Injury Network criteria [16]. Autoantibodies for autoimmune hepatitis (antinuclear antibody, liver and kidney microsome type 1 antibody, and smooth muscle antibody) were negative and chronic hepatitis B and C were excluded by serology and PCR. She evolved with grade III/IV encephalopathy, and a rapid clinical deterioration with refractory distributive shock and multi-organ failure, followed by cardiac arrest and death in less than 24 h. It is interesting to note that this patient had a homozygous polymorphism in the gene encoding the multidrug resistance protein (MRP2), 17774delG Abcc2 [17]. Discussion and conclusions Our results demonstrate the difficulty in handling Deferasirox and its low tolerance for this group of patients who already have complications of the underlying disease. The only adverse events reported by the patients were gastrointestinal symptoms (nausea, diarrhea and dyspepsia) that were mild to moderate in intensity, transient, and did not lead to drug withdrawal. However, contrary to previous studies, in our group of patients the increase in serum creatinine was not mild and transient, resulting in the suspension of the drug in the majority of patients. Creatinine levels returned to baseline levels after discontinuation of the drug. However, subsequent re-introduction, even at smaller doses, was not possible due to further worsening of renal function. Changes in renal function parameters were reported as an adverse event of the product in both children and adults, with increased plasma creatinine and proteinuria in 33-38% of treated patients. The effects on renal function appear to be mostly moderate, transient, and dose-dependent [12]. However, since renal toxicity can be silent and non-specific, leading to diagnosis in late stages of injury, data on the incidence and management of these events are essential for the care of these patients, especially those with SCD who may present with renal impairment due to the disease itself. Few studies have so far evaluated the efficacy and safety of using Deferasirox specifically in patients with sickle syndromes [13][14][15]. Vichinsky et al. published data on 5-year cumulative safety and efficacy of deferasirox in 185 SCD transfusion-dependent patients (mean age 19.2 years, range 3-54) [14]. Increased serum creatinine levels led to dose adjustment or discontinuation in 11 (5.9%) patients. In relation to the dose used, there was no apparent difference in incidence or type of adverse events, before and after the dose increase above 30 mg/kg/day in relation to lower doses, as well as no significant differences in laboratory parameters were reported (creatinine clearance, plasma creatinine and proteinuria). In the EPIC study, of the 1,744 patients included, 80 had sickle cell disease (mean age 23.9 years, range 4-60). The dose of Deferasirox was 20-30 mg/kg/day. Only two patients (2.5%) had consecutive serum creatinine increases of more than 33% above the baseline. The increase in serum creatinine in the study appeared to correlate with the previously increased baseline levels of the patients. There were no cases of drug-related proteinuria in patients with sickle cell disease [9]. 
It is important to emphasize once again that in these studies the cases consisted essentially of young patients, possibly with lesions in target organs still incipient or absent. Regarding the patients described in our study, due to the occurrence of the previously reported hepatic and renal toxicities, the dose of the medication had to be reduced or discontinued, resulting in the administration of subdoses to many patients when compared to the doses administered in previous studies. This is probably the reason that we have not found a proven efficacy of deferasirox in reducing iron overload (accessed by ferritinemia). Therefore, this is the first study that specifically addresses the handling of Deferasirox in adult patients with sickle cell disease and transfusion iron overload. Further studies, with a larger number of patients, will be necessary to prove the initial data of this study. The main metabolic pathway for deferasirox is glucuronidation by the subfamily UDP-glucuronosyltransferase 1A (UGT1A) which encodes UDP-glucuronosyltransferase, a glucuronidation enzyme that transforms small lipophilic molecules into excretable water-soluble metabolites. Deferasirox is then eliminated in bile through the multidrug resistance protein (MRP2) which is an organic transporter anion expressed in the canalicular membrane of hepatocytes and in the epithelium of the proximal tubule cells. MRP2 is encoded by the Abcc2 gene. Lee et al. analyzed the genetic polymorphisms of UGT1A and MRP2 to predict toxicities in a population undergoing deferasirox treatment. Their results indicated that hepatotoxicity was related to the Haplotype MRP2 and elevation of creatinine was associated with the UGT1A1 * 6 genotype [18]. Therefore, the investigation of these polymorphisms may lead to a better understanding of the pathophysiological mechanism of deferasirox-related toxicity, in order to determine the actual dosages and contraindications of the drug. In conclusion, administration of Deferasirox at a dose of 10-20 mg/kg in patients with sickle cell disease over the age of 40 years was not well tolerated and treatment was discontinued due to changes in renal function or due to severe hepatotoxicity (that subsequently evolved to death) which were not strictly related to drug dose (Table 2). It was not possible to prove the effectiveness of the treatment in this specific cohort of patients, since none of them reached the end of the study nor was the time of exposure to treatment sufficient to verify efficacy. We conclude that in adult patients with sickle cell anemia, the choice of the ideal iron chelator should include an evaluation of comorbidities and organic dysfunctions, as well as the need to find pharmacogenetic safety markers for the use of drugs in this group of patients. Disclosure statement No potential conflict of interest was reported by the authors. Funding MRI scans and patient costs were sponsored by Novartis.
SAMP, the Simple Application Messaging Protocol: Letting applications talk to each other SAMP, the Simple Application Messaging Protocol, is a hub-based communication standard for the exchange of data and control between participating client applications. It has been developed within the context of the Virtual Observatory with the aim of enabling specialised data analysis tools to cooperate as a loosely integrated suite, and is now in use by many and varied desktop and web-based applications dealing with astronomical data. This paper reviews the requirements and design principles that led to SAMP's specification, provides a high-level description of the protocol, and discusses some of its common and possible future usage patterns, with particular attention to those factors that have aided its success in practice. Introduction Astronomical research requires complex and flexible manipulation and processing of various different types of data. Images, spectra, catalogues, time series, coverage maps and other data types need their own special handling, typically provided by specialist tools. Data sets of different types meanwhile are usually related in various ways arising from their physical origin, for instance catalogues are often derived from images and best understood in conjunction with them, and spectra and time series usually originate from specific sky positions or regions which may be represented on images and described by catalogue entries. To extract scientific meaning from the data it is usually necessary to exploit these linkages between data items as well as the internal structure of each. The working astronomer therefore uses a selection of different software components, each specialising in a particular type of data or manipulation, for different data sets and different tasks, and has to integrate these together in a way that takes account of the relationships of the data items under consideration. For batch or pipeline-type processing the required tool integration is usually, in terms of data flow, fairly straightforward: the output of one step can be fed to the input of the next as a file, stream of bytes, or some kind of parameter list, often under the control of a script of some kind. During the exploratory or interactive phase of data analysis however, this traditional model of tool integration Email address: m.b.taylor@bristol.ac.uk (M. B. Taylor) 1 Present address: Google, USA is less satisfactory. Within a given GUI analysis application it is usual to interact with the data using mouse and keyboard gestures to perform actions like selection or navigation with instant visual feedback, in many cases with some kind of internal linkage between different data views. But communicating such actions or their results between different tools tends to be much more cumbersome. A way can often be found to reflect a result generated by one tool in the state of another, for instance by reading sky coordinates reported by one tool and typing or pasting them into another, or saving an intermediate result from one tool to temporary storage and reloading it into another, but it can be fiddly and tedious, especially if similar actions are required repeatedly. This lack of convenience is more than just an annoyance, it can interrupt the flow of the data exploration, reduce the parameter space able to be investigated, and effectively stifle discovery of relationships present in the data. 
From this point of view, a single monolithic astronomical data analysis user application providing the best available facilities for interactive presentation, manipulation and analysis of all kinds of astronomical data and their interrelationships seems an attractive prospect. In reality of course, no such one-stop analysis tool exists. The obvious practical difficulties aside, it is not even clear that deviating so far from the Unix philosophy of "Make each program do one thing well" (McIlroy et al., 1978) would be desirable. These considerations have driven the development of a framework for communication between independentlydeveloped software items, written in different languages and running in different processes. Such applications can thus be made to appear to the user as a loosely integrated suite of cooperating tools, providing facilities such as data exchange, linked views and peer-to-peer or client-server remote control. Although communication between interactive desktop tools was the original stimulus for what is now SAMP, the framework is flexible enough to support other usage patterns as well. Two previous papers on SAMP have been presented in the ADASS conference series: Taylor et al. (2012a) briefly outlines the architecture and explains the Web Profile, and Fitzpatrick et al. (2013) lists some existing client libraries. The current paper discusses the protocol, its communication model, and its current usage in sufficient detail to understand the design decisions taken and their consequences, particularly from the point of view of the usage scenario outlined above. Section 2 traces the evolution of SAMP from its predecessor PLASTIC alongside a comparison with some alternative messaging systems, section 3 outlines some high-level design principles, section 4 presents a description of the protocol itself along with some of the thinking behind it, section 5 considers its use in practice, and section 6 concludes by reviewing the current status and possible future directions for SAMP, as well as the factors that have encouraged its uptake. For the complete and definitive details of the protocol, the reader is referred to the standard document itself (Taylor et al., 2012b). History In the context of the emerging Virtual Observatory in the mid-2000s, the benefit of connecting client-side tools to improve productivity when working with multiple data types became apparent. In fact this problem was not specific to the VO, but the ease with which multiple related data products could be acquired using VO technologies, themselves sometimes requiring the use of separate tools for data discovery and acquisition, amplified the benefits that such tool integration could deliver. Additionally, the new shared funding and communications channels between institutionally and geographically separated software developers that arose from various VO initiatives proved important in practice as a platform for experimentation and agreement in this area. The external scripting capabilities of tools such as Aladin (Bonnarel et al., 2000), SPLAT-VO (Škoda et al., 2014) and SAOImage ds9 (Joye and Mandel, 2003) already provided the option of tightly coupled master/slave control between pairs of applications, but did not lend themselves to the kind of cooperative interaction envisaged. The developers of Aladin experimented with Java interfaces designed for two-way communications; these delivered some limited integration, but were restricted to applications operating within the same Java virtual machine. 
Meanwhile the Astro Runtime developed by AstroGrid was providing to desktop tools a simplified façade for a range of VO services using their choice of communication technology (XML-RPC, REST, Java RMI or JVM call). From this background, in 2005 discussions between developers of the AstroGrid, Aladin, VisIVO and TOPCAT (Taylor, 2005) software in the context of the Euro-VO framework and the SC4DEVO workshop series led to the development of a new communication protocol PLASTIC: the PLatform for AStronomical Tool InterConnection Boch et al., 2006). PLASTIC built on the implementation and technology choices present in the Astro Runtime to provide the interaction capabilities required by the participating teams, and prototyped many features that were later inherited by SAMP, including a central hub, publish-subscribe messaging, use of XML-RPC, loosely-defined message semantics, and a pragmatic approach to providing "good-enough" communications. It proved popular with developers and users, and was incorporated into a dozen or so desktop applications, which could thereby be used together effectively in productive and sometimes novel ways. Interest in PLASTIC was however largely confined to Europe. Efforts to gain IVOA endorsement and expand the pool of applications that could communicate in this way led after some discussion to the drafting by European and US authors of a successor standard, the Simple Application Messaging Protocol, which was accepted as an IVOA Recommendation in 2009 (SAMP version 1.11). This standard was intentionally similar in many respects to PLASTIC, in order to avoid disrupting patterns of successful cooperation already in use, but the opportunity was taken to amend some decisions that experience had shown to be sub-optimal, and to expand its scope to accommodate other possible usage patterns. Changes made on the basis of lessons learned from PLASTIC included a simplification of the type system, complete language independence (though PLASTIC could be used from any language, certain parts of the protocol were defined with reference to Java), simplification of message targetting, improved security arrangements (security is still rudimentary in SAMP, but opportunities for trivial client spoofing were removed), modification of message names (now both human-readable and wildcard-able rather than opaque URIs), definition of all message parameters and return values as key-value pairs rather than ordered lists, use of fundamentally asynchronous messaging for robustness, restriction rather than proliferation of transport mechanisms, improved error reporting, and better extensibility. Also new in SAMP was the notion of a Profile to provide formal separation between the abstract messaging model and the transport layer. One reason for its introduction was to enable the possible future use of the protocol for messaging in less "PLASTIC-like" contexts. At the time, requirements for improved performance or security were envisaged; to date extensions in those directions have not been explored, but the Profile mechanism has paid off by supporting the later development of the Web Profile to support browser-based clients alongside desktop ones. The introduction of the Web Profile in SAMP version 1.3 (2012) has been the main change to date since the initial version. Other Messaging Systems Many general-and special-purpose messaging frameworks exist. 
It is beyond the scope of this paper to provide a comprehensive review, but we provide here a brief comparison between SAMP and a few of the alternatives. Several generic messaging frameworks share some features with SAMP, for example AMQP, ZeroMQ, XPA, and D-Bus. To our knowledge, none satisfy SAMP's key requirements of an easily implementable platform-neutral standard supporting straightforward messaging between a shared community of clients in quite the way required, though some of these systems could be used as transport layers on which future SAMP profiles could be built, in the same way that XML-RPC has been used in the existing profiles. SAMP deliberately restricts some choices related to implementation and usage to reduce the burden on client developers, so it is perhaps not surprising that generic messaging frameworks are not, on their own, appropriate. One or two messaging systems however merit further mention. WAMP, the Web Application Messaging Protocol 2 , bears some striking architectural similarities to SAMP including a combination of RPC and publish-subscribe messaging mediated by a central component known in WAMP terminology as the Router. However, it does not address the issue of router discovery, so there is no prescribed way for clients to initiate communication. The Intent mechanism for inter-process communication that forms part of the Android operating system, while its target environment clearly differs from that of SAMP, shares with it some characteristics in terms of design and usage. For instance message semantics may be defined in a way which is either app-specific ("explicit") or vague ("implicit"), in the latter case resulting in a user choice at runtime between candidate receiving apps. Furthermore, bulk data transfer is achieved using URIs to refer to an external data source rather than conveying the payload within the message. An example of a domain-specific messaging framework is the Systems Biology Workbench (Sauro et al., 2003), which is close in spirit to SAMP, enabling platform-independent remote method invocation between components (known as "modules") in the field of systems biology, based around the SBML data format. One point of difference is that the SBW infrastructure itself orchestrates the loading of modules to provide the required functionality (the Android OS does something similar with Intents); in SAMP this choice of components is left to the user, though components like AppLauncher (Lafrasse et al., 2012) are available to layer automatic client startup on top of the basic protocol where desired. Design for Interoperability The overriding objective for the design of SAMP has been to foster interoperability in practice. This requires not just a messaging system with sufficient communication capabilities, but also one which developers of popular analysis tools, and ideally private scripts as well, are actually willing and able to integrate into their software. To achieve this, a number of principles have been followed. In the first place, it is as far as possible platform independent. The definition of the protocol is not dependent on or biassed towards use of particular implementation languages or operating systems. Second, ease of adoption. Application authors have found basic use of SAMP to require little implementation effort. In practice, availability of easily deployable SAMP client libraries developed within the SAMP community for a number of implementation languages have been an important factor in this. 
However the communications are, by design, simple enough that basic SAMP use is not hard to achieve given only an XML parser and HTTP access capabilities. Ease of use by end users is equally important, so that those running analysis tools can benefit from the integration capabilities that SAMP provides without needing to perform expert configuration (ideally, any explicit setup at all) or to understand the details of the messaging system. Third, extensibility and flexibility. Building into the system the capability to use it in ways driven by the requirements of the client tools rather than just those foreseen by the standard authors increases its likely usefulness. The mechanisms for extensibility have been particularly designed to allow the introduction of new features without affecting existing ones, with the aim of reducing compatibility issues. Finally, the approach has been above all pragmatic, favouring the straightforward over the rigorous in cases of conflict. For instance message delivery is not guaranteed, but can be expected to work most of the time. The security model will prevent casual interference, but may be vulnerable to determined attack. Semantics are tagged using short readable strings on the assumption of sensible choices, rather than URIs with guaranteed private namespaces. Performance is easily good enough to handle exchange of short control messages on a timescale commensurate with user actions, but not for sustained throughput of high data volumes. It may however be noted that most of these items could if required be ameliorated by future introduction of a new Profile with different transport characteristics. These principles and their application, in some cases informed by positive and negative lessons from the experience of PLASTIC, might not be appropriate for all contexts but have led to a messaging infrastructure which ought to be easy for client developers to understand and adopt, and which has in fact been widely taken up.
Figure 1: Schematic of SAMP Hub and clients joined in a star topology. Black lines indicate clients registered with the hub. The red half-arrows indicate the progress of a message from a sending client (which may or may not be callable) to a receiving client (which must be callable), passing through the Hub.
Protocol Description SAMP is based on a star topology, and its central component is a Hub through which all communications are passed (Figure 1). Clients first perform a resource discovery step to locate the Hub, and then register with it, establishing a private communication channel through which subsequent calls to the Hub's services can be made. These services include accepting metadata about the registering client, providing information about other registered clients, and forwarding messages to those clients. These messages may elicit responses, which may optionally be passed back to the message sender, again via the Hub. All registered clients are able to send messages in this way. Any client may optionally declare itself callable, in which case it is also able to receive messages sent by others. Callability is optional since it is more difficult to achieve in client code, requiring some server-like capacity on top of the ability to invoke Hub services, and simple actions like sending an image or table can be achieved without this requirement. In addition to declaring itself callable, a client wishing to receive messages must explicitly subscribe to one or more MTypes (message types). 
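To make the registration and subscription steps just described concrete, here is a minimal sketch using the Python client library from the Astropy package (discussed further in section 5.4); the client name, metadata and handler are illustrative, and a Hub is assumed to be already running.

```python
from astropy.samp import SAMPIntegratedClient

# Locate the Hub, register, and declare some client metadata.
client = SAMPIntegratedClient(name="ExampleViewer",
                              description="Sketch of a SAMP-aware image viewer")
client.connect()

# Declare the client callable and subscribe it to one MType by binding a
# handler; the library exposes the necessary callback endpoint internally.
def on_image(private_key, sender_id, mtype, params, extra):
    print("Received %s from %s: url=%s" % (mtype, sender_id, params.get("url")))

client.bind_receive_notification("image.load.fits", on_image)

# ... exchange messages with other registered clients here ...

client.disconnect()  # unregister cleanly when SAMP is no longer needed
```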
Every message is labelled with an MType, and the Hub will only deliver messages to clients that have declared their interest in the MType in question with an appropriate subscription. When sending messages, clients may either broadcast them to all subscribed clients or target them to a named client, but in the latter case delivery will fail if the target client has not appropriately subscribed. If a client has no further use for SAMP communications (for instance on application exit), it can and should unregister . This framework combines the notions of publish-subscribe (pub/sub) and Remote Procedure Call (RPC) messaging. Like publish-subscribe, messages are only delivered to appropriately subscribed recipients, but like RPC the sender may optionally target messages to a selected recipient, and may optionally receive responses from the recipient(s). The targetting mode, response requirement, and message content are all decoupled from each other. The details of this system are codified in a three-layer architecture: Abstract API: defines the services provided by the Hub and clients Profile: maps the Abstract API to specific communication operations, such as bytes on the wire MTypes: provide semantics for the actual messages exchanged between clients Note that SAMP thus defines two distinct sets of Remote Procedure Call (RPC) operations: the functions declared by the Abstract API, concerning the mechanics of client-hub communication and message delivery, and SAMP Messages themselves classified by MType, bearing the application-level content that clients wish to exchange with each other. The syntax and semantics of the former are carefully defined by the SAMP standard, but the form and content of the latter are agreed outside of SAMP itself by cooperating client developers. Because of the central rôle of the Hub in this pattern, it presents a single point of failure and potential bottleneck. However, SAMP messages are usually short, and in practice performance issues have not generally been apparent. The following subsections present a more detailed account of these ideas, along with some of the considerations that influenced their design. Sections 4.1, 4.2 and 4.3 describe the three architectural layers listed above, and sections 4.4 and 4.5 describe the underlying type system and how it is used to underpin extensibility in SAMP. Abstract API The Abstract API defines the messaging capabilities of SAMP. It takes the form of a list of a dozen or so function definitions with typed arguments and return values, and well-defined semantics. Most of these functions represent services provided by the Hub, for instance register (which returns information required by the client for future communications, typically an identification token) and notifyAll (which requests forwarding a given message to all appropriately subscribed clients). The remainder represent services required from callable clients, such as receiveNotification (which consumes a given message originating from another client). The messaging model in principle associates a response with every message, containing at least a completion status flag along with zero or more MType-defined return values. However it is up to the sending client whether a response is required from any given message; in many cases the status flag is the only return value, and in this case a sending application may or may not wish to make the effort to pass this on to its user (for instance "the table you sent was/was not successfully received"). 
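As a purely illustrative rendering of this layer, a subset of these operations can be written down as abstract interfaces; the signatures below are simplified sketches based on the function names given above, not the normative definitions from the standard document.

```python
from abc import ABC, abstractmethod

class HubAbstractAPI(ABC):
    """A few of the Hub services defined by the Abstract API (simplified)."""

    @abstractmethod
    def register(self):
        """Register a new client; returns the information (e.g. an
        identification token) needed for subsequent communications."""

    @abstractmethod
    def unregister(self, private_key):
        """Remove a client that has no further use for SAMP communications."""

    @abstractmethod
    def notifyAll(self, private_key, message):
        """Forward a message to all appropriately subscribed clients,
        without collecting responses."""


class CallableClientAbstractAPI(ABC):
    """The service a callable client must provide to the Hub (simplified)."""

    @abstractmethod
    def receiveNotification(self, sender_id, message):
        """Consume a message originating from another client."""
```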
If the sender has no interest in the return value, it can use the "send-andforget" (notification) pattern, with lower cost for sender, recipient and hub. Message processing is fundamentally asynchronous from the receiver's point of view, so that message/response times are not limited to the lifetime of an RPC call in the underlying transport mechanism. However, the Hub provides an optional synchronous façade for sending messages when clients expect fast turnaround and wish to avoid the additional complication of asynchronous processing. Profile A particular SAMP Profile is what turns the Abstract API into a set of rules that a client can actually use to communicate with a running Hub, and hence with other clients. It performs two main jobs: first, it describes how the functions defined by the API are turned into concrete communication operations, by specifying an RPC-capable transport mechanism and rules for mapping the SAMP data types into the parameters and responses used by that mechanism. Second, it defines a hub discovery mechanism, which tells clients how to establish initial communications with the Hub, usually involving some authentication step. Particular profiles may also specify additional profile-specific hub or client services exposed as functions alongside those mandated by the Abstract API. Initially (SAMP 1.11, 2009) only a single profile was defined, the Standard Profile. This uses XML-RPC 4 as a transport mechanism, and allows hub discovery by storing the URL of the hub's XML-RPC server along with a secret randomly generated key in a private "lockfile" in the user's home directory. Version 1.3 of the standard (2012) introduced a second, the Web Profile, for use by web-based clients. This is required for applications running within web pages, since the sandboxed environment imposed by the browser makes the Standard Profile inaccessible. It shares use of XML-RPC and some other characteristics with the Standard Profile, but hub discovery has to be done differently, and there are a number of complications to do with security, described in Taylor et al. (2012a) as well as the Standard. This decoupling between the functionality of the service interface and its incarnation in a specific transport mechanism allows different transports to be introduced without changes to the core protocol or existing clients, and has a number of benefits. In a given SAMP session, a client may use the most appropriate Profile for its SAMP communications and exchange messages seamlessly with other clients using different profiles; a desktop application can exchange messages with a web page just as easily as with another desktop client. This works because clients only ever communicate directly with the Hub and not with each other, while the Hub performs lossless translation between profile-specific network operations and the messaging model defined by the Abstract API. Future requirements may result in additional Profile definitions, and there is nothing in principle to prevent hub developers from implementing new ones outside the frame of the SAMP standard. However, from an interoperability point of view it is important that all profiles are supported by all common Hub implementations, so that a client can rely on the availability of a chosen profile in a SAMP environment, and for this reason unnecessary proliferation of profiles is discouraged. MTypes An MType (message type) is the description for a message with particular syntax and semantics. 
It is analogous to a function definition in an API, and consists of a labelling string (sometimes itself also known as the MType) along with a set of zero or more typed and named arguments, a set of zero or more typed and named return values, and some associated semantics indicating what the sender of such a message is trying to convey. By way of example, a commonly used MType is image.load.fits, defined like this:
Name: image.load.fits
Semantics: Loads a 2-d FITS image
Arguments:
  url (string): URL of the image to load
  image-id (string, optional): Identifier for use in subsequent messages
  name (string, optional): Name for labelling loaded image in UI
Return Values: None.
The name is a short hierarchical string composed of atoms separated by the "." character. As well as identifying to a recipient the type of an incoming message, it is used by clients to subscribe to messages, that is to indicate to the Hub which messages they are prepared to receive. For the purpose of subscription a limited wildcarding syntax is available, so by using the MType patterns image.load.fits, image.* or * a client may declare interest in only the above message, or all image-related messages, or all messages, respectively. In general, a callable client will only subscribe to those MTypes on which it can meaningfully act, so for instance an image analysis tool typically would subscribe to image.load.fits, but not to spectrum.load.ssa-generic. A client that has an image FITS file to send can then either query the Hub for those clients subscribed to the image load message and offer its user the choice of which one to target, or request the Hub to broadcast the image load message to all (and only) the image-capable clients. Type System Supporting the function list defined by the Abstract API and the parameters and return values specified by MTypes is a type system defining the types of value permitted, as well as rules for encoding various structured objects using these types: message objects themselves, success and failure message responses, application metadata, and MType subscription lists. This system contains only three types: string, list and map. A string is a sequence of 7-bit ASCII printable characters, a list is an ordered sequence of strings, lists or maps, and a map is an unordered set of associations of string keys with values of type string, list or map. Structured objects are specified by the use of well-known keys in maps, there is no special representation for null values, and non-string scalar types must be serialized as strings. (Obvious) conventions are suggested for serializing integer, floating point and boolean values into string form, but these suggestions are provided for the convenience of MType definitions that wish to exchange such values without reinventing the wheel, and are not a normative part of the protocol. This restricted type system has been deliberately chosen to introduce minimal dependency of messaging behaviour on the details of non-core parts of the delivery system, in particular profile-specific transport mechanisms and language-specific client libraries. This both reduces the restrictions on what languages and transport layers may be used with SAMP, and ensures that values transmitted will not be modified during processing by parts of the messaging system outside of client control. 
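By way of illustration, a complete message and a success response can be written using only these three types; the URL is a placeholder, and the response keys shown (samp.status, samp.result) are included only as a sketch of the response structure and should be checked against the standard before being relied on.

```python
# A SAMP message is a map with well-known keys; every scalar, including
# numbers and booleans, is carried as a string (e.g. "1.5", "true").
message = {
    "samp.mtype": "image.load.fits",
    "samp.params": {
        "url": "http://example.org/data/field.fits",  # placeholder URL
        "name": "example field",
    },
}

# A success response is likewise just a map of strings and nested maps.
response = {
    "samp.status": "samp.ok",
    "samp.result": {},  # this MType defines no return values
}
```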
The type system is rich enough to represent complex structured data where required, but note it is not intended for use with binary data, and transmission of bulk data or large payloads in general is discouraged within SAMP messages in favour of passing URLs around instead, meaning that client and Hub implementations can work on the assumption of short message payloads. This convention of out-of-band bulk data transfer does place an additional burden on sending clients however, since to transmit a bulk data item (such as a table or image) not already available from an existing URL it is necessary to make it so available, for instance by writing bytes to a temporary file or serving them from an embedded HTTP server. It can also present complications if the sending and receiving client are not able to see the same URLs, for instance due to different security contexts; in this case, additional Hub services may be required to assist with data transfer between domains (accordingly, the Web Profile provides services to assist with cross-domain data exchange). Note also that the string type does not natively accommodate Unicode text, including XML. The restriction to 7-bit ASCII is driven by the requirement for use from non-Unicode-capable environments such as Fortran, IDL and some shell scripting languages. This has not caused known problems to date, but inability to handle Unicode text without additional encoding could prove awkward in some cases, and it may be necessary to revisit this restriction in a future revision of the standard. Extensible Vocabularies Extensibility is built into this system via the notion of an extensible vocabulary used when representing structured objects. Structured objects are represented as maps with wellknown keys, but the rule is that additional keys are always permitted, and that hubs and clients must ignore any keys they do not understand, propagating them to downstream consumers where applicable (compare the NDF extension architecture described in Currie et al. (1989)). A corollary is that such non-well-known keys must be defined in such a way that ignoring them will result in reasonable behaviour. The Abstract API tends to prefer maps (unordered name/value pairs) over ordered parameter value lists, which makes this extensibility pervasive throughout the messaging system, applying for instance to client metadata and subscription declarations, message transmission information, and MType-specified message parameter lists and return values. For instance, a client sending a message must pass it to the Hub as a map with two required keys: samp. mtype giving the MType label and samp.params giving the MType-specified parameter list (itself a map). But a client may optionally insert additional non-standard key/value pairs into that map, for instance using a non-standard key priority to associate a particular priority level with the message. If the Hub happens to support this nonstandard feature, it is able to treat the message specially in view of this declaration; in any case it will propagate the message to recipient clients with the additional entry present, so if one of those supports this feature then it may use the value in processing. The same rule applies for instance to the MType-determined message parameter list; an MType like image.load.fits has a required parameter url, but a sending client may add a non-standard parameter like colormap (or ds9.colormap) alongside the well-known ones for the benefit of any client that happens to support it. 
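Sketched below is what such a piggy-backed parameter might look like in practice, again using the Astropy client; ds9.colormap is the non-standard key mentioned above, and the URL and colormap value are placeholders that a receiver is free to ignore.

```python
from astropy.samp import SAMPIntegratedClient

client = SAMPIntegratedClient(name="ImageSender")
client.connect()

message = {
    "samp.mtype": "image.load.fits",
    "samp.params": {
        "url": "file:///tmp/field.fits",  # placeholder
        "name": "example field",
        # Non-standard, application-specific hint: clients that do not
        # understand this key must simply ignore it.
        "ds9.colormap": "heat",
    },
}

# Broadcast to all clients subscribed to image.load.fits ("send-and-forget").
client.notify_all(message)
client.disconnect()
```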
Clients can therefore piggy-back experimental or application-specific instructions on top of generic messages to achieve more detailed control where available, falling back to the baseline functionality if it is not. Using this extensibility pattern, new or enhanced features of particular MTypes or of the protocol itself can be prototyped very easily, requiring no changes to the SAMP standard or infrastructure implementation beyond those components actually using the non-standard features, and imposing no negative impact on existing messaging operations. If they are found to be useful, they may be adopted in the future as (most likely optional) well-known keys alongside the original ones. Some associated namespacing rules apply. Well-known keys defined by the SAMP standard are in the reserved samp namespace, meaning they begin with the string "samp.". When introducing non-standard keys it is not permitted to use this namespace, but any other syntactically legal string is allowed. The special namespace x-samp is available for keys proposed for future incorporation into the standard, and hubs and clients should treat keys which differ only in the substitution of samp for x-samp as identical, to ease standardisation of prototype features. In the case of MType parameters and return values, which are mostly not defined by SAMP itself, there is no reserved namespace. Use in Practice The protocol described above is capable of supporting a wide range of different messaging patterns. For use in a particular scenario, a number of practical considerations must also be worked out. This section discusses how the framework has been applied to date to support the original goal of helping to integrate data analysis tools used by astronomers. Section 5.1 explains how hub provision is managed, section 5.2 describes some common patterns of message semantics, and section 5.3 addresses the social mechanisms by which these are agreed on by the SAMP community. Section 5.4 reviews the existing landscape of SAMP infrastructure software and SAMP-aware tools, and Section 5.5 provides some concrete examples of it in action. Hub Provision SAMP's star topology means that a Hub (in most cases, exactly one Hub) must be running for any messaging to take place. Ideally, an independent Hub process would be started as part of user session setup to ensure its constant availability. While this is quite possible and appropriate in some scenarios, even the minimal configuration required to establish it (a hub startup line in a session startup file) requires the kind of explicit user action which cannot always be relied upon. Simply put, if the functionality doesn't show up in the user interface with zero user effort, most users will never discover it. It is common practice therefore, though by no means a requirement, for some SAMP-aware tools to come with an embedded hub. In this scenario, when a SAMP-aware tool starts up, it first checks for a running hub. If one exists, it registers with it; if not, and if it has the capability, it starts its own embedded Hub, and registers with that. Note that a client running an embedded Hub communicates with it in just the same way as with an independent one, it has no privileged access. Non-hub-capable clients may choose to check for a running Hub and connect on startup, on explicit user request, or when periodic polling indicates that one has become available. 
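A rough sketch of this connect-or-start behaviour is shown below using the Astropy library, which provides both a client and a Hub implementation; the exact exception raised when no Hub can be found (SAMPHubError here) should be checked against the library version in use.

```python
from astropy.samp import SAMPHubServer, SAMPIntegratedClient
from astropy.samp.errors import SAMPHubError

def connect_with_embedded_hub(name):
    """Connect to a running Hub; if none is found, start an embedded one."""
    client = SAMPIntegratedClient(name=name)
    hub = None
    try:
        client.connect()          # succeeds if a Hub is already running
    except SAMPHubError:
        hub = SAMPHubServer()     # no Hub found: start our own ...
        hub.start()
        client.connect()          # ... and register with it as an ordinary client
    return client, hub

client, embedded_hub = connect_with_embedded_hub("ExampleTool")
# ... normal SAMP usage ...
client.disconnect()
if embedded_hub is not None:
    embedded_hub.stop()
```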
The effect is that usually when two or more SAMP-aware tools are running, a Hub will be present and those tools will find themselves connected to it, enabling messaging. Sometimes the application hosting the embedded hub will be shut down during a session taking the Hub down with it, and in that case another application may notice the fact and start one up, at which point some or all previously registered clients may notice the new hub and re-register with it. This somewhat haphazard model of hub provision does not form a robust platform for high-reliability messaging, but, in accordance with SAMP's pragmatic approach, operates well enough most of the time, with a minimum of user effort; usually, it "just works". Note that where explicit control of an independent hub process can be arranged, for example as part of a managed user environment, more robust connectivity will result; an example is the Herschel Interactive Processing Environment (Balm, 2012). MType Semantics A messaging framework only serves any purpose if there exists a vocabulary of messages understood by the applications which are going to exchange them. In SAMP terms that means establishing a collection of more or less wellknown MTypes (section 4.3). Choosing the right semantics for this collection is crucial to the utility and character of the messaging system in practice. The most obvious approach for providing message-based control of an application is to identify (at least some of) the capabilities it offers and define a messaging interface with parameters and return values exposing those capabilities. An image display application might expose a set of MTypes allowing image load into a new window, zoom configuration, colour map choice, WCS display and so on. This allows other applications to control its behaviour in detail and is suitable for tight integration of a known set of tools with a good understanding of each other's capabilities, for instance to execute a pre-orchestrated sequence of processing steps. However, this approach is less effective in less predictable environments. The controlling client needs to understand the capabilities of its partner client in order to control it. But if the set of tools in use at runtime is chosen by a user from an open-ended set rather than mandated by a developer, the identity of the partner client or clients is not known in advance. In general, different applications even of similar types have different capability sets and internal data models, and these cannot readily be encompassed by any single general abstraction. Different image display tools may support different data formats, may or may not support multiple loaded windows or images, may specify zooms in different ways, may offer different selections of colour maps, may provide WCS display with different options or not at all and so on, and the burden on a client wishing to control a range of different recipients quickly becomes large. Even if an application developer is prepared to study the messaging APIs offered by existing available tools and implement logic managing message dispatch for each case, the resulting code will not cope with applications unknown to the developer, for instance ones yet to be written. For uncontrolled environments in which the user selects the range of cooperating tools at runtime therefore, a "loose integration" model has turned out to be more successful. 
This approach focuses on a messaging interface consisting of a fairly small number of MTypes with semantics that are non-client-specific and rather vague. The semantics of the most-used messages generally boils down to "Here is an X", where X may be some resource type such as a table, image, spectrum, sky position, coverage region, bibcode etc, or sometimes a reference to an X from a previous message, for instance a row selection relating to an earlier-sent table. The implication of such an MType is that the receiver should do something appropriate with the X in question: load, display, highlight, or otherwise perform some action which makes sense given the receiving application's capabilities. Callable SAMP clients should therefore advertise themselves (by subscribing to the appropriate MTypes) as X-capable tools only if they are in a position to do something sensible with an X should they be presented with one. Such an advertisement serves as a hint to potential X-senders, though it does not constitute a guarantee of any particular behaviour. This framework typically manifests itself in a client user interface as an option, for an X currently known by that client, either to broadcast it to all X-capable clients, or to target it to an entry selected by the user from a dynamically-discovered list of X-capable clients (see Figure 3 for an example). For this kind of usage, the presence of a human in the loop to direct messages between clients as required by a particular workflow is an important part of the system, but the decisions required from the user are generally simple ones, e.g. which of a small list of clients to contact. For clients to interoperate as reliably as possible in this scenario, it is not sufficient just to agree on the notion of a table or an image for exchange, it is also necessary to specify the exact data exchange format. In the case of tabular data, a variety of possible exchange formats is in common use: FITS binary and ASCII tables, VOTable, Comma-Separated Values and a host of others including many ASCII-based variants. Different choices are convenient in different usage contexts, suggesting the need for a variety of distinct format-specific MTypes. However, a proliferation of alternative exchange formats, though superficially convenient, erodes interoperability. If multiple exchange formats capaple of serializing the same thing are available, the sender has to choose which to send, and the receiver may or may not be able to receive it. Well-behaved recipients should include conversion code for as many formats as possible, and well-behaved senders should send data in a format dependent on what is supported by the intended recipient. For applications willing to expend a lot of effort on interoperability the work required at both ends increases rapidly with the number of available formats, while less conscientious implementations may find themselves unable to exchange data of essentially compatible types, or the community of SAMP clients may fragment into format-specific sub-communities unable to communicate globally. As much as possible therefore, it is desirable to restrict the options to a single well-defined exchange format for each basic data type. This can be a difficult balance to get right. In the case of images, astronomy is fortunate that FITS serves as the lingua franca, and the only commonly-used MType is image.load.fits. 
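The broadcast-or-target choice described above needs only a handful of calls; in the sketch below (Astropy client again, with a placeholder file URL) the Hub is asked which clients are subscribed to the image MType before one of them is picked as the recipient.

```python
from astropy.samp import SAMPIntegratedClient

client = SAMPIntegratedClient(name="ImageBroadcaster")
client.connect()

message = {
    "samp.mtype": "image.load.fits",
    "samp.params": {"url": "file:///tmp/field.fits"},  # placeholder
}

# Ask the Hub which registered clients are subscribed to this MType.
subscribers = client.get_subscribed_clients("image.load.fits")
if subscribers:
    # A real tool would offer this list to the user; here we just pick one.
    recipient_id = sorted(subscribers)[0]
    client.notify(recipient_id, message)
    # The alternative, client.notify_all(message), is delivered by the Hub
    # to all (and only) the clients subscribed to image.load.fits.

client.disconnect()
```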
For tabular data, clients are strongly encouraged to use table.load.votable even if it means translating to/from some other format; however other table. load.* variants are in use for specialist purposes, for instance for the CDF format 5 , which though tabular is not readily translated to VOTable without loss of information, and which tends to be used in communities not familiar with VOTable. In the case of spectra, for various reasons related to the form in which spectral data is typically obtained and the typical capabilities of spectrumcapable clients, the relevant MType is spectrum.load. ssa-generic, which permits any format to be used for the spectral data, with additional parameters specifying the format actually in use. SAMP is capable of supporting both tight and loose integration, and both are in use, but for coupling interactive data analysis tools the loose integration model has proven the most productive, and able to support ways of working that have not been possible using other available messaging systems. MType Definition Process A suitable collection of MTypes must not only exist but be known by potential message senders -which means the authors of the relevant software -in order for useful messaging to take place. In the case of application-specific MTypes, the documentation of available MTypes and their definitions is clearly best handled as part of the documentation of the application itself. These typically provide functionality that only makes sense for a given tool, and make use of a suitably specific namespace, for instance script.aladin. send, which allows external applications to control Aladin by sending commands in its scripting language. Developers are also free to define their own MTypes for use privately or in some closed group with locally agreed conventions for documentation, perhaps to support some tight-coupling-like usage. However, for well-known MTypes intended for unrestricted use, for instance of the loose-coupling variety described above, some public process is required to establish and publicise their definitions, so that client developers can both become aware of the conventions currently in use by other tools, and contribute their requirements for new or modified functionality. One possibility is to decide on a fixed list to form part of the SAMP standard. A small number of "administrative" MTypes, concerned with the messaging infrastructure, for instance samp.hub.event.register which informs existing clients when a new client has registered, have been written in to the standard in this way. All of these are in the reserved samp. namespace. However, for astronomyspecific MTypes this option was rejected, partly in order to avoid the introduction of astronomy-specific details into a standard which is otherwise not tied to a particular domain, and partly because the rather heavyweight IVOA process for standard review (Hanisch et al., 2010), in which draft to acceptance rarely takes less than 12 months, would impede introduction and updating of MTypes as required by implementation experience and new application demands. Another option is periodic publication of MType definitions in an IVOA Note. Such Notes may be issued at will without formal review, but no straightforward updating mechanism is in use, and this option was still felt to be undesirably cumbersome. Instead, a wiki page 6 was set up on the IVOA web site listing currently agreed MTypes. 
An informal understanding was adopted in which application developers are encouraged to discuss requirements for new MTypes or modifications to existing ones either privately or on the associated mailing list 7, and, if consensus is reached, to edit the wiki page accordingly. This was intended as a provisional measure to be reviewed and modified as required, but, six years later, the need for a more formal process has not been apparent, and there are no current plans to modify this arrangement. At the time of writing, a dozen MTypes are listed on the wiki, concerned with exchanging tables, row selections, FITS images, spectra, sky coordinates, VO Resource identifiers (Demleitner et al., 2014), MOC sky coverage maps (Boch et al., 2014), bibcodes and one or two other items. The list has been fairly stable, though new entries and new optional parameters are sometimes added as required.

Existing Software

Since its first version in 2008, a wide range of SAMP-enabled infrastructure and application software has become available. Of infrastructure software that is actively maintained at the time of writing, interchangeable Hub implementations exist in Java and Python, and client toolkits in Java, Python, C and Javascript. Validation, debugging and development support tools are also available. Historical, partial or experimental SAMP functionality has also appeared in other languages including Perl, C Sharp and IDL. Applications using SAMP number in the dozens, and include GUI analysis tools for images, catalogues, spectra, SEDs, time series and interferometry data, observation tools, outreach applications, command-line and graphical data access and manipulation suites, interactive processing environments, web archives either exposing simple results pages or offering sophisticated browser functionality, throwaway user scripts, and more. SAMP infrastructure libraries are surveyed in Fitzpatrick et al. (2013), though a notable more recent development is the inclusion of the Python hub and client implementation in the Astropy package (Astropy Collaboration et al., 2013) since its version 0.4. A partial list of some other tools with SAMP functionality may also be found near http://www.ivoa.net/samp. While initially developed and used mostly within stellar and galactic astronomy, use is now becoming common in related fields such as planetary science (Erard et al., 2014) and space physics. It is probably now the case that most astronomical applications that can benefit from interaction with other such tools include at least a basic SAMP capability. It is harder to ascertain to what extent this functionality is used in practice, but the enthusiasm of application developers to incorporate SAMP is presumably indicative of its utility.

Examples of Use

Current SAMP usage is most prevalent along the lines of the scenario outlined in the Introduction, allowing desktop and web-based clients to cooperate as a loosely integrated suite by employing the small number of data-exchange MTypes in common use. Figure 2 shows some examples of the basic case where one client transmits a data item (spectrum, table or image) to another. Figure 3 illustrates a more sophisticated interaction in which two applications exchange data and control in both directions to provide linked views exploiting the capabilities of each.
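The receiving half of such a linked-views interaction can be sketched with the Astropy client as below. Subscribing to table.highlight.row is both how the tool gets called back and how it advertises itself as highlight-capable; the client name and the handling logic are illustrative only.

```python
# A minimal sketch of the receiving side of a linked-views interaction,
# using astropy.samp; a SAMP hub must already be running.
import time
from astropy.samp import SAMPIntegratedClient

client = SAMPIntegratedClient(name="row-watcher")  # illustrative name
client.connect()

def on_highlight(private_key, sender_id, mtype, params, extra):
    # Parameter names follow the table.highlight.row definition on the
    # IVOA MTypes wiki: a table identifier plus a row index.
    print("highlight row", params.get("row"), "of", params.get("table-id"))

# Subscribing is what advertises this client as highlight-capable.
client.bind_receive_notification("table.highlight.row", on_highlight)

try:
    while True:
        time.sleep(1)   # keep the process alive so callbacks can arrive
except KeyboardInterrupt:
    client.disconnect()
```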
SAMP has also been employed in other ways, however, for instance to provide a private layer for RPC functionality required internally by Iris (Laurino et al., 2014) and to experiment with visualisation using on-demand data generation in Astro-WISE (Buddelmeijer and Valentijn, 2013).

(Figure 3 caption, beginning truncated: "...Tau. A full list of sources in the region has been loaded into Aladin (right), then transferred to TOPCAT (left), using SAMP MType table.load.votable. The user plots a colour-colour diagram in TOPCAT and selects the reddest objects graphically, causing them to be displayed as red circles, then passes the selection back to Aladin (MType table.select.rowList), where they are shown as green squares; the inset menu shows TOPCAT's user interface for this step. Clicking on one of the points in either application can then highlight the corresponding point in the other (MType table.highlight.row). This example illustrates how SAMP enables seamless exploration of data using a combination of parameter space and physical space.")

Conclusions

SAMP provides a flexible and easy-to-use messaging framework, deployed in much current astronomical software, which supports various models of inter-tool communication. The most productive of these models to date has been loose coupling between a user-selected set of independently developed interactive data acquisition and analysis applications, to deliver functionality approaching that of an integrated suite. This model is built on SAMP's combination of publish-subscribe messaging, vague message semantics, and ease of adoption by both developers and users, leading to widespread uptake.

SAMP's flexibility means that it is capable of supporting other communication models, some in more marginal use now and some which may be explored further in the future. Introducing new Profiles, different MType libraries or alternative hub provision arrangements could render the same infrastructure suitable for contexts with different requirements for reliability, security or scalability. Another possible scenario is inter-host messaging to support collaborative work; this option has been under consideration throughout SAMP's history and is possible using existing Profiles, though in current configurations it is somewhat cumbersome to set up and has so far not received much user attention. Despite its development history, there is nothing in the protocol specific to either the Virtual Observatory or astronomy, so use in other problem domains is quite feasible, though the authors are not aware of effort currently being deployed in this direction.

SAMP's design has been informed by the requirements and experimentation of the SAMP developer community, largely within the context of the Virtual Observatory, including positive and negative lessons learned from its predecessor PLASTIC. Some aspects of this design that have proved particularly successful include the decoupling of architectural concerns into API, transport mechanism and semantics, the lightweight, bottom-up process for agreement of semantics, and the built-in extensibility provided by pervasive use of extensible vocabularies. Together, these fall under the heading of standardising only those things which need to be defined at a given stage, and leaving the option of filling in the details until a time and in a context when the requirements will be clearer.
The need for the Web Profile was not foreseen when the first version of the standard was published, but the transport/API decoupling meant it could be retrofitted with no disruption to existing client code. The fact that MType semantics are excluded from the standard itself means that these can be defined and iteratively adjusted with experience of a working transport infrastructure, rather than being specified up front by committee decision as part of the protocol design, only to be found ill-adapted to tool deployment in practice. Other factors important to its success have been the small number of MTypes actually in common use, enabled by the convention of vague message semantics and standardisation on data formats, and the unobtrusive embedding of SAMP into existing applications, meaning that the functionality is available without requiring any special setup or understanding from the user. The IVOA and other cross-institutional forums associated with the Virtual Observatory movement have also been of considerable importance in enabling and encouraging the necessary communication between application developers, though much software from outside the VO community is now also involved.

SAMP is not a magic bullet. In typical current use the level of integration it offers between independently developed tools falls short of what would be available from a monolithic application, its pragmatic approach to communications can lead to patchy reliability, and its security model would not be appropriate for use with commercially sensitive data. However, its ease of use and widespread uptake have delivered in practice an improved environment for desktop data analysis, allowing working astronomers to get more done.
The Reduction Behavior of Ocean Manganese Nodules by Pyrolysis Technology Using Sawdust as the Reductant

Ocean manganese nodules, which contain abundant Cu, Co, Ni and Mn resources, were reduced using biomass (sawdust) pyrolysis technology. Valuable metals were further extracted by acid leaching after the reduction process with high efficiency. The effects of sawdust dosage, reduction temperature, and time were investigated to obtain optimal operating parameters. The extraction rates of Mn, Cu, Co, and Ni reached as high as 96.1%, 91.7%, 92.5%, and 94.4%, respectively. Results from TGA show that the main pyrolysis process of sawdust occurs in the temperature range of 250-375 °C with a mass loss of 59%, releasing a large amount of volatile substances to reduce the ocean manganese nodules. The pyrolysis activation energy of sawdust was calculated to be 52.68 kJ·mol−1 by a non-isothermal kinetic model. Additionally, the main reduction reaction behind the main sawdust pyrolysis process was identified by comparison of the assumed and actual TG curves. The thermodynamic analysis showed that the high valence manganese minerals were gradually reduced to Mn2O3, Mn3O4, and MnO by CO generated from sawdust pyrolysis. The shrinking core model showed that the reduction process is controlled by the surface chemical reaction, with an activation energy of 45.5 kJ·mol−1. The surfaces of the reduced ore and the acid-leached residue exhibited a structure composed of relatively finer pores and rougher morphology than the raw ore.

Introduction

Due to rapid economic growth and technological advances, the easily explored and high-grade non-ferrous resources on Earth's surface are being exhausted. For example, the average grade of copper contained in the rocks has decreased from 4% (in the year 1990) to 0.5% [1]. With the remaining low-grade and refractory ore, it is costlier and more technically difficult to extract those valuable metals [2]. More attention has been paid to the exploration and investigation of ocean mineral resources in recent years. Ocean manganese nodules, also called ocean polymetallic nodules, are rock concretions formed of concentric layers of Fe and Mn hydroxides/oxides at depths of approximately 4000-6000 m [3,4], containing large amounts of valuable non-ferrous metal resources, such as Cu, Co, Ni, Mn, and so on [5,6]. Their formation is the result of millions of years of mineral precipitation surrounding objects such as fish teeth, bones, etc. [7]. In addition, ocean manganese nodules are exploited two-dimensionally, reducing the costs in the mining process compared with three-dimensional mining.

Materials and Reagents

The ocean manganese nodules were collected from the China Pioneer Area of the Eastern Pacific Ocean (Clarion Clipperton Zone). The ocean manganese nodules were dried at 105 °C for 8 h to remove moisture and then crushed to less than 2 mm for the experiments. The sawdust was obtained from Hubei, China and ground to a fine powder with a particle size of less than 1 mm. The chemical elements and industrial analysis results are given in Table 1. All reagents used in the experiments were of analytical grade, purchased from Sinopharm Chemical Reagent Co., Ltd., Shanghai, China. The chemical composition and X-ray diffraction (XRD) analysis of the ocean manganese nodules used in this work are presented in Table 2 and Figure 1. The nodules were mainly composed of MnO2 (36.5%), Fe2O3 (17.89%), and SiO2 (21.11%).
The contents of CuO, Co3O4, and NiO were 1.32%, 0.27%, and 1.76%, respectively. The XRD analysis of the ocean manganese nodules was performed on a Bruker D8 ADVANCE diffractometer (Billerica, MA, USA) with Cu Kα radiation (λ = 1.5418 Å) generated at 40 kV and 40 mA. The XRD result shows that the main manganese minerals were vernadite and todorokite, and the main gangue minerals were zussmanite, paravauxite, and quartz. The X-ray diffraction peaks of the oceanic manganese nodules are numerous and disorderly, indicating that the degree of crystallization was poor. Peaks of the valuable metals Cu, Co, and Ni cannot be observed, revealing that those valuable metals did not exist as isolated minerals but were present as ions in the lattice of the manganese minerals.

Experimental Apparatus and Procedure

Biomass (sawdust) pyrolysis and the reduction of the ocean manganese nodules were carried out in a self-designed quartz tube furnace. The schematic diagram of the pyrolysis reduction device is shown in Figure 2. Biomass and ocean manganese nodules were mixed at a specific ratio in a porcelain boat. The mixed samples were then placed in the middle of the quartz tube. Before the pyrolysis reduction process, the air in the tube was drained by purging with pure nitrogen gas. The ore samples were reduced by biomass pyrolysis technology under nitrogen flow at specific temperatures ranging from 300 to 500 °C. After the reduction process, the samples were cooled to room temperature under the protection of nitrogen. The experiments were repeated in triplicate.
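As a side note on the assays above, oxide weight percentages convert to elemental contents through molar-mass ratios. The short worked example below (not part of the paper's procedure) does this for the Table 2 values.

```python
# Converting Table 2 oxide assays (wt%) to elemental metal contents.
M = {"Mn": 54.94, "O": 16.00, "Fe": 55.85, "Cu": 63.55, "Ni": 58.69}

# oxide -> (oxide wt%, metal molar mass, oxide molar mass, metal atoms per formula)
oxides = {
    "MnO2":  (36.50, M["Mn"], M["Mn"] + 2 * M["O"], 1),
    "Fe2O3": (17.89, M["Fe"], 2 * M["Fe"] + 3 * M["O"], 2),
    "CuO":   (1.32,  M["Cu"], M["Cu"] + M["O"], 1),
    "NiO":   (1.76,  M["Ni"], M["Ni"] + M["O"], 1),
}

for name, (wt, m_metal, m_oxide, n) in oxides.items():
    metal_wt = wt * n * m_metal / m_oxide
    print(f"{name}: {wt:.2f} wt% oxide -> {metal_wt:.2f} wt% metal")
    # e.g. MnO2: 36.50 wt% oxide -> about 23.1 wt% Mn
```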
Extraction Efficiencies of Valuable Metals

To extract the Cu, Co, and Ni in the reduced residue, an acid leaching process was conducted in a 100 mL conical flask under the conditions of sulfuric acid concentration of 1.0 mol/L, liquid-to-solid ratio of 10:1, temperature of 60 °C, and leaching time of 60 min. The reduction-leaching residue was filtered and dried after acid leaching. The extraction efficiencies were calculated by analyzing the contents of Cu, Co, Ni, and Mn in the reduction-leaching residue using UV (UV-1750, Shimadzu, Kyoto, Japan) spectrophotometric methods involving sodium diethyldithiocarbamate trihydrate, 1-nitroso-2-naphthol, dimethylglyoxime, and potassium periodate, respectively [20].

Pyrolysis and Reduction Process

The sawdust pyrolysis properties and the reduction process were characterized by thermogravimetric analysis (TGA, TGA-50, Shimadzu, Kyoto, Japan). The ocean manganese nodules and sawdust were uniformly mixed in certain proportions. Samples of 10 to 15 mg were used for TGA analysis. Before initiating the heating program, the system was purged with nitrogen for 10 min to ensure that an oxygen-free environment was established. The mass loss was continuously recorded in situ during a linear temperature increase from 303 to 1073 K at a heating rate of 10 K/min. The surface morphology and composition analyses of the ocean manganese nodules before and after pyrolysis reduction, and of the acid leaching residue, were conducted by scanning electron microscopy (SEM) with energy dispersive spectrometry (EDS) (SEM-EDS, JSM-7001F, Shimadzu, Kyoto, Japan). The main operating parameters were as follows: accelerating voltage of 15 kV; applied current of 5 nA.

The Effect of Sawdust Dosage on Valuable Metal Extraction

The effects of sawdust dosage on the extraction rates of valuable metals were examined under the conditions of reduction temperature of 500 °C, reduction time of 30 min, and nitrogen flow rate of 500 mL/min. As shown in Figure 3A, the sawdust dosage (wt.%) ranging from 4.0% to 14.0% had a significant influence on the extraction rates of the valuable metals. The extraction rates of Mn, Cu, Co, and Ni increased with increasing sawdust dosage from 4.0% to 10.0% and then remained at relatively high levels, reaching 96.6%, 92.3%, 93.0%, and 94.7%, respectively. The ocean manganese nodules can be reduced by the reductive volatile substances generated in the process of pyrolysis. The concentration of reductive volatile substances increases with rising sawdust dosage, resulting in a more sufficient reaction with the oceanic manganese nodules. Furthermore, as shown in Figure 3B, linear relationships between the Mn extraction rate and the Cu, Co, and Ni extraction rates can be observed. The fitting coefficients (R²) and the slopes of the fitted curves are all close to 1, revealing that Cu, Co, and Ni closely coexist with Mn. Therefore, the extraction of Cu, Co, and Ni depends on the reduction degree of manganese.

The Effect of Temperature on the Reduction Degree of MnO2

The effect of reaction temperature on the reduction of Mn was investigated at 300, 350, 400, 450, and 500 °C under the conditions of sawdust dosage of 10.0%, nitrogen flow rate of 500 mL/min, and reaction time ranging from 0 to 30 min. According to the results shown in Figure 4, the reaction temperature is a significant factor in the pyrolysis reduction process. The reduction degree of MnO2 increases with reaction time, and a higher temperature is conducive to shortening the time required and increasing the reduction degree of Mn. The reduction degrees of Mn reached 71.8% (30 min), 87.6% (30 min), 92.7% (20 min), 94.0% (8 min), and 96.1% (6 min) at 300, 350, 400, 450, and 500 °C, respectively. Due to the exponential dependence of the rate constant on temperature in the Arrhenius equation, the chemical reaction is accelerated and the reaction rate improves with increasing temperature. The extraction rates of Mn, Cu, Co, and Ni reached 96.1%, 91.7%, 92.5%, and 94.4% after 6 min of reduction at 500 °C, respectively.
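One practical reading of the Figure 4 curves is the time needed to reach a target reduction degree at each temperature. The sketch below interpolates that time from curve samples; the intermediate points are hypothetical fillers around the reported end-point values, not the measured data.

```python
# Estimating time-to-90% Mn reduction by linear interpolation of
# Figure 4-style curves (intermediate points are illustrative).
import numpy as np

curves = {  # temperature (degC) -> (time, min; reduction degree, fraction)
    400: (np.array([0, 5, 10, 20, 30]), np.array([0.0, 0.55, 0.80, 0.927, 0.93])),
    450: (np.array([0, 2, 4, 8, 30]),   np.array([0.0, 0.60, 0.85, 0.940, 0.95])),
    500: (np.array([0, 2, 4, 6, 30]),   np.array([0.0, 0.70, 0.90, 0.961, 0.97])),
}

for temp, (t, x) in sorted(curves.items()):
    if x.max() >= 0.90:
        # np.interp requires x to be monotonically increasing, which holds here
        t90 = np.interp(0.90, x, t)
        print(f"{temp} degC: ~{t90:.1f} min to reach 90% reduction")
```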
Thermogravimetric Analysis of Sawdust

Biomass can be thermally decomposed to low molecular weight gases, which are considered the direct reductive agents that reduce the high valence manganese minerals; thus, it is important to study the pyrolysis process of biomass. The pyrolysis properties of the sawdust were investigated by thermogravimetric analysis. According to the TG and DTG curves presented in Figure 5, the pyrolysis process of sawdust can be divided into four stages: moisture evaporation (25-105 °C), bound water removal (105-250 °C), the main pyrolysis process (250-375 °C), and afterward, slight pyrolysis (375-800 °C).

During the moisture evaporation stage, the surface water is heated and purged by nitrogen, with a slight weight loss of 5%. In the bound water removal process, the amorphous part of the high polymer inside the sawdust begins to depolymerize, forming carbonyl, carboxyl, and peroxide hydroxyl groups, and a small amount of hydrogen is produced at the same time [27]. The DTG and TG curves both drop slowly during this process. The volatile reductive substances are generated by the decomposition of hemicellulose and cellulose, and the partial decomposition of lignin, in the main pyrolysis stage. The TG curve decreases sharply, with a mass loss of 59%, and a large peak can be observed at 355 °C, indicating that the pyrolysis reaction of sawdust occurs heavily in this temperature range. At temperatures greater than 375 °C, the weight loss decreases slowly, which is attributed to the slow degradation of the remaining lignin [28]. The DTG curve peak at 670 °C is ascribed to the further pyrolysis of lignin to form porous biomass char with a graphite structure.

Pyrolysis Kinetic Model of Sawdust

The main products generated in the process of pyrolysis were sawdust char, tar, and pyrolysis gas [29]. With regard to kinetic analysis, the pyrolysis relative mass loss rate can be calculated from the TG data based on the following equation [28]:

α = (m0 − mt)/(m0 − m∞) (1)

where α is the relative mass loss rate at time t, and m0, mt, and m∞ refer to the initial, actual, and final masses of the sawdust, respectively. It is assumed that the volatile substances produced by pyrolysis do not undergo a secondary reaction. An independent parallel first-order reaction kinetic model was used for the kinetic analysis of the sawdust pyrolysis process. The kinetic equation can be presented as follows:

dα/dt = A Exp(−E/(RT)) (1 − α) (2)

where A is the pre-exponential or frequency factor (min−1), E is the activation energy (J·mol−1), R is the gas constant (8.314 J/(mol·K)), and T is the temperature (K). For the kinetic processes of a heterogeneous system under non-isothermal conditions, a simplified expression for a first-order reaction developed by Coats and Redfern is often adopted [30]:

ln[−ln(1 − α)/T²] = ln[(AR/(βE))(1 − 2RT/E)] − E/(RT) (3)

where β is the heating rate, 10 K/min. Since RT is far smaller than the activation energy E in most regular reactions, the term (1 − 2RT/E) is close to 1, so the equation can be simplified as follows:

ln[−ln(1 − α)/T²] = ln(AR/(βE)) − E/(RT) (4)

Fitting ln[−ln(1 − α)/T²] against 1/T over the main pyrolysis stage gives the activation energy of 52.68 kJ·mol−1 reported above.
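The linearized form of Equation (4) makes the activation energy available from a straight-line fit of ln[−ln(1 − α)/T²] against 1/T. A hedged sketch of that fit is below; the temperature and conversion arrays are illustrative placeholders, not the measured TG data.

```python
# Coats-Redfern fit for the first-order pyrolysis activation energy.
import numpy as np

R = 8.314    # gas constant, J/(mol K)
beta = 10.0  # heating rate, K/min

temps_k = np.array([533.0, 553.0, 573.0, 593.0, 613.0])  # illustrative
alpha   = np.array([0.10, 0.22, 0.40, 0.62, 0.80])       # illustrative

# Linearized form: ln[-ln(1-alpha)/T^2] = ln(A*R/(beta*E)) - E/(R*T)
y = np.log(-np.log(1.0 - alpha) / temps_k**2)
x = 1.0 / temps_k

slope, intercept = np.polyfit(x, y, 1)
E = -slope * R                         # activation energy, J/mol
A = beta * E / R * np.exp(intercept)   # pre-exponential factor, 1/min

print(f"E = {E / 1000:.1f} kJ/mol, A = {A:.3g} min^-1")
```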
The Analysis of the Reduction Process Using Sawdust Pyrolysis

The reduction process of the ocean manganese nodules was monitored by thermogravimetric analysis. The TG and DTG curves of the reduction process were gathered at a mass ratio of ocean manganese nodules:sawdust = 5:1. As shown in Figure 7, the TG and DTG curves of the reduction process are roughly similar to those of sawdust pyrolysis (Figure 5). The slow decline of the TG curve is due to the moisture evaporation and bound water removal processes of the sawdust; no chemical reactions between sawdust and ocean manganese nodules occur when the temperature is below 250 °C. According to the results of Figures 4 and 5, the TG curve in Figure 7 decreases significantly over the temperature range of 250-390 °C, which is ascribed to the combined effects of sawdust pyrolysis and reduction of the ocean manganese nodules. The DTG peak of the reduction process (346 °C) appears 9 °C lower than that of the sawdust pyrolysis process (355 °C), indicating that the oxygen deprivation of the ocean manganese nodules promotes the mass loss of the system. The TG curve continues decreasing steadily as the temperature increases from 390 to 800 °C; the ocean manganese nodules can be further reduced by the reductive volatile gas generated by sawdust pyrolysis. Thus, the main reduction process of the ocean manganese nodules may occur over the temperature range of 390-800 °C.

To further investigate the pyrolysis and reduction process, a comparison of the assumed TG curve without reduction and the actual TG curve was conducted. The assumed TG curve without reduction was obtained on the assumption that the sawdust pyrolysis was an independent process without any physical or chemical interactions with the other process. The assumed TG curve can be calculated using the following equation:

TG_assumed = (1/6) TG_sawdust + (5/6) × 100 (5)

As shown in Figure 8, the assumed TG curve does not match the actual TG curve in the non-reaction temperature range (25-250 °C). In the actual measurement, sawdust and ocean manganese nodules are mixed and heated more uniformly than in sawdust pyrolysis alone. The surface water and bound water of the sawdust escape more easily, leading to a lower TG curve than the assumed one. A large amount of volatile tar is generated by sawdust pyrolysis in the temperature range of 250-390 °C, and the adsorption of tar on the surfaces of the ocean manganese nodules results in less mass loss than in the assumed TG curve; thus, the actual TG curve is higher than the assumed TG curve. With the increase of temperature, the reduction process proceeds with oxygen deprivation of the ocean manganese nodules, and the actual TG curve keeps decreasing over the temperature range of 390-800 °C. Therefore, the actual measured TG value is lower than the assumed TG value in the reduction process.
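Equation (5) and the Figure 8 comparison amount to a simple array calculation. The sketch below shows it under the stated 5:1 mixing ratio, with hypothetical TG values standing in for the measured curves.

```python
# Building the no-interaction TG curve for a 5:1 nodule:sawdust mixture
# from the sawdust-only TG curve (Equation (5)), for comparison with the
# measured mixture curve. Values are illustrative percent-mass arrays
# on a common temperature grid.
import numpy as np

tg_sawdust  = np.array([95.0, 80.0, 41.0, 36.0, 30.0])  # illustrative %
tg_measured = np.array([98.5, 96.0, 90.5, 88.0, 85.0])  # illustrative %

# TG_assumed = 1/6 * TG_sawdust + 5/6 * 100 (nodules assumed inert)
tg_assumed = tg_sawdust / 6.0 + 5.0 / 6.0 * 100.0

# Positive values indicate extra mass loss in the mixture, i.e. oxygen
# deprivation of the nodules and hence ongoing reduction.
reduction_loss = tg_assumed - tg_measured
print(reduction_loss)
```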
Thermodynamic Analysis

The composition of the volatile gas listed in Table 3 was measured by gas chromatography. The gases generated by the pyrolysis of sawdust are mainly composed of CO2, CO, H2, CH4, C2H4, and C2H6.
The reductive gases are dominated by CO, with a content of 28.82%, which is generated by the fracturing of carbonyl and ether bonds in the sawdust structure [31]. The main oxides of Mn and Fe can be reduced in the presence of CO at a certain temperature, and the possible reaction equations (Equations (6)-(20)) describe these steps. The standard Gibbs free energy changes of Equations (6)-(20) were calculated using HSC 6.0 software, where ΔG°T is the standard Gibbs free energy change and Tr is the reduction temperature range. The relationships between temperature and standard Gibbs free energy change are plotted in Figure 9. The negative values of Equations (6)-(12) and (16) over the whole temperature range indicate that those reactions are spontaneous, and that Mn2O3, Mn3O4, and MnO are the main reduction products with sawdust pyrolysis technology. The standard Gibbs free energy changes of Equations (13)-(15) are positive, revealing that the high valence manganese oxides cannot be directly reduced to metallic manganese by CO. According to the Tr values of Equations (16)-(19), Fe2O3 can be easily reduced to Fe3O4, which cannot be further reduced to FeO at a temperature of 500 °C. Although the standard Gibbs free energy changes of Equations (18) and (19) are negative at 500 °C, the reaction constants are too small to drive the reactions, indicating that the oxides of Fe cannot be reduced to metallic Fe [32,33]. The gasification of carbon occurs at temperatures higher than 702 °C (Equation (20)), demonstrating that the ocean manganese nodules are mainly reduced by the CO generated by the sawdust's pyrolysis process.
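The HSC screening reduces to evaluating ΔG°(T) = ΔH° − TΔS° for each candidate reaction and checking its sign over the working temperature range. The sketch below shows only that logic; the ΔH°/ΔS° values are illustrative placeholders, not the paper's thermodynamic data.

```python
# Sign-checking dG(T) = dH - T*dS over a temperature grid; a reaction is
# spontaneous where dG < 0. The dH/dS inputs are placeholders.
import numpy as np

def gibbs(dh_j, ds_j_per_k, temps_k):
    """Standard Gibbs free energy change along a temperature grid."""
    return dh_j - temps_k * ds_j_per_k

temps = np.linspace(298.0, 1100.0, 200)  # K
dg = gibbs(dh_j=-150_000.0, ds_j_per_k=-20.0, temps_k=temps)  # placeholders

spontaneous = temps[dg < 0]
if spontaneous.size:
    # Reports only the scanned range; a real screen would also check
    # whether the equilibrium constant is large enough to drive reaction.
    print(f"dG < 0 up to at least {spontaneous.max():.0f} K")
```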
Reduction Kinetic Analysis

The reduction products (Mn2O3, Mn3O4, and MnO) were gradually formed at the outer particle layers, moving inward toward the core, so the reduction process can be described by the shrinking core model. If the process is controlled by the resistance of a chemical surface reaction, Equation (21) can be used to represent the process [34]; if the process is controlled by inner diffusion, Equation (22) can be used to describe the kinetics:

1 − (1 − x)^(1/3) = ks·t (21)

where x is the reduction degree of the ocean manganese nodules, ks and ki are the reaction rate constants associated with temperature (Equation (22) uses ki in the corresponding diffusion-controlled form [35]), and t is the reaction time.

In order to determine the kinetic parameters and the rate-controlling step of the reduction process, the experimental data presented in Figure 4 were analyzed with the shrinking core model using Equations (21) and (22), respectively. The linear fits for surface chemical reaction control and inner diffusion control are shown in Figure 10A,B, and the kinetic parameters of the shrinking core model are listed in Table 4. The reaction rate constants increase with temperature, and the linear fitting correlation coefficient for surface chemical reaction control is better than that for the inner diffusion control model. The activation energy of the reduction reaction can be calculated using the Arrhenius equation:

ln k = −Ea/(RT) + ln A2 (23)

where Ea denotes the activation energy (J·mol−1), R is the gas constant (8.314 J/(mol·K)), T is the temperature (K), and A2 is a pre-exponential factor. Arrhenius linear fittings of the reduction reaction controlled by the surface chemical reaction and by inner diffusion are shown in Figure 11. According to Equation (23), the value of the activation energy can be obtained from the slope of the Arrhenius linear fitting curve. The Arrhenius linear fitting equation for surface chemical reaction control can be expressed as ln k = −5.47 × 1000/T + 5.23, with a calculated activation energy of 45.5 kJ·mol−1. The Arrhenius linear fitting equation for inner diffusion control can be written as ln k = −6.04 × 1000/T + 6.09, with a calculated activation energy of 50.2 kJ·mol−1, which is beyond the normal range for inner diffusion. Therefore, the reduction process of the ocean manganese nodules is mainly controlled by the surface chemical reaction, and the kinetic model can be expressed as t × Exp[−5470/T + 5.23] = 1 − (1 − x)^(1/3).
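A hedged sketch of this two-stage analysis, fitting Equation (21) at each temperature and then Equation (23) across temperatures, is given below; the reduction-degree series are hypothetical stand-ins for the Figure 4 data.

```python
# Shrinking-core analysis: fit 1-(1-x)^(1/3) = ks*t per temperature,
# then an Arrhenius fit of ln(ks) vs 1/T for the activation energy.
import numpy as np

R = 8.314  # J/(mol K)

# Hypothetical reduction-degree series, one per temperature (K)
data = {
    673.0: (np.array([2.0, 4.0, 8.0, 16.0]),      # t, min
            np.array([0.35, 0.55, 0.78, 0.92])),  # x
    723.0: (np.array([1.0, 2.0, 4.0, 6.0]),
            np.array([0.40, 0.62, 0.85, 0.94])),
    773.0: (np.array([1.0, 2.0, 4.0, 6.0]),
            np.array([0.55, 0.78, 0.93, 0.96])),
}

ks = {}
for T, (t, x) in data.items():
    g = 1.0 - (1.0 - x) ** (1.0 / 3.0)  # surface-reaction model, Eq. (21)
    # Zero-intercept least-squares slope: ks = sum(g*t) / sum(t*t)
    ks[T] = float(np.dot(g, t) / np.dot(t, t))

T = np.array(sorted(ks))
lnk = np.log([ks[v] for v in T])
slope, intercept = np.polyfit(1.0 / T, lnk, 1)  # Arrhenius fit, Eq. (23)
print(f"Ea = {-slope * R / 1000:.1f} kJ/mol")
```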
Microstructure Characterization

Scanning electron microscopy with energy dispersive spectrometry (SEM-EDS) of the ocean manganese nodules, the reduced ocean manganese nodules, and the acid leaching residue was carried out to investigate the microstructural changes during the extraction of the valuable metals. The surface of the raw ore (Figure 12A) was observed to contain a small number of fine particles, which was due to the accumulation of large particles during the crushing process. The raw ore particles are compact, with a relatively smooth surface. After the ocean manganese nodules were reduced by the sawdust pyrolysis gas, the surface of the ore exhibited a structure composed of relatively finer pores and rougher morphology, with the presence of the pyrolysis char (Figure 12B). The deprivation of oxygen during the reduction process might promote these morphological changes on the surfaces of the ocean manganese nodules. Fe, Mn, and Ni can be detected by EDS in the raw ore and the reduced ore. After the acid leaching process, the manganese was dissolved by the acid solution, leading to a complex porous structure (Figure 12C) without the appearance of Mn or the other valuable metals Cu, Co, and Ni.

The reduction performance of the ocean manganese nodules was compared with recently reported studies of both hydrometallurgy and pyro-hydrometallurgy processes. As shown in Table 5, the reduction process of hydrometallurgy can be carried out at room temperature but requires a longer reduction time than pyro-hydrometallurgy. The valuable metals can be reduced to alloy at temperatures higher than 1100 °C; however, the extraction efficiencies of Cu, Co, and Ni are then lower than those of hydrometallurgy and of this work. Compared with the traditional extraction methods, the valuable metals can be extracted at lower temperatures (compared with pyro-hydrometallurgy) and with shorter reduction times (compared with hydrometallurgy) using biomass pyrolysis technology. Furthermore, the reductive substance is the volatile gas generated by the pyrolysis process, and no other impurity ions are introduced into the subsequent acid leaching process when sawdust is used as the reductant.

Conclusions

1. The ocean manganese nodules can be reduced by sawdust pyrolysis technology. The valuable metals locked in by the high valence manganese minerals can be further extracted by acid leaching after the reduction process.
Cu, Co, and Ni in the ocean manganese nodules closely coexist with Mn, and their extraction rates remain consistent with the reduction degree of Mn. Under the optimal conditions of sawdust dosage of 10.0%, reduction temperature of 500 °C, and reduction time of 6 min, the extraction rates of Mn, Cu, Co, and Ni reach as high as 96.1%, 91.7%, 92.5%, and 94.4%, respectively. The reduction temperature is lower than for traditional pyro-hydrometallurgy using a carbon-based reductant. Compared with the hydrometallurgy method, the reduction time is significantly decreased. Moreover, no other impurity ions were introduced into the acid leaching process by the reductant.

2. The sawdust pyrolysis process involved four stages: moisture evaporation (25-105 °C), bound water removal (105-250 °C), the main pyrolysis process (250-375 °C), and afterward, slight pyrolysis (375-800 °C). Large amounts of volatile substances are released in the main pyrolysis process, with a mass loss of 59%, and the main reductive volatile gas is CO, with a content of 28.82%. According to the non-isothermal kinetic model established by thermogravimetric analysis, the sawdust's main pyrolysis process is an endothermic reaction with an activation energy of 52.68 kJ·mol−1; the kinetic model can be established as α = 1 − Exp[−T² Exp(−6480/T − 2.36)].

3. The main reduction process of the ocean manganese nodules occurs at temperatures higher than 390 °C, as shown by the comparison of the assumed and actual TG curves. The high valence manganese minerals are gradually reduced to Mn2O3, Mn3O4, and MnO by the reductive volatile gas generated
Clinical heterogeneity and novel pathogenic variants of Chinese patients with neuroacanthocytosis

Background: Neuroacanthocytosis (NA) syndromes are a group of exceedingly rare diseases, including McLeod syndrome (MLS) and chorea-acanthocytosis (ChAc). The characteristic phenotypes comprise a variety of movement disorders, including chorea, dystonia, and parkinsonism. We aimed to investigate the clinical heterogeneity and novel pathogenic variants of Chinese patients with neuroacanthocytosis. Results: Three novel XK variants (c.942G>A, c.970A>T, c.422_423del) were identified in three index MLS patients and three novel VPS13A variants (c.3817C>T, c.9219C>A, c.3467T>A) in two index ChAc patients. In addition, we summarized the genotypes and phenotypes of reported MLS patients and ChAc patients in the Chinese population. Conclusion: Our study expands the genetic spectrum of XK and VPS13A and helps the clinical diagnosis of neuroacanthocytosis.

Introduction

Neuroacanthocytosis (NA) is a group of syndromes, including McLeod syndrome (MLS), chorea-acanthocytosis (ChAc), Huntington's disease-like 2 (HDL2), and pantothenate kinase-associated neurodegeneration (PKAN). Acanthocytosis is a frequent finding in MLS and ChAc, but occurs in only approximately 10% of HDL2 and PKAN patients [1,2]. The "core" neuropathologic finding in NA syndromes is degeneration affecting the basal ganglia; thus their clinical presentations can be remarkably similar [3]. The characteristic phenotypes comprise psychiatric and cognitive symptoms, and a variety of movement disorders including chorea, dystonia, and parkinsonism. The age of onset, inheritance pattern, and ethnic background provide diagnostic clues for each condition, but clinical diagnosis remains challenging. Genetic diagnosis provides a definitive diagnosis and is important for genetic counseling. Of the two core NA syndromes, MLS is inherited in an X-linked manner with pathogenic variants of the XK gene, while ChAc is an autosomal recessive disorder caused by pathogenic variants in VPS13A. It is estimated that there are likely a few hundred MLS patients and around one thousand ChAc patients in the world [3]. Although more than 50 NA patients have so far been reported in China [4-13], most of these were not confirmed by specific molecular tests [4]. The distribution of pathogenic variants within XK and VPS13A has not been conclusively determined in China. Targeted next-generation sequencing (NGS) has been successfully utilized to diagnose neurological diseases that may be difficult to differentiate from clinical symptoms. In this study, we collected 14 unrelated Chinese patients who exhibited involuntary movements and in whom Huntington's disease (HD) had been ruled out by HTT gene testing. We performed targeted NGS in these patients, and 5 index patients were diagnosed with MLS or ChAc after Sanger sequencing verification. Three novel XK variants and three novel VPS13A variants were identified.
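The variant triage implied by this workflow (rare in population databases, then judged by consequence and in-silico predictors, as detailed in the Methods below) can be sketched as follows; the field names and cutoff are hypothetical, not the study's actual pipeline.

```python
# A hedged sketch of rare-variant filtering after targeted NGS; real
# pipelines read annotated VCFs rather than hand-built dictionaries.
RARE_AF = 1e-4  # assumed population-frequency cutoff

variants = [  # illustrative annotated calls
    {"gene": "XK", "hgvs": "c.942G>A", "gnomad_af": 0.0,
     "effect": "nonsense", "sift": "D", "polyphen": "probably_damaging"},
    {"gene": "VPS13A", "hgvs": "c.3467T>A", "gnomad_af": 0.0,
     "effect": "missense", "sift": "D", "polyphen": "probably_damaging"},
]

def passes(v):
    if v["gnomad_af"] > RARE_AF:
        return False  # too common to cause an exceedingly rare disease
    if v["effect"] in ("nonsense", "frameshift", "splicing"):
        return True   # loss-of-function: retained regardless of predictors
    # Missense variants require concordant deleterious predictions
    return v["sift"] == "D" and v["polyphen"] == "probably_damaging"

for v in filter(passes, variants):
    print(v["gene"], v["hgvs"])
```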
Subjects

Fourteen unrelated patients who presented with suspected chorea and negative HTT genetic testing, together with their family members, were recruited between June 2015 and March 2019. The neurological examinations and clinical evaluations were performed by at least two senior neurologists. Routine blood tests and radiological examinations were performed. The study was approved by the Ethics Committee of the Second Affiliated Hospital, Zhejiang University School of Medicine. Written informed consent was obtained from each participant.

Acanthocyte examination

Blood was collected into tubes containing ethylenediaminetetraacetic acid (EDTA) and examined by scanning electron microscopy (SEM). Red cells were fixed in a 1% solution of glutaraldehyde and stored at 4°C until the time of examination. The red cells were absorbed onto tissue paper and gold-coated with an Emscope Sputter Coater. They were then examined in a Nova nano 450 electron microscope operating at 5 kV. The acanthocyte rate was counted by the method described previously [14].

Genetic analysis

Genomic DNA was extracted from peripheral EDTA-treated blood with a Blood Genomic Extraction Kit (Qiagen, Hilden, Germany). A customized panel was designed to cover 54 genes for chorea and dystonia, including VPS13A, XK, PRNP, 10 genes of neurodegeneration with brain iron accumulation (NBIA), 4 genes of primary familial brain calcification (PFBC), and 37 other genes of hereditary dystonia (Supplemental Table 1). A detailed protocol of the NGS was reported in our previous study [15]. Sanger sequencing was performed on an ABI 3500xl Dx Genetic Analyzer (Applied Biosystems, Foster City, USA), as in our previous report, to verify the filtered candidate variants [16]. Co-segregation analysis was carried out in families with genetic variants. SIFT and PolyPhen-2 were used to predict the pathogenicity of the identified variants.

Results

Identification of novel XK and VPS13A variants

After targeted NGS and Sanger sequencing, three novel hemizygous variants within the XK gene were found in three index patients (Fig. 1A-C). Among these three variants, two are nonsense (c.942G>A and c.970A>T) and one is a frameshift (c.422_423del). All are absent from 1000G, gnomAD, and our NGS database containing 500 matched Chinese controls. According to American College of Medical Genetics and Genomics (ACMG) standards [17], c.942G>A is classified as pathogenic, and the other two variants, c.970A>T and c.422_423del, are classified as likely pathogenic (Table 1). In addition, three novel variants within the VPS13A gene were detected in another two index patients, one of whom harbored a homozygous variant (c.3817C>T) and the other compound heterozygous variants (c.3467T>A and c.9219C>A) (Fig. 1D-E). Among these variants, two are nonsense (c.3817C>T and c.9219C>A) and one is missense (c.3467T>A). The three variants are at extremely low frequency in gnomAD (allele count ≤ 2) and absent from 1000G and our NGS database. The missense variant c.3467T>A was predicted to be deleterious by SIFT and PolyPhen-2. According to ACMG standards, c.9219C>A is classified as pathogenic, while c.3817C>T and c.3467T>A are classified as likely pathogenic (Table 1).

Clinical features of five genetically diagnosed patients

The detailed clinical features of the five NA patients are summarized in Table 2. The three MLS index patients were all males with no family history of NA (Fig. 1A-C). The two ChAc index patients were females, one of whom was from a consanguineous marriage (Fig. 1D) and the other of whom had a positive family history (Fig. 1E).
All five patients had mildly to moderately elevated creatine kinase (CK). Four patients had generalized chorea-like symptoms, and their brain magnetic resonance imaging (MRI) revealed atrophy of either the caudate or the putamen (Fig. 2A-D). One MLS case (Case 2) showed gait disturbance, mild involuntary movements and markedly elevated CK, which led to initial confusion with myopathy. Reflexes of the five NA patients were all reduced. Two of them underwent electromyography (EMG) and showed axonal neuropathy. Apart from one case in which it was not assessed, the acanthocyte rate in peripheral blood of the four patients was between 18-32%, which was easily recognized under electron microscopy (Fig. 2E-F). Only Case 2 underwent Kell blood group phenotyping at the blood centre. The Kell antigen was absent in his erythrocytes, as were the K, Kpa and Kpb antigens. The k antigen was weakly positive.

Case 1 (II-2 in Family 1, Fig. 1A) was a 47-year-old male with involuntary tongue and limb movements for over 20 years. At age 43, he noted progressive gait disturbance and a tendency to fall. He went to the hospital and was diagnosed with "chorea". He was then treated with risperidone, which was discontinued after several months. At age 47, he had a generalized tonic-clonic seizure and was treated with valproate. Neurological examination showed dysarthria and 4/5 weakness of the left proximal lower limb.

Case 2 (II-4 in Family 2, Fig. 1B) was a 41-year-old male with gait disturbance for 4 years. At age 37, he noted progressive gait disturbance and a tendency to fall. His serum CK was markedly elevated, at times over 4000 U. He had therefore previously been diagnosed with "metabolic myopathy". Neurological examination showed mild postural instability and a slight waddling gait. When he was sitting, mild involuntary movements of his toes and trunk were noted, resembling akathisia. Muscle strength, muscle tone, the finger-to-nose test and the heel-to-knee test were normal, which suggested that his gait instability was due to mild involuntary movements of the trunk and lower limbs.

Case 4 (II-2 in Family 4, Fig. 1D) was a 35-year-old female with involuntary limb movements for over 10 years. Her parents had a consanguineous marriage. In her early 20s, she presented with mild involuntary movements which did not affect daily life. The symptoms worsened gradually and she developed dysarthria, drooling, and orofacial dyskinesias with tongue and lip biting in the following years. At age 35, she had two generalized tonic-clonic seizures and was treated with levetiracetam. Neurological examination showed generalized choreiform movements with dysarthria and orofacial dyskinesias.

Case 5 (II-2 in Family 5, Fig. 1E) was a 61-year-old female with involuntary limb movements for over 10 years. At about age 50, she began to present with progressive generalized involuntary movements and then gradually developed dysarthria, orofacial dyskinesias and shoulder shrugging. Her neurological examination showed generalized choreiform movements with dysarthria and orofacial dyskinesias. She had a 50-year-old brother who presented with similar symptoms in his 40s. The brother could not walk and had stayed in bed in a rehabilitation hospital for many years. He did not undergo genetic testing.

The prognosis of most cases was poor. Case 3 died from a fall in a paddy field one year after diagnosis. Case 1, Case 4 and Case 5 became bedbound 2-3 years after diagnosis, and Case 5 finally died from pneumonia. Case 2 had the mildest symptoms and was still able to take care of himself 4 years after diagnosis.
Discussion
Neuroacanthocytosis syndromes are all exceedingly rare, with an estimated prevalence of less than 1 to 5 per 1,000,000 for each disorder worldwide [3]. Currently, 43 pathogenic variants of the XK gene have been included in the Human Gene Mutation Database (HGMD, v2019.4), including 13 missense/nonsense variants, 6 splicing variants, 12 small deletions/insertions/indels and 12 gross deletions. In the Chinese population, six pathogenic variants of the XK gene have been reported (Table 3), two of which were included in HGMD. Combined with the three novel variants in the current study, there are 9 pathogenic XK variants, including five nonsense variants, two small deletions and two gross deletions. ChAc is relatively more common than MLS. There are 126 pathogenic variants of VPS13A in HGMD, including 41 missense/nonsense variants, 25 splicing variants, 48 small deletions/insertions/indels and 12 gross deletions/insertions/complex rearrangements. Combined with the three novel variants in our study, there are 10 pathogenic VPS13A variants in the Chinese population (Table 1 and Table 3), including 7 missense/nonsense variants, one splicing variant and two small insertions/deletions. These 19 pathogenic variants were all from single families, and we did not find any hotspot variant in the Chinese population.

MLS and ChAc share a number of similar features [18] and there is a striking phenotypic overlap between these two disorders, including age at onset, chorea, dysarthria, orofacial dyskinesia, seizures, psychiatric symptoms and brain MRI, but some clues may help to differentiate them. First, in genetics, MLS is inherited in an X-linked manner and ChAc in an autosomal recessive manner. Therefore, the family history and the gender can give important clues to the diagnosis. Besides, large deletions in the XK gene often involve multiple adjacent genes such as DMD, CYBB, RPGR and OTC, and lead to concomitant disorders such as dystrophinopathy, chronic granulomatous disease, retinitis pigmentosa and ornithine transcarbamylase deficiency [19]. Although our patients did not have large deletions, two Chinese patients with chronic granulomatous disease have been reported in whom the XK gene was involved in multigenic deletions [7]. It is important to screen for the MLS phenotype in males with chronic granulomatous disease for complication monitoring and management. In clinical manifestations, orolingual dystonia is more frequently present in ChAc. In our study and previous reports in the Chinese population, all ChAc patients had orolingual dystonia, but less than half of MLS patients had the condition. In addition, its severity is often disproportionate to the other involuntary movements in ChAc. For example, self-mutilating tongue and lip biting, which is present in 40% of Caucasian patients [20] and 85% (6/7) of reported Chinese ChAc patients, is very rare in MLS patients [18]. On the other hand, MLS patients seemed to have higher CK levels than ChAc patients in the Chinese population. About 78% of MLS patients (7/9) had CK above 1000 U/L, whereas only one of six ChAc patients reached this level. Moreover, MLS patients can have muscle complaints with only mild involuntary movements, which often leads to confusion with myopathy [5,6]. However, the EMG usually shows axonal neuropathy, so sometimes these patients can be misdiagnosed with Kennedy's disease, which is also an X-linked neurological disease manifesting with proximal weakness, hyperCKemia, and axonal neuropathy-mimicking EMG findings [21].
Therefore, it is important to pay attention to mild involuntary movements and to consider MLS in males with both hyperCKemia and a neuropathic EMG. The acanthocyte rate in peripheral blood usually falls within 5-50% in ChAc and 8-30% in MLS [3], which is consistent with our results. However, the presence of acanthocytes may vary over time and can be absent in some cases [22][23][24][25], and thus cannot be used to distinguish different NA syndromes.

Conclusion
In conclusion, we reported five Chinese NA families and identified three novel XK variants (c.942G>A, c.970A>T, c.422_423del) in 3 index MLS patients and three novel VPS13A variants (c.3817C>T, c.9219C>A, c.3467T>A) in two index ChAc patients. In addition, we summarized the genotypes and phenotypes of reported MLS patients and ChAc patients in the Chinese population. Our study expands the genetic spectrum of XK and VPS13A and helps the clinical diagnosis of neuroacanthocytosis.

Ethics approval and consent to participate
The study protocol was in accordance with the tenets of the Declaration of Helsinki and was approved by the Ethics Committee of the Second Affiliated Hospital, Zhejiang University School of Medicine. Written informed consent was obtained from all the participants. Written informed consent for publication was obtained from all the participants.

Availability of data and material
The data that support the findings of this study are available from the corresponding author upon reasonable request.

Competing interests
The authors declare that they have no competing interests.

Funding
This work was supported by the research foundation for distinguished scholar of Zhejiang University to Zhi-Ying Wu (188020-1938 10101/089).

Authors' contributions
Hao Yu and Liang Zhang: data acquisition, analysis and interpretation, and manuscript preparation; Xiao-Yan Li: data acquisition, analysis and interpretation; Zhi-Ying Wu: study design and conception, data acquisition, analysis and interpretation, critical revision of the manuscript.

Supplementary Files
This is a list of supplementary files associated with this preprint: TableS1.docx
2020-04-30T09:04:18.838Z
2020-04-27T00:00:00.000
{ "year": 2020, "sha1": "59d9a7ecafef3fbedc8cdaff88bb73ce2609b4a9", "oa_license": "CCBY", "oa_url": "https://www.researchsquare.com/article/rs-23935/v1.pdf?c=1631847883000", "oa_status": "GREEN", "pdf_src": "ScienceParsePlus", "pdf_hash": "a0b45df3d00217ef88201a6191112251a39924a2", "s2fieldsofstudy": [ "Medicine", "Biology", "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
119237581
pes2o/s2orc
v3-fos-license
Determination of astrophysical 12N(p,g)13O reaction rate from the 2H(12N, 13O)n reaction and its astrophysical implications The evolution of massive stars with very low-metallicities depends critically on the amount of CNO nuclides which they produce. The $^{12}$N($p$,\,$\gamma$)$^{13}$O reaction is an important branching point in the rap-processes, which are believed to be alternative paths to the slow 3$\alpha$ process for producing CNO seed nuclei and thus could change the fate of massive stars. In the present work, the angular distribution of the $^2$H($^{12}$N,\,$^{13}$O)$n$ proton transfer reaction at $E_{\mathrm{c.m.}}$ = 8.4 MeV has been measured for the first time. Based on the Johnson-Soper approach, the square of the asymptotic normalization coefficient (ANC) for the virtual decay of $^{13}$O$_\mathrm{g.s.}$ $\rightarrow$ $^{12}$N + $p$ was extracted to be 3.92 $\pm$ 1.47 fm$^{-1}$ from the measured angular distribution and utilized to compute the direct component in the $^{12}$N($p$,\,$\gamma$)$^{13}$O reaction. The direct astrophysical S-factor at zero energy was then found to be 0.39 $\pm$ 0.15 keV b. By considering the direct capture into the ground state of $^{13}$O, the resonant capture via the first excited state of $^{13}$O and their interference, we determined the total astrophysical S-factors and rates of the $^{12}$N($p$,\,$\gamma$)$^{13}$O reaction. The new rate is two orders of magnitude slower than that from the REACLIB compilation. Our reaction network calculations with the present rate imply that $^{12}$N($p,\,\gamma$)$^{13}$O will only compete successfully with the $\beta^+$ decay of $^{12}$N at higher ($\sim$two orders of magnitude) densities than initially predicted. I. INTRODUCTION The first generation of stars formed at the end of the cosmic dark ages, which marked the key transition from a homogeneous and simple universe to a highly structured and complex one [1]. The first stars of zero metallicity are so-called Population III that formed before Population I in galatic disks and Population II in galatic halos [2,3]. The most fundamental question about Population III stars is how massive they typically were since the mass of stars dominates their fundamental properties such as lifetimes, structures and evolutions. Recent numerical simulations of the collapse and fragmentation of primordial gas clouds indicate that these stars are predominantly very massive with masses larger than hundreds of M ⊙ (see Ref. [1] and references therein). A classic question on the evolution of supermassive stars is whether they contributed any significant mate- * Corresponding author: wpliu@ciae.ac.cn rial to later generations of stars by supernova explosions which ended the lives of Population III stars. In 1986, Fuller, Woosley and Weaver [4] studied the evolution of non-rotating supermassive stars with a hydrodynamic code KEPLER. They concluded that these stars will collapse into black holes without experiencing a supernova explosion. This is because the triple alpha process (3α → 12 C) does not produce sufficient amounts of CNO seed nuclei so that the hot CNO cycle and rp-process are unable to generate the nuclear energy enough to explode the stars. In 1989, Wiescher, Görres, Graff et al. [5] suggested the rap-processes as alternative paths which would permit these stars to bypass the 3α process and to yield the CNO material. 
The reactions involved in the rap-processes are listed as below: rap-I : 3 He(α, γ) 7 Be(p, γ) 8 B(p, γ) 9 C(α, p) 12 N(p, γ) 13 O(β + ν) 13 N(p, γ) 14 O rap-II : 3 He(α, γ) 7 Be(α, γ) 11 C(p, γ) 12 N(p, γ) 13 O(β + ν) 13 N(p, γ) 14 O rap-III : 3 He(α, γ) 7 Be(α, γ) 11 C(p, γ) 12 N(β + ν) 12 C(p, γ) 13 N(p, γ) 14 O rap-IV : 3 He(α, γ) 7 Be(α, γ) 11 C(α, p) 14 N(p, γ) 15 O. It is crucial to determine the rates of the key reactions in the rap-processes in order to study if they play any significant role in the evolution of supermassive stars by producing CNO material. 12 N(p, γ) 13 O is an important reaction in the rap-I and rap-II processes. Due to the low Q-value (1.516 MeV) of the 12 N(p, γ) 13 O reaction, its stellar reaction rate is dominated by the direct capture into the ground state in 13 O. In addition, the resonant capture via the first excited state in 13 O could play an important role for determining the reaction rates. In 1989, Wiescher et al. [5] derived the direct astrophysical S-factor at zero energy, S(0), to be ∼40 keV b based on a shell model calculation. In 2006, Li [6] extracted the direct S(0) factor to be 0.31 keV b by using the spectroscopic factor from the shell model calculation of Ref. [7], where the proton-removal cross section of 13 O on a Si target was well reproduced. It should be noted that there is a discrepancy of two orders of magnitude between the above two values of the direct S(0) factor. In 2009, Banu, Al-Abdullah, Fu et al. [8] derived the asymptotic normalization coefficient (ANC) for the virtual decay of 13 O g.s. → 12 N + p from the measurement of the 14 N( 12 N, 13 O) 13 C angular distribution and then calculated the direct S(0) factor to be 0.33 ± 0.04 keV b, which is consistent with that in Ref. [6]. As for the resonant capture component, the resonant parameters of the first excited state in 13 O have been studied through a thick target technique [9,10] and R-matrix method [8,10]. In 1989, Wiescher et al. [5] derived the radiative width to be Γ γ = 24 meV with one order of magnitude uncertainty based on a Weisskopf estimate of the transition strength. In 2007, Skorodumov, Rogachev, Boutachkov et al. [10] measured the excitation function of the resonant elastic scattering of 12 N + p and extracted the spin and parity to be J π = 1/2 + for the first excited state in 13 O via an R-matrix analysis. In addition, the excitation energy and the proton width were determined to be 2.69 ± 0.05 MeV and 0.45 ± 0.10 MeV, respectively. In 2009, Banu et al. [8] derived a radiative width Γ γ = 0.95 eV by using the experimental ANC, based on the R-matrix approach. This work aims at determining the astrophysical Sfactors and rates of the 12 N(p, γ) 13 O reaction through the ANC approach based on an independent proton transfer reaction. Here, the angular distribution of the 12 N(d, n) 13 O reaction leading to the ground state in 13 O is measured in inverse kinematics, and used to extract the ANC for the virtual decay of 13 O g.s. → 12 N + p through the Johnson-Soper adiabatic approximation [11]. The (d, n) transfer system has been successfully applied to the study of some proton radiative capture reactions, such as 7 Be(p, γ) 8 B [12,13], 8 B(p, γ) 9 C [14], 11 C(p, γ) 12 N [15], and 13 N(p, γ) 14 O [16]. The astrophysical S-factors and rates for the direct capture in the 12 N(p, γ) 13 O reaction are then calculated by using the measured ANC. 
Finally, we obtain the total S-factors and rates by taking into account the direct capture, the resonant capture and their interference, and study the temperature-density conditions at which the 12 N(p, γ) 13 O reaction takes place. The experiment was performed with the CNS low energy in-flight Radio-Isotope Beam (CRIB) separator [17,18] in the RIKEN RI Beam Factory (RIBF). A 10 B primary beam with an energy of 82 MeV was extracted from the AVF cyclotron. The primary beam impinged on a 3 He gas target with a pressure of 360 Torr and a temperature of 90 K; the target gas was confined in a small chamber with a length of 80 mm [19]. The front and rear windows of the gas chamber are Havar foils, each in a thickness of 2.5 µm. The secondary 12 N ions with an energy of 70 MeV were produced through the 3 He( 10 B, 12 N)n reaction and then selected by the CRIB separator, which mainly consists of two magnetic dipoles and a velocity filter (Wien filter). A schematic layout of the experimental setup at the secondary reaction chamber (namely F3 chamber, see Ref. [18] for details) of CRIB separator is shown in Fig. 1. The cocktail beam which included 12 N was measured event-by-event using two parallel plate avalanche counters (PPACs) [20]; in this way, we determined the particle identification, precise timing information, and could extrapolate the physical trajectory of each ion in real space. In Fig. 2 we display the histogram of time of flight (TOF) vs. horizontal position (X) on the upstream PPAC in the F3 chamber for the particle identification of the cocktail beam. The main contaminants are 7 Be ions with the similar magnetic rigidities and velocities to the 12 N ions of interest. After the two PPACs, the 12 N secondary beam bombarded a deuterated polyethylene (CD 2 ) film with a thickness of 1.5 mg/cm 2 to study the 2 H( 12 N, 13 O)n reaction. A carbon film with a thickness of 1.5 mg/cm 2 was utilized to evaluate the background contribution from the carbon nuclei in the (CD 2 ) target. The target stand with a diameter of 8 mm also served as a beam collimator. The typical purity and intensity of the 12 N ions on target were approximately 30% and 500 pps after the collimator, respectively. The 13 O reaction products were detected and identified with a telescope consisting of a 23 µm silicon detector (∆E) and a 57 µm double-sided silicon strip detector (DSSD). In order to determine the energy of 12 N ions after they pass through two PPACs, a silicon detector with a thickness of 1500 µm was placed between the downstream PPAC and the (CD 2 ) target, and removed after measuring the beam energy. The energy calibration of the detectors was carried out by combining the use of α-source and the magnetic rigidity parameters of 10 B and 12 N ions. The energy loss of the 12 N beam in the whole target was determined from the energy difference measured with and without the target. The 12 N beam energy in the middle of the (CD 2 ) target was derived to be 59 MeV from the energy loss calculation by the program LISE++ [21], which was calibrated by the experimental energy loss in the whole target. In addition, a beam stopper (close to the DSSD) with a diameter of 8 mm was used to block un-reacted beam particles in order to reduce radiation damage to the DSSD. The emission angles of reaction products were determined by combining the position information from the DSSD and the two PPACs. As an example, Fig. 3 shows a two-dimensional histogram of energy loss (∆E) vs. 
residual energy (E_R) for the events in the angular range of 3° < θ_c.m. < 4°. For the sake of saving CPU time in dealing with the experimental data, all the events below ∆E = 20 MeV were scaled down by a factor of 100, and the 13 O events were not affected. The two-dimensional cuts of the 13 O events from the 2 H( 12 N, 13 O)n reaction were determined with a Monte Carlo (MC) simulation, which took into account the kinematics, the geometrical factor, the energy diffusion of the 12 N beam, the angular straggling, and the energy straggling in the two PPACs, the secondary target and the ∆E detector. This simulation was calibrated by using the 12 N elastic scattering on the target. Such a calibration approach has been successfully used to study the 2 H( 8 Li, 9 Li) 1 H reaction [22]. The 13 O events are clearly observed in the two-dimensional cut for the (CD 2 ) measurement, while no relevant events are observed in this cut for the background measurement. The 7 Be contaminants do not affect the identification of the 13 O events since these ions and their products are far from the 13 O region in the spectra of ∆E vs. E_R and have significantly different energies from the 13 O events. The effects of the pileup of 7 Be with 12 N can be estimated and subtracted through the background measurement. In addition, the detection efficiency correction from the beam stopper was also computed via the MC simulation by considering the effects mentioned above. The resulting detection efficiencies range from 66% to 100% for different detection regions in the DSSD. After the beam normalization and background subtraction, the angular distribution of the 2 H( 12 N, 13 O g.s. )n reaction in the center-of-mass frame was obtained and is shown in Fig. 4.

For a peripheral transfer reaction, the ANC can be derived by comparing the experimental angular distribution with theoretical calculations,

(dσ/dΩ)_exp = Σ_{l_i j_i l_f j_f} (C_{l_f j_f})^2 (C_d)^2 R_{l_i j_i l_f j_f},   (1)

R_{l_i j_i l_f j_f} = σ^th_{l_i j_i l_f j_f} / (b^2_{l_i j_i} b^2_{l_f j_f}),   (2)

where (dσ/dΩ)_exp and σ^th_{l_i j_i l_f j_f} are the experimental and theoretical differential cross sections, respectively. C_{l_f j_f}, C_d and b_{l_f j_f}, b_{l_i j_i} represent the nuclear ANCs and the corresponding single-particle ANCs for the virtual decays of 13 O g.s. → 12 N + p and d → p + n, respectively. l_i, j_i and l_f, j_f denote the orbital and total angular momenta of the transferred proton in the initial and final nuclei d and 13 O, respectively. R_{l_i j_i l_f j_f} is model independent in the case of a peripheral transfer reaction; therefore, the extraction of the ANC is insensitive to the geometric parameters (radius r_0 and diffuseness a) of the bound-state potential. In this work, the code FRESCO [23] was used to analyze the experimental angular distribution. In order to include the breakup effects of deuterons in the entrance channel, the angular distribution was calculated within the Johnson-Soper adiabatic approximation to the neutron, proton, and target three-body system [11]. In the present calculation, the nucleon-target optical potentials were taken from Refs. [24,25], which have been successfully applied to the study of several reactions on light nuclei [26][27][28]. The theoretical angular distributions of the direct process were calculated with these two sets of optical potentials, as shown in Fig. 4. The employed optical potential parameters are listed in Table I. In addition, the UNF code [29] was used to evaluate the compound-nucleus (CN) contribution in the 2 H( 12 N, 13 O g.s. )n reaction, as indicated by the dotted line in Fig. 4.
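As a minimal numerical sketch of how Eqs. (1)-(2) are used in practice, the snippet below inverts the relation angle by angle and then combines the per-angle values with an inverse-variance weighted mean, as done below for the forward-angle points. The cross-section values, their errors and the model ratios R are invented placeholders (the real R values come from the FRESCO calculation in the Johnson-Soper approach), and the sum over the 1p3/2 and 1p1/2 components is collapsed into a single effective orbit for brevity.

```python
import numpy as np

C2_D = 0.76   # fm^-1, deuteron ANC squared quoted in the text (Ref. [30])

def anc_squared_per_angle(dsig_exp, dsig_err, r_theory):
    """Invert Eq. (1) angle by angle, assuming a single effective transfer orbit.

    dsig_exp, dsig_err : measured c.m. differential cross sections and errors (mb/sr),
                         with the compound-nucleus contribution already subtracted.
    r_theory           : the model-independent ratios R of Eq. (2) at the same angles,
                         taken here as given numbers.
    """
    dsig_exp = np.asarray(dsig_exp, float)
    dsig_err = np.asarray(dsig_err, float)
    r_theory = np.asarray(r_theory, float)
    c2 = dsig_exp / (C2_D * r_theory)
    return c2, dsig_err / (C2_D * r_theory)

def weighted_mean(values, errors):
    """Inverse-variance weighted mean and its uncertainty."""
    w = 1.0 / np.asarray(errors, float) ** 2
    return np.sum(w * values) / np.sum(w), np.sqrt(1.0 / np.sum(w))

# Hypothetical forward-angle points (NOT the measured data), just to show the mechanics.
dsig = [12.0, 10.5, 9.8]   # mb/sr
err  = [2.0, 1.8, 1.7]     # mb/sr
R    = [4.1, 3.6, 3.3]     # illustrative units consistent with Eq. (1)
c2, c2_err = anc_squared_per_angle(dsig, err, R)
print(weighted_mean(c2, c2_err))   # -> (effective ANC^2, uncertainty) in fm^-1
```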
The single-particle bound-state wave functions were calculated with conventional Woods-Saxon potentials whose depths were adjusted to reproduce the binding energies of the proton in the ground states of the deuteron (E_b = 2.225 MeV) and 13 O (E_b = 1.516 MeV). To verify whether the transfer reaction is peripheral, the ANCs and the spectroscopic factors were computed by changing the geometric parameters of the Woods-Saxon potential for the single-particle bound state, using one set of optical potentials, as displayed in Fig. 5. One sees that the spectroscopic factors depend significantly on the selection of the geometric parameters, while the ANC is nearly constant, indicating that the 2 H( 12 N, 13 O g.s. )n reaction at the present energy is dominated by the peripheral process. The spins and parities of 12 N g.s. and 13 O g.s. are 1+ and 3/2−, respectively. Therefore, the 2 H( 12 N, 13 O g.s. )n cross section could include two contributions, from the proton transfers to the 1p_{3/2} and 1p_{1/2} orbits in 13 O. The ratio of the 1p_{3/2} to the 1p_{1/2} contribution was derived to be 0.16 based on a shell model calculation [7]. C^2_d was taken to be 0.76 fm^{-1} from Ref. [30]. After the subtraction of the CN contribution, the first three data points at forward angles were used to derive the ANC by comparing the experimental data with the theoretical calculations. For one set of optical potentials, three ANCs can be obtained by using the three data points, and their weighted mean was then taken as the final value. The squares of the ANCs for the 1p_{1/2} and 1p_{3/2} orbits were extracted accordingly, giving a total of (C^{13O})^2 = 3.92 ± 1.47 fm^{-1} for the virtual decay of 13 O g.s. → 12 N + p.

The ANC, which defines the amplitude of the tail of the radial overlap function, determines the overall normalization of the direct astrophysical S-factors [31]. In the present work, the direct capture cross sections and astrophysical S-factors were computed from the measured ANC by using the RADCAP code [32], which is a potential-model tool for direct capture reactions. The resulting direct astrophysical S-factors as a function of E_c.m. are displayed in Fig. 6, as indicated by the dashed line. The S-factor at zero energy was then found to be S(0) = 0.39 ± 0.15 keV b, which agrees with the values in Refs. [6,8]. The astrophysical S-factors of the resonant capture can be obtained by using the Breit-Wigner formula. In the present calculation, the resonant parameters (J^π = 1/2+, E_x = 2.69 ± 0.05 MeV, Γ_p = 0.45 ± 0.10 MeV [10], and Γ_γ = 0.95 ± 0.24 eV [8]) were adopted. In Fig. 6, we display the resulting S-factors for the resonant capture, as indicated by the dotted line. Interference effects occur only in the case that the resonant and direct amplitudes have the same channel spin I and the same incoming orbital angular momentum [8,33]. The direct capture amplitude for the 12 N(p, γ) 13 O reaction is given by the sum of I = 1/2 and 3/2 components. Since the channel spin of the first resonance is 1/2, only the first component of the direct capture interferes with the resonant amplitude. Therefore, the total S-factors were calculated with

S(E) = S_d(E) + S_r(E) + 2 cos δ [S_r(E) S_d^{1/2}(E)]^{1/2},   (3)

where S_d(E), S_r(E) and S_d^{1/2}(E) denote the astrophysical S-factors for the direct capture, the resonant capture, and the I = 1/2 component of the direct capture, respectively. δ is the resonance phase shift, which can be given by

δ = arctan[ Γ_p(E) / (2(E_R − E)) ].   (4)

Here, Γ_p(E) = Γ_p P_{l_i}(E)/P_{l_i}(E_R), where P_{l_i}(E) is the penetration factor. The ratio of the I = 1/2 amplitude to the total amplitude in the direct capture was derived to be 2/3 using the RADCAP code.
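To make the interference prescription concrete, the sketch below evaluates Eqs. (3)-(4) on an energy grid. The flat direct S-factor fixed at S(0) = 0.39 keV b, the schematic normalization of the resonant term, the neglect of the energy dependence of Γ_p, and the interpretation of the 2/3 ratio as a multiplicative fraction of S_d are all simplifying assumptions of this illustration, not results of the paper.

```python
import numpy as np

# Resonance parameters of the first excited state in 13O quoted in the text.
E_X, Q_VALUE = 2.69, 1.516      # MeV
E_R = E_X - Q_VALUE             # c.m. resonance energy, ~1.17 MeV
GAMMA_P = 0.45                  # MeV, proton width (energy dependence neglected here)
FRAC_I12 = 2.0 / 3.0            # I = 1/2 share of the direct capture (treated as a fraction of S_d)

def resonance_phase(E):
    """Resonance phase shift delta(E) of Eq. (4) with Gamma_p(E) ~ Gamma_p.
    cos(delta) is positive below E_R and negative above it, which mirrors the
    constructive/destructive pattern inferred from the R-matrix analysis of Ref. [8]."""
    return np.arctan2(GAMMA_P, 2.0 * (E_R - E))

def total_s_factor(E, s_direct, s_resonant):
    """Total S-factor of Eq. (3): direct + resonant + interference of the I = 1/2 part."""
    s_d_half = FRAC_I12 * s_direct
    return s_direct + s_resonant + 2.0 * np.sqrt(s_resonant * s_d_half) * np.cos(resonance_phase(E))

# Toy inputs: flat direct S-factor at S(0) = 0.39 keV b and a schematic Lorentzian
# resonant term whose absolute scale is arbitrary (for illustration only).
E = np.linspace(0.05, 2.5, 6)                                         # MeV
s_d = np.full_like(E, 0.39)                                           # keV b
s_r = 40.0 * (GAMMA_P / 2) ** 2 / ((E - E_R) ** 2 + (GAMMA_P / 2) ** 2)
print(np.round(total_s_factor(E, s_d, s_r), 2))
```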
Generally, the sign of the interference in Equation (3) has to be determined experimentally. However, it is also possible to infer this sign via an R-matrix method. Recently, Banu et al. [8] found constructive interference below the resonance and destructive interference above it using an R-matrix approach. Based on this interference pattern, the present total S-factors were then obtained, as shown in Fig. 6. In addition, we estimated the uncertainty of the total S-factors by taking into account the errors of the present ANC for the ground state of 13 O and of the employed resonant parameters for the first excited state of 13 O.

The astrophysical 12 N(p, γ) 13 O reaction rates (in cm^3 s^−1 mol^−1) were then calculated with [34,35]

N_A⟨σv⟩ = N_A (8/(πμ))^{1/2} (kT)^{−3/2} ∫_0^∞ S(E) exp[−E/(kT) − (E_G/E)^{1/2}] dE,   (5)

where the Gamow energy E_G = 0.978 Z_1^2 Z_2^2 μ MeV, and N_A is Avogadro's number. In Fig. 7 we display the resulting reaction rates as a function of temperature, together with the REACLIB compilation [5] (in Fig. 7 the solid curve represents the total rates of the present work, while the dash-dotted curve denotes the REACLIB compilation [5]). There is a discrepancy of up to a factor of ∼100 between these two reaction rates for temperatures below T_9 = 3 (T_9 is the temperature in units of 10^9 K). In addition, our total rates are in good agreement with those in Fig. 9 of Ref. [8], since a similar contribution of the direct capture was found and the same resonant parameters were used in both works. From Fig. 7 one also sees that the direct capture dominates the 12 N(p, γ) 13 O reaction for temperatures below T_9 = 1.5.

A further figure summarizes the temperature-density conditions at which the 12 N(p, γ) 13 O reaction could operate. Curve 1 represents the equilibrium line between the rates of the 12 N(p, γ) 13 O reaction and 12 N β+ decay. Curve 2 shows the same result determined from Ref. [5]. Regions 3 and 4 denote the revised temperature-density conditions for rap-I,II and rap-III with the present 12 N(p, γ) 13 O rate, respectively, while Regions 5 and 6 represent those with the REACLIB rate from Ref. [5]. Within these four regions more than 1×10^−6 abundance (mass fraction/mass number) could be converted to the CNO cycle. Note that when determining Regions 3 and 4, only the 12 N(p, γ) 13 O rate was changed; all the rest were still taken from the REACLIB compilations. The present region for rap-I and rap-II (Region 3) is reduced relative to Region 5; this is because the new rates are about two orders of magnitude slower than the compilation. On the contrary, the present region for rap-III (Region 4), where the β-decay of 12 N prevails over its proton capture, was enlarged relative to Region 6, which led to an increase of the upper limit of the density from ∼100 to ∼10000 g/cm^3. In brief, the present rate of 12 N(p, γ) 13 O shows that it will only compete successfully with the β+ decay of 12 N at higher (∼two orders of magnitude) densities than initially predicted in Ref. [5]. This finding is consistent with the result reported in Ref. [8], but is contrary to that in Ref. [10].

V. SUMMARY AND CONCLUSION
In this work, the angular distribution of the 2 H( 12 N, 13 O g.s. )n reaction was measured and utilized to derive the ANC for the virtual decay of 13 O g.s. → 12 N + p. Our result is in agreement with that from the 14 N( 12 N, 13 O) 13 C transfer reaction in Ref. [8]. The astrophysical S-factors and rates for the direct capture in the 12 N(p, γ) 13 O reaction were then obtained from the measured ANC by using the direct radiative capture model.
In addition, we determined the total S-factors and reaction rates by taking into account the direct capture into the ground state of 13 O, the resonant capture via the first excited state of 13 O and the interference between them. This work provides an independent examination to the existing results on the 12 N(p, γ) 13 O reaction. We conclude that the direct capture dominates the 12 N(p, γ) 13 O reaction for the temperatures below T 9 = 1.5. We also performed reaction network simulations with the new rates. The results imply that 12 N(p, γ) 13 O will only compete successfully with the 12 N β + decay at higher (∼two orders of magnitude) densities than initially predicted in Ref. [5]. Recent simulation of massive metal-free stars between 120 and 1000 solar masses shows that a metallicity as small as ∼1×10 −9 is sufficient to stop the contraction [38]. Therefore, this revise of temperature-density conditions may have substantial implications on the evolution of these massive metal-free stars.
2019-04-13T04:22:35.941Z
2012-11-21T00:00:00.000
{ "year": 2012, "sha1": "2c969fa4a3ab6b5b0c3697e3be7eb5522abf12a2", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1211.4972", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "f2281e672b2a9b81926af14f016cfc201bbb78fd", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
59483723
pes2o/s2orc
v3-fos-license
Genotype x environment interaction of wheat genotypes under salinity environments

Genotype x environment interaction as well as stability of performance were determined for grain yield and yield-contributing traits of 12 wheat genotypes under four salinity levels (control, 8, 12 and 16 dS/m). Significant genotype-environment interaction (linear) was found for days to heading, plant height, number of spikes per plant, grains per spike, 1000-grain weight and grain yield per plant at the 1% level of probability when tested against pooled deviation. Both the environment (linear) and genotype x environment (linear) components of variation for stability were also significant, indicating that prediction of the performance of the genotypes across environments appeared feasible for all the characters. The variance due to pooled deviation was significant only for days to heading. Considering all three stability parameters, genotype G11 was found most stable among all the genotypes for grain weight of wheat. Among the genotypes, G11, G22, G24, G33 and G40 were most desirable for yield per plant. The genotype G32 showed more responsiveness to changing environments and was suited only for highly favorable environments. Based on the three stability parameters, G11, G22 and G37 were the most stable and desirable genotypes, with reasonably good yield, among all.

Introduction
Yield and its components being quantitative in nature, it would be useful to gain knowledge about the nature and magnitude of the genetic variability and its interaction with the environment. Grain yield, which is the ultimate expression of various yield-contributing characters, is a polygenic character and is influenced by many genetic factors as well as environmental fluctuations. The combined effects of grain set per spike, ear-bearing tillers per plant, spikes per plant as well as per unit area, 1000-grain weight, and other morphological characters, especially plant height, spike area and days to physiological maturity, directly influence yield (Barma et al., 1990). Yield is also a complex character, which depends on a number of agronomic characters and is highly influenced by many genetic factors as well as many environmental fluctuations (Joarder et al., 1978). A genotype which can adjust its phenotypic state in response to environmental fluctuations in such a way that it gives a maximum stable economic return can be termed well buffered or stable (Allard and Bradshaw, 1964). In a plant breeding programme, potential genotypes are usually evaluated in different environments to select desirable ones. For stabilizing yield, it is necessary to identify stable genotypes suitable for a wide range of environments. To identify such genotypes, genotype x environment interactions are of major concern for a breeder, because such interactions confound the selection of superior cultivars by altering their relative productiveness in different environments (Eagles and Frey, 1977). Varietal stability in yield with respect to a wide range of environments is one of the most desired properties of a genotype for fitting the crop into the available cropping pattern. Thus, wide adaptability and stability are prime considerations in formulating an efficient breeding programme. Stability analysis is a good technique for measuring the adaptability of different crop varieties to varying environments (Morale et al., 1991). Therefore, the experiment was conducted to estimate the nature of genotype and environment interaction in wheat under artificially induced saline environments.
Treatment In this experiment, 12 genotypes were evaluated under four artificially induced saline environment (control, 8, 12, 16 dS/m).List of wheat genotypes are shown in Table 1.The experiment was conducted in pot culture under semi-controlled environment (inside plastic greenhouse) and natural light during the season of 2009-10.This was two factor experiment following randomized complete block design with three replications. Pot preparation and plant raising Pots were prepared with the dried soil and evenly mixed well rotten cow dung at the ratio of 3:1 (by volume).Clean and dry plastic pots of 12 liter size were used for each hybrid.Each pot was then filled with 10 kg previously prepared growth media (soil and cow dung mixture).Fertilizations were done following BARC fertilizer recommendation guide-2005.After pot preparation ten seeds per pot were sown by making holes and keeping more or less equal distances.Ten days after germination five seedlings of each genotypes in each pot and after seedling establishment two uniform healthy plants were allowed to grow in each pot. Salinity development Salt solution was prepared artificially by dissolving calculated amount of commercially available NaCl with tap water to make 80, 120 and 160 mM NaCl solution.The electric conductivity (EC) of the respective salt solutions was equivalent to 8, 12, 16 dS/m, respectively and 0.8 dS/m for tap water (control).The salt solution was applied with an increment of 30 mM at every alternate day till the respective concentrations were attained. Plants in control group were irrigated with tap water.Treatment solution was applied in excess so that extra solution dripped out from the bottoms of the pots (Ashraf and McNeilly, 1988;Aziz et al., 2005, 06). Data collection and analysis Days to heading, days to maturity, plant height (cm), number of spikes per plant, number of grains per spike, 1000-grain weight (g), grain yield/plant were recorded with standard procedure.Genotype and environmental interactions were estimated according to Eberhart and Russell (1966). Pooled analysis of variance The results of the pooled analysis of variance for relevant characters after Eberhart and Russell's model is presented in Table 2. Luthra et al. (1974) recommended Eberhart and Russell model for stability analysis considering its simplicity.Mean sum of square due to genotypes were significant (P < 0.01) for all the characters when tested against pooled deviation.ANOVA shows the significant genotype environment interaction (linear) for days to heading, plant height, number of spikes per plant and grains per spikes, 1000grain weight and grain yield per plant at 1% level of probability when tested against pooled deviation.Mahak et al. (2002) reported significant genotype-environment interaction (linear) for plant height, while Madariya et al. 
(2001) also observed significant genotype-environment interactions (linear) for grain number per spike, 1000 grain weight and yield.This indicates that the genotypes differed considerably with respect to their stability in different environments (salinity levels).Both the environment (linear) and genotype x environment (linear) components of variation for stability were also significant indicating that prediction of the genotypes on the environment appeared feasible for all the characters.The variance due to pooled deviation was significant for only days to heading.This suggest that the genotypes fluctuated significantly from their respective linear path of response to environments as well as considerable genetic diversity among the materials for days to heading.Mean square due to pooled deviation was not significant for all characters except days to heading which indicated that the major components of differences in stability for these characters was due to the linear regression and not the deviation from linear function (Ghose and Das, 1981;Ahmed and Khatum, 1996;Hossain et al., 2004). Stability and response parameter The mean performance of individual genotypes along with their estimated stability parameters for yield and its contributing characters are presented in Table 3 and Table 4.The results were discussed character wise as follows: Days to heading For days to heading, early to medium flowering type are generally preferred i.e. negative phenotypic index (Pi) is desirable.Seven genotypes such as G1, G15, G18, G24, G26, G33 and G40 to be desirable for this character. A genotype with larger bi value indicates its highest degree of response to environmental changes.In the study genotyped G22, G26, G32, and G45 was very sensitive to changes of environments.The genotypes G1, G15, G18, G22, G24, G26, G40 and G45 showed significant S 2 di values which indicate that they were more affected by the environmental fluctuations i.e. performance of these genotypes were unpredictable.Genotype G45 had both bi and S 2 di significant, indicating high genotype x environment interaction for this trait.Genotypes G11, G33, G37 and G40 however, exhibited non-significant S 2 di values which suggest that the performance among the genotypes were predictable in nature.Among the genotypes, G33 showed bi value of nearly unity (0.771) with non-significant S 2 di and also had the higher negative indices, indicating that the genotype was fairly stable and less sensitive to environmental changes. 
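The three stability statistics interpreted in these subsections (phenotypic index Pi, regression coefficient bi on the environmental index, and deviation from regression S2di, after Eberhart and Russell, 1966) can be computed directly from a genotype x environment table of means. The following Python sketch shows one way to do it; the yield values are invented for illustration, and the subtraction of the pooled error term from S2di is omitted as a simplification, so it is not a reproduction of the study's analysis.

```python
import numpy as np

def stability_parameters(Y):
    """Minimal Eberhart & Russell (1966)-style stability statistics.

    Y : 2-D array (genotypes x environments) of genotype means per environment
        (e.g. grain yield per plant averaged over replications).
    Returns per-genotype phenotypic index Pi, regression coefficient bi and
    deviation mean square S2di (without the pooled-error correction)."""
    Y = np.asarray(Y, float)
    g, e = Y.shape
    grand_mean = Y.mean()
    env_index = Y.mean(axis=0) - grand_mean            # environmental index I_j
    Pi = Y.mean(axis=1) - grand_mean                    # phenotypic index
    bi = Y @ env_index / np.sum(env_index ** 2)         # regression on I_j
    fitted = Y.mean(axis=1, keepdims=True) + np.outer(bi, env_index)
    S2di = np.sum((Y - fitted) ** 2, axis=1) / (e - 2)  # deviation mean square
    return Pi, bi, S2di

# Toy example: 3 genotypes grown at 4 salinity levels (values are made up).
Y = np.array([[12.0, 10.0,  8.0,  5.0],
              [11.0, 10.5,  9.5,  8.0],
              [14.0, 11.0,  7.0,  3.0]])
for name, pi, bi, s2di in zip(["G11", "G22", "G32"], *stability_parameters(Y)):
    print(f"{name}: Pi={pi:+.2f}  bi={bi:.2f}  S2di={s2di:.3f}")
```

In this framework, bi close to unity with a non-significant S2di indicates average responsiveness and predictable performance, which is how the stable genotypes are identified in the subsections above and below.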
Days to maturity The genotype environment interaction (linear) was non-significant for days to maturity.Genotypes G18, G24, G26, and G32 had the negative phenotypic indices, therefore, they are desirable genotypes with early maturity.Genotypes G11, G22, G24, G33, G37 and G45 showed bi value higher than the unity, indicating their suitability only for favorable environmental condition Genotype G15, G33, G37, and G40 exhibited significant S 2 di, thus prediction of their performance over environments would be not authentic.The rest of the genotypes showed non-significant S 2 di values, therefore, their performance was predictable in nature.Genotype G1 possessed the regression coefficient (bi) value were close to unity with non-significant S 2 di and genotypes G32 and G18 had the higher negative phenotypic indices with non-significant S 2 di values, indicating that they were to be more or less stable for this trait.Based on all the three estimates of stability parameters (negative phenotypic index, bi value close to unity and non-significant S 2 di value) into consideration, it appeared that G32 might be regard as suitable genotype among the all. Plant height Genotype G33 had the highest phenotypic index (Pi = 8.56) and G45 the lowest (-8.36).Genotypes G11, G15, G24, G37, and G40 exhibited positive Pi values (p>0) which might be expected to be able to exploit the most favourable environments.The genotype (G45) showed combined response to bi and S 2 di values suggesting that both linear and non-linear components were responsible for significant G X E interaction.Due to significant S 2 di, genotypes G11 G22, and G45 are unstable and as such prediction across the environment would not be possible.Considering all the three stability parameter it is observed that G1 and G26 were the most stable as they fulfilled the requirement, i.e. bi ≈ 1 and S 2 di ≈ 0. Number of spikes per plant Genotypes G1, G11, G24, G33 and G40 had positive phenotypic indices, therefore they were desirable for this character G1, G32, and G33 showed significant linear component (bi significant and S 2 di non-significant) for spikes/plant.Such a linear response is predictable.Both the regression coefficient (bi) and deviation from regression (S 2 di) were non-significant for G11, G22, G26, G40 and G45.So these genotypes are stable for both favorable and unfavorable environments.Genotype G40 had the lowest bi value (0.01) with non-significant S 2 di values indicates the genotype was suited only for poor environmental condition.Talking all the three stability parameter genotype G11 considered as stable and desirable for number of spikes per plant. 
Number of grains per spike This trait is considered to have higher variation due to variable size of spikes for most of the varieties/genotypes of wheat.The genotypes G24, G33, G37 and G40 exhibited positive phenotypic indices, so these genotypes were desirable for this character.G15, G18, G32 and G45 tended to be responsive to change in environments as affirmed by their high bi values (1.82, 1.68, 1.67 and 1.12, respectively).These genotypes could be exploited favorable environment.The genotypes G33 and G40 with lower bi values (0.3 and 0.39, respectively) can be considered less responsive than that of the previous four genotypes.The genotypes G11 and G15 had the significant S 2 di values, therefore, performance of these genotypes were somewhat unpredictable.Again the genotypes G15 showed combined linear and non-linear sensitivity indicating that both linear and nonlinear components were responsible for significant G X E interactions.While the genotypes G18 and G32 had significant bi values with non-significant S 2 di values indicating that genotypes are responsive and stable to favorable environment (bi >1).Considering all the three stability parameters, G37 having bi ≈ 1 an S 2 di ≈ 0 was most stable and desirable genotypes. 1000-grain weight This character is important as to indicate the size of the seed.The phenotypic indices of G11, G24, G33, G37 and G40 were positive.This indicates that these genotypes had 1000-grain weight higher than the average and are desirable.The genotype G15, G18, G26, G32, G37 and G45 exhibited higher bi values (bi >1) and were suitable only for highly favorable environments.All the others genotypes gave low bi values and thus indicated better performance to average environmental condition.The regression coefficient of genotypes G15, G18, G26, G32, G37 and G45 were also significantly different from unity with non-significant S 2 di values demonstrating their responsiveness to changing environments, which suggested that the performances of these genotypes were predictable in nature.All the other genotypes had the bi values, which were not significantly differed from unity.The genotypes G22 and G33 showed significant S 2 di values with non-significant bi values which indicate that they are more affected by the environmental fluctuations i.e. performance of these genotypes over environments (salinity level) were unpredictable.Considering all the three parameters (Pi, bi and S 2 di), the genotype G11 was found most stable among all the genotypes having bi value near to unity with non-significant S 2 di value. 
Grain yield per plant The most predictive parameter was the phenotypic index (Pi) of the individual genotypes.The genotypes G11, G22, G24, G33 and G40 exhibited positive Pi value.So these were desirable for this trait while the same for others were negative and therefore, are not desirable.However, it was the highest in G40 (Pi=2.49)while the lowest in G18 (Pi=-2.08).The genotype G32 showed the highest bi value (bi=1.6**)with non-significant S 2 di value, demonstrating their responsiveness to changing environments, which suggested that, the performance of the genotype was predictable in nature and was suited only for highly favorable environments.But the genotype G33 exhibited lower bi value (bi=0.57**)with non-significant S 2 di value indicating that this genotype was less responsive to environmental changes and were suited only for poor environments.All the genotypes possessed non-significant S 2 di values, indicating that their performance was predictable.Based on all the three stability parameters into consideration, it was observed that G11, G22 and G37 having bi ≈1 and S2di≈0 were most stable and desirable genotypes with reasonable good yield among the all. Conclusions Considering all the three stability parameter, genotype G11 was found most stable among all the genotypes for grain weight of wheat.The genotypes G11, G22, G24, G33 and G40 were most desirable for yield per plant. The genotype G32 showed more responsiveness to changing environment and was suited only for highly favorable environments.Based on three stability parameters, G11, G22 and G37 were the most stable and desirable genotypes with reasonable good yield among the all. Table 4 . Mean performance, phenotypic index (Pi), regression coefficient (bi), deviation from regression (S 2 di) of twelve Genotypes of wheat for number of grains per spike, 1000-grain weight and grain yield per plant. * and ** indicate significant at 5% and 1% level of probability, respectively
2018-12-30T17:04:51.318Z
2017-04-14T00:00:00.000
{ "year": 2017, "sha1": "1accf1ec0b1eff03e992975f0513b755c673af41", "oa_license": "CCBY", "oa_url": "https://www.banglajol.info/index.php/AJMBR/article/download/32034/21592", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "1accf1ec0b1eff03e992975f0513b755c673af41", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Biology" ] }
234667877
pes2o/s2orc
v3-fos-license
Clinical Efficacy and Safety of Artemisinin-based Combination Therapies in the Treatment of Uncomplicated Plasmodium Falciparum Malaria in Cameroon: A Systematic Review and Meta-Analysis from Individual Patient Data (2004-2020)

Background: Cameroon remains a high malaria endemic country. The rapid emergence and spread of Plasmodium falciparum resistant parasites compelled the World Health Organisation (WHO) to recommend the change from monotherapies to artemisinin-based combination therapies (ACTs). This study aimed to assess the clinical efficacy and safety of artemisinin-based combination therapies in the treatment of uncomplicated Plasmodium falciparum malaria in Cameroon from 2004 to 2020. Methods: The preferred reporting items for systematic review and meta-analysis (PRISMA) statement were adopted for the selection of studies. The heterogeneity of the included studies was determined using Cochrane Q and the I². The random effects model was used as standard to combine studies showing heterogeneity of Cochrane Q with P < 0.10 and I² > 50. Results: Out of the 4,920 articles and unpublished datasets screened, 16 records with a sample size of 3,737 participants on 8 generic ACTs fulfilled the inclusion criteria. The per protocol (PP) analysis pooled efficacy of the ACTs was 97.9 % (95 % CI, 97.2-98.7, P<0.01). Sub-group analyses were performed for ASAQ, AL and DHAP. The aggregated efficacies of ASAQ, AL and DHAP were 97.5 % (95 % CI, 96.3-98.8, P<0.01), 99.4 % (95 % CI, 98.6-100.0, P=0.39) and 98.0 % (95 % CI, 96.3-99.7, P=0.28), respectively. The pooled efficacies were above the WHO minimum benchmark of 90.0 %. The ACTs are well tolerated, and the common adverse events reported were asthenia, diarrhoea, abdominal pain, anorexia, nausea and vomiting, headache and dizziness. Conclusion: This study reported a high pooled efficacy for all ACTs. AL and DHAP were found to have higher cumulative efficacies than ASAQ. The ACTs are still efficacious and well tolerated for the treatment of malaria in Cameroon. However, there is a need for continuous monitoring of the efficacy of ACTs despite the high cure rates, as resistance may still emerge.

Background
Malaria is a major public health disease in Cameroon, with Plasmodium falciparum responsible for most of the cases [1]. In 2018, malaria accounted for 228 million cases and 405,000 related deaths worldwide [1]. Early diagnosis and treatment of clinical cases remain the main tools for the control of malaria in several regions of Africa [2]. In Cameroon, until 2002, the first-line recommended therapy for uncomplicated falciparum malaria was chloroquine (CQ), and later amodiaquine (AQ) monotherapy between 2002 and 2004 [3]. However, this policy was threatened by the emergence and spread of Plasmodium falciparum resistance to chloroquine (CQ), amodiaquine (AQ) and sulphadoxine-pyrimethamine (SP) in most malaria endemic countries. This resulted in major challenges to malaria control in sub-Saharan Africa [2,4]. Parasite resistance to monotherapies compelled WHO to recommend combinations of dual or triple therapies, which combine molecules with independent modes of action or distinct target enzymes [5]. Therefore, WHO has recommended the adoption and implementation of artemisinin-based combination therapies (ACTs) as the first-line treatment for malaria since the early 2000s in most countries with endemic P. falciparum malaria [6].
Five ACTs are currently recommended by WHO for the treatment of uncomplicated Plasmodium falciparum infection: artemether-lumefantrine (AL), artesunate-amodiaquine (ASAQ), artesunate-mefloquine (ASMQ), artesunate-sulphadoxine-pyrimethamine (ASSP) and dihydroartemisinin-piperaquine (DHAP) [7]. The basis for the use of ACTs relies on the rapid reduction of the parasite biomass, reduction of transmission (reducing gametocytes), protection of the partner drug against resistance, and rapid fever reduction [8]. In January 2004, Cameroon officially aligned with the recommendations of WHO and adopted artesunate-amodiaquine (ASAQ, 75 %), and later included artemether-lumefantrine (AL, 25 %) in 2006, as first-line treatment of uncomplicated malaria [9]. These drugs are distributed in those proportions in public facilities, while AL is relatively predominant within private health facilities and vendors [10]. A recent network meta-analysis (NMA) study on ACTs in Cameroon revealed that AL was more efficacious than ASAQ [11]. The advantage of the NMA approach is that it provides estimates of the effect of each intervention relative to each other [12]. However, the method adopted for the NMA study is not without limitations, namely: non-inclusion of observational studies, use of the intention-to-treat (ITT) approach, and non-adoption of individual patient data (IPD) in the quantitative syntheses [11]. The adoption of multiple first-line ACTs has the potential to delay the emergence of parasites resistant to the anti-malarials [13]. This study was done to provide additional information on the pooled per protocol (PP) efficacy of ACTs using IPD. Hence, this study aimed to assess the clinical efficacy and safety of artemisinin-based combination therapies in Cameroon from January 2004 to June 2020, with emphasis on the ACTs adopted for first-line treatment of uncomplicated falciparum malaria.

Methods/design
Searching strategies
Studies included in this review were selected using the preferred reporting items for systematic review and meta-analysis (PRISMA) statement [14]. A computerised systematic strategy based on key words was used to search articles from the PubMed/Medline, Google Scholar, and Science Direct databases. Both interventional and observational studies were retrieved for inclusion in the review using the following MeSH search terms: 'Cameroon AND malaria AND artemether-lumefantrine', 'Cameroon AND malaria AND artesunate-amodiaquine', 'Cameroon AND malaria AND artesunate-mefloquine', 'Cameroon AND malaria AND dihydroartemisinin-piperaquine', 'Cameroon AND malaria AND artesunate-sulphadoxine-pyrimethamine', 'Cameroon AND malaria AND artesunate-atovaquone-proguanil', 'Cameroon AND malaria AND artesunate-sulphamethoxypyrazine-pyrimethamine', and 'Cameroon AND malaria AND artesunate-chloguanil-dapsone'. Additional information on clinical trials was also obtained from the libraries and from researchers at the Universities of Yaounde I, Buea, Douala, Dschang and Bamenda. The library of the Catholic University of Central Africa, based in Yaounde, Cameroon, was also consulted. Moreover, information was obtained from the OCEAC Bulletin and from the National Malaria Control Programme (NMCP) and Ministry of Public Health annual reports. Furthermore, studies conducted in any of the four sentinel sites in Cameroon were sought from the NMCP or from individual researchers who provided this information voluntarily. In addition to published studies, unpublished thesis reports were accessed for inclusion in the study.
Inclusion criteria
The following studies were included in this systematic review and meta-analysis: original articles of studies that investigated at least one ACT in the treatment of uncomplicated falciparum malaria; published studies whose study periods fell between January 2004 and June 2020; studies written in English or French; and all multi-centric studies in which Cameroon was one of the sites. The population, intervention, comparator, outcome (PICO) format was used to select and include studies (Additional file 1). The primary objective of this review was to assess the efficacy of ACTs, measured as treatment success at days 14, 28, 42 or 63, for uncomplicated malaria caused by Plasmodium falciparum, while the frequency of adverse events (AEs) was the secondary objective. AEs were defined as 'signs and symptoms that first occurred or became more severe post-treatment' or as 'a sign, symptom, or abnormal laboratory value not present on day 0, but which occurred during follow-up, or was present on day 0 but became worse during follow-up'. Serious adverse events were defined according to International Conference on Harmonisation (ICH) guidelines. Studies included in this review are shown in Additional file 1.

Non-inclusion criteria
The following papers were excluded from this systematic review and meta-analysis: studies that used artemisinin monotherapies or non-artemisinin monotherapies; non-artemisinin combination therapies; studies that assessed malaria treatment outcomes at times less than 14 days; and studies with PCR-unadjusted cure rates. Studies that were excluded from this review are shown in Additional file 1.

Review process
All of the research articles identified from searches of the electronic databases were screened for eligibility based on title and abstract. Ineligible articles and duplicates were removed using Zotero standalone software version 5.0.56. Full-length articles of the selected studies were read to confirm fulfilment of the inclusion criteria before data extraction began. Two reviewers (PTNN and CMM) independently screened the titles and abstracts to identify potentially eligible studies, and data were extracted from full-length articles that fulfilled the inclusion criteria (Figure 1). Discrepancies were resolved by mutual consent after discussion and independent review by a third researcher (AMN). WFM reviewed the whole process.

Data extraction procedure
Data on the type of study design (observational versus interventional), the year the studies were conducted, the duration of study, and the geographic location of the study area were first extracted. Participants' age ranges were then extracted. Finally, data regarding the types of anti-malarial treatments, treatment outcome measures (including treatment success rates and treatment failure rates), treatment duration, and adverse events (AEs) were extracted for inclusion in the systematic review and meta-analysis.

Methodological quality assessment and sensitivity analysis
The quality of the reviewed studies was assessed through sensitivity analysis, which classified the included studies into high quality and low quality according to the modified Jadad scale for randomised controlled trials (RCTs) [15] and the strengthening the reporting of observational studies in epidemiology (STROBE) statement for observational studies [16].
The modified Jadad scale assesses the quality of a trial with a range from 0 to 8 (randomisation and its appropriate use, blinding and its appropriate use, withdrawals and dropouts, description of inclusion and exclusion criteria, assessment of adverse effects, and description of statistical analysis). A modified Jadad score in the range 0-3 represents low or poor quality, and a score in the range 4-8 represents good to excellent quality. Observational studies were categorised as low quality with a score under 75% of the STROBE checklist and high quality with a score over 75% of the STROBE checklist. The reviewers independently assessed the methodological quality of the included studies.

Assessment of treatment outcomes

Treatment outcome was assessed as treatment failure and treatment success. The outcomes of all the studies included in this review were assessed and analysed on the 14th, 28th, 42nd and 63rd day of treatment. Treatment failure included early treatment failure (ETF), late parasitological failure (LPF) and late clinical failure (LCF). The indicator for treatment success was adequate clinical and parasitological response (ACPR). ACPR was defined as the absence of parasitaemia by the end of treatment (day 14, 28, 42 or 63), irrespective of axillary temperature, without previously meeting any of the criteria for early treatment failure, late clinical failure or late parasitological failure [17-20]. Treatment success was defined based on PCR genotyping according to the current World Health Organization (WHO) recommendation.

Publication bias

Publication bias was assessed using a funnel plot, with the standard error of each study plotted against its effect size (Additional file 2). The Egger test was also used to assess publication bias.

Data analysis and heterogeneity assessment

The traditional meta-analysis that estimates a common effect of the same interventions A, B and C by pooling individual patient data (IPD) from various studies was adopted. The R software package version 3.5.2 was used to carry out all the meta-analyses of malaria treatment efficacy. The heterogeneity of the included studies was investigated using the Cochrane Q statistic and I². The random effects model was used as standard to combine studies showing heterogeneity, with Cochrane Q at P < 0.10 and I² > 50% [21]. Heterogeneity was classified as low (0-49%), moderate (50-74%) or high (75-100%).

Ethical considerations

The PRISMA guideline recommendations were strictly followed in carrying out this systematic review and meta-analysis. Ethical approval was not needed since this is a systematic review and meta-analysis.

Study identification and selection process

A computerised systematic strategy was used to identify and screen articles from the PubMed, Google Scholar, and Science Direct databases for eligibility (Figure 1).

Qualitative synthesis

The search of studies published from 2004 to 2020 identified 13 articles relevant to the topic under review [22-34], of which 10 were RCTs [22-30, 33, 34] and 3 were non-comparative clinical trials without randomisation [31, 32] (Additional file 1). Data from 3 unpublished studies conducted by the Cameroon National Malaria Control Programme and independent researchers were also included in this review.
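To make the pooling and heterogeneity steps named above concrete, the sketch below shows how a DerSimonian-Laird random-effects pooled estimate, Cochran's Q and I² can be computed from study-level estimates and standard errors. It is a minimal illustration only: the review itself pooled individual patient data in R, the study values here are hypothetical, and in practice proportions would usually be logit- or arcsine-transformed before pooling.

```python
import numpy as np

def random_effects_pool(estimates, std_errors):
    """DerSimonian-Laird random-effects pooling of study-level estimates.

    Returns the pooled estimate, its standard error, Cochran's Q,
    I^2 (%), and the between-study variance tau^2.
    """
    y = np.asarray(estimates, dtype=float)
    v = np.asarray(std_errors, dtype=float) ** 2   # within-study variances
    w = 1.0 / v                                    # fixed-effect weights

    # Fixed-effect pooled estimate and Cochran's Q.
    y_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fixed) ** 2)
    df = len(y) - 1

    # I^2: share of total variability attributable to between-study heterogeneity.
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

    # DerSimonian-Laird estimate of tau^2.
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)

    # Random-effects weights and pooled estimate.
    w_re = 1.0 / (v + tau2)
    y_re = np.sum(w_re * y) / np.sum(w_re)
    se_re = np.sqrt(1.0 / np.sum(w_re))
    return y_re, se_re, q, i2, tau2

# Hypothetical day-28 PCR-corrected cure proportions from five studies.
cure = [0.97, 0.99, 0.92, 0.98, 0.96]
se = [0.015, 0.010, 0.025, 0.012, 0.018]

pooled, se_pooled, q, i2, tau2 = random_effects_pool(cure, se)
label = "low" if i2 < 50 else "moderate" if i2 < 75 else "high"
print(f"pooled cure rate = {pooled:.3f} +/- {1.96 * se_pooled:.3f}")
print(f"Q = {q:.2f}, I^2 = {i2:.1f}% ({label} heterogeneity), tau^2 = {tau2:.5f}")
```

The low/moderate/high labels follow the same cut-offs used in the review (0-49%, 50-74%, 75-100%).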
A total of 39 studies from 16 records, involving the same or different ACTs, fulfilled the inclusion criteria and were included in this systematic review and meta-analysis, with a total sample size of 3,747 participants and individual study sizes starting from a minimum of 48 patients [22].

Fever and parasite clearance by ACTs

Fever temperature measurements were mostly axillary and on average were above 38 °C. Fever was rapidly cleared by the ACTs, with low proportions of patients still febrile on D1 (17%) and by D3 (5%) (Additional file 5). Some authors did not measure fever clearance on subsequent days post drug administration and only chose D3 for this clinical measurement. Starting parasitaemia, expressed as geometric mean densities, varied between 3,800 and 42,000 parasites per µl. Parasite clearance was very rapid, sometimes dropping to 7-8% on D1, but on average stayed at 30-50%. Parasite clearance times were similar across study arms in most studies. Nji et al. observed that there appeared to be a study-site effect on parasite clearance between Garoua and Mutengene [30]. Although not significant, most participants at the Mutengene site (75%) (Figure 8) did not clear their parasites, as measured by microscopy, by the end of D1 post-treatment, compared to participants in Garoua (32%). Comparing parasite clearance across the different treatment arms and sites did not show any significant difference by D3 post-treatment (p > 0.05). Overall, the proportion of patients with parasites on D2 was much lower than on D1, and even lower on D3 (Figure 8). Apinjoh et al. found a high occurrence of parasitaemia of 17% and 35% in the ASSP and ASAQ arms, respectively, in Buea-Tole on D28 [33], a study in which an efficacy of ASAQ of about 92% was recorded. The average parasite clearance did not take into account the two studies with outlier figures (Figure 8 and Figure 9). Only two of these studies measured parasite levels up to D42 or D63 [30,32]. It should be noted that parasitaemia tended to re-occur between D14 and D63.

A total of 12 (76.5%) articles and unpublished datasets reported adverse events [23,24,26,28-31,33]. The ACTs were well tolerated with few adverse events, the most commonly reported across all studies being asthenia, diarrhoea, abdominal pain, anorexia, nausea, vomiting, headache and dizziness. Out of the 12 studies, 5 reported severe adverse events, which included 1 jaundice, 1 haemoglobinuria, 3 anaemia, 1 convulsion, 1 severe fatigue, 3 severe malaria and 3 deaths (Additional file 5). A three-arm randomised non-inferiority controlled trial to assess the efficacy and safety of ASAQ, DHAP and AL was conducted in Mutengene and Garoua from 2009 to 2013. The authors demonstrated that the frequency of adverse events such as vomiting, cough, rashes, and anorexia was slightly higher in the groups of participants on ASAQ and DHAP. The drugs did not differ with respect to the type of AEs (all p values > 0.05). Although there was no statistically significant difference (P = 0.09) in the occurrence of all AEs when comparing the trial drugs, ASAQ (35.5%) and DHAP (37.9%) had higher numbers of AEs than AL (27.5%). One serious AE occurred, involving a child who experienced severe fatigue after AL ingestion [30]. In another study, in Buea, it was observed that while administering ASAQ and SPAS, at least one adverse event (AE) not present on admission was reported in 69.2% (117/169) of patients across both treatment groups during the post-treatment period.
These were probably related to the study drug and were mainly mild or moderate in intensity. The most frequent AEs were cough, dizziness, fatigue, catarrh, and gastrointestinal disorders (nausea, abdominal pain, and diarrhoea). A total of 36 (43.4%) and 81 (94.2%) patients in the AS/SP and AS/AQ groups, respectively, experienced AEs by day three. By day seven, the number of patients with AEs had reduced to two (4.5%) and eight (17.7%) in the AS/SP and AS/AQ groups, respectively [33].

Quality assessment and sensitivity analysis

The articles included in this review were of high quality according to the modified Jadad scale for randomised controlled trials (RCTs), with values ranging from 5 to 8 (Additional file 3), and according to the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) statement for observational studies, with scores ranging from 86% to 91% (Additional file 4). There was no need for sub-group analysis because of the high quality of the included studies.

Discussion

This systematic review and meta-analysis aimed to assess the pooled clinical efficacy and safety of artemisinin-based combination therapies from individual participant data over the 16 years since their adoption and use for the treatment of uncomplicated falciparum malaria in Cameroon. We demonstrated that the cure rates of most ACTs were above the WHO minimum limit of 90%, with a cumulative value of 97.9%. This observation is in agreement with the cure rate of 98.0% reported in a meta-analysis of ACTs used in Sudan [35], but higher than the efficacy of 92.9% recorded in Ethiopia [36]. However, 2 studies, on artesunate-chlorproguanil-dapsone (AS-CD) and artesunate-sulphadoxine-pyrimethamine (AS-SP), failed to meet the WHO day-28, PCR-adjusted cut-off of >90% efficacy [27,33]. Despite the high success rates of ACTs, resistance to anti-malarial drugs poses a major threat globally; if the parasites develop resistance to these anti-malarial regimens, treatment would inevitably become more difficult and less successful, and high rates of relapse could result from multidrug-resistant malaria. Therefore, monitoring anti-malarial drug efficacy is important to enable early detection of emerging drug resistance before it spreads through most of the parasite population, as happened with chloroquine, sulphadoxine-pyrimethamine, and amodiaquine monotherapies in Cameroon [2,37,38]. Moreover, there is no guarantee of how long the currently used anti-malarial drugs will remain effective, given the evidence that previous monotherapies were associated with higher rates of treatment failure in P. falciparum-infected patients [2,39]. This calls for concerted action in the search for new alternative anti-malarial drugs to treat malaria in the near future. The efficacy of ASAQ, the first-line treatment adopted in 2004 for the treatment of uncomplicated falciparum malaria in Cameroon, was 97.5%. In contrast, a slightly lower value of 93.9% was recorded in a multi-centric study on the pooled efficacy of ASAQ in sub-Saharan Africa [40]. Moreover, the efficacy of AL, the parallel first-line drug for the treatment of uncomplicated falciparum malaria in Cameroon, was 99.4%. This is in concordance with the 97.3% (95.9-98.3%, 95% CI) treatment success in children with uncomplicated malaria recorded in the pooled analysis of data from seven studies supported by Novartis [41]. This suggests that treatment success with AL in uncomplicated malaria patients in Cameroon is in line with that of other highly malaria-endemic countries. The aggregated efficacy of DHAP was 98.0%.
AL and DHAP were found to have higher pooled efficacies than ASAQ. This finding is in agreement with those of different studies on the efficacy of ACTs assessed using the network meta-analysis approach [11,12,42]. DHAP has also been shown to be highly efficacious in clearing malaria parasites among human immunodeficiency virus (HIV) patients in Malawi and Mozambique [43]. The standard uncomplicated malaria treatment guideline for Cameroon recommends a three-day administration of ASAQ or AL depending on the weight and age of the patient [44]. Based on the present evidence, it would be advisable to consider DHAP in the treatment of malaria, especially among HIV patients concurrently taking efavirenz- or nevirapine-based antiretroviral therapy, in whom ASAQ and AL may be contraindicated. It was observed that the efficacy of ASAQ declined from an original value of 100% to a current value of 91.8%, while that of AL declined from an original value of 100% to a current value of 96.7% over time. It was also shown that the efficacy of ASAQ declined faster in some ecological zones than in others. The greatest dip in ASAQ was noticed in the Littoral ecological zone. This change could be due to the decline in the efficacy of ASAQ in Buea [33]. However, it is important to note that the study in Buea was among the first studies to evaluate the efficacy and safety of ACTs in real time. Most studies included in the present review achieved a rapid reduction of fever and parasitaemia between D0 and D3 of assessment. The majority of these studies treated patients with ASAQ, AL and DHAP. A previous aggregate study on the clinical predictors of early parasitological response to ACTs (ASAQ, AL and DHAP) in African patients with uncomplicated falciparum malaria confirmed the rapid decrease of the parasite positivity rate (PPR) from 59.7% (95% CI: 54.5-64.9) on day 1 to 6.7% (95% CI: 4.8-8.7) on day 2 and 0.9% (95% CI: 0.5-1.2) on day 3 [45]. Some studies showed delayed clearance on D1, with 75% persistence of parasites in Mutengene/Garoua [30], and 35% parasitaemia on D28 in Buea-Tole [33]. It should be recalled that Buea is in a zone bordering Limbe and Mutengene, which are sites of high chloroquine resistance [2,46]. The ACTs ASAQ, AL and DHAP are well tolerated in spite of a few adverse events, such as asthenia, diarrhoea, abdominal pain, anorexia, nausea, vomiting, headache and dizziness, reported across different studies. These ADRs were not serious enough to warrant discontinuation of anti-malarial treatment, except for individuals in studies that reported serious adverse events such as jaundice, haemoglobinuria, anaemia, convulsion, severe fatigue, severe malaria and death. In the current review, 3 study participants died during treatment [26]. These deaths were not attributed to the study drugs in the patients with uncomplicated malaria included in the study. Similarly, systematic reviews and meta-analyses conducted in Ethiopia and Sudan reported comparable adverse events when patients with uncomplicated falciparum malaria were administered ACTs [35,36,47].

Strengths and limitations of the study

The current study has several strengths. A total of 39 studies with the same or different ACTs, derived from 13 published articles and 3 unpublished studies, were included, giving a total of 3,747 study participants. The study assessed the efficacy of the most commonly used anti-malarial drugs: ASAQ and AL. Study outcomes were measured both clinically and parasitologically.
Most studies evaluated the comparative efficacy of different anti-malarial medications. However, the current study is not without limitations. The study included only mono-infection with P. falciparum, with no available data on the other Plasmodium species. Moreover, not all studies reported AEs to anti-malarial drugs. Furthermore, fewer studies were carried out in the Northern Regions of the country compared to the Southern Regions.

Conclusion

The present systematic review and meta-analysis reported a high overall efficacy of ACTs (97.9%). The standard regimens ASAQ, AL and DHAP showed high cure rates of 97.5%, 99.4% and 98.0%, respectively. A number of adverse drug events, such as asthenia, diarrhoea, abdominal pain, anorexia, nausea, vomiting, headache and dizziness, were encountered after the administration of ACTs. However, the ADRs were not serious enough to warrant discontinuing anti-malarial treatment except in a few patients who experienced severe adverse events. ACTs in Cameroon are still efficacious and well tolerated, albeit with a slight decline in the efficacies of ASAQ and AL over time. There is a need for continuous monitoring of the efficacy of ACTs despite the high cure rates, as resistance seems inevitable, given that cases of anti-malarial drug resistance have been reported in some areas of the world.

Ethics approval and consent to participate

Not applicable, because this is a systematic review and meta-analysis.

Consent for publication

Not applicable.

Availability of data and materials

All data and materials used for the analysis of this systematic review and meta-analysis are included in this write-up and the additional documents.

Competing interests

The authors declare that they have no competing interests.
Odontogenic keratocysts in Gorlin–Goltz syndrome: how to manage?

Odontogenic keratocysts are a frequent manifestation of Gorlin–Goltz syndrome and can be its first sign, mainly in young patients. There are two methods for the treatment of KCOT, a conservative one and an aggressive one. A more careful approach to the syndrome is needed, as there is a high chance of malignant change owing to improper management of the syndrome.

*Correspondence to: Khalfi Lahcen, Department of Stomatology and Maxillofacial Surgery, 'Mohammed V' Teaching Armed Forces Hospital, Morocco, E-mail: khalfi.l@hotmail.fr

Introduction

Gorlin-Goltz syndrome is an autosomal dominant disorder characterized by a predisposition to neoplasms and other developmental abnormalities [1]. In 1960, Robert James Gorlin and William Goltz described Gorlin-Goltz syndrome as a condition comprising the principal triad of multiple basal cell carcinomas, odontogenic keratocysts (OKC) and skeletal anomalies [2,3]. This syndrome has numerous names, including basal cell nevus syndrome, multiple BCC syndrome and the fifth phacomatosis [4]. The tumor suppressor gene called Patched (PTCH), located on chromosome 9q22.3, has been identified as the cause of Gorlin-Goltz syndrome [5,6]. Mutations in this gene result in loss of control of several genes known to play a role in organogenesis, carcinogenesis and odontogenesis, thus resulting in the development of GGS [7-10]. The prevalence is about 1/60,000 live births, and it has both sporadic and familial incidence [11]. This syndrome affects males and females equally and is seen during the first, second and third decades of life [12]. In this manuscript we report a case of Gorlin-Goltz syndrome, focusing on the management and treatment protocol of odontogenic keratocysts, with a review of the literature.

Case report

A 22-year-old female patient was referred to our Department of Oral and Maxillofacial Surgery by a dentist because her orthopantomogram showed multiple well-defined radiolucencies (Figure 1). One lesion was in the left ramus, associated with the impacted tooth 38, of about 2 cm × 2 cm; another lesion was in the right body of the mandible, from 47 to 45, of about 4 cm × 2 cm; and the last one was in the left body of the mandible, from 37 to 36, of about 3 cm × 2 cm. She had reported to the dentist with a chief complaint of swelling on the left side of the lower jaw. It had started as a small swelling that increased in size over 10 months. The patient was the fifth child of non-consanguineous parents, and there was no familial history of similar lesions. Clinical examination revealed dysmorphic facial features, including mild macrocephaly, frontal bossing, hypertelorism, multiple nevi on the face, and suspected nodular lesions on the left lower eyelid and on the right side of the forehead (Figure 2). Other examinations were also performed: a skull radiograph showed calcification of the falx cerebri (Figure 3) and bridging of the sella turcica. Under general anesthesia, an extended Ward's incision was placed in the left ramus region. Cyst enucleation and surgical removal of tooth 38 were performed. On the right body of the mandible, a crevicular incision with a relieving incision was placed from 48 to 45 and cyst enucleation was performed. Carnoy's solution was applied to the exposed bony walls of the cystic cavities for 3 min, with preservation of the inferior alveolar neurovascular bundle, and a charring effect was achieved. The cystic cavities were then irrigated thoroughly with normal saline and closure was done with 3-0 Vicryl.
Simultaneously, the tumors on the forehead and on the lower eyelid were removed, and the resulting defects were repaired with local flaps (Figure 4). The specimens were sent for histopathological examination. The histopathological report confirmed the presence of KCOT and basal cell carcinoma in our patient. Based on the clinical, radiographic and microscopic data, the diagnosis of Gorlin-Goltz syndrome was established. New bone formation sites were identified at the three-month radiological follow-up. The patient has been followed up on a regular basis for the past 6 months without evidence of any recurrence. In addition, molecular genetic studies confirmed a PTCH1 germline mutation. The patient and her parents are aware of the importance of regular examination.

Discussion

According to the clinical criteria of Kimonis et al. [13] (Table 1), the diagnosis of NBCCS requires the presence of two major criteria, or one major and two minor criteria. KCOTs are among the most consistent and common features of Gorlin-Goltz syndrome. They are found in 65 to 100% of affected individuals [14]. Clinically, the lesions are characterized by aggressive growth and a tendency to recur after surgical treatment. The mandible is involved more frequently than the maxilla, and the posterior regions are the most commonly affected sites [15]. There are two methods for the treatment of KCOT, a conservative one and an aggressive one. In the conservative method, simple enucleation with or without curettage, and marsupialization, are suggested. Aggressive methods include peripheral ostectomy, chemical curettage with Carnoy's solution, and resection [16]. Radical interventions, such as enucleation with shaving of the surrounding bone or sometimes resection, might contribute to preventing recurrence and improving the prognosis [16,17]. The aggressive method should be applied in the following cases: 1) when KCOT recurs after a conservative method; 2) in cases of multilocular (multilobular) aggressive intraosseous KCOT; 3) in a diagnosed KCOT exhibiting particularly aggressive clinical behavior (e.g. growth, destruction of adjacent tissues) that requires resection as the initial surgical treatment [18]. In our case, since the KCOT was multilocular but without aggressive behavior, a conservative method was appropriate, although some authors believe that simple enucleation might be the most appropriate conservative method for the treatment of KCOT [18]. Application of Carnoy's solution into the cyst cavity for 3 min after enucleation results in a lower rate of recurrence (0-2.5%) without any damage to the inferior alveolar nerve [19,20]. Although benign, KCOT has a high recurrence rate after excision, ranging from 12% to 62.5%, which is due to a higher rate of proliferation of the epithelial lining [21,22]. Regular follow-up by a multi-specialist team should be offered. An annual dental panoramic radiograph is usually suggested between the ages of 8 and 40 years to aid in monitoring recurrence or the development of new KCOT [23,24]. Moreover, it is of great importance to perform a dermatological examination every 3-6 months, with removal of any basal cell nevus showing evidence of growth,
Mapping Class, Gender and Race in Resistance Literature

Salt of the Earth, the film made in 1953 by a group of blacklisted filmmakers who were among the best and brightest Hollywood talent of the day (including screenwriter Michael Wilson, director Herbert Biberman, and producer Paul Jarrico), is the only blacklisted film in US history, a result of that country's anti-communist hysteria of the 1950s. Featuring a Mexican American family (Ramon and Esperanza Quintero) stuck in the middle of a bitter mine strike, the movie tells the story of miners fighting against a giant company, of Chicanos and Anglos, and of miners and their wives, or as Wilson says with mixed excitement and surprise, "It's a story of the people and the conflict is very complex. There are battles for equality taking place here on so many levels I am hardly able to unskein them yet myself" (Biberman, 39). As Bernard Dick notes in his in-depth study of the Hollywood Ten, "In the long run, what may be significant is not the film [Salt of the Earth] itself but its history" (Dick, 70). This is a quite pertinent observation. But I would like to add that what may be significant is not only its history, but also the fact that the film itself is a piece of history. In fact, I should give credit to George Lipsitz, whose idea I borrowed and applied to the Salt of the Earth case. The method he employed to study the working class in his highly acclaimed book Rainbow at Midnight: Labor and Culture in the 1940s is, to state it in a simplified way, placing mass culture in a certain historical moment and examining its relevance to that particular time, thereby gaining knowledge about labor history. By drawing theoretical resources from George Lipsitz and taking Salt of the Earth as a case study of the cultural representation of resistance, this paper attempts to address the important question of "the indivisibility of equality"® raised by the movie's creators (Biberman, 39).

Race

Blacklisted by the Hollywood studios, a number of blacklisted film artists formed their own production company in 1951 and sought to "make a movie about real-life people; traditionally unintimidated Americans" when told about a strike by Local 890 of the Mine, Mill and Smelter Workers Union against the New Jersey Zinc Company in Bayard, New Mexico (Biberman, 2). [...] them unless they overcome their own against their women. A people will either unify itself for the real struggle-or fail in it. That's it! The theme: The indivisibility of equality. The story: A husband's struggle to accept as his equal the wife he loves. A wife's insistence that love include respect. The resolution: The women lead the men to victory on all fronts because in social struggle they call on and embrace every soul in their community-the men included. (Italics in original, Biberman, 40) The filmmakers' cinematic and political convictions are evident in this quotation: the institutionalized social hierarchies are so tied together and systematically related that any resistance should be multidimensional, based on the "indivisibility of equality", and any individual resistance should be incorporated into people's collective struggles. Therefore, the filmmakers put the story within a framework of multidimensional social categories including class, gender and race.
1) Narrating Class

First of all, the class paradigm is important in the study of labor and working-class history, since this is certainly not a classless society.® Narrations of class conflict and the working-class consciousness of Mexican American workers employed at very minimal pay run through Salt of the Earth. The film is about a highly political subject: a controversial strike by New Mexico zinc miners, whom the general public viewed as either Communists or Communist-influenced. In fact, during the union meeting discussing whether to continue the strike or not, a union leader said, "I just want to say this - no matter which may decide, the International will back you up", which states explicitly the communist influence on the union members (Biberman, 343). [...] "can't offer": inequality resulting from class difference can be seen in the union leader's words (Biberman, 322). Communism as an ideology and political practice breaks down social hierarchies drawn along the class line and aims at the reorganization of all the people and equality for all the people. It is an intelligent, powerful, broad force, appealing and accessible to the working people, capable of drawing them together for mass struggles. If "greater equality is achieved by collective action," the Communist Party can be a good social organization (Ryan, 164). "The Communist Party itself ... was the most successful multiracial class struggle political organization ever built on the US Left, even though its legacy leaves at least as many problems to haunt us as it does admirable achievements to inspire", writes Alan Wald, who argues for "the indispensability of social organization" (Wald, 6).

2) Narrating Race

Because the film is about a Mexican-American community, the story focused on a Chicano community at a time when attitudes about Chicanos were changing. Throughout the Great Depression, official attitudes toward Mexican immigration and transborder migration had grown increasingly hostile, as Anglos clamored in the depressed economy to take jobs that had traditionally belonged to Mexican immigrants. Throughout the 1940s and 1950s, the movement towards closing the porous border at the Rio Grande had culminated in "Operation Wetback" in 1953, a government program designed to find and deport illegal Mexican aliens. These tensions were made more complex by the fact that many "Mexican-American immigrants" had, in fact, been on their lands longer than those lands had been a part of the United States, becoming U.S. citizens by virtue of the Treaty of Guadalupe Hidalgo that ended the Mexican-American War in 1848. Esperanza begins the story by talking about the town as it used to be: "This is my village. When I was a child, it was called Marcos. The Anglos changed the name to Zinc Town. Zinc Town, New Mexico, U.S.A. ... Our roots go deeper in this place, deeper than the pines, deeper than the shaft. ... In these arroyos my grandfather raised cattle before the Anglos ever came" (Biberman, 315). This is a powerful narration. The rapidity of corporate capitalist development took the town from the local Mexican people and renamed it, turning the grandchildren of free cattle raisers into an underclass of unfree wage laborers. A counterproof of the film's successful representation of political and racial issues is Congressman Donald L. Jackson's attack on the film as a "new weapon for Russia ... deliberately designed to inflame racial hatreds, to depict the US as the enemy of all colored people" (Biberman, 86).
3) Narrating Gender

The film Salt of the Earth is also a movie of female testimony to the sophistication and power arrayed against women, and to their evolution from men's subordinates into their allies and equals. It tells the story of a Mexican American family stuck in the middle of a bitter strike. Against a backdrop of social injustice, a riveting family drama is played out by a miner and his wife. (5) In the course of the strike, Ramon and Esperanza find their roles reversed: an injunction against the male strikers moves the women to take over the picket line, leaving the men to domestic duties. The women found themselves on the picket line being attacked by force, arrested in droves. When Esperanza shouted at her husband, "You can't win this strike without me! You can't win anything without me", she had come to know clearly the role women played in this cross-gender strike: "And so they came, the women ... they rose before dawn and they came, wives, daughters, grandmothers ..." (Biberman, 367, 351). Wald sees cross-cultural relationships "founded more on common feelings about the reigning US social order than ever before" (Wald, 160). Biberman's memoir well illustrates the Leftist intellectuals' "common feelings" towards working people: "Culturally and socially, as well as politically and economically, vast numbers of our American people had been blacklisted for centuries. Had they not been, we [...]" And the movie he went to make on the desert of New Mexico came out to be "a triumph of determination and dedication" (Cole, 356). This serves as a telling example of Alan Wald's observation on radical cultural activism: "On the whole, this generation's [of Marxist cultural workers] devotion to social changes at considerable risk to their personal well-being may not have been matched by their successors" (Wald, 3). Cultural representation is not just literature; it is itself history. It is also a kind of agency through which Leftist cultural workers affected and changed the politics of the nation. When commenting on radical culture and politics, Alan Wald takes the popularity of Salt of the Earth in the 1960s as "a notable instance" of the survival of the Communist cultural tradition "even after the near destruction of the Communist Party as a credible political force in 1956-58" (Wald, 3).

The Matrix of Domination: Intersecting Class, Race, and Gender

In their preface to Race, Class and Gender: An Anthology, the editors propose a new approach to studying social structure, "the approach of a matrix of domination", by which they mean a way of interrogating social structural patterns on "multiple, interlocking levels of domination that stem from the societal configuration of race, class, and gender relations" (Anderson et al., xi). In the same vein, some social historians advocate "to write a rich history that complicates categories, suggesting how class, gender, race and/or ethnicity combine across a wide range of economic and social landscape" (Boris and Janssens, 1). Made nearly fifty years before the publication of these books, Salt of the Earth is such "a rich history", the history of "the Mexican-American's struggles for equality on all levels" (Biberman, 2). Not only are the issues of class, gender and race all well represented, but they are represented as intersecting and interrelated. The following is an analysis of the matrix of domination in Salt of the Earth from three aspects: racializing class, classifying race, and engendering the labor movement.
1) Racializing Class

Racialization in the American South grew out of the intersection of imperialistic expansionism, class formation and industrialization. The social relations of New Mexico's industry helped to produce the class-based characteristics deemed so crucial in the construction of the racial categories of "Anglo" and "Mexican". These two racial designations to some extent become class markers: "Mexican" stands for the underclass, while an "Anglo", even an Anglo miner, is better off. As A. Yvette Huginnie puts it in an astute way, "Class relations was naturalized in racial categories" (Huginnie, 37). This tendency is tellingly expressed in the film when Esperanza tries to describe her husband's subordinate position: "The Anglo bosses look down on you, ... 'Stay in your place, you dirty Mexican' - that's what they tell you" (Biberman, 367). By naturalizing class hierarchies in terms of race, social inequality is then seen to correspond with seemingly biologically inherited racial differences. This is perhaps why "US labour history needs to utilize better 'race' as an historically contingent factor affecting class relations" (Huginnie, 49). But it is important to note that Salt of the Earth does not just draw a dividing line between Mexican/Anglo and labor/capitalist. When Ramon at the union conference demanded that "We want equality with Anglo miners - the same pay, the same conditions", he was reminded that "discrimination hurts the Anglo too" (Biberman, 322). And when he showed hostility to Anglo workers, he was told, "You lump them together - Anglo workers and Anglo bosses" (Biberman, 342).

2) Classifying Race

I want to begin the discussion with a quotation from the script. The dialogue happens when Ramon protests to the foreman that workers should work in pairs for their safety's sake.

Foreman: "You work alone, savvy? You can't handle the job, I'll find someone who can."
Ramon: "Who? A scab?"
Foreman: "An American." (Biberman, 319)

The answer of the foreman confused me until I read an interesting research paper, "Mexican Labour" in a "White Man's Town" by A. Yvette Huginnie, about the usage of "American" in Southwestern America: "Hinting of the imperialism and conquest which lay at the heart of US acquisition of the southwest, the term 'American' was used as a racial designator as opposed to indicating nationality. It fused nationality and whiteness, granting those of 'fair complexion' legitimacy within the region while delegitimizing those of darker complexion regardless of nationality. It associated, on the one hand, conquest with racial superiority, and, on the other hand, defeat with racial inferiority" (Huginnie, 35-36). "Race has seldom confronted workers as a simple dichotomy of black and white"; David Montgomery argues for the examination of "Mexicans who had long inhabited the conquered region or who moved into it after it had been annexed by the US" (Montgomery, p. 2).

® Bernard F. Dick in Radical Innocence offers a brief critique of the term: "The screenplay ... is a textbook illustration of the reduction of an action to a single theme: 'the indivisibility of equality'" (Dick, 78).
@ See's challenging essay "The Classless Society", in which he uses analytical categories other than class to call into question the effectiveness of class as a category in the US: "although the poverty-caused misery of the American masses has by no means been eliminated, it is so dispersed and scattered among various segments of the population that it does not constitute a fundamental and unifying issue to mobilize the masses of the people in struggle" (43).

® The exclusive employment of men in the industry, whether as managers, foremen, miners or laborers, points to gendered aspects of these racial and class categories and relations.

@ Bernard Dick notes this as a great difference between Salt of the Earth and other films documenting or dramatizing strikes: "the conventional voice-over narration ... is an off-camera voice belonging to none of the characters, whereas Salt of the Earth is narrated by the heroine" (Dick, 77-78).
Facial asymmetry tracks genetic diversity among Gorilla subspecies

Mountain gorillas are particularly inbred compared to other gorillas and even the most inbred human populations. As mountain gorilla skeletal material accumulated during the 1970s, researchers noted their pronounced facial asymmetry and hypothesized that it reflects a population-wide chewing side preference. However, asymmetry has also been linked to environmental and genetic stress in experimental models. Here, we examine facial asymmetry in 114 crania from three Gorilla subspecies using 3D geometric morphometrics. We measure fluctuating asymmetry (FA), defined as random deviations from perfect symmetry, and population-specific patterns of directional asymmetry (DA). Mountain gorillas, with a current population size of about 1000 individuals, have the highest degree of facial FA (explaining 17% of total facial shape variation), followed by Grauer gorillas (9%) and western lowland gorillas (6%), despite the latter experiencing the greatest ecological and dietary variability. DA, while significant in all three taxa, explains relatively less shape variation than FA does. Facial asymmetry neither correlates with tooth wear asymmetry nor increases with age in a mountain gorilla subsample, undermining the hypothesis that facial asymmetry is driven by chewing side preference. An examination of temporal trends shows that stress-induced developmental instability has increased over the last 100 years in these endangered apes.

Background

Facial symmetry is widely regarded as a reliable indicator of attractiveness and reproductive success in humans, while asymmetry is often used as a measure of early life stress [1,2]. As both sides of bilaterally symmetric faces share the same genotype, it is expected that they will exhibit the same phenotype, except when individuals experience instability during development [2]. Therefore, studies typically use fluctuating asymmetry (FA) as a measure of individual or population-level fitness, calculated as the random deviations from perfect symmetry or from population-specific patterns of directional asymmetry (DA) [2,3]. Most FA studies have focused on elucidating the environmental causes, although there is also evidence suggesting that FA is heritable [4,5]. Experimental studies have linked environmental stressors and inbreeding to the level of FA in bilateral structures of rodents and flies [6][7][8]. In humans, it has been suggested that genetic or environmental stress increases susceptibility to health problems later in life, such that FA might provide a reliable signal of fitness [9]. As a result, studies measuring FA in non-human primate faces have focused on the link between FA and adult fitness or health outcomes [10], and not necessarily the environmental conditions under which individuals developed. As such, the possible stressors behind facial FA beyond the classical 'environmental or genetic' dichotomy remain poorly understood.
Moreover, surprisingly little is known about the evolutionary significance of facial asymmetry, including the magnitude of facial FA in extinct hominins and our closest living relatives, the non-human apes [11,12]. Groves & Humphrey [13] first described the marked asymmetry present in the craniofacial skeletons of Virunga mountain gorillas (Gorilla beringei beringei) studied at the Dian Fossey Gorilla Fund's Karisoke Research Center (figure 1). They found that western lowland gorilla (G. gorilla gorilla) faces were not significantly asymmetric (n = 138), but 19 of 55 eastern gorillas (i.e. mountain and Grauer (G. beringei graueri)) had faces that were at least 4 mm longer on the left side than the right, and of these individuals, 18 were Virunga mountain gorillas. Mountain gorillas were the only subspecies with almost as many asymmetric as symmetric individuals in the sample, and the authors suggested that the observed asymmetry may reflect a preference for chewing on the left side [13]. This 'gross asymmetry' was further evidenced by the presence of uneven tooth wear and lopsidedness of the sagittal crest, but they acknowledged that other closely related taxa with equally developed masticatory musculature, such as orangutans, did not seem to show similar levels of facial asymmetry. Indeed, while habitual unilateral chewing is commonly invoked as an explanation for facial asymmetry [14], this link has largely been assumed rather than tested. In this study, we use three-dimensional geometric morphometrics to quantify adult facial skeleton asymmetry in three gorilla subspecies with well-documented variation in environmental (extrinsic) and genetic (intrinsic) stress, namely western lowland gorillas, Grauer gorillas and Virunga mountain gorillas (figure 2). Although we are not directly measuring genetic or environmental variables, we use the term 'stress' to describe the factors hypothesized to increase developmental instability, thus leading to measurable facial asymmetry in the skeleton. In addition to FA, we also analyse DA, which occurs when one side differs consistently from the other at the population level, in line with the differential chewing hypothesis proposed by Groves & Humphrey [13]. We test whether the marked asymmetry in mountain gorillas is significantly greater than that expressed by other gorilla taxa, and evaluate the results considering current ecological, behavioural, and genetic information. Because asymmetric variation usually only explains a small proportion of the total variation in morphological analyses, we also characterize symmetric variation in facial morphology, as well as subspecies-level variation in facial asymmetry as it relates to sexual dimorphism. We investigate three alternative hypotheses to test whether genetic stress, environmental stress, or chewing side preference better correspond with facial asymmetry among gorilla subspecies. First, we test whether more inbred gorilla subspecies exhibit more pronounced facial FA. In terms of genetic stress or inbreeding, mountain gorillas are homozygous at about one third of their genomes, and thus have very low genetic diversity compared to western lowland and, to a lesser extent, Grauer gorillas [16,17]. Mountain gorillas are more inbred than even the most inbred contemporary human populations [18] and the Altai Neanderthal [19]. 
While several studies have suggested that inbreeding might lead to higher levels of FA in model species [8], this study provides an opportunity to assess facial FA in the case of extreme inbreeding under the relatively stable socioecological conditions of the mountain gorilla. Second, we assess whether gorillas that experience more environmental stress exhibit more pronounced facial FA. While environmental stress encompasses many factors, one main axis of variation among gorilla subspecies lies in dietary and ecological variability, with western lowland gorillas being exposed to the highest level of seasonal unpredictability in food resources and competition [20]. Western lowland gorillas have also experienced major population declines due to human activity and infectious diseases, most notably Ebola, resulting in approximately 90% mortality in affected populations [21]. By contrast, Virunga mountain gorillas eat a reliable, almost entirely folivorous diet, and have experienced increased population growth over the past several decades [22]. Grauer gorillas fall between these two extremes, with those from highland areas eating a more folivorous diet, and those from lowland areas eating a similar proportion of fruit as western lowland gorillas [20,23]. Grauer gorillas have also experienced major population declines in the last century, losing up to 77% of their total population [24]. In terms of gorilla behavioural ecology, these dietary and other social differences form potential sources of variation in environmental stress among taxa. Third, we test whether gorillas exhibit more pronounced facial asymmetry because they have a chewing side preference. If gorillas preferentially chew on one side of the mouth, then they can be expected to show differences in the degree of tooth wear between the left and the right sides, which will match the pattern in facial asymmetry. A subsample of Virunga mountain gorillas with associated tooth wear data is used to test whether they exhibit chewing side preferences as evidenced by uneven tooth wear, and the patterns of facial asymmetry are considered in light of those results.

Material and methods

The sample includes the crania of 40 Virunga mountain gorillas (Gorilla beringei beringei), 40 Grauer gorillas (G. beringei graueri) and 34 western lowland gorillas (G. gorilla gorilla), with equal representation of females and males. We analysed adult individuals as determined by the third permanent molar being fully erupted and in occlusion. Only those with completely preserved facial anatomy, and no clear evidence of trauma or pathology, were included. In the mountain gorilla sample, 22 of the individuals were from the Mountain Gorilla Skeletal Project (Musanze, Rwanda) and three-dimensional models were digitized with a Breuckmann SmartScan white light scanner (aligned and merged in Optocat software v.11.01.06-2206). The remaining 92 models were reconstructed from medical CT scans at the Katholieke Universiteit Leuven, Belgium (Siemens Sensation 64, 120 kV, 135 mA, 1 mm slice thickness, reconstruction interval of 0.5 mm, 15 cm field of view, 0.29296875 mm pixel size, 512*512 pixel matrix) [25], and the Smithsonian's National Museum of Natural History (Washington, DC; Siemens Somatom Emotion CT Scanner, 110 kV, 70 mA, 1 mm slice thickness, 0.1 mm reconstruction increment; surface models generated in Materialise Mimics).
A recent study suggests that there are no significant differences in models derived from different imaging modalities [26], allowing for the direct comparisons made here. We used Viewbox 4 software (http://www.dhal.com/) to place 156 homologous landmarks and curve sliding semilandmarks on the cranial models (electronic supplementary material, figure S1). The landmark configuration uses classic fixed facial landmarks supplemented by curve sliding semilandmarks set on the face and palate. Because of the uncertainty in semilandmark location, they were slid along their corresponding curves with respect to the fixed landmarks in order to minimize bending energy, following a standard procedure for semilandmark analyses [27]. Landmarks were digitized twice on each individual to assess FA and parse it out from measurement error via Procrustes ANOVA, as described below. The raw three-dimensional coordinates were subjected to Procrustes superimposition to remove the effect of scale, orientation and position from the shape analyses [28]. The symmetric versus asymmetric components of shape variation were then analysed separately: the symmetric component comprised the original and mirrored landmark configurations for each cranium, while the asymmetric component comprised the deviations of the original configurations from these symmetric averages [29-32]. Principal component analysis was conducted to analyse the main patterns of variation using symmetric coordinates and asymmetric residuals. The effects of allometry were assessed using a multivariate regression of shape versus centroid size. Analyses of the different components of variance were conducted using Procrustes ANOVA [29], in which the factor 'individual' represents symmetric variation, 'side' represents one-sided or DA, and the interaction between the two represents non-directional or FA. Measurement error was calculated as the residual variation in the Procrustes ANOVA, and explains between 1.6 and 2.2% of the shape variation (table 1). To test for differences in the magnitude of facial FA among taxa, we conducted a bootstrapping analysis of the correlation matrices for the FA component following Webster & Zelditch [33]. The probability of matrices being identical was assessed by 1000 bootstraps (unpublished R code from Haber, provided by Webster & Zelditch [33]). Individual facial asymmetry scores were calculated with respect to a perfectly symmetric configuration to measure the magnitude of asymmetry across the whole face. This was done by calculating the Procrustes distance between the original and the reflected and relabelled landmark configurations following Procrustes registration [2]. Individual asymmetry scores were compared to the magnitude of tooth wear asymmetry because there was no clear population-level chewing side preference (figure 4). If there had been a clear directional signal in tooth wear, directional facial asymmetry would have been the more appropriate form of asymmetry to assess in relation to tooth wear. In a subsample of Virunga mountain gorillas (sample sizes specified in each table and figure), we assessed the relationships between tooth wear asymmetry in upper and lower molars of the same position (electronic supplementary material, table S4), facial asymmetry scores and molar wear asymmetry (electronic supplementary material, table S5; figure 4), the relationship between each variable and age at death (electronic supplementary material, figure S2), and facial asymmetry scores through time (figure 4) using Spearman's rank correlation analysis.
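As an illustration of how an individual asymmetry score of this kind can be computed, the sketch below reflects a landmark configuration, relabels paired left/right landmarks, and takes the Procrustes (root-sum-of-squares) distance between the aligned original and mirrored forms. It is a simplified stand-in for the published workflow, which used Viewbox and R-based tools: the pairing scheme, the reflection plane, and the use of an ordinary least-squares fit rather than the full sliding-semilandmark treatment are assumptions made for brevity.

```python
import numpy as np

def procrustes_align(source, target):
    """Ordinary Procrustes fit of `source` onto `target` (both k x 3 arrays):
    removes translation, scale and rotation, returning the aligned pair."""
    s = source - source.mean(axis=0)
    t = target - target.mean(axis=0)
    s /= np.linalg.norm(s)
    t /= np.linalg.norm(t)
    u, _, vt = np.linalg.svd(s.T @ t)
    if np.linalg.det(u @ vt) < 0:      # keep a proper rotation, no reflection
        u[:, -1] *= -1
    return s @ (u @ vt), t

def asymmetry_score(landmarks, pairs):
    """Whole-face asymmetry: Procrustes distance between a landmark
    configuration and its mirrored, relabelled copy.

    landmarks : (k, 3) array of 3D landmark coordinates
    pairs     : list of (left_index, right_index) bilateral landmark pairs;
                unpaired midline landmarks simply keep their labels.
    """
    mirrored = landmarks.copy()
    mirrored[:, 0] *= -1                        # reflect across the x = 0 plane
    relabelled = mirrored.copy()
    for left, right in pairs:                   # swap left/right labels
        relabelled[[left, right]] = mirrored[[right, left]]
    a, b = procrustes_align(relabelled, landmarks)
    return float(np.sqrt(np.sum((a - b) ** 2)))

# Tiny hypothetical configuration: two bilateral pairs plus two midline points.
rng = np.random.default_rng(0)
config = rng.normal(size=(6, 3))
print(f"facial asymmetry score: {asymmetry_score(config, [(0, 1), (2, 3)]):.4f}")
```

A perfectly symmetric face would return a score of zero; larger values indicate greater left-right mismatch across the whole configuration.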
We compared the magnitude and direction of tooth wear in upper versus lower molars of the same position as a test of whether this metric is consistent and a reliable indicator of chewing side preference. Tooth wear was assessed by calculating the per cent of dentine exposure in the first permanent molars following Galbany et al. [34]. The percentage of the occlusal surface with exposed underlying dentine, relative to the total area of the occlusal surface, was measured in both the right and left first, second and third molars using digital photographs of the original teeth. While a three-dimensional topographic measure of tooth wear, such as slope, is more sensitive to early stages of tooth wear [35], dentine becomes exposed on the M1 in mountain gorillas by the time the M3 erupts [34], and thus the percentage of exposed dentine is sensitive enough to capture molar wear in this adult sample of gorillas. Known ages at death are available for a majority of the mountain gorilla individuals included in the subsample, but those without known ages were estimated based on incisor wear following the protocol developed by Galbany et al. [36], which estimates ages at death within about a 1-3 year error margin (see electronic supplementary material, figure S2 for sample details). The collection dates were known for most individuals in the sample, but in cases where there were several years in which remains were estimated to have been collected, we used the earliest possible date for our analysis. For example, 20 Grauer gorillas were collected between 1980 and 1984, so we used 1980. Ten mountain gorillas were also collected in estimated windows of 5 years (4 individuals) and 9 years (5 individuals), with one individual only being known to have been collected before 2001, so we substituted the year 2000 in that instance. Besides the known individuals with uncertain collection dates, there is likely some additional variation in record keeping across institutions, as well as collection practices over time, justifying our estimation and inclusion of the uncertain dates in this study. To assess the trend in facial asymmetry magnitude through time, we conducted a linear mixed model with collection year as a predictor of individual facial asymmetry score, controlling for sex and species.

Table 1. Interspecific Procrustes ANOVAs of gorilla facial morphology. SS, sum of squares; MS, mean squares (multiplied by 1000); d.f., degrees of freedom; F, F-ratio; p, p-value; %var, percentage of variance explained by each effect; S, symmetry; DA, directional asymmetry; FA, fluctuating asymmetry. Asterisks mark significant differences among taxa, as determined by 1000 bootstraps of observed FA correlation matrices (see electronic supplementary material, table S3).

Results

The principal component analyses (PCAs) and Procrustes ANOVAs of the symmetric (i.e. the original and mirrored landmark configurations for each cranium) and asymmetric (i.e. deviations of the original configurations from the symmetric averages) aspects of shape variation show that 79-91% of shape differences within and among gorilla taxa are related to symmetric variation in facial morphology (figure 3a, table 1). The plot of the first (PC1) and second (PC2) principal components of the symmetric aspect, which explain 30.5% and 12.7% of the variance, respectively, shows separation of western lowland and mountain gorillas within the morphospace, mainly along PC2, with Grauer gorillas falling in between but overlapping more with mountain gorillas (figure 3a).
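The wear-asymmetry comparison described in the methods above reduces to a few array operations once the per-tooth dentine-exposure percentages are tabulated. The short sketch below uses made-up numbers and an assumed signed left-minus-right convention to show one way of computing unsided molar wear asymmetry and its Spearman rank correlations with facial asymmetry scores and age; the published analyses were of course run on the actual subsample rather than on data like these.

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical per-individual data for a handful of crania:
# percent dentine exposure on the left and right first lower molars (LM1),
# whole-face asymmetry scores, and estimated ages at death (years).
lm1_left = np.array([12.0, 5.5, 30.2, 8.1, 22.4, 15.0])
lm1_right = np.array([10.5, 6.0, 24.8, 8.0, 27.9, 14.2])
facial_asym = np.array([0.031, 0.018, 0.046, 0.022, 0.040, 0.027])
age = np.array([14.0, 9.5, 31.0, 12.0, 28.5, 19.0])

signed_wear_asym = lm1_left - lm1_right        # directional (left minus right)
unsided_wear_asym = np.abs(signed_wear_asym)   # magnitude only

# A population-wide chewing side preference would shift the signed values
# away from zero; here we just report their mean as a quick check.
print(f"mean signed LM1 wear asymmetry: {signed_wear_asym.mean():.2f}")

rho_face, p_face = spearmanr(unsided_wear_asym, facial_asym)
rho_age, p_age = spearmanr(unsided_wear_asym, age)
print(f"wear asymmetry vs facial asymmetry: rho = {rho_face:.2f}, p = {p_face:.3f}")
print(f"wear asymmetry vs age at death:     rho = {rho_age:.2f}, p = {p_age:.3f}")
```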
In general, western lowland gorillas have relatively narrower, less prognathic faces with more rounded orbits framed by a curved supraorbital torus. By contrast, mountain gorillas have relatively broader, more prognathic faces and more rectangular orbits framed by a flat supraorbital torus. Grauer gorillas have the narrowest faces of the three taxa, rounder orbits, and a taller nasal aperture and rostrum. When sex is considered, male and female mountain gorillas separate primarily along PC2, while male and female Grauer gorillas separate along PC1 and PC2 (figure 3a). By contrast, male and female western lowland gorillas do not clearly separate along either of the first two PCs (figure 3a), nor PC3 (not shown). The multivariate regression of shape against size indicates that allometry explains about 10% of the shape variation in the symmetric aspect (r = 0.10; p < 0.001), and 15% (r = 0.38; p < 0.001) and 8% (r = 0.28; p = 0.002) for PC1 and PC2, respectively. The PCA of the asymmetric aspect of shape shows that the range of variation in mountain gorillas envelops that of the other two taxa (figure 3b). Both DA and FA are highly significant in all three taxa (table 1), but FA explains a much larger proportion of variation, ranging from 6% in western lowland gorillas, to 9% in Grauer gorillas, and 17% in mountain gorillas compared to the 0.6-2.0% of shape variation explained by DA. The probability of observed FA correlation matrices being identical between each of the three taxa, as assessed by 1000 bootstraps, is 0, suggesting that mountain gorillas have significantly greater facial FA than both Grauer and western lowland gorillas, and that Grauer gorillas have significantly greater FA than western lowland gorillas (table 1; electronic supplementary material, table S3). To assess whether these results are related to the drastic reduction of gorilla habitat and population sizes, the collection years of the individuals (ranging from 1880 to 2008) were compared to facial asymmetry scores. The magnitude of facial asymmetry for each individual increases through time within the combined sample, even when controlling for sex and species differences in the magnitude of asymmetry (F 1,107 = 4.95, p = 0.028). The most recent mountain gorillas exhibit the highest facial asymmetry scores of all (figure 4). Though statistically significant in all three taxa, DA only explains a small percentage of morphological variation (0.6-2.0%), suggesting that it is not a major factor shaping facial morphology at the population level in gorillas (table 1). In all three taxa, most of the total asymmetry occurs in the lower midface, as shown in several examples of particularly asymmetric individuals (figure 3c), and in distance-based heatmaps summarizing the asymmetry of the whole sample (figure 3b). In the Virunga mountain gorilla subsample, we found that the upper and lower molars (UM and LM) of the same position (i.e. first, second or third) exhibit matching unsided wear (electronic supplementary material, table S4), but there is no relationship between unsided tooth wear asymmetry (i.e. considering only the magnitude) and facial asymmetry scores (electronic supplementary material, table S5, figure 4). In other words, individuals with the highest degree of differential wear between the left and the right side do not show the highest level of facial asymmetry. 
Male Virunga mountain gorillas, which tend to be younger than females in this subsample, show less variation in right-left tooth wear differences compared to females (figure 4). However, there is no evidence of chewing side preference as inferred from tooth wear asymmetry within this population; distributions of tooth wear asymmetry centre close to 0 in all six teeth examined (figure 4 shows density plot for LM1). Older individuals exhibit more asymmetric tooth wear compared to younger individuals (rs = 0.56, p < 0.001 for LM1; rs = 0.66, p < 0.001 for UM1) (electronic supplementary material, figure S2), but there is no relationship between individual facial asymmetry scores and age (rs = 0.10, p = 0.694) in the Virunga mountain gorilla subsample (electronic supplementary material, figure S2).

Discussion

Bilateral asymmetry is used as a reliable indicator of developmental instability in humans despite the relative paucity of evidence linking it to specific early life stressors in primates or other long-lived mammals [2]. This study demonstrates that in gorillas, facial asymmetry mirrors known variation in genetic diversity, with the markedly inbred mountain gorillas exhibiting significantly more facial asymmetry than both Grauer and western lowland gorillas. Our results show that gorilla facial asymmetry is dominated by FA rather than by DA due to chewing side preference, as was originally suggested by Groves & Humphrey [13]. In the absence of population-level trends in lateralized tooth wear (figure 4), the lack of relationship between facial asymmetry scores and first molar tooth wear asymmetry in the markedly asymmetric mountain gorilla subsample suggests that facial asymmetry does not relate to preferential mastication on one side of the mouth, neither as a cause nor a consequence. Intraspecific genetic diversity is increasingly recognized as a requirement for the long-term survival of species, but the genetic health of a population is difficult to assess without a major investment in genomic analyses [38]. Population fragmentation and reduction in population size are the driving forces behind reductions in diversity, with genetic drift becoming the dominant evolutionary force rather than selection [39]. All extant gorilla subspecies, three of which were analysed here, are either endangered or critically endangered because of infectious disease, hunting by humans, and habitat loss and degradation [40,41]. A consequence of reduced genetic diversity is an increase in deleterious mutations, threatening long-term population survival and even extinction through decreased fertility, reduced ability to adapt to environmental changes and susceptibility to disease [16,24]. Among the ancestors of eastern gorillas, at least one genetic bottleneck was followed by subsequent inbreeding over the last 100 000 years [16], probably resulting in high frequencies of otherwise rare hand and foot morphology [15] and temporal-insular fusions in the brain [42] within extant populations. Only about 1000 mountain gorillas, including both the Virunga and Bwindi populations, remain after experiencing long-term, sustained population declines. These declines have led to a host of issues including webbed feet and fertility problems [16], although the Virunga population size in particular has increased for several decades following successful conservation interventions [22].
By contrast, western lowland gorillas have the largest range and population size at around 350 000 individuals [43]. Grauer gorillas, endemic to the eastern Democratic Republic of Congo, have experienced a startling 70% population decline in the last 100 years due to factors like human encroachment and poaching, with only about 3800 individuals alive today [41]. The genetic diversity of Grauer gorillas is intermediate between mountain and western lowland gorillas, with lower genetic diversity in peripheral versus core groups, suggesting a strong effect of genetic drift and limited gene flow among small, isolated forest fragments [24]. However, Grauer gorillas are significantly more inbred than western lowland gorillas and much more like mountain gorillas in their level of genetic diversity, but without necessarily sharing the same level of facial asymmetry. It is possible that the collection dates of the Grauer gorillas sampled here may influence our results as inbreeding has likely increased dramatically in the time since the current sample was collected (figure 4). Further, the Virunga mountain gorilla sample analysed here exhibits lower genetic diversity than the only other population of mountain gorillas from Bwindi National Park, Uganda [17,24]. Future work should prioritize the assessment of changes in facial asymmetry through time, and where possible in the context of long-term field sites, consider population-level group dynamics and familial relationships as they relate to asymmetry. In addition to increasing the number of deleterious genes, inbreeding also makes individuals more susceptible to developmental perturbations caused by environmental stress [2]. Short-term activation of the stress response helps vertebrates cope with fluctuating environmental conditions, but chronically elevated glucocorticoid levels are pathogenic in the sense that they deplete energy reserves, negatively affecting health, immunity, fertility and survival [44]. In mountain gorillas, and especially the Virunga population, the increase in deleterious mutations in genes important for immune function has probably reduced their resilience to environmental change and pathogen evolution [16]. The precise mechanisms behind the patterns of asymmetry documented in this study are not yet clear, but environmental stress alone is unlikely to explain variation in FA among gorillas. Virunga mountain gorillas rely on an almost entirely folivorous diet that is both spatio-temporally abundant and high in protein, while western lowland gorillas rely most heavily on unpredictable, seasonal fruit in addition to herbs and leaves. The western lowland gorilla diet is comparatively lower in protein, but high in non-protein energy during periods of high fruit consumption [20,23,45]. When fruit-feeding, gorillas spend more time travelling and less time resting [45], which, in addition to competition with other apes and the comparative unpredictability of their ecological conditions, might provide differential sources of environmental stress. In terms of temporal trends, studies of brain size [46] and dental defect severity [47] suggest that Virunga mountain gorillas that died between the 1960s and 1980s were more developmentally stressed than those that died between the 1990s and 2010s, with smaller brain sizes and more severe enamel defects on their teeth. However, these previous studies had more limited timeframes of analysis (i.e.
1960s to 2010s), so further work would benefit from incorporating multiple stress indicators to better understand their relationships and what factors influence their development. Clinical and experimental studies of facial FA have identified the lower midface and mandible as the regions with the most asymmetry, particularly in inbred individuals [8,36], in line with our results (figure 3). Lacy & Horner [8] proposed, based on their analysis of inbred rats, that asymmetry is a threshold phenomenon with no lessening impact of inbreeding on FA after generations of breeding. Like mountain gorillas, the inbred Australian wild rats also exhibit abnormalities of the digits in addition to pronounced lower midfacial asymmetry. In the clinical setting, Al Kaissi et al. [48] documented a connection between persistent torticollis of a congenital origin and facial asymmetry in humans due to the malformation of the atlas in three family members. Individuals with congenital torticollis usually exhibit hemihypoplasia in the midfacial skeleton, on the opposite side of the palsied sternocleidomastoid muscle, presenting as unilateral facial compression [49]. As shown in figures 1 and 3, the most asymmetric mountain gorillas show evidence of facial compression and hemihypoplasia in the midface, leading to the dramatic asymmetry of the lower partition (i.e. inferior to the lower margin of the nasal aperture). While we do not suggest torticollis as a mechanistic explanation for the marked asymmetry present in mountain gorillas, we highlight the commonalities between inbreeding and mid to lower facial asymmetry with a hinge-like compression of the midface. Further studies undertaken in well-documented hominoid populations should help to shed light on the mechanisms behind the development of pronounced facial FA in humans and non-human primates. Assessing facial asymmetry in closely related yet differently adapted gorilla taxa provides a critical context with which to better interpret such features in past human populations. Asymmetry is almost always removed from morphometric analyses of fossils as it is typically assumed to be caused by deformation via taphonomic processes, but it might be worthwhile to revisit reconstruction methodologies to allow for the measurement of FA. Particularly when studying samples without associated genetic data, facial FA might be used as a proxy for inbreeding, which reduces long-term adaptability, survival and fitness [38]. Baab & McNulty [11] documented asymmetry in contemporary humans, non-human great apes and fossil hominins, and they showed that the infraorbital foramina, alare and lingual canine margins are by far the most asymmetric, matching the tendencies in the extant species documented here (figure 3). The shape of the facial skeleton of extant great apes is well documented as it is often used to aid in fossil reconstructions and phylogenetic interpretations, but it is worth considering whether extant hominoids are reliable models for such reconstructions [50,51], especially in light of the increase in facial asymmetry in the last 100 years. Intraspecifically, the facial skeletons of western lowland gorillas are sexually dimorphic, with facial growth continuing longer after reaching dental maturity in females compared to males, ultimately decreasing the level of dimorphism in older age classes [52].
This continued growth is unlikely to result from biomechanical processes related to mastication as the changes are not centred around the lower face [52], unlike the asymmetry results presented in this study. Debate surrounds whether FA is expected to be higher or lower in faster-growing groups with shorter developmental windows compared to slower-growing, longer-lived ones [53]. Here, mountain gorillas are the faster-growing taxon, and within species, females exhibit slower growth rates and males later ages at sexual maturity [46,54]. Our results from the Virunga mountain gorilla subsample show that there is no relationship between facial asymmetry magnitude and age, suggesting that asymmetry primarily develops during ontogeny and remains relatively stable throughout adulthood (electronic supplementary material, figure S2). By contrast, tooth wear asymmetry continues to increase with age (electronic supplementary material, figure S2). The females in this sample are, on average, older than the males, reflecting the demographics of mountain gorillas. The greater variation in tooth wear asymmetry among females is likely a consequence of these age differences between the sexes (figure 4; electronic supplementary material, figure S2). Our facial asymmetry results do not clarify the contribution of the fetal environment versus later-in-life correction to facial asymmetry, but this should be further analysed in the context of documented ontogenetic samples. Other parts of the skeleton besides the face should also be examined for asymmetry.

Conclusion

Taken together, our study shows that pronounced facial asymmetry occurs in the most genetically stressed gorillas and that it is not obviously related to lateralized mastication. While the plight of mountain gorillas is well known, these methods may serve the conservation efforts of less well-studied species without available genomic data. Our results also show that facial asymmetry has increased through time in all three gorilla subspecies, suggesting that increased human encroachment, human-mediated disease spread, and further reductions in gorilla genetic variation have contributed to high levels of environmental and genetic stress in Homo sapiens' second closest living relative.

: data curation, resources, writing-review and editing; T.S.S.: conceptualization, data curation, resources, writing-review and editing; S.C.M.: conceptualization, data curation, funding acquisition, investigation, resources, supervision, writing-review and editing; Y.H.: conceptualization, formal analysis, funding acquisition, investigation, methodology, resources, software, supervision, visualization, writing-review and editing. All authors gave final approval for publication and agreed to be held accountable for the work performed therein.

Fossey Gorilla Fund International, Gorilla Doctors, Virunga National Park, The George Washington University, National Museums of Rwanda and New York University College of Dentistry. We thank the Evolutionary Anthropology Labs at the University of Minnesota for software support and access to scanning equipment, and finally Debbie Guatelli-Steinberg for her support of this work.
2022-02-23T14:06:54.369Z
2022-02-23T00:00:00.000
{ "year": 2022, "sha1": "8b9fc63fa37b7961e11635a0a8163c88fca950bd", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1098/rspb.2021.2564", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "3daeb83c3abacb0391e0ac2628d1f32cc45c3246", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
233746431
pes2o/s2orc
v3-fos-license
Forming new habits in the face of chronic cancer-related fatigue: An interpretative phenomenological study Purpose The growing group of patients who suffer from chronic cancer-related fatigue (CCRF) after cancer have helpful and less helpful ways of responding to this long-lasting and disruptive problem. This qualitative study aimed to gain insight in essential elements of how patients respond to CCRF, with a focus on helpful responses to facilitate adaptation. Methods We conducted semi-structured interviews with a purposive sample of 25 participants who experienced severe CCRF for at least 3 months. Participants were recruited via media, patient associations, meetings, and health professionals until data saturation was attained. We used a topic guide with open-ended questions about lived experiences. Interpretative phenomenological analysis (IPA) was used for analysis of the transcripts. Results We identified five interrelated themes of how patients respond to CCRF: (1) discovering physical and emotional boundaries; (2) communicating support needs; (3) reorganizing and planning activities and rest; (4) letting go of one’s habitual identity; and (5) recognizing and accepting CCRF. Conclusion This study highlights the development of new habits and positive beliefs in the face of CCRF and the importance of (social) support in this process. This experiential knowledge on helpful responses can be used to inform patients and their significant others and improve self-efficacy. Health professionals could use these insights to improve recognition of CCRF and personalize treatment. Supplementary Information The online version contains supplementary material available at 10.1007/s00520-021-06252-3. Introduction Approximately 25% of the cancer population experiences severe and disabling chronic cancer-related fatigue (CCRF) months to years after cancer treatment is finished [1][2][3][4]. Patients and their caregivers are often negatively impacted by the adverse physical, psychosocial, and economic consequences of experiencing CCRF [5]. For example, fatigue as a chronic illness can affect the ability of the patient to engage in paid employment [6] and caregiver burden is high [7,8]. To date, various psychological and exercise interventions for CCRF exist and have been found effective in selected groups of cancer patients [9][10][11]. The variation in effectiveness of interventions for CCRF may also be down to a mixture of ineffective interventions or study designs that do not account for intra-and inter-individual variability in fatigue patterns. The question "what works best for whom?" needs to be addressed to support patients and therapists in the selection of the most effective intervention for alleviating CCRF. Further in-depth insight into the patient perspective on essential elements of responding to CCRF could help answer this question. Although the complex etiology is not fully understood, we expect that a variety of risk and protective factors can influence how people respond to CCRF over time [12,13]. So far, the literature has mainly focused on unhelpful responses to fatigue (e.g., dysfunctional cognitions, dysregulation of activity, and negative social interactions) [8,14,15]. In order to improve treatment of CCRF, more insight in helpful responses is needed to target the complete spectrum of behavior during therapy. 
A relevant interrelated framework of personal, behavioral, and social factors to gain insight into patients' helpful responses to a chronic illnesses, such as CCRF, is the THRIVE model [16]. One of the key factors of this model is the forming of new habits and breaking with unhelpful habits [16][17][18][19]. From a psychological perspective, habits are defined as repetitive behaviors that are partially automatically executed over time [20]. The experience of cancer and its possible side effects during treatment has disrupted patients' daily routines. The persistence of CCRF after treatment prevents patients from restarting their daily routines, meaning they cannot do simple (automated) tasks in daily life at the same time and in the same way as before cancer [5]. The formation of new habits is required in this situation to improve self-management behaviors, such as exercising [17,19,21]. The THRIVE model emphasizes the importance of positive psychological beliefs (e.g., acceptance of illness and self-efficacy) in order to adhere to these new habits in the long-term [22]. More insight into the formation of new habits and beliefs from a phenomenological perspective can inform us what conscious and unconscious processes are part of helpful responses in the face of CCRF. Therefore, we chose a phenomenological study design because this interpretative methodology can be applied to illuminate the first-person perspective of responding to CCRF. Responding to a chronic illness is expected to be a dynamic, complex, multifaceted, and ongoing process that is never fully completed [16,23,24]. The purpose of the present study is to gain insight in essential elements of how patients respond to CCRF, with a focus on helpful responding to facilitate adaptation. Study population Participants were deemed eligible when they met the following criteria: (1) adults at time of cancer diagnosis; (2) experienced severe cancer-related fatigue (score ≥ 35 on Checklist Individual Strength-fatigue severity subscale [25]) for at least 3 months after finishing curative cancer treatment (except hormonal treatment); (3) had no current or former severe psychiatric comorbidity (i.e., suicidal ideation, psychosis, or schizophrenia); and (4) were able to speak and read Dutch. Recruitment All participants were recruited within two research projects in the Netherlands. The first project is the More Fit after Cancer trial (Fitter Na Kanker (FNK) Trial) that selected 22 participants after (partial) completion of one of the two online interventions for CCRF: physiotherapist-guided ambulant activity feedback (AAF) and psychologist-guided mindfulness-based cognitive therapy (eMBCT). In this study, nineteen participants agreed to participation in the interview study between December 2015 and March 2016. The second project is the REFINE project, in which 28 participants responded. Of the 28 responders, six participants were eligible and purposively sampled for participation in the interview study between August 2018 and January 2019. In the FNK trial, participants were recruited via advertisements in newsletters of patient associations, relevant websites, regional newspapers, social media, and through oral presentations on various occasions for patients and/ or caregivers. In the REFINE project, participants were recruited via patient websites or their health professionals. The health professional provided the participants with information about the interview study and referred interested participants to the researcher. 
Willing participants of both projects were then contacted by a researcher of one of both projects, and both verbal and written informed consent was obtained. We stopped recruitment when saturation was attained, i.e., when no new information emerged from the interviews of the purposive selected sample during analysis [24]. Semi-structured interviews The semi-structured topic guides consisted of open-ended questions based on literature and clinical experience and were pre-tested with a therapist with clinical expertise in treating patients with CCRF at the mental health institute for psycho-oncology. The first interviews with patients in both projects were seen as a pilot and evaluated as clear and concise. Therefore, no changes had to be made to the interview questions and these pilot interviews were included in the analysis. The topic guide of the FNK trial was focused on the evaluation of one of the two online interventions for CCRF (see Supplementary Materials Table S1). The topic guide of the REFINE project included experiences (descriptions, sensations, cognitions, patterns, attributions), consequences (daily life, body, self), actions (self, others, helping, and hindering factors), and other important factors related to CCRF (see Supplementary Materials Table S2). Interviews were face-to-face and held at the participants' location of preference at either an institute for Psycho-Oncology, their home, or online with a video connection. Three researchers (two men, one woman), including the first author, conducted the interviews in both projects and had previous experience in psycho-oncology and/or qualitative research. The interviews were audio-recorded with a voice tracer and transcribed anonymously by one of the research assistants of the study projects. The duration of the interviews in the FNK trial was on average 40 min, and on average 64 min in the REFINE project. Member checks were utilized for the six additional interviews of the REFINE project by sending a summary of findings to participants to check that ways of responding to CCRF were correctly understood and to allow them the opportunity to respond. All participants agreed with the content of the summary. Data-analysis The first and third authors independently coded the first five interviews of the FNK trial (MaxQDA software, version 18.2.0). Codes were discussed until consensus was reached. Because after five interviews, less and less variation between codes existed, the first author openly coded the other twenty interviews case-by-case in the same inductive way. In case of uncertainties regarding the codes, the first author discussed this with the third author to reach consensus. We used the six steps of interpretative phenomenological analysis (IPA) for the coding process [24]. The IPA is particularly useful to investigate the multidimensionality, dynamics, context, and subjectivity of CCRF [24]. The first step of coding according to IPA was reading and re-reading the transcripts to become familiar with the data. The second step was initial noting as the start of the open coding process. In the third step, emergent themes about responding to CCRF were developed within the interviews. A selection of six categories (i.e., metaphors, beliefs, comparisons, responses, helpful, and unhelpful responses) of the codebook were sorted in general maladaptive (e.g., move on, denial, and resistance) and adaptive (e.g., slow down, stop, or reduce activities) subcategories. 
The complete codebook was used for interpretative analysis of lived experiences with CCRF for publishing a separate paper. In the fourth step, we searched for connections across the emergent themes and developed a cross-table with two general patterns in the coping process: adapting (individual/social) and letting go (individual/social). In the fifth step, we checked for completeness with the summary of the individual interviews. In the last step, we organized two team discussions to identify superordinate themes that describe both helpful and unhelpful responses. Discrepancies about identified themes between team members were discussed until consensus was reached. This multidisciplinary team included all authors who have clinical and/or qualitative research expertise in psycho-oncology. Sample characteristics In total, 25 patients suffering from CCRF participated. Nine participants (partially) completed the AAF intervention. Ten participants (partially) completed the eMBCT intervention. Two participants reported that they were currently treated by a physiotherapist. Eleven patients (44%) scored above the threshold (≥ 15) on the Hospital Anxiety and Depression Scale, suggesting clinical levels of distress [26][27][28]. Table 1 shows the characteristics of the participants. Responses to CCRF We identified five interrelated themes that characterize the dynamic and mutually reinforcing process of responding to CCRF: (1) discovering physical and emotional boundaries; (2) communicating support needs; (3) reorganizing and planning activities and rest; (4) letting go of one's habitual identity; and (5) recognizing and accepting CCRF. Table 2 shows patients' quotes that support the five themes. Discovering physical and emotional boundaries In the first period after experiencing CCRF, most patients used habits that were useful before their cancer diagnosis. For example, they pushed themselves to do certain things in their daily life, tried to move on, and as a result, neglected their bodies and emotions. Moving on is characterized by speeding up activities and resisting CCRF. After a while, several patients stopped denying the fact that they suffered from CCRF and used different ways to tune into the emotions and sensations of their vulnerable bodies, afflicted by cancer and its treatment. For example, writing about their emotions, cognitions, and CCRF experiences helped discover change and progression. This kind of self-monitoring led to new insights into how their body felt and gave them a choice in how to protect their boundaries. The protection of their boundaries is a dynamic process of trial and error. Patients described that they needed to learn to decide on the right moment for relaxation to prevent exhaustion. Communicating support needs While discovering their physical and emotional boundaries, patients found it difficult to ask for help and tried to go on without support. However, most patients came to a point where they accepted they needed help and asked for the support of their close others. For example, patients openly communicated with their partner, friends, family, or colleagues about how they felt and what they could and could not do anymore. Most patients appreciated the support and empathic reactions they received from their social contacts. Some patients described how they tried to keep their CCRF silent when they met new people, because they were afraid of misunderstanding and negative reactions.
When CCRF and the debilitating consequences for daily functioning persisted, most patients looked for more information on the internet, in books, or apps and/ or asked for professional support from their medical oncologist, a psychologist, or physiotherapist. Reorganizing and planning activities and rest The experience of CCRF introduced a dysregulation in (social) activity, with patients being too active or too inactive. The time for rest was in all cases prolonged, for example, by sleeping more hours at night. Sometimes too much rest was taken, such as sleeping for hours during the day. As a result, patients did not experience an alleviation but an exacerbation of their feelings of CCRF. Becoming aware of their boundaries and communicating their support needs to others helped them to adapt their daily habits and search for more balance in activities and rest. Their activities were passively and actively clocked, structured, adjusted, or reduced, and alternated with time to rest. Patients reported examples of how they reorganized their daily activities: withdrawing from activities, taking time for themselves, going home earlier, and using ear plugs to avoid over-stimulation. Social activities lost their spontaneity and were planned. Patients prioritized the contact with close others and tended to stay at home instead of visiting someone. In cases of extreme fatigue, patients disengaged from their social life entirely and focused on resting. When exhaustion threatened their ability to do daily life activities, they found practical solutions. For example, some patients started using a disability parking card, cruise control in the car, an electric bike, or an elevator. The rush hours in traffic were avoided and a car was only used for short distances. Letting go of one's habitual identity As patients learned more about their boundaries and adjusted their daily life accordingly, they were confronted with the fact the cancer and resulting CCRF had changed them. Some patients reported becoming more of an emotional person after cancer and preferring a different type of contact with others, with a focus on listening rather than talking and a need for indepth conversations. Several (social) activities were not possible anymore. For example, some participants stopped working and had to let their significant others take over activities, such as taking care of children, driving the car, or cooking dinner. The letting go of old habits and becoming a less active person (in the evening or after activity) meant losing (part) of their old self. Recognizing and accepting CCRF Most patients experienced negative emotions, unhelpful thoughts, and beliefs that make it difficult to recognize and accept the unpredictable symptoms of CCRF. Some patients were able to let go of their negative emotions, unhelpful thoughts, and beliefs towards CCRF at certain moments throughout the day. If their social environment was accepting and understanding, it was easier to let go of their resistance against CCRF. The first step in this process was recognizing and acknowledging they suffered from CCRF. This awareness made it possible to accept that their situation was different from before cancer and they had to make the most of this "new normal." The formation of new habits such as discovering their boundaries, adjusting their daily life activities, and communicating their support needs have contributed to the acceptance of CCRF. 
Interrelations of themes The phenomenological approach showed that the identified themes are part of habit formation that starts at a pre-reflective level (i.e., prior to any conscious evaluation). One participant described the change in habits and adjustment to their new normal as an automatic process: "Yes, unconsciously you devise things for yourself and yes that becomes routine and, in the meantime, you know, you don't even realize that you have planned or managed things like that because it eventually has become normal. It actually has become your life…" [male, 41-50 years]. The self-monitoring process of awareness of sensations and discovering physical and emotional boundaries is originated in the body and in relation to others by communicating support needs. These introspective and communicative ways of responding provide useful insights to reorganize one's activities and rest and vice versa. The letting go of old and unhelpful habits and beliefs initiates an identity change, which creates room for new habits and beliefs that facilitate acknowledging and accepting CCRF. Discussion This study aimed to better understand how patients respond to CCRF in helpful ways. The processes of forming new habits and positive beliefs and breaking with unhelpful habits and negative beliefs appeared essential for a helpful way of responding. Body awareness helped patients to discover their physical and emotional boundaries and seek support, which in turn facilitated time management between activities and rest. This change in habits created a change in identity, with new behavior and beliefs, which further aided patients to adhere to their new habits. These findings are in line with other qualitative studies that found several comparable ways of responding to CCRF, such as support, activity management, identity change, and acceptance [29,30]. However, these qualitative studies did not investigate the habit formation process with use of helpful and less helpful ways over time. The discovering of physical and emotional boundaries is not covered sufficiently by other studies. Another difference is that while other qualitative studies focused on differences of responding to CCRF between persons, our results suggest also the possibility of differences within persons of responding to CCRF throughout the day and from day to day. These results build on previous findings that responding to CCRF is embodied [31] and partly an automatic, repetitive habitual behavior that requires minimal forethought [32]. The identified themes of responding to CCRF closely fit the three levels of habit formation that were reported by Wehrle, based on Husserl's later works [33,34]. These active and passive levels of habit formation represent both conscious and unconscious processes in relation to one's previous experiences [33]. The first level of habit formation is defined as a style of experiencing based on a direct, unconscious reaction towards repeated individual experiences. The struggle against CCRF with unhelpful habits and beliefs, such as moving on and neglecting one's body, are based on this primary unconscious reaction to CCRF and related to precancer experiences with fatigue. The second level of habit formation originates from the body and relates to previous embodied experiences (e.g., bodily memory) in active and passive ways. 
The themes "Discovering physical and emotional boundaries," "Communicating support needs," and "Planning and reorganizing activities and rest" are examples of the second bodily level of habit formation and indicate a learning process in relation to previous embodied experiences. For example, what the body experiences is central and related to previous experiences in the different self-monitoring processes patients use such as writing about experiences, clocking activities, and taking care of their bodies. At first, these processes are more active before becoming aware of their bodies in more passive ways. The third level of habit formation is based on personal and conscious reflection to change and adhere to new habits. The themes "Letting go of one's habitual identity" and "Recognizing and accepting of CCRF" reflect this personal level of habit formation with development and adoption of new beliefs. In the present study, we shed more light on what characterized the change of identity and how it plays a central role in responding to CCRF, by letting go of old habits and (social) activities and forming new ones. The changed habits of sleeping more, moving less, and changes in communication preferences make patients with CCRF sometimes unrecognizable for themselves and others, which has an impact on their identity [33]. This loss of self was reported by patients in several qualitative studies on CCRF [31,[35][36][37]. Informing and involving the social environment (e.g., partner) on what it means to experience and cope with CCRF can facilitate this change in habits [38]. Clinical implications and future research This study offers in-depth insight into the central role of the body, identity, and dynamics of helpful and unhelpful responses in the face of CCRF that can be useful for improving self-management and development of personalized treatment. It depends on the individual patient whether self-management is sufficient to form new habits and beliefs or whether additional treatment is needed, and which (combination of) treatment(s) is preferred and most effective in reducing CCRF. These insights into responses to CCRF can facilitate patients and therapists in making a shared decision about the preferred (combination of) treatment(s). Because many patients suffer from multiple interrelated symptoms after anticancer treatment is completed, a transdiagnostic or holistic perspective is preferred to treat these patients. For example, Kuba investigated a group of hematological cancer patients (≥ 2.5 years post diagnosis) and found that acceptance of one's present moment experiences is associated with lower levels of fatigue and subjective cognitive impairment [39]. Acceptance of CCRF is a central theme in our study and can be a potential target for interventions to deal with CCRF and other symptoms. Mindfulness-based interventions (MBIs) focus on enhancing bodily awareness by intentionally paying attention to present moment experiences, in an accepting, non-judgemental way [40,41]. Besides face-to-face and group interventions for treating CCRF, effective web-based interventions (e.g., online activity coaching, cognitive behavioral therapy (CBT), and mindfulness-based cognitive therapy (MBCT)) might be a valuable alternative for patients [10,42,43]. Health professionals should advise patients about the different options and refer patients to the treatment of preference. 
Future research on CCRF could benefit from innovative methodologies such as the experience sampling method and network analysis to investigate inter-and intra-individual differences in experiencing and responding to CCRF and find an answer to the question: what works best for whom? Further qualitative research for developing and evaluating interventions is recommended that include the caregivers' perspective because responding to CCRF is a mutual process. Strengths and limitations The IPA used in the present study goes beyond other existing qualitative studies on CCRF by exploring the central role of the body, interrelatedness, and importance of social context in the evaluation of helpful responses in a purposive selected sample. Some limitations should be noted. First, patients' experiences were evaluated retrospectively on different times after cancer treatment was finished. Second, although a diverse clinical sample of patients with CCRF and several comorbidities participated, we should be cautious to generalize the results to individual cancer patients with other cultural backgrounds and differing comorbidities. Third, similar to other qualitative studies on CCRF [31], the majority of participants had breast cancer which limits generalizability. At the same time, this is an adequate reflection of patients who seek psychological help for CCRF [44]. Fourth, although three different interviewers conducted the interviews, this impact is expected to be minimal, because of the self-reflection of interviewers and inductive participant-oriented analysis process of the rich data [24,45]. Conclusions The present study highlights the development and adherence of new habits and beliefs in the face of CCRF and the importance of (social) support in this process. This new experiential knowledge on self-monitoring, support-seeking, and time-management habits and acceptance of CCRF can help inform patients and their significant others about selfmanagement in the face of CCRF and improve self-efficacy. Health professionals could use these insights in clinical practice to improve timely recognition and personalize treatment for patients with CCRF.
2021-05-06T14:08:32.377Z
2021-05-06T00:00:00.000
{ "year": 2021, "sha1": "3f320880ee43cb1f4950d594ed0163711493e9c5", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s00520-021-06252-3.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "3f320880ee43cb1f4950d594ed0163711493e9c5", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
208639380
pes2o/s2orc
v3-fos-license
Diarrhea: a missed D in the 4D glucagonoma syndrome Glucagonoma is a rare and slow-growing pancreatic tumor that usually manifests as glucagonoma syndrome. It is mainly characterized by a typical Dermatosis named necrolytic migratory erythema (NME), Diabetes and glucagon oversecretion. Deep vein thrombosis and Depression complete this set. We report the case of an advanced glucagonoma with liver spread, where all these 4D symptoms occurred but a chronic secretory Diarrhea was the most relevant feature. A 65-year-old man was referred to our center to investigate multiple hepatic nodules evidenced by abdominal tomography. He had a recent diagnosis of diabetes and complained of significant weight loss (25 kg), crusted skin lesions and episodes of a large amount of liquid diarrhea during the past 6 months. On admission, there were erythematous plaques and crusted erosions on his face, back and limbs, plus angular cheilitis and atrophic glossitis. The typical skin manifestation promptly led dermatologists to suspect glucagonoma as the source of our patient’s symptoms. A contrast-enhanced abdominal computed tomography showed a hypervascularized pancreatic lesion and multiple hepatic nodules also hypervascularized in the arterial phase. Despite initial improvement of diarrhea after subcutaneous octreotide, the patient’s impaired nutritional status limited other therapeutic approaches and he died of respiratory failure due to sepsis. His high levels of serum glucagon were not yet available so we performed an autopsy, confirming the diagnosis of metastatic glucagonoma with NME on histology. Chronic diarrhea is not a common feature in glucagonoma syndrome; however, its severity can lead to serious nutritional impairment and set a poor outcome. INTRODUCTION The association between skin lesions and multiple hepatic nodules has a broad differential diagnosis, particularly in patients with chronic diarrhea. It is essential to be aware that gastrointestinal tract tumors are related to these three conditions, especially gastric, colonic and pancreatic tumors. 1,2 In this setting, feasible hypotheses are hepatic and skin metastases or paraneoplastic cutaneous syndromes. Often, despite etiological elucidation and treatment, liver spread may determine an unfavourable prognosis. 3,4 The investigation must be quick. In addition to blood tests and serum tumoral markers, upper digestive endoscopy, colonoscopy and contrast-enhanced abdominal imaging are essential. Widespread skin lesions should not be understood as a finding but as part of a systemic disease, so the evaluation of an experienced dermatologist is helpful. We report this fatal case of an advanced glucagonoma, highlighting its severity and systemic involvement. CASE PRESENTATION A 65-year-old Caucasian male was referred to our tertiary hospital in order to elucidate the etiology of multiple hepatic nodules detected by abdominal computed tomography (CT). He complained of weight loss during the past 6 months (25 kg), non-pruritic skin lesions on the face and limbs, weakness, sadness, and episodes of a large amount of liquid diarrhea more than 20 times a day without blood or mucus. He was able to ingest less than one daily meal during the last 3 months. He was smoker (20 packs-year) and was on metformin because of a new-onset diagnosis of diabetes mellitus. 
On examination, the patient was markedly weakened (body mass index 14 kg/m²), with multiple skin lesions (erythematous and brownish plaques and crusted erosions) on his face (Figure 1), back and limbs. He also had angular cheilitis and atrophic glossitis. There were white plaques on the oral mucosa suggesting moniliasis. The liver was hardened and palpable 4 cm below the costal margin and xiphoid process. He had severe asymmetric edema of the lower limbs, and there were no palpable lymph nodes. Laboratory tests evidenced a relevant normocytic/normochromic anemia with hemoglobin 3.9 g/dL (reference value [RV] 13-16), low serum albumin 1.8 g/dL (RV 3.5-5.2), and low levels of calcium, phosphate, magnesium, sodium, zinc, folate, and potassium. In the setting of the severe anemia, the glycosylated hemoglobin was 5.9% (RV 4.0-5.6); the serum C-peptide was normal (2.7 ng/mL [RV 0.8-4.2]); and the serum insulin was low (1.5 UI/mL [RV 3.2-16.3]). Other relevant results were: prolactin 19 ng/mL (RV 4.0-15.2), ferritin 1,872 ng/mL (RV 30-400), with transferrin saturation at 88%. The liver enzymes, international normalized ratio, bilirubin, parathyroid hormone, and adrenocorticotropin were normal. Serum urea and creatinine were slightly elevated. Viral hepatitis and HIV serologies were negative. Doppler ultrasonography of the lower limbs showed bilateral deep venous thrombosis. Abdominal CT evidenced a contrast-enhanced lesion between the body and tail of the pancreas measuring 24 mm at its largest diameter (Figure 2A). There were also multiple hepatic lesions with peripheral enhancement in the arterial phase, and a hypodense center, suggesting necrosis (Figure 2B). Carbohydrate antigen 19.9 (CA 19.9) was 413 U/mL (RV < 34); other tumor markers, such as alpha-fetoprotein and carcinoembryonic antigen (CEA), were within normal limits. The stool analysis showed a fecal osmolar gap of 5.02 Osm/kg H2O (compatible with secretory diarrhea since it is < 50), rare red blood cells, and absence of leukocytes, yeasts, fatty acids, helminths and protozoa. The patient maintained a high stool output even after initial therapeutic measures and parenteral nutrition. An experienced dermatologist evaluated the patient. The differential diagnosis of the cutaneous lesions included pemphigus, malnutrition and vitamin deficiencies in the setting of chronic diarrhea. However, necrolytic migratory erythema (NME) was the leading hypothesis, since it is strongly associated with glucagonoma, which could also account for the combination of diabetes mellitus, diarrhea, anemia, weight loss and a pancreatic nodule with probable liver metastases. With this in mind, we requested measurement of serum glucagon and vasoactive intestinal peptide (VIP), and subsequently initiated subcutaneous octreotide 100 mcg three times a day. There was satisfactory improvement of the diarrhea; however, the patient's condition evolved to septic shock due to pulmonary infection, and he died of respiratory failure 20 days after admission. The VIP level was slightly elevated, 49.6 pmol/L (RV < 30), and serum glucagon was 4,354 pg/mL (RV < 208), but these results were not yet available on the day he died, so we decided to perform the autopsy (with his family's consent). AUTOPSY FINDINGS There were multiple hardened and well-delimited nodules in the liver - the largest measuring 85 mm (Figure 3A).
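For context on the fecal osmotic gap mentioned above: the case report does not state the formula, but in standard clinical practice the stool osmotic gap is calculated from measured stool electrolytes as

$$\text{stool osmotic gap} = 290 - 2\left(\left[\mathrm{Na}^{+}\right]_{\text{stool}} + \left[\mathrm{K}^{+}\right]_{\text{stool}}\right)\ \text{mOsm/kg}$$

Values below roughly 50 mOsm/kg indicate that electrolytes account for most of the stool osmolality, pointing to a secretory mechanism, whereas values above roughly 100 mOsm/kg suggest an osmotic diarrhea; the very low gap reported here is therefore consistent with the hormone-driven secretory diarrhea suspected in this patient.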
Between the pancreatic body and tail, we found a brownish solid nodule measuring 28 × 25 × 25 mm (Figure 3B) with an enlarged adjacent lymph node measuring 20 × 18 × 18 mm. Histological analysis evidenced a well-differentiated pancreatic neuroendocrine tumor (Figure 4A), with lymph node and liver metastases, which was confirmed by positive immunohistochemical staining for CD56 (Figure 4B), chromogranin A (Figure 4C) and synaptophysin (Figure 4D). The Ki67 proliferation index was inconclusive, due to autolysis. Histology of the skin lesions showed epidermis with marked degenerative changes and superficial necrosis, loss of the granular layer, cytoplasmic balloonisation and vacuolisation of keratinocytes, which was compatible with NME (Figure 5). DISCUSSION Among the functioning pancreatic neuroendocrine tumors, the most common are gastrinomas and insulinomas. Glucagonomas are extremely rare, with an estimated global incidence of one case in 20 million people. 5,6 The peak presentation is in the fifth decade of life, affecting men and women in almost equal proportions. Less than 10% of tumors are associated with familial syndromes - more frequently multiple endocrine neoplasia type 1 (typically non-functioning). 3,7,8 Most are sporadic and this presentation has lower survival, since about half of the patients have metastases at diagnosis. 9,10 Achieving effective treatment and reaching higher survival rates remain a major issue. In approximately 87% of cases, sporadic glucagonoma is located in the body or tail of the pancreas. 11 This alpha-cell tumor overproduces glucagon, 5,12 and the main clinical manifestation is known as glucagonoma syndrome (GS). It is a systemic condition characterized by the combination of NME, high levels of serum glucagon and hyperglycemia. 10 This was our patient's presentation. There were no additional features to suggest multiple endocrine neoplasia type 1, except for a slight increase in serum prolactin, which was not considered clinically relevant. NME is a paraneoplastic skin disorder typically present in up to 70%-80% of patients with GS. 10 Lesions can be widespread but are usually located in intertriginous areas, perioral region, perineum, lower abdomen, thighs and distal extremities, and may have a migratory course, occurring in spontaneous exacerbations and remissions. Commonly there are annular or irregular eruptions and plaques with superficial epidermal necrosis and crusts, leading to pruritus or pain, with susceptibility to secondary infection. After lesions heal, residual areas of hyperpigmentation and peripheral collarette can remain. The most specific histological feature includes superficial epithelial necrosis of the upper spinous layer with vacuolated keratinocytes. 13 Patients may also present angular cheilitis, glossitis and alopecia. 10 In 90% of cases, NME is associated with glucagonoma, but some conditions - usually leading to nutritional impairment - may be involved, such as cirrhosis, cystic fibrosis, inflammatory bowel disease, kwashiorkor, celiac disease and other neoplasms. 7 This rare presentation is known as pseudoglucagonoma syndrome. 5,14 Other findings of GS include anemia (49.6%), weight loss (66%-96%), diarrhea (30%), abdominal pain (7.5%-10.6%), lower limb edema, venous thromboembolism (50%) and depression (50%); 3,15-17 these match our patient's presentation. Glucagon-related cardiomyopathy has already been described. 18
GS is known as the 4D syndrome (Dermatosis, Diabetes, Deep vein thrombosis and Depression). 17 Diarrhea occurs more frequently in patients with other conditions, such as somatostatinomas, gastrinomas, VIPomas and carcinoid tumors. Multiple etiologies may be involved, such as increased motility, malabsorption and bacterial overgrowth. The consequences are hydroelectrolytic disorders, renal dysfunction, weight loss and malnutrition. 3 This was the most relevant symptom in our patient, and he also had a slightly elevated serum VIP, which may have contributed to it. Glucagonoma is a slowly progressing tumor, and some patients develop pancreatic adenocarcinoma before the neuroendocrine tumor spreads. Metastases are found in half the patients at diagnosis 9,10 and occur predominantly in the liver (79%-90%) and lymph nodes (30%-37.8%). 7 Early recognition of GS before liver dissemination can be life-saving, 5,19 with a 10-year survival reaching 100%. 20 However, after it spreads to the liver, survival drops by half. 17 When there is massive hepatic involvement the liver cannot properly metabolize glucagon, increasing its levels and thereby worsening the symptoms. 17 Song et al. 12 evaluated 623 reported cases of glucagonoma. The male to female ratio was 0.79 and metastases were detected in 49.2% of patients. Subjects with metastases were older at diagnosis than those without (54.0 vs. 50.8 years old). The average time between initial symptoms and diagnosis of the tumor was 31.4 months. Wei et al. 9 reported six cases of GS in a 17-year database. Most were women (4/6), and the median age at diagnosis was 48.8 years (younger than our patient). NME was found in all subjects and was the first symptom in 67%. Five patients had diabetes mellitus and the other one had impaired glucose tolerance. The whole group had anemia, and the serum glucagon ranged from 245.6 to 1,132.0 pg/mL. The highest value was a quarter of our patient's level. In a series of 21 patients with GS reported by Wermers et al., 14 eleven were male (52.4%). The median age at diagnosis was 54 years (also younger than our patient). Twenty subjects already had liver spread at diagnosis and the remaining one had lymph node dissemination. However, just 9/21 patients had tumor-related death, which occurred on average 4.91 years after diagnosis. Despite the diverse clinical features, the diagnosis of glucagonoma requires evidence of a pancreatic lesion by an imaging examination. As this tumor is often located in the distal pancreas, the role of ultrasonography seems to be limited. 21 Therefore, the most useful methods are contrast-enhanced imaging, such as CT and magnetic resonance imaging. 5,21 Positron emission tomography or octreotide scintigraphy may be applicable, especially when there is concern for distant disease. 22 In addition to the measurement of serum glucagon, C-peptide and usual laboratory measurements (blood count, lipids, glucose, glycosylated hemoglobin, vitamins, electrolytes, iron, ferritin, hepatic and renal function), the hormonal profile must be evaluated, since glucagonomas (like other islet cell neoplasms) may overproduce multiple hormones, such as insulin, adrenocorticotropin, parathyroid hormone, gastrin and VIP. 3,5,17 High levels of chromogranin A have been associated with advanced disease. 23 The management of glucagonomas is wide and multifactorial. Hyperglycemia can be controlled using insulin or oral glucose-lowering drugs. Ketoacidosis rarely occurs since pancreatic beta cells are preserved.
Somatostatin analogues (octreotide/lanreotide) have effective suppression on glucagon secretion, so it is used to improve GS symptoms, such as diarrhea and skin lesions. 10,24 However, it may work through other mechanisms as some cases have had a response independent of the decrease in glucagon levels. The use of somatostatin analogues is a well-tolerated and safe therapy, but it has lower effectiveness in the management of diabetes mellitus and does not reduce the incidence of venous thrombosis, which requires prophylactic low doses of heparin. 24 Nutritional support is necessary, since patients have usually a relevant weight loss. 22 Total parenteral nutrition with amino acid and caloric supplementation may be used to counteract the catabolic effects of high glucagon. 19 Surgical resection of the pancreatic nodule is indicated whenever possible. [25][26][27][28][29] After tumor resection, the symptoms of GS commonly decrease. 22 Unfortunately, less than 15% of patients with liver spread have the possibility of surgical cure, requiring additional therapy. 21 Interferon-alpha, everolimus and sunitinib are useful, especially in the presence of liver metastases. Embolisation, chemoembolisation and radioablation may be performed with satisfactory outcomes. 2,3,28 Radioisotope therapy can also benefit some patients. 6,30 When patients are not able to undergo surgery, systemic chemotherapy can be conducted, particularly streptozotocin and doxorubicin. 5 Cryoablation is an option to treat the pancreatic tumor and NME, but it requires additional studies to prove efficacy. 10 After pancreatic resection, liver transplantation should also be considered in cases with clinically controlled disease, even in the presence of hepatic metastases. 18,31 Our patient was older than most of the previously related cases of sporadic glucagonoma. He could not undergo surgery because of his impaired nutritional status. He had a delayed diagnosis. He was not receiving satisfactory nutrition and did not experience episodes of hypoglycemia probably because of his very high levels of glucagon. When he was finally admitted to hospital, the diagnosis was suspected and treatment was promptly initiated; however, we were not able to prevent his unfavourable but presumed outcome. CONCLUSION Dermatosis, Diabetes, Deep vein thrombosis and Depression characterize the 4D GS -a main manifestation of glucagonomas. Necrolytic migratory erythema is the most specific presentation but the systemic involvement of this syndrome must be broadly recognized. Our intention is not really to change the acronym '4D' to '5D', but to emphasize that although chronic secretory diarrhea is not such a common clinical feature, it leads to severe nutritional impairment and can set a poor outcome, especially in elderly patients.
Narrative influences in contemporary ceramics experiences The current research dealt with the study of (narrative influences in the experiences of contemporary ceramics). It contained four chapters. The first chapter included the methodological framework for the research, which was represented by the research problem, which dealt with the possibility of the contemporary Iraqi potter from having an imagined narrative vision in the embodiment of his ceramic productions. As for the second chapter, it included two sections. The first topic included (narrative between mechanisms and elements) and the second topic (narrative in contemporary ceramic experiences). The research ended with the indicators that resulted in the theoretical framework. The third chapter included the research procedures that included the research community, a sample, the research methodology, and the analysis of (3) ceramic models. The fourth chapter included results, conclusions Introductions The first man, with his thought and his aesthetic vision towards things, sought to form a definite expressive image through which he would express his gesture, which exceeds in the power of its expression what can be expressively, since he was born with the motive to establish a special world of absolute values to be a substitute for the changing world of manifestations subject to will.As we find the European potter a quasi-artistic form and an idea leading to a creative phenomenon with a beautiful narrative that comes with the state in which artistic experiments ignite to establish a formation loaded with thought and content directions, the narrative is subject to interpretation that the artist's mind strives to break every previous horizon of experience or reading, and from here the research problem is determined to wonder what influences are listed in the contemporary potter's experiences? Research importance Increasing the plastic orientation of ceramics by integrating the narrative into contemporary ceramic experiences as a contemporary philosophical thought.Giving ceramic works a philosophical value, in addition to expanding the knowledge horizons of researchers and scholars in the field of plastic arts. Research Objectives The objective of the research is to identify the influences that are narrated in the experiences of contemporary ceramics. Search limits 1. Objective boundaries: The current research is determined by studying the narrative influences in the productions of contemporary ceramics.2. Spatial boundaries: Contemporary Iraqi ceramic works. Define narrative terms Linguistically: The quality of the speech context lies in the smoothness and flow of narration and presentation. ~ 30 ~ When a hadith is presented properly and accurately, then soand-so has the ability to narrate the hadith in an elaborate and smooth manner, if the quality of the context is excellent.(Al-Hamdani, Hamdiyeh, p. 187-188) [1] . Idiomatically: Means anything that tells or presents a story, whether it is a text, image, performance, or a mixture of that, and accordingly, novels, films, comics, and other places belong to the storyteller side.(Mangrid, Pal, 2011, p. 51) [2] .And (Gerard Genet) defines it as a presentation of a hadith or a sequence of events, real or imaginary, presented by means of language.(Sahrawi, Ibrahim, 2008, p. 32) [3] . Procedurally: It is the artist's ability to transfer an event or events from its real or imagined image to another aesthetic image embodied in a work of art. 
Chapter II The first axis Narrative between mechanisms and elements The narrative system recently opened up to all the ramifications of human life and occupied the thoughts and research of many writers and critics, the narrative after a semantic sign, changes from the multiple life situations in a dynamic way, laden with broad implications, until it occupies a prominent place in human reflective thought.Since the narrative began since the beginning of the history of the human race, as there is no people anywhere without a narrative, the narrative is present at all times and in all places and in all societies, as it (narrative science) does not stop at literary texts that are based on the element of storytelling in the traditional sense rather, the narration took a literary direction towards all literary and artistic genres alike, so we find it in the legend, the novel, theatrical discourses, tragedies, drama, comedy, and artistic forms .(Mangrid, Ball, 2011, p. 53-54) [2] .Narrative is a term that includes the separation of an event, group, fantasies, or choices, whether based on reality or derived from fiction.The narrator, storyteller, or narrator implements the narration process and leads to the narrative text, as it is the narration found in every real or imagined text of stories.(Mornad, Abd al-Malik, 1989, p. 32) [4] Narrative refers to the linguistic, structural, and semantic aspects of narrative discourse.There are two main directions in the field of narrative, the first is the semantic narrative that focuses on the content of linguistic narrative verbs, and the second is the linguistic narrative that focuses on the linguistic elements associated with the discourse and the meanings included in it.(Ibrahim Abdullah, 2005, p. 8) [5] .The storyteller is based on two main parts, the first includes a story and includes certain events, while the second includes the form through which the story is told, and this method is called narration, as one story can be narrated in multiple forms, and the reason is due to the fact that narration is relied upon to distinguish ironing patterns, and communication is represented in narration Between two individuals, the first party is known as the narrator or the narrator, and the second party is known as the narrator or the reader.(Mangrid,Ball, 2011, p. 45-46) [2] . 
Among the additions to the linking relations of the event, which the narrator reviews is the artist through his presentation (narrative material) is the artistic work that may be realistically related to the thought of description or far from it within the levels of imagination of the artist in contrast to the realistic tendency of the event and the adoption of a new method in shouldering the descriptive analyzes of the artwork either (Thomaszewski) It has two features that enable the narration to be the first style.Objective narration, in which the writer summarizes a comprehensive summary of the events, including the narrative ideas of the characters, in order to give the reader the freedom to think about what is being told and interpreted.This type of narration is similar to realistic narrations, or the second type is the subjective narration:-Here the narrator is an interpreter of each news or event and imposes ideas on the reader and invites him to find (Um Bertraiko) cares about the internal linguistic expansion, as well as the value of referral (reference), but the presence of the reference is based on the internal environment of the text.The environment is not treated as closed, but as open. Reading the text and its interpretation with (Echo) is suspected of respecting the cultural and linguistic background (Al-Khatib Muhammad, 2007, p. 85) [6] , Likewise, the events begin to present a temporal and spatial row by describing the characters and identifying them through movement.The narration also takes a compositional function between the imaginary forms (a novel and a story in literature) and its real forms (event, facts, and history) by extending to the recipient a meaning and an intersectional significance between the two worlds of the text and the reader to spread the relations of man between man and the world and between man and man (Ricoeur Paul, 2009, p. 56) [7] , the human being himself, as the narrative event, as it has the face of imagination, can reshape our vision of the world, as the event is linked to time and space.Through its representation of a group of scattered incidents that overflow with its meanings and are followed up to form a narrative material that relies on a group of technical and artistic elements for the first year.(Mortada, Abd al-Malik, 1898, p. 19) [8] .Thus, every artistic event has a time, and time is linked to a place without time.The successive events from (the beginning, through the middle, to the end) constitute an effective role in the growth and consolidation of the artistic text, and this is what led it to survival and continuity by presenting the ideas of those events with a narrative artistic leisurely significance for the recipient. 
The second axis Narrative in the experiences of contemporary Iraqi ceramics Global thought at the beginning of the twentieth century and the end of the nineteenth century witnessed a great displacement and radical transformations in art and symbolic forms, this transformation is considered as a displacement in receiving formation as a result of major shifts in thought and making it witness dialectical formal orientations.The image is art and its forms are shifts towards narration in the new image, and it achieved another history of reception and aesthetic response in presenting its plastic categories and established a history of texts and images due to this formal transformation, as it witnessed a revolution against the old symbolic forms of academic ~ 31 ~ realism as the symbols are transmitted as complexities and as historical knowledge to take the characteristic of constancy within the combined structure with its civilizational characteristics and as a social reality, to achieve the continuity of patterns with the human values they bear, and they may not differ from time and place and can differ only in the means expressing them and determined by the collective behavior in its structure and narrative forms (Golden Man Lucian,2003, p.4) [9] , contemporary art, sculptural ceramics, and plastic art appeared in vast and different parts of the world, but the increase was in Iraq, so the effect of alienation should be considered.In the twentieth century, the aspects are tiring, so the social reality today is crowded with contradictions and multifaceted conflicts, the sudden development, it is in the eyes of many artists a strange and incomprehensible reality (Philip Syring, 1992, pp.6-7) [10] , And the processes that take place in the art of ceramics are considered strange only to man as a spiritual formation, but rather they consider that to be the result of global friction due to the development of the laws of rapid communication, as ceramics is no longer a carrier of contents, or an answer to moral, anecdotal or anecdotal contents, but art has become a unique formula due to this displacement that the world witnessed ceramics tries to keep pace with the era in its times, as it does not fall in one form and in its mechanical dimension, rather, it treats and presents the problems of contemporary man within cosmic dimensions, with an expressive and aesthetic framework that opens its horizons on the scale of the inherited symbols within the era in which the artist exercises the true essence of capturing the semantic and aesthetic symbols and drawing them to contemporary city in the throes of creative controversy(Samar Gharib, 1982, p. 
103-106) [11] .As a result of this displacement, the aesthetic transformation was also confirmed at the technical level in the narration of forms, and the experiences of the contemporary potter and his continuous attempts to search and experiment in aesthetic disputes and his attempt to discover the elements of innovation and contemporary through interaction with the plastic genres and the diversity of styles and the mutual influence between him and those genres were achieved.Exiting, according to the concepts of modernity, from a limited form to a multi-faceted artistic formation.As a creative mixture that stirs the recipient's mental treasury, (The narration in ceramic experiments is specific if viewed from the traditional side, and it is possible to get out of this limitation if the artist wants to call himself the narration in the formation of ceramic shapes, with the effect of displacement.The artist's skills must be borrowed from all methods, tools and methods to search for the right face suitable for the artist's hand and thought (Muhammad, Saad Shaker, Al-Jazaery, 1979, p. 19) [12] .It develops the abilities of ceramics to express to levels commensurate with the spirit of the era, and this is what speaks of the emergence and development of the contemporary Iraqi ceramics movement, as the aspirations of contemporary potters began as a result of the search for new horizons in artistic work outside the traditional frameworks.Artistic and educational, leading to an advanced stage in the field of contemporary ceramics, in terms of both the technical and aesthetic aspects.(Adel Kamel, 1986, p. 92) [13] .The narration in the form is aesthetic experiments on the entirety of the structure of the history of plastic art and its color, structural, technical and human relations(Bachelard, Gaston, 1985, p. 9) [14] , and this rejection came to everything that is prevalent, which led to the events of a serious change in the symbols and the consistency of the elements and concepts of the narration structure in ceramic experiments in particular, and the most effective role in the openness of the transmitter / potter and techniques based on scientific theories, which produced new stylistic systems and artistic movements .(Dewey, John,1963, p.160) [15] .With the scientific, technological and industrial development, the techniques have become open formative capabilities and options that are in favor of the ceramic symbol image and part of the visual image communication.It became natural when talking about the form of an artistic discourse that is desired in an effective manner by the use of technology, starting with the choice of the artist / potter for the raw material and carrying out the executive performance operations.With the continuation of the process of interaction of his senses and his plastic ability with the raw material (Ghanem, Farouk Abdel-Kadhim,2013, p.53) [16] , art tends to be imaginary or hypothetical as part of the privacy of his job, but he establishes relationships related to all basic human functions, and on the other hand, he is keen on the relationship with technology in a way that clarifies the assumption as if it is that space that diverges between art and technology .(Marcuse,Herbert, 1971, p. 
70) [17] .The formed image of the imagination is the image that has been installed to reveal itself in the apparent material production and what it contains of great capabilities to create balance and the birth of harmonious relationships between form and technique in the form of ceramic and composition of the imagination that enables the creation of a creative system.It agrees to transcend the contemporary and new sensations with the old objective visions, and juxtaposes between the intense state of emotion and the accuracy of the logical system.(Richard, Ambadi,1963, p.309) [18] .The avoidance of the usual visual discourse in the composition of the contemporary Iraqi ceramic symbol shatters what is loyal to the signifier and the traditional signifier in search of a different act in which the practice is the artistic achievement.This is what keeps the artistic achievement away from the academic side.(Abdul Hamid, Shaker, 2001, p. 280) [19] The raw material is a medium with an indefinite expressive capacity that is used for expression or employed in aesthetic discourses.The spontaneity of the material is part of its expressive value.Some potters left the color of the raw clay unchanged or added to it, and its roughness, opacity, and irregular thermal distribution form and here are the technical pigments of the artist.It is the method used by any employed raw material to highlight an effect in a fabled form that is bad in terms of texture and what it causes of direct instantaneous reactions such as etching adjacent to rough polishing adjacent to soft sunken prominent and color contrast such as black and white to obtain the desired effects.(Gadamer, Hans George, 2007, p. 45) [20] Thus, the narration in the pottery's experiences is one of the important images, and that part of its importance lies in its intrinsic value, which is directly related to the method of its implementation, an artistic form that has its symbols that indicate it and aesthetic values.(Ghanem, Farouk Abdel-Kadhim, 2013, p.68) [16] . ~ 32 ~ The plurality of the text is one of the good qualities that constitute the abstract semantic symbols in the ceramic plastic discourse, which includes the vocabulary that he works on with greater openness, as it contains intense references that lead us to separate interpretations to the connotations that make up an artistic form (Bart, Roland, 1986, p. 62) [21] , and as the symbol relates to in the Iraqi ceramicist narration.The (abstract) concept in which the energies of the abstract geometric form are invested.Abstraction represents the original reductive understanding of art.It is the search for the essence and depth of things, and it is not content with an apparent formal meaning or its connection to the area of reality and its proximity to natural manifestations.Tight associative analytical relationships that link the artist's sensitivity and humanity, and thus refine his sense so that the ceramic figure becomes an aesthetic of its own.(Al-Awawdeh, Hassan Mahmoud Issa, 2009, p.41-42) [22] .Abstract expressionism expresses the artist's self, the creative self here, working with free mechanisms and stylistic transformations in which free imagination participates in the effectiveness of the mind and the authority of the mind.The self transcends it in its spontaneity and spontaneity.(Al-Zubaidi, Kazem Neuer, 2000, p. 
136) [23]. In addition, the Iraqi ceramic form is able to interpenetrate and overlap with more than one art form, as the Iraqi potter seeks to activate the imagination by creating aesthetic visual relationships between these symbols and connotations within the art form. Narrative Iraqi ceramics thus presented a different image in its various concepts: a technical significance linked to the nature of the reference influencing the experiments and the style, as well as the vision of the Iraqi ceramicist, which involves the nature of the symbol and, to some extent, the nature of the ceramic experience, because it is consistent with the intellectual statements and philosophical treatises that deal with overlap and symbols between the arts of all kinds.
The results drawn from the theoretical framework: 1. The artistic text (the ceramic work) represents an open narrative discourse, as it carries endless symbols and connotations, and the movement of criticism, reading, and intellectual and cognitive interaction with it continues; concepts are exchanged between the artwork and the recipient. 2. Art derives its ingredients from realism associated with symbolic icons. 3. Artistic texts are not a quotation from external reality, but an ideal interpretation of the art itself. 4. The aesthetic dimension has a major role in shaping the pottery form in artistic works. 5. The potter possesses the characteristic of mythologizing, with vast dimensions of ideas in artistic forms.
Chapter III 1. The research community: it consists of the ceramic works of contemporary Iraqi potters within the time period of the research, amounting to (15) ceramic works within the limits of the current research. 2. The research sample: the researcher relied on the standardization method in selecting the sample, which amounted to (3) works. 3. Research tools: the researcher relied on the indicators of the theoretical framework to analyze the sample models. 4. Research methodology: the researcher relied on the descriptive method to analyze the research sample.
Model (1). Work name: Eternity. Artist name: Maher Al-Samarrai. Completion year: 2010. The work is a semi-geometric ceramic of an architectural building in which a wall with irregular rims at the ends appears from the top, together with a figure whose features lie between the Sumerian and the Babylonian. The potter wrote Surat Al-Ikhlas in a simple, primitive Kufic script, devoid of decorative additions and dots, and the background of the lettering was in light brown. The figure was placed on a rectangular board engraved on the wall, an indication of ancient civilization, and the shape and color of the bricks were shown as an expressive sign of reality. The work thus took on intellectual dimensions, being formed from different elements according to a hypothetical awareness of the extent of what the event could reach, since the event is the focus of significance. The narrative description in this ceramic text represents a link between the past and the present.
A second sample is a composite ceramic sculptural piece consisting of two separate parts, one larger than the other. The larger part represents a shape similar to the architectural structure of the ancient Ziggurat of Ur, together with a ball-like shape, hollow from the inside and with a circular mouth, resting on the base of the composition as a whole. In three main colors, black, gray, and greenish, the potter accomplished a ceramic sculptural composition with several expressive meanings whose elements read at first as separate letters, yet he was able to gather them into the formation of an integrated discourse. The accomplished subject reveals the Iraqi intellectual structure as a kind of genealogy of thought, creating a conceptual state between the expressive connotations, which in turn distinguishes the form and its connection to the understanding of the recipient. By expanding the potter's self-vision of time and place, this text was able to invoke a coherent system.
A third sample embodies, in its apparent form, a sculptural pottery consisting of five books of different sizes, arranged and lined up, and two figures in three-dimensional sculpture. The work rests on a rectangular wooden base; in terms of technique, the two figures and the red and green books were formed from two materials in the finished piece, which constitutes an indication of the exit of the pottery from the world of clay to the world of nature. It does not see nature as others see it; it is the world of nature transformed into an educated world linked to time and learning. The narrative structure here does not constitute the artistic text as a transcendent visual sense only for amateurs; it represents the transition from narrative time to a new time full of education, life, and the continuation of science, in which everyone seeks a destiny for himself. The intellectual atmosphere tries to live within the scientific life, and the artist sealed it in the element of the place that grows in science with the same narration. It has a role, in a way, in allocating the recipient's perception, so that the narration becomes a link between the idea and the level of transmission of expression.
Chapter IV Research results 1. The Iraqi potter works on crystallizing new visions to be embodied in ceramics through the imaginary narrative and the image associated with the artwork. 2. The juxtaposition and consistency of the elements of the ceramic text made it a coherent system that serves the narrative structure. 3. The contemporary Iraqi potter creates, through the pottery mass, a narrative response whose beauty suggests depth through overlay and contrast, ascending in experiments with the pottery form within the direct reach of the recipient. 4. The mythological content of the artistic discourse is stored in favor of visual research that produces proposals and formulations imagined in the image of the narrative. 5. The Iraqi potter employed some colors according to their corresponding cultural and social significance in the art form. 6. The narrative and the narrated in the text worked to open horizons for interpretation and multiple readings, and this revolves around the depth of the concept of narrative for the narrator (the potter) and the one to whom it is narrated (the recipient).
Conclusions 1. The formulations of the narrative relied on both the presence and absence of the visual units to understand the state of communication and fabrication in the pottery text, in order to stand on the mental associative level of the idea. 2. The nature of the narrative in the achievement is not based on a fixed position, but rather depends on movement and transition in intellectual and constructive routines. 3. The potter always starts from the self-imagining artist, narrating a significant narrative represented through his obsessions, experiences, and emotions and embodied in an accomplished work.
Recommendations Interest in documenting the works of Arab potters so that scholars and researchers can consult and benefit from them for the purposes of study and scientific research.
Probing Light Dark Matter via Evaporation from the Sun
Dark matter particles can be captured by the sun with rates that depend on the dark matter mass and the DM-nucleon cross section. However, for masses below ∼ 3.3 GeV, the captured dark matter particles evaporate, leading to an equilibrium where the rate of captured particles is equal to the rate of evaporating ones. Unlike dark matter particles from the halo, the evaporating dark matter particles have velocities that are not limited to values below the escape velocity of the galaxy. Despite the fact that high velocities are exponentially suppressed, I demonstrate here that current underground detectors have the possibility to probe/constrain low dark matter parameter space by (not)-observing the high energy tail of the evaporating dark matter particles from the sun. I also show that the functional form of the differential rate of counts with respect to the recoil energy in earth based detectors can identify precisely the mass and the cross section of the dark matter particle in this case.
Despite the overwhelming amount of evidence from galaxy structure formation and cosmology in favor of the existence of dark matter (DM), so far experiments have failed to conclusively detect DM, either directly or indirectly. A lot of scientific effort and resources have been allocated in order to detect DM. In particular, underground detectors have imposed strict limits on the type of DM particles that can be viable, constraining the parameter space of the mass and the cross section of DM interacting with baryons. The basic principle of direct detection is simple. A DM particle interacts with a nucleus of the detector, depositing an amount of energy that is detectable. Different experiments apply different techniques on how they observe the recoil. However, direct detection rates have limitations. Obviously a small DM-nucleus cross section reduces the probability of interaction. Similarly, heavy DM particles have low number densities and consequently low flux. Low DM masses are also difficult to probe simply because the DM particle does not have enough energy to trigger the detector. This is true regardless of the exposure that an experiment can achieve. No matter what the velocity distribution of DM particles in the galaxy is, their velocities are below the escape velocity of the galaxy. Therefore, below a sufficiently small DM mass and for a given detector energy threshold, no DM particle can be detected. This is the reason why low DM masses are not probed by direct DM searches.
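To make the kinematic statement above concrete, the short Python sketch below (an illustration added here, not part of the original analysis) evaluates the maximum recoil energy a halo DM particle can deposit on a silicon nucleus, E_R^max = 2 µ² v_max²/m_N; the escape velocity, Earth velocity and threshold scale are representative assumed values rather than numbers taken from the paper.

```python
# Rough kinematic illustration (added here, not from the paper): maximum nuclear
# recoil energy that a halo DM particle of mass m_chi can deposit on a Si
# nucleus, E_R^max = 2 mu^2 v_max^2 / m_N, compared against a representative
# detector threshold. All numerical inputs are assumed example values.

KEV = 1e-6                        # 1 keV expressed in GeV
C_LIGHT = 3.0e5                   # speed of light in km/s

m_Si = 26.08                      # Si nucleus mass in GeV (about 28 * 0.9315)
v_max = (550.0 + 247.0) / C_LIGHT # assumed (v_esc + v_earth) in units of c
E_threshold = 0.5 * KEV           # assumed nuclear-recoil threshold scale

def max_recoil(m_chi):
    """E_R^max = 2 mu^2 v_max^2 / m_N for elastic DM-nucleus scattering (GeV)."""
    mu = m_chi * m_Si / (m_chi + m_Si)   # reduced mass in GeV
    return 2.0 * mu**2 * v_max**2 / m_Si

for m_chi in (0.01, 0.1, 1.0, 10.0):     # DM masses in GeV
    e_max = max_recoil(m_chi)
    verdict = "above" if e_max > E_threshold else "below"
    print(f"m_chi = {m_chi:5.2f} GeV  E_R^max = {e_max / KEV:8.4f} keV  "
          f"({verdict} the assumed threshold)")
```

For sub-GeV masses the maximum recoil on a heavy target drops to the eV scale, which is the kinematic wall that the evaporated solar population discussed next is able to circumvent.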
However, as I will demonstrate in this paper, it is possible to probe lighter DM masses with current detectors and energy thresholds due to a flux of DM particles that, after being captured by the sun, leak out via evaporation. These particles can arrive on earth with energies that are high enough to produce a detectable recoil. Therefore, not only is it possible to probe relatively lighter DM, but additionally the spectrum has features that distinguish it clearly from heavier DM candidates. The possibility of detecting on earth DM particles that have been captured by the sun has been studied in the past [1,2]. In these two seminal papers, the flux of DM particles bound to the sun and having orbits that can reach the earth was estimated. Damour and Krauss considered particles that have been captured by the outer layers of the sun and, due to perturbations from other planets, ended up in orbits that evolved to elliptical ones that do not cross the sun anymore and therefore do not lose further energy. Particles in these orbits can accumulate for billions of years and, since some of the orbits can reach the earth, these particles are potentially detectable. Additionally, this scenario has been studied numerically [3,4]. It should be emphasized here that the present paper studies a fundamentally different scenario. Instead of looking at loosely bound DM particles that have orbits that cross the earth, I focus on light particles that have had the time to thermalize with nuclear matter inside the sun. The tail of the distribution of these particles corresponds to velocities above the escape velocity of the sun, and this is exactly the spectrum of particles that I consider here. Generally, the number of DM particles in the sun N is determined by Eq. (1), dN/dt = F − C e N − C a N², where F is the capture rate, and C e,a are coefficients related to the evaporation and annihilation of DM respectively. From the above equation it is clear that if C e² >> C a F, and C e −1 is much smaller than the age of the solar system, evaporation dominates the whole process and an effective equilibrium between the accretion rate F and evaporation has been established by now. In other words, if the above conditions are satisfied, DM particles leak out from the sun at the same rate as they are captured. Let us first estimate the capture rate [5][6][7][8]. The capture rate F is given by Eq. (2), in which M⊙ and R⊙ are the mass and the radius of the sun, v 0 the velocity dispersion of DM in our galaxy, and ρ dm and m χ the local DM density and the DM mass respectively. The sum runs over all different chemical elements present in the sun. I am going to consider for simplicity only hydrogen and helium. E i is the maximum energy per DM mass that can lead to a capture due to a collision of DM with element i; it is set by the average fraction of energy that the DM particle loses after colliding with a nucleus N i . Note that for energy larger than E i , even if the particle scatters inside the sun, it will lose on average an amount of energy that is not sufficient to bind the particle gravitationally to the sun. Finally, f i represents the probability that scattering will take place.
where p is the mass fraction of hydrogen in the sun that is taken to be 0.75.For helium, f He = 0.89 × 4 He σ p /σ crit , if 4 He σ p /σ crit < 1 and 1 if 4 He σ p /σ crit > 1, where He = 0.24.σ crit = m p R 2 /M 4 × 10 −36 cm 2 is roughly speaking the cross section above which every particle that will cross the sun will scatter.Note that everything is expressed in terms of the DM-proton cross section σ p .I consider spinindependent interactions and therefore the DM-helium cross section will be σ He = σ p (µ 2 He /µ 2 p )A 2 , where µ i corresponds to the reduced mass of DM with nucleus i and A = 4 for helium.Since I am interested in low DM mass, σ He 16σ p .Now let us focus on evaporation.This effect has been studied extensively in the case of the sun [9][10][11][12][13].Unless one assumes unreasonably high DM annihilation cross section, it has been shown [12,13] that for DM masses below ∼ 3.3 GeV, DM particles get effectively evaporated out of the sun.In this case the steady state solution of Eq. (1) will give an equilibrium between captured and evaporated DM particles.Although the exact formula of C e has been estimated [12,13], it will not be needed here.As long as I consider particles below 3.3 GeV, the rate of evaporation will be equal to that of capture.Therefore the overall evaporation rate will be given by Eq. ( 2).Let us now determine the spectrum of the evaporating DM particles.In general there are two possibilities.If the DM-nucleon cross section is large and the mean free path of the DM particle small, the captured population of DM will thermalize fast with nuclei and the DM distribution will be a Maxwell-Boltzmann one with a DM temperature equal to the one of the star at a particular position.However, if the DM-nucleon cross section is small, captured DM interacts over several orbits and therefore there is no single temperature that picks up.The distribution is not a Maxwell-Boltzmann one, but it can be approximated as one although the DM "effective" temperature is different from that of the star [9,13].This approximate distribution should look like where T χ is the "effective" temperature of the DM, and E(v, r) = m χ v 2 /2 + m χ V (r) is the total energy, V (r) being the gravitational potential as a function of r inside the sun.I would like to estimate the spectrum of evaporating DM particles now.From this point of view, the details of the spatial dependence of the distribution are irrelevant since I consider particles that are at distance R from the center of the sun with a velocity higher than the escape velocity of the sun.Therefore the spectrum of evaporating DM particles will be given by where A is a constant to be determined and v s = (2GM /R ) 1/2 is the escape velocity from the surface of the sun.However, this spectrum of evaporating DM particles does not remain the same when the DM particles arrive on earth.Let us find the spectrum of velocities at the earth f (r, v) by using the collisionless Boltzmann equation Due to the isotropy of the problem, ∂f /∂θ = ∂f /∂φ = 0.I am interested in a steady state solution and therefore ∂f /∂t = 0. F r /m χ = −GM /r 2 + GM ⊕ /( − r) 2 is the force due to gravity from the sun and the earth, where r is the distance from the center of the sun, M ⊕ is the mass of the earth, and is the distance between the sun and the earth.Note that F θ = F φ = 0.The generic solution of Eq. ( 4) ).This solution should match the boundary distribution at the surface of the sun f (R , v) given by Eq. 
(3).Upon using this boundary condition, the distribution in earth is where v e = (2GM / + 2GM ⊕ /R ⊕ ) 1/2 and R ⊕ is the radius of the earth.I have omitted negligible terms of the order O(GM ⊕ / ).Note here that in Eq. ( 5), , where the values of v θ and v φ are equal to the ones at the boundary of r = R of Eq. ( 3), since Eq. ( 4) does not involve derivatives of them.DM particles that evaporated from the sun arrive in the earth almost radially.This is because the ratio v θ /v r of a particle arriving in the earth varies from 0 to a maximum value of R / 0.0046.The total flux of evaporating DM particles arriving in the earth is Recall that d 3 v = v 2 dvd cos θdφ and as mentioned above the solid angle integral part does not extend to the full 4π but it is constrained to the value mentioned above.The product of A with the angular integration part is determined by Eq. ( 6), thus leading to the following flux of evaporating DM on earth where the constant C is Let us now estimate the number of counts registered in an underground detector taking into account both the flux of evaporating DM and regular DM halo particles.The differential rate of counts per recoil energy is where N T is the number of targets in the detector, ρ χ = 0.3GeV/cm 3 is the local DM density, v esc = 550km/sec is the escape velocity of our galaxy and v min = m N E R /2µ 2 is the minimum velocity required to produce a recoil E R in a DM collision with a nucleus of mass m N (µ being the reduced mass between DM and nucleus).For the distribution f (v), a truncated Maxwell-Boltzmann function up to v esc of the form where N is a normalization constant [14,15].v b is the velocity of the earth with respect to the rest frame of the halo.The value used here is v b = (232 + 0.489 • 30) km/sec, which is the velocity of the solar system plus the rotational velocity of the earth around the sun (when the latter aligns maximally with the former).This value represents the best possible scenario for detecting halo DM particles.Note that the first term in the right hand side of Eq.( 9) corresponds to the evaporating DM particles that arrive in earth from the sun, while the second one is the usual rate from incoming DM halo particles.A crucial observation here is that although DM halo particles have an upper velocity of v esc , the evaporating ones can have any energy by paying a price in an exponential suppression in the density.However this fact has important consequences because for a given threshold in DM detectors, and since v < v esc , there is a mass below which DM particles from the halo can never have energies that can trigger the detector no matter how large the exposure is.On the contrary, for DM particles that have been captured first by the sun and later evaporated, it is probable to detect the tail of their distribution since there is no upper velocity, if enough exposure is achieved. 
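To see numerically how an unbounded high-velocity tail changes the detectable rate, the sketch below compares the velocity integral entering Eq. (9) for a halo-like speed distribution truncated at v_esc + v_b with that of an exponential tail that has no upper cutoff, standing in for the evaporated population. The speed distributions, the tail temperature scale, and all normalizations are simplifying assumptions made only for this illustration; the capture-rate prefactor, form factors, and detector response are deliberately omitted.

```python
# Minimal sketch: the mean-inverse-speed integral eta(v_min) vanishes for halo
# DM once v_min exceeds v_esc + v_b, but stays finite (though exponentially
# suppressed) for a thermal-like tail with no upper cutoff. Distributions and
# normalizations here are crude assumed stand-ins used only for illustration.

import numpy as np

C_LIGHT = 3.0e5                  # km/s
m_N, m_chi = 26.08, 1.0          # Si nucleus and example DM mass, GeV
mu = m_chi * m_N / (m_chi + m_N)

v0, v_esc, v_b = 220.0, 550.0, 247.0   # representative halo parameters, km/s
v_tail = 120.0                   # assumed thermal speed scale of the tail, km/s

def v_min(E_R_keV):
    """v_min = sqrt(m_N E_R / (2 mu^2)), returned in km/s."""
    return np.sqrt(m_N * (E_R_keV * 1e-6) / (2.0 * mu**2)) * C_LIGHT

def eta(vmin, v_grid, f_of_v):
    """Unnormalized eta(v_min) = sum over v > vmin of f(v)/v, Riemann sum."""
    integrand = np.where(v_grid > vmin, f_of_v / v_grid, 0.0)
    return np.sum(integrand) * (v_grid[1] - v_grid[0])

v_halo = np.linspace(1.0, v_esc + v_b, 4000)
f_halo = v_halo**2 * np.exp(-((v_halo - v_b) / v0) ** 2)   # crude boosted MB, hard cutoff
v_evap = np.linspace(1.0, 5000.0, 20000)
f_evap = v_evap**2 * np.exp(-(v_evap / v_tail) ** 2)       # unbounded exponential tail

for E_R in (0.1, 0.5, 1.0, 2.0, 5.0):                      # recoil energy, keV
    vm = v_min(E_R)
    print(f"E_R = {E_R:4.1f} keV  v_min = {vm:7.1f} km/s  "
          f"eta_halo = {eta(vm, v_halo, f_halo):.3e}  "
          f"eta_evap = {eta(vm, v_evap, f_evap):.3e}")
```

The halo integral drops to exactly zero once v_min crosses the cutoff, while the tail integral only falls off exponentially, which is the qualitative behaviour exploited in the rest of the paper.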
So far the value of the "effective" DM temperature T χ has not been specified.This has been estimated in e.g.[9,13] and it depends on the DM mass.However both papers gave results for DM masses above ∼ 2 GeV.In order to find T χ in much smaller masses of interest, I implement the method presented in [9].Although as mentioned earlier, the actual distribution of captured DM is not an exact Maxwell-Boltzmann, it can be approximated by such with a temperature T χ .T χ can be estimated by demanding no net flow of energy from the nuclei of the sun to the DM particles once a steady state has been achieved.The condition can be written [9] as where n p (r) and T (r) are the number density of nuclei and the temperature of the star at radius r respectively, v χ and v p are the velocities of DM and nuclei, and E is the total energy of DM (i.e.kinetic plus potential).σ p is the DM-proton cross section and ∆E is the energy exchange in a DM-nucleon collision.For simplicity I use a polytropic model of n = 3 as an approximation for the sun.If φ(ξ) is the solution of the n = 3 Lane-Emden equation, where ξ represents a dimensionless radius defined as ξ = ξ 1 (r/R ), ξ 1 = 6.8968486 being the first zero of φ(ξ).Eq. ( 10) can be rewritten in terms of the dimensionless quantities τ = T χ /T c (T c being the core temperature of the sun) and ν = m χ /m p as Apart from a minor typo, the above equation is the same as that derived in [9].For every considered DM mass, I have solved numerically Eq. ( 11) in order to find the corresponding T χ .Fig. 1 shows some characteristic results that demonstrate why the spectrum of evaporating DM particles can enhance the chances for direct detection of light DM particles.It shows the rate of counts per recoil energy in bins of 0.1 keV, normalized to an exposure of 1 Kg•day, for a Si detector like the one used in DAMIC.I quote DAMIC here because it is one of the experiments with the lowest recoil energy threshold.I have used a flat efficiency of 0.17 for the detector, deduced from [16].Fig. 1 corresponds to DM-proton cross section of σ p = 10 −36 cm 2 .For smaller cross sections, the rate of counts can be obtained by scaling the evaporating lines by (σ p /σ 36 ) 2 and the halo one by σ p /σ 36 , where σ 36 = 10 −36 cm 2 .For the evaporating particles, one power of σ p comes from the capture rate in the sun and one from the detection in the earth.I assume contact spin-independent interactions (using a Helm form factor) in this paper, although the generalization for spin-dependent is trivial.In the plot three cases of DM masses are shown: 1 GeV, 100 MeV and 10 MeV.One can see that halo DM particles roughly below 1 GeV cannot be detected.The reason is that light DM particles do not have enough energy to create recoil energies above the threshold.Note that the number of counts for the halo DM in the plot, correspond to the best detection scenario, i.e. the period of the year where the earth rotational velocity around the sun aligns maximally with the sun velocity in the rest frame of the halo.On the contrary, light DM that has been captured by the sun and has evaporated after thermalization, have no upper bound (apart from the speed of light).Although high velocities are exponentially suppressed, depending on the detector exposure, such particles can be detected.Fig. 1 shows that slightly lower thresholds in direct detection can set limits down to DM masses of 10 MeV.This is impossible for halo DM particles that have no chance to be detected even with a threshold of order of eV. 
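The polytropic profile used in this estimate can be generated directly. The sketch below integrates the n = 3 Lane-Emden equation and recovers the first zero ξ1 ≈ 6.8968486 quoted above; the steady-state condition of Eq. (11) that actually fixes T_χ is not reproduced here, so the snippet only supplies the dimensionless profile φ(ξ) that would enter it.

```python
# Sketch of the n = 3 Lane-Emden profile phi(xi) used for the solar model above.
# Integrates phi'' + (2/xi) phi' + phi^3 = 0 with phi(0) = 1, phi'(0) = 0 and
# checks that the first zero lands near xi_1 = 6.8968486 quoted in the text.

import numpy as np
from scipy.integrate import solve_ivp

def lane_emden_n3(xi, y):
    phi, dphi = y
    return [dphi, -phi**3 - 2.0 * dphi / xi]

def first_zero(xi, y):          # terminate the integration when phi crosses zero
    return y[0]
first_zero.terminal = True
first_zero.direction = -1

sol = solve_ivp(lane_emden_n3, (1e-6, 10.0), [1.0, 0.0], events=first_zero,
                rtol=1e-10, atol=1e-12, dense_output=True)

xi_1 = sol.t_events[0][0]
print(f"first zero of phi: xi_1 = {xi_1:.7f} (text quotes 6.8968486)")

# Dimensionless temperature profile of the polytrope, T(r)/T_c = phi(xi) with
# xi = xi_1 * r / R_sun, which is what the T_chi condition integrates over.
for frac in (0.1, 0.25, 0.5, 0.9):
    print(f"r/R_sun = {frac:4.2f}  T/T_c = {sol.sol(frac * xi_1)[0]:.4f}")
```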
The spectrum of evaporating DM particles not only can probe/constrain parameter space that is not accessible by observing halo DM particles, but it can potentially determine accurately the DM mass and DMnucleon cross section.As it can be seen in Fig. 1 for the case of m = 1 GeV, there is a change in the shape of the rate of counts in the detector.For low recoil energies, halo DM particles dominate the counts.However the counts produced by halo DM particles drop sharply at the maximum recoil energy γ Si m(v esc +v b ) 2 (γ Si refers to the nucleus of Si and it is defined below Eq. 2).Above this specific value of recoil energy only evaporating DM particles contribute to the counts.Therefore this drop in the number of counts per energy can lead to the exact determination of the DM mass and the DM-nucleon cross section. Some comments are in order here.One should make sure that there is enough time for the DM particles to thermalize with the interior of the sun.The issue has been addressed in [7] where it was shown that the characteristic time scales are of order of a year or smaller (for the bulk of the DM orbits).The second comment is related to DM annihilation.The spectrum of evaporating DM particles shown here is valid whether one considers asymmetric DM or thermally produced symmetric DM.As it was argued in [13], for a DM mass of 3 GeV, the evaporation rate is equal to the annihilation one (with an annihilation cross section that of the weak interactions).For every 0.3 GeV below that mass value, the annihilation signal is suppressed by a factor of 100 compared to evaporation, practically eliminating the annihilation below ∼ 2 GeV.Therefore the spectrum of the evaporating particles predicted here does not depend on the nature (asymmetric or symmetric) of DM. I should also mention that the evaporating DM particles can create an annually modulated signal with a different phase from the one of the halo DM particles.Here, the modulation is due to the small changes in the distance of the earth to the sun between summer and winter.The perihelion (shortest distance) takes place around January 3 and the aphelion (largest distance) around July 4. The small fluctuation in the distance creates a fluctuation in the DM flux arriving on earth, thus the annual modulation.The largest signal should be expected around January 3. Therefore there is a phase difference by almost a π with respect to the annual modulation of the halo DM particles. Finally, one can study the same effect from evaporating DM particles from the earth.Although the capture rate of the earth is on average smaller by at least eight orders of magnitude, this can be counterbalanced by the fact that the flux is inversely proportional to the distance, thus earth is favored by a factor of ( /R ⊕ ) 2 .Additionally, the mass below which evaporation dominates is not much different from the case of the sun.Although σ crit for the earth might be a bit smaller compared to the sun, as well as the thermalization time slightly larger, the flux of evaporating DM particles is not significantly lower than the one coming from the sun.Despite this fact, there is a fundamental difference.The spectrum of the evaporating DM from the earth is dominated by the velocities close to the earth's escape velocity ∼ 11 km/sec, which is small to create significant recoil for light DM.This is why I do not examine evaporating DM from the earth here. The author is supported by the Danish National Research Foundation, Grant No. DNRF90. FIG. 
1: Number of counts per 0.1 keVee recoil energy normalized to an exposure of 1 Kg•day for a Si detector such as the one used by DAMIC, for a DM-proton cross section of σp = 10 −36 cm 2 . A flat efficiency of 0.17 for the detector at low masses has been used. The dashed line corresponds to halo DM counts for a DM mass of 1 GeV. The solid lines correspond to counts coming from DM evaporating from the sun with masses 1 GeV (thick solid), 100 MeV (medium solid) and 10 MeV (thin solid). Note that for DM masses of 10 and 100 MeV, halo DM particles do not have sufficient energy to recoil within the range shown in the plot. The vertical line represents the current threshold of DAMIC, i.e. 40 eVee.
Spin–Orbit Torque and Current‐Driven Switching in Pt100‐yTby/Co/AlOx Trilayers To decrease the energy consumption for the electrical manipulation of magnetization, the enhancement of the spin Hall effect through alloying is widely investigated, but the use of rare earth elements is rarely mentioned. This work reports the modification of the spin Hall effect on Pt by doping rare earth Tb atoms. The spin–orbit torque (SOT) performance is significantly enhanced in Pt100‐yTby alloyed heavy metal (HM) layer. Compared with the Tb‐free sample, the damping‐like effective field per unit current density increases to 1.9 times in the samples with Tb content between 5% and 10%. The critical current density for magnetization reversal is greatly reduced by 65% in a device with Pt87Tb13 HM layer and the in‐plane assistant field as small as ±20 Oe is sufficient for the deterministic switching in the same device. By magneto‐optical Kerr effect imaging, it is confirmed that the increased in‐plane field can effectively compensate the Dzyaloshinskii–Moriya interaction (DMI), which not only helps to reduce the critical current, but also facilitates the domain wall motion and is beneficial for the switching process. All results show that the Pt‐Tb alloy is a competitive candidate for low‐power spintronic devices. Introduction 3][4][5] DOI: 10.1002/aelm.[11] Indeed, both the spin Hall effect and the interfacial Rashba-Edelstein effect are spinorbit coupling (SOC) correlated phenomena, which makes the SOT limited to materials with sizable SOC effects like Pt, W, Ta, etc. [12] Many efforts have been paid to improve the generation efficiency of spin current such as the exploration of topological insulators as the spin source, [13][14] modulation of the interface, [15][16] assembling different HMs on both sides of the FM layer, [17][18] alloying of the HM layer [19][20][21] and so on.[24] Ueda et al. [25] reported an enhancement of SOTs originating from the Co/Gd interface in Pt/Co/Gd heterostructure; an enhanced SOT was also reported in Pt/[Co/Ni] 2 /Co/Tb system generated by the Co/Tb interface; [26] L. Liu et al. [27] discussed the thickness dependence of Ho on the interfacial Dzyaloshinskii-Moriya interaction (DMI) in Pt/Co/Ho multilayers; Rare earth-transition metal (RE-TM) alloys [28][29] like TbCo provide an ideal platform to explore SOTs in ferrimagnets.Most previous studies of RE in spintronics focused on the RE/FM interface or RE-TM ferrimagnets, but the modulation of SHE on the heavy metal layer by doping RE is rarely mentioned.Due to the partially filled 4f band, Tb element exhibits large orbital and spin angular momentum, resulting in a large moment per atom and strong spin-orbit coupling.So Tb doping contributes to enhancing the bulk scattering caused by magnetic impurities. [30]Additionally, compared to the dopant of 3d, 4d, or 5d metal, the effect of the 4f electrons on the HM layer has not yet been clearly proposed.It is reported that the contribution of 4f electrons to the SHE is influenced by the proximity degree of the 4f level to the Fermi level F and the method of alloying makes it possible to artificially modulate the proximity. 
[22] In this work, we investigate the SOT performance in Pt-Tb alloys with different compositions. The current-induced effective fields and SOT efficiency have been quantified by the harmonic Hall voltage method. Deterministic current-induced switching is realized in all the samples, and a small in-plane field of 20 Oe is sufficient for the deterministic switching. A significant reduction in critical current density is observed in the Pt 87 Tb 13 HM layer. The magnetization switching process is analyzed via magneto-optical Kerr effect (MOKE) microscopy. Our work demonstrates an effective improvement of SOT performance in Pt 100-y Tb y heavy metals.
Results and Discussion
Figure 1a gives the schematic diagrams of the electrical transport measurements and the film structure. Due to the wiring pattern, the anomalous Hall resistance (AHR) loops have a clockwise switching polarity as shown in Figure 1b (all following electrical tests follow the same polarity). The fairly square AHR loops indicate a desirable perpendicular magnetic anisotropy (PMA) in all the samples. The robust PMA is further verified by the effective anisotropy field H keff shown in Figure 1d, which is obtained by measuring the dependence of AHR on the in-plane field H x [Figure 1c] and fitting it according to the Stoner-Wohlfarth model. [31][32] The H keff value exhibits fluctuations with the Tb content, yet it remains consistently substantial overall, above 11 000 Oe. This strong PMA can be largely ascribed to the ultrathin Pt insertion of 0.55 nm. The coercive field H c extracted from the AHR loops shows a similar trend to H keff . Figure 1e gives the resistivity ρ of the Pt 100-y Tb y alloy layer. The ρ of the pure Pt layer is measured to be 33.9 μΩ cm, close to the previously reported values. [21] With increasing Tb content, ρ increases monotonically from 33.9 to 128.6 μΩ cm, implying a good dispersion of Tb atoms in the Pt matrix and that no phase transition occurs. [33] Figure 1f gives the saturation magnetization for the different samples, and a minimum M s of 877 kA m −1 is obtained at y = 10. The variation of M s may arise from the formation of a dead layer, which is relevant to the oxidation of the FM surface during the deposition of the AlO x layer or the diffusion of the FM and HM layers after annealing. [34] In consequence, the variation of M s together with the shunting effect of the large resistance gives rise to the gradually increased R H amplitude shown in Figure 1b, while the similar R H amplitudes in samples with y = 17 and 20 may originate from the interfacial spin-orbit coupling at the HM/FM interface. [35] Overall, all samples show suitable performance for further SOT measurements.
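As a rough illustration of how H_keff is extracted from such data, the sketch below fits a small-tilt Stoner-Wohlfarth form, R_H(H_x) ≈ R_H(0)[1 − H_x²/(2 H_keff²)], to synthetic anomalous Hall data. This parabolic form is an assumed stand-in, since the exact fitting expression used by the authors is not reproduced in the extracted text, and the numbers below are placeholders rather than measured values.

```python
# Hedged illustration of an H_keff fit to R_H(Hx) data. The small-tilt form
# R_H(Hx) ~ R_H0 * (1 - Hx^2 / (2 H_keff^2)) is an assumption valid only for
# Hx << H_keff; the synthetic data below are placeholders, not measurements.

import numpy as np
from scipy.optimize import curve_fit

def rh_small_tilt(hx, rh0, h_keff):
    return rh0 * (1.0 - hx**2 / (2.0 * h_keff**2))

hx = np.linspace(-3000.0, 3000.0, 61)                  # applied in-plane field, Oe
rng = np.random.default_rng(0)
rh_data = rh_small_tilt(hx, 1.95, 11500.0) + rng.normal(0.0, 2e-3, hx.size)

popt, pcov = curve_fit(rh_small_tilt, hx, rh_data, p0=[2.0, 10000.0])
print(f"fitted R_H(0) = {popt[0]:.3f} ohm, H_keff = {popt[1]:.0f} Oe")
```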
The harmonic Hall voltage measurement [36] was employed to measure the current-induced damping-like (H DL ) and field-like (H FL ) effective fields.With a sinusoidal current injected into the Hall bar along the x-axis, the first harmonic V 1f (in-phase) and second-harmonic V 2f (out-of-phase) Hall voltages are simultaneously detected as functions of an in-plane field.For the measurement of H DL , the in-plane field is swept along the x-axis (longitudinal) while it turns to the y-axis (transverse) for that of H FL .The values of H DL and H FL are derived from the following equations: [36] H In Equation ( 2), ± corresponds to the situation of M z > 0 or M z < 0, respectively.The notation r represents the ratio of the planar Hall resistance (PHR) to the AHR and its value for different samples varies from 0.07 to 0.114.Figure 2a,b gives the dependence of V 1f and V 2f on the in-plane field H x or H y in the y = 17 sample.As expected, [17,37] V 1f is quadratically related to H x or H y , while V 2f varies linearly with respect to them.The slopes of two V 2f curves for M z > 0 and M z < 0 are the same in the case of H x , whereas the signs of them become opposite when sweeping H y .Figure 2c,d plots the variations of H DL and H FL with the current density J of HM layer in the sample with y = 17.It is clear that both the H DL and H FL vary linearly with J, suggesting that the influence of Joule heating is negligible within this current range. [17,38]The signs of H DL and H FL indicate the directions of the effective fields, and it is found that the direction of H FL is independent of the magnetized state while that of H DL reverses with switching the magnetization.[41] H DL(FL) per unit current density is plotted as a function of Tb content in Figure 2e.For the pure-Pt sample, the value of H DL /J is about 2.85 Oe per 10 10 A m −2 , approximating the results previously reported. [17,42]After doping with Tb atoms, H DL /J rapidly increases to 5.4 Oe per 10 10 A m −2 , which is almost 1.9 times higher than that of the pure Pt, and remains at the same level for y = 5-10.Then a continual decline emerges with further increasing the Tb content.The variation of H DL /J with the Tb content is related to many factors and it can be explained by referring to the Equation ( 5): [19,21] where T int , SH , and 0 denote the interfacial spin transparency, spin Hall conductivity, resistivity of HM layer, and permeability of the vacuum, respectively.M s and t FM are the saturation magnetization and thickness of Co layer.Since the reduction of T int can be inhibited by inserting a thin Pt layer [21] due to the alloying of the HM layer, it can be assumed that T int is not the major factor causing changes in H DL /J.Due to the band structure origin of Berry curvature, which is crucial for SOT, Tb doping is thought to lead to a decrease in SH . [43]The influences of and M s on H DL /J as well as their variations with Tb content are clear from the Equation (5) and Figure 1.So at a low Tb content, the slightly reduced SH is compensated by the combined effect of the increased and the reduced M s , resulting in the increase of H DL /J.After y > 10, the decrease of SH is further exacerbated, which, together with the rise of M s , makes the loss of H DL /J no longer be compensated, leading to the continual decline in H DL /J.In Figure 2e, the small and negative H FL /J is consistent with the previously reported H FL originating from the spin Hall effect (SHE) of Pt. 
[21,44] However, in the samples with y = 17 and 20, the signs of H FL become positive, while the signs of H DL are not affected. A reasonable explanation is that the origins of H DL and H FL are different in these two samples, with H DL stemming from the SHE of the HM layer and H FL stemming from the interfacial Rashba effect. As for whether the Rashba effect occurs at the HM/FM interface or at the Co/AlO x interface, [45][46][47] this still needs to be verified in future work. In connection with the R H amplitude shown in Figure 1b, it seems more likely that the Rashba effect occurs at the HM/FM interface. The spin Hall angle (SHA) shown in Figure 2f is derived from Equation (6). [48] For the pure Pt layer, θ SH is determined to be ∼0.08, comparable to the previous reports. [17,21,41] θ SH shows a similar variation trend to H DL /J, but due to the influence of M s , θ SH drops faster after reaching the maximum value of 0.115 at y = 5. Since enhancement of SOT by doping relies on the intrinsic SOC properties of the matrix, [19,21] this result implies that Tb doping is more destructive to the original Pt band structure, compared to doping elements which reach the optimal doping at a higher composition ratio, such as Cu, [49] Au, [19] and Pd. [43] Compared to the SOT efficiency, the current-induced magnetization manipulation depends on more comprehensive factors. The influence of Pt 100-y Tb y composition on current-driven magnetization switching is investigated. Deterministic switching can be realized in all the samples with the assistance of a non-zero H x , as shown in Figure 3a. Under a −600 Oe in-plane field, the required critical current density J c (J c is defined as the current density of the Pt-Tb layer where R H changes its sign) exhibits a trend of first falling and then increasing with increasing Tb content. From Figure 3b, a minimum J c of about 1.4 × 10 11 A m −2 is obtained in the sample with y = 13, which is considerably reduced by about 65% compared to the Tb-free sample. Such a huge reduction in J c can be attributed to a combination of factors. First, J c is proportional to the saturation magnetization M s of the FM layer, [50][51] so samples with small M s (around y = 10) are conducive to attaining a low J c . The improvement of H DL /J (at y ≤ 10) significantly increases the efficiency of current-induced SOT generation, thereby further reducing J c . For a sample with a low coercive field H c (like Pt 87 Tb 13 ), the barrier to nucleation or domain wall motion (DWM) is correspondingly low. Under the assistance of the enhanced Joule heating effect that stems from the increased resistance of the HM layer, the barrier to switching is further reduced. Ultimately, these factors combine to create the minimum J c in the sample with y = 13 rather than in the sample with higher θ SH or H DL /J. Further investigation under different H x is carried out on the sample with y = 13. As can be seen from Figure 3c, deterministic switching by SOT is achieved over a wide range of fields, and the minimum H x can be as small as 20 Oe, which is beneficial for practical application. By flipping H x from −20 to 20 Oe, the switching polarity of the R H -J loop is also reversed, indicating that the opposite in-plane field can reverse the chirality of domain walls (DWs) as well as the associated current-driven DW motions, and ultimately results in the reversal of switching polarity. The critical current density J c extracted from the R H -J pulse loops is plotted as a function of H x in Figure 3d. Evidently, the increased
Overall, our results confirm that the Pt_87Tb_13 HM layer exhibits decent SOT performance, including a low critical current density and a small assist field, which helps reduce the energy consumption of the device.

To understand the switching behavior at the level of magnetic domains, polar MOKE imaging was employed. Figures 4 and 5 give the switching loops and the corresponding MOKE images for switching achieved at H_x = −600 and −50 Oe in the sample with y = 13. Prior to every test, the device was positively magnetized by a saturating field. The gray and dark areas in the MOKE images denote the upward- and downward-magnetized domains, respectively. Under a field of −600 Oe, both the up-to-down and down-to-up switching processes complete quickly. The reversed domains, as shown in Figure 4b,c, nucleate at defects or boundaries and then rapidly expand throughout the whole device. The DMI originating from inversion symmetry breaking can stabilize chiral Néel-type DWs and has important effects on DW motion during switching. [52] The DMI effective field in this sample is determined to be about 400 Oe through the loop-shift method [53] (see S1, Supporting Information). Apparently, a −600 Oe in-plane field is sufficient to overcome the DMI effective field and break the chirality of the DWs, thereby leading to straightforward domain expansion, as depicted in Figure 4b,c. In the case of H_x = −50 Oe, however, the DMI is not fully compensated, so the switching behavior and DW motion differ significantly. The nucleation of domains [square in Figure 5b] and the pinning of DWs [circles in Figure 5b,d] show a higher sensitivity to defects. In particular, the rightward DW motion in Figure 5b is more likely to be pinned by defects, or needs to be driven by a higher pulse current, during the up-to-down process. Tilted DWs, marked by white arrows in Figure 5b,d, are clearly visible and can be taken as evidence of the competition between the DMI and the external field. [54] To compensate the DMI effective field, the expansion of domains requires the formation of tilted DWs with large tilt angles. [55] In addition, more annihilation of residual domains can be observed near the end of switching. Figure 5c shows that after the up-to-down switching is completed, the downward domains (dark) at the right end of the Hall bar partially flip back to the upward state (gray) as the current sweeps back from −38 mA. Note that this flip is not reflected in the R_H-I_pulse loop because of its large distance from the voltage channel. According to Ref. [55], the flip in Figure 5c can be explained as follows: when the dark domains expand to the right edge of the Hall bar, the DW tilting almost vanishes, so under the combined action of the uncompensated DMI effective field and the SOT of the large pulse current, the edge DWs [56] are displaced to the left, producing the flipped domains. Comparing the switching processes at H_x = −600 and −50 Oe, it is concluded that increasing H_x effectively compensates the DMI effective field, which not only reduces the required current but also makes the current-induced switching more deterministic and the DW motion steadier.
Conclusion

In summary, the modulation of the spin Hall effect in Pt has been investigated by doping Tb atoms in Ta (1 nm)/Pt_100-yTb_y (5 nm)/Pt (0.55 nm)/Co (0.7 nm)/AlO_x (2 nm) multilayers. Through the harmonic Hall voltage measurement, a significantly enhanced SOT performance is observed at low Tb content. When the Tb content is in the range of 5-10%, the damping-like effective field per unit current density increases to 1.9 times that of the Tb-free sample. The spin Hall angle reaches a maximum value of 0.115 at y = 5. For current-induced magnetization reversal, Tb doping yields a substantial 65% reduction in the critical current density J_c, with the minimum J_c obtained in the sample with y = 13. In addition, an in-plane field as small as ±20 Oe is sufficient for deterministic switching in the same Pt_87Tb_13 sample. All of these features help to reduce the energy consumption of the device and are well suited to industrial applications. The MOKE images confirm that increasing the in-plane assist field effectively compensates the DMI, which not only helps to reduce the current but also facilitates the motion of the domain walls and makes the current-induced switching steadier. Our study gives insight into the impact of doping RE atoms into the heavy metal Pt and offers a new option for low-power spintronic devices.

Experimental Section

All samples, with the structure substrate/Ta (1 nm)/Pt_100-yTb_y (5 nm)/Pt (0.55 nm)/Co (0.7 nm)/AlO_x (2 nm), were deposited on thermally oxidized Si substrates by DC/RF magnetron sputtering at room temperature. The base pressure was better than 4 × 10^-7 Torr and the working Ar pressure was kept at 2 mTorr during sputtering. The Pt_100-yTb_y alloy layer was deposited by co-sputtering from Pt and Tb targets, where y denotes the atomic percentage (%) of Tb; y was varied by controlling the deposition rate of Pt while that of Tb was kept at 0.1717 Å s^-1. The deposition rates of Ta, Co, and AlO_x were 0.3465, 0.223, and 0.2141 Å s^-1, respectively. After deposition, all films were annealed under high vacuum at 320 °C for 10 min to obtain a good perpendicular anisotropy. All devices used for electrical transport measurements were fabricated into Hall bars with current channels 20 μm wide and 200 μm long by standard ultraviolet lithography and lift-off methods. Ti (10 nm)/Au (80 nm) electrodes were then prepared in the same way to improve the electrical contact.

All transport measurements were carried out on a homemade platform, where a Keithley 6221 was used as the DC or AC current source and the voltage signal was detected by a Keithley 2182A nanovoltmeter or Stanford Research SR830 lock-in amplifiers. Spin-orbit torques were quantified by the harmonic Hall voltage method, with a low-frequency sinusoidal current of 133.33 Hz passed through the Hall bars. Current-induced magnetization switching loops were obtained by sweeping the pulse current with a pulse width of 1 ms; a MOKE microscope was used to characterize the magnetic domain distribution during this process. The resistance was measured by the 4-point method, and the resistivity of the Pt_100-yTb_y alloy layer was calculated according to the shunt model of a parallel circuit. The magnetic properties were measured with a vibrating sample magnetometer (VSM).
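The shunt-model extraction mentioned in the last paragraph treats each metallic layer as a parallel resistor, so the alloy's sheet conductance is the total minus that of the other conducting layers. The sketch below illustrates this; the measured resistance and the resistivities assigned to the Ta and Co layers are illustrative assumptions, not values reported in the paper.

```python
def alloy_resistivity(R_total, length, width, shunt_layers, t_alloy):
    """Infer the Pt(100-y)Tb(y) resistivity from the measured Hall-bar
    resistance via a parallel-resistor (shunt) model.

    R_total      : measured 4-point resistance of the stack (ohm)
    length, width: current-channel dimensions (m)
    shunt_layers : {name: (thickness_m, resistivity_ohm_m)} of the other layers
    t_alloy      : thickness of the Pt-Tb alloy layer (m)
    """
    sheet_conductance = length / (R_total * width)            # per square
    shunt = sum(t / rho for t, rho in shunt_layers.values())  # other layers
    g_alloy = sheet_conductance - shunt                       # alloy contribution
    return t_alloy / g_alloy                                  # ohm m

# Illustrative numbers only (NOT from the paper): 200 um x 20 um channel,
# assumed resistivities for the Ta seed and Co layers; the 0.55 nm Pt
# dusting layer could be lumped with the alloy or treated separately.
layers = {"Ta": (1e-9, 200e-8), "Co": (0.7e-9, 30e-8)}  # (m, ohm m)
rho = alloy_resistivity(R_total=1.5e3, length=200e-6, width=20e-6,
                        shunt_layers=layers, t_alloy=5e-9)
print(f"rho(Pt-Tb) ~ {rho * 1e8:.0f} uohm cm")
```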
Figure 1. a) Schematic diagrams of the electrical transport measurements and the structure of the stacks. b) The anomalous Hall resistance (R_H) loops as a function of the perpendicular magnetic field H_z for all devices. c) Normalized R_H versus in-plane field H_x for both the "up" and "down" magnetized states in the sample with y = 17. d-f) The variation of (d) the effective anisotropy field H_keff and the coercive field H_c, (e) the resistivity of the Pt_100-yTb_y layer, and (f) the saturation magnetization with the Tb content of the samples.

Figure 2. a,b) The harmonic Hall voltages V_1f(2f) versus (a) longitudinal (H_x) and (b) transverse (H_y) in-plane field for the sample with y = 17 under a sinusoidal current with an amplitude of 5 mA. The insets show the variation of V_2f, which lags behind the current by 90°. c,d) The dependence of the (c) damping-like (H_DL) and (d) field-like (H_FL) effective fields on the current density J flowing through the HM layer in the sample with y = 17. e,f) The variations of (e) H_DL(FL)/J and (f) the spin Hall angle with the Tb content.

Figure 3. a) Current-induced switching loops at H_x = −600 Oe and b) the corresponding critical current density J_c for different samples. c) Current-induced switching loops under various H_x in the y = 13 sample. d) The dependence of J_c on the in-plane field H_x in the y = 13 sample.

Figure 4. a) Current-induced magnetization switching loop at a −600 Oe in-plane field in the sample with y = 13. b,c) The corresponding MOKE images for the (b) "up to down" and (c) "down to up" switching processes.

Figure 5. a) Current-induced magnetization switching loop at a −50 Oe in-plane field in the sample with y = 13. b,d) The corresponding MOKE images for the (b) "up to down" and (d) "down to up" switching processes. c) The flipping process of domains at the right edge of the Hall bar after the "up to down" switching is completed, with the current reverting from −38 mA.
Universal access to essential medicines as part of the right to health: a cross-national comparison of national laws, medicines policies, and health system indicators

ABSTRACT

Background: Access to essential medicines for the world's poor and vulnerable has made little progress since 2000, except for a few specific medicines such as antiretrovirals for HIV/AIDS. Human rights principles written into national law can create a supportive environment for universal access to medicines; however, systematic research and policy guidance on this topic is lacking.

Objective: To examine how international human rights law and WHO's essential medicines policies are embedded in national law for medicines affordability and financing, and interpreted and implemented in practice to promote universal access to essential medicines.

Methods: This thesis consists of (1) a cross-national content analysis of 192 national constitutions, 71 national medicines policies, and legislation for universal health coverage (UHC) from 16 mostly low- and middle-income countries; (2) a case study of medicines litigation in Uruguay; and (3) a follow-up report of eight right to health indicators for access to medicines from 195 countries.

Results: Some, but not all, of the 12 principles from human rights law and WHO's policy are embedded in national UHC law and medicines policies (part 1). Even the most rights-compliant legislation for access to medicines is subject to the unique and inconsistent interpretation of domestic courts, which may be inconsistent with the right to health in international law (part 2). Many national health systems for which data were available still fail to meet the official targets for eight indicators of access to medicines (part 3).

Conclusions: International human rights law and WHO policy are embedded in national law for essential medicines and practically implemented in national health systems. Law makers can use these findings and the example texts in this thesis as a starting point for writing and monitoring governments' rights-based legal commitments for access to medicines. Future research should study the effect of national law on access to medicines and population health.

Keywords: Access to medicines; human rights; pharmaceutical policy; universal health coverage; national medicines policy; essential drugs; health financing; vulnerable populations; constitution; litigation

Background

This is a PhD Review that provides a synthesis of my doctoral thesis titled 'The right to health as the basis for universal access to essential medicines: A normative framework and practical examples for national law and policy'. An estimated 400 million people do not receive essential health services, including vaccines and medicines for modern family planning methods, antiretroviral therapy for HIV, and tuberculosis treatment [1]. Striking regional disparities exist for several of these basic health services: coverage rates are lowest in Sub-Saharan African and South Asian countries, where one-third of the world's population lives [1]. The Lancet Commission on Essential Medicines Policies identified five key barriers to universal access to essential medicines: medicines affordability, sustainable financing, medicines quality, optimal medicines use, and research and development of needed medicines [2]. This article focuses on medicines affordability and financing. Essential medicines are defined by the World Health Organization (WHO) as 'those that satisfy the priority healthcare needs of the population' [3].
Essential medicines are chosen considering local disease prevalence, efficacy, safety, and comparative cost-effectiveness [2]. Low- and middle-income countries (LMICs) often have a large proportion of financially vulnerable people who depend on government-subsidised essential medicines provided at public health facilities without charge or for a nominal fee [3,4]. However, frequent stock-outs in these facilities force patients to turn to the private sector, where medicines are available but often at a higher price [3,4]. For example, as much as 60% of households in low-income, 33% in lower-middle-income, and 25% in upper-middle-income countries could not afford four commonly used cardiovascular medicines sold in private pharmacies [5]. Unaffordable medicines confront households with potentially catastrophic health spending, or force families to forgo treatment at the expense of their health and possibly their livelihood [3,6]. Despite high-level political commitments to improving the affordability of medicines (e.g. for noncommunicable diseases), access to essential medicines for the world's poor has made little progress, except for a few medicines such as antiretrovirals [7,8].

To address these challenges, universal health coverage (UHC) is an initiative to broaden equitable access to financial protection and quality essential health care, such that 'all people have access to needed health services (including prevention, promotion, treatment, rehabilitation and palliation) of sufficient quality to be effective while also ensuring that the use of these services does not expose the user to financial hardship' [9]. UHC offers financial risk protection to all, including low-income households, by raising funds from pre-paid insurance (and sometimes government contributions) and reducing households' reliance on out-of-pocket expenditures [1]. As such, UHC is an important means for governments to make quality essential medicines available, accessible, and affordable to the vulnerable populations, particularly in LMICs, who currently lack access. Universal access to essential medicines is an integral part of the right to health and UHC, reflected in Sustainable Development Goal (SDG) 3 for health and SDG Target 3.8 [10].

Essential medicines as part of the right to health

Human rights have the potential to transform social, political, and legal norms for more equitable access to medicines [11]. The right to health is legally binding on the 169 national governments that have ratified the International Covenant on Economic, Social, and Cultural Rights (ICESCR). Consequently, these governments are legally obliged to protect and promote health rights in national law (a term I use to convey 'domestic law', meaning all law made by all levels of a government) and policy. The strength of each country's compliance varies depending on the standing of international law in the domestic legal order, and on the presence and content of national implementing legislation [12]. In 2000, General Comment No. 14, an authoritative interpretation of the right to health by the UN Committee on Economic, Social, and Cultural Rights, established that States party have the minimum 'core obligation' to provide essential medicines, as defined by WHO, with the maximum of available resources [13]. Core obligations are basic minimum standards that serve as the foundation of all other aspects of the right to health [14].
Legal provisions in national law that include right to health language can create a supportive environment for poor patients to claim government-subsidised essential medicines [15,16]. ('Legal provisions' means the legal language that is used to articulate underlying principles for access to medicines.)

National law as an intervention to promote access to medicines

National law is a powerful intervention that can promote equitable access to health services and financial coverage for the most vulnerable people [17]. WHO's David Clarke and colleagues explain that 'a strong legal framework sets the rules for how the health system functions, establishes a legal mandate for access to health services and provides the means by which a national government can implement universal health coverage at a population level' [17]. Indeed, national governments frequently use legal tools to shape health systems in response to their available resources and public health needs. Between 2011 and 2014, 70 countries sought WHO's advice on scaling up UHC [18].

Currently, little research investigates whether and how national law supports universal access to essential medicines as part of UHC. The first challenge is the absence of a reliable and up-to-date global repository of national health law. There are two pilot studies from 2010 of language supporting access to medicines in national constitutions and national legislation [19,20]. These studies were an important first step in exploring national law and medicines provision; however, their conclusions are limited by shortcomings in the search strategy, the small sample of four countries, and little recognition of the then-novel UHC concept. In 2017, WHO and its partners published the report Advancing the right to health: Vital role of the law, offering general legal guidance for Member States on a variety of public health laws, yet none of its sections comprehensively covers access to medicines [21]. In 2018, WHO Europe published a collection of medicines reimbursement policies with nine country case studies [22]. Although these are insightful descriptive comparisons of policies in practice, the document lacks a multi-dimensional critical analysis, including from the perspective of human rights [22]. This paucity of evidence and policy advice illustrates how emergent such legal studies are in the field of pharmaceutical policy research.

In addition, little is known about the degree to which national governments realise their human rights obligations to provide essential medicines to those who cannot provide for themselves. A human rights approach can employ indicators to monitor changes in health systems. Three global initiatives have sought to identify and, where possible, collect access to medicines indicators; however, none of these initiatives offers up-to-date measures of access to medicines as part of the right to health. In 2008, the then UN Special Rapporteur on the Right to Health reported on 72 right to health indicators (including eight indicators of access to medicines) in 194 health systems. These indicators have not been updated since 2008 [23]. The Millennium Development Goals (MDGs) and the SDGs provide robust data on national measles immunisation rates from many countries, while reporting on essential medicines availability from very few countries [24,25,26].
Drawing from the pool of all foregoing indicators, the Lancet Commission on Essential Medicines Policies proposed a set of indicators (published during the completion of this thesis) and later supported a call for a global accountability mechanism for monitoring access to essential medicines [3,27].

Aims

The question guiding this thesis is: how have international human rights law and WHO's essential medicines policies been embedded in national law and policy for medicines affordability and financing, and how have they been interpreted and implemented in practice to promote universal access to essential medicines? This thesis aims to offer first-ever insight into how access to medicines is framed in the legal provisions of national law and policy, and to explore tracer indicators that signal related impacts on access to medicines in health systems. 'National law' is understood to encompass the written, legally binding rules adopted by government institutions and agencies, whereas 'policies' are the non-binding documents or 'soft' policies, strategies, or plans of action. The central premise of this thesis is that national legal frameworks that include principles and language from international human rights law and WHO's essential medicines policies can remedy the widespread political indifference to attaining universal access to essential medicines. This thesis focuses on laws and policies that could secure access to medicines for poor people in mostly low- and middle-income countries.

The specific aims of this thesis are: (1) to collect and critically analyse the content of different types of national laws and policies ('legal architecture') related to medicines affordability and financing, against the global standards established in international human rights law and WHO's essential medicines policies; (2) to investigate how robust national laws promoting universal access to medicines (identified in aim 1) are interpreted in light of international human rights law by national courts in judicial decisions ('lawmaking'); and (3) to understand how national governments realise their right to health obligations for essential medicines by updating the 2008 UN Special Rapporteur on the Right to Health's report on eight right to health indicators of access to medicines in 195 health systems (health 'environment').

Conceptual framework

To illustrate why national law and policy, and health systems indicators, are relevant for understanding access to medicines and related health outcomes, this thesis employs a modified version of Scott Burris and colleagues' model of the effect of national law and policy on population health [28,29]. See Figure 1. In this model, legal architecture refers to national legislation and policy, understood to be the written and unwritten rules that can affect public health. In the context of access to medicines, 'legal architecture' may include domestic constitutions, legislation and regulation for universal health coverage and public health, and policies (e.g. a national medicines policy (NMP), a national health strategy or plan of action, etc.). The mediators in the model are the legal practices of actors (e.g. healthcare professionals, police, etc.) and institutions that give laws meaning and translate them into practice. For example, legal practices include the actions of physicians who give effect to generic substitution laws or policies by prescribing medicines by international non-proprietary name.
Legal practices can lead to two types of outputs: changes in health system performance (the environment) and changes in an individual's or a population's health behaviour. The environment refers to the physical surroundings, social structures, and institutions related to health, which are affected by fluctuations in the available resources (e.g. public funding for the provision of essential medicines can increase access to medicines), rights and obligations, and incentives and penalties (e.g. incentives for pharmacies to dispense generic medicines to patients) [29]. Changes in the health environment aim to achieve a number of goals, which, for essential medicines, include helping people access medicines and/or enforcing medicines-related laws and policies [29]. Health behaviour is either directly affected by legal practices or indirectly affected through changed environments, both of which make particular behaviours more or less attractive. For example, legislation that changes the health environment by requiring the government to subsidise the cost of essential medicines for patients could consequently also increase patient demand for these medicines, changing their health behaviour.

A feedback loop exists whereby changes in the environment influence lawmaking, understood as the activities of legal actors (e.g. the legislative and judicial branches, and policy makers in other areas of the health system) that result in written and unwritten rules. Lawmaking includes the actors and factors that determine which laws are adopted, and the characteristics and interpretation of those laws [29]. For example, national law (the legal architecture) requires the provision of government-funded essential medicines (the health environment), which may increase patient demand for those and other non-essential medicines (health behaviour). In Latin America, patients turn to the courts to request publicly funded medicines that are not currently accessible or affordable. In these cases, the domestic court interprets the original domestic law in light of constitutional rights and obligations to determine whether the patient's request for medicines should be granted, which is the process of lawmaking. The key outcomes in this model are changes in population health, such as rates of morbidity or mortality.

Methods

This thesis is a multidisciplinary inquiry drawing from the legal and health science disciplines. This synthesis consists of three parts: (1) a qualitative document analysis of different types of national laws and national medicines policies, (2) a case study of lawmaking in Uruguay, and (3) a quantitative report of right to health indicators for access to medicines from 195 countries. Methodological aspects of these three parts are summarised in Table 1.

Analytical frameworks

This thesis derives its analytical framework from the right to health as it is conceptualised in the ICESCR (article 12) and elaborated in General Comment No. 14. General Comment No. 14 is an authoritative interpretation of the right to health in the ICESCR; it is a non-binding document that instructs States on which aims and actions will realise their legal obligations in the ICESCR. Part 1 applies two analytical frameworks that are appropriate to the scope, content, and detail of the different types of law under investigation. Constitutions are analysed through the lens of the tripartite typology, where the State has a duty to respect, protect, and fulfil access to medicines as part of the right to health [13,37].
Respect and protect are negative duties to refrain from interference and to safeguard individuals from the actions of third parties, respectively. Fulfil is a positive duty to take steps to 'progressively realise' the right to health, such as through the provision of health facilities, goods, and services. Progressive realisation is an obligation to take deliberate, targeted, and concrete steps towards realising the right to health. The detail of the national medicines policies and UHC legislation in part 1 lends itself to a deeper analysis using the 12-point policy checklist previously developed by the author and three colleagues as part of this thesis [38,14]. See Table 2. This policy checklist is based on overlapping principles in WHO's policies for essential medicines and international human rights law that are important for access to medicines. Principles were drawn from WHO's essential medicines policies, specifically for medicines affordability and the financial protection of vulnerable groups [39,40,41,42,43], and from international human rights law in relation to States' obligations towards social or health rights, the core obligation to provide essential medicines, and/or rights related to good governance [12,40,41,42,43,44,45,46,47]. The normative development of this 12-point policy checklist is described elsewhere [38,14]. It serves as a global normative framework that is both a checklist and a wish list for evaluating State action to provide essential medicines.

In part 2, the analytical framework applies two core obligations under the right to health: the State duty to provide essential medicines, and the State duty to ensure access to health goods (including but not limited to diagnostics, devices, technologies, and medicines, which are the focus of this article) on a non-discriminatory basis [13,48]. In part 3, the analytical framework matches right to health principles with corresponding public health data to gauge the realisation of human rights standards in a population [49]. Each indicator corresponds to a specific State obligation under the right to health and has a global target, usually established by WHO. See Table 3.

Part 1: 'legal architecture' for access to medicines

Part 1 is a mapping study of national law and policy, and a cross-national comparative content analysis of the retrieved legal texts.

Table 1. Overview of studies and methodologies in this article.
- Framework of analysis, part 1: a) the right to health in the ICESCR and General Comment No. 14, specifically the government duties to respect, protect, and fulfil the provision of essential medicines; b-c) the 12-point policy checklist derived from international human rights law and WHO's essential medicines policies [38,14], described in Table 2.
- Framework of analysis, part 2: the right to health in the ICESCR and General Comment No. 14, specifically the core obligations to provide essential medicines on a non-discriminatory basis.
- Framework of analysis, part 3: the right to health in the ICESCR and General Comment No. 14, specifically the eight indicators described in Table 3.
- Data, primary: a-b) national constitutions and UHC legislation (sources described in the text); c) national essential medicines policies (structured online search described in [3]).
- Data, secondary: d) public per capita expenditure on pharmaceuticals [31,32,33]; e-f) availability of a basket of essential medicines in the public and private sectors [34]; g) proportion of children who had received two doses of a measles virus-containing vaccine (MCV2) [35]; h) proportion of children who had received three doses of a vaccine for diphtheria-tetanus-pertussis (DTP3) [36].
- Method of selecting legal provisions related to medicines: described in the text below.
The subject of study is written, formalised national law, which includes all available constitutions and UHC legislation related to medicines from a purposive sample of 16 countries. This study also compared the number and strength of the 12 principles in national medicines policies adopted before (n = 32) and after (n = 39) 1 January 2004. The year 2004 was selected as a cutoff because WHO's latest guidance document for developing a national medicines policy (2nd edition) was published in 2001, and the author estimated that a lag of two years (i.e. 2002-2003) would be reasonable for national governments to introduce principles from the 2001 document into the content of subsequent national policies. Associations were determined in SPSS version 25 using Pearson's chi-squared statistic, with significance set at p < 0.05.

English-language translations of constitutions were sourced from the Constitute Project [52]. In the absence of a global repository of national health law, a structured online search (i.e. a mapping exercise) was used. Electronic copies of the NMPs and UHC laws in the original language were crowdsourced. Multilingual research assistants, often working in multidisciplinary teams from the Faculties of Medical Sciences and Law, compiled in-depth, descriptive country profiles using a template that detailed the national demographics, health system structure and function, and legislation in relation to UHC and access to medicines. Research assistants fluent in the national language produced unofficial translations of legislation and policy when official English translations were not available. Translations of UHC law were peer reviewed by a second research assistant, except for Jordan and Turkey. At least one local pharmaceutical policy expert per country was consulted to verify the accuracy, relevance, and completeness of the primary sources and country profiles. No peer reviewers were located for Algeria or Nigeria in the author's and supervisors' broader network of access to medicines professionals.

Legal provisions were located by using explicit search terms and through a manual search of each document. Constitutional text was identified, categorised using the abbreviated framework of analysis, and reviewed by two pharmaceutical policy and human rights experts. For national policy and UHC legislation, two researchers coded the excerpts from national medicines policies and UHC legislation using the explicit definitions in the 12-point policy checklist. The researchers deliberated any coding differences until consensus was reached.

Part 2: 'lawmaking' for access to medicines through Uruguayan courts

Part 2 is a single-country case study of lawmaking by the Uruguayan appeals courts. It critically analyses whether and how human rights arguments are used by the Uruguayan judiciary in relation to patients' claims for access to specific publicly financed medicines. Uruguay was selected as a case study because the State has ratified the ICESCR, it has equity-based UHC legislation that includes a State obligation to provide, and an individual right to access, medicines on a positive reimbursement list (determined in part 1), and the right to health is justiciable in national courts. All retrievable writ of amparo cases claiming a pharmaceutical intervention and decided in 2015 were included in this study. A writ of amparo is a judicial procedure that individuals can use to claim that their fundamental constitutional right(s) are at immediate and significant risk.
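Returning briefly to the before/after-2004 comparison described above (run in SPSS in the original study), the snippet below sketches an equivalent Pearson chi-squared test in Python. The contingency counts are hypothetical placeholders, not the study's data.

```python
from scipy.stats import chi2_contingency

# Hypothetical 2x2 table: rows = policy adopted before/after 1 Jan 2004,
# columns = a given checklist principle absent/present in the policy.
table = [[20, 12],   # before 2004: 20 policies without, 12 with the principle
         [14, 25]]   # after 2004:  14 without, 25 with

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")  # association significant if p < 0.05
```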
Writ of amparo procedures usually need to be decided within one week of filing [48]. This selection offers a snapshot of legal interpretation by the courts at the peak of medicines litigation in Uruguay and in the period immediately following legal reform designed to curb medicines litigation. Cases were retrieved from the official Uruguayan national judiciary online databank (keywords 'acceso' and 'medicamento'). Key information from each case was extracted for analysis, including the facts of the case (i.e. the medicine claimed, indication, reimbursement status, etc.), the relevant laws and rights invoked in the case, and the legal arguments in the court's decision.

Part 3: access to medicines indicators in the health system 'environment'

Part 3 is a follow-up report, covering 195 countries, of the eight right to health indicators that were chosen by the UN Special Rapporteur on the Right to Health to reflect access to essential medicines [23]. The eight right to health indicators are listed in Table 3. In addition, this study examines the feasibility of using these indicators as dependent variables to evaluate the impact of national law and policy. Data for 195 countries were collected through systematic online searches and authoritative secondary online datasets. This article compares the median achievements of countries, grouped by income economy (World Bank definition), on each indicator against the established global targets (see Table 3).

Table 3. Eight right to health indicators of access to medicines (data source: [48]). The table lists, for each indicator, the type of indicator, the right to health indicator [23], and the corresponding human rights principle in General Comment No. 14 [13].

Country-level achievements on all eight indicators and historical trends (between the original 2008 report and this follow-up report) are reported elsewhere [53].

Results

Part 1: 'legal architecture' for access to medicines

Example texts from national law and national medicines policies that express human rights duties to provide essential medicines in legal language are provided elsewhere [37,38].

Constitutions

Twenty-two constitutions included the duty to protect and/or to fulfil access to essential medicines; these commitments were not mutually exclusive. The duty to protect requires the State to safeguard individuals from possible deleterious actions of third parties. In the case of medicines, 14/192 national constitutions required governments to regulate pharmaceuticals or monitor their quality, and/or to ensure that access to medicines is not restricted by international trade agreements and commercial rights. Thirteen of 192 national constitutions embed a State obligation to provide medicines, vaccinations, and/or essential goods (the duty to fulfil) as part of the right to health. No constitution obliges the State to respect the provision of essential medicines (respecting essential medicines could entail, for example, the government not interfering with access to specific classes of medicines such as contraception or medicines for medical abortion) [54]. Full results are reported elsewhere [37].

National legislation for universal health coverage

One hundred UHC and medicines-related laws were retrieved for 16 countries. The principles most frequently legalised in national law were pooling user contributions (in legislation from 68.8% of countries studied), accountability (56.3%), the right to health (56.3%), financial protection of vulnerable populations (56.3%), and State obligation (50.0%). See Table 2.
The least common principles were transparency (in legislation from 19.8% of countries studied), participation (19.8%), monitoring (12.5%), and international assistance and cooperation (6.3%). Overall, the UHC legislation of Colombia, Chile, Mexico, and the Philippines codifies the most principles for access to medicines. Three trends for access to medicines were more common, although not significantly so, in the legislation of upper-middle- and high-income countries than in the low- and lower-middle-income country samples. These trends are: (a) affluent countries (i.e. upper-middle- and high-income countries) embed explicit individual rights and State obligations regarding medicines in national law; (b) affluent countries establish in law clear boundaries to these entitlements and obligations; and (c) affluent countries codify mechanisms for accountability and redress in national law. A full explanation of these observations is available elsewhere [56]. These trends generate hypotheses to be tested in a larger sample of countries.

Part 2: 'lawmaking' for access to medicines through Uruguayan courts

Of the 42 claims included in this study, 31 (74%) were decided in favour of the claimant (usually the patient); 34 claims (81%) concerned 10 medicines; and eight claims (19%) successfully acquired the non-reimbursed medicines cetuximab, lenalidomide, and sorafenib. Interestingly, these medicines were explicitly excluded from the national medicines formulary by Ministerial Order 86/2015 because they are cost-ineffective for their indications. Complete results are reported elsewhere [48].

In the judicial decisions in this sample, the courts inconsistently interpreted patients' rights and the State's legal obligations in line with the right to health. Two similar claims for cetuximab for the treatment of metastatic colon cancer illustrate this inconsistency. In the first case, Appeals Circuit/Court 7 found that a lack of cost-effectiveness did not justify denying reimbursement to a patient who could not otherwise afford the medicine (10 October 2015). Later, Appeals Circuit/Court 5 decided that excluding the medicine from reimbursement on economic grounds is consistent with the patient's right to health (3 November 2015). (NB: there are seven circuits/courts of appeal; medicines claims are randomly assigned to a circuit.) The courts' decisions to reimburse expensive medicines contradicted the national rules for medicines selection and financing (i.e. Ministerial Order 86/2015), and sometimes also the human rights principles in the ICESCR and General Comment No. 14.

Part 3: access to medicines indicators in the health system 'environment'

Only half of the expected data points were retrievable. No country reported data for all eight indicators; therefore, the denominator (the number of countries from which data could be retrieved) changes for each indicator reported below. Constitutional recognition of access to medicines was reported in part 1. By 2015, 123/157 countries (78%) had adopted an official national medicines policy and 107/173 countries (62%) had an essential medicines list. See Figure 2. The average national public spending on pharmaceuticals per capita ranged from US$0.51 in low-income countries to far higher levels in high-income countries. To determine whether government spending was sufficient, government reports were compared to the US$12.90 per capita annual minimum threshold for providing a basket of 201 essential medicines, as determined by the Lancet Commission on Essential Medicines Policies [3].
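A minimal sketch of how such a benchmark comparison can be made is shown below. Only the US$12.90 per capita threshold comes from the text above; the country names and spending figures are invented placeholders.

```python
THRESHOLD = 12.90  # US$ per capita per year for a basket of 201 essential medicines

# Hypothetical (country, public pharmaceutical spending per capita in US$) pairs
spending = {"Country A": 4.10, "Country B": 15.75, "Country C": 0.51}

for country, usd in spending.items():
    status = "meets" if usd >= THRESHOLD else "falls below"
    print(f"{country}: US${usd:.2f} per capita {status} the US$12.90 threshold")
```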
Spending was above the threshold in few low- and lower-middle-income countries (the exceptions being Afghanistan, Morocco, Iraq, and Tuvalu), and in most upper-middle- and high-income countries (the exceptions being Gabon and the Seychelles). The median availability of a selection of lowest-price generic medicines surveyed is slightly higher in private facilities than in public facilities. See Figure 3. Median availability rarely met the 80% global target, except in a republic of Russia (both sectors) and in the private sector of Afghanistan, Tajikistan, Sudan, and Boston, USA.

Figure 2. Proportion of countries with a constitution that recognises access to medicines, a national medicines policy, or a national essential medicines list.

Discussion

This article demonstrates that human rights principles and WHO policy have been embedded in national law and policy for essential medicines and practically implemented in national health systems. Part 1 presents the first systematic, cross-national content analyses of different legal documents relevant for access to medicines, and examples of essential medicines and human rights principles in national law and policy. Some, but not all, of the 12 policy points in the analytical framework are embedded in domestic UHC law and national medicines policies. This research presents innovative ideas for embedding language promoting access to medicines in national constitutions, medicines policies, and UHC legislation. National policy makers can use the example texts in this thesis (found in Annexes 1 and 2) as a starting point for designing national law and policy.

In part 2, the case study of Uruguay shows that even the most rights-compliant legislation for access to medicines (as determined in part 1) is subject to the unique and inconsistent interpretation of domestic courts. The decisions of national judiciaries can diverge, sometimes dramatically, from globally accepted interpretations of the right to health. These findings show that medicines litigation in Uruguay offers relief for some individual patients but fails to address the structural problems behind high medicines prices. More generally, this study illustrates that both the black-letter text of national law and domestic courts' interpretations of that law should be considered when determining the degree to which a legal system takes a human rights approach towards essential medicines.

Part 3 reports on States' achievement of their right to health obligations for essential medicines in 2015, at the dawn of the 2030 Agenda for Sustainable Development. Many countries for which data were available still fail to meet the official targets. These findings offer an updated reference point for measuring future achievements on essential medicines as part of the right to health under the SDGs. The challenges of monitoring access to medicines globally, highlighted during the tenure of the Millennium Development Goals, regrettably persist in the era of sustainable development [24,25]. This monitoring study helps move modern human rights practice beyond the tradition of 'naming, shaming, and litigating' rights violations towards real-time measuring and monitoring of rights realisation [57].

Theory and practice: the effect of national law and policy on access to medicines

To the author's knowledge, this thesis is currently the most robust academic endeavour to develop the evidence base for studying the effect of national law and policy on access to medicines in LMICs.
A large body of public health law implementation and evaluation research exists, albeit mostly in the US context [28,58,59,60,61,62]. These studies are often based on reliable online repositories of legislation and policy in English, implementation mechanisms described in scholarship and understood in practice, and robust datasets of outcome measures, all of which are commonly unavailable or underdeveloped in LMIC contexts. One of the most significant contributions of this thesis is to map and critically analyse different types of national legislation and policies related to access to medicines in LMICs, where there were previously little to no data readily available for analysis. Importantly, this 'policy mapping' exercise used transparent and reproducible methods while coding and describing the 12 policy measures being studied, providing data for future research. As suggested by Scott Burris and colleagues, future studies can use the legislation and policy presented in this article as an outcome in policymaking studies (i.e. to understand the determinants of the policymaking process) as well as an independent variable in evaluation research (i.e. to examine the impact of law) [28]. This thesis critically analysed the content of national law and policy using a framework that is comparable across jurisdictions, types of legal instruments, and time.

Part 2 of this study examines lawmaking by judges. It has shown that how judges understand international human rights law, and specifically the right to health, has important implications for how national law will be interpreted and, ultimately, whether patients will receive government-financed medicines.

The eight indicators of access to medicines in part 3 offer useful insight into their feasibility as dependent variables in future evaluation studies examining the impact of national law and policy. This study also demonstrates that reliable data from national health systems, a crucial ingredient for evaluation studies, are scarce in many countries. The question also arises of whether these eight indicators are the best proxy measures of changes in access to medicines in health systems. Further discussion about suitable indicators for evaluating access to medicines is available elsewhere [63].

Future research

One question not addressed by this study but worthy of future research is: what effect do rights-based legal provisions in national law and policy have on access to medicines in health systems and on broader population health outcomes? Scott Burris notes that comparative policy analysis in the context of public health is under-theorised, yet it has a rich diversity of policy aspects under study, implementation processes, and outcomes [29]. The field of access to medicines is no different. Implementation research is essential to understand how law and policy for access to medicines are translated into desired results, both in terms of the health system (i.e. financing and availability of medicines) and in terms of population health outcomes (i.e. morbidity and mortality rates). Piecemeal evidence suggests that a constitutional right to health may shape the 'institutional environment', leading to increased and better health service delivery, and that it may lead to increased public spending on healthcare [64,65]. However, there is a need for compelling and authoritative studies confirming a causal relationship between a constitutional or legal right to health, changes in the health system environment, and positive health outcomes.
Research should also investigate the mechanisms by which national law and policy achieve intermediate outcomes (i.e. better access to medicines) and population-level impacts.

Impact on policy

Currently, WHO's Guide for developing and implementing a national medicines policy (2001) omits any mention of a government obligation to provide essential medicines to those who cannot provide for themselves (a cornerstone of the right to health and the UHC concept). Moreover, WHO does not have any specific guidance for Member States on writing or reforming national legislation promoting universal access to medicines in UHC schemes, which is currently the subject of high-level political declarations and dominant debate in global health policy. In light of these gaps, WHO should develop modern guidance documents for the UHC era, using the 12-point policy checklist and the examples of legal text from this thesis as a starting point. Some of these examples translate the recommendations of the WHO Consultative Group on Equity and UHC for making fair choices on the path to UHC into legal provisions for national law [10]. This thesis may also assist WHO in developing model legislation for medicines reimbursement, which is goal 7 of WHO's 2016-2030 Medicines & Health Products Strategic Programme [11].

WHO should establish an online repository of national health law in order to centralise the results of these legal mapping exercises. In this repository, legal texts in their original language and translations into English or another UN language can be deposited and publicly consulted. To enhance human rights monitoring and reporting, Member States should self-report on two principles for access to medicines when they deposit legislation: (1) government financing of medicines for the poor and vulnerable groups, and (2) measures to control medicines prices (i.e. prioritising medicines reimbursement based on cost-effectiveness, and using pricing policies and/or TRIPS flexibilities). Indicator 1 ensures a government duty to provide essential medicines to those who cannot provide for themselves, the crux of 'core obligations' under the right to health. Indicator 2 can help set objective boundaries to the right to health, and protect against unreasonable patient requests and spurious litigation for high-priced medicines. A concise self-report is a quick snapshot of legal provisions; it is less laborious than an entire country profile and can be verified by consulting the laws in the repository.

National policy makers from the executive and legislative branches can perform their own monitoring exercise using the 12-point policy checklist to evaluate national law and policy for medicines. This exercise can identify gaps or weaknesses in existing law for improvement through future reform. The example legal texts in this thesis can be a source of inspiration for legislators writing or amending national law or policy. Moreover, national policy makers should adopt and finance a monitoring and reporting plan for indicators of access to medicines, in line with the recommendations in this study and from the Lancet Commission on Essential Medicines Policies. Areas of deficiency should lead to a documented plan for improvements within a set timeframe.

Domestic judges should familiarise themselves with the right to health and its interpretation in international human rights law, especially the concepts of essential medicines selection and progressive realisation (i.e.
there is no immediate right to all available treatments, but rather a duty to continuously and gradually expand access) [66]. Most importantly, domestic judges should interrogate whether the government has used the maximum of its available resources to provide the medicine(s) in question (part of the standard of reasonableness) before ordering the public to pay for them [14]. Such a reasonableness test should lead to enhanced government action to reduce medicines prices and to a more efficient use of public resources to finance medicines.

Strengths and limitations

This thesis is the first systematic enquiry into national law and policy for access to medicines in LMICs. Data collection drew on reputable repositories of constitutional and case law, and on secondary datasets from Health Action International's medicines pricing database and WHO/UNICEF, among others. National medicines policies and health legislation were collected using systematic online search and crowdsourcing methods, and a data extraction template. UHC legislation was collected from a purposive sample of 16 countries, which constitutes the largest known comparison of UHC legislation from LMICs. These steps minimised the risk of reporting bias. In summary, this study has assembled the most comprehensive collection of full-text national medicines policies and domestic health legislation for medicines from LMICs to date. Little data for indicators of access to medicines were available on government financing and on medicines availability in both sectors, or for countries in certain economic categories. Compared to the 2008 report of these indicators, this thesis uses the same methodology (in as far as possible) and reports data from more countries.

One significant strength of this study is its reliance on primary sources of national law and policy, which are more objective than the common practice of reporting key informants' own interpretations of legislation and policy. Using national law and policy in its original language raises the question of correct translation into English and of interpretation within the local legal context. To address these issues, multilingual and multidisciplinary research teams trained in law and medicine collected national laws and policies, and then extracted, translated, and analysed the relevant legal provisions. To ensure the correct interpretation of national law and policies, this study used an analytical framework based on global standards in WHO's essential medicines policies and international human rights law. These global standards establish clear, standard definitions and legal concepts that have the greatest chance of being consistent over time and across policy and legal instruments. Therefore, unless otherwise stated in the definitions section of national law and policy, it was reasonably assumed that the terminology and concepts unique to these global standards are broadly understood in the same way in national law and policy. Second, at least one national pharmaceutical policy expert per country (except for Algeria and Nigeria) was consulted to confirm that all relevant national law and medicines policies had been located and accurately understood.

Conclusion

International human rights law imparts important principles that are commonly embedded in the text of national law and policy for access to medicines and traceable through proxy indicators in health systems.
This thesis offers researchers and policy makers the tools and examples to translate human rights law and WHO's essential medicines policies into national legal commitments, and to monitor government actions for universal access to essential medicines as part of SDG Target 3.8. It can also inform WHO's future guidance on UHC and essential medicines. This thesis assembled the essential building blocks to study, and generated hypotheses to test, the effect of national law and policy on access to medicines and population health in LMICs.

Author contributions

This PhD Review article presents key findings from five articles authored by S. Katrina Perehudoff with Nikita V. Alexandrov, Lucía Berro Pizzarossa, José Castela Forte, Hans V. Hogerzeil, and Brigit Toebes.

Disclosure statement

No potential conflict of interest was reported by the author.

Ethics and consent

Not applicable.

Funding information

The research underlying this article was funded by the Department of Health Sciences at the University of Groningen. Funding to support the present article was received from the Dalla Lana School of Public Health at the University of Toronto. Outside of this employment-related financing, no external funding was received.

Paper context

Access to essential medicines is a crucial aspect of the right to health and of SDG Target 3.8 for universal health coverage. Few studies examine how national law and policy are written to support universal access to essential medicines. This article reports that human rights principles and WHO policy are embedded in national law and policy related to essential medicines, and practically implemented in health systems. WHO should develop guidance for national legal/policy reform promoting access to medicines.
Is the accuracy of individuals' survival beliefs associated with their knowledge of population life expectancy?

1 Universiteit Utrecht, the Netherlands. Email: A.S.Kalwij@uu.nl. 2 Munich Center for the Economics of Aging (MEA), Germany. Email: KutluKoc@mea.mpisoc.mpg.de.

BACKGROUND: On average, individuals underestimate their survival chances, which could yield suboptimal long-term decisions.

OBJECTIVE: Is the accuracy of individuals' survival beliefs associated with their knowledge of the population life expectancy of people of their age and gender?

METHODS: We use the 1995 and 1996 waves of the Dutch DNB Household Survey (DHS), with data on individuals' survival beliefs and their knowledge of population life expectancy, supplemented with death registry data for the years 1995 to 2018. The accuracy of their survival beliefs is measured by comparing these beliefs with (actual) survival during the years after the survey was conducted. We provide prima facie evidence on the association between individuals' knowledge of population life expectancy and the accuracy of their survival beliefs, and quantify this association using mortality risk models that control for socioeconomic status and health-related characteristics.

RESULTS: Individuals with only some over- or underestimation of population life expectancy had, on average, about a one-third smaller difference between their survival beliefs and survival rate than those who severely underestimated population life expectancy. In line with this prima facie evidence, we find that, after controlling for socioeconomic and health characteristics, 55-year-old individuals with one year of better knowledge of population life expectancy underestimated their remaining lifetime by, on average, about 0.3 years less (95% CI: 0.09-0.52).

CONTRIBUTION: We provide empirical evidence in support of the hypothesis that individuals with a better knowledge of population life expectancy have more accurate survival beliefs.

One strand of the literature has studied the underlying reasons for biases in survival beliefs empirically. For example, Grevenbrock et al. (2020) found that cognitive weakness, whereby people cannot distinguish well among the respective likelihoods of events, increases with age and explains the underestimation (overestimation) of survival beliefs at age 65 (85). Similarly, Bago D'Uva, O'Donnell, and van Doorslaer (2020) found that both men and women underestimate their subjective probability of living to age 75 compared to actual survival to that age, and that prediction errors are larger for the low-educated and those with low cognitive skills. Further, studies have shown that women, on average, underestimate their remaining lifetime more than men, and that smokers are too optimistic about their remaining lifetime (Bissonnette, Hurd, and Michaud 2017). Another strand of the literature has focused on the consequences of survival misperceptions for economic decision-making. According to this research, misperceptions can lead individuals to make suboptimal long-term decisions.
For instance, Bissonnette, Hurd, and Michaud (2017) showed that misperceptions of mortality risk can lead to welfare losses, and Groneck, Ludwig, and Zimper (2016) and Heimer, Myrseth, and Schoenle (2019) showed that pessimistic survival beliefs at younger ages and optimistic beliefs at older ages explain, respectively, undersaving before retirement and a slower rate of dissaving after retirement.

We contribute to the first strand of the literature, on the sources of inaccurate survival beliefs, by providing empirical evidence in support of the hypothesis that individuals with a better knowledge of population life expectancy (PLE) have more accurate survival beliefs. Insights into these sources are important for, e.g., implementing public policy aimed at alleviating potential negative consequences arising from inaccurate survival beliefs. Our findings suggest that informing individuals about population life expectancy could improve their long-term decisions insofar as these require knowledge of their survival chances.

Two previous studies suggest that respondents' knowledge of population life expectancy could be associated with their survival beliefs. Elder (2013) found that US respondents' uncertainty in their survival forecasts decreased after having received information on population survival rates. Steffen (2009) combined the elicited beliefs of German respondents on population life expectancy and on their own position relative to it, and his findings suggest that individuals' underprediction of their own remaining lifetime can be related to their underestimation of population life expectancy. Neither study, however, provided empirical evidence on the association between individuals' knowledge of population life expectancy and the accuracy of their survival beliefs.

Data and descriptive statistics

The raw data of the 1995 and 1996 waves of the DNB Household Survey (DHS) contain information on 9,415 individuals from 3,348 households. The DHS oversampled high-income households, and while this does not invalidate our empirical findings, it warrants caution when extending our conclusions to the general Dutch population. We refer to Alessie, Hochguertel, and van Soest (2002) for a detailed description of the DHS. For respondents who were in both the 1995 and 1996 waves, we used the 1995 responses to avoid a potential influence of repeated interviewing on response behavior (Lazarsfeld 1940; Sturgis, Allum, and Brunton-Smith 2009). The survey data was supplemented with administrative microdata from the causes-of-death registry, which contains the year of death of Dutch residents who died during the 1995-2018 period (CBS 2020). The largest reduction in sample size was due to the selection of about 20% of respondents, namely those who were aged 52-84 in 1995 or aged 53-84 in 1996. Only for these respondents could it be determined whether they died before the target age for which they provided subjective survival probabilities (see below). Item non-response caused a further sample reduction of about 30%. Our final estimation sample contains information on 1,273 respondents. Survival until a certain age is based on mortality information from the death registry covering the years 1995-2018. Respondents were followed from the year of interview (1995 or 1996) until the end of 2018 or until their death (whichever came first). During the period 1995-2018, 629 (49%) respondents died and their year of death was observed.
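To make the sample construction just described concrete, here is a minimal illustrative sketch (not the authors' code) of how the survey-registry linkage could be turned into the two quantities the analysis needs: membership in the estimation sample and survival to the target age. The field names (interview_year, age_at_interview, death_year, target_age) are assumptions for illustration.

```python
from typing import Optional

END_OF_FOLLOW_UP = 2018  # the death registry covers 1995-2018


def in_analysis_sample(interview_year: int, age_at_interview: int) -> bool:
    """Keep only respondents whose survival to the target age is observable by 2018."""
    if interview_year == 1995:
        return 52 <= age_at_interview <= 84
    if interview_year == 1996:
        return 53 <= age_at_interview <= 84
    return False


def survived_to_target(age_at_interview: int, interview_year: int,
                       target_age: int, death_year: Optional[int]) -> bool:
    """True if the respondent reached the target age.

    With the age selection above, the target age is reached before the end of
    follow-up, so a respondent who does not appear in the death registry
    (death_year is None) is treated as having survived to the target age.
    The age at death is approximated from the age at interview.
    """
    if death_year is None:
        return True
    age_at_death = age_at_interview + (death_year - interview_year)
    return age_at_death >= target_age
```

For example, under this sketch a respondent aged 60 in 1995 with target age 80 who died in 2010 would count as not having survived to the target age (approximate age at death 75).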
For our analysis, we compared individuals' actual survival to certain ages with the survival beliefs they held for those ages at the time they were surveyed. These beliefs were elicited with subjective survival probabilities (SSPs; Manski 2004) in the DHS using the survey question: What do you think the chances are that you will live to be T years of age or more? Here T ∈ {75, 80, 85, 90, 95, 100} is a target age that depends on the respondent's age at the time of the survey. Respondents aged 52-64 reported their SSPs to ages 75 and 80; those aged 65-69 reported their SSPs to ages 80 and 85; and those aged 70-74, 75-79, or 80-84 reported their SSPs to ages 85 and 90, 90 and 95, or 95 and 100, respectively. These responses were measured on a 10-point scale, from 0, "no chance at all," to 10, "absolutely certain." Following Hurd and McGarry (1995), we assumed that, after having divided these responses by 10, they could be interpreted as survival probabilities conditional on individuals having reached their current age. Further, following Perozek (2008), we replaced the reported probabilities 0 and 1 by 0.01 and 0.99, respectively, and when equal SSPs were reported, we added 0.05 to the SSP for the lowest target age and subtracted 0.05 from the SSP for the highest target age (about 30% of the cases). On average, respondents were 62 years of age at the time of the interview and answered the SSP question for a target age of 80 years. Hence, survival beliefs and the (actual) survival rate refer, on average, to surviving for at least 18 years. In line with the existing literature, the first three bars from the left of Figure 1 show that respondents, on average, underestimated their survival chances: respondents' survival beliefs were, on average, 22% (16pp) lower than their survival rate (56% vs. 72%).

[Figure 1: Respondents' survival beliefs and (actual) survival rate by degree of knowledge of population life expectancy. Notes: Survival belief and survival rate refer, on average, to surviving for at least another 18 years. With PLE = population life expectancy, DK-PLE = don't know PLE, and PLE-knowledge = knowledge of PLE (see Section 2), severe underestimation of population life expectancy is defined, for those with DK-PLE = 0, as PLE-knowledge < -8, average underestimation as -8 ≤ PLE-knowledge ≤ -6, and some under- or overestimation as PLE-knowledge ≥ -5 (see also Figure 2). Only about 2% of respondents overestimated population life expectancy, and this group is too small to be considered separately. The percentages are reported at the top of the bars.]
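As a concrete illustration of how the SSP responses described above can be converted into probabilities, the following is a minimal sketch (not the authors' code). The function and variable names are assumptions, and the order in which the end-point replacement and the tie-breaking adjustment are applied is not spelled out in the text, so applying the clamping last is also an assumption.

```python
def clean_ssp_pair(ssp_low_raw: int, ssp_high_raw: int) -> tuple[float, float]:
    """Convert two 0-10 SSP responses (lower and higher target age) into probabilities.

    Following the text: divide by 10; replace 0 and 1 by 0.01 and 0.99; and when the
    two SSPs are equal, add 0.05 to the SSP for the lower target age and subtract
    0.05 from the SSP for the higher target age. Clamping is applied last here so
    the adjusted values stay inside (0, 1); that ordering is an assumption.
    """
    low = ssp_low_raw / 10.0
    high = ssp_high_raw / 10.0
    if low == high:
        low += 0.05
        high -= 0.05
    low = min(max(low, 0.01), 0.99)
    high = min(max(high, 0.01), 0.99)
    return low, high


# Example: a respondent reports 6 out of 10 for both target ages 75 and 80.
low, high = clean_ssp_pair(6, 6)
print(round(low, 2), round(high, 2))  # 0.65 0.55
```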
The DHS survey also asked two questions about population life expectancy, at a certain age, in the Netherlands. The first question was: For people of your age and sex there is an average life expectancy. Do you have any idea what age people of your age and sex reach on average? The variable DK-PLE takes the value one if the respondent answered no to this question, and zero otherwise. The 73% of respondents who answered that they know about population life expectancy (DK-PLE = 0) were then asked a second question to elicit their knowledge of population life expectancy. The reported ages in this second question were compared to the corresponding actuarial population life expectancy from age- and gender-specific cohort life tables (Royal Dutch Actuarial Association 2019). These life tables contain actual mortality rates until 2012 and predicted mortality rates from 2012 until 2062. Based on these tables, all men in our sample were expected to live at least until the age of 81, while only 6% of the men in our sample, in answering the second question, gave 81 years or more as the population life expectancy. This latter percentage equaled 8% when the don't-knows were excluded (i.e., excluding those with DK-PLE = 1). Hence, most men believed the population life expectancy for men of their age to be lower than the actuarial one. For women, the finding is similar: based on the life tables, all women in our sample were expected to live until the ages of 85-90, while only 6% believed this to be the case (8% when excluding the don't-knows). For the empirical analysis, we used the difference between the reported and actuarial population life expectancy as a measure of respondents' knowledge of the population life expectancy of people of their age and gender (variable PLE-knowledge, measured in full years). There is considerable variation in the degree of underestimation of population life expectancy, which is, on average, about 6 years (Figure 2). Not reported in this figure is that less than 1% of the respondents had this age exactly right (PLE-knowledge = 0 when DK-PLE = 0).

In Figure 1, the three sets of bars on the right present prima facie evidence on the relation between PLE-knowledge and the accuracy of individuals' survival beliefs. For this figure only, we grouped PLE-knowledge into three categories (defined in the figure's notes). Respondents who answered that they have no knowledge of population life expectancy (DK-PLE = 1) on average reported a 23% lower survival belief than their survival rate, an underestimation about the same as that of respondents with average PLE-knowledge (penultimate set of bars). Compared to those who severely underestimated population life expectancy, respondents with some over- or underestimation of population life expectancy (last set of bars) had, on average, a smaller difference between their survival belief and survival rate (-17% vs. -26%). In sum, the main insight from Figure 1 is that individuals with a better knowledge of population life expectancy have, on average, more accurate survival beliefs.

Empirical analysis

The empirical analysis quantifies the positive association suggested by Figure 1 between respondents' knowledge of population life expectancy and the accuracy of their survival beliefs. Following previous empirical studies on individual mortality, we modelled lifetime with a Gompertz distribution (Cox 1972; Gompertz 1825; Olshansky and Carnes 1997; Perozek 2008). Mortality risk models were estimated using data on individuals' survival over the period 1995-2018 and data on SSPs for their survival beliefs. These models control for individual characteristics, such as gender, socioeconomic status, and health behavior, which can be correlated with PLE-knowledge and related to survival or survival beliefs.
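Since the mortality risk models are described only verbally, the sketch below shows one standard way such a Gompertz specification with proportionally acting covariates can be written down, and how a coefficient maps into the "marginal change" in the annual mortality rate used in the results. The functional form is a common convention and the parameter values are illustrative assumptions, not the estimates from Table 1.

```python
import math


def gompertz_hazard(t: float, a: float, b: float, linear_index: float = 0.0) -> float:
    """Annual mortality rate t years after the interview: h(t) = a * exp(b*t) * exp(x'beta)."""
    return a * math.exp(b * t) * math.exp(linear_index)


def gompertz_survival(t: float, a: float, b: float, linear_index: float = 0.0) -> float:
    """Survival probability S(t) = exp(-(a/b) * exp(x'beta) * (exp(b*t) - 1))."""
    return math.exp(-(a / b) * math.exp(linear_index) * (math.exp(b * t) - 1.0))


def marginal_change_pct(beta_k: float) -> float:
    """Percentage change in the annual mortality rate (or belief) for a one-unit covariate change."""
    return (math.exp(beta_k) - 1.0) * 100.0


# Illustration only: a coefficient of about 0.191 on DK-PLE would correspond to the
# roughly 21% higher annual mortality belief mentioned in the results below.
print(round(marginal_change_pct(0.191), 1))  # 21.0
```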
The associations between the covariates and annual mortality belief (see Table 1, right columns) are in line with previous findings, especially in terms of health characteristics such as bad health, chronic illnesses, smoking, and drinking alcohol (e.g., Teppa 2012; Bissonnette, Hurd, and Michaud 2017). Not separately reported, the finding of no association between socioeconomic status and the annual mortality rate (see Table 1, left columns), an association usually found to be negative (e.g., Kalwij, Alessie, and Knoef 2013), emerged once health-related characteristics were controlled for (see also Kutlu Koc and Kalwij 2017). In line with previous findings, the estimated age gradient in the mortality rate was steeper than the one in mortality belief, and the lower mortality rate for women compared to men was not matched with, on average, a relatively lower mortality belief among women (Kutlu Koc and Kalwij 2017). The empirical findings concerning PLE-knowledge support no association with the annual mortality rate (see Table 1, left columns) and a negative association with annual mortality belief (right columns). Individuals who did not know population life expectancy (DK-PLE = 1) had, on average, about a 21% higher annual mortality belief than individuals who knew their population life expectancy exactly (PLE-knowledge = 0 and DK-PLE = 0). Compared to this latter group, individuals who reported a population life expectancy one year lower than the correct one had, on average, a 3% higher annual mortality belief. Hence, the less individuals underestimated population life expectancy, the higher, on average, were their survival beliefs.

Table 2, Panel A, shows that the predicted lifetime based on survival data (i.e., based on the results in the left columns of Table 1) is about 88 years for a male reference individual. This predicted lifetime is almost seven years lower, at about 81 years, when based on data on survival beliefs (i.e., the results in the right columns of Table 1). The empirical evidence suggests that the degree of this underprediction does not vary with most individual characteristics, with only a few exceptions that are in line with previous studies (Panel B). This underprediction was, for women, on average about five years larger than for men. Furthermore, on average, smokers and obese individuals underpredicted their lifetime less than, respectively, non-smokers and individuals with normal weight (BMI < 25). Individuals who reported being in bad health underpredicted their lifetime more than those not in bad health.

[Notes to Table 2: Based on estimates of the mortality risk models (Section 3). A marginal change is the percentage difference in the annual mortality rate, or in the annual mortality belief, compared to a reference individual. The reference individual is a 55-year-old male (at the time of interview); medium educated; employed; married; with medium household income; a non-smoker; not a heavy drinker; with a normal body weight; no chronic illnesses; who feels happy; and reports being in good health. All covariates affect the annual mortality rate proportionally.]

Individuals with better knowledge of population life expectancy had, on average, relatively more accurate beliefs regarding their lifetimes (Panel C). Individuals who answered that they do not know population life expectancy underpredicted lifetime, on average, about the same as the reference group, which consists of 55-year-old individuals who underestimated population life expectancy by six years (PLE-knowledge = -6). Furthermore, individuals who knew population life expectancy exactly underpredicted their lifetime less than the reference group by, on average, almost two years (95% CI: 0.55-3.20). Finally, in terms of marginal changes, the last row of Panel C shows that 55-year-old individuals who had one year of better knowledge of population life expectancy underpredicted their lifetime by, on average, about 0.3 years less (95% CI: 0.09-0.52). These main findings are by and large robust to different sample selections (see Appendix).
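To illustrate how a predicted lifetime such as the roughly 88 years (based on survival data) versus 81 years (based on survival beliefs) discussed above can be read off a fitted Gompertz model, here is a minimal self-contained sketch. The parameter values are invented placeholders, chosen only so that the two cases land near those numbers for a 55-year-old reference individual; they are not the estimates from Table 1.

```python
import math


def gompertz_survival(t: float, a: float, b: float) -> float:
    """S(t) = exp(-(a/b) * (exp(b*t) - 1)) for the reference individual (covariate index 0)."""
    return math.exp(-(a / b) * (math.exp(b * t) - 1.0))


def predicted_lifetime(age_at_interview: float, a: float, b: float) -> float:
    """Age at which the model-implied survival probability first drops below one half."""
    t = 0.0
    while gompertz_survival(t, a, b) > 0.5 and t < 80.0:
        t += 0.01
    return age_at_interview + t


# A higher baseline level of mortality (a) shifts the predicted lifetime down,
# mimicking the gap between the model fitted to survival data and the one
# fitted to survival beliefs (placeholder parameters, for illustration only).
print(round(predicted_lifetime(55, a=0.00265, b=0.10), 1))  # about 88
print(round(predicted_lifetime(55, a=0.00550, b=0.10), 1))  # about 81
```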
Conclusion

Our empirical findings support the hypothesis that individuals with a better knowledge of population life expectancy have more accurate survival beliefs. Future research may examine whether inaccuracies in survival beliefs are not only associated with but also caused by (insufficient) knowledge of population life expectancy, for instance by using an experimental setup in which respondents are given, at random, information on population life expectancy. With such a setup, one can investigate whether a policy intervention aimed at improving this knowledge can help individuals form more accurate survival beliefs and, ultimately, make better long-term decisions that require knowledge of their survival chances.

[Appendix notes (see Table 2): a) Furthermore, there is no empirical support for a quadratic relationship. b) For these results, all mortality risk models were estimated by gender. Arguably, sample size, or rather the number of deaths, matters for the precision of the results for women. Results not reported show a large and imprecisely estimated association of PLE-knowledge with the annual mortality rate, while the association with annual mortality belief is in line with that of Table 1.]
2021-08-07T02:55:58.043Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "8cc5125e65bbfdcc0030c307d5b078af302ebf30", "oa_license": "CCBYNC", "oa_url": "https://www.demographic-research.org/volumes/vol45/14/45-14.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "8cc5125e65bbfdcc0030c307d5b078af302ebf30", "s2fieldsofstudy": [ "Sociology" ], "extfieldsofstudy": [] }
258041451
pes2o/s2orc
v3-fos-license
SAF-A/hnRNP U binds polyphosphoinositides via a lysine-rich polybasic motif located in the SAP domain

Polyphosphoinositides (PPIn) perform essential functions as lipid signalling molecules, and many of their functions have been elucidated in the cytoplasm. However, PPIn are also intranuclear, where they contribute to chromatin remodelling, transcription, and mRNA splicing. Using quantitative interactomics, we have previously identified PPIn-interacting proteins with roles in RNA processing/splicing, including the heterogeneous nuclear ribonucleoprotein U (hnRNPU/SAF-A). In this study, hnRNPU was validated as a direct PPIn-interacting protein via two regions located in the N- and C-termini. Furthermore, deletion of the polybasic motif region located at aa 9-24 in its DNA-binding SAP domain prevented PPIn interaction. In conclusion, these results are consistent with hnRNPU harbouring a polybasic region with dual functions in DNA and PPIn interaction.

Interaction mapping of hnRNPU with polyphosphoinositides. (A) Nuclei were isolated from actively growing HeLa cells and incubated with 5 mM neomycin. Dialysed neomycin-displaced supernatants (100 μg) were overlaid on PIP strips, and protein-lipid interactions were detected with anti-hnRNPU and anti-mouse IgG-HRP antibodies by chemiluminescence (n = 4). (B) Dialysed neomycin-displaced supernatants were resolved by SDS-PAGE and immunoblotted with anti-hnRNPU and anti-mouse IgG-HRP antibodies (n = 4). (C) Diagram of full-length hnRNPU showing the location of the SAP (SAF-A/B, Acinus and PIAS) and SPRY (SplA/ryanodine receptor) domains as well as the RNA-binding RGG (arginine-glycine-glycine) box, according to UniProt ID Q00839, together with the deletion constructs generated in this study fused to GST. (D) Purity analysis of GST and GST-fused proteins comprising the N-terminal region (NTR), SPRY domain, and C-terminal region (CTR) by SDS-PAGE and Coomassie blue staining. (E) Lipid overlay assay using PIP strips incubated with GST or GST-fused proteins. Protein-lipid interactions were detected using an anti-GST-HRP conjugated antibody, with a longer exposure for GST, GST-NTR and -SPRY (450 sec) and a shorter exposure for GST-CTR (50 sec) due to higher background (n = 3 for NTR and SPRY, n = 2 for CTR). (F) Upper panel: diagram showing the location of the polybasic regions in the NTR. Lower panel: purity analysis of GST and GST-fused NTR WT and deletion mutants by SDS-PAGE and Coomassie staining. (G) Lipid overlay assay with the NTR without and with deletions of the indicated polybasic regions (n = 2). The numbers shown in E and G indicate the positions of the spotted PPIn and PA as shown in Figure 1A.

Description

Biomolecular interactions consisting of protein-lipid complexes are essential in cellular processes to allow appropriate biological responses (D'Santos and Lewis, 2012; Saliba et al., 2015). Consequently, alteration of these interactions is implicated in many diseases (Wymann and Schneiter, 2008). This particularly applies to the signalling lipids polyphosphoinositides (PPIn), which are derivatives of phosphatidylinositol (PtdIns) phosphorylated on three possible hydroxyl groups of the inositol headgroup. They can carry one (PtdIns3P, PtdIns4P and PtdIns5P), two (PtdIns(3,5)P2, PtdIns(3,4)P2, PtdIns(4,5)P2) or a maximum of three (PtdIns(3,4,5)P3) negatively charged phosphates (Choy et al., 2017; nomenclature based on Michell et al., 2006).
PPIn are anchored in cellular membranes via their hydrophobic fatty acid tails, while their differently phosphorylated headgroups act as signalling codes to recruit proteins harbouring PPIn-binding domains or polybasic amino acid clusters (Hammond and Balla, 2015). In addition, PPIn are present in the nucleus, where they have been shown to play roles in chromatin remodelling, transcription, and mRNA processing via a few effector proteins (Fiume et al., 2019; Hamann and Blind, 2018; Jacobsen et al., 2019). To expand our understanding of how PPIn signal in the nucleus, we used an unbiased quantitative proteomics approach to enrich for and identify PPIn effector proteins (Lewis et al., 2011). This approach was based on the incubation of isolated nuclei with the aminoglycoside neomycin, which binds to PPIn and hence displaces PPIn effector proteins. Displaced proteins were identified and shown to have roles in mRNA processing; they included heterogeneous nuclear ribonucleoprotein U (hnRNPU; alias Scaffold attachment factor A, SAF-A). hnRNPU was subsequently identified in the PtdIns(4,5)P2 and PtdIns(3,4,5)P3 nuclear interactomes using this approach combined with PPIn pull-downs (Lewis et al., 2011; Mazloumi Gavgani et al., 2021), and in the PtdIns4P interactome using immunoprecipitation (Faberova et al., 2020).

To first validate the interaction of hnRNPU with PPIn, we tried to express and purify GST-fused hnRNPU. The recombinant full-length hnRNPU could not be expressed in bacteria and was unstable, consistent with a previous study (Kim and Nikodem, 1999). We therefore examined its PPIn-binding properties by lipid overlay assay using neomycin-displaced protein supernatants obtained from HeLa nuclei, followed by detection with an anti-hnRNPU antibody (Figure 1A). We observed an interaction with all PPIn. The neomycin supernatants were also resolved by Western immunoblotting to demonstrate the specificity of the anti-hnRNPU antibody (Figure 1B). To assess direct interaction, three deletion constructs were generated, fused with GST and spanning the N-terminal region (NTR), which includes the SAP (SAF-A/B, Acinus and PIAS) domain, the SPRY (SplA/ryanodine receptor) domain, and the remaining C-terminal region (CTR) (Figure 1C). GST and the GST fusion proteins were expressed and purified (Figure 1D) and used in lipid overlay assays (Figure 1E). GST showed no signal, while the NTR and CTR constructs showed interaction with PPIn, albeit with different signal intensities, the most intense signals being those with the CTR (Figure 1E). hnRNPU harbours two polybasic regions (PBRs) in the NTR, one located within the SAP domain (9-KKLKVSELKEELKKRR-24) and the other located at aa 228-KRPREDHGR-236 (Figure 1F). Considering the previously reported importance of such motifs for PPIn interaction in nuclear proteins (Karlsson et al., 2016; Mazloumi Gavgani et al., 2021; Viiri et al., 2009; reviewed in Jacobsen et al., 2019), each PBR was deleted within the NTR. The WT and deletion mutants were expressed and purified (Figure 1F), and the effect of these deletions on PPIn binding was examined by lipid overlay assay (Figure 1G). The first PBR (aa 9-24) was found to be required for PPIn binding (Figure 1G), indicating specificity for this polybasic cluster. The SAP domain is known to bind AT-rich regions of DNA (Gohring et al., 1997; Kipp et al., 2000), and these results suggest that the PBR located within the SAP domain may have dual functions in DNA binding and PPIn interaction.
This has indeed been reported previously for the co-repressor Sin3A-associated protein 30-like (SAP30L), where a PBR was shown to contribute to both DNA and PPIn interaction (Viiri et al., 2009). In addition, the CTR may act as a second site for PPIn interaction.

Protein expression and purification

pGEX-4T1 constructs containing the NTR, SPRY, or CTR fragments of hnRNPU were transformed into E. coli BL21-RIL DE3. Bacterial cultures were grown at 37°C and induced with 0.5 mM isopropyl-β-D-thiogalactopyranoside for 3 h at 37°C (NTR), overnight at 26°C (SPRY), or overnight at 18°C (CTR). Bacterial cultures were centrifuged at 6,000 g for 10 min at 4°C, and pellets were resuspended in 0.1 M sodium phosphate pH 7.0, 1 mM DTT, 0.5 mg/ml lysozyme, and 1x Sigma protease inhibitor cocktail (including 0.5% Triton X-100 for the CTR), incubated on ice for 30 min, and sonicated 3 times for 30 sec at 4°C. NaCl was added to a final concentration of 0.2 M, and lysates were further incubated for 10 min on ice and finally centrifuged at 13,000 g at 4°C for 15 min. The CTR fragment was insoluble and was extracted from pellets according to Tassan et al. (1995). Pellets were lysed in 0.1 M Tris-HCl pH 8.5, 6 M urea overnight at 4°C with rotation. The lysate was then centrifuged at 10,000 g at 4°C for 15 min. The supernatant was recovered, diluted 1:10 in 50 mM KH2PO4 pH 10.7, 1 mM EDTA, 50 mM NaCl, and incubated at room temperature for 30 min. The pH was adjusted to 8.0, allowing the protein to renature. The renaturation process proceeded for an additional 30 min at room temperature. The insoluble material was removed by centrifugation at 10,000 g at 4°C for 5 min, and the supernatant was saved for glutathione purification. Protein purification was performed using glutathione-agarose 4B beads. The GST-fused recombinant protein was then eluted by adding elution buffer to the beads (50 mM Tris pH 7.6, 250 mM NaCl, 10 mM KCl, 10 mM MgCl2, 2 mM DTT, 20 mM reduced glutathione) and left to incubate at room temperature for 30-60 min. All protein preparations were analysed by SDS-PAGE and Coomassie staining.

Cell culture, nuclear isolation and neomycin extraction

HeLa cells were cultured in Dulbecco's modified Eagle's medium with 10% foetal bovine serum and 1% penicillin/streptomycin at 37°C in a 5% CO2 incubator to about 80-90% confluence. Crude nuclear fractionation was performed as previously reported (Karlsson et al., 2016). Nuclei were washed with retention buffer containing 20 mM Tris pH 7.5, 70 mM NaCl, 20 mM KCl, 5 mM MgCl2, 3 mM CaCl2, and protease inhibitor cocktail. The nuclei were then incubated with freshly prepared 5 mM neomycin (neomycin trisulfate salt, Sigma-Aldrich) in retention buffer, rotating for 30 min at room temperature. After centrifugation at 13,000 rpm for 5 min, the supernatant containing the neomycin-displaced protein extract was collected. Neomycin supernatants were dialysed three times against 900 mL of cold lipid pulldown buffer containing 20 mM HEPES pH 7.5, 150 mM NaCl, 5 mM EDTA, and 0.1% Igepal, using Slide-A-Lyzer Mini dialysis units (Thermo Fisher Scientific), for 1 h at 4°C each time. The protein concentration was measured using the bicinchoninic acid protein assay (Thermo Fisher Scientific).

Western immunoblotting

Dialysed neomycin supernatants were resolved by SDS-PAGE and proteins were transferred to a nitrocellulose membrane.
The membranes were blocked with 7% skimmed milk in PBS-T (137 mM NaCl, 2.7 mM KCl, 10 mM Na2HPO4, 0.05% Tween 20) and incubated with anti-hnRNPU antibody (Santa Cruz, sc-32315, 1:10,000) overnight at 4°C, followed by an anti-mouse secondary antibody conjugated with horseradish peroxidase (HRP) at a 1:10,000 dilution for 1 h. The membranes were washed 6 × 5 min with PBS-T after each antibody incubation. The immunoreactive bands were visualised by enhanced chemiluminescence using SuperSignal West Pico Chemiluminescent Substrate (Thermo Fisher Scientific) according to the manufacturer's instructions and detected with a Bio-Rad ChemiDoc XRS+.
2023-04-10T05:04:35.231Z
2023-03-24T00:00:00.000
{ "year": 2023, "sha1": "a0b6619a80f418871a7af8cd38594bf5aaa0cb79", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "a0b6619a80f418871a7af8cd38594bf5aaa0cb79", "s2fieldsofstudy": [ "Biology", "Chemistry" ], "extfieldsofstudy": [ "Medicine" ] }