Exploration of Hub Genes and Pathogenetic Pathways in Systemic Lupus Erythematosus Complicated with Early Onset Atherosclerosis
Background. Notwithstanding the mounting evidence suggesting that systemic lupus erythematosus (SLE) accelerates the progression of atherosclerosis, the mechanisms underlying this phenomenon are yet to be completely understood. This research examined the molecular mechanism behind this vascular complication. Methods. The Gene Expression Omnibus database was searched to acquire the gene expression datasets for SLE (GSE109248) and atherosclerosis (GSE100927). The shared differentially expressed genes (DEGs) of SLE and atherosclerosis were screened with the help of the "limma" package in R software, followed by function enrichment analysis, protein–protein interaction (PPI) network construction, key module analysis, hub gene selection, and coexpression analysis. Results. In GSE109248 and GSE100927, 1195 and 418 DEGs in total were identified, respectively. Subsequently, we acquired 78 common DEGs (70 upregulated genes and eight downregulated genes) with the same expression trends by using the Venn diagram. Finally, 12 hub genes, including PTPRC, TYROBP, FCGR3A, ITGAX, LCP2, IL1B, IRF8, LILRB2, CD68, C1QB, CCR7, and C1QA, were identified by using seven different algorithms in cytoHubba. The functional analysis illustrates that these genes were predominantly enriched in immune and inflammation response, lipid and atherosclerosis, and osteoporosis. These results indicate an important role of SLE in inducing excessive inflammation, which may be mediated by these hub genes and can induce osteoporosis, imbalance of the normal mineral balance in the body, and lipid abnormalities, eventually leading to the premature onset of atherosclerosis. In total, nine transcription factors (TFs) that may participate in regulating the function of these genes were identified. All hub genes and four TFs were validated successfully. Conclusion. The results of our research show that SLE and atherosclerosis share common DEGs, pathophysiology, and hub genes. These findings can provide fresh evidence and insights for further investigation of the mechanisms at play.
Introduction
According to accumulating research data, the incidence and prevalence of systemic lupus erythematosus (SLE) are increasing year by year [1,2]. Patients diagnosed with SLE have an elevated risk of suffering from cardiovascular disease (CVD) as well as atherosclerosis [3]. Nevertheless, among SLE patients, the early occurrence of atherosclerosis in the low-risk population (mostly younger women) cannot be adequately elucidated by conventional cardiovascular risk factors such as smoking, hypertension, and hyperlipidemia [4]. In individuals diagnosed with SLE, the risk of developing CVD and atherosclerosis is up to 50 times higher than that in age- and gender-matched controls [5,6]. Atherosclerosis is a comorbidity and the major contributor to death among SLE patients [7]. Both atherosclerosis and SLE share many proinflammatory environmental markers, including systemic and local immune responses and proinflammatory chemokines and cytokines (TNF-α, type I interferons (IFNs), transforming growth factor β, vascular endothelial growth factor, IL-1, etc.) [8].
Although SLE is considered to be a risk marker for promoting the atherosclerosis process, the underlying mechanisms of these two disorders are still unclear. Neutrophils and neutrophil extracellular trap (NET)-related cascades are probably among the most essential pathways.
SLE and atherosclerosis have shared mechanisms in their respective pathogenesis pathways. In SLE, the function of neutrophils is impaired, and the level of NETs is increased [9]. NETs could accelerate the inflammatory response by activating inflammatory factors and other inflammatory cells [10]. In atherosclerosis, local inflammation and propagated arterial intimal injury and thrombosis could be exacerbated and amplified by NETs [11]. In addition, NETs not only reduce the efflux capacity of beneficial cholesterol by inducing oxidative stress and oxidizing high-density lipoprotein particles [12] but also activate NF-κB signaling in macrophages, leading to the aggravation of atherosclerosis [13].
Common transcriptional features provide new clues for studying the overlapping pathogenic processes of SLE and atherosclerosis. The fundamental objective of this research is to discover hub genes that play a role in the pathogenic mechanism of SLE complicated with atherosclerosis. Using comprehensive bioinformatics and enrichment analysis, we examined the differentially expressed genes (DEGs) that are shared between SLE and atherosclerosis. These genes were found by downloading two sets of gene expression data from the Gene Expression Omnibus (GEO) database: GSE109248 and GSE100927. Subsequently, protein-protein interaction (PPI) networks were constructed, gene modules were evaluated, and hub genes were identified by retrieving the Search Tool for the Retrieval of Interacting Genes database and using the Cytoscape program. Finally, we narrowed the list down to 12 hub genes, after which we conducted additional research on the transcription factors (TFs) associated with these genes and effectively validated their expression in two additional gene expression datasets (GSE43292 and GSE112943). These findings suggest a shared pathological pathway and offer a fresh perspective on how the molecular processes underlying these two illnesses can be investigated further.
Materials and Methods
2.1. Data Acquisition. GEO (http://www.ncbi.nlm.nih.gov/geo) [14] is a publicly available portal that provides comprehensive microarray and high-throughput sequencing datasets for free download. SLE and atherosclerosis were used as keywords to screen for eligible gene expression datasets. Four datasets, GSE109248, GSE100927, GSE112943, and GSE43292, were acquired from the GEO database. We used GSE109248 and GSE100927 to identify hub genes, while GSE112943 and GSE43292 served as external validation datasets. The GSE109248 dataset contains 25 cutaneous lupus samples and 14 control skin samples. GSE100927 consists of 69 atherosclerotic samples and 35 control arteries without atherosclerotic lesions from deceased organ donors. GSE112943 involved 16 cutaneous lupus samples and 10 control skin samples. GSE43292 consists of 32 atheroma plaques and the same number of intact tissues. We then downloaded the series matrix files and data tables for the microarray platforms, including GPL10558, GPL17077, and GPL6244. This study involved no human or animal subjects.
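As a rough illustration (not taken from the paper itself), the series matrix files for these accessions can be retrieved programmatically with the Bioconductor GEOquery package; all object names below are illustrative.

```r
# Hedged sketch: downloading the GEO series matrices used in this study.
# Accession numbers come from the text; everything else is illustrative.
library(GEOquery)
library(Biobase)

accessions <- c("GSE109248", "GSE100927", "GSE112943", "GSE43292")

# getGEO() returns a list of ExpressionSet objects when GSEMatrix = TRUE
gse_list <- lapply(accessions, function(acc) getGEO(acc, GSEMatrix = TRUE)[[1]])
names(gse_list) <- accessions

expr_sle  <- exprs(gse_list[["GSE109248"]])  # probe-level expression matrix (probes x samples)
pheno_sle <- pData(gse_list[["GSE109248"]])  # sample annotations (lupus lesion vs. control skin)
```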
Identification of DEGs.
We used the data tables of GPL10558, GPL17077, and GPL6244 to annotate the series matrix files of GSE109248, GSE100927, GSE112943, and GSE43292. Then, we identified the DEGs between the disease group and the control group using the "limma" R package. The online Venn diagram tool was utilized to detect the shared DEGs. DEGs were defined as any genes with an adjusted p-value < 0.05 and |logFC (fold change)| ≥ 1.
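The following is a hedged sketch (not the authors' code) of how such a limma comparison with the stated thresholds might look; `expr` (a normalized, annotated expression matrix) and `group` (a factor of disease/control labels) are assumed inputs.

```r
# Illustrative limma workflow using the cutoffs given in the text:
# adjusted p-value < 0.05 and |logFC| >= 1.
library(limma)

design <- model.matrix(~ 0 + group)
colnames(design) <- levels(group)                 # e.g., "control", "disease"

fit   <- lmFit(expr, design)
contr <- makeContrasts(disease - control, levels = design)
fit2  <- eBayes(contrasts.fit(fit, contr))

tab  <- topTable(fit2, adjust.method = "BH", number = Inf)
degs <- subset(tab, adj.P.Val < 0.05 & abs(logFC) >= 1)

# Shared DEGs with consistent direction in both datasets (the Venn step), assuming
# `degs_sle` and `degs_as` were produced as above for GSE109248 and GSE100927:
# common <- union(intersect(rownames(degs_sle[degs_sle$logFC > 0, ]),
#                           rownames(degs_as[degs_as$logFC > 0, ])),
#                 intersect(rownames(degs_sle[degs_sle$logFC < 0, ]),
#                           rownames(degs_as[degs_as$logFC < 0, ])))
```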
Enrichment Analyses of DEGs.
The "clusterProfiler" program in the R package and Metascape (http://metascape.org), an online analysis platform [15], were used to carry out the analyses of gene ontology (GO) enrichment and Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway analysis.p<0:05 suggesting a significant difference.
Development and Module Analysis of a PPI Network.
The Search Tool for the Retrieval of Interacting Genes (STRING) database was utilized to build a PPI network of common DEGs, with a confidence score of >0.4 serving as the cutoff value. Then, the PPI network was imported into Cytoscape for visual representation. Molecular complex detection technology (MCODE), a plugin of Cytoscape, was adopted to evaluate the key functional modules. The K-core value was adjusted to 2, the node score cutoff was established at 0.2, the maximum depth was set to 100, and the degree cutoff was adjusted to 2. Subsequently, the genes in these modules were subjected to KEGG and GO analyses using the "clusterProfiler" R package.
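For readers who prefer a scriptable route, the STRING step can also be approximated with the Bioconductor STRINGdb package (the authors used the STRING web resource with Cytoscape/MCODE); score_threshold = 400 mirrors the 0.4 confidence cutoff, and `common_degs` is an assumed input.

```r
# Hedged sketch: building a PPI edge list from STRING for downstream Cytoscape/MCODE analysis.
library(STRINGdb)

string_db <- STRINGdb$new(version = "11.5", species = 9606,
                          score_threshold = 400, input_directory = "")

deg_df <- data.frame(gene = common_degs)
mapped <- string_db$map(deg_df, "gene", removeUnmappedRows = TRUE)

ppi_edges <- string_db$get_interactions(mapped$STRING_id)  # export for Cytoscape/MCODE
string_db$plot_network(mapped$STRING_id)                    # quick network overview
```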
Detection and Characterization of Hub Genes.
To analyze and determine which genes function as hub genes, we made use of Cytoscape's cytoHubba plugin and seven standard algorithms (EPC, MNC, Stress, Radiality, Closeness, Degree, and MCC). Afterward, the hub genes that had been identified were entered into the database known as GeneMANIA (http://www.genemania.org/) [16], which was utilized to identify internal associations in gene sets to construct a coexpression network.
Verification of the Expression of Hub Genes in External Datasets.
The GSE112943 and GSE43292 datasets were used as the validation datasets. The Wilcoxon test was performed to compare expression between disease and control samples in these two datasets. A p-value < 0.05 was established as the criterion for statistical significance.
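A minimal sketch of this comparison is given below; `expr_val` (genes × samples), `group_val` (disease/control labels), and `hub_genes` are assumed objects.

```r
# Illustrative Wilcoxon rank-sum validation of hub gene expression in an external dataset.
validate_gene <- function(gene, expr, group) {
  wilcox.test(expr[gene, group == "disease"],
              expr[gene, group == "control"])$p.value
}

pvals <- sapply(hub_genes, validate_gene, expr = expr_val, group = group_val)
significant <- names(pvals)[pvals < 0.05]   # p < 0.05, the criterion used in the text
```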
Prediction and Validation of TFs.
The Transcriptional Regulatory Relationships Unraveled by Sentence-based Text mining (TRRUST) database is used for predicting transcriptional regulatory networks and provides information on the target genes corresponding to TFs as well as the regulatory connections between them [17]. By employing the TRRUST database, we obtained the TFs that modulate the hub genes. The significance level was determined to be an adjusted p-value < 0.05. After that, the Wilcoxon test was utilized to validate the levels of expression of these TFs in GSE109248 and GSE100927.
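One way to reproduce this lookup offline is to filter the publicly downloadable TRRUST table, as sketched below; the file name, its column layout, and the `hub_genes` vector are assumptions.

```r
# Hedged sketch: candidate TFs for the hub genes from the TRRUST v2 human download.
trrust <- read.delim("trrust_rawdata.human.tsv", header = FALSE,
                     col.names = c("TF", "target", "mode", "pmid"))  # assumed column order

tf_hits <- subset(trrust, target %in% hub_genes)
sort(table(tf_hits$TF), decreasing = TRUE)   # TFs ranked by number of regulated hub genes
```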
Assessment of Immune Cell Infiltration in SLE and Atherosclerosis.
The single sample gene set enrichment analysis (ssGSEA) technique was applied to measure the relative infiltration levels of 28 different immune cell types in the GSE109248 and GSE100927 datasets [18]. R software was used to generate violin plots demonstrating the differential expression levels of the 28 infiltrating cell types.
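A hedged example of the ssGSEA scoring step with the GSVA package follows; `expr` and `immune_gene_sets` (a named list of marker-gene vectors for the 28 cell types) are assumed inputs.

```r
# Illustrative ssGSEA scoring of 28 immune cell signatures per sample.
library(GSVA)

scores <- gsva(as.matrix(expr), immune_gene_sets, method = "ssgsea")

# Reshape to long format for violin plots (e.g., with ggplot2)
library(reshape2)
scores_long <- melt(scores, varnames = c("cell_type", "sample"), value.name = "score")
```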
2.9. Statistical Analysis. R software version 4.1.3 was used to perform the Wilcoxon test to compare the expression levels of the hub genes and the TFs. A p-value < 0.05 was considered significant. Subsequently, after eliminating genes with opposite patterns of expression in GSE109248 and GSE100927, 78 common DEGs were retained.
Analysis of the Functional Enrichment of the Overlapping DEGs.
Following the conversion to gene IDs, GO and KEGG enrichment analyses were performed to examine the biological activities and pathways linked to the 94 common DEGs. The functional enrichment analysis of the DEGs made use of the three components of the GO annotation. The findings of the GO analysis illustrated a predominant enrichment of these genes in biological activities of immune cells, including regulation of leukocyte activation, positive regulation of cytokine production, T-cell activation, etc. (Figure 3(a)). In terms of the KEGG pathways, strong enrichment of DEGs was observed in osteoclast differentiation, rheumatoid arthritis, SLE, and immune cell-related signaling pathways, such as lipid and atherosclerosis, the IL-17 signaling pathway, and cytokine-cytokine receptor interaction (Figure 3(b)).
The findings of the GO analysis demonstrated a link between the 31 genes of the three key modules identified by MCODE and the immune response and inflammation (Figure 5(a)). According to the findings of the KEGG pathway analysis, these genes are mostly associated with tuberculosis, Chagas disease, and cytokine-cytokine receptor interaction (Figure 5(b)).
These results suggest that immune response and inflammation might play a critical role in both SLE and atherosclerosis.
Identification and Analysis of Hub Genes.
Seven algorithms of the cytoHubba plugin were utilized to calculate the top 20 hub genes (Table 1).
Next, the GeneMANIA database was utilized to conduct an analysis of the coexpression network as well as the functions linked to these genes. The findings illustrated that these genes had a complex PPI network, with coexpression accounting for 64.77% of the network, physical interactions accounting for 14.78%, colocalization accounting for 14.51%, genetic interactions accounting for 1.41%, pathways accounting for 1.27%, and shared protein domains accounting for 0.56% (Figure 6(b)).
As shown in the GO analysis, these genes are predominantly enriched in various immune and inflammatory responses (Figure 7(a)). Additionally, the findings of the KEGG pathway illustrated a significant enrichment of these genes in osteoclast differentiation, infectious diseases, and complement and coagulation cascades (Figure 7(b)).
Validation of the Expression of Hub Genes in External Datasets.
The reliability of the expression levels of these genes was evaluated using two different external datasets containing SLE and atherosclerotic plaque samples. As shown in Figure 8(a), all hub genes were expressed at substantially higher levels in cutaneous lupus lesions than in control skin. The expression levels of all genes were likewise considerably elevated in atherosclerotic plaques relative to the vascular tissues that served as controls (Figure 8(b)).
Prediction and Validation of TFs.
We discovered nine TFs that might be implicated in modulating the expression of these genes based on the data acquired from the TRRUST database (Figure 9(a) and Table 3). Subsequently, four TFs that were upregulated in both cutaneous lupus lesions and atherosclerotic plaques were identified by validation (Figures 9(b) and 9(c)). They function coordinately in the modulation of six hub genes, namely, IL1B, CD68, ITGAX, CCR7, LILRB2, and IRF8.
Immune Infiltration Analysis in SLE and Atherosclerosis.
Investigation into the differences in immune cell infiltration levels among the samples was carried out utilizing the ssGSEA algorithm. The distribution of the 28 immune system-associated cell types in GSE109248 and GSE100927 was visualized using violin plots (Figures 10(a) and 10(b)).
Discussion
This is the first investigation in the field of bioinformatics that examines SLE and atherosclerosis concurrently. The main purpose is to screen and identify the common DEGs and mechanisms in SLE and atherosclerosis to provide new methods to prevent and treat SLE complicated with atherosclerosis.
In this study, 78 common DEGs were identified in the two diseases, 12 of which were determined to be hub genes: PTPRC, TYROBP, FCGR3A, ITGAX, LCP2, IL1B, IRF8, LILRB2, CD68, C1QB, CCR7, and C1QA. The GO and KEGG pathway enrichment analyses both found that these genes are strongly enriched in immunological and inflammatory pathways. These pathways involve the regulation of the production and activation of cytokines, leukocytes, and chemokines, including TNF-α, type I IFNs, MCP-1, IL-1, S100A8/9, and NETs, which exert a synergistic effect in both the onset and progression of these two inflammatory illnesses [9,11]. GO analysis suggests that the response to lipopolysaccharides also assumes a crucial function in both illnesses. Lipopolysaccharide can not only mediate nuclear transduction of NF-κB but also induce the activation of Toll-like receptor 4, causing the release of inflammatory factors, which ultimately results in the onset as well as the advancement of SLE and atherosclerosis [19,20]. In the KEGG analysis, these genes were enriched in osteoclast differentiation, complement and coagulation cascades, and infection-related diseases. Some studies have shown that imbalanced mineral homeostasis within the body provides the conditions for the formation of vascular calcification and atherosclerotic plaques [21]. The onset of both SLE and atherosclerosis is associated with infection [22,23]. The autoantibodies and circulating immune complexes of SLE patients can promote the activation of complement and coagulation cascades, which impairs the endothelium, induces pro-adhesive and proinflammatory endothelial cell phenotypes, and changes the metabolism of lipoproteins that participate in atherogenesis, worsening the development of atherosclerosis [24]. In addition, our results showed that nine TFs may be involved in the regulation of these genes. Following validation, we discovered TFs upregulated in SLE and atherosclerotic plaques, namely IRF8, TRERF1, STAT1, and SPI1. They contributed in a coordinated manner to the modulation of six hub genes: IL1B, CD68, ITGAX, CCR7, LILRB2, and IRF8.
Inflammation and vitamins also play an important role in both diseases. The inflammation and vitamin deficiency that occur in SLE could cause osteoporosis, which can then progress to atherosclerosis [25]. According to several studies, the active form of vitamin D has an anti-inflammatory effect. Vitamin D deficiency is significantly correlated with an increased incidence, or aggravation, of SLE [26]. Increasing the serum level of vitamin D significantly reduced the inflammatory response in SLE patients [27]. Vitamin D plays a key role in the regulation of cardiovascular system function. Vitamin D, which plays a protective role in atherosclerosis formation, can reduce the release of inflammatory factors and the formation of foam cells [28]. According to clinical data, the serum active vitamin D level of patients with atherosclerosis is significantly lower than that of healthy controls [29]. This evidence shows that SLE is closely related to atherosclerosis. A variety of inflammatory mediators can be detected in the blood of individuals with SLE, indicating that systemic inflammation is present from the time SLE symptoms first appear; this inflammation causes osteoporosis and an imbalance of normal mineral homeostasis within the body, providing an environmental foundation for the onset of vascular calcification and atherosclerotic plaques [30,31]. Research findings have also illustrated that dyslipidemia, another comorbidity of SLE, is linked to atherosclerosis and is characterized by abnormally high levels of total cholesterol, triglycerides, and low-density lipoprotein, as well as abnormally low levels of high-density lipoprotein [32]. Low-density granulocytes (LDGs), a neutrophil subset, have been shown to be elevated in SLE [33] and are associated with vascular dysfunction, inflammation, and coronary plaque [34]. The LDGs of SLE might promote vascular injury, which is the primary mechanism leading to neutrophil infiltration in individuals with concurrent SLE and atherosclerosis; in an inflammatory setting, LDG-mediated disruption of high-density lipoprotein function might be an important link between SLE and atherosclerosis [34].
There is some evidence that SLE and atherosclerosis share common inflammatory and immunological regulatory mechanisms. One of them is the IL23/Th17 axis, which is related to the pathophysiology of both disorders [35,36]. In addition, a study that investigated single nucleotide polymorphisms linked to CVD in SLE discovered two novel putative risk loci implicated in increasing the risk for CVD in SLE [37]. Furthermore, apoptotic cells gather in various tissues of SLE patients and cause inflammation and necrosis [38]. There are also a large number of apoptotic cells in advanced atherosclerotic plaque, which induce secondary necrosis and inflammation [39]. Experiments show that the pathways by which phagocytes clear necrotic tissue are impaired in SLE patients, and these pathways also operate in atherosclerosis [6]. Endothelial dysfunction caused by endothelial cell damage is an early sign of atherosclerosis [40]. SLE patients have an autoimmune response against endothelial cells, resulting in endothelial dysfunction. This undoubtedly increases the risk of atherosclerosis [41]. IL-1B is a member of the interleukin 1 cytokine family, a crucial regulator of inflammation that is associated with diverse cellular activities, including proliferation, differentiation, and apoptosis, and with the development of CVDs [42]. Studies have shown that the levels of IL1B are significantly higher in SLE patients [43]. In SLE, IL-1B is also involved in the platelet-mediated activation of endothelial cells, which may take part in the pathogenesis of CVD in patients with SLE [44]. In addition, IL-1B can induce NET formation [45], which performs an integral function in SLE and atherosclerosis [9,11]. CD68 is mainly expressed by human tissue macrophages and monocytes and is often used as a marker of macrophage infiltration.
Both infiltrating and resident macrophages produce proinflammatory cytokines that contribute to ongoing injury in lupus nephritis, a common complication of SLE [46]. In atherosclerosis, macrophages help sustain the local inflammatory response and accelerate plaque development and thrombus formation [47]. ITGAX is also known as CD11c. In SLE, CD11c is highly expressed in DCs, presenting an upregulation of Toll-like receptor 7 and 9 responses with enhanced expression of C-X-C motif chemokine ligand 13 and interleukin 10, which increases the immune and inflammatory response [48]. In atherosclerosis, the majority of the immune cells that express CD11c are macrophages, and the expression of ITGAX/CD11c is associated with the plaque vulnerability index, the levels of proinflammatory plaque cytokines, and the area of the necrotic core, which are in turn associated with each other [49]. CCR7 is implicated in both immunity and tolerance by leading T cells and antigen-presenting DCs to, and retaining them in, lymph organs. Increased expression of CCR7 is associated with active SLE [50]. In addition, it participates in triggering the adaptive immune response by directing the migration, positioning, and interaction of naïve T cells and DCs within secondary lymphoid organs. The loss of the CCR7 receptor in murine atherosclerosis not only reduces atherosclerotic plaque content but also disrupts T cell entry and exit within the inflamed vessel wall [51]. IRF8 is a TF of the IFN regulatory factor (IRF) family. In SLE, the recognition of circulating immunological complexes by endosomal Toll-like receptors leads to the activation of downstream IRF8 proteins, which affects the IFN pathway [52]. Studies have also shown that the deletion of IRF8 in DCs significantly decreases the development of atherosclerosis [53]. LILRB2 is a member of the leukocyte immunoglobulin-like receptor family. The receptor is expressed on immune cells, where it binds major histocompatibility complex class I molecules present on antigen-presenting cells. This interaction transmits a negative signal that suppresses the activation of an immune response [54]. However, few studies have explored the effects of LILRB2 in SLE and atherosclerosis, which may be a new direction for studying the pathogenesis of both diseases in the future.
In addition, the ssGSEA algorithm was used to determine the infiltration of immune cells in SLE and atherosclerosis.
It is noteworthy that the infiltration levels of T cells, macrophages, and neutrophils are significantly higher in SLE and atherosclerosis. CD4+ T cells can differentiate into multiple cell types, such as Th1 cells, Th17 cells, and Tregs. The imbalance of T helper cell subsets is closely related to the formation and severity of SLE [55]. The infiltration of Treg cells is generally considered beneficial in both SLE and atherosclerosis [56,57]. However, when Treg cells lose their protective function in SLE and atherosclerosis, their inhibition of the inflammatory response disappears and they can convert into proinflammatory Th cells [58,59]. Macrophages are crucial to the occurrence and development of SLE [60]. The number of M1-like inflammatory macrophages in peripheral blood determines the severity of SLE [61]. Macrophages not only form foam cells, which are early markers of atherosclerotic plaque formation [62], but also secrete proinflammatory cytokines that accelerate arteriosclerosis and are also found in SLE patients with atherosclerosis [63]. Neutrophils, key cells in the aseptic inflammation response, exert their functional effects in SLE via secreting NETs and producing reactive oxygen species (ROS) [64].
Similarly, neutrophils also promote the infiltration of immune cells and the release of inflammatory cytokines through the formation of NETs and ROS to accelerate atherosclerosis [65]. These results indicate that the immune microenvironment of SLE patients provides an advantageous condition for the formation of atherosclerosis. This provides a favorable entry point for exploring the mechanism of, and new treatment strategies for, SLE complicated with early atherosclerosis. While earlier research evaluated the hub genes linked to SLE [66] and atherosclerosis [67] separately, our study uses bioinformatics to investigate the similarities in the underlying molecular pathways that these two diseases share. Because of the high rate of comorbidity between SLE and atherosclerosis, and because SLE accelerates the course of atherosclerosis and thereby leads to CVDs, atherosclerosis is one of the primary causes of mortality among SLE patients. Precisely for this reason, we hope to further elucidate the mechanisms of SLE and atherosclerosis by identifying common DEGs, TFs, and hub genes between the two diseases, which will help improve the prognosis of patients with SLE combined with atherosclerosis.
However, this study still has some limitations. First, the majority of our bioinformatics findings were based on publicly available datasets. Second, we did not validate them in vitro with molecular experiments. Deeper studies in the future are indispensable to clarify the fundamental functions and mechanisms of the hub genes in these two diseases.
Conclusions
With the help of a series of bioinformatics methods, we examined the common DEGs between SLE and atherosclerosis and conducted PPI network and enrichment analyses. We found that the onset and advancement of atherosclerosis in SLE patients, despite their young age, may follow mechanisms similar to those of atherosclerosis in general, which might be regulated by certain hub genes. The current analysis offers direction for future research that aims to investigate the pathogenesis of SLE and atherosclerosis and to develop innovative treatment techniques targeting these hub genes.
Figure 1 depicts the study's workflow. The relevant epidemiological statistical information is presented in Figure S1. In total, 1,195 DEGs in GSE109248 and 418 DEGs in GSE100927 were discovered. The volcano maps of the DEGs are shown in Figures 2(a) and 2(b). In total, 79 common DEGs were obtained through the intersection of the Venn diagram (Figure 2(c)).
FIGURE 2: The volcano and Venn diagrams of DEGs: (a) GSE109248 dataset represented in a volcano map; (b) GSE100927 dataset depicted in a volcano map; (c) 79 DEGs were shared across both datasets.
FIGURE 3: Analysis results for common DEG enrichment: (a) the GO pathway enrichment analysis findings; (b) results of KEGG pathway-related enrichment analysis. The significance threshold value was determined to be p-value < 0.05.
FIGURE 5: The modular gene enrichment analysis results: (a, b) GO and KEGG enrichment analyses of the modular genes. The number of genes involved is represented by the size of the circle, and the proportion of involved genes relative to the total genes in the term is shown on the abscissa.
FIGURE 6: Hub gene coexpression network and Venn diagram: (a) seven algorithms were used to screen 12 common hub genes, as shown in the Venn diagram; (b) GeneMANIA was used to analyze hub genes and related coexpression genes.
FIGURE 7: Hub gene function enrichment analysis: (a, b) analysis of hub gene-related GO and KEGG enrichment. The external circle on the left is the gene symbol, and the external circle on the right is the pathway involved, which indicates the functions the genes are involved in. The significant p-value of the pathway is represented by the internal circle on the right; the darker the color, the smaller the p-value.
FIGURE 10: Analyzing the immune infiltration in systemic lupus erythematosus and atherosclerosis: (a) violin plot showing the difference of 28 types of immune cells in normal and SLE; (b) violin plot showing the difference of 28 types of immune cells in normal and atherosclerosis. SLE, systemic lupus erythematosus; AS, atherosclerosis.
TABLE 1: The top 20 hub genes ranked in cytoHubba.
TABLE 2: The details of the hub genes.
"year": 2023,
"sha1": "559e21cc5e5a654830a4097547b8fbe25fb82987",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/journals/mi/2023/4508436.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "4f724d874c5cb5394a48856768344d72bbf98425",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": []
} |
Designer biomass for next-generation biorefineries: leveraging recent insights into xylan structure and biosynthesis
Xylans are the most abundant noncellulosic polysaccharides in lignified secondary cell walls of woody dicots and in both primary and secondary cell walls of grasses. These polysaccharides, which comprise 20–35% of terrestrial biomass, present major challenges for the efficient microbial bioconversion of lignocellulosic feedstocks to fuels and other value-added products. Xylans play a significant role in the recalcitrance of biomass to degradation, and their bioconversion requires metabolic pathways that are distinct from those used to metabolize cellulose. In this review, we discuss the key differences in the structural features of xylans across diverse plant species, how these features affect their interactions with cellulose and lignin, and recent developments in understanding their biosynthesis. In particular, we focus on how the combined structural and biosynthetic knowledge can be used as a basis for biomass engineering aimed at developing crops that are better suited as feedstocks for the bioconversion industry.
Background
Plant cell walls encompass the majority of terrestrial biomass and play many important environmental and economic roles [1]. Cell walls are complex structures that consist of cellulose, hemicellulose (xylans, xyloglucans, mannans, etc.), pectins, lignin, and some proteins [2,3]. The amounts of each wall component can vary greatly depending on species, tissue, and cell type [2]. Xylans are the main hemicellulosic constituent found within the thickly lignified secondary cell walls of woody dicots such as poplar, and the primary and secondary cell walls of many monocot species, such as switchgrass, that are relevant to bioindustry [4]. Xylans in these tissues can account for up to 30% of the plant cell wall's dry weight [5]. Melillo et al. have suggested that approximately 50 billion tons of carbon is incorporated by terrestrial plants annually [6]. If we modestly assume that across all species xylans account for approximately 20% of plant cell walls, then we conservatively estimate that roughly 10 billion tons of carbon is incorporated into xylan polymers annually.
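Written out explicitly, the back-of-envelope estimate in the preceding sentence (with the 20% xylan fraction as the stated assumption) is:

$$5 \times 10^{10}\ \text{t C yr}^{-1} \times 0.20 \approx 1 \times 10^{10}\ \text{t C yr}^{-1}\ \text{incorporated into xylan}$$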
In the biotechnology sector, particularly for the production of biofuels, xylans can present many challenges to efficient fermentation to useful products by contributing to biomass recalcitrance, defined as the resistance of biomass to chemical, thermal or enzymatic degradation. For one, xylans are composed mainly of pentose sugars, bioconversion of which requires metabolic pathways that are distinct from those used to process hexose sugars from cellulose [7]. Such systems for pentose utilization are often lacking in industrially relevant fermentative microbial strains [7]. Furthermore, the complexity of linkages and sidechain structures in xylan necessitate a suite of hydrolytic enzymes for the complete breakdown of the polymer, and the production of such enzymes can result in significant economic and metabolic costs. Finally, xylan is known to be highly substituted with O-acetyl groups, whose release leads to a reduction in pH that can have an inhibitory effect on fermentative microorganisms [8]. Thus, modification of xylans or specific xylan structures are of interest to the biomass-processing industry, as success in this area may facilitate fermentation and thereby substantially lower costs for full biomass degradation.
AcGXs are the predominant type of xylan found within the thick lignified secondary cell walls of hardwoods and herbaceous dicot species such as Poplar and the model plant Arabidopsis thaliana (Fig. 1) [11][12][13]. These AcGXs are homodisperse in length (approximately 100 residues in Arabidopsis) and, on average, one of every ten xylosyl residues is substituted at O-2 with (4-O-methyl)-α-d-glucuronic acid ((Me)GlcpA) [13,14]. In addition to glycosyl substitutions, the xylosyl residues in the backbone often bear O-acetyl esters, which are the most abundant substituents in AcGXs. For example, more than half of the backbone xylosyl residues in Arabidopsis and Populus AcGXs are O-acetylated [15][16][17][18]. These xylosyl residues can be mono-acetylated at O-2 or O-3 or di-acetylated at both O-2 and O-3, while the xylosyl residues carrying (Me)GlcpA at O-2 can also be acetylated at O-3. In Arabidopsis and Populus AcGXs, monoacetylated residues at O-2 or O-3 are the most abundant and account for 34 to 49% of all xylosyl residues. Only a small percentage of diacetylated residues are present (6-7%). Virtually all the xylosyl residues substituted with (Me)GlcpA at O-2 are acetylated at O-3 and these xylosyl residues account for approximately 10% of the total backbone residues [11,[15][16][17][18][19][20]. The ratio of 2-O- and 3-O-acetyl substituents in the xylan is difficult to determine since acetyl groups can migrate between the O-2 and O-3 positions of the same xylosyl ring [21]. This phenomenon has made it very challenging to determine the positions of these acetyl substituents when xylan is in the wall or while it is being synthesized in the Golgi. Recent studies of the O-acetylation distribution pattern in Arabidopsis indicated that every other xylosyl residue carries an acetyl ester, suggesting a systematic addition of O-acetyl groups to the GX backbone [16,22].

Fig. 1 Xylan structures from spruce, poplar, and switchgrass secondary walls. Graphical representation of the main structural features of (a) arabinoglucuronoxylan (AGX) from spruce, (b) acetylated glucuronoxylan (AcGX) from poplar, and (c) acetylated glucuronoarabinoxylan (AcGAX) from switchgrass. Spruce GX and poplar AcGX contain a distinct glycosidic sequence at their reducing ends, which is absent in switchgrass AcGAX, which often has substituted reducing xylosyl residues at the reducing end [25,28,43]. The GlcA and Ara substituents are in even positions and regularly distributed in the main domain of spruce AGX [27,46]. The substituents in the main domain of Arabidopsis AcGX and poplar are also likely to be evenly distributed [22,45]. The pattern of distribution of AcGAX substituents in switchgrass secondary walls is still unknown, but they are less branched than the AcGAX in primary walls and other tissue-specific grass xylans (see text for more details).

Aside from backbone decorations, AcGXs contain a distinct tetrasaccharide sequence of Xylp-1,4-β-d-Xylp-1,3-α-l-Rhap-1,2-α-d-GalpA-1,4-d-Xyl (termed Sequence 1) at the reducing terminus, though the biological function of this reducing sequence in the cell wall is still not known [14,23]. Using this distinct sequence as a reference enabled us to determine that each GX polymer present in Arabidopsis and some hardwood species contains approximately 100 xylosyl residues [13,14,24]. Sequence 1 is also present at the reducing ends of coniferous arabinoglucuronoxylans [25].
These AGXs are substituted, on average, with two 4-O-methylα-d-glucuronic acid groups at O-2 and one α-larabinofuranose (Araf) residue at O-3 per every ten xylose units, and are minor components of softwood cell walls [26]. These highly decorated AGXs found in the cell walls of most gymnosperms are generally not O-acetylated (Fig. 1). The exceptions are members of Gnetophyta, which synthesize O-acetylated xylans. These xylans also have other structural features typical of dicot AcGXs, such as undetectable levels of arabinosyl sidechains and low amounts of uronic acid substituents [27].
Xylans from monocot species show considerable structural diversity [28]. Grasses, which include grain (corn and rice) and energy crops (switchgrass and Miscanthus), are the most extensively studied of the monocots. The secondary cell walls of grasses contain AcGAX, which have GlcpA or MeGlcpA substituents at O-2; however, the main substitutions are α-l -Araf residues at O-3. The α-l-Araf residues are frequently further substituted at O-2 with α-l-Araf or β-d-Xylp residues ( Fig. 1) [29,30]. The backbone residues of AcGAXs in primary walls are singularly or doubly substituted with α-1-2 and/or α-1-3 linked arabinosyl residues [31]. High molecular mass neutral AcAX, without uronic acid substituents, can be found in the cell walls of starchy cereal grains [10]. Some grasses contain more complex xylans in specific tissues, for example, AcGAXs in corn bran and corn fiber contain complex sidechains with sugars that are not typically found in xylans, such as α-l-galactose and α-d-galactose [32].
Grass AcGAXs and AcAX are acetylated but to a lesser extent than AcGXs from dicots. However, in addition to the acetyl groups attached to the backbone xylosyl residues, the Araf substituents can also carry acetyls at O-2 [33]. A notable feature of grass AcGAX and AcAX is that their Araf residues are often esterified with ferulic or p-coumaric acids at O-5 [34,35]. Oxidative coupling of ferulic acid substituents leads to the formation of ferulate dimers or trimers, which crosslink different xylan molecules or xylan to lignin [36,37]. Further, it has been proposed that the ferulates are the initiation sites for cell wall lignification in grasses, making them another interesting target for biomass modification [38,39] (Fig. 2). The reducing-end tetrasaccharide, Sequence 1, which is characteristic of xylans from dicots and gymnosperms, has not been detected in xylans isolated from grasses (Fig. 2). Instead several different structures were found at the reducing terminus of grass AcGAX and AcAX, including specifically substituted xylosyl residues at the reducing end of the polymer [28,40]. However, the presence of Sequence 1 in xylans synthesized by some commelinid monocots and its absence in xylans from some non-commelinid species indicate that the structural diversity of xylan in the monocots is greater than what was previously thought [31]. Interestingly, some non-commelinid species (Asparagales and Alismatales) synthesize xylans that lack the reducing-end tetrasaccharide sequence and are substituted with the disaccharide sidechain Arap-1,2-α-(Me)GlcA [28]. This sidechain is also found in xylans isolated from Eucalyptus wood and Arabidopsis primary cell walls, suggesting a potentially conserved structural or biosynthetic role of primary cell wall xylans within evolutionarily distant species [28,41]. Xylan present in woody tissues of Eucalyptus contains sidechains comprised of β-d-Galp attached at O-2 of the MeGlcA residues, in addition to the α-l-Arap-containing disaccharides [17]. Xylan that is highly substituted with more complex sidechains can be found in some seed mucilage and root exudates [10]. For example, the xylan in the mucilage of Arabidopsis seeds contains sidechain xylosyl residues attached directly to the backbone [42].
Xylans are essential components of the thick and strong secondary walls of the specialized cells that constitute fibers and conducting vessels in vascular plants. However, the presence of xylans in the cell wall precedes plant vascularization, and xylan that is structurally similar to secondary wall GX has been found in small amounts in the avascular moss Physcomitrella [43]. In contrast to the GXs from Poplar and other woody species, in which a majority of the GlcA substituents are methyl etherified at O-4 [11], the xylan in Physcomitrella is not methylated [43], suggesting that O-methylation of GXs is a key structural feature of the secondary cell walls of vascular plants. In herbaceous dicots, the extent of 4-O-methylation of the GlcA residues varies depending on the tissue type and growth conditions. Interestingly, differential binding of a MeGlcA-specific carbohydrate binding module (CBM) has demonstrated that GX in the vascular xylem of Arabidopsis has a higher degree of methylation than in interfascicular fibers, which further supports the relationship between high GX methylation and highly lignified hydrophobic walls [44].
Another structural characteristic that affects xylan properties is the spacing between GlcA, O-acetyls or other substitutions, which is believed to be a strictly controlled feature of xylans in dicot and conifer species [16,45]. Recent studies have suggested that xylans may contain domains with distinct GlcA spacing, and that these variations may result in different xylan conformations in vivo [27,45]. This has led to the two domains on Arabidopsis xylan being termed the major domain, where GlcA residues are spaced at approximately 10 backbone xylosyl residues from one another at even intervals, and the minor domain where these substituents are much closer (5-7 residues), and have no preference for even or odd spacing [45]. Similar domains have been proposed for conifer xylans [27]. In spruce xylan, a main domain containing evenly spaced GlcA substitutions and frequent Ara substituents that are approximately two residues apart was identified, along with two other minor domains [46]. However, the question still remains whether these domains are part of the same xylan molecule or represent different xylans with distinct structural features [46].
Xylan interactions with cellulose and lignin
Xylans are structurally similar to cellulose in that their backbones are composed of 1-4-linked xylosyl residues that have equatorial oxygen atoms at both C1 and C4. This common sugar geometry results in polysaccharide backbones with molecular shapes that are complementary to cellulose [23]. As indicated previously, xylans spontaneously bind to cellulose microfibrils produced by Acetobacter xylinum, providing evidence that the physical property of xylans can affect cellulose orientation and aggregation during cell wall assembly [47]. For example, in situ labeling experiments of woody tissues have demonstrated a preferential localization of AcGX in the transition zones between the S layers, where the cellulose changes orientation, supporting the hypothesis that AcGX participates in organizing cellulose microfibrils into a helicoidal arrangement [48][49][50].
Certainly, the type and distribution of the backbone substitutions have important effects on the binding interactions of xylan with itself and other polymers in the wall. It has been reported that sparsely branched xylans have a higher affinity for cellulose microfibrils, and that even small O-acetyl substituents have pronounced impacts on the adsorption of xylans to cellulose [51][52][53]. In contrast, recent studies using molecular dynamics simulation indicate that xylan substitutions stabilize rather than limit the binding of xylan to cellulose. These seemingly contradictory results were rationalized by proposing that the increased absorption of sparsely substituted xylans occurs because a low degree of substitution leads to the self-association of xylans, causing additional xylan molecules to aggregate with xylan molecules that are directly bound to cellulose [46,54].
Current models predict that the threefold helical screw conformation that xylan adopts in solution shifts to a flat helix with twofold screw symmetry when xylan interacts with cellulose [55]. It was proposed that GlcA and/or O-acetyl substituents that are separated by an even number of backbone residues, and thus decorate only one side of the xylan ribbon, facilitate the formation of hydrogen-bond networks between xylan and hydrophilic cellulose surfaces. A model was proposed in which the substituents of such xylans point away from the cellulose fibrils, while attachment of substituents to both sides of the ribbon would hinder the interactions between xylans and the hydrophilic surfaces of cellulose [22,55]. In the case of the hydrophobic surface, however, one model suggests that consecutive substitutions strengthen the binding of xylan with cellulose [46].
In addition to interacting with cellulose, xylans are physically and/or covalently bound to lignin in secondary cell walls of lignocellulosic biomass to form a closely associated network [38]. Strong evidence indicated that GAXs in the secondary walls of grasses are crosslinked into lignin by extensive copolymerization of their ferulates [56][57][58]. In the case of hardwoods and other dicots, it has been proposed that GXs are esterified to lignin via their MeGlcpA substituents [59,60]. However, only indirect evidence has been reported to support this hypothesis. Lignin-carbohydrate complexes have been isolated from numerous woody species, but much remains to be learned about the molecular structure of these complexes [61]. Further, recent studies on Populus genotypes with different cell wall compositions suggest that there is a close interaction between lignin and xylan, and that the degree of xylan acetylation influences the interaction between these major cell wall polymers, affecting the efficiency of pretreatment with 0.3% H 2 SO 4 in nonisothermal batch reactors [62].
Enzymes involved in xylan synthesis
Through the diligent work of many different research groups over many years, several of the glycosyltransferases (GT's) responsible for xylan synthesis have been brought to light. Initial research in this field focused on the observed biochemical and phenotypic effects of xylan biosynthetic mutants in the model dicot species Arabidopsis thaliana. Many of these so-called irregular xylem (irx) mutants displayed a collapsed or irregular xylem phenotype resulting in stunted growth and often infertility [63]. Structural analysis of GX isolated from irx mutants, combined with biochemical analysis of the associated gene products, has led to the characterization of enzymes involved in many aspects of xylan synthesis in dicots including backbone elongation [64][65][66]72], sidechain addition [45,[67][68][69], reducing-end synthesis [14], and noncarbohydrate modifications such as the addition of acetyl [20,64,70], and methyl groups [44].
In contrast to the well-known cellulose synthases, which are localized to the plasma membrane of plant and bacterial cells, most enzymes responsible for xylan synthesis are found as membrane-associated proteins within secretory organelles [i.e., endoplasmic reticulum (ER) and the Golgi apparatus] [71]. Hemicellulosic polymers, including xylan and xyloglucan, are synthesized primarily in the Golgi and then exported via poorly characterized mechanisms to developing cell walls. Many of the enzymes involved in xylan synthesis are from distinct carbohydrate-active enzyme (CAZy) GT families [72]; however, they are thought to interact and form dynamic protein complexes within the Golgi and function in a concerted manner to form complex hemicellulosic structures [71]. A proposed model of xylan synthesis is presented in Fig. 3.
Enzymes involved in backbone elongation
Three proteins (and their homologs) have been implicated in xylan backbone synthesis in dicot and monocot species, including IRX9 and IRX14, in the GT43 family, and IRX10/IRX10-L, in the GT47 family. IRX10/IRX10-L proteins have recently been shown by two groups to possess β-1,4-xylosyl transferase activity in vitro when expressed heterologously in either human embryonic kid-ney293 (HEK293) cells or Pichia pastoris [64,73]. Using HEK293-based expression, AtIRX10-L, now renamed to xylan synthase 1 (XYS1), was able, via a distributive mechanism, to transfer xylosyl residues from UDP-xylose to labeled xylo-oligosaccharides as small as xylobiose, and to extend a xylohexaose primer to form products up to 21 xylosyl residues in length [64]. This result came as somewhat of a surprise given that the backbones of all other hemicelluloses with geometric homology to cellulose are synthesized by enzymes belonging to family GT2, which contains the cellulose synthase superfamily. Family GT2 glycosyltransferases are multi-membrane spanning proteins that polymerize polysaccharides processively with simultaneous excretion through the membrane [74]. This is in stark contrast to the GT47 AtXYS1, which does not appear to even contain a transmembrane domain [75], and acts via a distributive mechanism in vitro [64].
IRX9 and IRX14 are also believed to play a role in xylan backbone elongation based on work with mutants that indicated that they are essential for formation of the complete backbone in planta [14,71,76]. Further experiments with microsomal membrane preparations have shown that xylosyl transferase capacity is reduced in microsomes prepared from mutants (irx9 or irx14) of either of these two proteins [71]. However, in vitro analysis using techniques that were employed to demonstrate xylosyltransferase activity of XYS1 has failed to show any xylan synthase activity for these enzymes, whether alone or in combination [64]. Both enzymes are classified as members of the GT43 family; however, it remains unclear if these proteins are themselves catalytic, or if they simply serve as structural components of a larger xylan synthase complex (XSC) or function as accessory proteins that facilitate the transfer from XYS1 to the growing xylan chain. For example, in AtIRX9 the catalytically important DxD motif present in most GTs in the GT-A fold family is replaced by an unusual amino acid sequence ('GLN'). Moreover, the closely related protein IRX9-L has 'DDD' in this position [76]. Interestingly, Ren et al. used site-directed mutagenesis and genetic complementation to show that irx9 null mutants could be successfully complemented by a modified IRX9-L gene in which the 'DDD' motif was changed to 'ADA' [76]. Further, recent work with heterologously expressed Asparagus officinalis AoIRX10, AoIRX9, and AoIRX14 in Nicotiana benthamiana demonstrated that these three proteins form a Golgi-localized XSC in vivo [66]. However, the exact role of each protein in the complex is still not well understood. Mutagenesis experiments affecting the DXD motif of each putative GT, which should disable the protein's catalytic capacity, showed that this motif was essential for AoIRX10 and AoIRX14 activity. However, no decrease in xylosyl transferase activity was observed upon analysis of microsomes containing AoIRX9 in which critical catalytic residues had been replaced [66]. Bimolecular fluorescence complementation (BiFC) analysis with the Asparagus proteins also provided the first direct evidence that AoIRX9, AoIRX10, and AoIRX14A are members of a core XSC localized in the Golgi that likely contains additional proteins [66]. Taken together, these data suggest that IRX9 does not have a direct catalytic role in xylan synthesis, but rather plays a structural or supportive role in the XSC. However, no functional in vitro characterization of any of the GT43 enzymes involved in plant polysaccharide synthesis has yet been reported, therefore their exact role in the XSC remains enigmatic.

Fig. 3 A proposed model of xylan synthesis [115]. Synthesis of the xylan backbone is catalyzed by XYS, which is part of a Golgi-localized xylan synthase complex (XSC) that also includes IRX9 and IRX14; however, the roles of the latter enzymes in this process remain enigmatic. UDP-GlcA is transported into the Golgi by a UDP-uronic acid transporter (UUAT) protein [116], and then GUX enzymes catalyze the transfer of GlcA from UDP-GlcA to the xylan backbone, which is subsequently methyl-etherified by GXMT proteins. For the addition of Araf residues, C-4 epimerization of UDP-Xyl to UDP-Arap is carried out by a Golgi-localized UDP-Xyl 4-epimerase (UXE) or cytosolic UDP-glucose 4-epimerases (UGE) [117]. UDP-Arap produced in the Golgi is either used as a substrate in the synthesis of Arap-containing polysaccharides such as pectins, or transported back to the cytosol via an unknown process. In the cytosol, UDP-Arap is interconverted to UDP-Araf by UDP-Ara mutases (reversibly glycosylated polypeptide, RGP) [118], and is then transported back into the lumen of the Golgi apparatus by UDP-Araf transporters (UAfT) [119]. XAT enzymes then catalyze the addition of Araf residues to O-3 of the xylan backbone, which is often further substituted by a β-xylosyl residue at O-2 by XAX enzymes. The xylan present in Arabidopsis seed mucilage is also decorated with β-xylosyl residues at O-2, which are added by the xylosyltransferase MUC1. Acetyl donors, such as acetyl-CoA or an unidentified acetyl donor, are most likely imported into the Golgi lumen by RWA proteins, and then acetylation of the xylan backbone occurs via a number of xylan acetyltransferases (XOAT), which have different catalytic regiospecificities. * Indicates that activity has not been biochemically confirmed.

Enzymes involved in synthesis of the reducing-end structure (Sequence 1)

As mentioned previously, xylans from dicots and some monocot species often contain a distinct tetrasaccharide motif termed Sequence 1 at their reducing ends [14,28]. The role of this structure in xylan synthesis is still poorly understood, and the biosynthetic mechanism for its creation has remained elusive. Mutagenic experiments in Arabidopsis have presented some candidates for Sequence 1 biosynthesis, as this structure is lacking in xylans from plants deficient in certain secondary cell wall expressed proteins. Thus, IRX7/FRA8 (GT47), IRX8/GAUT12 (GT8), and PARVUS/GATL1 (GT8) are the main glycosyltransferase candidates for synthesis of this unusual structure, although concrete biochemical evidence to support their participation in this process is still lacking [3].
The role of Sequence 1 in xylan synthesis also remains an enigma. Many have speculated that Sequence 1 may be serving as a terminator of xylan synthesis, given the observation that deregulation of xylan chain length occurs when Sequence 1 synthesis is disrupted [14,23]. However, the recent characterization of the xylan backbone synthase (XYS1) has shown that xylosyl addition occurs from the reducing end to the nonreducing end, making the case for a reducing-end terminator unlikely [64]. Further, it is interesting to note that many of the enzyme families involved in xylan synthesis, such as GT47 and GT43, also function together in the biosynthesis of animal glycosaminoglycans (GAG), such as heparan sulfate and chondroitin sulfate, which are charged and heavily sulfated polysaccharides that play many vital roles in animal biology. These polysaccharides require the synthesis of a tetrasaccharide primer before elongation of the GAG backbone can occur. In the case of GAG synthesis, however, the polysaccharide is known to be covalently linked to a serine or threonine of a protein-based acceptor [77]. It is unclear if xylans are linked at the reducing terminus to a protein or lipid in the Golgi apparatus and released at a later time. A proposed model of xylan synthesis is contrasted with that of the biosynthesis of the GAG heparan sulfate in Fig. 4.
Fig. 4 A proposed model of xylan synthesis contrasted with the biosynthesis of the GAG heparan sulfate. In bold are enzymes from families common to the two pathways (GT43 and GT47). In heparan sulfate biosynthesis, polysaccharide initiation occurs by the transfer of a xylosyl residue to a protein serine or threonine residue by the enzyme xylosyl transferase 1 (XYLT1) [77]. A linker tetrasaccharide is then synthesized by the enzymes β-1-4 galactosyl transferase 7 (β4GalT7), β-1-4 galactosyl transferase 6 (β4GalT6), and a GT43 family enzyme, galactosylgalactosylxylosylprotein 3-β-glucuronosyltransferase 3 (β3GAT3). Following primer synthesis, the polymer is extended by the GT47/64 heparan synthases, exostosin (EXT) and exostosin-like (EXTL3) proteins, which catalyze the transfer of the repeating segment of glucuronic acid (GlcAp) and N-acetyl glucosamine (GlcNAcp) [77]. This mechanism has similarities to our proposed model for xylan synthesis, where a tetrasaccharide primer may be synthesized while connected to some unknown carrier in the ER/Golgi, potentially in part by GT47 and GT43 family enzymes. This primer is then extended by the GT47 XYS1/IRX10 family of proteins, which most likely function as part of protein complexes that also contain members of GT43 (IRX9, IRX14). The xylan chains are then decorated with sidechains such as acetyl esters and glycosyl units such as (Me)GlcAp.

Proteins involved in the addition of glycosyl substituents

The roles of several enzymes in the addition of sidechains to the xylosyl backbone have been elucidated in recent years. Three members of GT family 8, GlucUronic acid substitution of Xylan 1 (GUX1), GUX2, and GUX3, have been shown to possess glucuronosyltransferase activity toward xylooligomers, and Arabidopsis mutants lacking these enzymes result in xylans with reduced GlcA and 4-O-MeGlcA substitutions [41,45,68,69]. Further evidence suggests that GUX1 and GUX2 perform distinct functions in decorating the xylan backbone regions, leading to different spacing between GlcA residues. GUX1 is proposed to be responsible for forming the xylan major domain by adding GlcA substitutions about every 10 xylosyl residues, whereas GUX2 has been proposed to decorate segments comprising the minor domain by placing the GlcA residues closer together (6-8 residues) [45]. GUX3 has also been shown to play a defined role by acting as the sole transferase required for GlcA sidechain addition to xylans that are incorporated into the primary cell walls of Arabidopsis [41].
Enzymes involved in the decoration of the arabinoxylan backbone with arabinosyl and xylosyl sidechains have been shown to be members of the GT61 family, which is divided into three clades: A, B, and C [78]. The Xylan Arabinosyl Transferases (XATs) responsible for the addition of Araf to O-3 of the xylan backbone have been identified in grasses and are members of GT61 clade A. Heterologous expression of XAT in Arabidopsis resulted in the arabinosylation of Arabidopsis GX, which normally does not possess Araf residues [78]. It is unclear how many enzymes are required to complete the full suite of arabinosyl substitutions found on monocot xylans, given that residues can be arabinosylated at O-2, O-3, or both positions. Xylosyl Arabinosyl substitution of Xylan 1 (XAX1), another GT61 enzyme in the grass-specific clade C.IV, has been implicated in the addition of β-xylosyl residues to O-2 of the α-1,3-Araf residues decorating the xylan backbone [67]. It was also suggested that transfer of xylose enhances feruloylation of the α-1,3-Araf residues, or that feruloylation interferes with hydrolysis of this xylosyl residue during xylan maturation [67]. A forward genetics screen applied to a mutant population of Brachypodium distachyon identified a SNP in Bradi2g01480 (SAC1), a member of the grass-specific clade C.III of the GT61 family, that impacts biomass digestibility. Xylan-enriched fractions isolated from sac1 plants have less xylose, indicating that SAC1 may have a similar function to XAX1 from rice [79]. Recently, a mutant in MUCILAGE-RELATED 21 (MUCI21), a putative xylosyltransferase in clade B of the GT61 family, was shown to be involved in the synthesis of seed mucilage xylan. Analysis of mucilage from muci21 plants suggests that this enzyme catalyzes the transfer of a β-1,2-xylosyl residue directly to the xylan backbone [42].
4-O-methylation
As discussed previously, a variety of non-glycosyl substitutions are also present in xylan. One of the best characterized of these is the 4-O-methylation of GlcA sidechains. The enzymes responsible for this modification in Arabidopsis were initially identified as Glucuronoxylan Methyl Transferase (GXMT) proteins by researchers in the BioEnergy Science Center [44,80]. Three homologs of these proteins have been studied in Arabidopsis, all containing a Domain of Unknown Function 579 (DUF579). Recombinantly expressed GXMT1 was able to catalyze the transfer of a methyl group from S-adenosyl methionine to the 4-position of GlcA residues present on GX polymers and oligosaccharides [44]. Interestingly, the disruption of normal xylan synthesis in mutants of many of the GT enzymes mentioned previously often leads to an increase in the ratio of methylated to unmethylated GlcA residues in GX [14]. One possible explanation is that when xylan synthesis is reduced, pools of the methyl donor accumulate while the concentration of glucuronosyl acceptors falls, increasing the extent to which the remaining acceptors are methylated. Another possibility is that slowing of xylan synthesis in biosynthetic mutants gives the methyltransferases more time to interact with their acceptor substrates. Further characterization of this phenomenon should provide insight into the overall process of xylan biosynthesis.
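The donor/acceptor explanation above can be illustrated with a deliberately crude mass-action sketch: if the methyl-donor supply stays constant while the influx of new GlcA acceptors (a proxy for the rate of xylan synthesis) falls, a larger fraction of the acceptors ends up methylated. All pool sizes, rate constants, and time steps below are arbitrary illustrative assumptions, not measured values.

```python
def methylation_fraction(acceptor_flux, donor_supply=1.0, k_met=0.5,
                         steps=1000, dt=0.01):
    """Toy mass-action model of GlcA 4-O-methylation.

    acceptor_flux -- rate at which new (unmethylated) GlcA acceptors appear,
                     standing in for the rate of xylan synthesis
    donor_supply  -- constant supply rate of the methyl donor (SAM)
    k_met         -- second-order rate constant for methyl transfer
    Returns the fraction of GlcA residues that end up methylated.
    """
    sam = 0.0    # methyl-donor pool
    unmet = 0.0  # unmethylated GlcA residues
    met = 0.0    # methylated GlcA residues
    for _ in range(steps):
        transfer = k_met * sam * unmet * dt
        sam += donor_supply * dt - transfer
        unmet += acceptor_flux * dt - transfer
        met += transfer
    total = met + unmet
    return met / total if total else 0.0

# Slowing xylan synthesis (smaller acceptor flux) raises the methylated fraction.
for flux in (2.0, 1.0, 0.5, 0.1):
    print(f"acceptor flux {flux:>4}: methylated fraction = {methylation_fraction(flux):.2f}")
```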
Ferulic acid and p-coumaric acid esters
Some of the arabinofuranosyl residues of monocot xylans are also decorated at O-5 with ferulic or p-coumaric acid esters. Ferulic substituents form oxidatively linked dimers and oligomers with wall polymers, resulting in a covalently linked network within the wall. Although the process by which these modifications are added to the polysaccharide is still poorly understood, recent work has suggested that members of the "Mitchell clade" within the BAHD acyltransferase superfamily are involved in ferulic and p-coumaric acid esterification of monocot xylans [81][82][83]. These enzymes have been shown to localize to the cytoplasm, suggesting that additional players are required to complete ferulic acid transfer, which most likely takes place in the Golgi. Feruloyl-CoA is likely the primary feruloyl donor in vivo; however, it remains unknown whether the feruloyl moiety is transferred directly to arabinoxylans or to another intermediate, such as UDP-Araf. It has been hypothesized that ferulic acid is first transferred to a glycosyl donor such as UDP-Araf in the cytoplasm, and that feruloylated UDP-Araf is then transported into the Golgi, where transfer of feruloylated Araf onto the xylan backbone may occur [3].
Recently, Marcia and coauthors showed that downregulation or overexpression of BdAT1, a member of the "Mitchell clade" of BAHD acyltransferases in Brachypodium, resulted in reduced or increased levels of monomeric and dimeric ferulic acid esters, respectively [84]. Taken together, their data indicate that BdAT1 is a promising candidate for the feruloylation of AX in grasses. Many intermediate steps in this process are still unknown, but once elucidated, they will provide several interesting targets for biomass modification.
O-Acetylation
O-Acetylation is one of the predominant modifications of xylan, and at least four protein families are involved in the cell wall polysaccharide acetylation pathway in the plant Golgi. These are the Reduced Wall Acetylation (RWA) proteins [85], the Trichome Birefringence-Like (TBL) proteins [86], the Altered XYloglucan 9 (AXY9) protein [87], and the GDSL acetylesterases [88]. The RWA2 protein was the first protein shown to be involved in cell wall acetylation in plants and was identified in Arabidopsis based on its homology to the Cas1P protein, which is involved in polysaccharide O-acetylation in the pathogenic fungus Cryptococcus neoformans [85]. Mutation of the RWA2 gene resulted in a 20% reduction of acetylation across several polysaccharides, including pectins, xyloglucan, and xylan [85]. RWA2 belongs to a family of four proteins in Arabidopsis. Using combinations of multiple rwa mutants, Manabe et al. demonstrated that RWA proteins have overlapping functions and that any one of the four proteins is able to support some level of acetylation of all polysaccharides in the wall [89]. Shortly after the identification of the RWA family, the plant-specific TBL family was shown to be involved in the acetylation of specific cell wall polysaccharides [86]. Analysis of plants bearing mutations in the TBL29 gene (also known as ESKIMO1, ESK1), which is highly expressed during secondary cell wall biosynthesis, has provided insights into its role in vivo. The xylan isolated from tbl29/esk1 mutants has reduced amounts of mono-acetylated xylosyl residues, suggesting an essential role in xylan O-acetylation [20]. Moreover, in vitro biochemical analysis of the TBL29/ESK1 protein by researchers in the BioEnergy Science Center established the precise molecular function of these plant-specific proteins: the O-acetylation of xylan backbone residues [64]. In addition to TBL29/ESK1, the other eight members of the TBL family in Arabidopsis have recently been biochemically characterized and shown to possess xylan acetyltransferase activities in vitro. TBL28, TBL30, TBL3, TBL31, TBL34, and TBL35 are responsible for mono-acetylation at O-2 or O-3 and/or di-acetylation at both O-2 and O-3 of xylosyl residues, while TBL32 and TBL33 transfer acetyl groups to O-3 of xylosyl residues substituted at O-2 with (Me)GlcA [90].
TBL proteins are composed of one N-terminal transmembrane domain and two conserved domains: the TBL domain and a domain of unknown function 231 (DUF231) [91]. The TBL domain harbors a conserved Gly-Asp-Ser (GDS) motif, and the DUF231 domain contains an Asp-x-x-His (DxxH) motif in the carboxy-terminus [92]. It has been hypothesized that one of the two domains binds the polymer while the other facilitates the binding of the acetyl donor and then transfers the acetyl group to the polysaccharide acceptor [92]. TBL proteins are predicted to be members of the GDSL-like family based on the presence of these conserved motifs [93]. Members of the GDSL esterase/lipase family harbor a "GDSL" sequence motif that is highly conserved across all kingdoms. GDSL hydrolytic enzymes are functionally diverse and have been shown to act as proteases, thioesterases, arylesterases, and lysophospholipases [93]. GDSL esterases/lipases belong to the SGNH hydrolase superfamily, which is characterized by four conserved sequence blocks (I, II, III, and V) that were first used to describe lipolytic enzymes [94]. The GDSL motif is part of block I, where the Ser residue is suggested to form a catalytic triad with the aspartate and histidine residues of the DxxH motif in block V [95,96]. Mutations of the GDSL and DxxH motifs in Arabidopsis ESK1 were found to lead to a complete loss of xylan acetyltransferase function [90]. A rice GDSL protein, Brittle leaf Sheath 1 (BS1), has recently been reported to function as an acetyl xylan esterase, the first member of the GDSL family in plants shown to have polysaccharide esterase activity [88]. This conclusion is supported by the observations that recombinant BS1 functions as an esterase in vitro and that backbone residues of xylan isolated from bs1 mutants display increased acetylation at O-2 and O-3 [88].
Taken together, these data suggest that RWA proteins operate at a biosynthetic step preceding those of the AXY9 and TBL proteins and, because of their overlapping specificities, are predicted to function in the transport of acetyl donors into the Golgi (Fig. 3). AXY9 is hypothesized to function at an intermediate step between the RWA proteins and the TBL acetyltransferases and may act to shuttle as-yet-unidentified acetyl donors. Finally, the ability of the BS1 enzyme to modulate xylan acetylation via its acetylxylan esterase activity in the Golgi suggests that it plays a role in maintaining acetylation levels and/or patterning on the xylan backbone. RWAs, TBLs, and BS1 provide several potential targets for genetic engineering to improve biomass by altering xylan acetylation.
Xylans as a target to reduce recalcitrance
Xylans are highly abundant polysaccharides in plant secondary cell walls and play a major role in the recalcitrance of crops grown as feedstocks for bioprocessing and bioenergy applications. However, developing strategies to modify xylans that minimize these recalcitrance barriers while simultaneously retaining plant fitness has been very challenging. This is due in part to the largely unpredictable pleiotropic effects of many xylan pathway mutations, combined with the severe growth phenotypes associated with these mutations. For example, RNAi silencing of IRX8/GAUT12 in Populus, an enzyme implicated in the biosynthesis of GX Sequence 1, affects GX structure, GX abundance, and levels of pectic polysaccharides [97]. Interestingly, biomass from these plants was less recalcitrant, and cell wall polymers were more easily extracted from its cell walls. However, it has been difficult to determine whether the primary cause of these characteristics was a change in the structure or overall abundance of xylan or pectin [97]. Attempts to silence or knock out expression of other enzymes implicated in Sequence 1 biosynthesis, including IRX7/FRA8 [12,98] and PARVUS/GATL1 [99,100] in Arabidopsis and Populus, resulted in plants with reduced overall growth, rendering mutants such as these poor choices for use as industrial feedstocks. Given the outcomes of these previous attempts to modify xylan structure, it may be more effective to engineer xylans in which the structures, abundances, or spatial distributions of specific sidechains are modified (i.e., substituent engineering) to facilitate bioprocessing.
In biomass-accumulating secondary cell walls, gene expression is controlled by a signal transduction network involving various transcription factors, including secondary wall NAC-domain master switches and their downstream transcription factors [101][102][103]. The distinct expression patterns of different NAC genes in specific cell types make their promoters attractive tools for the spatial manipulation of polysaccharides in modified biomass for improving biofuel production. For example, the dwarf phenotype of Arabidopsis irregular xylem (irx) mutants was rescued by expressing the corresponding xylan synthesis-related genes in vessels using Vascular Related NAC Domain 6 (VND6) and VND7 promoters, which produced transgenic lines with lower xylan and lignin contents and improved saccharification yields [104]. Thus, a promising strategy to modify cell walls for improved biomass is the use of cell type-specific overexpression or silencing of particular genes of interest. As the regulatory elements influencing the expression levels of certain gene products are characterized, and as next-generation genome editing techniques such as CRISPR-Cas9 become routine, manipulation of specific cell wall metabolic enzymes in the right place at the right time is finally becoming practical. Future efforts will utilize promoters that can be induced in specific cell types (e.g., fiber or vessel cells) to control the expression of genes known to impact xylan structure while avoiding the undesirable growth phenotypes that often result from the use of constitutive promoters. Utilizing such precise strategies to control gene expression should lessen the detrimental effects of these mutations, thus increasing plant fitness.
Another approach that may be exploited to engineer metabolic pathways and thereby affect biomass recalcitrance is the simultaneous introduction, removal, and/or modification of several plant genes (i.e., gene stacking). For example, the xylan in tbl29 mutants has a 60% reduction in O-acetylation, resulting in plants with reduced growth, collapsed xylem, and reduced biomass production [70]. However, overexpression of a xylan glucuronosyltransferase (GUX) enzyme in the tbl29 mutant background functionally replaces the missing acetyl substituents with GlcA residues, restoring normal growth while maintaining low acetylation [105]. Gene-stacking approaches have also been successfully applied to increase β-1,4-galactan content in Arabidopsis [106]. Similar approaches to produce altered xylan structures through gene stacking, combined with the use of specific genetic regulatory elements, are an exciting and promising route to novel xylan modifications with major impacts on plant recalcitrance.
In this context, one strategy to affect recalcitrance is to identify genetic modifications that change the abundance or distribution of xylan sidechain decorations in ways that modulate the strength or extent of the xylan's interactions with itself or other cell wall polysaccharides. It has been suggested that xylan-cellulose interactions rely heavily on the presence of the major and minor domains of xylan, as dictated by the spacing of (Me)GlcA residues. Altered expression of enzymes involved in the addition of xylan substituents, including glucuronosyltransferases, α-arabinosyltransferases, β-xylosyltransferases, 4-O-methyltransferases, and O-acetyltransferases, may therefore affect the patterning of xylan decorations in ways that disrupt polymer-polymer interactions in the wall, thereby increasing the efficiency of hydrolytic enzymes. A recent example of this idea showed how loss of the xylan acetyltransferase ESK1 results in a dysregulation of GlcA patterning, causing a loss of the normal, even spacing of GlcA sidechains and disrupting the ability of xylan to bind to cellulose fibrils [55]. Whether modifications of this type can be introduced without adversely affecting the overall wall architecture and plant fitness remains to be seen. Nevertheless, our recent work does suggest that modifying the extent of methylation of the GlcA residues is one relatively straightforward approach to increase the efficiency of biomass processing [44].
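One proposed way to picture why even spacing matters is that xylan is thought to dock onto cellulose as a two-fold helical screw, so consecutive xylosyl residues face opposite directions; even-numbered spacing then keeps all GlcA decorations on one face, leaving the other face free to interact with the cellulose surface. The snippet below is a purely geometric sketch of that idea; the spacings and positions are hypothetical, and the two-fold-screw picture is a model proposed in the literature rather than a finding of this review.

```python
def faces_occupied(glca_positions):
    """Geometric sketch: in a two-fold screw, residue i points 'up' if i is even
    and 'down' if i is odd. Return the set of faces carrying GlcA decorations."""
    return {"up" if p % 2 == 0 else "down" for p in glca_positions}

# Evenly spaced GlcA (e.g., every 8 residues, wild-type-like patterning):
even_pattern = list(range(0, 64, 8))
# Irregularly spaced GlcA (odd gaps mixed in, as in disrupted patterning):
odd_pattern = [0, 7, 15, 22, 30, 37, 45]

print("even spacing occupies faces:", faces_occupied(even_pattern))  # {'up'}
print("odd spacing occupies faces: ", faces_occupied(odd_pattern))   # {'up', 'down'}
```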
The effect of xylan on biomass recalcitrance to deconstruction is closely related to the structure and composition of the cell walls. For example, enzymatic hydrolysis of switchgrass biomass was shown to improve if xylan is previously removed from the wall by extraction with alkali, indicating that xylan is a key substrate-specific feature in switchgrass limiting sugar release [107]. The same treatment in poplar biomass is less effective, while reducing lignin content via chlorite treatment proved more beneficial [107]. Consequently, it will be necessary to find more substrate-specific approaches that address the chemical and structural differences between biomass from grasses or woody species.
Although the roles of xylan arabinosylation in grass cell wall architecture and function remain poorly understood, recent work demonstrating the xylan-specific arabinosyltransferase activities of GT61 enzymes in grasses provides new targets for xylan modification. However, perhaps the most obvious choice for modifying xylan structure to facilitate the deconstruction of grass cell walls is to modulate the extent of feruloyl and/or p-coumaroyl substitution. Feruloyl esters are known to crosslink cell wall polymers (especially xylans) by forming intra- and intermolecular bonds [38]. Coupling of xylan sidechains to lignin may provide strong and stable connections that impede the extraction of hemicelluloses and lignin from the wall or inhibit its enzymatic deconstruction. Increased knowledge about the enzymes responsible for the synthesis of these sidechain structures may enable genetic modifications that lead to biomass crops with more easily deconstructable walls.
Improving biofuel production: O-acetylation modification
O-Acetylation of xylans is a key glycopolymer modification contributing to biomass recalcitrance during biofuel production. For example, acetyl groups can sterically hinder the binding of hydrolytic enzymes to their polysaccharide targets [108]. Furthermore, accumulation of acetate released during deconstruction of lignocellulosic biomass inhibits yeast growth and fermentation [109]. Regulation of xylan acetylation is therefore a key strategy to improve biomass processing for biofuel production, and genetic engineering is one way to manipulate acetylation levels in cell wall xylans. To date, many mutants with defects in the biosynthesis of xylan acetylation have been shown to have reduced xylan acetylation levels, but they also display irregular xylem phenotypes and dwarfism [20,89,110], which is detrimental to biomass-based biofuel production. Recently, transgenic aspen lines in which expression of multiple RWA genes was suppressed using a wood-specific promoter were reported to have a 25% reduction in cell wall acetylation without affecting plant growth [111]. Ground biomass from wild-type and reduced-acetylation lines, with or without acid pretreatment, was subjected to enzymatic hydrolysis. The largest gains were observed in the RWA suppression lines when enzymatic saccharification was carried out without pretreatment, resulting in 20% higher yields of all sugars per unit wood dry weight. Less pronounced effects were observed when biomass was subjected to acid pretreatment (4% increased glucose), likely because sugars were removed during the pretreatment process [111].
Beyond suppressing acetylation during biosynthesis in the Golgi apparatus, expressing wall-resident xylan acetylesterases in muro is another strategy to optimize lignocellulosic biomass. A recent study revealed that transgenic aspen trees expressing a fungal acetyl xylan esterase had a 10% reduction in 2-O-monoacetylation, together with increased cellulose crystallinity and lignin solubility. Without disturbing plant growth, these modifications increased sugar yields during enzymatic saccharification of acid-pretreated biomass [112]. In a similar experiment, expression of a xylan acetylesterase in Arabidopsis led to a 30% reduction in cell wall acetylation and yielded 70% more ethanol relative to wild-type biomass when pretreated with either hot water or alkali prior to fermentation [113]. Taken together, these results reinforce the notion that reducing wall acetylation increases the accessibility of hydrolytic enzymes to their polysaccharide targets in wood, likely due to changes in overall cell wall architecture imparted when the amounts and/or distribution of acetyl groups are modified.
Conclusion
In planta modification of xylans remains one of the greatest challenges in feedstock bioengineering for bioindustrial purposes. This ubiquitous family of polysaccharides comprises complex structures that can vary quite dramatically depending on species and tissue type, making further characterization of naturally occurring xylan structures an area of great interest. Recent developments have significantly furthered our knowledge of xylan synthesis and have begun to elucidate the enzymes involved in backbone elongation, sidechain addition, acetylation, and methylation. However, many areas remain black boxes waiting to be explored, including the role of reducing-end structures in xylan biosynthesis and function, the enzymes responsible for the addition of ferulic/coumaric esters, the precise control of chain length, and the relationships between xylan structure and its interactions with other wall components. Given the sheer abundance of xylan in bioindustry feedstocks, it is imperative to address these gaps in biosynthetic knowledge to pave the way toward engineering better plants with less recalcitrant cell walls.
Recent advances in the heterologous expression of plant cell wall GTs in the BioEnergy Science Center are finally opening the door for detailed in vitro biochemical and structural studies [64,114], at last allowing unambiguous conclusions regarding the specific functions of proteins involved in xylan biosynthesis. This is an important step in the study of xylan biosynthesis, where many of the proteins remain uncharacterized and the majority of knowledge concerning them has been gained solely from mutant analysis, where the complexities of biology may present bewildering results. Furthermore, new insights into xylan regulation and the development of tractable genetic techniques for manipulating the xylan biosynthetic machinery in tissue-specific ways will further our understanding of how gene products affect xylan structure and function in specific tissues. These results, taken together, will provide important targets to improve biomass crops for industrial processing. | 2017-12-06T21:03:46.933Z | 2017-11-30T00:00:00.000 | {
"year": 2017,
"sha1": "fa4d0ac7066a06139cf972bd175ddbbb7f815153",
"oa_license": "CCBY",
"oa_url": "https://biotechnologyforbiofuels.biomedcentral.com/track/pdf/10.1186/s13068-017-0973-z",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "8b6ee9748419b8755b2ec8325db9b6ee61639b49",
"s2fieldsofstudy": [
"Biology",
"Engineering"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
52185742 | pes2o/s2orc | v3-fos-license | “They do not see us as one of them”: a qualitative exploration of mentor mothers’ working relationships with healthcare workers in rural North-Central Nigeria
Background In HIV programs, mentor mothers (MMs) are women living with HIV who provide peer support for other women to navigate HIV care, especially in the prevention of mother-to-child transmission of HIV (PMTCT). Nigeria has significant PMTCT program gaps, and in this resource-constrained setting, lay health workers such as MMs serve as task shifting resources for formal healthcare workers and facility-community liaisons for their clients. However, challenging work conditions including tenuous working relationships with healthcare workers can reduce MMs’ impact on PMTCT outcomes. This study explores the experiences and opinions of MMs with respect to their work conditions and relationships with healthcare workers. Methods This study was nested in the prospective two-arm Mother Mentor (MoMent) study, which evaluated structured peer support in PMTCT. Thirty-six out of the 38 MMs who were ever engaged in the MoMent study were interviewed in seven focus group discussions, which focused on MM workload and stipends, scope of work, and relationships with healthcare workers. English and English-translated Hausa-language transcripts were manually analyzed by theme and content in a grounded theory approach. Results Both intervention and control-arm MMs reported positive and negative relationships with healthcare workers, modulated by individual healthcare worker and structural factors. Issues with facility-level scope of work, workplace hierarchy, exclusivism and stigma/discrimination from healthcare workers were discussed. MMs identified clarification, formalization, and health system integration of their roles and services as potential mitigations to tenuous relationships with healthcare workers and challenging working conditions. Conclusions MMs function in multiple roles, as task shifting resources, lay community health workers, and peer counselors. MMs need a more formalized, well-defined niche that is fully integrated into the health system and is responsive to their needs. Additionally, the definition and formalization of MM roles have to take healthcare worker orientation, sensitization, and acceptability into consideration. Trial registration Clinicaltrials.gov number NCT01936753, registered September 3, 2013.
Background
In the context of HIV service delivery, mentor mothers (MMs), also known as expert mothers, are HIV-positive women with first-hand experience as exemplary clients in the prevention of mother-to-child transmission of HIV (PMTCT) who provide experiential guidance for other women living with HIV [1][2][3][4][5]. Beyond their personal experience, MMs often receive training to enhance their peer support services and work alongside formal healthcare workers (HCWs) [1,4,5]. In this regard, MMs can be considered lay health workers: lay people who have been trained for short periods to assist formal HCWs and take over certain tasks [6][7][8][9]. MMs and similar lay HIV health workers often do not have specific qualifications other than being persons living with HIV [4,[10][11][12]. They work in health facilities, in clients' homes, and in the larger community and ultimately act as a link between health facilities and communities [4,6,10,11].
Depending on the setting, MM roles at health facilities include HIV testing, pre-and post-test counseling, enrolling clients into PMTCT/HIV care, booking client appointments, assisting in drug dispensing and adherence counseling, and tracking clients who have missed appointments or dropped out of care [12][13][14][15][16]. These MM roles are played largely in the framework of task shifting, which the World Health Organization (WHO) defines as "the rational redistribution of tasks among health workforce teams" [17]. In HIV care, task shifting may occur from doctors to nurses or other formal HCW cadres [18,19] and from doctors, nurses, and other formal HCW cadres to lay health workers [4,11,19,20]. The professional-to-layperson model of task shifting has been formally or informally adopted to various degrees in several sub-Saharan African countries to facilitate scale-up of HIV (including PMTCT) services, reach and retain clients, reduce disease burden, and ultimately improve treatment and prevention outcomes [4,10,11,19,20]. Task shifting is particularly helpful in low-resource settings where there is a shortage of human resources for health with concomitant high burden of disease. At 3.2 million, Nigeria has the second largest population of people living with HIV globally, after South Africa [21]. Furthermore, Nigeria has a large PMTCT burden, along with wide program gaps: only 30% of approximately 200 000 HIV-positive and pregnant Nigerian women receive antiretroviral drugs annually, and only 9% of HIV-exposed infants receive timely early infant diagnosis testing [22,23]. In Nigeria, structured MM peer support has been shown to improve maternal retention and viral suppression [24] as well as timely infant presentation for HIV testing [25]. Similar findings on the positive impact of peer support on PMTCT outcomes have been reported from other African countries [1,[26][27][28][29][30].
Despite the well-established benefits of peer support in PMTCT, many high-burden countries-including Nigeria-have not formally adopted these interventions at the national level. For example, in 2005, Nigeria introduced expanded roles for lay voluntary workers including people living with HIV as expert patients in pilot projects such as the Integrated Management of Adult and Adolescent Illness [31]; this was however not fully implemented nationwide. Both the 2009 national decentralization program for HIV treatment scale-up to primary healthcare centers and the 2014 national task shifting/task sharing guidelines [32] formalized policies to only support expanded roles of non-physician health workers already in the Nigerian civil service structure (e.g. nurses, midwives, community health officers, community health extension workers and pharmacy attendants). MMs and other HIV-positive treatment supporters are not part of the current civil service structure and are typically supported by externally funded implementing partners.
Furthermore, challenges exist in terms of defining MMs' roles/niche in PMTCT/HIV programs in particular and the formal health system in general [7,9,13,14,33]. Under these circumstances, professional relationships between MMs (as HIV-positive lay health workers) and the formal HCWs who supervise them may be complicated. Placed in this hierarchical environment, MMs, with often poorly defined roles, little or no education or professional credentials, low wage-earning capacity, and known to be HIV-positive, may be highly vulnerable to stigma, discrimination, marginalization, non-supportive supervision, or other negative experiences at the healthcare facility [10,[12][13][14]34].
The nature of MMs' working conditions, especially HCW-MM working relationships, has so far not been well characterized in Nigeria, a country which stands to gain significantly from the scale-up of peer support interventions in its challenging PMTCT program. This paper explores the nature of the working environment for MMs at primary healthcare centers in rural North-Central Nigeria. Specifically, we aim to describe, from the perspective of MMs, how interactions with healthcare workers shape MMs' working conditions and influence their performance.
Study design
This qualitative study was nested within a larger PMTCT implementation research project, the MoMent (Mother Mentor) study, in North-Central Nigeria. MoMent was a prospective cohort study that compared a standardized, closely supervised MM program with the less-structured, less-supervised routine MM program at primary healthcare centers in rural North-Central Nigeria [35]. Main outcomes included postpartum maternal retention and viral suppression and timely uptake of early infant diagnosis [35]. This article draws its findings from focus group discussions (FGDs) conducted towards the end of the prospective study follow-up, to capture the experiences and opinions of all MoMent MMs (intervention and control) regarding their roles and working conditions during study implementation.
Study setting and population
This study was conducted in rural communities of the Federal Capital Territory and Nasarawa State in North-Central Nigeria. Study participants were MMs engaged at all 20 (10 intervention and 10 control) primary healthcare centers that served as MoMent study sites. Table 1 compares training, supervision and scope of work for all MoMent MMs working in both intervention and control arms [24,35]. MoMent MMs in both arms were chosen from communities surrounding the primary healthcare centers they were assigned to. All of these women had completed the PMTCT cascade at least once and were expected to guide other women in navigating and being compliant with PMTCT services. MMs in both arms were expected to work at both facility and community-level and were provided the same stipend amount. The major differences between the two MM groups were in the supervision and structure built into the intervention arm: intervention MMs received baseline training via a standard curriculum with daily, hands-on supportive supervision from a study-designated MM supervisor. Furthermore, all intervention MMs utilized standardized logbooks for documenting client calls and visits. Random quarterly performance audits were conducted via client feedback, in order to improve and/or maintain MM work performance in the intervention arm [36].
Participant recruitment
Over its 5-year implementation period, MoMent engaged a total of 38 MMs across both intervention (structured peer support) and control (unstructured peer support) sites. The number of MMs assigned per site was guided by a ratio of 1:10-15 between MMs and pregnant or postpartum clients (up to 18-24 months post-delivery) [35]. Ultimately, intervention-arm MMs had an average ratio of 1:12, while control MMs averaged 1:14 clients [24]. All MoMent ever-engaged MMs were eligible to participate in the MM FGDs. Towards the end of the study, all 38 MMs (regardless of whether still actively engaged or not) were contacted by telephone by research officers who had been stationed at each MoMent site during the study. MMs were not contacted or recruited by healthcare workers for these FGDs. The research officers briefed each of the 38 MMs about the FGDs; possible dates and the MMs' availability on those dates were discussed. A total of 36 out of the 38 MMs indicated their interest and availability to participate in the FGDs. These 36 interested and available MoMent MMs were provided information on the date and location of their specific FGD. Two MMs, both from the control arm, were interested but unable to participate in the FGDs on the scheduled dates: one was recovering from surgery while the other had to travel out of town.
All 36 successfully recruited MMs presented for the FGDs, which were conducted on non-clinic days in private rooms at study primary healthcare centers within the communities where the MMs worked, or at a private venue within their work catchment area. This was done to preserve confidentiality and to encourage discussions on this topic without fear of victimization by facility healthcare workers. Written informed consent was obtained from all MMs before the FGDs. Snacks and transport reimbursement (depending on distance traveled) were provided on the day of the FGD for all participants. Healthcare workers neither participated in nor observed the FGDs.
Data collection
Seven FGDs (four among intervention MMs and three with control MMs) were conducted between September and November 2016, which marked the end of the 6-month postpartum follow-up for all participants for the prospective study's primary outcomes. Each of the seven FGDs conducted had four to six MM participants, ultimately representing all 20 MoMent sites.
Prior to each FGD, an interviewer-administered form was used to capture participants' socio-demographic information including educational attainment, marital status, religious affiliation, parity and duration of engagement as MMs.
Each discussion was audio-recorded and guided by a trained bilingual (English and Hausa) facilitator, with or without a co-facilitator, and an observer. During each FGD, at least one observer took notes on non-verbal cues, which were used to assist in data analysis and interpretation. All facilitators and observers were study staff familiar to participants, due to their interactions with these MMs at study sites during MoMent data collection. All study FGD staff had a minimum of 2 years working experience with the MoMent study and had had at least 1 year experience in conducting qualitative interviews. None of the FGD facilitators or observers worked as facility staff nor had any supervisory role over MMs participating in their respective FGDs.
The FGD guide explored MMs' opinions on their workload and stipends, terms of engagement, scope of work, and relationships with healthcare workers. Each FGD lasted for 60-90 min.
Since the number of FGDs to be conducted was limited by the specific number of MoMent MMs engaged and available, data saturation was not a consideration. However, after transcription and analysis of the initial seven FGDs, we conducted a "member check" to gain participant feedback on the initial findings and to validate the collected data and its interpretation [37]. A cross-section (n = 4) of the 36 original participants were recruited for the member check; these participants were selected in order to equitably represent religion, high and low levels of education, and intervention and control arms. The member check was performed in September 2017 as a group discussion, where the key findings from qualitative data analysis were presented to participants for confirmation, correction, and additional commentary. Ultimately, the member check was in agreement with initial findings. The member check was conducted by the same facilitators and observers who implemented the initial FGDs.
Data transcription and analysis
Audio-recorded FGDs were transcribed verbatim in English or transcribed and translated from Hausa to English where necessary. Manual transcription and analysis were performed by the same facilitators and observers who conducted the FGDs. For data analysis, we adopted the constant comparative method in a grounded theory approach [38]. In this approach, inductive methodology is used to systematically generate theory from the collected data. We selected a series of code words to develop themes and sub-themes from the qualitative data. Preset codes were related to the general themes in our FGD guides and served as the root of our coding tree. The root preset code words for our coding tree were "work relationships," "stigma/discrimination," "working conditions and pay," and "roles/responsibilities." Eight paired analysts independently coded and analyzed the data. This was followed by group review, triangulation, and content analysis by iteration until a final consensus on patterns and categorizations was achieved. The research consultant (AO) additionally independently analyzed and coded the data with NVivo 11 (QSR International, Victoria, Australia) and compared the findings to the emerging themes identified by the paired researchers.
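As an illustration of how the preset coding tree described above might be organized during analysis, the sketch below stores the four root codes, attaches emergent sub-themes, and tallies coded excerpts. The sub-theme names and excerpt counts are invented for illustration; the actual codebook and coded data are not reproduced here.

```python
from collections import defaultdict

# Root (preset) codes taken from the FGD guide; sub-themes emerge during coding.
coding_tree = {
    "work relationships": set(),
    "stigma/discrimination": set(),
    "working conditions and pay": set(),
    "roles/responsibilities": set(),
}

# Hypothetical coded excerpts: (root code, emergent sub-theme) pairs.
coded_excerpts = [
    ("work relationships", "supportive HCW"),
    ("work relationships", "exclusion from staff meetings"),
    ("roles/responsibilities", "unrelated tasks"),
    ("working conditions and pay", "stipend issues"),
    ("roles/responsibilities", "unrelated tasks"),
]

counts = defaultdict(int)
for root, sub_theme in coded_excerpts:
    if root not in coding_tree:
        raise ValueError(f"unknown root code: {root}")
    coding_tree[root].add(sub_theme)   # grow the tree with emergent sub-themes
    counts[(root, sub_theme)] += 1     # tally excerpts per sub-theme

for (root, sub_theme), n in sorted(counts.items()):
    print(f"{root} -> {sub_theme}: {n} excerpt(s)")
```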
Results
Of the 36 MMs interviewed, 32 (89%) had worked with the MoMent study for at least 2 years; the remaining four (11%) had worked with MoMent for at least 1 year. Twenty-five of the 36 (69.4%) participants had worked as MMs for between 2 and 5 years (Table 2); 80.6% (29/36) of these women had all living children confirmed HIV-negative. Table 2 presents details of participants' socio-demographics. Median age of MMs was 32 years, and 18 (50%) were married; notably, 14 of the 18 single women (77.8%) were widowed.
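As a quick arithmetic check, the reported proportions follow directly from the counts given in the text; the snippet below simply recomputes them.

```python
counts = {
    "worked with MoMent >= 2 years": (32, 36),
    "worked 2-5 years as MMs": (25, 36),
    "all living children HIV-negative": (29, 36),
    "widowed among single women": (14, 18),
}

for label, (numerator, denominator) in counts.items():
    print(f"{label}: {numerator}/{denominator} = {100 * numerator / denominator:.1f}%")
    # e.g., 32/36 = 88.9% (reported as 89%), 25/36 = 69.4%, 29/36 = 80.6%, 14/18 = 77.8%
```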
Findings from focus group discussions
Figure 1 displays the core themes that emerged from the data analysis and the interrelationships identified among them.
Work relationships between healthcare workers and mentor mothers
MMs, much like other lay health workers, are typically accountable to, and supervised by clinical staff, often nurses. Thus, their work conditions-roles, work hours, routines, and integration or marginalization in the clinic-are significantly controlled by these professional health workers. In this study, we noted a mixture of both positive and negative healthcare workermentor mother working relationships. For example, MMs reported relationships with some HCWs as cordial, where the latter were supportive of MMs and their work in the clinic.
The facility staff helps us with the drug appointment dates of the clients and he enters the client's information in our notebooks.

Despite these examples of cordial relations, some MMs felt alienated and not appreciated as fellow health workers by the HCWs they worked with, highlighting the relative lack of MMs' integration into their work environment.
They treat us like we are not part of them; we are not among the staff. Whenever they have meetings, they don't let us join. They don't see us as one of them. They involve others and exclude us. The mentor mothers are nothing. -Intervention MM FGD3

On ANC days, we work together, do everything together. But if there is anything [of benefit], they will say, leave, are you one of the staff? …when it is time to share they will say it's for the staff… when they see a positive mother it is then that they remember us. -Intervention MM FGD3

MMs also discussed how some HCWs would denigrate them and their work performance:

They make comments questioning our ability to carry on our duties, even to the point of threatening to report us. -Control MM FGD1

Structural and individual healthcare worker characteristics influencing work relationships
Mentor mother reports suggest that individual HCW characteristics and structural factors influenced the nature of HCW-MM relationships.
The last matron we had in Facility A, she gave me a "heart attack". As soon as I get to the gate of the clinic my heart always skips because I know it will be trouble all through… But with this new one, I do not have any problems. -Intervention MM FGD3

Structural issues emerging from the discussions included poorly defined and/or poorly communicated MM scope of work, MMs' recruitment process, and MMs' legitimacy and hierarchy within the health facility.
Poorly defined scope of work at the facility
Under MoMent's structured peer support program, MMs had a clearly specified scope of work, which was to be devoted largely to client interaction at the facility and/or in the community (Table 1) [35]. Peer support in the control arm was less well-defined and did not use standardized tools. In both arms, however, MMs' scope of work at the facility itself was not well-defined, especially vis-à-vis HCWs' roles. FGDs highlighted the fact that many MMs were confused about their roles at the facility:

I don't know exactly what registers we are supposed to handle and those we should not be responsible for, because we get conflicting information… This clarification will make me focus on those registers that are my responsibility and reject any unrelated tasks. -Intervention MM FGD1

Some MMs also noted that they were performing tasks that they regarded as clearly outside their scope of work.

Sometimes, they [HCWs] make us do jobs that are not part of our responsibilities as peer counselors... They give us additional jobs apart from the peer counseling job. -Control MM FGD3

I handle the ANC register. If you do not do that, the nurses will begin to embarrass you. The in-charge will …

Any extra thing I do in the facility will be on a volunteer basis, like buying of food for the in-charge, which is something I do willingly, not because it's part of my job.
-Intervention MM FGD1
Despite the willingness to assist, MMs considered some of these tasks unacceptable.
Due to insufficient staff strength, I assist in the card room, I file folders and it is not my job. Yet they still tell me to sweep and mop! I agree I will bring out cards and file documents but the sweeping and the mopping, I do not want to do it.-Control MM FGD3
Workplace hierarchy
Within the healthcare facility hierarchy, MMs appeared to be the lowest in the pecking order. The lack of working space, for example, presented a privacy challenge when MMs counseled their clients.
We are supposed to have something like a counseling room because we have issues with privacy. When you want to counsel women you will start running up and down, looking for where you and the client will sit that will be convenient and you won't have people listening in and all that.

While HCWs undergo a rigorous recruitment process that requires academic qualifications and licenses, the MM recruitment and engagement process lies outside the formal health sector and is often supported by foreign donor-funded (in other words, non-Nigerian) grants. This seems to further delegitimize and alienate MMs with respect to their status among formal health workers.
[HCWs] said we don't work with a certificate and are not members of staff. Even if we are staff, we are not learned. -Intervention MM FGD3
Stigma and discrimination from healthcare workers
In more extreme cases, HCWs discriminated against MMs on the basis of their HIV-positive status and this affected MMs' work.
The former in-charge didn't even allow us to come close to his office; he sent us away as soon as we got close because we are HIV-positive.

Unfortunately, in a handful of cases, some MMs faced demands for part of their stipends to be paid to supervising HCWs.
Our salary that they give us…our clinic in-charge collects 3000 naira [~$8.50] and says we must give it to her whether we like it or not …that means we are being forced. So we agreed; every month we give her 3000 naira each. It got to a point that when we don't give her the money, we get into trouble with her.-Control MM FGD3
Mentor mother productivity and work performance
In this study setting, the manner in which HCWs relate to MMs has significant consequences for the latter's work performance and job satisfaction.
In instances where MMs were made to perform extraneous tasks, it was often to the detriment of their clients/primary tasks.
The facility staff sometimes ask us to sweep and mop the facility. Even when our clients are around, they will insist that we must finish the sweeping and mopping before we attend to them. -Intervention MM FGD4

Unrelated tasks sometimes deny me from carrying out my mentor mother responsibilities. They [HCWs] will always tell us to leave our mentor mother roles and fill in for them. -Intervention MM FGD1

Every day we have to go to the facility. If we do not go it becomes a problem. So we do not get time to visit our clients. Sometimes even on Saturdays and Sundays, we are in the facility. We should know the number of days we are to go to the facility so that we can have client home visit days. -Control MM FGD3
Mentor mother job satisfaction and opportunities for formal integration
Despite the aforementioned challenges, a theme of MMs' devotion to and enjoyment of their work clearly emerged, partly because of the income but largely because of the opportunity to support other women living with HIV to deliver HIV-free infants.
We thank God we are being paid, it is better than nothing at all and we enjoy the work. We like the outcomes we get after the job is done because the infants of our clients are negative. -Control MM FGD1

The most rewarding thing in this job is that we are impacting lives. The joy of the lives you impact drives you. You become a model in the community. This morning a little girl ran towards me and I could not recognize her but she reminded me of how I helped her mother. All these kinds of things encourage you, but the money is also important. -Intervention MM FGD1

Job satisfaction notwithstanding, MMs expressed a desire for their workforce to be formally integrated into the healthcare system, with opportunities for advancement. They understood the importance of integration because MMs are currently neither state nor federal workers; as such, their stipends are paid by donor-funded projects that are time-bound. Health facilities often do not have the financial means to sustain MM activities when donor projects end. Being absorbed as routine facility staff would therefore give MMs more stable employment and much-needed income in an environment characterized by high levels of unemployment.
Discussion
Our study provides detailed insight into the working conditions of mentor mothers and their professional relationships with healthcare workers in North-Central Nigeria. We have highlighted a mixture of positive and negative examples that on one hand demonstrate supportive working environments in some instances; however, there were also other instances where MMs' working environments were less than conducive, largely due to tenuous relationships with HCWs. Some of the issues emerging were stigma and discrimination on the basis of MMs' HIV-positive and non-formal work status, unclear scope of work at the facility level, and assignment of non-relevant tasks by HCWs.
Our findings support those of prior studies that have reported issues with lack of recognition, complementarity, and integration of lay HIV health worker roles vis-à-vis HCW roles in sub-Saharan Africa [4,10,11,13]. The non-integrated, poorly structured programs in which MMs and similar lay health workers operate may actually limit their impact in the roles for which they were engaged. In our study, MMs discussed being alienated by HCWs on the basis of their non-formal work status. For example, the lack of training certificates and identity cards made MMs vulnerable to dismissive treatment by some HCWs. Given the continued threat of vertical transmission to the HIV/AIDS elimination agenda [22], the opportunity to capitalize on the gains from maternal peer support in PMTCT cannot be taken for granted.
A poorly defined scope of work, especially at the facility level, was a major complaint from MMs in our study. Vagueness in job descriptions has been reported for MMs as well as other lay health workers in HIV and PMTCT programs [4,11,14,16]. The non-formal work status of MMs and other lay health workers in HIV may contribute to the persistence of poorly defined scopes of work. This may also perpetuate HCWs' practice of assigning non-relevant tasks to MMs, which distract them from core duties. Similar experiences have been reported among HIV peer educators in Ghana [12] and expert mothers in Malawi and Zimbabwe [4]. As demonstrated by the MoMent study, however, providing structure can significantly improve the impact of lay peer support on maternal-infant outcomes in PMTCT [24,25]. For this to work, MM scope of work at both the facility and community level must be well-defined, with oriented input and buy-in from both experienced and new HCWs. The introduction of structure and standards can improve the quality and sustainability of peer support while harnessing the unique motivation of people living with HIV.
In addition to their non-formal work status, MMs' HIV-positive status also factored into some HCWs' attitudes towards them. MMs reported experiencing HIV-related stigma and discrimination from the HCWs they were working with. Stigma and discrimination from HCWs towards clients is well-documented [39][40][41][42], but our study additionally highlights stigma and discrimination directed from HCWs towards MMs. Health systems that engage MMs and other people living with HIV should be aware of this and make provisions for HCW sensitization and for advocacy/protections against workplace stigma and discrimination.
In our study, payment for services was noted to be critical to MMs' motivation to work. While stipends were provided to all MoMent study MMs for the purpose of client home visits and phone calls [4,35], these funds were also used for MMs' livelihood. Both paid and unpaid models of peer support have been implemented in HIV programs, with differing degrees of impact [4,11]; however, head-to-head comparisons of paid and unpaid peer support models within the same study setting are lacking. In their analysis of lay HIV health worker programs in sub-Saharan Africa, Herman et al. report that adequate remuneration, in the setting of quality supervision and continuous training, is critical for quality and sustainability [10]. Cataldo et al. report similar findings from their synthesis of expert mother studies (including MoMent) in Malawi, Nigeria, and Zimbabwe, noting that adequate remuneration and training are likely to maximize the impact of these interventions in PMTCT [4]. That said, instances, however rare, of MM stipend "garnishing" by HCWs are unacceptable and unethical, and avenues for reporting and addressing this phenomenon need to be available to MMs. During the MoMent study, the opportunity for this type of corruption was minimized by paying all MMs through their bank accounts rather than in cash (via HCWs/clinic administrators) and by creating a safe avenue for all MMs to report such cases with minimal retaliation; within the routine PMTCT program, local chapters of the Network of People Living with HIV/AIDS in Nigeria are involved in the MM program structure. These local chapters also serve as potential pathways for MMs to report issues at work that may be taken up with funding implementing partners and/or the local health authorities.
While we have discussed MM-HCW tensions in the workplace, it is noteworthy that MMs are engaged to complement, not supplant, HCWs' jobs. As other studies have reported, much of the tension between HCWs and lay health workers stems from poorly defined lay worker roles and HCWs' fear of their roles being usurped through "encroachment of territory" [10,[12][13][14]. Thus, training and empowering MMs may work against them in their relationships with HCWs at the facility level. It is interesting to note that in our study, MMs mentioned little about tensions with HCWs or poorly defined scopes of work with regard to community-level MM activities. We suggest positioning MMs as clearly defined task shifting resources at the facility level, while protecting time for MMs to perform their community-level duties. Data on the costs and cost-effectiveness of PMTCT peer support programs have been encouraging [43] and further support the call for their standardization, integration, and scale-up.
The potential impact of MoMent's MM program structure on MM working conditions should also be mentioned. Part of the intervention package included supportive supervision from knowledgeable, PMTCT-trained staff who acted as advocates for their assigned MMs. While not reported here, intervention-arm MMs have noted how collegiate support from their study-assigned MM supervisors made them feel valued and helped them cope with job-related stress [4]. The supportive element of the supervision may very well have contributed to better MM client outcomes in the prospective MoMent study by way of higher-quality, more impactful MM counseling [24].
Our study is limited in that only the views and experiences of MMs are presented here. Our approach to the MM FGDs was to gather information on their experiences and working conditions during the MoMent study. While we interviewed HCWs (among many stakeholders) to assess acceptability of MMs as part of the formative aspect of MoMent [35,44], we did not interview HCWs for the latter FGDs-they were limited to MMs only. Obtaining HCWs' views on the issues explored here may have yielded additional perspectives on MM working conditions and roles. Additionally, exploration of community-level challenges faced by MMs can potentially fill in prevailing gaps in understanding their working conditions; this was not addressed in this paper. Lastly, our study was not designed to explore HCW-related experiences of MMs with shorter versus longer-term engagements; it is thus difficult to tell whether HCW-MM relationships have changed over time, for instance before MoMent and during/after MoMent. However, other reports published before this paper point to similar prevailing conditions for lay workers in HIV, albeit not in Nigeria. It appears not much has changed, likely because not much attention has been paid to developing and implementing solutions.
Conclusions
Mentor mothers are functioning as peer counselors, community health workers, and task shifting resources and can potentially serve as mental health and domestic violence resources [45]. In HIV programs, there is a unique advantage in engaging people living with HIV to deliver HIV-related services to their peers. MMs are critical to the success of PMTCT programs in high-burden, low-income countries like Nigeria. Findings from impact evaluation studies such as MoMent provide the impetus to make accommodations for MMs within the formal health sector. This involves formally adopting and integrating structured MM programs nationwide and sustaining them through country- and/or state-level domestic funding rather than the current (and dwindling) donor support. To capitalize on their motivation and to maximize their impact, MMs need to occupy a well-defined and well-supported niche that is minimally threatening to formal healthcare workers, so that MMs can be seen as "one of them."

Abbreviations
FGD: Focus group discussion; HCW: Healthcare worker; MM: Mentor mother; PMTCT: Prevention of mother-to-child transmission of HIV; WHO: World Health Organization
Funding
The MoMent Nigeria study was funded by the WHO through an award for the INtegrating and Scaling up PMTCT through Implementation REsearch (INSPIRE) initiative from Global Affairs Canada. The funders had no role in the design of the study nor in the collection, analysis, and interpretation of data or in writing the manuscript.
Availability of data and materials
The datasets generated during the current study are not publicly available due to ongoing analysis for further publications, but are available from the corresponding author on reasonable request.
Disclaimer
The opinions expressed in this article do not necessarily reflect the views or policies of the World Health Organization or Global Affairs Canada.
Authors' contributions NASA designed the study, contributed to the data acquisition, analysis, and interpretation, and drafted and critically reviewed the manuscript. AO contributed to the data analysis and interpretation and drafted and critically reviewed the manuscript. MJB and GN contributed to the data acquisition, analysis, and interpretation and drafted and critically reviewed the manuscript. CNE contributed to the data analysis and interpretation and drafted and critically reviewed the manuscript. ENI contributed to data interpretation and critically reviewed the manuscript. LJC designed the study, contributed to the data acquisition, analysis and interpretation, and critically reviewed the manuscript. All authors read and approved the final manuscript.
Ethics approval and consent to participate The study was approved by the Nigerian National Health Research Ethics Committee, the Ethics Review Committee of the World Health Organization, and the Institutional Review Boards of the University of Maryland Baltimore and the University of Georgia Athens. Written informed consent was obtained from all study participants.
Consent for publication Not applicable
Competing interests
Cure Behavior and Thermomechanical Properties of Phthalonitrile–Polyhedral Oligomeric Silsesquioxane Copolymers
Phthalonitrile–polyhedral oligomeric silsesquioxane (POSS) copolymers were prepared by adding two different POSS cage mixtures: epoxycyclohexyl POSS (EP0408) and N-phenylaminopropyl POSS (AM0281). The cure behavior and properties of these polymers were analyzed and compared using differential scanning calorimetry (DSC), thermogravimetric analysis (TGA), dynamic mechanical analysis (DMA), Fourier transform far infrared (FTIR) measurements, and rheometric studies. The POSS-containing polymers showed higher chemical reactivity, better thermal stability and better mechanical performance in comparison to their unmodified counterparts. All the polymers showed water absorption below 1.5%. As revealed by FTIR measurements, the polymerization products contained triazine ring structures that were responsible for the superior thermal properties exhibited by these POSS-containing polymers.
Introduction
Fiber-reinforced composites have been widely used over the past several decades in numerous structural applications (e.g., aircraft, missile, ship, and vehicle construction) owing to their good mechanical performance and light weight. These materials have developed quickly due to their potential for future applications. Phthalonitrile polymers [1][2][3][4][5], a new class of thermal materials combining low flammability and high strength, have shown great potential in the aerospace sector as components for maintaining airframe loads in the next generation of aeronautical and space vehicle systems. In comparison to other high temperature-resistant materials, phthalonitrile-based composites [6][7][8] have numerous advantages. These include superior mechanical, thermal, and oxidative stability properties compared to most state-of-the-art thermal composites (e.g., polyimide and phenolic triazine). Additionally, phthalonitrile-based composites do not release any by-products during the curing process and the corresponding prepolymer (B-staged resin) can be prepared and stored with an unlimited shelf life under ambient conditions. Regarding fire resistance properties, phthalonitrile-based composites are among the few materials meeting the U.S. Navy's stringent requirements articulated in the MIL-STD-2031 directive for the usage of polymer composites aboard Navy submarines.
The phthalonitrile monomer 4,4 -bis(3,4-dicyanophenoxy)biphenyl (BPh) was first synthetized by Keller [1,2] and, since then, extensive work has been carried out by his group and others [9][10][11][12][13][14][15][16][17][18][19][20][21] involving various types of phthalonitrile monomers and curing agents. Thus, new monomers with lower melting temperature were prepared, thereby widening the processing window. However, high temperatures and long curing times are still required to form cross-linked structures during polymerization. Phthalonitrile-polyhedral oligomeric silsesquioxane (POSS) compounds provide unique opportunities to create revolutionary material combinations through a melding of the desirable properties of ceramics and polymers at the 1 nm length scale. These new combinations enable the circumvention of classic material performance trade-offs by exploiting the synergy and properties of materials that only occur on the nanoscale. POSS reagents consisting of an inorganic silsesquioxane cage have multiple reactive groups able to interact with the cyanate group at high temperatures, thereby offering a unique opportunity to prepare nanocomposites with a truly molecular dispersion of inorganic fillers. Additionally, POSS copolymers with enhanced characteristics (e.g., higher glass transition (Tg) temperatures and superior mechanical and oxidative or fire resistant properties) have been prepared in a large range of thermoplastic (e.g., polyethylene, polypropylene, and polycarbonate) and thermosetting (e.g., polyimide, epoxy resins, polyurethane, and cyanate ester resin, among others) polymers [22][23][24][25][26][27][28][29][30]. Kaliavaradhan [31] synthesized tri-phthalonitrile phenyl POSS polymers with high thermal and retardant properties. Bu synthetized POSS-polysulfonamide (PSA, silicon-containing arylacetylene) resins with higher flexural strength and impact fractured energy, by 80.5% and 92.8% respectively [32]. Pen prepared POSS-TiO2-epoxy nanocomposites with enhanced thermal stability and UV resistance [33]. Despite these works, phthalonitrile-POSS copolymers have been scarcely studied in literature. Therefore, this work aims to study the effect of POSS on the chemical reactivity and thermal stability properties of phthalonitrile-POSS copolymers. EP0408 and AM0281 are hybrid molecules, each containing an inorganic silsesquioxane core and either eight epoxycyclohexyl or N-phenylaminopropyl organic groups at the corners of the cage, respectively. Epoxycyclohexyl POSS (EP0408) and N-phenylaminopropyl POSS (AM0281) were chosen with the clear objective of studying the effect of POSS on the cure behavior, thermomechanical properties, and the reactive mechanism of phthalonitrile copolymers. The structures of the monomer and the two POSS reagents used herein are shown in Figure 1.
Aromatic amine was used with EP0408 and AM0281 to co-cure the phthalonitrile monomer such that the main structure of the phthalonitrile polymers is retained.
Preparation of the Prepolymers, Polymers, and Their Nanocomposites
A mixture of BPh and BAPP (curing agent, 2 wt %) was melted at 260 °C. EP0408 and AM0281 were separately added at varying compositions (0.1, 0.5, 1, 5, and 10 wt %) and the resulting mixtures were stirred for 10 min and cooled down to room temperature to prepare the prepolymers, respectively named the neat prepolymer, EP0408-X prepolymer, and AM0281-X prepolymer, where X indicates the POSS content in wt %. The prepolymers were pulverized before performing differential scanning calorimetry (DSC), dynamic mechanical analysis (DMA), and rheological tests. Neat polymers and those containing EP0408 and AM0281 at 0.1, 0.5, 1, 5, and 10 wt % were cured at 260 (4 h), 300 (8 h), and 325 °C (8 h). Before the DMA experiments, phthalonitrile casting samples were cured in molds (50 × 10 × 2 mm) with an air-circulating oven at 260 (8 h), 300 (8 h), and 325 °C (8 h). The samples were subsequently post-cured under an inert atmosphere of nitrogen at 350 °C (4 h), 350 and 375 °C (4 h), 350 (8 h), and 375 °C (4 h). The rheological behavior of the neat prepolymers and the prepolymer-POSS blends was studied at 280 °C to obtain the complex viscosity-time plots. The neat prepolymers, polymers, and POSS polymers containing EP0408 and AM0281 were studied by Fourier transform far infrared (FTIR) analysis.
Characterization
DSC experiments were conducted in a flowing nitrogen atmosphere on mixtures containing the monomer and either EP0408 or AM0281 (0.1, 0.5, 1, 5, and 10 wt %). The experiments were conducted within a Perkin-Elmer Pyris-6 DSC calorimeter (Perkin-Elmer, Richmond, CA, USA) at a heating rate of 10 °C/min. DSC curves at different heating rates (5 °C/min, 10 °C/min, 15 °C/min, 20 °C/min) were measured. The activation energies were calculated using the following equation:
ln(β/Tp²) = −E/(R·Tp) + constant,
where β is the heating rate, Tp is the peak temperature of each DSC curve at different heating rates, and R is the universal gas constant. Thus, the E value was obtained from the slope of the linear dependence of ln(β/Tp²) on 1/Tp at various heating rates.
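For illustration, the following short Python sketch performs this fit; the heating rates and peak temperatures are placeholder values, not data from this study.

```python
import numpy as np

# Placeholder DSC data (not from this study): heating rates and the
# corresponding exothermic peak temperatures read from the DSC curves.
beta = np.array([5.0, 10.0, 15.0, 20.0])      # heating rates (K/min)
T_p = np.array([388.0, 393.0, 396.0, 399.0])  # peak temperatures (K)

R = 8.314  # universal gas constant, J/(mol K)

# Linear fit of ln(beta/T_p^2) against 1/T_p; the slope equals -E/R.
y = np.log(beta / T_p**2)
x = 1.0 / T_p
slope, intercept = np.polyfit(x, y, 1)

E = -slope * R  # activation energy, J/mol
print(f"Activation energy E = {E / 1e3:.0f} kJ/mol")
```

Only the slope is needed for E, so the units chosen for β affect the intercept but not the resulting activation energy.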
Thermal analysis was performed on polymers with different contents of EP0408 and AM2081, using a TA Instruments SDTQ600 thermogravimetric analyzer (TA Instruments, Eden Prairie, MN, USA). The TGA tests were carried out under flowing nitrogen (100 mL/min) at a scan rate of 10 • C/min. The dynamic storage modulus (G') and damping factor (tanδ) of rectangular phthalonitrile polymer specimens (50 × 10 × 3 mm) were obtained by DMA on a DMS-6100 instrument (NSK Ltd., Tokyo, Japan) with a flowing nitrogen atmosphere and a temperature range of 30-400 • C (4 • C/min, frequency: 10 Hz). Thus, T g was estimated from the modulus-temperature plots obtained by the DMA. Dynamic viscosity measurements were performed on a TA Instruments AR-2000 rheometer (TA Instruments, Eden Prairie, MN, USA). The water uptakes of the neat polymers and the phthalonitrile-POSS copolymers were monitored under ambient conditions. The FTIR studies were performed on a Nicolet Avatar 370 FT-IR spectrometer (Thermo Fisher Scientific, Grand island, NY, USA) with potassium bromide pellets containing a low amount of sample.
Results and Discussion
The prepolymers containing EP0408 or AM0281 (0.1, 0.5, 1, 5, and 10 wt %) were studied by DSC, and the results are shown in Figures 2 and 3, respectively. All the DSC scans showed one exothermal peak at 98-127 °C corresponding to the initial reaction between amine and phthalonitrile. The endothermal peaks observed at 170-250 °C were ascribed to the melting of the curing agent, monomer, and prepolymers. Consequently, the position and shape of these endothermal peaks are expected to change with the POSS content. As shown in Figure 2, the area of the exothermal peak at ca. 350-380 °C, the peak ascribed to the formation of networks, increased with EP0408 content. However, when using AM0281 (Figure 3), no noticeable exothermal peaks were observed, thereby revealing that cross-linking polymerization is taking place at a very low rate. Considering the above results, it can be determined that the reaction process involves two steps: prepolymerization, since large amounts of prepolymers were produced during the initial reaction, and cross-linking, where cyanogen groups react and form triazine ring structures. The position of the peaks was dependent on the EP0408 or AM0281 contents. The activation energies (peaks at 115-120 °C) for the neat and the EP0408- and AM0281-containing polymers were 135, 129, and 78 kJ/mol, respectively. Thus, the AM0281-containing polymers react more easily at low temperatures compared to the other polymers. The DSC scans of the EP0408-containing polymers showed noticeable peaks at high temperatures (350-370 °C) (Figure 2) with a corresponding activation energy of 1934 kJ/mol. This activation energy was well above that obtained at low temperature, thereby indicating that the reaction at a high temperature is more hindered than that at a low temperature.
The TGA profiles of the EP0408- and AM0281-containing materials (0, 0.1, 0.5, 1, 5, and 10 wt %) are shown in Figures 4 and 5, respectively. The EP0408-containing polymers, particularly the EP0408-0.5 polymer, showed higher thermal stability compared to the other polymers. The weight retained at 900 °C is about 48%, higher than that of the neat polymer (31%). The derivative thermogravimetry (DTG) curves exhibit one obvious peak at 640-680 °C. The rate of decomposition increases from 550 °C and reaches a maximum at about 640-680 °C.
Most of the AM0281-containing polymers, particularly the AM0281-1 polymer, showed excellent thermal stability compared to the other polymers. The weight retained at 900 °C is about 40%, higher than that of the neat polymer (31%). However, the AM0281-10 polymer showed poorer thermal stability than the neat polymer. Since AM0281 bears a hindered amine with low reactivity, an excess of this POSS agent in the mixture caused the unreacted amine to decompose at high temperatures, thereby decreasing the stability of the polymer. The DTG curves shown in Figure 5 exhibited two obvious peaks at 540-580 °C and 640-680 °C, respectively. These were the temperatures at which the polymers decomposed at the fastest rate.
As shown in Figures 4 and 5, the EP0408-0.5 and AM0281-1 samples were the most thermally stable polymers of their respective groups. Both materials are compared in terms of thermal stability in Figure 6; no significant differences were found between them, although the EP0408-containing material showed a slightly higher thermal stability than the AM0281-containing polymer.
From the TGA curves of each system, a gradual increase in the POSS content causes the thermal stability to increase at first and then decrease. When a small amount of POSS agent is added to the system, the POSS cage enters the triazine network and raises the cross-linked density, as shown in Figure 7; thus, the thermal stability is improved. However, with an excessive amount of POSS agent in the system, the polymer network becomes highly branched at the branching points of the POSS cage, the steric hindrance makes triazine difficult to form, and the cross-linked density is decreased. This causes decreased thermal stability.
DMA measurements were carried out to study the dynamic mechanical properties of the polymers as a function of different post-cure treatments. These preliminary studies were used to identify the curing conditions required to obtain polymers with optimum mechanical properties. The samples post-cured at 350 °C exhibited a sharp drop in the storage modulus and a peak in the tanδ curves as the temperature increased (curve a in Figures 8 and 9, respectively). This behavior indicates that the polymer shifted from a glassy state to a rubbery state as the segmental motion of the polymer chains increased. While the storage modulus and tanδ plots were relatively flat for the samples heated at elevated temperatures for long times (curves b and c), curve c in particular showed a higher storage modulus, thereby revealing the presence of a stable cross-linked network hindering the segmental motion of the polymers. This enhancement in the dynamic mechanical properties of the phthalonitrile polymer was attributed to the advance of cross-linking in the thermoset upon treatment at elevated temperatures for long times.
DMA was also carried out on the neat and the AM0281- and EP0408-containing polymers after the cure and post-cure treatments to evaluate the Tg of the cured polymers. As shown in Figure 10, in all cases, the dynamic storage modulus gradually decreased with temperature, which was attributed to stress relaxation of the polymer network. In comparison with the neat polymer, the storage moduli of the polymers containing EP0408 and AM0281 decreased at significantly lower rates, thereby indicating that temperature had less influence on these polymers. However, the POSS-based polymers, particularly those containing AM0281, showed a lower modulus. Figure 11 shows the damping factor plots; all the polymers showed a similar Tg, thereby revealing that, in all cases, a stable cross-linked network was formed, hindering the segmental motion.
The complex viscosity changes accompanying the phthalonitrile polymerization reaction were investigated by performing isothermal rheometric measurements at 280 °C on the neat and phthalonitrile-POSS copolymers (Figure 12). The phthalonitrile-POSS copolymers, especially the AM0281-containing materials, showed a more rapid increase in their viscosity compared to the other polymers and cured in a far shorter time than the neat polymer. As expected, the phthalonitrile-POSS copolymers showed high reactivity and cured at a high rate while maintaining their thermal stability.
The water uptake results of the neat polymer and the phthalonitrile-POSS copolymers are shown in Figure 13. Saturated absorption conditions were reached after 31, 16, and 11 days for the neat polymer, phthalonitrile-EP0408 copolymer, and phthalonitrile-AM0281 copolymer, respectively. However, the amount of water absorbed at saturation did not follow this trend: the phthalonitrile-EP0408 copolymers showed the lowest saturation value among all the polymers tested (0.6%), whereas phthalonitrile-AM0281 showed the highest saturation value (1%), which was larger than that of the neat polymer (0.7%).
Figure 14 shows the FTIR spectra of the neat prepolymer, the neat polymer, the EP0408-0.5 polymer, and the AM0281-1 polymer. FTIR was used to monitor the polymerization process. The evolution of the nitrile absorption peak (2234 cm−1) and the formation of new peaks were studied to obtain information on the polymerization mechanism (Figure 14). The nitrile band was more intense for the prepolymers than for the cured polymers, confirming that the nitrile groups reacted at high temperatures. The weaker nitrile peak shown by the AM0281-1 sample revealed a more rapid polymerization for this material. The cured polymers showed noticeably different FTIR spectra compared to the prepolymers. The peaks centered at 1518, 1520, and 1522 cm−1 and at 1352, 1368, and 1364 cm−1 (Figure 14, curves b-d, respectively) were ascribed to triazine rings. Moreover, the carbonyl bands at 1713, 1725, and 1717 cm−1 (Figure 14, curves b-d, respectively) were produced by the oxidation of the polymer at high temperatures. Based on these results, the potential structures of triazine and POSS within the triazine ring network were proposed and are shown in Figures 7 and 15, respectively. The POSS functionality is grafted to the polymer chains upon reaction with the matrix, occupying the triazine network (Figure 7). As a result, the cross-linked density of the polymer was increased, which led, in turn, to higher thermal stabilities.
Conclusions
The following conclusions can be drawn from this study: Conclusion 1: The polymerization rates of neat and POSS-containing polymers were determined by the quantity and reactivity of the reactive groups in the curing system. The activation energies for the neat polymer and the EP0408- and AM0281-containing polymers were 135, 129, and 78 kJ/mol, respectively, indicating that the POSS-containing polymers had higher chemical reactivity and superior stability compared to the neat polymers.
Conclusion 2: According to the DMA data ( Figure 8), high curing temperatures and long curing times resulted in polymers with enhanced oxidative stabilities.
Conclusion 3: According to the TGA measurements, the neat polymer showed higher weight losses at 900 °C (25% of initial weight remaining) than either the EP0408-0.5 (45% weight remaining) or AM0281-1 (40% remaining) polymers. Thus, the POSS-containing polymers showed thermal stabilities superior to that of the neat polymer.
Conclusion 4: As revealed by FTIR, the polymerization products were triazine ring structures, which are believed to be responsible for the good thermal properties of the modified polymers.
Contribution of traffic-originated nanoparticle emissions to regional and local aerosol levels
Abstract. Sub-50 nm particles originating from traffic emissions pose risks to human health due to their high lung deposition efficiency and potentially harmful chemical composition. We present a modelling study using an updated EUCAARI number emission inventory, incorporating a more realistic, empirically justified particle size distribution (PSD) for sub-50 nm particles from road traffic. We present experimental PSDs and CO2 concentrations, measured in a highly trafficked street canyon in Helsinki, Finland, as an emission factor particle size distribution (EFPSD), which was then used in updating the EUCAARI inventory. We applied the updated inventory in a simulation using the regional chemical transport model PMCAMx-UF over Europe for May 2008 to test the effect of updated emissions in regional and local scales and in contrast to atmospheric new particle formation (NPF). Updating the inventory increased simulated average total particle number concentrations by only 1 %, although the total particle number emissions were increased to a 3-fold level. The concentrations increased up to 11 % when only 1.3–3 nm-sized particles (nanocluster aerosol, NCA) were considered. These values indicate that the effect of updating overall is insignificant in a regional scale during this photochemically active period, during which the fraction of the total particle number originating through atmospheric NPF processes was 91 %. These simulations give a lower limit for the contribution of traffic to the aerosol levels. Nevertheless, the situation is different when examining the effect of the update spatially or temporally, or when focusing to the chemical composition or the origin of the particles. For example, daily average NCA concentrations increased by a factor of several hundreds or thousands in some locations on certain days. Overall, the most significant effects–reaching several orders of magnitude–from updating the inventory are observed when examining specific particle sizes (especially 7–20 nm), particle components, and specific urban areas. While the model still has a tendency to predict more sub-50 nm particles compared to the observations, the most notable underestimations in the concentrations of sub-10 nm particles are, after updating, overcome and the simulated distributions now agree better with the data observed at locations having high traffic densities. The findings of this study highlight the need to consider emissions, PSDs, and composition of sub-50 nm particles from road traffic in studies focusing on urban air quality. Updating this emission source brings the simulated aerosol levels particularly in urban locations closer to observations, which highlights its importance for calculations of human exposure to nanoparticles.
Introduction
Detailed emission inventories are necessary for predictions of air quality and atmospheric composition in general. At present, very few of the standard inventories focus in enough detail on particle number concentrations and size distributions of particles from various sources. Several modeling studies using the regional chemical transport model PMCAMx-UF (Jung et al., 2010) over Europe (Fountoukis et al., 2012;Ahlm et al., 2013;Baranizadeh et al., 2016;Julin et al., 2018;Patoulias et al., 2018) have relied on the pan-European particle number emission (Denier van der Gon et al., 2009;Kulmala et al., 2011) and carbonaceous aerosol inventories developed in the EU-CAARI (European Aerosol Cloud Climate and Air Quality Interactions) project (the combination of these inventories is referred to here as the EUCAARI inventory). The EUCAARI inventory includes emissions from electricity production, industry, road and non-road transport, waste disposal, and agriculture. Paasonen et al. (2016) estimated future projections of particle number concentrations at a global scale using emission inputs based partially on the same inventory but, for example, traffic emissions based on the EU FP7 project TRANSPHORM database (Vouitsis et al., 2013).
While road transport is a significant particle source in areas affected by vehicles, such as in urban environments (Shi et al., 2001;Kumar et al., 2014), the EUCAARI inventory, however, does not fully consider the traffic-originated emissions of the smallest (especially sub-50 nm in diameter, D p ) particles. This results partially from the fact that only nonvolatile particles larger than 23 nm have been selected as the regulated ones in current road transport number emission standards (Giechaskiel et al., 2012) because measuring them is far more reproducible than measuring volatile ones. Many of the components of the smallest particles do, however, evaporate when heated. Hence, there are also emissions of particles larger than 23 nm (volatile ones), which are currently unregulated. The emission factors (EFs) of the smallest particles are quite variable across the vehicle fleet due to the nature of the nucleation process -their main origin at least in diesel exhaust -which is very sensitive to several factors, e.g., fuel properties, driving parameters, exhaust aftertreatment technology, and environmental parameters (Keskinen and Rönkkö, 2010). Only emissions of particles larger than 10 nm were estimated in the EUCAARI inventory because emissions of especially sub-10 nm particles for many emission sources have not been determined with high enough certainty or not determined at all.
Particles formed via a nucleation process are typically observed as a different mode -called nucleation mode -in the particle size distribution (PSD) of the exhaust. Although the nucleation mode particles are formed from primary gaseous emissions after the exhaust is released from the exhaust pipe, they are modeled similarly to primary emissions in regional or global models because the grid sizes can be kilometers, but the nucleation processes occurring in exhaust plumes occur at scales of a few meters at most. In addition to the high level of variation in the concentrations of the smallest particles in vehicle exhaust, PSD measurements with a differential mobility particle sizer (DMPS) or scanning mobility particle sizer (SMPS) typically underestimate the concentrations in the sub-10 nm size range (Kangasluoma et al., 2020). Furthermore, particles smaller than 3 nm have remained undetected until recent advances of measurement techniques, such as the introduction of the particle size magnifier (PSM), which is capable of detecting particles down to ∼ 1 nm ( Vanhanen et al., 2011). Traffic has recently been shown to be a major source of those previously undetected particles (nanocluster aerosol, NCA) in traffic-influenced areas (Rönkkö et al., 2017).
Sub-50 nm or sub-23 nm particles originating from traffic are not negligible in terms of human health effects: they have higher deposition efficiency in the human respiratory system as compared with larger particles and can even translocate to the brain (Oberdörster et al., 2004). They also overlap with the sizes of particles formed and grown during atmospheric new particle formation (NPF) events and therefore have the potential to contribute to the climate effects of aerosols (Kerminen et al., 2018). Such particles form a complex external aerosol mixture influenced by local co-pollution, meteorology, and atmospheric processes. Anthropogenic emissions overall can also greatly affect the frequency and intensity of NPF events in urban air (Saha et al., 2018). Additionally, emissions of diesel vehicles can include metal-containing particles, which can be found in a separate size mode from non-volatile particles near 10 nm. Metallic combustion-originated nanoparticles have also been found in the human brain (Maher et al., 2016).
In this study, the EUCAARI inventory has been updated with more realistic, measurement-derived PSDs originating from road transport. PSDs of 1.2-800 nm particles measured in a traffic-influenced street canyon in Helsinki, Finland, were incorporated into the inventory in order to better represent real-world particle emissions from vehicles. The updated inventory was then applied in the PMCAMx-UF model, and the effects of updating were studied at different spatial and temporal scales, compared to the observational data, and contrasted with NPF. The simulated period (May 2008) was photochemically relatively active, which makes NPF the major source of new particles. This period was chosen because the same period has been simulated in several other related studies as well, providing plenty of comparable data and pre-defined input files for emissions and meteorology. Since the street canyon measurements were performed in 2017, using more recent technologies for PSD measurements, trends of urban aerosol and vehicle emissions were used to scale the determined emissions from 2017 to 2008.
Experimental data
The original EUCAARI inventory was updated using PSDs and CO 2 concentrations measured at the Mäkelänkatu supersite, located in a highly trafficked street canyon in Helsinki, Finland. The street canyon measurements were performed in May 2017 and in May 2018. PMCAMx-UF simulations were done for May 2008 as in the previous PMCAMx-UF studies over Europe (Fountoukis et al., 2012;Ahlm et al., 2013;Baranizadeh et al., 2016;Julin et al., 2018). More recent measurements for determining traffic emissions were used because PSD measurements down to ∼ 1 nm were unavailable in 2008. Hourly PSD data are also available for several atmospheric measurement stations across Europe for May 2008.
Determining traffic emission factors
The Mäkelänkatu supersite is a continuous measurement site operated by the Helsinki Region Environmental Services Authority (HSY). It is located at a curbside of a highly trafficked (28 000 vehicles per workday) street canyon about 3 km north of the city center of Helsinki, Finland. About 1/10 of the traffic is comprised of heavy-duty vehicles. The detailed information on the supersite and the measurements performed in May 2017 can be found elsewhere Hietikko et al., 2018;Olin et al., 2020). Additionally, the composition of NCA (volatile and nonvolatile fractions), measured at the Mäkelänkatu supersite in May 2018 (Lintusaari et al., 2022), and the particle compositions (black carbon (BC), sulfate (SO 4 ), and primary organic aerosol (POA) fractions) in diluted exhaust of a diesel bus, obtained from a simulation with an aerosol dynamics model coupled with a computational fluid dynamics (CFD) model (Olin, 2013), were used in splitting the EFs further into chemical compound categories specified by the EUCAARI inventory.
PSDs (dN/dlogDp) were determined with the combination of a particle size magnifier (PSM), two condensation particle counters (CPCs), and a differential mobility particle sizer (DMPS), as described by Olin et al. (2020). Whereas Olin et al. (2020) took only the large-particle dilution ratio (DR = 8.2) of the bridge diluter into account, the DR is now additionally corrected for very small particles. The correction was done using a DR vs. Dp curve determined in an inverse modeling study with CFD. The corrected DRs for the first two size bins (1.2-3 and 3-7 nm) are 10.7 and 8.8 instead of the constant value of 8.2.
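A minimal sketch of this size-dependent dilution correction is given below; the measured concentrations are hypothetical, and only the DR values quoted above are taken from the text.

```python
import numpy as np

# Hypothetical measured concentrations (cm^-3) downstream of the bridge diluter
# for the first two size bins (1.2-3 nm and 3-7 nm); values are illustrative only.
N_measured = np.array([2.0e4, 1.5e4])

# Size-dependent dilution ratios for these bins (10.7 and 8.8, as stated above),
# replacing the constant large-particle value of 8.2.
DR = np.array([10.7, 8.8])

# Concentration in the undiluted sample = measured concentration x dilution ratio.
N_corrected = N_measured * DR
print(N_corrected)   # [214000. 132000.]
```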
The concentrations (N, in cm−3) of every size bin of the determined PSDs were converted to EFs (n, in particles per kilogram of fuel) using simultaneous CO2 concentration measurements (examples shown in Fig. S1 in the Supplement) at 1 min time resolution, as was done for the NCA concentration by Olin et al. (2020). To express all data at a similar time resolution, the PSDs measured with the DMPS at 9 min resolution were interpolated to 1 min resolution before calculating the EFs. Whereas NCA measured at the curbside probably originates from the studied street or via atmospheric NPF, larger particles, which have longer atmospheric lifetimes, can also originate from larger areas, including nearby streets or the whole urban area. Nevertheless, because linear fitting of the particle concentrations from every size bin against the CO2 concentration is possible (Fig. S1), their relation to traffic is evident, although all particle sizes may not originate from the studied street. The calculated EFs are represented here as an emission factor particle size distribution (EFPSD; dn/dlogDp), presented later in Sect. 3.2.2.
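The bin-by-bin EF calculation can be illustrated with the following Python sketch; the particle and CO2 time series, the CO2 emission factor of roughly 3.15 kg CO2 per kg of fuel, and the ambient conditions used for the ppm-to-mass conversion are illustrative assumptions rather than values taken from this study.

```python
import numpy as np

# Hypothetical 1 min time series of one size-bin concentration (cm^-3)
# and the simultaneously measured CO2 concentration (ppm).
N = np.array([1.2e4, 1.8e4, 2.6e4, 3.1e4, 4.0e4])
co2_ppm = np.array([420.0, 440.0, 465.0, 480.0, 505.0])

# Slope of the linear fit N vs. CO2 (cm^-3 per ppm); the intercept absorbs the
# background levels of both quantities.
slope, intercept = np.polyfit(co2_ppm, N, 1)

# Convert the slope into an emission factor per kilogram of fuel burned.
# Assumptions (not from the paper): ~3.15 kg CO2 emitted per kg of fuel, and
# ambient conditions of about 1 atm and 10 degC for the ppm-to-mass conversion.
M_CO2 = 44.01e-3          # kg/mol
R = 8.314                 # J/(mol K)
T, p = 283.15, 101325.0   # K, Pa
kg_co2_per_m3_per_ppm = p * M_CO2 / (R * T) * 1e-6   # kg CO2 m^-3 per ppm
ef_co2 = 3.15             # kg CO2 per kg fuel

# cm^-3 per ppm -> m^-3 per (kg CO2 m^-3) -> particles per kg fuel
ef_particles = slope * 1e6 / kg_co2_per_m3_per_ppm * ef_co2
print(f"Emission factor: {ef_particles:.2e} particles per kg fuel")
```

With these placeholder values the result is on the order of 10^14 particles per kilogram of fuel, i.e., the order of magnitude typically reported for traffic number emission factors.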
Atmospheric measurement stations
Simulation results are compared with the observations from several atmospheric measurement stations across Europe. PSD data from six measurement stations from the EU-SAAR (European Supersites for Atmospheric Aerosol Research) network and from the SMEAR (Station for Measuring Ecosystem-Atmosphere Relations) III station in Helsinki were utilized in the model evaluation.
The selected EUSAAR stations represent different types of locations: Aspvreten, Sweden, and Mace Head, Ireland, are located in coastal areas; Hyytiälä, Finland, and Vavihill, Sweden, are located in rural continental areas; Ispra, Italy, and Melpitz, Germany, are not in close vicinity of pollution sources but are still affected by traffic emissions. The SMEAR III station in Kumpula, in Helsinki, Finland, is located in an urban background area, and the nearest busy road (50 000 vehicles per day) is separated from it by a 150 m band of deciduous forest (Järvi et al., 2009). The Kumpula station is less than 1 km away from the Mäkelänkatu station; thus, they are quite comparable. However, the Mäkelänkatu station is much more affected by traffic because it is located at a curbside of a busy street canyon (Okuljar et al., 2021). They fall inside the same computational grid cell of the PMCAMx-UF model in this regional-scale application.
Simulations
Simulations were performed with the PMCAMx-UF model for 1-29 May 2008, similarly to Julin et al. (2018). The results from the first 2 d were omitted from the analysis to minimize the effects of uncertain initial conditions. The model was run with the original and with the updated emission inventory. The effects of traffic emissions and atmospheric NPF were also examined by performing the model runs also without NPF.
Model description
The three-dimensional regional chemical transport model PMCAMx-UF simulates both the size-dependent particle number and chemically resolved mass concentrations (Jung et al., 2010). Vertical and horizontal advection and dispersion, wet and dry deposition, and gas-phase chemistry descriptions are based on the publicly available CAMx (Comprehensive Air Quality Model with Extensions) air quality model. The aerosol dynamics processes in PMCAMx-UF (NPF, condensation, and coagulation) are modeled using the DMAN (Dynamic Model for Aerosol Nucleation) by Jung et al. (2006). DMAN tracks the aerosol mass and number distributions using the TOMAS (two-moment aerosol sectional) algorithm (Adams and Seinfeld, 2002), in which particles are logarithmically divided into 41 size bins between 0.8 nm and 10 µm.
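The sectional grid described above can be reproduced approximately as in the sketch below; the exact bin-edge placement used internally by TOMAS may differ, so this is only an illustration.

```python
import numpy as np

# Approximate reconstruction of a 41-bin logarithmic sectional grid spanning
# 0.8 nm to 10 um in diameter (assumed geometrically spaced edges).
n_bins = 41
d_min, d_max = 0.8e-9, 10e-6   # m

edges = np.logspace(np.log10(d_min), np.log10(d_max), n_bins + 1)
centers = np.sqrt(edges[:-1] * edges[1:])   # geometric-mean bin diameters

# Note: 41 bins over this diameter range give a diameter ratio of ~1.26 per bin,
# i.e., roughly a doubling of particle mass from one bin to the next.
print(edges[:3])
print(centers[:3])
```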
This study used the most recent version of the PMCAMx-UF model, used also by Julin et al. (2018). In this version, particles contain 15 chemical components: POA, BC, SO4, ammonium (NH4), five secondary organic aerosol (SOA) components separated according to their volatility, crustal material, nitrate, sodium, chloride, a surrogate amine species, and water (H2O). The model predicts the NPF rate from the sum of the rates of three included NPF mechanisms: H2SO4-ammonia-H2O and H2SO4-dimethylamine-H2O mechanisms based on the cluster kinetic model ACDC (Atmospheric Cluster Dynamics Code; McGrath et al., 2012; Olenius et al., 2013) and the classical-nucleation-theory-based H2SO4-H2O mechanism (Vehkamäki et al., 2002). The used computational grid covered the European domain with a 36 km × 36 km horizontal grid resolution and 14 vertical layers, reaching an altitude of 6 km. More detailed information on the used model version can be found in Julin et al. (2018).
Updating the emission inventory
3.2.1 Extracting the road-transport-related particle emissions from the EUCAARI inventory
Hourly gridded particle emissions in the EUCAARI emission inventory are separated into 15 source categories and sub-categories. One of the categories is for road transport, and it is further separated into four sub-categories: gasoline, diesel, liquefied petroleum gas, and non-exhaust (e.g., from tires or brakes) emissions. Because the particle number emission rates in the 41 size bins were not openly available at the source category level, updating only the road-transport-related emissions was not straightforward. The road-transport-related emissions were extracted from the inventory, which reports the particle number emissions as a sum of all 15 sources (separated into all size bins and components), through a positive matrix factorization (PMF) analysis. The optimal solution from the PMF analyses was obtained when the inventory was represented with 16 factors, according to the decrease in the normalized error while increasing the number of factors. Due to the inexact nature of PMF, the optimal solution was not obtained with 15 factors even though the inventory has been constructed with 15 sources. Figure S2 presents maps of the monthly mean abundances of all 16 PMF factors. The factors 5, 6, 7, 11, and 12 have features reflecting real traffic patterns. However, Fig. S3, presenting the mean diurnal variations in the abundances of the PMF factors in Kumpula and Mäkelänkatu as well as in Melpitz, shows that reasonable diurnal cycles for both stations are seen only with the factors 6, 7, and 11. Of these, only the PSD from the factor 6 (Fig. S4) corresponds to the on-road diesel exhaust PSD presented by Denier van der Gon et al. (2009), which is also a bimodal distribution with modes at 23 and 57 nm. The road-transport-related source in the original EUCAARI inventory was available as the total particle mass emission rate. Thus, the map of and diurnal variation in the particle mass emission rate from the factor 6 were compared with the ones from the EUCAARI inventory (Fig. S5). The map features, diurnal variations, and the overall level of the values are very comparable, with some exceptions, such as some ship routes appearing in the factor 6 due to the inexactness of PMF. However, marine areas were omitted from the following emission updates.
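To make the factorization step concrete, the sketch below uses non-negative matrix factorization (NMF) from scikit-learn as a simple stand-in for PMF; true PMF additionally weights the fit by element-wise uncertainties, which this sketch omits, and the synthetic matrix merely stands in for the actual gridded inventory.

```python
import numpy as np
from sklearn.decomposition import NMF

# Illustrative stand-in for the factorization: X holds non-negative number
# emission rates with one row per (grid cell, hour) sample and one column per
# size bin / component. Here X is purely synthetic.
rng = np.random.default_rng(0)
X = rng.gamma(shape=2.0, scale=1.0, size=(500, 41))

n_factors = 16   # the number of factors found optimal in the text
model = NMF(n_components=n_factors, init="nndsvda", max_iter=500, random_state=0)
W = model.fit_transform(X)   # factor abundances per sample (500 x 16)
H = model.components_        # factor profiles, i.e., PSD shapes (16 x 41)

# A traffic-like factor would then be identified from its profile (a row of H)
# and from the spatial/diurnal pattern of its abundances (a column of W).
print(W.shape, H.shape)
```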
Finally, the PMF factor 6 was selected to represent the road-transport-related sub-category updated in this work. Although the road-transport-related emissions in the inventory consist of four sources, only the factor 6, which is presumably only diesel-related, was used in updating the inventory because it was connected to road emissions with high certainty. Omitting the other sub-categories (gasoline, liquefied petroleum gas, and non-exhaust emissions) is not significant because the abundances of the other factors are lower compared to the factor 6 and because using this factor already slightly overestimates the mass emissions (Fig. S5).
3.2.2 Emission factor particle size distribution
Figure 1 presents the EFPSD derived from the PSD measurements in Mäkelänkatu. Its shape agrees well with the shape of the difference PSD (background PSD subtracted from the PSD measured when wind blew from the road) from the same experiment reported by Hietikko et al. (2018), with the exception of a slightly higher soot mode in the difference PSD. (In Figure 1, the vertical line at 57 nm denotes the highest Dp considered in the updating process and is also the size where the PSDs overlap; the shape of the difference PSD measured at Mäkelänkatu (Hietikko et al., 2018) is also shown for comparison, scaled so that it can easily be compared with the EFPSD data.) The agreement implies that deriving an EFPSD from bin-by-bin calculation of EFs using CO2 concentrations is an acceptable method. The concentration in the first size bin (1.2-3 nm) is calculated as the average (circled dot) of two values: the value (dot) derived from the experiment in 2017 and the value (circle) derived from the experiment in 2018 (Lintusaari et al., 2022). This was done because the concentration of the first bin was lower than that of the next bin (3-7 nm) in the year 2017 data. This is unexpected and possibly caused by uncertainties involved in the detection and penetration efficiency corrections for the particles in the first bin (NCA-sized). The efficiencies of NCA are very low and thus prone to high relative uncertainty. The EF of NCA from the study by Lintusaari et al. (2022), which in that case is higher than the EF of the next bin, was utilized because more sophisticated efficiency calculations were performed there, and it is thus considered to be more accurate. Particles in the first two size bins simulated with the PMCAMx-UF model (0.8-1.3 nm) originate only from NPF processes; such particles also cannot be measured using aerosol instrumentation.
The EFPSD, expressed in the unit of kg −1 fuel , was converted to correspond to the emission source input of the model, expressed in the unit of m −2 h −1 in the following way. The yearly CO 2 emissions from road transport in the EU were 7.9 × 10 11 kg in 2008 (European Environment Agency, 2021). It corresponds to the fuel combustion of 2.5 × 10 11 kg fuel a −1 , which was further corrected with the factor of the population count within the simulation domain and in the EU, resulting in the fuel combustion of 5.7 × 10 7 kg fuel h −1 . The EFPSD, determined for the year 2017, expressed as PM 2.5 is 0.31 g kg −1 fuel . However, due to tightened emission regulations, which have led to the introduction of vehicles emitting fewer soot particles (Diesel-Net, 2021), e.g., by equipping vehicles with a diesel particulate filter (DPF) (Wihersaari et al., 2020), the EF of PM 2.5 was higher in 2008 (EMEP, 2021). Decreasing BC and PM 2.5 concentrations in Mäkelänkatu have also been observed from the long-term measurements since 2015 ( Barreira et al., 2021;Luoma et al., 2021). The determined EF of PM 2.5 was thus estimated to correspond to the EF for the year 2008 using the yearly decrease rate of PM 2.5 , 7.1 % a −1 (Luoma et al., 2021), resulting in the EF of 0.87 g kg −1 fuel . That leads to the value of 4.9 × 10 7 g h −1 for the simulation domain. This value is the same for the hourly emission of PM 2.5 obtained from the PMF factor 6, which leads to the levels of EFPSD and the PSD from PMF matching with each other at D p of 57 nm. The yearly decrease rate of PM 2.5 (7.1 % a −1 ) was, however, reported as statistically not a significant trend (Luoma et al., 2021), and also it only covers the trend between the years 2015 and 2018. Thus, a trend was also estimated with the data from Kumpula, which fully cover the years between 2008 and 2017. Applying a seasonal Mann-Kendall test and Sen's slope estimator -as done by Luoma et al. (2021) -to the particle number concentration at 56 nm gives the yearly decrease rate of 4.4 % a −1 for the years between 2008 and 2017. Since this trend is for Kumpula, the trend for Mäkelänkatu could be around 7.1 % a −1 because the trends of other quantities for Mäkelänkatu were found to be approximately 2-fold the trends for Kumpula in the study by Luoma et al. (2021). Additionally, the PM 2.5 trend was calculated from the data of yearly (1990-2019) road transport emissions (without road, tire, and brake wear) in Finland, reported by EMEP (2021). The decreasing trend calculated for the years between 2008 and 2017 is 6.0 % a −1 , which corresponds relatively well to the trend applied here (7.1 % a −1 ).
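The conversion chain described at the beginning of this subsection can be summarized with the following back-of-the-envelope Python sketch; the CO2-to-fuel factor of about 3.15 kg CO2 per kg of fuel and the domain-to-EU population factor of about 2 are assumptions back-calculated to be consistent with the quoted numbers rather than values stated explicitly above.

```python
# Rough bookkeeping of the unit conversion using the numbers quoted in the text.
co2_road_eu_2008 = 7.9e11                       # kg CO2 per year, EU road transport
fuel_eu_2008 = co2_road_eu_2008 / 3.15          # ~2.5e11 kg fuel per year (assumed factor)
domain_to_eu_population = 2.0                   # assumed scaling implied by the quoted numbers
fuel_domain_per_hour = fuel_eu_2008 / 8760.0 * domain_to_eu_population
print(f"{fuel_domain_per_hour:.1e} kg fuel/h")  # ~5.7e7, matching the text

# Total PM2.5 emission rate of the domain from the 2008-equivalent emission factor.
ef_pm25_2008 = 0.87                             # g PM2.5 per kg fuel (as estimated above)
pm25_rate = ef_pm25_2008 * fuel_domain_per_hour
print(f"{pm25_rate:.1e} g/h")                   # ~4.9e7, matching the text
```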
Because fuel efficiency has improved over the years, CO2 emissions from road transport were at different levels in 2008 and in 2017. The method of determining EFs using CO2 concentrations gives the EFPSD with respect to kilograms of fuel combusted; therefore, it can be applied to any year. However, the total amount of fuel combusted in the computational grid per unit time has changed, leading to the need to scale the time-based particle emission rates - which are the form of the emission input of the model - upwards from the year 2017 to the year 2008. This scaling has, however, already been performed when the EFs were scaled using the trends of PM2.5, because ambient PM2.5 concentrations have decreased not only due to equipping vehicles with a DPF but also because the total amount of fuel combusted has decreased.
The shapes of the PSD from PMF factor 6 and the estimated EFPSD beyond 57 nm agree relatively well, suggesting that the soot mode was estimated quite accurately already in the original EUCAARI inventory. Because the PSDs of the soot modes lie at similar levels, the emitted particle mass was affected only marginally in the update. The soot mode is assumed to be already estimated well also because exhaust soot measurements have a much longer history than measurements of smaller particles. Additionally, the soot particle concentration is not as sensitive to driving and environmental parameters as the concentration of smaller particles. A value of 57 nm was selected as the upper limit for which the updating process was applied; i.e., no changes to the original inventory for D p > 57 nm were made.
Uncertainties involved in updating the emission inventory
Here we elaborate further on the uncertainties involved in representing the road-transport-related emissions Europe-wide with a single EFPSD determined from the measurements in Mäkelänkatu in 2017.
Estimating the level of the EFPSD for the year 2008 from the measurements performed in 2017 involves high uncertainty because the yearly decrease rate of PM2.5 used here (Luoma et al., 2021) was determined from measurements that did not begin until 2015 and carries its own uncertainty (including being a statistically non-significant result). Nevertheless, matching the level of the soot modes was the primary objective of this scaling; hence, the update of the inventory involves only updating the shape of the emitted PSD (below 57 nm) but not its overall level. Additionally, estimating the possible change in the shape of the PSD over the years was not possible. It is, nevertheless, expected that equipping vehicles with DPFs has decreased soot particle concentrations, but the concentrations of smaller particles may have decreased as well. That is because a DPF can also filter small particles - if they are emitted as primary particles - and because fuel sulfur content has been reduced (DieselNet, 2021), leading to fewer particles formed via sulfur-driven nucleation (Maricq et al., 2002; Kittelson et al., 2008). It should, however, be noted that while the particle emissions from diesel vehicles have decreased over the last few years, the gasoline vehicle fleet has begun to emit more particles due to the increased favoring of gasoline direct injection technologies (Awad et al., 2020). On the one hand, this increases the uncertainty in estimating the EFPSD for 2008 using the data from 2017; on the other hand, it provides a better estimate of the air quality affected by the modern vehicle fleet.
Vehicle fleets differ among countries, e.g., by fuel selection and by the ages of the vehicles. The average vehicle age in Finland is similar to the European average, while diesel vehicles are on average slightly less popular in Finland than in the rest of Europe (Eurostat, 2021). It should, however, be noted that averaging vehicle ages or fuel types over Europe is not the most representative approach in terms of average emissions or particle exposure, because there are countries with old vehicle fleets consisting mostly of diesel vehicles - a combination with plenty of soot emissions - but also countries with new vehicle fleets likewise dominated by diesel vehicles - a combination with the fewest particle emissions. In addition, there are countries with other possible mixtures of fleet ages and fuel types as well.
Particle emissions depend on driving parameters, such as engine load (Rönkkö et al., 2006). Therefore, particles emitted on an urban street, such as Mäkelänkatu, do not fully represent the particles emitted on other road types, such as motorways, where higher engine loads are used. However, there are signal-controlled intersections on Mäkelänkatu, near the measurement site, which also provide data for emissions with higher engine loads during accelerations.
Particle emissions also depend on environmental parameters, such as temperature (Mathis et al., 2004; Olin et al., 2019) and radiation. Therefore, particle emissions can differ between nighttime and daytime. Here, we aim for a first-level approximation of the PSDs of the emissions using a single EFPSD - the most representative average covering the whole vehicle fleet, driving parameters, and environmental parameters in May. Despite this simplification, it is a useful first step in determining the importance of these particles. To our knowledge, in addition to the Mäkelänkatu site, no other location with simultaneous CO2 and PSD measurements down to ∼1 nm is available.
Number-based EFs of especially sub-30 nm particles could be quite different if the emissions were determined from measurements performed at a different location, on a different road-type, and at a different time. In contrast, EFs of particles larger than 30 nm -mainly soot -would possibly differ much less with differing location or time. Nevertheless, the approach in this study still represents the most realistic approximation currently available, and it improves the representation of the road traffic emissions in the original inventory, which excluded all sub-10 nm particles. Emissions of sub-10 nm particles have also been applied in the study by Paasonen et al. (2016), who included a size bin for 3-10 nm particles, based on the TRANSPHORM database (Vouitsis et al., 2013). However, they did not include any modes smaller than 10 nm; thus, this size bin was only an extension from PSDs with larger modes. Kontkanen et al. (2020) compared annual size-binned particle emissions between their estimations from ambient data measured in urban Beijing and the model by Paasonen et al. (2016). They observed that the ambient data suggest significantly more particles in the sub-60 nm size range. This is due to the fact that the ambient data represent emissions from a more localized -traffic-influenced -area but also because the smallest particles are omitted from the traffic emissions in the TRANSPHORM database.
Parameters of the emission factor particle size distribution utilized in updating the emission inventory
To utilize the determined EFPSD within PMCAMx-UF, it was transformed to the model size bins through a continuous fit (Fig. 1). A trimodal fit consisting of a power law distribution and two log-normal distributions (see the Supplement for the detailed equation) is used because there seem to be features of two log-normal distributions - as is typical in vehicle exhaust - but the smallest particles cannot be fitted very well to any log-normal distribution. A power law distribution fits moderately well and is suggested by the theory of simultaneous nucleation and growth processes (Olin et al., 2016). The parameters of the fit are presented in Table 1. It is interesting that trimodal size distributions of non-volatile particles - with quite similar particle sizes to the ones found in this study - were also detected in diesel exhaust by Kuuluvainen et al. (2020). They conclude that the mode in the middle originates from lubricating oil, whereas it is associated here with nucleation-originated particles.
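As an illustration of the functional form (not the authors' fitting code; the exact power-law expression is given in the Supplement and the fitted parameters in Table 1, neither of which is reproduced here), a trimodal EFPSD of this kind could be composed as follows, with all parameter values below being placeholders:

```python
import numpy as np

def lognormal_mode(dp_nm, n_tot, cmd_nm, gsd):
    """Number size distribution dN/dlogDp of a single log-normal mode."""
    return (n_tot / (np.sqrt(2 * np.pi) * np.log10(gsd))
            * np.exp(-0.5 * (np.log10(dp_nm / cmd_nm) / np.log10(gsd)) ** 2))

def power_law_mode(dp_nm, coeff, exponent, dp_max_nm):
    """Generic power-law mode for the smallest particles, cut off above dp_max_nm."""
    return np.where(dp_nm <= dp_max_nm, coeff * dp_nm ** exponent, 0.0)

def trimodal_efpsd(dp_nm, pl_params, nuc_params, soot_params):
    """Trimodal fit: power law mode + nucleation mode + soot mode."""
    return (power_law_mode(dp_nm, *pl_params)
            + lognormal_mode(dp_nm, *nuc_params)
            + lognormal_mode(dp_nm, *soot_params))

# Illustrative (not fitted) parameters; CMDs of 13 and 59 nm as quoted later in the text.
dp = np.logspace(0.0, np.log10(800.0), 200)     # 1-800 nm
efpsd = trimodal_efpsd(dp, (1e15, -2.0, 10.0), (3e14, 13.0, 1.7), (1e14, 59.0, 1.8))
```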
The contribution of the road-transport-related particle number emissions (from the PMF factor 6, which is presumably related only to diesel vehicles) to the total emissions from all emission sources was on average 8 % in the original inventory. In updating the inventory, these road-transport-related particle number emissions were increased to a 26-fold level, resulting in an increase in the total number emissions to a 3-fold level. Hence, in the updated inventory, the contribution of these road-transport-related particle number emissions (from diesel vehicles) to the total emissions becomes 69 %. Due to the lack of all sub-10 nm particle emissions in the original EUCAARI inventory, sub-10 nm particle emissions in the updated one come exclusively from road transport. Considering only the number concentrations of ultrafine particles (UFPs; sub-100 nm particles), the road-transport-related emissions were increased to a 28-fold level. This resulted in the total UFP number emissions increasing by a factor of 3.1.
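A quick consistency check of these figures (our own arithmetic, assuming the non-road emissions were left unchanged by the update):

```python
# Consistency check of the quoted contributions (assuming non-road emissions unchanged).
f_road_orig = 0.08      # road-transport share of total number emissions, original inventory
road_scaling = 26       # factor applied to road-transport number emissions in the update

total_scaling = f_road_orig * road_scaling + (1 - f_road_orig)   # ~3.0, the quoted 3-fold increase
f_road_upd = f_road_orig * road_scaling / total_scaling          # ~0.69, the quoted 69 %
print(round(total_scaling, 2), round(f_road_upd, 2))
```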
Vehicle-emitted particles originate primarily via three routes: in-cylinder processes (soot mode, ash particles, non-volatile core), nucleation after the exhaust pipe (nucleation mode), and a less-known source of NCA (power law mode). Therefore, a trimodal fit is well suited to separating particle composition between the three sources. However, it should be noted that vehicle exhaust particle formation is a complex process, and this approach is only an approximation. Studies such as Kuuluvainen et al. (2020) and Alanen et al. (2020) divide the non-volatile PSDs of internal combustion engine emissions into three categories, based on PSDs and particle morphology studies, and the nucleation mode observed in vehicle exhaust does not always require an H2SO4-driven formation process.
To add particles to the original road-transport-related PSD, a selection for their chemical composition was needed. Because measuring the chemical composition of sub-50 nm particles is challenging, this study relies on CFD simulations of particle composition 10 m behind a diesel-fueled bus by Olin (2013). They represent a situation where a Euro III bus is driving at a speed of 40 km h^-1 with an engine power of 40 % of the maximum (see the Supplement for a more detailed description). The CFD simulations give mass fractions of BC, SO4, POA, and H2O for the nucleation and soot modes. The road transport emissions in the original EUCAARI inventory consist solely of BC, SO4, POA, and crustal material. Thus, the CFD-simulated mass fractions can be directly utilized in the inventory, with the exception of H2O, which is not included in the emissions due to an equilibrium-type behavior of H2O dynamics in the model. The chemical composition for the power law mode is determined by, firstly, assuming a fraction of 16 % of non-volatile particles (the non-volatile fraction of NCA; Lintusaari et al., 2022) and, secondly, assuming the nucleation mode composition for the remaining volatile part. The non-volatile part is here lumped together with BC due to the lack of more specific information on its composition and because adding an extra component would have required several modifications to the model code. BC together with the unknown non-volatile part is abbreviated here as BC*. Figure 2 presents the particle chemical composition of the traffic-emitted particles as a function of Dp in the original and in the updated inventory. The composition between 10 and 57 nm is modified to contain more POA and less BC because nucleation mode particles - consisting mainly of POA - were considerably added. Nucleation-mode-sized particles also had a relatively low SO4 concentration in the original inventory, but more SO4 is included in the updated inventory. No particles below 10 nm were included in the original inventory. Importantly, the inventory does not include metallic ash particles, which have been reported to contribute to particle emissions especially in the ultrafine particle size range.
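The composition rule for the power law mode can be expressed compactly. The sketch below is ours; the actual nucleation-mode mass fractions come from the CFD simulations and are not quoted in the text, so the numbers used here are placeholders only:

```python
def power_law_mode_composition(nuc_mode_fractions, nonvolatile_fraction=0.16):
    """Compose the power-law (NCA) mode: a non-volatile part lumped into BC* plus
    a volatile part inheriting the nucleation-mode composition (H2O excluded)."""
    comp = {spec: (1.0 - nonvolatile_fraction) * frac
            for spec, frac in nuc_mode_fractions.items()}
    comp["BC*"] = comp.get("BC*", 0.0) + nonvolatile_fraction
    return comp

# Placeholder nucleation-mode mass fractions (the real values come from the CFD study):
nuc_mode = {"POA": 0.85, "SO4": 0.15, "BC*": 0.0}
print(power_law_mode_composition(nuc_mode))   # fractions sum to 1.0
```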
The selection of the CFD simulations of a diesel-fueled bus for determining the chemical composition of particles was further supported by examining other related studies. Kostenidou et al. (2021) reported comparable mass fractions for tailpipe emissions of gasoline- and diesel-fueled light-duty vehicles (Table 1). However, it should be noted that in the mentioned studies, SO4 and POA were measured using aerosol mass spectrometers, which do not efficiently detect particles smaller than ∼50 nm. Therefore, the composition of the nucleation mode, and especially of the power law mode, is barely covered in the measured compositions, and studies related to these compositions are very scarce. According to the formation principle of nucleation mode particles, they do not contain BC; thus, POA dominates the mass fractions of the nucleation mode (Table 1), as it dominates the mass fractions of the volatile (SO4 and POA) part of the soot mode. Hao et al. (2019) collected PM2.5 particle samples on filters from a highway tunnel in China and reported BC, SO4, and POA mass fractions of 0.12, 0.09, and 0.34, respectively. These values lie in the range between the mass fractions of the nucleation and soot modes from the CFD simulations. In conclusion, due to the scarcity of studies on the chemical composition of vehicle-emitted particles, and because the CFD-simulated mass fractions (of a diesel bus only) are reasonable in light of the other studies (which include tailpipe emissions of both gasoline- and diesel-fueled light-duty vehicles and emissions from a real traffic mixture in a road tunnel), the CFD-simulated values were used here to cover the whole vehicle fleet. In addition, this study primarily focuses on updating the shape of the PSD rather than on the exact chemical composition of the emitted particles, which nevertheless had to be estimated in order to run the model with the updated inventory.
Comparing simulated particle number concentrations with observations
Particle number concentrations from the PMCAMx-UF simulations were first compared to the ones observed at the measurement stations. Figure 3 presents hourly means of number concentrations of particles smaller than 10 nm (N<10) and larger than 10 nm (N>10) with the original (orig) and updated (upd) emission inventories. The data of N<10 are shown only for the stations that had reliable PSD measurements in the sub-10 nm size range. The lower Dp limit in N<10 and the upper Dp limit in N>10 for the simulated and the observed values depend on the corresponding limits of the PSD measurements and vary slightly between the stations. Overestimations in simulated concentrations of particles between 10 and 50 nm and slight underestimations for particles larger than 100 nm (N>100) were also found in previous studies (Baranizadeh et al., 2016; Julin et al., 2018) with the PMCAMx-UF model, possibly due to missing condensable vapors and particle growth mechanisms (Baranizadeh et al., 2016). Even higher overestimations, but also underestimations, are seen in N<10 (Fig. 3a); however, the most notable underestimations are now overcome when using the updated emission inventory (Fig. 3c). The highest overestimations in N<10 still exist, especially for rural locations. In the case of N>10, no notable differences can be seen between the original (Fig. 3b) and updated emission inventories (Fig. 3d), except slightly increased - but still underestimated - concentrations at the lowest end of the simulated concentrations. The agreement and the correlation with the hourly observations and the scatter for N<10, N>10, and N>100 are also presented in Table 2 in terms of normalized mean bias (NMB), correlation coefficient (R), and normalized mean error (NME), respectively. Whereas the values remain nearly constant for N>100 after updating the inventory, NMB values for N>10 are further increased. The most significant differences after updating the inventory are observed for the logarithms of N<10, for which NMB increases from +12 % to +53 %. Overestimations of the concentrations of the smallest, roughly sub-50 nm particles - becoming even more substantial after updating the inventory - highlight the possibility of overestimated NPF rates. On the other hand, overestimation of the simulated N<10 can also be perceived as underestimation of the observed N<10 due to the inaccuracy (typically underestimating; Kangasluoma et al., 2020) of PSD measurements in the sub-10 nm size range. It should be noted that there are observations (particularly from Hyytiälä and Vavihill) of very low hourly averages of N<10 (below 1 cm^-3) that may not come from very reliable data due to low counting statistics and thus play a major role in the disagreement. In contrast to the agreement, improvements for N<10 (logarithms) after updating the inventory can be seen in the correlation and in the scatter: R increases from +0.37 to +0.54, and NME decreases from 64 % to 58 %, also seen in Fig. 3a and c as overcoming the most notable underestimations with the updated inventory. In the case of urban locations, even better improvements are seen, e.g., NME decreasing from 42 % to 16 % for Kumpula.

Table 2. Normalized mean bias (NMB), correlation coefficient (R), and normalized mean error (NME) of the simulated particle number concentrations compared to the observed ones. The values in parentheses denote the values with the original emission inventory. The top values are calculated from the ordinary concentrations and the bottom values from the logarithms of the concentrations; the bold values highlight the most notable differences between the inventories (the best-performing value in bold).
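For reference, the sketch below shows the standard definitions we assume these metrics follow; the function names are ours, and the actual evaluation scripts of the study may differ:

```python
import numpy as np

def nmb(sim, obs):
    """Normalized mean bias: sum(sim - obs) / sum(obs)."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    return (sim - obs).sum() / obs.sum()

def nme(sim, obs):
    """Normalized mean error: sum(|sim - obs|) / sum(obs)."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    return np.abs(sim - obs).sum() / obs.sum()

def pearson_r(sim, obs):
    """Correlation coefficient between simulated and observed concentrations."""
    return float(np.corrcoef(np.asarray(sim, float), np.asarray(obs, float))[0, 1])

# Metrics are reported both for the concentrations themselves and for their logarithms,
# e.g., nmb(np.log10(sim), np.log10(obs)).
```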
3.3.2 Effect of updating emission inventory on relative particle concentrations

Figure 4 presents how much the concentrations of 1.3-3 nm (N_NCA), 7-20 nm (N_7-20), and all particles (N_tot) change after updating the inventory. The concentrations remain nearly unchanged, especially N_tot, but the distributions are also stretched out in both directions, toward decreased and toward increased concentrations. However, all the histograms are slightly displaced from the ratio of 1 so that increased concentrations are more common. There are also notable extremes in the concentration ratios, especially for NCA (min: 0.0003; max: 4225), denoting that N_NCA was decreased or increased by factors of up to several thousand in certain locations on certain days. Although updating the inventory increases emissions for all particle sizes, it also leads to decreased concentrations at certain times in certain areas having a high NPF rate. This results from the increased primary particle emissions increasing the condensation and coagulation sinks, which can reduce nucleating gaseous precursors and newly formed particles, respectively, and thus lead to fewer small particles. Due to the complex relationship between the increases in the sinks and the appearance of small particles, updating the emission inventory can change the particle concentrations in both directions. It is clear that the decreased concentrations are related to the connection between NPF and emissions because simulating with the NPF processes switched off results in a situation in which updating the inventory only increases the concentrations. Figure 5 presents the ratios of the concentration change as maps. In contrast to the histograms in Fig. 4, the ratios on the maps are calculated from the monthly mean values, representing the total aerosol exposure of people living in certain areas. The most extreme ratios do not appear when examining monthly means, but there are still sporadic areas in which concentrations were decreased or increased by a factor of ∼2 (not shown on the maps). The monthly mean concentrations, especially of N_7-20, were increased by tens of percent in densely populated areas, especially in western Europe, but there are also areas with ratios far below or above 1 over marine areas, such as over the Mediterranean Sea.
The ratios of the concentration change calculated from the monthly means are also presented as mean and median values in Table S1. The values for N<10, N<23 (totally unregulated vehicle-emitted particles, Dp < 23 nm), and N<100 (UFPs, Dp < 100 nm) are also shown. Additionally, the values are presented as population-density-weighted values using the gridded population count data for 2010 from CIESIN (2018). Updating the emission inventory increased the total particle count in Europe for the whole month by only 1 %. However, the increase is 2 % when using the population density weighting. That can be interpreted as meaning that the total human exposure to particle number is estimated as being 2 % higher when using the updated inventory compared to the original one. Moreover, the increase is 11 % if only NCA-sized particles are considered. The highest differences are observed when considering particles between 7 and 20 nm, for which the population density weighting gives a mean increase of 10 % and a median increase of 4 %. The latter value can be interpreted as meaning that half of the people within this European domain are, on average, at least 4 % more exposed to N_7-20 compared to what would have been estimated using the original inventory.
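A population-density-weighted mean or median of the concentration-change ratios, as used here, can be computed along these lines (our sketch; grid handling and the CIESIN data interface are omitted):

```python
import numpy as np

def population_weighted_stats(ratio_grid, population_grid):
    """Population-density-weighted mean and median of per-grid-cell concentration
    ratios (updated/original), mimicking the weighting described in the text."""
    ratio = np.asarray(ratio_grid, float).ravel()
    pop = np.asarray(population_grid, float).ravel()
    mean = np.average(ratio, weights=pop)
    order = np.argsort(ratio)
    cum = np.cumsum(pop[order])
    median = ratio[order][np.searchsorted(cum, 0.5 * cum[-1])]
    return mean, median
```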
Comparing simulated particle size distributions with observations
The results so far have displayed that the particle concentrations were slightly increased after updating the inventory when the concentrations are averaged over long times and wide areas. The effect of updating the inventory is examined locally and more temporally next, first by comparing PSDs simulated with the original and with the updated inventory together with the observations. Figure 6 presents monthly means of PSDs at selected measurement stations separately for mornings (05:00-09:00 LT) and daytime (10:00-14:00 LT). Daytime typically experiences the highest NPF rates due to the solar radiation cycle but also high traffic densities. Mornings, instead, have typically even more traffic but not yet solar-radiation-ignited NPF. PSDs in the daytime do not differ notably between the original and the updated inventories, with the exception of slightly higher concentrations with the updated inventory in Melpitz and Kumpula for ∼ 5-30 nm particles. Agreement of the daytime PSDs with the observations is fairly good for particles larger than 10 nm, but the overestimation of the simulated particles (or underestimation of the measured particles) smaller than 10 nm can be seen. Melpitz and Kumpula are again different, having higher observed concentrations than the simulated ones. These are locations affected by road traffic, especially Kumpula, and the results hence indicate that traffic emissions may still be underestimated even with the updated inventory. However, it should be noted that the grid cell including the Kumpula station consists of not only urban areas but rural and marine areas too. Therefore, the average concentrations within the grid cell are, indeed, expected to be lower than the concentrations within urban areas only. Additionally, there are busy harbor areas and a busy airport within a radius of 15 km from the Kumpula station. It is certainly possible that, in addition to road transport, other activities, such as aviation and shipping, can also involve underestimated particle emissions. Hence, other anthropogenic particle emission sources may also need to be addressed better in emission inventories in order to have the simulated PSDs agree with the measured ones.
In the case of the morning PSDs, differences between the emission inventories are more notable. The updated inventory predicts levels of sub-30 nm particles up to 3 orders of magnitude higher in areas affected by road traffic (Ispra, Melpitz, and Kumpula) than the original inventory. The use of the original inventory fails to predict PSDs for sub-30 nm particles for the mornings. The updated inventory, instead, gives fairly good agreements for the PSDs when the possible underestimation of PSD measurements for sub-10 nm particles is taken into consideration. People exposed to outdoor air in the mornings in urban areas are exposed to sub-30 nm particles remarkably more than would have been predicted using the original inventory. Furthermore, the differences could be even higher within the urban centers, but the used coarse grid resolution cannot capture the effect at more localized scales.
Change in particle composition after updating the emission inventory
Sub-30 nm particles may carry potential health risks because they lie in the range of the highest lung deposition efficiencies (> 30 % for 6-50 nm particles; ICRP, 1994) and can thus end up in the human body, even in the brain via the olfactory nerve (Maher et al., 2016). Therefore, they are of high importance, especially in urban areas and if their origin is traffic, because emissions from fossil fuel combustion include harmful substances. Simulated particle composition is examined in Fig. 7 as the instantaneous composition in Melpitz on 24 May 2008 at 09:00-10:00 LT. This location and time were selected to demonstrate how particle composition changes due to updating the inventory, while the PSD and particle concentration do not significantly change (N_tot,upd / N_tot,orig = 0.93; N_<10,upd / N_<10,orig = 0.71). The reason particle concentrations even decrease after updating is the increased condensation and coagulation sinks, as discussed before. In this case, the total NPF rate was lowered to one-third of the rate simulated using the original inventory. However, the sinks are actually ∼4 % lower with the updated inventory during the time range presented in Fig. 7. Instead, the sinks just before the time range were ∼6 % higher, and even ∼10 % higher in an adjacent grid cell. The effect of increased sinks with the updated inventory on the appearance of small particles is not always observed within a single time step or grid cell but within later time steps or nearby grid cells instead, due to a history effect and the transport of components between the grid cells.
The composition of sub-30 nm particles was changed so that particularly the mass fractions of POA (and slightly BC/BC*) were increased at the expense of the other components, while the composition of particles larger than 30 nm did not substantially change (Fig. 7a, b). The reason why particularly POA and BC/BC* were increased is because POA and BC* were selected (Sect. 3.2.4) as the main components of the particle emissions of road traffic through CFD simulations instead of direct particle composition measurements. Therefore, BC* can also comprise other non-volatile components, such as metals, in this context. By examining the change in PSD in Fig. 7c, the effect of updating the inventory seems only minor. Nevertheless, by examining the mass size distributions of certain components in Fig. 7d, it can be seen that POA and BC/BC* masses for sub-10 nm particles were increased significantly from near-zero levels even though N <10 was decreased. In conclusion, whereas the effect of updating the inventory on PSDs is minor in some locations, masses of potentially harmful components in small -efficiently lung-depositing -particles can still be substantially increased and potentially pose elevated health risks.
3.3.5 Comparing the effects of emissions and atmospheric new particle formation on particle size distributions

The effects of primary emissions of particles and atmospheric NPF are examined in Fig. 8, presenting the monthly means of PSDs in Melpitz in the mornings and in Hyytiälä in the daytime. In the mornings in Melpitz, NPF plays only a minor role in the PSDs if the updated inventory is used. This is unambiguous because the location and time range have high traffic densities but not yet much atmospheric NPF. The original inventory, instead, predicts up to 3 orders of magnitude fewer 2-20 nm particles. In the case of Hyytiälä, the effects are the opposite. Even the updated inventory does not sufficiently predict the observed aerosol levels (about an order of magnitude too low) when the NPF processes were switched off. Conversely, even the original inventory is sufficient to predict the observed levels, and no notable differences are seen between the inventories when the NPF processes were kept on. This was expected, as Hyytiälä is a rural location not greatly affected by road traffic, and the daytime is typically associated with atmospheric NPF. Examining the effects of NPF and emissions within the full European domain shows that the major source of the total particle number is NPF: monthly means of N_tot were, on average, decreased by 91 % when the NPF processes were switched off. Without NPF processes, average particle number concentrations increased by only 38 % after updating the inventory, even though the total particle number emissions increased to a 3-fold level; this is due to non-linearities in the model, e.g., coagulation. With the NPF processes, the average particle number increase was only 1 %, which is one-third of what would be expected from the increase in the emissions if adding particles did not have a lowering effect on NPF rates.
Summary and conclusions
Road-transport-related particle number emission factors were determined from measurements performed at the curbside of an urban street canyon in Helsinki, Finland. The emission factors were determined separately for every measured particle size bin (1.2-800 nm) and were presented as an emission factor particle size distribution (EFPSD). Deriving an EFPSD from bin-by-bin calculation of emission factors was found to be an acceptable method based on the agreement with the reported difference between the PSDs measured with wind blowing from the road and from the background direction.
A separate nucleation mode (CMD = 13 nm) and soot mode (CMD = 59 nm) are seen in the derived EFPSD, but also a considerable number of particles exist in the sub-10 nm size range. Notably fewer sub-50 nm particles and no sub-10 nm particles are included in a road-transport-related PSD of the EUCAARI emission inventory, used in several previous studies. This is due to challenges involved in determining emission factors reliably for nucleation mode or smaller particles. In this study, the road-transport-related particle emissions of the original EUCAARI inventory were updated using the EFPSD derived here, assuming that it represents the average PSD of the particle emissions from the whole vehicle fleet in Europe.
The PMCAMx-UF model was utilized in simulating aerosol levels for May 2008 over the European domain. The simulations were performed using both the original and the updated emission inventory in order to discover the effect of including the previously partly excluded emissions of sub-50 nm particles. The model overestimates the concentrations of sub-50 nm particles, regardless of the used inventory. Especially sub-10 nm particles are overestimated, and the overestimation became even higher when using the updated inventory. The reason for the overestimations may be related to overestimated new particle formation (NPF) or underestimated particle growth but also to possibly underestimated particle concentrations from the PSD measurements, which are known to become inaccurate for particle sizes below ∼ 10 nm. At least the overestimations of sub-10 nm particles using the updated inventory are not caused by overestimating their emissions because the overestimations were observed also using the original inventory, in which all sub-10 nm particle emissions were excluded. Nevertheless, the greatest underestimations of the model for sub-10 nm particles were overcome, and the correlation between the simulated and the observed concentrations was increased when the updated emission inventory was used.
Ratios of simulated particle concentrations after and before updating the inventory were examined from daily and monthly means of local concentrations. Ratios both above and below 1 were observed, while the mean and median values were slightly above 1: the predicted concentrations were increased or decreased by a factor of up to several thousand, depending on the examined particle size range, in certain locations and at certain times after updating the inventory. Although particle emissions were only increased in updating the inventory, this also resulted in decreased concentrations due to increased condensation and coagulation sinks, leading to fewer small particles. Examining the ratios from the monthly mean concentrations revealed that, although the total anthropogenic particle number emissions were increased to a 3-fold level, the total particle count in Europe for the whole month was increased by only 1 % and the total human exposure to particle number by 2 %. The highest mean ratios were observed when considering only 1.3-3 nm particles (11 % increase) and the highest human exposures when considering only 7-20 nm particles (10 % mean increase and 4 % median increase). The highest increases were observed in densely populated areas, especially in western Europe.
The updated inventory predicts sub-30 nm particle concentrations up to 3 orders of magnitude higher during the mornings than the original one in traffic-influenced locations. In those urban locations, simulated PSDs also agree notably better with the observed PSDs.
Because sub-30 nm particles deposit efficiently in the human respiratory system, they pose a significant health risk, especially if their origin is combustion processes emitting harmful substances. Even in cases in which the simulated particle number concentrations did not change markedly, particulate mass of potentially harmful components can increase substantially in the sub-10 nm size range. This results from the substitution of NPF with traffic as the main origin of those particles.
In conclusion, it is important to consider the emissions of sub-50 nm particles from traffic in more detail in chemical transport models because the previous underestimations (with the original EUCAARI inventory) of particles are located mainly in populated areas and are the greatest for the most efficiently lung-depositing particle sizes. Additionally, the underestimations are especially for particle components having possibly harmful effects on human health. Further investigations of traffic-emitted particles are needed at more local scales than with the coarse grid resolution used in this study. The used model can be operated with a grid resolution of down to, for example, 1 km 2 , provided that an emission inventory for that resolution is available. In addition to road transport, other anthropogenic emission sources, such as aviation and shipping activities, may need to be addressed better in emission inventories because they may involve underestimated particle emissions as well. Furthermore, estimating long-term particle exposure requires the simulations to also be done for seasons with less photochemical activity, in which the role of traffic emissions may be even more highlighted. The results of this study denote only a lower limit of the contribution of traffic to local aerosol levels due to the coarse grid resolution and due to the selection of the simulation period during which the NPF processes are dominating the particle formation.
Author contributions. MDM, IR, MO, SNP, TR, and JVN designed the research. HK performed the measurements. HK and MO analyzed the measurement data. MO and DP updated the emission inventory. MO ran the simulations. MO, IR, and MDM analyzed the simulation data. MO prepared the paper with contributions from all co-authors.
Competing interests. The contact author has declared that neither they nor their co-authors have any competing interests.
Disclaimer. Publisher's note: Copernicus Publications remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Financial support. This study has been funded by the Finnish Cultural Foundation; by the Academy of Finland through the ACCC Flagship (grant no. 337551); and by Tekes (grant no. 2883/31/2015), HSY, and Pegasor Oy through the Cityzer project.
Review statement. This paper was edited by Maria Kanakidou and reviewed by two anonymous referees.
Considering Structural, Individual and Social Network Explanations for Ecologically Sustainable Agriculture: an Example Drawn from Washington State Wheat Growers
As acceptance of the concept of agricultural sustainability has grown, it has become increasingly recognized that notions of sustainability and how to promote it will necessarily vary depending on the commodity in question. It thus becomes important to investigate how movements towards sustainability are emerging for different commodities. The objective of our paper is to present the results of an analysis of Washington wheat producers that investigates the degree to which interest in sustainability exists amongst those farmers and whether structural factors and farmer personal characteristics are more or less significant than social network factors in explaining farmers' views of possible sustainable methods. Our findings indicate that a measure indicating use of local social networks to gain information is associated with a higher degree of interest in new production methods aimed at improving agricultural sustainability.
Introduction
Discussions about how to promote ecologically sustainable agricultural production systems have moved from the fringe to the mainstream in the United States and around the world. Many observers focus on defining "sustainable," a challenge made more difficult by the realization that a viable definition must take account of the diversity of situations that exist across agriculture, as well as between agriculture and other sectors of the political economy. That diversity is linked to differences in how particular commodities are produced, the geographies in which they are produced, the particular political-economic structures associated with a given commodity, and the social systems in which the farmers are producing. A first step in developing such a nuanced conceptualization of sustainability in agriculture is to examine the sustainability challenges facing each agricultural sector.
We seek to contribute to the development of a more nuanced conceptualization of agricultural sustainability by examining some of the specific challenges facing wheat farmers in Washington State.Wheat production in Eastern Washington is characterized, as is the case with many other grain crops, by the persistence of family ownership of most of the means of production.This means that analyzing the influences on farmer decision-making is important for understanding how sustainable practices may or may not be advanced in this sector.
Other important characteristics of Eastern Washington wheat farming include: (1) some of the main weed problems are from plants that are, like wheat, grasses; (2) the main producing region is in a hilly region that lends itself to erosion; and (3) a high degree of dependence on export markets.The development of technologies and practices that enable farmers to manage these challenges while maintaining a competitive position in overseas markets is frequently a focal point of conversation when wheat producers gather to discuss the management challenges they face and their potential solutions.
This paper is based on the assumption that the determination of what practices help make agriculture more sustainable must take into account the conditions that are unique to the production of a particular commodity, and that an understanding of those conditions exists in local knowledge [1].At the same time, we also recognize that interest in sustainability and degree of connection to the networks where local knowledge is stored is never uniform.Thus, the objective of this paper is to further improve our understanding of the role of farmers in promoting a more sustainable direction for agriculture by investigating the extent to which Washington State wheat farmers are interested in a more ecologically sustainable alternative vision of agriculture, and to isolate the factors that are significant in identifying those farmers who would be most likely to pursue such alternative strategies in that particular agricultural sector.In particular, we assess whether structural factors and farmer personal characteristics are more or less important than social network factors in explaining farmers' views of the desirability of a possible alternative vision to the conventional model of agricultural production.
Theoretical Perspectives
Sociological research on the development of agricultural production systems, and how these systems interact with processes of social change in rural communities, provides a useful starting point for developing questions about social processes associated with efforts to create more sustainable agricultural production systems.A longstanding tradition within this body of work has focused on isolating the types of farmers who are most willing to adopt new agricultural management practices and technologies, and how this process contributes to increased efficiency on the farm and improved food supplies for society at large [2].The empirical evidence generated from such studies generally has found that earlier adopters of new technologies were more likely to have higher educational levels and socioeconomic status [3], which led to the view of such early adopters as "progressive" farmers.
A second tradition utilizes political economy approaches to analyze how structural conditions can restrict agricultural change and development, with corresponding negative social, economic and environmental consequences [3,4].Such research argues that the causes of economic failure in agriculture are not simply a lack of expertise or desire to become "modern" on the part of individual farmers, but also a product of the history and particular institutional arrangements that arise in contemporary capitalist society [3,5].Consequently, a research stream has emerged that evaluates how farmers are structurally embedded within the context of an industrially oriented agrifood system [6][7][8][9].A common thread connecting this tradition with research on which types of farmers are "modernizers" is an assumption that the impetus for modern technological change in agriculture originates off-farm.
However, in more recent years, interest has been growing in examining how individual, network and structural factors may or may not play a role in promoting the development of more sustainable alternative agricultural development visions and strategies [10,11].These studies are not necessarily conceptualizing farmers as completely independent actors, but they do recognize that farmers often look for and engage in strategies to improve their farm management practices while simultaneously coping with structural conditions.This approach is of particular interest to scholars who are trying to understand the processes associated with the utilization of alternative agricultural practices [12][13][14].Such a tradition assumes that farmers adopting alternative practices must have a strong motivation to do so, but also need the support of social networks and institutions.
Within this literature on the adoption of alternative agricultural practices, scholars are increasingly emphasizing the importance of farmer networks in promoting transitions to organic and sustainable agriculture [15][16][17][18].Such studies often emphasize that the land-grant university system has traditionally favored conventional agriculture, making it necessary for farmers interested in organic and more sustainable agriculture to conduct their own research and to share the knowledge they generate through interactions with other farmers [15,18].Research also indicates that farmers recognize that there may be local social, economic, and ecological conditions that can best be addressed through conversations with others who are dealing with the same conditions [15][16][17][18].Morgan and Murdoch [16] go so far as to assert that farmers in the midst of a transition to more sustainable agricultural management must forget how to farm conventionally and relearn how to farm ecologically, and that locally organized farmer networks are crucial for promoting this learning process.
In other words, research in the Sociology of Agriculture is becoming more nuanced in what is being studied and in the conceptualization of the social processes associated with changes in agricultural development towards a system that more directly incorporates practices that are thought to be more sustainable.Not only is the definition of what a "modern" agriculture should look like being contested, but a greater sophistication is emerging in conceptualizations of how the complementarity between individual roles and social processes influences changes taking place in agriculture.Specifically, an increasing body of research is acknowledging that the notion of what constitutes sustainable agricultural development must expand to incorporate a systemic vision of agriculture that is economically, culturally, politically and environmentally more viable and equitable.In addition, agricultural sociologists are recognizing that further agricultural development will not be achieved without incorporating a conceptualization of how individuals and groups become active agents for change while simultaneously coping with political-economic realities [19].
On the surface, the language of research on the adoption of alternative practices suggests a strong connection to research on early adopters of modern technologies.A striking example of this is Warner's [20] work on the emergence of local social networks devoted to promoting agroecological farming.Warner's description of "leading growers" sounds reminiscent of "progressive farmers."However, the picture that Warner presents of farmers, which is similar to the analyses of farmer characteristics in studies that have examined the adoption of "biologically integrated farming practices" [21] and of no-till agriculture [22], is of farmers who work with other actors in a network to respond to environmental, political and economic challenges.This is not a theoretical image of farmers who are making an individual choice to adopt modern innovations that were developed off-farm, but one of farmers who link with other farmers and non-farmers to actively engage in innovative processes that, at least in part, respond to historical, structural and environmental constraints.
This theoretical shift in the depiction of the role of farmers within the agrifood system is evident in the types of problems investigated and the selection of dependent and independent variables of interest.Rather than identifying and analyzing those farmers who are adopting those innovations that are deemed to be most efficacious at promoting the development of a "modern," industrialized agrifood system, the newer focus is to investigate the development of alternative strategies of agrifood development, such as organic production systems [12,[23][24][25], on-farm environmental practices [21,22,26], post-fordist strategies [27,28] and new international regulatory regimes [29].The change or outcome under investigation becomes not the adoption of a modern practice, but rather the possibility for the creation of alternative agricultural production strategies that include active farmer involvement and that might lead to a revised vision of modernity that is equitable and sustainable.
This change in emphasis is also reflected in the breadth of variables selected to analyze which farmers are more likely to pursue these alternative strategies.In classic adoption studies, the main explanatory interest is in measures that might be thought of as indicators of human and financial capital, in particular the educational, income, and farm size attributes of farmers.Contemporary research on farmers as actors in networks remains interested in these indicators, but adds membership in social networks [20,22], as well as variables that reflect farmers' exposure to alternative views of how to change agrifood systems [30] to the explanatory framework.
By choosing an expanded set of variables, the theoretical debate surrounding the role of farmers in agrifood systems shifts from a comparison of traditional and modern farmers, to a discussion of the degree to which farmers are interested in, and capable of, creating alternatives to the conventional agrifood system.Of course, this general question is complicated by the great diversity in goals and approaches, not only across, but also even within types of alterative strategies, as well as across different commodity systems.Raynolds et al.'s [29] discussion of the variety that exists amongst fairtrade approaches is an example of this complexity.
The purpose of our analysis is to contribute to this theoretical dialogue concerning the interplay between farmers' actions and the structural conditions they operate within by asking whether and how farmers are developing an interest in engaging in alternative agricultural practices in Eastern Washington wheat production, which in our particular analysis is measured as an interest in employing on-farm conservation practices and in saving seed for a future planting.The latter is of particular interest because the purchase on a yearly basis of improved seed varieties has long been emblematic of a modern agricultural practice.Kloppenburg [5] and Pfeffer [31] have described how the USDA historically pushed the adoption of purchased inputs from agribusiness as a means of modernizing U.S. agriculture.Through an analysis of farmer interest in conservation practices and seed saving, we investigate whether the degree to which farmers are interested in adopting either of these practices is associated with indicators of the human capital and social networks characteristics of farmers.
Many studies that incorporate an analysis of social networks in agrifood-system change conduct ethnographic or other qualitative research to depict the rich nature of the social interaction taking place.Our goal is to employ a quantitative approach that evaluates the relative importance of social networks and other theorized predictors of change.In this way, we can assess the relative importance of social network considerations.In particular, we are keenly interested in addressing the following general questions.First, are young, highly educated, larger scale farmers more likely to be the kind of "modern" farmer envisioned in much of the traditional innovation of diffusions research, or are these young, highly educated farmers becoming more interested in "progressive" alternatives, like those farmers envisioned in the work of Warner and others?Second, are these indicators of human capital and size more or less important than indicators of social networks and farmer attitudes about the role of farming in the social structure in predicting which farmers are most likely to be interested in practices associated with a more sustainable agriculture?
The answers to these questions will help us contribute to the theoretical literature on change processes in contemporary U.S. agriculture.We recognize the limitations of using empirical insights based on a study of those who produce one particular commodity in one corner of the United States.On the other hand, as wheat has been an important commodity in world agrifood system trade for over one hundred years [32], and as wheat producers continue to be primarily family based, our analysis will offer one perspective on the possibilities for the plausibility of alternative agrifood system development.
Data and Methods
From January through March of 2006, as part of a collaborative project between the wheat breeding program of the Department of Crop and Soil Sciences and the Department of Community and Rural Sociology at Washington State University, a survey of wheat growers in Washington State, USA, was conducted. The primary objective of the survey was to determine whether Washington State University's wheat breeding programs' research priorities reflected the needs of the state's wheat producers.
With the cooperation of the Washington Association of Wheat Growers (WAWG), a total of 1,374 names were drawn from the Association's membership list. Three hundred and seven (307) names were removed from this original sample because of ineligibility, bad addresses and other reasons. In collaboration with Washington State University's Social and Economic Sciences Research Center (SESRC), a sixteen-page survey, which was pre-tested on several wheat farmers who were known to team members, was mailed to the corrected sample of 1,067 growers. Questionnaires were sent out in accordance with the procedures outlined in Dillman's [33] Tailored Design Method. Of those wheat farmers who were sent surveys, 553 returned completed questionnaires, for a completion rate of 51.8 percent. An additional 239 ineligible surveys were also returned, which meant that the survey's return rate was 61 percent.
Many of the survey items were designed to assess the degree of farmer interest in a variety of wheat breeding and marketing options. For this reason, Likert scales were utilized throughout the survey, including for the dependent variables of interest for this paper. Given that these variables measure outcomes that are ordered into more than two categories, a maximum-likelihood ordinal logistic estimation technique [34,35], provided in the STATA® software package, was utilized to analyze the data.
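Although the models were estimated in STATA, the same proportional-odds specification can be sketched in Python for illustration; the data file and variable names below are hypothetical, not those of the actual survey:

```python
# Hedged sketch of an ordinal (proportional-odds) logistic model analogous to the
# STATA estimation described above; column names and file are invented for illustration.
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

df = pd.read_csv("wheat_survey.csv")   # hypothetical file with recoded survey items

outcome = df["conservation_interest"]          # ordinal dependent variable: 0, 1, 2
predictors = df[["age_group", "education", "income_share_ag",
                 "farm_bureau_info", "neighbor_info", "field_days"]]

model = OrderedModel(outcome, predictors, distr="logit")
result = model.fit(method="bfgs", disp=False)
print(result.summary())
```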
Analysis
In one section of the wheat farmer survey, respondents were asked to evaluate the importance of a list of nine management goals that might influence the success of their farm operation. The responses to several of these statements reflect what we would consider to be traditional, modernist thinking about farming amongst a large percentage of growers. For example, 87 percent of the wheat farmers stated that ensuring high yields was extremely important, while 83 percent responded that lowering input costs was extremely important. Similarly, only 28 percent of the respondents stated that maintaining genetic diversity was extremely important, while 26 percent responded that emphasizing environmental conservation was important. We also note that nearly 60 percent of those surveyed stated that they would plant a genetically modified wheat variety if it were available. This response pattern indicates that a majority of Washington wheat farmers maintain a view of agriculture where the main farming goal is both production and profit maximization.
On the other hand, the fact that a quarter of respondents indicated that considerations such as maintaining genetic diversity and environmental conservation were important on their farms indicates a recognition on the part of many growers of the need to blend environmental with economic considerations in wheat farming. While this should not be interpreted as reflecting a radical interest in environmental issues or a political-economic transformation of the agrifood system, we believe it does indicate that a substantial number of farmers are interested in exploring more sustainable farm management approaches. This way of thinking, at a minimum, recognizes the need to blend environmental and economic factors in farm management. In order to assess which types of farmers were more likely to have an interest in blending economic and environmental dimensions of agriculture, we combined the responses to the two statements on conservation and genetic diversity into a single measure for use as a dependent variable in our analysis. The final variable was coded 2 for those who felt that either maintaining genetic diversity and/or environmental conservation was extremely important while also feeling that the remaining goal was at least mostly important (29.5 percent). Respondents who felt that both goals were mostly important (26 percent) were coded 1, and a coding of 0 was used for those who felt that neither goal was extremely or mostly important (44.5 percent) (see Table 1). For our second dependent measure of an alternative vision for wheat production, we selected the variable measuring farmer interest in saving and planting one's own seed. There has been strong academic interest in the topic of control of the plant breeding process, and of genetic material [5]. As part of our survey, we asked wheat farmers how important it was for them to be able to continue to save and replant seed. Although saving seed was, until the 20th century, a necessary practice in United States agriculture, in recent decades the purchase of seeds on a yearly basis has become the recommended, conventional practice. More recently, however, maintaining the right to plant one's own seed is being revisited as a right that should be preserved in order to promote a more equitable and viable form of agriculture [36]. So, for this variable, we distinguished between those who view this ability as extremely important (29.7 percent), as mostly or slightly important (38.2 percent), or as not important at all (32 percent).
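A sketch of the first recoding (ours, assuming the Likert items are scored 1 = not at all important through 5 = extremely important; respondents falling outside the three described combinations are assigned 0 here, which is our reading of the scheme):

```python
def conservation_interest(genetic_diversity: int, env_conservation: int) -> int:
    """Combine the two goal-importance items (assumed 1-5 Likert scores) into the
    0/1/2 dependent variable described in the text."""
    scores = (genetic_diversity, env_conservation)
    if max(scores) == 5 and min(scores) >= 4:   # one goal extremely important, the other at least mostly
        return 2
    if min(scores) >= 4:                        # both goals mostly important
        return 1
    return 0                                    # neither goal extremely or mostly important
```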
As noted previously, for our analysis, we wanted to contrast the explanatory power of more traditional independent measures of human capital and socio-economic status with variables that could reveal the extent to which farmers rely on different social networks to obtain information about new technologies and production practices (see Table 1). For indicators of human capital, we utilized measures for age (those less than 45 years of age (11.4 percent), those between 45 and 59 (52 percent), and those more than 59 years of age (36.6 percent)) and formal education (high school degree (9.8 percent), post-secondary training (35.8 percent), and baccalaureate degree or higher (54.3 percent)). For measures of size of farm operation, which we consider to be an indirect indicator of socio-economic status, our challenge was that more than 14 percent of our sample refused to report their farm receipts, and number of acres is a difficult measure to use in analyzing Eastern Washington wheat farming because there is a great deal of natural variability in rainfall in the region. In particular, in much of the western part of the region, rainfall is such that land can be farmed only every other year. Thus, size of farm operations is more accurately interpreted as an indicator of geographical zone than socio-economic status of the farm operation. So, to measure size, we utilized a variable that measured the percentage of farmers that obtained three-quarters or more of the farm's income from agriculture (52.7 percent). While this is not as direct a measure of size as receipts would be, we do note that for those farmers who reported their receipts, percentage of income from agriculture and receipts were highly correlated (Chi-square of 117.07, P < 0.001).
For measures of social networks, we used variables that asked respondents how important it was for them to a) attend Farm Bureau meetings, and b) meet with neighbors in order to obtain information to help with on-farm decision-making. For each of these variables, farmers were separated into one of three categories: not important, slightly important, and mostly (or extremely) important. We also utilized a social capital variable that measured farmer attendance at field days run by the land grant university to serve wheat producers. Respondents were asked how many field days they had participated in over the previous five years. The variable is coded from one to six or more, with the mean number of attendances being 2.4. For all three of these variables, it is important to recognize that we are measuring the extent of social network interaction.
One research objective was to compare the influence of human capital and social networks on the dependent variables with respondent behaviors and attitudes about farm management and structural issues in agriculture. For this reason, we incorporated five additional independent variables into the analysis. One variable measured whether the respondent believed that high yields are the most important factor in determining farm success, an attitude we assume is linked to a modernist orientation towards agriculture. We also asked farmers whether they actually saved any of their seed for future planting, as well as whether they felt the development of perennial wheat should be a breeding priority. We assume that these two variables reflect a farmer inclination towards controlling genetics and using such genetics to develop a more environmentally sustainable form of farming. Finally, we measured farmer concern about structural issues facing agriculture by asking respondents whether they felt that local decline in farm numbers and community had an effect on their farm operations, and whether the current commodity system for wheat should be maintained. All of these variables were utilized in ordinal logistic regressions on each of our two dependent variables (Tables 2 and 3). All of the models computed had significant R² values, but readers are cautioned to remember that in logistic regression, unlike in OLS regression, the R² statistic is a measure of goodness of fit, not proportion of variance explained in the dependent variable [37,38]. Also of interest is that in all of the models, age of respondent, formal educational level and percentage of income derived from agriculture are not significant independent predictors. In other words, factors that traditionally were thought to be important in understanding which types of farmers would be most likely to adopt new, modern technologies in agriculture are not useful in understanding which wheat farmers in our study have an interest in conservation, genetic diversity and saving seed for future use.
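To make the model specification above concrete, the sketch below fits an ordered logit of the kind described, using Python's statsmodels. It is a hypothetical reconstruction: the file name, column names, and codings are placeholders rather than the study's actual data.

```python
# Hypothetical ordered-logit specification mirroring the analysis described above.
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

df = pd.read_csv("wheat_survey.csv")  # placeholder file name

# Dependent variable: interest in conservation/genetic diversity, coded 0, 1, 2
y = df["conservation_diversity"].astype(int)

# Human capital / size proxies, social-network measures, and attitude items
X = df[[
    "age_group", "education", "pct_income_from_ag",
    "farm_bureau_importance", "neighbors_importance", "field_days_attended",
    "high_yields_important", "saves_seed",
    "perennial_wheat_priority", "community_decline_effect",
]]

model = OrderedModel(y, X, distr="logit")
result = model.fit(method="bfgs", disp=False)
print(result.summary())  # coefficients, standard errors, and threshold cutpoints
```

The same call with the seed-saving outcome as `y` would reproduce the structure of the second set of models.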
In the models presented in Table 2, farmers who valued information from their neighbors (P < 0.01), who believe that high yields are an important factor for success (P < 0.05), who place a high priority on the development of perennial wheat (P < 0.05), and who believe that a decline in farm numbers and community has an effect on farm operations, were significantly more likely to be interested in Conservation and Genetic Diversity. This was true even after controlling for age, educational level and the size of the farm operation. This finding corroborates the work of Coughenour [22] and others that suggests that social networks (in this case neighbors) and sensitivity to community dynamics are positively associated with interest in incorporating environmental and genetic diversity considerations in making farm decisions. However, the fact that emphasis on high yields was also significant suggests that, in the minds of farmers at least, there is no contradiction between interest in maximizing production and in conservation and genetic diversity. In the case of interest in saving seed, two variables were significantly associated with the dependent variable. These were whether the farmers saved their own seed (P < 0.001) and placing a high priority on the development of perennial wheat (P < 0.05). These findings demonstrate a consistency between attitude and behavior in terms of seed saving, as well as interest in developing traits in wheat that might help wheat producers farm more sustainably. Indeed, it is most intriguing that the one independent variable that was significant in both sets of models was the variable that asked whether farmers placed a high or low priority on the development of perennial wheat. Scientists involved in developing perennial wheat describe their efforts as challenging the trends in conventional agriculture [36]. More research is needed to explore the degree to which farmers' perspectives on perennial wheat parallel those of the scientists.
Discussion and Conclusions
The overall goals of our analysis were to investigate the degree to which wheat farmers in Washington State are moving towards acceptance of some agricultural production practices that are believed to enhance sustainability. We sought to determine if young, highly educated, larger scale farmers are more likely to be the kind of "modern" farmer envisioned in much of the traditional diffusion of innovations research, or if these young, highly educated farmers are becoming more "progressive." Furthermore, we sought to determine if indicators of human capital and size are more or less important than indicators of social networks and farmer attitudes in predicting which farmers are most likely to be interested in practices associated with a more sustainable agriculture.
The evidence we have presented suggests that there is interest amongst some farmers in management schemes that blend alternative, more ecologically sustainable farming practices into mainstream practices, and that this interest is related to activity in social networks and concern about the effects of community decline on agriculture. These findings provide support to studies indicating that the spread of new knowledge regimes in support of more sustainable agricultural practices is supported by social networks that connect farmers to one another. Clearly, more in-depth research is needed to investigate the processes by which farmers in these networks share information about sustainable agricultural practices. Nonetheless, our analysis has demonstrated that individuals active in networks are not only more likely to be interested in conservation practices, genetic diversity and saving their own seed, but also appear to be interested in blending this management style with more conventional goals of increasing yield and maximizing profits.
Table 1 .
Descriptive Statistics for all Variables.
Table 2 .
Ordered Logistic Regressions of Interest in Conservation and Genetic Diversity.
Table 3 .
Ordered Logistic Regression of Value Placed on Saving Seed. | 2016-03-01T03:19:46.873Z | 2009-04-14T00:00:00.000 | {
"year": 2009,
"sha1": "57969bc38ae3ccd9a8d8d109f372945c1cc42003",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2071-1050/1/2/120/pdf?version=1424776415",
"oa_status": "GOLD",
"pdf_src": "Crawler",
"pdf_hash": "57969bc38ae3ccd9a8d8d109f372945c1cc42003",
"s2fieldsofstudy": [
"Environmental Science",
"Sociology",
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Economics"
]
} |
188132155 | pes2o/s2orc | v3-fos-license | Comparative analysis of route selection behaviors between full-service and low-cost airlines
In order to study the similarities and differences between the route selection strategies of full-service and low-cost airlines, Spring Airlines and Eastern Airlines are taken as representatives of low-cost and full-service airlines, respectively. Based on the actual operational data of Spring Airlines and Eastern Airlines routes over the past seven years, a panel probit model is used to investigate the impact of route characteristics, airport characteristics, characteristics of competing airlines, and characteristics of the airlines themselves on the route selection behaviors of full-service and low-cost airlines. The results show that route HHI, slot-controlled airports, airport size and the number of competitors have a significant impact on the route decisions of Eastern Airlines, while slot-controlled airports, airport HHI and already-served airports have a significant impact on the route decisions of Spring Airlines. Finally, based on the empirical results, the similarities and differences between the route decisions of Eastern Airlines and Spring Airlines are summarized, which provides a theoretical basis for the decision-making of full-service and low-cost airlines.
Introduction
The route is one of the basic elements of the route network and is the basic condition for the aviation system to provide transportation services. With the rapid development of China's economy, the demand for the civil aviation transportation network will certainly bring about the establishment of new routes in the aviation network [1]. Therefore, in a competitive market environment, how to choose a new route becomes an urgent problem for airlines to solve. This paper selects the historical operational data of Spring Airlines (9C) and Eastern Airlines (MU) in 2010-2017 to study the decisive factors affecting airline route selection. Spring Airlines and Eastern Airlines are selected as research objects because they are among the more successful low-cost and full-service airlines, respectively, so studying their route selection patterns has great significance for other airlines. On the other hand, the headquarters of these two airlines are both in Shanghai; choosing them avoids differences in the economic advantages brought by Shanghai's strong economic strength, so that the differences in the route selection strategies of the two airlines can be studied more objectively and accurately.
Research methods
In formula (1), when y_i is equal to 1, it indicates that Spring/Eastern Airlines chooses to enter route i; if y_i is equal to 0, it indicates that Spring/Eastern Airlines does not choose to enter route i.
The probability that the route decision variable y_i (the dependent variable, taking the value 1 or 0) equals 1 is given in equation (2) as P(y_i = 1 | x_i) = Φ(x_i'β), where x_i is the vector formed by all the explanatory variables on sample route i, that is, the influence factors of the route decision, β is the vector of coefficients, and Φ is the standard normal cumulative distribution function.
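For readers who want to see the probit specification in executable form, the sketch below fits a pooled probit with Python's statsmodels; the paper itself estimates a panel probit in Stata, and all file and column names here are illustrative assumptions rather than the authors' actual data.

```python
# Simplified pooled-probit illustration of equation (2); column names are hypothetical.
import pandas as pd
import statsmodels.api as sm

routes = pd.read_csv("routes_panel.csv")  # one row per route i and period t

X = routes[["Dist", "Passenger_lag", "Slot", "route_HHI", "airport_HHI",
            "MaxAirFli_lag", "MinAirFli_lag", "Tourist",
            "Competitors", "SerBothEnds_lag"]]
X = sm.add_constant(X)
y = routes["entered"]  # 1 if the airline entered route i in period t, else 0

probit = sm.Probit(y, X).fit(disp=False)
print(probit.summary())

# Predicted entry probability P(y_i = 1 | x_i) = Phi(x_i' beta)
routes["p_entry"] = probit.predict(X)
```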
Data sources and choices
Based on the actual operational data of all airlines provided by the OAG database in 2011-2017, the routes that have been operated (served) for seven years are selected as potential entry route samples. In order to ensure the continuity of the data, we eliminated routes with discontinuous operation or missing data, leaving the remaining 579 routes as the sample of potential entry routes. In order to exclude charter flights with a low frequency, a route with a total annual frequency of 20 or more flights is defined as being served [3]. If Spring/Eastern Airlines did not serve a route during the previous period but serves it during the current period, Spring/Eastern Airlines is said to choose this route at this stage; otherwise, Spring/Eastern Airlines does not choose this route at this stage.
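As a rough illustration of this sample-construction rule, the sketch below filters schedule data to routes with at least 20 flights in every year of the window; the file name, column names, and exact years are assumptions for illustration only.

```python
# Hypothetical reconstruction of the potential-entry-route sample selection.
import pandas as pd

oag = pd.read_csv("oag_schedules.csv")  # assumed columns: route, year, flights

annual = oag.groupby(["route", "year"], as_index=False)["flights"].sum()
served = annual[annual["flights"] >= 20]  # "served" threshold used in the paper

years = set(range(2011, 2018))
continuous = served.groupby("route")["year"].apply(lambda s: years.issubset(set(s)))
sample_routes = continuous[continuous].index.tolist()
print(len(sample_routes), "routes retained as potential entry routes")
```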
Probit empirical model, variable description and data processing
Combined with the selected route decision factors and the correlation analysis results, the expression of the Spring/Eastern Airlines route selection strategy is obtained as equation (3). In equation (3), the variables are defined as shown in Table 1. Slot_i: when route i has at least one endpoint airport that is a slot-controlled airport, the variable takes a value of 1; otherwise, the value is 0.
MaxAirFli_{i,t-1}: in the previous period, the maximum number of flights served at the endpoint airports of route i.
MinAirFli_{i,t-1}: in the previous period, the minimum number of flights served at the endpoint airports of route i. Tourist_i: when route i has at least one endpoint airport whose city is a tourist city, the variable takes a value of 1; otherwise, the value is 0.
SerBothEnds_{i,t-1}: in the previous period, if Eastern/Spring Airlines served both endpoint airports of route i, the variable takes a value of 1; otherwise, it is 0. Descriptive statistics of the variables are given in Table 2.
Analysis of results
The probit model analysis of the panel data was performed using the software Stata 11, and the estimation results for Eastern Airlines are shown in Table 3. In terms of route characteristics, for the route distance variable Dist_i, the coefficient of Eastern Airlines is negative (-5.4E-05) and the coefficient of Spring Airlines is positive (0.000155); both coefficients are small and neither is significant. This indicates to a certain extent that route distance has little influence on the route decisions of Spring Airlines and Eastern Airlines. For the route density variable Passenger_{i,t-1}, the coefficients of Eastern Airlines and Spring Airlines are small and not significant, indicating that Eastern/Spring Airlines did not treat route density as an important consideration when selecting routes; a higher passenger volume on a route does not necessarily mean that more passengers can be obtained.
In terms of airport characteristics, for the slot-controlled airport variable Slot_i, the coefficient of Eastern Airlines is -0.25164, which is significant at the 90% level, and the coefficient of Spring Airlines is 1.149121, which is significant at the 95% level. This indicates that Eastern Airlines does not tend to enter slot-controlled airports, because slots at such airports are costly to obtain and competition there is severe. Spring Airlines, by contrast, tends to enter slot-controlled airports: because fares at slot-controlled airports are generally higher and passenger volumes are generally larger, Spring Airlines enters them to take advantage of its low fares and attract more travelers. The coefficient of MaxAirFli_{i,t-1} for Eastern Airlines is significant at the 90% level, indicating that Eastern Airlines is not inclined to enter larger airports, because there are more competitors at larger airports and the high fares of a full-service airline reduce its competitive advantage there, while Spring Airlines tends to enter larger airports, where it can take advantage of low fares to obtain more passengers. For the tourist-city variable Tourist_i, the coefficient of Eastern Airlines is 0.065655 and the coefficient of Spring Airlines is 0.312062, about five times that of Eastern Airlines, indicating that Spring Airlines is more inclined to enter tourist cities, which also verifies that the target passengers of Spring Airlines are mostly price-sensitive passengers.
In terms of the characteristics of the airline itself, for the SerBothEnds_{i,t-1} variable, the coefficient of Eastern Airlines is 0.020753 and the coefficient of Spring Airlines is 1.192231, significant at the 90% confidence level, indicating that Spring Airlines is more inclined to open new routes at airports it already serves. By expanding routes from its existing service airports, the airline can better utilize the existing resources at those airports, thereby reducing operating costs and | 2019-06-13T13:22:45.042Z | 2019-02-01T00:00:00.000 | {
"year": 2019,
"sha1": "a9bfba883b3012b4bfb8f796a4a4aa890fea9650",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/1168/3/032112",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "c8793cf4882259285a4c0620349eb000a5094f87",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Physics",
"Business"
]
} |
56429147 | pes2o/s2orc | v3-fos-license | Examining the effects of customer satisfaction on commitment and repurchase intentions of branded products
Abstract This paper examined the associations between satisfaction, commitment and repurchase intention of branded products from the perspective of customers in the Gauteng Province of South Africa. The study employed the Social exchange theory as its theoretical underpinning. Data was collected through purposive and convenience sampling techniques from 268 users of branded products in the Province. The Structural Equation Modelling (SEM) statistical technique with Smart PLS version 3.0 was used to analyse the data. The results identified normative commitment as an important driver of satisfaction. It was also observed that calculative commitment had a greater influence on customer repurchase intention. The results have implications for relationship managers, brand managers and scholars who use service evaluation and interactive commitment as a multidimensional construct in predicting customer repurchase intention.
Introduction
ABOUT THE AUTHOR
Dr Phineas Mbango is a senior lecturer at the University of South Africa. He has been in the academic field for more than 15 years. He specializes in relationship marketing and has written many academic papers on this subject. Before joining academia, he worked in the corporate sector in various positions in Sales and Marketing, as well as in Human Resources Management.
PUBLIC INTEREST STATEMENT
This paper looked into the associations between satisfaction, commitment and repurchase intention of branded products in the Gauteng Province of South Africa. Branding has become a very important issue for business success. Consumers buy brands. It is therefore important for both academics and businesses to understand some of the variables that make consumers loyal to a brand. Data was collected through purposive and convenience sampling techniques from 268 users of branded products. Structural Equation Modelling (SEM) with Smart PLS version 3.0 was used to analyse the data. The results identified normative commitment as an important driver of satisfaction. It was also observed that calculative commitment had a greater influence on customer repurchase intention. The results have implications for academia, practice and policy.
Customers have been searching for different, distinctive and outstanding relationships with brands over the past years (Reydet & Carsana, 2017). Satisfaction is one of the prevalent constructs that has extensively been explored by academicians and researchers for its impact on repurchase intention from the perspective of consumer behavior (Chiu, Fang, Cheng, & Yen, 2013; Kim, 2012). According to Saufiyudin, Fadzil, and Ahmad (2016), customer satisfaction is seen as influencing repurchase intentions and behavior, which leads to improvement in organisations' proceeds and earnings. Outstanding positive experience leads to affirmative behaviours towards companies' and firms' activities, making customers loyal to organisations' products and services (Straker, Wrigley, & Rosemann, 2015). The escalating competition in the retail industry demands that managers of businesses ascertain factors reinforcing customers' commitment towards brands (Shukla, Banerjee & Singh, 2016). Customers' valuation of service excellence is vital information needed for service providers to improve their business performance while positioning themselves tactically in the market place (Cronin & Taylor, 1992; Jain & Gupta, 2004). Organisations always expect their customers to be devoted to their brands with strong feelings. Espejel et al. observed that the higher the level of satisfaction, the more committed customers become.
According to Kotler and Armstrong (2004), branding is a significant means which helps in forming a positive image in consumers' minds and distinguishing a product from other competing products. The proliferation of different brands, characterised by its resultant opportunities for customers to adjust their preferences rather than to be committed, has become an enigma to marketers regarding the subject of commitment to brands (Shukla et al., 2016). A greater number of researchers perceived satisfaction from a general perspective, although it ought to be measured distinctly from transaction-based and experience-based satisfaction (Huang & Dubinsky, 2014). Industry intelligence is recognising the developing portent of changing commitment levels among customers (Euromonitor, 2014). According to Shukla et al. (2016), there is limited academic research on customer commitment in the luxury sector. Researchers such as Morgan and Hunt (1994) and Bansal, Irving, and Taylor (2004) all opined that the stream of research on consumers identifying commitment as a crucial component in developing and maintaining continuing relationships is limited (Bansal et al., 2004; Morgan & Hunt, 1994).
Nevertheless, many studies positioned commitment as a unidimensional variable or construct in relationship marketing studies, which, according to Amofa and Ansah (2017), has a greater effect on the degree of deviation on its findings other than measuring it from the multidimensional perspective-as employed in the current study; with succeeding research works using "commitment" as a multidimensional construct (Bansal et al., 2004;Eisingerich & Rubera, 2010). The three component structure of commitment propounded by Allen and Meyer (1990) in the purview of organisational science provided an appropriate platform for investigating the emotional (affective), functional (calculative) and social (normative) phases-reflecting a valid level of measuring commitment on brands. Also, it has been observed that, research on the factors that influence consumer repurchase is limited (Milner & Rosenstreich, 2013).
Despite several studies on satisfaction, only a small number had examined the associations between satisfaction, commitment as a multidimensional construct, as well as repurchase intention in same study. This study attempts to bridge the gap by developing and testing a model of relationship behavior-aiming at clarifying the relationships among customer satisfaction, affective commitment, normative commitment and calculative commitment all on repurchase intentions. Therefore, the main purpose of the study is to examine the effects of customer satisfaction on commitment and repurchase intention of branded products. This paper presents the proposed models based on a number of hypothesised relationships derived from an extensive literature review. The models are then tested in an empirical study. As a final point, the paper concludes with a discussion of the findings, implications, limitations, and potential future research directions.
Social exchange theory
Social exchange is explained as an intentional actions of individuals that are driven by the returns they are anticipated from other parties (Blau, 1964). The central basis of the theory is that people involved in interactions willingly offer benefits, beseeching responsibility from the other party to reciprocate and provide some benefit in return (Yoon & Lawler, 2005). The countered benefits can take a form of monetary rewards or social benefits. The underlying principle of social exchange theory validates a reciprocated backings, which is created by collective bonds among exchange actors (Konovsky & Pugh, 1994). According to Thye, Yoon, and Lawler (2002), social exchanges creates feelings of personal obligation, appreciation, and trust among partners.
In relating the social exchange theory to the current study, this study submits that the more consumers or customers experience a greater return from their respective branded products in terms of satisfaction, the more likely they are to be committed to those products. In addition, as a result of this apparent fair treatment enshrined as part of the features of social exchange theory, committed consumers or customers are more likely to form a repurchase intention. For that reason, the higher the satisfaction level of customers, the more likely they are to be committed, which will eventually lead to repurchase intention.
Customer satisfaction
Satisfaction is what a product or service provides through a delightful level of consumption-related realisation (Zeithaml & Bitner, 2003). According to Kim (2012), satisfaction is perceived as an assertiveness, that results from a psychological comparison of the service and quality that a customer or a consumer assumes to receive from a transaction after purchase. Customer satisfaction labels an anticipated result of service that involves an assessment of whether the service has met the customer's wishes and anticipations (Orel & Kara, 2014). According to Thaichon and Quach (2015), customer satisfaction is defined as customers' feelings of pleasure, fulfillment and desire towards a service rendered. Satisfaction is also viewed as a result of the customer's postpurchase valuations of both tangible and intangible brand attributes (Krystallis & Chrysochou, 2014). The current study employed the definition by Thaichon and Quach (2015), who defined customer satisfaction as customers' feelings of pleasure, fulfillment and desire towards a service provider or a service rendered.
Affective commitment
Affective commitment is defined as the passionate connection with the brand that represents strong logic of personal identifications. According to Pring, affective brand commitment relies on identification and mutual value with the brand. Consumers with greater brand commitment would have stronger affective attachment towards the brand (Keh et al., 2007). Mcalexander, Schouten, and Koenig (2002) opined that affective commitment describes the deep attachment to a dedicated brands. Allen and Meyer (1990: 253) observed that affective commitment was an emotional attachment to an organization. Bansal et al. (2004: 236) revealed that affective commitment becomes more apparent when a customer or a consumer is glued emotionally to a company or a product-just because they are genuinely committed to it.
Normative commitment
Normative commitment is intellectualised as a responsibility towards the organization (Allen & Meyer, 1990). Normative commitment is well-defined as a form of association that relies on idiosyncratic norms recognised over time, where customers or consumers envisage that, they ought to stay with the company (Bansal et al., 2004). According to Shukla (2011Shukla ( , 2012), normative norm is moulded by the perception of the customer-which is influenced by the social environment. Customers are influenced by their social environment and act in such a way to gratify their peers or group, so as to associate themselves significantly with the brand.
Calculative commitment
Calculative commitment is seen as the functional link customers tend to have with products and organisations. Calculative commitment is applied comprehensively in business and consumer research to examine a variety of issues, such as the backgrounds of brand loyalty (Li & Petrick, 2008). It relates to the sentiment of having to stay with a company or an organisation, either due to less attractive alternatives or no alternatives (Bansal et al., 2004). Gilliland and Bello (2002: 28) observed that calculative commitment was a state of attachment to a partner or cognitively experienced as a realization of the benefits that would be sacrificed and the losses that would be incurred if the relationship were to end. Allen and Meyer (1990) defined the concept as a restriction based relationship that is formed due to the switching cost an employee has to face, if they were to leave the firm.
In concluding this section, Figure 1 below depicts the conceptual model to be tested in this study.
Customer satisfaction and commitment
The effects of customer satisfaction on brand commitment have been examined by numerous authors (Darsono & Junaedi, 2006). The moment customers become satisfied with their total experience, they are more likely to be committed and to ensure a continued relationship (Beatson, Cotte, & Rudd, 2006). Once customers become satisfied, they show commitment by constantly buying the same brand of product (Ballantyne & Warren, 2006). Preceding studies have reported a positive influence of satisfaction on relationship length (Seiders, Voss, Grewal, & Godfrey, 2005; Zeithaml, Berry, & Parasuraman, 1996). Customers tend to associate themselves with a product once they have feelings of obligation towards the company or the product (Einwiller, Fedorikhin, Johnson, & Kamins, 2006). Garbarino and Johnson (1999), as well as Camarero and Garrido (2011), opined that consumers' overall assessment of satisfaction with their consumption experiences has a positive effect on commitment. According to Chien-Lung, Chia-Chang, and Yuan-Duen (2010) and Belanche, Casaló, and Guinalíu (2013), customer satisfaction is highly correlated with commitment. Accordingly, the following hypotheses are proposed:
H1: There is a significant positive relationship between customer satisfaction and customer affective commitment on branded products
H2: There is a significant positive relationship between customer satisfaction and customer normative commitment on branded products
H3: There is a significant positive relationship between customer satisfaction and customer calculative commitment on branded products
Customer commitment and repurchase intention
Commitment as a concept has been defined as "an implicit or explicit pledge of relational continuity between exchange partners" (Dwyer, Schurr, & Oh, 1987, p. 19) or as "psychological attachment" to an organization (Gruen, Summers, & Acito, 2000, p. 37). Garbarino and Johnson (1999) and Hennig-Thurau, Gwinner, and Gremler (2002) defined customer commitment as an exchange people tend to sustain towards a continued relationship with another party. Morgan and Hunt (1994) observed that commitment encourages buyers and suppliers to continue their association with brands. Commitment contributes to successful relationships because it leads to cooperative behaviors (Morgan & Hunt, 1994). The reported effects of commitment strength are positive (Bansal et al., 2004; Fullerton, 2003; Gruen et al., 2000; Harrison-Walker, 2001). A study by Verhoef (2003) in banking services observed a direct effect of commitment on repurchase intention. Harrison-Walker also observed a positive relationship between commitment and branded products. This shows that commitment has an effect on repurchase intentions.
Hence, the following hypotheses are proposed: H4: There is a significant positive relationship between customer affective commitment and customer repurchase intention on branded products H5: There is a significant positive relationship between customer normative commitment and customer repurchase intention on branded products H6: There is a significant positive relationship between customer calculative commitment and customer repurchase intention on branded products.
Method
This sections outlined the details of the research methodology that comprised measurement of the instrument, sampling, data collection, as well as the testing of hypotheses.
Population and sample
The respondents for the study included customers and consumers of branded products from the Gauteng province of South Africa. The respondents consisted of government employees, private sector employees, the self-employed, the unemployed and students in the Gauteng province. A total of 268 useable questionnaires were used out of the 300 questionnaires that were distributed, representing 89%. The sample was deemed fit for the analysis using Roscoe's (1975) rule of thumb on sample sizes, which suggests that sample sizes larger than 30 and smaller than 500 are appropriate for most research.
Pre testing of the instrument
The original version of the research instrument was pre-tested with 15 participants, who were sampled from Braamfontein, Rosebank and Campus Square-all in the Gauteng Province. Each participant was presented with a copy of the questionnaire through four trained research assistants-who were trained towards the distribution and the collection of the questionnaires.
Participants were asked to provide their opinion and comments on the clarity of the instructions; the wording of the questions; the layout of the questionnaire, as well as the time taken in completing them. Corrections were then made-regarding the feedbacks, which were factored into the final questionnaires towards the actual analysis.
Data collection procedure
Convenience sampling and purposive sampling techniques were used to collect the data from the respondents-who were basically interested in the usage of branded products. The collection lasted for a month and two weeks-before the required sample size was received. The participants were made to fulfil certain requirements before answering the questions. First, participants were supposed to be users of branded products. Second, they were to be residents at the Gauteng Province. After that clarification, questionnaires were then distributed to the respondents by the research assistants in offices, locations and institutions-which were all in the province. The basis of the research work was first explained by the research assistants without any compulsion on participants' part to either accept to take part in the study or to ignore answering the questionnaires-after which questionnaires were handed to them to complete. Those who were willing to take part in the exercise but were not comfortable in filling the questionnaires on their own were assisted by the research assistant until the required sample was obtained for the actual analysis.
Measurement and questionnaire design
The research constructs were developed solely on already validated measures. All scale items were rearticulated to relate exactly to the context of the current study's requirement. A five point Likert scale was employed to measure the constructs ranging from "1-strongly disagree" to "5strongly agree". Satisfaction used a four-item scale which was adapted from Oliver (1997); Repurchase intention used a four-item scale which was also adapted from Yi and La (2004); affective commitment, normative commitment, as well as calculative commitment all employed six-item scale each-which were adapted from Lee, Allen, Meyer and Rhee (2001). In line with the commendation by Nunnally (1978), a minimum of three items were used per construct so as to guarantee suitable reliability.
The questionnaire was divided into three parts; Part A contained the introductory summary of the entire questionnaire to the participants; Part B contained demographic profile with gender, age, marital status, level of education, as well as the occupation of the respondents, while Part C contained questions about the variables that were used in the study-namely: satisfaction, commitment and repurchase intention using a five point Likert scale that was anchored from "1strongly disagree" to "5-strongly agree".
Data analysis
The research structure of analysis used in the study was developed as a Partial Least Squares (PLS) model using Smart PLS 3.0 software. The software was used in assessing the measurement and structural models (Henseler, Ringle, & Sinkovics, 2009). First, it determined the relationships of the constructs. Second, it identified the effects of each measuring construct on the others in the research framework. It also estimated the statistical significance of factor loadings and the path coefficients (Chin, 2001; Davison, Hinkley, & Young, 2003) using the non-parametric bootstrap technique.
Reliability assessment
The reliability of the study was measured using Cronbach alpha and composite reliability (CR). The Cronbach's alpha (α) of all constructs were greater than 0.70, and the CR values were greater than 0.80, indicating adequate internal consistency of the constructs (Hair et al. 2010). In the current study, the values for Cronbach alpha ranged from 0.735 to 0.876 while that of the CR values ranged from 0.829 to 0.939, indicating a high internal consistency as shown in Table 1.
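For readers unfamiliar with these two reliability statistics, the sketch below shows the conventional formulas on made-up item scores and loadings; the data and loading values are placeholders, not the study's figures.

```python
# Conventional Cronbach's alpha and composite reliability (CR) calculations.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of Likert scores for one construct."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

def composite_reliability(loadings: np.ndarray) -> float:
    """loadings: standardized outer loadings of one construct's items."""
    sum_l = loadings.sum()
    error = (1 - loadings ** 2).sum()
    return sum_l ** 2 / (sum_l ** 2 + error)

rng = np.random.default_rng(1)
satisfaction_items = rng.integers(1, 6, size=(268, 4))  # placeholder Likert data
print("alpha:", round(cronbach_alpha(satisfaction_items), 3))
print("CR:", round(composite_reliability(np.array([0.78, 0.81, 0.74, 0.80])), 3))
```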
Convergent validity
Convergent validity simply explicates the extent at which multiple items measuring the same concept are in agreement. Babin and Zikmund (2016:283) observed that convergent validity depends on internal consistency-where multiple measures converge on dependable basis. Hair et al. (2010) observed that, for convergent validity to be evident in a study, the loadings for all items ought to be greater than 0.50. In the current study, the CR and the AVE values all exceeded the recommended value. Therefore, the overall measurement model of the study established satisfactory convergent validity as shown in Table 1.
Discriminant validity
Discriminant validity signifies how unique or distinct a measure is, a scale should not correlate too highly with a measure of a different construct (Babin & Zikmund, 2016:283). The study employed Fornell and Larcker (1981) assessment in determining the discriminant validity. Table 2 presents the discriminant validity such that, the value of square root of AVE exceeded the construct correlations with all other constructs.
As recommended by Fornell and Larcker (1981), discriminant validity is assessed by examining the AVE and squared correlations between the constructs. As illustrated in Table 2, all constructs met the discriminant validity as the AVE for each construct was higher than the squared correlation with the other constructs.
Goodness of fit
The study's goodness-of-fit statistic (GoF) was calculated using the formula by Tenenhaus, Vinzi, Chatelin, and Lauro (2005), where the average of the average variance extracted (AVE) values was first multiplied by the average of the R² values, after which the square root of the product was taken to determine the model fit.
The calculated GoF was 0.527, which exceeded the threshold of GoF > 0.36 recommended by Wetzels, Odekerken-Schroder, and Van Oppen (2009). Thus, the study therefore concluded that, the research model had a better overall fit.
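A small numeric sketch of that formula is given below; the AVE values are placeholders (the paper does not list them in this excerpt), while the R² values are the ones reported further on in the text.

```python
# Tenenhaus et al. (2005) global goodness-of-fit: sqrt(mean(AVE) * mean(R^2)).
import numpy as np

ave_values = np.array([0.62, 0.58, 0.55, 0.60, 0.65])  # placeholder AVEs per construct
r2_values = np.array([0.331, 0.541, 0.005, 0.862])     # R^2 values reported in the paper

gof = np.sqrt(ave_values.mean() * r2_values.mean())
print(f"GoF = {gof:.3f} (0.36 is the recommended threshold for a substantial fit)")
```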
Presentation of hypothesised results
The hypothesised relationships between the constructs were analysed using the Smart PLS 3.0 software. Path analysis and levels of significance were used in assessing the hypothesised associations of the study. A bootstrapping strategy was used to validate the results of the hypotheses, with 300 bootstrap samples selected for a one-tailed test, which relied on critical t values of 1.65 (significance level 5%) and 2.33 (significance level 1%) (Hair et al., 2010). The R² value of 0.862 for repurchase intention designated that 86.2% of the variance was explained by affective, normative and calculative commitment. The R² value of 0.541 for normative commitment revealed that 54.1% of the variance was explained by the satisfaction level of the respondents. Affective commitment exhibited an R² value of 0.331, explained by satisfaction, while calculative commitment recorded the lowest R² value of 0.005, explained by satisfaction.
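To illustrate the bootstrap significance test in miniature, the sketch below resamples a single path coefficient 300 times and compares the resulting t statistic to the one-tailed critical values quoted above; it is a simplified stand-in, since a full PLS analysis would re-estimate the whole path model on every resample, and the data here are synthetic.

```python
# Simplified bootstrap t-test for one path coefficient (synthetic data).
import numpy as np

rng = np.random.default_rng(0)
n, B = 268, 300                       # sample size and bootstrap draws from the paper

x = rng.normal(size=n)                # placeholder construct scores
y = 0.4 * x + rng.normal(size=n)

def path_coef(a, b):
    return np.polyfit(a, b, 1)[0]     # slope of a simple regression

boot = np.empty(B)
for i in range(B):
    idx = rng.integers(0, n, size=n)  # resample respondents with replacement
    boot[i] = path_coef(x[idx], y[idx])

t_stat = path_coef(x, y) / boot.std(ddof=1)
print(f"bootstrap t = {t_stat:.2f}; significant at 5% (one-tailed) if t > 1.65")
```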
Discussion of results
The study examined the influence of satisfaction on commitment and repurchase intention of branded products in the Gauteng province of South Africa. Six hypotheses were outlined for the test analysis. First, it was evident from the study that satisfaction had a greater influence on the normative commitment of customers towards branded products than on their affective and calculative commitment. The findings of the study were consistent with Ballantyne and Warren (2006) and Camarero and Garrido (2011), who opined that consumers' complete valuation of satisfaction with their consumption practices has a positive consequence on commitment. It demonstrated that once customers are satisfied with the attributes of a particular branded product, they are more likely to be committed to the said product.
In addition, the study findings also made it apparent that commitment has greater effects on repurchase intention. The results of the study are in consonance with Morgan and Hunt (1994), who revealed that commitment has a strong effect on buyers and suppliers in continuing their association with brands. In the current study, calculative commitment recorded a greater value on repurchase intention than affective and normative commitment. That is to say, customers are highly glued to their intention to repurchase once there is a cost associated with a switch from a particular brand to another brand. The study findings are consistent with Bansal et al. (2004), who opined that the sentiment of having to stay with a company or an organisation, either due to less attractive alternatives or no alternatives, is more likely to restrict a consumer or a customer from changing brands. The findings are also in line with Gilliland and Bello (2002: 28), who observed that calculative commitment was a state of attachment to a partner, cognitively experienced as an awareness of the benefits that would be sacrificed and the losses that would be incurred if the relationship were to end. Finally, the findings were also consistent with the Social exchange theory. According to Blau (1964), social exchange theory is expounded as deliberate actions of individuals that are driven by the returns they anticipate from other parties. In the current study, there seemed to be a strong relationship between satisfaction and commitment, such that the more customers are satisfied with a particular branded product, the more they become committed to the said brand. Again, it was evident that there was a positive relationship between commitment and repurchase intention of branded products. The more customers become committed towards a brand, the higher their intent to purchase the product becomes.
Conclusion
The purpose of this study was to examine the influence of satisfaction on commitment levels and repurchase intention of customers and consumer-who are attached to branded products in the Gauteng Province of South Africa. The study revealed that, there was a relationship between satisfaction and commitment. The more customers of branded products are satisfied, the more they tend to be loyal to the brands in question. Within the same context, it was also observed that, satisfaction is moulded by the perception of the customer-which relied on the social environment for one to be committed other than the exorbitant price and feelings customers employ towards branded products. It is the society that tend to influence customers towards the usage of a particular brand. The study also concludes that calculative commitment has a greater influence on customer intention to purchase. The cost associated with a switch from one brand to the other compels customers towards their intent on future purchase of the same branded product. The findings of the current study do not only provide significant insights to practitioners, but also contribute to the literature on relationship marketing from the viewpoint of emerging economies.
Theoretical implication
This study offers valuable insights regarding the measurement of cognitive and affective dimensions of consumer-brand relationships on commitment. The results of the study have a number of important implications for both theory and practice as recommended by Shukla et al. (2016) and Amofa and Ansah (2017) on the application of a multidimensional construct in bringing out an extent of differences from a construct or a variable other than a unidimensional measurement.
Managerial implications
The findings have imperative implications for practitioners, principally those in the wholesale and retail business, such as distributors of branded products. First and foremost, by empirically testing the vital drivers of customer satisfaction on commitment and repurchase intention, this research seeks to offer managers of branded organisations strategic activities that are likely to motivate both behavioral and attitudinal commitment. There is a need to intensify communication activities on brands through comparative advertising towards influencing the climate of opinion in societies on branded products, rather than over-relying on the image of the brand. Consequently, the findings of the study seek to enlighten managers regarding what factors to prioritize in generating higher levels of commitment and purchase, thereby helping them to advantageously situate their customer-retention investments.
Limitation and future research directions of the study
This study contributes immensely to theory and practice. However, it has some limitations. First, the application of non-probability sampling techniques in the study limit the generalisability of its findings. In addition, the current study was limited to Gauteng province in South Africa without including other provinces. For results comparison, subsequent researchers ought to consider replicating this study in other South African provinces and other developing countries. Finally, the study did not consider how customer commitment may vary with regards to established versus new branded products. While this research explicitly focused on customers who use and intent to use branded products, an imperative area for future studies is to investigate how commitment towards branded products compares with non-branded products. Future studies should also investigate other probable antecedents to commitment, such as scarcity, positive and negative emotions associated with a brand, beside with prior brand familiarity.
QUESTIONNAIRE RESEARCH QUESTIONNAIRE
Please answer the following questions by marking the appropriate answer(s) with an X. This questionnaire is strictly for research purpose only.
SECTION A: GENERAL INFORMATION
The section is asking your background information. Please indicate your answer by ticking (X) on the appropriate box. | 2018-12-15T05:14:53.092Z | 2018-01-01T00:00:00.000 | {
"year": 2018,
"sha1": "4e8f48aa8bc8d6c3000eb6f468216d4de6263f6f",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1080/23311886.2018.1521056",
"oa_status": "GOLD",
"pdf_src": "TaylorAndFrancis",
"pdf_hash": "4e8f48aa8bc8d6c3000eb6f468216d4de6263f6f",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Business"
]
} |
242918229 | pes2o/s2orc | v3-fos-license | Experimental investigation on the viscoelastic properties of constituents in mudstone
In deep underground engineering, the creep behaviors of soft rocks have been widely investigated to help understand the mechanism of the time-dependent large deformation and failure of underground engineering structures. However, rocks used to be regarded as homogeneous materials, and there are limited studies on the time-dependent properties of their constituents that could reveal their creep mechanism. In this context, the targeting nanoindentation technique (TNIT) was adopted to investigate the viscoelastic characteristics of kaolinite and quartz in a two-constituent mudstone sample. The TNIT consists of identification of the mineralogical constituents in mudstone and nanoindentation experiments on the identified constituents. After conducting the experiments, the unloading stages of the typical indentation curves were analyzed to calculate the hardness and elastic modulus of the constituents in mudstone, and the 180 s load-holding stages with a maximum load of 50 mN were transformed into typical creep strain-time curves for fitting analysis using the Kelvin model, the standard viscoelastic model and the extended viscoelastic model. Fitting results show that the standard viscoelastic model can perfectly express the nanoindentation creep behaviors of both kaolinite and quartz and that the fitted constants are suitable for calculating their creep parameters. The creep parameters of kaolinite are much smaller than those of quartz, which drives the time-dependent large deformation of the soft mudstone. Finally, the standard viscoelastic model was verified on a sandstone sample.
Creep, a time-dependent behaviour, is an inherent attribute of rocks, soft mudstone in particular (J. Sun, 2007). In deep underground engineering, the creep deformations of rocks become more common due to the high stress (He et al., 2005). The time-dependent deformations of rocks negatively affect mining safety (X. Li
Nanoindentation technique, also called the depth-sensing indentation, was first proposed and used by Kalei in 1968 in Russia (Kalei, 1968). This technique has proven to be an effective and convenient method for determining the elastoplastic mechanical properties of solids based on a small rock sample, most notably elastic modulus, hardness (Oliver and Pharr, 1992) and fracture toughness (Zeng et al., 2019).
As shown in Fig. 1(a), the coal mine named Tongting is located in Anhui province, China; and the sampling layer is above the 7# coal strata deposited at the Permian layer, where the stratigraphic column is presented in Fig. 1
For the elastic element in Fig. 6, the stress (σ) is proportional to the strain (ε), while for the viscous element, it is proportional to the strain rate (ε̇) (Oliver and Pharr, 1992, 2004). As shown in Fig. 7(a), a typical load-depth curve can be obtained as the indenter enters into and exits from the surface of mudstone.
The curve consists of a loading segment (o-a), a load-holding segment (a-b), and an unloading segment (b-c); segment b-d is the tangent to the unloading segment (b-c) at its initial portion. In Fig. 7(b), a typical Berkovich
In this section, the proposed nanoindentation creep viscoelastic models in Fig. 6 will be used to fit the
Fitting curves by using the Kelvin model did not yield desirable results. As shown in Fig. 9 (a) and
According to the fitting results by using the standard viscoelastic model, the creep parameters of both kaolinite and quartz in mudstone can be obtained, as shown in Table 2.
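As an illustration of what such a fit looks like in practice, the sketch below fits a generic standard-linear-solid creep law to a synthetic load-holding curve with scipy; the functional form, parameter values and data are assumptions for demonstration, not the model or results reported in the paper.

```python
# Illustrative creep-curve fit in the spirit of the standard viscoelastic model.
import numpy as np
from scipy.optimize import curve_fit

def sls_creep(t, e0, e1, tau):
    """Instantaneous strain e0 plus a delayed Kelvin-type term."""
    return e0 + e1 * (1.0 - np.exp(-t / tau))

t = np.linspace(0, 180, 90)              # 180 s load-holding stage, as in the experiments
rng = np.random.default_rng(2)
strain = sls_creep(t, 0.012, 0.004, 35.0) + rng.normal(0, 1e-4, t.size)  # synthetic data

popt, _ = curve_fit(sls_creep, t, strain, p0=[0.01, 0.005, 30.0])
print("fitted e0, e1, tau:", popt)
```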
According to the fitting results in Fig. 12(a) and Table 4, the mechanical properties of quartz in sandstone are slightly larger than those of quartz in mudstone. For example, the mean elastic modulus of quartz in sandstone is 85.81 GPa and that in mudstone is 66.54 GPa. The smaller mechanical properties of quartz in mudstone are probably due to the influence of the soft kaolinite matrix in mudstone.
In this paper, the viscoelastic characteristics of kaolinite and quartz in mudstone were investigated by using the targeting nanoindentation technique. Conclusions are as follows.
(1) The soft mudstone sample studied in this paper is composed of kaolinite and quartz, in which kaolinite is soft and serves as the matrix, with hard quartz embedded in it.
(2) For broken soft rocks that cannot provide intact standardized samples, the nanoindentation method can
(4) The mechanical properties of quartz in mudstone and sandstone are slightly different, which may be due to the influence of the soft clay minerals in mudstone that soften the mechanical performance of the quartz in it. | 2020-06-11T09:10:53.275Z | 2020-06-04T00:00:00.000 | {
"year": 2020,
"sha1": "a7ae5635f7315718f1cde8775de62260867a95da",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s40789-020-00393-2.pdf",
"oa_status": "GREEN",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "3dd72ce258d2d7a330bde71cf0c37786115b7905",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": []
} |
18658426 | pes2o/s2orc | v3-fos-license | Exploring HPSG-based Treebanks for Probabilistic Parsing HPSG grammar extraction
We describe a method for the automatic extraction of a Stochastic Lexicalized Tree Insertion Grammar from a linguistically rich HPSG Treebank. The extraction method is strongly guided by HPSG-based head and argument decomposition rules. The tree anchors correspond to lexical labels encoding fine-grained information. The approach has been tested with a German corpus achieving a labeled recall of 77.33% and labeled precision of 78.27%, which is competitive to recent results reported for German parsing using the Negra Treebank.
Introduction
In (Neumann, 2003) we applied the idea of data-oriented parsing (DOP) for achieving domain-adaptation to HPSG. The basic idea of HPSG-DOP is to parse all sentences of a representative training corpus using an HPSG grammar and parser in order to automatically acquire from the parsing results a stochastic lexicalized tree grammar. The decomposition operation is guided by the head feature principle of HPSG. A major drawback of this approach was that non-headed constructions were not factored out consistently due to the lack of structural refinements. However, in (Chiang, 2000) (and others) a number of approaches for the automatic extraction of Tree Adjoining Grammars (TAGs) from treebanks are presented, which treat the factorization of modifier constructions more systematically. In this paper, we extend HPSG-DOP by combining it with Chiang's method and apply it to a linguistically rich HPSG treebank for German which is based on the recently developed Redwoods Treebank (cf. (Oepen et al., 2002) and sec. 3). To our knowledge, our approach is the first time that a rich linguistic theory together with a stochastic TAG is applied to the German language. This is not a trivial task, as recently (Dubey and Keller, 2003) and (Levy and Manning, 2004) have shown that treebank parsing for German yields substantially lower performance compared to English Penn treebank parsing, probably because differences in both the languages and the treebank annotation may be involved.
Stochastic Lexicalized Tree Grammars
The set of lexically anchored trees extracted via the original HPSG-DOP method already characterizes a lexical tree-substitution grammar, i.e., a tree-adjoining grammar with no auxiliary trees, cf. (Schabes, 1990). In (Neumann, 1998), and subsequently in (Xia, 1999), (Chen and Vijay-Shanker, 2000), and (Chiang, 2000), it is shown how tree adjoining grammars can be extracted from the Penn Treebank by performing a re-construction of the derivations using head-percolation rules. Here, we follow the approach developed in (Chiang, 2000), because his approach only requires a minimal amount of treebank preprocessing, which makes it easier to adapt it to other kinds of treebanks. For efficiency reasons, a restricted form of lexicalized tree adjoining grammars is considered, viz. lexicalized tree insertion grammars (LTIGs). LTIG has been introduced in (Schabes and Waters, 1995) as a TAG formalism in which all auxiliary trees are either left or right auxiliary trees. No elementary wrapping auxiliary trees or elementary empty auxiliary trees are allowed. Furthermore, left (right) auxiliary trees cannot be adjoined to a node that is on the spine of an elementary right (left) auxiliary tree; and there is no adjunction allowed to the right (left) of the spine of an elementary left (right) auxiliary tree (cf. figure 1). The parameters of a probabilistic TAG which control the combination of trees by substitution and adjunction are P_i(α), P_s(α | η), P_a(β | η), P_a(NONE | η), P_sa(β | η, i, X), and P_sa(STOP | η, i, X), where α ranges over initial trees, β over auxiliary trees, and η over nodes. P_i(α) is the probability of beginning a derivation with α; P_s(α | η) is the probability of substituting α at η; P_a(β | η) is the probability of adjoining β at η; P_a(NONE | η) is the probability of nothing adjoining at η; P_sa(β | η, i, X) is the probability of sister-adjoining β at the site (η, i), and P_sa(STOP | η, i, X) is the probability of no further sister-adjunction. X is the root label of the previous tree to sister-adjoin at the site (η, i), or START if none. The probability of a derivation can then be expressed as the product of the probabilities of the individual operations of the derivation, cf. (Chiang, 2004) for more details. LTIGs have context-free power and can be parsed in O(n³). Two parsers are available to us: a two-phase Earley-style LTIG parser based on (Schabes and Waters, 1995), written in Lisp at our lab, and a CKY-style bottom-up parser based on (Schabes and Waters, 1993), written in C by David Chiang. For the experiments reported in this paper in sec. 5, we are using David's parser, because currently it is much faster than the Earley-based Lisp parser and can be handled much more flexibly. The CKY parser implements sister-adjunction and uses a beam search, computing the score of an item [η, i, j] by multiplying it by the prior probability P(η). All items with score less than a given threshold compared to the best item in a cell are pruned.
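Since the derivation probability is just the product of the operation probabilities listed above, a minimal sketch of that computation is given below; the probability tables and the derivation encoding are hypothetical, made up for illustration rather than taken from the extracted grammar.

```python
# Minimal sketch: score a derivation as the product of its operation probabilities.
import math

def derivation_log_prob(operations, params):
    """operations: list of (operation_name, key) pairs; params: nested probability tables."""
    logp = 0.0
    for op, key in operations:
        logp += math.log(params[op][key])  # log-space product of the operation probabilities
    return logp

params = {
    "init":        {"alpha_S": 0.7},
    "subst":       {("alpha_NP", "NP0"): 0.4},
    "no_adjoin":   {"VP0": 0.8},
    "sister":      {("beta_ADV", ("VP0", 1), "START"): 0.1},
    "sister_stop": {(("VP0", 1), "beta_ADV"): 0.6},
}
derivation = [
    ("init", "alpha_S"),
    ("subst", ("alpha_NP", "NP0")),
    ("no_adjoin", "VP0"),
    ("sister", ("beta_ADV", ("VP0", 1), "START")),
    ("sister_stop", (("VP0", 1), "beta_ADV")),
]
print("log P(derivation) =", derivation_log_prob(derivation, params))
```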
HPSG TreeBank
The HPSG treebank (codename Eiche) we use in our study is based on a subset of the Verbmobil corpus which has been automatically annotated with a German HPSG grammar. The analyses provided by the grammar were then manually disambiguated using the Redwoods treebanking technology, cf. (Oepen et al., 2002). The underlying HPSG grammar itself was originally developed as a large-scale competence grammar of German by Stefan Müller and Walter Kasper in the context of the Speech-to-Speech machine translation project Verbmobil (see (Müller and Kasper, 2000)), and has subsequently been ported to the LKB (Copestake, 2001) and PET (Callmeier, 2000) processing platforms. In 2002, grammar development was taken over by Berthold Crysmann. Since then, the grammar has undergone several major changes, most importantly the treatment of verb placement in clausal syntax (Crysmann, 2003).
Some basic properties of German syntax
The syntax of German features a variety of phenomena that make syntactic analysis much harder than that of more configurational languages. Chief among these is the relatively free word order in which syntactic arguments of a verb can appear within the clausal domain. Assuming continuous constituents only, the argument structure is therefore only partially known in bottom-up parsing, until the second member of a discontinuous verb cluster is found. In German matrix clauses, the finite verb typically surfaces in second position, the first position being occupied by some fronted, i.e. extracted, constituent. Thus, in contrast to English, the presence of non-local dependencies is the norm rather than the exception. Taken together, permutation of arguments, modifier interspersal, discontinuous complex predicates and the almost categorical presence of non-local dependencies give rise to a considerable degree of variation in tree structure. As a consequence, we expect data-driven approaches to parsing to be more prone to the problem of data sparseness. In the context of grammar induction from treebanks, it has already been observed, e.g., by (Dubey and Keller, 2003), that methods which are highly successful in a more configurational language, such as Collins' PCFG parser for English, cf. (Collins, 1997), give less than optimal results when applied to German. This problem is further aggravated by the fact that German is a highly inflectional language, with 4 distinct cases, 3 gender and 2 number distinctions, all of which enter into agreement relations. The same holds for the verbal domain, where up to 5 person/number combinations are clearly distinguished.
The grammar
In the spirit of HPSG as a highly lexicalised grammatical theory, most of the information about an item's combinatorial potential is encoded in the lexical entries themselves, in terms of typed feature structures. Syntactic composition is then performed by means of highly general rule schemata, again implemented as typed feature structures, which specify the flow of information within syntactic structure. As a result, the DFKI German HPSG specifies only 87 phrase structure schemata, as compared to some 280+ leaf types for the definition of parameterised lexical entries 2 , augmented by 56 lexical rules and 286 inflectional rules. The rule schemata, which make up the phrase structure backbone of the HPSG grammar, correspond quite closely to principles of syntactic composition: by themselves they encode basic functional relations between daughter constituents, such as head-subject, head-complement, or head-adjunct, rather than intrinsic properties of the node itself. Thus, a rule like h-comp can be used to saturate a subcategorised complement of a preposition, a verb, or a noun. Similarly, which constituents can function as the complement daughter of the h-comp rule is mainly determined by the information represented on the SUBCAT list of the lexical head. The rule schemata merely ensure that the subcategorisation constraints formulated by the head will actually be imposed on the complement daughter, and that the saturated valence requirement will be canceled off.
Since the underlying processing platforms (LKB/PET) do not currently support the segregation of immediate dominance and linear precedence, some rule schemata are further specialised according to the position of the head: alongside h-adjunct, h-subj and h-comp rules for verb-initial clauses and prepositional phrases, the grammar also defines their head-final counterparts (adjunct-h, subj-h, comp-h), required for verb-final clauses, adjectival phrases and postpositional phrases. Within NPs some modifiers, e.g. adjectives, are licensed by adjunct-h structures, whereas PPs are licensed in post-head position only. To summarise, the rules of the CF backbone provide crucial information about the position of the syntactic head, as well as the functional status of the non-head daughter. Scrambling of complements is licensed in the German grammar by special lexical rules that permute the elements on a head's SUBCAT list. Modifier interspersal and scrambling across the subject are accounted for by permitting the application of h-subj, h-comp, and h-adjunct rules in any order. Argument composition and scrambling of arguments from different verbs is captured by shuffling the SUBCAT lists of the upstairs and downstairs verb (e.g., vcomp-h-0 ... vcomp-h-4). Discontinuous verb clusters are modelled by means of simulated verb movement ((Müller and Kasper, 2000), expanding an earlier idea proposed by (Kiss and Wesche, 1991)). Essentially, the subcategorisation requirements of the initial verb are percolated down the tree to be shuffled with those of the final verb. Finally, extraction is implemented in a fairly standard way using slash feature percolation. Slash introduction is performed, at the gap site, by a unary rule. For subjects and complements, slash introduction saturates an argument requirement of the head by inserting its LOCAL value into the SLASH list. For adjuncts, the slash introduction also inserts a local object into SLASH, but since there is no valency to be saturated, it only semantically attaches the extracted modifier to the head. At the filler site, SLASH specifications are retrieved, under unification: for semantic reasons, the grammar crucially distinguishes here between wh-fillers (wh-h rule) and non-wh-fillers (filler-h rule). Besides these more basic constructions, the grammar also provides rule schemata for different types of coordinate structures, extraposition phenomena (Crysmann, in press), dislocation, as well as some constructions more specific to German, such as auxiliary flip and partial VP fronting.
The treebank
The version of the HPSG formalism underlying the LKB and PET processing systems assumes continuous constituents only. Thus, the derivation tree of a sentence analysed by the grammar corresponds to a context-free phrase structure tree. Given a grammar, the full HPSG analysis of a sentence can therefore always be reconstructed deterministically, once the derivation tree is stored together with the unique identifiers of the lexical entries on the terminal nodes. This fact is actually exploited by the Redwoods treebanking infrastructure to provide a compact representation format. From the fully reconstructed feature structure representation of a parse, it is possible to extract additional derived structures: one such auxiliary structure that deserves particular mention is an isomorphic constituent tree decorated with more conventional node labels, such as S, NP, VP, PP, etc. These labels are obtained by testing the unifiability of a feature structure description against the AVM associated with the node, and assigning the label of the first matching description. Since these derived trees are isomorphic to the derivation history, the "functional" decorations provided by the rule backbone can be enriched straightforwardly with "categorial" information, providing for a very rich annotation. As already mentioned before, the primary data used for the construction of the Eiche treebank are taken from the Verbmobil test corpora. To give the reader an idea about the complexity of the disambiguation task, the grammar assigns on average around 16 distinct analyses to each sentence. In order to minimise duplication of annotation effort, only unique sentence strings have been incorporated into the treebank. Thus, redundancy in the data is limited to partial structures.
HPSG-Supertag Extraction
The main purpose of the grammar extraction process is twofold: 1) to extract automatically all possible supertags, i.e., an LTIG, and 2) to obtain a maximum-likelihood estimation of the parameters of the extracted LTIG. The grammar extraction process actually re-constructs the TAG derivations underlying the parse trees and is quite similar to the head-driven decomposition operation used in HPSG-DOP, but now adapted to the case of LTIG extraction.
The extraction method
Similar to (Magerman, 1995) and (Chiang, 2000), we use head-percolation and argument rules that classify, for each node η, exactly one child of η as the head and the others as either argument or modifier. However, as we will discuss below, our rules are based on HPSG and, as such, are much smaller in number and less heuristic in nature than those defined in (Chiang, 2000). Using these rules, the derivations are re-constructed using the method described in (Chiang, 2000), summarized here for convenience (a small code sketch follows the list below): • If η is an adjunct, excise the subtree rooted at η to form a modifier tree.
• If η is an argument, excise the subtree rooted at η to form an initial tree, leaving behind a substitution node.
• If η has a right corner θ which is an argument with the same label as η (and all intervening nodes are heads), excise the segment from η down to θ to form an auxiliary tree.
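As a rough illustration of these excision steps, the following Python sketch splits off arguments and modifiers while traversing a derivation tree top-down. The Node representation, the role attribute and all names are assumptions made for this example, and the auxiliary-tree (right-corner) case of the third rule is omitted for brevity:

from dataclasses import dataclass, field

@dataclass(eq=False)                       # eq=False: compare nodes by identity
class Node:
    label: str
    role: str = "head"                     # "head", "argument" or "modifier"
    children: list = field(default_factory=list)

def decompose(root, initial_trees, modifier_trees):
    # Excise modifiers and arguments as separate elementary trees; head
    # children stay in place and are traversed recursively.
    for child in list(root.children):
        if child.role == "modifier":
            root.children.remove(child)
            modifier_trees.append(decompose(child, initial_trees, modifier_trees))
        elif child.role == "argument":
            idx = root.children.index(child)
            root.children[idx] = Node(child.label, "argument")   # substitution node left behind
            initial_trees.append(decompose(child, initial_trees, modifier_trees))
        else:
            decompose(child, initial_trees, modifier_trees)
    return root

Running decompose on the root of a re-constructed derivation tree leaves the head spine as one elementary tree and collects the excised subtrees in the two lists.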
From the determined structures, supertags are generated in two steps: first the tree template (i.e., the elementary tree minus its anchor), then the anchor. From there, the probabilities are decomposed accordingly and three back-off levels are computed, as described in (Chiang, 2000). Furthermore, all words seen n or fewer times in training are treated as a single symbol UNKNOWN, in order to handle unknown words.
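The unknown-word treatment just mentioned amounts to a simple frequency cut-off. A minimal sketch, assuming a plain word-count dictionary (the threshold n and the names are illustrative):

def map_rare_words(word_counts, n=1):
    # words seen n or fewer times in training are collapsed into UNKNOWN
    return {w: (w if count > n else "UNKNOWN") for w, count in word_counts.items()}

print(map_rare_words({"Buch": 5, "schenkte": 1}))   # {'Buch': 'Buch', 'schenkte': 'UNKNOWN'}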
The rule definition
The following two tables contain the HPSG-based head and argument rules currently in use. The list of rules is processed in the order specified, and the first rule that fires is applied. A rule fires if the label of the current node matches one of the parent node labels specified in the rule list. A head rule like "SUBJ-H last *" determines that the last child of a parent node with label SUBJ-H is the head, regardless of the child's label. The head rule "* first *" means that for a parent with an arbitrary node label its leftmost child is chosen as the head daughter. This rule plays the role of a default head rule. The argument rules work in the same way. For an explanation of the linguistic content of these rules, cf. sec. 3.
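The "first rule that fires" behaviour can be sketched as follows; the rule format (parent label, position, child label) is an assumption, and only the two rules quoted in the text are reproduced:

HEAD_RULES = [
    ("SUBJ-H", "last", "*"),     # last child of a SUBJ-H parent is the head
    ("*", "first", "*"),         # default rule: leftmost child is the head
]

def find_head(parent_label, child_labels, rules=HEAD_RULES):
    # Rules are tried in order; the first rule that fires is applied.
    for parent_pat, position, child_pat in rules:
        if parent_pat not in ("*", parent_label):
            continue                          # rule does not fire for this parent
        indices = range(len(child_labels))
        if position == "last":
            indices = reversed(indices)
        for i in indices:
            if child_pat in ("*", child_labels[i]):
                return i
    return 0                                  # fall back to the leftmost child

print(find_head("SUBJ-H", ["NP", "V"]))       # -> 1 (last child, regardless of label)
print(find_head("ADJUNCT-H", ["ADJ", "N"]))   # -> 0 (default "* first *" rule)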
Experiments
We performed a ten-fold cross-validation over a corpus of 3528 sentences from the Verbmobil domain with an average sentence length of 7.2 words. The anchors of the extracted supertags consist of the preterminals of the derivation trees and are lexical labels (LEX). These are much more fine-grained than Penn Treebank preterminal tags, covering POS, morpho-syntactic, valence and other information. The UNKNOWN symbol relates to corresponding words in the training set (it maps words seen fewer than N times to this symbol), i.e., stems that only occur in the test set, but not in the training set, are not covered by the grammar. Hence, the parser will deliver no result for sentences which contain "out-of-vocabulary" stems. We trained and tested our method on the full encoding of the symbols, which among others encode values for gender, number, person, case, tense and mood. Furthermore, the symbols also encode the valency of verbs. It seems clear that using lexical labels as anchors will affect at least the coverage and recall. In order to test this, we also ran an experiment where we used only the Part-of-Speech (POS) of the lexical labels, which are retrieved from the yield of the corresponding phrase tree. This will lead to a much more coarse-grained classification of word forms, but probably also to a less restrictive tree selection. The table below presents our current results, where LR(t.)/LP(t.) (t. stands for total) is measured over all sentences, and LR(c.)/LP(c.) (c. stands for coverage) over the parsed sentences only, i.e., for sentences without out-of-vocabulary stems.
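The two scoring modes could be computed, for example, with the following helper; the constituent representation (label, start, end) and the function name are assumptions, and constituents are treated as sets for simplicity:

def labelled_scores(gold_trees, parsed_trees):
    # Each tree is a collection of (label, start, end) constituents;
    # an unparsed sentence is passed as None or an empty collection.
    # Passing all sentences gives the "total" scores; restricting both lists
    # to parsed sentences gives the "coverage" scores.
    matched = proposed = gold = 0
    for g, p in zip(gold_trees, parsed_trees):
        g, p = set(g), set(p or [])
        matched += len(g & p)
        proposed += len(p)
        gold += len(g)
    lp = matched / proposed if proposed else 0.0
    lr = matched / gold if gold else 0.0
    return lp, lr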
Discussion
To date, there is little work on full probabilistic parsing of German from treebanks. The first probabilistic treebank parser for German (using the Negra Treebank) is presented in (Dubey and Keller, 2003). They obtain (for sentence length of ≤ 40): LR=71.32% and LP=70.93% (coverage = 95.9%). (Müller et al., 2003) also present a probabilistic parser for Negra. They study the consequences that Negra has for probabilistic parsing, and concentrate on the role of two factors: (1) lexicalization and (2) grammatical functions. They report LR=71.00% and LP=72.85% (coverage = 100%). Furthermore, (Levy and Manning, 2004) present experiments on probabilistic parsing using Negra, concentrating on non-local dependency reconstruction. Their results also suggest that current state-of-the-art statistical parsing is far better on the Penn Treebank than on the Negra Treebank.
Related Work
Current stochastic approaches for HPSG basically focus on parse tree disambiguation using the English Redwoods Treebank, cf. (Oepen et al., 2002). For example, (Toutanova et al., 2002) present a parse selection method using conditional log-linear models built over the levels of derivation tree, phrase structure tree, and semantic dependency graph in order to analyse the effect of different information levels represented in the Redwoods Treebank. The best reported result (in terms of accuracy) is obtained for the derivation tree representation and by implementing an extended PCFG that conditions each node's expansion on several of its ancestors in the derivation tree (with a manually specified upper bound of 4 ancestors). They report an exact parse accuracy of 81.80% for such an extended PCFG, which was only slightly improved when combining it with a PCFG based on the semantic dependency graph representation (82.65%). In (Toutanova and Manning, 2002) this work is extended by the integration of automatic feature selection methods based on decision trees and ensembles of decision trees. Using this mechanism, they are able to improve the parse selection accuracy for the derivation-tree-based PCFG from 81.82% to 82.24%.
Conclusion and Future Work
We have presented an approach for extracting supertags from an HPSG-based treebank, and have evaluated the performance of the grammar using a stochastic LTIG parser. In future work, we will consider the following aspects. First, we will explore how the current results can be improved by either adding more information to the tree labels or by generalizing those tree labels which are currently too specific. Second, we will investigate how this technology can be used to provide the N-best derivation trees and to use them as input for the deterministic feature structure expansion step using the HPSG source grammar. In this way, a preference-based parsing schema for HPSG using a treebank model will function as a filter.
Displayed examples (German argument permutations, glossed 'the teacher gave the book to the pupil as a present'): b. weil der Lehrer das Buch dem Schüler schenkte; c. weil dem Schüler der Lehrer das Buch schenkte; d. weil dem Schüler das Buch der Lehrer schenkte; e. weil das Buch der Lehrer dem Schüler schenkte; f. weil das Buch dem Schüler der Lehrer schenkte. Almost anywhere between the arguments, modifiers can be interspersed quite freely. This situation is further complicated by the combined effects of verb cluster formation and argument composition, which permit permutation even amongst the arguments of different verbs within the cluster (a further example is glossed 'the teacher promised him to buy the book.').
Figure 2: Examples of a derivation tree and its corresponding phrase tree representation. See text below for an explanation of the different symbols.
Table 2: Arg rules for the HPSG Treebank. The symbol * stands for any label.
| 2014-10-01T00:00:00.000Z | 2006-01-01T00:00:00.000 | {
"year": 2006,
"sha1": "6318e35d29d0b96f5982bcfb68aca6aead328aae",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "ACL",
"pdf_hash": "92b0b6e5b3997095464620e2acff1ae2cca9a900",
"s2fieldsofstudy": [
"Computer Science",
"Linguistics"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
251133413 | pes2o/s2orc | v3-fos-license | Pharmacological sequestration of mitochondrial calcium uptake protects against dementia and β-amyloid neurotoxicity
All forms of dementia including Alzheimer’s disease are currently incurable. Mitochondrial dysfunction and calcium alterations are shown to be involved in the mechanism of neurodegeneration in Alzheimer’s disease. Previously we have described the ability of compound Tg-2112x to protect neurons via sequestration of mitochondrial calcium uptake and we suggest that it can also be protective against neurodegeneration and development of dementia. Using primary co-cultures of neurons and astrocytes we studied the effect of Tg-2112x and its derivative Tg-2113x on β-amyloid-induced changes in calcium signal, mitochondrial membrane potential, mitochondrial calcium, and cell death. We have found that both compounds had no effect on β-amyloid- or acetylcholine-induced calcium changes in the cytosol, although Tg-2113x, but not Tg-2112x, reduced the glutamate-induced calcium signal. Both compounds were able to reduce mitochondrial calcium uptake and protected cells against β-amyloid-induced mitochondrial depolarization and cell death. Behavioral effects of Tg-2113x on learning and memory in fear conditioning were also studied in 3 mouse models of neurodegeneration: aged (16-month-old) C57Bl/6j mice, scopolamine-induced amnesia (3-month-old mice), and 9-month-old 5xFAD mice. It was found that Tg-2113x prevented age-, scopolamine- and cerebral amyloidosis-induced decrease in fear conditioning. In addition, Tg-2113x restored fear extinction of aged mice. Thus, reduction of the mitochondrial calcium uptake protects neurons and astrocytes against β-amyloid-induced cell death and contributes to protection against dementia of different etiology. These compounds could be used as a basis for developing a novel generation of disease-modifying neuroprotective agents.
mitochondrial Na + /Ca 2+ exchange can be induced by tau as shown in cellular models 9,10 and, importantly, dysfunction of mitochondrial Ca 2+ efflux was also found in a mouse model of AD 11 . β-Amyloid is also able to induce mitochondrial calcium overload 12 and leads to a profound mitochondrial depolarization 13 and opening of the mPTP 14 . Importantly, cyclophilin D deficiency, along with increasing the threshold for mPTP induction, not only reduces mitochondrial and neuronal abnormality but also ameliorates learning and memory in Alzheimer's disease 15 . Additionally, an increased mitochondrial calcium level was shown to be a trigger for neuronal loss in a mouse model of Alzheimer's disease 16 .
Recently we found that compound Tg-2112x (Fig. 1) restricted but did not completely block mitochondrial calcium uptake and protected neurons against glutamate-induced excitotoxicity 17 . Considering the importance of mitochondrial Ca 2+ in the mechanism of neurodegeneration and dementia, in this study we used this compound and also its derivative Tg-2113x (Fig. 1) 18 , which has additional advantages, namely affinity for glutamate receptors and microtubule-stabilizing properties, to study not only how pharmacological sequestration of Ca 2+ in mitochondria protects neurons against β-amyloid-induced cell death in primary neuronal cell cultures but also how Tg-2113x influences memory in mouse models of dementia.
Derivatives of carbazoles and ɣ-carbolines, particularly the known neuroprotector Dimebon (Latrepirdine, Fig. 1) and its derivative DF-302, have high pro-neurogenic and neuroprotective activities which are tightly connected with a mitoprotective effect [19][20][21][22][23] . On the other hand, Memantine (3,5-dimethyltricyclo[3.3.1.1 3,7 ]decane-1-amine, Fig. 1) is one of the approved drugs for treating dementia. Memantine also inhibits the calcium-induced mitochondrial permeability transition and increases the calcium retention capacity of mitochondria 20,24 . In our work, Memantine, containing a free amino group, was used as a basis to design new conjugates with ɣ-carbolines and carbazoles, and among them Tg-2113x was chosen as one of the lead compounds according to previous in vitro studies 18 .
Following previous in vitro assays, we expected that Tg-2113x could exhibit cognition-stimulating and neuroprotective properties. Thus, Tg-2113x has been shown to increase the rate of tubulin polymerization into microtubules of normal structure and thereby stabilize microtubules, to bind effectively to the NMDA (N-methyl-D-aspartic acid) subtype of glutamate receptors, to selectively inhibit butyrylcholinesterase, and to increase the resistance of mitochondria to the induction of the mitochondrial permeability transition (MPT) 18 .
In the present work, we have explored the potential neuroprotective effect of Tg-2113x in cellular models of neurodegeneration with calcium overload and β-amyloid toxicity, and in in vivo models of cognitive dysfunction. For the latter, we have used three different mouse models: (1) age-related decline in cognitive function in 16-month-old C57Bl/6j mice; (2) scopolamine-induced amnesia in 3-month-old C57Bl/6j mice; and (3) 5xFAD mice, a transgenic model of cerebral amyloidosis and Alzheimer's disease.
We evaluated the effectiveness of Tg-2113x in aged mice, since age is considered one of the main etiological factors in the development of dementia. An important advantage of the model is the natural development of complex molecular abnormalities, which are not yet fully understood and which lead to behavioral changes similar to the clinical signs of dementia 25 . Senile dementia is largely associated with an impairment of mitochondrial functions, in particular with a reduced threshold for induction of the mPTP, and with the disruption of cholinergic transmission [26][27][28][29] , which, according to our in vitro studies, can be counteracted by Tg-2113x.
We chose scopolamine-induced amnesia as a model of the cholinergic impairment that often accompanies normal and pathological aging, and dementia 30 . Scopolamine is a non-selective, competitive inhibitor of muscarinic receptors and is widely used in preclinical studies for a "cholinergic" model of memory impairment [31][32][33][34][35] . It is believed that the amnestic effect of scopolamine can also be explained by the decreased activity of NMDA receptors 36 . The activity of glutamate receptors is important for the development of long-term potentiation (LTP), the memory formation mechanism 37 . Both glutamate binding and an increase of the membrane potential to a certain level, leading to the removal of the magnesium block from the channel, are required to activate NMDA receptors. The process of increasing the potential is regulated by low-conductance calcium-activated potassium channels, through which potassium ions leave the cell. Activation of M1 muscarinic receptors leads to the loss of sensitivity to calcium ions, and the calcium-activated potassium channels cease to work. Scopolamine blocks the M1 receptors and leaves the channels open, which makes it difficult to maintain LTP, thereby causing amnesia 38 . 5xFAD is considered to be one of the most aggressive models of the hereditary form of AD, or of cerebral amyloidogenesis as one of the possible triggers of the senile form of AD. In these mice, the biochemical markers of dementia appear 10-12 months earlier than in other transgenic lines such as PDAPP, Tg2576 and TgAPP/Ld/2 39 . They express five mutations in the human beta-amyloid precursor protein (AβPP) and presenilin (PS1, one of the four core subunits of γ-secretase) that promote the amplified production of pathological forms of β-amyloid: 3 mutations in the human APP (the Swedish mutation K670N/M671L, the Florida mutation I716V, and the London mutation V717I, each named for the place where it was found) and 2 mutations in PS1 (M146L and L286V) 40 . In this model, the Swedish mutation increases the production of all Aβ, while the other four mutations increase the production of the especially neurotoxic Aβ42. Thus, the simultaneous combination of many mutations leads to the formation of amyloid plaques in 1.5-2-month-old mice, and around the age of 6 months Aβ fills most of the hippocampus [41][42][43] .
It is known that not only associative learning, but also the extinction of memory, i.e., the suppression of irrelevant information, is important for normal cognitive functions, and this process is impaired in elderly people and patients with dementia. But while the processes of memory consolidation are widely studied and prospective therapeutic drugs are offered, the pathology of extinction processes has been much less explored, both by researchers and by pharmaceutical companies. Therefore, in this paper, we used a protocol for fear conditioning which includes, in addition to the conditioning session, an extinction session, to understand the protective role of Tg-2113x (Fig. 7a).
Methods
Mitochondrial isolation. Rat brain non-synaptosomal mitochondria were isolated by centrifugation in a Percoll gradient 21,44 . In brief, a rat was euthanized by carbon dioxide inhalation and the brain was quickly removed and homogenized in an ice-cold isolation buffer (IB), pH 7.4: 75 mM sucrose, 225 mM mannitol, 10 mM K-HEPES with addition of 0.5 mM EGTA, 0.5 mM EDTA and 1 mg/ml BSA, and the homogenate was centrifuged for 11 min at 1500 × g. The pellet was homogenized in half of the volume of the same buffer and centrifugation was repeated. The combined supernatants were centrifuged at 10,500 × g for 11 min. The resulting pellet was resuspended in 12% Percoll, layered onto a Percoll gradient (40-23%) and centrifuged at 30,700 × g at 4 °C for 15 min. The mitochondrial layer was collected and washed twice using centrifugation. The final pellet was resuspended in the IB containing 0.02 mM EGTA. The mitochondrial protein concentration was determined using a biuret procedure with bovine serum albumin as the standard.
Measurements of mitochondrial potential in isolated rat brain mitochondria. Safranine O (10 µM) was used as a membrane potential probe 45 . Fluorescence intensity at 580 nm (excitation at 520 nm) was measured with a Victor3 multi-well fluorescence plate reader (Perkin Elmer). Mitochondrial protein concentration was 0.2 mg/ml. The medium for measurements contained 75 mM sucrose, 225 mM mannitol, 10 mM K-HEPES (pH 7.4), 0.02 mM EGTA, 1 mM KH 2 PO 4 . After a 4-min incubation, substrates of the respiratory chain (5 mM glutamate, 2 mM malate and 5 mM succinate) were added to produce the mitochondrial potential. Then different concentrations of the compound or the same volume of vehicle (DMSO) were injected into the mitochondrial suspension. After 4 min, 12.5 µM CaCl 2 was added to each probe to induce the depolarization of mitochondria which leads to the opening of the mPTP. Results on mitochondrial membrane potential changes after calcium addition are presented as the mean ± SD, where the mean is the maximum rate of change in fluorescence, normalized between the control probe and the rate of change in fluorescence before calcium addition.
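The normalization described in the last sentence could be expressed, for example, as below; this is only a sketch, and the variable names and the exact formula are assumptions based on the description:

def normalized_depolarization(rate_after_ca, rate_before_ca, rate_control):
    # Maximum rate of fluorescence change after CaCl2 addition, scaled
    # between the pre-addition rate (0) and the control-probe rate (1).
    return (rate_after_ca - rate_before_ca) / (rate_control - rate_before_ca)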
Ionomycin-induced calcium overload in differentiated neuroblastoma SH-SY5Y cell culture. SH-SY5Y neuroblastoma cells were cultured in Dulbecco's modified Eagle's medium (DMEM) containing high glucose (25 mM), l-glutamine (2 mM), and sodium pyruvate (1 mM). This medium was supplemented with 10% (v/v) heat-inactivated fetal calf serum and 1% penicillin streptomycin. Cells were cultivated at 37 °C with 5% CO 2 at saturated humidity in 96-well plates. The differentiation of SH-SY5Y cells was carried out in DMEM containing high glucose (25 mM), l-glutamine (4 mM), 1% P/S, and no sodium pyruvate. The medium was further supplemented with 10 µM all-trans retinoic acid before adding the medium to the cells. Differentiation lasted 4 days, on the 5th day the experiment was carried out. Cells were incubated with different concentrations of the test compound or an equal volume of the vehicle (< 1% of the whole volume of the medium under the layer of cells) and 3 µM ionomycin for 24 h. The cell viability was evaluated as the dehydrogenase activity with the 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) assay and the absorbance was measured at 570 nm using a Victor microplate reader (Perkin Elmer).
Primary neuronal cell culture. Mixed cultures of hippocampal and cortical neurons and glial cells were prepared as described previously 17 , in accordance with the Animals (Scientific Procedures) Act of 1986 and with approval of the University College London Animal Ethics Committee. Hippocampi and cortex were removed into ice-cold PBS (Ca 2+ , Mg 2+ -free, Invitrogen, Paisley, UK). The tissue was minced and trypsinised (0.25% for 15 min at 37 °C), triturated and plated on poly-d-lysine-coated coverslips and cultured in Neurobasal A medium (Invitrogen, Paisley, UK) supplemented with B-27 (Invitrogen, Paisley, UK) and 2 mM l-glutamine. Cultures were maintained at 37 °C in a humidified atmosphere of 5% CO 2 and 95% air, fed once a week and maintained for a minimum of 12 days before experimental use to ensure expression of glutamate and other receptors. Neurons were easily distinguishable from glia: they appeared phase bright, had smooth rounded somata and distinct processes, and lay just above the focal plane of the glial layer. Cells were used at 12-15 days in vitro (DIV) unless otherwise stated.
Imaging [Ca 2+ ] i and mitochondrial membrane potential. For these measurements, Rhodamine 123 (Rh123; 1 µM, Molecular Probes) was added into the cultures during the last 15 min of the Fura-2 loading period, and the cells were then washed 3-5 times before the experiment. Fluorescence measurements were obtained on an epifluorescence inverted microscope equipped with a 20× fluorite objective. [Ca 2+ ] i and ∆ψ m were monitored in single cells using excitation light provided by a Xenon arc lamp, the beam passing sequentially through 10 nm band pass filters centred at 340, 380 and 490 nm housed in a computer-controlled filter wheel (Cairn Research, Kent, UK). Emitted fluorescence light was reflected through a 515 nm long-pass filter to a cooled CCD camera (Retiga, QImaging, Canada). All imaging data were collected and analysed using software from Andor (Belfast, UK). The Fura-2 or Fura-ff data have not been calibrated in terms of [Ca 2+ ] i because of the uncertainty arising from the use of different calibration techniques and are presented as the 340/380 nm ratio. Accumulation of Rh123 in polarised mitochondria quenches the fluorescent signal in the cytosol; in response to depolarisation the fluorescence signal is dequenched; an increase in Rh123 signal in the whole neuron therefore indicates mitochondrial depolarisation. We have normalised the signals between the resting level (set to 0) and the maximal signal generated in response to the protonophore FCCP (1 μM; set to 100%).
Imaging cytosolic and mitochondrial Ca 2+ . Cortical neurons were loaded for 30 min at room temperature with 5 μM Fluo-4 AM, x-rhod-1 AM and 0.005% Pluronic and confocal images were obtained using a Zeiss 710 CLSM using a 40× oil immersion objective. The 488 nm Argon laser line was used to excite Fluo-4 fluorescence which was measured at 505-550 nm. Illumination intensity was kept to a minimum (at 0.1-0.2% of laser output) to avoid phototoxicity and the pinhole set to give an optical slice of ~ 2 µm. For x-rhod-1 measurements the 563 nm excitation and 580-630 nm emission were used. All data presented were obtained from at least 5 coverslips and 2-3 different cell preparations.
Toxicity experiments. For toxicity assays the cells were loaded simultaneously with 20 µM propidium iodide (PI), which is excluded from viable cells but exhibits a red fluorescence following a loss of membrane integrity, and 4.5 µM Hoechst 33342 (Molecular Probes, Eugene, OR), which labels nuclei blue, to count the total number of cells. Using phase contrast optics, a bright field image allowed identification of neurones, which look quite different to the flatter glial component and also lie in a different focal plane, above the glial layer. A total number of 600-800 neurones were counted in 20-25 fields of each coverslip. Each experiment was repeated four or more times using separate cultures.
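Quantifying cell death from these counts amounts to a simple ratio of PI-positive to Hoechst-labelled cells. A minimal sketch, with illustrative field counts (the names are assumptions):

def percent_dead(pi_positive_counts, hoechst_counts):
    # Percentage of propidium-iodide-positive cells pooled over all
    # counted fields of one coverslip.
    dead = sum(pi_positive_counts)
    total = sum(hoechst_counts)
    return 100.0 * dead / total if total else 0.0

print(percent_dead([12, 9, 15], [210, 198, 225]))   # example field counts -> ~5.7 %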
In vivo studies of the effectiveness of Tg-2113x. Animals. All animal procedures were carried out in accordance with the local regulations and approved by the Bioethics Committee of IPAC RAS (Approval No. 41, dated 29 November 2019). 3- and 16-month-old male C57BL/6j mice were used in the study. All animals were housed individually, under a 12 h light-dark cycle (lights on: 7:00 a.m.) with food and water ad libitum, under constant controlled laboratory conditions (22 ± 1 °C, 55% humidity).
Mice were administered Tg-2113x and scopolamine in the vivarium from 8:30 to 9:00. Behavioral studies were carried out after at least 1-h acclimatization time to the experimental room, in the dark, from 9:00 to 18:00. All efforts were undertaken to minimize the potential discomfort of experimental animals.
The equipment of the "Centre for Collective Use of IPAC RAS" was used in this work.
Study design. Tg-2113x was dissolved in dimethyl sulfoxide and sterile 0.9% saline (DMSO:NaCl = 1:20) and administered intraperitoneally at an injection volume of 0.01 ml per 10 g of body weight. Scopolamine was diluted with sterile 0.9% saline and administered subcutaneously, 0.05 ml per 10 g of body weight. Mice were treated with the drugs for 5 consecutive days, while on the 3rd day mice were exposed to the fear conditioning test (Fig. 1A). The choice of the administration protocol and the doses (Tg-2113x: 0.5 mg/kg/day; scopolamine: 0.1 mg/kg/day) was based on pilot experiments (data not shown). Given that the experimenter is a contextual signal for animals 46 , all experiments were conducted by one person who was at the same place throughout the test.
In addition, Tg-2113x was investigated in the novel cage, dark-light box and Porsolt's tests to eliminate potential anxiety-and depressive-like effects.
Fear conditioning test. In the fear conditioning paradigm, mice were trained with a 2 s foot-shock (0.5 mA, 50 Hz) delivered by a shocker (Evolocus, Terrytown, NY, USA) after a 2-min acclimatization period. The apparatus (Open Science, Russia) consisted of a transparent plastic cubicle (25 × 25 × 50 cm) with a stainless-steel grid floor (33 rods, 2 mm in diameter). After delivery of the current, the mouse was immediately placed back into the home cage. Twenty-four hours later, freezing behavior was scored in a 180-s recall session. The occurrence of freezing behavior was assessed every 10 s, each 10-s period was assigned to a freezing or non-freezing period, and the percentage of time spent freezing was calculated. Immediately after the recall session, animals were exposed to a memory extinguishing procedure. For this, mice were left for another 7 min in the apparatus, so the total procedure of memory extinction was 10 min long. During this period, no foot shock was applied, and animals were free to explore the apparatus. Twenty-four hours later, freezing behavior was scored again in a 180-s recall of extinction session as in the previous trial and the percentage of time spent freezing was calculated 47 .
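The freezing score described above reduces to a fraction of 10-s bins marked as freezing. A minimal sketch, assuming one boolean per observation period (names are illustrative):

def freezing_percentage(bins_10s):
    # bins_10s: one boolean per 10-s observation period (True = freezing).
    return 100.0 * sum(bool(b) for b in bins_10s) / len(bins_10s)

print(freezing_percentage([True] * 7 + [False] * 11))   # 180-s session (18 bins) -> ~38.9 %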
Results
Tg-2112x and Tg-2113x did not change the amplitude of the acetylcholine-induced calcium signal in cortical neurons. We have tested the effect of the compounds on the major receptors which were shown to be involved in the mechanism of pathology of neurodegeneration. Thus, 1 µM acetylcholine (Ach) induced a peak in [Ca 2+ ] c of primary cortical neurons (n = 155 cells; Fig. 2A) 49 . Pre-incubation of the cells with 0.5-5 µM Tg-2113x or 0.5-5 µM Tg-2112x had no effect on the number of neurons showing calcium signals or the amplitude of Ach-induced [Ca 2+ ] c changes ( Fig. 2A-C).
Tg-2113x partially inhibits glutamate-induced calcium signals in neurons. Transient application of 5 µM glutamate to the cortical co-culture induced a rise in [Ca 2+ ] c typical for this concentration in neurons but not in astrocytes (Fig. 2D). In agreement with previous data 50 , 0.5 µM Tg-2112x did not reduce the glutamate-induced calcium signal in neurons (n = 114 neurons; Fig. 1F). In contrast, pre-incubation of the cells with 0.5 µM Tg-2113x reduced the amplitude of the glutamate-induced calcium signal (n = 165 neurons; from 1.55 ± 0.2 Fura-2 ratio to 0.6 ± 0.07; p < 0.01; Fig. 2E,F). Thus, Tg-2113x partially inhibits glutamate-induced calcium signal that may be explained by a previously shown effect of this compound on NMDA receptors 51 .
Tg-2113x inhibits calcium uptake in mitochondria of permeabilized neurons and astrocytes.
To confirm that the effects seen in the experiments with intact neurons are directly related to the changes in the activity of the mitochondrial Ca 2+ transport we measured Ca 2+ uptake in mitochondria of permeabilized cells. Application of buffered Ca 2+ (0.2 μM and 1 µM, n = 6 experiments; Fig. 6A) increased fluorescence of the mitochondrial calcium marker Rhod-5 N. Addition of the same concentrations of CaCl 2 to permeabilized neurons and astrocytes in the presence of Tg-2113x (0.5 µM; N = 5 experiments) significantly reduced the effect on [Ca 2+ ] m . Importantly, these mitochondria were still viable, because the electrogenic ionophore Ferutinin 56,57 induced a further increase in mitochondrial calcium (Fig. 6B). Thus, Tg-2113x inhibits physiological influx into mitochondria while an alternative transport, such as the one induced by the electrogenic calcium ionophore Ferutinin is still able to produce an increase of Ca 2+ in these mitochondria.
Tg-2113x decreases the Ca 2+ -induced depolarization of rat brain mitochondria and protects cells from calcium overload.
Previously we showed that the derivative of tetrahydrocarbazole and aminoadamantane (Tg-2112x) effectively inhibits the opening of mPTP in brain mitochondria and increases their calcium retention capacity 17 . The influence of the derivative of tetrahydrocarbazole and dimethylaminoadamantane (Tg-2113x) on calcium-induced depolarization was also studied. We observed that this compound did not influence the mitochondrial potential at all studied concentrations, but in concentrations from 1 µM and higher decreased the calcium-induced depolarization of mitochondria (Fig. 6C,D). This allows us to conclude that Tg-2113x, like the related compound Tg-2112x can delay mPTP opening.
In vivo studies of the effectiveness of Tg-2113x. Tg-2113x neutralizes scopolamine-induced amnesia
in young mice, but does not affect the memory of the non-scopolamine animals. In the model of scopolamine-induced amnesia in 3-month-old C57Bl/6j mice (Fig. 7a), we found a significant group difference in the freezing behavior (Fig. 7b). RM two-way ANOVA followed by Sidak's multiple comparisons showed that associative learning (freezing in test 1) was significantly different between control and scopolamine (ScA)-treated mice (P = 0.0069), between ScA-only and ScA + Tg-2113x-treated mice (P = 0.0229), and between ScA and Tg-2113x-treated mice (P = 0.0027). But extinction (freezing in test 2) was significantly different only between control and ScA-treated mice (P = 0.0497) and between ScA and Tg-2113x-treated mice (P = 0.0267).
In vivo experiments showed decreased freezing in scopolamine-treated mice (Fig. 7b), suggesting an impairment of the process of remembering a dangerous context. Tg-2113x administration prevented the scopolamine-induced decrease in freezing, i.e., it prevented memory impairment. Tg-2113x administration to the "Non-Scopolamine" group of mice did not change the freezing behavior (Fig. 7b), which we regarded as indicating no influence on normal memory processes.
All groups showed significantly decreased freezing during test 2 in comparison to test 1 (P = 0.0015 for control mice; P = 0.0015 for Tg-2113x-treated mice; P = 0.0028 for ScA-treated mice; P < 0.0001 for ScA + Tg-2113x-treated mice), suggesting that neither treatment alters memory extinction; it is therefore impossible to draw conclusions on the impact of Tg-2113x on this form of memory in young and/or scopolamine-treated mice. The lack of changes in extinction in scopolamine-treated mice is consistent with other authors' work 58 .
The results confirm the neuroprotective effects of Tg-2113x and suggest that it does not improve cognitive function of young healthy mice without neurodegenerative pathology.
Tg-2113x improves contextual memory and its extinction in 16-month-old mice. In the aged mice, RM two-way ANOVA followed by Sidak's multiple comparisons showed that freezing differed significantly between control and Tg-2113x-treated mice in test 1 (P = 0.0022, Fig. 7c) but not in test 2; moreover, freezing was significantly different between test 1 and test 2 for Tg-2113x-treated mice (P = 0.0081, Fig. 7c), but not for control 16-month-old mice (P = 0.3851, Fig. 7c). In test 1, Tg-2113x-treated mice spent a significantly higher percentage of time freezing than vehicle-treated mice, which suggests that Tg-2113x improved contextual memory of 16-month-old mice. While in the control group the percentage of freezing did not differ between tests 1 and 2 (Fig. 7c), Tg-2113x-treated mice spent significantly less time freezing in test 2, demonstrating effective fear extinction. Based on these data, we hypothesized that Tg-2113x can restore the age-related decline in memory extinction and contribute to greater plasticity of cognitive processes with age.
The results suggest that Tg-2113x prevents Aβ-induced impairment of fear conditioning, but not of fear extinction, in 5xFAD mice. Tg-2113x does not affect the exploratory, anxiety-, and depressive-like behaviour of young mice. To further evaluate effects of Tg-2113x on the behavior of mice that could influence the results, and to discard potential undesirable effects, we performed additional tests. The possible effect of Tg-2113x on depressive-like behavior of mice was investigated with the Porsolt's test. There was no difference in the latency and floating duration between Tg-2113x- and vehicle-treated groups (t = 1.809, df = 14, P = 0.0920 and t = 0.9651, df = 14, P = 0.3509, respectively, unpaired t-test, Fig. 8A). Tg-2113x did not alter anxiety-like behavior of mice in the dark-light box, as shown by no difference in the latency of the first exit into the light compartment and the time spent there between Tg-2113x- and vehicle-treated groups (t = 1.127, df = 13, P = 0.28 and t = 0.2322, df = 14, P = 0.8197, respectively, unpaired t-test, Fig. 8B). Moreover, animals were scored for exploratory rears in the novel cage test. There was no significant difference in exploratory rears between the experimental groups (control: M = 10.0 (7.0; 13.0), Tg-2113x: M = 11.5 (10.75; 12.50); p = 0.2762; Mann-Whitney test, Fig. 8C). Thus, it can be suggested that Tg-2113x does not affect the general behavior of young healthy mice.
Discussion
Dementia has a multifactorial pathogenesis, and no model includes all disease aspects, but only partially mimics pathological and/or etiologic factors 59,60 . Therefore, we believe that, to comprehensively study new potential treatments, it is critical to use numerous and diverse models of the disease. Following this idea, this study was performed in various mouse models of neurodegenerative disease, induced by age (16-month-old C57Bl/6j mice), cholinergic dysfunction (scopolamine-induced amnesia in 3-month-old C57Bl/6j mice) or Aβ plaques (9-month-old 5xFAD mice). Mice were exposed to tests to assess cognitive function, cognitive plasticity and general behavior. It was shown that 5 days of dosing with 0.5 mg/kg/day of Tg-2113x improves cognitive function of mice in models of dementia of different etiology but does not affect the memory and general behavior of young healthy animals. The latter finding suggests a decreased risk of undesirable side-effects of Tg-2113x in the clinic. The protective effect of Tg-2113x in all dementia models could be mostly explained by its ability to limit calcium uptake by mitochondria. The effect of Tg-2113x on glutamate receptors may also have an implication, but it cannot be the sole mechanism of protection. Thus, in the model of scopolamine-induced amnesia in 3-month-old C57Bl/6j mice with cholinergic dysfunction, inhibition of glutamate receptors may not have a significant effect, and Tg-2113x has no effect on the Ach-induced calcium signal (Fig. 1). The data with Aβ plaques (9-month-old 5xFAD mice) are also in agreement with the effects of Tg-2113x on the Aβ-induced calcium signal, mitochondrial membrane potential and mitochondrial calcium, and suggest that inhibition of mitochondrial calcium uptake could be a major mechanism of cell protection.
Both in humans and animals, solving a particular problem is a choice between currently relevant and irrelevant information. This choice is carried out by controlled inhibitory processes that suppress irrelevant information. In aged persons or patients with dementia, disruption of these processes leads to a disorder of memory extinction and competition between pieces of information, and therefore to difficulties in solving a problem 61 . Therefore, here we chose a fear conditioning protocol that includes a memory extinction session. In comparison to young mice, aged mice show a decrease in memory function, as reported here and in other works 62 . But unlike our study, authors usually used animals about six months older than ours, and our results suggest that age-related cognitive dysfunctions can already be detected in 16-month-old C57Bl/6j males. Furthermore, as far as we know, age changes in memory extinction have not been evaluated in classical Pavlovian conditioning, and our work is probably one of the first to show an impairment of memory extinction in aged mice. In our view, this impairment reflects a disorder of the controlled processes that suppress irrelevant information, mentioned above. Another model which is designed to mimic age-related dementia, scopolamine-induced amnesia 63 , did not induce changes in fear extinction of mice, so 16-month-old mice can be proposed as a valid model of age-induced dysfunction of cognitive plasticity. Tg-2113x was shown to improve memory conditioning and extinction in aged mice, implying recovery of age-impaired cognitive plasticity.
During memory extinction, two processes relating to the same issue occur: the consolidation of a new memory and the suppression of the irrelevant one. However, the primary memory remains, and this distinguishes extinction from forgetting 64 . The suppression of irrelevant memory can be realized via the GABAergic system. Indeed, GABA antagonists were shown to impede extinction, and GABA agonists facilitate it 65 . Moreover, memory extinction is associated with changes in the expression of genes associated with the GABAergic system. For example, decreases in the mRNA levels of the α2 and β2 subunits of GABA receptors, of glutamate decarboxylase, which catalyzes the conversion of glutamate to GABA, and of the GABA transporter were observed 66 . At the same time, both the formation of memory and its extinction involve the glutamatergic system. While administration of NMDA receptor antagonists blocks the extinction of conditioned fear, NMDA agonists facilitate it 67 . There is evidence that the GluN2B subunit of the NMDA receptor is specifically involved in this process 66 . Biochemical experiments have also shown that extinction is associated with a decrease in the expression of AMPA receptors (GluA1 and GluA2) 68 . As NMDA receptors are among the targets of Tg-2113x, it can be suggested that its positive effect on the cognitive plasticity of mice is mediated by modulation of the glutamatergic system. On the other hand, prevention of the glutamate-induced calcium influx into neurons and the mitoprotective action of Tg-2113x may be the basis of the neuroprotective effect in experiments in vivo, especially under conditions of scopolamine-induced amnesia.
The effectiveness of Tg-2113x in scopolamine-treated animals and in aged mice can be explained by its ability to modulate the cholinergic system. As we have shown, Tg-2113x does not affect the acetylcholine-induced neuronal calcium signal, but it inhibits butyrylcholinesterase, which catalyzes the hydrolysis of acetylcholine, thereby increasing the acetylcholine level essential for memory formation 18,51 . The efficacy of selective inhibitors of butyrylcholinesterase as cognitive-stimulating compounds has already been demonstrated by other authors 69 . Moreover, in the models of scopolamine-induced amnesia and 5xFAD mice, the previously proposed neuroprotective functions of Tg-2113x were confirmed.
Data availability
All data supporting the conclusions of this manuscript are provided in the text and figures. | 2022-07-29T06:17:42.326Z | 2022-07-27T00:00:00.000 | {
"year": 2022,
"sha1": "75472bd23feb550a0c005ecdb79b0eb08233f3d5",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "0d66b75689eabd3a0dcc2283b75e9dd98cdebc25",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
233079464 | pes2o/s2orc | v3-fos-license | Assessment of cases of Herpes zoster- A clinical study
Background: Herpes zoster caused by the neurodermotropic virus called "varicella zoster virus" is distributed worldwide. The present study was conducted to assess cases of Herpes zoster. Materials & Methods: The present study was conducted on 75 cases of Herpes Zoster reported to the department. The segment of involvement, morphology and pattern of the lesions, regional lymph node enlargement, motor complications, dissemination of the lesions etc. were recorded in all patients. Results: Out of 75 patients, males were 45 and females were 30. Age group 0-10 years had 2 patients, 11-20 had 4, 21-30 had 10, 31-40 had 16, 41-50 had 18, 51-60 had 20 and >60 years had 5 patients. The difference was significant (P < 0.05). There was involvement of the cranial nerve in 30, cervical in 8, thoracic in 15, lumbar in 6, sacral in 10, cervico-thoracic in 4 and thoraco-lumbar in 2 cases. The difference was significant (P < 0.05). Conclusion: The authors found that the commonly involved age group was 41-50 years and the cranial nerve was most often affected.
Introduction
Herpes zoster caused by the neurodermotropic virus called "varicella zoster virus" is distributed worldwide. This benign localized viral disease has been recognized as a distinct entity since ancient times. It manifests as a result of reactivation of the virus lying dormant in the sensory ganglion following a clinical or subclinical varicella (chicken pox) infection early in life or occasionally in utero [1] . Replication and transmission of the virus in the nerves and skin lead to the cardinal features of herpes zoster-pain and rash. In some people the rash is preceded by a prodromal phase lasting 48-72 hours or longer, consisting of throbbing pain and paraesthesia in the region of the affected sensory nerve [2] . This may sometimes be confused with other acute medical conditions such as angina, cholecystitis, or renal colic, depending on the dermatome involved. The rash of herpes zoster is typically vesicular, affects a single dermatome, and lasts for three to five days before the lesions pustulate and scab [3] . Complications due to the involvement of ophthalmic, splanchnic, cerebral, and motor nerves are reported in herpes zoster. However, the most commonly seen complication is post-herpetic neuralgia [4] . Vaccination against herpes zoster virus is the mainstay of prevention of herpes zoster infection. Many treatment modalities have been developed for herpes zoster infection as well as for post-herpetic neuralgia. Nevertheless, approximately 22% of patients with herpes zoster still suffer from post-herpetic neuralgia [5] . A rise in the incidence of herpes zoster and post-herpetic neuralgia is expected with the increase in life expectancy and the increase in prevalence of the modern-day epidemic human immunodeficiency virus (HIV). Wider use of varicella vaccination leads to reduced prevalence of varicella, thereby resulting in reduced chances of periodic re-exposure to varicella. This in turn can reduce natural boosting of immunity and lead to an increased incidence of herpes zoster [6] . The present study was conducted to assess cases of Herpes zoster.
Materials & Methods
The present study was conducted in the department of Dermatology. It comprised 75 cases of Herpes Zoster reported to the department. The study was approved by the institutional ethical committee. All were informed regarding the study and written consent was obtained. Data such as name, age, gender etc. were recorded. The segment of involvement, morphology and pattern of the lesions, regional lymph node enlargement, motor complications, dissemination of the lesions etc. were recorded in all patients. Results were subjected to statistical analysis. A P value less than 0.05 was considered significant.
Results
Table III and graph II show that there was involvement of the cranial nerve in 30, cervical in 8, thoracic in 15, lumbar in 6, sacral in 10, cervico-thoracic in 4 and thoraco-lumbar in 2 cases. The difference was significant (P < 0.05).
Discussion
Herpes zoster is a clinical manifestation of the reactivation of latent varicella zoster virus infection [7] . It is a cause of considerable morbidity, especially in elderly patients, and can be fatal in immunosuppressed or critically ill patients. The pain associated with herpes zoster can be debilitating, with a serious impact on quality of life, and the economic costs of managing the disease represent an important burden on both health services and society [8] .
Herpes zoster, or shingles, is the painful eruption of a rash, usually unilateral, caused by the varicella zoster virus. Varicella zoster virus usually persists asymptomatically in the dorsal root ganglia of anyone who has had chickenpox, reactivating from its dormant state in about 25% of people to travel along the sensory nerve fibres and cause vesicular lesions in the dermatome supplied by that nerve. Herpes zoster is more common in people with diminished cell mediated immunity. This includes elderly people, patients with lymphoma, those receiving chemotherapy or steroids, and people with HIV. In contrast to herpes simplex, precise triggers for herpes zoster are not known [9] . The present study was conducted to assess cases of Herpes zoster.
In this study, out of 75 patients, males were 45 and females were 30. We found that age group 0-10 years had 2 patients, 11-20 had 4, 21-30 had 10, 31-40 had 16, 41-50 had 18, 51-60 had 20 and >60 years had 5 patients. Abdul et al. [10] analyzed the incidence, pattern of occurrence and evolution of herpes zoster with special attention to provocative factors. Incidence of herpes zoster was mainly in the fourth and third decades of life. A definite history of chicken pox was present in only 63.4% of cases. In the majority (70%) herpes zoster occurred spontaneously. In 30% of cases, immuno-suppression due to chemotherapy, malignancy, HIV infection or diabetes mellitus was observed. The commonest segment affected was thoracic (42.4%) followed by cranial (28.2%) and cervical (12.1%). The majority resolved in 7-14 days, except in the immunosuppressed. 34.6% of the patients had complications such as secondary bacterial infection, post-herpetic neuralgia, and motor weakness. Ten patients had HIV infection as a provocative factor. We observed that there was involvement of the cranial nerve in 30, cervical in 8, thoracic in 15, lumbar in 6, sacral in 10, cervico-thoracic in 4 and thoraco-lumbar in 2 cases. Gauthier et al. [11] reported that 19.5% of herpes zoster patients develop PHN1 (pain persisting at least 1 month after rash onset) and 13.7% develop PHN3 (pain persisting at least 3 months after rash onset). Similarly, an Italian study showed a proportion of 9.4% for PHN1 and 7.2% for PHN3 among immuno-competent patients with herpes zoster. Herpes zoster can usually be diagnosed clinically. However, early zoster and zoster presenting in the sacral and cervical area may be difficult to distinguish from herpes simplex. In these cases, the diagnosis can be confirmed by sending swabs to the local virology laboratory, but treatment should not be delayed while waiting for test results. The top of the lesion should be lifted and a sterile swab used to rub the base of the lesion. The swab should then be wiped across a sterile glass slide or over three wells on a Teflon coated slide. The slide should be air dried and sent to the laboratory for staining with immunofluorescent antibodies [12] . | 2020-05-21T00:04:59.503Z | 2019-01-01T00:00:00.000 | {
"year": 2019,
"sha1": "1b7b04c720b0dcd6b13c6eb9327e8459f3dfdab1",
"oa_license": null,
"oa_url": "http://www.dermatologypaper.com/article/view/15/2-1-1",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "b65c3e57dee5d019b6640d2e48a98e7c5c45f5d9",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
CORRELATION BETWEEN PIN MISALIGNMENT AND CRACK LENGTH IN THT SOLDER JOINTS
In this manuscript, correlations were sought between pin misalignments relative to PCB bores and crack propagation after cyclic thermal shock tests in THT solder joints produced from lead-free solder alloys. In total, 7 compositions were examined, including SAC solders with varying Ag, Cu and Ni contents. Crack propagation was initiated by cyclic thermal shock tests with -40°C / +125°C temperature profiles. Pin misalignments relative to the bores were characterized with three attributes obtained from one section of the examined solder joints. Cracks typically originated at the solder/pin or solder/bore interfaces and propagated within the solder. It was shown that pin misalignments did not have an effect on crack propagation and thus on the solder joints' lifetime.
Introduction
Both the use of lead-free solder alloys and the related research have spread since the issue of the RoHS (Restriction of Hazardous Substances) and WEEE (Waste Electrical and Electronic Equipment) directives [1,2]. Sn-Ag-Cu (SAC) solder alloys are promising candidates as alternatives to the previously used Sn-Pb alloys. Many studies have been carried out aiming at the improvement of mechanical properties and resistance against failure through modifying the composition of SAC alloys [3][4][5]. Special attention is given to the effect of Ni, one of the main alloying elements, on microstructure, mechanical properties and fracture modes [4][5][6]. In regard to reliability, most studies have focused on the examination of failure modes and thermal cycle performance of Ball Grid Array (BGA) solder joints [1,[7][8]. It is generally accepted that besides chemical composition and microstructure, the geometry of the solder joints strongly affects the resistance against thermal fatigue. On the other hand, the available literature on solder joints produced by Through Hole Technology (THT) is deficient. During THT soldering, the pins of the electronic components are placed into the bores of the Printed Circuit Board (PCB) and the solder melt fills the void all around, in between the pin and the bore. However, pins have a small degree of displacement before the solder joint is produced, which results in slightly misaligned pins relative to the center of the bore in the solidified solder joint. In such a case, the thickness of the solder varies around the pin and from top to bottom of the PCB. It is well known that the difference in the thermal expansion coefficients of the solder, pin and PCB is one of the main reasons for solder joint failures [1,[7][8]. Different solder thicknesses can result in uneven thermal stresses around the pins, which can influence crack initiation. The aim of the present work is to investigate the influence of pin misalignments relative to the PCB bore on crack initiation and propagation in THT solder joints made of SAC solder alloys with different alloying element/impurity contents.
Experimental
In total, 7 types of SAC solder alloys with different Ag, Cu and Ni concentrations were used to prepare THT solder joints. The compositions of the solder alloys were determined with ICP (Inductively Coupled Plasma spectrometry). The measured compositions are listed in Table 1. For the THT assembly, electronic capacitors with one pin made of Fe and the other made of Cu coated with Ni and Au were used. Only the Cu pins were examined. The pins were soldered to test PCBs using an ERSA ECOSELECT 2 type selective wave soldering equipment. To initiate crack propagation, the test PCBs were subjected to cyclic thermal shock tests in a VÖTSCH VT 7012 S2 equipment. During thermal cycling, the test PCBs were held at -40°C for 30 minutes, then rapidly heated to +125°C and held isothermally for 30 minutes. Test PCBs were taken out for examination after 1500, 3000 and 4500 thermal cycles (TC). After each of these cycle counts, 6 THT solder joints per solder alloy were cut from the test PCBs and sections of the solder joints were prepared through standard metallographic preparation (grinding, polishing) for optical microscope examination. Optical microscope images were obtained with a ZEISS AXIO IMAGER M1m type microscope. Pin misalignments relative to the bore were characterized with three properties: eccentricity, off-plane (OP) tilt and tilt, measured on one section of each solder joint. Eccentricity describes the position of the pin relative to the center of the bore on the examined section. Eccentricity was calculated by dividing the larger area occupied by the solder by the smaller area occupied by the solder on the left and right sides of the joint on the examined cross section (Fig. 1a), (Eq. 1):

Eccentricity = MAX(A_L, A_R) / MIN(A_L, A_R)    (1)

where A_L and A_R are the solder areas measured on the left and right sides of the pin.
Off-plane tilt describes the tilting of the pin relative to the centerline of the bore in the plane perpendicular to the examined section. Off-plane tilt is calculated by dividing the larger width of the pin by the smaller width of the pin, measured at the top and bottom PCB surfaces on the examined section (Fig. 1b), (Eq. 2):

OP tilt = MAX(T, B) / MIN(T, B)    (2)

where T and B are the pin widths measured at the top and bottom PCB surfaces, respectively. Tilt describes the tilting of the pin relative to the centerline of the bore in the plane of the examined cross section. Tilt is measured as the angle between the centerline of the pin and the centerline of the bore on the examined section (Fig. 1c). Crack lengths were measured in the vertical direction (parallel with the PCB bore) on the obtained optical microscope images and are given in % values relative to the PCB thickness.
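For readers who want to script the geometry evaluation, the three misalignment attributes defined above can be computed directly from the section measurements. The helper below is a minimal sketch following Eqs. (1)-(2) and the angle definition for tilt; the function names, the offset-over-depth formulation of the tilt angle, and the example values are illustrative assumptions, not taken from the paper.

```python
import math

def eccentricity(area_left: float, area_right: float) -> float:
    """Eq. (1): ratio of the larger to the smaller solder area
    measured on the left/right sides of the pin in the cross section."""
    return max(area_left, area_right) / min(area_left, area_right)

def off_plane_tilt(width_top: float, width_bottom: float) -> float:
    """Eq. (2): ratio of the larger to the smaller apparent pin width
    measured at the top and bottom PCB surfaces."""
    return max(width_top, width_bottom) / min(width_top, width_bottom)

def tilt_deg(lateral_offset: float, depth: float) -> float:
    """In-plane tilt: angle between pin and bore centerlines, here derived
    from the lateral offset of the pin accumulated over the bore depth."""
    return math.degrees(math.atan2(lateral_offset, depth))

# Hypothetical measurements from one metallographic section (units: mm)
print(eccentricity(0.42, 0.30))    # ~1.40
print(off_plane_tilt(0.61, 0.55))  # ~1.11
print(tilt_deg(0.05, 1.6))         # ~1.8 degrees
```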
Results and discussion
Typically, two crack initiation locations were observed after cyclic thermal shock tests in THT solder joints: one at the solder/pin interface, the other at the solder/PCB interface. The cracks propagated either along the solder/pin interface or within the solder; typical crack initiation locations and propagation paths are illustrated in Fig. 2. Figs. 3-5 show the measured crack length vs eccentricity, crack length vs off-plane tilt and crack length vs tilt plots of all the examined THT solder joints after 1500, 3000 and 4500 thermal cycles, respectively. It can be seen in Fig. 3 that for all examined solder alloys and thermal cycles, the variation of the measured crack length does not follow any trend with increasing eccentricity. Thus, it can be deduced that pin eccentricity does not have an influence on crack propagation during cyclic thermal shock tests.
According to Fig. 4, the measured crack length versus off-plane tilt plots are scattered without any trend, independently of the solder alloy composition and the number of thermal cycles. This suggests that off-plane tilt does not affect crack propagation during cyclic thermal shock tests.
As seen in Fig. 5, the measured length of the cracks does not depend on tilt for any of the examined solder alloy compositions and thermal cycle counts. Again, tilt does not affect crack propagation during cyclic thermal shock tests.
It was seen in Figs. 3-5 that the measured length of the cracks within the examined THT solder joints increased as the number of thermal cycles increased. On the other hand, no correlations were found between pin misalignments and crack initiation or propagation. Pin misalignments cause varying solder thickness between the pin and the PCB along the PCB bore. Since the pin, the solder and the PCB have different thermal expansion coefficients, different thermal stresses perpendicular to the bore centerline are expected at different solder thicknesses. However, the observed crack propagation was independent of pin misalignments, which suggests that stresses perpendicular to the bore do not play a noticeable role in crack propagation. More likely, thermal stresses parallel with the bore are the major stress components affecting crack propagation in THT solder joints.
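The absence of a trend was judged visually from the scatter plots in Figs. 3-5. If one wished to quantify this conclusion, a rank correlation computed per alloy and cycle count would be a straightforward check. The sketch below is only an illustration: the input file, the column names and the use of Spearman's rho are assumptions and were not part of the original analysis.

```python
import pandas as pd
from scipy.stats import spearmanr

# Hypothetical table: one row per sectioned joint, with columns
# alloy, cycles, eccentricity, op_tilt, tilt_deg, crack_pct (crack length in % of PCB thickness)
df = pd.read_csv("tht_joint_measurements.csv")  # hypothetical file

for attr in ["eccentricity", "op_tilt", "tilt_deg"]:
    for (alloy, cycles), grp in df.groupby(["alloy", "cycles"]):
        rho, p = spearmanr(grp[attr], grp["crack_pct"])
        print(f"{alloy} @ {cycles} TC: {attr} vs crack length "
              f"rho = {rho:+.2f} (p = {p:.3f})")
```

A consistently small |rho| with non-significant p values across alloys and cycle counts would support the conclusion that the misalignment attributes do not drive crack growth.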
It was also seen that crack initiation was not affected by the composition changes in the examined SAC solder alloys. It is known that, similarly to all other alloys, the mechanical properties of SAC solder alloys depend on their composition. However, crack initiation occurred at well-defined locations within the THT solder joints independently of their mechanical properties.
Conclusions
In this study, correlations were sought between pin misalignments of SAC THT solder joints, composition, crack initiation locations, crack propagation modes and crack lengths after 1500, 3000 and 4500 cycles of thermal shock tests. It was found that misalignments of the pins and varying solder compositions influence neither crack initiation and propagation nor the crack length of THT solder joints made of SAC solder alloys with varying Cu, Ag and Ni concentrations. It can be concluded that the unavoidable small displacements of pins relative to PCB bores during industrial soldering do not affect crack propagation and solder joint lifetime.
TABLE 1
Chemical composition of the solder alloys determined by ICP [wt.%]
"year": 2017,
"sha1": "a9dfa7c0a9adf7bbc4a742e7205207a0b1f81ffb",
"oa_license": "CCBYNC",
"oa_url": "http://journals.pan.pl/Content/104972/PDF/10172-Volume62_Issue2-085_paper.pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "f361a3381af4df06ee40cc08a853cfc09a8af412",
"s2fieldsofstudy": [
"Materials Science",
"Engineering"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
Spastin locally amplifies microtubule dynamics to pattern the axon for presynaptic cargo delivery
Neurons rely on the long-range trafficking of synaptic components to form and maintain the complex neural networks that encode the human experience. With a single neuron capable of forming thousands of distinct en passant synapses along its axon, spatially precise delivery of the necessary synaptic components is paramount. How these synapses are patterned, as well as how the efficient delivery of synaptic components is regulated, remains largely unknown. Here, we reveal a novel role for the microtubule (MT)-severing enzyme spastin in locally enhancing MT polymerization to influence presynaptic cargo pausing and retention along the axon. In human neurons derived from induced pluripotent stem cells (iPSCs), we identify sites stably enriched for presynaptic components along the axon prior to the robust assembly of mature presynapses apposed by postsynaptic contacts. These sites are capable of cycling synaptic vesicles, are enriched with spastin, and are hotspots for new MT growth and synaptic vesicle precursor (SVP) pausing/retention. The disruption of neuronal spastin level or activity, by CRISPRi-mediated depletion, transient overexpression, or pharmacologic inhibition of enzymatic activity, interrupts the localized enrichment of dynamic MT plus ends and diminishes SVP accumulation. Using an innovative human heterologous synapse model, where microfluidically isolated human axons recognize and form presynaptic connections with neuroligin-expressing non-neuronal cells, we reveal that neurons deficient for spastin do not achieve the same level of presynaptic component accumulation as control neurons. We propose a model where spastin acts locally as an amplifier of MT polymerization to pattern specific regions of the axon for synaptogenesis and guide synaptic cargo delivery.
In brief
Aiken and Holzbaur reveal a novel role for the microtubule-severing enzyme spastin during neuronal development, demonstrating that the local amplification of microtubule growth by spastin directs presynaptic accumulation along the developing human axon.
INTRODUCTION
The human central nervous system (CNS) is composed of billions of neurons that connect with one another through trillions of synapses, the majority of which form en passant along the axon shaft. With a single neuron responsible for actively maintaining potentially thousands of individual pre- and postsynaptic connections, a considerable challenge is faced during neurodevelopment: where do individual synapses form and how are they maintained? One critical regulator of synapses is long-range, microtubule (MT)-based trafficking of synaptic components. Within the axon, MTs are organized with their dynamic plus ends facing outward.1 Anterograde axonal movement from the soma toward the axon tip is driven by kinesin motors that move processively toward MT plus ends. Retrograde movement from the distal axon back to the soma is driven by dynein motors that move toward MT minus ends. The activities of these motors are tightly regulated in a cargo- and compartment-specific manner to build presynaptic sites and maintain neuronal connectivity.2,3 The cellular mechanisms dictating how and where synapses are built are not well understood. Classical models suggest a temporal progression from initial recognition of synaptic partners, assembly of pre- and postsynaptic compartments, and finally the maturation of fully functional synapses. However, there is evidence across evolution for presynaptic specializations that can release neurotransmitters prior to contact with a postsynaptic partner,4-9 both in culture5,6,10 and in vivo.4,7,9 This axonal pre-patterning is proposed to influence dendritic arborization, axodendritic contact, and synapse maturation,9-11 thus influencing downstream synapse site selection.
Once presynapses have formed, local MT dynamics influence synaptic vesicle precursor (SVP) pausing, retention, and activity-induced exchange between boutons.12,13 SVPs are synthesized in the soma, anterogradely transported by the kinesin-3 motor kinesin family member 1A (KIF1A),14 and eventually captured at en passant presynaptic sites along the axon, where they mature into synaptic vesicles.15 In mature primary rat hippocampal neurons, en passant synaptic regions are hotspots for new MT polymerization.12,13 Localized MT growth primes micrometer-scale axonal zones for SVP deposition, as KIF1A preferentially detaches from GTP-rich MT plus ends.12 Anterograde SVP retention at en passant synapses can be curtailed either by reducing MT polymerization or by expression of a KIF1A mutation that diminishes detachment from dynamic MT plus ends.12 Together, these observations indicate a mechanistic link between local MT dynamics and KIF1A-mediated SVP delivery to presynaptic regions. Recent data in rat cholinergic single-cell microcultures provide further support for the importance of dynamic MTs at presynapses, identifying a depolymerization-sensitive MT pool at presynaptic sites that regulates spontaneous neurotransmission.16 However, while MT dynamics are critical for SVP delivery and neurotransmission, the factors that regulate MT growth at developing presynaptic zones have yet to be investigated.
Spastin is a hexameric AAA-ATPase MT-severing enzyme that pulls tubulin heterodimers from the lattice17-19 and is found neuronally enriched at axonal growth cones,20 axonal branch sites,20 and neuromuscular synaptic boutons.17 Paradigm-shifting studies using reconstituted single-molecule assays revealed that spastin, traditionally thought to dismantle MT networks, can also act to amplify MT mass in vitro.21,22 Studies in Drosophila first exposed the paradoxical nature of spastin's MT-severing activity in vivo: spastin overexpression was found to dismantle MT networks in muscle cells, but instead of observing increased MT mass in spastin-null mutants, fewer MT bundles were found.17,24-26 Spastin's potential role in regulating synaptic connections of the CNS is bolstered by the observation that spastin knockout mice exhibit fewer hippocampal synapses.27 Further, human patients harboring spastin mutations frequently present with psychiatric comorbidities, including memory impairment, intellectual disability, autism spectrum disorder, and severe depression.28,29 Together, these observations support the compelling hypothesis that spastin may regulate synapse biology by influencing MT dynamics.
Here, we find that spastin acts as a local amplifier of MT polymerization to pattern the axon for presynaptic cargo accumulation in human induced pluripotent stem cell (iPSC)-derived neurons.Prior to the formation of robust synaptic connections, we note the presence of presynaptic component accumulations, or protosynapses, along the axon.These sites are hotspots for dynamic MTs and are marked by SVP pausing in both the anterograde and retrograde directions.Spastin depletion leads to decreased MT polymerization and SVP pauses/retention at these presynaptic specializations, while spastin overexpression leads to increased MT polymerization outside of these zones.Pharmacologic inhibition of the ATPase activity of spastin by spastazoline (SPZ) leads to mislocalization of new MT growth outside protosynaptic sites.Finally, we developed a heterologous synapse assay in which microfluidically isolated human axons rapidly induce robust presynapse formation upon contact with neuroligin-expressing human embryonic kidney (HEK) cells.This assay confirms that axonal spastin enhances delivery of presynaptic components.Thus, while spastin has been canonically considered to disassemble MT networks, 17,30 we find instead that spastin activity amplifies dynamic MT plus ends to locally regulate presynaptic trafficking in human iPSC-derived neurons.These findings support a model where spastin guides presynaptic cargo distribution and patterns the axon for synaptogenesis.
RESULTS
Presynaptic accumulation sites are hotspots for MT polymerization events in developing i 3 Neuron axons

In primary rat hippocampal neurons, en passant presynaptic sites are locally enriched for MT polymerization.12,13 Here, we observe that sites of presynaptic component accumulation are hotspots for MT dynamics within developing human axons using a homogeneous population of glutamatergic cortical-like i 3 Neurons generated from an iPSC line with a doxycycline-inducible neurogenin-2 (NGN2) expression cassette inserted into the adeno-associated virus integration site 1 (AAVS1) safe harbor locus.31,32 We transiently expressed GFP-MACF43, an MT plus-end marker,33 and mScarlet-synaptophysin (Syp), a synaptic vesicle transmembrane protein found in SVPs, and mapped the positions of MT growth events, or ''comets,'' along the axon (Figures 1A and 1B). We found that i 3 Neuron axons are punctuated by sites of stable accumulation of presynaptic components. Although only 22% of the total analyzed axon is occupied by stable SVPs at 21 days in vitro (DIV21) (Figure 1D), 70% of MT comets initiate, pass through, or terminate at these SVP+ regions (Figure 1E). When standardized for distance and time, SVP+ sites exhibited an 8.8-fold increase in comet events (comets that initiate, terminate, or pass through the site) compared with regions lacking accumulated presynaptic cargos (Figure 1F). SVP+ sites are hotspots for both comet initiation and termination events (Figures 1G and 1H; Videos S3 and S4), demonstrating that MT polymerization preferentially starts and stops within these regions.
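The "standardized for distance and time" comparison above amounts to an event density (comet events per µm of axon per minute) computed separately inside and outside the stable SVP zones. The sketch below illustrates one way such a density could be tabulated from annotated kymographs; the data structure, field names, and example numbers are hypothetical and are not taken from the authors' analysis pipeline.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class KymographAnnotation:
    axon_length_um: float                 # length of the analyzed axon segment
    duration_min: float                   # duration of the recording
    svp_sites: List[Tuple[float, float]]  # (start_um, end_um) of stable SVP+ zones
    comet_positions_um: List[float]       # position of each comet initiation/termination/pass event

def comet_density(anno: KymographAnnotation) -> Tuple[float, float]:
    """Return comet events per um per min inside (SVP+) and outside (SVP-) stable SVP zones."""
    svp_len = sum(end - start for start, end in anno.svp_sites)
    other_len = anno.axon_length_um - svp_len
    in_zone = sum(any(s <= x <= e for s, e in anno.svp_sites) for x in anno.comet_positions_um)
    out_zone = len(anno.comet_positions_um) - in_zone
    dens_in = in_zone / (svp_len * anno.duration_min) if svp_len else float("nan")
    dens_out = out_zone / (other_len * anno.duration_min) if other_len else float("nan")
    return dens_in, dens_out

# Hypothetical example: a 100-um axon segment imaged for 5 min
anno = KymographAnnotation(100.0, 5.0, [(10, 14), (40, 47), (80, 86)],
                           [11, 12.5, 41, 44, 46, 52, 81, 85, 90])
print(comet_density(anno))  # ~(0.082, 0.005): strong SVP+ enrichment in this toy example
```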
Both anterograde and retrograde SVPs preferentially pause at presynaptic accumulation sites during transport along the axons of human i 3 Neurons
The enhanced MT dynamics we observed at SVP+ sites led us to ask whether trafficking SVPs might preferentially pause at these sites during transit along the axon. We rapidly imaged i 3 Neurons expressing the presynaptic cargo mScarlet-Syp to track axonal SVP movement (Figures 1I and 1J). We observed robust SVP flux in both the anterograde and retrograde directions, punctuated by discrete pausing and retention events (Figures 1J and 1K). Anterograde SVP flux was observed at a rate of 3.2 vesicles/min moving at an average instantaneous velocity of 2.7 µm/s (Figures S2A and S2B). In the retrograde direction, SVP flux was observed at 0.9 vesicles/min, with SVPs moving at 2.2 µm/s (Figures S2A and S2B).
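Identifying "pauses" in SVP trajectories requires an operational definition, typically a minimum dwell time below a speed threshold. The snippet below is a minimal sketch of such a detector applied to a single position-versus-time track; the 0.1 µm/s and 1 s thresholds and the synthetic track are illustrative choices, not the criteria used in the paper.

```python
import numpy as np

def detect_pauses(positions_um: np.ndarray, dt_s: float = 0.2,
                  speed_thresh: float = 0.1, min_pause_s: float = 1.0):
    """Return (start_s, end_s, mean_position_um) for each pause in a single SVP track.

    A pause is a stretch where the frame-to-frame speed stays below
    speed_thresh (um/s) for at least min_pause_s seconds.
    """
    speeds = np.abs(np.diff(positions_um)) / dt_s
    paused = speeds < speed_thresh
    pauses, start = [], None
    for i, flag in enumerate(paused):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            if (i - start) * dt_s >= min_pause_s:
                pauses.append((start * dt_s, i * dt_s,
                               float(positions_um[start:i + 1].mean())))
            start = None
    if start is not None and (len(paused) - start) * dt_s >= min_pause_s:
        pauses.append((start * dt_s, len(paused) * dt_s,
                       float(positions_um[start:].mean())))
    return pauses

# Hypothetical track: a processive run at ~2 um/s with a 2-s pause in the middle
t = np.arange(0, 10, 0.2)
pos = np.where(t < 4, 2.0 * t, np.where(t < 6, 8.0, 8.0 + 2.0 * (t - 6)))
print(detect_pauses(pos))  # one pause from ~4 s to ~6 s near 8 um
```

Each detected pause position can then be compared against the stable SVP+ intervals (as in the density sketch above) to classify it as occurring inside or outside an SVP+ site.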
SVPs preferentially paused at SVP+ sites in both the anterograde and retrograde directions across culture time points (DIV21, Figure S2C; DIV35, Figure 1K). Despite stable SVP accumulations occupying only 34% of the total analyzed axon distance at DIV35 (Figure 1L), 70% of anterograde and 56% of retrograde SVPs paused specifically at these sites (Figure 1M). All axons exhibited increased anterograde SVP pausing at SVP+ sites compared with regions lacking stable SVPs, with 67% showing a 1- to 5-fold increase and 33% showing a >5-fold increase (Figure S2D). Localized SVP pausing was also observed for vesicles moving in the retrograde direction, with 83% of axons exhibiting increased pausing frequency within SVP+ regions (Figure S2D). Notably, we observed pausing of anterograde SVPs at the plus end of MTs at and near SVP+ sites (Figure S2E, red and black arrows, respectively). Our results demonstrate that both anterograde and retrograde SVPs preferentially pause at SVP+ regions in human i 3 Neurons, in contrast to previous observations in primary rat neurons, where only anterograde-directed SVPs showed a preference for pausing at stable SVP accumulation sites.12 Thus, the increased MT polymerization and pausing preference for anterograde-moving SVPs at stable SVP accumulations is conserved across systems.

Presynaptic cargos accumulate along human i 3 Neuron axons prior to robust formation of bona fide synapses and are enriched with MT comets and the MT-severing enzyme spastin

i 3 Neurons are an emerging model to study the cell biology of human neurons in culture,32,34,35 but synapse formation has not yet been thoroughly characterized. For i 3 Neurons grown in monoculture to DIV21, a commonly used end-point for experiments, we find that most presynaptic puncta are not apposed by postsynaptic counterparts, as revealed by synapsin I/II (Syn), postsynaptic density protein 95 (PSD-95), and MT-associated protein (MAP)2 immunocytochemistry (Figures 2A and 2B), with only 15% of Syn puncta at DIV14 and 18% at DIV21 co-localizing with PSD-95 along MAP2+ dendrites. Overall, we noted a sparse 0.9 and 1.2 synapses per 100 µm² of somatodendritic area at DIV14 and DIV21, respectively (Figure 2C). Thus, it is likely that only a minority of the SVP+ presynaptic accumulations characterized by live imaging in Figure 1 are apposed by postsynapses, and instead reflect a state upstream in synaptogenesis, which we term ''protosynapse.'' i 3 Neurons in extended co-culture with primary rat astrocytes reach more mature synaptic states (Figures 2A-2C). Although astrocyte co-culture is not sufficient to increase synapse density at DIV21, DIV42 i 3 Neurons co-cultured with astrocytes increased the percentage of Syn puncta apposed by the postsynaptic marker PSD-95 to 39% (Figure 2B) and synapse density to an average of 4.4 synapses per 100 µm² of somatodendritic area (Figure 2C). These findings are similar to observations in primary rat neuron cultures at DIV14, where approximately 40% of synaptophysin puncta colocalize with the postsynaptic marker α-amino-3-hydroxyl-5-methyl-4-isoxazole-propionate receptor (AMPA-R), leading to 5 synapses per 100 µm² cell area.36
Primary rat neuron cultures also increase the percent of presynaptic puncta apposed by postsynaptic markers over time,36 although the time course appears to be slower for human neurons. Of note, the appearance of presynaptic specializations prior to postsynaptic contact we observed in i 3 Neuron cultures is consistent with in vivo reports in mammalian systems.4,9

[Displaced figure legend: (A) representative images of i 3 Neurons in monoculture and in co-culture with primary rat astrocytes, with insets highlighting protosynapses (presynaptic puncta without a postsynaptic counterpart, light pink) and bona fide synapses (apposed pre- and postsynapse, dark pink); (B) percent of Syn puncta apposed by PSD-95 at DIV14/DIV21 in monoculture and DIV21/DIV42 in astrocyte co-culture; (C) synapse density per unit of MAP2 somatodendritic area, with pre- and postsynaptic puncta determined using the FIJI SynapseJ macro and 3D Object Counter on z-stack images; (D-F) maximum intensity projections of bona fide synapses and protosynapses in immunostained DIV21 monoculture wild-type i 3 Neurons; error bars, standard deviation; p values, one-way repeated measures ANOVA.]

We sought to characterize both bona fide synapses, defined by apposition of pre- and postsynaptic densities, as well as protosynaptic sites, marked by accumulations of presynaptic marker only, using DIV21 monoculture i 3 Neurons. Both presynaptic (Synapse) and protosynaptic (Protosyn) puncta contain cycling vesicle clusters, as determined by the uptake and release of FM4-64 dye upon K+ stimulation (Figures 2D, 2G, S3A, and S3B). The observation that presynaptic accumulations are capable of cycling synaptic vesicles independent of postsynaptic contacts is in agreement with data generated from primary rat neurons and in vivo murine models.5,6,9,10 Using immunocytochemistry, we observed that the MT +TIP EB3, a protein that tracks newly polymerized MT plus ends, is frequently found associated with both presynapses and protosynapses (Figures 2E and 2H), consistent with the enrichment of MT comets at SVP+ sites in live axons (Figures 1 and S1). We also observed an enrichment of endogenous spastin at both presynaptic and protosynaptic puncta (Figures 2F and 2I). The specificity of the spastin antibody was confirmed by comparing signals from control, spastin-overexpression, and spastin KD neurons generated by CRISPR interference (CRISPRi) (Figures S3C-S3E). Within bona fide synapses, spastin appears both pre- and postsynaptically enriched; spastin co-localization to the postsynapse is consistent with previous reports of spastin knockout leading to dendritic spine morphology and density defects in mice.27,37 Huygens deconvolution and Imaris 3D rendering software highlight the association of spastin puncta with Syn+ protosynapses (Figure S3F). To further assess protosynaptic accumulations specifically, we immunostained microfluidically isolated i 3 Neuron axons lacking postsynaptic targets. We find that the synaptic vesicle component synaptobrevin-238 (Syb) punctuates the developing axon and that spastin is enriched at these protosynaptic accumulation sites (Figures S3G and S3H). These data support a presynaptic role for spastin, where it is ideally positioned to locally regulate MT dynamics in developing axons.
Spastin promotes appropriate positioning of axonal MT comets and SVP+ site organization
To examine whether spastin regulates axonal MT polymerization, we used CRISPRi (Figure 3A) to KD spastin, achieving a 75% reduction of wild-type levels (Figures 3B, 3C, and S4A). 34,39,40Non-targeting control i 3 Neurons were compared with spastin KD i 3 Neurons (Sp KD) and with i 3 Neurons transiently overexpressing SNAP-tagged spastin (Sp OE).Two distinct isoforms of spastin are expressed from different initiation codons: M1 and M87.We overexpressed the M87 isoform as it is the predominant isoform identified in i 3 Neurons (Figure 3B) and in the developing CNS. 41We noted that spastin CRISPRi-depletion did not alter global tubulin or post-translationally modified polyglutamylated or acetylated tubulin levels (Figures 3B and 3C), in contrast to the marked hyperglutamylation observed in spastin knockout mice neurons, 27 suggesting that genetic knockout but not KD may lead to compensatory changes in the cytoskeleton or that there may be species-specific differences in response to lowered spastin levels.
Spastin depletion induced striking changes in the patterns of MT polymerization observed along the axon, and in particular at SVP+ sites (Figures 3D, 3F-H, and 4A-4E).To facilitate covisualization of MT comets and presynaptic cargo, we generated a bicistronic vector that simultaneously expresses GFP-MACF43 and fluorescently tagged SVP cargo synaptobrevin-2, mScarlet-Syb.Spastin KD leads to fewer axonal MT comets (Figures 3D, 3E, and 3G) and less total MT polymer added (sum of comet lengths standardized to kymograph distance and time; Figure 3H) compared with control i 3 Neurons.In contrast, spastin OE did not significantly alter the average number of MT comets observed along the axon or the total MT polymer added (Figures 3D and 3F-3H).However, consistent with spastin's potential to act as an MT network amplifier or dismantler, 17,21,22,30 axons overexpressing spastin exhibited more variability, with some axon segments displaying numerous MT comets (Figure 3F, left panel) while others displayed few, or no, MT comets (Figures 3F right panel and S4B).Interestingly, either decreasing or increasing neuronal spastin levels caused similar decreases in the MT polymerization rate (Figure S4C) and a trend toward a shorter polymerization distance and longer duration (Figures S4D and S4E).Visualizing fluorescently labeled SNAP-M87 spastin revealed that 60% of MT comets correspond to stationary spastin puncta (Figures S4F and S4G), despite spastin occupying only 14% of the total analyzed axon distance (Figure S4H).Taken together, these data indicate that spastin can amplify new MT polymerization events in the axon.
We analyzed the kymographs generated from control, spastin KD, or spastin OE i 3 Neurons expressing mScarlet-Syb-IRES-GFP-MACF43 and quantified the number of the MT comet events associated with stable SVP+ sites (pink) or independent of SVP+ sites (blue; Figures 4A-4C).In control axons, MT comets were enriched in regions inhabited by stable SVP accumulations, as demonstrated in the example kymograph tracing (Figure 4A).As in wild-type neurons, non-targeting control axons displayed increased MT comet initiation and termination observed at SVP+ sites (Figures 1G and 4D).In contrast, spastin KD led to a reduction in MT comets associated with SVP+ sites.Although regions lacking stable SVP accumulations (SVPÀ) exhibited no change in MT comet density between control and spastin KD conditions, SVP+ regions demonstrated a significant decrease in MT comets and decreased preference for both comet initiation and termination when spastin was depleted (Figures 4B and 4D-4F).In contrast, spastin OE resulted in no change to MT comet density at SVP+ sites, but increased the density in SVPÀ regions (Figures 4C-4F).The mislocalized comets in SVPÀ regions were often immediately adjacent to SVP+ sites, suggesting a ''spill over'' effect.Accordingly, we observe ectopically expressed SNAP-M87 spastin, both near and between SVP+ sites (Figures 4C and S4F).Taken together, our live-imaging results indicate that spastin locally influences axonal MT comet density and position, with spastin KD leading to fewer MT polymerization events within SVP+ sites and spastin OE leading to more polymerization events outside SVP+ regions.
Importantly, spastin KD and OE axons exhibited a significant decrease in both the intensity and the number of SVP+ sites (Figures 4G and 4H).In wild-type neurons, the amount of new MT polymer added correlates with SVP+ axon coverage (Figure S4I).This relationship is disrupted in both the spastin KD and OE conditions, indicating that precise regulation of MT polymerization via spastin influences presynaptic patterning.
Spastin enzymatic activity influences the enrichment of MT comet events at SVP+ sites Spastin-associated synaptic deficiencies in patients are linked to mutations to the AAA functional domain. 28,29To determine whether spastin's enzymatic activity regulates the axonal localization of MT growth events, we treated wild-type i 3 Neurons with SPZ, a chemical inhibitor of spastin's AAA-ATPase activity. 42 3 Neurons expressing mScarlet-Syb-IRES-GFP-MACF43 were exposed to SPZ or DMSO overnight prior to live imaging (Figures 5A-5C, S5A, and S5B).Inhibiting spastin's enzymatic activity significantly reduced both the amount of MT polymer added (Figure 5D) and the MT polymerization rate (Figure 5E), suggesting that reducing spastin activity may limit the available pool of polymerization-competent tubulin in axons.Consistent with these data, SPZ treatment significantly shortened MT comet length (Figure S5C) but did not change overall comet density or growth duration (Figures S5D and S5E).
DMSO control axons exhibited the consistent increase in MT initiation and termination at SVP+ sites that we have observed across developmental time points and cell backgrounds (Figures 5B and 5F-5J).This increase is lost in SPZ-treated i 3 Neurons (Figures 5C and 5F-5J), with fewer MT polymerization events associated with SVP+ sites (Figure 5H) and reduced SVP+ comet initiation and termination (Figures 5I and 5J).In fact, while nearly all DMSO-treated axons exhibit a higher percentage of comet initiation and termination events at SVP+ sites compared with SVPÀ regions (100% and 95%, respectively), the SVP+/SVPÀ ratio is reduced in axons with inhibited spastin activity (Figure 5J).Additionally, while the axonal SVP+ coverage was not dramatically altered upon SPZ treatment (Figure 5F), the intensity of SVP+ sites was significantly reduced after overnight spastin inhibition (Figure 5K).This is consistent with the established role of enhanced MT dynamics promoting SVP pausing/ retention, 12,13 as well as the observation of SVP pausing in MTcomet-rich regions outside of SVP+ zones upon SPZ treatment (Figure S5B).Together, these results reveal that spastin's MT enzymatic activity regulates MT dynamics in the axon, enriching MT growth initiation and termination locally at SVP+ zones.
Depletion of neuronal spastin interrupts the localization of anterograde presynaptic cargo pausing and retention
We next explored how the local reduction of MT comets as a result of spastin depletion influences synaptic trafficking behavior. To this end, we rapidly live imaged mScarlet-Syb in spastin KD and non-targeting control axons (Figures 6A and S6A). Spastin KD does not grossly alter SVP flux (Figure S6B) or velocity (Figure S6C). However, spastin depletion caused a marked reduction in SVP pause frequency, the percent of SVPs that exhibit a pause, and SVP retention, specifically in the anterograde direction (Figures S6D-S6F). SVP pause duration, in contrast, is unaffected (Figure S6G), suggesting that when an SVP pauses, it reengages with the MT network similarly in both control and spastin KD conditions to initiate its next processive run. We probed whether the changes in axonal anterograde SVP pausing upon spastin depletion can be ascribed to delivery defects at SVP+ sites. The kymographs of control or spastin KD i 3 Neurons expressing mScarlet-Syb were analyzed for SVP pauses and retention in relation to stable SVP+ accumulations (Figures 6A-6K). We noted very similar SVP pausing behavior at SVP+ sites using distinct SVP markers (Syp and Syb), control lines, and culture times (Figures 1K, 6D, 6H, and S2B), revealing consistency across experimental paradigms. Although we analyzed axonal regions with similar SVP+ coverage (18% and 15% of control and spastin KD axons, respectively), anterograde pauses at SVP+ regions decreased from 52% in control to 25% when spastin is depleted (Figures 6B and 6C). DIV21 control axons exhibited increased anterograde SVP pausing at SVP+ sites compared with regions lacking stable SVPs, with 69% showing a 1- to 5-fold increase and 31% showing a >5-fold increase (Figures 6D and 6E). Spastin KD axons exhibited fewer SVP pauses at SVP+ sites (Figure 6D) and a decreased SVP+/SVP− pause ratio (Figure 6E), reminiscent of the shift observed for SVP+/SVP− MT comets (Figure 4E). SVP retention at SVP+ sites was similarly decreased upon spastin KD (Figures 6F and 6G). Indeed, anterograde SVP retentions shifted from nearly all occurring at SVP+ regions in control axons to less than a third in spastin KD axons (96% vs. 30%, respectively; Figure 6G).

[Displaced figure legend: (E) Ratios of microtubule comet initiation and termination in SVP+/SVP− regions; the percent of axons with ratios of 0-1, 1-5, and 5+ are displayed in light, medium, and dark purple. 100% of control axons exhibit an increased microtubule comet initiation frequency at SVP+ sites, whereas both Sp KD and Sp OE shift the SVP+/SVP− ratio toward more comets initiating outside of SVP+ regions.]
Spastin depletion more mildly affected retrograde SVP trafficking (Figures 6H-6K). Both control and spastin KD conditions exhibited increased retrograde pause frequency at SVP+ sites (Figure 6H). Spastin KD caused a shift toward fewer axons experiencing an enrichment in SVP+/SVP− retrograde pause events (Figure 6I), a trend toward fewer SVP+ retentions per SVP (Figure 6J), and a decrease in the percent of retrograde retentions occurring at SVP+ sites from 74% in control to 36% in spastin KD (Figure 6K).
Heterologous human synapses model presynapse formation
With spastin potentially regulating both pre- and postsynaptic compartments, we sought to interrogate spastin's role specifically in presynapse formation using an innovative human heterologous synapse model. In this system, non-neuronal HEK cells expressing neuroligin-1 (NL1)43 are introduced to microfluidically isolated i 3 Neuron axons, which form presynaptic connections with the presented postsynaptic ligand within 24 h. Where axons cross non-NL1-expressing HEK cells (example outlined in white in Figure 7A), no presynapses are established, but where axons encounter HEK cells expressing NL1, robust presynaptic connections are formed (Figure 7A). These human heterologous presynapses are enriched in the presynaptic markers synapsin, synaptophysin, and synaptobrevin-2 and the excitatory vesicular glutamate transporter VGLUT1, and contain synaptic vesicles that can cycle upon depolarization, as shown by FM4-64 dye uptake (Figures 7B, S7A, and S7B). Interestingly, the density of heterologous synapses in axons crossing NL1+ HEK cells is similar to the density of protosynapses, with approximately 1 punctum per 10 µm of axon in all instances (Figure S7D).
Similar to bona fide presynapses and protosynapses, heterologous presynapses are enriched for spastin and the +TIP EB3 (Figures 7C, 7D, and S7C white insets).They do not, however, exhibit enrichment of the MT nucleator g-tubulin (Figure S7C, white inset), which had previously been implicated in presynaptic MT regulation upon stimulation in mature rodent neuron cultures. 13Our immunofluorescence results indicate functional consistency across protosynapses, bona fide presynapses, and heterologous presynapses, with all capable of cycling synaptic vesicles and enriched for spastin and MT plus ends.
Spastin regulates presynaptic component accumulation at heterologous presynapses
To determine whether spastin levels impact the accumulation of presynaptic cargos during presynapse initiation, we employed the heterologous human synapse model to compare presynapse accumulation between CRISPRi spastin-targeting and non-targeting control i 3 Neurons (Figures 7E and 7F). Upon spastin depletion, we observed a decrease in the intensity of the presynaptic components Syn and Syb 24 h after NL1 introduction (Figure 7F). Compellingly, spastin depletion altered the accumulation of both presynaptic components despite Syn and Syb being trafficked via distinct mechanisms, with Syn undergoing slow axonal transport and Syb undergoing fast axonal transport by KIF1A.14,44,45 These results reveal a reliance on spastin for priming axonal regions for presynaptic zone formation upon contact with postsynaptic ligands (Figure 7G).
DISCUSSION
Spastin's severing activity has been implicated in numerous neuron-specific functions, including facilitating axon outgrowth,20,25,46 locally fragmenting MTs in retreating motor axon branches,47 enhancing axon branch formation,46 regulating synaptic area in Drosophila neuromuscular junctions,48 and regulating CNS dendritic spine formation and maturation.27,37 The importance of spastin in neural connectivity is bolstered by reports of human patients with mutations in spastin's AAA functional domain exhibiting synapse-related deficiencies, including memory impairment, intellectual disability, seizures, autism spectrum disorder, and severe depression.28,29 Here, we reveal a novel role for spastin: locally enhancing MT plus-end amplification to pattern synaptically immature axons for presynaptic cargo pausing and accumulation. Spastin KD, overexpression, and pharmacologic inhibition lead to misplaced axonal MT comets and a decreased intensity of presynaptic accumulation SVP+ sites in i 3 Neurons. Our results support a causal link between MT polymerization and SVP delivery, advocating a model where spastin-mediated MT plus-end growth at specific axonal sites guides presynaptic cargo distribution and synaptogenesis.

[Displaced figure legend: (I) Microtubule initiation (left plot) and termination (right plot) events in regions of DMSO- or SPZ-treated axons lacking stable Syb (SVP−, light gray) and populated by stable Syb (SVP+, dark gray); paired data points represent SVP− and SVP+ frequencies within one axon; p values from multiple paired t tests (SVP+ vs SVP−) and one-way repeated measures ANOVA (DMSO vs SPZ). (J) Ratios of microtubule comet initiation and termination in SVP+/SVP− regions for DMSO- or SPZ-treated axons, with the percent of axons in the 0-1, 1-5, and 5+ ratio bins shown in light, medium, and dark purple; 100% (initiation) and 95% (termination) of control axons exhibit a ratio above 1, whereas SPZ treatment shifts comet initiation outside of SVP+ regions and lowers the fraction of axons with a termination ratio above 1 to 55%.]
Spastin-mediated MT polymerization influences presynaptic cargo pausing, retention, and accumulation along the axon Previous studies have revealed the importance of MT polymerization at en passant presynaptic boutons in murine neurons, both for precise delivery of SVPs 12 as well as activity-induced exchange of synaptic vesicles. 13In agreement with these findings, we found that presynaptic accumulations along human axons are hotspots of MT dynamics, with MT polymerization preferentially initiating in, terminating at, and spanning these regions.
We used CRISPRi-mediated depletion to interrogate spastin's role in MT dynamics and SVP delivery.KD resulted in a 75% decrease of spastin protein without significantly altering the tubulin landscape, in contrast to neuronal spastin knockout systems that lead to drastic tubulin hyper-polyglutamylation and accompanying trafficking defects, 27,49 potentially due to the engagement of compensatory mechanisms regulating the neuronal cytoskeleton.
We found that spastin is enriched at SVP+ sites and its enzymatic activity directs localized MT growth, as treatment with SPZ, a specific pharmacologic inhibitor of spastin AAA-ATPase activity, significantly disrupted MT polymerization enrichment at presynaptic accumulation zones.We hypothesize that, upon spastin inhibition, preexisting MT ends that were normally confined near SVP+ sites via spastin undergo unregulated bouts of dynamic instability, leading to MT comet misplacement and inappropriate SVP delivery.Although our data support a local role for spastin in delivery of SVPs to presynaptic sites, we cannot rule out possible contributions from spastin activity in other cellular compartments that may also indirectly influence synaptic development and/or local MT dynamics.
In our i 3 Neuron system, both anterograde-and retrogrademoving SVPs preferentially pause at SVP+ sites.Spastin depletion decreases anterograde SVP pausing and retention at presynaptic accumulation sites, as measured by a reduction in presynaptic marker intensity via live imaging and in our heterologous synapse model, uncovering a previously unappreciated role for spastin in regulating presynaptic cargo delivery.However, spastin KD only mildly affects retrograde trafficking behaviors.This suggests that retrograde SVP pausing/retention is regulated independently of spastin-mediated MT modulation.Although it is not yet possible to visualize individual MT filaments or naked minus ends within human axons, the observed spastinmediated increase in MT plus ends (visualized as MACF43 comets) might be expected to correspond to an increase in MT minus ends if spastin severs MT filaments in two.If this is the case, one might expect a more dramatic retrograde response upon spastin depletion as a consequence of dynein encountering fewer MT minus ends.In contrast to this model, we hypothesize that spastin acts at SVP+ regions to generate MT lattice defects capable of promoting rescue (new bout of growth) without full MT severing, as proposed by Vemu and colleagues. 21In this scenario, spastin would promote localized MT polymerization without fully severing MTs and generating new minus ends.This model is consistent with the potent decrease in anterograde SVP pausing but non-significant change to retrograde SVP behavior upon spastin KD, as well as the periodic, recurrent MT growth observed at SVP+ sites.Advances to live imaging and superresolution microscopy may provide insight into which (if not both) of these models are in play at presynaptic sites along the axon.
Our human heterologous synapse assay brings together elements of previously employed mixed-culture synapse assays 43 and microfluidic axon isolation to dissect the role of spastin in presynapse formation.When i 3 Neuron axons are induced to rapidly amass presynaptic components at heterologous presynapses, spastin KD inhibits the accumulation of two different presynaptic cargos, synapsin and synaptobrevin-2.Membrane-bound presynaptic components (including synaptobrevin-2 and synaptophysin), are packaged into SVPs and trafficked rapidly along the axon by KIF1A at 200-400 mm/day. 14,44,45In contrast, synapsin undergoes slow axonal transport at 2-8 mm/day. 44Reduced synaptobrevin-2 accumulation at heterologous presynapses formed by spastin KD neurons is consistent with our SVP live-imaging experiments and in line with MT dynamics influencing KIF1A-mediated delivery. 12However, the mechanism underlying the reduction of slow-moving synapsin upon spastin depletion is not immediately clear.As synapsin moves anterogradely through stochastic, short-lived co-transport with synaptophysin-positive SVPs, 50 the reduction to synapsin accumulation upon spastin KD could also be attributed to altered MT plus-end regulation of KIF1A.Future live-imaging studies of human heterologous synapse formation may reveal how distinct motors, vesicle pools, and specific presynaptic components respond to presynaptic MT regulation.
Additional MT regulators may influence presynaptic MT dynamics and promote local spastin position/activity.The MT nucleator g-tubulin, for example, has been shown to nucleate MTs in an activity-dependent manner at presynaptic boutons in primary rat neurons. 13Although g-tubulin is not observed at human presynapses in our heterologous culture model, it may be recruited later in synaptic development to further promote presynaptic MT polymerization.Consistent with this hypothesis, we observed an increase in EB3 intensity at bona fide synapses compared with protosynapses in DIV21 i 3 Neurons.Further, synaptically immature human i 3 Neurons exhibit one third the average number of axonal comet events compared with synaptically mature rat primary cortical neurons, as determined by tracking GFP-MACF43. 51These divergences may reflect species-specific regulation or reveal a shift in MT dynamics upon increased synaptic connectivity as development proceeds.
We hypothesize that spastin's local axonal enrichment may be achieved through variations to MT post-translational modifications (PTMs), such as polyglutamylation, and MAP decoration, such as tau or SS nuclear autoantigen 1 (SSNA1).All three of these factors have previously been shown to regulate spastin localization and activity in neurons, 24,48,[52][53][54][55][56] but it has not yet been explored whether they are playing a role at the protosynapse during axon development.Further, our data and work from others demonstrate that protosynapses and en passant presynaptic sites are enriched for labile MTs 12,13,16 and, therefore, are likely to be tyrosinated and de-acetylated.How these local changes to the tubulin landscape influence spastin and/or potential upstream regulators at presynaptic specializations will be an exciting area of future study.
Protosynapse formation and maintenance may represent an upstream step in neuron synaptogenesis

Characterization of i 3 Neuron synapses yielded the unexpected result that most presynaptic puncta within human axons at DIV21 in monoculture are not apposed by postsynaptic densities. In fact, over 80% of presynaptic puncta are protosynapses lacking a postsynaptic partner. By extending i 3 Neuron maturation to DIV42 by co-culture with primary rat astrocytes, the percentage of presynaptic puncta that are apposed by postsynaptic partners increases dramatically, and a nearly 5-fold increase in synapse density is observed. We find that bona fide presynapses and protosynapses are enriched for endogenous MT +TIP EB3 and spastin and are capable of cycling synaptic vesicles (SVs), as shown by uptake of FM4-64 dye. This aligns with previous reports of neurons establishing functional presynaptic zones independent of postsynaptic contacts, both in culture5,6,10 and in vivo,4,7,9 and the rapid rate at which SVPs and active zone proteins coalesce into functional synaptic puncta behind the growth cone in C. elegans posterior deirid neuron (PDE) axons without required apposition to postsynaptic densities.57 Thus, our observed axonal presynaptic component clustering in i 3 Neurons may be an upstream step in synaptogenesis. In this pre-patterning model, presynaptic machinery would be interspersed along the axon, ready to be stabilized or mobilized upon axodendritic contact. This model is further supported by observations that spontaneous neurotransmitter release from presynaptic specializations can promote dendritic arborization9 and stable contacts.10 Taken together, our pre-patterning model advocates for the structural landscape of the axon, the MTs themselves, priming specific regions of the axon for synaptogenesis.
We observe that i 3 Neuron axons typically host stretches of protosynapses that are generally smaller and further apart than heterologous presynapses, which suggests there may be a mobilization and coalescing of puncta upon axonal contact with postsynaptic ligand. The observed patterning of synapses in the heterologous presynapse model suggests a cellular mechanism in place to limit presynapse size and set spacing. Future live-imaging experiments may shed light on whether and how protosynapses are mobilized or stabilized upon postsynaptic contact.

[Displaced figure legend: (G) Model for spastin regulation of presynaptic cargo delivery. In wild-type neurons, presynaptic accumulations along the axon are enriched for spastin and exhibit increased localized microtubule polymerization; spastin-mediated amplification of microtubule plus ends leads to increased anterograde SVP pausing/retention and overall presynaptic component accumulation. Upon spastin depletion, axons exhibit a decrease in localized microtubule growth events and reduced anterograde SVP pausing/retention, resulting in fewer and less-intense presynaptic accumulations in spastin knockdown neurons. See also Figure S7.]
If protosynapse establishment occurs prior to postsynaptic contact, what dictates initial protosynapse formation?With spastin enriched at the distal axon and the knowledge that presynaptic puncta can form rapidly behind an extending growth cone, 57 it is tempting to speculate that the spastin-mediated SVP targeting mechanism reported here may also influence protosynapse formation in the nascent axon.Live-imaging experiments in advancing growth cones of control and spastindepleted axons may shed light on whether this supposition is correct.Probing this question in vivo, where neurons experience numerous cell types and external cues, will also be essential in understanding physiological regulation of axonal presynaptic patterning.
Generation of CRISPRi spastin-targeting or non-targeting control iPSCs
Spastin-specific sgRNA guide GACCGACGGGAACCAAGCGA or non-targeting sgRNA guide GTGCCAGCTTGTGGTGTCGT was cloned into pCRISPRia-vs2 (Horlbeck et al. 39 ; Addgene; Plasmid 84832) using previously described methods. 40Briefly, the guides were flanked with BstXI and BlpI cutsites and ligated into BstXI/BlpI-digested pCRISPRia-vs2 using NEB Quick Ligase (New England Biolabs; M2200S).Guide incorporation was verified by sequencing using the Forward and Reverse oligonucleotides ggcttggatttctataacttcg and ctactgcacttatatacggttc, respectively.The sgRNA guides were then packaged into lentivirus and transduced into iPSCs based on a previously published protocol 34 using the following steps: 15 cm culture dishes were seeded with 8x10 6 HEK293T cells in 20 ml Dulbecco's Modified Eagle's Medium (DMEM; Corning; 10-017-CM), which was supplemented with 10% FBS (HyClone; SH30071.03).After 24 hours, the cells were transfected with a 1:1:1 mix of 2.5 mg sgRNA plasmid, psPAX2 (HIV pol+gag), pCMV-Vsv-g with Lipofectamine 2000 Transfection Reagent (Invitrogen; 11668027).Eight hours following addition of transfection solution, media was replaced with 20 ml Growth Medium supplemented with 1:500 dilution ViralBoost (Alstem; VB100).Two days following transfection, cell media was collected, filtered, and centrifuged with Lentivirus Precipitation Solution (Alstem; VC100) to isolate viral pellet.The virus-containing pellet was resuspended in 10 mL E8 media with ROCK inhibitor, aliquoted, and stored at -80C.For viral transduction, 8x10 5 iPSCs were seeded into T25 cell culture flasks with 2 ml thawed E8 media containing virus and ROCK inhibitor and incubated for six hours before adding an additional 3 ml E8 supplemented with ROCK inhibitor.Two days following virus introduction, iPSCs were passaged (1.5x10 6 cells per 10 cm plate) and cultured in 10 ml E8 supplemented with ROCK inhibitor and 0.8 mg/mL puromycin (Takara; 631305).After four days of replacing media with E8 supplemented with 1 mg/mL puromycin, cells were collected and cryopreserved in liquid N 2 .
Rat primary astrocyte isolation and culture
All experiments were performed in accordance with the guidelines set forth by The Children's Hospital of Philadelphia and The University of Pennsylvania Institutional Animal Care and Use Committees.Primary rat astrocytes were isolated from brains of Sprague Dawley rats (Charles River Laboratories, Wilmington, MA RRID: RGD_737891) at postnatal day 1 and plated on poly-D-lysine-coated T75 flasks and cultured in Neurobasal medium (GibcoÔ/Thermo Fisher Scientific; 21103049) with 2% B27 supplement at 37 C with 5% CO 2 as previously described. 61After 24 hours, growth medium was changed to Neurobasal medium with B27 and 10 ng/ml bFGF, 2 ng/ml PDGF-AA, and 1 ng/ml neurotrophin-3.Once cells reached confluence (around DIV7), astroglia cells were separated from oligodendroglial cells using the ''shake-off'' method, 62 and remaining astroglial were split into new T75 flasks using Pre-equilibrated i 3 Neuron Maintenance Media was added to each well twice per week.At DIV13, HEK cells transfected 24 hours prior with 6 mL:1 mg mix of FUGENE (Progema; E2692) and pBI-NL1-BFP (bicistronic vector expressing untagged NL1 and cytosolic BFP) were added to the axonal chamber of the XonaChip.XonaChips containing DIV14 i 3 Neurons and HEK cells were fixed for immunocytochemistry 24 hours after HEK cell addition.
FM4-64 synaptic cycling assay
For FM-dye labeling of active synaptic vesicle cycling, cells were exposed to 10 µM SynaptoRed™ C2 (equivalent to FM® 4-64; Biotium; 70027) in high-K+ Stimulation Buffer consisting of 31.5 mM NaCl, 90 mM KCl, 5 mM Hepes, 1 mM MgCl2, 2 mM CaCl2, 30 mM glucose, and 50 µM D-AP5 (Thomas Scientific; 14539-5) for 2 min. Cells were then incubated with 10 µM SynaptoRed™ C2 in Wash Buffer consisting of 50 µM D-AP5 and 10 µM CNQX in Hibernate A for 5 min. Cells were washed with Wash Buffer alone, Wash Buffer with 1 mM ADVASEP-7 (Biotium; 70029), and finally Wash Buffer alone. Cells were then either live imaged to assess FM4-64 release or fixed in 4% paraformaldehyde/4% sucrose in PBS for 10 min at room temperature and immunostained for synaptic markers (details below). For live imaging of FM4-64 dye, 0.2 µm step z-stacks encompassing the cell layer were acquired before depolarizing the culture by addition of Stimulation Buffer. Z-stacks of the same field were acquired 1 minute and 5 minutes following depolarization.
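The protocol above acquires matched z-stacks before and after depolarization; the fractional loss of FM4-64 signal within labeled puncta is one natural readout of vesicle cycling. The snippet below is only an illustrative sketch of such a quantification: the file names, the simple threshold-based punctum mask, and the background handling are assumptions, not the authors' documented analysis.

```python
import numpy as np
import tifffile

# Hypothetical z-stacks acquired before and 5 min after K+ depolarization
pre = tifffile.imread("fm464_prestim_zstack.tif").max(axis=0).astype(float)
post = tifffile.imread("fm464_poststim_5min_zstack.tif").max(axis=0).astype(float)

# Crude punctum mask from the pre-stimulation projection (threshold is arbitrary here)
mask = pre > (pre.mean() + 3 * pre.std())

# Fractional FM4-64 unloading within labeled puncta; background taken outside the mask
bg_pre, bg_post = pre[~mask].mean(), post[~mask].mean()
f_pre = (pre[mask] - bg_pre).mean()
f_post = (post[mask] - bg_post).mean()
print(f"fractional unloading = {1 - f_post / f_pre:.2f}")
```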
Figure 1 .
Figure 1.Presynaptic accumulation sites along human i 3 Neuron axons are hotspots for microtubule polymerization and SVP pause events (A) Human iPSCs induced into i 3 Neurons were live imaged for mScarlet-Syp and GFP-MACF43 on DIV21 at a rate of 1 frame/s to visualize microtubule comets in relation to stable SVP accumulations.(B) Representative times series (over 42 seconds) from an SVP+ site experiencing a microtubule comet termination (yellow arrowheads) and initiation event (white arrowheads) corresponding to event #1 in the kymograph in panel C. SVP+ sites of stably accumulated SVPs are numbered and emphasized in gray.See also Figure S1 and Videos S1, S2, S3, and S4.(C) Kymographs of MACF43-GFP microtubule comet events in axonal regions housing SVP+ sites displayed in (A) and Figure S1A.Upper panels: mScarlet-Syp signal reveals SVP+ sites.Lower panels: GFP-MACF43 kymographs display location and timing of new microtubule polymerization.SVP+ sites are emphasized in gray.Arrowheads highlight initiation events (white) and termination events (yellow) at numbered SVP+ sites (1-3) displayed in (A) and Figure S1A.
(D) Stable SVP coverage as percent of total axon distance in DIV21 i3Neurons (stable SVP µm/total analyzed axon µm, dark gray). Error bars represent the standard deviation of the dataset. (E) Percent of microtubule (MT) comets that occur at stable SVP regions (SVP+-associated comets/total comets, dark gray). Error bars represent the standard deviation of the dataset. (F) Microtubule comet events associated with SVP− vs. SVP+ axonal regions standardized to analyzed kymograph distance (SVP− or SVP+ distance) and time. Error bars represent the standard deviation of the replicate means. (G) Microtubule comet initiation and termination events standardized to distance and time in axonal areas lacking stable SVPs (SVP−, light gray) and populated by stable SVPs (SVP+, dark gray). (H) Ratios of microtubule comet initiation and termination in SVP+/SVP− regions. Percent of axons with ratios of 0-1, 1-5, and 5+ are displayed in light, medium, and dark purple. (I) Human iPSCs induced into i3Neurons were rapidly live imaged for mScarlet-Syp on DIV35 at a rate of 5 frames/s to visualize SVP pausing in relation to stable SVP accumulations. (J) Example kymographs of axonal motility of SVPs. Anterograde and retrograde tracks are green and magenta, respectively. Upper panels: still image of axonal mScarlet-Syp signal. Lower panels: kymograph of SVPs imaged at 200 ms per frame. SVP+ sites of stably accumulated SVPs are emphasized in gray. Examples of anterograde and retrograde SVP pauses are highlighted with white and yellow asterisks, respectively. (K) Anterograde and retrograde SVP pause events standardized for vesicle flux and distance in axonal areas lacking stable SVPs (SVP−, light gray) and populated by stable SVPs (SVP+, dark gray). Each of the paired data points represents SVP− and SVP+ pausing frequency within one axon. Reported p values are from multiple paired t tests (between SVP+ and SVP− values) and from one-way repeated measures ANOVA (between anterograde and retrograde values). (L) Stable SVP coverage as percent of total axon distance at DIV35 (stable SVP µm/total analyzed axon µm, dark gray). Error bars represent the standard deviation of the dataset. (M) Percent of SVP pauses that occur at stable SVP regions (SVP+ pauses/total pauses, dark gray) in the anterograde and retrograde directions. Error bars represent the standard deviation of the dataset. See also Figures S1 and S2 and Videos S1, S2, S3, and S4.
Figure 2 .
Figure 2. Synaptic characterization in human i3Neurons reveals accumulation of presynaptic cargo prior to robust synapse assembly. (A) Representative immunocytochemistry maximum-projection images of endogenous MAP2 (somatodendritic compartment, blue), PSD-95 (postsynaptic marker, green), and synapsin I/II (Syn, presynaptic marker, magenta) in DIV21 i3Neurons in monoculture (top) and DIV42 i3Neurons co-cultured with primary rat neurons. White dashed line demarks the region used for the intensity profile of each channel provided to the right. (D) Syn+ presynaptic puncta (magenta) of bona fide synapses (top) and Syn-only protosynapses (bottom) contain cycling synaptic vesicles as visualized by FM4-64 dye (yellow). (E) Microtubule +TIP protein EB3 (yellow) reveals microtubule plus ends at Syn+ (magenta) bona fide synapses (top) and Syn-only protosynapses (bottom). (F) Spastin (yellow) is found at Syn+ (magenta) bona fide synapses (top) and Syn-only protosynapses (bottom). (G-I) FM4-64 (G), EB3 (H), and spastin (I) intensity in axonal regions lacking synapsin signal (Syn−), at protosynaptic accumulations consisting of synapsin without a postsynaptic partner (Protosyn), and at synapsin puncta apposed by PSD-95 signal (Synapse). Intensities are normalized to Syn− values. Error bars represent the standard deviation of the replicate means. Provided p values are from one-way repeated measures ANOVA of replicate means. See also Figure S3.
(F) Microtubule comet density in SVP− (left graph) or SVP+ (right graph) regions in control, Sp KD, and Sp OE axons. Sp OE leads to more microtubule comets positioned outside SVP+ regions, while Sp KD leads to fewer SVP+ microtubule comets. Error bars represent the standard deviation of the replicate means. Reported p values are from one-way repeated measures ANOVA. (G) Normalized Syb intensity at SVP+ sites in control, Sp KD, and Sp OE axons. Reported p values are from one-way repeated measures ANOVA. Error bars represent the standard deviation of the replicate means. (H) SVP+ site density (number of stable SVP+ puncta/10 µm) for control, Sp KD, and Sp OE. Reported p values are from one-way repeated measures ANOVA. Error bars represent the standard deviation of the replicate means.
Figure 5 .
Figure 5. Inhibition of spastin-severing activity interrupts positioning of new microtubule polymerization. (A) Schematic of the spastazoline experiment paradigm. Wild-type i3Neurons expressing mScarlet-Syb-IRES-GFP-MACF43 were treated overnight with spastazoline or DMSO and live imaged at a rate of 1 frame/s on DIV21 with continued drug treatment. (B and C) Representative kymographs of axonal microtubule comet events in i3Neurons treated with DMSO (DMSO; B) and 10 µM spastazoline (SPZ; C). Upper panels: mScarlet-Syb (Syb) signal demarks SVP+ sites. Lower panels: GFP-MACF43 kymographs and tracks reveal location and timing of new microtubule polymerization. SVP+ sites are emphasized in gray. Corresponding tracks differentiate microtubule comets not associated with SVP+ regions (SVP−, blue) from comets that initiate, terminate, and/or pass through an SVP+ site (SVP+, pink).
(D and E) New microtubule polymer added (D) and microtubule polymerization rate (E) in DMSO- and SPZ-treated axons. Reported p values are from unpaired t test of experimental replicate means. Error bars represent the standard deviation of the replicate means. (F) Stable SVP coverage as percent of total axon distance (stable SVP µm/total analyzed axon µm, dark gray). Error bars represent the standard deviation of the dataset. (G) Percent of microtubule comets that initiate, terminate, or pass through stable SVP regions (SVP+ comets/total comets, dark gray). Error bars represent the standard deviation of the dataset. (H) Microtubule comet density in SVP− (left plot) or SVP+ (right plot) regions in DMSO- or SPZ-treated axons. SPZ inhibition of spastin-severing activity leads to more microtubule comets positioned outside SVP+ regions and fewer within. Reported p values are from unpaired t test of experimental means. Error bars represent the standard deviation of the replicate means.
(K) Normalized Syb intensity at SVP+ sites in DMSO control and SPZ-treated axons.Reported p values are from unpaired t test of replicate means.Error bars represent the standard deviation of the replicate means.See also Figure S5.
Figure 6 .
Figure 6.Depletion of neuronal spastin interrupts localization of anterograde presynaptic cargo pausing and retention (A) Representative kymographs of axonal SVP movement in DIV21 i 3 Neurons expressing mScarlet-Syb for CRISPRi non-targeting control (control) and spastin CRISPRi-mediated knockdown (Sp KD).Images were collected at a frame rate of 200 ms to capture rapid SVP movement.Anterograde and retrograde trafficking is displayed in green and magenta, respectively.SVP+ sites are emphasized in gray.Arrows highlight examples of anterograde SVP retention (white), pausing at stable SVP+ sites (yellow), and pausing at regions lacking stable SVPs (blue).(B) Axonal SVP+ coverage as percent of total axon distance (stable SVP mm/total analyzed axon mm, dark gray) for control and Sp KD.Error bars represent the standard deviation of the dataset.(C) Percent of SVP pauses that occur at SVP+ (dark gray) or SVPÀ (light gray) regions in the anterograde (left) or retrograde (right) directions.Error bars represent the standard deviation of the dataset.
(
D) Anterograde SVP pause events standardized for vesicle flux and distance in axonal areas lacking stable SVPs (SVPÀ, light gray) and populated by stable SVPs (SVP+, dark gray) for control and Sp KD axons.Paired data points represent SVPÀ and SVP+ pausing frequency within one axon.Reported p values are from multiple paired t tests (between SVP+ and SVPÀ values) and from one-way repeated measures ANOVA (between control and Sp KD values).(E) Ratios of SVP pause frequency in SVP+/SVPÀ regions in the anterograde direction.Percent of axons with ratios of 0-1, 1-5, and 5+ are displayed in light, medium, and dark purple.(F) Number of retentions at SVP+ sites per anterograde SVP for control and Sp KD.Sp KD significantly decreases the number of SVP+ retentions.Error bars represent the standard deviation of the replicate means and provided p values were determined by t test comparing the four experimental replicate means.(G) Percent of anterograde SVPs retained at SVP+ regions (SVP+ retentions/total retentions; dark gray) for control and Sp KD.Error bars represent the standard deviation of the data set.(H) Retrograde SVP pause events standardized for vesicle flux and distance in axonal areas lacking stable SVPs (SVPÀ, light gray) and populated by stable SVPs (SVP+, dark gray) for control and Sp KD axons.Reported p values for SVP+ and SVPÀ comparisons were determined by multiple paired t tests.p values for control and Sp KD comparisons were determined by one-way repeated measures ANOVA.(I) Ratios of SVP pause frequency in SVP+/SVPÀ regions in the retrograde direction.Percent of axons with ratios of 0-1, 1-5, and 5+ are displayed in light, medium, and dark purple.(J) Number of retentions at SVP+ sites per retrograde SVP for control and Sp KD.Error bars represent the standard deviation of the replicate means and provided p values were determined by t test comparing the four experimental replicate means.(K) Percent of retrograde SVPs retained at SVP+ regions (SVP+ retentions/total retentions; dark gray) for control and Sp KD.Error bars represent the standard deviation of the data set.See also Figure S6.
Figure 7 .
Figure 7. Spastin regulates presynaptic component accumulation in a human heterologous synapse assay. (A) Heterologous synapse model setup, with red inset showing βIII-tubulin+ axons forming Syn+ presynapses with NL1+ transfected HEK cells. Dotted white line emphasizes a non-transfected HEK cell. Scale bars, 20 µm. (B and C) Maximum intensity projections of i3Neuron axons crossing NL1+ HEK cells. Scale bars, 10 µm. (B) Presynaptic markers synapsin (Syn), synaptophysin-1 (Syp), and excitatory transmitter VGLUT1 accumulate at heterologous synapses. (C) Spastin is enriched at Syb+ presynapses. White box/inset highlight presynaptically enriched spastin at heterologous synapses. (D) Spastin intensity in heterologous synapses crossing NL1-expressing HEK cells (NL1+) and axonal regions crossing HEK cells not expressing NL1 (NL1−), normalized to spastin values in axons crossing NL1+ HEK cells. Error bars represent the standard deviation of the replicate means and provided p values were determined by t test comparing the means of three experimental replicates. (E) Representative maximum intensity projections of CRISPRi non-targeting control (control) and spastin CRISPRi-mediated knockdown (Sp KD) axons crossing pBI-BFP-NL1-expressing HEK cells (blue), stained for Syn (magenta) and Syb (green). Scale bars, 5 µm. (F) Plots display Syn (upper plot) and Syb (lower plot) normalized intensity values at heterologous synapses in Sp KD and control axons normalized to control. Error bars represent the standard deviation of the replicate means and provided p values were determined by t test comparing the means of three experimental replicates. (G) Model for spastin regulation of presynaptic cargo delivery. In wild-type neurons, presynaptic accumulations along the axon are enriched for spastin and exhibit increased localized microtubule polymerization. Spastin-mediated amplification of microtubule plus ends leads to increased anterograde SVP pausing/retention and overall presynaptic component accumulation. Upon spastin depletion, axons exhibit a decrease in localized microtubule growth events and reduced anterograde SVP pausing/retention, resulting in fewer and less-intense presynaptic accumulations in spastin knockdown neurons. See also Figure S7. | 2023-08-12T13:12:08.264Z | 2024-02-01T00:00:00.000 | {
"year": 2024,
"sha1": "7b00fc84475a72cc2ec518443bc7cc90cad4ee38",
"oa_license": "CCBYNC",
"oa_url": "http://www.cell.com/article/S0960982224003075/pdf",
"oa_status": "HYBRID",
"pdf_src": "Elsevier",
"pdf_hash": "80397bdde6bebe6ab5c8e1557f2a2da383008dd4",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology"
]
} |
208947369 | pes2o/s2orc | v3-fos-license | Long Non-Coding RNA HOTAIR Modulates KLF12 to Regulate Gastric Cancer Progression via PI3K/AKT Signaling Pathway by Sponging miR-618
Purpose Long non-coding RNA (lncRNA) HOX transcript antisense RNA (HOTAIR) has been reported to be dysregulated in many tumors. However, the mechanism of HOTAIR in GC has rarely been reported. Methods The levels of HOTAIR, microRNA-618 (miR-618) and Krueppel-like factor 12 (KLF12) in GC tissues and cells were detected by quantitative real-time polymerase chain reaction (qRT-PCR). Cell viability and apoptotic rate were assessed via cell counting kit-8 (CCK-8) assay and flow cytometry, respectively. Migrating and invading abilities were tested by Transwell assay. The protein levels of KLF12, p-PI3K, PI3K, p-AKT and AKT were measured by Western blot assay. The interactions between miR-618 and HOTAIR or KLF12 were predicted by DIANA tools, and dual-luciferase reporter assay and RNA immunoprecipitation (RIP) assay were then conducted to validate these interactions. In addition, a xenograft tumor experiment was performed to further verify the roles of HOTAIR in GC. Results The levels of HOTAIR and KLF12 were significantly upregulated and the level of miR-618 was strikingly downregulated in GC tissues and cells. miR-618 was verified as a direct target of HOTAIR and was shown to directly target KLF12. HOTAIR silencing blocked GC progression and the PI3K/AKT signaling pathway by sponging miR-618 and also restrained xenograft tumor growth in vivo. miR-618 inhibited GC progression and the PI3K/AKT signaling pathway by targeting KLF12. Mechanistically, HOTAIR modulated KLF12 expression by sponging miR-618 in GC cells. Conclusion These data revealed that HOTAIR promoted GC progression through the PI3K/AKT signaling pathway via the miR-618/KLF12 axis.
Introduction
Gastric cancer (GC) is the second leading cause of cancer death worldwide, especially in some eastern Asian countries, including China, Japan and Korea. [1][2][3] Despite some improvements in early detection and therapeutics in recent decades, the survival time of GC patients is still short, particularly in the advanced stage. 4,5 Therefore, it is urgent to search for novel therapeutic targets for GC patients.
Long non-coding RNAs (lncRNAs), a group of non-coding RNAs with the length of more than 200 nucleotides (nt), may affect target gene expressions at the transcriptional and posttranscriptional stages. 6 In GC, a number of reports showed that lncRNAs, including lncRNA GIHCG, 7 ATB, 8 FEZF1 antisense RNA 1 (FEZF1-AS1), 9 SNHG15 10 and CRNDE, 11 were aberrantly expressed as well as related to the processes in cancer progression. HOX transcript antisense RNA (HOTAIR), located on human chromosome 12, has been documented to play an oncogenic role in cancer. Previous researches indicated that HOTAIR dysregulation was associated with cancer progressions, such as ovarian cancer 12 and colon cancer. 13 However, the biological mechanism of HOTAIR in GC was rarely reported.
MicroRNAs are a class of small RNAs with about 22 nt in length and suppress target gene expression by inhibiting the translation of message RNAs (mRNAs) or mediating the degradation mRNAs. 14 Emerging evidence implicated that microRNA miR-618 was abnormally expressed in breast cancer, 15 prostate cancer 16 and anaplastic thyroid cancer, 17 as well as in GC. 18 Krueppel-like factor 12 (KLF12) is encoded by the KLF12 gene which is located on human chromosome 13. KLF12 was also reported to dysregulate in endometrial cancer 19 and GC. 20 Phosphoinositide 3-kinases (PI3K)/protein kinase B (AKT) signaling pathway, a signal transduction pathway and one of the most frequently deregulated pathways in cancer, is implicated to the pathogenesis of various human cancers. 21 However, the mechanisms of miR-618 and KLF12 were barely defined in GC. In this study, we mainly explored the mechanism of HOTAIR in GC, thus in turn providing novel therapeutic target for GC patients.
Tissue Samples
The study was approved by the Ethics Committee of The First Affiliated Hospital of Zhengzhou University and performed according to the Declaration of Helsinki Principles. Thirty-five GC tissue samples were collected from The First Affiliated Hospital of Zhengzhou University as well as thirty-five corresponding adjacent normal tissue samples. The GC patients (n=35) were divided into two groups: patients with low HOTAIR expression (n=16) and patients with high HOTAIR expression (n=19). All tissue samples were immediately frozen in a −80°C refrigerator until further use. Written informed consent was provided by all GC patients or guardians.
Cell Counting Kit-8 (CCK-8) Assay
CCK-8 (Beyotime, Shanghai, China) was used to measure cell viability. In brief, MGC-803 and AGS cells (4×10³ per well) were seeded into 96-well plates and incubated for 0 hr, 24 hrs, 48 hrs and 72 hrs. Then, 10 μL CCK-8 reagent was added to each well and incubated for another 2 hrs. The absorbance at 450 nm was measured with a spectrophotometer (Thermo Fisher Scientific).
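For illustration, the normalization step implied by this assay (blank-corrected OD450 expressed relative to the 0 hr reading) can be sketched as follows; the group names and absorbance values are hypothetical placeholders, not data from this study.

import numpy as np

# Normalize CCK-8 absorbance (OD450) to the 0 h reading so growth curves can
# be compared across transfection groups. Values below are made up.
timepoints_h = np.array([0, 24, 48, 72])
od450 = {
    "si-NC":       np.array([0.21, 0.48, 0.95, 1.60]),
    "si-HOTAIR#2": np.array([0.20, 0.35, 0.55, 0.80]),
}
blank = 0.05  # medium + CCK-8 reagent without cells

for group, values in od450.items():
    corrected = values - blank
    relative = corrected / corrected[0]   # fold change vs 0 h
    print(group, np.round(relative, 2))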
Transwell Assay
Transwell chambers (Corning, Tewksbury, MA, USA) were used to detect the migrating and invading abilities of transfected MGC-803 and AGS cells. For migration, the upper chamber was filled with serum-free DMEM, while DMEM with 10% FBS was added to the lower chamber. After 24-hr incubation, the cells in the lower chamber were fixed with 4% methanol and then stained with 0.1% crystal violet for 24 hrs. Cells were counted in 5 randomly selected fields under a microscope (Olympus, Tokyo, Japan). For invasion, the protocol was similar to that for migration, except that the Transwell chamber was coated with Matrigel matrix (BD Biosciences, San Jose, CA, USA).
Cell Apoptosis Assay
Annexin V/PI cell apoptosis analysis kit (Servicebio, Wuhan, China) was used to evaluate the apoptotic rate of transfected MGC-803 and AGS cells. After digestion, the re-suspended samples were stained with Annexin V fluorescein isothiocyanate (FITC) and propidium iodide (PI) and further incubated for 15 mins. The cell apoptotic rate was analyzed by flow cytometry (BD Biosciences).
Western Blot Assay
Protein from MGC-803 and AGS cells was extracted with RIPA Lysis and Extraction Buffer (Thermo Fisher Scientific). Following measurement of protein concentration, the samples were separated via sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) and then transferred onto a PVDF membrane (Millipore, Billerica, MA, USA). The membranes were blocked in skim milk for 2 hrs, incubated with primary antibody overnight at 4°C, and then incubated with secondary antibody for another 2 hrs. The chemiluminescence intensity was evaluated using the BeyoECL Plus Kit (Beyotime). All antibodies were purchased from Bioss (Beijing, China).
Dual-Luciferase Reporter Assay
The interactions between miR-618 and HOTAIR or KLF12 were predicted by DIANA tools (http://diana.imis.athena-innovation.gr). The sequence of HOTAIR or its mutant was cloned into the psiCHECK2 vector (Promega, Madison, WI, USA) to construct the luciferase reporters HOTAIR WT and HOTAIR MUT. The luciferase reporter HOTAIR WT or HOTAIR MUT was co-transfected with miR-618 mimics or miR-NC using Lipofectamine 2000 Reagent (Invitrogen). The same protocol was applied to the KLF12 3ʹUTR.
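As an illustrative sketch of the downstream readout (not described in detail above), relative luciferase activity can be computed by normalizing the reporter signal to the intra-plasmid control. The assumption here is that the cloned sequence drives Renilla luciferase in psiCHECK2 with firefly luciferase as the control; all readings below are hypothetical.

# Relative luciferase activity = (Renilla / firefly), scaled to the WT + miR-NC group.
readings = {
    # group: (Renilla signal, firefly signal) -- hypothetical luminometer counts
    "HOTAIR-WT + miR-NC":   (5200, 10100),
    "HOTAIR-WT + miR-618":  (1900, 9800),
    "HOTAIR-MUT + miR-NC":  (5100, 10300),
    "HOTAIR-MUT + miR-618": (4900, 9900),
}

control_ratio = readings["HOTAIR-WT + miR-NC"][0] / readings["HOTAIR-WT + miR-NC"][1]
for group, (renilla, firefly) in readings.items():
    relative = (renilla / firefly) / control_ratio
    print(f"{group:<24} relative luciferase activity = {relative:.2f}")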
RNA Immunoprecipitation (RIP) Assay
RIP assay was carried out using Magna RNA immunoprecipitation kit (Millipore). After lysis of MGC-803 and AGS cells with RIP lysis buffer, the sample was incubated with magnetic beads conjugated with anti-Ago2 or anti-IgG antibodies. The enrichment of RNA was measured by qRT-PCR.
Mice Xenograft Models
The nude mouse experiment was performed according to procedures approved by the Animal Care Committee of The First Affiliated Hospital of Zhengzhou University. Six-week-old BALB/c nude mice were randomly divided into two groups (n = 6 per group) and implanted with AGS cells transfected with sh-HOTAIR or sh-NC. Tumor volume was measured every 7 days (5 measurements in total) and calculated as (length × width²)/2. Thirty-five days after injection, the xenograft tumors were excised for weight measurement and further study.
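For illustration, the (length × width²)/2 calculation can be applied to serial caliper measurements as sketched below; the measurement values are hypothetical.

# Tumor volume from caliper length/width, using the formula stated above.
def tumor_volume_mm3(length_mm: float, width_mm: float) -> float:
    return length_mm * width_mm ** 2 / 2

measurements = [(6.0, 4.5), (8.2, 6.1), (11.0, 8.3)]  # (length, width) per weekly measurement
for day, (length, width) in zip(range(7, 7 * len(measurements) + 1, 7), measurements):
    print(f"day {day}: {tumor_volume_mm3(length, width):.1f} mm^3")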
Statistical Analysis
Statistical analysis was performed using GraphPad Prism 7 (GraphPad, La Jolla, CA, USA). All values are presented as means ± standard deviation (SD). Comparisons between two groups were processed by Student's t-test, while comparisons among three or more groups were analyzed by one-way analysis of variance (ANOVA) followed by Tukey's post hoc test. P<0.05 was considered statistically significant.
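For illustration only, the same comparisons can be reproduced outside GraphPad Prism with SciPy and statsmodels (an assumed extra dependency); the three groups below are simulated placeholder data.

import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Simulated replicate values for three treatment groups (n = 6 each).
rng = np.random.default_rng(0)
group_a = rng.normal(1.0, 0.10, 6)
group_b = rng.normal(1.6, 0.15, 6)
group_c = rng.normal(2.1, 0.20, 6)

# Two-group comparison: Student's t-test.
t_stat, p_two_group = stats.ttest_ind(group_a, group_b)
print(f"t-test p = {p_two_group:.4f}")

# Three or more groups: one-way ANOVA followed by Tukey's post hoc test.
f_stat, p_anova = stats.f_oneway(group_a, group_b, group_c)
print(f"ANOVA p = {p_anova:.4f}")

values = np.concatenate([group_a, group_b, group_c])
labels = ["A"] * 6 + ["B"] * 6 + ["C"] * 6
print(pairwise_tukeyhsd(values, labels))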
HOTAIR Was Strikingly Upregulated in GC Tissues and Cells
It is well known that lncRNA HOTAIR acts as an oncogenic molecule in different cancer cells. To explore its potential roles in GC, we first detected the level of HOTAIR in GC tissues and cells. The qRT-PCR results showed that the level of HOTAIR was conspicuously elevated in GC tissues relative to corresponding adjacent normal tissues (Figure 1A), as well as in the human gastric carcinoma cell lines MGC-803 and AGS compared with the human gastric epithelial cell line GES-1 (Figure 1C). In addition, we demonstrated that patients with a high level of HOTAIR had a low survival rate, while low HOTAIR expression was associated with a high survival rate (Figure 1B). These results implied that lncRNA HOTAIR was markedly elevated in GC tissues and cells.
HOTAIR Knockdown Suppressed Cell Proliferation, Migration, Invasion and the PI3K/AKT Signaling Pathway While Inducing Cell Apoptosis in MGC-803 and AGS Cells
To investigate the functions of HOTAIR, si-HOTAIR was transfected into MGC-803 and AGS cells. The qRT-PCR results confirmed the knockdown efficiency, indicated by the notable downregulation of HOTAIR in MGC-803 and AGS cells (Figure 2A and B). Furthermore, the CCK-8 assay showed that si-HOTAIR significantly constrained cell viability in MGC-803 and AGS cells (Figure 2C and D). The Transwell assay showed that transfection of si-HOTAIR#2 remarkably reduced the migrating and invading abilities of MGC-803 and AGS cells (Figure 2E and F). Flow cytometry showed that the apoptotic rate was distinctly increased in MGC-803 and AGS cells transfected with si-HOTAIR#2 (Figure 2G). Besides, the Western blot assay indicated that the ratios of p-PI3K/PI3K and p-AKT/AKT were both decreased in MGC-803 and AGS cells transfected with si-HOTAIR#2 (Figure 2H). These data indicated that HOTAIR depletion repressed cell proliferation, migration, invasion and the PI3K/AKT signaling pathway but promoted cell apoptosis in MGC-803 and AGS cells.
miR-618 Was Negatively Interacted with HOTAIR and Was Markedly Decreased in GC Tissues and Cells
To search the biological mechanism of HOTAIR in GC, DIANA tools online database was utilized to predict the putative target of HOTAIR. The results showed that miR-618 had complementary binding sites with HOTAIR ( Figure 3A). Following dual-luciferase reporter assay indicated that the transfection with miR-618 mimics resulted in the dramatical decrease of luciferase activity of HOTAIR WT reporter in MGC-803 and AGS cells, while the luciferase activity of HOTAIR MUT had no significant fluctuation in any group ( Figure 3B and C). Also, the RIP assay presented that the level of HOTAIR was distinctly more enriched by Ago2 antibody in MGC-803 and AGS cells transfected with miR-618 in contrast to in IgG group ( Figure 3D). Moreover, the qRT-PCR results showed that the level of miR-618 was significantly downregulated in GC tissues and cells ( Figure 3E and F). The scatter diagram indicated that the level of HOTAIR was negatively linear correlated with the level of miR-618 ( Figure 3G). Additionally, the level of HOTAIR was strikingly increased and the level of miR-618 was evidently reduced in MGC-803 and AGS cells transfected with HOTAIR, while the transfection with si-HOTAIR#2 contributed to the downregulation of HOTAIR and the upregulation of miR-618 ( Figure 3H and I).
Taken together, these data demonstrated that miR-618 was a direct target of HOTAIR and was downregulated in GC tissues and cells. To explore the functions of HOTAIR and miR-618 in GC, si-HOTAIR#2 and in-miR-618 were co-transfected into MGC-803 and AGS cells. The CCK-8 assay showed that transfection with miR-618 inhibitor reversed the suppressive effect of si-HOTAIR#2 on cell viability in MGC-803 and AGS cells (Figure 4A and B). Subsequently, the Transwell assay exhibited that miR-618 inhibitor mitigated the repressive impacts on the migrating and invading abilities of MGC-803 and AGS cells transfected with si-HOTAIR#2 (Figure 4C and D). Furthermore, flow cytometry showed that the apoptotic rate was reverted in MGC-803 and AGS cells co-transfected with si-HOTAIR#2 and in-miR-618 compared with the si-HOTAIR#2 group (Figure 4E). Besides, the Western blot assay demonstrated that downregulation of miR-618 alleviated the suppressive effects of si-HOTAIR#2 on the ratios of p-PI3K/PI3K and p-AKT/AKT in MGC-803 and AGS cells (Figure 4F). To sum up, these results revealed that miR-618 inhibitor relieved the inhibitory impacts on cell proliferation, migration, invasion and the PI3K/AKT signaling pathway, as well as the accelerating effect on cell apoptosis, induced in MGC-803 and AGS cells by HOTAIR depletion.
KLF12 Was a Direct Target of miR-618 and Significantly Enhanced in GC Tissues and Cells
To explore the mechanism of miR-618 in GC, the DIANA tools online database was used to search for putative targets of miR-618. The results showed that the KLF12 3ʹUTR had complementary sequences with miR-618 (Figure 5A). The dual-luciferase reporter assay indicated that the luciferase activity of the KLF12 3ʹUTR-WT reporter was effectively reduced in MGC-803 and AGS cells transfected with miR-618 relative to the miR-NC group; however, the luciferase activity of the KLF12 3ʹUTR-MUT reporter had no apparent change in any treatment (Figure 5B and C). Moreover, the qRT-PCR assay indicated that the level of KLF12 was remarkably augmented in GC tissues and cells (Figure 5D and E). In addition, the scatter plot exhibited that the level of KLF12 was negatively linearly correlated with the level of miR-618 (Figure 5F). The Western blot assay implied that the protein level of KLF12 was markedly reduced in MGC-803 and AGS cells transfected with miR-618 in comparison with the miR-NC group (Figure 5G). These results manifested that KLF12 negatively interacted with miR-618 and was significantly upregulated in GC tissues and cells. Based on these results, we demonstrated that KLF12 was a direct target of miR-618. Subsequently, the functions of miR-618 and KLF12 were further studied. The CCK-8 assay exhibited that KLF12 overexpression relieved the reduction in cell viability caused by miR-618 in MGC-803 and AGS cells (Figure 6A and B). Meanwhile, the Transwell assay indicated that overexpression of KLF12 mitigated the restraining effects of miR-618 on cell migration and invasion abilities in MGC-803 and AGS cells (Figure 6C and D). Furthermore, the flow cytometry results showed that the apoptotic rate was apparently elevated in MGC-803 and AGS cells transfected with miR-618, while KLF12 overexpression weakened this promoting effect on the cell apoptotic rate (Figure 6E). Besides, the Western blot assay uncovered that KLF12 overexpression regained the ratios of p-PI3K/PI3K and p-AKT/AKT that were inhibited by miR-618 in MGC-803 and AGS cells (Figure 6F). These data suggested that KLF12 overexpression receded the constraining effects on cell proliferation, migration and invasion and the PI3K/AKT signaling pathway, and the promoting impact on cell apoptosis, caused by miR-618 mimics in MGC-803 and AGS cells.
HOTAIR Depletion Downregulated KLF12 Expression by Targeting miR-618
To explore the relationship among HOTAIR, miR-618 and KLF12, MGC-803 and AGS cells were co-transfected with si-HOTAIR#2 and in-miR-618. The qRT-PCR showed that the level of KLF12 was regained in MGC-803 and AGS cells transfected with in-miR-618 suppressed by si-HOTAIR#2 ( Figure 7A). Also, Western blot assay indicated that miR-618 inhibitor rescued the protein level of KLF12 in MGC-803 and AGS cells transfected with si-HOTAIR#2 ( Figure 7B). In addition, the qRT-PCR results exhibited that the level of HOTAIR was positively linear correlated with the level of KLF12 ( Figure 7C). These data unraveled that HOTAIR silencing downregulated KLF12 expression by regulating miR-618.
HOTAIR Depletion Constrained Xenograft Tumor Growth in vivo
To further investigate the role of HOTAIR in GC, sh-HOTAIR was transfected into AGS cells, which were then injected into mice. The measurements indicated that tumor volume and weight were both decreased in mice injected with sh-HOTAIR-transfected cells relative to the sh-NC group (Figure 8A and B). The qRT-PCR results showed that the level of HOTAIR was notably decreased and the level of miR-618 was obviously enhanced in the sh-HOTAIR group compared with the sh-NC group (Figure 8C and D). The qRT-PCR and Western blot assays showed that the mRNA and protein levels of KLF12 were both distinctly downregulated in the sh-HOTAIR group (Figure 8E and F). Besides, the Western blot assay indicated that the ratios of p-PI3K/PI3K and p-AKT/AKT were also evidently decreased in the sh-HOTAIR group (Figure 8G). In sum, these results uncovered that HOTAIR silencing restrained xenograft tumor growth in vivo.
Discussion
Gastric cancer is the second leading cause of cancer-related death in the world. 3 Accumulating evidence indicates that lncRNAs play crucial roles in cancer. In this research, we explored the biological mechanism of lncRNA HOTAIR in GC. The results indicated that HOTAIR modulated KLF12 expression to promote cell proliferation, migration and invasion and repress cell apoptosis in GC through the PI3K/AKT signaling pathway by sponging miR-618.
Recent studies indicated that HOTAIR is aberrantly expressed in diverse cancers. For example, a previous study in ovarian carcinoma indicated that the level of HOTAIR was apparently increased in ovarian carcinoma tissues and cell lines compared with negative controls. 12 Another study in colon cancer documented that the relative expression of HOTAIR was significantly upregulated in colon cancer tissues compared with matched adjacent normal tissues. 13 In this study, we validated that the level of lncRNA HOTAIR was significantly increased in GC tissues and cells (MGC-803 and AGS). Transfection of si-HOTAIR resulted in a remarkable decrease in cell viability and in migrating and invading abilities, as well as a distinct increase in the apoptotic rate, in MGC-803 and AGS cells. Besides, patients with a high level of HOTAIR had a low survival rate. The mouse xenograft models also indicated that HOTAIR silencing constrained xenograft tumor growth and the ratios of p-PI3K/PI3K and p-AKT/AKT in vivo. These results demonstrated that HOTAIR knockdown inhibited GC progression.
Recently, it has been proposed that lncRNAs may act as competing endogenous RNAs (ceRNAs) to recruit miRNAs, thus resulting in the derepression of miRNA targets. 22 For instance, a study in cervical cancer showed that HOTAIR functioned as a ceRNA to sponge miR-143-2p, thereby promoting cervical cancer cell growth. 23 Another study demonstrated that HOTAIR augmented cell proliferation, migration and invasion and suppressed cell apoptosis in colorectal cancer by sponging miR-197. 24 Also, Dong et al reported that HOTAIR promoted cell viability, migration, invasion and epithelial-mesenchymal transition (EMT) in GC by sponging miR-217. 25 In the present study, the dual-luciferase reporter assay and RIP assay verified that miR-618 was a direct target of HOTAIR. Besides, we found that the level of miR-618 was conspicuously decreased in GC tissues and cells and was negatively correlated with HOTAIR. Further functional experiments indicated that miR-618 inhibitor relieved the restraining effects on cell viability and on migrating and invading abilities, as well as the facilitating effect on the apoptotic rate, induced by HOTAIR depletion. These data manifested that HOTAIR promoted GC progression by sponging miR-618.
Emerging evidence indicates that KLF12 is involved in the progression of various cancers. For example, Ding et al reported that KLF12 facilitated cell proliferation and migration and inhibited cell apoptosis in vitro, as well as promoting tumor growth in vivo, in endometrial cancer. 19 Another study in GC indicated that lncRNA TTN-AS1 modulated the expression of KLF12 to induce cell proliferation, migration and invasion and to impair cell apoptosis by targeting miR-376b-3p. 20 In this study, we demonstrated that KLF12 negatively interacted with miR-618 based on the results of the dual-luciferase reporter assay. Furthermore, the level of KLF12 was apparently enhanced in GC tissues and cells and negatively correlated with the level of miR-618. The subsequent functional study implied that KLF12 overexpression alleviated the suppressive impacts on cell viability, migration and invasion ability, as well as the promoting impact on the apoptotic rate, caused by miR-618 overexpression in MGC-803 and AGS cells. The restoration experiments indicated that transfection of miR-618 inhibitor regained the mRNA and protein levels of KLF12 in MGC-803 and AGS cells transfected with si-HOTAIR. These results unraveled that HOTAIR contributed to GC progression via the miR-618/KLF12 axis.
Recent studies demonstrated that the PI3K/AKT signaling pathway is implicated in many processes of tumor progression, including cell proliferation, metastasis and apoptosis. 21,26 For example, a study in endometrial cancer indicated that KLF12 facilitated tumor progression by activating the PI3K/AKT signaling pathway. 19 Another study demonstrated that miR-618 blocked tumor progression in human thyroid carcinomas by confining the PI3K/AKT signaling pathway. 27 In the current research, HOTAIR depletion inhibited the protein levels of p-PI3K and p-AKT in GC cells by regulating miR-618, and miR-618 confined p-PI3K and p-AKT expression by targeting KLF12 in GC cells. Furthermore, p-PI3K and p-AKT were also decreased in the sh-HOTAIR group. These data disclosed that HOTAIR/miR-618/KLF12-induced GC progression was mediated by the PI3K/AKT signaling pathway.
In conclusion, we confirmed that the levels of HOTAIR and KLF12 were strikingly augmented and the level of miR-618 was drastically decreased in GC tissues and cells. Combined with the functional and mechanistic experimental results, we concluded that HOTAIR positively regulated KLF12 expression to promote GC progression through the PI3K/AKT signaling pathway by sponging miR-618.
Ethical Approval
The study was approved by the Animal Care Committee of The First Affiliated Hospital of Zhengzhou University and followed the National Institutes of Health guidelines for the welfare of animals. | 2019-11-28T12:30:35.926Z | 2019-11-27T00:00:00.000 | {
"year": 2019,
"sha1": "c2907a2411b71fbc370f8460532d057b034c84a4",
"oa_license": "CCBYNC",
"oa_url": "https://www.dovepress.com/getfile.php?fileID=54268",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "248ef2569698ffd5094fe9f55793ab7a662f6f62",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
8844125 | pes2o/s2orc | v3-fos-license | Aedes aegypti Larval Indices and Risk for Dengue Epidemics
Entomologic indices can identify areas at high risk for disease transmission.
While a vaccine is under research, without immediate prospect for success, vector control remains the only way to prevent dengue transmission (1)(2)(3). Vector control programs are essentially based on source reduction, eliminating Aedes aegypti larval habitats from the domestic environment, with increasing community involvement and intersectoral action in recent decades (4,5). However, current entomologic indicators do not seem to reliably assess transmission risks, define thresholds for dengue epidemic alerts, or set targets for vector control programs (6,7). Therefore, defining new indicators for entomologic surveillance, monitoring, and evaluation is among the research priorities of the World Health Organization Special Programme for Research and Training in Tropical Diseases.
Although only adult female Aedes mosquitos are directly involved in dengue transmission, entomologic surveillance has been based on different larval indices (8,9). The house index (HI, percentage of houses positive for larvae) and the Breteau index (BI, number of positive containers per 100 houses) have become the most widely used indices (6), but their critical threshold has never been determined for dengue fever transmission (9,10). Since HI≤1% or BI≤5 was proposed to prevent yellow fever transmission, these values have also been applied to dengue transmission but without much evidence (8,11). The Pan American Health Organization described 3 levels of risk for dengue transmission: low (HI<0.1%), medium (HI 0.1%-5%), and high (HI>5%) (12), but these values need to be verified (13). The vector density, below which dengue transmission does not occur, continues to be a topic of much debate and conflicting empiric evidence. For example, dengue outbreaks occurred in Singapore when the national overall HI was <1% (14). In contrast, researchers from Fortaleza, Brazil, found that dengue outbreaks never occurred when HI was <1% (15). However, different geographic levels are used to calculate the indices in the various studies, and the appropriated level for entomologic indices is in itself an issue of debate (16). Furthermore, the appropriateness of larval indices has been questioned; recently, as an alternative, pupal indices were developed by Focks et al. (7) to better reflect the risk for transmission. Still, their utility for source reduction programs is controversial, and the feasibility of pupal collection in routine Aedes surveillance is untested (17).
In this study, we assessed the usefulness of larval indices for identifying high-risk areas for dengue virus transmission. We examine the influence of measurements at different geographic levels, establish a threshold for epidemic outbreaks, and discuss their utility for community-based Aedes control programs.
Context
The Cuban dengue prevention program has been hailed as among the few success stories in Aedes control (18,19). It was initiated in 1981, during the first dengue hemorrhagic fever epidemic in the Americas (20). As a result of this effort, Cuba was free of dengue from 1982 to 1996, although Aedes was reported again from 1992 (21). In 1997, dengue transmission occurred in Santiago de Cuba, a municipality located in the eastern part of the country (22). The epidemic remained limited to this city, but Aedes mosquitoes were observed in 29 other municipalities, including Havana, the capital city, in the northwest of the country. After intensification of vector control activities in the entire country (22), HIs from 0.05% to 0.91% were observed in Havana between 1997 and 2001 (23). In spite of these low indices, an outbreak of 138 cases of dengue fever occurred in September and October 2000; both dengue 3 and dengue 4 viruses were isolated (1). Dengue serotypes 3 and 4 had never circulated in Cuba, and we can assume low or nonexistent immunity in the population. From June 2001 to February 2002, a new outbreak occurred, and 12,889 new dengue cases were confirmed (23).
Study Area
The study was conducted in Playa Municipality, in the northwest of Havana. The municipality has an area of 34.90 km² and a population of 182,485 inhabitants. It has an average annual temperature of 25°C and precipitation of 132.9 mm in the rainy season (May-October). The population density is 5,228 inhabitants per square kilometer. The municipality has a noncontinuous water supply (every 2 days) and irregular garbage collection. It is divided into 9 health areas, each providing primary care to ≈30,000 people. We performed an in-depth study in the 5 health areas where dengue transmission occurred in the September-October 2000 epidemic.
Study Design
We conducted a case-control study. Two units of analysis were used: blocks of houses (a block has on average 50 houses) and neighborhoods, which were defined as a block plus surrounding blocks (this definition generally results in clusters of 9 blocks with a radius of ≈100 m). These units are defined by manmade boundaries and not by ecologic determinants, per se, to usefully guide community-based control. We defined a "case" as a block (or neighborhood) of houses in the study area where ≥1 inhabitant was detected with confirmed dengue fever during the September-October 2000 outbreak. "Control" blocks (or neighborhoods) were randomly sampled from those in the study area where no dengue case was reported.
Dengue Fever
Dengue cases were defined as patients with fever and ≥2 symptoms of dengue fever such as myalgia, arthralgia, headache, and rash, with serologic confirmation by immunoglobulin M-capture enzyme-linked immunosorbent assay (1,12) at the national reference laboratory of viral diseases in the Institute of Tropical Medicine, Havana.
During the epidemic, suspected cases were identified through the health services. Additionally, a seroepidemiologic survey was conducted in the study area at the end of October 2000; all family physicians made home visits to families under their responsibility, searching for recent denguelike illnesses. Blood samples were collected from all persons with a history of fever.
All confirmed dengue patients (passively and actively found) were interviewed by their family physician, supervised by an epidemiologist of the health area, to determine the exact date of symptom onset and places visited in the 10 preceding days. The completeness of the collected information was verified by epidemiologists of the Institute of Tropical Medicine, and if necessary, patients were revisited.
Entomologic Information
We used entomologic surveillance data that were independently recorded by the National Vector Control Program. At 2-month intervals, vector control technicians exhaustively inspected every house in the Playa Municipality for larval stages of Ae. aegypti. We used data collected in 3 cycles, July-August 2000 (before the epidemic), September-October 2000 (during the epidemic), and November-December 2000 (after the epidemic). We extracted information on the number of inspected houses, positive containers (with Ae. aegypti pupae or larvae), and houses with ≥1 positive container. We eliminated 4.8% of the blocks from the study because they were not inspected in the 3 inspection cycles.
Data Analysis
We related all data collected to geographic coordinates by a unique house block code and introduced them into MapInfo software (MapInfo Corporation, Troy, NY, USA). Case-patients were located by their address in the corresponding block. For the 3 entomologic inspection cycles, HI and BI were calculated at the block, neighborhood, and health area level. Additionally, we identified the BI max, which is the highest or maximum BI at the block level for each neighborhood of the case and control blocks included in the study. This variable is derived with the following equation: BI max = max ∀i⊂N (BI i), where BI i is the BI of the ith block belonging to the concerned neighborhood N, and ∀i⊂N indicates that all BI i of N are considered to identify the BI with the highest value as BI max.
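For illustration, the indices defined above can be computed from per-block inspection counts as in the Python sketch below; the block/neighborhood layout and counts are hypothetical examples, not the Playa surveillance data.

# Compute HI, BI, and the neighborhood BI max from per-block inspection counts.
blocks = [
    # (block_id, neighborhood_id, houses_inspected, positive_houses, positive_containers)
    ("B1", "N1", 52, 1, 1),
    ("B2", "N1", 47, 0, 0),
    ("B3", "N1", 55, 3, 4),
    ("B4", "N2", 50, 0, 0),
    ("B5", "N2", 49, 1, 2),
]

def house_index(houses, positive_houses):
    return 100.0 * positive_houses / houses          # % of houses with larvae

def breteau_index(houses, positive_containers):
    return 100.0 * positive_containers / houses      # positive containers per 100 houses

block_bi = {b: breteau_index(h, c) for b, n, h, p, c in blocks}

neighborhoods = {}
for b, n, h, p, c in blocks:
    neighborhoods.setdefault(n, []).append((h, p, c, block_bi[b]))

for n, rows in neighborhoods.items():
    houses = sum(r[0] for r in rows)
    pos_houses = sum(r[1] for r in rows)
    containers = sum(r[2] for r in rows)
    bi_max = max(r[3] for r in rows)                  # highest block-level BI in the neighborhood
    print(f"{n}: HI={house_index(houses, pos_houses):.2f}%  "
          f"BI={breteau_index(houses, containers):.2f}  BImax={bi_max:.2f}")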
All data were exported to SPSS (SPSS Inc., Chicago, IL, USA) for analysis. We calculated the Spearman rank correlation coefficient between the different indices in the 3 inspection cycles. The entomologic indices were transformed to approximately normal distributions (by using square root transformation) for calculating means, standard deviations, and 95% confidence intervals. Differences in the distribution of the indices were assessed with the Mann-Whitney test.
We assessed the discriminative power of the indices by using receiver operating characteristic (ROC) curves. Their accuracy to discriminate between case and control blocks (and neighborhoods) was classified according to the value of the area under the ROC curve (AUC) (24) as noninformative (AUC≤0.5), less accurate (0.5<AUC≤0.7), moderately accurate (0.7<AUC≤0.9), highly accurate (0.9<AUC<1) and perfect (AUC = 1). The value of the indices with the highest sensitivity, >50% specificity, for discriminating case and control geographic units was taken as the optimal cutoff point. The lower limit of 50% specificity was set to safeguard positive predictive value and decrease the number of units falsely classified at high risk for dengue transmission, which triggers unnecessary action and generates unproductive costs. The association between the entomologic indices and dengue transmission was further explored by logistic regression models.
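As an illustrative sketch of this procedure, the AUC and the optimal cutoff (highest sensitivity among operating points with specificity above 50%) can be obtained with scikit-learn; the case/control labels and BI max values below are hypothetical.

import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Hypothetical block-level data: case status and pre-epidemic BI max.
is_case = np.array([1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0])
bi_max  = np.array([6.1, 4.3, 9.0, 0.0, 2.1, 0.0, 4.5, 5.0, 1.2, 0.0, 4.4, 2.0])

auc = roc_auc_score(is_case, bi_max)
fpr, tpr, thresholds = roc_curve(is_case, bi_max)
specificity = 1 - fpr

# Keep only operating points with specificity > 50%, then maximize sensitivity.
eligible = specificity > 0.5
best = np.argmax(np.where(eligible, tpr, -1))
print(f"AUC = {auc:.2f}")
print(f"optimal cutoff = {thresholds[best]:.2f} "
      f"(sensitivity {tpr[best]:.0%}, specificity {specificity[best]:.0%})")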
Results
During the epidemic, health services assisted 4,679 febrile patients in the 5 health areas included in the study. All patients were serologically tested 5 days after onset of fever, and dengue infection was confirmed in 47.
In the seroepidemiologic survey, 82.5% of the families were effectively visited by their family physician. The survey found 7,008 persons with symptoms of fever between September and October 2000 who had not previously attended the health services. Serum specimens were collected from all of them, and dengue infection was confirmed in 22.
As a result, 69 (47 passively identified plus 22 actively identified) dengue cases were confirmed, all patients were interviewed, and 4 cases epidemiologically related to outbreaks in other municipalities were excluded from the study. The final sample consisted of 65 confirmed dengue fever patients who lived in 38 different blocks in the 5 health areas included in the study.
In the July to August inspection cycle, before the outbreak, the overall municipal BI and HI were 0.92 and 0.87%, respectively ( Table 1). The mean values of the indices calculated at the health area level were also ≈1 for areas with or without dengue cases during the subsequent epidemic. However, the mean BI and HI were >1 for case neighborhoods and substantially <1 for neighborhoods without cases. During the epidemic, the effect of the level of measurement of the indices was still more pronounced. The HI and BI at the municipality level were 1.53% and 1.73, respectively, but all health areas with dengue cases attained a BI >1. Even more marked differences existed at the block and neighborhood levels, and after the outbreak the indices returned to average values <1 at all levels of measurement. The mean values for case blocks and neighborhoods were, in all instances, consistently substantially and significantly higher (all p<0.05) than those for corresponding control units. A high correlation was observed between block-level BI and HI values (r≥0.94, p<0.05). In most positive houses (89.6%), only 1 container with Aedes larvae or pupae was found.
The Figure shows the spatial distribution of Ae. aegypti larval infestation during the inspection cycles before, during, and after the epidemic and the location of the dengue fever cases in the first (September) and second (October) month of dengue virus transmission. In most blocks (70%), no Aedes infestation was present before the epidemic period, but 8.8% of blocks had BI values >4, with a maximum BI of 50. Of the 17 confirmed dengue patients in September, only 3 (18%) lived in a block with BI≥4 in the July-August inspection cycle. However, 15 (88%) lived in a neighborhood with at least 1 block with BI≥4. The Aedes infestation increased during the second inspection cycle and then decreased again, concurrent with the intensified vector control activities during the epidemic. From November to December, after the outbreak, 71.6% of house blocks were Aedes-free, while 6.3% had BI>4.
The mean block BI, the mean neighborhood BI, and the mean BI max for case and control blocks are given in Table 2. Before the epidemic, the mean BI values were approximately equal for case and control units. However, the BI max values were significantly higher for neighborhoods of case blocks. While transmission started in neighborhoods with high BI max infestation levels, it spread into blocks and neighborhoods with lower mean BI values in October. Still, during the epidemic, the indices remained systematically and significantly higher in case blocks. After the epidemic, they returned to similar values for case and control units.
The entomologic indices from inspection cycles before and during the epidemic were less to moderately accurate at predicting subsequent transmission. The highest AUC value, 0.71, was attained with the BI max from the July to August inspection cycle. At the cutoff of 4.07, it reached a sensitivity of 77.8% and a specificity of 63.2% for predicting September transmission. A neighborhood BI≥1.30 gave similar results. Block-level BIs were less accurate. Comparable cutoff points for the indices in the September to October inspection cycle discriminate best for predicting transmission in October (data not shown). After the epidemic, in the November to December inspection cycle, the indices had a high specificity: 89.6% for BI<1 and 85.7% for BI max <4, which points toward their usefulness in nonepidemic periods. Table 3 shows the odds ratios (OR) for dengue transmission at optimal BI cutoff values. From July to August, consistent with previous results, only BI max ≥4 was a significant predictor for identifying blocks with a case in September (OR 6.00, p<0.05). In contrast, the OR for all the different September-October BIs were significant; blocks above threshold had 3-5 times the chance of having a dengue case in October. Additionally, during the outbreak, the presence of a single positive container in a block was associated with a higher risk for dengue transmission (OR 3.49, p<0.05).
Discussion
We show that entomologic indices, BI in particular, allow identification of geographic units at high risk for dengue transmission. However, in regions with low Ae. aegypti density, identifying such units requires analysis at different levels, i.e., for blocks and neighborhoods, and short intervals between inspection cycles. Optimal cutoff values were identified for our study setting.
The existence of detailed surveillance data before, during, and after the dengue epidemic in Playa Municipality offered a unique opportunity to analyze entomologic information at different geographic levels. Entomologic data collected through routine systems, however, has some limitations. First, larval prevalence was possibly slightly underestimated: blocks were inspected by different vector control technicians, procedures used may not have been completely standardized, and few data are (randomly) missing. Second, when dengue cases were reported, the control program intensified, and more Aedes foci may have been detected. Third, sampling Aedes aegypti can be time sensitive (25), and our inspection cycles at 2-month intervals may not have fully captured the temporal variability of the entomologic indices. Besides, we may not have been able to identify all dengue patients who were infected outside their area of residence. Also, the study design did not allow us to detect asymptomatic dengue infections, which likely occurred in some control blocks and neighborhoods. However, we expect the potential misclassification to be nondifferential, i.e., independent of the entomologic indices. Furthermore, the experience of the technicians of the vector control program, their close supervision (including systematic revisiting of 33.3% of the inspected houses), and the interviews conducted with all dengue patients to exclude outside infection guarantee that biases, if any, are minimal. Various researchers have investigated the relationship between dengue transmission and the Aedes population, expressed as larval (15,(26)(27)(28)(29)(30)(31), pupal (7,13,32), and adult indices (33). Moore (28) in Puerto Rico and Pontes (15) in Fortaleza, Brazil, used temporal graphics to compare the seasonal fluctuation of rainfall, Aedes larval indices, and dengue incidence. They observed a strong relation in the patterns of the 3 series. In Puerto Rico, the peak incidence of confirmed infection followed the peak larval density by ≈1 month. In Salvador, Brazil, sentinel surveillance in 30 areas detected a significant 1.4× higher seroincidence when the HI was >3% (31). Recently, Scott and Morrison (16) showed that traditional larval indices in Peru are correlated with the prevalence of human dengue infections. The variety of thresholds proposed in these and other studies could be partially explained by different methods and geographic levels of analysis used, but other factors influence the relationship between Aedes density and transmission risk, such as herd immunity (11), population density (31), mosquito-human interaction (34), virus strain, and climate, which affects mosquito biology and mosquitovirus interactions (16).
Entomologic indices, however, were strongly associated with transmission, and we used ROC analysis (24) to assess the potential of these indices to predict in which blocks transmission would occur and to select an operating point that would provide an optimum tradeoff between false-positive and false-negative results (35). BI max ≥4 followed by neighborhood BI≥1 during the preceding ≈2 months provides good predictive discrimination. At longer intervals, the sensitivity of these indices becomes too low. More frequent inspection cycles might perform better since Aedes needs only 9-12 days to develop from egg to adult (36). Care should, however, be taken when extrapolating these findings to communities with other herd immunity levels or different environmental conditions.
Our data also show that the geographic level of analysis determines the Aedes indices obtained. Marked heterogeneity is not only found inside Playa Municipality but also inside smaller health areas. Indices at the neighborhood level perform best, followed by indices at the block level. Geographic scale has too often been neglected when dengue transmission is studied. In general, overall indices are calculated for communities (sometimes of different sizes) defined by administrative boundaries, which do not constitute entomologically homogeneous units. Notwith-standing, local variability of larval indices can be inferred from the literature, in which it is sometimes mentioned. Chan et al. (27) noted that HI in different sections of Singapore's Chinatown varied from 10.2% to 25.0%. However, Goh et al. (30) reported an overall HI of 2.4% in Singapore, but at the level of 7 blocks taken together (approximately the same scale as our neighborhood), HI up to 17.9% were found. Tran et al. (36) defined 400 m and 40 days as the spatial and temporal boundaries of maximum dengue transmission in a dengue focus. Perez et al. (37) identified areas in Havana with heterogeneous risks for vector infestation by using a geographic information system. Spatial heterogeneity has also been observed at the household level for both Aedes populations (10,38,39) and dengue transmission (26,29,40), but this level seems less suitable for identifying areas for intervention. Blocks or neighborhoods, given the epidemiologic situation in our study area, are a more appropriate scale.
The unit of analysis used in our study, the block, is based on manmade boundaries. While these may not describe the ecology of risk, they seem to be useful markers from the perspective of community-based control interventions. In most settings, appropriately sized and locally meaningful geographic units could be similarly defined for entomologic surveillance, but the use of different boundaries or different analytical techniques could produce different results.
In our study, BI ≥ 1 and BImax ≥ 4 seemed to be a suitable action threshold and target, respectively, in community-based dengue prevention. However, these results are derived from the analysis of 1 epidemic, and the thresholds identified may not constitute suitable targets in another epidemic or in locations where different ecologic conditions prevail. Similar studies in future epidemics and in other settings are necessary to verify the general applicability of our results. | 2016-05-04T20:20:58.661Z | 2006-05-01T00:00:00.000 | {
"year": 2006,
"sha1": "8051bc647d70bbf0cee0304a02437a35b23f1a4d",
"oa_license": "CCBY",
"oa_url": "https://wwwnc.cdc.gov/eid/article/12/5/pdfs/05-0866.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "65c3b43d57e14259d301d7dcc2ede6a3d23541da",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
13900865 | pes2o/s2orc | v3-fos-license | Identification and Replication of Three Novel Myopia Common Susceptibility Gene Loci on Chromosome 3q26 using Linkage and Linkage Disequilibrium Mapping
Refractive error is a highly heritable quantitative trait responsible for considerable morbidity. Following an initial genome-wide linkage study using microsatellite markers, we confirmed evidence for linkage to chromosome 3q26 and then conducted fine-scale association mapping using high-resolution linkage disequilibrium unit (LDU) maps. We used a preliminary discovery marker set across the 30-Mb region with an average SNP density of 1 SNP/15 kb (Map 1). Map 1 was divided into 51 LDU windows and additional SNPs were genotyped for six regions (Map 2) that showed preliminary evidence of multi-marker association using composite likelihood. A total of 575 cases and controls selected from the tails of the trait distribution were genotyped for the discovery sample. Malecot model estimates indicate three loci with putative common functional variants centred on MFN1 (180,566 kb; 95% confidence interval 180,505–180,655 kb), approximately 156 kb upstream from alternate-splicing SOX2OT (182,595 kb; 95% CI 182,533–182,688 kb) and PSARL (184,386 kb; 95% CI 184,356–184,411 kb), with the loci showing modest to strong evidence of association for the Map 2 discovery samples (p<10−7, p<10−10, and p = 0.01, respectively). Using an unselected independent sample of 1,430 individuals, results replicated for the MFN1 (p = 0.006), SOX2OT (p = 0.0002), and PSARL (p = 0.0005) gene regions. MFN1 and PSARL both interact with OPA1 to regulate mitochondrial fusion and the inhibition of mitochondrial-led apoptosis, respectively. That two mitochondrial regulatory processes in the retina are implicated in the aetiology of myopia is surprising and is likely to provide novel insight into the molecular genetic basis of common myopia.
Introduction
Myopia is the most common eye disorder, affecting an estimated 36% of adults over 20 years in the United States [1] and up to 61% in East Asia [2]. Myopia is a significant cause of vision loss [3], and is becoming the most common single cause of blindness in the working-age population [4]. Refractive error, measured in spherical equivalent (SE) diopters, is a quantitative trait influenced by multiple genetic and environmental factors. Myopia develops as a result of structural changes in the eye, particularly ocular axial length elongation, causing parallel rays of light to be focused in front of the retina, forming a blurred image. There are animal models for myopia development [5], but the mechanisms responsible for detecting lack of focus, and the signalling pathways from the retina to the choroid and ultimately to the sclera that induce eye growth, are not well understood.
Epidemiological studies have identified close visual work (and correlates of this such as hours spent reading, education and IQ) to be a significant risk factor for myopia development in children and that outdoor activity appears to be protective [2]. Twin studies also consistently demonstrate a large heritability for individual variation in refractive error around a specified population mean, ranging from 75-94% [6]. We previously described a genome-wide linkage analysis using autorefractor data for 221 dizygotic (DZ) female twin pairs, which identified 4 possible susceptibility loci, MYP7 on chromosome 11p13, MYP8 on chromosome 3q26, MYP9 on chromosome 4q12, and MYP10 on chromosome 8p23 [7]. However, to date, these loci have not been replicated, and no known myopia susceptibility genes have been identified [8].
The aims of this study were twofold. The first was to replicate the linkage signals at these four loci using an independent sample of DZ twins to the original study, using measures of refractive error (optician prescription) obtained via a postal questionnaire. The second aim was to conduct a follow-up association study of the genomic region with strongest evidence of replicated linkage, using linkage disequilibrium mapping to identify possible susceptibility genes and to replicate results using an independent sample.
Subjects
Refraction data, either from autorefractor or postal prescription, were available for 4273 UK twin subjects (1716 complete pairs) with SE data. Overall, the SE mean (between sib-pair standard deviation) was −0.29 D (2.36), range −20 D to +8.75 D, with an inter-quartile range of −1.06 D to +1.125 D, and 26% of the subjects were myopic using a threshold of SE ≤ −1 D. A total of 1846 autorefractor measures (915 complete pairs) and 997 postal prescriptions (485 pairs) were available for the discovery phase of this study, and an independent sample of 1430 twins for replication (Table 1). The mean age of subjects was 53.7 years (SD 13.1), range 16-82 years, and 90.3% of subjects were female. The high proportion of women is the result of long-term recruitment of female volunteers for the study of phenotypes such as osteoporosis.
Linkage Mapping
For this study, we attempted to replicate linkage to four loci previously reported by our group [7]. Figure 1 illustrates linkage peaks to 3q26 for a discovery sample using autorefractive data with LOD 3.7 (DZ twin pairs = 221) based upon previously published data [7] and replication sample using postal prescription data with LOD 2.12 (DZ pairs = 485). Combined linkage using pooled data gave LOD 2.63 (DZ pairs = 706).
Marginal evidence for replicated linkage using the original Généthon map and microsatellite data for independent samples was also observed for MYP7 (11p13) and MYP9 (4q12), but not for MYP10 (8p23). These loci are currently the subject of further investigation.
Association Mapping: Selection of Myopic Cases and Hyperopic Controls for Discovery Sample
Autorefractor rather than postal prescription data were used for discovery-stage association mapping, since autorefractor data are observed to be more precise, with a smaller standard deviation (AR total sib-pair SD = 2.48; postal SD = 2.76; p = 0.0006). Subjects assessed by autorefractor were all measured in the same standardized manner, without the transcription errors that tend to be associated with postal prescription data.
For the initial fine-mapping study of the 3q26 region (Map 1), a total of 243 cases and 257 controls, selected from myopic and hyperopic concordant sib-pairs, respectively, were defined from the lower and upper quartiles of the SE quantitative trait using 915 twin pairs with complete autorefractor data (see Materials and Methods). Seventy-nine of these had depleted DNA (46 with none and 33 samples with poor DNA quality or case-wise missingness ≥ 30%), leaving 205 cases (myopic individuals with a myopic sibling) and 216 controls (hyperopic individuals with a hyperopic sibling), a total of 421 case-controls for the preliminary Map 1 analysis (Table 1B).
The Map 2 data contained 154 new samples and approximately 70% of the samples from Map 1, yielding a total of 443 casecontrols. Hence a total of 575 cases and controls were genotyped for either the Map 1 (n = 421) or Map 2 (n = 443) discovery samples (Table 1B), with 289 samples genotyped for both. It was intended to genotype the same samples for Maps 1 and 2, but due to low DNA stock for some of the original Map 1 samples sent to Ellipsis for genotyping; new case/control samples with sufficient DNA stock were used to replace depleted Map 1 samples. Figure 2 illustrates the high-resolution linkage disequilibrium unit (LDU) map (Figure 2A), based upon 24,331 HapMap PHASE II SNPs for the 3q26 region (described in Materials and Methods), used to select informative markers for this study. The figure plots the relationship between cumulative genetic distance on the Y-axis (LDU) and physical location on the X-axis (kb). The LDU map provides detailed information on fine-scale linkage disequilibrium. The horizontal steps seen in Figures 2B and 2C represent regions of extended LD, while rapid increments in cumulative LDU represent regions of breakdown in LD, primarily due to recombination [9]. The LDU map for the entire 3q26 region used for this study is presented in Table S1.
LDU Maps and SNP Selection
For the Map 1 samples, we attempted to genotype a total of 2304 SNPs. After removing non-polymorphic SNPs (384 SNPs), SNPs with a call rate ≤ 90% (84), evidence of Hardy-Weinberg disequilibrium (38) and MAF ≤ 1% (0), a total of 1800 out of 1920 polymorphic SNPs remained for the Map 1 analysis.
For the second stage of the association study (Map 2), in order to further refine the location of detected association, we genotyped an additional set of 382 SNPs for those LDU regions from Map 1 that showed evidence of association with myopic case-control status. Hence Map 2 had a high local LD resolution. After removing non-polymorphic Map 2 SNPs (33), SNPs with a call rate ≤ 90% (19), evidence of Hardy-Weinberg disequilibrium (20) and MAF ≤ 1% (7), a total of 307 SNPs remained for the Map 2 analysis.
Author Summary
Successful gene mapping strategies for common disease continue to require careful consideration of basic study design with the advent of genome-wide association studies. Here, we take advantage of prior information that the heritability of the quantitative trait myopia in the general population is high and shows evidence of replicated linkage to chromosome 3q26. Based on this, we conducted a fine-map linkage disequilibrium association study for the region, using a high-resolution genetic map derived from population-based HapMap Phase II data. For analysis, we used efficient multi-locus tests of association using single nucleotide polymorphism markers genotyped for our sample data and placed on the genetic map measured in linkage disequilibrium units. We followed up preliminary evidence of association for the discovery samples with further genotyping in the same samples to improve the model location estimates for the common functional variants we identified. Three locations were replicated using an independent sample. Two of the identified genes are likely to play an unexpected role in myopia, with both pivotal in the healthy housekeeping metabolism of retinal mitochondria. Both proteins interact with OPA1, with nonsynonymous OPA1 mutations causing the unrelated Mendelian disease Autosomal Dominant Optic Atrophy (ADOA) by triggering mitochondrial-led retinal ganglion cell apoptosis.
downstream of SOX2OT (p = 1.6×10⁻⁵) and MCF2L2/PSARL regions (p = 0.01), but not LPP (p = 0.03). Hence four of the regions provided statistically significant evidence of association at the discovery phase with a significance threshold of α = 10⁻⁴ (accounting for discovery multiple testing, see Materials and Methods), with the MFN1 and upstream SOX2OT regions attaining genome-wide significance (α < 10⁻⁸).
For replication, we used an opportunistic sample in which we excluded all discovery twin samples (and their co-twins) from the TwinsUK register, to obtain 1430 individuals complete for autorefractor or postal SE and genotypes at 3q26, based upon the Illumina genome-wide Hap300 chip made available from other ongoing studies. Using quantitative tests of association, the same Malecot models and analytical LDU windows were fitted to the replication data. A significance threshold of α = 10⁻² was used for the replication tests (see Materials and Methods).
All single-SNP allelic tests of association results (see Association Mapping, Materials and Methods) for Map 1, Map 2 and replication samples are presented in Tables S2, S3, and S4, respectively.
MFN1 Region
Based on the Malecot model, the most likely physical location for a putative common functional variant in the MFN1 region was estimated to be at 180,566 kb, with the 95% confidence interval ranging from 180,505-180,655 kb (Map 2, Table 2). The variant location estimate at 180,566 kb lies in exon 7 of the MFN1 gene (180,548-180,594 kb, approximately 45.5 kb in length), but the confidence interval for this estimated location also includes the genes ZNF639 (ZASC1), MFN1 and GNB4 (Figure 3). Individual SNPs that showed strongest evidence of association for this window were rs6794192 (180,510,506 bp), rs10460887 (180,538,836 bp), rs9822116 (180,557,316 bp), rs17293193 (180,606,558 bp) and rs7618348 (180,627,432 bp; all p-values provided in Table S1). All five SNPs gave low p-values (p < 10⁻³) for single-SNP tests of association, with SNPs rs6794192 and rs7618348 genotyped and providing low p-values for all three samples (Map 1, Map 2 and replication) and combined sample single-SNP p-values of 10⁻³ and 10⁻⁴, respectively (Table 2). An annotated pair-wise LD plot is also presented for this region in Figure S1.
(Table 1 legend) Phenotype summary: Refraction data (autorefractor and postal) for all samples (All twins), linkage replication (postal samples) and association samples (discovery = Map 1 and Map 2; replication). Association discovery = total autorefractor data used to select discovery case-controls; case-control = total number of Map 1 and Map 2 samples; Association replication = independent twin sample with autorefractor/postal spherical equivalent and Hap300 data; Pairs = number of twin pairs with complete refraction data for both siblings; SD = standard deviation (for unrelated case-controls) and between sib-pair standard deviation (for related samples).
The MFN1 gene region result replicated for the independent sample of 1430 twins (χ²₁ = 7.6, p = 0.006) using a quantitative test of association, the same analytic LDU window and a different panel of markers for the 3q26 region derived from the Hap300 chip.
Upstream of SOX2OT
Analysis of the Map 1 data yielded a significant window that covered the SOX2OT gene region (χ²₁ = 7.0, Table 3). Further analysis using the Map 2 data showed a large increase in the significance level (composite likelihood χ²₁ = 46.1, p = 1.1×10⁻¹¹). The physical location for the putative associated common variant in the region using the more informative Map 2 data was estimated to be at 182,595 kb, with a 95% confidence interval of 182,533-182,688 kb (Table 3). This location is approximately 156 kb upstream (5′) of the alternate-splicing ncRNA gene SOX2OT and 317 kb from the SOX2 transcription start sites (Figure 4). The confidence interval includes no known genes, but does include two predicted non-coding genes of unknown function (floylorbu, 0.51 kb in length, and flerlorbu, 21.4 kb [10]) and five putative alternative promoters upstream of SOX2OT, which between them cover a region of approximately 490 kb [11]. Individual SNPs most strongly associated with myopia for this window were rs1518933 (182,538,071 bp), rs733422 (182,604,752 bp) and rs4855026 (182,609,663 bp). Figure S2 provides a pair-wise marker LD plot for the SOX2 gene region.
These results were also observed for the replication sample using a quantitative test of association with Hap300 SNPs covering the same region (χ²₁ = 14, p = 1.8×10⁻⁴, Table 3). The SNP coverage for Map 2 did not include SNPs within or in close proximity to SOX2 (Figure 4), although the Map 2 evidence for association was based on an LDU analytical window that included the gene. The LDU maps illustrated in Figures 2C and 4 show evidence of multiple recombination hotspots around SOX2.
PSARL Region
Preliminary marginal statistical evidence of association for the Map 1 data was observed for the analytical LDU window at 184,313-185,257 kb (χ²₁ = 6.0, p = 0.014; Table 4). This region has high recombination rates, is gene rich and includes the genes LAMP3, MCF2L2, B3GNT5, KLHL6, KLHL24, YEATS2, MAP6D1, PSARL, ABCC5 and HTR3D (Figure 5). Based on the Map 2 data, the physical location for a putative causal variant was estimated to be at 184,386 kb, with 95% confidence intervals 184,356-184,441 kb (χ²₁ = 6.2, p = 0.01; Table 4). This location estimate lies within intron 3 of the 30-exon gene MCF2L2, with the confidence intervals including LAMP3 and MCF2L2. An annotated pair-wise LD plot is presented for this region in Figure S3.
Strong evidence of association to this LDU window was also observed for the replication sample using a quantitative test of association for Illumina Hap300 SNPs genotyped for the same LDU window (χ²₁ = 12, p = 5×10⁻⁴, Table 4). However, the estimated location for a putative common causal variant for the same window using Hap300 SNPs differed from that obtained with the discovery SNP coverage (Maps 1 and 2). For Hap300 SNPs, the variant was estimated to be at 185,100 kb (95% CI 185,036-185,115 kb), located in the 3′ UTR of PSARL, with the confidence intervals including exons 4-10 of PSARL and the 5′ UTR of the neighbouring gene, ABCC5 (Figure 5). The statistical evidence for the PSARL location (χ²₁ = 12) was stronger than that for MCF2L2 (χ²₁ = 6.2).
MYNN, Downstream (3′) from SOX2OT and LPP Gene Regions
The estimated locations for common functional variants at these three loci are presented in Table 5. Evidence of association to these loci did not replicate using Hap300 samples, suggesting either Type 1 errors or failure to replicate due to the different genetic coverage of these regions provided by the Map 1 discovery and Hap300 marker sets.
Discussion
Having first replicated the initial linkage to 3q26, the strategy we adopted for fine mapping the large 30-Mb genomic region was to pursue evidence of association in two stages. First, using a high-resolution genetic map, we selected an informative set of SNP markers across the entire region, but at relatively low density to ensure economic feasibility. The second was to follow up those regions that showed the strongest evidence of association, with a denser set of markers placed on the same genetic map, on the assumption that there are detectable common genetic variants in the region responsible for generating the observed linkage signal.
The approach succeeded, with evidence of replicated association to the MFN1, SOX2OT and PSARL gene regions. It is worth noting that the association initially detected in the three loci regions using Map 1 were only of marginal significance (at p = 0.02, p = 0.008 and p = 0.014, respectively). However, when a higher-resolution map was genotyped for the locus, association was detected at genome-wide significance for MFN1 and SOX2OT.
Evidence of association to the MCF2L2/PSARL gene region using Map 2 data remained the same (p = 0.01), but the same LDU window was subsequently replicated more strongly using a different panel of Hap300 SNPs (p = 0.0005). The diverging location estimates in the MCF2L2/PSARL region using two different SNP marker sets suggests the possibility of more than one common functional variant and co-incidental association for this LDU window ( Figure 5).
The latter emphasises how important informative SNP coverage is for detecting common variants and the use of marker panels that provide similar coverage of local LD patterns. The use of multimarker tests can efficiently use the LDU locations to provide localization estimates, while for sparse marker sets the use of single SNP tests is likely to result in reduced power to detect association depending upon local LD structure. The results presented here are all the more remarkable in that we were able to replicate the same regions using an unselected sample, for a different panel of SNPs (Hap300) genotyped at different centres. Some of the regions we have investigated on 3q26 are complicated with high recombination rates or a high density of genes. We have used a model that assumes common susceptibility loci with little or no allelic heterogeneity. As such we recognise there are likely to be more variants and genes in this region that will be identified and replicated by further mapping studies.
The Malecot model delimits the MFN1 gene region (using the most informative marker set, Map 2) with a 95% confidence interval ranging from 180,505-180,655 kb. Although the strongest evidence of association peaks at 180,565.8 kb in the middle of the MFN1 gene at exon 7, the confidence interval includes two neighbouring genes, ZNF639 and GNB4.
Mitofusin-1 (Mfn1, the protein encoded by MFN1) is a mitochondrial outer membrane protein, widely expressed in human tissues but varying in mRNA expression levels between tissues [12]. Mfn1 appears to be a key player in mediating mitochondrial fusion and morphology in mammalian cells [12]. Its interest as a possible candidate gene involved in ocular function stems from its relationship with OPA1, a dynamin-related protein of the inner membrane which is mutated in autosomal dominant optic atrophy [13,14]. OPA1 requires Mfn1 to regulate mitochondrial fusion [15]. OPA1 is expressed in embryonic retina at many levels, not just the ganglion cells leading to the optic nerve, and continues to be expressed in adult retina with unknown function [16]. GNB4 is of interest as a myopia susceptibility gene, as the Gβ4 protein has been shown to be expressed in retinal ON bipolar cells. The function of bipolar cells in the retina is detection of the edges of objects, and so these cells may be involved in detection of the hyperopic blur that is believed to drive the signal for eye growth in myopia. Inhibition of the retinal ON bipolar cells stops the compensatory eye growth when a negative lens or occluder is placed over chick or kitten eyes [17].
SOX2 is a fundamental homeobox gene, 2 kb in length, involved in ocular development, with mutations leading to anophthalmos [18]. The known interaction between the SOX2 and PAX6 genes in lens development suggests the possibility that these may also influence the development of refractive error. PAX6 lies at the centre of our 11p13 linkage signal from the original linkage scan, although we found no intra-genic association with PAX6 using tagging SNPs [7]. Recent studies illustrate the important role that gene regulatory elements can play in disease susceptibility, including, for example, a homeobox transcription factor that influences heart development and subsequent risk of atrial fibrillation [19]. There is a considerable body of evidence for the role of regulatory elements associated with PAX6 [20], and for regulatory regions of SOX2 [21].
SOX2 itself lies in the intron of another larger (240 kb) noncoding RNA gene SOX2OT, which may play a regulatory role in SOX2 expression [21]. SOX2OT is a highly complex locus, which appears to produce several proteins with no sequence overlap, with 14 documented alternative splicing mRNAs, 5 non-overlapping alternate last exons and 7 validated alternative polyadenylation sites. Upstream of SOX2OT there are also 5 possible alternative promoters [11] (DA281835-DA310380) and two putative ncRNA genes of unknown function, flerlorbu and floylorbu ( Figure 4). Whether these elements co-operate with SOX2OT in regulating SOX2 is unknown.
The protein presenilin-associated rhomboid-like protein (PARL, encoded by the gene PSARL) is a mitochondrial inner membrane protease, which interacts with OPA1 to inhibit the mitochondrial remodelling process that signals apoptosis [22]. This reflects the broader phenomenon that molecular mechanisms behind mitochondrial morphology have been recruited to govern novel functions, such as development, calcium signalling, and apoptosis [23].
PARL plays two important known roles. The enzyme cleaves OPA1 to produce the anti-apoptotic truncated soluble form of OPA1, which prevents cristae remodelling and the subsequent release of mitochondrial cytochrome c into the cytosol to stimulate apoptosis. The anti-apoptotic effects of these proteins are independent of mitochondrial fusion [22]. In addition, PARL appears to be implicated in mitochondria-to-nucleus signal transduction: following proteolytic processing of PARL, a small peptide sub-unit (P-beta domain) is released and translocated to the nucleus by an unknown mechanism [24].
The OPA1 gene encodes a 960 amino acid mitochondrial dynamin-related guanosine triphosphatase (GTPase) protein, which is transported from the nucleus to the outer surface of the inner mitochondrial membrane and interacts with Mfn1 and PARL to cause mitochondrial fusion and suppress mitochondrial-led apoptosis, respectively. OPA1 is widely expressed throughout the body, but most abundantly in the retina, followed by the brain. In the eye, OPA1 is present in the cells of the retinal ganglion cell layer, inner and outer plexiform layers and inner nuclear layer [16].
In summary, we have detected and replicated three novel loci at MFN1, SOX2OT and PSARL using a multi-marker approach that models LD structure. We performed a two-stage design to ensure adequate SNP coverage using a high-resolution LDU map. Prior evidence of replicated linkage to this region means these associations are likely to be real, with the MFN1, GNB4 and PSARL genes and the regulatory non-coding RNAs in the vicinity of SOX2 all plausible candidates. Although the mechanisms are not clear, this study strongly suggests that two fundamental mitochondrial molecular pathways are implicated in the aetiology of myopia.
We are confident that additional mapping studies for these data are likely to identify and replicate further candidate genes at 3q26-28 and genome-wide, which, along with MFN1 and PSARL, can be taken forward to clarify the molecular genetic aetiology of common myopia.
Subjects
Twins in this study volunteered through media campaigns to be on the TwinsUK Adult Twin Registry at St Thomas' Hospital, London [25]. Subjects were invited to attend the hospital for a visit, which involved collection of multiple phenotypes including measurement of refractive error using non-cycloplegic autorefraction (ARM-10 autorefractor, Takagi Seiko, Japan), as well as venepuncture for blood collection for DNA extraction. For all studies, full informed consent was obtained, and protocols were reviewed by the Local Research Ethics Committee.
A postal enquiry was also initiated in 2002-2003, asking about ocular history and requesting subjects' ocular refraction prescription from their optometrist. We used postal data for those subjects without autorefraction data. Included in the questionnaires were questions about spectacle wear to cross-check refractive error data supplied. Subjects were excluded if they gave a history of cataract surgery, laser refractive surgery, retinal detachment or other ocular problems that might have influenced refractive correction.
Spherical equivalent (SE) was recorded in the standard manner as the sum of the spherical power and half the cylindrical power in diopters (D). The mean SE for left and right eye was calculated for each individual, and where data was available for only one eye, this was used as the SE for the subject.
Population Stratification
Although there has been little evidence of population stratification in population-based studies of self-reported Britons, we assessed these data for possible stratification and observed little or no evidence of it [26].
Linkage Replication
We attempted to replicate the evidence from our original genome-wide analysis using AR data [7] for linkage to chromosomes 11p13 (MYP7), 3q26 (MYP8), 4q12 (MYP9) and 8p23 (MYP10) using the same ABI prism microsatellite marker set and Généthon genetic map. Refraction data for an independent sample of 485 DZ twin pairs was obtained from the postal questionnaire refraction data described above. Multipoint genome-wide linkage analyses were performed by use of the unadjusted mean SE of both eyes (in D) and optimal Haseman-Elston regression methods, implemented by use of a generalized linear model [27].
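As a simplified illustration of this class of analysis, the sketch below (Python; statsmodels assumed available) implements the original Haseman-Elston form, in which the squared within-pair trait difference is regressed on estimated IBD sharing; the "optimal" variant used in the study combines additional terms. The data and variable names are hypothetical.

# Sketch of the original Haseman-Elston linkage regression. For DZ pairs, the squared
# within-pair trait difference is regressed on the estimated proportion of alleles
# shared identical by descent (IBD) at a locus; a significantly negative slope
# (one-sided test) indicates linkage. All data below are hypothetical.
import numpy as np
import statsmodels.api as sm

se_sib1 = np.array([-3.0, 0.5, -1.2, 2.0, -0.5])   # spherical equivalent, twin 1 (D)
se_sib2 = np.array([-2.5, 1.0, 0.8, 1.5, -2.0])    # spherical equivalent, twin 2 (D)
pi_hat  = np.array([0.9, 0.5, 0.1, 0.6, 0.4])      # estimated IBD sharing at the marker

y = (se_sib1 - se_sib2) ** 2
X = sm.add_constant(pi_hat)
fit = sm.OLS(y, X).fit()
print(fit.params, fit.pvalues)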
Association Mapping: Selection of Myopic Cases and Hyperopic Controls for the Discovery Sample
The most informative individuals were selected for genotyping from the lower and upper quartiles of the continuous SE diopter distribution. We selected individuals from a dataset of 915 twin pairs with complete autorefractor data, of which 431 were monozygotic and 484 DZ pairs, including the 221 DZ autorefracted pairs from the original linkage study. From the 915 twin pairs, a total of 575 unrelated cases and controls were selected for the discovery sample: 255 monozygotic and 320 DZ singletons. To enrich for genetically informative cases and controls, individuals were selected if they were myopic and had a myopic twin (a "super" case) or, alternatively, were hyperopic and had a hyperopic twin (a "super" control). The most myopic individuals (cases), with a diopter score of less than −1, were selected from each twin pair where the pair mean was equal to or less than −0.75 diopters. Similarly, the more hyperopic individuals (controls), with a diopter score of at least +1, were selected from twin pairs with a pair mean greater than +1 diopters.
This resulted in an ascertained sample, designed to differentially increase allele frequencies between cases and controls for disease susceptibility alleles that predispose individuals to develop myopia. We chose hyperopic rather than normal sighted controls as a strategy to increase power, on the assumption that the aetiology for myopia and hyperopia lie on a continuum between health and disease and that both share genetic risk mechanisms. The discovery data were analysed using case-control status, on the supposition that most of the information would be captured by affection status, but we also tested the case-control data for quantitative association using the original diopter measurements for the selected data.
Case and control samples were simultaneously genotyped using the same platform and arbitrarily allocated to the same plates. Case-control status was independent of plate and well assignment (data not shown).
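As a concrete illustration of the selection rules described above, a minimal sketch follows (Python/pandas assumed; the data frame layout, identifiers and values are illustrative, not the study database).

# Sketch of the "super" case/control selection: from twin pairs with complete SE data,
# take the more myopic twin as a case when the pair mean is <= -0.75 D and that twin is
# below -1 D, and the more hyperopic twin as a control when the pair mean is > +1 D and
# that twin is at least +1 D.
import pandas as pd

pairs = pd.DataFrame({
    "id1": ["t1a", "t2a", "t3a"], "se1": [-3.2, 0.2, 2.1],
    "id2": ["t1b", "t2b", "t3b"], "se2": [-1.8, 0.5, 1.4],
})
pairs["pair_mean"] = (pairs["se1"] + pairs["se2"]) / 2

cases, controls = [], []
for _, row in pairs.iterrows():
    low_id, low_se = (row.id1, row.se1) if row.se1 <= row.se2 else (row.id2, row.se2)
    high_id, high_se = (row.id1, row.se1) if row.se1 >= row.se2 else (row.id2, row.se2)
    if row.pair_mean <= -0.75 and low_se < -1:
        cases.append(low_id)        # most myopic member of a myopic concordant pair
    elif row.pair_mean > 1 and high_se >= 1:
        controls.append(high_id)    # most hyperopic member of a hyperopic concordant pair

print(cases, controls)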
LDU Maps and SNP Selection
The LD maps [28] assign markers to locations in linkage disequilibrium units (LDU) that describe the underlying structure of LD in the form of a metric map with additive distances. A high-resolution LDU map for the whole of chromosome 3 was constructed using the CEU PHASE II data from the HapMap Project [29]. The resulting LDU map for 3q26, corresponding to the region with replicated evidence of linkage, was used for this study. The 659-LDU region corresponds to approximately 42.7 cM on the deCODE linkage map [30], implying that for 3q26, on average ≈15 LDU correspond to 1 cM. For the first part of the project (Map 1), we selected three to four SNPs per 1 LDU across the entire 30-Mb region. This yielded a SNP density of approximately 1 SNP per 15 kb. This selection scheme captured the block-step structure of the high-resolution LD map and ensured good coverage of the LD steps.
Map 1 (the entire 3q26 region) was first partitioned into 51 nonoverlapping windows based on the LDU map with a minimum length of 10 LDU per window and by default, not breaking LDU blocks. For the six out of 51 LDU windows showing strongest evidence of association for the Map 1 data (Figure 2), an additional 382 SNPs were genotyped to refine the location estimate (Map 2).
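A minimal sketch of how such non-overlapping analytical windows could be formed from cumulative LDU positions is given below (Python; the simplification of closing a window whenever at least 10 LDU have accumulated, and the toy marker positions, are assumptions; the published procedure also avoids breaking LDU blocks).

# Sketch: form non-overlapping analytical windows of at least 10 LDU from the
# cumulative LDU locations of the markers.
def make_ldu_windows(snp_ldu_positions, min_ldu=10):
    windows, current = [], []
    start = snp_ldu_positions[0]
    for pos in snp_ldu_positions:
        current.append(pos)
        if pos - start >= min_ldu:
            windows.append(current)     # close the current window
            current, start = [], pos
    if current:
        windows.append(current)         # remaining markers form the last window
    return windows

ldu = [0.0, 2.5, 4.0, 9.8, 12.1, 13.0, 20.4, 22.2, 31.0, 33.3]   # hypothetical LDU map
print([len(w) for w in make_ldu_windows(ldu)])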
Replication Data for Association Mapping
Further to the Map 1 and Map 2 studies, we also examined an independent replication sample of 1430 individuals for quantitative association, from which discovery samples and their relatives were excluded. The replication sample was composed of 460 unrelated female monozygotic twin singletons, 338 dizygotic twin singletons and 316 DZ twin pairs. Tests of association were calculated using robust standard errors (clustered by family identifier) to account for relatedness, with samples complete for refraction error (autorefractor or postal) and 3q26 genotype data. SNP genotypes for the replication samples were derived from a genome-wide Illumina HumanHap 300 dataset made available at the Twin Research Unit from other studies [26].
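For illustration, a minimal sketch of a single-SNP quantitative association test with family-clustered robust standard errors is given below (Python; statsmodels assumed available; the data layout, column names and values are illustrative).

# Sketch: regress the quantitative phenotype on minor-allele dosage and use
# cluster-robust standard errors grouped by family to account for relatedness.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "se_diopters": [-2.1, -1.8, 0.5, 1.2, -0.3, 2.0],
    "snp_dosage":  [2, 2, 1, 0, 1, 0],               # count of minor alleles
    "family_id":   ["f1", "f1", "f2", "f2", "f3", "f3"],
})

fit = smf.ols("se_diopters ~ snp_dosage", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["family_id"]})
print(fit.params["snp_dosage"], fit.bse["snp_dosage"], fit.pvalues["snp_dosage"])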
To assess the validity of postal SE against AR measures, we compared 138 individuals with both types of measure. The correlation between AR and postal data was 0.93, with a mean difference (AR − postal) of −0.241 (standard deviation = 0.93) and no observed relationship between the differences and the means for the two measures (p = 0.75).
Genotypes and Quality Control
Discovery Map 1 and Map 2 sample handling, DNA genotyping and genotype calls were performed by Ellipsis Biotherapeutics Corporation (Toronto) using an Illumina Beadstation. SNPs were screened for quality control before analysis and rejected if the marker showed strong evidence of Hardy-Weinberg disequilibrium (at a threshold of χ²₁ ≥ 12), SNP-wise missing rates greater than 10% or MAF ≤ 0.01. Samples with more than 30% case-wise missingness were also removed before analysis.
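A minimal sketch of such marker-level filters is shown below (Python/NumPy; the genotype coding, the exact Hardy-Weinberg test construction and the example data are assumptions made for illustration).

# Sketch of the marker quality-control filters: genotypes coded as 0/1/2 minor-allele
# counts with NaN = missing; thresholds as described in the text.
import numpy as np

def keep_snp(genotypes, hwe_chi2_threshold=12.0, max_missing=0.10, min_maf=0.01):
    g = np.asarray(genotypes, dtype=float)
    missing_rate = np.mean(np.isnan(g))
    g = g[~np.isnan(g)]
    n = len(g)
    if n == 0 or missing_rate > max_missing:
        return False
    p = g.sum() / (2 * n)                     # frequency of the allele coded as 1
    maf = min(p, 1 - p)
    if maf <= min_maf:
        return False
    # Hardy-Weinberg chi-square (1 df) from observed vs expected genotype counts
    obs = np.array([(g == 0).sum(), (g == 1).sum(), (g == 2).sum()])
    exp = n * np.array([(1 - p) ** 2, 2 * p * (1 - p), p ** 2])
    chi2 = np.sum((obs - exp) ** 2 / np.maximum(exp, 1e-12))
    return chi2 < hwe_chi2_threshold

print(keep_snp([0, 1, 2, 1, 0, 1, 0, 0, 1, 2]))    # True for this toy marker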
For the 3q26 replication sample, all samples were typed using the Infinium assay (Illumina, San Diego, USA) with fully compatible SNP arrays, the Hap300 Duo, Hap300, and Hap550. Quality control measures taken for these data are detailed in [26].
Association Mapping
Allelic tests of association were initially performed for each marker. The association measure, z, from the 2×2 table between the myopia phenotype (0, 1) and the two alleles of each SNP marker, was obtained for Map 1 and Map 2 as z = |D| / [f(1 − R)], where D is the covariance between myopia status and the marker alleles, f is the frequency of myopic individuals in the sample and R is the minor allele frequency [31].
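A minimal sketch of this association measure is given below (Python/NumPy; the expansion of genotypes into per-chromosome allele observations and the toy data are illustrative assumptions).

# Sketch of the allelic association measure z = |D| / (f * (1 - R)): each person
# contributes two per-chromosome observations, so the 2x2 table is phenotype by allele.
import numpy as np

def allelic_z(case_status, minor_allele_counts):
    status = np.repeat(np.asarray(case_status, dtype=float), 2)             # per chromosome
    allele = np.concatenate([[1] * c + [0] * (2 - c) for c in minor_allele_counts])
    D = np.cov(status, allele, bias=True)[0, 1]   # covariance between status and allele
    f = status.mean()                             # frequency of myopic individuals
    R = allele.mean()                             # minor allele frequency
    return abs(D) / (f * (1 - R))

print(allelic_z([1, 1, 0, 0, 1], [2, 1, 0, 1, 2]))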
The significance of each window (or LDU region) was tested using a composite likelihood approach that simultaneously combines information from all markers within each window [32] on the basis of the Malecot model. For the i-th SNP, the observed association z_i has an expectation E(z_i) estimated by the model as E(z_i) = (1 − L) M e^(−ε D (S_i − S)) + L. The parameter M (intercept) reflects a monophyletic or polyphyletic origin of susceptibility alleles (i.e., the proportion of disease alleles transmitted from founders). The parameter L (asymptote) is the spurious association at long distance.
The object of LD mapping is to estimate S, the location of the putative disease gene on the map. The parameter ε measures the rate of exponential decline in association with distance, and S_i is the LDU location of the i-th marker. The Kronecker term D is used for map direction and assures a correct sign, with D = 1 if S_i ≥ S and D = −1 if S_i < S. Given the observed associations z_i, the Malecot parameters are estimated iteratively by combining information over all loci within a window. The composite likelihood is calculated as Λ = Σ_i K_i [z_i − E(z_i)]², where z_i and E(z_i) are the observed and expected association values, respectively, at the i-th marker SNP. Their squared difference is weighted by an information index K_i, estimated as K_i = χ²₁ / z², where χ²₁ is the Pearson chi-square from the 2×2 table (myopia status by SNP alleles).
Following Maniatis et al. [32], we used two different sub-hypotheses of the model to test for evidence of association. The null hypothesis is the Null model, in which M = 0. The alternative Full model allows the estimation of both M and S. Hence the contrast between these two models tests for association to a region and for a disease determinant at location S. The difference in marker density between Map 1, Map 2 and the replication samples genotyped for Hap300 SNPs was taken into account by the use of an F statistic with df₁ and df₂ degrees of freedom. The degrees of freedom df₁ was the number of SNPs minus the df₂ parameters estimated in the Full model. The F-value was estimated as F(df₁, df₂) = [(Λ_Null − Λ_Full)/df₂] / (Λ_Full/df₁). Subsequently, to facilitate model-fit comparison between tests with different degrees of freedom, p-values from the F statistic were converted to a χ²₁ (full details of the methods are presented in [32]). The 95% confidence interval (CI) for the estimated location Ŝ was obtained as Ŝ ± t·SE, where t is the tabulated value of Student's t distribution for df₂ degrees of freedom and SE is the standard error of the parameter Ŝ. Estimates of Ŝ in LDU were converted to kb by linear interpolation between the two flanking SNPs.
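A minimal sketch of fitting this model by weighted least squares is given below (Python; SciPy assumed available; the data, starting values and the use of |S_i − S| in place of the signed Kronecker term are illustrative simplifications, not the authors' software).

# Sketch: estimate the Malecot parameters (M, L, eps, S) by minimizing the composite
# likelihood Lambda = sum_i K_i * (z_i - E(z_i))^2, with
# E(z_i) = (1 - L) * M * exp(-eps * |S_i - S|) + L.
import numpy as np
from scipy.optimize import minimize

S_i = np.array([1.0, 3.5, 5.0, 6.2, 8.0, 11.0])       # LDU locations of markers
z_i = np.array([0.05, 0.20, 0.55, 0.40, 0.15, 0.04])  # observed associations
K_i = np.array([10.0, 14.0, 25.0, 22.0, 12.0, 9.0])   # information weights (chi2 / z^2)

def composite_likelihood(params):
    M, L, eps, S = params
    expected = (1 - L) * M * np.exp(-eps * np.abs(S_i - S)) + L
    return np.sum(K_i * (z_i - expected) ** 2)

fit = minimize(composite_likelihood, x0=[0.5, 0.0, 0.5, 5.0],
               bounds=[(0, 1), (0, 1), (1e-6, 10), (S_i.min(), S_i.max())])
M_hat, L_hat, eps_hat, S_hat = fit.x
print("estimated location S (LDU):", round(S_hat, 2))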
The same procedures were used for the replication samples (Illumina HumanHap 300, 1430 individuals). However, as the Hap300 chip had been genotyped for a large number of unselected twins, for this analysis we used the quantitative phenotype instead of case-control status. The composite likelihood was therefore calculated using the observed regression coefficient (b_i) for each SNP marker i and the expected E(b_i), which was estimated using the Malecot model at every i-th location in LDU.
Multiple Testing and Significance Thresholds
For the association mapping study we present nominal p-values that do not correct for multiple testing. We used the following thresholds to indicate statistical significance at each stage:
Linkage
For the original discovery sample [7] we used a threshold of LOD 3.2 (α < 10⁻⁴) to indicate genome-wide significance. For evidence of linkage replication presented in this study, we lowered the threshold to LOD 2 (α < 2×10⁻³), since replicating a true initial linkage result for complex traits is recognized to be difficult due to upward bias in discovery sample estimates [33].
Discovery Association (Case/Control data; LDU Maps 1 and 2)
Based upon the Map 1 results, we took forward for further genotyping (Map 2) the six LDU windows that corresponded to the six most statistically significant results. For Map 2 we used a threshold of α = 10⁻⁴, which is conservative, since a Bonferroni correction would give a threshold of α ≈ 10⁻³ (0.05/51) based upon approximately 51 independent tests (i.e., 51 analytical windows were used to span the 30-Mb 3q26 region).
Replication association (SE quantitative trait; Hap300 SNPs)
We attempted to replicate the six analytic LDU windows, with each test window independent of the others. Hence we considered replication using a threshold of α ≈ 10⁻² (0.05/6) based upon a Bonferroni correction.
Electronic-Database Information
The URLs for data software presented herein are as follows: HapMap, http://www.hapmap.org/ (for HapMap data)
Supporting Information
Figure S1 Pairwise LD plot (D') for the MFN1 gene region (180,400-180,700 kb) using HapMap Phase II SNPs (Build 35, release 21). The top ideogram represents the whole of chromosome 3, with the yellow bar highlighting the gene region of interest. Below it are shown the local physical distance in kb, local coalescent recombination rates (cM/Mb) and gene locations. Note that two annotated hotspots (cM/Mb) on either side of the MFN1 gene coincide with the linkage disequilibrium unit (LDU) steps depicted in Figure 3, at approximately 180,500 kb and 180,660 kb. | 2014-10-01T00:00:00.000Z | 2008-10-01T00:00:00.000 | {
"year": 2008,
"sha1": "7e9f0107175fb83eb21c5c781fce9ce84d65406f",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosgenetics/article/file?id=10.1371/journal.pgen.1000220&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "aca1df92a7be9a1478cd6a4fd122419f34ee3119",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
229634761 | pes2o/s2orc | v3-fos-license | Modification Methods and Applications of CNTs/WPU Composite Material
Carbon nanotubes have many attractive properties and occupy an important position in materials science today. However, they also have some disadvantages: van der Waals forces between nanotube bundles cause the bundles to attract one another, which ultimately leads to severe agglomeration. Researchers therefore combine CNTs and WPU into composites via solution blending, melt blending and in situ polymerization, in order to exploit the excellent properties of both materials. This paper summarizes the various modification methods reported for CNTs/WPU composite materials and the effects obtained with each method. It concludes that the resulting composites differ from one another and that the outcome depends on the modification method used.
Introduction
Carbon nanotubes (CNTs) are one-dimensional materials formed by rolling up graphitic carbon atomic layers; they were discovered in 1991 by scientists at NEC in Japan during studies of the C60 structure [1]. They are light in weight, their hexagonal lattice is excellent, their radial dimension is on the nanometer scale and their axial length can reach the micron scale. Five-membered and seven-membered rings also occur in the carbon nanotube structure, making the nanotube wall concave and producing curved regions. In addition, the carbon atoms exist mainly in sp2 and sp3 hybridization, so CNTs have excellent mechanical, electrical and chemical properties. Their applications are also extensive. For example, in the field of electronics, carbon nanotube materials can be used in batteries and precision electronic devices; as adsorbents, they can be used to remove pollutants from water and as hydrogen storage materials. Polyurethane (PU) is mostly produced from diisocyanates and oligomeric polyols via condensation polymerization, and its constitutional repeating unit is -NHCOO-. Because of its unique structure, polyurethane has good toughness and blends well with other polymeric materials. Waterborne polyurethane (WPU) is a material in which polyurethane particles are dispersed in water to form an aqueous dispersion. It not only has the excellent properties of polyurethane but is also characterized by low toxicity, low pollution, and ease of modification and application. Its main fields of use are adhesives and coating materials.
Although carbon nanotubes have many good properties and occupy an important position in science today, they also have some disadvantages. The van der Waals forces between nanotube bundles cause the bundles to attract one another, resulting in severe agglomeration. This drawback worsens the dispersion of carbon nanotubes in composite materials and increases the cost of preparation. Waterborne polyurethane, in turn, suffers from poor water resistance, poor solvent resistance and poor mechanical properties. Modification of the carbon nanotubes and the waterborne polyurethane is therefore needed to offset these shortcomings so that the composite performs better, and different modification methods bring different effects to the composite.
Organic alcohol-modified multi-walled carbon nanotubes
Shaohui Wang's group [7] treated MWCNTs with a 3:1 mixture of concentrated sulfuric acid and concentrated nitric acid to graft -COOH groups onto them. The carboxylated MWCNTs were then mixed and dispersed in N,N-dimethylformamide, after which SOCl2 was added to convert the MWCNTs to the acyl chloride form. The acyl-chlorinated MWCNTs were then reacted with ethylene glycol (EG), glycerol (GL) and dimethylolpropionic acid (DMPA), respectively, to form MWCNTs grafted with different organic alcohol chain segments. Finally, composite materials were synthesized by solution blending with waterborne polyurethane. The results showed that the diameters of the carbon nanotubes increased significantly after esterification with the polyols. MWCNTs grafted with the -COOH segment began to decompose at 206 °C, and their thermogravimetric (TG) mass-loss rate at 800 °C was 7.8%. The TG rate, Zeta potential and increase in tensile stress (compared with WPU) of MWCNTs grafted with EG, GL and DMPA are shown in Table 1. The higher the TG rate, the larger the molecular weight of the alcohol and the more organic chain segments are grafted. Because grafting hydrophilic groups decreases the surface energy of the MWCNTs, the higher the grafting rate, the higher the absolute value of the Zeta potential; the hydrophilicity of MWCNTs-DMPA was therefore better than that of the MWCNTs grafted with the other two organic alcohols. Regarding the particle size of MWCNTs in WPU, unmodified MWCNTs were the largest, MWCNTs-COOH came second, and MWCNTs modified with organic alcohols were the smallest. Among the organic alcohols the particle sizes followed the order DMPA < GL < EG, indicating that the higher the grafting rate of the organic alcohol chain segment, the smaller the particle size of the MWCNTs in the WPU composite emulsion. With respect to mechanical properties, the organic grafting gives the MWCNTs and WPU better interfacial bonding, and the more hydrophilic groups grafted, the greater the increase in tensile stress. With respect to conductivity, the composite with a MWCNT mass fraction of 1.5% had a conductivity of 9.4×10⁻⁸ S/cm; grafting the -COOH segment reduced this to 1.8×10⁻⁸ S/cm, whereas grafting organic alcohol groups enhanced it to above 6.3×10⁻⁵ S/cm, an improvement of nearly three orders of magnitude, and the higher the grafting rate, the higher the conductivity of the composite.
Organosilicon modified multi-wall carbon nanotubes
Cui Gao's group first treated MWCNTs with mixed acid to graft the -COOH segment, then mixed the oxidized MWCNTs with a silane-modified polyethylene glycol (s-PEG) so that the two underwent an esterification reaction, grafting organosilyl groups onto the MWCNT surface. The composite was then prepared by solution blending. The results show that s-PEG covers the surface of the MWCNTs and the walls become rough, with the surface coverage reaching 25%. When the s-PEG-MWCNT content was 1%, the tensile strength and elongation at break of the composite were 15.8 MPa and 585%, increases of 597% and 152% compared with pure WPU. The introduction of the organosilicon group greatly improves the dispersion of the MWCNTs and strengthens the interfacial bonding between the MWCNTs and WPU, so the tensile properties and hence the mechanical properties of the composite improve. The conductivity of the composite increases with increasing MWCNT content; when the MWCNT loading reaches 5%, the conductivity of the composite is 9 orders of magnitude higher than that of pure WPU. The better the dispersion of the MWCNTs in WPU, the better the conductive network that can form in the composite, thus reducing its resistivity. In the unmodified composites, by contrast, the MWCNTs agglomerated to a certain extent in the WPU and a good conductive pathway could not form, so the conductivity of the organosilicon-modified composite was significantly higher than that of the unmodified composite.
Epoxy modified MWCNTs
Huafeng Duan's group treated MWCNTs with H2O2-FeSO4 to graft hydroxyl groups, and then grafted epoxy groups through a coupling reaction with γ-glycidoxypropyltrimethoxysilane (KH560). Finally, the carboxylated PU material, with alcohol as solvent, was cross-linked with the modified MWCNTs to produce the composite KH560-MWCNTs/WPU. The hydroxylated MWCNTs were also reacted in the same way with 3-aminopropyltriethoxysilane, and the composites E51-MWCNTs/PU were prepared by ultrasonic dispersion after adding them to epoxy resin E51. The results showed that the agglomeration of the MWCNTs was significantly weakened and the tube walls were significantly thickened under the scanning electron microscope. The initial thermal decomposition temperatures and the TG rates at 800 °C are shown in Table 2, which indicates that the grafting rate of E51-MWCNTs was higher than that of KH560-MWCNTs. The tensile strength and elongation at break of the composites containing 1.5% unmodified MWCNTs are shown in Table 3. These results show that the addition of epoxy groups improves the dispersion of the MWCNTs in PU and improves the mechanical properties while reducing the elongation at break. In terms of electrical properties, the resistivity of the unmodified composite decreased as the MWCNT loading increased. When the MWCNT loading reached 2%, the resistivity of the unmodified composite was 2.3×10⁸ Ω·cm, that of KH560-MWCNTs decreased to 2.5×10⁵ Ω·cm, and that of E51-MWCNTs decreased to 8.6×10⁵ Ω·cm. This indicates that the modified MWCNTs are dispersed more uniformly in the PU, forming a conductive network. However, when the grafting rate is high, the resistivity between conductive pathways also increases because of the insulating nature of the grafted organic matter, so the resistivity of E51-MWCNTs is higher than that of KH560-MWCNTs.
MWCNTs modified by amination
Zhenglong Yang's group first treated MWCNTs with mixed acid to graft the -COOH organic chain segment, and then reacted them with thionyl chloride to convert the carboxylated MWCNTs to the acyl chloride form. These were then mixed with ethylenediamine and triethylamine to give amine-modified MWCNTs, and composite materials were prepared by solution blending. The results showed that the introduction of amine groups improved the dispersion of the MWCNTs in water, and the interaction between the MWCNTs and the WPU material was also significantly improved. The particle size of the composite increases with increasing MWCNT content, and the viscosity also increases. The thermal stability of the composites increased: their initial thermal decomposition temperature and the temperature of maximum weight-loss rate increased with increasing MWCNT content, and for the composite containing 0.1% MWCNTs both temperatures were about 20 °C higher than those of the unmodified composite. In terms of mechanical performance, incorporating MWCNTs into WPU itself improves the mechanical properties of the composite, and the amine modification further enhances this improvement. In addition, the ultraviolet absorption capacity of the composites doped with such MWCNTs was also significantly improved; for example, the visible-light transmittance of the composite with 0.1% MWCNTs reached about 80%.
SDBS modified MWCNTs
Zhiqian Xie's group ground a mixture of sodium dodecylbenzenesulfonate (SDBS) and MWCNTs and centrifuged it to obtain SDBS-modified MWCNTs. Composites were then prepared by solution blending with different loadings of the modified MWCNTs (0%, 0.1%, 0.3%, 0.5%, 0.9%, 1.2% and 1.5%). Data on the mechanical properties are shown in Table 4. The results show that, as modified MWCNTs are added, the tensile strength and elongation at break of the composites first increase and then decrease; when the SDBS-MWCNT content is 0.3%, the mechanical properties are optimal, with the tensile strength and elongation at break increased by 9% and 29%, respectively. TG analysis showed that, after modification, the alkyl chains on the MWCNT surface interact with the soft segments of the WPU, so the MWCNTs reduce the crystallization of the soft segments. Table 5 lists the temperatures at which the composites lose 50% of their mass and indicates that adding MWCNTs decreases the thermal stability of the WPU. This may be because of the good thermal conductivity of carbon nanotubes: once they are combined with polyurethane, external heat is easily transferred through the nanotubes into the polyurethane matrix, making the polyurethane easier to decompose and lowering the decomposition temperature of the composite. In terms of electrical properties, when the content of modified MWCNTs was 0.9%, the conductivity of the composite increased by nearly 9 orders of magnitude.
Application fields of carbon nanotube/waterborne polyurethane composites
Application of mechanical properties
Carbon nanotubes have high mechanical strength and elasticity: they are more than 100 times stronger than steel but have only 1/6 of its density, making them excellent one-dimensional carbon materials, better than any fiber. Composites with waterborne polyurethane retain most of the excellent properties of carbon nanotubes and can be used as reinforcing materials for metals, giving industrial products greater durability and strength. Their elasticity can also be exploited in the textile field, making textiles more durable, elastic and ductile.
Application of electrical performance
Carbon nanotube/waterborne polyurethane composites have very broad applications as housings for electrical and electronic components. As a housing material, polyurethane has the advantages of light weight, easy processing and low cost, while carbon nanotubes have a large specific surface area and absorb electromagnetic waves over a certain frequency range, thereby achieving electromagnetic shielding; such shielding is an integral part of protecting electrical and electronic components from electromagnetic interference. The composite can also be used as a coating material on military aircraft to give them a "stealth" effect. The structure of carbon nanotubes also gives the composite both metallic and semiconducting character, so it can be used in semiconductor devices, as conductors in microcircuits, and in nanoscale transistors and other microelectronic components.
Application of adsorption performance
Carbon nanotubes act as a new type of adsorbent, and adsorption has become one of the excellent properties of their composites with WPU. Their adsorption of heavy metal ions such as Pb2+, Cd2+, Cr6+, Cu2+, Zn2+, Co2+, Hg2+, As3+ and Ni2+ can purify industrial wastewater and domestic water, and in addition to metal ions they also adsorb some organic pollutants. Furthermore, the composites can store hydrogen, which is an efficient, clean and pollution-free new energy source. Hydrogen is conventionally stored by high-pressure or liquefaction methods, but these carry a risk of explosion, so carbon nanotube/waterborne polyurethane composites usable as hydrogen storage materials have become one of the most actively studied composites.
Application of biomedicine
Polyurethane materials have good blood compatibility in the human body; in the field of medicine they are therefore widely applied to promote bone tissue repair and to produce artificial organs such as artificial hearts and muscle fiber materials. Current research aims to improve the physiological activity of the polyurethane surface so as to improve its contact with blood, but both the physical and the chemical methods used to graft new hydrophilic groups and physiologically active substances can impair the mechanical performance of the polyurethane, a defect that MWCNTs can compensate for. The composite is therefore a new medical material that combines blood compatibility with good mechanical properties.
Energy saving materials
In oriented carbon nanotube/waterborne polyurethane composites the carbon nanotubes are arranged vertically, with long, narrow gaps between them, and the size of the CNTs matches the wavelength range of visible light. When sunlight is incident, it is repeatedly reflected within the film between the CNTs and cannot escape. The composite therefore absorbs and conducts solar heat strongly and can be used as an energy-saving material for the effective use of solar energy.
Development prospect and technical improvement of composite materials
Carbon nanotube/waterborne polyurethane composites can greatly enhance the excellent properties of carbon nanotubes and waterborne polyurethane and make the shortcomings of the two materials complement each other, so that the unique properties of both can be used better and more efficiently. In the future, technical improvements to carbon nanotube/waterborne polyurethane composites can be divided into the following aspects: 1) Modification methods should reduce the use of, and dependence on, acids and other environmentally polluting reagents. 2) Modification methods for the polyurethane material itself should be explored to improve the dispersibility of carbon nanotubes in it; at present, most studies modify the carbon nanotubes, while modification of the polyurethane is rarely studied, and similarly the preparation and characterization of the composites are widely reported while the mechanism by which carbon nanotubes reinforce the composites is rarely discussed. 3) Improving the effect of physical modification methods on carbon nanotubes, and exploring modification by equipment alone without the help of chemical reagents, is also a promising research direction. Modification methods for the carbon nanotubes in these composites will become the key to the development of such composite materials in the future.
The development trend of carbon nanotubes/waterborne polyurethane composite modification technology is to explore new technologies and adopt more diverse modification methods to find the optimal process to improve the composite properties.
Conclusion
Currently, the most commonly used modification method is to treat MWCNTs with acid. Studies have shown that MWCNTs grafted with -COOH have poorer mechanical properties, thermal stability and conductivity than those grafted with silicon-containing, epoxy, amino and other chain segments. This paper mainly summarizes five methods of modifying MWCNTs: organic alcohol modification, organosilicon modification, epoxy modification, amination modification and SDBS modification. In the field of carbon nanotube/waterborne polyurethane composites there are many such methods, and the results obtained by different modification methods differ markedly. From the experimental data alone, however, it is difficult to state that one modification method is better than another; only the rough advantages and disadvantages of each method can be drawn, because when researchers prepare the composites, each method may differ in synthetic route, experimental location, experimental conditions and the distribution of polyurethane soft and hard segments. These differences lead to differences between the modified carbon nanotubes and in the mechanical properties of the polyurethanes, so the composites themselves differ and the apparent results of the modification methods are affected. In the future, researchers are expected to focus on the modification of waterborne polyurethane and to free the modification methods from their dependence on acids.
"year": 2020,
"sha1": "2f79fde499c3c19927af1c354e112afa49cb2b58",
"oa_license": "CCBY",
"oa_url": "https://www.e3s-conferences.org/articles/e3sconf/pdf/2020/73/e3sconf_acic2020_02026.pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "dd8dfcad6ba9a02d6967f6f3ac061c9f84d38609",
"s2fieldsofstudy": [
"Materials Science",
"Engineering"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
Optimal pressure management in water distribution networks through district metered area creation based on machine learning
ABSTRACT Integrated management of water supply systems with efficient use of natural resources requires optimization of operational performances. Dividing the water supply networks into small units, so-called district metered areas (DMAs), is a strategy that allows the development of specific operational rules, responsible for improving the network performance. In this context, clustering methods congregate neighboring nodes in groups according to similar features, such as elevation or distance to the water source. Taking into account hydraulic, operational and mathematical criteria to determine the configuration of DMAs, this work presents the k-means model and a hybrid model, that combines a self-organizing map (SOM) with the k-means algorithm, as clustering methods, comparing four mathematical criteria to determine the number of DMAs, namely Silhouette, GAP, Calinski-Harabasz and Davies Bouldin. The influence of three clustering topological criteria is evaluated: the water demand, node elevation and pipe length, in order to determine the optimal number of clusters. Furthermore, to identify the best DMA configuration, the particle swarm optimization (PSO) method was applied to determine the number, cost, pressure setting of Pressure Reducing Valves and location of DMA entrances.
INTRODUCTION
Water supply systems play a key role in urban design, not only to ensure that citizens have access to essential goods, but also for public safety reasons (DI NARDO; DI NATALE, 2011; GRAYMAN et al., 2009). The management of water supply systems becomes increasingly complex in the face of the reduction of available natural resources and the need to reduce energy consumption and water loss.
The division of the water distribution network (WDN) into districts allows better management and increased hydraulic and energy efficiency, since operations are directed to the needs of each district, in addition to the greater control provided by measurements and monitoring. However, such division can be a complex task due to the size of the network and its peculiarities, such as the number of loops, the variation of the geometric dimensions and changes in the hydraulic conditions, which can make the division inconsistent if they are not considered (DIAO et al., 2012).
For the definition of a district metered area (DMA) it is necessary to determine the supply points (entrance points) and their influence regions. In this definition, water supply should provide sufficient quantity and quality to consumers. Operating pressures must be ensured inside a standardized range, a condition normally achieved by using pressure reducing valves (PRVs). The location of supply points in the district and the operating pressure are fundamental in the clustering process.
Corroborating the importance of the division of networks into districts, important works have been proposed in the literature for the development of clustering tools. Tzatchkov et al. (2006) present a model based on graph theory for the segmentation of supply networks; the authors relied on graph analysis and graph partition in order to find a suitable design for the DMAs. Swamee and Sharma (2008) propose the segmentation of multiple sources by assigning pre-defined zones of influence for the clustering. Herrera et al. (2010) proposed the use of partitioning with methods based on machine learning for the definition of DMAs; also based on graph partition, the authors included an unsupervised learning approach to the DMA design, developing a hybrid graph theory / data mining algorithm. Diao et al. (2012) proposed the automatic creation of boundaries for the determination of measurement districts based on social structures, a tool in the field of Artificial Intelligence, and the decomposition theorem of complex systems (Simon, 1962). Campbell et al. (2014) proposed a clustering method based on social networks for the determination of districts using energy efficiency as a criterion. In this work, the authors found a robust and computationally efficient technique for DMA design in large networks based on graph partitioning and a data mining technique for nodal clustering, using topological criteria such as the maximal demand of a district or the maximal difference in node elevation as criteria for the graph partition. Di Nardo et al. (2014) proposed a method based on graph theory coupled to an optimization algorithm for the determination of the districts of a supply network, also aiming at energy efficiency improvement.
Among the several clustering tools, the k-means algorithm is the most prominent. Initially proposed by Steinhaus (1956), it is widely used for clustering problems due to its simplicity, versatility and speed of operation (WU et al., 2008), emphasizing its ability to handle a large amount of data (HUANG, 1998). On the other hand, with the advent of modern neurology and the consequent discoveries of cerebral functioning, mathematical models based on the behavior of this organ were proposed. Among them, Alhoniemi et al. (1999), Vesanto and Alhoniemi (2000) and Kohonen (2001) proposed the use of a self-organizing map (SOM), which simulates the recognition of patterns by the brain for grouping, classifying, estimating and predicting different types of problems, being widely used in the area of water resources.
The challenge of creating DMAs in supply networks is not fully solved from a database. Once defined the districts, it is necessary to define the entrance of each of these districts, thus allowing the installation of control elements, such as PRVs, to ensure complete isolation in cases of emergency or maintenance. The current propositions make use of hybrid optimizer-cluster models to determine the districts, minimizing structural costs and deterioration (GALDIERO et al., 2015).
During the last decades, water companies have moved towards dividing the water network, aiming at better management. The recommendation issued in the United Kingdom in the early 1980s (FARLEY, 1985) changed the management of water distribution systems: by the strategic placement of pressure control devices, the leakage rate could be reduced. Nevertheless, creating DMAs is still a complex task because many variables play important roles, such as topological and topographic features, costs and benefits. In order to develop an automatic tool for DMA design coupled to optimal pressure management, this work develops and analyzes two models of DMA creation in water supply networks using two sets of criteria, mathematical and topological. The first model is based on the k-means clustering algorithm and the second is a hybrid method combining the SOM and k-means methods, both with the purpose of determining the optimum number of groups of nodes with similar characteristics. Four mathematical criteria to determine the number of DMAs are evaluated, namely Silhouette, GAP, Calinski-Harabasz and Davies-Bouldin. In addition, the influence of three topological clustering criteria is evaluated: the maximal water demand, the maximal difference in node elevation and the total pipe length. Finally, an optimization model based on the bio-inspired particle swarm optimization (PSO) algorithm is applied for the allocation of the control and isolation valves of the districts, as well as their operating points, minimizing the installation costs.
In this sense, the proposed method is composed of two stages. In the first, a clustering algorithm (k-means) is applied based on physical (elevation) and topological (spatial position) parameters of the network. The algorithm divides the network into K groups based on Euclidean distances from K centers, initially randomly distributed and recurrently updated to the mean value of each group. The important task at this stage is to define the value of K; to help solve it, mathematical and topological criteria are explored in this paper, each considered separately. For future work, mainly for the topological criteria, the analysis of correlation or interference between the criteria could be considered.
Self-organizing maps
The main objective of a SOM is to process input data of arbitrary dimension and bring them to a one- or two-dimensional set of data, with transformations that guarantee topological similarity (HAYKIN, 2001). In general, the algorithm distributes a group of neurons within the characteristic space and, as iterations occur, this group changes so that the synaptic weights become representative of the multidimensional space, without previous knowledge of the behavior of such a surface.
The position of each node j of the network, also called a neuron, is represented by its weight vector, as in Equation 1:

w_j = [w_j1, w_j2, ..., w_jn],  j = 1, 2, ..., N    (1)

where N is the total number of neurons in the network. The similarity between a weight vector w_j and an input pattern x_i can be measured in terms of the distance between the two vectors. The neuron that satisfies the optimal condition of minimum distance is called the winning neuron and has associated with it a topological neighborhood that defines an activation zone. The criterion of similarity is given by Equation 2:

c = arg min_j || x - w_j ||    (2)

in which || x - w_c || represents the Euclidean distance between the input pattern and the network neurons and c denotes the chosen winning neuron.
The weights of the winning neuron and its neighboring neurons are then adjusted according to Equation 3:

w_j(t + 1) = w_j(t) + h_c(t) [x_i(t) - w_j(t)]    (3)

where t represents the training iteration, x_i(t) is the input pattern and h_c(t) is the neighborhood kernel around the winning neuron.
The definition of the neighborhood usually follows the idea in which the activation of nearby neurons is greater than the activation of distant neurons. Figure 1 presents, in a simplified way, a two-dimensional SOM with a two-dimensional input vector. The darker circle at the center represents the winning neuron and the gray scale shows the influence of the neighborhood in the adaptive process.
Once the actuation neighborhood is defined, each of the weights is updated so that all topological proximity information is considered. With the learning process finalized, each neuron will be close to a certain set of input data represented in the output space. Each of the neurons can then be defined as the center of a cluster with a set of data around it, then labeled.
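To make the SOM stage concrete, the following is a minimal NumPy sketch (not taken from the original study) of a rectangular SOM trained on node feature vectors. The exponentially decaying learning rate and Gaussian neighbourhood are common choices assumed here; the 25 x 25 grid and 4000 iterations mirror the configuration reported later in this work.

```python
# Minimal SOM sketch for the hybrid SOM + k-means model (illustrative only).
import numpy as np

def train_som(X, rows=25, cols=25, n_iter=4000, lr0=0.5, sigma0=4.0, seed=0):
    rng = np.random.default_rng(seed)
    # initialise neuron weights uniformly inside the feature bounding box
    W = rng.uniform(X.min(0), X.max(0), size=(rows, cols, X.shape[1]))
    grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols),
                                indexing="ij"), axis=-1).astype(float)
    for t in range(n_iter):
        x = X[rng.integers(len(X))]                     # random input pattern
        d = np.linalg.norm(W - x, axis=-1)              # Eq. 2: distances to x
        winner = np.unravel_index(d.argmin(), d.shape)  # winning neuron c
        lr = lr0 * np.exp(-t / n_iter)                  # decaying learning rate
        sigma = sigma0 * np.exp(-t / n_iter)            # shrinking neighbourhood
        h = np.exp(-np.sum((grid - grid[winner]) ** 2, axis=-1)
                   / (2.0 * sigma ** 2))                # neighbourhood h_c(t)
        W += lr * h[..., None] * (x - W)                # Eq. 3: weight update
    return W

# usage sketch: W = train_som(X_scaled); the trained neuron weights
# W.reshape(-1, X_scaled.shape[1]) can then be clustered with k-means
# to obtain the DMA labels of the hybrid model.
```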
k-means
K-means is an unsupervised learning algorithm used to group the points of a network according to similar characteristics. The algorithm works by determining the centroid of each cluster. The best clustered data will have their centroids located farthest from each other, with the points of the network allocated to the nearest centroids. The k centroids are selected randomly in the input space and each input data point is classified according to its distance to the centroids. After the allocation, it is necessary to recalculate the position of the centroids and evaluate whether there is any change with respect to the previous position, repeating the process until there are no changes. The quality of the allocation is measured by the objective function in Equation 4:

J = sum_{j=1..k} sum_{i=1..n} || x_i^(j) - c_j ||^2    (4)

where || x_i^(j) - c_j || is the distance between an input vector x_i^(j) and the centroid c_j, k is the number of centroids and n is the number of nodes in the network. In this study, the input vector x_i has four dimensions, representing the demand, elevation, latitude and longitude of node i of the network, as shown in Equation 5:

x_i = [demand_i, elevation_i, latitude_i, longitude_i]    (5)
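As an illustration of this clustering step, the sketch below (Python with scikit-learn, not part of the original study) builds the four-dimensional feature vector of Equation 5 for a small synthetic node table and groups the nodes with k-means; standardizing the features first is an assumption made here, since demand and geographic coordinates have very different scales.

```python
# Sketch of the node-clustering step; the node table is synthetic, not D-Town data.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# columns: demand [l/s], elevation [m], latitude, longitude (illustrative values)
nodes = np.array([
    [1.2,  52.0, -22.91, -47.06],
    [0.8,  55.0, -22.90, -47.05],
    [2.4,  80.0, -22.88, -47.03],
    [1.9,  78.0, -22.87, -47.02],
    [0.5, 120.0, -22.85, -47.00],
    [0.7, 118.0, -22.84, -46.99],
])

X = StandardScaler().fit_transform(nodes)   # put features on a common scale
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print(km.labels_)           # DMA index assigned to each node
print(km.cluster_centers_)  # centroids in the scaled feature space
```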
Criteria for clustering in districts
Clustering criteria are used to feed the algorithms with information in order to identify similar network nodes, grouping them in specific DMAs. Two types of criteria were considered for clustering: topological and mathematical. The first takes into consideration only the physical features of water supply networks. The second considers the quality of the clusters created.
Topological criteria
The topological criteria of a water supply system, such as the maximal water demand, the maximal difference in node elevation and the total pipe length, define the hydraulic behavior of the network. Identifying such criteria in the clustering process can favor pressure management in the districts.
The maximum water demand, the maximum elevation difference between nodes and the maximum pipe length of the same district were used to determine the number of clusters, varying the limit values of each one separately to verify the influence of each factor.
Mathematical criteria
The main purpose of clustering data is to determine groups with solid characteristics that differ as much as possible from each other. In addition, the more compact the clusters, the less ambiguous the overall clustering. Thus, quality measures are presented in the literature as means to evaluate both the distance between clusters and their compactness. The mathematical criteria used for the cluster-quality analysis were: GAP, Silhouette, Davies-Bouldin and Calinski-Harabasz.
GAP
The GAP criterion (TIBSHIRANI et al., 2001) consists of obtaining a graph of error measurements of the clustering in relation to the number of clusters of the network. The optimal clustering occurs when the maximum reduction of the related error is achieved. Reductions in error in relation to the number of clusters correspond to higher GAP values, with the optimal result occurring at the highest GAP value, local or global, considering tolerance limits. The GAP value is defined as shown in Equation 6:

Gap_n(k) = E*_n{log(W_k)} - log(W_k)    (6)

where n is the sample size, k is the number of clusters being evaluated and W_k is the measure of dispersion within each cluster.
The expected value E*_n{log(W_k)} is determined by the Monte Carlo method through a reference distribution, and log(W_k) is computed from the sample data, with W_k as shown in Equation 7:

W_k = sum_{r=1..k} D_r / (2 n_r)    (7)

where n_r is the number of data points in cluster r and D_r is the sum of the pairwise distances between all points of cluster r.
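A possible implementation of the GAP criterion is sketched below (Python with scikit-learn, not taken from the original study): the within-cluster dispersion W_k is approximated by the k-means inertia, and the reference distribution is drawn uniformly inside the bounding box of the features, following Tibshirani et al. (2001); the number of reference sets and the range of candidate k are illustrative.

```python
# GAP statistic sketch (Equations 6-7), using k-means inertia as W_k.
import numpy as np
from sklearn.cluster import KMeans

def gap_statistic(X, k_max=10, n_refs=10, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = X.min(0), X.max(0)
    gaps = []
    for k in range(1, k_max + 1):
        log_wk = np.log(KMeans(n_clusters=k, n_init=10,
                               random_state=seed).fit(X).inertia_)
        ref_log_wk = [
            np.log(KMeans(n_clusters=k, n_init=10, random_state=seed)
                   .fit(rng.uniform(lo, hi, size=X.shape)).inertia_)
            for _ in range(n_refs)          # Monte Carlo reference datasets
        ]
        gaps.append(np.mean(ref_log_wk) - log_wk)   # Eq. 6
    return np.array(gaps)

# usage sketch: k_opt = 1 + int(np.argmax(gap_statistic(X_scaled)))
```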
Silhouette
The Silhouette criterion (ROUSEEUW, 1987; KAUFMAN; ROUSEEUW, 1990) consists of a similarity analysis of specific data points in relation to the data of the same cluster compared with the data of other clusters. The silhouette value ranges from -1 to +1, with low or negative values representing poor results and high values representing appropriate clustering results. This value is given by Equation 8:

s_i = (b_i - a_i) / max(a_i, b_i)    (8)

where a_i is the average distance of the i-th point in relation to the other points in the same cluster and b_i is the smallest mean distance of the i-th point in relation to the points in the other clusters.
Davies-Bouldin
The Davies-Bouldin criterion (DAVIES; BOULDIN, 1979) consists of a ratio of the distances of nodes within a given cluster to the distance between clusters. The Davies-Bouldin index is given by Equation 9:

DB = (1/k) sum_{i=1..k} max_{j != i} D_{i,j}    (9)

where D_{i,j} is the ratio between the distances within clusters i and j and the distance between clusters i and j. Equation 10 shows this ratio in mathematical terms:

D_{i,j} = (d_i + d_j) / d_{i,j}    (10)

where d_i is the mean distance between each point in the i-th cluster and its centroid, d_j is the mean distance between each point in the j-th cluster and its centroid, and d_{i,j} is the Euclidean distance between the centroids of the i-th and j-th clusters. The maximum value of D_{i,j} corresponds to the worst DMA creation performance, while the minimum value represents the optimal creation.
Calinski-Harabasz
The Calinski-Harabasz criterion, or "variance ratio criterion" (VRC) (CALINSKI; HARABASZ, 1974), consists of the relation between inter- and intra-cluster distances. The VRC is given by Equation 11:

VRC_k = (SS_B / SS_W) x ((N - k) / (k - 1))    (11)

where SS_B is the total variance between clusters (Equation 12), SS_W is the total variance within each cluster (Equation 13), k is the number of clusters and N is the number of observations.

SS_B = sum_{i=1..k} n_i || m_i - m ||^2    (12)

where m_i is the centroid of cluster i, m is the overall average of the sample data, n_i is the number of points in cluster i, and || m_i - m || is the L2 norm (Euclidean distance) between the two vectors.

SS_W = sum_{i=1..k} sum_{x in c_i} || x - m_i ||^2    (13)

where x is a sample data point, c_i is the i-th cluster and || x - m_i || is the L2 norm (Euclidean distance) between the two vectors.
High values of SS_B and low values of SS_W represent well-defined clusters. The higher the VRC_k index, the better the clustering, with the optimum number of clusters defined by the solution with the highest Calinski-Harabasz index.
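The Silhouette, Davies-Bouldin and Calinski-Harabasz criteria are available directly in scikit-learn, so a compact way to score candidate numbers of DMAs could look like the sketch below (illustrative only, not the implementation used in the study; the GAP criterion would be computed separately, as sketched earlier).

```python
# Scoring candidate numbers of clusters with three of the mathematical criteria.
from sklearn.cluster import KMeans
from sklearn.metrics import (silhouette_score, davies_bouldin_score,
                             calinski_harabasz_score)

def score_partitions(X, k_range=range(2, 11)):
    rows = []
    for k in k_range:
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
        rows.append({
            "k": k,
            "silhouette": silhouette_score(X, labels),          # higher is better
            "davies_bouldin": davies_bouldin_score(X, labels),  # lower is better
            "calinski_harabasz": calinski_harabasz_score(X, labels),  # higher is better
        })
    return rows

# usage sketch: the row with the highest Calinski-Harabasz value gives the
# VRC-optimal number of DMAs for the chosen feature set.
```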
Optimal pressure management
Considering the optimum pressure management within each DMA, this study proposes the optimal allocation of valves at the entrance of each district, together with their pressure settings, aiming at the highest uniformity of pressure within the district.
The choice of the nodes belonging to a previously grouped DMA should comply with the minimum and maximum pressure constraints in addition to operational criteria that are raised throughout the study and enable better management of the districts.
Considering as decision variables the location of each valve and its respective pressure setting, the problem can be written as the minimization of the operating pressures of the system and of the pressure uniformity parameter PU_k, which expresses the pressure deviation of each node with respect to the mean pressure of the nodes of a district. This measure was proposed by Alhimiary and Alsuhaily (2007) and is shown in Equation 14. The minimization problem is subject to the pressure constraints (Equation 15) and to the number of nodes belonging to a DMA (Equation 16), with the pressure constraint written as

p_min <= p_{i,t} <= p_max    (15)

where PU_k is the pressure uniformity parameter for a given district k, T is the total simulation period, N_k is the number of nodes belonging to district k, p_{i,t} is the pressure at a given node i for the time step t, p̄_{k,t} is the mean pressure of district k in time step t, and p_min and p_max are the minimum and maximum standardized pressures respectively. The bio-inspired Particle Swarm Optimization (PSO) algorithm is used to determine the position of the valves and their respective pressure settings.
Particle Swarm Optimization -PSO
Particle Swarm Optimization (PSO) is a population-based algorithm that has particles as the elemental unit. The particles are composed of two vectors of size D (dimension of the problem).
One of these vectors represents the position of the particle and the other its displacement velocity. The first step of the method is the initialization of the particles, done randomly within a range of interest, both for position and for velocity. At each iteration n, the particle information is updated, considering its best position ever achieved (p_id) and the group best position (g_id), as shown in Equations 18 and 19 (EBERHART; KENNEDY, 1995):

v_id(n + 1) = v_id(n) + c_1 r_1 [p_id - x_id(n)] + c_2 r_2 [g_id - x_id(n)]    (18)

x_id(n + 1) = x_id(n) + v_id(n + 1)    (19)

The process continues until one of the stopping criteria is reached, such as reaching the target value within an arbitrated error, the maximum number of iterations, the lack of improvement in the objective function over a given iteration interval, or other stopping criteria widely used in numerical problems (FAIRES; BURDEN, 1998). Here d = 1, 2, ..., m, with m the number of variables of the problem, and n = 1, 2, ..., N, with N the maximum number of iterations. Also, r_1 and r_2 are numbers randomly chosen within the range [0, 1], and c_1 and c_2 are the cognitive and social coefficients respectively. The first is used in the initial iterations to perform a global search, while the second improves the local search in the final iterations, when the swarm is expected to be close to an optimal solution.
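A minimal sketch of the PSO loop described by Equations 18 and 19 is given below (Python/NumPy, not the implementation used in the study). The objective function is a hypothetical placeholder: in the actual problem it would be evaluated by a hydraulic simulation returning nodal pressures for each DMA. An inertia weight w is added to the velocity update, a common variant of the original formulation.

```python
# Minimal PSO sketch for tuning PRV pressure settings (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

def pressure_uniformity(prv_settings):
    # Hypothetical stand-in for the hydraulic model: here we simply score the
    # deviation of the settings from an assumed 25 m target head.
    return float(np.mean((prv_settings - 25.0) ** 2))

def pso(objective, dim, n_particles=30, n_iter=200,
        bounds=(10.0, 60.0), c1=2.0, c2=2.0, w=0.7):
    lo, hi = bounds
    x = rng.uniform(lo, hi, size=(n_particles, dim))     # positions
    v = rng.uniform(-1.0, 1.0, size=(n_particles, dim))  # velocities
    p_best = x.copy()                                     # personal bests
    p_val = np.array([objective(p) for p in x])
    g_best = p_best[p_val.argmin()].copy()                # global best
    for _ in range(n_iter):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        v = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)  # Eq. 18
        x = np.clip(x + v, lo, hi)                                   # Eq. 19
        vals = np.array([objective(p) for p in x])
        improved = vals < p_val
        p_best[improved], p_val[improved] = x[improved], vals[improved]
        g_best = p_best[p_val.argmin()].copy()
    return g_best, p_val.min()

best_settings, best_score = pso(pressure_uniformity, dim=6)  # e.g. 6 DMA entrances
print(best_settings, best_score)
```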
RESULTS AND DISCUSSION
The method proposed was applied to the D-Town network (MARCHI et al., 2013), composed of 398 nodes, 458 pipes, 7 tanks, 1 reservoir, 13 pumps and 4 valves, as shown in Figure 2.
The SOM was configured to have 25 rows and 25 columns with a squared topology, to execute a maximum number of 4000 iterations and a defined topological neighborhood size of 4 neurons. This arrangement was chosen through a sensitivity analysis, considering the processing time and the efficiency of the algorithm, measured by the quantification of the errors.
Topological criteria
A total of 18 scenarios were generated, 9 with the k-means algorithm and 9 with the hybrid algorithm, varying the district's maximal water demand, maximal difference in node elevation and total pipe length for the district. For each criterion, the cluster quality was evaluated using the Calinski-Harabasz index (VRC), in which a higher index value represents a higher quality of DMA creation.
Starting with the demand criterion, Table 1 presents the VRC index values for each of the limits used. There is a slight difference between the demand limit of 140 l/s when compared to the other values for hybrid clustering. Still, the best value of VRC is obtained by creating DMAs with the k-means method. Figure 3 presents the best creation scenario for each of the methods using the demand of 140 l/s as the limit value.
A spatial difference in the clustering patterns between the two methods can be noticed. The k-means method generates more circular districts, around a center of gravity, which is more compatible with reality.
Following the evaluation of the criteria for DMA creation, Table 2 shows the value of the VRC index using the elevation as parameter. In both methods the best value for VRC occurs with the maximum elevation difference of 75 m. Figure 4 shows the final distribution of the districts for each of the algorithms.
The last topological criterion analyzed was the maximum total pipe length for the district. Table 3 shows the value of the VRC index for each of the criteria boundaries. It is observed that the district with a maximum of 15 km has the best performance, and the clustered network for this limit value, in each one of the algorithms is presented in Figure 5. Within the topological criteria, the one that presented the best performance, when evaluated by the VRC index, was the scenario generated by the k-means algorithm with the maximum district length criterion. This result is very close to the districts generated by the same algorithm with the maximum demand criterion. In general, the k-means algorithm presented better performance alone when compared to the districts generated by the hybrid model.
Mathematical criteria
A total of 8 scenarios were generated, 4 with the k-means algorithm and 4 with the hybrid algorithm, varying the mathematical criteria. For each mathematical criterion, the quality of the district was also evaluated using the Calinski-Harabasz index. Table 4 shows the value of the VRC index for each of the mathematical criteria used.
It can be noticed that, for the k-means method, the scenario obtained by the VRC criterion itself had the best result, similar to those found with the topological criteria. On the other hand, the hybrid model achieved its best result with the scenario generated by the Davies-Bouldin criterion (DB), but once again, in all cases the hybrid model produced lower clustering quality values than the pure k-means method. Figure 6 shows the final distribution of the districts for each of the criteria.
Optimization of entrance location and operational point of PRVs
For each criterion used in the creation of DMAs, an optimization was performed on the k-means method, with the purpose of analyzing the cost involved in the optimal allocation of PRVs and the distribution of the pressures in the network under conditions of maximum and minimum demand for a period of 24 hours. The choice of k-means models is justified because they presented better results in the creation of DMAs, with well-distributed and compact clusters.
The total cost represents the cost involved in the installation of PRVs, while the unit cost represents the cost per valve installed. The costs for PRVs are based on Saldarriaga et al. (2019). This analysis was made in order to obtain insights into the costs associated with the pressure optimization. Table 5 presents the optimization results for each criterion. A good pressure distribution in the network occurs when the operating pressures of the system and the standard deviation between them are minimized, under both minimum and maximum demand conditions, compared with the situation without optimization. The topological criteria presented an improvement in the distribution of pressure in the network, with emphasis on the "Length 15 km" criterion, which showed a significant reduction in the pressure required by the network, evident in Figure 7. The mathematical criteria also showed an improvement in the distribution of pressure in the network, with emphasis on Calinski-Harabasz, which presented a significant reduction in the pressure required by the network, evident in Figure 8.
DISCUSSION
It is possible to notice from Figures 3-6 that the models that used only k-means to group the nodes of the network produced well-distributed and compact districts. With the hybrid model, all the clusters maintained the same pattern of clustering in diagonal bands, losing the essence of compact clusters and possibly creating difficulties for the strategic management of the districts, since they have an elongated aspect.
The variation of the topological criteria resulted in changes in the arrangements and number of districts, in which the increase of the criteria values tended to reduce the number of districts.
The mathematical criteria did not show drastic differences among them, with the Calinski-Harabasz criterion presenting the largest number of districts and the GAP criterion the lowest number of districts in the case of the model using only k-means.
When analyzing the Calinski-Harabasz index in the clustering, it is possible to notice that the models with the k-means algorithm presented, in general, higher indexes, thus with a higher quality. The best clustering with respect to the topological criteria was given for the maximum DMA water demand equal to 140 L/s (6 DMAs generated), the difference in node elevation between DMAs equal to 75 m (6 DMAs generated) and the maximum total pipe length of the DMA equal to 15 km (6 DMAs generated). The best clustering in relation to the mathematical criteria was obtained by using the Calinski-Harabasz method (8 DMAs generated), although the methods Silhouette (2 DMAs generated) and Davies-Bouldin (2 DMAs generated) presented very close indexes.
When analyzing the creation of DMAs in terms of mathematical criteria, the Silhouette, Davies-Bouldin, and GAP presented poor hydraulic results with only 2 DMAs created, which is not a significant improvement for management purpose. The Calinski-Harabasz criterion presented a good result, with 8 compact districts well distributed throughout the network and good quality evaluation indexes, in addition to a lower unit cost for PRVs installation (U$ 1,780).
When analyzing the creation of DMAs in terms of the topological criteria, all presented good results, with 6 DMAs created, compact and with well distributed characteristics. The criterion "Demand 140 l/s" presented the lowest total cost (U$ 33,092) and unit cost (U$ 1,947) for PRVs installation.
It is possible to notice that the total cost of installation increases with the number of DMAs. However, the unit cost tends to decrease, as there are more boundary pipes and a higher likelihood of working with smaller diameters, reducing the cost of each PRV. Figures 7 and 8 highlight the efficiency of the network optimization in terms of the distribution of pressures under the conditions of minimum and maximum demand of the system, reducing the overall pressure required by the distribution network. From the quantitative point of view, the PU in the network was reduced from 52.07 to 44.33 on average. This reduction in PU corresponds to a leakage reduction of 30% over the entire network. The leakage is calculated following the methodology presented by Brentan et al. (2017) and takes into account a scenario of the network operating without DMAs and the scenarios with DMAs.
Even if the benefits of DMA design are clear, given the diversity and dynamics of WDNs, it is hard to evaluate how large this benefit will be for a water utility without simulations and deeper studies of particular cases.
CONCLUSION
This work presented the comparison between a hybrid model (SOM + k-means) and a k-means method model for the creation of DMAs with the purpose of optimizing the water supply system, considering the similarity of the topological conditions of the nodes of the network, mathematical criteria and topological criteria to find the optimum number of DMAs.
The topological similarity of nodes in the water distribution network was essential for the effective creation of DMAs. The k-means method performed well, presenting good quality assessment indexes and the ability to simplify the water supply network, an important feature for water distribution management.
The use of mathematical criteria by itself can generate an impractical solution from the hydraulic point of view and for future work, the topological criteria must be considered jointly with the mathematical criteria to improve the quality of the creation of DMA.
Depending on the criteria used, the size and configuration of the DMAs will be unique and it is up to the system's managers to choose the criteria that will best suit the water distribution network, considering the costs involved.
From the mathematical point of view, the DMA design process can be affected not only by the hydraulic or physical features, but also by the formulation of the optimization problem. In this work, the optimization is applied to the optimal placement of control valves. In this sense, the costs of the valves (related to the number of valves and diameter size) are minimized, taking into account operational parameters such as the pressure deficit and pressure uniformity in a single-objective approach. The problem could easily be recast as a multi-objective optimization, considering the evaluation parameters (resilience, pressure uniformity, etc.) as objectives, or turning the constraints of the problem into objectives to be reached. If, on the one hand, the multi-objective approach can be useful for real and complex problems, on the other hand the final Pareto front must be processed and the opinion of decision makers will play an important role in the final solution of the problem.
"year": 2019,
"sha1": "ecc99f52023387a40cbc19ef8596b63ef83837e8",
"oa_license": "CCBY",
"oa_url": "http://www.scielo.br/pdf/rbrh/v24/2318-0331-rbrh-24-e37.pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "650882f15fea343a3d200da613c09fe75405a132",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
Staple Line Polyposis and Cytomegalovirus Infection after Stapled Haemorrhoidectomy
Early bleeding after stapled haemorrhoidectomy (SH) is not uncommon. Late and persistent bleeding occurring weeks or months after SH, however, is rare; it has been described in only around 10% of cases. It is attributed to the development of inflammatory polyps at the staple line. Occurrence of rectal bleeding in the presence of palpable polypoid lesions at the stapled anastomotic line can cause diagnostic confusion, and it is not uncommon that such lesions are initially confused with rectal carcinoma. We report a case of a 38-year-old male who presented with persistent rectal bleeding some 6 months after SH performed in another hospital. Rectal and colonoscopic examinations revealed polypoid lesions at the anastomotic line. The biopsy failed to confirm malignancy, but identified cytomegalovirus (CMV) infection. The development of multiple inflammatory polypoid lesions in conjunction with CMV infection at the stapled anastomotic line caused diagnostic confusion, but, after exclusion of cancer, this complication was efficiently treated by eradication of the CMV infection combined with surgical excision of the remaining polyps due to persistence of bleeding. This case is reported to highlight late bleeding due to inflammatory polyps after SH and to increase the awareness of surgeons and gastroenterologists of this benign but somewhat common complication.
Introduction
Haemorrhoids or piles are a common surgical ailment affecting many individuals in the world. Treatment of haemorrhoids is diverse, but the best treatment is prevention, which can be achieved by avoiding constipation, intake of a high-fibre diet and administration of bulk laxatives, if necessary. Local symptoms such as anal irritation and pain can be alleviated by soothing creams and suppositories, but they hardly provide long-term benefit. Although nonsurgical treatments of piles such as rubber band ligation, sclerotherapy, photocoagulation and cryotherapy are well accepted and very popular among patients, they are not suitable for all grades of haemorrhoids. Hence, for piles that are not suitable for nonsurgical treatment and for those which fail to respond to medical treatment, surgical intervention becomes a necessity. These surgical procedures vary from gentle anal sphincter dilatation to standard Milligan-Morgan haemorrhoidectomy. The latter is often associated with morbidity, severe pain and discomfort, which results in a bad reputation and makes it unpopular among patients. A reasonably new promising operation that is suitable especially for piles accompanied by mucosal prolapse is stapled haemorrhoidectomy (SH). This procedure was introduced by Longo [1] and has been gaining popularity due to its numerous advantages; it is now accepted as one of the procedures of choice for the treatment of prolapsing haemorrhoids [2,3]. It is associated with much less postoperative pain, early discharge and return to work [4]. Although it has been hailed as safe and effective, short- and long-term complications have been reported, some of which are life-threatening [5][6][7]. Such complications include staple line bleeding, stenosis, pelvic sepsis, rectovaginal fistula, rectal lumen obliteration, acute rectal obstruction and perforation [7][8][9].
Early staple line bleeding after SH is not uncommon. However, late and persistent bleeding that occurs weeks or even months after SH is rare; it has been described in only about 10% of cases [10]. It is attributed to the development of inflammatory polyps at the staple line. Occurrence of rectal bleeding in the presence of such palpable polypoid lesions can cause diagnostic confusion, and it is not uncommon that such inflammatory polyps are initially mistaken for rectal carcinoma. Coexisting cytomegalovirus (CMV) infection of the rectum may aggravate the development of these inflammatory polyps and may contribute to the persistence of rectal bleeding.
We report such a late staple line complication, which occurred in conjunction with CMV infection some 6 months after an uneventful SH.
Case Report
A 38-year-old male patient presented with bleeding per rectum and a 1-year history of constipation. He had undergone colonoscopy, which was normal, and SH had been performed 6 months earlier in another hospital, with early improvement in his haemorrhoidal symptoms. However, 4 weeks later, he started to complain of rectal bleeding on defaecation again, which became progressively worse. He had no past history of inflammatory bowel disease (IBD), diabetes or homosexual tendencies, and his family history of IBD and colorectal cancer was negative. Abdominal examination was unremarkable, but rectal examination revealed multiple polypoid lesions at the level of the anastomotic line. Routine blood tests, C-reactive protein levels and tumour markers were within normal limits and the HIV status was negative. Colonoscopy revealed multiple inflammatory polyps at the staple line (fig. 1). Due to high clinical suspicion of malignancy, a computerized tomography (CT) scan with rectal contrast was ordered, showing multiple polypoid lesions in the lower rectum with diffuse circumferential wall thickening extending from the rectosigmoid junction down to the anorectal ring (fig. 2). The histology of the colonoscopic biopsy showed an inflammatory response with evidence of CMV infection. He was started on ganciclovir therapy, but his bleeding symptoms persisted, and a second sigmoidoscopy 6 weeks later revealed a marked improvement in the endoscopic features of the polyps (fig. 3). His bleeding ceased completely after simple surgical excision of the remaining polyps and 2 staples at the anastomotic line. The final histology revealed inflammatory polyps with no evidence of malignancy or CMV infection. He remained totally asymptomatic at the 12-month follow-up.
Discussion
Although SH has been considered one of the procedures of choice for the treatment of prolapsing haemorrhoids [2,3] and has been hailed as safe and effective, it is associated with some short- and long-term complications which are attributed to a breach in the mucosal lining at the anastomotic line. The most serious complications are rectovaginal fistula, rectal perforation and deep pelvic sepsis, which may prove fatal in some cases [4,[6][7][8][9]. This case highlights yet another staple line complication which, although not life-threatening, caused a diagnostic dilemma and confusion. This patient developed multiple polypoid lesions which were felt clinically to be malignant, but proved to be inflammatory polyps which may well be associated with CMV proctitis based on the histological examination of the colonoscopic biopsies and the CT scan examinations. Therefore, the patient was started on ganciclovir therapy, which resulted in a marked improvement in the endoscopic appearance but had no significant effect on his symptoms. Surgical excision of the remaining polyps led to the complete disappearance of the patient's symptoms. The marked improvement in the endoscopic features of the polyps after treatment with ganciclovir may indicate that CMV infection could have played an important role in increasing the intensity of the inflammatory reaction at the staple line, with the subsequent development of the polyps. This CMV infection may also explain the associated thickening of the rectal wall that was seen on the CT scan. However, it is difficult to establish retrospectively whether the patient had harboured CMV infection or not, as the SH procedure was performed in another hospital.
CMV infection of the colorectum is usually observed in immunocompromised individuals such as diabetics and HIV patients in whom it can be life-threatening. It has also been reported in immunocompetent individuals, but is usually mild and subclinical as in this case [11]. However, severe CMV proctitis with massive fatal rectal bleeding in immunocompetent patients has also been described [11]. Although a strong association between CMV infection and severe ulcerative colitis exists (CMV positivity 57%) [12,13], the association with colorectal cancer is very weak or even nonexistent (CMV positivity 14% only) [14,15]. Nevertheless, it may masquerade radiologically and endoscopically as cancer, especially in immunocompromised patients [15,16].
In the study by Fondran et al., late and persistent bleeding due to inflammatory polyps at the staple line weeks or months after SH has been described in 11% (9/82) of patients [10]. As in this case, the bleeding was mild and resolved in all cases after surgical excision of the polyps [14]. Fondran et al. concluded that bleeding from inflammatory polyps occurs in a significant number of patients undergoing SH [10]. It was also recommended that such bleeding several weeks or months after the procedure should prompt a search for inflammatory polyps at the staple line and that simple surgical excision is adequate to prevent rebleeding [10]. Quah et al. described 2 patients with persistent fresh rectal bleeding due to inflammatory polyps occurring more than 12 months following SH [17]. Drummond and Wright also observed this bleeding occurrence in a patient presenting with intermittent bleeding per rectum 4 years after SH [18]. Examination under anaesthesia revealed palpable staples protruding from the mucosa which appeared to show signs of recent bleeding [18]. There was no other pathology noted and the staples were thus removed [18]. It is speculated that passage of stools over residual staples can result in recurrent local trauma and subsequent bleeding [18]. This may be the cause of the persistent bleeding even after CMV eradication in this case, as loose staples were found in the anastomotic line (fig. 3). Staple line reinforcement by sutures to stop the immediate bleeding that occurs after firing the staplers has been implicated as a precursor to polyp formation. However, in this case, it is not clear whether the formation of the polyps was related to the reinforcement by sutures, as the surgery was carried out elsewhere. Generally speaking, it is believed that an intense inflammatory reaction may be triggered by staples, as this complication is also observed in non-suture-reinforced anastomoses [10]. Whether these polyps develop as a result of an inflammatory reaction to exposed staples, staple line reinforcement by sutures, or due to CMV proctitis as in this case, it is not uncommon for inflammatory polyps to be initially confused with cancer.
This case highlights the possible development of inflammatory polyps at the anastomotic line, which may be aggravated by superimposed CMV infection, several months after SH. This complication may initially masquerade as rectal cancer. Hence, it cannot be overemphasized that awareness of this benign complication and exclusion of cancer are of prime importance to avoid unnecessary surgery for an inflammatory condition.
"year": 2010,
"sha1": "1512d6392482bb11c2b2bb5fee1c689dc3a7ab03",
"oa_license": "CCBYNCND",
"oa_url": "https://www.karger.com/Article/Pdf/316634",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "1512d6392482bb11c2b2bb5fee1c689dc3a7ab03",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
The incidence and risk factors of secondary epilepsy after viral encephalitis in children: A 10-year single-center retrospective analysis
Secondary epilepsy is a common concomitant disease of viral encephalitis (VE) in children. However, the risk factors for secondary epilepsy after VE remain debated. The aim of this study was to perform a 10-year single-center retrospective analysis to investigate the incidence and risk factors of secondary epilepsy after VE in children. A total of 8691 patients who suffered from VE in our hospital between December 2011 and February 2022 were included. The patients were divided into a control group (Group C) and an epilepsy group (Group E) according to whether they developed secondary epilepsy. Information about the treatment process was collected from medical records to determine the incidence. Univariate analysis and multivariate logistic regression analysis were performed to identify the independent risk factors. In the current study, the occurrence of secondary epilepsy after VE in pediatric patients was 10.99% (385 of 3503). The results of univariate and multivariate analysis showed that unconsciousness, convulsions, times of epilepsy >2, epileptiform discharge on electroencephalography (EEG), and cortical and subcortical damage on magnetic resonance imaging/computed tomography were significant risk factors for secondary epilepsy after VE. Nearly one tenth of pediatric patients suffered from secondary epilepsy after VE. Interventions for the identified risk factors should be used to prevent the occurrence of secondary epilepsy.
Introduction
Viral encephalitis (VE) is characterized as an acute inflammation of the brain caused by viral infection. Clinical investigations indicate that the causative virus is detected in only 25% to 33% of cases, with enteroviruses being responsible for the majority (80%) of cases, followed by arboviruses and adenoviruses. [1,2] The manifestations of VE can vary depending on the virulence of the pathogen and the response of the host. If the lesion primarily affects the meninges, viral meningitis is the predominant clinical presentation, whereas VE becomes the prominent feature when the lesion primarily affects the brain parenchyma. [3,4] From an anatomical standpoint, the meninges and brain parenchyma are in close proximity, thus when both are afflicted the condition is referred to as viral meningoencephalitis. This particular ailment predominantly affects children aged 2 to 10, exhibiting a notable propensity for infecting the nervous system. Manifestations of this condition encompass fever, vomiting, convulsions and, in cases where the brain parenchyma sustains damage, coma. [5] The severity and prognosis of VE exhibit considerable heterogeneity among pediatric patients. Mild cases generally demonstrate a favorable prognosis, whereas severe cases can result in various debilitating outcomes such as epilepsy, limb paralysis, hearing or visual impairment, altered consciousness, and even mortality. [6] Existing literature indicates that secondary epilepsy frequently manifests as a complication of VE. [7,8] Notably, children with VE are more susceptible to developing secondary epilepsy compared to adults. Following VE, approximately 8.9% of cases experience secondary epilepsy, with refractory epilepsy accounting for 24.12% of these instances. Patients with epilepsy may encounter motor dysfunction, mental impairments, or even fatality due to the ailment. Failure to control seizures in a timely manner can exacerbate the irreversible harm caused by the primary disease and inflict damage on multiple bodily systems. Consequently, there is an urgent requirement for interventions aimed at averting secondary epilepsy subsequent to VE. [9,10] Although certain studies have investigated the risk factors associated with VE complicated by secondary epilepsy, the sample sizes were limited, the duration of observation was brief, and the analyses lacked thoroughness.
Given that, the aim of this study was to analyze patients with VE at the Children's Hospital of Hebei Province over the past decade and to establish a theoretical foundation for the development of a more comprehensive treatment approach for children with VE by examining the incidence and risk factors of secondary epilepsy after VE in children. Additionally, the study sought to propose preventive measures and drug treatments based on the current clinical data of VE to mitigate the occurrence of secondary epilepsy.
Materials and methods
This retrospective study adhered to the principles outlined in the Declaration of Helsinki guidelines. Approval was obtained from the institutional review board of the Children's Hospital of Hebei Province (Grant No. 202302), and all participants provided informed consent to participate in the study.
Inclusion and exclusion criteria
From December 2011 to February 2022, a retrospective study was conducted at the Children's Hospital of Hebei Province to recruit patients who had experienced VE. The recruitment process involved querying electronic medical records. The inclusion criteria encompassed the following aspects: (1) the presence of infectious diseases, upper respiratory tract infections, or acute or subacute onset; (2) manifestation of symptoms indicative of brain parenchymal damage such as fever, convulsions, drowsiness, mental and emotional abnormalities, and even coma; (3) cerebrospinal fluid white blood cells normal or slightly increased, with lymphocytes dominating the differential count, and cerebrospinal fluid virus culture and specific antibodies positive; (4) electroencephalogram (EEG) characterized by diffuse or localized abnormal slow-wave background activity; (5) complete clinical data. The exclusion criteria for this study included the following: (1) a confirmed case of Japanese encephalitis; (2) family history of epilepsy, previous epilepsy, or other systemic progressive diseases causing epilepsy; (3) acute or post-discharge death due to other complications; (4) meningitis caused by other immune-mediated diseases; (5) history of taking psychotropic drugs; (6) other intracranial pathogenic infections, such as purulent meningitis, tuberculous meningitis, and cryptococcal meningitis; (7) Reye syndrome; (8) not actively receiving treatment; (9) lost to follow-up for various reasons.
Diagnosis and grouping
According to the diagnostic criteria and historical records of secondary epilepsy, as well as EEG and head imaging examination, the participants of this study were categorized into 2 groups: control group (Group C) and epilepsy group (Group E) based on the presence or absence of secondary epilepsy.
Data collection
Two researchers (MSZ and GYZ) reviewed patients' electronic medical records and conducted telephone follow-ups to record the demographic and clinical information of each participant, such as age, height, weight, gender, length of hospital stay, antiviral drugs in the acute phase, clinical manifestations in the acute phase, virology, cerebrospinal fluid, EEG monitoring, imaging examination, epilepsy control, and prognosis. Virological examination results, including serum virus antibody tests and serum and throat swab virus nucleic acid tests, were carefully recorded.
Statistical analysis
All statistical analyses were performed with the Statistical Package for Social Sciences software (version 23.0, SPSS Inc., Chicago, IL, USA). Continuous data were expressed as mean ± standard deviation (SD) or median (interquartile range). First, a univariate logistic analysis was performed to evaluate the relationship between each categorical variable and secondary epilepsy after VE. The Mann-Whitney U test or t test was used to evaluate continuous variables, as appropriate depending on the data distribution (equal variance and normality or not). Multivariate logistic regression analysis was used to evaluate the risk of secondary epilepsy. P values lower than .05 were interpreted as statistically significant in all statistical models.
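For illustration, the two-step analysis described above could be reproduced with a script along the following lines (Python with statsmodels rather than SPSS, so only a sketch of the workflow; column names such as "epilepsy" or "convulsions" are hypothetical placeholders for the study variables):

```python
# Sketch of univariate screening followed by a multivariate logistic model,
# reporting adjusted odds ratios with 95% confidence intervals.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def univariate_screen(df, outcome, factors, alpha=0.05):
    kept = []
    for f in factors:
        X = sm.add_constant(df[[f]].astype(float))
        p = sm.Logit(df[outcome], X).fit(disp=0).pvalues[f]
        if p < alpha:                      # keep factors significant in univariate analysis
            kept.append(f)
    return kept

def multivariate_model(df, outcome, factors):
    X = sm.add_constant(df[factors].astype(float))
    res = sm.Logit(df[outcome], X).fit(disp=0)
    or_ci = np.exp(res.conf_int())
    return pd.DataFrame({"adjusted_OR": np.exp(res.params),
                         "CI_low": or_ci[0], "CI_high": or_ci[1],
                         "p": res.pvalues}).drop("const")

# usage sketch (illustrative column names):
# kept = univariate_screen(df, "epilepsy",
#                          ["convulsions", "unconsciousness", "epileptiform_EEG"])
# print(multivariate_model(df, "epilepsy", kept))
```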
Characteristics of secondary epilepsy after viral encephalitis
Four thousand one hundred ninety-three patients were assessed for eligibility during this study. Six hundred ninety patients were excluded, including 425 patients who did not meet the inclusion criteria (confirmed Japanese encephalitis: 48 patients, family history of epilepsy: 45 patients, death due to other complications in the acute stage: 13 patients, death due to other complications after discharge: 12 patients, history of taking psychotropic drugs: 36 patients, history of tuberculous meningitis infection: 35 patients, history of suppurative meningitis infection: 34 patients, history of cryptococcal meningitis infection: 33 patients, Reye syndrome: 19 patients, not actively receiving treatment: 52 patients, lost to follow-up for various reasons: 98 patients) and 265 patients who declined to participate. Finally, a total of 3503 patients were enrolled in this study. As shown in Figure 1, there were 2097 males and 1406 females, with a mean age of 4.72 ± 2.13 years (range 2 months to 10 years). During the follow-up period, a total of 385 patients developed secondary epilepsy following VE and were subsequently assigned to the epilepsy group (Group E). Conversely, the control group (Group C) consisted of 3118 patients who did not experience secondary epilepsy after VE. Consequently, the incidence of secondary epilepsy after VE was determined to be 10.99%. Although patients in Group C exhibited higher age (4.83 vs 4.31), height (108.72 vs 106.58), and weight (16.64 vs 15.98) compared with those in Group E, no statistically significant differences in age, gender, weight, or height were found between the 2 groups.
Univariate analysis of clinical manifestation and secondary epilepsy in patients with viral encephalitis
The study conducted a univariate analysis to determine the significant risk factors for secondary epilepsy following VE. Factors such as length of hospital stay, rate of antiviral drug usage, convulsions, unconsciousness, headache, ataxia, positive meningeal irritation sign, diarrhea, positive Pan's test, positive pyramidal tract sign and times of epilepsy >2 were found to be significant risk factors. However, factors such as age, gender, peak body temperature, focal neurological dysfunction, weight, height, and times of epilepsy <2 were not associated with secondary epilepsy after VE. For further details, please refer to Table 1.
Univariate analysis of auxiliary examination results and secondary epilepsy in patients with viral encephalitis
The significant risk factors for secondary epilepsy following VE were identified as cerebrospinal fluid herpes simplex virus infection, diffuse/extensive slow waves on electroencephalogram (EEG), epileptiform discharge on EEG, simple subcortical damage, cortical and subcortical damage, and thalamic basal ganglia damage observed through magnetic resonance imaging (MRI)/computer tomography (CT), as indicated by the findings presented in Table 2. There was no significant association found between secondary epilepsy after VE and elevated intracranial pressure, white blood cell count, glucose levels, protein levels, monocyte count of cerebrospinal fluid, mycoplasma infection, rubella virus infection, cytomegalovirus infection, respiratory syncytial virus infection, enterovirus infection, coxsackievirus infection, rotavirus infection, chlamydia infection, adenovirus infection, Epstein-Barr virus infection in cerebrospinal fluid, or simple subcortical damage observed on MRI/CT scans.
Multivariate logistic analysis of clinical data and secondary epilepsy in patients with viral encephalitis
In the multivariate model, several factors were found to be significant risk factors for secondary epilepsy after VE, including convulsions, unconsciousness, positive meningeal irritation sign, diarrhea, cerebrospinal fluid herpes simplex virus infection, times of epilepsy >2, epileptiform discharge on EEG, and cortical and subcortical damage on MRI/CT. Following adjustment for confounding factors, unconsciousness, convulsions, times of epilepsy >2, epileptiform discharge on EEG, and cortical and subcortical damage on MRI/CT remained independent risk factors associated with secondary epilepsy after VE (P = .005, <.001, .006, .014 and .003), and the adjusted odds ratios were 7.39 (4.12-16.83), 15.88 (11.26-29.74), 6.27 (3.89-10.11), 4.68 (3.06-6.29), and 9.97 (7.48-18.59) respectively. The detailed information is presented in Table 3.
Discussion
VE, a prevalent childhood infection, carries notable morbidity and fatality rates. [11] Children afflicted with this condition may experience neuronal impairment and neuropathy, with significant implications for their overall well-being, developmental trajectory, and physical growth. During the acute phase of VE, brain edema, nerve cell necrosis, and inflammatory infiltration in the frontal and temporal lobes are frequently observed. [12,13] The abnormal neuronal discharge in this context can be attributed to viral agents, autoimmune responses, toxic metabolites, and cerebral arteriovenous thrombosis. Epilepsy is a prevalent and serious complication of VE, and the prognosis and long-term brain function of affected children are unfavorable. The objective of this study was therefore to help clinicians assess the risk factors associated with secondary epilepsy following VE in pediatric patients, in order to offer a more comprehensive approach to diagnosis and treatment. We conducted a 10-year retrospective study of children with VE in the Department of Neurology of the Children's Hospital of Hebei Province. Our investigation revealed that the incidence of secondary epilepsy after VE was 10.99%. Patients with unconsciousness, convulsions, times of epilepsy >2, epileptiform discharge on EEG, and cortical and subcortical damage on MRI/CT have a higher risk of secondary epilepsy after VE.
Childhood exerts a pivotal influence on the development of the brain, particularly with regard to intellectual growth. Infections of the central nervous system can significantly impede children's development, potentially resulting in conditions such as epilepsy, attention deficit hyperactivity disorder, and learning disabilities, among others. [14] Notably, children afflicted with viral meningitis face a tenfold higher risk of developing secondary epilepsy compared with those unaffected by the condition, [9] and this association is frequently observed within a 5-year timeframe following VE. In previous research, Wan et al investigated the risk factors associated with VE complicated by epilepsy and identified the duration of acute epileptic seizures, herpes simplex virus infection, and focal nervous system changes as independent risk factors. However, the sample sizes of such studies were limited and the durations of the research were relatively short. [15] We therefore intended to conduct a comprehensive assessment of the risk factors for secondary epilepsy after VE in children.
Convulsion here refers to a single, multiple, or persistent state of convulsion in the acute phase. Studies have shown that the risk of secondary epilepsy is highest within 3 to 5 years after VE. [16] About 22% of patients with convulsions in the acute phase of VE develop secondary epilepsy, a risk roughly 10 times that of the general population, whereas the incidence of secondary epilepsy in patients without convulsions in the acute phase is 10%. [17] In the present study, the incidence of convulsions accounted for 10.18% of all patients with secondary epilepsy, which is consistent with the research results of Prof Misra UK. [9] Since children cannot clearly express their feelings, careful observation of their condition is very important. The consciousness and mental state of children can be assessed clinically through dialogue, calling, and pain stimulation. Unconsciousness following VE is attributed to prolonged high fever and the infectious agents associated with the condition, leading to neural damage in the brain. Our findings indicate that convulsions and unconsciousness are independent risk factors for the development of secondary epilepsy in children following VE, aligning with the conclusions drawn in Stafstrom CE's research. [18] The necrosis of brain cells and the infiltration of inflammatory cells in the acute stage of VE can affect the stability of the nerve cell membrane and lead to acute seizures. After the acute phase, the necrosis, loss, structural disorder, and even abnormal proliferation of neurons in the lesion, potassium outflow and calcium influx caused by imbalance of the proton pump in the cell membrane, abnormal biochemical metabolism, and the decrease of γ-aminobutyric acid can lead to the formation of permanent epileptic lesions. [19] Recurrent epileptic seizures during the acute phase of encephalitis can result in ongoing discharge of brain cells and consequent damage to these cells. This damage can subsequently lead to the formation of new epileptic lesions, the onset of epilepsy, and the eventual development of postencephalitis epilepsy without any external triggers. [20] Consequently, it is crucial to actively manage epileptic seizures during the acute phase of encephalitis. Our study identified experiencing epilepsy more than twice as an independent risk factor for the development of secondary epilepsy following VE in children. EEG serves as a crucial diagnostic tool in the assessment of VE, enabling the evaluation of brain function impairment and aiding in the identification of symptomatic epilepsy. The extent of brain function damage is directly associated with the degree of abnormality observed in EEG recordings. Epileptiform discharge denotes the anomalous EEG patterns exhibited by individuals with epilepsy, commonly characterized by spike waves, sharp waves, spike-slow complex waves, or sharp-slow complex waves.
Table 1
Univariate analysis of factors associated with secondary epilepsy on demographic and clinical data of patients between 2 groups (χ ± s).
There are several limitations inherent in this study that warrant acknowledgment. Firstly, the retrospective nature of the study introduces a certain degree of bias, potentially impacting the accuracy of the results. Secondly, the inclusion of data from a single hospital restricts the generalizability of the findings, so a large-sample multicenter design would be more desirable to strengthen the persuasiveness of the research outcomes. Lastly, the substantial number of patients lost to follow-up (130) and those who declined to participate (177) may introduce interference that could affect the final results of this study.
Conclusion
In summary, our data suggest that nearly one tenth of pediatric patients developed secondary epilepsy after VE. VE patients with unconsciousness, convulsions, times of epilepsy >2, epileptiform discharge on EEG, and cortical and subcortical damage on MRI/CT are at higher risk of developing secondary epilepsy. Therefore, a deliberate treatment plan and close follow-up are necessary for cases with these risk factors.
Table 2
Univariate analysis of factors associated with secondary epilepsy on auxiliary examination data of patients between 2 groups (χ ± s).
Table 3
Multivariate logistic regression analysis of factors associated with secondary epilepsy. | 2024-03-16T05:09:34.256Z | 2024-03-15T00:00:00.000 | {
"year": 2024,
"sha1": "797786f5053fa5f195045ca117eb182efc33a8bd",
"oa_license": "CCBYNC",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "797786f5053fa5f195045ca117eb182efc33a8bd",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
259125756 | pes2o/s2orc | v3-fos-license | Indonesian market demand patterns for food commodity sources of carbohydrates in facing the global food crisis
Global food consumption tends to rise more quickly than supply. This is related to important global issues such as population growth, and global conflicts are likely to hinder food distribution. Indonesia has an enormous opportunity to anticipate these circumstances, given its potential as one of the largest food suppliers worldwide. Rice is still the staple food in Indonesia, but this position is increasingly challenged by wheat-based foods. Strategic plans to deal with potential food scarcity can be developed by understanding the behavior of demand for the major carbohydrate sources, namely corn, cassava, and sweet potatoes (as substitutes), and the development of wheat as a complementary food. The results of the study indicate that demand for rice, corn, cassava, and sweet potatoes, the food commodities that are major sources of carbohydrates, is inelastic, meaning that demand is not strongly affected by variations in price. The community still relies on rice as the primary food source. Cross elasticities >0 among these non-wheat food commodities indicate mutual substitution among the carbohydrate-source foods, and these commodities are normal goods: an increase in income, for example, also increases their consumption. The results also demonstrate that wheat food items are only a complement, not a staple food need, so concerns about wheat's dominance as a food component in industrial products currently have little impact on local food. The availability of high-yielding varieties of rice, corn, cassava, and sweet potatoes, the implementation of food reserves by the Indonesian National Logistics Agency (Bulog) from the central government to the regions, food diversification, changing preferences, and creating an awareness of local food pride through massive education are some of the anticipatory steps in response to the global food crisis.
leading a healthy and fulfilling life. It is crucial to analyze the consumption of carbohydrate-source foods in order to provide data for policies that address the future food situation. With regard to predictions of food scarcity, this study seeks to examine the demand for carbohydrate food sources at a national level.
Theoretical framework
Demand is the overall relationship between the amount of certain goods or services that consumers will purchase during a certain period and the price [31]. If prices vary while other factors such as prices of substitute or complementary goods, income, and preferences are considered constants, then the relationship between price and the amount of goods or services that consumers will purchase can be described in a curve obtained from the consumer's equilibrium points. Many factors, including self-price, income, the price of similar items, income distribution, population size, preferences, and other elements, determine the large number of commodities that consumers are willing to buy during a particular time.
Assuming that the other variables influencing demand are held constant, each variable can be evaluated separately. According to the law of demand, the quantity demanded and the price of a good are inversely related: demand will increase when the price of the commodity is low, and vice versa. As a result, the relationship between price and quantity demanded can be represented by a curve with a negative slope.
Food demand is affected by population in two ways. First, demand increases roughly linearly with population. Second, because population growth can slow the growth of per capita income (total income divided by population), a larger population does not always increase demand proportionally. In extreme cases, where total income does not increase at all, per capita income falls as the population grows. This circumstance can counteract the direct effect of population growth on food demand [31].
Data collection technique
The time series data used in this study are secondary data documented for the period 2007 to 2020 and obtained from FAOSTAT, Statistics Indonesia (BPS), and the Ministry of Agriculture's Center for Agricultural Data and Information Systems (Pusdatin). Data were collected and confirmed from FAOSTAT at the following links: https://www.fao.org/faostat/en/#data, https://www.fao.org/statistics/en/, and https://www.fao.org/food-agriculture-statistics/en/. In addition, the FAOSTAT data were supplemented with desk research on the databases of BPS and the Ministry of Agriculture. The institutional links are https://satudata.pertanian.go.id/, https://www.bps.go.id/subject/5/konsumsi-dan-pengeluaran.html, and https://www.bps.go.id/.
In this study, data on Indonesia's per capita income, food needs, and commodity prices were collected for analysis. From the three data collected, it is possible to obtain the annual average statistics for the general population in Indonesia, both in rural and urban areas. Five food commodity sources of carbohydrates namely corn, cassava, sweet potatoes, wheat, and rice were used as comparison. Since FAO does not provide a separate price variable, the price for each food commodity is calculated by dividing household expenditure (IDR) by the total amount of food purchased. The amount spent was for the food groups of tubers, grains, rice, and flour. The Ministry of Agriculture and BPS provided each of these supporting databases in specific details.
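Because FAOSTAT carries no separate price variable, the implicit price is derived as expenditure divided by quantity, as described above. A minimal sketch of that derivation is shown below; the figures are hypothetical and Python is used only for illustration (the study itself used MINITAB and SPSS).

```python
# Implicit price = household expenditure (IDR) / quantity of food purchased (kg)
expenditure_idr = [1_250_000, 1_310_000, 1_400_000]   # hypothetical annual expenditure on one commodity
quantity_kg = [240, 236, 231]                          # hypothetical annual quantity purchased

price_idr_per_kg = [e / q for e, q in zip(expenditure_idr, quantity_kg)]
print([round(p) for p in price_idr_per_kg])
```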
Data analysis method
1 Exponential smoothing technique for time series data
The data investigated in this study cover the years 2007 to 2020, so the exponential smoothing technique was used to obtain a consistent price series for the same time period in one unit of observation [32]. Employing an exponential smoothing technique reduces the bias in the use of time series data in the log-linear model. To see the overall long-term flow of the data, time series data can be smoothed using the exponential smoothing approach. For example, data on the real price of a commodity for a specific time period can be obtained by using exponential smoothing to generate predictions (one or two periods ahead) for a time series. This method provides a series of exponentially weighted moving averages over a time series; that is, throughout the series, any smoothed value or forecast depends on all observed values that precede it. The value of the most recent observation is given the highest weight in exponential smoothing, followed by the value of the previous observation, and so on, with the value of the earliest observation receiving the smallest weight [33]. The general equation of the exponential smoothing model is:

Y(t+1) = a·X(t) + (1 − a)·Y(t)

where:
X(t) = the actual value
Y(t) = the last prediction
Y(t+1) = the prediction for the next period
a = smoothing constant

The exponential smoothing approach was applied to this study's time series data using MINITAB and SPSS software [34].
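As a concrete illustration of the smoothing recursion above (the reconstructed formula Y(t+1) = a·X(t) + (1 − a)·Y(t) is the standard simple exponential smoothing form implied by the variable definitions), the short Python sketch below smooths a hypothetical price series. The study itself used MINITAB and SPSS; the Python code and the numbers are assumptions for illustration only.

```python
def exponential_smoothing(series, alpha):
    """Simple exponential smoothing: forecast Y[t+1] = alpha*X[t] + (1-alpha)*Y[t]."""
    forecasts = [series[0]]          # initialise the first forecast with the first observation
    for x in series[:-1]:
        forecasts.append(alpha * x + (1 - alpha) * forecasts[-1])
    return forecasts

# Hypothetical annual real-price series (IDR/kg), 2007 onwards
prices = [5200, 5400, 5350, 5600, 5900, 6100, 6050, 6300]
print([round(f) for f in exponential_smoothing(prices, alpha=0.3)])
```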
2 The demand function model for food commodities
In this study, the data were analyzed using the theory of the relationships between the elasticity coefficients in a demand elasticity matrix. Pyles describes this method of estimating elasticity in "Demand Theory and Elasticity Matrix Construction" [35]. The method is more efficient because it can indirectly estimate the elasticity value of a commodity from the income elasticity value, and it can be applied in situations where data are limited. Since the regression method involves a limited number of independent variables, multicollinearity is also minimized; price variables, in particular, tend to show high collinearity [35].
An econometric technique was used to evaluate the FAOSTAT data, using the double-log, log-linear, or constant-elasticity model [36]. Because the regression coefficients of this model directly reflect the elasticity coefficients of each variable [37], it was used to estimate the elasticity of demand for carbohydrate food sources in Indonesia. As demonstrated by Refs. [37-39], this model has long been used to study food consumption patterns.
The demand for a commodity is influenced by many factors simultaneously. In simple terms, [36] explained that in purchasing a quantity of commodity i, a consumer will be influenced by the price of the commodity (p) and total income (I) (as an income approach), with the function:

Qi = f(Pi, I)

The above function is called the "Marshallian demand function." Several other factors affecting demand include the prices of other commodities, market tastes, income distribution, population, consumer welfare, and government policies. In traditional demand theory, the factors affecting demand are narrowed to four items, namely the price of the commodity in question, the prices of other commodities, consumer income, and tastes. The Marshallian demand function yields the price elasticity, cross elasticity, and income elasticity [40], given by the following formulas:
Price elasticity:
ЄP = (∂Qi/∂Pi) × (Pi/Qi)

Cross elasticity:
ЄC = (∂Qi/∂Pj) × (Pj/Qi)

Income elasticity:
ЄI = (∂Qi/∂I) × (I/Qi)

The technique for estimating the coefficients of the linear regression equations in the log-linear model is Ordinary Least Squares (OLS). The OLS estimator makes it easy to obtain the best estimate with this model, since OLS estimators have the BLUE (Best Linear Unbiased Estimator) properties. Real food prices (P), the level of demand (Q) for the five commodities studied, and real income (I) per person for the entire population are the variables considered. The exponential (constant-elasticity) model must first be transformed so that the non-linear model becomes a linear model [40,41]:

Q = α P^β I^γ

Taking natural logarithms, the equation can be transformed as follows to estimate the previous equation:

ln Q = ln α + β ln P + γ ln I

The demand function model for each food commodity:

Q_i = α P_i^β I^γ

The demand function models for the correlations between food commodities:

Rice to corn, cassava, sweet potato, and wheat:
Q rice = α P corn^β; Q rice = α P cassava^β; Q rice = α P sweet potato^β; Q rice = α P wheat^β

Corn to rice, cassava, sweet potato, and wheat:
Q corn = α P rice^β; Q corn = α P cassava^β; Q corn = α P sweet potato^β; Q corn = α P wheat^β

Cassava to rice, corn, sweet potato, and wheat:
Q cassava = α P rice^β; Q cassava = α P corn^β; Q cassava = α P sweet potato^β; Q cassava = α P wheat^β

Sweet potato to rice, corn, cassava, and wheat:
Q sweet potato = α P rice^β; Q sweet potato = α P corn^β; Q sweet potato = α P cassava^β; Q sweet potato = α P wheat^β

Wheat to rice, corn, cassava, and sweet potato:
Q wheat = α P rice^β; Q wheat = α P corn^β; Q wheat = α P cassava^β; Q wheat = α P sweet potato^β

Similar to the theory of Pyles (1989), the elasticity values of many of the analyzed commodities can be calculated using this demand elasticity matrix approach by determining only the own-price or cross-price elasticity of one of the commodities together with the income elasticity of each product. In this way the elasticity of one of the commodities and the income elasticities of the five commodities can be determined using the results of the regressions of the demand functions for the other four commodities [42].
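To make the estimation step concrete, the sketch below fits a log-log (constant-elasticity) demand equation by OLS, so that the coefficients on ln P and ln I can be read directly as the price and income elasticities. The data are invented for illustration and the use of Python/statsmodels is an assumption; the study itself worked with MINITAB and SPSS.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical annual series, 2007-2020: per-capita demand, real price, real income
df = pd.DataFrame({
    "Q": [93.4, 92.1, 91.0, 90.2, 89.5, 88.7, 88.0, 87.1, 86.4, 85.9, 85.0, 84.3, 83.9, 84.5],
    "P": [4800, 5100, 5300, 5600, 6000, 6400, 6700, 7100, 7500, 7900, 8200, 8600, 9000, 9400],
    "I": [19.5, 21.0, 22.4, 24.0, 26.1, 28.3, 30.0, 32.2, 34.5, 36.8, 39.0, 41.5, 44.0, 45.2],
})

# ln Q = ln(alpha) + beta*ln P + gamma*ln I  ->  beta and gamma are the elasticities
X = sm.add_constant(np.log(df[["P", "I"]]))
result = sm.OLS(np.log(df["Q"]), X).fit()
print(result.params)   # coefficients on P and I are the own-price and income elasticities
```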
Food consumption pattern of the community
In Indonesia, the pattern of community food consumption remains dominant in the rice staple food. Rice has become the primary and first staple food. Even people who used to eat non-rice staple foods have switched to rice [43]. Rice and wheat are the most widely consumed foods in urban areas across all economic levels (including their derivatives). Meanwhile, the first staple food pattern for all expenditure groups in rural areas was rice, followed by corn, cassava, and wheat in the low-income group. Meanwhile, in the middle and upper classes, rice was followed by only flour [44,45].
The elasticity of food demand for carbohydrate sources
The prices of these food commodities significantly affect community behavior regarding food consumption patterns. The elasticity value indicates the sensitivity of food demand to the price level of these commodities [46]. Elasticity of demand arises when the price of goods or services affects consumer demand [36]: consumers will buy more if the price falls, and if the price rises they will postpone purchases and wait for the price to return to normal. The price elasticity of demand shows whether demand for a food commodity is elastic (>1), unit elastic (=1), or inelastic (<1) [42,47]. Changes in a good's own price, in income, and in the prices of other related goods, whether substitutes or complements, have different impacts for each commodity [48], and the impacts differ both nationally and within a region. Therefore, it is important to understand the sensitivity of demand for carbohydrate-source food commodities to these changes. This sensitivity can be measured using own-price elasticity, cross-price elasticity, and expenditure/income elasticity [48,49].
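A tiny helper, shown only to restate the thresholds above in executable form (it is not part of the study's analysis), classifies an estimated own-price elasticity:

```python
def classify_price_elasticity(e):
    """Classify an own-price elasticity by the absolute value of its magnitude."""
    magnitude = abs(e)
    if magnitude > 1:
        return "elastic"
    if magnitude == 1:
        return "unit elastic"
    return "inelastic"

print(classify_price_elasticity(0.26))  # rice in this study -> 'inelastic'
print(classify_price_elasticity(5.09))  # corn in this study -> 'elastic'
```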
Based on the findings of the study, the demand for food commodities in the pattern of the community's behavior, as represented by the value of the elasticity of demand for each carbohydrate food source, is as follows:
Rice commodity
Rice consumption per capita decreased significantly from 2008 to 2020, while per capita consumption of wheat-based foods rose. Rice is a staple food that Indonesian households have consumed for generations. Rice consumption at the national level was 93.44 kg per capita per year in 2008 and then decreased by 4.24% over the following years [50]. The average rice consumption of the Indonesian population has increased since the pandemic, according to [50]. Rice consumption, including local, superior-quality, and imported rice, averaged 1.404 kg per capita per week in 2018. In 2019, this number dropped to 1.374 kg per capita per week. When the pandemic struck, however, average consumption increased to 1.379 kg per capita per week. Consumption rose further in the second year of the pandemic, reaching 1.451 kg per capita per week in 2021.
The analysis results show that the own-price elasticity of rice is 0.26 (Table 1). This indicates that demand for this commodity is inelastic, or unresponsive to price fluctuations: when the rice price rises by 10%, the quantity demanded falls by less than 10%. Meanwhile, the income elasticity of rice demand is 0.24 > 0 (positive). This shows rice to be a normal good, indicating that as income increases, so will rice consumption. The conclusion is that rice is still needed by society in general and remains the staple food of the community. These findings align with prior research showing that rice was still the most favored carbohydrate source for Indonesian people, with bread and processed food considered luxury goods, while rice, wheat flour, cereals, and roots were normal goods [17].
All food commodities analyzed are substitutes for rice. This is indicated by the cross-elasticity value of all commodities analyzed, which is > 0. This means that these foods have the same role or function as rice and can be a substitute food when rice costs rise (Table 1).
Corn commodity
Corn is a normal good (ЄPcorn = 5.09 > 0) and has the most elastic demand among the food commodities, as indicated by its own-price elasticity of 5.09 > 1. The correlation with the prices of other carbohydrate food sources has a significant effect, as shown by the cross elasticities (ЄP), which are positive and greater than 1. This means that even when the prices of other foods increase, the increase in demand for corn is not reversed. Likewise, if people's income increases, the quantity of corn demanded also increases, although the effect is insignificant (ЄIcorn = 0.13) (Table 2). This suggests that the increase in demand for corn is driven not by food consumption but by demand from many non-food (feed) industries.
The increasing population and better income of Indonesian people have caused an increasing demand for livestock products, especially chicken and eggs. These demands have driven a dramatic increase in feed demand and corn as the main component of feed [51].
Cassava commodity
The analysis shows that cassava's elasticity is positive, so cassava is a normal good and an inelastic substitute for other foods (ЄPcassava = 0.62 > 0, the largest among them). The effect of an increase in the cassava price on its demand is very significant (ЄPcassava = 0.62**). At the same time, an increase in income has no significant effect on the demand for cassava (Table 3). So far, the demand for cassava has continued to increase for consumption, animal feed, the processing industry (dried cassava, chips, tapioca, and cassava flour), and new renewable energy materials.
Cassava is a rice substitute that is important in supporting the food security of regions in Indonesia. However, many obstacles still exist in changing the community's consumption patterns. Therefore, regarding food security in regions, it is necessary to disseminate cassava-based food diversification as an alternative to rice or corn. Various cassava-based products (intermediate and end-products) have been produced in small-scale industries with simple equipment and large-scale with modern machinery [52]. As an intermediate cassava product, tapioca has been growing rapidly in Indonesia. In recent years, modified cassava flour (mocaf) agro-industry has also been started [53]. Several agro-industries produce cassava end-products, such as cakes, chips, brownies, traditional sweets (dodol), fermented cassava (tape or tapai) and so on. In addition, cassava processing wastes or by-products can be processed into fertilizer, especially for plantation crops, and cassava peels can be processed into animal feed [53].
Sweet potato commodity
Consumer characteristics and behavior toward the sweet potato commodity are the same as for cassava, namely a normal and inelastic good that can be used as a substitute (ЄP > 0) for other foods (Table 4). The demand for sweet potatoes also increases with the need for raw materials for the food processing industry (sauces, snacks, and other functional foods).
The dependence and linkages of the demand for commodities produce different consumer behavior patterns in viewing a product. Information on the development of alternative sweet potato foods in terms of demand elasticity is therefore needed. Through this analysis, people's behavior and consumption patterns can be known from the influence of changes in people's income (income elasticity) and the price level of the commodity itself and of other commodities (own and cross elasticity) on sweet potato demand [54,55].
Wheat commodity
There is concern that Indonesia could "collapse" if food diversification is not rapidly strengthened in the face of increasing public consumption of imported wheat. Indonesia's wheat import demand was 10.69 million tons in 2019, 10.29 million tons in 2020, and 11.17 million tons in 2021 [50]. The analysis results show that the wheat commodity has negative elasticities (ЄP = −0.39 and ЄI = −0.04), indicating that it behaves as an inferior good, because rising income reduces the demand for flour. This shows that wheat-based foods do not currently pose a threat to the Indonesian people; local food commodities are still the primary source of food. This is also evidenced by wheat's negative cross-elasticity to other foods (rice, corn, cassava, and sweet potatoes), implying that increased consumption of other foods will reduce wheat consumption. As a result, the community continues to prefer local food for its needs (Table 5).
Several studies have shown that the nutritional content of local food is better than that of wheat, as the physicochemical properties of traditional tuber flours are better than those of wheat flour. Another aspect is that wheat and corn production will be increasingly impaired by ecological drivers such as land degradation, water scarcity, and climate change [22,56].
The dynamics of income to the pattern of community demand
Many factors determine the community's food consumption pattern, but the two primary ones are income and the community's knowledge of food and nutrition. The analysis results show that the level of community welfare continues to rise, as evidenced by an increase in people's income. However, changes in income have not had a major qualitative impact on consumption patterns; in particular, they have not shifted food consumption toward patterns that positively affect health and improve the quality of human resources. Food preferences change regularly, and price changes affect preferences. According to the law of demand, customers reduce their consumption of a commodity when its price rises, and vice versa. Income also affects preferences: households with higher earnings have a stronger preference for carbohydrate-rich foods than low-income households. Due to their limited resources, low-income households may not have many options for changing their food patterns [23,57].
The corn commodity is of particular interest because its demand elasticity is significant and the largest among the food commodities (>1) (Table 6). Currently, the corn-based processed food and feed industries are growing rapidly. In addition, people affected by health problems are switching to corn as a staple food. Corn, as a functional food, contains much of the dietary fiber needed by the body, as well as essential fatty acids, isoflavones, minerals (Ca, Mg, K, Na, P, and Fe), anthocyanins, beta-carotene (provitamin A), an essential amino acid composition, and other nutrients [58].
Income elasticity (ЄI) measures the sensitivity of demand for carbohydrate food sources to income. The dynamics of increasing income for cassava is greater than for other foods in influencing the demand (ЄI cassava = 0.51) ( Table 6). This shows that cassava has an opportunity to become a prospective food in the future related to the dynamics of people's income. Cassava has a low glycemic index (GI), making it suitable for people with diabetes. Cassava can substitute rice as an alternative food source in Indonesia [45,59]. A decrease in income reduces wheat flour consumption; vice versa, an increase in income increases wheat flour consumption. However, the increase is smaller than local food commodities and insignificant. In conclusion, the data in Table 6 reveal that the concern that wheat dominance shifts local food is unjustified for the time being.
Global issues in the future
Currently, rice provides 96% of the calorie and protein needs of the Indonesian population and 70% of those of most of the Asian population, especially the poor. Rice surpasses other foods in terms of nutrition. All parts of rice are edible, and it provides 360 calories and 6.8 g of protein per 100 g. Rice contributes 54.3% of per capita energy consumption; that is, rice makes up more than half of our energy intake, and it supplies about 40% of the protein in Asia [60,61].
Rice farming in Indonesia supports 25.4 million households or more than half of the country's population. In short, rice is a vital commodity for Indonesia since it contributes to its triple security: food, economic, and national. Most Asian countries are particularly interested in rice, not just as a wage good but also as a political commodity. Rice is an important economic commodity in Asia. It is not surprising that these countries allocate additional sums for farmers, both through various subsidy schemes and the construction of dams and irrigation networks, because rice farming is still applied by millions of farmers in most countries, including Vietnam, Myanmar, Thailand, India, and China, and rice contributes to the country's foreign exchange [61].
During 1996-1998, China allocated USD 18.2 billion per year for green box policies in the agriculture sector [62]. India provides subsidies for fertilizers, fuel, agricultural equipment, and various output pricing policies [63]. In addition to providing export credit subsidies and collateral-free bank loans, Thailand created a paddy mortgage scheme through the Bank of Agriculture and Cooperatives [64]. Everything is done with the main goal of meeting household food demands (self-sufficiency).
Anticipatory strategic steps for food in Indonesia
Indonesia's agricultural land has enormous potential in the agriculture sector. There are 100.7 million ha appropriate for
agricultural land, of which 24.5 million ha are good for wetland (rice fields), 25.3 million ha are suitable for dry land for seasonal crops, and 50.9 million ha for dry land for annual crops. Because of its tropical environment, Indonesia permits agricultural business to be done annually [64,65].
The following action plans should be taken to anticipate the global food crisis:
Increasing national food reserves
The government stabilizes the supply and pricing of staple foods, particularly rice, to maintain farmers' income and purchasing power while maintaining consumer affordability. One of the stabilization efforts is the management and maintenance of the Government Rice Reserves (CBP). The fundamental reason for considering the importance of creating national rice reserves is the global rice market's volatility and instability [66]. According to the data, the global rice trade volume is small, reaching only 10% of the total global rice output [67].
Total rice production increased between 2000 and 2016, according to data from the United States Department of Agriculture (USDA). However, the ratio of rice exports to rice production did not increase significantly (about 10% of total production), indicating that the international rice market is relatively thin. This can become an obstacle, especially during a food crisis. For example, many countries implemented export restrictions during the global food crisis of 2007/2008, including raising export taxes and limiting the amount of rice exported. These limits were imposed not just by rice-exporting countries but also by rice-importing and re-exporting countries. In this context, the capacity of a country to reduce its reliance on imports for staple consumer goods, particularly during a crisis, must be increased [68-70].
Food reserves are one of the instruments of price stability, particularly for overcoming seasonal food production patterns and anticipating the consequences of international market shocks. As mandated by Food Law 18/2012, Indonesia has established a multilayered mechanism of national food reserves, consisting of a central government food reserve, regional government food reserves (provincial, district/city, and village level), and community food reserves [71]. According to Presidential Decree Number 48 of 2016 concerning Assignments to the Indonesian National Logistics Agency (Bulog) in the Framework of National Food Security, Bulog is assigned to manage the Government Rice Reserves (CBP) in order to maintain food availability and stabilize food prices at the consumer and producer levels for rice as the staple food. The amount of government rice reserves is determined regularly, taking into account the level of community needs.
There are three main methods for calculating CBP. First, the Food and Agriculture Organization (FAO) uses the concept of the Stock Utilization Ratio (SUR), the ratio of rice stock/supply to total population rice needs (consumption and other needs); FAO recommends a safe SUR figure of 17-18% [61]. Second, the ASEAN Food Security Information System (AFSIS) study recommends that national rice reserves represent 20% of total national rice demand. Third, ASEAN countries were compared at the Policy Workshop on Food Security and Disaster Risk Reduction in Asia in 2018, held in Bangkok, Thailand; the amount of CBP can be determined by calculating the national rice need for the entire population of a country facing an emergency within a specific time frame. CBP plays a strategic role in maintaining rice price stability; dealing with emergencies, disasters, and food insecurity; implementing the ASEAN Plus Three Emergency Rice Reserves Agreement; international cooperation in social assistance; and other needs that are in the best interests of the government [72]. CBP, controlled by Bulog, is primarily responsible for managing food prices through market operations and social or natural disaster emergency assistance [73].
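The SUR mentioned above is a simple ratio, illustrated below with hypothetical figures (the function name and the numbers are not from the paper; only the 17-18% FAO benchmark is):

```python
def stock_utilization_ratio(stock_tonnes, annual_need_tonnes):
    """SUR = national rice stock divided by total annual rice requirement, in percent."""
    return 100 * stock_tonnes / annual_need_tonnes

# Hypothetical: 5.5 million t of reserves against 30 million t of annual rice needs
sur = stock_utilization_ratio(5.5e6, 30e6)
print(f"SUR = {sur:.1f}% (FAO-recommended safe range: 17-18%)")
```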
According to food policy, Bulog plays two roles in increasing national food reserves. First, Bulog's role as an operator for food procurement and price stabilization must be reinforced. Second, Bulog ensures food security, including price support for farmers, keeping prices accessible for consumers, and delivering food assistance to those who need it.
Functional food diversification
Food diversification is difficult to achieve quickly since it is strongly related to preferences ('taste') and consumption habits. The general public has consumed rice for a long time, and food diversification would require a drastic change toward cassava, sweet potatoes, or corn, for example. In Indonesia, there is a lack of socialization and promotion of the importance and value of these non-rice carbohydrate food sources.
Along with increasing public awareness of the importance of healthy living, consumer food preferences are also changing. Foods that are becoming increasingly popular among consumers must not only have high nutrient quality and an appealing appearance and taste, but must also fulfill certain physiological functions for the body. In other words, such food is functional or healthy food, so eating does not only fill the stomach but also improves health and fitness.
Despite being relatively high in calories (approximately 123 calories per serving), sweet potatoes also have a high concentration of nutrients, particularly red sweet potatoes. Red sweet potatoes have a higher vitamin A concentration than grains and other tubers, up to 7,700 SI. Sweet potato leaves contain more vitamin C than other fresh vegetables: 45-62 mg in sweet potato leaves versus only about 25 mg in cassava shoots. Beta-carotene and anthocyanins benefit health, particularly in preventing degenerative diseases such as coronary heart disease, stroke, and cancer [74,75].
Technological support in food production and processing
Indonesia is highly advanced in the field of sweet potato research and development. Since 1977, 14 superior, high-productivity varieties, including Daya, Prambanan, Borobudur, Mendut, Kalasan, Sukuh, Papua Solossa, Papua Pattipi, Sawentar, and purple and yellow sweet potatoes, have been developed by the Indonesian Ministry of Agriculture. Cassava commodities from the superior varieties Adira 1, Adira 4, Malang 1, Malang 3, Malang 4, Malang 6, UJ-3, UJ-5, and others have their own advantages, both in cultivation and in their utilization for industry and consumption. Similarly, food technology is already accessible, such as processing technology for sweet potatoes and cassava, for example chips and flour as basic ingredients in various food products such as breads, cakes, and so on [76].
The development of non-rice and non-wheat processing technology is limited. Rice and wheat flour are readily available in the market; however, flour from corn, cassava, sweet potato, canna, and taro is available only in limited quantities and not continuously. In addition, when compared with rice and wheat, processing technology, including equipment for local food, has not been optimally developed: rice cookers are well known, but there are no corn or cassava cookers yet. Even where such technology exists, the development, dissemination, and absorption of local food processing technology to improve processing practicality, nutritional value, economic value, social value, image, and acceptance are slow. Non-rice alternative food industries based on tubers, such as instant traditional tiwul and gatot from cassava, cassava flour, bija flour (from sweet potato), and so on, are still relatively limited and mostly operate on a small or household scale. The development of these types of industries is quite slow, influenced by many factors besides the continuity of raw material supply and marketing concerns related to limited demand. Food diversification education and the promotion of nutritious foods are important for the community in advancing food diversification.
Food diversification patterns and improved nutritional fulfillment can be promoted by formal and non-formal institutions in the community, and understanding of healthy food must be built from an early age. An example of non-formal community institutions is health activities for mothers and children, such as supplementary feeding activities (PMT). The activity's target is the attainment of the expected food pattern: diverse, nutritious, and balanced food. Public acceptance of non-rice food innovations for food security is still difficult to achieve, because non-rice carbohydrate foods only have the status of snacks.
Preference and pride of local food products
The public is still unaware of the importance of food diversification and nutrition. The prestige factor is sometimes more dominant than the health aspect in eating patterns, which are sometimes impulsive. This includes the need to raise public knowledge about food safety. Formal and non-formal institutions such as integrated service posts (Posyandu) can carry out food diversification and nutritional fulfillment patterns. These institutions are involved in local community health activities, particularly those involving mothers and children, such as supplementary feeding programs. This activity aims to achieve the intended food pattern of diversified, nutritious, and balanced food. Non-rice alternative food programs are still lacking in these activities. Because non-rice carbohydrate foods are considered only snacks, public acceptance of non-rice food improvements for food security remains difficult.
Conclusion
Based on the price elasticity of demand, the rice, maize, cassava, and sweet potato commodities are inelastic, meaning they are not responsive to changes in their own prices. The cross elasticities of these non-wheat commodities show mutual substitution between the carbohydrate-source foods, indicated by cross elasticities >0. The four non-wheat food commodities are normal goods: an increase in income, for example, will also increase consumption. The maize commodity stands out because its demand elasticity is significant and the largest among the food commodities (>1). At the same time, the dynamics of increasing income influence the demand for cassava more strongly than for other foods. This indicates that maize and tubers are prospective foods for the future. The dynamics of an increase in income will reduce the demand for wheat (negative income elasticity), although the effect is not significant. This means that wheat food is only a complement, not a staple food need, so the concern that the dominance of wheat as an ingredient of practical processed products in society will displace local food has not yet materialized. Facing the global food crisis, the anticipatory steps are the availability of high-yielding varieties of rice, corn, cassava, and sweet potatoes, the implementation of food reserves by Bulog from the central government to the regions, food diversification, and changing preferences toward consuming local food with pride.
Data availability statement
The datasets generated during and/or analyzed during the current study are available at the following link: https://data.mendeley.com/datasets/n6yfrp8gf3/1. | 2023-06-11T05:13:33.180Z | 2023-05-30T00:00:00.000 | {
"year": 2023,
"sha1": "320e994d8c309003c344e76fdaab8257097ddefc",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1016/j.heliyon.2023.e16809",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "320e994d8c309003c344e76fdaab8257097ddefc",
"s2fieldsofstudy": [
"Agricultural And Food Sciences",
"Economics"
],
"extfieldsofstudy": [
"Medicine"
]
} |
139985998 | pes2o/s2orc | v3-fos-license | Effect of NTP Pretreatment on Thermal Resistance and Fouling Components of Oilfield Wastewater
In order to prevent scaling during the evaporation of oilfield wastewater, low-temperature plasma (NTP) is used for pretreatment of heavy oil wastewater. The wastewater reacts with the ions and radicals produced by the low-temperature plasma and is then sent into the evaporator. The changes in the quality indexes of the distilled water and the distribution of fouling during the evaporation of heavy oil wastewater after plasma pretreatment were studied. The results show that the silica content and hardness of the wastewater decreased after plasma pretreatment, making it more suitable for evaporation treatment. At the same time, the salt and oil contents of the distilled water were reduced and its quality improved. In addition, when the wastewater was concentrated 30-40 times by evaporation, the suspended solids in the concentrated solution increased significantly after plasma treatment; correspondingly, the fouling at the bottom of the evaporator was greatly reduced. Comparing the indexes of the distilled water with the feed water requirements of the steam injection boiler shows that excessive oil content in the distilled water is the biggest obstacle to recovering distilled water as boiler feed water. Low-temperature plasma pretreatment can provide a quick, new way to solve the scaling and water quality problems in recovering distilled water from large volumes of heavy oil wastewater.
Introduction
At present, most of the oil fields in China have entered the middle and late stages of oil exploitation, and the water content of the produced crude oil has reached 70%-80% [1,2], in some oilfields even up to 90%. With the increase in the water content of the produced fluid, the amount of wastewater that must be treated increases rapidly [3]. Heavy oil wastewater refers to the water discharged from oil-water separation after treatment of the production fluid of heavy (asphaltic) crude oil. Its water quality is characterized by high temperature, high oil content, high viscosity of the oily sewage, and serious emulsification [4]. The main pollutants in heavy oil wastewater are petroleum, suspended solids, sulfide, volatile phenols, chloride, fluoride, ammonia nitrogen, saprophytic bacteria, and sulfate-reducing bacteria, and the water has a certain degree of hardness and salinity. Because of the high oil content, high salt content, and heavy components of heavy oil wastewater, membrane treatment lacks a long-term effective pretreatment process, which makes membrane treatment of heavy oil wastewater very expensive. Because heavy oil wastewater has a high temperature, chemical treatment would waste a large amount of the wastewater's waste heat [6-8]. With certain pretreatment, the wastewater can be fed directly into the evaporation equipment; even if wastewater with a high oil content enters, it will only affect the efficiency of the evaporation system for a short time without affecting the normal operation of the entire system. In addition, the evaporation process can utilize the heat energy of the heavy oil wastewater: as the wastewater temperature increases, the required heat transfer area of the evaporator decreases [5]. For heavy oil wastewater with a relatively high temperature, evaporation can therefore make more effective use of the waste heat. Large volumes of heavy oil wastewater cannot be discharged into the environment, while large volumes of high-quality water are needed to supply boiler feed water [11-13]. The evaporation process can treat oilfield wastewater to produce high-quality boiler feed water, which makes it a good resource recycling model [14]. Due to the high silicon content of heavy oil wastewater, a residue remains even after the silicon removal process [9]. During evaporation, silicon scale is very difficult to handle and seriously affects the evaporation efficiency and the use of the evaporation equipment [15].
Experimental device
The experimental setup consists of two parts: the NTP pretreatment unit and the evaporation-condensation unit that produces distilled water. Heavy oil wastewater first enters the direct-current (DC) high-voltage narrow-pulse NTP generator for pretreatment. After pretreatment, the water samples are collected, evaporated, and condensed to obtain distilled water and a small amount of residual concentrated liquid, and the quality of the distilled water and the concentrated liquid is analyzed. The plasma device is a DC narrow-pulse type with a peak voltage of 30 kV. The energy of a single discharge is 2 J, and the pulse discharge frequency can be adjusted between 0 and 1000 pulses/s. During the experiment, the flow was atomized by the nozzle group and then entered the plasma cylinder [10]. The amount of water handled per hour is 6 t; therefore, when the discharge frequency is 1000 pulses/s, the electricity consumption is only 0.333 kW·h per tonne. No reagent was added in the experiment. The treated wastewater is pumped back to the storage tank and sprayed into the plasma tube again. The treated water is then evaporated in an evaporator, and the steam is recovered by the condenser. The treatment process of the oilfield wastewater evaporation experiment is shown in Figure 1.
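The 0.333 kW·h per tonne figure follows directly from the numbers quoted above; the short check below reproduces the arithmetic (Python is used here only as a calculator):

```python
energy_per_pulse_J = 2        # J per discharge (from the text)
pulse_rate_per_s = 1000       # discharges per second (from the text)
water_rate_t_per_h = 6        # tonnes of wastewater treated per hour (from the text)

power_kW = energy_per_pulse_J * pulse_rate_per_s / 1000   # 2 kW of discharge power
kwh_per_tonne = power_kW / water_rate_t_per_h              # 2 kWh spread over 6 t each hour
print(f"{kwh_per_tonne:.3f} kWh per tonne")                # ~0.333
```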
Experimental method
In the NTP pretreatment experiment, the water was injected into the NTP excitation tube after pressure atomization, and the residence time in the tube was about 0.2 s. The water samples were subjected to cyclic pretreatment at different pulse frequencies, and the changes in silica, hardness, heavy metals, and oil in the water samples treated at different pulse frequencies were studied in order to determine the appropriate pulse frequency for the treatment of heavy oil wastewater. The numbers of pulses and cycles are shown in Table 1. The original water samples of the heavy oil wastewater and the water samples pretreated by NTP were then evaporated and condensed, and the quality of the distilled water and the concentrated liquid was compared.
The change of conductivity before and after pretreatment
The pH of the water samples is always weakly alkaline, varying between 9.4 and 9.6. Since the A and B water samples were taken from different heavy oil fields, the pH of the A samples is higher than that of the B samples. The pH values of the water samples changed only slightly after NTP treatment. For the A samples, when the discharge frequency was 300 /s or 500 /s, the pH value decreased slightly and decreased further as the number of treatments increased; when the discharge frequency was 900 /s, the pH value increased slightly after cycling. For the B samples, the pH value still declined slightly even when the discharge frequency was 900 /s. Comparing the hardness before and after treatment shows that the hardness of both the A and B water samples decreased to less than half of the original value after NTP treatment, which will greatly reduce the scaling tendency during evaporation. For the group A samples, the lowest hardness appeared at a frequency of 500 /s after one treatment; in the other cases, and for the group B samples, the hardness was further reduced after 2 treatments. Conductivity is an index of the electrical conductivity of water: when the types of ions in the water are relatively stable, the greater the total ion concentration, the greater the conductivity, and when the impurity composition is stable, the conductivity can represent the salt content of the water. The greater the conductivity, the greater the amount of soluble salt. Table 2 shows the conductivity changes of the water samples after treating the heavy oil wastewater with different pulse numbers. The conductivity of the A sample and its NTP-treated water samples changed significantly with the increase in the number of pulses and showed a generally upward trend. This is because, during NTP treatment, the heavy oil wastewater absorbs free radicals and ions, which increases the ion concentration. The B water samples showed a decreasing trend, because the original ion composition of the B samples was different: the absorption of free radicals produced by the plasma may cause some ions to precipitate, resulting in a decrease in conductivity. The water quality of the wastewater treated by low-temperature plasma is shown in Table 2. As can be seen from Table 2, after plasma treatment the dissolved silica concentrations in the A and B water samples were significantly reduced. However, there is an optimal treatment condition for obtaining the minimum silica concentration: for the group A samples, the lowest SiO2 concentration appeared at a frequency of 500 /s, with a soluble silica removal rate of 80%, while for group B the lowest silica concentration was found in the B9001 sample. NTP treatment can significantly reduce the soluble silica in heavy oil wastewater and, similar to the decrease in hardness, this is beneficial to subsequent evaporation treatment and can reduce the tendency toward silicon fouling in the evaporation equipment.
The scaling of the evaporation process and the inhibitory effect of NTP pretreatment
After evaporation, fouling of varying degrees was present on the evaporator wall for all water samples. After plasma treatment, the fouling behavior of the water during evaporation also changes, which is reflected in two aspects. First, scale that would otherwise deposit on the evaporator wall, where it is difficult to remove, instead forms a precipitate dispersed in the concentrate. Second, during continuous evaporation, because of changes in the fouling on the evaporator wall, the temperature difference between the evaporator wall and the saturated boiling water does not remain constant. Wastewater A and NTP-treated A3001, and wastewater B and NTP-treated B9001, were continuously evaporated in the evaporator to a concentration factor of nearly 40. The concentrated solution was filtered through a microporous membrane with a 0.45 μm pore size, and the suspended solids were separated, dried, and weighed. The evaporator wall was scraped with a hard plastic sheet and then washed with dilute nitric acid; all the washings and dirt were collected, and after evaporation to dryness the dry matter was weighed to obtain the mass of the fouling remaining at the bottom of the evaporator. The comparison of the masses of suspended solids and fouling is shown in Table 3. As can be seen from Table 3, after NTP treatment the crystallization of the fouling ions migrates from the evaporation surface to the liquid phase.
Conclusion
In this paper, the effect of NTP pretreatment of heavy oil wastewater on its evaporation recovery process and the quality of the resulting distilled water was studied. The conclusions are as follows. After NTP treatment, the pH value of the heavy oil wastewater decreased slightly and the conductivity also changed. The silica content and hardness decreased by more than half, which helps to prevent the formation of fouling during evaporation and reduces the conductivity and hardness of the distilled water. The quality of the distilled water produced from differently treated water is different; other measures should be taken to prevent foam from being carried over by the steam, so as to improve the quality of the distilled water and meet the requirements of the steam injection boiler. NTP pretreatment can meet the requirements for oil content and SiO2 content. After NTP pretreatment, the fouling from the heavy oil wastewater is obviously inhibited, and the fouling ions are clearly transferred to the liquid phase. After NTP pretreatment, fouling during the evaporation of the wastewater develops slowly, as shown by the slow rise of the temperature difference between the evaporator and the boiling water.
Fund project
Supported by The Science and Technology Research Project of Chongqing Municipal Education Commission (KJ1605301). Author's brief introduction: ZHAO Jie, male, lecturer, direction of environmental protection, organic synthesis and Polymer materials, Email: zhaojie0001@163.com. | 2019-04-30T13:07:02.648Z | 2018-01-01T00:00:00.000 | {
"year": 2018,
"sha1": "6a3ac1aa311a65ab0dbc2a0cf54115dbe0550b48",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1755-1315/108/3/032029",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "bb6585375e3ed9b364b39072035402207ae13921",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Environmental Science",
"Physics"
]
} |
142383062 | pes2o/s2orc | v3-fos-license | Investigation of the Effect of Using a Novel as an Extensive Reading on Students’ Attitudes and Reading Ability
The present study aims at investigating learners' perceptions of the use of a novel as extensive reading material in a college EFL reading course. For this purpose, fifty Iranian EFL students read, and received instruction on, an unabridged short novel in addition to their textbook for one semester. Three questionnaires were used to measure students' attitudes toward novel reading, students' confidence in their novel-reading ability, and students' perceptions of using a novel as an auxiliary material, prior to and after reading the novel. In addition, three open questions were offered to obtain the benefits and obstacles of novel reading. T-test analyses were used, and the findings revealed that there was a significant improvement after reading the novel in students' attitudes, confidence, interest, and novel-reading ability. However, the students suggested reading novels on themes that they preferred. The results of this study are of pedagogic significance to EFL teaching in that they indicate how well a novel was received in an EFL advanced reading class.
Introduction
In learning an L2, and particularly an FL, the input is severely limited and its quality is not ideal, whereas in L1 the amount of input available for exposure is abundant. Extensive reading therefore plays a great role in FL learning (Renandya, 2007) and reduces the exposure gap between L1 and FL. According to Krashen (2006), newer approaches to language teaching do not rely on painful skill-building approaches; that is, language is not learned by consciously practicing rules and vocabulary, but on the basis of the comprehension hypothesis. The manifestation of the comprehension hypothesis in literacy is the reading hypothesis. A large number of materials are used for extensive reading in many ESL/EFL programs, such as newspapers, magazines, and books (Day & Bamford, 1998). Kembo (1993) stated that extensive reading motivates students and increases their reading confidence. Literature lends itself to the integration of reading education and the development of other language skills. Krashen (2006) claimed that "the methodology of the future will, I hope, include sheltered popular literature" (p. 145) to explore current cultures in other countries and to stimulate interest in reading. Melon (1994) observed that novels are excellent sources of plenty of comprehensible input. Although authentic novels are recommended in material lists for pleasure reading, they are scarcely employed to supplement reading textbooks (Chih-hsin Tsai, 2012).
There are benefits to using novels as authentic texts to develop student-centered learning, since novels provide plot, characters, and the context of settings, all of which contribute to the engagement of the reader, regardless of specific proficiency levels, grammatical charts, or writing exercises (Garies et al., 2009). Tsou (2007) stated that careful selection of novels as textbooks is a crucial factor in teaching students of all proficiency levels. In contrast with these views, other researchers have pointed out obstacles to using an unabridged novel as course material, which may seem "too radical a leap from tradition" (Garies et al., 2009, p. 145). Thiongo (1986) mentioned that by using well-known literary texts and engaging learners in English culture, we are imposing a kind of "cultural imperialism". Garroli (2002, p. 113) stated that "there is a need for qualitative studies, focused on learners to explore the relationship between literature, language and students". Students need to enjoy learning the language, since "a mental block, caused by affective factors … prevent[s] input from reaching the language acquisition device" (Krashen, 1985, p. 100). Another factor, therefore, is to consider students' perceptions rather than concentrating on the teachers or instructors alone. There has been a steady increase in the number of studies on the use of literature, particularly short stories and novels, as a basic source of authentic texts. However, the results of EFL/ESL studies on the effect of using stories and novels on reading comprehension have not been homogeneous. This study considered a novel as an authentic auxiliary source of reading for Iranian EFL students and investigated the possibility of using a novel as the auxiliary material in a college reading course. It focused on evaluating the impression of novel teaching in terms of students' subjective perceptions instead of objective linguistic gains.
Reading Comprehension
Reading skill is a crucial element in learning another language, particularly for academic skills (Anderson, 1994). Alptekin (2006) defined reading as "an interaction of the readers' text-based and knowledge based processes. In processing texts, readers combine literal comprehension, based on lower-level cognitive processes of reading such as lexical access and syntactic parsing, with referential comprehension based on higher-level cognitive processes such as the text base of comprehension (to understand what the text says) and the situation model of interpretation (to understand what it is about)" (p. 494).
Extensive Reading and Literature
Extensive reading is based on the principle that we learn to read by reading. Appropriate selection of material for extensive reading is an important factor to consider. Literary texts (novels) are recommended for extensive reading for four reasons. First, linguistic development: novels are ideal instruments for supporting both isolated skills and the integration of skills, owing to their length and varied content; novels support the teaching of grammar in both integrated and isolated curricula through built-in context and the reforming of forms and structures (Garies et al., 2004, p. 143). Second, cognitive reasons: literature (the novel) develops critical-thinking skills, as learners engage with the literary texts and apply their own feelings and ideas to them. Third, motivation: Garies (2004) stated that intensive reading textbooks in traditional ESOL programs seem boring, whereas using novels as course books made reading more enjoyable for readers and helped them diversify their reading habits. Fourth, cultural awareness: learners can discover how the characters behave, feel, and think; thus, literary works develop an understanding of the communication that takes place in that country.
Research on EFL students' novel reading
Some studies have examined the effect of novels and short stories on EFL reading comprehension. Gareis et al. (2004) showed the "benefits of extended reading; as novels are motivating and authentic; and they can support any curriculum and be used in a variety of programs" (p. 145). Fan-ping Tseng (2010) investigated students' perceptions of a teacher's presentation of twenty-four literary works; the survey showed that most students had a positive attitude toward novels, followed by plays, short stories, and poems. In contrast to the previous research, Sell (2005) criticized FL textbooks for including literary works full of imaginary and unnatural matters that rarely apply in real life as practice for language skills; on this view, the use of literary works may seem unsatisfactory for Iranian EFL learners. To the best knowledge of the author, no study in the EFL context of Iran has explored the possibility and effectiveness of teaching unabridged novels in the language classroom; thus, the present study investigated EFL students' perceptions of using an unabridged novel as the auxiliary material in an Advanced English Reading course.
Research questions
In particular, the study aims to answer the following questions:
1. Are there changes in the students' attitudes, their perceived confidence in novel-reading ability, and their perceptions of using a novel as an auxiliary material after the entire reading class?
2. What are the students' perceived gains and obstacles from reading the novel?
3. What are the strengths and weaknesses of the novel-reading class as perceived by the students?
Settings and Participants
This study was conducted in two classes of the Advanced English Reading course offered at Ilam Azad University in Iran. The course runs for sixteen weeks (one semester) of two-hour sessions and has the general goal of enhancing students' reading ability. The two classes consisted of fifty sophomore EFL students (40 females and 10 males). The students' ages ranged between 18 and 24 years, and they were considered to be of upper-intermediate reading proficiency.
Course Material
In the present study, an unabridged short novel, The Red Pony, was chosen as the auxiliary material. The novel was written by John Steinbeck in 1933, and the full book was published in 1937 by Covici Friede. John Steinbeck is one of the most popular storytellers of the twentieth century. This book was chosen for several reasons: first, it is one of the most accessible and favorite books; second, it is adequately challenging for upper-intermediate readers; and finally, it has a special organization and setting. The novel has 112 pages and is divided into 4 chapters. Each chapter is an episode held together by common characters, locations, and themes, and the chapters follow a similar timeline. This simple plot helps students manage these pedagogic units efficiently, and each story can stand alone.
Teaching procedure
The Advanced Reading class lasted for 14 weeks, meeting once a week, and the novel The Red Pony was used in addition to the course reading book. At the start of the class, quick facts about the course and the organization of the novel were given. In addition, the teacher had the role of modeling good reading practices; therefore, the teacher trained students in some instructional strategies, such as how to read extensively and how to get the main idea instead of reading word by word.
Students were told to read 8 to 10 pages each week so as to finish the entire novel by the end of the semester. Each week, students orally summarized what they had read to the whole class, and the teacher asked questions to assess their comprehension of the content. No tests were given to the students to assess the literary text, since testing may deteriorate students' interest. In the current study, a pre-test questionnaire in the first week of the class and a post-test questionnaire in the last week were given to the students, covering students' attitudes toward novel reading, students' confidence in their novel-reading ability, and students' perceptions of using a novel as an auxiliary material. In addition, three open-ended questions were used to discover the benefits and obstacles of the novel-reading class.
Research Design
The present study employs a survey research design with a mixed-method approach. Since it includes a survey (Likert-scale questionnaires) and open-ended questions, both quantitative and qualitative approaches are utilized to solicit students' attitudes before and after the novel-reading process. As Munn and Drever (1993) stated, questionnaires "tend to describe rather than explain why things are the way they are"; owing to this limitation, in the present study the qualitative data from the open-ended questions are used to back up the quantitative data from the Likert-scale questionnaires.
Instrument
A pair of novel-reading questionnaires adopted from Tsai (2012) was used. Tsai reported that these questionnaires were validated and reliable tools, with reference to Bacha (2012) and Chiang (2010). First, a pre-reading questionnaire (pre-test) incorporating questions about the students' demographic information and their previous reading experiences was used. Then, a pair of questionnaires measured students' perceptions prior to and after the novel-reading experience in the Advanced English Reading course. Both questionnaires consisted of 23 similar 5-point Likert-scale items measuring students' perceptional changes in terms of their attitudes toward novel reading (8 items), their confidence in novel reading (5 items), and the appropriateness of using a novel as an auxiliary book (10 items). Besides the Likert items, three open-ended questions were offered to elicit students' opinions about the course.
Data Collection and Analysis
The pre-test and post-test questionnaires were given when the class met for the first time (1st week) and the last time (14th week) of the semester, respectively.
Pre- and post-test Likert-scale results were analyzed to gain a better understanding of the students' reading experiences and to discover statistically significant differences in their perceptions toward novel reading prior to and after the reading process. To serve this purpose, the analysis involved two stages. First, descriptive statistics were used to address the research questions via frequency and percentage analysis. Then, the hypotheses were tested with paired-sample t-tests, comparing the means of the pre-test and post-test scores on the 23 Likert-scale items using the SPSS package 21.0 for Windows. Finally, responses to the three open-ended questions in the post-test were analyzed after coding and frequency calculation.
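A minimal sketch of the paired-sample t-test used on the pre/post Likert scores is given below. The study itself used SPSS 21.0; the scipy call and the score arrays here are only an illustration with hypothetical data.

```python
# Paired-sample t-test on hypothetical pre-test/post-test Likert item means.
import numpy as np
from scipy import stats

pre  = np.array([3.2, 3.6, 3.4, 3.8, 3.5, 3.3, 3.7, 3.6, 3.4, 3.5])  # assumed pre-test scores
post = np.array([3.9, 4.1, 3.8, 4.2, 4.0, 3.7, 4.1, 4.0, 3.9, 3.8])  # assumed post-test scores

t_stat, p_value = stats.ttest_rel(post, pre)   # repeated-measures (paired) t-test
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:                             # decision rule used in the study
    print("Significant pre/post difference; reject the null hypothesis.")
```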
Results
In the present study, background knowledge obtained through the demographic pre-test questionnaire showed that the students had little previous experience: only one-fourth of the 50 students had read simplified novels. The results of the pre-test and post-test Likert-scale questionnaires are presented below.
Results of research question two: Are there changes in the students' confidence in novel reading ability before and after the reading?
The results for the responses (items 8-13) on students' perceptions of confidence in novel-reading ability before and after reading the novel are shown in the bar chart in Figure 2 and in Table 2. The results clearly indicate that students' confidence in novel-reading ability increased after reading the novel. The total responses (items 13-23) on students' perceptions of using a novel as an auxiliary book before reading the novel are presented visually as a bar chart in Figure 3, and the frequency data in Table 3 indicate that students' perceptions of using a novel as the auxiliary book were significantly enhanced after reading the novel. The descriptive statistics showed significant positive changes after reading the novel (post-test) in students' attitudes toward novel reading, their confidence in their reading ability, and using a novel as an auxiliary book for the reading course.
Testing the hypotheses
To answer the first research question, which consists of three parts, three hypotheses are provided.
Hypothesis one:
There are no changes in the students' attitudes toward novel reading before and after the reading.
To test this hypothesis, a paired-sample t-test was run. In this test, if Sig. < 0.05, the null hypothesis is rejected; otherwise, it is retained. Table 4 indicates Sig. = 0.002 < 0.05, so the null hypothesis is rejected; therefore, there are significant changes in the students' attitudes before and after reading the novel. In addition, given the above-average mean score of 3.93 after reading the novel (post-test) in comparison with the pre-test mean score of 3.52, students held positive attitudes toward novel reading.

4.4.2 The results of hypothesis two: There are no changes in the students' confidence in novel-reading ability before and after the reading. In Table 5, the data revealed that Sig. = 0.006 < 0.05; thus, the null hypothesis is rejected. Therefore, there are significant changes in the students' confidence between the pre-test and the post-test; given the mean score of 3.05 after reading the novel (post-test) in comparison with the pre-test mean score of 3.23, students' confidence in novel-reading ability changed significantly after reading the novel.

4.4.3 The result of hypothesis three: There are no changes in the students' perceptions toward using a novel as a textbook before and after the reading. Table 6 revealed that Sig. = 0.025 < 0.05, so the null hypothesis is rejected. Therefore, there are significant changes in the students' perceptions toward using a novel as the textbook between the pre-test and the post-test, and given the mean score of 3.31 after reading the novel in comparison with the pre-test mean score of 3.23, students held positive attitudes toward using a novel as the main textbook after reading the novel.

Table 7 shows that the most frequently mentioned gains were, in order, that students' linguistic knowledge, reading ability, novel-reading strategies, and background cultural knowledge were enhanced; in addition, students' pleasure in reading and their command of grammatical structure developed, their everyday use of English improved, and their extra knowledge was extended by the content. Table 8 shows that most students perceived difficulty with linguistic aspects. As some students stated, they had to look up unfamiliar and old vocabulary in the dictionary to understand the words of the novel. Several students complained about complex grammatical structures; in addition, some students were not satisfied with the theme of the novel because of the large amount of description of nature instead of everyday dialogue; finally, just a few students were not interested in novel reading. Table 9 shows that twenty students left the question blank, a total of 24 students stated 'nothing' as an answer, 6 students complained about details of content and grammatical points that were not explained by the teacher, 2 students stated that the novel was too difficult, and finally 2 students reported that two hours a week was not enough to discuss the content of the novel.
Discussion and conclusion
The outcomes obtained from the present study confirm previous studies that proposed incorporating literature into ESL/EFL classes (Garies et al., 2009; Kim, 2004; Paran, 2008; Tsou, 2007; Wu, 2005; Tsai, 2012); that is, using authentic literary works leads students to hold a positive attitude and take pleasure in extensive reading. Although the students in the study had some doubts about their ability to cope with the predicted obstacles before reading the novel, their confidence in their novel-reading ability was significantly enhanced after reading it. This improvement might be due to the students' practical instruction in extensive reading strategies, as is supported by Anderson (2005). The use of a novel as the auxiliary course material in reading was warmly welcomed in the present study, since it presents narrative and plot, as also pointed out by Fan-ping Tseng (2010). Furthermore, Wu (2005) stated that students tend to give a warm greeting to "anything other than a conventional textbook" (p. 60).
In addition, the study provides students' views of the gains and obstacles encountered during the novel class. Students mentioned that they developed their linguistic knowledge by acquiring a large number of vocabulary items and becoming familiar with different grammatical patterns, as indicated by several studies (Pellicer-Sánchez and Schmitt, 2010). Students were satisfied with learning extensive reading strategies. In addition, they improved their cultural awareness, which changed their views of the world. As for the obstacles some of the students encountered, at the initial stage of reading the novel the high frequency of referring to a dictionary gradually wore out their patience, and the high frequency of complex grammatical patterns made the content of the novel difficult to understand. Owing to limited course hours, some students could not participate in the discussions to share their views with the whole class each session. Finally, a few students were not much interested in classical novels and suggested having an opportunity to select their favorite novel before the reading, as Fan-ping Tseng (2010) stated: "Every individual literary taste differs, teachers are recommended to survey their students' literary preferences before teaching literature to them."
Implications
No border separates language from literature in practice, since "no teacher of literature ignore[s] linguistic problems and no language teacher really wants to leave his students speaking a sterile impoverished version of the language" (Smith, 1972, p. 275). Novels are useful resources for reading classes, since students prefer reading prose fiction (Fan-ping Tseng, 2010). This study aimed at exploring the feasibility of using a novel as the auxiliary textbook in an Advanced reading class. It therefore suggests that course designers and teachers use short, authentic novels as the main course material in the Advanced reading course, since reading novels makes students motivated to read, enhances their confidence in their reading ability, and helps them develop their linguistic knowledge. In addition, before starting the novel, some points need to be considered. First, which novel to teach is an important factor: it should not only arouse students' interest but also be sufficiently challenging for their proficiency level; otherwise it may deteriorate students' interest. Second, teachers should train students in some instructional strategies for extensive reading to enable them to handle the difficulties they encounter during the study.
Limitations and suggestions for further study:
However, some limitations need to be mentioned. The sample size was 50 learners; if the sample were larger, the results might differ. The participants were 40 females and 10 males, so gender was not considered; if the role of gender were taken into account, the results might differ. Further research is recommended to study students' linguistic gains from novel reading.
4. 1
Figure 1. Total comparison of percentages of students' attitudes before and after reading the novel
Figure 2 .
Figure 2. Total Percentages of Students' Confidence in Novel-Reading
4. 3
Results of research question three: Are there changes in the students' perceptions of using a novel as an auxiliary book before and after the novel reading?
Figure 3 .
Figure 3. Total Percentages of Students' Perceptions of a Novel as an Auxiliary Book
Table 1 .
Frequency of Students' Attitudes Toward Novel-reading
Table 2 .
Frequency of Students Confidence in Novel-Reading ability
Table 3 .
Frequency of Students' Perceptions of Using a Novel as an Auxiliary Book
Table 4 .
Paired Samples Test of Students' Perceptions of Novel-Reading
Table 5 .
Paired Samples Test of Students' Confidence in Novel-Reading Ability
Table 6 .
Paired Samples Test of Students' Perceptions of Using a Novel as Textbook
Results of the second research question: What are the students' perceived gains and obstacles from reading the novel?
Table 7 .
Frequency Counts of Students' Self-stated Gains from the Novel-reading
Table 8 .
Frequency Counts of Students' Self-stated Obstacles during Novel-Reading
Results of the third research question: What do you think are the weaknesses of the novel-reading class?
Table 9 .
Frequency Counts of Students' Feedback on the Weaknesses of the Novel Class (n=34) | 2018-12-10T22:31:17.344Z | 2014-07-01T00:00:00.000 | {
"year": 2014,
"sha1": "191e601d05e0dcaf4dd957bd007aec4d113907cc",
"oa_license": "CCBY",
"oa_url": "http://www.journals.aiac.org.au/index.php/IJALEL/article/download/1116/1046",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "191e601d05e0dcaf4dd957bd007aec4d113907cc",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Psychology"
]
} |
251281375 | pes2o/s2orc | v3-fos-license | The lithological characteristics of natural gas hydrates in permafrost on the Qinghai of China
The environment is seriously threatened by the methane emitted as permafrost melts, so studying deposits of natural gas hydrates that contain methane is important. This study presents a novel approach based on the Archie formula for rocks to determine the porosity and saturation of gas hydrate reservoirs. The relationships of resistivity to porosity and to hydrate saturation were studied, and the results showed that the resistivity of hydrate reservoirs is closely related to porosity and hydrate saturation, whereas the polarization rate is related only to the concentration of natural gas hydrate and is independent of porosity. Using the multi-channel time-domain induced polarization (MTIP) method, a profile crossing five boreholes in the Muli area of the permafrost region of the Qinghai-Tibet Plateau was surveyed, and the thickness of the shallow permafrost and the underground structure were inferred from the resistivity of the MTIP data. The polarization rate and hydrate saturation obtained from the inversion were used to assess the presence of hydrates in the Muli region. The results show that the MTIP method can be used to detect the thickness of the permafrost, determine fault boundaries, reveal the distribution of natural gas migration paths, and evaluate the presence of natural gas hydrates.
In recent decades, the China Geological Survey Bureau has supported research on NGH in permafrost regions. In 2008 and 2009, research was carried out in the permafrost region of the Qilian Mountains along the northern edge of the Qinghai-Tibetan Plateau, which has conditions suitable for NGH, and an NGH scientific drilling project was undertaken there. Boreholes DK-1, DK-2, DK-3, DK-4, and DK12-13 were drilled, and sufficient rock samples of gas hydrates were obtained, which are of scientific and economic importance 37,38. An electromagnetic method 39 has been used for NGH exploration in China since 2009. We have further evaluated hydrates along a typical section in the Muli region of the Qilian Mountains using multi-channel time-domain induced polarization (MTIP) to determine the distribution of permafrost, source rocks, and hydrate transport channels, as well as the distribution of hydrates delineated according to polarizability and NGH saturation, providing methodological support for an in-depth understanding of the distribution pattern and resource potential of gas hydrates in the area.
Study area
Geological background. According to 40 , the Qilian Mountains are in the northeast of the Qinghai-Tibet Plateau, China. There are three major tectonic units: the north Qilian tectonic belt (Hexi corridor and South Mountain corridor), the middle Qilian continental block (Tolai Mountain), and the south Qilian tectonic belt, which correspond to I2, I3, and I5 in Fig. 1, respectively. The main body of the southern Qilian tectonic belt is a superposed basin from the late Paleozoic to the Mesozoic, which developed by early Paleozoic tectonic evolution.
Boreholes DK-1, DK-2, and DK-3 were drilled in the town of Tianjun in Qinghai Province, in the permafrost regions of Muli, which are at an elevation of between 4026 and 4128 m. The three holes revealed a permafrost thickness of approximately 95 m 41 and an average annual surface temperature of approximately − 2 to − 2.5 °C in the area, with the main drilling area being in the South Qilian structural belt, which is subordinate to the Muli Depression 42,43 .
The central part of the study area is composed of anticlinal Triassic strata, and in the north and south, there are two synclinal Jurassic coal-bearing strata. The large-scale thrust nodes on the north and south of the anticline control the boundary of the depression. The north-south synclines have caused a series of large shear faults in the northeast that cut the depression into intermittent segments of different sizes (Fig. 1). The boreholes reveal that strata within the study region contain the Jurassic Jiangcang Formation (J 2 j) and Muli Formation (J 2 m), but not the Quaternary system. The Muli Formation roughly corresponds to the Xiangtang Formation (J 2 x) and the Yaojie Formation (J 2 y) in this region.
Lu et al. 40 claim that there are several recoverable coal seams in the strata mentioned above. The Jiangcang Formation (J 2 j) is dominated by black and gray oil shale, mudstone, gray sandstone, and fine sandstone.
Figure 1. Schematic diagram of structures within the study area overlaid on an aerial photograph. I 1 represents the Alxa landmass; I 2 represents the Northern Qilian neo-Proterozoic to early Paleozoic suture zone; I 2-1 represents the Qilian-Menyuan magmatic arc zone, middle and late early Paleozoic (O-S); I 3 represents the Qilian block in the center; I 4 represents the suture zone between Shule Nanshan and Laguiyama in the Early Paleozoic; I 5 represents the Southern Qilian landmass; I 6 represents the Late Paleozoic-Early Mesozoic fracture trough (D-T2) at Zongwulongshan-Qinghai Nanshan; I 6-1 represents the Zongwulongshan-Xinghai Aola trough (D-P); I 6-2 represents the post-foreland basin of the Zeku arc.
The Muli Formation (J 2 m) is dominated by gray and gray-white siltstone, fine sandstone, medium sandstone, coarse sandstone (gravel), deep gray mudstone, and oil shale, which are sediments of a braided river delta and form the main coal-bearing section. It contains two major coal seams and several local thin coal seams. However, the hydrate is mainly distributed in the mudstone, siltstone, oil shale, and fine sandstone, at depths between 130 and 400 m, in rock fractures that may not be visible to the naked eye. It appears as anomalies in finely disseminated deposits distributed in rock pores. These strata belong to the Jiangcang Formation.
Electrical and lithological characteristics of NGH. The MTIP survey carried out in the permafrost region of the Qilian Mountains was based on differences in resistivity between the targeted geological bodies (e.g., permafrost and structural faults) and the surrounding rocks. Gas hydrates occur in fissures of siltstone, mudstone, and oil shale, or in pores of sandstone. The organic carbon content of the oil shale is 0.98-5.76%, which satisfies the standard for a high-quality source rock 44. The oil shale has entered its mature stage and is the main source of gas 40,45. NGH is unstable at normal temperature and pressure, and thus it is difficult to determine its physical characteristics from collected samples. However, it is not difficult to analyze the resistivity characteristics of NGH and permafrost using in-situ measurements from well logging. An analysis of log data from this area revealed that the NGH and permafrost have a higher resistivity than the normal sedimentary strata. The depth ranges of the siltstone, oil shale, and mudstone within which the NGH was mainly deposited were 133 to 283.7 m and 314 to 396 m. The gas hydrate-bearing layers show obvious high-resistivity anomalies in the resistivity logs of DK-1 and DK12-13, while the other resistivity logs show weaker anomalies. According to the lithological characteristics of the five well logs, the resistivity values of the hydrate-bearing layers are summarized in Table 1. It can be seen in Table 1 that the NGH revealed by well DK-1 occurs in sandstone and siltstone, and the mean resistivity of the gas hydrate-bearing layers is 3.35 times that of the surrounding rock. The NGH revealed by well DK12-13 occurs in siltstone, shale, and mudstone, and the mean resistivity of the gas hydrate-bearing layers is 2.30 times that of the surrounding rock. The NGH revealed by wells DK-2, DK-3, and DK-4 occurs in mudstone, siltstone, and oil shale, and the mean resistivity of the gas hydrate-bearing layers is 1.70 times that of the surrounding rock. The mean resistivity of the NGH layers in the five holes is 2.26 times that of the surrounding rock, which is consistent with the conclusion of Fang et al. 39 that the resistivity of a gas hydrate layer is two to three times that of the surrounding rock.
Comprehensive information from the drilling and cores shows that the NGH mainly occurs in the Jiangcang formation in the Middle Jurassic of the Muli permafrost.
In well DK-1, porosity measurements from core samples ranged from 5 to 20%, and the range of NGH saturation obtained from the Archie equation is 13-86% 46,47. In wells DK-2 and DK-3, the mean NGH saturation obtained from the Archie equation is 9.5% and 15.5%, respectively 48. In well DK12-13, the range of NGH saturation obtained from the Archie equation is 13-85% 49. Therefore, the porosity of the rocks in the four wells varied from 5 to 20%, and the saturation varied from 13 to 86%.
The reservoir resistivity range of the gas hydrate-bearing layers is the minimum and maximum values of the corresponding logging resistivity curves, and the surrounding rock resistivity range is the minimum and maximum values of the logging resistivity curves corresponding to the upper and lower formations of NHGbearing reservoirs.
MTIP sounding layout. An experimental study of the MTIP sounding method for the detection of NGH has been ongoing in the Muli area since 2008. The survey lines are shown in Fig. 3. Line 3, which was 2100 m long, crossed wells DK-4 and DK-3 and other gas hydrate investigation wells. In the pole-dipole setup, a dipole spacing of 20 m was used.
Methods
The MTIP principle. MTIP is an array exploration method based on the differences in conductivity and polarizability between the study object and the surrounding rock, and on the distribution of the conduction current underground under the action of an artificially stabilized current field 50. The survey layout is shown in Fig. 4. It is a time-domain induced polarization method. As with the conventional ECR method with polarization, all receiving electrodes and receiving wires on a profile are laid out prior to measurement, and pole-dipole arrays are used for observation. The difference is that our multi-purpose GDP electrical system (Zonge Ltd., USA) is used together with an 8-channel transfer switch developed by our team, and the data are observed through the transfer switch. This allows the GDP's high-power transmitter and high-precision data acquisition unit to be used for deep apparent resistivity and polarization measurements. The distance between the measuring points and the electric dipole moment can be varied flexibly depending on the target depth. MTIP resistivity and polarizability imaging is therefore a detection method with a large depth range (10-800 m). TS2DIP (…au/what-we-do/data-processing) was used for MTIP data inversion. A smoothing-model inversion is a robust way to convert resistivity and polarizability data into a smoothly varying model profile. The finite-element forward-modeling algorithm used in TS2DIP calculates the apparent resistivity and polarizability from a 2D model with an accuracy of 5%. When information about the terrain is included in the model, the terrain is explicitly represented in the finite-element mesh of TS2DIP. Average values of the apparent resistivity and polarizability were calculated and used in the initial background resistivity model. The interactive tool allows the user to edit the background model based on known geological information. The iterative modification of the 2D model was guided by constraints on both its smoothness and the differences between the background model and the inversion model. The method considers several measures of data misfit and model quality, including the RMS error of the data misfit, the distance from an a priori background model, the model roughness, the average RMS model-constraint residual, RMS minimization criteria, and the largest changes in the model parameters after each iteration, until the calculated resistivity and polarizability match the observed data as closely as possible.
Porosity and saturation calculation methodology. In order to use MTIP to explore for NGH in the Muli area of the Qinghai-Tibetan Plateau, it was necessary to study the lithological characteristics in terms of resistivity and polarizability. The physical parameters affecting the electrical properties of rocks in an area containing NGH are the porosity and the saturation of the gas hydrate. Archie's equation 51 is commonly used to evaluate a reservoir and can be applied to NGH:

ρ_t = a · ρ_w · ϕ^(−m) · S_w^(−n),   (1)

where ρ_t is the resistivity of the formation (Ω m), ρ_w is the resistivity of the water in the formation (Ω m), and ϕ is the porosity (percentage). It is generally believed that the pores of hydrate-bearing reservoirs contain only hydrate and water, so that S_w is the water saturation of the pores and the gas hydrate saturation S_h is obtained from

S_h = 1 − S_w.   (2)

The parameters a, m, and n are empirical indices that can be determined for the stratum. In general, 1.5 < m < 3, 0.5 < a < 2.5, and 1.86 < n < 2.06 46.
According to past research on NGH reservoirs 46,52, ρ_w = 2 Ω m, n = 1.9386, a = 0.51, and m = 1.32, so Eq. (1) can be written as

ρ_t = 1.02 · ϕ^(−1.32) · (1 − S_h)^(−1.9386).   (3)

Equation (3) shows that the resistivity of an NGH reservoir is a function of the porosity and the saturation of the NGH. Thus, the resistivity of the NGH reservoir can be deduced from these two parameters in the study area.
In the time-domain IP method, the measured voltage in the rock and ore increases over time under a stable supplied current, indicating that the resistivity of the rock and ore, or of the NGH, changes with the duration of the current supply. In other words, the effect of volumetric polarization of the medium is equivalent to an increase in its resistivity when the supplied current is stable. The equivalent resistivity of the IP effect is given by Seigel 53 as

ρ_t = ρ_0 / (1 − η),   (4)
where ρ_t is the resistivity of the formation (Ω m), ρ_0 is the resistivity without the excitation effect, i.e., when the hydrate content is zero, and η is the polarizability (percentage). Therefore, the polarizability can be estimated from Eqs. (3) and (4): the resistivity calculated from the porosity and NGH content is the equivalent resistivity, and the resistivity calculated without the NGH is the resistivity without excitation. Conversely, knowing the resistivity and the polarization rate, ϕ and S_h can be obtained by solving Eqs. (3) and (4) together.
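The sketch below shows a forward calculation of reservoir resistivity and polarizability under the assumptions just described: the standard Archie form ρ_t = a·ρ_w·ϕ^(−m)·S_w^(−n) with S_h = 1 − S_w, Seigel's relation ρ_t = ρ_0/(1 − η), and ρ_0 taken as the resistivity of the same rock with zero hydrate content. The parameter values follow the text; the example porosity and saturation are arbitrary.

```python
# Forward model of a hydrate-bearing layer using the assumed Archie/Seigel relations.
A, M, N, RHO_W = 0.51, 1.32, 1.9386, 2.0   # empirical indices and formation-water resistivity (ohm-m)

def archie_resistivity(phi, s_h):
    """Equivalent resistivity (ohm-m); phi and s_h are fractions, S_w = 1 - S_h."""
    s_w = 1.0 - s_h
    return A * RHO_W * phi ** (-M) * s_w ** (-N)

def polarizability(phi, s_h):
    """Polarizability eta from rho_t = rho_0 / (1 - eta), with rho_0 = resistivity at S_h = 0."""
    rho_t = archie_resistivity(phi, s_h)
    rho_0 = archie_resistivity(phi, 0.0)
    return 1.0 - rho_0 / rho_t

print(archie_resistivity(0.15, 0.60))   # ~74 ohm-m for 15 % porosity, 60 % hydrate saturation
print(polarizability(0.15, 0.60))       # depends on S_h only under these assumptions
```

Note that with these assumptions ρ_0/ρ_t = (1 − S_h)^n, so the polarizability is independent of porosity, which matches the behaviour described below.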
MTIP sounding results.
Figure 5a shows the two-dimensional resistivity inversion section of the MTIP data. It reflects the details of the resistivity logs, especially the high-resistivity anomaly (650 Ω m or more) between depths of 0 and 150 m along the section, which is consistent with the resistivity logs of DK-3 and DK-4. The high-resistivity anomaly shows that there is a layer of frozen soil in the shallow part of this section, and the thickness inferred at the measurement points near boreholes DK-1, DK-2, and DK-3 coincides with the known permafrost thickness of about 95 m 41. The resistivity logs of DK12-13, DK-3, and DK-4 indicate the presence of a lower-resistivity region between depths of 200 and 590 m, 100 and 600 m, and 70 and 260 m, respectively. This low-resistivity region is also observed in the resistivity section, and the NGH reservoirs are distributed within it. NGH in the Muli area mainly occurs in fractures of mudstone or oil shale, which produces the inclined low-resistivity zone of the dipping mudstone and the intermediate-to-high resistivity anomaly of the NGH reservoir. The results show that the section has seven faults: (a) five south-dipping faults (F0, F1, F2, F27, and F3), which correspond to the low resistivity seen in the MTIP data, and (b) two north-dipping faults (F4 and F5), also associated with low resistivity. The results indicate that the F1, F2, and F27 fracture zones control the formation of NGH, which is consistent with geological and drilling findings that the F1, F2, and F27 faults are migration channels and accumulation spaces for NGH. However, it is difficult to distinguish the NGH layers in the two-dimensional MTIP resistivity inversion section, for two main reasons. First, the NGH layers are thin, and it is difficult to identify the deposits with the available detection precision. Second, the NGH layers are close to the permafrost layer or to the faults; hence, the resistivity contrast within the region is very small. Figure 5b shows the two-dimensional polarizability inversion section of the MTIP data. There are many high-polarizability anomalies in the section. I, II, and III are the inferred extents of NGH reservoirs. There is a correlation between the known ore-bearing sites and the high-polarizability anomaly I, between depths of 190 and 425 m for DK12-13 and depths of 145 and 390 m for DK-2 and DK-3, respectively, and the high-polarizability anomaly II, between depths of 145 and 395 m for DK-4. The high-polarizability anomalies I and II are located near the F1, F2, and F27 faults, and the high-polarizability anomaly III is near the F3 fault.
Porosity and NGH saturation. To investigate the relations between resistivity, porosity, and NGH content, we assumed that porosity varied from 1 to 95% and NGH saturation from 1 to 95%, based on the known drilling information. Given the porosity and saturation of the NGH in the permafrost area of the Qilian Mountains, the resistivity of the reservoirs can be estimated with Eq. (3), as shown in Fig. 6. For constant NGH saturation, the resistivity of the NGH reservoir decreases as the porosity increases from 1 to 95%. Similarly, when the porosity is fixed, the resistivity increases as the NGH saturation increases from 1 to 95%. This indicates that the resistivity of the NGH reservoir is closely related to both porosity and NGH saturation. Table 1 shows that the resistivity of the NGH reservoirs varies from 24.17 to 396.6 Ω m, and Fig. 6 shows that the corresponding ranges of porosity and saturation are 5-20% and 50-70%, respectively. Figure 6 also shows that when the resistivity of a gas hydrate reservoir is higher than 396.6 Ω m, the corresponding porosity is less than 5% and the saturation is higher than 70%, indicating a low-porosity, high-saturation reservoir. According to the above analysis, when the porosity is less than 5% and the saturation is higher than 70%, the resistivity parameter of the MTIP method cannot identify and delineate the NGH reservoirs in the permafrost area of the Qilian Mountains.
The IP response can thus be calculated, and the polarizability as a function of porosity and of NGH content is shown in Fig. 7. For fixed NGH saturation, the polarizability remains constant as the porosity increases from 1 to 95%; however, for a fixed porosity, the polarizability increases as the NGH saturation increases from 1 to 95%. This indicates that the polarizability depends on the NGH content but not on the porosity. The polarizability thus indicates the presence of NGH and can guide subsequent exploration and drilling. Based on the ranges of porosity and saturation of the NGH in the permafrost of the Qilian Mountains, the porosity and NGH content can be calculated, and the storage capacity of NGH can be found by combining the resistivity and polarizability obtained from the MTIP inversion. Hence, based on the difference in polarizability between NGH and the surrounding rock, the polarizability from MTIP is suitable for the geophysical exploration of NGH in the Muli area of the Qinghai-Tibetan Plateau.
The porosity and NGH saturation can be inverted from the MTIP resistivity and polarizability data using Eqs. (3) and (4). The porosity (Fig. 8a) ranges from 0 to 20%. In the shallow permafrost region, the high resistivity corresponds to low porosity, as low as 1%. Faults at elevations between 3700 and 3900 m have a high porosity, up to 20%. The porosity and resistivity distributions reflect the underground lithological characteristics and fault zones. Similarly, the NGH saturation (Fig. 8b) ranges from 0 to 32%. The high-saturation anomalies I, II, and III are consistent with the high-polarizability anomalies I, II, and III; the high-saturation anomaly IV does not appear in the polarizability section. According to the resistivity and porosity results, shown in Figs. 5a and 8a, the fault zones are characterized by low-resistivity, high-porosity anomalies; as shown in Figs. 5b and 8b, they are also characterized by high polarizability and high NGH saturation. It can be inferred that the NGH in this region depends on the fault zones: the well-developed fractures are good channels through which natural gas can rise, forming NGH in the low-temperature environment beneath the permafrost layer. The fractures can be inferred from the resistivity and, when combined with the porosity, the degree of fracture development can be determined. The polarizability and saturation indicate the presence of NGH.
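As a sketch of that inversion, under the same assumptions (Archie with the quoted parameters, Seigel's relation, and ρ_0 equal to the hydrate-free resistivity), the polarizability fixes S_h directly and the resistivity then gives ϕ in closed form, so no iterative solver is required. The observed values below are taken from the forward example above, not from the field data.

```python
# Closed-form recovery of porosity and hydrate saturation from one observed
# (resistivity, polarizability) pair, under the assumed Eqs. (3) and (4).
A, M, N, RHO_W = 0.51, 1.32, 1.9386, 2.0

def invert_phi_sh(rho_obs, eta_obs):
    """Return (porosity, hydrate saturation) as fractions."""
    # eta = 1 - (1 - S_h)**N  ->  S_h = 1 - (1 - eta)**(1/N)
    s_h = 1.0 - (1.0 - eta_obs) ** (1.0 / N)
    s_w = 1.0 - s_h
    # rho_t = A*RHO_W * phi**(-M) * s_w**(-N)  ->  phi = (A*RHO_W / (rho_obs * s_w**N))**(1/M)
    phi = (A * RHO_W / (rho_obs * s_w ** N)) ** (1.0 / M)
    return phi, s_h

phi, s_h = invert_phi_sh(rho_obs=73.7, eta_obs=0.83)   # values from the forward sketch above
print(f"porosity ~ {phi:.2f}, hydrate saturation ~ {s_h:.2f}")   # ~0.15 and ~0.60
```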
Conclusions
The electrical and lithological characteristics of gas hydrate reservoirs were studied for use in exploring for NGH in the Muli area, and the presence of NGH in the fault zones was evaluated using the NGH saturation obtained from the MTIP data inversion. The main conclusions are as follows:
1. The porosity of a rock controls its resistivity, and the NGH saturation and polarizability are in good agreement. Three polarizability and saturation anomalies have been recognized as known NGH occurrences, and one saturation anomaly has been identified as a potential NGH occurrence. The inferred permafrost overburden thickness and the five south-dipping faults provide a favourable geological environment for hydrate migration and storage.
2. Based on the analysis of the physical properties of the underground NGH reservoirs, the resistivity of a hydrate-bearing sandstone reservoir is 2-3.5 times that of the surrounding rock, but the reservoirs are thin, so it is difficult to identify the hydrate from resistivity alone. Nevertheless, the resistivity parameters obtained from MTIP can delineate the thickness of the permafrost layer and the fracture distribution, from which the underground NGH source and transport channels can be inferred.
3. A summary of the electrical and lithological characteristics can be used to evaluate the existence of NGH. The MTIP measurement results are basically consistent with the borehole logging data, and the polarizability and saturation can assess the possibility of the existence of NGH, which provides an important basis for the identification and delineation of natural gas hydrate reservoirs.
Data availability
Data associated with this research is available and can be obtained by contacting the corresponding author. | 2022-08-04T06:17:06.509Z | 2022-08-02T00:00:00.000 | {
"year": 2022,
"sha1": "980151cd57f338b7e6988ac93930839cf5deece5",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "f01383ff32449985968e1107bce95745dc9be181",
"s2fieldsofstudy": [
"Geology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
73992 | pes2o/s2orc | v3-fos-license | N-Homocysteinylation Induces Different Structural and Functional Consequences on Acidic and Basic Proteins
One of the proposed mechanisms of homocysteine toxicity in humans is the modification of proteins by the metabolite of Hcy, homocysteine thiolactone (HTL). Incubation of proteins with HTL has earlier been shown to form covalent adducts with the ε-amino groups of protein lysine residues (a process called N-homocysteinylation). Protein N-homocysteinylation has been considered a pathological hallmark of cardiovascular and neurodegenerative disorders, as homocysteinylation induces structural and functional alterations in proteins. In the present study, the reactivity of HTL towards proteins with different physico-chemical properties, and hence their structural and functional alterations, was studied using different spectroscopic approaches. We found that N-homocysteinylation has opposite consequences for acidic and basic proteins, suggesting that the pI of a protein determines the extent of homocysteinylation and the structural and functional consequences of homocysteinylation. The study suggests that HTL elicits its toxicity primarily by targeting acidic proteins, which could yield mechanistic insights into the associated neurodegeneration.
Introduction
Hyperhomocysteinemia/homocystinuria is a genetic disorder of methionine metabolism caused by elevated levels of plasma homocysteine (Hcy). The toxic Hcy is normally metabolized to methionine by remethylation or to cysteine by trans-sulfuration. However, mutations in the Hcy-metabolizing enzymes cystathionine β-synthase (CBS) or methylenetetrahydrofolate reductase (MTHFR) impair the ability to metabolize the toxic Hcy, resulting in increased levels of cellular and plasma Hcy [1,2,3,4,5]. Total plasma Hcy concentrations may range from 15-20 μM (mild forms) up to 500 μM (severe forms), compared with 5-10 μM under normal conditions [6,7,8,9,10,11]. These elevated Hcy levels are associated with an increased incidence of cardiovascular diseases, including atherosclerosis and thrombosis [12,13], pregnancy disorders [14], and various neurodegenerative pathologies such as dementia, Parkinson's disease, and Alzheimer's disease [15,16,17]. Indeed, cells have evolved mechanisms for the clearance of the toxic HTL. Urinary excretion of HTL by the kidneys [3,18] and a serum homocysteine-thiolactonase associated with high-density lipoprotein, which is known to hydrolyse HTL [19], form the extracellular modes of HTL clearance from the body, while bleomycin hydrolase (BHL) is the major intracellular HTL-hydrolysing enzyme, which protects cells against intracellular HTL [20]. However, under severe hyperhomocysteinemic conditions, the effectiveness of these mechanisms in protecting cells against HTL toxicity has not yet been properly investigated and understood.
Several studies have shown that one of the likely mechanisms underlying the harmful effects of homocysteine (Hcy) is the chemical modification of proteins by homocysteine thiolactone (HTL), a highly reactive cyclic thioester of Hcy [21,22,23,24], which is formed by methionyl-tRNA synthetase [21,24,25] in an error-editing reaction. It has been demonstrated that HTL preferentially forms amide bonds with the ε-amino groups of protein lysine residues through a non-enzymatic mechanism, a process referred to as "protein N-homocysteinylation" [19]. Protein N-homocysteinylation has therefore been considered one of the basic causes of HTL toxicity. Incorporation of HTL into proteins is believed to result in loss of protein function due to alterations in protein structure, and the modified proteins become susceptible to further damage by oxidation [22,26,27]. In addition, it has been observed for a few proteins that N-homocysteinylation induces protein aggregation or amyloid formation, and homocysteinylation is hence considered an independent risk factor for neurodegenerative diseases in humans [17,22,28,29,30,31,32]. It may be noted that a large fraction of plasma proteins, and many proteins from other sources, have been identified as targets of protein N-homocysteinylation both in vivo and in vitro [22,33]. However, structure- and function-based studies have been limited to very few proteins. Since the total number of modified proteins is large and includes proteins with different physico-chemical properties and fold types, it is important to investigate the structural and functional consequences of HTL modification on proteins having different physico-chemical properties. In the present study, we investigated the effects of HTL on three proteins with different physico-chemical properties (namely lysozyme, RNase-A, and α-LA). We found that N-homocysteinylation has opposite consequences for acidic and basic proteins, suggesting that the pI of a protein determines the extent of homocysteinylation and the structural and functional consequences of homocysteinylation. Most interestingly, basic proteins are resistant to the structural and functional loss due to N-homocysteinylation, indicating that homocysteinylation does not necessarily lead to functional alterations.
Lysozyme, RNase-A, α-LA, CN, and CA solutions were dialyzed extensively against 0.1 M KCl at pH 7.0 in the cold (~4 °C). Protein stock solutions were filtered through 0.22 μm Millipore syringe filters. The concentrations of the protein solutions were determined experimentally using ε, the molar extinction coefficient, with values of 39,000 M−1 cm−1 at 280 nm for lysozyme [34], 9,800 M−1 cm−1 at 277.5 nm for RNase-A [35], 29,210 M−1 cm−1 at 280 nm for α-LA [36], 11,000 M−1 cm−1 at 280 nm for CN [37], and 57,000 M−1 cm−1 at 280 nm for CA [38]. The concentration of the GdmCl stock solution was determined by refractive index measurements [39]. All solutions for optical measurements were prepared in the appropriate degassed buffer. All experiments were carried out in 0.05 M potassium phosphate buffer (pH 7.4) containing 0.1 M KCl at 37 °C.
Protein N-homocysteinylation
All proteins (2 mg ml−1) were incubated in the presence of different concentrations of HTL (0-1000 mM) in 0.05 M potassium phosphate buffer, pH 7.4, overnight at 37 °C. The HTL-treated/untreated protein samples were then used for the subsequent studies.
Protein sulfhydryl estimation using Ellman's reagent
Protein sulfhydryl (SH) group estimation was carried out as described by Ellman [40] with some minor modifications. Briefly, fractions containing unmodified and modified proteins were solubilized in 6 M guanidinium hydrochloride in the presence of 2 mM β-mercaptoethanol (ME) and incubated for 1 h at 37 °C, as described earlier [30,31,32]. Proteins were then precipitated with 10% TCA to remove unbound HTL. The protein pellets were collected and resolubilized in phosphate buffer, pH 7.0. The levels of thiol groups in control and homocysteinylated protein samples were assayed using 5,5′-dithiobis(2-nitrobenzoic acid), Ellman's reagent [40]. The absorbance of the samples was measured at 412 nm using a 1 cm path-length cuvette. The amount of 2-nitro-5-thiobenzoate released was estimated from its molar extinction coefficient (ε) of 13,700 M−1 cm−1.
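The concentration of free SH groups follows from the Beer-Lambert law applied to the released chromophore. A minimal sketch is given below; the absorbance reading is hypothetical, and the extinction coefficient is the one quoted above.

```python
# Ellman assay readout: free -SH concentration from A412 of the released chromophore.
EXT_COEFF = 13700.0     # M^-1 cm^-1, molar extinction coefficient quoted in the text
PATH_CM = 1.0           # cuvette path length, cm

def free_sh_molar(a412, blank=0.0, dilution=1.0):
    """Molar concentration of free sulfhydryl groups in the assayed solution."""
    return (a412 - blank) / (EXT_COEFF * PATH_CM) * dilution

print(f"{free_sh_molar(0.274) * 1e6:.1f} uM free SH")   # hypothetical A412 -> ~20 uM
```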
Circular Dichroism (CD) Measurements
CD measurements (at least in triplicate) were made in a Jasco J-810 spectropolarimeter equipped with a Peltier-type temperature controller, with six accumulations. The protein concentration used for the CD measurements was 0.5 mg ml−1. Cells of 0.1 and 1.0 cm path length were used for the measurements of the far- and near-UV CD spectra, respectively. The appropriate blanks were subtracted for each measurement. All readings were acquired at 37 °C. The CD instrument was routinely calibrated with D-10-camphorsulfonic acid. Secondary structure estimates from the far-UV CD spectra were calculated using Yang's method [41].
Fluorescence Measurements
Fluorescence spectra of the protein samples were measured in a Perkin Elmer LS 55 spectrofluorimeter in a 3 mm quartz cell, with both excitation and emission slits set at 10 nm (at least in triplicate). The protein concentration for all experiments was 2 μM for lysozyme and 5 μM for RNase-A, α-LA, CN, and CA. For intrinsic fluorescence measurements, RNase-A was excited at 268 nm, and the emission spectra were recorded from 290 to 350 nm. Lysozyme, α-LA, CN, and CA were excited at 295 nm, and the emission spectra were recorded in the wavelength region 300-450 nm.
For ANS-protein binding experiments, the excitation wavelength was 360 nm, and emission spectra were recorded from 400 to 600 nm. The ANS concentration was kept at 16-fold the protein concentration. For ThT-protein binding experiments, the excitation wavelength was 450 nm, and emission spectra were recorded from 475 to 570 nm. The ThT concentration was kept at 25 μM. The appropriate blanks were subtracted for each sample. Each spectrum was recorded at least three times.
Transmission electron microscopy
Modified protein solutions were placed on a copper grid and left at room temperature for 5 min. For negative staining of the samples, 1.0% uranyl acetate solution was added to the copper grid and allowed to air dry before examination using an FEI Tecnai G2 200 kV HRTA transmission electron microscope (Netherlands) operating at 200 kV.
Dynamic light scattering measurements
The size distribution of the particles present in the protein samples was obtained using a Zetasizer Micro V/ZMV 2000 (Malvern, UK). Measurements were made at a fixed angle of 90° using an incident laser beam of 689 nm. Fifteen measurements were made with an acquisition time of 30 seconds for each sample at a sensitivity of 10%. The data were analysed using the Zetasizer software provided by the manufacturer to obtain hydrodynamic diameters. The protein concentration was 2.0 mg ml−1. All measurements were performed at 37 °C.
Activity measurements
For measuring lysozyme activity, we used M. luteus cells as the substrate and followed the procedure of Maurel and Douzou [42]. The reaction was followed in a Jasco V-660 UV/Visible spectrophotometer. We observed that the apparent specific absorbance (the slope of the straight line obtained by plotting the decrease in absorbance at 450 nm against the concentration of the substrate in the range 0-150 mg l−1) of an aqueous suspension of M. luteus cells was ε450 = 0.65 × 10−2 (mg l−1)−1. Just before the initiation of the enzymatic reaction, a given concentration of the substrate in the buffer was transferred to the sample and reference cuvettes, which were kept at 37.0 ± 0.1 °C and allowed to equilibrate for 15 min. In order to follow the progress curve, 25 μl of lysozyme from a 2 mg ml−1 stock was added to the sample cuvette with rapid mixing. To reflect the same dilution, 25 μl of buffer was added to the reference cuvette. The decrease in absorbance, which occurred during the lysis of the cell walls, was recorded at 450 nm for 20 min. The initial velocity (v) of lysis was deduced from the slope of the linear part of the recordings, usually over 30 s. This experiment was repeated for different concentrations of the substrate in the range 10-150 mg l−1, and a plot of v versus [S] (in mg l−1) was generated. The plot of v versus [S] was analyzed for K_m and V_max using the Michaelis-Menten relation

v = V_max [S] / (K_m + [S]),   (1)

where v is the initial velocity and [S] is the substrate concentration. From this analysis, the values of k_cat were determined. In order to see the effect of HTL on the kinetic parameters (K_m and k_cat) of lysozyme, the substrate and the enzyme were pre-incubated with a given concentration of HTL. The reaction at each concentration of HTL was followed in exactly the same way as described for the control experiment.
Following the procedure described by Crook et al. [43], RNase-A activity was measured using cytidine 2′,3′-cyclic monophosphate (C>p) as a substrate. The progress curve for RNase-A-mediated hydrolysis of C>p in the concentration range 0.05–0.50 mg ml⁻¹, in the absence and presence of a given concentration of HTL, was followed by measuring the change in absorbance at 292 nm for 20 min in a Jasco V-660 UV/Vis spectrophotometer. Sample and reference cells were maintained at 37.0 ± 0.1 °C. From each progress curve at a given substrate concentration, in the absence and presence of a fixed HTL concentration, the initial velocity (v) was determined from the linear portion of the progress curve, usually over 30 s. The plot of initial velocity (v) versus [S] (in mM) at each HTL concentration was analyzed for K_m and k_cat using Equation (1).
Thermal Denaturation Studies
Thermal denaturation studies were carried out in a Jasco V-660 UV/Visible spectrophotometer equipped with a Peltier-type temperature controller at a heating rate of 1 °C per minute. This scan rate was found to provide adequate time for equilibration. Each sample was heated from 37 to 85 °C. The change in absorbance with increasing temperature was followed at 287 nm for RNase-A and at 300 nm for lysozyme. About 500 data points of each transition curve were collected. Measurements were repeated three times. After denaturation, the protein sample was immediately cooled down to measure the reversibility of the reaction. Each heat-induced transition curve was analyzed for T_m (midpoint of denaturation) using a non-linear least-squares method according to Equation 2, where y(T) is the optical property at temperature T (Kelvin), y_N(T) and y_D(T) are the optical properties of the native and denatured protein molecules at T K, respectively, and R is the gas constant. In the analysis of the transition curve, it was assumed that a parabolic function describes the temperature dependence of the optical properties of the native and denatured protein molecules (i.e., y_N(T) = a_N + b_N·T + c_N·T² and y_D(T) = a_D + b_D·T + c_D·T², where the coefficients a, b and c are temperature-independent) [44].
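Equation 2 itself is not reproduced in the extracted text; a standard two-state (van't Hoff) form consistent with the description above is y(T) = [y_N(T) + y_D(T)·exp(−ΔH_m(1 − T/T_m)/RT)] / [1 + exp(−ΔH_m(1 − T/T_m)/RT)], with parabolic baselines y_N(T) and y_D(T). The Python sketch below fits a synthetic absorbance-versus-temperature curve to this assumed form to extract T_m; all parameter values are invented and this is an illustration of the analysis, not the authors' code.

```python
# Hypothetical sketch: two-state analysis of a heat-induced transition curve
# to extract the midpoint of denaturation (Tm), with parabolic baselines.
import numpy as np
from scipy.optimize import curve_fit

R = 8.314  # gas constant, J mol^-1 K^-1

def two_state(T, Tm, dHm, aN, bN, cN, aD, bD, cD):
    """Observed signal for a two-state N <-> D transition with parabolic baselines."""
    yN = aN + bN * T + cN * T**2          # native baseline
    yD = aD + bD * T + cD * T**2          # denatured baseline
    dG = dHm * (1.0 - T / Tm)             # van't Hoff free energy change
    K = np.exp(-dG / (R * T))             # equilibrium constant [D]/[N]
    fD = K / (1.0 + K)                    # fraction denatured
    return yN * (1.0 - fD) + yD * fD

# Synthetic "measured" curve covering 37-85 C (in Kelvin), with noise
np.random.seed(0)
T = np.linspace(310.0, 358.0, 500)
true = two_state(T, 335.0, 4.2e5, 0.50, 1e-4, 0.0, 0.80, 2e-4, 0.0)
y = true + np.random.normal(0, 0.002, T.size)

p0 = [334.0, 3e5, 0.5, 0.0, 0.0, 0.8, 0.0, 0.0]   # initial guesses
popt, _ = curve_fit(two_state, T, y, p0=p0, maxfev=20000)
print(f"Fitted Tm = {popt[0] - 273.15:.1f} C, dHm = {popt[1] / 1000:.0f} kJ/mol")
```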
Results
To investigate the effects of N-homocysteinylation on proteins, we chose lysozyme, RNase-A and α-LA. The proteins were chosen to have different isoelectric points (pI) and fold types, keeping in mind that they should also have different lysine contents (see Table 1). To modify the proteins by HTL, each protein sample (2 mg ml⁻¹) was treated with different concentrations of HTL, ranging from 0–1000 μM (incubated overnight at pH 7.4), and analyzed for free SH content using Ellman's reagent (see Table 2). HTL has been shown to be quite stable at room temperature, with less than 10% degradation after 24 hours under physiological conditions [18,21,45,46,47,48]. Hence, overnight (12–15 h) incubation would lead to minimal hydrolysis of HTL. It is seen in Table 2 that HTL has been incorporated into the proteins, as suggested by the large increase in free SH content. Fig. 1 shows far-UV CD (left panel) and near-UV CD (right panel) spectra of the three HTL-modified proteins. This figure indicates that HTL-induced modification has different structural consequences on α-LA relative to lysozyme and RNase-A, in terms of secondary (far-UV CD) and tertiary (near-UV CD) structure. We further confirmed the alterations in the tertiary structure of the proteins due to the modification by measuring tryptophan/tyrosine fluorescence spectra of the homocysteinylated proteins. We then performed ANS and ThT binding assays to investigate whether N-homocysteinylation induces aggregate formation in the HTL-modified proteins (Fig. 3 and Fig. 4). It was observed that there was no binding of either ANS or ThT in the case of lysozyme and RNase-A at different concentrations of HTL, as neither was the λ_max blue-shifted (for ANS) nor was there an increase in the relative fluorescence intensities for ANS or ThT. In the case of α-LA, in contrast, there was an increase in both ANS and ThT binding in an HTL concentration-dependent manner upon modification, indicating the presence of protein aggregates that might be amyloidogenic. Transmission electron microscopy images (Fig. 5), however, show that the N-homocysteinylation-induced aggregates of α-LA are not amyloidogenic (fibrillar) in nature. S1 Fig. shows the size distribution by volume and Table 3 shows the hydrodynamic diameters of the protein aggregates obtained from DLS measurements. It is seen in S1 Fig. that large aggregates exist as a result of the modification. Together, these results indicate that covalent protein modification by N-homocysteinylation has different consequences, in terms of structure and aggregation propensity, for different proteins. Following the procedure described in the preceding section, the functional activity parameters (K_m and k_cat) of lysozyme and RNase-A were measured in the absence and presence of different concentrations of HTL at pH 7.4 and 37 °C. Table 4 shows the enzyme kinetic parameters of HTL-modified (and unmodified) lysozyme and RNase-A. It is seen in this table that the modification has no significant effect on the functional activity parameters (K_m and k_cat) of lysozyme and RNase-A. The absence of changes in K_m and k_cat might possibly be due to the absence of any effect on the thermodynamic stability of the modified proteins. To examine this possibility, we performed heat-induced denaturation studies of the modified lysozyme and RNase-A by monitoring the changes in absorbance at 300 nm for lysozyme and 287 nm for RNase-A as a function of temperature. The denaturation profiles are shown in S3 Fig.
and the measured thermodynamic parameter (T_m) is presented in Table 5. It should, however, be noted that a complete transition curve could not be obtained in the temperature range 37–85 °C in the case of lysozyme. Therefore, in order to bring the transition curves into the measurable temperature range, 1.5 M GdmCl was added to the lysozyme samples; the lysozyme transition curves shown in S3 Fig. were therefore recorded in the presence of 1.5 M GdmCl, and the corresponding T_m values are given in Table 5. It is seen in S3 Fig. (and Table 5, for T_m) that there is no significant change in protein stability in terms of T_m. Our results clearly indicate that N-homocysteinylation brought about by HTL does not affect the functional activities of lysozyme and RNase-A, as it does not affect their thermodynamic stability (T_m).
Discussion
It has previously been reported that protein N-homocysteinylation essentially targets the free amino groups of lysine residues in proteins. Therefore, we first investigated whether there is any difference in the extent of homocysteinylation among proteins (lysozyme, RNase-A and α-LA) having different numbers of lysine residues (see Table 1). The HTL concentrations (0–1000 μM) used in the present study were selected keeping in mind that they also represent near-pathological concentrations of Hcy found in severe homocysteinuric conditions [7,8,9,10,11]. Covalent adduct formation between HTL and protein lysine residues results in the availability of a free SH group in the modified protein. Since for each HTL molecule reacting with a free amino group of the protein an SH group is added, this can be assessed using Ellman's reagent; hence, an increase in the free SH content upon HTL treatment is regarded as a good signature of covalent protein adduct formation by HTL. It is seen in Table 2 that there is an increase in the free SH content of the proteins in an HTL concentration-dependent manner, suggesting that HTL has been incorporated into the proteins. Interestingly, the extent of homocysteinylation is almost the same for lysozyme and RNase-A, but is highest for α-LA (Fig. 6). In terms of the number of lysine residues present in the proteins, RNase-A and α-LA should show almost similar extents of homocysteinylation (as their lysine contents do not differ much: RNase-A and α-LA have 10 and 12 lysine residues, respectively), while lysozyme and RNase-A should show different extents of homocysteinylation, as the lysine content of RNase-A is relatively higher (see Table 1). In fact, the dependence of the extent of homocysteinylation on the lysine content of proteins has been challenged by several studies [28,30,49,50]. Since both lysozyme and RNase-A are basic proteins with close pI values and α-LA is an acidic protein (see Table 1), we speculated that the differential behaviour of N-homocysteinylation may most probably be due to the differences in the pI values (i.e., the acidic or basic nature) of the proteins. Several serum proteins have been found to be modified by HTL, and their rates of N-linked modification by HTL have also been reported [22]. It may be noted that most serum proteins are acidic in nature; the reported rates of homocysteinylation (k) of these proteins are compared with their pI values in Fig. 7. Fig. 7 shows that there is a close relation between the pI and the rate of homocysteinylation (k). The existence of such a close relation between the pI and the k values led us to believe that acidic proteins are prone to N-homocysteinylation, while basic proteins tend to show a lesser extent of homocysteinylation.
It is known that protein N-homocysteinylation results in native-state structural alterations and loss of enzyme function, and induces protein aggregation/amyloidogenesis [11,22,28,29,30,31,32,51,52,53]. Therefore, it is important to investigate whether these consequences of protein homocysteinylation depend on the acidic or basic nature of the proteins. In this spirit, we investigated whether acidic and basic proteins behave differently on being modified by N-homocysteinylation. We examined whether differences in native-state conformational changes due to homocysteinylation are responsible for the differential behaviour of homocysteinylation on the proteins. For this, we measured the far-UV CD, near-UV CD and intrinsic fluorescence spectra of the homocysteinylated proteins. It is seen in Fig. 1 that the secondary structural contents of lysozyme and RNase-A are not changed by the modification at different concentrations of HTL. In contrast, the secondary structure of α-LA is reduced significantly in an HTL concentration-dependent manner and is completely denatured at the highest concentration of HTL (1000 μM). The evaluated percentage secondary structural changes (see Table 6) also suggest that there is no alteration in the alpha-helical and beta-sheet content of lysozyme and RNase-A samples treated with HTL. However, in the case of α-LA, we observed a decrease in the percentage alpha-helical content and a concomitant increase in the percentage beta-sheet content. Thus, N-linked modification of α-LA by HTL brings about a structural transition from alpha helix to beta sheet, but this conversion is absent in the case of lysozyme and RNase-A. Near-UV CD and intrinsic fluorescence spectral measurements of the modified lysozyme and RNase-A at different HTL concentrations also indicate that there is no tertiary structural change due to the modification. In the case of α-LA, HTL binding induces alterations in the tertiary structure, leading to complete loss of tertiary structure at the highest concentration of HTL. In addition to the lysozyme and RNase-A used in the present study, many proteins have previously been reported to be N-homocysteinylated without showing structural alterations [49,51]. The conformational measurements therefore indicate that the differential effect of N-homocysteinylation on acidic (α-LA) and basic (lysozyme and RNase-A) proteins is due to different effects on the native states of the proteins. At present, we have no concrete explanation for the different effects of HTL-induced modification on the native states of different proteins. It might, however, be possible that disruption of tertiary contacts due to N-homocysteinylation is the limiting step, as the tertiary structures of both lysozyme and RNase-A could not be disrupted by the modification. Perhaps the incorporation of HTL at specific lysine residues in different proteins is responsible for the opening of the tertiary structure, which in the case of α-LA is easily accessible, while it is difficult to target in the case of lysozyme and RNase-A. In support of our argument, it has previously been reported for cytochrome c (cyt c) that modification of four lysine residues (Lys8 or -13, Lys86 or -87, Lys99, and Lys100) does not bring about any significant change in its native-state tertiary structure [49]. However, modification of a single residue in the case of insulin results in denaturation, ultimately leading to aggregate formation [28,54].
In addition, different reactivities of lysine residues in hemoglobin toward HTL were observed in another study [50]. Furthermore, it has been shown that Lys525 is a predominant site of N-homocysteinylation in human serum albumin and that the status of Cys34 determines the reactivity of albumin lysine residues, including Lys525 [55]. All these results provide clear evidence that modification of specific lysine residues is responsible for the observed structural, and hence functional, consequences.
We further investigated whether all of the homocysteinylated proteins used in this study undergo aggregate formation. For this purpose, we analysed the homocysteinylated proteins for ANS and ThT binding propensities. The absence of ANS and ThT binding in lysozyme and RNase-A indicates that no aggregates are present in the protein samples treated with HTL. Interestingly, the extensively homocysteinylated protein α-LA (as compared to lysozyme and RNase-A) was found to show very prominent binding of both ANS and ThT, indicating the presence of non-native aggregates, which might most probably be amyloids. In addition, DLS measurements suggest the presence of at least two different aggregate species at HTL concentrations beyond 100 μM (Table 3). On being modified with HTL, the hydrodynamic diameter of α-LA is greatly increased, with a predominant particle size of ~1500 nm (at 1000 μM HTL) as compared to 3.56 nm for the native control protein, suggesting that at this concentration of HTL the monomeric native α-LA has completely disappeared. To further confirm whether the aggregates formed are amyloidogenic in nature, we analyzed them using transmission electron microscopy, which revealed the presence of amorphous structures but not fibrils. Binding of ThT in the absence of fibrils/amyloids might possibly mean that native α-LA has changed from its alpha-helical conformation to predominantly beta-rich conformations that do not have the propensity to form fibrils. It may be noted that HTL-induced amyloid formation has so far been reported for only very few proteins [30,32]. Hence, N-homocysteinylation-induced amyloid transformation might not be a general consequence for all proteins.
To examine the generality of the dependence of HTL-induced structural alterations on acidic proteins, we performed similar measurements on two additional acidic proteins (α-casein, CN, pI 4.2, and carbonic anhydrase, CA, pI 5.9). Similar to α-LA, it was found that HTL induces structural alterations in CN and CA, as evident from the changes in the intrinsic fluorescence emission maxima on treatment with HTL and from the binding of ANS and ThT (see Figs. S2, S4). In agreement with the aggregate formation observed for α-LA, CN and CA, many acidic proteins have been reported to form large oligomers and aggregates upon modification by HTL [28,29,30,32,52]. Thus, it is clear from these results that, in contrast to the observed effect on the basic proteins (lysozyme and RNase-A), N-homocysteinylated acidic proteins are unfolded, leading to aggregate formation.
We further investigated the effect of N-homocysteinylation on the functional activity of the two basic proteins (lysozyme and RNase-A). For this, we measured the enzymatic kinetic parameters (K_m and k_cat) of HTL-modified lysozyme and RNase-A (see Table 4). It should be noted that the value for each kinetic parameter given in Table 4 represents the mean of three independent measurements, and the ± values represent the mean error. The kinetic parameters in the absence of HTL, shown in Table 4, are in excellent agreement with previous reports [42,43]. This agreement led us to believe that our measurements of the enzyme-catalyzed reactions and our analysis of the progress curves for kinetic parameters are accurate and reliable. Interestingly, it is also seen in Table 4 that the K_m and k_cat of the homocysteinylated lysozyme and RNase-A are not altered significantly by N-homocysteinylation, suggesting that the enzyme activity is not changed. However, in the case of CA, which is an acidic protein, there was almost complete loss of enzyme activity at 1000 μM HTL (see S5 Fig.). The absence of alterations in the thermodynamic stability (in terms of T_m, see S3 Fig. and Table 5) upon homocysteinylation of lysozyme and RNase-A further supports the expectation that the functional activity should not be perturbed, since protein stability and activity are directly related. It is important to highlight that the commonly held belief that enzyme activity is lost upon homocysteinylation has been derived from the availability of large amounts of N-homocysteinylated protein in the serum of homocysteinuric patients and the existence of antibodies against these protein adducts, rather than from direct enzymatic activity measurements. However, based on systematic activity measurements, at least four proteins (namely NOS: nitric oxide synthase, MetRS: methionyl-tRNA synthetase, DDAH: dimethylarginine dimethylaminohydrolase, and paraoxonase) have been reported to lose enzyme activity upon homocysteinylation [22,51,53,56,57]. All of these proteins are acidic in nature, with pI values ranging from 5.1–6.2. Thus, considering all previous reports and this study, we conclude that N-homocysteinylation-induced protein modification might be pI dependent, with acidic proteins having a higher propensity for structural and functional alterations than basic proteins.
Misfolded protein aggregates and plaques in neuronal cells are an emblematic signature of most neurological disorders and can trigger a cascade of events ultimately resulting in synaptic dysfunction and consequent neuronal death with devastating clinical consequences; one of the basic symptoms of increased levels of Hcy is neurodegeneration. It is important to note that almost all cytoskeletal proteins are acidic in nature. The possibility of these cytoskeletal proteins becoming homocysteinylated is therefore quite high, resulting in loss of their structure and function. This would eventually hamper intracellular transport and trafficking systems, thereby resulting in reduced or lost neuronal activity. To date, at least two proteins of cytoskeletal origin (the microtubule-associated tau protein, and neuronal and glial intermediate filaments, IF) [58] have been reported to be affected by increased levels of Hcy. N-homocysteinylation of tau protein has been shown to result in altered tubulin assembly dynamics in vitro [59]. These results clearly indicate that cells or tissues rich in acidic proteins could be major targets of Hcy-induced cytotoxicity and hence a prime cause of neurodegeneration. Taken together, we conclude that N-homocysteinylation-induced loss of protein function is not generally true and may depend on the physico-chemical properties of the proteins.
Conclusions
We show here for the first time that the structural and functional consequences of N-homocysteinylation depend on the pI of the proteins. Basic proteins are resistant to structural and functional alterations due to N-homocysteinylation, whereas acidic proteins are denatured, ultimately leading to aggregation. The study indicates that not all proteins are prone to HTL-induced structural and functional modification. In addition, cells or tissues rich in acidic proteins could be the major targets of Hcy-induced cytotoxicity. | 2017-07-29T05:38:50.915Z | 2014-12-31T00:00:00.000 | {
"year": 2014,
"sha1": "fa7e68955ced35e323753e01ef6856ddcc5c8b7e",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0116386&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "fa7e68955ced35e323753e01ef6856ddcc5c8b7e",
"s2fieldsofstudy": [
"Biology",
"Chemistry"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
54574854 | pes2o/s2orc | v3-fos-license | Pediatric Retinal Detachment in Indonesia: Clinical Characteristics, Risk Factors, and Treatment Outcomes
Purpose: To describe the clinical features and risk factors of pediatric retinal detachment among patients in Indonesia. Methods: This is a retrospective study involving 46 eyes of 34 children (younger than 18 years) diagnosed with pediatric retinal detachment. A detailed history was taken and a complete ophthalmic examination and a systemic examination were performed as required. Clinical characteristics, risk factors, and treatment choices were noted. Retinal detachment was categorized as tractional, exudative, or rhegmatogenous. Results: Mean patient age was 8.5 years (range, 0–18 years). Most patients (70%) were boys. Twelve (35%) patients had bilateral involvement at presentation. Tractional retinal detachment was found in 17 eyes (37%) and in this study was caused by retinopathy of prematurity (grade IV-V) in all cases. Exudative retinal detachment was found in 12 eyes (26%), the most common causes of which were panuveitis and Coats' disease (both 50%). Rhegmatogenous retinal detachment was found in 17 eyes (37%), the most common risk factor for which was trauma (58%). Conclusions: Different approaches are needed to treat pediatric retinal detachment in patients with different risk factors. Recognition of risk factors and early management will help to prevent childhood blindness due to retinal detachment.
Introduction
Pediatric retinal detachment (RD) is a devastating ophthalmic condition if not properly treated. The prevalence of RD is reported as 12.4 cases per 100,000 individuals, of which 3.2%–5.6% (0.38–0.69 cases per 100,000 individuals) are of pediatric age [1] [2] [3] [4]. Management of pediatric RD remains challenging due to the viscosity of pediatric vitreous, the difficulty of scleral buckling in developing eyes, and the risk of proliferative retinopathy caused by a raised immune response [2] [3] [5].
The surgical management of pediatric RD is difficult; therefore, prevention of the condition is important. An improved understanding of the etiological risk factors associated with the disease may help to avoid visual morbidity and childhood blindness. In addition, early detection and diagnosis is essential because timing is a crucial factor in the success of RD management. This is especially important in developing countries, where individuals often seek treatment too late for viable surgical management.
We performed this study to describe the clinical characteristics and most common risk factors for pediatric RD in the Cicendo National Eye Hospital (CNEH), as the top referral eye hospital in Indonesia.
Methods
This is a retrospective descriptive study utilizing hospital medical records of patients diagnosed with RD in 2013. All new patients under 18 years of age diagnosed with RD by the consultant in the vitreoretinal unit were included.
Age, sex, laterality, visual acuity, and type of RD were recorded. RD sub-types were classified by etiology, age group, risk factors, and management.
Result
Retinal detachment was observed in 46 eyes of 34 pediatric patients. The clinical characteristics are shown in Table 1.
Characteristics of Retinal Detachment
The retinal detachment characteristics are shown in Table 2. The mean age of patients in this study was 8.5 years (range: 0–18 years), with 70% of cases being male. Twelve (35%) patients had bilateral involvement, with tractional RD and exudative RD affecting eight and four patients, respectively. Visual acuity determination was difficult because of age, but most cases had light perception (41%) followed by hand movement detection (29%). Visual acuity measurement was not possible for 13 (38%) patients. When stratified by RD sub-type, most patients had unilateral rhegmatogenous RD (37%) and bilateral tractional RD (37%).
Etiology, Risk Factors, and Management of Pediatric Retinal Detachment
The overall frequency of RD was highest in the 0–5 year age group, with tractional RD the most common sub-type (89%) in these patients. Retinopathy of prematurity (ROP) stage IV–V was the sole etiological factor in all cases of tractional RD. In this study, we noted four patients with Coats' disease (11–15 years) that could not be treated with photocoagulation or vitreoretinal surgery because of advanced disease progression.
Rhegmatogenous RD was found in 37% of eyes, with 41% of these occurring in the 6–10 year age group. Trauma (58%) was the most common risk factor for this RD sub-type.
In this study, the primary management modality for RD was observation because most cases presented too late for effective medical or surgical treatment.
Medication was given for cases of exudative RD with panuveitis as a risk factor.
Vitreoretinal surgery was performed in 36% of cases; however, most had poor anatomical outcomes. Table 3 shows the details of the management of the recorded retinal detachments.
Discussion
This study characterizes pediatric RD in a government tertiary referral hospital in Indonesia and highlights some differences in disease patterns compared with developed countries.
We found that ROP was the only cause of tractional RD in this study. A report from a study involving 21 health facilities in Indonesia found that 5.05% (32/613) of premature babies were diagnosed with ROP [6]. Similar studies from Mongolia, Malaysia, and Latin America detected more cases of ROP [7]. In a report from Malaysia, 20 of 294 premature babies examined (about 7%) had ROP [8]. According to a 2005 report, two-thirds of the 50,000 children worldwide estimated to be blind from ROP are from Latin America. Unfortunately, more mature infants are developing severe ROP in countries with lower or modest levels of development than in highly developed countries [7]. Further investigation of this risk factor needs to be prioritized because ROP is an important factor in pediatric RD and childhood blindness.
Indonesia has a National Guideline for the screening and treatment of ROP [6].
Thirty-seven percent of cases in this study were of the rhegmatogenous RD sub-type. Trauma remained the most common cause of RD in the 6–10 years age group (58%), followed by myopia in 23% of cases. These findings are comparable to those of a study from Saudi Arabia, in which 32% and 17% of 152 eyes had rhegmatogenous RD secondary to trauma and myopia, respectively [2]. Similarly, Rumelt noted 41% and 11% of cases of RD secondary to trauma and myopia, respectively [1]. However, in a Taiwanese report, Chang found that
Table 1.
Clinical characteristics of retinal detachment.
Table 2.
Clinical characteristics, etiology, and other risk factors stratified by retinal detachment sub-type.
Table 3.
Management of pediatric retinal detachment.
Based on data from the General Hospital in Jakarta, ROP prevalence had decreased between 2004 and 2010. In 2007, the reported prevalence of ROP was 21.7%, with 71% of cases being at Stage 3. Subsequent estimates of prevalence were 14% and 18% in 2008 and 2009, respectively [6]. An important extension of the national guidelines means increasing the need for ROP centers in the country. Currently, ROP is only detected and managed at a small number of centers, usually tertiary care hospitals and certain private eye clinics, and many infants at risk do not have access to such facilities. Data from the 2010 ROP workshop reflect the high mortality of premature babies in Indonesia, especially in clinics with minimal neonatal facilities [6]. This study shows that ROP was the sole cause of pediatric tractional RD at a tertiary referral eye hospital in Indonesia (CNEH). Delay in the detection and treatment of RD was influenced by inadequate ophthalmic screening of premature babies, combined with parental reluctance to subject their child to potential surgery. The most common risk factor for exudative RD was panuveitis secondary to TB and TORCH infection (Toxoplasma, Other agents, Rubella, Cytomegalovirus and Herpes), as infection remains a leading cause of ocular inflammation in Indonesia. The prevalence of TB in Indonesia is high, and the estimated prevalence of disease in the pediatric population for 2015 was about 75,000 [9]. In this study, 50% of cases with exudative RD had panuveitis as an underlying condition and all cases had a positive history of tuberculosis. We found that another common cause of exudative RD was Coats' disease. Coats' disease may occur in the first decade of life and has devastating visual sequelae if not managed early in the disease process. 50% of cases of exudative pediatric RD were due to Coats' disease in our cohort, but all presented at a late stage of disease (11–15 years age group). This highlights the importance of regular retinal screening in childhood, as with early recognition and appropriate treatment, anatomical and visual rehabilitation in Coats' disease-associated RD can be achieved [10]. This has been observed in previous case series, with Rumelt noting that 63% of cases of exudative RD were caused by the disease [1]. In developing countries such as Indonesia, regular retinal screening in children is a difficult public health strategy to implement. While a white pupil in a child is a diagnostic sign for Coats' disease, usually by this stage the retina has detached and laser photocoagulation is no longer a viable management option. Mjeren followed 15 cases of Coats' disease in the early stages of presentation for up to 28 months. These patients were treated with a combination of laser photocoagulation, cryotherapy, and vitreoretinal surgery. Stable visual outcomes and anatomic improvement were achieved in 12 cases, with no enucleation necessary [10]. Comparatively, | 2018-12-02T14:07:44.386Z | 2017-09-29T00:00:00.000 | {
"year": 2017,
"sha1": "40ca4112c42359a39199b2794a2cc8253fe4fbc6",
"oa_license": "CCBY",
"oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=79442",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "40ca4112c42359a39199b2794a2cc8253fe4fbc6",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
252301970 | pes2o/s2orc | v3-fos-license | Case report: Clinical manifestations and genotype analysis of a child with PTPN11 and SEC24D mutations
Background The PTPN11 gene, located at 12q24. 13, encodes protein tyrosine phosphatase 2C. Mutations in the PTPN11 gene can lead to various phenotypes, including Noonan syndrome and LEOPARD syndrome. The SEC24D gene is located at 4q26 and encodes a component of the COPII complex, and is closely related to endoplasmic reticulum protein transport. Mutations in SEC24D can lead to Cole-Carpenter syndrome-2. To date, dual mutations in these two genes have not been reported in the literature. Methods We report a patient with short stature and osteogenesis imperfecta as the primary clinical manifestation. Other clinical features were peculiar facial features, deafness, and a history of recurrent fractures. Whole exome sequencing was performed on this patient. Results After whole-exome sequencing, three mutations in two genes were identified that induced protein alterations associated with the patient's phenotype. One was a de novo variant c.1403C>T (p.Thr468Met) on exon 12 of the PTPN11 gene, and the other was a compound heterozygous mutation in the SEC24D gene, a novel variant c.2609_2610delGA (p.Arg870Thrfs*10) on exon 20 and a reported variant c.938G>A (p.Arg313His) on exon 8. Conclusions Concurrent mutations in PTPN11 and SEC24D induced a phenotype that was significantly different from individual mutations in either PTPN11 or SEC24D gene. Personalized genetic analysis and interpretation could help us understand the patient's etiology and hence develop treatments and improve the prognosis of these patients.
Introduction
There are several causes of short stature, some of which are syndromes in which short stature is the main manifestation, including Turner syndrome, Noonan syndrome, and Silver-Russell syndrome. The primary clinical manifestation of Cole-Carpenter syndrome-2 is osteogenesis imperfecta, which comprises a group of clinical features such as reduced bone mass, increased fragility, craniofacial abnormalities and growth retardation. There are numerous reports on children with short stature and osteogenesis imperfecta; however, patients with dual molecular diagnoses have rarely been reported. Here we describe a Chinese girl with mutations in both PTPN11 and SEC24D. Her unique phenotype is analyzed in detail in this report.
Case presentation
The proband was an 8-year-old girl who was noticed to be shorter than her classmates for more than 2 years before her parents brought her to our clinic. The patient weighed 3.8 kg at birth and was found to have sensorineural deafness. A cochlear implant was used to restore her hearing. Her communication skills were like any other child her age. However, her pronunciation of certain words was affected. At the age of 2, she suffered a fracture on her right femur twice in 1 year due to two accidental falls. Her leg recovered well after external fixation, and her daily activities were not affected. Her parents noticed her short stature when she was 6 (her height was not recorded). However, they did not take it seriously until 2 years later when her growth seemed to be stunted compared to her school classmates. The patient has a 6-month-old brother who was healthy with no history of fractures. Her parents are of average height with normal body proportions and facial features.
The patient's height and weight were 116 cm (−2.4 SD) and 23.5 kg (−0.8 SD). She had normal intelligence. A café-au-lait spot was observed on the skin of the left anterior chest wall, with a few pigmented spots on her face. Her fontanelle had already closed. Abnormal facial features were observed by her physician, characterized by posterior occipital bone convexity, hypertelorism, down-slanting palpebral fissures, and a wide and flat nose. Her neck and spine were noticeably short, with a wide and flat thorax, winged and posteriorly convex scapulae, and bilateral cubitus valgus and genu valgum. She had an overall normal gait. Physical examination of the heart, lungs, and abdomen showed no significant abnormalities. Both breast and pubic hair were at Tanner stage I.
Skeletal imaging supported the findings of the physical examination. A broad skull, open sagittal suture, wormian bones, flattened spinal vertebrae, thin ribs, and scoliosis were observed (Cobb's angle: 11°). Her bone age was around 8 years and 10 months. Abnormal morphology of the pelvis, with non-homogeneous bone density and an increased femoral neck-shaft angle, was observed on pelvic X-ray images (Figure 1). Cardiac ultrasound was not performed. Laboratory examinations for liver and renal function, myocardial enzymes, blood electrolytes, thyroid function, and IGF-1, as well as electrocardiogram, bone age, abdominal ultrasound, and pituitary MRI, were all normal (Table 1).
Methods
Genotype analysis: Informed consent was obtained from the child's guardian. 3 mL of peripheral blood was collected from the child and her parents. Genomic DNA was extracted using the conventional phenol–chloroform method. The GenCap® Whole Exon Gene Capture Probe V4.0 (Mackinaw Gene Technology Co. Ltd., China) was used for library construction. Whole-exome sequencing was performed on the MGISEQ-T7 sequencer (UWI Technology Co., Ltd., Shenzhen, Guangdong, China). Reads were mapped to the human reference genome GRCh37/hg19. Parental Sanger validation of putative pathogenic mutations was performed subsequently.
Results
Whole-exome sequencing identified pathogenic variants in both the PTPN11 and SEC24D genes. A de novo missense variant c.1403C>T (p.Thr468Met) was identified in exon 12 of the PTPN11 gene (NM_002834) (Figure 2A), which caused a non-synonymous substitution in the cysteine-based protein tyrosine phosphatase domain (Figure 2B). Based on the gnomAD database, this variant has a frequency of 0.000003981 in normal individuals. The protein function prediction software REVEL predicted this variant to be deleterious. Sanger verification showed that neither of the proband's unaffected parents carried this variant. Based on the American College of Medical Genetics and Genomics (ACMG) guidelines, this variant was classified as a pathogenic mutation. This mutant allele has already been reported in the literature as a hotspot mutation. Clinical phenotypes associated with this variant are LEOPARD syndrome and Noonan syndrome with multiple lentigines (4,6,7).
Biallelic variants in the SEC24D gene were also identified. One was a paternal variant, c.2609_2610delGA (NM_014822), in exon 20 in the helical domain, resulting in a frameshift mutation p.Arg870Thrfs*10 (Figure 2A). It was absent from the normal population databases and was classified as pathogenic based on the ACMG guidelines. To our knowledge, this variant has not been reported previously. The second was a maternal variant, c.938G>A (p.Arg313His), in exon 8 (NM_014822), with a mutation frequency of 0.000641 in normal individuals (Figure 2A). It was predicted to be potentially harmful by the protein function prediction software REVEL and was classified as a likely pathogenic variant by ACMG. Compound heterozygotes have been reported in the literature, among which there was a patient with a premature stop codon and the same missense variant of the SEC24D gene (Figure 2B) (1, 2).
Discussion
Among patients with positive molecular diagnoses on whole exome sequencing, 4.9% had two or more genes involved (8). In our study, a heterozygous mutation in PTPN11 and compound heterozygous mutations in the SEC24D gene were identified in the proband.
The PTPN11 gene, located on 12q24.13, encodes the SHP-2 protein, which is widely distributed in heart and skeletal muscle. SHP-2 mediates cell proliferation and serves as a key signaling molecule in the RAS-MAPK kinase pathway (9,10). Phenotypes associated with the PTPN11 gene may differ between mutated loci; thus, the specific mutation locus needs to be considered in clinical diagnosis (11). Germline PTPN11 mutation-related phenotypes include Noonan syndrome, LEOPARD syndrome, and metachondromatosis. Noonan syndrome and LEOPARD syndrome have some overlapping phenotypic features, such as peculiar facial features, sensorineural hearing loss, scoliosis, short stature, and cardiomyopathy. However, multiple lentigines are a unique clinical characteristic of LEOPARD syndrome.
Located on 4q26, the SEC24D gene encodes a component of the COPII complex, which is involved in protein transportation in the endoplasmic reticulum. Mutations in the SEC24D gene result in a reduced outward procollagen transport from the endoplasmic reticulum and dilatation of the endoplasmic reticulum canal, which leads to Cole-Carpenter syndrome-2 (12). Cole-Carpenter syndrome-2 is an autosomal recessive syndrome characterized by abnormal skeletal development due to low bone mass and osteogenesis imperfecta, which include open fontanelle, hydrocephalus, abnormal facial development (e.g., forehead bulge, midface hypoplasia, micrognathia, and ear malformation), and recurrent fractures (1).
Only a few cases of dual mutations involving the two genes mentioned above have been reported. Martina Caiazza et al. reported a patient with hypertrophic cardiomyopathy who carried dual mutations in the PTPN11 and MYBPC3 genes (13). To our knowledge, co-mutations of the SEC24D gene with other genes have not been reported. The combined effect of the mutations in the two genes may explain the difference between the phenotype of this patient and that of a typical patient with a single mutant allele (Table 1).
Short stature is often the parents' main concern; the mixed genetic etiology of short stature in our patient makes her distinctive and confers a relatively poor prognosis. Our patient was of short stature, with a height of 116 cm, which was below the third percentile for Chinese children of the same age and gender. Patients with Noonan syndrome or LEOPARD syndrome due to mutations in the PTPN11 gene can develop short stature (5,14). In patients with Cole-Carpenter syndrome-2, osteogenesis imperfecta may also result in slower growth and abnormal body proportions (1,2,12).
Physical and radiographic examinations of the child showed a short neck, scoliosis, valgus scapulae, cubitus valgus and genu valgum, and thorax malformation, all of which are consistent with patients with PTPN11 mutations (15)(16)(17). However, the concurrent SEC24D gene mutation also played a major role in the skeletal development of this patient, as reflected by wormian bones, pelvic malformation and possible low bone mass. Patients with Cole-Carpenter syndrome-2 are typically characterized by low bone mass, craniofacial abnormalities, various skeletal deformities, and a tendency for repeated fractures (1,2,12). The dual gene mutations in this patient could have aggravated the manifestations of bone malformation and low bone mass in the spine, thorax, and pelvis. Repeated fractures also confirm that the SEC24D mutation had a significant impact on osteogenesis and bone deformity.
The patient showed abnormal facial features. Orbital hypertelorism, down slanting palpebral fissures, broad and flat nose, and kyphotic occipital bones are similar to the facial features observed in Noonan syndrome and LEOPARD syndrome patients. The wide and deformed skull and wormian bones may be caused by bone abnormalities associated with the SEC24D gene mutation. However, SEC24D mutant patients may have a more severe phenotype as previously reported, such as a bulged forehead and large, open fontanelle (1,2).
The number of lentigines in our patient was much lower than in a typical patient with LEOPARD syndrome. However, since the causal single-nucleotide variant was identified and almost all of the other clinical findings were concordant with LEOPARD syndrome (3,4), we also based our diagnosis on the genetic findings. The reason for the paucity of lentigines in our patient may be that the disease had not yet progressed to that stage, or that the SEC24D gene mutation affected the typical manifestations of the other gene mutation.
Currently, there are no effective etiology-based therapies for LEOPARD syndrome and Cole-Carpenter syndrome-2. Symptomatic treatments should be administered to reduce the impact on growth and pubertal development. Our patient was found to have hearing loss at an early age; thus, a cochlear implant helped improve her hearing. As a result, her communication was essentially unaffected. Short stature and scoliosis are the biggest concerns for parents of these children. However, some patients with PTPN11 gene mutations have been reported to have a relatively poor response to rhGH treatment (18-20). What is perplexing is that patients may develop new scoliosis or experience accelerated scoliosis progression after rhGH treatment (21)(22)(23). For patients with mild scoliosis (Cobb's angle <45°), early orthopedic correction has been shown to be beneficial for lowering respiratory complications (24). However, due to the probable low bone mass of this patient, spinal fusion could be much more difficult, and surgical treatment may affect her quality of life. A judicious assessment should be performed before resorting to orthopedic surgery.
Even though there were no clinical manifestations of cardiovascular involvement in our patient, the possible risk of cardiovascular abnormalities may be high. Severe obstructive cardiomyopathy is a common cause of sudden death in patients with PTPN11 mutations (25, 26); thus, comprehensive cardiac examinations should be performed for early diagnosis.
Conclusion
The combined effects of the dual gene mutations observed in our patient led to a phenotype that was not consistent with patients having individual gene mutations. This suggests the importance of comprehensive screening for relevant genomic variations in patients with growth retardation, scoliosis, and facial malformations. For some patients, the reasons for these heterogeneities in the clinical phenotypes may not only be due to the mutation loci but may be due to multiple gene mutations. Hence, proper diagnosis requires a more in-depth, individualized interpretation of the genetic landscape. This will help us understand the driving factors for the clinical manifestations, and hence guide us to formulate appropriate treatment plans.
Data availability statement
The datasets for this article are not publicly available due to concerns regarding participant/patient anonymity.
Requests to access the datasets should be directed to the corresponding author.
Ethics statement
Written informed consent was obtained from the individual(s), and minor(s)' legal guardian/next of kin, for the publication of any potentially identifiable images or data included in this article.
| 2022-09-16T15:24:00.339Z | 2022-09-14T00:00:00.000 | {
"year": 2022,
"sha1": "1c8ccf48c35715f08b84a5ca5e0a3f337bce166d",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fped.2022.973920/pdf",
"oa_status": "GOLD",
"pdf_src": "Frontier",
"pdf_hash": "d37daacc5738fddec68bdbdcef83a4244add64e8",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
216480548 | pes2o/s2orc | v3-fos-license | Assessing Information Security Vulnerabilities and Threats to Implementing Security Mechanism and Security Policy Audit
In spite of the massive investment of money, time and effort an organization devotes to developing and continuously improving a sound information security strategy, the human factor is ultimately the one behind the keyboard. As a result, human beings remain the most vulnerable and weakest entity in the information security chain. A negligent or irresponsible in-house employee can be a threat even to the most tightly secured environment an organization builds to keep intruders away from unauthorized access. Information security is considered one of the most necessary and crucial issues in the field; with the rapid changes in information technology taking place every day, and with business models chasing it and trying to catch up, it has recently become one of the most interesting fields for the technology and business communities.
Introduction
We have moved from the personal computing era to one in which computer networks are everywhere and daily life can hardly be carried on without connectivity most, if not all, of the time. Corporations of all sizes, large or small, look to the Internet as an essential component for conducting business. The Internet explosion, praised for enabling better ways of conducting business, has at the same time exposed network security vulnerabilities. Computer systems and information security must be controlled and monitored, starting with the security policy. The best security policy is of no value if it is not practically applied, and it should clearly define the tools and methodology that will be employed to monitor all generally accessible network resources to make sure that the policy rules are being followed. The security policy should also fairly and clearly outline the types of corrective actions to be enforced on policy violators. Auditing, monitoring and enforcement of a well-designed and clearly stated security policy will positively contribute to controlling any unlawful access to the network. While there is no such thing as absolute network security, there are still ways to improve security by constantly monitoring network behavior and trying to prevent unauthorized access, given good tools and the skills of network administrators, developers and end-users who can constantly audit the computer network for vulnerabilities.
As there are many vulnerabilities and potential threats, and attacks are becoming the headlines of our daily news, information security specialists are struggling to keep computer networks safe and secure from such attacks. Network administrators and security experts face the crucial and challenging mission of controlling and preventing unlawful entries and attacks while equipped with the finest tools available; even so, networks still get attacked.
In the literal sense, absolute, total, all-round computer security does not exist. Post facto computer security is every security specialist's bad dream, and there is no such thing as a completely secured system. Nevertheless, many ongoing efforts contribute to security measures, raising the stakes and making information systems a harder and harder target for intruders. With the adequate implementation of security measures, the chances of being attacked decrease significantly.
A little more than two decades ago, information security was not a major issue, but it has since become the focus and center of computer networking. Information security intersects with many other disciplines, including but not limited to network systems operations and administration and computer programming. Few organizations would justify the need and the additional expense of recruiting an information security professional, and yet most of these organizations could be affected by even the most casual penetration (Suduc et al., 2010).
This paper addresses security issues such as vulnerabilities and threats in order to broaden the view and contribute to taking preventive action against potential information security attacks, and then audits the procedures to arrive at a reasonably safe and sound security policy that helps in the everlasting quest to secure computers and information from harmful attacks. In addition, it aims to make administrators and users aware of obvious vulnerabilities and well-known attacks so that they can remain secure.
Security Policy Editing
A security policy should be designed and reviewed to be clear, concise, complete and accessible to all users and groups. Once designed and reviewed, it must be followed by a thorough auditing procedure of the policy document; otherwise, there is no point in creating such a policy. It might seem obvious, but the only way to have a secure network infrastructure is through effective security monitoring. At this point, it is very important to state the stakeholders clearly, and to identify the individual or group directly responsible for writing the policy and the entity responsible for its continuous enforcement.
The security policy must have signatures from all the stakeholders such as Division Managers, Technical Officers, Legal Counsel and of course, the writers of the policy, making sure to mark the date of all signatures.
The security policy is an organization's master document, reflecting its vision and mission, and is the single ultimate source that steers and informs all stakeholders of what is being protected and how the organization decides to protect it. If an organization outsources or hires a consultant to draft the network security policy, this responsibility still falls on the shoulders of the network administrator. Details for network policies can be accessed from several online resources, and there are many other good websites with general security policy information that can be of great help in designing and starting the write-ups that lead to a good security policy.
For an organization that does not yet have a security policy and is about to write one, this paper proposes certain guidelines to follow, covering the essential components of an information security policy, as follows:
Introduction
This part should give a brief history of the company and its activities, purpose, vision and mission. The introduction should include all references related to its information system and all service providers, above all the Internet Service Providers (ISPs) and security solutions such as firewalls, as well as the infrastructure provider information and the equipment providers, and any other companies or individuals involved with the network or system setup.
Network Layout Diagram
It should clearly show all the building blocks of the interior and exterior components. The diagram should illustrate the different groups and their blocks, their users' terminals and the clear borders with their neighbors and with the main network blocks. The diagram should also illustrate how the internal network gateway connects to the ISP's edge router. The network diagram should show the physical connections between the terminals and routers, along with the connections to the servers, to ease troubleshooting and help solve day-to-day problems that occur on the network.
Physical Security
This section should cover the architectural design of the facility, including the different entrance and exit points along with any other access points for the whole building area. It should also define the areas that are off-limits to general system users, the roles and entities of the users who will be allowed to access these areas, and how they will gain access securely. This should secure access to the complete area, including physical assets such as the server platforms, control panel configuration terminals, power lines, UPS systems or standby generators, and console access. The physical security section should define the various types of access control and how data centers, network racks and closet areas are accessed. As a network administrator, it is vital to regulate access areas to deter any unauthorized, inexperienced or unconcerned employees from accessing the data centers or other sensitive company resources. Lack of proper access control mechanisms can lead to damage or loss of resources.
Remote Access
It must be clearly stated in the policy whether there is a need to allow remote access. This part of the policy should answer a number of questions that will shape the remote access policy if it is going to be allowed. Some of these questions are: What special access restrictions will apply to remote users? Will access be allowed via normal dial-up, leased line, or a dedicated VPN over a public connection? Who will have remote access? How secure will the connection be? Will any encryption techniques be used, and who will be authorized to access the Intranet using remote access clients through the Internet?
Firewall Configuration
The firewall is a very important component in securing the network. A firewall can be either logical or physical; it prevents unauthorized access to network systems. The policy should include the details of the firewall configuration, settings and references for all perimeters. It should also include the settings of the defense devices, access control rules, logging and log file facilities, and the different authorization and authentication methods.
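As an illustration only, access control rules can be documented in a declarative form that is easy to review against the policy; the rule fields, addresses and helper names below are hypothetical and not taken from any particular firewall product.

# Hypothetical sketch: documenting firewall access control rules as data
# and evaluating a connection attempt against them (first match wins).
from dataclasses import dataclass
from ipaddress import ip_address, ip_network

@dataclass
class Rule:
    action: str        # "allow" or "deny"
    src: str           # source network in CIDR notation
    dst_port: int      # destination port
    comment: str       # why the rule exists (required by the policy)

RULES = [
    Rule("allow", "10.0.0.0/8", 443, "internal users to HTTPS services"),
    Rule("allow", "203.0.113.0/24", 22, "admin subnet to SSH"),
    Rule("deny", "0.0.0.0/0", 22, "block SSH from anywhere else"),
]

def evaluate(src_ip: str, dst_port: int) -> str:
    """Return the action of the first matching rule; default deny."""
    for rule in RULES:
        if ip_address(src_ip) in ip_network(rule.src) and dst_port == rule.dst_port:
            return rule.action
    return "deny"  # default-deny posture, which the policy should state explicitly

print(evaluate("10.1.2.3", 443))      # allow
print(evaluate("198.51.100.7", 22))   # deny

Keeping the rules in such a reviewable form also makes the periodic policy revisions discussed later easier to carry out.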
Users Accounts Policy
By creating user accounts and groups, it becomes much easier to maintain the permissions and denials for accessing data and to control who accesses what. The user accounts policy should cover the choice of usernames, password expiration periods, account termination rules, storage space quotas and allocation, and process resources.
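As a minimal sketch of how such a rule can be enforced, password age can be checked against an expiration period; the 90-day period and the account records below are assumptions for illustration, not values prescribed by this paper.

# Hypothetical sketch: flag accounts whose passwords exceed the policy's maximum age.
from datetime import date, timedelta

MAX_PASSWORD_AGE = timedelta(days=90)   # assumed policy value

accounts = [
    {"username": "jsmith", "last_password_change": date(2019, 11, 2)},
    {"username": "akhan", "last_password_change": date(2020, 2, 20)},
]

def expired(account, today=None):
    today = today or date.today()
    return today - account["last_password_change"] > MAX_PASSWORD_AGE

for acc in accounts:
    if expired(acc):
        print(f"{acc['username']}: password expired, force reset at next login")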
Data Usage and Access Policy
Data usage, user access and file permissions are collectively a very serious issue that should be clarified and extended to cover all the different types of server permissions, not only user access. There should be a clear distinction between the read, write and execute permissions on a given file. This section of the policy should define the initial access permissions and privileges.
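As an illustrative sketch of checking the read/write/execute distinction in practice, the permission bits of sensitive files can be compared with what the policy records; the paths and expected modes below are made up for the example.

# Hypothetical sketch: compare the actual read/write/execute bits of sensitive files
# with the permissions recorded in the policy (paths and modes are illustrative).
import os
import stat

EXPECTED = {
    "/etc/payroll.conf": 0o640,   # owner read/write, group read, no world access
    "/srv/reports": 0o750,        # owner full, group read/execute, no world access
}

def audit(expected):
    for path, mode in expected.items():
        try:
            actual = stat.S_IMODE(os.stat(path).st_mode)
        except FileNotFoundError:
            print(f"{path}: missing")
            continue
        if actual != mode:
            print(f"{path}: expected {oct(mode)}, found {oct(actual)}")

audit(EXPECTED)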
Monitoring Policy
A network monitoring system can detect and report failures or degraded performance of the system or of any other device as they happen. This section refers to monitoring network traffic in order to know, review and analyze it for any abnormality that may interfere with network performance and the availability of network resources. Network monitoring is very important and absolutely necessary to secure the network, and it may require more than one expert on duty around the clock to detect abnormal activities as they happen, stop damage from being caused and take preventive action, saving a lot of money and eliminating many problems.
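As a minimal sketch of flagging abnormal traffic, samples can be compared against a rolling baseline; the threshold and the sample values below are illustrative assumptions, not measurements from this paper.

# Hypothetical sketch: flag traffic samples that deviate strongly from a moving baseline.
from statistics import mean, stdev

def flag_anomalies(samples_mbps, window=5, k=3.0):
    """Report samples lying more than k standard deviations above the rolling mean."""
    alerts = []
    for i in range(window, len(samples_mbps)):
        baseline = samples_mbps[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and samples_mbps[i] > mu + k * sigma:
            alerts.append((i, samples_mbps[i]))
    return alerts

traffic = [40, 42, 39, 41, 43, 40, 42, 180, 41, 40]   # Mbit/s, one sample per minute
print(flag_anomalies(traffic))   # the spike at index 7 is reported

A real deployment would feed such a check from flow or SNMP counters and raise an alert for on-duty staff rather than printing to the console.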
Auditing Policy
It usually refers to the network security audit process, which involves the investigation of the company's security policy and the assets on the network to identify any deficiencies that put the company's system and data at risk of any type of security breach, where assets are anything that has value to the organization (ISO IEC, 2005; 2013). The policy audit procedure helps determine security effectively and solve essential network security concerns.
What Else?
Periodic policy review and security auditing are crucial, since dramatic changes in technology take place every day; a security policy cannot be a static document but must be a dynamic one that is revisited, modified and updated with the latest technologies available. As the company's network infrastructure transforms and data keeps growing, it becomes necessary and mandatory to conduct periodic revisions of the security policy in parallel with that growth, making sure it remains up to date and reflects the needs of its purpose.
With the guidelines briefly outlined above for starting the first security policy of any institution, all these sections should be included in the document so that the policy remains a long-lasting document covering, but not limited to, all the sensitive criteria of the different policy aspects. It will need to be revisited from time to time as changes occur and the necessity arises.
Information Security Audits
An information security audit is used to evaluate the effectiveness of the deployed security measures and policies. The audit process is crucial and must be thorough in order to understand fully how far the institution is secure and protected against security breaches and threats. The network systems security audit is considered the most important part of the whole audit process and should be performed periodically to ensure flawless and smooth operation of the entire system. The information security audit covers both static and dynamic aspects: static information such as network addresses, protocols, password rules, firewall settings and user accounts, and dynamic information such as data file creation, modification, transfer and exchange, access to databases, log files and activities.
System Architecture Audit
It usually refers to the infrastructure and foundation that support the information system. It consists of the physical and virtual resources that support the smooth operation of the entire system, including the system location, hardware assets, processes and the interconnections among all of them. The system architecture audit aims at evaluating the deployed security measures and their users, and it also addresses the efficiency and robustness of the collection of hardware assets such as servers, storage and the connectivity among them.
System Integration Audit
It is about making sure that the right equipment and components comprising the information system hardware are provided and put together, and about examining how these components interact with each other to verify that they perform correctly and efficiently once in real operation. Nowadays, with the variety of technologies available, it can be difficult to choose among good brands; the better the choice at the very early stages, the better the results and performance will be until the hardware component or even the whole system is decommissioned. The system integration audit is extremely important, especially at the early stages, as it ensures the quality of system integration and how reliable it can be for processing information.
Operating System Audit
It all starts with the choice of the operating system to install and use, and how transparent it can be when platform-level auditing is started to find existing vulnerabilities and decide how to fix them. At this stage, one should understand the strength of open source operating systems versus any other type of operating system. The nature of open source operating systems allows administrators, developers and even users to constantly audit them and detect flaws and vulnerabilities. The privilege of being able to look "under the hood" makes open source operating systems one of the most recommended choices and the main platform for environments where security is important. Another crucial aspect is the End-User License Agreement (EULA), which has to be studied carefully and thoroughly to find gaps and potential security breaches. Finally, one should look into how many viruses are effective against and can harm the operating system (Afifi and Nehal, 2017).
Link Level Security Audit
It is about making sure that messages are protected while being exchanged between queue managers. A link-level security audit is crucial to enhance confidentiality during message transmission; considering that every connection is open to interception or eavesdropping, each end needs to authenticate its partner, starting when the communication pathway is established and before messages are transferred. Confidentiality can be achieved by ensuring that the sender encrypts the message before sending (IBM, 2020).
Application Software Audit
It is about making sure that the application software is specifically, thoroughly and exhaustively tested and audited independently, and that the level of control matches the degree of challenges and risks involved in the incorrect or unauthorized processing of data. As shown in Fig. 1, the application software audit should deliver a detailed evaluation of the application code for many aspects, starting with the network infrastructure and possible connectivity vulnerabilities, non-secure coding practices that may result in programming bugs or runtime errors, the natural protection against all widely known attack techniques and, finally, the level of data encryption if needed at this level. The audit process should involve both automated and manual penetration tests and attack-related code detection. Application-level security is vital for security services invoked at the interface between the application and the queue manager. As shown in Fig. 1, application-level security can also be referred to as end-to-end or message security (IBM, 2020).
Assessing Vulnerabilities
Information security has always been and always will be about the CIA triad, which is considered the principle and core requirement of information security for smooth and safe storage, access and movement of data within the same network or to another network; everything else comes next. CIA stands for Confidentiality, Integrity and Availability, the three basic and initial objectives of information security. Confidentiality reflects the necessity to keep information private, to keep data concealed from unauthorized users; it includes restriction of authorized access. The opposite of confidentiality is disclosure. Data integrity involves trust that the transmitted information has not been manipulated. Availability means making data accessible to authorized personnel when needed; often this implies that resources are provided at a higher rate, outpacing the normal functioning of the wider system. The opposite of availability is basically interruption or denial of service (Carr et al., 2009).
In the early stages, all computer networks are basically designed and interconnected to other networks with security vulnerabilities. The foundation of information security implementation is to evade and eliminate the obvious, well-known vulnerabilities. Vulnerability refers to a system that is susceptible to access by unauthorized people (Kizza, 2009). A network can be vulnerable if there are no, or not enough, security mechanisms and policies in place to secure it.
Information security vulnerabilities have been addressed from many aspects; they may come from hardware and/or software security flaws that have not been covered by the policies and procedures stated for the network. The classification and sources of vulnerabilities developed by Kizza (2009) include protection barriers and vulnerability presence, among others (Weber et al., 2005). Studying the whole list in one visit is very difficult, so addressing the major and obvious sources of vulnerabilities should be a continuous improvement while pursuing and establishing a standard information security policy.
To start with, several vulnerabilities occur by default once the interconnection between computers in a network has been established, as a result of providing the ability to communicate among these devices. These cannot be considered design flaws; their existence is inherent to the networking concept itself. Hardware systems have fewer design flaws; after all, they were designed to execute specific instructions to perform a specific task. However, the software installed on hardware can have vulnerabilities that affect the hardware. The software development process might produce security flaws for many reasons, depending on the developers' ability to produce quality software: for example, programmers' memory lapses, implementation of weak algorithms, failure to conduct security testing, complacency, or installation of backdoors by the programmers (Dowd et al., 2006).
Another important factor behind the increased vulnerabilities is the lack of knowledge and literacy of computer users who are not even aware of security: users who leave their computers and accounts logged in and never log out, people who save their passwords and have them managed by a computer program, employees who write their login credentials on a piece of paper and stick it on the computer screen for quick reference, people who allow strangers to use their computers, people who do not care about their personal information and sensitive data, and the list goes on.
The lack of trustworthy software sources adds to the picture: for more than two decades, big and even giant software companies have been involved in and accused of many security and spying issues and scandals, having access to their customers' computers without permission, copying data, spying on usage history and invading their privacy. These companies have built various means of security breaches into their products so that they can access their customers' computers. Ironically, when discovered and exposed, they basically admit it or announce that it was a programming bug (Choi et al., 2008), followed by the familiar stream of fixes and patches, after which a user will never know whether the issue was really fixed or not.
Fig. 2: CVE vulnerabilities by year
Other vulnerabilities exist because of weak or loose security management, where the most likely scenario is that the computers have been connected and the users have started to work and use the shared resources with no concept of security essentials and no policy of who does what and how. The non-procedural implementation of a network exposes it to vulnerabilities that will later need to be fixed. Such a network is more like a public park that does not require an entry pass, where everyone can simply get in and out easily, anytime, all the time.
Finally, the most likely and everlasting source of vulnerabilities that cannot be completely controlled or secured is the Internet, with all the services one can use, including web applications. As stated by Common Vulnerabilities and Exposures (CVE, 2019), the number of reported vulnerabilities has been rising for the last three years, as shown in Fig. 2. Software is the richest host of vulnerabilities; whether it is the operating system or the application programs, both may have serious security breaches, open-port vulnerabilities, web application bugs, public network connection errors and client-server network protocol issues.
Assessing Threats
One cannot deny that, no matter how strong and secure we can be, all of us as individuals, companies and organizations are susceptible to information security threats, as we are potentially vulnerable one way or another. Clearly, information security awareness and education are our first line of defense, alongside the technical methods and strategies of protecting the personal and corporate information in our possession.
Faster information exchange between business entities requires connectivity over telecommunication technologies. Successful business operations are very difficult, or nearly impossible, to achieve without information technology connectivity. As a result, the information system becomes exposed to security threats. To assess these threats and know how dangerous they can be, an individual or organization should first be able to identify how many people out there will actually attempt to break in to gain access to their information systems. Knowing the possible dangers by studying the potential threats and their impact on the security CIA triad is a relatively good and clean start for assessing the information security status.
Information security threats fall into two sets, internal and external. In my opinion, the internal threats within an organization are much more important and severe than the external ones. They are described as follows:
Internal Threats
They occur when someone has authorized access to the information system; they can be the result of an intentional or accidental act by an authorized person. These internal threats include physical access to the facility: being near the information system with easy access to computers and servers can lead to direct exposure to vandalism, the deliberate, willful damage or destruction of the information system; no matter how small the damage appears to be, even minor damage can have a significant impact on the information system and can cost a fortune. Physical access can lead to direct information theft, where data can be copied onto external storage of very small physical size, as has been seen in many cases (Snowden, Google and others). It also facilitates the implanting of any type of malicious software (malware) that causes real and severe damage to the information system, either immediately or in the long run; malware includes programs like viruses and worms. Taking photos of the physical equipment and hardware devices in use makes it easy to mimic them and work on similar equipment to find vulnerabilities. The human factor, such as errors, accidental misuse, bad habits and attitudes, lack of responsibility or lack of experience, can have an even greater impact on information security.
External Threats
These threats involve external unauthorized access to the information system, which is considered one of the most serious threats. If the unauthorized person manages to alter or modify data, the threat compromises the integrity of the data; if he or she manages to delete data from an information system for which no up-to-date backup exists, the threat compromises the availability of the data. The situation is worse if the unauthorized access is gained by someone with the ability to cause damage. Natural disasters and force majeure, such as fires, hurricanes, tornadoes, floods and earthquakes that physically damage the information system, are also considered external threats (Jouini et al., 2014).
Security Mechanism Implementation
Individuals and organizations hold a broad range of information assets that may vary from the simplest PDA device to the most sophisticated server installed in a data center; both are considered valuable based on the information they hold. Hence, every device is considered a valuable asset to an organization in terms of the data it holds and processes (ISO IEC, 2005; 2013).
In the past, information security was the purview of a few individuals, but nowadays it is the responsibility of every employee of an organization (Kanatov et al., 2014). Therefore, everyone has an obligation of security self-awareness and of ensuring the availability of a secure environment (Ku et al., 2009). With the extremely rapid evolution of technology, many businesses, if not all, are incapable of growing at the same pace and adopting new technologies as they appear, so in many cases a gap between technology and business will exist and remain, and that is exactly where information security vulnerabilities are found.
The best way to initiate the information security implementation process is to identify the system's valuable areas carefully, assessing the size and level of loss if security were breached and how long recovery would take. Afterward, questionnaires (Richardson and Director, 2008), checklists and step lists are generated in the quest to form the full picture of the security mechanism and to build the how-to strategies. The questionnaires should be simply designed in the form of yes-no questions and answers (Suduc et al., 2010), with no deeply technical questions, keeping in mind that most of the employees completing the questionnaires may not be information security literate; they should make sure that mission-critical data is not available to all employees, especially naive users, and that data access permissions and denials are set with the right level of read/write/execute for each and every group or individual. The ISO 17799 checklist (SANS, 2003) provides an ISO-standard security testing checklist. The Audit Tool (ISO IEC, 2005; 2013) offers an extensive variety of audit questions relating to advisable security practices, along with possible actions in cases where dissenting answers are provided. Though these tools depend on additional security measurements, they are still very useful (Kanatov et al., 2014).
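As a minimal sketch of how such a yes-no questionnaire can be tallied, the answers can be scored to highlight the areas needing attention; the questions, areas and scoring below are invented for illustration and are not part of any cited checklist.

# Hypothetical sketch: score a yes/no security questionnaire and list the gaps.
QUESTIONS = [
    ("Is there a documented password policy?", "policy"),
    ("Are backups tested at least monthly?", "operations"),
    ("Is access to the server room logged?", "physical"),
]

answers = {"Is there a documented password policy?": "yes",
           "Are backups tested at least monthly?": "no",
           "Is access to the server room logged?": "no"}

def report(questions, answers):
    gaps = [(q, area) for q, area in questions if answers.get(q, "no") != "yes"]
    score = 100 * (len(questions) - len(gaps)) // len(questions)
    print(f"compliance score: {score}%")
    for q, area in gaps:
        print(f"gap ({area}): {q}")

report(QUESTIONS, answers)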
A knowledge base is another effective approach to the audit process. It provides the necessary information for Chief Information Security Officers (CISOs) to make precise information security policy decisions. The basic knowledge base components are assets and vulnerabilities (Stepanova et al., 2009).
All the steps together form the security mechanism implementation, where every step is stated as a reference both for the vulnerability and for its protection standard, as well as a comparison cross-referenced with the in-house guidelines. This security mechanism will provide organizational guidelines, standards and component analysis leading to the issuance of recommendations. Consequently, the supposed meta-model of the security standard recommendations can be created (Atymtayeva et al., 2012; Kozhakhmet et al., 2012; Atymtayeva et al., 2011).
Security Policy Audit
With the evolution of information technology and telecommunications services over the last two decades or more, it has become compulsory for every organization, regardless of the nature of the activities or services it offers, to use information technology, as computer usage has invaded all fields and made the job easier for the various business models. Ever since, the need for sound information security measures has become compulsory as well. The news of organizations being attacked every day is solid evidence of the necessity to address the issue from different perspectives. Security policy auditing has emerged to handle the majority of such events.
The content of the policy should demonstrate brevity and clarity, especially for the asset descriptions, which are basically limited to removable media. The source of the document should signify the criteria from which the guidelines are taken. Two sources are depicted here: ISO 27002 (ISO IEC, 2005; 2013) and the UCISA Information Security Toolkit (UCISA, 2005), along with the related subclasses that define each criterion's structure.
The information security audit process should lead to an information security management document (manual) that explains the details of the following information: (1) A security policy is an organization's unique written document that describes basic best practices when dealing with information systems (Afifi, 2018). A security policy should classify all of an organization's resources, as well as all the possible and potential threats to those resources.
Conclusion
Regardless of how many resources an organization devotes to developing an enhanced security strategy for its information systems, human users and operators are ultimately in the control seat, and they are often the most error-prone participants in the information security ecosystem. The most airtight defense rules will be surrendered and rendered absolutely useless by an insider with a tiny flash drive that can copy gigabytes of data faster than it takes to drink a cup of coffee. Strong password security policies are useless if users write down their credentials on paper and leave them unsecured. Other users habitually store their passwords in browsers, leave their computers logged in and unsupervised, save passwords in unencrypted documents, and the list goes on. All these bad practices will never pay back the money, time and effort put into building tight information security. With all the various security techniques and strategies that can be constructed, a well-designed security policy should be implemented and strictly deployed with the best standards and practices, specifying procedures and how they should be followed to achieve the maximum and adequately secured environment. The security policies and procedures should also be revisited from time to time, even when things are fine, to find the gaps and narrow margins through which security breaches may arise.
Finally, planning security is very important and crucial, starting all the way from information security literacy and the proper education of employees so that they are aware of the threats and the types of attacks they could experience from time to time and of how to avoid being vulnerable. The physical security of the facility is also very important, along with the choice of hardware equipment. All of this should be sealed by the security policies and procedures, which are definitely essential to complete the line of defense for a sound and safe information security environment.
Fig. 1: Link-level security and application level security | 2020-03-25T01:38:57.007Z | 2020-03-01T00:00:00.000 | {
"year": 2020,
"sha1": "1d156007d081fb237da3f2a1743e71d61a8a75a8",
"oa_license": "CCBY",
"oa_url": "https://thescipub.com/pdf/jcssp.2020.321.329.pdf",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "3ad7cd3413517c40f024fe3eeb370d5cc9776cd1",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
6731827 | pes2o/s2orc | v3-fos-license | Effect of occupational noise on the course and outcome of pregnancy.
OBJECTIVES
The goal of this investigation was to examine the effects of occupational noise during pregnancy prospectively.
METHODS
The exposed group [continuous A-weighted sound level (LAeq(8 h)) ≥ 78 dB] consisted of 111 pregnant women, and the reference group comprised 181 pregnant women with approximately similar work conditions but without noise exposure. The noise-exposed women had more frequently other inconveniences in their work, however, like shift work, impulse noise exposure, vibration, and a high or low temperature.
RESULTS
With the limit of 78 dB (LAeq(8 h)), the course and outcome of pregnancy did not differ between the groups. When the noise exposure was 90 dB (LAeq(8 h)) or more, a decline in birthweight, either absolute [mean 3304 (SD 585) g for the exposed versus mean (SD 548) g for the unexposed, 95% CI of mean difference -471 to +15 g] or related to the gestational age [below the 10th percentile: 5 of 25 (20%) versus 13 of 180 (7%)], was seen. These findings were more pronounced if the woman was simultaneously exposed to a standing work position or shift work.
CONCLUSIONS
Working under high noise exposure can be considered a form of risk during pregnancy.
especially airport noise, with preterm birth, low birthweight, and malformations, but the association with malformations has not always been confirmed (7)(8)(9)(10)(11). The association of occupational noise exposure with low birthweight and preterm delivery is somewhat controversial on the basis of two recently published articles (12, 13).
Our prospective cohort study was undertaken to evaluate the effect of occupational noise exposure on the course and outcome of pregnancy, especially on maternal blood pressure, preterm birth, birthweight, and malformations.
Subjects and methods
The subjects were enrolled from workplaces with noise exposure in the provinces of Oulu, Lapland, and Häme in Finland. Occupational health officers were informed about the research, and they in turn informed the women in the workplaces. Enrollment took place between April 1983 and December 1987. A measured 8-h equivalent continuous A-weighted sound level (LAeq(8 h)) of ≥ 78 dB was selected as the criterion for noise exposure for a woman to be regarded as an exposed subject.
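For reference, and as a standard definition rather than an equation reproduced from the article, the 8-h equivalent continuous A-weighted sound level used as the exposure metric is

$$ L_{Aeq,8\,\mathrm{h}} = 10 \log_{10}\!\left[\frac{1}{T}\int_{0}^{T}\left(\frac{p_{A}(t)}{p_{0}}\right)^{2} dt\right] \mathrm{dB}, \qquad T = 8\ \mathrm{h}, $$

where $p_A(t)$ is the A-weighted sound pressure and $p_0 = 20\ \mu\mathrm{Pa}$ is the reference sound pressure.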
The women contacted the research group voluntarily at the beginning of pregnancy, and the group in turn contacted the respective maternity health center.
The nurse in the maternity health center selected one to three unexposed mothers as referents for each exposed mother, matched by age (± 3 years) and parity (1 = nullipara, 2 = primipara-tripara, 3 = quadripara or more) according to the instructions of the research group. Their work conditions were also to be as similar as possible but without noise exposure.
The unexposed group was somewhat younger [mean 27.7 (SD 5.3) years] than the exposed group [mean 26.5 (SD 5.3) years]. Nulliparous subjects made up 45% of the unexposed group, parity being, on the average, higher among the exposed group. Social class, as judged according to the husband's occupational status, and according to the women's own status in the case of unmarried women, tended to be somewhat lower among the exposed women. The two groups did not differ with regard to their obstetric history (ie, spontaneous or induced abortions, earlier preterm deliveries, and malformed or stillborn infants). The prevalence of chronic diseases possibly influencing the course of the current pregnancy was also similar. The groups did not differ as to their reported drinking and smoking habits (table 1).
The final population consisted of 111 exposed and 181 unexposed pregnant women. Fifty-three of the exposed women had one referent, 46 had two referents, and 12 had three referents. There were problems in finding sufficiently concordant referents. All of the mothers were monitored in the same way during pregnancy. The data on the course of pregnancy were obtained from the maternity health centers and also from the maternity outpatient clinics of the hospitals. The data on the deliveries and the neonates were collected from the hospital records. The occupational health officer at each workplace filled out a questionnaire concerning the work conditions and measured exposures and work loads, and these data were checked at the Oulu Regional Institute of Occupational Health. After delivery, the women were also asked to answer a postal questionnaire with regard to daily habits and social conditions. Six exposed and four unexposed mothers did not respond to the questionnaire.
The occupations of the exposed and unexposed women are shown in table 2 according to the standard industrial classification of occupations (16). The largest occupational groups among the exposed women were processing of food (N = 19) and textiles (N = 55), and among the unexposed women the largest corresponding groups were the processing of textiles (N = 23), retail trade (N = 31), restaurants and hotels (N = 17), and public services (N = 27).
Almost all of the women in the exposed group (110 of 111) were manual workers in terms of occupational status, the remaining one being of lower-grade staff status. In the unexposed group 68% were manual workers and 31% were lower-grade staff. One person in the unexposed group was self-employed (17). Although there were more lower-grade staff in the unexposed group, the actual types of work were similar in the two groups.
The average time elapsing before the first contact with the maternity health center was 10.9 (SD 2.5) gestational weeks in the exposed group and 10.4 (SD 2.2) weeks in the unexposed group. The mean number of contacts with the maternity health center was 14.2 (SD 3.9) versus 14.6 (SD 3.1), respectively.
The mean height of the women was similar for the exposed [163.7 (SD 5.3) cm] and unexposed [163.5 (SD 5.1) cm] groups, whereas the exposed women were somewhat heavier at the beginning of pregnancy, 62.3 (SD 7.9) kg versus 61.0 (SD 9.4) kg. There was a negligible difference in mean weight gain during pregnancy between the exposed and unexposed women [mean 12.3 (SD 4.3) kg versus 12.8 (SD 4.3) kg], but not in the mean hemoglobin concentration at the beginning of pregnancy or at the last examination.
Table 1. Background characteristics of the noise-exposed and unexposed women.
Table 3. Work load and exposures in the noise-exposed and unexposed groups.
The work loads and exposures were classified according to the principles used by the Oulu Regional Institute of Occupational Health (14). Impulse noise exposure was not measured, but it was classified into three levels (none, moderate, considerable) according to the reports of the occupational health officers.
The measured work loads and exposures are presented in table 3. The women's own opinions regarding their work loads differed from those of the occupational health officers. The physical work load, for example, was judged to be heavy by 32% of the exposed subjects and 24% of the unexposed ones, and, correspondingly, a heavy mental load was reported by 14 and 9%, respectively, these figures being substantially higher than those given by the occupational health officers.
Blood pressure was measured at every visit to the maternity health center and in the hospital. The means of the systolic and diastolic blood pressures were calculated separately for each trimester of pregnancy.
The exposed and unexposed women were compared with respect to the outcome variables with the use of joint stratification by age and parity. These comparisons were also performed separately for certain possible confounding or modifying work conditions, such as vibration, a standing work position and shift work. The contrast with the unexposed women was further evaluated in subgroups of the exposed subjects defined by noise level [low dose = noise exposure < 90 dB LAeq(8 h) and high dose = noise exposure ≥ 90 dB LAeq(8 h)] and by the presence or absence of impulse noise.
The means and standard deviations for continuous outcomes (diastolic blood pressure, birthweight, and height) were calculated in the appropriate groups. The adjusted mean difference between the exposed and unexposed subjects was calculated as a precision-maximizing weighted average of the stratum-specific differences, on the assumption of a constant error variance over the strata. Counts and percentages were obtained for the occurrence of preterm birth, low birthweight for gestational age [below the 10th percentile (15)], malformations, and care at a neonatal unit. No weighted summary estimates for the differences in the proportions were calculated, as the data became too sparse after stratification.
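As an illustration of the adjustment described above, a precision-weighted average of stratum-specific differences can be computed as an inverse-variance weighted mean; the stratum data below are invented, and reading "precision-maximizing" as inverse-variance weighting is our interpretation rather than code from the study.

# Hypothetical sketch: precision-weighted (inverse-variance) average of
# stratum-specific mean differences, with a normal-approximation 95% CI.
# Strata stand for age-parity cells; all numbers are made up for illustration.
import math

# (difference of group means in grams, n_exposed, n_unexposed, pooled SD)
strata = [
    (-150.0, 10, 25, 560.0),
    (-310.0, 8, 20, 540.0),
    (-90.0, 7, 15, 580.0),
]

num, den = 0.0, 0.0
for diff, n1, n2, sd in strata:
    var = sd**2 * (1.0 / n1 + 1.0 / n2)   # variance of the stratum difference
    w = 1.0 / var                          # precision weight
    num += w * diff
    den += w

adjusted = num / den
se = math.sqrt(1.0 / den)
print(f"adjusted difference: {adjusted:.0f} g "
      f"(95% CI {adjusted - 1.96 * se:.0f} to {adjusted + 1.96 * se:.0f} g)")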
Results
Thirty-two percent of the exposed women were subjected to considerable impulse noise in their work, as were two women in the unexposed group, although their 8-h equivalent continuous A-weighted sound level was still below 78 dB. The women exposed to considerable impulse noise (N = 35) more often worked in a standing position than the other noise-exposed women (74 versus 56%).
Forty-eight percent of the women in the impulse-noise group were exposed to noise of more than 89 dB LAeq(8 h) versus 14% of the other exposed women, and 60% of the impulse-noise group were also exposed to vibration, as compared with 14% of the others. Altogether 51% of the noise-exposed women reported using hearing protectors for over 80% of their worktime, and 39% of them for over 95%. All except one woman working with noise exposure exceeding 89 dB reported the time spent using protectors to be over 95%. Nine percent of the exposed women stated that they did not consider their work to involve exposure to noise. Seventy women in the exposed group had been working in noise for at least three years.
None of the women in the exposed or unexposed group had chronic arterial hypertension, and the mean systolic and diastolic blood pressures did not differ between the groups, either in the overall comparison or when stratified separately by the presence or absence of given work conditions (ie, vibration, standing position or shift work) (table 4). Similarly, no differences were found when the exposed women were further subdivided by the presence of impulse noise and the level of noise (table 5). Antihypertensive medication during pregnancy was prescribed for only two women in each group, and sick leave on account of elevated blood pressure was prescribed for four in the exposed group and 10 in the unexposed group (4 versus 6%).
The numbers of women admitted to a prenatal hospital ward and the main reasons for admittance are presented in table 6. The mean number of days spent in a prenatal ward did not differ greatly between the groups [7.2 (SD 11.0) d versus 5.6 (SD 5.1) d]. The mean duration of sick leave was 3.7 (SD 4.3) weeks in the exposed group and 3.3 (SD 3.9) weeks in the unexposed group, and the mean worktime during pregnancy was 29.6 (SD 5.8) and 29.9 (SD 5.5) weeks, respectively.
The various outcomes of pregnancy are summarized in table 7. The mean gestational week of delivery [39.1 (SD 2.1) versus 39.2 (SD 1.7)] and the number of preterm deliveries were equal in the two groups. However, four of the five preterm deliveries among the exposed mothers occurred in the high-noise group; in other words, 16% (4 of 25) of this particular exposure group had a preterm delivery. These four individuals were also exposed to vibration and a standing position in their work, and three of them also to impulse noise and shift work.
There were no differences between the groups in the prevalence of low birthweight for gestational age (below the 10th percentile), mortality (table 8), or mean birthweight of the neonates (table 9). Birthweight was not systematically related to the other work conditions (table 9) or to impulse noise (table 10), but the mean birthweight was, on the average, 0.2-0.3 kg lower in the group experiencing a high noise level than among the referents or those exposed to a lower level of noise, and this contrast became more pronounced among the exposed women in this group who also had a standing work position or were engaged in shift work. The prevalence of low birthweight for gestational age was also higher in the high-noise group (5 of 25) than in the reference group (13 of 180; difference +13 percentage points, 95% CI -3 to +29) or the low-noise group (4 of 82; difference +15 percentage points, 95% CI -1 to +32). The prevalence of low birthweight was no higher in the impulse-noise group, nor was it systematically related to the other work conditions. The infants of mothers belonging to the high-noise group more commonly needed care at a neonatal unit than those in the low-noise group (5 of 25 versus 11 of 83). There were nine malformed infants in the exposed group. Most of the malformations among the exposed cases (8 of 9) occurred in the low-noise group, one being an autosomally recessively inherited defect. When this case was excluded, the prevalence difference between the exposed groups was 5% (95% CI -1 to +10). The only perinatal death, because of an intrauterine infection, occurred for an exposed mother (table 8).
Table 4. Diastolic blood pressure in the third trimester of the women exposed to occupational noise and of the unexposed referents. Means, standard deviations (SD), the numbers of women (N) in the groups, and the adjusted difference (stratified by age and parity) in group means with a 95% confidence interval (95% CI). Overall comparison and subdivided separately by the presence of vibration, standing position, or shift work.
Table 5. Diastolic blood pressure in the third trimester of the women exposed to occupational noise, subdivided separately by the presence of impulse noise and level of noise. Means, standard deviations (SD), the numbers of women (N) in the groups, and the adjusted difference (stratified by age and parity) of the group mean to that of the unexposed women with a 95% confidence interval (95% CI).
Table 6. Admissions of the exposed and the unexposed women to a perinatal ward.
Table 7. Outcomes of pregnancy of the noise-exposed and unexposed women.
Table 8. Data on the newborns of the noise-exposed mothers (N = 108) and the unexposed mothers.
Table 9. Birthweight of the infants born to the women exposed to occupational noise and to the unexposed women. Means, standard deviations (SD), the numbers of women (N) in the groups, and the adjusted difference (stratified by age and parity) of the group means with a 95% confidence interval (95% CI), overall and subdivided separately by vibration, standing position, and shift work.
Table 10. Birthweight of the infants born to women exposed to occupational noise, subdivided separately by the presence of impulse noise and level of noise. Means, standard deviations (SD), the numbers of women in the groups, and the adjusted difference (stratified by age and parity) of the group mean to that of the unexposed women with a 95% confidence interval (95% CI).
Discussion
The occupational activity of Finnish women is high, and about 78% of women work outside the home during pregnancy (12). In an earlier report on occupational noise exposure during pregnancy (12), we ascertained that only 3.5% of the mothers were exposed to noise if the limit was set at 81 dB, and another report from Finland has set the corresponding percentage at 2.9% if the limit is 85 dB or more (11). Data on work conditions already exist since occupational health care is prescribed by law in Finland, but, for more accurate data on the work conditions of the subjects in our study, the work loads and exposures of the women were primarily obtained from the occupational health care system, which informed the women about the research. The enrollment of the subjects proceeded slowly, because of the obvious reluctance of some employers.
As suggested by our earlier study, it could be assumed that noise would hardly be a major hazard affecting the course and outcome of pregnancy (12). To control some well-known factors influencing the outcome of pregnancy, such as maternal age and parity, we aimed at matching the unexposed women in this respect. In addition to other effects, women with children are more likely to terminate their employment than childless ones (18). The matching by type of occupation also seemed to control the social class of the women well. The other base-line characteristics of the exposed and unexposed women were comparable. For example, there was an excess of women smoking in both groups (over 30%) relative to the figure of 15% reported in Finnish perinatal statistics (19). Maternity care is equally available to all women in Finland, and is free of charge to every woman, so that 99.8% of pregnant women use these services (19). The use of these services was comparable to the average figures for the whole country in both groups (19). Statutory maternity leave begins on the 36th gestational week, and paid sick leave can be obtained earlier for medical reasons. The mean worktime during pregnancy was the same in the exposed and unexposed groups, about 29 weeks.
The matching by occupational status was not perfect in that almost all of the exposed women were manual workers, whereas one-third of the women in the unexposed group were of lower-grade staff. Still, the percentages of women with heavy physical and mental loads and a standing work position were comparable. The noise-exposed women more frequently had other inconveniences in their work, like shift work, impulse noise, vibration, and a high or low ambient temperature, and these conditions were more prevalent at higher levels of noise exposure. On the other hand, there was a clear difference between the information given by the women themselves and the health officers. A heavy physical load, for example, was reported by 32% of the exposed subjects themselves, in contrast to the figure of 6% given by the health officers.
Elevated blood pressure has been connected with noise exposure, although the results are ambiguous (3)(4)(5)(6). In our earlier study of experimental noise exposure during normotensive and hypertensive pregnancy, we could not find any effect of noise on blood pressure levels (20). Nurminen & Kurppa (13) reported that pregnancy-induced hypertension was not associated with noise exposure alone but that, upon additional strain caused by shift work, the pregnant women exposed to noise at a level of about 85 dB LAeq(8 h) or higher had a distinctly elevated risk of pregnancy hypertension. Similarly, shift work alone was not related to this complication of pregnancy. The present survey similarly did not detect any association between occupational noise exposure and hypertension in pregnancy.
There were no differences in the number of preterm deliveries between the groups. The mean gestational week at delivery over the whole country was 39.7, and the proportion of preterm deliveries (<37 weeks) was 5.2% (19), figures which are very close to the present ones. The mean birthweights of the infants of the groups did not differ significantly, and they were only a little lower than that for the whole country [3550 (SD 582) g (19)]. The prevalence of low birthweight for gestational age was also similar to that for the whole country. The only difference between the exposed and unexposed groups as a whole was seen in the occurrence of congenital malformations, but this contrast was statistically nonsignificant due to the small numbers.
When noise exposure rose to 90 dB (LAeq(8 h)) or more, there was no difference in the systolic or diastolic blood pressure, although a lower than average birthweight, either in absolute terms or in relation to gestational age, was observed, albeit with rather wide confidence intervals. The neonates also needed observation at the neonatal unit more often. These findings were more pronounced for women simultaneously exposed to a standing work position or shift work. Four women out of twenty-five in this exposure group (16%) had a preterm delivery, but the effect of noise on this complication was impossible to distinguish from other coincident exposures associated with preterm birth. The noise exposure level was not associated with malformations.
In conclusion, it can be stated that high noise levels can have an independent effect on birthweight and they may be associated with preterm delivery, although the situation may be alleviated somewhat in our country by the opportunities for obtaining sick leave. With respect to noise-induced occupational hearing loss, 39% of the women reported an adequate use of hearing protectors, and it can also be assumed that these protectors had some effect on our results. Only a minority of women in our country are exposed to high noise levels in general or during pregnancy. On the other hand, high noise levels are often associated with other untoward conditions, and therefore they should perhaps be considered a form of occupational risk during pregnancy after all. | 2018-04-03T03:42:07.390Z | 1994-12-01T00:00:00.000 | {
"year": 1994,
"sha1": "bac643336e5caeb01743225c99729ac401c5625e",
"oa_license": "CCBY",
"oa_url": "https://www.sjweh.fi/download.php?abstract_id=1376&file_nro=1",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "5935111b38db927394f1b741252de1472d1e7ebf",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
118888363 | pes2o/s2orc | v3-fos-license | Double resonance of Raman transitions in a degenerate Fermi gas
We measure momentum-resolved Raman spectra of a spin-polarized degenerate Fermi gas of $^{173}$Yb atoms for a wide range of magnetic fields, where the atoms are irradiated by a pair of counterpropagating Raman laser beams as in the conventional spin-orbit coupling scheme. Double resonance of first- and second-order Raman transitions occurs at a certain magnetic field and the spectrum exhibits a doublet splitting for high laser intensities. The measured spectral splitting is quantitatively accounted for by the Autler-Townes effect. We show that our measurement results are consistent with the spinful band structure of a Fermi gas in the spatially oscillating effective magnetic field generated by the Raman laser fields.
I. INTRODUCTION
Spin-orbit coupling (SOC) intertwines the motional degrees of freedom of a system with its spin part, giving rise to many intriguing phenomena such as the atomic fine structure, the spin Hall effect [1,2] and topological insulators [3]. In ultracold atom experiments, SOC has been realized using Raman laser dressing techniques [4][5][6], where a two-photon Raman transition couples two different spin-momentum states. This optical method was successfully applied to many fermionic atom systems [7][8][9][10] and recently extended to two dimensions [11], boosting the interest in exploring new exotic SOC-driven many-body phenomena [6,12,13].
Alkaline-earth-like atoms with two valence electrons such as ytterbium and strontium provide a beneficial setting for studying SOC physics. Their transition linewidth is narrow in comparison to the hyperfine structure splitting, which is helpful to alleviate the unavoidable heating effect due to light-induced spontaneous scattering under the Raman dressing [6,9] and also to generate spin-dependent optical coupling to the hyperfine ground states. Furthermore, as recently demonstrated with 173Yb atoms [14,15], the interorbital interactions between the ^1S_0 and ^3P_0 states can be tuned via a so-called orbital Feshbach resonance [16], which would broaden the research scope of the SOC physics with alkaline-earth-like atoms.
In this paper, we present momentum-resolved Raman spectra of a spin-polarized degenerate Fermi gas of 173Yb atoms, which are measured in the Raman laser configuration of the conventional SOC scheme. In particular, we measure the Raman spectra over a wide range of magnetic fields as well as laser intensities to investigate the interplay of multiple Raman transitions in the SOC scheme. We observe that two Raman transitions become simultaneously resonant at a certain magnetic field and a doublet structure develops in the spectrum for strong Raman laser intensities. We find that the spectral splitting at the double resonance is quantitatively accounted for by the Autler-Townes doublet effect [17].
In the conventional SOC scheme, since one of the Raman laser beams has both σ + and σ − polarization components with respect to the quantization axis defined by the magnetic field, the Raman transition from one spin state to another, if any, can be made to impart momentum in either direction along the relative Raman beam propagation axis. In typical SOC experiments, the system parameters are set to make one of the transitions energetically unfavorable such that it can be ignored, but the double resonance observed in this work results from involving both of the Raman transitions. When all the Raman transitions are taken into account, the effect of the Raman laser fields is represented by a spatially oscillating effective magnetic field [18]. We show that our measurement results are consistent with the spinful band structure of the Fermi gas under the effective magnetic field.
The paper is organized as follows. In Sec. II, we describe our experimental apparatus and procedures for sample preparation and Raman spectroscopy. In Sec. III, we present the Raman spectra measured for various conditions and the observation of the spectral doublet splitting at the double resonance. In Sec. IV, we discuss the results in the perspective of the spinful band structure of the SO-coupled Fermi gas. Finally, a summary and outlooks are provided in Sec. V.

II. EXPERIMENTS

Figure 1(a) shows the schematic diagram of our experimental apparatus for generating a degenerate Fermi gas of 173Yb atoms [19]. We first collect ytterbium atoms with a Zeeman slower and a magneto-optical trap (MOT). For the slowing light, we use a 399 nm laser beam that has a dark spot at its center to suppress the detrimental scattering effect on atoms in the MOT. The frequency modulation method is adopted for the 556 nm MOT beams to increase the trapping volume and capture velocity of the MOT [20]. As a result, more than 10^8 atoms are collected in the MOT within 15 s. We transfer the atoms into an optical dipole trap (ODT) formed by a focused 1070 nm laser beam, where the transfer efficiency is ≈ 13%. Then, we transport the atoms by moving the ODT to a small appendant chamber which provides better optical access and allows high magnetic field application, and we generate a crossed ODT by superposing a focused 532 nm laser beam horizontally with the 1070 nm ODT. After evaporative cooling, we obtain a quantum degenerate sample in the F = 5/2 hyperfine ground state. For an equal mixture of the six spin components, the total atom number is N ≈ 1.0 × 10^5 and the temperature is T/T_F ≈ 0.1, where T_F is the Fermi temperature of the trapped sample. The spin composition of the sample can be manipulated during evaporative cooling with optical pumping or removal of spin states by resonant light. For the case of a fully spin-polarized sample in the m_F = −5/2 state, N ≈ 1.2 × 10^5 and T/T_F ≈ 0.35. The trapping frequencies of the crossed ODT are (ω_r, ω_z) = 2π × (52, 450) Hz at the end of the sample preparation.
The setup for Raman spectroscopy is illustrated in Fig. 2(a). A pair of counter-propagating laser beams are irradiated on the sample in the x direction and an external magnetic field B is applied in the z direction. The two laser beams are linearly polarized in the y and z directions, respectively. With respect to the quantization axis defined by the magnetic field in the z direction, Raman beam 1 with linear y polarization has both σ+ and σ− components and Raman beam 2 with linear z polarization has a π component. Thus, a two-photon Raman process, e.g., imparting momentum of +2ℏk_R along x by absorbing a photon from Raman beam 1 and emitting a photon into Raman beam 2, changes the spin number by either +1 or −1, where k_R is the wavenumber of the Raman beams. This is the conventional Raman laser configuration for SOC in cold atom experiments [4,7,8,10]. The Raman lasers are blue-detuned by 1.97 GHz from the |^1S_0, F = 5/2⟩ to |^3P_1, F = 7/2⟩ transition [Fig. 2(b)]. This laser detuning, set between the hyperfine states of ^3P_1, is beneficial to induce spin-dependent transition strengths for the F = 5/2 hyperfine spin states [23,24]. The frequency difference of the two Raman beams is denoted by δω [Fig. 2(a)]. The two beams are set to the same power P and focused onto the sample. Their 1/e^2 intensity radii are ≈ 150 µm, which is much larger than the trapped sample size of 30 µm. We assume that the laser intensities are uniform over the sample.
Raman spectroscopy is performed by applying a pulse of the Raman beams and taking a time-of-flight absorption image of the sample. The image is taken at B = 0 G along the z-axis with a linearly polarized probe beam resonant to the ^1S_0 → ^1P_1 transition. Two exemplary images are shown in Fig. 2(c) and 2(d), showing that atoms are scattered out of the original sample with different momenta for different δω. Since the expansion time τ is sufficiently long such that ω_r τ ≈ 5, we interpret the time-of-flight image as the momentum distribution of the atoms. The 1D momentum distribution n(k_x) is obtained by integrating the image along the y direction [Fig. 2(e) and 2(f)], where k_x = mx/(ℏτ) with m being the atomic mass and x the displacement from the center of mass of an unperturbed sample. In our imaging, the absorption coefficient for each spin state was found to vary slightly, within ≈ 10% [Fig. 1(b)], which we ignored in the determination of n(k_x).
The normalized Raman spectrum is obtained by comparing n(k_x) with a reference distribution n_ref(k_x) measured without applying the Raman beams. In the spectrum, a momentum-imparting Raman transition appears as a pair of a dip and a peak, which correspond to the initial and final momenta of the transition, respectively. We observe that the spectral peaks and dips exhibit slightly asymmetric shapes, which we attribute to elastic collisions of atoms during the time-of-flight expansion [25]. The Fermi momentum of the sample is k_F/k_R ≈ 1.2 in units of the recoil momentum.
III. RESULTS
The atomic state in an ideal Fermi gas is specified by wavenumber k and spin number m_F, and its energy level E(|k, m_F⟩) is given by Eq. (2). The first term is the kinetic energy of the atom and the second term is the Zeeman energy due to the external magnetic field B, where g_F is the Landé g-factor and µ_B is the Bohr magneton. The last term E_S denotes the spin-dependent ac Stark shift induced by the Raman lasers. For a Raman transition from |k_i, m_i⟩ to |k_f, m_f⟩ which changes the momentum by 2rℏk_R and the spin number by ∆m_F, energy conservation requires E(|k_f, m_f⟩) − E(|k_i, m_i⟩) = rℏδω, which gives the resonance condition, Eq. (3), for the initial wavenumber k_i, where E_R = (ℏk_R)^2/2m = h × 3.7 kHz is the atomic recoil energy, B_R = E_R/(g_F µ_B) = 17.9 G and ∆E_S = E_S(m_f) − E_S(m_i). Here we neglect the quadratic Zeeman effect and the atomic interactions, which are negligible in our experimental conditions. We first investigate the resonance condition of Eq. (3) by measuring its dependence on various experimental parameters. Figure 3(a) shows a Raman spectrum measured by scanning the Raman beam pulse duration for δω = 4E_R/ℏ at B = 16.6 G. Spin-polarized samples were used and both of the Raman beams were set to linear z polarization to make sure ∆m_F = 0. Momentum-dependent Rabi oscillations are clearly observed and the Rabi frequency is found to be well described by Ω(k) = [Ω_0^2 + (ℏk_R k/m)^2]^{1/2} with Ω_0 ≈ 2π × 7 kHz. The decoherence time is measured to be ≈ 1 ms, which seems to be understandable with the characteristic time scale for momentum dephasing in the trap, π/(2ω_r) ≈ 5 ms. In the following, we set the pulse duration of the Raman beam to 2 ms, which is long enough to study the steady state of the system under the Raman laser dressing. Figure 3(b) displays a spectrum of the equal mixture sample in the plane of wavenumber k and frequency difference δω. Here, B = 0 G and the Zeeman effect is absent in the measurement. The r = 1 and r = 2 transitions are identified in the spectrum with their spectral slope of dk/dδω = ℏk_R/(4E_R) and different offsets, as predicted by Eq. (3). The (k, δω) ↔ (−k, −δω) symmetry of the spectrum indicates that the differential ac Stark shift is negligible in the measurement. Figure 3(c) shows the Raman spectrum of the m_F = −5/2 spin-polarized sample over a range of magnetic fields from B = 100 G to 195 G for δω = 13.4E_R/ℏ. In the spectral plane of k and B, the Raman transition with (r, ∆m_F) = (1, 1) appears as a line having the slope dk/dB = −k_R/(4B_R), as expected from Eq. (3). A linear spectral shift is observed with increasing Raman beam power P [Fig. 3(d)], which demonstrates the effect of the differential ac Stark shift ∆E_S. In our experiment, ∆E_S = E_S(−3/2) − E_S(−5/2) ≈ 1.2 E_R for P = 1 mW. This is in good agreement with the Raman beam intensities estimated from the Rabi oscillation frequency Ω_+ ∝ √(I_σ I_π), where I_σ,π are the intensities of Raman beams 1 and 2, respectively. The comparison of ∆E_S and Ω_+ suggests I_π = 0.6 I_σ, which we attribute to a slight mismatch of the beam waists.
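The displayed equations (2) and (3) referenced above are not reproduced in the text. A plausible reconstruction, consistent with the definitions of E_R and B_R given in the text and with the quoted spectral slopes dk/dδω = ℏk_R/(4E_R) and dk/dB = −k_R/(4B_R) for the (r, ∆m_F) = (1, 1) transition, is

$$ E(|k, m_F\rangle) = \frac{\hbar^2 k^2}{2m} + g_F \mu_B m_F B + E_S(m_F), \tag{2} $$

$$ \frac{k_i}{k_R} = \frac{\hbar\,\delta\omega}{4E_R} - r - \frac{\Delta m_F}{4r}\frac{B}{B_R} - \frac{\Delta E_S}{4rE_R}. \tag{3} $$

This form also reproduces the later statements that, for δω = 8E_R/ℏ, atoms with k_x = 0 are resonant for the (2, 0) transition and that the (1, 1) and (2, 0) transitions cross at B = 4B_R.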
Next we investigate a situation where one spin-momentum state is resonantly coupled to two final states simultaneously, which we refer to as a double resonance. When the two corresponding Raman processes are characterized by (r_1, ∆m_F1) and (r_2, ∆m_F2), we see from Eq. (3), neglecting the small ∆E_S term, that the double resonance occurs where the two resonance lines intersect in the k–B plane; for the primary transition with (r_1, ∆m_F1) = (1, 1) and (r_2, ∆m_F2) = (2, 0), this intersection occurs at B = 4B_R. To observe the double resonance of the (r, ∆m_F) = (1, 1) and (2, 0) transitions at B = 4B_R ≈ 72 G, we measure the Raman spectra of the spin-polarized sample in the k–B plane over a range from B = 0 G to 140 G [Fig. 4]. Here we set δω = 8E_R/ħ to have k_x = 0 atoms on resonance for the (2, 0) transition, which is insensitive to B for ∆m_F = 0. For low P, the (1, 1) transition appears with the spectral slope of −k_R/(4B_R), as observed in Fig. 3(c), and the double resonance is indicated by a small signal at (k, B) = (4k_R, 4B_R) [Fig. 4(a)]. This is understood as enhancement of the second-order Raman transition from |k = 0, −5/2⟩ to |k = 4k_R, −5/2⟩ due to its intermediate state |k = 2k_R, −3/2⟩ being resonant. When the Raman beam power increases, we observe the development of a spectral splitting at the resonance [Figs. 4(b) and 4(c)]. The overall pattern of the high-P spectrum shows the avoided crossing of the spectral lines corresponding to the two (1, 1) and (2, 0) transitions.
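As a quick cross-check of the quoted double-resonance fields, the sketch below intersects two resonance lines k_i(B) at fixed δω = 8E_R/ħ. It reuses the same reconstructed linear form as the previous sketch, so it is again an assumption rather than the paper's formula.

```python
# Hedged sketch: intersecting two reconstructed resonance lines k_i(B) at fixed detuning.
from scipy.optimize import brentq

B_R = 17.9  # gauss

def k_line(B, r, delta_mF, delta_omega=8.0):
    """Resonant k (units of k_R) of an (r, delta_mF) transition at field B (reconstructed form)."""
    return delta_omega / 4.0 - r - delta_mF * (B / B_R) / (4.0 * r)

# (1, 1) crossing (2, 0), and (1, 3) crossing (2, 0):
B_11_20 = brentq(lambda B: k_line(B, 1, 1) - k_line(B, 2, 0), 1.0, 200.0)
B_13_20 = brentq(lambda B: k_line(B, 1, 3) - k_line(B, 2, 0), 1.0, 200.0)
print(B_11_20, B_11_20 / B_R)   # ~71.6 G = 4 B_R
print(B_13_20, B_13_20 / B_R)   # ~23.9 G = (4/3) B_R
```

Both crossings land at the fields quoted in the text (≈ 72 G and ≈ 24 G), which is a useful sanity check on the reconstructed condition.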
Near the double resonance, the system can be considered as a three-level system consisting of |k, −5/2⟩, |k + 2k_R, −3/2⟩ and |k + 4k_R, −5/2⟩ [Fig. 5(a)]. For simplicity, we denote them by |0⟩, |1⟩, and |2⟩, respectively. Since the Raman transition between |0⟩ and |1⟩ involves the σ+ component of Raman beam 1 but that between |1⟩ and |2⟩ involves the σ− component, the coupling strengths Ω_+ and Ω_− of the two transitions, respectively, can be different. In our case with 173Yb atoms in the m_F = −5/2 state, Ω_− = 5.3 Ω_+. Since the coupling between |1⟩ and |2⟩ is much stronger than that between |0⟩ and |1⟩, the observed spectral splitting with increasing Raman beam intensity can be described as an Autler-Townes doublet [17]: two dressed states |α⟩ and |β⟩ are formed from |1⟩ and |2⟩ under the strong coupling, and their energy level splitting is probed via Raman transitions from the initial |0⟩ state. In the rotating wave approximation, the energy levels of the two dressed states are E_α,β = (1/2)[E_1 + E_2 ± √((E_1 − E_2)² + (ħΩ_−)²)], where E_1 and E_2 are the bare energies of |1⟩ and |2⟩. The resonant wavenumbers k_α,β of the initial state |0⟩ are determined from E(|0⟩) = E_α,β, and for δω = 8E_R/ħ and B = 4B_R we obtain k_α,β = ±(k_R/(8√2)) ħΩ_−/E_R. We find our measurement results on the double resonance at B = 4B_R in good quantitative agreement with this estimate. The coupling strength Ω_− was separately measured from the Rabi oscillation data of the |0, −5/2⟩ → |−2k_R, −3/2⟩ transition for δω = −13.4E_R/ħ at B = 166 G.
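The Autler–Townes picture can also be checked numerically by diagonalizing the three-level block directly. The sketch below is illustrative only: the rotating-frame construction and the absolute coupling scale (ħΩ_+ set to 0.5 E_R) are assumptions, and only the ratio Ω_− = 5.3 Ω_+ and the working point (δω = 8E_R/ħ, B = 4B_R) are taken from the text.

```python
# Hedged sketch: dressed levels of the {|0>, |1>, |2>} block near the double resonance.
import numpy as np

E_R = 1.0                       # all energies in units of E_R
Omega_plus = 0.5 * E_R          # placeholder value of hbar*Omega_+ (assumption)
Omega_minus = 5.3 * Omega_plus  # ratio quoted in the text

def dressed_levels(k, delta_omega=8.0, B_over_BR=4.0):
    """Eigenvalues of the three-level rotating-frame Hamiltonian; k in units of k_R."""
    kinetic = lambda q: q**2                        # hbar^2 q^2 / 2m in units of E_R
    zeeman = B_over_BR                              # g_F*mu_B*B for delta_mF = +1, in units of E_R
    diag = [kinetic(k),                             # |0> = |k, -5/2>
            kinetic(k + 2) + zeeman - delta_omega,  # |1> = |k + 2k_R, -3/2>, one photon pair
            kinetic(k + 4) - 2 * delta_omega]       # |2> = |k + 4k_R, -5/2>, two photon pairs
    H = np.diag(diag).astype(float)
    H[0, 1] = H[1, 0] = Omega_plus / 2.0
    H[1, 2] = H[2, 1] = Omega_minus / 2.0
    return np.linalg.eigvalsh(H)

# At k = 0 all three diagonal entries vanish, and the strongly coupled pair |1>, |2>
# splits into an Autler-Townes doublet of width ~ hbar*Omega_-:
levels = dressed_levels(0.0)
print(levels, levels[-1] - levels[0])
```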
The Raman spectra in Fig. 4 reveal another double resonance at B = (4/3)B_R ≈ 24 G, where the (r, ∆m_F) = (2, 0) line crosses the (r, ∆m_F) = (1, 3) line. Although the (1, 3) transition is a third-order Raman transition, its spectral strength is observed to be higher than that of the (2, 0) transition. In the intermediate region of B ≈ 35 G, many Raman transitions are involved over the whole momentum space of the sample, and the spectral structure for high Raman laser intensity shows interesting features that cannot be simply explained as crossings and avoided crossings of the spectral lines. It might be necessary to take into account the ac Stark shift effect, and a further quantitative analysis of the Raman spectra will be discussed in future work.
IV. DISCUSSION
In the Raman laser dressing scheme described in Fig. 2(a), two couplings are allowed between the two spin states because the Raman beam whose linear polarization is orthogonal to the magnetic field contains both σ+ and σ− components. This means that a Raman transition from one spin state to the other can occur while imparting momentum in either the +x or the −x direction. In typical experimental conditions [4, 6-8], one of the couplings is resonantly dominant over the other, giving rise to a form of SOC with equal strengths of the Rashba and Dresselhaus contributions. However, when the Fermi sea of a sample covers a large momentum space, this approximation cannot be applied and it is necessary to include both Raman couplings for a full description of the system. Furthermore, as observed in the previous section, the two Raman couplings can be doubly resonant and play cooperative roles in the SOC physics of the system.
As an archetypal situation, we consider a spin-1/2 atom under the Raman dressing for δω = 0. Here, the counterpropagating Raman beams form a stationary polarization lattice with spatial periodicity of π/k_R. Including all the allowed Raman transitions, the effective Hamiltonian of the system is the sum of the kinetic energy and an effective Zeeman coupling (ħ/2)B·σ, where δ is the sum of the differential Zeeman and ac Stark shifts, σ_i are the 2 × 2 Pauli matrices, and Ω_x,y = Ω_+ ± Ω_−. This form shows that the Raman dressing is equivalent to an effective magnetic field B = (δ, Ω_x cos(2k_R x), Ω_y sin(2k_R x)), which has two parts: a bias field along the z axis and a spatially oscillating field in the x–y plane. Its chirality is determined by the sign of Ω_x Ω_y = Ω_+² − Ω_−². In the presence of the spatially oscillating magnetic field, the energy dispersion of the atom acquires a spinful band structure [Fig. 6]. Figure 6(b) displays the band structure for Ω_− = 5.3 Ω_+ and δ = 4E_R/ħ, which straightforwardly explains the observed spectral splitting at the double resonance. In the experiment, δω = 8E_R/ħ and the polarization lattice of the Raman beams moves in the lab frame with velocity +2ħk_R/m along x. Initially, the atoms in the trapped sample occupy the low quasi-momentum region of the second and third bands of the bare spin-down state, indicated by a gray region in Fig. 6(b), and they are projected onto the eigenstates of the spinful band structure via the Raman spectroscopy process. The quasi-momentum separation between the gap-opening positions, marked ∆k in Fig. 6(b), is the spectral splitting observed in our Raman spectrum. We note that ∆k = 0 in the symmetric case of Ω_− = Ω_+ [Fig. 6(e)], in which case the spectral splitting would not occur in the Raman spectrum.
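A minimal numerical illustration of such a spinful band structure is sketched below. It diagonalizes, in a truncated plane-wave basis, a kinetic term plus a periodic spin coupling built from Ω_x cos(2k_R x), Ω_y sin(2k_R x) and a bias δ along z. This is not the authors' code: the overall coupling scale is a placeholder, and which spin-flip direction carries Ω_+ versus Ω_− is a sign-convention choice; only the ratio Ω_− = 5.3 Ω_+ and δ = 4E_R follow Fig. 6(b).

```python
# Hedged sketch: spinful Bloch bands of a spin-1/2 atom in the periodic effective field.
import numpy as np

E_R = 1.0                                   # recoil energy sets the energy unit
Omega_plus = 0.4 * E_R                      # placeholder coupling scale (assumption)
Omega_minus = 5.3 * Omega_plus              # ratio from the text
delta = 4.0 * E_R                           # bias used for Fig. 6(b)
Ox = Omega_plus + Omega_minus               # Omega_x = Omega_+ + Omega_-
Oy = Omega_plus - Omega_minus               # Omega_y = Omega_+ - Omega_-

def bands(q, n_max=8):
    """Band energies at quasi-momentum q (units of k_R); lattice period pi/k_R, reciprocal vector 2 k_R."""
    ns = np.arange(-n_max, n_max + 1)
    dim = 2 * len(ns)                       # two spin components per plane wave
    H = np.zeros((dim, dim))
    for i, n in enumerate(ns):
        k = q + 2 * n                       # plane-wave momentum in units of k_R
        H[2 * i, 2 * i] = k**2 + delta / 2          # spin-up diagonal
        H[2 * i + 1, 2 * i + 1] = k**2 - delta / 2  # spin-down diagonal
    # The e^{+i 2 k_R x} Fourier component of the coupling flips the spin while shifting
    # n -> n + 1; its two matrix elements are (Ox - Oy)/4 = Omega_-/2 and (Ox + Oy)/4 = Omega_+/2
    # (the assignment to a particular spin-flip direction is a convention here).
    for i in range(len(ns) - 1):
        j = i + 1
        H[2 * j, 2 * i + 1] = H[2 * i + 1, 2 * j] = (Ox - Oy) / 4.0   # |down, n> <-> |up, n+1>
        H[2 * j + 1, 2 * i] = H[2 * i, 2 * j + 1] = (Ox + Oy) / 4.0   # |up, n> <-> |down, n+1>
    return np.linalg.eigvalsh(H)

qs = np.linspace(-1.0, 1.0, 101)                  # first Brillouin zone in units of k_R
spectrum = np.array([bands(q)[:6] for q in qs])   # lowest six spinful bands across the zone
print(spectrum.shape)                             # (101, 6); plot columns vs qs to see the gap openings
```

With Ω_− = Ω_+ the sin(2k_R x) term vanishes and the two gap openings coincide, reproducing the ∆k = 0 limit noted above.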
V. SUMMARY AND OUTLOOK
We have measured the Raman spectra of a spin-polarized degenerate Fermi gas of 173Yb atoms in the conventional SOC scheme and investigated the double resonance of Raman transitions. We observed the development of a spectral splitting at the double resonance of the (r, ∆m_F) = (1, 1) and (2, 0) transitions and provided its quantitative explanation as the Autler-Townes doublet effect. Finally, we discussed our results in the context of the spinful energy band structure under the Raman laser dressing.
In general, when the system has multiple SOC paths in its spin-momentum space, a spinful energy band structure is formed because of the periodicity imposed by them. In previous experiments [8,18], spinful band structures were designed and demonstrated by applying an RF field to the SO-coupled systems under the Raman laser dressing, where the role of the RF field was to open an additional coupling path between the two spin states. The results in this work highlight that the conventional Raman laser dressing scheme provides two ways of SOC and intrinsically generates a spinful band structure without the aid of an additional RF field. An interesting extension of this work is to investigate the magnetic ordering and properties of a Fermi gas in the spatially rotating magnetic field B. In the F = 5/2 173Yb system, the chirality of B can be controlled to some extent by the choice of the two spin states that are coupled by the Raman lasers. If the m_F = ±1/2 states are employed, Ω_y = 0 and B changes from an axial field to an alternating transverse field as a function of δ. In particular, when δ = 0, B = 0 points are placed periodically, which might profoundly affect the magnetic properties of the system. Engineering a flat spinful band structure, as discussed in Ref. [8], might also be pursued via proper tuning of the parameters of our system.
VI. ACKNOWLEDGMENTS
This work was supported by IBS-R009-D1 and the National Research Foundation of Korea (Grant No. 2014-H1A8A1021987). | 2017-03-31T13:03:56.000Z | 2017-03-01T00:00:00.000 | {
"year": 2017,
"sha1": "ccd5cd022288f66cf0b82878249f83a39c8fcbb2",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1703.00359",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "e8f2400db71eebae048d2cac57db23c1c2034d22",
"s2fieldsofstudy": [
"Physics",
"Chemistry"
],
"extfieldsofstudy": [
"Physics"
]
} |
77539691 | pes2o/s2orc | v3-fos-license | T-helper 1-type cytokines induce apoptosis and loss of HER-family oncodriver expression in murine and human breast cancer cells
A recent neoadjuvant vaccine trial for early breast cancer induced strong Th1 immunity against the HER-2 oncodriver, complete pathologic responses in 18% of subjects, and for many individuals, dramatically reduced HER-2 expression on residual disease. To explain these observations, we investigated actions of Th1 cytokines (TNF-α and IFN-γ) on murine and human breast cancer cell lines that varied in the surface expression of HER-family receptor tyrosine kinases. Breast cancer lines were broadly sensitive to the combination of IFN-γ and TNF-α, as evidenced by lower metabolic activity, lower proliferation, and enhanced apoptosis, and in some cases a reversible inhibition of surface expression of HER proteins. Apoptosis was accompanied by caspase-3 activation. Furthermore, the pharmacologic caspase-3 activator PAC-1 mimicked both the killing effects and HER-2-suppressive activities of Th1 cytokines, while a caspase 3/7 inhibitor could prevent cytokine-induced HER-2 loss. These studies demonstrate that many in vivo effects of vaccination (apparent tumor cell death and loss of HER-2 expression) could be replicated in vitro using only the principal Th1 cytokines. These results are consistent with the notion that IFN-γ and TNF-α work in concert to mediate many biological effects of therapeutic vaccination through the induction of a caspase 3-associated cellular death mechanism.
INTRODUCTION
The human epidermal growth factor receptor family comprises four known members (HER-1 through HER-4). They each perform a variety of normal physiological functions, but can also serve as oncodrivers in tumorigenesis [1,2]. Of these, HER-2 and HER-3 are particularly interdependent proteins, since by themselves they are functionally incomplete receptors. For example, HER-2 has no known ligand, while HER-3 lacks kinase activity; but as a heterodimer, they form a highly efficient functional unit that constitutes the most active signaling dimer in this family [3]. HER-2 overexpression in breast cancers is associated with invasiveness, poor prognosis and resistance to chemotherapy [4,5]. HER-3 also plays a critical role in tumor cell growth and proliferation in tumors driven by HER-2 over-expression [6,7]. HER-2 and HER-3 are therefore attractive targets for novel breast cancer treatments, both pharmacological and immunological.
A previous DC1-polarized dendritic cell-based neoadjuvant vaccine trial to treat early breast cancer (HER-2 pos ductal carcinoma in situ; DCIS) generated strong and durable Th1-polarized immunity against HER-2 [8,9].Of 27 evaluable patients, five had complete responses (i.e.no evidence of tumor at the time of surgery).In addition, within the patient subset with residual DCIS at the time of surgery, there was evidence of reductions in the area of disease for a number of individuals [10].Another interesting observation was that for about half of these remaining patients, levels of HER-2 expression decreased, often to nearly undetectable levels [8,10].These alterations in the tumors were accompanied by lymphocytic infiltrations into the diseased areas of the breast.Of T lymphocytes, CD4 pos Th cells made up the majority of infiltrating cells; CD8 pos CTLs on the other hand were relatively few.Substantial B-cell infiltrates were also observed.Taken together, these observations suggest an immune-mediated destruction and alteration of tumor cells in many of the vaccinated subjects.At the time of these studies, we interpreted the observed losses of HER-2 expression to be a special vaccine-induced form of the "immunoediting" phenomenon described previously by others [11,12]; HER-2-specific CTL were presumably destroying the HER-2-overexpressing cell population within heterogeneous tumor masses and thus selecting for a residuum of disease that was HER-2 low/neg and thus poor targets for continued destruction by the CTL.Although this appeared the most reasonable explanation at the time, there were several unsatisfying elements in this narrative.For example, this reasoning required tumor cell death to be mediated by MHC class I-restricted CTL, yet the CD8 pos infiltrates seemed somewhat sparse to account for all the changes to the tumors.In contrast, there were quite sizable infiltrates of CD4 pos cells.However these Th cells should not be able to recognize tumor cells directly, due to their MHC Class II-restricted nature, and besides most of the CD4 pos cells were observed to congregate just outside the diseased ducts [10] rather than intermingling with the tumor.
An alternate explanation would be that the observed infiltrating MHC class II-expressing B cells would present shed tumor antigens to the vaccine-induced anti-HER-2 CD4 pos Th1 cells, and these in turn would produce a soluble factor(s) that could diffuse into the tumor bed and mediate the observed biological effects on the tumor cells.This would allow the Th cells to affect the tumor without direct contact and recognition.But if secreted factors were involved, which ones could they be?The principle Th1 cytokines are IFN-γ and TNF-α, and there is considerable evidence that these can have effects on tumor cells [13,14].Indeed, a recent manuscript showed that the paired combination of IFN-γ and TNF-α could induce a state of permanent growth arrest in some murine and human cancer cells consistent with senescence [15].We therefore hypothesized that the combination of IFN-γ and TNF-α could replicate, in vitro, the observed anti-breast tumor effects of DC1 vaccination, including induced cell death with associated loss of HER-2 expression.We show in the present studies that the combination of these cytokines can indeed induce in a number of HER family-expressing murine and human breast cancer lines apoptosis, as well as a strong suppression of HER expression, both of which are associated with the activation of caspase-3.The demonstrated in vitro action of these paired cytokines can therefore account for most of the observed changes that occur in HER-2 pos DCIS as a consequence of Th1 immunity induced through polarized DC1 vaccination.
Th1 cytokines prevent growth of murine breast cancer lines
To study the effect of TNF-α and IFN-γ on murine rHER-2 pos breast cancer cells, TUBO and MMC15 lines were cultured in the presence of either or both cytokines for up to 96 hours.The rHER-2 neg 4T1 line was likewise tested for comparison.Initial studies assessed cell response to cytokines via the Alamar Blue assay, which measures metabolic activity of cells through reduction of the Alamar Blue dye, a change that can be followed spectrophotometrically.We found that both TUBO and MMC15 cell lines metabolized the alamar blue dye at comparable levels when left untreated, or treated with single cytokines (Figure 1A upper and middle panels).However, when treated with both IFN-γ and TNF-α, metabolic activity was dramatically suppressed (p < .001).For 4T1 cells, the differences in metabolic activity between untreated and dual cytokine treated cells were still statistically significant (p < .01),yet very small in magnitude, indicating a relative insensitivity to the cytokines (Figure 1A lower panel).
The Alamar Blue assay indicates differences in metabolic activity between treatment groups, but cannot distinguish whether contrasts are due to variances in aerobic respiration between groups of equally viable cells, differences in cell proliferation between groups over the culture interval, or differences caused by actual cell death.We therefore sought to gain information on cell viability through vital staining and direct microscopic observation of cultures.Cultured cells were treated as before, and observations made every 24 hours over the course of four days.Cultured cells were either harvested each day for Trypan Blue staining and counting, or were subjected to direct photomicroscopy in situ.Trypan blue staining of TUBO and MMC15 cells revealed that for untreated or single cytokine-treated groups, cell counts increased steadily over the course of four days (Figure 1B upper and middle panels).For dual cytokine-treated groups, however, no increases in number were seen for either MMC15 or TUBO cells over the course of 96 h.In contrast, 4T1 cells continued to grow vigorously under all conditions (Figure 1B lower panel), and although only small distinctions were apparent at 96 hours between untreated and dual cytokine-treated cells, the difference was nonetheless statistically significant (p < .01).Nonetheless, a dramatic contrast in sensitivity to Th1 cytokines was apparent for 4T1 cells compared with TUBO or MMC15 cells.
Microscopic examination of cultured cells told a similar story.At 24 hours, little difference could be distinguished between the four treatment groups for either TUBO or 4T1 cells (Figure 2 upper panels).Even at 48 and 72 hours, although cells were clearly not multiplying, only modest evidence of cell death was apparent by visual inspection (data not shown).However, by 96 hours, not only did TUBO cells treated with both IFN-γ and TNF-α show few adherent, viable cells, but many dead cells were also apparent (Figure 2 lower panels).Similar observations were made for MMC15 cells (data not shown).In contrast, no differences could be discerned between any treatment groups for 4T1 cells, confirming a relative lack of sensitivity to Th1 cytokines.
Differences in sensitivity to Th1 cytokines do not result from differential expression of cytokine receptors
We next wanted to rule out the possibility that the differences in sensitivity between TUBO and 4T1 lines could be explained simply by differential expression of Th1 cytokine receptors.We therefore stained both lines with fluorescently-labeled antibodies against IFN-γR1, TNF-αR1, or an isotype-matched control antibody, and analyzed the cells via flow cytometry (Supplementary Figure 1).We found that both cell lines displayed modest, yet comparable levels of cytokine receptors, indicating that the differences in sensitivity between the two lines cannot be explained by differential expression of cytokine receptors.
Th1 cytokines induce apoptotic cell death
To determine whether the effects of Th1 cytokines are due to induction of apoptosis, TUBO, MMC15 and 4T1 cells were once again cultured with no treatment, or exposed to single or dual Th1 cytokines. Cells were then harvested at 72 hours post-treatment and stained with FITC-AnnexinV and propidium iodide (PI), then subjected to flow cytometric analysis. These studies showed that TUBO and MMC15 cells treated with both IFN-γ and TNF-α displayed significantly greater populations of AnnexinV pos /PI pos (apoptotic) phenotype, as compared with untreated cells or single cytokine-treated cells (Figure 3A). On the other hand, 4T1 cells did not display significantly enhanced levels of AnnexinV pos /PI pos cells in response to Th1 cytokines, indicating insensitivity to cytokine-induced apoptosis.

Figure 1: TUBO, MMC15 (rHER-2 pos) and 4T1 (rHER-2 neg) cell lines were cultured for 96 hours in the presence of IFN-γ (12.5 ng/ml), TNF-α (1 ng/ml), both cytokines, or left untreated. (A) At the 96-hour point, 20 μl Alamar Blue dye (resazurin; 0.7 mg/ml stock solution) was added for 6 additional hours, after which optical densities of supernatants were read at 630 nm ( ** p ≤ .01). Results from 3 independent experiments +/− SEM. (B) Replicate wells were harvested after 24, 48, 72 and 96 hours of culture, and stained with Trypan Blue. Dye-excluding cells were enumerated microscopically with the aid of a hemocytometer ( ** p ≤ .01). Results from 3 independent experiments +/− SEM.
We confirmed the apoptotic nature of cytokine-induced cell death via the TUNEL assay, which detects DNA damage through enzyme-mediated repair with a biotin-labeled nucleotide analog, the incorporation of which can be detected using fluorescently-labeled streptavidin via flow cytometry. Here, TUBO and 4T1 cells were left untreated, or treated with dual Th1 cytokines for 96 hours, harvested, subjected to TUNEL labeling, and analyzed by flow cytometry (Figure 3B, upper panels). It was evident via histogram analysis that the incorporation of labeled nucleotide was increased for cytokine-treated TUBO cells (dark histogram traces) compared with untreated cells (light histogram traces). In contrast, for 4T1 cells, the fluorescent intensities for untreated and cytokine-treated cells were virtually identical, as evidenced by the overlapping nature of their respective histograms, indicating no differences in labeled nucleotide incorporation. Analysis of 3 independent experiments, including nuclease- and Actinomycin D-treated positive controls, showed that statistically significant (p < .0001) enhancements in labeling occurred only with cytokine-treated TUBO cells but not 4T1 cells (Figure 3B, lower panels), even though both lines showed enhanced incorporation of label after treatment with Actinomycin D. These studies provided strong evidence that Th1 cytokine-induced death was occurring through an apoptotic mechanism.
Th1 cytokines induce caspase-3 activation
To determine critical components of the underlying pathway for cellular apoptosis, we investigated the involvement of executioner caspases, which are intimately linked to most known downstream apoptotic processes. Caspases exist in inactive, high molecular weight proforms which must be cleaved proteolytically into active, lower molecular weight components. Both of these forms can be individually detected via Western blot analysis. Cultured TUBO and 4T1 cells were treated with dual Th1 cytokines or actinomycin D (positive control), or left untreated. After 5 hours, cells were harvested and extracted, and proteins were separated via SDS-PAGE and analyzed via Western blot for expression of pro-caspase 3 (32 kDa form), activated caspase-3 (17 kDa form) and β-actin (loading control). For TUBO cells, we found that dual Th1 cytokine treatment resulted in a statistically significant (p < .03) decrease in pro-caspase 3 levels, comparable to that induced by actinomycin D (Figure 4A), while levels of the active form correspondingly and significantly (p < .001) increased (Figure 4B). In contrast, we did not detect any cytokine-induced diminution of procaspase 3 in the insensitive 4T1 cells (Supplementary Figure 2), nor did we detect activation of other caspases, including caspase 1, caspase 6 and caspase 7, in TUBO cells (data not shown). These studies show that treatments that induce apoptosis in sensitive lines also activate the executioner caspase, caspase-3, thereby suggesting that it may have a role in apoptotic cell death induced by Th1 cytokines.

Figure 2: Cell lines were cultured in the presence of IFN-γ (12.5 ng/ml), TNF-α (1 ng/ml), both cytokines, or left untreated. Cells were subjected to photomicroscopy at 24 and 96 hours of culture.
Th1 cytokines induce down-regulation of surface HER receptors for many breast cancer lines
We demonstrated in the previous experiments that the combination of IFN-γ and TNF-α was capable of inducing apoptotic cell death in rHER-2 pos cell lines, confirming the first part of our hypothesis that soluble factors secreted by Th1 cells could account for the clinical effects of DC1 vaccination. We now turned our attention to the effect of these cytokines on the expression of HER family members. We began our studies with the TUBO line. Cultured cells were treated with cytokines, incubated for 72 hours (a time point preceding maximal cell death), harvested, and stained with anti-rHER-2 antibodies. Flow cytometric analysis showed that dual cytokine treatment led to a strong down-modulation of surface rHER-2 expression in these cells; a representative experiment is shown in Figure 5A. This loss was selective for rHER-2, since levels of the common epithelial cell marker EpCAM were also monitored and not only failed to drop, but actually increased slightly (Supplementary Figure 3). Interestingly, when cytokines were removed after this 72-hour exposure (prior to widespread cell death) by replacing the culture medium, the surviving rHER-2-suppressed cells were able to recover over the course of 48 hours of additional culture, beginning to proliferate and re-express surface HER-2 (Figure 5A lower panel). This experiment was repeated a total of 3 times, with statistically significant HER-2 loss induced by dual cytokine treatment (p < 0.0001), and with recoveries of rHER-2 expression after cytokine withdrawal that were not significantly different (p = .443) from untreated cells (Figure 5B). We also examined human breast cancer cell lines for cytokine-induced suppression of surface HER family members. The HER-2 pos line SKBR3 demonstrated somewhat less dramatic, yet statistically significant reductions (p < 0.005) in HER-2 surface expression (Figure 5C), while the HER-2 neg /HER-3 pos cell line MDA-MB-468 showed strong down-regulation of surface HER-3 expression (Figure 5D) in response to paired Th1 cytokines. It should be noted that not all breast cancer cell lines tested showed significant losses in HER-family expression. For example, murine MMC15 cells retained baseline HER-2 expression despite sensitivity to cytokine-induced cell death (not shown). These studies nonetheless indicate that the combination of IFN-γ and TNF-α is capable of inducing, for many breast cancer lines, reductions in surface expression of HER family proteins in vitro, similar to what is observed in vivo with DC-based vaccinations that induce strong Th1 immunity.
Loss of HER-2 expression is associated with apoptotic cell death
The preceding experiments indicated that paired Th1 cytokines induce both apoptotic cell death and down-regulation of surface HER-2 expression in a variety of murine and human breast cancer lines. We next sought to determine how closely this loss of growth factor receptor was associated with cell death. To accomplish this, we focused on the human SKBR3 cell line. These cells were cultured in the presence and absence of IFN-γ plus TNF-α for 48 hours, harvested, and stained simultaneously with APC-conjugated anti-HER-2 antibody, FITC-labeled AnnexinV and propidium iodide (PI), then subjected to multicolor FACS analysis. As before, untreated cells expressed high levels of surface HER-2 protein (Figure 6, upper left panel). For cytokine-treated SKBR3 cells, HER-2 expression was starting to be suppressed at 48 h, but the histogram at this time point reveals bimodality, with one population of cells clearly down-regulating surface HER-2 and another yet retaining high expression. This bimodality allowed us to define logical gates for high HER-2 expression versus low/negative HER-2 expression, and to analyze these populations separately for markers of apoptosis (Figure 6 lower panels). We found that the HER-2 hi populations had few AnnexinV pos /PI pos cells, with the vast majority in the viable, double-negative quadrant (Figure 6 lower right panel). In contrast, with the HER-2 low/neg population the situation was completely reversed, with high numbers of AnnexinV pos /PI pos (i.e. apoptotic) cells, and few remaining viable, double-negative events (Figure 6 lower left panel). This study indicates a close association between down-regulation of HER-2 expression and apoptotic cell death for SKBR3 cells.
Small molecule agonists of caspase 3 mimic, while its inhibitors block, Th1 cytokine effects
The preceding studies indicated that Th1 cytokinetreated cells lose HER-2 surface expression and die through an apoptotic mechanism, while activating the executioner caspase-3.They do not prove, however, that caspase 3 activity is essential to these processes.We therefore sought to further delineate the role of caspase 3 by testing whether small molecule agonists of this caspase were capable of mediating the same biological effects on breast cancer lines as Th1 cytokines, and whether the effects of cytokines could be blocked by antagonists of caspase 3. We began these studies by examining the effects of PAC-1, a highly selective caspase-3 agonist, on the Th1 cytokine-sensitive murine TUBO cells, and insensitive murine 4T1 cells, as well as sensitive human lines SKBR3 and MDA-MB-468.Cells were cultured in the presence of activator (10 µM), dual Th1 cytokines, or medium alone for 72 hours.Cells were then harvested and stained with Annexin V and propidium iodide to determine apoptotic status, while TUBO and SKBR3 cells were also stained with either anti-rat or anti-human HER-2 antibody, and MDA-MB-468 cells were stained with anti-HER-3 antibody.The 4T1 cells, because of their HER-negativity, did not receive these additional stains.All cell preparations were then subjected to flow cytometry.Analysis indicated that, as expected, 4T1 cells did not undergo apoptosis in response to Th1 cytokines (Figure 7A, upper left panel).They were, however, sensitive to PAC-1; on average half of the cells treated with this agonist became double-positive for Annexin V and PI at this timepoint compared with untreated cells.On the other hand, TUBO, SKBR3 and MDA-MB-468 cells all underwent significant apoptosis in response to both PAC-1 and Th1 cytokines (Figure 7A upper center and right panels).Interestingly, PAC-1 also induced loss of HER-2 expression in TUBO and SKBR3 cells comparable to that caused by Th1 cytokine exposure, and also elicited a similar loss of HER-3 in MDA-MB-468 cells (Figure 7A lower panels), suggesting caspase 3 activation precedes HER loss.
We next turned our attention to caspase antagonists.TUBO cells were treated with Th1 cytokines alone, cytokines plus the caspase 3/7 antagonist (5-[(S)-(+)-2-(Methoxymethyl)pyrrolidino]sulfonylisatin) or as a control, Th1 cytokines plus the Caspase I inhibitor VI (Z-VAD-fmk).After 72 hours, TUBO cells were harvested, stained for surface rHER-2 and subjected to flow cytometry analysis.As expected, we replicated Th1 cytokine-induced loss of rHER-2 expression on TUBO cells (Figure 7B).This loss was not prevented in the presence of the Caspase I inhibitor.However, TUBO cells treated with the Caspase 3/7 inhibitor did not show any evidence of cytokine-induced loss of rHER-2, strongly suggesting a critical role for caspase-3 in the loss of HER-2 surface expression.
DISCUSSION
At one time, CD8 pos CTL were considered to be the most important effector lymphocytes for the control of tumors. However, it is becoming increasingly clear that CD4 pos Th cells, particularly those of the IFN-γ hi /TNF-α hi Th1 phenotype, play a critical and in some instances a possibly defining role in anti-tumor immunity [16]. In addition to our vaccine studies, our group has recently discovered a number of surprising associations between Th1 immunity and breast cancer. For example, we have demonstrated in healthy donors a surprisingly high degree of pre-existing Th1 immunity against HER-2. Interestingly, this immunity is diminished in patients with early HER-2 pos breast cancer (DCIS), and further depressed, sometimes to the point of non-detection, in patients with more advanced invasive HER-2 pos tumors [17]. Such losses are not observed in those with HER-2 neg breast disease. In follow-on retrospective studies, patients who had invasive HER-2 pos breast cancer were treated with neoadjuvant chemotherapy plus trastuzumab [18]. A fraction of these patients achieved pathologic complete responses (pCR) to therapy; i.e. no tumor was detectable after completion of drug therapy. When HER-2 Th1 immunity was compared in the pCR versus non-pCR group, it was found that higher retained Th1 immunity was independently associated with pCR [18]. Another study looked at disease recurrence in HER-2 pos invasive breast cancer patients previously treated with chemotherapy plus trastuzumab. As before, higher retained Th1 immunity against HER-2 correlated with the better outcome, in this case longer disease-free survival [19]. Interestingly, when four patients with low anti-HER-2 Th1 immunity who did not experience pCR to chemotherapy plus trastuzumab were vaccinated with HER-2-pulsed, IL-12-secreting dendritic cells, their anti-HER-2 immunity levels were restored to the range of healthy individuals [18]. In addition, we showed that Th1 cytokines plus the monoclonal antibody drug trastuzumab worked cooperatively to sensitize HER-2-expressing tumors to lysis by HER-2-specific CTL in vitro [20]. Taken together, these studies indicate a critical role for Th1 immunity in the control of breast cancer, and also suggest the tantalizing possibility that boosting anti-HER-2 Th1 immunity could vastly improve responses to conventional therapy. However, a critical, unaddressed question posed by these studies is, by what possible mechanisms do Th1 cells promote immunity against HER-2 pos breast (and perhaps other) cancers?
One way that Th1 responses could participate in tumor control is through a process whose understanding evolved from Burnet's original "immunosurveillance" hypothesis [21], and is known as "immunoediting" [12]. The immunoediting hypothesis poses that the adaptive immune system is capable of sculpting tumor phenotypes during the process of oncogenesis, and that these influences occur in three phases [22]. The first is "elimination". As normal cells transform into cancerous ones, changes in gene expression can trigger immune responses capable of destroying all of the altered cells, protecting the body. However, if some malignant cells survive this attack, the "equilibrium" phase is entered. Here, constant immune pressure holds the transformed cells in check, even though it is incapable of destroying them outright. During this phase, the actual "immunoediting" occurs. Malignant cells that acquire, through genetic instability, characteristics that allow them to resist immune destruction (e.g. antigen loss, acquired resistance to immune effector mechanisms, or acquired immunosuppressive qualities) begin to multiply and break containment. In the final phase, "escape", the malignant cells have acquired enough changes to allow them to multiply unchecked, and uncontrolled tumor growth results. Thus the immune system is complicit in selecting for the very tumor phenotypes it is incapable of destroying. Interestingly, IFN-γ and lymphocytes are thought to be critical for immunoediting [23,24,25,26], and the cytokine IL-12 has long been known to be important for driving IFN-γ-secreting Th1-type lymphocytes [27].
In our previously published clinical trial, we used HER-2 peptide-pulsed, IL-12-secreting dendritic cells as vehicles for vaccinating against an early form of HER-2 pos breast cancer [8,9,10]. This vaccine induced strong, long-lasting Th1 immunity against HER-2, eliminated disease in 18% of subjects, and also induced, in about half of the patients, strong loss of HER-2 expression in tumors excised after vaccination. Because our vaccine caused an apparent alteration in tumor phenotype (HER-2 pos to HER-2 neg ), under conditions known to be important in classical immunoediting (e.g. polarized type-1 responses), we termed the observed vaccine effect "targeted immunoediting" [10,28]. We argued that the targeted immunoediting approach was beneficial, since HER-2 expression is associated with invasion and overall poor prognosis [29,30,31,32], and its elimination left behind a residuum of disease with less aggressive characteristics. However, a possible alternate interpretation is that elimination of HER-2 expression is deleterious in the estrogen receptor-negative (ER neg ) patient subpopulation, because the resulting ER neg /HER-2 neg phenotype is by definition "triple-negative", and part of a subset of tumors considered notoriously difficult to treat, since there are currently fewer targeted treatment options for this phenotype [33].
The data generated in the present studies, however, suggest that we may have previously misidentified at least some portion of our vaccine effects as a form of induced, targeted immunoediting.True immunoediting should produce relatively stable alterations in tumor phenotype (e.g.antigen loss variants).The loss of HER-2 expression, though selective (EpCAM expression was not similarly affected), was shown to be quickly reversed if cytokines were withdrawn prior to full commitment to cell death (Figure 5A), indicating a lack of stability in the HER-2 neg cellular phenotype.This observation instead implies that HER-2 expression can be simply regulated by the presence of certain cytokines, and HER-2 loss was somehow tied to the multistep process of programmed cell death.Indeed, we showed that SKBR3 cells analyzed after a 48-hour exposure to Th1 cytokines (a timepoint prior to maximal apoptotic cell death) displayed two distinct cell populations: One with high retained HER-2 expression, and one with diminishing HER-2 expression.For the tested SKBR3 cell line, the HER-2 hi population contained few apoptotic cells (7.3%) while the HER-2 lo population contained many (41.4%; Figure 6).All things considered, it remains possible that vaccine effects could encompass both targeted immunoediting as well as the demonstrated cytokine-induced HER-2 downregulation phenomena.
But why should the expression of HER-2 or other HER-family members be tied to apoptosis? Cellular outputs of either proliferation or death are determined by input signals coming from growth factor (proliferation) or apoptotic (death) signaling pathways. If there are many growth factor/survival signals and few apoptotic signals, cells live and grow. If there are few growth factor/survival signals and many apoptotic signals, cells undergo programmed cell death. It is therefore perfectly reasonable that a cell entering the decision to undergo apoptosis would down-regulate growth factor receptors to eliminate conflicting signals that would hamper this process. The role of HER-2 in acting in opposition to apoptosis through both intrinsic and extrinsic pathways is well documented and was recently reviewed [34]. It is therefore not conceptually surprising that HER family members are down-regulated by apoptosis-inducing Th1 cytokines in both murine and human breast cancer lines. A somewhat unexpected finding, however, was the apparent timing of caspase-3 activation with respect to HER-2 loss. We originally anticipated that caspase-3 activation, being considered a later step in the commitment to cellular apoptosis, would occur after suppression of HER-2. In this scenario, HER-2 loss, perhaps regulated transcriptionally by Th1 cytokine exposure, would rob breast cancer cells of critical growth factor signaling and hasten them toward apoptosis, with eventual activation of caspase-3 as one of the final steps toward commitment to apoptosis. However, our finding that a caspase-3 activator (as a single agent) induced HER-2 down-regulation while a caspase-3 inhibitor prevented Th1 cytokine-induced HER-2 loss offers strong supporting evidence that caspase-3 activation precedes, rather than follows, HER-2 down-regulation. Although inhibition of caspase-3 and subsequent prevention of HER-2 loss does not prove that caspase-3 acts on HER-2 directly, previous studies by others have demonstrated multiple caspase cleavage sites on the cytoplasmic domain of HER-2, and that HER-2 can be digested by caspase-3 before becoming completely degraded in the proteasome [35]. Despite this, it should also be noted that we detected caspase-3 activation after only a 5-hour exposure to Th1 cytokines, whereas HER-2 loss does not begin to be detected until 48-72 hours after treatment. There may therefore be both direct and indirect roles for caspase-3 in the loss of HER-family surface expression as a consequence of Th1 cytokine exposure. The precise mechanisms of Th1 cytokine-induced changes in breast cancer cells clearly warrant further investigation.
An interesting and somewhat paradoxical finding was that HER-2 and perhaps HER-3 expression is associated with Th1 cytokine sensitivity, even though these RTKs are powerfully down-regulated by these same cytokines. This is evidenced by the fact that, of the examined murine tumors, HER-2 pos TUBO and MMC15 cells undergo apoptosis in response to Th1 cytokines, while 4T1 cells (which expressed none of the rodent homologs for EGFR, HER-2 or HER-3) were very resistant. In addition, forced overexpression of HER-2 enhanced cytokine susceptibility in human breast lines [17]. The lack of sensitivity of 4T1 cells to Th1 cytokines was not a result of differential expression of cytokine receptors, since we showed via FACS analysis that both susceptible TUBO and resistant 4T1 cells stained comparably for TNF-α and IFN-γ receptors (Supplementary Figure 1). Despite differences in susceptibility to Th1 cytokines, the caspase-3 agonist PAC-1 induced comparable levels of apoptosis in both TUBO and 4T1 cells. This suggests that some factor farther upstream from this executioner caspase is responsible for the differences in Th1 cytokine susceptibility between these two cell lines. A possible explanation for the differences in susceptibility that accounts for the differential expression of HER-family members entails the concept of oncogene addiction [36]. Oncogene addiction describes a tumor cell's dependency upon the expression of an oncogene for its survival; eliminating the contribution of the addictive oncogene will result in a dramatic loss of cell viability. If the examined murine and human cell lines are addicted to the expression of HER-2, then the Th1 cytokine-induced suppression of this oncogene constitutes an insult that cannot be tolerated if the suppression remains constant. In contrast, a cell line that expresses no HER-family proteins by definition cannot be addicted to these oncogenes. So long as other important oncodrivers outside the HER family are not eliminated by Th1 cytokine exposure, such tumor cells would be resistant to cytokine-induced cell death. Another possible explanation would be that expression of some HER family members alters intracellular signaling through some incompletely understood pathway that, under the appropriate conditions, actually promotes apoptosis. For example, it was recently demonstrated that a proteolytic fragment of HER-2 produced by caspase action can translocate to the mitochondria and initiate apoptosis via the intrinsic pathway [35]. This would explain why expression of HER proteins actually enhanced sensitivity to induced apoptosis.
It should be noted, however, that breast cancer cell lines display considerable heterogeneity in response to Th1 cytokines. Not all HER-expressing lines are equally sensitive to induced apoptosis, and not all lines that undergo apoptosis show strong HER loss. Such heterogeneity is also apparent in our HER-2 dendritic cell vaccine trials for DCIS [8,9,10], where 18.5% of subjects showed complete pathological responses (i.e. all observable tumor cells died), other subjects showed apparent partial reductions in tumor volume, and still others showed no discernible reductions in disease at all. Furthermore, post-vaccine reductions in HER-2 expression in this trial were seen for only about half of the subjects with residual tumor. There are at least two possible explanations for such observations, which are not mutually exclusive. The first possibility is that for some individuals, not enough Th1 cytokines are available at the site of disease to mediate observable changes to the tumors. In our in vitro studies we endeavored to use physiologically plausible cytokine concentrations, but it is difficult to know with certainty the true in vivo concentrations at sites of DCIS in vaccinated individuals. In addition, we have shown in other studies that T cells from vaccinated individuals that are stimulated with HER-2 recall peptides produce enough cytokines to induce caspase-3 activation in SKBR3 cells in a transwell assay [17], even when antigen-specific cells constitute less than 1% of the total T cells. This suggests that relatively few cells may be required to effect tumor death at a distance. The second possibility is that the pathways regulating cytokine, HER-2 and apoptotic signaling are somewhat variable from one cell line to another, and also between individual cancers. Although either of these possibilities is reasonable, we acknowledge the limitations of the present in vitro studies in determining with certainty the true mechanism of immune-mediated alterations in tumor cells, which may await future animal model studies to delineate definitively. Nonetheless, it is apparent that the mechanism of Th1 cytokine-mediated apoptosis warrants additional investigation as a likely contributor to the observed vaccine effects. Delineation of these pathways will allow us to modify and improve our present vaccine therapy, perhaps by the addition of targeted drugs that will enhance cytokine effects by amplifying apoptotic signaling pathways, or by further restricting growth factor signaling, thus increasing response rates to therapeutic vaccination.
Cell lines and culture
Murine breast cancer cell lines transgenic for rat ErbB2 (the homolog of human HER-2, henceforward referred to as "rHER-2"), TUBO (spontaneously arising from BALB-NeuT mice) and MMC15 (spontaneously arising from FVB-neu-N mice), were kind gifts of Drs. Guido Forni (University of Turin) and Li-Xin Wang (Cleveland Clinic), respectively. The murine 4T1, human HER-2 neg /HER-3 pos MDA-MB-468 and human HER-2 pos SKBR3 breast cancer lines were obtained from the American Type Culture Collection (ATCC, Manassas, VA, USA), which authenticates its lines via short tandem repeat profiling. All lines were cultured in RPMI medium supplemented with 10% FBS except SKBR3, which was cultured in McCoy's medium. All lines were immediately cultured and expanded upon receipt, with multiple aliquots re-frozen to establish short-passage stocks. Individual stock cultures were maintained in serial passage for no more than 6 months to minimize drift effects. Cultures were maintained by serial passage in 75 cm2 flasks (Corning) at 37°C, 5% CO2. For individual experiments, cells were seeded in 24-well cluster plates and treated with various combinations of cytokines, caspase agonists and inhibitors. Cells were then harvested and subjected to analysis 24-96 hours later.
Alamar blue assay
Cells seeded into 24-well cluster plates were subjected to various treatments and cultured for 96 hours, after which 20 µl of 0.7 mg/ml stock concentration of Alamar Blue dye was added to each well.After approximately six hours additional incubation, the optical density of each well was read at 630 nm using a BioTek ELx800 spectrophotometer.
Trypan blue exclusion assay
Viable cell counts were determined by harvesting treated cells at 24, 48, 72 and 96 hours and staining with Trypan Blue dye.Dye-excluding cells were observed microscopically using an Olympus CX2 inverted microscope and enumerated with the aid of a hemocytometer.
Photomicroscopy
Cells were seeded in 24-well cluster plates, treated with various cytokine combinations or actinomycin D (positive control), and observed each day via phase-contrast light microscopy. Cells were photographed at 24 and 96 hours using an Olympus CKX41 inverted microscope at a total magnification of 100×, using a Hamamatsu camera and Cell Sens software.
Apoptosis assays
AnnexinV/PI assay staining: Cells were seeded in 24-well cluster plates and treated with cytokines in the presence or absence of PAC-1. Cells were then harvested at 24, 48, 72 and 96 hours post-treatment. Harvested cells were washed and resuspended in FACS buffer (PBS + 1% FBS + 0.01% sodium azide), and stained with FITC-AnnexinV (4 µl) and PI (2 µl). Cells were incubated at 4°C for 20 min, washed and subjected to flow cytometry using an Amnis Flow Sight flow cytometer and analyzed with the IDEAS analysis suite V6.0. Cells exhibiting the AnnexinV pos /PI pos phenotype were defined as apoptotic.
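For readers who want to reproduce the quadrant bookkeeping, the following Python sketch shows one way to classify events once Annexin V-FITC and PI intensities have been exported. The event data and gate positions here are simulated placeholders, not values from this study; in practice, gates would be set from unstained and single-stained controls.

```python
# Hedged illustration: quadrant classification of exported flow-cytometry intensities.
import numpy as np

rng = np.random.default_rng(0)
# Simulated log-intensities for 10,000 events forming two arbitrary clusters.
annexin = np.concatenate([rng.normal(2.0, 0.4, 7000), rng.normal(3.5, 0.3, 3000)])
pi      = np.concatenate([rng.normal(1.8, 0.4, 7000), rng.normal(3.2, 0.3, 3000)])

ANNEXIN_GATE, PI_GATE = 2.8, 2.6   # hypothetical gate positions

double_pos = (annexin > ANNEXIN_GATE) & (pi > PI_GATE)      # AnnexinV+/PI+ = apoptotic
viable     = (annexin <= ANNEXIN_GATE) & (pi <= PI_GATE)    # double-negative = viable
print(f"apoptotic (AnnexinV+/PI+): {100 * double_pos.mean():.1f}%")
print(f"viable (double-negative):  {100 * viable.mean():.1f}%")
```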
TUNEL assay
Apoptotic cells were detected using Flow TACS TM Apoptosis Detection Kit (Trevigen, Gaithersburg, MD, USA) according to manufacturer's protocol.Briefly, cells were seeded in 24 well cluster plates and treated with cytokines.Cells were harvested at 72 and 96 hours post-treatment and then incubated in labeling buffer with terminal deoxynucleotidyl transferase (TdT) and biotinylated nucleotides.After washing, cells were incubated with Streptavidin-Fluorescein solution and analyzed by flow cytometry using an Amnis Flow Sight flow cytometer running IDEAS analysis software.
Detection of surface expression of HER-family proteins
Harvested cells were washed and resuspended in 50 µl FACS buffer (PBS + 1% FBS + 0.01% sodium azide) to prepare them for staining with specific antibodies or their isotype-matched controls. Murine tumor lines were incubated with unconjugated murine anti-rodent HER-2 antibody followed by FITC-conjugated anti-mouse secondary antibody, or stained directly with APC-conjugated anti-EpCAM, PE-conjugated anti-IFN-γR or PE-conjugated anti-TNF-αR1. Human cell lines were stained with APC-conjugated anti-HER-2 or anti-HER-3 antibody. Stained cells were incubated at 4°C for 30 min, washed, and analyzed for HER expression by flow cytometry using an Amnis Flow Sight flow cytometer and IDEAS analysis software.
Western blot analysis
TUBO and 4T1 cells were seeded in 6-well cluster plates at a density of 5 × 10^4 cells/well in RPMI-1640 medium supplemented with 10% FBS and treated with IFN-γ and TNF-α, or left untreated as a negative control. The positive control was treated with the pro-apoptotic agent actinomycin D (10 µM). The plates were then incubated at 37°C for 5 hours. Cells were harvested by trypsinization, washed with ice-cold PBS and resuspended in cold RIPA extraction buffer containing protease and phosphatase inhibitors. The cells were incubated on ice for 30 min and then centrifuged at 14,000 × g for 20 min at 4°C to obtain a clear extract. The total protein concentration was determined by Bradford assay. Total protein (30 μg) of each lysate was loaded onto 10% polyacrylamide gels (Bio-Rad) and separated by electrophoresis. After electrophoresis, the proteins were electrotransferred onto nitrocellulose membranes, and the membranes were blocked using 1% BSA for one hour. Membranes were then incubated with primary antibodies (cleaved caspase-3 (Asp175), procaspase 1, procaspase 3, procaspase 6, procaspase 7, and β-actin) at 4°C overnight. Subsequently, the membranes were washed with TBST (0.05% Tween-20 in TBS) and incubated with the corresponding anti-mouse or anti-rabbit immunoglobulin G-horseradish peroxidase-conjugated secondary antibody for one hour at room temperature. The membranes were then washed again with TBST. Protein bands were visualized using an enhanced chemiluminescence (ECL) detection kit (Pierce) and a GE luminescent image analyzer using Image Quant LAS4000 software. Protein band intensities were analyzed quantitatively with ImageJ.
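The final densitometry step is conventionally a simple normalization; the sketch below illustrates it with hypothetical ImageJ band intensities (none of these numbers come from the blots in this study): each caspase band is divided by its lane's β-actin signal and then expressed relative to the untreated control.

```python
# Hedged sketch of loading-control normalization for ImageJ band intensities (hypothetical values).
band = {
    "untreated":    {"procaspase3": 5200, "cleaved_casp3": 300,  "actin": 8000},
    "IFN+TNF":      {"procaspase3": 2100, "cleaved_casp3": 2400, "actin": 7900},
    "actinomycinD": {"procaspase3": 1800, "cleaved_casp3": 2600, "actin": 8100},
}

def normalized(sample, protein):
    """Band intensity normalized to the beta-actin signal of the same lane."""
    return band[sample][protein] / band[sample]["actin"]

for protein in ("procaspase3", "cleaved_casp3"):
    reference = normalized("untreated", protein)
    for sample in band:
        fold = normalized(sample, protein) / reference
        print(f"{protein:14s} {sample:13s} fold change vs untreated: {fold:.2f}")
```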
Statistical analysis
Quantitative data are presented as means ± SEM. The significance of differences was evaluated with the one-way ANOVA test. A p value of ≤ 0.05 was considered statistically significant. Statistical analyses were performed in SPSS version 22 (IBM Corp).
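As a minimal illustration of the stated test, the snippet below runs a one-way ANOVA with SciPy on made-up replicate readouts for four treatment groups; it is a stand-in for the SPSS analysis, not a reproduction of the study's data.

```python
# Hedged sketch: one-way ANOVA across treatment groups on hypothetical replicate values.
from scipy import stats

untreated = [0.92, 0.88, 0.95]   # illustrative readouts only
ifn_only  = [0.85, 0.90, 0.87]
tnf_only  = [0.83, 0.89, 0.86]
ifn_tnf   = [0.31, 0.28, 0.35]

f_stat, p_value = stats.f_oneway(untreated, ifn_only, tnf_only, ifn_tnf)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")   # p <= 0.05 treated as significant
```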
CONFLICTS OF INTEREST
None.
GRANT SUPPORT
This work was supported by grants from the American Cancer Society (117283-RSG-09-187-01-LIB) and the Pennies in Action organization.
Figure 3 :
Figure 3: Induction of apoptosis by Th1 cytokines. (A) TUBO, MMC15 and 4T1 cells were left untreated, or treated with TNF-α (1 ng/ml), IFN-γ (12.5 ng/ml) or both cytokines and cultured for 96 hours. Cells were then harvested, stained with Annexin V and PI, and subjected to flow cytometric analysis. Values represent the percentage of double-staining (apoptotic) cells +/− SEM. (B) TUBO and 4T1 cells were cytokine-treated and cultured as before. Harvested cells were formaldehyde-fixed and labeled with biotinylated nucleotides, then stained with FITC-labeled streptavidin and subjected to flow cytometric analysis. Upper panels display histogram analysis of labeling from a single representative experiment for untreated (gray trace) versus cytokine-treated (black trace) cells. Lower panel represents summary analysis of 3 separate experiments, expressed as percent maximum mean fluorescent index +/− SEM ( ** p ≤ .01; n.s. not significant).
Figure 5 :
Figure 5: Th1 cytokines alter HER-family expression on murine and human breast cancer cells. (A) TUBO cells were cultured alone or in the presence of TNF-α and IFN-γ for 72 hours, harvested and analyzed for HER-2 expression via flow cytometry (upper 3 panels). Replicate treated wells were washed free of cytokines at the 72-hour point and cultured an additional 48 hours, demonstrating the recovery of HER-2 expression (lower panel). (B) Summary of 3 separate trials with TUBO cells illustrating cytokine-induced HER-2 loss as well as recovery after cytokine withdrawal. Values represent percent maximal fluorescence +/− SEM from 3 separate experiments. (C) Human HER-2 pos SKBR3 cells were cultured alone or with TNF-α (1 ng/ml) plus IFN-γ (12.5 ng/ml) for 72 hours, harvested, and analyzed for HER-2 expression via flow cytometry. Values represent percent maximal fluorescence +/− SEM from 3 separate experiments. (D) Human HER-2 neg /HER-3 pos MDA-MB-468 breast cancer cells were cultured alone or in the presence of TNF-α plus IFN-γ for 72 hours, harvested, and analyzed for HER-3 expression via flow cytometry ( ** p ≤ .01).
Figure 6 :
Figure 6: Loss of HER-2 expression is associated with apoptosis. SKBR3 cells were cultured alone or with TNF-α and IFN-γ for 48 hours, harvested, simultaneously stained with APC-conjugated anti-HER-2 antibody, Annexin V and PI, and analyzed via flow cytometry. Upper left: expression of HER-2 in the untreated group; upper right: expression of HER-2 in the cytokine-treated group. For the cytokine-treated group, two gates were defined: the M1 gate contained cells with depressed HER-2 expression, and M2 contained cells with high retained HER-2 expression. These separate populations were individually analyzed for Annexin V and PI staining (lower panels). Histograms and associated dot plots are representative of 3 separate experiments with similar results.
Figure 7 :
Figure 7: Caspase-3 agonist induces apoptosis and HER-family loss while caspase-3 antagonist prevents cytokine-induced HER-2 loss in breast cancer cell lines. (A) Murine 4T1 and TUBO, and human SKBR3 and MDA-MB-468 cells were cultured alone, with Th1 cytokines, or with the caspase-3 agonist PAC-1 (10 μM). Murine lines were harvested 72 hours post-treatment, and human lines 48 hours post-treatment, stained with FITC-Annexin V and PI (upper panels) or with anti-HER antibodies (lower panels), and subjected to flow cytometric analysis. Values are expressed as % total apoptotic cells (AnnexinV pos /PI pos ) and percent maximal HER expression +/− SEM. (B) TUBO cells were cultured alone or in the presence of IFN-γ plus TNF-α. To these groups were added either no additional treatment, caspase-1 inhibitor (50 μM), or caspase 3/7 inhibitor I (50 μM). Cells were incubated for 72 hours, harvested, stained for HER-2 expression and analyzed via flow cytometric analysis. Results are expressed as % HER-2 pos cells +/− SEM ( * p ≤ .05; ** | 2019-03-15T13:13:22.210Z | 2014-11-09T00:00:00.000 | {
"year": 2019,
"sha1": "f8e9070511c723293071270fe5803c9a86edbfca",
"oa_license": "CCBY",
"oa_url": "https://www.oncotarget.com/article/10298/pdf/",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "a3b82d71576fb5e9fbc69072e547a919049d615c",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
221955653 | pes2o/s2orc | v3-fos-license | Improved sensitivity, safety, and rapidity of COVID-19 tests by replacing viral storage solution with lysis buffer
Conducting numerous, rapid, and reliable PCR tests for SARS-CoV-2 is essential for our ability to monitor and control the current COVID-19 pandemic. Here, we tested the sensitivity and efficiency of SARS-CoV-2 detection in clinical samples collected directly into a mix of lysis buffer and RNA preservative, thus inactivating the virus immediately after sampling. We tested 79 COVID-19 patients and 20 healthy controls. We collected two samples (nasopharyngeal swabs) from each participant: one swab was inserted into a test tube with Viral Transport Medium (VTM), following the standard guideline used as the recommended method for sample collection; the other swab was inserted into a lysis buffer supplemented with nucleic acid stabilization mix (coined NSLB). We found that RT-qPCR tests of patients were significantly more sensitive with NSLB sampling, reaching the detection threshold 2.1±0.6 (mean±SE) PCR cycles earlier than VTM samples from the same patient. We show that this improvement most likely arises because NSLB samples are not diluted in lysis buffer before RNA extraction. Re-extracting RNA from NSLB samples after 72 hours at room temperature did not affect the sensitivity of detection, demonstrating that NSLB allows for long periods of sample preservation without special cooling equipment. We also show that swirling the swab in NSLB and discarding it did not reduce sensitivity compared to retaining the swab in the tube, thus allowing improved automation of COVID-19 tests. Overall, we show that using NSLB instead of VTM can improve the sensitivity, safety, and rapidity of COVID-19 tests at a time most needed.
Introduction
Rapid and robust identification of individuals infected by COVID-19 is one of the mainstays of containment and mitigation efforts of this pandemic.
The current guidelines of the CDC and WHO for SARS-CoV-2 tests are that collected swabs should be placed in a transport tube that is either empty or contains either Viral Transport Medium (VTM), Amies transport medium, or sterile saline [1,2]. Under such conditions, the viral capsid remains intact and active. While this strategy is the mainstay of clinical testing protocols for many pathogens, as it allows culturing of the pathogens of interest, there are disadvantages to be considered. Since the virus is kept in its infectious state, the specimens pose a significant biohazard both during transport and in the lab and require special safety procedures of packaging, transport, and treatment under BSL2 conditions in the lab which cause a considerable bottleneck in the processing workflow. After unpacking in the lab, a small fraction of the transport medium with the specimen is transferred into Lysis Buffer (LB) which inactivates the virus but also dilutes the sample typically~2-3 fold. This leads to a smaller overall quantity of viral RNA used in the diagnostic test, potentially leading to a higher proportion of false-negative results, especially in borderline cases. In addition, transport in VTM or any of the other common transport mediums requires the specimen to be kept at 4˚C in order to prevent degradation of viral nucleic acid. This can present logistic challenges in the testing process.
We hypothesized that the collection of SARS-CoV-2 clinical specimens into a tube containing a Nucleic Acid Stabilization and Lysis Buffer (NSLB), instead of VTM, will streamline the clinical diagnostic workflow as this buffer both inactivates the virus and preserves viral RNA at room temperature. The use of NSLB will allow transport of the specimens without the need for cooling and improve the biosafety profile of the entire diagnostic workflow. Additionally, this buffer obviates the need for incubation in LB and the samples can be directly inserted into the RNA extraction step. This leads to a larger quantity of viral nucleic acid per sample in the diagnostic pathway.
Here, we compared the performance of RT-qPCR on samples collected into VTM and NSLB from 77 COVID-19 patients and 20 healthy controls and observed a significantly higher sensitivity for NSLB over VTM.
Methods
Recruitment took place during April 2020. Only adults (above 18 years old) were asked to participate. Participants were random Israelis who were hospitalized or quarantined after they had been found positive for SARS-CoV-2 by RT-qPCR prior to participating in this study. Participants were recruited from three different medical centers in Israel, and are representative of the Israeli general population. The participants included 26 hospitalized patients with moderate to severe symptoms from the Sheba Medical Center, 21 hospitalized patients from Hadassah Medical Center, and 32 asymptomatic or mildly symptomatic patients who were quarantined in a hotel (their samples were processed by the Rambam Medical Center). Hospitalized patients were recruited by medical staff in the relevant departments, and quarantined patients were recruited by the researchers at the hotel where the quarantine took place, after receiving permission from the medical staff in charge of the quarantine. In addition, we recruited 13 healthy volunteers from the medical staff at the Rambam Medical Center and 7 healthy volunteers from Hadassah Medical Center. All participants signed informed consent and the experiments were approved by the IRBs of the Sheba, Hadassah, and Rambam Medical Centers. Sampling and RT-qPCR for each of the three experiments were conducted by different teams at different labs. We conducted a power analysis with the G*Power software [3], which indicated that a sample size of 46 participants would be required to detect a mean difference of 1.5 Ct (the Ct difference expected by the dilution factor in our experiment), with 80% power using a one-tailed t-test with alpha at 0.05.
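As a rough illustration of how such a calculation can be reproduced, the sketch below uses the statsmodels power module for a paired, one-tailed t-test; the standard deviation of the paired Ct differences is not stated in the text, so SD_DIFF = 4.0 is our illustrative assumption, chosen so the result lands near the reported 46 participants.

```python
# Sketch of the reported power analysis for a paired, one-tailed t-test.
# The target difference (1.5 Ct), alpha (0.05), and power (0.80) come from the
# text; SD_DIFF is an assumed standard deviation of the paired differences.
from statsmodels.stats.power import TTestPower

TARGET_DIFF_CT = 1.5   # mean paired Ct difference to detect
SD_DIFF = 4.0          # assumption: SD of paired differences (not stated in the text)
effect_size = TARGET_DIFF_CT / SD_DIFF  # Cohen's dz for a paired t-test

n_required = TTestPower().solve_power(
    effect_size=effect_size,
    alpha=0.05,
    power=0.80,
    alternative="larger",  # one-tailed
)
print(f"required number of pairs: {n_required:.1f}")  # ~46 with this assumed SD
```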
In order to compare sampling into VTM vs. NSLB, we sampled each participant consecutively twice and put one specimen into VTM and one into NSLB (in alternating order). Each specimen was obtained from nasal and oropharyngeal swabs combined into one tube. We tested alternately two different commercially available NSLBs (DNA/RNA Shield™ from ZYMO, and PrimeStore® MTM from LongHorn). The commercial NSLBs are proprietary solutions; however, protocols for making NSLB with similar expected properties are publicly available [4]. For VTM, we used the widely used COPAN UTM®. This VTM is proprietary, but VTM with similar expected properties can be prepared by using a publicly available detailed protocol published by the CDC SOP#: DSR-052-05 [5].
Samples were collected 2-8 hours before RNA extraction in the lab. During that time, samples in VTM were kept at 4˚C after collection, and samples in NSLB were kept at room temperature until reaching the lab.
VTM samples were diluted in the lab in standard LB for inactivation of the virus and releasing of RNA into the medium. After 10-20 minutes in LB, RNA was extracted. NSLB samples were transferred directly to RNA extraction.
At the Sheba and Rambam Medical Centers, we extracted RNA by the Precision System Science MagLead 12gC with the MagDEA Dx (LV at Sheba and SV at Rambam). At Hadassah Medical Center we extracted RNA using three different commercial kits and platforms: MagNA Pure 96 kit (Roche Lifesciences) using Roche platform, Qiagen DSP virus/Pathogen kit using Qiasymphony platform, and MagDEA DX SSV kit (PSS, Japan) using the MagLead 12gC platform.
The Sheba and Hadassah RT-qPCR tests were conducted on the viral E gene, and the Rambam RT-qPCR was performed on the E, RdRp, and N genes (Seegene Allplex 2019-nCoV Assay). Human genes were also tested from the same sample: Sheba and Rambam tested the RNase P and Hadassah the ACTB. The limit of detection was Ct = 40. For the calculation of Ct difference, a sample that was negative in one medium but whose matched sample was positive was defined as Ct = 40.5. Each pair of samples from the same participant was always processed in the same 96-well PCR plate.
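The pairing and censoring rule described above can be expressed in a few lines of code. The sketch below uses made-up Ct values (not patient data) to illustrate how a negative result in one medium is assigned Ct = 40.5 when its matched sample is positive, before the paired difference is computed.

```python
# Illustration with made-up Ct values: apply the limit-of-detection rule
# (negative in one medium, positive in the matched medium -> Ct = 40.5)
# and compute the paired VTM - NSLB difference.
import numpy as np
import pandas as pd

CENSORED_CT = 40.5  # value assigned to the negative member of a discordant pair

pairs = pd.DataFrame({
    "vtm_ct":  [31.2, 38.9, np.nan, 27.5],   # NaN = no amplification (negative)
    "nslb_ct": [29.0, 36.4, 38.8,  np.nan],
})

def censor(row):
    vtm, nslb = row["vtm_ct"], row["nslb_ct"]
    if np.isnan(vtm) and not np.isnan(nslb):
        vtm = CENSORED_CT
    elif np.isnan(nslb) and not np.isnan(vtm):
        nslb = CENSORED_CT
    return pd.Series({"vtm_ct": vtm, "nslb_ct": nslb})

censored = pairs.apply(censor, axis=1)
censored["ct_diff"] = censored["vtm_ct"] - censored["nslb_ct"]  # positive favours NSLB
print(censored)
```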
All the raw data is available in the S1 Dataset.
Using NSLB for sampling improves the sensitivity of SARS-COV-2 RT-qPCR tests
To compare the sensitivity of SARS-COV-2 tests in different collection media, we sampled patients once into VTM and then again into NSLB, conducted RT-qPCR on both samples, and compared the Cycle threshold (Ct) values of the matched samples from the same individual. Overall, we tested 99 participants. Of them, 20 were healthy volunteers recruited as negative controls. All controls tested negative in both VTM and NSLB. The other 79 participants were COVID-19 patients (participants who previously tested positive at least once and were hospitalized or quarantined at the time of the study). Of these, 18/79 patients tested negative in both VTM and NSLB (of which 17 were from the recovering quarantined group). Those are likely patients who recovered and not false negatives since internal controls of human RNA were positive in all of these cases. One patient was negative for human RNA internal control (bad sampling) and was omitted from further analysis. The 60 patients who tested positive in either VTM, NSLB, or both, were further analyzed.
In 45/60 (75%) of the patients, the NSLB sample showed a lower Ct value (higher viral titer) than its matched VTM sample, while in only 15/60 (25%) of patients, the Ct of VTM was lower (Fig 1A-1C). The average Ct difference in favor of NSLB was 2.1, CI [0.9,3.3], and this difference was statistically significant (t-test p-value = 8.4 × 10⁻⁴). 12/60 (20%) of the patients tested positive in NSLB but negative (below the limit of detection) in VTM, while only 6/60 (10%) tested positive in VTM but negative in NSLB. Overall, our results demonstrate a significantly higher sensitivity for sampling into NSLB over VTM. Importantly, NSLB was advantageous (lower Ct) over VTM in all the three different medical centers that participated in this study (each processed approximately a third of the samples, see Methods), although each had different teams of samplers and different lab protocols.
When samples in VTM are processed in the lab they are typically inactivated in LB before entering the RNA extraction process. This inactivation dilutes the original sample, typically 2-3 fold. In our experiment, samples in VTM were diluted on average 2.8 fold in LB. Therefore, if NSLB is equal in performance to VTM with regard to biochemical properties alone, one would expect a baseline of log₂(2.8) ≈ 1.5 lower Ct for NSLB. The observed average Ct difference of 2.1±0.6 (Mean±SE) was not significantly higher than 1.5 (t-test p-value = 0.28). This result suggests that at least most of the advantage of NSLB in our study was due to using a more concentrated sample for RNA extraction since no inactivation in LB (and hence dilution) was needed. We observed no significant difference between the two commercial NSLBs that we used (rank-sum test p-value = 0.52).
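The expected baseline advantage from avoiding the dilution step is simply log2 of the dilution factor, and the observed mean difference can then be compared against it with a one-sample t-test, as described above. The sketch below illustrates this with hypothetical per-patient Ct differences; the real per-patient values are in the paper's S1 Dataset.

```python
# Illustration: the Ct advantage expected from skipping the lysis-buffer
# dilution is log2(dilution factor); the observed mean paired difference can be
# tested against it with a one-sample t-test. The ct_diff values are made up.
import numpy as np
from scipy import stats

dilution_factor = 2.8
expected_shift = np.log2(dilution_factor)   # ~1.49 Ct
print(f"expected Ct advantage from dilution alone: {expected_shift:.2f}")

ct_diff = np.array([2.5, 1.0, 3.8, 0.2, 2.9, 1.6, 4.1, -0.5, 2.2, 3.0])  # hypothetical
t_stat, p_two_sided = stats.ttest_1samp(ct_diff, popmean=expected_shift)
print(f"t = {t_stat:.2f}, two-sided p = {p_two_sided:.3f}")
```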
For 45 positive patients, we also performed RT-qPCR on human mRNA. NSLB samples showed on average 3.1 CI [2.4,3.8] lower Ct than their matched VTM samples (Fig 1D-1F). Interestingly, this Ct difference was significantly higher than both zero (t-test p-value = 5 × 10⁻¹¹) and the 1.5 Ct expected difference when dilution is controlled for (t-test p-value = 4 × 10⁻⁵). This suggests that the sampling into NSLB probably broke open more human cells over the time it took to transfer the samples to the lab (2-8 hours) compared to incubating VTM samples for 10-15 minutes in LB before RNA extraction. It also indicates that RNA in NSLB has not been degraded before reaching the lab.
Samples in NSLB can be stored at room temperature at least 72 hours without a reduction in RT-qPCR performance
A possible concern when using NSLB for sampling is that RNA will be degraded prior to being processed in the lab, and thus the sensitivity of the test will decrease. To test the RNA stability over time, we repeated the RNA extraction of 24 samples from hospitalized patients (Sheba Medical Center) after 24 hours of storage at 4˚C. Ct values were highly correlated for VTM samples between time 0h and time 24h (Pearson r = 0.96), and also for NSLB samples (Pearson r = 0.93), see Fig 2A. The advantage of NSLB over VTM did not significantly change after 24h (t-test P = 0.63).
For 20 other patients (recovering quarantined patients), we kept the NSLB samples at room temperature and the VTM samples at 4˚C for 72 hours, then extracted RNA a second time and performed another RT-qPCR. Correlations of Ct values between time 0h and 72h were Pearson r = 0.83 for VTM samples and Pearson r = 0.63 for NSLB samples (Fig 2B). Both VTM and NSLB samples showed a mean decrease of 1 Ct value after 72 hours (VTM: -1 CI [-1.6,-0.2], t-test p-value = 0.02; NSLB: -1 CI [-2.1,0.05], t-test p-value = 0.06), indicating not only that RNA was not degraded in either medium, but that perhaps even more detectable RNA was released into the mediums during these 72 hours. Also, the advantage of NSLB over VTM remained similar at 72h as it was at time 0h (t-test p-value = 0.89).
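The stability analysis described here reduces to a Pearson correlation between the first and repeat extraction and a paired t-test on the Ct change. The sketch below illustrates the computation with made-up Ct values rather than the study data.

```python
# Illustration with made-up Ct values: repeatability between the first and the
# repeat extraction, summarised by a Pearson correlation and a paired t-test.
import numpy as np
from scipy import stats

ct_first  = np.array([28.1, 33.4, 36.0, 25.7, 30.2, 38.5])  # first extraction
ct_repeat = np.array([27.0, 32.8, 34.6, 25.1, 29.0, 37.9])  # repeat extraction after storage

r, _ = stats.pearsonr(ct_first, ct_repeat)
t_stat, p_val = stats.ttest_rel(ct_repeat, ct_first)
mean_change = np.mean(ct_repeat - ct_first)
print(f"Pearson r = {r:.2f}, mean Ct change = {mean_change:.2f}, p = {p_val:.3f}")
```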
Sampling into NSLB eliminates the need to keep the swab in the tube
Sampling into NSLB is not sufficient for eliminating work under BSL2 conditions in the lab, since typically the swab is fully inserted into a patient's mouth or nose but only the swab's tip is dipped into the medium. In addition, keeping the swab in the tube poses a challenge for downstream automation procedures in the lab. Unlike VTM, the reagents in NSLB have the potential to release almost immediately most of the biological material from the swab into the medium. Therefore, we tested if biosafety and automation could be improved without impairing sensitivity, by discarding the swabs immediately upon sampling after a short swirl in NSLB. This method was described previously and proved to allow efficient detection of S. aureus [6], but has not been evaluated in the context of COVID-19. To test this, six patients were tested using two consecutive NSLB swab collections, where one swab was retained in the tube, and the other was swirled in the second tube and discarded. For RT-qPCR on the E gene of SARS-CoV-2, keeping the swab in showed on average only a slightly lower Ct value compared to the Ct of a parallel sample for which the swab was discarded (Mean of Ct differences = -0.27, CI [-1.7,1.2]). For RT-qPCR on the ERV3 human gene, keeping the swab showed on average a higher Ct value compared to discarding the swab (Mean of Ct differences = 0.45, CI [-0.46,1.4]). For both viral and human genes the difference between keeping the swab and discarding it was not statistically significant (P = 0.65 and P = 0.26, respectively).
Discussion
Here, we showed that direct sampling into NSLB resulted in significantly higher sensitivity in SARS-CoV-2 RT-qPCR compared to sampling into VTM. We showed that samples can be kept in NSLB at room temperature for at least 72 hours prior to RNA extraction without significant loss of sensitivity. Moreover, we showed that swabs can be discarded at the sampling site after a few seconds of swirling in NSLB without a significant loss of sensitivity. We conducted the experiment in three different medical centers, using different personnel and lab protocols, and obtained similar results in all three centers, confirming that sampling into NSLB is robust to varying RNA extraction protocols.
The increased sensitivity was most likely gained by facilitating the introduction of a more concentrated sample into the RNA purification step. PCR tests for SARS-CoV-2 have shown a high variation of False Negative Rates (FNR), with some reports of up to 20%-30% FNR [7,8]. Increasing the sensitivity of PCR tests is especially important at the incubation stage, where a patient might not have developed yet a high viral load and might falsely test negative shortly before becoming infectious. In addition, pooling protocols that aim to increase the number of COVID-19 tests cause loss of sensitivity [9], and using NSLB could compensate for some of this sensitivity loss. Further increases in sensitivity might be achieved by other means of increasing the analyte quantity in the reaction, e.g. taking larger volumes into the PCR step.
We showed the samples can be kept in NSLB at room temperature for at least 72 hours without degradation of RNA. This is important at times of high demand for COVID-19 testing and backlogs of samples waiting to be processed in labs. It can also simplify the transportation of large numbers of samples from remote areas, especially in countries where molecular diagnostic labs are far from the sampling sites.
We showed that when sampling into NSLB, it is possible to discard the swab at the sampling site without sacrificing the test's sensitivity. This removes a major obstacle for automation of COVID-19 tests (as pipetting robots can be used without the hindrance of the swab in the tube). Since NSLB also circumvents the need to lyse the sample in the lab before RNA extraction, it opens up the possibility of sampling into tubes that can be inserted directly into an RNA extraction robotic pipeline. This can help to significantly increase the number and shorten the turn-around time of COVID-19 tests, an important measure to curb the COVID-19 pandemic since patients are most infectious prior and only shortly after symptoms onset [10].
Direct sampling into NSLB can reduce the risk of infecting the medical and lab personnel who conduct the sampling, transporting, and processing of the samples. In addition, the immediate inactivation of the virus in NSLB can improve the safety of a recently suggested swab pooling approach [11] that involves multiple sampling into one open tube. Since NSLB enables discarding of the swab after each sampling it can increase the swab pooling yield.
The method we describe here could be possibly improved further by direct sampling into a lysis buffer that allows for direct PCR (skipping an RNA extraction step, as described elsewhere [12,13]). Such a procedure should be validated in further experiments.
It should be noted, however, that there are several disadvantages to using NSLB. First, it is more expensive than VTM. Second, it typically contains reagents that require more careful handling than VTM in cases of spillover. Third, there are certain RNA purification protocols that are not suitable for samples in NSLB. For example, RNA extraction protocols that use bleach are not safe if used with NSLB that contains Guanidine. Lastly, NSLB does not allow for the culturing of the virus in the lab. However, with the current need for high throughput and rapid SARS-CoV-2 tests, and since most samples are not currently cultured, this disadvantage is negligible.
Conclusion
We demonstrated that sampling of suspected COVID-19 patients directly into NSLB and then discarding the swab has significant advantages over using the current guideline with VTM. It increases the sensitivity of the test, increases safety, and facilitates better automatic handling of samples in the lab. These advantages can help to increase the number and rapidity of COVID-19 tests at a time it is most needed. | 2020-09-28T13:04:58.290Z | 2020-09-28T00:00:00.000 | {
"year": 2021,
"sha1": "fe9d16d9c7f7e7a343a1c2f0a9b5ee42f33a5fbd",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0249149&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ae063995cef4ad5ab824f4de9da9b361e8efd0a2",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
} |
3725545 | pes2o/s2orc | v3-fos-license | Navigation-Linked Heads-Up Display in Intracranial Surgery: Early Experience
Abstract BACKGROUND The use of intraoperative navigation during microscope cases can be limited when attention needs to be divided between the operative field and the navigation screens. Heads-up display (HUD), also referred to as augmented reality, permits visualization of navigation information during surgery workflow. OBJECTIVE To detail our initial experience with HUD. METHODS We retrospectively reviewed patients who underwent HUD-assisted surgery from April 2016 through April 2017. All lesions were assessed for accuracy and those from the latter half of the study were assessed for utility. RESULTS Seventy-nine patients with 84 pathologies were included. Pathologies included aneurysms (14), arteriovenous malformations (6), cavernous malformations (5), intracranial stenosis (3), meningiomas (27), metastases (4), craniopharyngiomas (4), gliomas (4), schwannomas (3), epidermoid/dermoids (3), pituitary adenomas (2), hemangioblastoma (2), choroid plexus papilloma (1), lymphoma (1), osteoblastoma (1), clival chordoma (1), cerebrospinal fluid leak (1), abscess (1), and a cerebellopontine angle Teflon granuloma (1). Fifty-nine lesions were deep and 25 were superficial. Structures identified included the lesion (81), vessels (48), and nerves/brain tissue (31). Accuracy was deemed excellent (71.4%), good (20.2%), or poor (8.3%). Deep lesions were less likely to have excellent accuracy (P = .029). HUD was used during bed/head positioning (50.0%), skin incision (17.3%), craniotomy (23.1%), dural opening (26.9%), corticectomy (13.5%), arachnoid opening (36.5%), and intracranial drilling (13.5%). HUD was deactivated at some point during the surgery in 59.6% of cases. There were no complications related to HUD use. CONCLUSION HUD can be safely used for a wide variety of vascular and oncologic intracranial pathologies and can be utilized during multiple stages of surgery.
Intraoperative navigation and microscope integration are very useful, but can be limited when attention needs to be divided between the operative field and the navigation screens. A heads-up display (HUD), available in the aviation industry for many years, is a recent addition to the neurosurgery toolkit. HUD provides visualization of navigation information during the surgery workflow. There has been limited use of HUD across the surgical field thus far. This study details our early use of HUD in skull base and vascular cases. To our knowledge, this is the largest series utilizing HUD for intracranial surgery.
ABBREVIATIONS: AR, augmented reality; AVM, arteriovenous malformation; CSF, cerebrospinal fluid; CTA, computed tomography angiography; EC-IC, extracranial-intracranial; HUD, heads-up display; IC-IC, intracranial-intracranial; MRI, magnetic resonance imaging
Supplemental digital content is available for this article at www.operativeneurosurgery-online.com.
METHODS
Institutional Review Board approval was obtained to perform this study. A waiver of consent was obtained to perform this retrospective review. This is a retrospective review of all patients who underwent intracranial surgery using HUD from April 2016 through April 2017. There were no other inclusion or exclusion criteria. Intraoperative navigation was performed with Brain Lab Curve™ Image Guided Surgery (Brainlab, Munich, Germany). The Zeiss Pentero 900 (Carl Zeiss Meditec Inc, Dublin, California) was used for the majority of cases, and the Leica OH6 (Leica Microsystems Inc, Buffalo Grove, Illinois) was used for only a small number of cases. Prior to surgery, preoperative imaging, usually contrast-enhanced MRI and CTA, were reviewed by the senior author (JB) and team. Using the Brainlab platform, the lesions of interest, surrounding vessels, and/or surrounding nerves/brain tissue were painted using the Brainlab Smartbrush function by a member of the surgical team and then approved by the senior author, JB. Patient registration and microscope integration were performed in the standard fashion. The operating room setup included the operating microscope, Brainlab Navigation, and Surgical Theater imaging (Surgical Theater, Mayfield, Ohio; Figure 1).
The HUD could be overlaid at multiple time points during the surgery (ie, during exposure of skin, bone, dura, cortex, and/or lesion). Microscope integration was typically performed after dural opening at the time of first use, but could be done earlier in the operation if the surgeon wanted to use HUD for phases such as head/bed positioning, skin incision, craniotomy, or dural opening. By convention, HUD objects have either a solid or dashed outline ( Figure 1). The dashed outline represents the greatest dimension of the object projected in the surgeon's point of view, irrespective of microscope focus depth. A solid outline represents the object dimension at the current focal depth. Objects could also be outlined or filled with different degrees of opacity.
Pathologies were grouped together as to their vascular, oncologic, or other origin. Lesions were labeled as superficial if they came to within 1 cm of the surface of the brain or calvarium and all others were labeled deep. Structures painted included the lesion itself as well as surrounding vessels, nerves, or brain/brain tissue. HUD accuracy was subjectively determined based on the visualized overlap of painted structures with the location of the actual structures. Accuracy was determined retrospectively by authors JB and JM and was graded as excellent (perfect overlay), good (minimal overlay displacement), or poor (significant overlay displacement). This assessment was an estimate, not a measurement. Accuracy was assessed when the painted objects first came into view. Other patient data were recorded by reviewing the medical record. The chi-squared test was used to compare accuracy at different depths with a significance level of 0.05. HUD utility during phases other than lesion localization/resection was assessed for patients in the second half of the study by recording the other phases of surgery when HUD was used. Other phases of surgery included bed/head positioning, skin incision, craniotomy, dural opening, corticectomy, arachnoid incision, and intracranial drilling. Head positioning could only be performed prior to the start of surgery, but bed positioning could be performed either prior to or during surgery. In order to be counted for utility, the HUD had to be actively used during that phase of surgery (ie, not just be turned on). The decision to use HUD during a certain phase of surgery, however, was operator dependent. Additionally, it was recorded whether HUD was turned off during the case and the reason for deactivation.
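The depth-versus-accuracy comparison described above can be illustrated with a chi-squared test on a contingency table. The within-cell counts in the sketch below are hypothetical (only the margins of 59 deep lesions, 25 superficial lesions, and 60 excellent gradings follow the reported results), so it demonstrates the form of the analysis rather than reproducing the reported P = .029.

```python
# Illustration: chi-squared test of lesion depth versus excellent/non-excellent
# HUD accuracy. The within-cell counts are hypothetical; only the margins
# (59 deep, 25 superficial, 60 excellent of 84) follow the reported results.
import numpy as np
from scipy.stats import chi2_contingency

#                 excellent  not excellent
table = np.array([[38,        21],    # deep lesions (hypothetical split of 59)
                  [22,         3]])   # superficial lesions (hypothetical split of 25)

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
```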
Main Findings
This series is, to our knowledge, the largest experience using HUD to assist with intracranial surgery. We have shown here that HUD can be used for a wide variety of both vascular and oncologic pathologies both on the surface and in the depths of the brain. Excellent or good accuracy was maintained in the majority of cases (91.6%). Deep lesions were less likely to have excellent accuracy. The overlay can be used to outline not only the pathological lesion, but also surrounding vessels and nervous tissue that must be anticipated, identified, and preserved.
We have also shown that HUD has potential value during multiple stages of surgery (other than the lesion localization/resection) from as early as the skin incision/positioning to arachnoid dissection and intracranial drilling. Our experience was that HUD utility varied depending on pathology. We found that for intra-axial and superficial lesions, HUD was more useful OPERATIVE NEUROSURGERY VOLUME 15 | NUMBER 2 | AUGUST 2018 | 187 FIGURE 5. Craniotomy/craniectomy. A, A patient with a cerebellar meningioma at the junction of the transverse and sigmoid sinuses. B, The HUD was activated prior to performing the craniectomy and was used to demonstrate the tumor (green), dural sinuses (purple), and guide the craniectomy.
for skin incision, craniotomy, dural opening, and corticectomy. On the other hand, for skull base lesions, HUD was more useful for bed/head positioning as well as extradural/intradural bone removal. These findings are intuitive, as intra-axial lesions require more unique operative plans, whereas skull base lesions generally follow a more typical surgical approach and depend on bone removal for adequate lesion exposure. HUD used during bed positioning, skin incision, craniotomy, and dural opening represents a deviation from normal microscope workflow. We have demonstrated here that HUD can be potentially used during these phases, but only if the operator deems that it would be useful.
During the tranphenoidal approach, HUD can be useful for choosing the correct trajectory to the sellar region, defining the carotid arteries and optic nerves from the nasal cavity, and in turn guiding the craniectomy, dural opening, and tumor resection. Although navigation is typically not utilized during aneurysm and intracranial stenosis surgery, HUD proved to have some utility in these cases. During aneurysm surgery, HUD can be used to visualize the aneurysm and tailor the arachnoid dissection. In addition, we used HUD to identify the target recipient vessel during the bypass surgery. Finally, based on our small experience with low-grade gliomas, we postulate that HUD can be useful for guiding the resection of lesions that do not appear abnormal to the naked eye.
HUD Limitations
HUD itself has 2 major limitations ( Figure 13). First, HUD relies on navigation accuracy and any loss of accuracy can potentially lead to false reliance on HUD. HUD accuracy can be affected by poor intraoperative navigation, inaccurate object painting, brain shift, brain retraction, and may also deteriorate as the surgery progresses. Navigation accuracy should be confirmed throughout each procedure to assure validity of information provided by the HUD. This involves vigilance from members of the operating team including surgeons, circulating and scrub staff, anesthesia, and intraoperative neurophysiology monitoring teams to assure that the navigation star with fiducials is not moved during preparation or other phases of the operation. Further, loss of accuracy has different implications for different pathologies. For instance, imperfect accuracy can be tolerated for lesions such as extra-axial tumors or aneurysms, where HUD may serve as a guide to the general vicinity of the lesion, but the lesion itself is obvious thereafter. On the other hand, excellent HUD accuracy is essential for normal-appearing lesions, such as low-grade gliomas, and deep intra-axial lesions that have no other landmarks. It is FIGURE 8. Arachnoid opening. A, A patient with previous subarachnoid hemorrhage from a ruptured posterior inferior cerebellar artery aneurysm with aneurysm recurrence following coiling, as seen here on a lateral digital subtraction angiography. B and C, In this case, the aneurysm was painted (green). The HUD was used to tailor a focused arachnoid opening directly over the aneurysm. . Corticectomy. A, A patient with a lateral ventricular AVM that had previously undergone radiation, but was not obliterated, had undergone cystic change as seen in this contrast-enhanced coronal MRI with a planned temporal trajectory. In this case, the AVM was painted (red). The HUD was activated after dural opening and was used to choose a precise temporal cortisectomy (B) to reach this deep lesion (C). The HUD allowed for visualization of an accurate, narrow, and safe trajectory to a deep location.
FIGURE 10.
Intradural drilling. A, A patient with a tuberculum meningioma with a lateral extent, as seen on contrast enhanced coronal MRI. Given the lateral extent, the patient was selected for a transcranial approach, specifically a bifrontal craniectomy and subfrontal approach to the tumor. In this case, the tumor (yellow), optic nerves (green), and carotid arteries (red) were painted. The optic nerve can be seen entering the optic canal and then taking a normal slightly lateral trajectory. B, The HUD is used here to understand the course of the optic nerve within the optic canal while drilling the orbital roof. essential that the operator understands the importance of accuracy for each case.
The second major HUD limitation is that the painted objects that are injected into the microscope can become distracting from the normal anatomy. There is a learning curve for visualizing normal anatomy while the HUD is active and for integrating information provided by the HUD graphical overlay. Distraction was cited as the reason for disabling the HUD in 38.7% of cases for which HUD was disabled during a case. There is certainly room for improvement in terms of seamlessly integrating HUD without disruption. Figure 10, the optic nerve is first embedded within the tumor and difficult to visualize. B, The HUD provides guidance as to its location, and once a portion of the tumor has been removed, the optic nerve is better visualized. that the surgeon did not have to turn his or her head 180 • . The surgeons felt they could maintain focus on the operative task without having to move their head or shift focus. HUD has been used in ophthalmologic procedures, 2,3 diabetic limb salvage surgery, 4 orthopedic procedures, 5 and bedside procedures such as central line placements. 6 Anesthesiologists have used HUD to view vital signs. 7 Many of these approaches utilize Google Glass. HUD has been used in the aviation industry for many years to project data on to the window in front of the pilots' eyes. Similarly, automated surgical trajectories utilizing microscope-navigation integration have been described. 8 The concept of augmented reality (AR) in neurosurgery has been explored as early as the 1990s, 9,10 including reports of injecting overlays into the operative microscope. 11 AR has been used in endoscopic transsphenoidal surgery with virtual images of the tumor and nearby structures overlaid onto the endoscopic tower view. 12 Kockro et al 13 described the Dex-Ray system in which a handheld probe on the skin surface integrated with projections on an adjacent screen. Deng et al 14 described an easyto-use AR neuronavigation system using a tablet PC to view the virtual image. Cabrilo et al 15 described AR use for 28 patients with 39 unruptured aneurysms in which preoperative imaging was injected into the operative microscope. In this work, Brainlab was integrated with Zeiss, as it was in our study. The authors showed examples of bony anatomy projected onto the skin to tailor the incision, vessel anatomy projected onto the bony surface to tailor a craniotomy, as well as aneurysm projection onto the arachnoid to tailor the final dissection. The authors also describe its utility in positioning the head (10%), tailoring the craniotomy (63.3%), minimizing arachnoid dissection (66.7%), choosing clip position (92.3%), and its overall major impact (16.7%). The same authors also describe the use of AR in the treatment of AVMs and during bypass surgery. 16,17 They found it to be less useful for obtaining relevant information regarding feeding arteries during the AVM surgery, but helpful in identifying donor and recipient vessels during the bypass surgery, especially outlining the superficial temporal artery on the skin beforehand.
Limitations
Our study is limited first by its retrospective nature. A prospective assessment of accuracy and utility would improve the strength of the study. Secondly, our assessments of accuracy and utility are entirely subjective and therefore it is difficult to truly quantify the accuracy and utility of HUD. Our assessment of accuracy was subjective (not objective). Further, we only assessed accuracy once during a given case, rather than at multiple time points to demonstrate if there is accuracy deterioration. Finally, we did not record the source of lost accuracy (eg, brain shift vs poor registration vs poor painting), which is an important factor to understand. Although we reported utility by describing the phases of surgery in which HUD was used, the decision to use HUD in a given phase of surgery was entirely operator dependent. Finally, we have not demonstrated its use in comparison to non-HUD cases and we have not demonstrated an impact on outcome. In future investigations, it would be useful to assess operative time, surgical approach, extent of resection, and patient outcome in HUD and non-HUD cases.
CONCLUSION
Our early experience with HUD technology demonstrates that it can be safely used for a wide variety of vascular and oncologic intracranial pathologies and has potential value during multiple stages of surgery. A prospective assessment of the technology with predetermined endpoints is needed. | 2018-04-03T01:12:45.700Z | 2017-10-10T00:00:00.000 | {
"year": 2017,
"sha1": "a7e6c53149bd530fde7108a715aa02d861f26055",
"oa_license": "CCBYNCND",
"oa_url": "https://academic.oup.com/ons/article-pdf/15/2/184/25137011/opx205.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "a7e6c53149bd530fde7108a715aa02d861f26055",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
15528380 | pes2o/s2orc | v3-fos-license | Effect of organic acids in dental biofilm on microhardness of a silorane-based composite
Objectives This study evaluated the effect of lactic acid and acetic acid on the microhardness of a silorane-based composite compared to two methacrylate-based composite resins. Materials and Methods Thirty disc-shaped specimens each were fabricated of Filtek P90, Filtek Z250 and Filtek Z350XT. After measuring of Vickers microhardness, they were randomly divided into 3 subgroups (n = 10) and immersed in lactic acid, acetic acid or distilled water. Microhardness was measured after 48 hr and 7 day of immersion. Data were analyzed using repeated measures ANOVA (p < 0.05). The surfaces of two additional specimens were evaluated using a scanning electron microscope (SEM) before and after immersion. Results All groups showed a reduction in microhardness after 7 day of immersion (p < 0.001). At baseline and 7 day, the microhardness of Z250 was the greatest, followed by Z350 and P90 (p < 0.001). At 48 hr, the microhardness values of Z250 and Z350 were greater than P90 (p < 0.001 for both), but those of Z250 and Z350 were not significantly different (p = 0.095). Also, the effect of storage media on microhardness was not significant at baseline, but significant at 48 hr and after 7 day (p = 0.001 and p < 0.001, respectively). Lactic acid had the greatest effect. Conclusions The microhardness of composites decreased after 7 day of immersion. The microhardness of P90 was lower than that of other composites. Lactic acid caused a greater reduction in microhardness compared to other solutions.
Introduction
Use of resin-based restorative dental materials has greatly increased in recent years due to their optimal esthetics, enhanced properties, easy handling and the ability to optimally bond to tooth structure. 1 The main drawback of composite resins is their polymerization shrinkage and the resultant stress that can lead to gap formation at the tooth-restoration interface, microleakage, hypersensitivity, pulp irritation, marginal discoloration and recurrent caries. 2,3 Low-shrinkage silorane-based composites were introduced to overcome these shortcomings. They have low polymerization shrinkage due to the ring-opening polymerization mechanism of the oxirane molecule. 3 Restorative materials should have adequate longevity in order to be considered clinically successful. 4 The survival of composite restorations depends not only on their innate characteristics, but also on the surrounding environment. 5,6 Composite materials are more susceptible to chemical degradation than metal or ceramics due to their organic matrix. 7 The oral cavity is a complex aqueous environment where dental restorative materials are exposed to several factors, namely saliva and low pH due to the consumption of acidic foods and release of organic acids in the dental biofilm. These conditions have a destructive effect on the polymer network, affecting its physical and chemical properties in the short or long term. 8 Numerous studies have evaluated water sorption, solubility and mechanical properties of composites after immersion in water, artificial saliva or ethanol in order to better understand the process of composite degradation. [9][10][11] Hardness is an important characteristic of restorative materials correlated with their intraoral compressive strength and resistance to softening. 12 Low surface hardness is strongly correlated with insufficient wear resistance and susceptibility to scratching. It can also compromise fatigue strength and lead to restoration fracture. 5 Dental biofilm contains high concentrations of lactic acid, acetic acid and propionic acid. 13,14 Previous studies have indicated that accumulation of dental biofilm does not depend on the oral hygiene or technique of plaque removal by the patients. 15 Everyone can have the potential of producing organic acids in dental biofilm. 16 It has been reported that low pH may affect the surface hardness of resin-based composites. 17 To date, limited studies have investigated the effect of organic acids present in dental biofilm on methacrylate-based composites. 11,18 On the other hand, it has been claimed that silorane-based composites are less soluble due to the presence of siloxane molecules. 19 However, to the best of our knowledge, no study has evaluated the effect of these acids on the surface hardness of silorane-based composites. Thus, this study aimed to assess the effect of lactic acid, acetic acid and distilled water on microhardness of a silorane-based composite compared to two methacrylate-based (nanofilled and microhybrid) composites. The null hypotheses were that type of composite would have no effect on the microhardness and that type of storage media would have no effect on microhardness.
Materials and Methods
The brand names, composition and the manufacturing company of the composites used in this study are shown in Table 1.
Composite specimen preparation
First, a stainless steel mold, 10 mm in diameter and 2 mm in thickness, was placed on a glass slab. Composite resin was applied to the mold and another glass slab was placed over it to ensure surface smoothness and uniform thickness of specimens and also to prevent void formation. According to the manufacturer's instructions, composite specimens were cured on both sides for 20 seconds using an LED light-curing unit (Valo, Ultradent Products Inc., South Jordan, USA) with 1,000 mW/cm² intensity and then polished with 1,200, 1,500, 2,000, 2,500, 3,000 and 5,000 grit abrasive papers. A total of 90 specimens were fabricated (30 of each composite). Samples were immersed in an ultrasonic bath containing water for 4 minutes followed by 24 hours of distilled water storage at 37℃ to allow completion of polymerization. Baseline microhardness was assessed using a Vickers microhardness tester.
Organic acids on microhardness of composites
Immersion in the media
Immediately after measuring the baseline microhardness, specimens in each composite group were randomly divided into 3 subgroups of 10 and coded. Subgroup 1 specimens were immersed in screw-top vials containing distilled water (pH = 7) as the control subgroup, subgroup 2 specimens were immersed in lactic acid (pH = 4, 0.01 M), and subgroup 3 into acetic acid (pH = 4, 0.01 M). The vials containing specimens were stored in an incubator at 37℃ for 7 days.
Microhardness test
The microhardness of specimens was measured at baseline and 48 hours and 7 days after immersion using a digital microhardness tester (Vickers hardness testing machine, KB HardWin XL, KB Pruftechnik GmbH, Germany), and a 100 g load was applied by the indenter of the Vickers machine for 30 seconds at room temperature. Three indentations with more than 1 mm distance from the disc margins were made at different surface areas and the mean microhardness was calculated using the microhardness values of the three indentations. For the calculation of the Vickers microhardness number (HV), the lengths of the two diagonals of each indentation were measured, and HV was calculated from the applied load F and the mean length d of the two diagonals of each indentation. 20
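As an illustration, the standard Vickers relation HV = 1.8544 × F / d² (F in kgf, d in mm) can be applied to the 100 g load used here; the diagonal lengths in the sketch below are illustrative values, not measurements from the study.

```python
# Illustration: Vickers hardness number from the applied load and the measured
# indentation diagonals (HV = 1.8544 * F / d^2 with F in kgf and d in mm).
def vickers_hardness(load_kgf: float, diag1_mm: float, diag2_mm: float) -> float:
    d_mean = (diag1_mm + diag2_mm) / 2.0
    return 1.8544 * load_kgf / d_mean ** 2

load_kgf = 0.100                                            # 100 g load used in this study
print(round(vickers_hardness(load_kgf, 0.055, 0.057), 1))   # example diagonals -> ~59 HV
```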
Electron microscopic assessment
Two extra specimens were fabricated in each group and evaluated before and after 7 days of immersion using a scanning electron microscope (SEM, KYKY SBC-12, Beijing, China). For this purpose, the surface of the specimens was completely dried and gold-coated with a sputter coater. SEM analysis was then performed at a voltage of 20 kV at ×3,000 magnification.
Statistical analysis
Repeated measures ANOVA was used for the comparison of microhardness of different composite specimens before and after immersion in the respective media. The microhardness value at different time points was considered as the repeated factor and the media factor and type of composite were considered as the between subject factors. If the interaction was significant, two-way repeated ANOVA was applied for the comparison of microhardness of composite specimens at each time point separately for each medium, separately for each composite in different media and also for the comparison of microhardness changes based on the type of composite and storage medium. Data were analyzed using SPSS software (IBM SPSS statistics 18, SPSS Inc., Chicago, IL, USA). A p value < 0.05 was considered significant.
Microhardness test results
The mean microhardness values are shown in Table 2. Repeated measures ANOVA revealed that the microhardness of all composite specimens decreased after 7 days of immersion (p < 0.001) and different composites changed variably in microhardness in different media (p < 0.001). At baseline, the interaction effect of type of composite and the media on the microhardness was not significant (Two-way ANOVA, p = 0.429), and we could show that the water immersion before baseline measurement after light curing did not have any effect on the microhardness. The microhardness values of the composites were significantly different (p < 0.001), that is, the microhardness of Z250 was higher than Z350 (p < 0.001) and the latter was higher than P90 (p < 0.001). The effect of media on microhardness was not significant (p = 0.346). At 48 hours after immersion, the interaction effect of type of composite and the media was not significant (p = 0.444). The microhardness values of the composites were significantly different (p < 0.001). The microhardness values of Z250 (p < 0.001) and Z350 (p < 0.001) were higher than that of P90. However, the microhardness values of Z250 and Z350 were not significantly different (p = 0.095). Also, the effect of the media on microhardness was significant (p = 0.001). The difference between lactic acid and distilled water (p = 0.001) and lactic acid and acetic acid (p = 0.043) in this respect was significant. However, distilled water and acetic acid had no significant difference in this regard (p = 0.403). At 7 days, the interaction effect of independent variables on microhardness was not significant (p = 0.111). The microhardness values of the composites were significantly different (p < 0.001). The microhardness of Z250 was higher than Z350 (p < 0.001) and the latter was higher than P90 (p < 0.001). The effect of media on microhardness was significant as well (p < 0.001). The differences between distilled water and lactic acid (p < 0.001) and lactic acid and acetic acid (p = 0.031) were significant in this respect, whereas distilled water had no significant difference with acetic acid (p = 0.138, Table 1).
SEM results
SEM images before and after immersion in the respective media are shown in Figures 1 -3
Discussion
Composites compared in our study were all manufactured by 3M ESPE. P90 is a silorane-based and Z250 and Z350 are methacrylate-based composites with similar resin base (bisphenylglycidyl dimethacrylate [Bis-GMA]; ethoxylated bisphenol-A dimethacrylate [Bis-EMA]; urethane dimethacrylate [UDMA]; triethylene glycol dimethacrylate [TEGDMA]) and different filler content (microhybrid and nanofilled). According to Distler and Kröncke , lactic acid and acetic acid account for 70% of the acids present in dental biofilm. 13 Thus, we used these two acids in our study. The pH of acids used in our study was adjusted at 4, because previous studies have reported the pH of 4 as the lowest pH of dental plaque. 13 Moreover, all specimens were stored in screw-top dark vials in an incubator at 37℃ during the study period in order to simulate the oral environment as much as possible. Surface resistance of materials to chemical degradation and their mechanical properties relate to wear resistance, and hardness measurement relatively determines this characteristic. [21][22][23] In our study, the microhardness of all groups decreased after 7 days of immersion compared to the baseline value. Longer storage time affects the filler surface or the fillermatrix bond. 24 It has been confirmed that water and weak acids can cause inorganic filler surface degradation; this can be clearly seen in SEM images of specimens 7 days after immersion in distilled water and acidic solutions. 25 Degradation of inorganic fillers may play an important role in microhardness reduction. 26 This finding is in accord with the results of Honorio et al., and in contrast to those of Wan Bakar and Hashemi et al. 17,27,28 In our study, the microhardness of P90 silorane-based composite at all time points was lower than that of the two methacrylate-based composites. This difference in microhardness can be due to the filler type and content. P90 is a silorane-based microhybrid composite filled with fine quartz particles, whereas Z250 and Z350 contain zirconia-silica particles. The Knoop hardness is 820 for quartz and 1,160 for zirconia particles. 29 This may be responsible for the lower hardness of P90. On the other hand, hardness is correlated with the degree of conversion (DC) and it has been shown that DC of silorane-based composites is lower than that of methacrylate-based resins explaining the lower baseline hardness of P90. 30,31 In our study, the microhardness of this composite significantly decreased after immersion, which is in contrast to the results of Kusgoz et al. 31 They demonstrated that the microhardness of this composite remained unchanged after 7 and 30 days of water storage. Acids can release unreacted monomers in composites (due to low DC) via penetration into resin matrix and this issue may be responsible for the reduction in microhardness of P90 in our study. 1,32 The microhardness of methacrylate-based composites also significantly decreased after 7 days of immersion but this reduction in Z350 was greater than the reduction in Z250. Z350 is a nanofilled composite. Its filler system is comprised of a combination of 20 nm silica nanofillers and 0.4 -0.6 μm zirconia-silica nanoclusters. 33 Although some studies have shown that this composite has mechanical properties similar to those of hybrid and midi-filled composites, its high surface/volume ratio due to the presence of silica particles may increase its water sorption and lead to the degradation of polymer-filler interface and possible drop in mechanical properties. 
[9][10][11]34,35 On the other hand, Z350 contains large volumes of silane (γ-methacryl oxypropyltrimethoxysilane) due to high filler content and thus, may be more susceptible to hydrolysis and increased solubility. SEM image of this composite after immersion confirms this finding. Lactic acid caused a greater reduction in microhardness than other solutions. Lactic acid is a carboxylic acid with -COOH and -OH functional groups. There is a high possibility that these functional groups form hydrogen bonds with the polar side of methacrylate monomer present in the matrix of Z250 and Z350, namely -OH in Bis-GMA, -OH in TEGDMA and Bis-EMA and -NH in UDMA causing greater water sorption and subsequently higher matrix softening. The SEM image of Z350 also confirms this theory. However, the SEM image of Z250 indicates scraped off filler particles, which may be responsible for decreased microhardness.
P90 is expected to have less solubility due to the presence of the siloxane molecule. However, its microhardness significantly decreased and degradation of inorganic fillers was evident on the SEM image. It appears that the solutions used in our study decreased its microhardness by affecting the silane coupling agent or the filler particles. On the other hand, it has been stated that chemical softening occurs when the solubility parameter of the resin matrix of composites is similar to the solubility parameter of storage media. 36 No definite information is available regarding the solubility parameter of silorane, but the proximity of the solubility parameter of P90 to that of the solutions used in this study may also be responsible for the significant reduction of P90 microhardness compared to other composites. The aim of our study was to evaluate the immediate effect of organic acids in dental biofilm on microhardness of composites; evaluating their longer-term degradative effect would require storing the samples for longer periods.
Conclusions
Within the limitations of this study, the microhardness of all composites decreased after 7 days of immersion. The microhardness of P90 was lower than that of the other composites at all time points. Lactic acid caused a greater reduction in microhardness compared to the other solutions.
"year": 2015,
"sha1": "838a3a59642b86faa797e205709832f81977f62e",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.5395/rde.2015.40.3.188",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "838a3a59642b86faa797e205709832f81977f62e",
"s2fieldsofstudy": [
"Materials Science",
"Medicine"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
233805089 | pes2o/s2orc | v3-fos-license | The new era in office-based facial rejuvenation: Promising technology of silicone threads Frontiers in Life Sciences and Related Technologies
Aging is unpreventable, although its symptoms vary considerably among individuals because of genetic determinants and one's life habits. Sun exposure and bad habits like excessive alcohol consumption and smoking accelerate the aging process and urge people to seek a solution to reverse the changes, especially for the most prominent part of our body, the face. Unfortunately, there is no single simple solution; the options include a range of surgical and non-surgical interventions. Relatively simple methods have fewer risks, but the reversal effect is also minor. These include neurotoxin and filler injections as well as energy-based devices. More competent surgical options, alas, come with a long and difficult recovery period and diverse, sometimes inevitable, complications. Most of the time, people are scared of surgery and accept less invasive methods. Among these, thread lift is perceived as the missing link between surgery and non-invasive methods. Unfortunately, until recent years, the results of threads have not been promising, and they also have many complications. A new type of thread that originated in France, made of silicone and polyester, gives promising results. This paper reviews the history and specifications of the threads and tries to explain the logic of their use in facial rejuvenation.
Introduction
Aging is a degenerative process and a complex biological phenomenon caused by intrinsic and extrinsic factors (Charles-de-Sá et al., 2018). Intrinsic aging is largely genetically determined and affects the skin through a slow and partly reversible degeneration of connective tissue (Uitto et al., 1986). On the other hand, extrinsic aging, primarily ultraviolet radiation, results in premature aging even in young individuals (Scharffetter-Kochanek et al., 2000). The neck and face, like the hands, are sun-exposed areas. These areas are under an overlapping influence of intrinsic and extrinsic factors, which produces a more complex and faster aging process (Fisher et al., 2002). The symptoms of aging can be concealed in some areas, thanks to advanced surgical techniques (Rodriquez-Bruno and Papel). Although surgery's success is not deniable in this area, there are particularly challenging cases either due to anatomical variations or psychological expectations based on the individual's perception of the problem and motivation for change. A thorough patient evaluation and counseling are mandatory before considering any surgical approach in the neck and face, and modifications of the surgical approach are needed, if necessary (Smith and Papel, 2018). On the other hand, surgery alone may fail to meet some neck problems, such as superficial aging changes, as it addresses excision only. Chromophorebased pathologies, vascular changes, epidermal and dermal nonchromophore-based lesions are among these problems (Mulholland, 2014). It is also important to individualize the relationship between the lower face, jawline, and neck (Celik, 2020a). As people age, some of the lower facial tissues descend beyond the jawline, which makes the correction of facial tissues crucial to accomplish a substantial improvement in the aesthetics of the neck (Rohrich et al., 2006). In brief, good results with face and neck rejuvenation can be achieved if it is combined with lower face and jawline procedures, filler injections, and the inclusion of energy-based therapies (Celik, 2020b).
Nonetheless, there are three important reasons for writing this paper:
1. Surgery offers its advantages together with its complications, like a longer recovery period and inevitable scars.
2. Not every doctor can perform this kind of surgery. This includes inexperienced plastic surgeons.
3. Not every patient is fond of surgery, and they are looking for the so-called "minimally invasive procedures".
Both patients and doctors need a bridge between surgery and non-surgical rejuvenation (Celik, 2020c). For more than two decades, thread lift methods have tried to fill this gap, though mostly in vain.
Non-absorbable polypropylene threads
Thread lifting has come to its current place after a long journey (Savoia et al., 2014), which began in the nineties with non-absorbable polypropylene threads (Sulamanidze et al., 2002). The doctors' early excitement created popular office-based procedures called a lunch-time facelift by the media (Atiyeh et al., 2010). However, they fell out of favor because of high complication rates and fast, temporary results (Lycka et al., 2004). Most of the complications encountered were due to the non-absorbable suture material (Silva-Siwady et al., 2005). These types of complications necessitated surgical interventions to solve a problem of non-surgical intervention. Complications and the lack of a good long-term effect of the threads were the most common reasons for the technique's abandonment (Rachel et al., 2010).
Absorbable threads
With the introduction of mixed absorbable and non-absorbable threads and purely absorbable threads, thread lifting has again gained attention. Several manufacturing companies have produced diverse types of threads. The Silhouette Soft ® suture (Silhouette Soft ® , Sinclair Pharma GmbH, Irvine, USA) was made of poly-L-lactic acid (PLLA) and consisted of multiple cones made of polylactide/glycolide copolymer (PLGA). This thread was an absorbable one, and it was different from the other threads of the same company. Silhouette Lift ® was composed of a non-absorbable polypropylene suture and absorbable PLGA cones. Silhouette Instalift ® contained only PLGA in both the sutures and the cones. PLLA and PLGA are biodegradable polymers; PLLA has been a well-known biomedical material for over four decades and has been used in absorbable plates, screws, and suture materials. PLLA triggers a foreign body reaction when implanted into the tissue. This reaction generates a cellular inflammatory response, which leads to the formation of vascularized connective tissue (neocollagenesis) (Bohnert et al., 2019). This neocollagenesis, when coupled with repositioning by the sutures' cones, makes these kinds of suspension sutures a valuable tool for facial rejuvenation (Goldberg, 2020).
Another kind of absorbable thread used for facial rejuvenation is polydioxanone (PDO). Polydioxanone sutures have been used in plastic surgery as intradermal sutures since the '80s (Chusak and Dibbell, 1983). Years later, originating in Asia, PDO threads became available for facial rejuvenation around 2015 (Suh et al., 2015). PDO is a synthetic polymer that is absorbed by the body through hydrolysis within 6 months. The addition of barbs to PDO aims to increase its load-bearing ability when used as suspension sutures. However, although the aim is to lift the ptotic facial tissues, polydioxanone could not achieve this goal, and it is often used as a "solid filler" to treat deep static wrinkles on the face (Kang et al., 2019). Polydioxanone fills the wrinkle first by its volume and later by the mild local inflammatory reaction, which results in lymphocytic infiltration and subsequent fibrosis. Although there are many variations, polydioxanone threads for facial rejuvenation can be classified roughly into three different types: 1. Mono PDO thread: non-barbed and thin (0.07-0.15 mm) monofilament thread.
2. Spring or twin thread: two braided monofilament PDOs or a twined single monofilament. It has higher tensile strength than the mono PDO thread.
3. COG PDO thread: This one has barbs and creates a lifting effect when pulled. The cogs were shown to induce a fibrotic reaction four weeks after the insertion (Jang, 2005). Depending on the direction of the barbs, they can be further categorized.

Although absorbable threads were relatively well-accepted by dermatologists and aesthetic practitioners, plastic surgeons have kept their distance from these kinds of procedures from the beginning. Many cosmetic companies produce their own so-called thread-lifting sutures, and these companies have spent a great deal of their marketing budgets on the training of physicians and non-physicians. In combination with enthusiastic cosmetic professionals, these commercial encouragements are mainly responsible for the extensive spread of this alleged "minimally invasive" lifting procedure. On the other hand, even the most optimistic non-surgeon doctors emphasized that thread lifting is neither an alternative to surgery nor magic per se, but it can have good results for rejuvenation and skin tightening, especially when combined with other tools of rejuvenation (Ali, 2018). Although the facelift effect seems very subtle, the complications of absorbable sutures are regarded as minimal or moderate, without permanent sequelae (Sarigul Guduk and Karaca, 2018). In another study, although the results seemed good, the authors concluded that, because of the high complication rates of PDO threads, their short-lived benefits, and similar downtime and costs, traditional facelifting was to be preferred (Bertossi et al., 2019).
Most of the studies published show only a limited effect and longevity. A recent review of thread-lift sutures (Gulbitti et al., 2018) concludes that the use of threads seems to be promising when they are used in combination with an open procedure.
Silicone threads
Being a skeptical plastic surgeon, the author of this publication refused to use threads until 2016, when non-absorbable silicone threads were introduced in the Turkish market (Spring Thread ® , 1st SurgiConcept 96 Rue de Pont, Rompu 59200 Tourcoing, France). Silicone threads do not rely on fibrosis, remodeling, and subsequent skin tightening for their lifting effect as absorbable threads do. Instead, their mechanism is simpler and more rational for a plastic surgeon: the lift effect is created by the upward movement of tissues, thanks to the barbs. Although the mechanism and logic are similar to those of the primitive non-absorbable sutures, the technology and the material used are different. The silicone thread is a biocompatible composite material. The outer part is medical grade silicone, which envelopes the polyester inner part. It is elastic and can be elongated by 20%. This elasticity provides a spring effect that compensates for the creep of classic threads. The author used this thread alone and in combination with surgery (Celik, 2020d). Although it is beyond this paper's scope, the main advantages and disadvantages of Spring Thread ® should be addressed concisely. The usual thread reactions, like the extrusion or granuloma formation encountered with polypropylene threads, were not seen with silicone threads. The lifting effect was more powerful than that of the other barbed sutures the author had used before. On the other hand, slipping of the silicone coating from the polyester core during the thread's implantation was not unusual, necessitating another box of thread. According to the author, another problem was the elasticity of the silicone. Although it was presented as one of the advantages of this new type of thread in the beginning, over time it was realized that the elasticity was in fact one of its disadvantages. Briefly, elasticity causes a shorter thread length and fewer barbs inside the tissue; the thread shortens back to its unaltered length after some time, and the pull effect decreases. Later, in 2019, after 2 years of experience with more than 200 patients, the author's perception of the threads was that they could not replace open face and neck lift surgeries, but they can be applied to many patients who want to avoid the complications of open surgery and are seeking at least a few years of facelift effect. For this purpose, the author designed a new way of eyebrow lift and canthopexy instead of an endoscopic temporal lift (Fig. 1).
In 2019, the author started to use non-elastic silicone threads (Infinite-Thread ® , Thread & Lift Laboratory, Brussels, Belgium). This one, just like the other silicone thread, is made in France, but the company's headquarters are in Brussels. After using Infinite-Thread ® for minimally invasive facelifts, the author realized that this thread is a powerful game-changer. It has some unique properties that differ from the previous threads. It also has a polyester core coated with silicone. There are 4 cogs in every 1.5 cm. Each series of cogs is offset at 45 degrees, creating an "8 axis" hooking. The diameter is 1.4 mm from cog to cog, but the cylinder of silicone is 0.5 mm. The cogs have a unique design that prevents them from turning and slipping from the tissue, thanks to their conical shape, which reinforces the cog at the base. The cogs have rounded tips that prevent the cheese-wire effect (a common problem of early polypropylene threads). Because of this new thread's powerful lift effect, a neck lift is also possible (Fig. 2). Infinite-Thread ® is deliberately non-elastic (it does not elongate) but flexible.
There are few scientific articles about the results of silicone threads for facial rejuvenation. Although, according to the author of this paper, non-elastic silicone threads seem promising, more studies are needed to establish their long-term effects and their capacity to lift the face.
Discussion
The minimally invasive facial rejuvenation concept is tempting for patients. It has gained high popularity among physicians and patients looking for an easy facial lifting method and skin rejuvenation. Although plastic surgeons' curiosity has withered because of the threads' short-lived effect and numerous complications, patients have never lost interest, thanks to cosmetic doctors who offer the procedure (Celik and Gok, 2020). On the other hand, the literature review shows no evidence to support the threads' effectiveness outside industry-supported studies.
We encounter many rumors of perfect lunch-time facelifts, websites promising perfect results, media interviews, and case reports. However, we do not come across detailed scientific papers about the results, long-term follow-ups, and the other kinds of information we are used to seeing in scientific publications on other subjects. That is why plastic surgeons have lost their trust in threads, no matter what the material or the technique is. There are rare but promising publications about thread use in open surgeries published by surgeons (Matarasso, 2013; O'Connell, 2015). A combination of threads with other methods of facial rejuvenation also seems promising (Celik, 2020e). Nevertheless, the success of these encouraging results is not only because of the threads but also because of the other techniques used with them, and such combinations are far from being a minimally invasive office procedure. On the other hand, recent developments in thread technology reveal a new era in facial rejuvenation. With these new silicone threads, office procedures under local anesthesia may have a rejuvenating effect similar to that of open surgery, with minimal downtime and less risk of serious complications. This is especially important for plastic surgeons who strictly adhere to conventional methods. A 2008 publication by D'Amico et al. foresaw this future (which is now our present) and warned plastic surgeons about the imminent danger of neglecting these procedures. Their paper clearly shows the trends among consumers (patients), plastic surgeons, non-plastic surgeon core providers (dermatologists, ENT specialists), and non-core providers (all other cosmetic procedure providers) at that time. One vital message that can be learned from this study is that patients would choose plastic surgeons to perform more invasive procedures (90%), but the percentage decreases rapidly for less invasive (40%) and the least invasive procedures (15%). Another important note is that, of consumers who had had a positive experience with a non-plastic surgeon for a non-invasive procedure, 47 percent said that the same provider would be their first choice for an invasive procedure (D'Amico et al., 2008). This means that if plastic surgeons keep their distance from these minimally invasive methods, their number of surgical cases may drop over time.
Conclusion
Thread lifting is neither a complication-free nor a minimally invasive lunch-time beautification procedure. Quite the contrary, it is not different from surgery in terms of complications, technical demands, and effects. Then why should we choose a thread lift instead of surgery? The author of this paper thinks that we should not commit to only one of them. We should choose the right technique for the right patient. The decision should be made by mutual agreement between the doctor and the patient. We should quit promoting surgery while vilifying thread lifts. One of our latest unpublished studies shows that 75% of patients who ask for a thread lift but are refused by the doctor and offered surgery instead would find another doctor willing to do the thread lift. More than half of the remaining 25% forgo any procedure. Plastic surgeons should overcome their prejudice and give another chance to evolving thread technology. | 2021-05-07T00:03:07.097Z | 2021-03-05T00:00:00.000 | {
"year": 2021,
"sha1": "281599aa966db090d0b7117bba6c149d8b0ee27d",
"oa_license": "CCBY",
"oa_url": "https://dergipark.org.tr/en/download/article-file/1526379",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "e692b499a51cec00de657f123d92bae2043ee212",
"s2fieldsofstudy": [
"Medicine",
"Materials Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
113406832 | pes2o/s2orc | v3-fos-license | Fabrication of Ni–Co–BN (h) Nanocomposite Coatings with Jet Electrodeposition in Different Pulse Parameters
In order to study the effects of pulse parameters on jet electrodeposition, Ni–Co–BN (h) nanocomposite coatings were prepared on the surface of steel C1045. The samples were analyzed and characterized by scanning electron microscopy (SEM), energy dispersive spectroscopy (EDS), X-ray diffraction (XRD), laser scanning confocal microscopy (LSCM), microhardness tester, and electrochemical workstation. The experimental results showed that the contents of Co and BN (h) nanoparticles in the coatings changed with the variation of pulse parameters. When the pulse frequency was 4 kHz and the duty cycle was 0.7, their contents reached maxima of 27.34 wt % and 3.82 wt %, respectively. The XRD patterns of the coatings showed that the deposits had a face-centered cubic (fcc) structure, and there was an obvious preferred orientation in the (111) plane. With the increase in pulse parameters, the surface roughness of the coatings first decreased and then increased, with the minimum value obtained being 0.664 μm. The microhardness of the coatings first increased and then decreased with increase in pulse parameters. The maximum value of the microhardness reached 719.2 HV0.05 when the pulse frequency was 4 kHz and the duty cycle was 0.7. In the electrochemical test, the potentiodynamic polarization curves of the coatings after immersion in 3.5 wt % NaCl solution showed that the pulse parameters had an obvious effect on the corrosion resistance of the Ni–Co–BN (h) nanocomposite coatings. The corrosion current density and polarization resistance indicated that the coatings had better corrosion resistance when the pulse frequency was 4 kHz and the duty cycle was 0.7.
Introduction
Wear and corrosion are the most common types of failure in parts. The wear and corrosion processes are gradual and not very distinct, making it hard for people to notice them easily. Therefore, they usually have huge negative effects in industry and daily life due to the lowering of the operational efficiency of machines. According to relevant reports, annual economic losses caused only by corrosion damage in the world exceed those due to natural disasters. The unique volume effect and surface effect of nanomaterials have great development potential and application prospects in new materials and functional materials. Embedding nanoparticles into nanocomposite materials can optimize the microstructure and improve the mechanical properties. Also, new functional properties may be observed in the nanocomposite materials. Ni-Co alloys have high hardness and excellent wear resistance and corrosion resistance. They are often coated on the surface of parts as protective materials to improve the surface properties. In recent years, the research and exploration of Ni-Co alloy coatings has always been a focus of attention for scholars. The preparation and properties of Ni-Co nanocomposite coatings with different nanoparticles (SiC [1,2], SiO2 [3], Al2O3 [4][5][6], ZrO2 [7-9], TiO2 [10,11], CNT [12], etc.) as the second phase have been reported. BN (h) nanoparticles have excellent performance in electrical insulation, thermal stability, chemical stability, and self-lubrication. Their advantages in biomedical applications [13,14], microelectronics [15], nanophotonics [16], composite materials [17], electrochemical catalysis [18], hydrogen storage materials and fuel cells [19,20] have been discovered and preliminarily applied. Ni-Co composite coatings with BN (h) as the second phase have better properties in self-lubrication, wear resistance, high temperature resistance, and corrosion resistance. Therefore, studies of Ni-Co-BN (h) nanocomposite coatings have important significance for the improvement of material surface properties.
Electrodeposition technology has the advantages of controllable deposition process and better coatings thickness. It is one of the common methods used to prepare metal coatings. Pulse electrodeposition is a method of electrodeposition in which the circuit controlling current is switched on and off periodically, or a pulse waveform is superimposed on a fixed DC. In the process of pulse electrodeposition, the relaxation of current or voltage can not only weaken the concentration polarization of the cathode, but can also produce adsorption and desorption effects on the cathode surface. In addition, periodic interruption of the circuit can prevent the continuous growth of grains and refine the grain size. The above characteristics of pulse electrodeposition made it possible to obtain higher deposition efficiency, smaller grain size, and had obvious advantages in improving coatings performances and saving precious metals [9,21]. Figure 1 shows a schematic image of the jet electrodeposition device. As shown in Figure 1, jet electrodeposition is a technique in which the sample is used as the cathode and the nozzle as the anode. Under the action of an electric field, the plating solution is sprayed from the nozzle to the cathode surface to achieve electrodeposition. Compared with traditional electrodeposition, it can allow for a higher over-potential in the deposition process and has a higher deposition efficiency. The periodic sweeping of the nozzle relative to the cathode could continuously change the deposition area, thereby preventing the continuous growth of the grains and refining the grain size [22][23][24]. With the help of the numerical control device, the jet electrodeposition can also obtain coatings on the surface of parts with different shapes and sizes. These characteristics enable it to have good application prospects in the preparation of nanocrystalline materials and local repair of parts surface [24].

In order to study the effects of pulse parameters on the properties of Ni-Co-BN (h) nanocomposite coatings, Ni-Co-BN (h) nanocomposite coatings were prepared by jet electrodeposition with different pulse parameters. The surface morphology and cross-section images, composition, phase structure, surface roughness, microhardness and corrosion resistance of Ni-Co-BN (h) composite coatings were characterized and analyzed by SEM, EDS, XRD, LSCM, microhardness tester, and electrochemical workstation, respectively. The above tests and analysis results may be useful to provide theoretical reference for industrial production and practical application of pulse jet electrodeposition technology.
Experimental Materials and Pretreatment
A steel C1045 sample with a size of 7 mm × 8 mm × 30 mm was used as substrate and a high purity nickel plate (99.9%) was used as anode nozzle in the pulse jet electrodeposition process.The aperture of the nozzle was a 10 mm × 1 mm rectangle.The steel C1045 substrate was activated by DC power source after degreasing and polishing.In the activation process, the residual oil contamination on the substrate surface was removed by electric cleaning solution.In the process of electric cleaning, the substrate was connected to the negative electrode and the high purity nickel plate was connected to the positive electrode.The treatment current was 1 A, and the processing time was 20 s.After electric cleaning treatment, the substrate was connected to the positive electrode, and the high purity nickel plate to the negative electrode then immersed into strong activation solution for strong activation treatment.The strong activation current was 0.5 A and the treatment time was 30 s. Finally, the same activation mode with strong activation was used for weak activation in weak activated solution.The weak activation current was 0.5 A and the treatment time was 20 s.The different solution constituents for this paper are shown in Table 1.The grade of all the chemicals was analytically pure, and the solvent and cleaning solution used in the experiments was deionized water.After the plating solution was prepared, the pH value of the solution was adjusted to 4.3 by NaOH and HCl.The BN (h) nanoparticles size added in the plating solution was 100 nm and the concentration was 5 g L −1 .
Preparation of Ni-Co-BN (h) Nanocomposite Coating
The jet electrodeposition was carried out on the self-made experimental facility under the action of pulse power source. The activated steel C1045 substrate was connected to the cathode and the high purity nickel plate was connected to the anode. In the process of preparing Ni-Co-BN (h) nanocomposite coatings, the gap between the nozzle and the cathode was 1.6 mm, the temperature of plating solution was 60 °C, the pulse voltage was 18 V, and the electrodeposition time was 20 min. The speed of the reciprocating sweep of the nozzle relative to the cathode was 135 mm s−1, and the injection speed of the plating solution was 1.5 m s−1. After the process of pulse jet electrodeposition, the samples were cleaned in the ultrasonic cleaner for 5 min.
Sample Characterization
The surface morphology and thickness of Ni-Co-BN (h) nanocomposite coatings were characterized using a scanning electron microscope (SEM, Quanta 250, FEI, Hillsboro, OR, USA). The element content was analyzed using an energy dispersive spectrometer (EDS, XFlash Detector 5030, BRUKER, Karlsruhe, Germany). The phase structure of the samples was measured by X-ray diffraction (XRD, X'Pert Powder, PANalytical B.V., Almelo, Holland). A laser scanning confocal microscope (LSCM, OLS4000, OLYMPUS, Tokyo, Japan) was used to measure the surface roughness of the coatings. The average surface roughness at five different positions was determined. The microhardness of the coatings was measured by a microhardness tester (HVS-1000, Laizhou Huayin Test Instrument Co., Ltd., Yantai, China). The test load was 50 g, the loading time was 15 s, and the hardness results took the average value of five different points on the coatings surface. The corrosion resistance of the coatings was tested by an electrochemical workstation (CS350, Wuhan Corrtest Instruments Corp., Ltd., Wuhan, China) in 3.5 wt % NaCl solution. Before the corrosion resistance test, the samples were immersed in the 3.5 wt % NaCl solution for 2 h to obtain stable test results. The potential range of potentiodynamic sweeping was −0.6 to +0.6 V with respect to E_OCV, and the sweeping rate was 0.5 mV s−1.
Effects of Pulse Parameters on the Surface Morphology and Element Content
Figure 2 shows the surface morphologies of Ni-Co-BN (h) nanocomposite coatings with pulse frequency of 4 kHz and varying duty cycles.It can be seen from Figure 2 that the duty cycle has a great influence on the surface morphology of the coatings.When duty cycle was 0.1 (Figure 2a), some different-sized globular protrusions appeared on the coating surface.A deep canyon and a small crack were formed between the adjacent big globular protrusions.As the duty cycle increased (Figure 2a-d), the depth of the canyon gradually decreased and almost disappeared.Further increase in the duty cycle resulted in the depth of the canyon increasing slightly (Figure 2e). Figure 3 shows the surface morphologies of Ni-Co-BN (h) nanocomposite coatings with duty cycle of 0.7 and varying pulse frequencies.The flatness of the coatings surface first increased and then decreased with variation of pulse frequency as shown by Figures 2d and 3a-d.There were a large number of diamond shaped protrusions on the coatings surface as can be seen from Figures 2 and 3. Also, there was no obvious cluster phenomenon of BN (h) nanoparticles on the coating surface.
Figures 4 and 5 show the cross-section images of the Ni-Co-BN (h) nanocomposite coatings with varying pulse parameters. As can be seen from the figures, the coatings thickness changed with the variation of pulse parameters. When the pulse frequency was 4 kHz and the duty cycle was increased (Figure 4a-e), the coatings thickness first increased and then decreased. When the duty cycle was 0.7, the coating obtained a maximum thickness of approximately 37.66 µm. When the duty cycle was 0.7 and the frequency was increased (Figures 4d and 5a-d), the thickness of coatings showed a similar trend with that of the duty cycle. With the increase in pulse frequency, the coatings obtained a maximum thickness at 4 kHz. Because the coatings thickness is an important index for deposition efficiency, better thickness with the same deposition time means that the coating has a better deposition efficiency. Therefore, the change of coatings thickness can reflect the change in the deposition efficiency in electrodeposition. The figures also indicated that the deposits had a compact structure and no obvious stomata were observed in the coatings matrix. The boundary of the substrate and the coating was compact and the coatings matrix was well adhered to the substrate.
Figure 6 shows the EDS spectra of Ni-Co-BN (h) nanocomposite coatings surface with duty cycle of 0.7 and pulse frequency of 4 kHz. Considering the N element in the bath was provided only by the BN (h) nanoparticles, the change of N element content in the coatings directly reflects the change in nanoparticles content. Table 2 shows partial data of Ni-Co-BN (h) nanocomposite coatings with varying pulse parameters. As shown in Table 2, the contents of Co, Ni, and N elements in the coatings change with the variation of pulse parameters. With the increase in duty cycle, the contents of Co and N elements in the deposit first increased and then decreased. The content of Ni element first decreased and then increased with increase in duty cycle. When duty cycle was 0.7, the contents of Co and N elements reached the maximum, that was 27.34 wt % and 3.82 wt %, respectively. At the same duty cycle, the content of Ni element decreased to a minimum value of 68.84%. When the pulse frequency increased, the contents of Co and N elements in the coatings first increased and then decreased. This content variation trend was similar to the one observed when duty cycle was increased. The content of Ni element first decreased and then increased with increase in frequency. The maximum value of Co and N elements was obtained when the pulse frequency was 4 kHz. Those results may be due to the periodic effect of the pulse power source. The change of duty cycle and frequency can affect the connection and disconnection time. Reasonable pulse turn-on and turn-off time can make the metal ions consumed in deposition be timely supplemented. This reduces the concentration polarization of the cathode surface and also aids the cathode surface adsorption and desorption effects. Shorter pulse turn-on time can cause a decrease in grain growth time and lower the deposition efficiency. However, when the pulse turn-on time is extremely long, the consumption of the metal ions on the cathode is extremely high. This can increase the concentration polarization of the cathode surface, which results in rapid hydrogen formation on the cathode surface. The hydrogen layer formed hinders effective deposition of metal ions, thereby reducing efficiency. The amount of nanoparticles in the coatings is largely affected by the deposition efficiency. A low deposition efficiency decreases the capturing capacity of the nanoparticles suspended in the plating solution. As a result, the nanoparticles are either loosely adsorbed into the cathode surface or embedded at a much lower content. This results in the reduction of the nanoparticles content in the coatings. In addition, relevant literature studies show that the content of Co in Ni-Co alloy coating can affect the content of nanoparticles [25,26]. The increase in Co content can increase the content of nanoparticles in the coatings.
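For reference, the connection and disconnection times mentioned above follow directly from the duty cycle D and the pulse frequency f: the period is T = 1/f, the turn-on time is t_on = D·T, and the turn-off time is t_off = (1 - D)·T. The short Python sketch below only illustrates this standard relation for parameter values used in this study; it is not part of the original paper's analysis.

def pulse_times(duty_cycle, frequency_hz):
    # Return (t_on, t_off) in microseconds for an ideal rectangular pulse train.
    period_us = 1e6 / frequency_hz
    return duty_cycle * period_us, (1.0 - duty_cycle) * period_us

# Example: the condition identified as optimal in this work (4 kHz, duty cycle 0.7).
t_on, t_off = pulse_times(0.7, 4000)
print(f"t_on = {t_on:.0f} us, t_off = {t_off:.0f} us")  # 175 us on, 75 us off

At 10 kHz and the same duty cycle, for comparison, t_on shrinks to 70 us and t_off to 30 us, which is consistent with the shortened ion-replenishment window described above.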
Figures 7 and 8 show the XRD patterns of Ni-Co-BN (h) nanocomposite coatings with varying pulse parameters. From the figures it can be seen that the Ni-Co-BN (h) nanocomposite coatings have a face-centered cubic (fcc) structure. The 2theta angles corresponding to the (111), (200), and (220) planes were 44.06°, 51.95°, and 76.54°, respectively. The variation of pulse parameters had no obvious effects on the phase structure. Ni atoms and Co atoms formed a Ni-Co solid solution. As a result of the low content of Co in the coatings, the Ni-Co solid solution formed in the coatings was a single α-phase structure [27,28]. Calculation of the texture coefficient indicated a preferred orientation appeared at (111). Probably due to the relatively low content of BN (h) nanoparticles and also being dispersive in the coatings, the related diffraction peaks were not distinct in Figures 7 and 8.
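The texture coefficient referred to above is typically evaluated with the Harris formula, TC(hkl) = [I(hkl)/I0(hkl)] / [(1/N) Σ I(hkl)/I0(hkl)], where I is the measured intensity of a reflection and I0 the corresponding intensity of a randomly oriented reference powder. The paper does not spell out its calculation procedure, so the sketch below is only a generic illustration of this formula; the intensity values are hypothetical, not data from this work.

# Illustrative Harris texture-coefficient calculation with hypothetical peak intensities.
measured = {"111": 5200.0, "200": 900.0, "220": 400.0}   # measured peak intensities (a.u.)
standard = {"111": 1000.0, "200": 420.0, "220": 200.0}   # random-powder reference intensities (a.u.)

ratios = {hkl: measured[hkl] / standard[hkl] for hkl in measured}
mean_ratio = sum(ratios.values()) / len(ratios)
for hkl, r in ratios.items():
    # TC(hkl) > 1 indicates preferred orientation along that plane.
    print(f"TC({hkl}) = {r / mean_ratio:.2f}")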
According to the XRD patterns, the grain size of the coatings can be calculated by the Scherrer equation. The grain size of Ni-Co-BN (h) nanocomposite coatings in the (111) plane is shown in Table 2. The data showed that the grain size increased with increase in duty cycle, and first decreased and then increased with increase in pulse frequency. This may be because periodic turn-on and turn-off of the pulse power source prevents the continuous growth of grain and helps to refine grain size. When the duty cycle is small, the shorter pulse turn-on time results in shorter grain growth time, and a smaller grain size can be obtained. With increase in duty cycle, the continuous increase in time of growth for grain increases the grain sizes. When the pulse frequency increases within a suitable range, the pulse turn-on time and the grain size decrease. When the pulse frequency is extremely high, the reduced pulse turn-off time affects the replenishment of metal ions on the cathode surface and also weakens the inhibitory effect of adsorbents on grain growth, resulting in the increase in grain size.
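The Scherrer estimate mentioned here has the form D = Kλ / (β cos θ), where K is a shape factor (commonly about 0.9), λ the X-ray wavelength, β the full width at half maximum (FWHM) of the peak in radians, and θ the Bragg angle. The following snippet is only a generic illustration with Cu Kα radiation and an assumed FWHM; it does not reproduce the authors' actual calculation or the values in Table 2.

import math

def scherrer_grain_size(fwhm_deg, two_theta_deg, wavelength_nm=0.15406, k=0.9):
    # Estimate crystallite size (nm) from a single XRD peak via the Scherrer equation.
    beta = math.radians(fwhm_deg)            # FWHM converted to radians
    theta = math.radians(two_theta_deg / 2)  # Bragg angle
    return k * wavelength_nm / (beta * math.cos(theta))

# Hypothetical (111) peak at 2theta = 44.06 deg with an assumed FWHM of 0.45 deg.
print(f"Estimated grain size: {scherrer_grain_size(0.45, 44.06):.1f} nm")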
Effects of Pulse Parameters on the Surface Roughness
The surface roughness of Ni-Co-BN (h) nanocomposite coatings with varying pulse parameters is shown in Table 2. It is clear that the variation of pulse parameters could directly affect the surface roughness of the coatings. The surface roughness of the coatings first decreased and then increased with increase in duty cycle and pulse frequency. As the duty cycle increased from 0.1 to 0.7, the surface roughness decreased from 1.466 to 0.664 µm. With further increase in duty cycle, the surface roughness of the coatings increased to 0.936 µm. With increase in pulse frequency from 2 to 4 kHz, the surface roughness decreased from 0.870 to 0.664 µm. As the pulse frequency increased further to 10 kHz, the surface roughness began to increase, reaching a maximum of 0.995 µm. The above changes in the surface roughness are partly caused by the change in the pulse parameters. As mentioned earlier, a change in pulse parameters can affect the deposition efficiency of the coatings. A coating deposited at a lower deposition efficiency cannot entirely cover the rough structure produced by pretreatment of the substrate, resulting in the difference in surface roughness. On the other hand, the change in the surface roughness may be caused by the fluctuation of the pulse power source [29,30]. Figure 9 is a schematic diagram of the pulse waveform in ideal and actual conditions. As shown in Figure 9a, due to the circuit response and the capacitance effect at the electrode-solution interface, the actual waveform at the moment of pulse turn-on and turn-off inevitably lags behind the ideal waveform, resulting in the distortion of the pulse waveform. In the figure, the ideal pulse turn-on time is t_on, the ideal pulse turn-off time is t_off, the rising edge delay time is t_c, the falling edge delay time is t_d, the peak duration of the actual pulse is t_b, and the amplitude of the pulse is J_P. As shown in Figure 9b, when the conduction time of the pulse is extremely short, the pulse current cannot reach its peak value due to the delay of the rising edge. As shown in Figure 9c, if the turn-on time of the pulse is too long and the turn-off time is too short, the falling edge of the previous pulse will superimpose on the rising edge of the next pulse. Figure 9d shows that, when the pulse frequency is too high, the extremely short turn-on and turn-off times can produce pulse waveforms similar to those produced by a DC power source. The variation of the pulse waveform shown in Figure 9b-d weakens the relaxation effect of the pulse. It also increases the concentration polarization of the cathode surface, weakens the adsorption and desorption effect, and reduces the current intensity of the pulse. Under the combined action of the abovementioned factors, the surface roughness of the coatings differs accordingly.
Effects of Pulse Parameters on the Microhardness
The microhardness of Ni-Co-BN (h) nanocomposite coatings surface with varying pulse parameters is shown in Table 2.The data shows that the microhardness of the coatings is obviously affected by the pulse parameters.With increase in pulse parameters, the microhardness of the coatings first increased and then decreased.When pulse frequency was 4 kHz and duty cycle was 0.7, the microhardness of Ni-Co-BN (h) nanocomposite coatings reached a maximum value of 719.2 HV0.05.The data also indicated that the variation range of microhardness was smaller when the pulse frequency was changed as compared to when the duty cycle changed.
The microhardness of metal-based nanocomposite coatings is mainly affected by two aspects: the microhardness of the metal matrix and the amount of reinforcing particles in the metal matrix.As mentioned earlier, the variation of pulse parameters has no obvious effect on the element types and the phase structure.Therefore, the change of Co content and grain size in the metal matrix is the main factor for change in microhardness of the metal matrix.As the Ni-Co-BN (h) nanocomposite coatings were prepared, Ni atoms and Co atoms formed a single α-phase solid solution.Under the influence of solid solution strengthening, increase in Co content in the coatings is beneficial to the increase in microhardness.On the other hand, according to Hall-Petch relationship, the deformation of crystal material is caused by the dislocation movement in the crystal, and the grain boundary can hinder the dislocation movement.A smaller grain size can increase the proportion of grain boundaries in the material and improve hindrance effect on the dislocation movement.The effects produced by small grain size in macroscopic performance is that the material hardness increases.Therefore, the microhardness of the nanocomposite coatings is affected by the grain size.
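For completeness, the Hall-Petch relationship invoked here is commonly written as

\sigma_y = \sigma_0 + k_y \, d^{-1/2}

where \sigma_y is the yield strength (hardness scales analogously), \sigma_0 is the lattice friction stress, k_y is the strengthening coefficient, and d is the average grain diameter. These symbols are the textbook form of the relation and are not parameters reported in this paper; the expression simply makes explicit why refining the grain size raises the hardness of the metal matrix.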
The effect of reinforcing particles on the microhardness of the nanocomposite coatings is mainly achieved by dispersion strengthening.Nanoparticles embedded in the coatings can improve the number of phase boundaries.This can hinder the movement of dislocation grains, thereby resulting in a large number of dislocation blockages.Moreover, the nanoparticles in plating solution can also promote the crystallization of metal ions and refine the grain size.Therefore, in a suitable range, the more nanoparticles embedded in the coatings, the more obvious the dispersion strengthening effects are and the greater the microhardness of the material.
Effects of Pulse Parameters on the Corrosion Resistance
Potentiodynamic scanning is an important method to obtain electrochemical properties of materials in electrochemical tests.The corrosion current density I corr and polarization resistance R p are important indexes to evaluate the properties.A small corrosion current density I corr and a large polarization resistance R p in the electrochemical test indicate the material has a better property in corrosion resistance.The polarization curves of Ni-Co-BN (h) nanocomposite coatings with varying pulse parameters in 3.5 wt % NaCl solution are shown in Figures 10 and 11, respectively.It can be seen in the figures that the pulse parameters have obvious effects on the corrosion resistance of the Ni-Co-BN (h) nanocomposite coatings.Table 3 shows the electrochemical properties of different samples obtained from the polarization curves.The data in the table shows that the corrosion current density I corr first decreases and then increases with increase in duty cycle.When the duty cycle was 0.7, the corrosion current density I corr reached a minimum value of 0.77 µA cm −2 .When the duty cycle was increased to 0.9, the corrosion current density I corr increased to 3.57 µA cm −2 .The effect of pulse frequency on the corrosion resistance of the coatings was similar to that of duty cycle.With the increase in pulse frequency, the corrosion current density I corr of Ni-Co-BN (h) nanocomposite coatings first decreased and then increased.When the pulse frequency was 4 kHz, the sample showed a better corrosion current density.
The polarization resistance R p of the samples in Table 3 is obtained using the Stern-Geary equation.As can be seen from Table 3, the polarization resistance R p of Ni-Co-BN (h) nanocomposite coatings first increases and then decreases with increase in duty cycle and pulse frequency.When the duty cycle was 0.7 and the pulse frequency was 4 kHz, the polarization resistance R p of Ni-Co-BN (h) nanocomposite coatings reached a maximum value of 30.11 kΩ cm 2 .
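The Stern-Geary relation used here links R_p to the corrosion current density through the anodic and cathodic Tafel slopes, R_p = (β_a β_c) / [2.303 (β_a + β_c) I_corr]. The sketch below only illustrates this textbook equation; the Tafel slopes are assumed demonstration values and are not the slopes fitted in this study.

def stern_geary_rp(i_corr_A_cm2, beta_a_V, beta_c_V):
    # Polarization resistance (ohm*cm^2) from the Stern-Geary equation.
    b = (beta_a_V * beta_c_V) / (2.303 * (beta_a_V + beta_c_V))  # Stern-Geary constant B in volts
    return b / i_corr_A_cm2

# Example with the reported optimum I_corr (0.77 uA/cm^2) and assumed Tafel slopes of 120 mV/decade.
rp = stern_geary_rp(0.77e-6, 0.120, 0.120)
print(f"R_p approx {rp / 1000:.1f} kOhm*cm^2")  # same order as the ~30 kOhm*cm^2 reported for this sample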
The above phenomena may be explained by several reasons.Firstly, corrosion is more likely to occur at the grain boundary because of the higher interfacial energy at the grain boundary of the coatings [31,32].A small grain size can improve the density of grain boundary in coating matrix.The coatings with a small grain size can obtain a uniform corrosion area and reduce the risk of losing efficacy by local corrosion.Therefore, the decrease of grain size is beneficial to improve the corrosion resistance of the coatings.However, the lower duty cycle and pulse frequency make the deposit time shorter and the deposit thickness thinner.It is easier for the Cl − to penetrate the deposit during the immersion process, which causes the steel C1045 substrate to be corroded by the corrosion medium.Large duty cycle and high pulse frequency can increase grain size, causing the reduction of grain boundary density and the uniformity of corrosion area.Those effects of pulse parameters result in a decrease in corrosion resistance of the coatings.In addition, the expansion of concentration polarization caused by higher duty cycle and higher pulse frequency results in a large number of hydrogen evolution reactions on the cathode surface.The increase in defects on the coatings will also lead to the decrease in corrosion resistance.
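As a rough stereological check on the grain-boundary argument above, the grain-boundary area per unit volume of equiaxed grains scales inversely with grain size, approximately S_V \approx 2 / \bar{L}, where \bar{L} is the mean linear intercept (a measure of grain size). This standard relation is added here only for context and is not a quantity evaluated in the paper, but it shows why refining the grain size increases the grain-boundary density and promotes a more uniform corrosion area.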
Secondly, the improvement of the corrosion resistance of the coatings is partly due to the change of Co content.When the coatings are corroded, a lot of corrosion products will appear on the surface.Those corrosion products can reduce the direct contact between the coatings and corrosion medium thereby hindering the corrosion reaction.After immersion in 3.5 wt % NaCl solution for 2 h, the samples surface could be covered by a large number of corrosion products.Since the active energy of Co is higher than Ni, a low content of Co in the coating matrix can increase the corrosion products during corrosion and cannot go so far as to produce too many defects [26,33].Therefore, the increase in Co content in a suitable range is good for the improvement of coatings corrosion resistance.
Finally, the variation in nanoparticle content in the coatings can affect the corrosion resistance. The effects of nanoparticles on the corrosion resistance of coatings are mainly reflected in two aspects. On one hand, nanoparticles with a small size can refine the grain size and optimize the microstructure of the coatings. In addition, they can also fill the spaces at grain boundaries and improve coating compactness. These effects produced by nanoparticles make vital contributions to improving the corrosion resistance of coatings. On the other hand, the nanoparticles embedded in the nanocomposite coating surface can reduce the interface between the coatings and the corrosion medium [34]. This can partly enhance the corrosion resistance of the coatings in the corrosion medium. Therefore, in a suitable range, the samples with a higher nanoparticle content can exhibit a better performance in corrosion resistance.

Conclusions

• Ni and Co atoms in the coatings formed a single α-phase solid solution. In the process of coatings growth, the grains had obvious preferred orientation in the (111) plane. The grain size of the coatings decreased with the increase in duty cycle, and decreased first and then increased with the increase in pulse frequency.
• In the process of pulse jet electrodeposition, the variation of duty cycle and pulse frequency had similar effects on the microhardness of the coatings. With the increase in pulse parameters, the microhardness first increased and then decreased.
• Polarization curves of the Ni-Co-BN (h) nanocomposite coatings showed that the pulse parameters had great effects on the corrosion resistance. The change in corrosion current density and polarization resistance indicated that too high or too low pulse parameters were not conducive to the improvement of corrosion resistance of the coatings. The sample with pulse frequency of 4 kHz and duty cycle of 0.7 exhibited good performance in corrosion current density and polarization resistance.
Figure 1. Schematic image of pulse jet electrodeposition.
Figure 6. EDS spectra of Ni-Co-BN (h) nanocomposite coatings surface with duty cycle of 0.7 and pulse frequency of 4 kHz.
Figures 7 and 8. XRD patterns of Ni-Co-BN (h) nanocomposite coatings with varying pulse parameters.
weakens the relaxation effect of the pulse. It also increases the concentration polarization of the cathode surface, weakens the adsorption and desorption effect, and reduces the current intensity of the pulse. Under the combined action of the abovementioned factors, the surface roughness of the coatings differs accordingly.
Figure 9 .
Figure 9. Schematic diagram of pulse waveform in ideal and actual conditions: (a) the distortion of the pulse waveform; (b) waveform with extremely small duty cycle; (c) waveform with extremely large duty cycle; (d) waveform with extremely large frequency.
Table 1 .
Different solution constituents used for this paper.
Table 2 .
Partial data of Ni-Co-BN (h) nanocomposite coatings with varying pulse parameters. | 2019-01-29T06:24:36.793Z | 2019-01-16T00:00:00.000 | {
"year": 2019,
"sha1": "ca79eec678d92182eea6a3790bb1bb4d2ca21ec8",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2079-6412/9/1/50/pdf?version=1547610272",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "ca79eec678d92182eea6a3790bb1bb4d2ca21ec8",
"s2fieldsofstudy": [
"Engineering",
"Materials Science"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
55385743 | pes2o/s2orc | v3-fos-license | A semi-analytical method for vibration analysis of thin spherical shells with elastic boundary conditions
A semi-analytical method is proposed to analyze both axisymmetric and asymmetric vibrations of thin opened spherical shells with elastic boundary conditions and discontinuity in thickness. To establish the governing equation, the method divides the shell into many narrow strips in the meridional direction, and those strips are approximately treated as conical ones with uniform thickness. Flügge shell theory is used to describe the motions of the strips, and the displacement functions are expanded as power series. Artificial springs are employed to restrain the displacements at the edges so that arbitrary boundary conditions can be analyzed. By assembling all continuity conditions of adjacent strips and the boundary conditions, the governing equation is established. In the discussion of numerical results, many comparisons between frequency parameters of the present method and those in the literature are first presented, and they illustrate the high accuracy and wide applicability of the present method. Furthermore, the influences of elastic boundary conditions, open angle, ratio of thickness to radius, and thickness discontinuity on the natural frequencies of spherical shells are investigated. Results show that meridional and circumferential displacements have obvious effects on natural frequencies, and the influence of thickness discontinuity strongly depends on the location of the discontinuity.
Introduction
In building roofs, LNG tanks, offshore structures, nuclear power plants and other engineering structures, spherical shells are extensively used. Such structures are often subjected to various external loads, such as earthquakes and sea waves, and these loads have serious consequences for strength and safety. Hence, knowing the vibration characteristics plays an important role in the design process. To this end, vibrations of spherical shells have been investigated in past years and are attracting the attention of more and more scholars nowadays.
In general, the research methods for analyzing vibrations of spherical shells can be classified into three categories: analytical methods, numerical methods [22][23][24][25][26][27][28][29][30][31][32][33][34][35][36][37] and experimental methods [38]. For analytical methods, selecting appropriate displacement functions is the most important aspect. Although Legendre functions are usually adopted [2,3,[6][7][8][9][10][11][13][14][15]], those functions significantly increase the difficulty of solving for natural frequencies due to their complex values. Niordson [14,15] decomposed Legendre functions into real and imaginary parts, and the frequency equation could correspondingly be solved in the real number domain. However, the form of the displacements is uncertain around the critical frequency. Besides Legendre functions, Bessel functions were also used by some scholars, such as Kalnins and Naghdi [1], Hoppmann II [4] and Kalnins [5], to express the displacement functions of spherical shells. However, only shallow spherical shells can be accurately analyzed in this way. Chakrabarti [12] adopted elementary (algebraic) functions to study radial vibrations of spherical shells. Lee [17] employed Chebyshev polynomials and Fourier series to express the displacement functions of spherical caps. Then, Lee [18] used the same method to analyze the free vibration of a hermetic capsule consisting of one cylindrical shell and two hemispherical shells. Standard Fourier series with auxiliary functions were adopted by Su et al. [19] to express displacement functions of functionally graded spherical shells. Chernobryvko et al. [20] used the eigenmodes of a cantilever beam to approximate eigenmodes of axisymmetric shells and presented natural frequencies of a spherical shell with clamped-free boundary conditions. Apart from two-dimensional shell theory, a three-dimensional method was adopted by Chen and Ding [16] and Kang [21] to study vibrations of multi-layered hollow spheres and shallow spherical domes, respectively.
Numerically solving the governing equations [22,23,[30][31][32][33]] and discretizing the spherical shell [24][25][26][27][28][29][34][35][36][37] are the two main numerical approaches for analyzing vibrations of spherical shells. Zarghamee and Robinson [22] used the Holzer method to analyze free vibrations of spherical shells. Souza and Croll [23] investigated free vibrations of spherical shells by the finite difference method. Artioli and Viola [30] and Tornabene and Viola [31] used the generalized differential quadrature method (GDQ) to evaluate natural frequencies of spherical shells. Simmonds and Hosseinbor [32,33] adopted a perturbation method to study free and forced vibrations of a closed elastic spherical shell fixed to an equatorial beam. As with other shell structures, the finite element method [24,26,29] and the semi-analytic finite element method [27] are the most common discrete methods for studying vibrations of spherical shells. By using a circular arc to represent the generic segment of a shell of revolution, Singh [25] adopted Bezier polynomials to study free vibrations of shells of revolution. Piecewise Hermite interpolation polynomials and Fourier approximation were used by Wu and Heyliger [28] to expand the unknown displacements and forces approximately in the azimuthal and circumferential directions, respectively. By combining the modified variational principle with a multi-segment partitioning procedure, Qu et al. [34] studied free and forced vibrations of functionally graded shells of revolution by adopting Fourier series and polynomials to expand the displacements. Choi et al. [35] used the Sylvester-transfer stiffness coefficient method to study free vibrations of axisymmetric shells. Before establishing the governing equation, the axisymmetric shells were divided into many narrow strips, and those strips were treated approximately as conical shells. Naghsh et al. [36] used the meridional finite strip method to investigate free vibrations of general shells of revolution, and natural frequencies of a spherical shell with constant and linearly variable thickness in the meridional direction were presented. Cui et al. [37] proposed a nodal integration model for elastic-static, free vibration and forced vibration analysis of axisymmetric thin shells by using two-node truncated conical elements.
In most of the above-cited literature, the thickness of the shell is uniform and only classic boundary conditions are taken into account. However, in practical engineering applications, the boundary may not be fixed in a classic restraint, and a variety of boundary conditions, such as elastic ones, may be encountered. In addition, non-uniform thickness, e.g. continuously varying thickness and stepped thickness, is also widely adopted to efficiently improve structural strength without obviously increasing the weight. In this context, proposing an accurate and efficient method for vibration analysis of thin spherical shells with arbitrary boundary conditions and non-uniform thickness is meaningful.
The main purpose of this paper is to present an approach for analyzing free vibrations of thin spherical shells with arbitrary boundary conditions and non-uniform thickness. First, the spherical shell is decomposed into many narrow strips, which are approximately treated as conical shells. Then, Flügge thin shell theory is employed to describe the equations of motion of those conical strips, and the displacement functions are expressed as power series. Finally, the governing equation is established by assembling the continuity and boundary conditions. Based on the proposed method, the effects of elastic boundary conditions, open angles, thickness discontinuity and other parameters on the vibration characteristics of spherical shells are investigated. The present method is believed to include the following novelties. It offers an accurate and efficient way to investigate free vibrations of spherical shells with elastic boundary conditions. In addition, it is applicable to both uniform and non-uniform thickness. Last but not least, when spherical shells are coupled with thin cylindrical, conical and/or spherical shells, the continuity conditions between the spherical shells and the other shells can be accurately satisfied by the present method. Vibrations of this kind of coupled structure are rarely studied, whereas vibrations of coupled cylindrical-cylindrical, conical-cylindrical and conical-conical shells have been relatively extensively studied.
Basic concept of the semi-analytical method
Fig. 1 shows the schematic diagram of a spherical shell with two open angles. The thickness may be constant, stepped or continuously varying. The azimuthal and circumferential coordinates of the spherical coordinate system and the azimuth angles of the two edges are shown in Fig. 1. In order to establish the governing equation of the spherical shell, the shell is divided into many narrow strips along the short dashed lines in Fig. 1(b), and the strips are approximately treated as conical shells. Meanwhile, the thickness of any narrow strip, which may vary, can be equivalently treated as constant when the strips are narrow enough. The radii of the two ends of a strip are the same as the radii of the two edge circles of the corresponding strip, and the axial length is the axial distance between the two circles. Then, on the basis of Flügge thin shell theory, power series are adopted to expand the displacements of the conical shells. Consequently, for a particular circumferential mode number, the displacements and forces at the cross-section of one strip can be expressed in terms of eight unknown coefficients. Lastly, four continuity conditions of displacements and four equilibrium equations of forces of two adjacent strips are used to assemble those strips into the spherical shell. With the help of the boundary conditions, the final governing equation for analyzing vibrations of spherical shells is established.
If the north and/or south pole is included in the spherical shell, the pole should be cut so that the strip closest to the pole can be treated as a truncated conical shell. In addition, the hole must be small enough to avoid large errors. In the following analysis, the azimuth angle of the added edge is 0.1° for the north pole, while it is 179.9° for the south pole. At those added edges, free boundary conditions are adopted.
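As a rough illustration of the discretization described above, the sketch below generates the conical-strip geometry (end radii, axial length and an equivalent chord-based semi-vertex angle) for a spherical shell of radius R between two azimuth angles; the 0.1° pole cut follows the text, while the function and variable names are my own and the geometry is an approximation, not the paper's exact formulation.

```python
import numpy as np

def conical_strips(R, phi0_deg, phi1_deg, n_strips, pole_cut_deg=0.1):
    """Divide a spherical shell (radius R) between azimuth angles phi0..phi1 (deg)
    into n_strips narrow strips, each approximated as a truncated conical shell."""
    phi0 = max(phi0_deg, pole_cut_deg)            # cut the north pole if it is included
    phi1 = min(phi1_deg, 180.0 - pole_cut_deg)    # cut the south pole if it is included
    phi = np.radians(np.linspace(phi0, phi1, n_strips + 1))
    strips = []
    for a, b in zip(phi[:-1], phi[1:]):
        r1, r2 = R * np.sin(a), R * np.sin(b)     # radii of the two edge circles
        dz = R * (np.cos(a) - np.cos(b))          # axial distance between the circles
        alpha = np.arctan2(r2 - r1, dz)           # chord-based semi-vertex angle (signed)
        strips.append({"r1": r1, "r2": r2, "axial_length": dz,
                       "semi_vertex_angle_deg": np.degrees(alpha)})
    return strips

# Example: a hemispherical shell (0-90 deg) divided with delta_phi = 0.5 deg (180 strips)
geom = conical_strips(R=1.0, phi0_deg=0.0, phi1_deg=90.0, n_strips=180)
print(geom[0], geom[-1], sep="\n")
```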
Equations of motion of conical shells
The local coordinate system, displacements and forces of a conical shell are shown in Fig. 2, where the meridional coordinate is measured from the middle of the strip and the circumferential coordinate is the same as that of the spherical shell. The figure also defines the radii of the small and large ends, the mean radius, the radius at the current meridional position, the semi-vertex angle, the displacements in the meridional, circumferential and normal directions, the slope, the bending moment resultant, the meridional force resultant, and the normal and circumferential Kelvin-Kirchhoff shear force resultants.
It must be mentioned that only torsionless axisymmetric modes, namely breathing modes, are accounted for in the following analysis. In addition, in view of the differences between the semi-vertex angles of two adjacent strips, the displacements and forces are considered in the cylindrical coordinate system. Correspondingly, four notations for some displacements and forces are introduced, involving the semi-vertex angle of each strip, the axial and radial displacements, and the axial and radial force resultants (rather than the meridional and normal ones). Substituting the displacement functions into the expressions of the slope and forces (expressions of the force resultants in terms of displacements are given in the Appendix), the four displacements and four forces at the cross-section can be expressed as in Eqs. (8) and (9); the detailed expressions of the coefficient functions can be readily obtained and are not given for the sake of brevity.
Boundary and continuity conditions
After all strips have been analyzed individually, they can be assembled through the continuity conditions. Fig. 3 shows the displacements and forces at a junction, namely the junction between one strip and the next. The continuity conditions of displacements and the equilibrium equations of forces are written at the edges of the adjacent strips on the left and right sides of each junction. Besides the continuity conditions, boundary conditions are also indispensable. Artificial springs are employed to restrain the displacements at the two ends; the plus sign indicates the boundary condition at one edge while the minus sign indicates the boundary condition at the other, and the stiffness constants of the artificial springs restrain the meridional, circumferential and normal displacements and the slope, respectively.
By assigning appropriate values to the stiffness constants of the artificial springs, both classic and elastic boundary conditions can be analyzed. In the following analysis, the stiffness constant is set to 0 if the corresponding displacement is free, or is assigned a large value ( 10
Final governing equation
Assuming that a spherical shell is decomposed into individual narrow strips, eight unknown coefficients per strip need to be solved for a given circumferential mode number. The final governing equation, Eq. (13), can be obtained by assembling all continuity conditions and boundary conditions in matrix form, where the unknown vector collects the coefficients of all strips and the coefficient matrix is built strip by strip from the meridional length of each strip. By assigning the appropriate dimensional and material parameters of the corresponding strip, the values of the elements of the strip matrices are obtained through Eqs. (8) and (9). The boundary blocks depend on the boundary conditions at the two edges; their general expressions involve the matrix of stiffness constants of the artificial springs and the transformation matrix associated with the notations introduced in Eq. (6). Keeping the circumferential mode number unchanged, the frequency is increased in appropriately small steps until the sign of the determinant of the assembled matrix changes, and the corresponding eigenvalue is roughly located. Decreasing the step and repeating the same process, the eigenvalue, namely the natural frequency, can be trapped with the desired accuracy. Meanwhile, substituting the eigenvalue back into Eq. (13) and setting one coefficient of the unknown vector to 1, all the other coefficients can be solved and the corresponding mode shape can be obtained.
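The eigenvalue search described above (sweeping the frequency until the determinant of the assembled matrix changes sign, then refining) can be sketched as follows; `assemble_T` stands in for the assembly of Eq. (13) from the continuity and boundary conditions and is not implemented here.

```python
import numpy as np

def find_natural_frequencies(assemble_T, omega_max, coarse_step, tol=1e-6):
    """Trap roots of det(T(omega)) = 0 by a coarse sign-change sweep plus bisection.

    assemble_T(omega) must return the assembled square matrix of Eq. (13)
    for a fixed circumferential mode number.
    """
    def f(omega):
        T = assemble_T(omega)
        sign, logdet = np.linalg.slogdet(T)
        return sign * np.exp(logdet / T.shape[0])   # scaled determinant to limit overflow

    roots, omega, f_prev = [], coarse_step, None
    f_prev = f(omega)
    while omega < omega_max:
        omega_next = omega + coarse_step
        f_next = f(omega_next)
        if f_prev * f_next < 0:                     # sign change: a root is bracketed
            lo, hi = omega, omega_next
            while hi - lo > tol:                    # bisection refinement
                mid = 0.5 * (lo + hi)
                if f(lo) * f(mid) <= 0:
                    hi = mid
                else:
                    lo = mid
            roots.append(0.5 * (lo + hi))
        omega, f_prev = omega_next, f_next
    return roots
```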
Numerical results and discussion
In the following analysis, all natural frequencies are expressed as frequency parameters, Ω = (1 − ) ⁄ .
Convergence and validity
First, the convergence of the present method for asymmetric vibrations of a clamped spherical shell with different open angles and ratios of thickness to radius is discussed. Before the convergence analysis, a notation Δ is introduced; it denotes the difference between the azimuth angles of the two edges of one strip, as shown in Fig. 1. For the sake of brevity, the value of Δ is constant for one kind of decomposition, and different values of Δ essentially indicate different numbers of strips, e.g. Δ = 1° denotes 60 strips for an open angle of 60° and 90 strips for an open angle of 90°. The influence of the number of strips on the frequency parameters is listed in Table 1, and Fig. 4 shows some mode shapes. In the table, the mode number in the meridional direction is also indicated. As Δ decreases, the frequency parameters rapidly converge, and those for Δ = 0.5° satisfy the requirement of convergence. More importantly, the difference between the frequency parameters of the present method and those in the literature is negligible, which demonstrates the high accuracy of the present method. Furthermore, although thin shell theory is adopted, the present method can still accurately predict natural frequencies when the ratio of thickness to radius h/R reaches 0.05. The convergence of frequency parameters of axisymmetric modes of a clamped spherical shell is presented in Table 2. Compared with the asymmetric modes, the rate of convergence for axisymmetric modes is slower. Some axisymmetric mode shapes are shown in Fig. 5, and it is seen that the amplitudes of the mode shapes vary markedly in the region close to the pole, which explains why more strips are required to satisfy the requirement of convergence. It is further observed that the convergence rate of the frequency parameter of the spherical shell with a 30° open angle is obviously slower than the others, which is mainly attributed to the higher frequency parameters and similar variations of amplitude over a much smaller region for the same meridional mode. Last but most important, excellent agreement between the frequency parameters of the present method and the literature is observed. That is to say, the axisymmetric vibrations can also be accurately analyzed by the present method.
To further illustrate the high accuracy and wide applicability of the present method, more comprehensive comparisons of frequency parameters are tabulated in Table 3. A spherical shell with four kinds of open angles, three different ratios of thickness to radius and two different boundary conditions is considered. It is observed that, when the ratio of thickness to radius is small, e.g. h/R = 0.005 and h/R = 0.01, the frequency parameters of the present method coincide exactly with those in the literature for all four kinds of open angles. As h/R increases to 0.05, obvious differences can be found for some modes if the open angle is small. It is further found that, for the same circumferential and meridional mode numbers, the frequency parameters of the spherical shell with a small open angle are obviously larger than those of the spherical shell with a large open angle. In addition, classical thin shell theory is employed in the present paper while the first-order shear deformation theory is used in [24], and the differences between those two theories are negligible only at low frequency. All the reasons mentioned above lead to obvious differences for some modes when h/R is 0.05. From Table 3, it can also be observed that the frequency parameters of the shell with clamped boundary conditions are, as expected, larger than those of the shell with hinged boundary conditions, and the frequency parameters increase as the ratio of thickness to radius increases.
Based on the above comparisons of frequency parameters between the present method and the literature, it can be concluded that the present semi-analytic method can accurately analyze free vibrations of thin opened spherical shells.
Effects of boundary conditions
Fig. 6 presents the effects of boundary conditions on the frequency parameters of a spherical shell with three different open angles. It is observed that, when the circumferential mode number is not less than 2, no matter what the boundary conditions are, the tendencies of frequency parameter versus circumferential mode number are identical, which means that an increase of the circumferential mode number leads to an increase of the frequency parameter. However, as the circumferential mode number varies from 0 to 2, increasing the circumferential mode number may lead to an increase or a decrease of the frequency parameter, depending on the boundary conditions, open angle and meridional mode number. It is also observed that, for a particular circumferential mode number, the frequency parameter of the mode with the larger meridional mode number is always larger than that of the mode with the smaller meridional mode number. In addition, for circumferential mode numbers 0 and 1, the frequency parameters of one case are larger than those of the other, which is attributed to the rigid-body modes not being considered in the former case. In Fig. 7, the influences of elastic boundary conditions on free vibrations of a hemispherical shell are shown. Before analyzing the effects of elastic boundaries, it should be mentioned that there are only three curves in Fig. 7(a), which results from the fact that the circumferential displacement is always 0 for a circumferential mode number of 0. In addition, every curve denotes that only one displacement is elastically restrained while the other three are fixed. It is observed that the effects of the meridional and circumferential displacements are obviously greater than those of the normal displacement and slope. However, with the increase of circumferential mode number, the effect of the circumferential displacement becomes greater than that of the meridional displacement. Meanwhile, the influences of the normal displacement and slope become obvious as the circumferential mode number varies from 0 to 2. It is further observed that the appropriate value of the stiffness constant, which can significantly affect natural frequencies, is different for different circumferential mode numbers and directions of displacement, which is attributed to the different values of the stresses.
Effects of open angle
In Section 3.1, it is pointed out that the open angle has a great influence on frequency parameters. In this section, the influences of the open angle are discussed in detail. Fig. 8 and Fig. 9 show the effects of the open angle on frequency parameters of the free and clamped spherical shell, respectively. For free boundary conditions, on the one hand, increasing the open angle increases the frequency parameter when the open angle is small. On the other hand, when the open angle is not small, an increase of the open angle may lead to an increase or a decrease of the frequency parameter, depending on the circumferential and meridional mode numbers. For clamped boundary conditions, with the increase of the open angle, the frequency parameter always decreases as the circumferential mode number varies from 0 to 3 and the meridional mode number changes from 1 to 3. In addition, as the circumferential mode number increases from 0 to 3, the rate of decrease of the frequency parameter becomes slower and slower once the open angle is larger than 45°.
Effects of thickness to radius ratio
Effects of the ratio of thickness to radius on frequency parameters are presented in Fig. 10 and Fig. 11. Generally, frequency parameters increase as the ratio of thickness to radius increases. However, the effects of the ratio of thickness to radius on frequency parameters depend on the mode shape, open angle and boundary conditions. As the open angle increases, the effects of the ratio of thickness to radius rapidly decrease; in particular, for circumferential mode numbers 0 and 1, the frequency parameters remain basically unchanged.
Effects of thickness discontinuity
In the above study, the thickness of the shell is uniform; non-uniform thickness can also be analyzed by the present method. As a special case, a two-stepped hemispherical shell is considered, and Fig. 12 shows the schematic diagram.
The influences of the location of the thickness discontinuity on frequency parameters of the free and clamped hemispherical shell are presented in Fig. 13 and Fig. 14, respectively. It is observed that the tendencies of frequency parameters versus azimuth angle vary with circumferential mode number and boundary conditions. For the free hemispherical shell with a thickness ratio of 2, an increase of the azimuth angle of the discontinuity may lead to an increase or a decrease of the frequency parameters when the circumferential mode number is 0 or 1. Nevertheless, when the circumferential mode number is 2 or 3, the frequency parameter certainly decreases as the azimuth angle of the discontinuity increases. It is further observed that, when the azimuth angle of the discontinuity is small, its effects are negligible for circumferential mode numbers 2 and 3, which is different from the cases of 0 and 1. For the clamped hemispherical shell with a thickness ratio of 2, the tendencies for circumferential mode numbers 0 and 1 are simpler than those for 2 and 3. The frequency parameters first increase and then decrease as the azimuth angle of the discontinuity increases for circumferential mode numbers 0 and 1. For circumferential mode numbers 2 and 3, as shown in Fig. 14(c) and (d), the frequency parameters first decrease and then increase. Finally, as the azimuth angle approaches 90°, the frequency parameters decrease again.
Conclusions
In the present paper, a semi-analytical method was proposed to analyze both axisymmetric and asymmetric modes of spherical shells with elastic boundary conditions and discontinuity in thickness. To establish the governing equation, the spherical shell is first divided into many narrow strips, which are approximately treated as conical shells. Based on Flügge thin shell theory and the power series method, displacements and forces at the cross-sections of the strips are expressed in terms of eight unknown coefficients. Lastly, the continuity conditions of adjacent strips and the boundary conditions are assembled into the final governing equation. By comparing frequency parameters of the present method with those in the literature, the high accuracy and wide applicability of the present method are verified. When the circumferential mode number is small, the frequency parameter may increase or decrease as the circumferential mode number increases. On the other hand, for circumferential mode numbers that are not small, an increase of the circumferential mode number certainly results in an increase of the frequency parameters. The effects of the stiffness constants of the elastic boundaries illustrate that the meridional and circumferential displacements have the greatest effects on frequency parameters, and the effects of the normal displacement and slope become obvious as the circumferential mode number varies from 0 to 3. For an edge azimuth angle of 0°, increasing the open angle can significantly reduce the frequency parameters when the open angle is small. However, when the open angle is greater than 125°, the increase of the open angle may lead to an increase or a decrease of the frequency parameters, which strongly depends on the boundary conditions and circumferential mode number. The frequency parameters increase as the ratio of thickness to radius increases. Nevertheless, for circumferential mode numbers 0 and 1, the effect of the ratio of thickness to radius is negligible when the open angle is between 90° and 135°. The location of the thickness discontinuity has a great influence on frequency parameters, and the influence strongly depends on the boundary conditions, circumferential mode number and thickness ratio. The proposed method can accurately and efficiently analyze free vibrations of spherical shells. In subsequent work, spherical shells coupled with other shells of revolution, e.g. cylindrical and conical shells, will be investigated, since the continuity conditions between spherical shells and other shells can be accurately satisfied without difficulty. In addition, it should be mentioned that the continuity conditions may be the bottleneck in analyzing vibrations of coupled spherical-cylindrical or spherical-conical shells, because there are many papers studying coupled cylindrical-cylindrical, conical-conical and cylindrical-conical shells whereas the literature on coupled shells including spherical shells is rare.
Fig. 1 .
Schematic diagram of a spherical shell with two edges
Fig. 4 .
Fig. 4. Asymmetric mode shapes and frequency parameters Ω of the spherical shell with different open angles
Table 3 .
Comparison of frequency parameters of a spherical shell with different open angles and boundary conditions (edge azimuth angle 0°, Poisson's ratio 0.3), for thickness-to-radius ratios of 0.005, 0.01 and 0.05 under both boundary conditions (present method vs. Ref. [24]). It is further observed that, for a fixed mode, the frequency parameters do not monotonically increase or decrease as the open angle increases. The effect of the open angle will be particularly investigated in the following.
Table 1 .
Convergence of frequency parameters of asymmetric modes of a clamped spherical shell with different open angles ( = 0°, = 0.3) | 2018-12-05T08:54:15.890Z | 2017-06-30T00:00:00.000 | {
"year": 2017,
"sha1": "9dbd8aaa0f54ea924173bd9ed58bed9a836204ab",
"oa_license": "CCBY",
"oa_url": "https://www.jvejournals.com/article/17154/pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "9dbd8aaa0f54ea924173bd9ed58bed9a836204ab",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Engineering"
]
} |
14829759 | pes2o/s2orc | v3-fos-license | Relaxometric Investigation of Functional Group Placement on MnTPP Derivatives Supports the Role of the Molecular Electrostatic Potential Maps as a Tool to Design New Metalloporphyrins with Larger Relaxivities
We report the T1 and T2 NMR (nuclear magnetic resonance) dispersion profiles for a new manganese porphyrin [MnT(2-C)PP] which has an anionic carboxylate group in the ortho position of the phenyl rings on the metalloporphyrin. Previous MEP (molecular electrostatic potential) maps indicated that this judicious derivatization could result in increases in the observed relaxation efficiency. Relaxometric investigations experimentally confirm about a 20 % increase in the relaxation efficiency at clinically relevant field strengths for MnT(2-C)PP compared to the most efficient metalloporphyrin previously reported MnT(4-S)PP. This result supports the hypothesis that electrostatic forces are relevant to the relaxivity of this family of compounds and that the MEP may be used as a tool to design new agents with even larger relaxivities.
Introduction
Metalloporphyrins have found several potential biological applications such as in photodynamic therapy and as MR contrast agents [1,2]. In the context of MRI (magnetic resonance imaging) contrast agents, there has been much effort in studying the relaxometric properties of metalloporphyrins in an effort to understand and increase their water proton relaxation efficiencies [3][4][5]. It has been suggested, based on molecular electrostatic potential (MEP) calculations [6], that substitution of the sulfonato group in the para position in MnT(4-S)PP with carboxylate groups in the ortho position of the phenyl rings attached to the meso carbons, MnT(2-C)PP, may result in an increase in the water proton relaxation enhancement of the metalloporphyrin. We report the T1 (longitudinal; spin-lattice) and T2 (transverse; spin-spin) nuclear magnetic resonance dispersion (NMRD) profiles for MnT(2-C)PP and compare these data to the measurements obtained for MnT(4-S)PP. The results support the hypothesis that electrostatic forces are relevant to the relaxivity of this class of compounds. Thus, the calculation of the MEP may be used as a tool to optimize the relaxivity of these compounds, as suggested by Mercier [6]. Electronic spectra were recorded on a Shimadzu UV-1601 spectrophotometer. Relaxation measurements were made on a custom-designed variable field T1-T2 analyzer (Southwest Research Institute, San Antonio, TX) at 23 °C. The magnetic field strength was varied from 0.05 to 1.5 T (corresponding to a proton Larmor frequency of 2-61 MHz). T1 was measured by using a saturation recovery pulse sequence with 32 incremental recovery times. T2 was measured by using a Carr-Purcell-Meiboom-Gill (CPMG) pulse sequence of 500 echoes and a time interval of 1 msec between echoes. The relaxivities (relaxation rate per mM concentration) were obtained after subtracting the buffer contribution.
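A minimal sketch of how the per-millimolar relaxivities are obtained from the measured relaxation times after subtracting the buffer contribution is given below; the numerical values in the example are placeholders, not the measured data of this study.

```python
def relaxivity(T_sample_s, T_buffer_s, conc_mM):
    """Relaxivity r (s^-1 mM^-1) = (1/T_sample - 1/T_buffer) / concentration."""
    return (1.0 / T_sample_s - 1.0 / T_buffer_s) / conc_mM

# Hypothetical example: T1 of a 0.1 mM metalloporphyrin solution vs. the PBS buffer
r1 = relaxivity(T_sample_s=0.80, T_buffer_s=3.0, conc_mM=0.1)
print(f"r1 ~ {r1:.1f} s^-1 mM^-1")
```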
Results and Discussion
A Beer's law plot of the Soret bands indicated no aggregation for either the red MnT(4-S)PP or the green MnT(2-C)PP solutions in PBS, pH 7.4. Additionally, no evidence of precipitation was observed for up to at least 6 months.
The T1 NMRD profile of MnT(4-S)PP is qualitatively and quantitatively in agreement with previously published reports [3]. The authors also reported that the T1 NMRD profile of MnT(4-S)PP was the same as that of MnT(4-C)PP, which has a carboxylate instead of a sulfonato group in the para position. The T1 and T2 NMRD profiles of the new MnT(2-C)PP are shown in Figures 1 and 2. The shape of the T1 profile for MnT(2-C)PP is similar to that of MnT(4-S)PP and MnT(4-C)PP, but its magnitude is larger by a factor of 1.2. As expected, the T2 NMRD profile parallels the T1 NMRD one. To date, MnT(4-S)PP is the most efficient relaxation enhancement agent of the metalloporphyrins [2]. Our results herein show that MnT(2-C)PP is yet more efficient. Previous attempts to increase the relaxivities of metalloporphyrins by the judicious placement of various functional groups or atoms have met with limited success [7], and in at least one report a decrease in the relaxivities was actually observed [8]. Bryant et al. [7] have attempted to delocalize the paramagnetic electron spin density in MnTPPS by the covalent attachment of bromines to the ß-pyrrole positions of the porphyrin ring.
Although an increase in the relaxivity was reported, the electron-withdrawing bromines decrease the stability of the derivatized MnTPPS with respect to manganese ion release.
The Solomon-Bloembergen-Morgan (SBM) theory and the corresponding mathematical modeling of the NMRD profiles based upon it attempt to quantitate the various parameters contributing to the observed relaxivities. The main contributing parameters come from both inner- and outer-sphere water molecules associated with the paramagnetic metal complex after subtraction of the diamagnetic contribution. The inner-sphere relaxivity results from the number of bound water molecules, the concentration of the metal complex, the distance between the paramagnetic electron spin angular magnetic moment vector and the water proton nuclear angular magnetic moment vector, and the correlation time of the metal complex. The correlation time comprises the paramagnetic electron spin relaxation time, the exchange time of the water molecule coordinated to the paramagnetic metal ion with bulk water, and the rotational correlation time of the coupled electron-nuclear angular magnetic moment vector of the paramagnetic electron spin and the proton on the coordinated water molecule, usually taken as the tumbling time of the paramagnetic metal complex. The shortest (fastest) one of these dominates the inner-sphere relaxivity. For the Mn(III) porphyrins, the paramagnetic electron spin relaxation time dominates the correlation time and therefore the relaxivity. The electron relaxation time is frequency dependent, and when it disperses there is a characteristic peak in the NMRD profile, as shown in Figure 1. The outer-sphere relaxivity has contributions from the electron spin relaxation time, the distance of closest approach of the water proton nuclear angular magnetic moment vector to the paramagnetic electron spin angular magnetic moment vector (taken as the distance of closest approach of the water molecule to the paramagnetic center) and the relative translation diffusion coefficient.
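The statement that the correlation time is dominated by its fastest contribution can be made concrete with the usual rate-addition rule, 1/tau_c = 1/T1e + 1/tau_M + 1/tau_R; the numerical values in the sketch below are illustrative only and are not the fitted parameters of these compounds.

```python
def correlation_time(T1e_s, tau_M_s, tau_R_s):
    """Effective correlation time: the rates (inverse times) add, so the shortest
    (fastest) contribution dominates tau_c."""
    return 1.0 / (1.0 / T1e_s + 1.0 / tau_M_s + 1.0 / tau_R_s)

# Illustrative values: a short electron-spin relaxation time dominates the result
tau_c = correlation_time(T1e_s=1e-10, tau_M_s=1e-8, tau_R_s=5e-10)
print(f"tau_c ~ {tau_c:.2e} s")
```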
Attempts to mathematically model the NMRD profiles using the standard SBM theory were unsuccessful, possibly due to zero-field splitting contributions of the manganese coordination environment within the metalloporphyrin [7][8][9]. This type of modeling is a common tool in the effort to develop new compounds with larger relaxivities that may be applied as contrast media in magnetic resonance imaging [10]. The inability to do this modeling, together with the limited success of empirical methods [7,8], makes it necessary to explore new tools that may be used to successfully guide the synthesis of new metalloporphyrins with larger relaxivities. It has been suggested that one such tool is the MEP [6].
The MEP is an experimentally observed molecular property that is easier to compute with modern computational chemistry methods than to measure [11][12][13]. The MEP is commonly defined by V(r) = −∫ ρ(r′)/|r′ − r| dr′ + Σ_A Z_A/|R_A − r|, where the equation is in atomic units. The first term is the electronic contribution to the electrostatic potential and the second term is the contribution from the atomic nuclei [11]. Multiple authors have reviewed the role of the MEP in molecular reactivity [11,12]. Because electrostatic interactions are important in the bonding of water molecules to transition metals and their complexes [14][15][16][17][18], Mercier suggested that the MEP may be modified to generate an electrostatic focusing field that would attract water molecules closer to the paramagnet's spin density [6]. This closer interaction would also imply a stronger bond. The result would be to increase the relaxivity, not only because the distance between the paramagnetic electron spin angular magnetic moment vector and the water proton nuclear angular magnetic moment vector would be reduced, but also because the number of "bound" water molecules would be increased. These changes would affect both the inner- and outer-sphere mechanisms of relaxation. Moreover, an electrostatic focusing field may also increase the residence time of water molecules bound in the inner and outer spheres. This effect would increase the outer-sphere contribution to the relaxivity and would have variable effects on the inner-sphere contribution. A more detailed discussion of these issues is found in reference [6].
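As an illustration of the definition above, the sketch below evaluates the electrostatic potential at a point from a set of nuclear point charges and a grid-discretized electron density (both in atomic units); the geometry and charges are invented for demonstration and do not correspond to any of the porphyrins studied here.

```python
import numpy as np

def mep(r, nuclei, density_points):
    """Molecular electrostatic potential at point r (atomic units).

    nuclei:         list of (Z, position) tuples
    density_points: list of (rho*dV, position) tuples approximating the
                    electron-density integral on a grid
    """
    r = np.asarray(r, dtype=float)
    v_nuc = sum(Z / np.linalg.norm(np.asarray(R) - r) for Z, R in nuclei)
    v_el = -sum(q / np.linalg.norm(np.asarray(p) - r) for q, p in density_points)
    return v_nuc + v_el

# Toy example: one "nucleus" with a crude two-point electron cloud
point = [0.0, 0.0, 2.0]
nuclei = [(1.0, [0.0, 0.0, 0.0])]
density = [(0.5, [0.0, 0.0, 0.3]), (0.5, [0.0, 0.0, -0.3])]
print(mep(point, nuclei, density))
```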
It is noteworthy that a simple change from the para to the ortho position in the location of the carboxylate group for MnT(2-C)PP increases the relaxivity by 20% when compared to MnT(4-C)PP. This modest but significant increase is unlikely to result from changes in the electronic configuration of the Mn(III) ion or the rotational correlation time. Nonetheless, the change in the location of the anionic functional group has a strong effect on the MEP [6].
As discussed above and explained by Mercier [6], the MEP is a marker for the forces responsible for the motion of water molecules around the paramagnet. The fact that a change in relaxivity occurs by a simple perturbation in the molecular geometry that retains the net charge of the complex is significant. This result indicates that the relaxivity in this family of compounds is sensitive to the anisotropy of the electrostatic forces generated by the spatial distribution of the electric charges. Therefore, our results support the suggestion that the MEP can be used as a tool to design compounds with more efficient relaxivities.
Although the increase in relaxivity is small, our results are encouraging because the formulation of MnT(2-C)PP used in this study constitutes a mixture of conformers. As such, the electrostatic focusing field associated with the MEP is still suboptimal when compared to the one suggested by Mercier [6].
Because there are hindered rotations around the Cmeso-C1 bond, the orientation of the carboxy group in the ortho position with reference to the plane of the porphyrin ring can vary as shown in Figure 3.
The maximum electrostatic focusing field is expected from conformer (+,+,+,+), which is shown in 3D in Figure 4. Our results reflect a weighted average of the electrostatic focusing field from all the conformers. Moreover, as discussed previously [6], the effect of the MEP on the relaxivity should be more dramatic when the rotational correlation time is long. Therefore, there is still room to improve on the modest effect described here. In summary, we have shown experimentally by NMR dispersion profiles that the manganese porphyrin MnT(2-C)PP shows a 20% increase in relaxation efficiency relative to MnT(4-S)PP, supporting the hypothesis that electrostatic forces are relevant to the relaxivity of these types of compounds. The MEP appears to be a useful tool to design new metalloporphyrins with improved relaxivities. We are in the process of determining the distribution of conformers present in our experiments and isolating the conformers to understand how much we can increase the relaxivity by modifying the MEP. With this work, we hope to further increase the relaxation enhancement of metalloporphyrins by the judicious placement of various functional groups. In addition, we intend to explore the in vivo characteristics and toxicity of the new MnT(2-C)PP.
Figure 3 .
Figure 3. Conformers of MnT(2-C)PP."+" reflects a carboxy group above the plane of the porphyrin ring."-" reflects one below the ring.The Mn(III) ion lies slightly above the plane of the ring. | 2015-03-21T17:44:09.000Z | 2001-08-25T00:00:00.000 | {
"year": 2001,
"sha1": "f6e8cc4920a8222eb6d142a2ab107ab90f15b6aa",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1422-0067/2/3/140/pdf?version=1403128960",
"oa_status": "GOLD",
"pdf_src": "Grobid",
"pdf_hash": "715d82521b876f46f76a65aea544e5a255264684",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
46368819 | pes2o/s2orc | v3-fos-license | An idiopathic hypogonadotropic hypogonadism patient with metabolic disorder and diabetes: case report
Dear Editor, Congenital idiopathic hypogonadotropic hypogonadism (CIHH) is a rare congenital disorder characterized by delayed or absent sexual maturation and infertility associated with inappropriately low gonadotropin and sex steroid levels. We report a 34-year-old patient with CIHH accompanied by metabolic syndrome (MS) and diabetes.
The patient was admitted to our center for the evaluation of high blood sugar on July 26, 2013. He presented with polyuria and polydipsia and had lost 5 kg of his body weight over the past 6 months. He developed blurred vision 2 weeks before admission. His casual plasma glucose was 26 mmol l−1 and ketone bodies were normal. The patient was diagnosed with cryptorchidism at 6 years old without any therapy. He had poor secondary sex characteristics after puberty, and no further evaluation was conducted. He is the only child of nonconsanguineous parents and had no history of hyposmia, anosmia or hearing loss.
On physical examination, he had normal blood pressure and his height was 171 cm, weight 64 kg, body mass index 21.9 kg m−2, waist circumference 103 cm, and hip circumference 89 cm. He had a high-pitched voice, absent beard, sparse pubic hair (Tanner stage 2), bilateral testes that could not be palpated in the scrotum, and microphallus with a penis length of 3.0 cm.
Results of the biochemical analysis are listed in Table 1: serum total cholesterol 7.17 mmol l−1, triglyceride 2.47 mmol l−1, high density lipoprotein-cholesterol 1.4 mmol l−1, low density lipoprotein-cholesterol 4.48 mmol l−1. Urine microalbuminuria was 529.7 mg l−1. The oral glucose tolerance test (75 g glucose) showed that peak insulin and c-peptide were 64.63 mmol l−1 and 5.46 ng ml−1, respectively, at 180 min. Glycosylated hemoglobin was 11.1%. Islet cell autoantibodies (ICA), insulin autoantibody (IAA) and glutamic acid decarboxylase antibody (GADA) were all negative. Serum concentrations of luteinizing hormone (LH) (0.1 IU l−1), follicle-stimulating hormone (0.39 IU l−1) and testosterone (0.4 nmol l−1) were significantly lower than the normal range. The gonadotropin-releasing hormone stimulation test (100 μg intravenous) results showed that the peak of LH was 1.41 IU l−1 at 45 min, while the stimulation test with human chorionic gonadotrophin (2000 IU i.m. for 3 days) revealed that the 72 h testosterone level was at the lower limit of the normal range (2.0 nmol l−1). The laboratory data presented normal basal levels of thyroid hormones, thyroid stimulating hormone, growth hormone, prolactin, adrenocorticotropic hormone and cortisol (Table 1). The karyotype is 46, XY. Magnetic resonance imaging (MRI) of the testes showed that the bilateral testes were located at the level of the femoral head, but an MRI of the pituitary was normal. Ophthalmological findings showed intraretinal hemorrhage in the right eye and proliferative retinopathy in the left eye. Bone mineral density showed osteoporosis. Electromyography showed severe diabetic neuropathy.
Both cross-sectional and longitudinal epidemiological studies have reported that testosterone is inversely related to the different components of MS in men.1,2 Hypogonadotropic hypogonadism also occurs commonly in patients with Type 2 diabetes,3 but the majority of the data were obtained in study groups confounded by aging, obesity or chronic metabolic disorders. Recently, some disorders of sex development were reported to be associated with increased risks of diabetes and the MS as well.4–6 However, abnormal glucose metabolism in young men with CIHH was only reported in two small studies,7,8 and there are few data on the components of MS in young men with CIHH.
The present patient was diagnosed as CIHH, due to absent sexual maturation, bilateral cryptorchidism and selectively low gonadotropin, low testosterone, a normal karyotype and a normal pituitary image. Until now, he had never received hormone replacement therapy, and thus had low testosterone during puberty and postpuberty. At the age of 34, the patient was diagnosed with MS according to the 2005 International Diabetes Federation criteria. Moreover, the patient had severe complications of diabetes, including diabetic nephropathy, retinopathy, and neuropathy. His diabetes was characterized by no ketoacidosis and negative antibodies for IAA, ICA and GADA. His blood glucose levels gradually decreased after a daily dose of insulin of 0.7 U kg−1, suggesting insulin resistance. All the above points supported the diagnosis of Type 2 diabetes mellitus.
Although the specific pathways underlying the development of diabetes and MS in testosterone deficiency are still not fully clear, it was reported that testosterone could up-regulate the expression of glucose transporter 4 (GLUT4) and insulin receptor substrate 1 to stimulate glucose uptake into muscle and adipose tissue,9 and that deficiency of androgen action could decrease lipolysis and affect the expression of several key enzymes involved in lipogenesis.10 In conclusion, we report a 34-year-old patient with CIHH accompanied by MS and diabetes; the change of the patient's metabolic parameters after testosterone therapy needs further follow-up.
AUTHOR CONTRIBUTIONS
MNZ and BS conceived of the study, drafted and revised the manuscript. CHQ and LB participated in the design of the study. XYC and WJL assisted with the revising of the manuscript. SQ participated in its design and coordination and revision of the manuscript. All authors read and approved the final manuscript. | 2018-04-03T06:19:31.170Z | 2014-09-23T00:00:00.000 | {
"year": 2014,
"sha1": "6fdde954509cb6a98e3e5b0a7b85636e7c9b5feb",
"oa_license": "CCBYNCSA",
"oa_url": "https://doi.org/10.4103/1008-682x.137885",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "6fdde954509cb6a98e3e5b0a7b85636e7c9b5feb",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
270635814 | pes2o/s2orc | v3-fos-license | Short Notes: First Report of a Co-Infection of Squash Vein Yellowing Virus and Tomato Leaf Curl Palampur Virus in Cucumber Plants in Iraq
Mosaic disease causes a serious epidemic in cucumber plants in Iraq. To investigate the causal agents of such symptoms, whole-genome and metatranscriptomic sequencing data were collected from diseased cucumber leaves. The results show that Tomato leaf curl Palampur virus DNA-A and DNA-B were present, with 2,756 and 2,719 bp sequences deposited under acc. numbers ON229618 and ON229620, respectively, and the isolate was named Babylon1. Further, Squash vein yellowing virus was found, with a 9,832 bp sequence deposited under acc. number ON229619, and the isolate was called Iraq. This mixed infection is reported here for the first time in cucumber in Iraq, highlighting the impact of such infections, which threaten cucumber fields.
Tomato leaf curl Palampur virus (TLCPV), a member of the family Geminiviridae, causes mosaic disease in Cucurbitaceae plants in low tunnels, greenhouses, and open fields. The disease causes a devastating epidemic of melon, cucumber, and squash. Throughout the Middle East, this virus has spread rapidly, threatening cucurbit production (Heydarnejad et al., 2013; Dhkal et al., 2020; Adhab & Alkuwaiti, 2022). Squash vein yellowing virus (SqVYV) was first detected in Florida in 2007 (Hernandez et al., 2021). The virus belongs to the Potyviridae, genus Ipomovirus, and is known to infect cucurbits (Jailani et al., 2021; Inoue-Nagata et al., 2022). There is an interesting correlation between the circulative persistent and semi-persistent transmission of TLCPV and SqVYV by the whitefly Bemisia tabaci (Kumar & Kumar 2018; Kavalappara et al., 2021). The leaves of an infected cucumber plant in Babylon Province were collected on 22nd October 2021 (Fig. 1A) and then cut into squares of 0.5×0.5 cm. A single Eppendorf tube containing five times as much RNAlater (2 ml) was then sent for sequencing to the DNAlink company in South Korea. Whole-genome sequencing was performed on the extracted DNA and RNA (platform: Novaseq6000; applications: WGS Nano550 and WTS/mRNA, respectively). DNA reads accounted for 89,893,674 clean reads, whereas RNA reads accounted for 53,458,772. In order to create one representative reference sequence, 5040 suspected viruses were downloaded from NCBI-GenBank and concatenated to form one 76,145,671 nucleotide sequence. Reference genome-wide clean-read mapping using Geneious software was performed on the whole DNA and RNA (http://www.geneious.com/). As a result, 1,055,827 and 1,347,785 reads were assembled against TLCPV DNA-A and TLCPV DNA-B to produce 2,756 and 2,719 bp consensus sequences, respectively, which have been deposited under accession numbers ON229618 and ON229620. The isolate has been named Babylon1. As shown in Fig. 1 C and D, TLCPV DNA-A encodes six protein domains (AV1, AV2, AC1, AC2, AC3, and AC4), and TLCPV DNA-B encodes two proteins (BV1, BC1). Further, 196,675 reads were assembled against SqVYV and a 9,832 bp consensus was produced and then deposited in GenBank under accession number ON229619, with the isolate named Iraq (Fig. 1B). SqVYV has ten protein domains involved in one open reading frame (P1a, P1b, P3, 6K1, C1, 6K2, VPg, Pro, Replicase and CP). The phylogenetic analysis shows that SqVYV was very close to an Israeli isolate (Fig. 2), while TLCPV DNA-A and TLCPV DNA-B showed high similarity to Iranian isolates (Figs. 3 and 4). For the first time in Iraq, the co-infection of these two viruses has been reported in cucumber, highlighting the impact of double virus infections that spread epidemically across the cucumber fields.
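The concatenation step described above (building one long reference from many downloaded virus genomes) can be sketched with Biopython as follows; the file names are placeholders, and the read mapping itself was done in Geneious, not in this snippet.

```python
from Bio import SeqIO
from Bio.Seq import Seq
from Bio.SeqRecord import SeqRecord

def concatenate_references(fasta_in, fasta_out, ref_id="concatenated_virus_reference"):
    """Join all records of a multi-FASTA file into one reference sequence
    and record where each original genome starts within it."""
    parts, offsets, pos = [], {}, 0
    for rec in SeqIO.parse(fasta_in, "fasta"):
        offsets[rec.id] = pos          # start coordinate of this genome in the reference
        parts.append(str(rec.seq))
        pos += len(rec.seq)
    ref = SeqRecord(Seq("".join(parts)), id=ref_id, description="")
    SeqIO.write(ref, fasta_out, "fasta")
    return offsets

# e.g. offsets = concatenate_references("suspected_viruses.fasta", "reference.fasta")
```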
Contributions of authors
Project supervision was provided by HA, OA, and FF.In collaboration with HA, OA carried out the experiments and analyzed the data.OA and FF drafted the final version of the manuscript, which was revised and approved by all authors.
Fig
Fig. (1): Symptomatic cucumber leaves show typical mosaic symptoms caused by co-infection of SqVYV and TLCPV (A). Complete sequence of SqVYV shows ten protein domains (B). Two DNA segments of TLCPV, A (C), and B (D).
Fig
Fig. (2): Geneious tree builder was used to build the tree of SqVYV, and ClustalW was used to align the nucleotide sequences. The outgroup member was Cucumovirus Cucumber mosaic virus.
Fig. ( 3
Fig. (3): ClustalW was used to align full-genome nucleotide sequences of TLCPV DNA-A, and the tree was built using Geneious tree builder. The outgroup member was Caulimovirus Cauliflower mosaic virus.
Fig
Fig. (4): Full-genome nucleotide sequences of TLCPV DNA-B were aligned using ClustalW and the tree was built using Geneious tree builder. The outgroup member was Caulimovirus Cauliflower mosaic virus. | 2024-06-21T15:11:32.366Z | 2024-06-19T00:00:00.000 | {
"year": 2024,
"sha1": "4cc25e46c6d61aef73de2bb1c853fed03ec4005b",
"oa_license": "CCBYNC",
"oa_url": "https://bjas.bajas.edu.iq/index.php/bjas/article/download/1419/367",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "9a886145ddfbccb58ac27cdba98356a1eab99951",
"s2fieldsofstudy": [
"Environmental Science",
"Biology"
],
"extfieldsofstudy": []
} |
Yes, SiR
How time flies! It is the 20th anniversary of RNA. For many of us, our research career is largely co-incident with the launch and progression of the journal, which has provided bonds and guiding lights for our community. Here, I share my personal experience in entering this exciting RNA world through contributing to the discovery of SR proteins in pre-mRNA splicing, and emphasize that, while a large array of principles have been established for the function of SR proteins in the past two decades, there are still many outstanding questions on the roles of SR proteins in diverse regulatory activities in mammalian cells.
A personal roller coaster in the race to discover the SR family With the establishment of the in vitro splicing system in the early '80s, the machinery for the spliceosome was quickly shown to consist of five small nuclear ribonucleoprotein particles (snRNPs), two of which (U1 and U2) are directly responsible for defining the 5′ and 3′ splice sites in the pre-mRNA. But when I entered this field in 1988 as a postdoc in Tom Maniatis's lab, we knew virtually nothing about protein factors required for pre-mRNA assembly into the spliceosome. The biochemical fractionation/reconstitution strategy, which had been used so effectively to tease apart the basal transcription machinery, was proving to be challenging. I was charged to let the mouse immune system sort out the essential factors for us by raising monoclonal antibodies against crude gel-filtration purified spliceosomes. By generating >2000 candidate monoclonal lines and testing >200, I was fortunate to land on SC35 (now renamed as SRSF2). The antibody against this non-snRNA factor potently inhibits splicing, and interestingly, also decorates the nucleus with bright "stars" known as nuclear speckles. Our first paper was accepted as an Article to Nature with only a two-sentence review to recommend publication, a once-in-lifetime experience that I have never been able to repeat. I was also desperate to further prove that SC35 protein detected on Western was a true active splicing factor by affinity purifying the protein, cutting it out of an SDS gel, extracting it with 8M urea, and renaturing it to see if it would complement the immune depleted nuclear extracts for in vitro splicing.
The initial success was quickly followed by a nightmare, as I was soon in a race to clone the gene with both Krainer and Manley groups who succeeded in biochemical purification of a protein of roughly the same size and apparently the same activity for complementing S100 (which Krainer called SF2) or inducing T antigen isoform switch (which Manley called ASF). I first tried to clone by screening a λgt11 library in bacteria; I failed of course and later realized that the antibody recognizes a phospho-epitope and bacteria do not have the right kinase. I then tried to scale up immune affinity purification and obtained pure protein for sequencing by Edman degradation, but the core facility at Harvard informed me that there was not enough protein (later, I found out why: they accidently lost my protein!). I was next forced to biochemically purify the protein using the Western as a readout. With sufficient material this time, which generated multiple tryptic peptides on HPLC, I was asked to pick candidate peaks for sequencing. Tom suggested I pick three, and if all corresponded to SF2/ASF, I should abandon the project. Among the three I picked, two were identical to SF2/ASF and the third one contained a 14 amino acid sequence with one half corresponding to SF2/ASF, but the other half appeared new. Had all three been peptides of SF2/ASF, I would be doing different things now. Too short to design degenerated PCR primers, I had to use >1000-fold degenerate oligonucleotides to screen by hybridization, which led to cloning of both SF2/ASF and SC35. Having lost the battle to clone SF2/ASF, I realized, together with Mark Roth's work on amphibian B "snurposomes," that SF2/ASF and SC35 are members of a larger group of proteins now known as the SR family. The rest is history. I was invited by Tim Nilsen to write the first review on SR proteins. This paper has been the most highly cited one among all I have published in my career thus far.
SR proteins in splicing control and beyond
After thousands of papers on SR proteins contributed by the community, we now have a set of general rules for the function of SR proteins in splicing. SR proteins are essential to commit pre-mRNA to the splicing pathway by promoting U1 and U2 binding to functional 5 ′ and 3 ′ splice sites as well as communication between the functional splice sites during initial exon definition followed by paired interactions across the intron for spliceosome assembly. SR proteins are also involved in regulated splicing by strengthening weak splice sites and/or competing with negative splicing regulators, such as hnRNP proteins. In both constitutive and regulated splicing, SR proteins bind specific, although quite degenerate in many cases, sequence motifs, most in exons and largely purine-rich. Thus, SR proteins have been generally considered positive splicing factors and regulators by functioning through exonic splicing enhancers (ESEs).
The defining features for SR proteins are multiple Arg-Ser (RS) dipeptides in their RS domain besides RNA Recognition Motif (RRM). Relative to 12 core SR protein family members, the SR protein family has many cousins, nieces, and nephews characterized by an RS domain in combination with other protein motifs or domains. Relative to core SR proteins, however, those SR-related proteins are less understood. The RS domain in all SR proteins and related proteins is heavily phosphorylated by specific kinases, a gateway to understand the biology and regulation of these splicing regulators in cell cycle, signaling, and cancer.
SR proteins continue to fascinate us because they appear to have multiple other roles beyond splicing control. Most splicing takes place co-transcriptionally on chromatin, and defects in the process cause genome instability, which may be connected to a wide range of biological processes. We also discovered that SR proteins have a direct role in both transcription initiation and elongation, suggesting that splicing is not simply coupled with transcription in time and space, but functionally integrated to have mutual benefits for efficient gene expression in the nucleus. SR proteins have also been implicated in multiple other RNA metabolism pathways, including mRNA nuclear export, mRNA quality control, microRNA biogenesis, and even translational regulation in the cytoplasm. These findings place SR proteins in nearly all steps along the axis of the central dogma.
Much more is left to discover about SR proteins
While SR proteins are among the best-characterized splicing factors, many fundamental questions remain unsolved regarding how they promote spliceosome assembly and specific interactions within the assembled spliceosome. For example, SR proteins are thought to interact with other RS domain-containing proteins, such as U1-70K and U2AF, to mediate spliceosome assembly. However, all existing data are based on yeast two-hybrid and/or in vitro pull-down assays with additional support from the FRET experiments, and thus, rigorous evidence for this textbook statement has been lacking. SR proteins are part of the spliceosome, but their specific contacts in this RNA machinery await insights from structural analysis of the spliceosome. In fact, there is no high-resolution structural information on the RS domain of any SR protein thus far, which may require both proper phosphorylation and stable interactions with its partners within a splicing complex for successful crystallization.
Functional genomics has generated new insights into the in vivo function of SR proteins, such as their position-dependent and context-sensitive effects in regulated splicing, akin to many other splicing regulators. However, the binding specificity of individual SR proteins established with purified proteins does not seem to fully support their exon-central binding pattern in vivo, indicative of cooperative binding with other RNA binding proteins during splice site selection, an important direction to be further explored. As SR proteins have been implicated in diverse aspects of RNA metabolism, it is also the prime time to survey their interactions with different RNA populations in different cellular compartments, within specific complexes, and in response to depletion of potential antagonizing activities. For instance, hnRNP A/B is well known to counteract the function of SR proteins in regulated splicing, but such anticipated reciprocal relationship has not yet been substantiated by in vivo genomic evidence at the genome scale. Now with the elucidated functions of SR proteins in transcriptional control, we have much to learn if we are to understand co-transcriptional splicing. In fact, as SR proteins are localized in nuclear speckles where many transcription factors and chromatin remodelers also reside, it appears that this nuclear domain may have a bigger role in genome organization for coordinated gene expression, rather than simply serving as storage sites for splicing factors in the nucleus. This research direction is connected to the big picture issue of the cell biology of the nucleus.
Last, but not least, many splicing factors, particularly various SR proteins, have been implicated in diseases. SF2/ASF (a.k.a. SRSF1) has well-characterized roles in oncogenesis. It is also fascinating to note that SC35 (a.k.a. SRSF2) and another RS domain-containing protein U2AF1 (a.k.a. U2AF35) are now known to be two major leukemia genes. On SRSF2, all mutations associated with Myelodysplastic Syndromes (MDS) occur at the single amino acid P95, which resides between its RRM and RS domain. It remains to be explored why this particular site is such a hot spot for mutation, which biochemical function(s) of the SR protein is disrupted, and what is the underlying mechanism for causing the disease. These questions have connected splicers to the large disease community, which also speaks for the importance of basic research in providing the theoretical ground for understanding specific disease mechanisms and developing effective therapeutics.
"year": 2015,
"sha1": "f5de92ee53100d6adee70c93da93ded82aa0e7fe",
"oa_license": "CCBYNC",
"oa_url": "http://rnajournal.cshlp.org/content/21/4/619.full.pdf",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "60efe9a8b56cf5cd53ca699d15c0b73124edda07",
"s2fieldsofstudy": [],
"extfieldsofstudy": [
"Medicine"
]
} |
Electricity in retail markets as a commodity in smart energy systems
The current worldwide trend is the transition of many countries to low-carbon energy sources. These changes are driven both by external factors, such as the rapid growth of information and communication technologies, and by the general policy of world powers advocating a clean environment and energy independence for the future. In Russia, the introduction of small generating capacities based on renewable energy sources has great prospects because of the large number of regions where centralized electricity supply is difficult. However, the integration of renewables-based generating capacities requires not only special technical and legislative means but also the competent selection of a tariff for the prosumer. This article presents the concept of an intelligent system that involves individual households in a progressive model of energy consumption and generation and creates a new type of energy system and market participant: the prosumer. Two settlement systems are proposed for households equipped with solar panels. It is proposed to extend the practice of settlements at unregulated prices to individuals and households, with the possibility of choosing the most favorable price category for the prosumer in question.
Introduction
One of the challenges of the energy security doctrine, approved by Presidential decree 216 of May 13, 2019, is an increase in the share of renewable energy sources (RES) in the global fuel and energy balance. The challenge identified in the doctrine is a consequence of the commitment Russia made to reduce CO2 emissions by 2020. An adequate response to this challenge requires support from Russian manufacturers of renewable energy equipment and assistance in its implementation. It also requires a convenient settlement platform and appropriate amendments to the legislation governing settlements between the existing electricity payment system and the party that is both consumer and producer in one person, hereinafter called the prosumer. Such a shift will be possible due to the development of intelligent technologies in the field of energy and the steady increase in the amount of renewable energy, which are the objectives of the above-mentioned doctrine.
Moreover, it is necessary not only to adopt the climate agreement and national legislative acts, but also to conduct an active state policy aimed at supporting the development of renewable sources by the population. All of the above requires the support of investment and venture funds, a flexible state tax policy, and support for prosumers in the consumption and production of clean energy [1]. Of great importance is the readiness of scientific organizations to develop competitive domestic equipment and of local factories to begin production of equipment intended for the use of renewable energy sources.
The current challenge facing the energy policy of any state is the search for an optimal balance between the technical and the economic availability of electricity. In most markets, both wholesale and retail, electricity demand has a low price sensitivity, since the hourly fluctuations in electricity prices that occur in the market are not communicated to the retail consumer. Retail prices in Russia are regulated, and sales companies do not seek to use real-time prices or to inform household consumers about opportunities to save. At present, in the context of price regulation in retail markets, consumers of utility services are charged a fixed price according to the annex to the Federal Tariff Service N 20-e/2, adopted in 2004, unlike commercial consumers with multi-rate tariffs. The government has repeatedly attempted to introduce differentiated tariffs for the population and piloted them in several regions of the Russian Federation, but for the most part, payment was made depending on the amount of energy consumed, not on price fluctuations in the wholesale electricity market. Since generating capacity is almost unchanged in the short term while demand is variable, there is a shortage of electricity during peak periods and a surplus at night. If the consumer is aware of real-time prices, then the price in the retail market will accurately follow the load level at the same time [2]. If the public were aware of tariff increases during peak periods, the need for generating capacity could be reduced by 10% through price factors alone. As a result, the demand curve for electricity would be flattened, due to a decrease in its peak and an increase in the base parts of the daily schedule. If the consumer is informed of large fluctuations in electricity prices, this may lead to flattening of the consumption curve peak. To do this, it is necessary to amend the existing legislation and to settle with the population at unregulated tariffs [3].
According to the International Energy Agency [4], the share of renewable energy sources is expected to grow by 20% and reach 12.4% of total world consumption in 2023. Electricity generation at power plants using renewable energy sources has shown steady growth in 170 countries for the past ten years: in 2018, newly commissioned renewables-based capacity amounted to 181 GW, in 2017 it was 8% less than in 2018, and a year earlier only 85 GW [5]. Currently, hydropower remains the largest renewables-based type of generation in the world, with a 16% share of global electricity generation; wind power is in second place, accounting for about 6%, and solar photovoltaic energy is in third place (4%).
Nevertheless, over the last three years solar stations have led in newly commissioned renewables-based capacity, which is explained by investors' desire for an acceptable payback period. This is due to scientific and technological progress in the development of materials for photovoltaic panels and, as a result, the decrease in their cost that has continued over the past decade.
In many developed countries, as well as in places where centralized power supply is unavailable (for example, islands or, as is typical for Russia, the large number of regions with difficult access to power supply, where RES will be competitive), attention must be paid to the development of distributed power supply to consumers [6]. Of this global demand for distributed energy, 77% relates to purchasing a home solar system. Therefore, the use of distributed renewable energy, the intellectualization of infrastructure, and the transition of consumers to active, prosumer behavior patterns are obvious directions [7,8]. At the same time, effective tariff setting will allow the prosumer to obtain additional profit and to adjust its consumption.
Materials and methods
To solve the problem of effective tariff setting for the prosumer in Russia, the current system of retail tariffs was examined. According to clause 5 of the corresponding Decree of the Government of the Russian Federation and related documents, the choice of the category according to which settlement is made between a legal entity and the sales organization is both the duty and the right of the electricity (power) consumer itself. Let us consider the pricing principles and calculation methods applicable to each of the categories, according to the above regulatory legal acts.
The first price category includes small consumers with a maximum power of less than 670 kW. This article discusses an energy consumption object that can be classified in this category, because the amount of electricity it consumes is relatively small. Payment for electricity in this category is made at a single price for the whole amount of electricity consumed per month. The price already includes the cost of purchasing electricity on the wholesale market and of transmitting it through networks of the corresponding voltage. The invoice contains only one item at a single-rate tariff, and this category is one of the simplest, since hourly planning of energy consumption is not required, but it is often the most expensive for legal entities.
It is worth noting that for legal entities (industries) that did not notify the guaranteeing supplier of their choice of category on time, settlement is made according to the first category. However, the price category used in the previous year can be carried over automatically to the current year.
The only difference between the second price category and the first is that electricity consumption is metered by zones of the day: day and night consumption (two-zone metering), or peak, half-peak, and base load periods (three-zone metering). The object considered in this article has its own energy source, a solar panel, so the two-zone calculation is relevant here, since the solar panel delivers power only in the daytime. As in the first price category, the cost of power and transmission is included in the price, and the consumer's power does not exceed 670 kW.
When using the third price category, electricity is charged hourly; therefore, appropriate hourly metering of electricity is necessary. The transmission tariff is single-rate, as in the first and second price categories. Starting with the third, the range of application of price categories expands: they can be applied to legal entities that consume either less or more than 670 kW. Based on these factors, the third price category, like the first and second, may be applied to the object in question.
Common to the fourth, fifth, and sixth price categories is that the electricity consumed by a legal entity is charged every hour at a different price. The main distinguishing feature among them is the need for consumers to plan their hourly consumption a day ahead (for the fifth and sixth). In case of deviations, the legal entity is obliged to pay for them (they are included in the price of electricity). This is not always feasible; therefore, we will not use the fifth and sixth categories in calculating the electricity consumption of the object under consideration.
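To make the difference between these tariff options concrete, the following is a minimal sketch comparing a month of household consumption under three simplified billing schemes resembling the first, second, and third price categories. All prices, the day-zone boundaries, and the load profile are illustrative assumptions, not the regulated tariffs or the official settlement methodology described above.

```python
# Illustrative comparison of three simplified tariff options for one month of
# hourly consumption. All prices and the load profile are assumed values.

day_hours = range(7, 23)                      # assumed day zone: 07:00-23:00

def single_rate_cost(load, price=4.0):
    """Price category 1 style: one price for the whole volume (rub/kWh)."""
    return price * sum(load)

def two_zone_cost(load, day_price=4.6, night_price=2.3):
    """Price category 2 style: separate day and night prices."""
    total = 0.0
    for hour, kwh in enumerate(load):
        total += kwh * (day_price if hour % 24 in day_hours else night_price)
    return total

def hourly_cost(load, hourly_prices):
    """Price category 3 style: every hour billed at its own price."""
    return sum(kwh * p for kwh, p in zip(load, hourly_prices))

# Hypothetical month: 30 identical days with an evening peak.
daily_profile = [0.3]*7 + [0.6]*4 + [0.5]*6 + [1.2]*5 + [0.4]*2   # kWh per hour
load = daily_profile * 30
hourly_prices = ([2.0]*7 + [4.0]*10 + [6.0]*5 + [3.0]*2) * 30      # rub/kWh

print("single-rate:", round(single_rate_cost(load), 2), "rub")
print("two-zone:   ", round(two_zone_cost(load), 2), "rub")
print("hourly:     ", round(hourly_cost(load, hourly_prices), 2), "rub")
```

With a night-heavy or flat load profile, the two-zone and hourly schemes come out cheaper than the single rate, which mirrors the reasoning behind choosing a category for a specific consumer.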
As for the electricity transmission tariff, the fourth and sixth price categories can be combined into one group: only in the calculations for these categories is two-rate transmission tariffing applicable. Settlement by price categories between legal entities and guaranteeing suppliers is established by law; the use of other mechanisms is not currently legal in Russia. The actual price of electric energy and capacity for each guaranteeing supplier is published before the 14th day of the month following the settlement month on the website of the Joint Stock Company "Administrator of the Trading System of the Wholesale Electricity Market". Tariffs for transmission services and sales premiums for guaranteeing suppliers are set annually by the regional energy commissions. Thus, given data on actual and planned consumption, it becomes possible to verify, with a high degree of accuracy, the cost of electricity set by the guaranteeing supplier for the month [9].
What should be done if, when checking the expediency of using a category, it proves to be unprofitable? The answer is to switch to another category. Transitions among the first, second, third, and fifth categories are possible once a month throughout the year; the same holds between the fourth and sixth. For the transition, the guaranteeing supplier must be notified at least 10 working days before the start of the settlement period. A company can switch from the first, second, third, or fifth category to the fourth or sixth only once a year; to do so, it must inform the guaranteeing supplier and change the electricity transmission tariff within one month after the publication of tariffs for the new period. Usually, updated information is available at the end of December.
Thus, a legal entity can choose between a single-rate and a two-rate tariff for electricity transmission only once a year [10]. Within this division, however, the situation is more flexible: the price category can be changed monthly if it is economically justified.
For our object, a residential building in the city of Kazan, we considered the choice among several price categories (first, second, third, and fourth) and performed a calculation for each of them. Based on the data obtained, recommendations were made for the electricity consumer. These categories were not chosen by chance: the price categories that require independent consumption planning by the enterprise (fifth and sixth) are not beneficial for the vast majority of consumers, because hourly consumption for the day ahead cannot be planned with certainty, and the resulting deviations from the declared volume must be paid for by the legal entity [11]. Thus, for consumers with a maximum power of less than 670 kW, the determination of the price category is reduced to the first, second, third, or fourth, which differ in the tariff option for electric power transmission services and in how electric energy is metered. Unlike the second, the first is characterized by a more "ragged" load profile. According to this characteristic and the electrical load schedule, a residential building can be attributed to the first price category.
Results
In view of the foregoing, a technical solution should be developed for the integration of renewable energy sources into the energy system, one that takes market signals into account and is accessible not only to large industrial facilities but also to single households or housing societies. Then each person can be involved in a system of progressive energy consumption. The main components of this system are smart metering devices with built-in data transmission devices, communication lines, a hardware-software complex for data processing and computing, and user applications. The structural diagram of the system is shown in Figure 1. Information about the consumption or generation of electricity by the various devices of the facility is collected in a single smart settlement system located on the server of the energy supplying organization or guaranteeing supplier. This server carries out the financial calculations for electricity. One of the requirements for the system is support for cloud storage, to provide users with access to information about their consumption and the state of their electricity bill. The user will be able to view the data in a mobile application that communicates with the cloud storage at any time of day. In Figure 1, the communication channels are implemented with PLC technology and optical communication lines, but twisted pair or wireless data transmission methods can also be used.
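To make the data flow concrete, the sketch below shows the kind of hourly reading record such a settlement server might collect from smart meters and aggregate for a billing period. The field names and the aggregation rule are illustrative assumptions only; they are not the specification of the system in Figure 1.

```python
# Minimal sketch of meter readings being aggregated by a settlement server.
# Field names and the aggregation rule are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class MeterReading:
    meter_id: str
    timestamp: str        # ISO 8601 hour, e.g. "2021-06-01T13:00"
    consumed_kwh: float   # energy drawn from the grid during the hour
    generated_kwh: float  # energy exported to the grid during the hour

def monthly_totals(readings):
    """Sum imports and exports per meter for the billing period."""
    totals = {}
    for r in readings:
        consumed, generated = totals.get(r.meter_id, (0.0, 0.0))
        totals[r.meter_id] = (consumed + r.consumed_kwh,
                              generated + r.generated_kwh)
    return totals

readings = [
    MeterReading("home-1", "2021-06-01T13:00", 0.0, 1.4),
    MeterReading("home-1", "2021-06-01T22:00", 0.9, 0.0),
]
print(monthly_totals(readings))   # {'home-1': (0.9, 1.4)}
```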
The financial calculations carried out by the system follow two models. The first is the energy credit model: the electricity generated by the prosumer and delivered to the network is credited to his account as a marketable product, and in periods when his own consumption exceeds generation, this credit is used to offset electricity consumed from the network. The second model is called energy billing: the electricity generated and the electricity consumed from the network each have their own set prices, and the consumer sells electricity generated in excess of his own consumption and accounts for it in monetary terms.
Energy billing requires two meters that separately measure the energy consumed and delivered to the grid. In this model, a prosumer buys electricity from the grid from a utility at a retail price and sells its own produced energy at a wholesale price. At the same time, in order to track real-time pricing, the meter must take into account the current market price so that the prosumer has up-to-date information for choosing a retail tariff [12,13].
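The following sketch expresses the two settlement rules just described as simple monthly calculations. The retail and wholesale prices and the monthly volumes are placeholder values, not the tariffs or the measured data used in the paper's example.

```python
# Sketch of the two settlement models described above (energy credit vs.
# energy billing). Prices and monthly volumes are placeholders.

def energy_credit_bill(consumed_kwh, generated_kwh, retail_price):
    """Net metering: exported energy offsets imported energy kWh-for-kWh;
    only the net purchase from the grid is paid at the retail price."""
    net_kwh = consumed_kwh - generated_kwh
    return max(net_kwh, 0.0) * retail_price

def energy_billing_bill(consumed_kwh, generated_kwh, retail_price, wholesale_price):
    """Separate metering: all imports are bought at the retail price and all
    exports are sold at the wholesale price; the result may be negative."""
    return consumed_kwh * retail_price - generated_kwh * wholesale_price

retail, wholesale = 4.0, 3.015        # rub/kWh, assumed prices
consumed, generated = 450.0, 380.0    # kWh in a month, assumed volumes

print("energy credit :", energy_credit_bill(consumed, generated, retail), "rub")
print("energy billing:", energy_billing_bill(consumed, generated, retail, wholesale), "rub")
```

Because imports are priced above exports, the credit model favors prosumers who generate less than they consume, while separate billing becomes attractive when generation regularly exceeds household needs; this is the same conclusion reached in the calculations below.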
In order to assess the economic feasibility of the proposed calculation models, we calculated the estimated generation from the solar panels installed on the roof of a residential building in the city of Kazan, and the income from their use. Settlement of the generated energy was carried out according to the two models described above.
Under the energy credit model, the residential building consumed energy both from the network and from its own generation sources; in periods when generation by the solar panels exceeded the building's needs, the surplus was delivered to the network as a credit against future consumption. Under the energy billing model, surplus energy generated by the solar panels was sold to the network for money.
The rightmost column in Tables 1 and 2 displays the monthly status of the prosumer's payment. The amount of the payment is determined from the average household electricity consumption (the base cost for the tariff in Russia is 170 rubles) and the amount of electricity sold or additionally consumed. Electricity consumption is paid at a price of 170 rubles/kWh and sales at a price of 3.015 rubles/kWh. In this case, the total consumption is 7120.797 kWh and the total electricity generation is 6901.912 kWh. At the end of the month, the user's debt to the supplying organization amounted to 2700 rubles under the credit system and 7213 rubles under the billing system. Similar calculations for households with different ratios of generation and consumption showed that an energy credit is beneficial for those who generate less than they consume, while energy billing is appropriate for prosumers whose generation often exceeds household needs.
In the above calculations, fluctuations in the price of electricity in the retail market were not taken into account. However, as mentioned above, for the full inclusion of all consumers in a progressive consumption model, the financial settlement system includes the function of translating market price signals. Thus we consider and evaluate several models of calculations using price categories that are currently valid in Russia for legal entities, and the essence of which is proposed to be transferred to the financial calculations of prosumers.
Electricity generation from renewable energy sources is intermittent, so the consumer has to combine power from the network with renewable sources [14]. If the apartment building is not equipped with batteries or alternative energy sources other than the sun, so that consumption from the network occurs every night, the comparison of the selected categories looks as shown in Figure 2.
For the case when the calculation is carried out by category but the consumer does not deliver the "surplus" to the network, the comparison of the selected price categories looks as shown in Figure 3. Figure 4 shows a comparison of the third and fourth price categories.
Discussion
It can be seen from Figure 2 that the two-zone option is more profitable than the others by 1,310 rubles. Thus, under these conditions, it is more profitable to use the second price category, in which electricity is metered in two zones of the day.
In the period from April to August, the amount of electricity consumed from the network is zero. The diagram in Figure 3 shows that, in this comparison, the first category turned out to be the most profitable. The price of electricity for the first price category is calculated according to the residual principle: after subtracting the bills of all consumers in the second through sixth categories from the guaranteeing supplier's total expenses, the remaining amount is distributed evenly among consumers using the first price category. The first price category should therefore be more expensive than the others and should encourage consumers to install hourly metering devices and choose other price categories with more accurate methods for determining actual consumption. However, due to the residual mechanism for calculating the price of the first category, in some regions there are "distortions" under which it is the cheapest category for consumers. When using the first category, the consumer's annual benefit compared to the third and fourth categories is 545 and 544 rubles, respectively. The diagram in Figure 4 shows that the two-rate tariff turned out to be slightly more profitable than the single-rate one; consumer savings in this case amounted to 117 rubles.
If the facility has a storage battery of unlimited capacity in which energy is accumulated, it is possible to consider selling excess generated electricity to the network during peak hours on the balancing market.
However, it should be noted that the profitability of any price category is variable. It cannot be said unequivocally that any one specific category will be beneficial for all regions; moreover, for different prosumers within the same region, different price categories are beneficial. Therefore, in order to avoid paying an increased price, the consumer's energy supply conditions must be analyzed periodically and changes made if necessary. It is worth noting that if the period of maximum electricity consumption occurs at night, it is more profitable to use the second price category; if two-rate transmission tariffing is more profitable economically, then the fourth or sixth price category should be used. However, the need for consumption planning under the fifth and sixth categories must not be forgotten.
Conclusion
As a first step towards the implementation of the proposed concept of the energy trade system, amendments should be made to the tax legislation aimed at stimulating the development of renewable energy sources. In relation to prosumers, such measures may include value added tax benefits for the transportation, purchase, and installation of solar panels, property tax benefits, tax holidays for the first years of equipment use, and state subsidies for interest rates on loans related to renewable energy facilities [15]. The next step is to agree on the procedure for technological connection to the network; here, the technological process of integrating renewable generation should be supported by the network equipment and the capabilities of the network operator. After this, the principles of contractual relations with the sales organization are established, which in general means choosing a billing model for the user. To implement this step, it is necessary to extend the practice of settlements at unregulated prices to individuals and households, with the possibility of choosing the most favorable price category for the prosumer in question, and to amend paragraph 5 of Resolution 442 issued by the Government of the Russian Federation so as to equate the population with consumers who pay for electricity at unregulated prices.
"year": 2020,
"sha1": "b252f9410f496b82d7fa983e5cbd33a1611f98f5",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1757-899x/791/1/012061",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "62bd18911e77e924090fe8a2ce2f8612561b78fa",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Physics",
"Business"
]
} |
Study on the Optimal Equivalent Radius in Calculating the Heat Dissipation of Surrounding Rock
The heat dissipation of surrounding rock of a non-circular roadway is computed using an equivalent circular roadway approach under three circumstances, in which the area, perimeter, or hydraulic diameter of the circular roadway is equal to that of the non-circular roadway, in order to obtain the optimal equivalent radius. The differential equations of unsteady heat conduction in the surrounding rock are established in cylindrical and rectangular coordinate systems using the dimensionless analysis method, and the calculation formulas for heat dissipation capacity and heat transfer resistance are derived from these equations. Based on the equivalent radius method, the similarities and differences between non-circular and circular roadways in calculating the heat dissipation of surrounding rock are discussed. Using the finite volume method, calculation models of surrounding rock heat dissipation are also established for circular roadways and for non-circular roadways of three cross-sectional forms, namely, trapezoid, rectangle, and arch. The relative errors of surrounding rock heat dissipation of the three equivalent circular roadway methods are investigated for the three non-circular roadways. Results show that the calculation approach with equal perimeters is the best for the heat dissipation of surrounding rock of non-circular roadways.
Introduction
With the increase in mining depth, the high-temperature geothermal field has become a serious problem in mines in recent years. In South Africa, Bluhm et al. reported that the mining depth of working faces on the Witwatersrand reached 3800 m, where the virgin rock temperature exceeded 60 °C [1]. Meanwhile, according to related statistics in China, 47 mines have mining depths of over 1000 m and working face temperatures of approximately 36 °C [2]. Three different modes of the deep mine geothermal field, namely, linear, nonlinear, and abnormal, have also been presented previously [3]. Yuan analyzed the causes of the formation of heat hazards in Huainan and provided some suggestions for preventing heat damage [4]. Considering the influence of thermal hazards in deep mines, Yang et al. summarized the causes of deep thermal hazards and compared the control technologies for them [5]. Given that cooling systems consume up to 25% of the total electricity, Gideon et al. proposed an energy-saving strategy for a deep mine cooling system [6]. In [7], some measures and solutions were presented based on an analysis of the environmental issues arising from coal mining. It has been found that high-temperature conditions can severely impair the health of coal miners and can trigger several disastrous incidents, such as fire and gas explosion [8]. Therefore, a study on the characteristics of the geothermal field and the control of heat hazards is significant for improving coal miners' working conditions.
In this study, we mainly focus on the influence of the sectional form of roadways, non-circular versus circular, on the calculated heat dissipation of surrounding rock. The heat released from the rock around roadways is one of the main heat sources in a high-temperature mine, and its heat dissipation capability is closely related to the sectional form of the roadway, in addition to the thermo-physical properties of the rock, airflow temperature, wind speed, wall humidity, and ventilation time [9], [10]. Based on their cross-sectional shapes, roadways can be divided into two types, namely, non-circular and circular. A circular roadway possesses symmetry; consequently, calculating its heat dissipation capability with an analytical solution is relatively simple. However, apart from shafts, most roadways in mines are non-circular, which is relatively complex, and obtaining an analytical solution is difficult. Therefore, an equivalent circular roadway approach is commonly employed for non-circular roadways when calculating the heat dissipation of surrounding rock. However, the specific size of the equivalent circular roadway, i.e., the so-called equivalent radius, has not yet been unified. Three well-known methods [11], namely, (1) the equal area, (2) the equal perimeter, and (3) the equal hydraulic diameter methods, have been used in calculating the heat dissipation of surrounding rock of non-circular roadways. Therefore, the application of the equivalent circular radius is extremely confusing in computing the heat dissipation of surrounding rock.
Many studies on calculating the heat released from rock around roadways have been reported. Shcherban [12] proposed an unstable heat transfer criterion, which is related to the heat dissipation of surrounding rock and is influenced by the Fourier and Biot numbers. Cen et al. proposed an analytical solution for the unstable heat transfer criterion using the variable separation approach [13]; however, the calculation method was complex and difficult for practical application. Sun et al. [14] simplified the problem using the Laplace transformation approach and provided an analytic expression for the unsteady heat transfer coefficient on the basis of the equivalent circular roadway model, remedying the deficiencies of the solution of Cen et al. Starfield et al. proposed a theoretical approach to describe heat dissipation between surrounding rock and airflow under moist conditions [15]. In [16], Yakovenko et al. analyzed the heat transfer coefficient between airflow and surrounding rock for small Fourier numbers. With regard to the deficiencies of the methods of Starfield et al. and Yakovenko et al., Gao et al. analyzed the influence of moisture on the heat dissipation of surrounding rock using the equal area method [17]. They also proposed that, when water evaporation occurs on the airway surface during airflow, the heat released from the surrounding rock is greater and the airflow temperature is higher than when no water is vaporized; however, the shape of the roadway was not considered. In [18], Zhang et al. analyzed the coupled problem of heat transfer in the surrounding rock and heat convection between the air and the surrounding rock in the Fenghuo mountain tunnel. Considering the characteristics of the temperature field, such as its unsteady state, heterogeneous and anisotropic rocks, and anomalous roadway forms, Wu et al. proposed a theoretical approach based on the finite element method and analyzed the basic law of the change of the surrounding rock temperature field with time and space [19], [20]. Lowndes et al. [21] built a set of experimental devices to simulate the convective heat transfer coefficient and obtained the relationship between the wall temperature of the surrounding rock and the airflow temperature. In [22], Zhang proposed a mathematical model of the temperature field in cylindrical coordinates to verify the feasibility of the modeling experimental method, supplementing the model of Lowndes. Based on CLIMSIM and MULTIFLUX, Danko et al. [23] investigated the heat and mass transfer phenomena on the roadway wall. Considering that the airflow temperature changes periodically with time, Qin et al. determined, using the equal area method, that the fluctuation amplitude of temperature decays exponentially with increasing depth into the surrounding rock [24]. To date, only a few studies have been conducted on the effect of the sectional form of roadways on the heat dissipation of surrounding rock.
In practice, to simplify calculation, equivalent circular roadway methods can be applied to non-circular roadways. According to the above investigations, each equivalent method exhibits a similar relationship curve for the unstable heat transfer criterion (Kuτ) with the Fourier and Biot numbers, as shown in Figure 1. However, each equivalent method yields different Fourier and Biot numbers, which results in different heat dissipation values for a practical problem. This study discusses the reasons for the differing results obtained when the heat dissipation of surrounding rock is calculated with the different equivalent circular roadway approaches. The heat dissipation of surrounding rock of a non-circular roadway is computed using the equivalent circular roadway approach under three circumstances, in which the area, perimeter, or hydraulic diameter of the equivalent circular roadway is equal to that of the non-circular roadway, in order to obtain the optimal equivalent radius. Non-circular roadways with trapezoidal, rectangular, and arch cross-sectional forms are shown in Figure 2. Based on the theory of the equivalent method, the similarities and differences in surrounding rock heat dissipation between non-circular and circular roadways are analyzed. Ultimately, the deviations of surrounding rock heat dissipation of the equivalent circular roadways are investigated for the trapezoidal, rectangular, and arch cross-sectional forms.
Established Governing Equations of Thermal Conductivity of Surrounding Rock
The circular roadway is analyzed in the cylindrical coordinate system and the non-circular roadway in the rectangular coordinate system to simplify the calculations. Meanwhile, several necessary assumptions are made: (1) only heat flow in the radial direction is considered, whereas that in the axial direction is ignored; (2) sensible heat and latent heat on the surface of the surrounding rock are not considered; (3) the surrounding rock is homogeneous and isotropic. Based on the law of conservation of energy and Fourier's law, the governing equations of thermal conductivity for the evolution of the temperature field are given as follows: where T is the temperature of the surrounding rock; x and y are the coordinates in the rectangular coordinate system; r is the distance from the outside boundary of the roadway to the center in the cylindrical coordinate system; t is the ventilation time; n is the direction of the boundary outer normal; a is the thermal diffusivity of the surrounding rock, a = λ/ρc, where λ is the thermal conductivity, ρ is the density, and c is the specific heat; h is the convective heat transfer coefficient between the surrounding rock and the airflow; Tgu is the virgin rock temperature; Tw is the surface temperature of the surrounding rock; Tf is the airflow temperature of the roadway; and Г1 and Г2 are the internal and external boundaries of the surrounding rock, respectively.
Dimensionless Governing Equations
Dimensionless numbers are integrated into Eq. 1 to derive Eq. 2, as follows: where, Θ denotes the dimensionless excess temperature; Θ w denotes the dimensionless excess temperature of the surrounding rock surface; X is the dimensionless abscissa; Y is the dimensionless ordinate; R is the dimensionless radius, N is the surface normal direction; Bi is the Biot number; Fo is the Fourier number; and r 0 is the equivalent radius of the roadway calculated in Eq. 3, as follows: where, r 0,1 is the radius of the equal area method; r 0,2 is the radius of the equal perimeter method; r 0,3 is the radius of the equal hydraulic diameter method; S is the cross-sectional area; and U is the cross-sectional perimeter.By substituting Eq. 2 into Eq. 1, we can obtain Eq. 4, as follows: From Eq. 4, Θ is the function of Bi, Fo, and R in the equivalent circular roadway.Therefore, we can derive Eq. 5, as follows: Meanwhile, the equivalent radius of the roadway is 1 at the surrounding rock surface.By rearranging Eq. 5, Θ w can be expressed in Eq. 6, as follows: ( , )
Calculating Heat Dissipation Capability
After the excavation of a roadway, airflow blows across the rock surface and convective heat transfer occurs between the airflow and the surrounding rock. Given that the temperature field of the surrounding rock exhibits unsteady heat conduction, the heat flux density based on Newton's law of cooling can be expressed in Eq. 7, as follows: where q denotes the heat flux density. From Eq. 7, the heat dissipation capability per unit length can be expressed in Eq. 8, as follows: where Q is the heat dissipation capability; Ql is the heat dissipation capability per unit length; and l is the length of the roadway. Therefore, the unstable heat transfer criterion (Kuτ) [20] can be expressed in Eq. 9, as follows: Based on Eq. 9, the relationship curve of Kuτ with Fo and Bi is shown in Figure 1.
By substituting Eq. 9 into Eq. 8, the heat dissipation capability per unit length can be expressed in Eq. 10, as follows:
Calculating Heat Transfer Resistance
In general, heat is transported between the surrounding rock interior and the airflow in two ways: heat conduction from the surrounding rock interior to the surrounding rock surface, and heat convection from the surrounding rock surface to the airflow. That is, two types of thermal resistance arise during this heat transfer: heat conduction resistance and heat convection resistance. Therefore, the heat quantity from the rock around roadways is influenced by both heat conduction and heat convection. Using the concept of thermal resistance, the heat quantity between the surrounding rock and the airflow per unit length can be expressed in Eq. 11, as follows: where βl is the heat transfer resistance.
From Eqs. 8 and 9, the heat transfer resistance (βl) can be expressed in Eq. 12, as follows: Therefore, the heat transfer resistance is closely related to the unstable heat transfer criterion.
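The following sketch evaluates the heat dissipation per unit length and the corresponding heat transfer resistance from the unstable heat transfer criterion. It assumes the circular-roadway relations commonly used with this criterion, Ql = 2π λ Kuτ (Tgu − Tf) and βl = 1/(2π λ Kuτ), which are consistent with the descriptions of Eqs. 7–12 above but are not copied from the paper's equations; λ, Kuτ, and the temperatures are placeholder values.

```python
# Heat dissipation per unit roadway length and heat transfer resistance from
# the unstable heat transfer criterion Ku_tau (assumed standard circular-
# roadway relations; all numbers are placeholders).
import math

def heat_dissipation_per_length(lam, ku_tau, T_gu, T_f):
    """Q_l = 2*pi*lambda*Ku_tau*(T_gu - T_f), in W per metre of roadway."""
    return 2.0 * math.pi * lam * ku_tau * (T_gu - T_f)

def heat_transfer_resistance(lam, ku_tau):
    """beta_l = 1 / (2*pi*lambda*Ku_tau), so that Q_l = (T_gu - T_f)/beta_l."""
    return 1.0 / (2.0 * math.pi * lam * ku_tau)

lam, ku_tau = 2.0, 0.8            # W/(m K), dimensionless (assumed)
T_gu, T_f = 40.0, 28.0            # deg C

Q_l = heat_dissipation_per_length(lam, ku_tau, T_gu, T_f)
beta_l = heat_transfer_resistance(lam, ku_tau)
print(f"Q_l = {Q_l:.1f} W/m, beta_l = {beta_l:.4f} (m K)/W, "
      f"check: {(T_gu - T_f)/beta_l:.1f} W/m")
```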
Analysis of Heat Dissipation of Surrounding Rock
When an equivalent circular roadway is used to represent a non-circular roadway, the same initial conditions must apply so that the same heat dissipation of the surrounding rock is obtained; otherwise, the equivalent calculation is meaningless. These conditions are shown in Eq. 13 and Eq. 14, as follows: where subscript Y denotes the physical quantity of the equivalent circular roadway and subscript F denotes that of the non-circular roadway.
Combining Eqs. 11, 12, 13, and 14 gives Eq. 15, as follows: The three circumstances in which the area, perimeter, or hydraulic diameter is equal can be expressed in Eq. 16, as follows: By substituting Eq. 12 into Eq. 15, we can obtain Eq. 17, as follows: Therefore, when the equivalent circular roadway has the same perimeter as the non-circular roadway, Θw,Y = Θw,F can be obtained from Eq. 17. From the above analysis, it is found that the equal perimeter method is the best for calculating the heat dissipation of surrounding rock of non-circular roadways.
Established Physical Model of Surrounding Rock
Three roadways, namely, trapezoid, rectangle, and arch, are selected to analyze the generality and differences of the three equivalent circular roadway methods. Qin et al. proposed a method for the surrounding rock heat dissipation of a circular roadway based on the finite volume method in a 1D coordinate system [25]. The discretization of Eq. 1 in rectangular coordinates by Wang [26] is adopted in this study.
The closer to the surface of the surrounding rock, the more intense the heat dissipation; thus, the triangular unit is selected as the basic unit, and the closer to the roadway surface, the smaller the unit size. The mesh generation of the surrounding rock has been evaluated, and the calculation result is grid independent. The radial grid of both the circular and the non-circular roadways is divided into 30 parts with a ratio of 1:3, and the loop grid of the non-circular roadway is divided into 36 parts.
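For orientation, the following is a minimal one-dimensional, explicit finite-difference stand-in for the transient conduction problem around a circular roadway with a convective wall boundary. It is not the paper's two-dimensional finite-volume model, and all rock and air properties, the grid, and the ventilation time are assumed values; it is intended only to illustrate the type of transient problem being solved.

```python
# Explicit finite-difference sketch of 1-D radial transient conduction around a
# circular roadway with a convective boundary at the wall (assumed values).
import numpy as np

a      = 1.0e-6    # thermal diffusivity of rock, m^2/s   (assumed)
lam    = 2.0       # thermal conductivity, W/(m K)        (assumed)
h      = 15.0      # convective coefficient, W/(m^2 K)    (assumed)
r0     = 2.0       # roadway radius, m
R_out  = 60.0 * r0 # far-field boundary held at virgin rock temperature
T_gu   = 40.0      # virgin rock temperature, deg C
T_f    = 28.0      # airflow temperature, deg C

N  = 300
r  = np.linspace(r0, R_out, N)
dr = r[1] - r[0]
dt = 0.4 * dr**2 / a          # within the explicit stability limit
T  = np.full(N, T_gu)         # initial condition: virgin rock everywhere

def step(T):
    Tn = T.copy()
    # interior nodes: dT/dt = a * (d2T/dr2 + (1/r) dT/dr)
    Tn[1:-1] = T[1:-1] + a * dt * (
        (T[2:] - 2*T[1:-1] + T[:-2]) / dr**2
        + (T[2:] - T[:-2]) / (2 * dr * r[1:-1]))
    # wall node: conduction into the wall balances convection to the airflow,
    # lam*(T[1]-T[0])/dr = h*(T[0]-T_f)  (first-order approximation)
    Tn[0] = (lam / dr * T[1] + h * T_f) / (lam / dr + h)
    Tn[-1] = T_gu                  # far boundary: undisturbed rock
    return Tn

t_end = 30 * 24 * 3600.0           # one month of ventilation
for _ in range(int(t_end / dt)):
    T = step(T)

q_wall = h * (T[0] - T_f)          # heat flux density at the wall, W/m^2
Q_l    = 2 * np.pi * r0 * q_wall   # heat dissipation per unit roadway length, W/m
print(f"wall temperature {T[0]:.2f} C, Q_l {Q_l:.1f} W/m")
```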
To simplify calculation and provide a convenient comparative analysis of the different equivalent circular roadways, the following assumptions are made: the sectional area of each non-circular roadway is π; the proportion of the upper base, lower base, and height of the isosceles trapezoid roadway is 3:5:2.5; the proportion of the width and height of the rectangular roadway is 5:3; the proportion of the width and height of the semicircular arch is 4:3.5; and the calculation depth is 60. The physical models of the different shapes are shown in Figure 3.
Equivalent Radius of Non-circular Roadways
For the aforementioned three non-circular roadways, the three equivalent circular roadway methods are adopted; each equivalent radius is shown in Table 1. Based on the definitions of Fo and Bi in Eq. 2, the proportion of the three Bi values for the equivalent circular roadways of each shape is r0,1 : r0,2 : r0,3, and the proportion of the three Fo values is (r0,1)^-2 : (r0,2)^-2 : (r0,3)^-2. For example, for the trapezoid the proportion of the three Bi values is 1 : 1.285 : 1.557 and the proportion of the three Fo values is 1 : 1.651 : 2.423.
For the non-circular roadway in the rectangular coordinate system, the three equivalent treatments are essentially the same problem, even though their Bi and Fo are different. By contrast, for the three equivalent circular roadways, the discrepancy between the physical models of the circular and the non-circular roadway leads to different results, despite the Bi and Fo of the non-circular roadway being the same as those of the equivalent circular roadway. This study discusses this discrepancy in order to obtain the optimal method.
Analysis of Surface Temperature of Surrounding Rock
The non-circular roadway is not strictly symmetrical, so the border grid units are divided unequally; in particular, the grid at the boundary corners is more densely divided. Thus, the loop border node temperatures differ, and the surface temperature is obtained using the weighted average method expressed in Eq. 18, as follows: Θw = (Θ1L1 + Θ2L2 + … + ΘzLz) / (L1 + L2 + … + Lz), where z is the number of units in the loop grid; Θ1, Θ2, …, Θz are the loop border node temperatures; and L1, L2, …, Lz are the lengths of the loop border units.
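A small sketch of this length-weighted average follows; the temperatures and segment lengths are arbitrary example values.

```python
# Length-weighted average of loop-border node temperatures (Eq. 18 style).
def weighted_surface_temperature(thetas, lengths):
    """Return sum(theta_i * L_i) / sum(L_i) over the loop border units."""
    if len(thetas) != len(lengths):
        raise ValueError("one temperature per border unit is required")
    total_length = sum(lengths)
    return sum(t * L for t, L in zip(thetas, lengths)) / total_length

# Example: four border units with different lengths (arbitrary values).
thetas  = [0.42, 0.45, 0.40, 0.47]   # dimensionless excess temperatures
lengths = [0.8, 1.2, 0.8, 1.2]       # unit lengths, m
print(round(weighted_surface_temperature(thetas, lengths), 4))
```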
Analysis of Deviation
The smaller the relative error (ξ), the more accurate the result. Thus, the three equivalent circular roadway methods can be evaluated using the relative error expressed in Eq. 19, where QF is the heat dissipation capacity of the non-circular roadway and QY is the heat dissipation capacity of the equivalent circular roadway.
To simplify the calculation, Eq. 10 is substituted into Eq. 19 to obtain Eq. 20, as follows: Based on Table 1, the proportions of Bi and Fo for the three equivalent circular roadways for the same problem can be obtained. Then, based on this ratio relationship, the relative errors of the three methods can be compared using Eq. 20.
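A small helper for the relative error of Eq. 19 is sketched below; the heat dissipation values assigned to the three equivalent methods are placeholders, not results from the paper.

```python
# Relative error of an equivalent circular roadway estimate (Eq. 19 style).
def relative_error(Q_equivalent, Q_noncircular):
    """xi = |Q_Y - Q_F| / Q_F, expressed as a fraction."""
    return abs(Q_equivalent - Q_noncircular) / Q_noncircular

Q_F = 950.0                                   # W/m, non-circular reference (assumed)
estimates = {"equal area": 880.0,             # placeholder equivalent-roadway values
             "equal perimeter": 945.0,
             "equal hydraulic diameter": 700.0}
for name, Q_Y in estimates.items():
    print(f"{name:26s} xi = {relative_error(Q_Y, Q_F):.1%}")
```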
Results
The heat dissipation of surrounding rock of the non-circular roadways is computed under the three circumstances in which the area, perimeter, or hydraulic diameter is equal, in order to obtain the optimal equivalent radius. The Fourier number is used as the abscissa to simplify the problem: on the one hand, it makes comparing the results of circular and non-circular roadways convenient; on the other hand, it gives the deviation analysis universality. Based on Eq. 20 and the initial conditions, the surrounding rock heat dissipation of the aforementioned three roadways is computed using the finite volume method in the rectangular coordinate system, in which the roadway surface temperature changes with time. From Eq. 9, the unsteady state heat transfer criterion of the non-circular roadway can be obtained at any Fourier number.
Meanwhile, for any equivalent circular roadway, the unsteady state heat transfer criterion at any Fourier number is also obtained based on the foundation model of Qin et al. [25]. Finally, the aforementioned results are successively substituted into Eq. 20, from which the relative error of the three non-circular roadways can be obtained under the three circumstances when area, perimeter, or hydraulic diameter is equal, as shown in Figures 4, 5, and 6. In Figure 4, when the equal area method is used, the relative error decreases with time. The maximum relative error of the trapezoid roadway is 22% under any Bi. The relative error decreases to 10% when Fo = 100 and Bi = 10, as shown in Figure 4(a). When the equal hydraulic diameter method is used, the maximum negative deviation is 35% and the minimum negative deviation is 10%, with Bi = 0.1716. However, the variation range of the absolute value of the relative error is 0% to 1% for the equal perimeter method. This result is evidently less than those of the other methods. Fig. 5 shows that for the rectangular roadway, the range of the relative error of heat dissipation is 10% to 20% for the equal area method, 20% to 50% for the equal hydraulic diameter approach, and 0% to 0.6% for the equal perimeter method under any Bi. The result of the equal perimeter method is evidently less than those of the other methods. In Figure 6, the relative error of the heat dissipation of the arch roadway has the same variation feature as the other two roadways. When Fo ranges from 0.0001 to 100, the relative error of the equal area method ranges from 6.5% to 2% and the relative error of the equal hydraulic diameter method ranges from 20% to 50%. However, the relative error of the equal perimeter method ranges from 0% to 0.25%. Thus, the equal perimeter approach is evidently superior to the other methods here, too.
Analysis of Relative Errors of Arch Roadway
For the surrounding rock of all three roadways, the relative error of heat dissipation of the equal perimeter method is evidently smaller than those of the other methods. Thus, preferring the equal perimeter method when calculating the heat dissipation capability is reasonable.
The equivalent radius is essentially an equivalent measure, not an identical value. Objectively, relative errors are inevitable when the equivalent radius is adopted to calculate the heat dissipation problem. In practice, however, we should select a relatively accurate approach to calculating the heat dissipation capability so as to obtain more precise results. For the equivalent radius problem, employing the equal perimeter method in engineering applications is evidently beneficial.
Conclusion
A model of the unsteady state temperature field of the surrounding rock is established. The dimensionless criterion is introduced to analyze the governing equations of the surrounding rock in cylindrical and rectangular coordinate systems. The calculation methods for the heat dissipation capability and heat transfer resistance of the surrounding rock are derived from the differential equations. The generality and differences of the surrounding rock heat dissipation of the three equivalent circular roadway methods are compared and analyzed, and the equal perimeter roadway method is determined to be the best approach. In this study, the deviations of the heat dissipation of the surrounding rock between circular and non-circular roadways with trapezoidal, rectangular, and arch cross-sectional forms are investigated. The results show that the deviation of the equal perimeter circular roadway method is the lowest, so the equal perimeter approach is the best way to calculate the surrounding rock heat dissipation of non-circular roadways.
In practical applications, we can generally calculate Fo and Bi using the optimal equivalent radius, namely the equal perimeter radius, based on the initial conditions. Then, the unsteady heat transfer criterion can be confirmed based on the results shown in Figure 1. Finally, the heat dissipation capability of the roadway surrounding rock can be determined using Eq. 10. This standardized treatment of the equivalent radius provides a theoretical basis for engineering applications.
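As a rough illustration of that procedure, the sketch below uses commonly assumed textbook definitions, namely the equal-perimeter radius r = P/(2π), the Fourier number Fo = aτ/r² and the Biot number Bi = αr/λ; the paper's own symbols and the exact expressions of its Table 1 may differ, and all input values are hypothetical.

```python
import math

def equal_perimeter_radius(perimeter_m):
    """Radius of the circle whose perimeter equals the roadway cross-section perimeter."""
    return perimeter_m / (2.0 * math.pi)

def fourier_number(diffusivity_m2_s, time_s, radius_m):
    """Dimensionless time, Fo = a * tau / r**2 (assumed definition)."""
    return diffusivity_m2_s * time_s / radius_m ** 2

def biot_number(alpha_w_m2k, radius_m, conductivity_w_mk):
    """Dimensionless convection-to-conduction ratio, Bi = alpha * r / lambda (assumed)."""
    return alpha_w_m2k * radius_m / conductivity_w_mk

# Hypothetical 4 m x 3 m rectangular roadway after 30 days of ventilation.
r = equal_perimeter_radius(2 * (4.0 + 3.0))
fo = fourier_number(1.2e-6, 30 * 24 * 3600, r)
bi = biot_number(15.0, r, 2.2)
print(f"r = {r:.3f} m, Fo = {fo:.4f}, Bi = {bi:.1f}")
```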
Fig. 1. The relationship curve for the unstable heat transfer criterion with Fourier and Biot numbers.
Fig. 3. Discretization of different shape roadways.
Fig. 4. Relative error of the heat dissipation of the trapezoid roadway under three circumstances.
Fig. 5. Relative error of the heat dissipation of the rectangular roadway under three circumstances.
Fig. 6. Relative error of the heat dissipation of the arch roadway under three circumstances.
Table 1. Different shapes corresponding to three equivalent radii. | 2018-12-21T00:04:22.665Z | 2015-10-01T00:00:00.000 | {
"year": 2015,
"sha1": "ea234dfbacbb09fb0cabd61d694907dae2d32337",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.25103/jestr.085.09",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "ea234dfbacbb09fb0cabd61d694907dae2d32337",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Geology"
]
} |
229589326 | pes2o/s2orc | v3-fos-license | On the Development and Conception of Physical Education Reforms in Colleges and Universities
As a medium subject, physical education in colleges and universities not only has high value for exercising students' physical quality, but also has strong humanistic value for ideological education and professional quality education in China, a value that has been largely ignored in physical education in the past. Nowadays, the requirements placed on talents and society's demands on their abilities, including professionalism, are becoming more comprehensive and flexible. Physical education should therefore strengthen its understanding of the educational value of its multiple deep-seated functions, such as physical culture, ideological and political education, and professional qualities. As a bridge, physical education can realize new educational reform and the development of multi-disciplinary joint training with other subjects, thus keeping physical education abreast of the era. How to implement such talent training is the focus and main educational exploration of this paper.
Introduction
Physical education is a national basic education subject whose educational effect and talent training effect have decisive value and also represent the development level of a nation's basic education. China's quality education reform and subject teaching reform have been carried out for many years, but the effect and strength of physical education reform show many deficiencies compared with the development of other subjects. According to the data collected by the author and a large number of classroom observations and questionnaire survey results over the past two years, there is still a gap between the current physical education reform, in the construction of teaching content, the exploration of the training system of teaching objectives, the development of the discipline, and the training results of personnel quality, and the final training requirements for quality education personnel in China. The main contradictions in the teaching and construction of this subject are the lack of physical culture education and of cognition of physical humanities education, together with poor consciousness of the interdisciplinary combination of physical education, which aggravates the lack of cultivation of the consciousness of "lifelong physical education". This prevents physical education from playing the role of a medium subject between professional subjects, especially the students' specialized courses for gaining professional systems of knowledge, and other humanities. The resulting lack of teaching effect is the main contradiction between students' interest in physical exercise and their lack of consciousness, leading to a decline in physical fitness among students in Chinese colleges and universities, as in most other countries around the world. As an important part of education in colleges and universities, physical education has drawn deep concern from the national sports and education authorities. In the era of continuous pursuit of "quality education, all-round development"
ment" in the world, China's physical education must achieve the overall reform. Taking physical education concept, physical culture, physical literacy education and other ability education as the ultimate goal of physical education reform in colleges and universities, it is necessary to activate teaching means and methods. Therefore, students can improve themselves in physical education. The core of the future development of physical education reform in colleges and universities is to improve the self, realize the socialization of the individual, establish the concept of "lifelong physical education" and realize the cultivation of professional quality through the medium role of physical education.
So physical education has significance for the times. The results of an in-depth questionnaire administered to 1000 college students, distributed across 50 different colleges and universities, grouped into four institutes, and aged 18-23 years, are shown in Table 1.

Development and conception of physical education reforms in colleges and universities

The task for colleges and universities is to break through the boundaries between disciplines with the "all-round development of students", which is the core of physical education reform. In this way, the physical education discipline can fully play its role as a bridging medium and, through the organization of sports activities, realize multi-disciplinary joint training in the humanities, sports concepts, sports culture, and professional quality. However, these teaching requirements cannot be realized in current physical education in China. In the future, the reform of physical education teaching content can be realized from the following two aspects: first, according to the quality requirements of modern education, the development of students' abilities in moral, intellectual, physical, and aesthetic aspects should be closely combined with the basic quality of physical education, so it is necessary to divide the education units;
second, in the actual teaching process, results are used to guide the evaluation mechanism, so that teachers are pushed to adopt effective means to improve students' learning efficiency, ensure students' physical and mental health, and attend to the cultivation of students' abilities and accomplishments and the realization of these results. Teachers should make full use of the medium role of physical education as a subject and cultivate contemporary college students to be excellent in physique, mental health, quality, creativity, and adaptability.
Conclusion
Physical education is an important part of modern education, the essence of which is not only to teach sports skills to exercise students' physical quality, but also to realize the development of moral, intellectual and physical quality of college students and the promotion of relevant quality and ability in group relationship, to enhance students' physique, to improve students' health, and to promote students' consciousness of "lifelong physical education". In other words, physical education has become an important part of individual lifelong education. Physical Education is a kind of activity combining physical quality with intelligence and cultural quality. Physical education is a subject of multi-dimensional quality training. Therefore, under the current talent demand mode and the joint promotion of quality education and well-off society, physical education in colleges and universities must recognize the discipline responsibility and take the initiative to change, which is an important foundation and starting point to promote the renewal of teaching concept, the sustainable development of disciplines and the development of quality education. From the perspective of subject development, this paper analyzes the key elements of the reform and development of physical education in colleges and universities, which are related to the results of the cultivation of students' quality and the long-term development of the subject from the aspects of the construction of teaching content, the exploration of the cultivation system of teaching objectives, the development of the subject and the results of the cultivation of talents' quality. | 2020-12-03T09:04:15.847Z | 2020-03-04T00:00:00.000 | {
"year": 2020,
"sha1": "96a78b08b7799f03da93b6c96aeaf387b16b86ef",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.18282/le.v9i2.821",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "a006f42dff09ef8ae2b2c868a72f76ae6df90ebd",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Political Science"
]
} |
208233875 | pes2o/s2orc | v3-fos-license | Effects of Climate Change and Maternal Morality: Perspective from Case Studies in the Rural Area of Bangladesh
This study explored community perceptions of maternal deaths influenced by natural disaster (flood), and the practices surrounding maternal complications during natural disasters, among the rural population in Bangladesh. It also explored the challenges faced by the community in providing healthcare and referring pregnant women experiencing complications during flood disasters. Three focus group discussions (FGDs) and eight in-depth interviews (IDIs) were conducted in marginalized rural communities in the flood-prone Khaliajhuri sub-district, Netrakona district, Bangladesh. Flood is one of the major risk factors influencing maternal death. Pregnant women suffer seriously from maternal complications, lack of antenatal checkups, and lack of doctors during flooding. During delivery, it is difficult to find a skilled attendant and to refer patients with delivery complications to a healthcare facility. Boats are the only mode of transport, and the majority of maternal deaths occur on boats during transfer from the community to the hospital. Rural people feel that maternal deaths influenced by natural disaster are natural phenomena. Advance preparation is needed to support pregnant women during disasters. There is a lack of awareness of maternal health, related care, and complications during disasters among local health service providers and volunteers.
Introduction
In rural Bangladesh, natural disasters are identified as one of the most important factors for deaths of women, especially during pregnancy. The female to male death ratio is 3:1 during natural disasters [1]. During the disaster, women are the most vulnerable for pregnancy complications including retained placenta, obstructed labor, and fetal distress. It is difficult to manage maternal health problems during disasters as healthcare facilities and providers are not available. Delivery in unsafe condition increases maternal deaths [2]. Disaster also impacts on reproductive health through spontaneous abortion, birth defects, and low birth weight of the baby [2,3]. According to literature, South-East Asia is the most vulnerable region, due to global warming having significant effects on the climate changes including unprecedented heavy rain and floods [4][5][6][7]. In Pakistan, about 500,000 expecting mothers were affected by the 2010 flood while 1.5 million women needed emergency obstetric care [4]. Among pregnant women during the disaster, 1700 women delivered, with hundreds of them suffering from delivery complications in Pakistan. Maternal deaths are also high in the Indian sub-continent due to a lack of medicine and the absence of female healthcare providers during disasters [5]. Bangladesh is considered to be one of the world's most natural hazard-prone countries, and flood is one of the most common disasters experienced regularly by the people of Bangladesh where on average, 18% of the country is affected by floods every year [6]. In a study by UNFPA of nine districts in Bangladesh, it was found that 1,876,636 people were affected by flood disasters, of which 32,000-33,000 were pregnant women [7]. Pregnant women, lactating mothers, and differently disabled women suffered the most, as they found it difficult to move during and after a disaster. Sometimes women cannot express their problems [8]. In Bangladesh, it is found that non-availability of transport in and around all flood-affected areas and disruption of communications seriously hindered women's ability to access health facilities for deliveries [9]. Moreover, the delay in decision making and delays in transportation influence the maternal deaths in rural communities [10]. It is also found that some of the healthcare centers are inundated with flood water. So, access to health services becomes limited as a result of routine immunization and outpatient consultation, antenatal care become disrupted in the affected villages in Bangladesh [11]. Due to climate change, Bangladesh is overexposed for natural disasters such as floods. This study explored the perception of maternal death during the flood. This study also explored the practices and challenges of the community people for emergency maternal care with complications during the flood period. The study tried to investigate the community recommendation for preventing maternal death during flood.
Study Methods
A qualitative study was conducted at Khaliajhuri Upazila (sub-district) in the Netrakona district of Bangladesh from July to September 2015. The sub-district is affected by floods every year, and boats are the only mode of transportation during rainy seasons and floods.
Three focus group discussions (FGD) and eight in-depth interviews (IDI) were conducted in two unions of the sub-district where three maternal deaths occurred during the previous flood. For FGDs, we chose three groups. The participants who were included in the FGDs were people who knew details about the respective maternal deaths. Each group was selected from the union where the maternal deaths occurred during the previous flood. FGD members consisted of the neighbors of the deceased mother's family, male and female guardians of pregnant and recently delivered mothers, pregnant women, community group members, school teachers, religious leader, Union Parisad members, and elite people of the society who have idea on the incidence of the maternal death. Nine to eleven participants were included in each of the FGD.
For in-depth interviews, participants were chosen from specific communities (where FGDs were conducted). Eight IDIs were performed; Two were conducted with the male guardian and two were conducted with the female guardians of the pregnant or recently delivered mother, two were conducted with the village doctors, and the remaining two were conducted with the traditional birth attendant of that specific community where maternal death was reported. All IDIs were conducted following guidelines by face-to-face interview at the household level. This qualitative research technique has vast advantages to explore the interviewee's perspective on a particular situation [12,13]. Time duration for FGD conduction was 30-40 minutes, whereas for the IDI it was 15-20 minutes (Table 1).
Data Collection
Field training was conducted for the two research officers, and the guidelines were pre-tested. One research officer was a trained and experienced anthropologist (postgraduate) who acted as moderator, and the other, from a social science background, acted as note taker. During FGDs, one research officer facilitated the discussion while the other took important notes. The objectives of the research were explained to the respondents before the interviews, and written consent was obtained from each respondent before the interviews or FGDs. A number of prompts were used to obtain the information. Audio recording was done with prior permission from the respondents. From the audio recordings and hand notes of the interviewers, the research officers prepared verbatim transcripts of the IDIs and FGDs in the native Bengali language. Later, English translations of the transcripts were performed by two expert bilingual researchers. The principal researcher controlled transcript quality by reviewing randomly selected transcripts and translations.
These transcriptions were also checked by public health specialists. Peer debriefing was also performed to maintain the reliability of the data. Initial open coding was done, and selective coding was then carried out from these open codes. Contents were identified after reading and re-reading the data [14,15], and finally, content analysis was performed (Table 2).

Table 2. Content of the focus group discussion and in-depth interview.
Area of Discussion / Types of Prompts Used

Perception of the occurrence of maternal death and natural disaster: Did any maternal deaths occur here? Why and when did the maternal deaths occur in this area? Were any natural disasters observed? When did the disasters occur here? What types of disaster occur here? Does the disaster occur regularly, and what is the duration of the disaster?

Practices of maternal healthcare during natural disaster: How is maternal healthcare provided during a disaster? What preparation is there during pregnancy and delivery complications at the time of a disaster? What do the community people do during complications at a disaster? Where do they go during maternal complications, and how?

Barriers of the marginalized community to practices on maternal care during disaster: What are the challenges for maternal care during a disaster? What are the types of obstacles faced during the referral of a risky mother? Was the referral of a mother ever delayed, and why?

Relation of maternal death with natural disaster: Is maternal death influenced by natural disaster? How does natural disaster cause maternal death (with example)?

Recommendations and initiatives to prevent maternal death during disaster: How can maternal deaths during natural disasters be prevented? What initiatives can be taken by the community people to overcome such a situation? What are the recommendations to increase maternal healthcare during a disaster?
Data Analysis
Qualitative content analysis was conducted following the guidelines of Graneheim and Lundman [16]. The participants' words were analyzed as actual content, and the interpretation and judgment of the participants' responses were analyzed as latent content [17]. We analyzed the data by repeatedly reading the written transcriptions, identifying each meaning unit, and listening to the audio recordings [16].
Results
Flood is a common, annual natural disaster in the study area, especially during the monsoon, in the Khaliajhuri sub-district of the Netrakona district, Bangladesh. Maternal healthcare is seriously disrupted during such disaster periods. There is community ignorance of specialized maternal care as a whole, including during disaster periods. Healthcare providers are not available for maternal complications during and after a disaster. It is difficult to organize delivery of the mother during and after the disaster period, and it is a very complex task to organize a place and a person for delivery. Skilled birth attendants are frequently unavailable in the disaster area, and it is very difficult even to organize a traditional birth attendant to assist the delivery. If delivery complications arise, then referral of the mother from the community to a facility is a very difficult and cumbersome process. The boat is the most common mode of transportation, and transporting an at-risk mother with delivery complications to a healthcare facility takes a lot of time. As a result, mothers sometimes die on boats due to delays in transferring them to the hospital.
Perception on Occurrence of Maternal Death during Flood
Most of the community people who participated in the FGDs and IDIs had a perception of natural disaster and maternal death in their area. However, they perceive that both occur due to destiny or ill-fate. Floods occur in one season (the monsoon), but their effects persist throughout the year in that area. Floods make their lives more complicated, as there are problems with housing, communication, food, and medical treatment, and their lives come under threat during flooding. Pregnant women and their children are more vulnerable during floods. Most maternal deaths occur during the disaster period due to difficulties in obtaining treatment and in communication. Maternal deaths occur due to complications after delivery combined with delays in reaching the hospital caused by communication and transportation problems.
"Flood occurs every year in rainy season but it was most dangerous in 2014. People couldn't go out from home. Tube wells and toilets were sub-merged. We stayed on the roof of our houses. We could not cook due to wind and heavy rain. Many children died with diarrhea. Some people died by thunderstorms" -(P20, FGD 1) one of the male guardians mentioned.
"Maternal deaths occur due to excessive bleeding after delivery and difficult to reach at hospital as boat is the only mode of transportation. Sometimes the mother died within the boat. This is most common during the flood period" -(P13) village doctor.
Practices of Maternal Healthcare during Natural Disaster
According to the community people, pregnant women suffer a lot during floods. They often have no prior planning for the management of maternal complications during floods. Pregnant women do not receive special attention in terms of care or treatment. Local public healthcare facilities and healthcare providers were not available for maternal care. The husband of a mother with severe labor pain commonly accompanied her during boat transportation. But if any complication arises, they communicate with the village doctors (rural medical practitioners without any formal medical degree), local kabiraj (a type of quack practicing traditional Ayurveda), and a traditional birth attendant for help. If any serious maternal complications arise, they arrange an emergency boat, trawler, or any available mode of transport to transfer the mother to the nearby hospital as per suggestion by the village doctor and traditional birth attendant. (N.B. Traditional Ayurveda is not a scientifically approved medicine. But people have practiced it for hundreds of years. It is locally made by traditional, untrained people. Kabiraj and village doctors are not government employed, rather they practice at rural areas without any professional skills).
"We call kabiraj and village doctor if there is any complication of the pregnant mother during flood. They provide treatment after checkup. We can't go to the public hospital due to difficulties in transportation. But if the kabiraj and village doctor fail to provide treatment and suggest us to go to the hospital then we try to arrange for transferring the mother which is very difficult. At first, we have to go to Krishnopur bazaar (nearby market place) to arrange a boat for reaching to Samachor (nearest available land with motorable road), then we book the auto-rickshaw or laguna (indigenously makeshift tri-cycle with installed water-pump motor) to reach hospital. The process is very time-consuming and expensive" -(P2, FGD 2).
"Koki's wife died with delivery complication because she was not sending to the hospital timely due to flood at that time in our village. When she reached the hospital she died. The road and transport system are not fine here and it's difficult to reach hospital timely during emergency case which is a reason for the maternal death" -(P12, FGD 3).
"I try to make delivery of the mother at home but during flood, it is difficult for me to reach the pregnant mother's home. But if I realize that the mother is at risk then I immediately refer her to Upazila hospital though it is very difficult and time-consuming to reach the hospital by boat". -(IDI5) traditional birth attendant.
Obstacle to Practices on Maternal Care during Disaster
Communication and transportation are major obstacles to maternal care in the flood-affected areas. During the flood, the public healthcare facilities or hospitals in rural areas are closed, and healthcare providers do not regularly check up on pregnant women. Satellite clinics are not organized during the flood period. Though there are some volunteer activities to support flood-affected people, there is no special support for maternal healthcare. It is even difficult to find a village doctor or traditional birth attendant. Boats are the only vehicles for transportation, and when any difficulties arise among pregnant women, it is difficult to organize a boat. It is also very time-consuming to reach the hospitals. Sometimes, if they cannot organize a boat, people simply depend on destiny for any maternal complication during the flood. Boatmen are not easily available, or they will not agree to take the risk of transporting a pregnant woman with complications, and they demand a large amount of money for emergency boat transportation.
"Boat is the only vehicle for transportation during flood. Sometimes a boat can't easily manage during an emergency. Boatmen demand a lot of money during transportation of mother with maternal complication as it is time-consuming" -(P8, FGD 1).
"We can't easily find any village doctor or traditional birth attendant during maternal complication at the period of flood as there is huge pressure of patient at that time. Sometimes we have nothing to do but just to depend only on the fate" -(IDI8) One of the female guardians.
Barrier to Referral of Complicated Mothers
Community people mostly depend on the kabiraj, village doctors (quack) and traditional birth attendant during maternal complication. They normally do not refer the mother until a serious condition occurs as they know the barriers of communication and transportation during floods. Moreover, doctors are not available at healthcare facilities. The traditional practitioners refer the complicated mothers when they fail to provide necessary support. Sometimes they have to move from village doctor to Union sub-center, then from there to the Upazila health complex by boat, and then to the district hospital by any mode of transportation, which is very time consuming and painful for the mother.
"My sister-in-law was admitted to Upazila hospital during the last flood with complication and she was referred to the Sylhet district hospital which took a minimum of five hours to reach by boat. Moreover, the wave of the river was too high to move and there was no other way to move and ultimately she delivered on the boat under high waves with high risk" -(P2, FGD 1).
"Actually, in case of advanced pregnancy, women are in more trouble during the flood. There is no proper treatment of the mother during that period. The financial problem is common at that time for proper checkup and treatment of the mothers. Moreover, moving from the Khaliajhuri to another destination by boat is the only vehicle that takes lots of time and any accidents can happen at that time" -(P3, FGD 2).
"If the traditional birth attendants cannot handle the complicated delivery then she refers the mother to the village doctor, and if village the doctor failed then he refers to the hospital. But before that, both of them tried traditionally. It takes a long time to decide to refer the mother to a healthcare facility. And after that, the transportation by boat takes a long time. So, as a whole we lose long time, resulting in more life-threatening complications for the mother" -(IDI7) One of the male guardians.
Influence of Maternal Death by Natural Disaster
Most of the community people perceived that maternal deaths are seriously influenced by natural disasters like floods in the affected areas. A mother who is already suffering from malnutrition, anemia, and other diseases cannot receive proper maternal care during a flood, so if any delivery complication arises, the result may be her death. Delays in decision-making and transportation are common during floods, and these influence maternal deaths. The inability of traditional practitioners to identify at-risk mothers and the negligence of community people cause delays in decision-making. Moreover, arranging a boat, managing money, selecting people to assist, and waiting for the boat to arrive cause further delays in transportation.
"Two years ago, during the flood, a daughter of my brother-in-law died with delivery complication in boat on the way from Khaliajhuri. At first, she was carried to the village doctor's chamber which was too far away from her home but it was very difficult to find the said doctor at night. Then he suggested for going to Upazila hospital for severe complications. Eventually, she died on the way to hospital" -(P4, FGD1).
"Mother can survive luckily with maternal complication during flood. But it is a very risk of maternal death. A mother and her infant died during the last flood period inside the boat while going to hospital with adverse weather and could not reach to hospital on time" -(P3, FGD2).
"I madly swim for searching a vehicle to transfer my niece with maternal complications. But I could not manage the boat easily at night. At last, we started to move to Sylhet for receiving her treatment at night with stormy weather. However, later she died after delivering on the boat". -(P9, FGD 4).
"During flood, the wave of the river is large enough and it is a risk of drowning even for a large-sized boat. So, community people afraid to go out from home with such stormy weather. Risky referral mother can't transfer quickly to Mymensingh medical and Dhaka through water even in the emergency" -(IDI4), Male guardian.
"A maternal death occurred after four to five hours of delivery. The bleeding started immediately after delivery but it needs two hours to manage a boat. When the boat was arranged the mother died inside the boat with profuse bleeding" -(IDI9), One of the village doctors.
Community Recommendation to Prevent Maternal Death Related to Disaster
The majority of participants mentioned about early transferring the pregnant women with complications to the referral hospitals. Many of them recommended improvement in the communication and transportation systems in these areas where natural disaster like flood is very common. Some people also recommended water ambulances for referring the risky mothers during disaster. Some people also suggested the establishment of temporary health camps for proper care of pregnant women during the flood period. "If complicated pregnant mother could be admitted to hospital before delivery, then many mothers' life can be saved. If there is availability of qualified doctors in the nearby facilities during flood then maternal death can be prevented" -(P9, FGD5) One of the participants.
"if the government takes initiative for quick transfer of high-risk pregnant mother then there will be no maternal death would occur, as earlier it happened in this area" -(IDI18) One of the village doctors.
Discussion
Flood disaster occurs almost every year in the Khaliajhuri Upazila of the Netrakona district in Bangladesh. This disaster occurs immediately after the rainy season, and the suffering lasts throughout the year. The flood intensity and severity are increasing over the years, mainly due to climate change. During the flood period, there are difficulties with communication, transportation, housing, and obtaining safe water and food. Maternal and child care are the most affected. Maternal deaths commonly occur during the flood period in the absence of proper care and treatment. Boats are the only mode of transport for transferring a referred mother to hospital, which is risky and time- and money-consuming. Many maternal deaths occur on the boats. Delays in decision-making and transport are common during the flood disaster period, which influences and results in maternal deaths.
Flood is common during the rainy season every year in the study area, starting from the early rainy season till the next month of the season. Normally, 20%-25% of Bangladesh is inundated during the monsoon from June to September. In the case of extreme flood events, 40%-70% of the area can be inundated, which amply proves the extremity of flood events [18].
Pregnant women are identified as the most vulnerable human beings during floods in Bangladesh. Maternal deaths are common in this period with several complications like bleeding after delivery obstructed and prolonged labor. A high number of pregnant women are found affected by the floods, which is similar to the UNFPA findings where approximately 1.75% of the flood-affected mothers are pregnant in the nine districts of Bangladesh [7]. It has been noticed that pregnant women, children, the elderly, disabled people, and women are more vulnerable than the other sections of the population. During disaster, they are left behind in cases of emergency because they lack knowledge, mobility, and resources [8].
It has been found that many mothers died during natural disasters due to a lack of access to health facilities during flood and stormy periods. Many mothers refrain from using the toilet during the day and consequently suffer from urinary tract infections. Pregnant women, lactating mothers, and differently-abled women suffered the most, as it is difficult to move before and after cyclones [19]. The rate of inadequate antenatal care increased from 1.3% to 3.9% during disasters [2].
Maternal deaths are influenced by floods because delays in decision-making and transport are common when pregnant women exhibit complications during this period. The factors included delays in recognizing the problem and in decision-making to seek care; long distances to health facility; scarcity of money and/or unavailability of transportation, and the long duration of transportation by boat. In Nepal, it was found that 14% of pregnant women in transit to or from a facility during disasters died. Of those, 46% died in a public facility with maternal complications due to transport delay. This shows that more women are willing to reach healthcare facilities but transportation delays are causing death [20]. Another study shows that Pakistan has recently been gravely affected by the worst monsoon flooding in a century. The number of people directly affected by the floods stands at 20.2 million, with over 1.9 million houses reportedly damaged or destroyed and women and girls comprising 85% of the persons displaced by the floods [5]. Therefore, natural disasters are increasing in the Indian sub-continent, threatening more pregnant women due to climate change.
Pregnant women are seriously deprived of proper care and treatment during the disaster period. It is more difficult to refer pregnant women experiencing complications to proper treatment during this period. It is recommended from this study that flood shelters should have increased separate accommodations for pregnant women. At least one room should be earmarked for child delivery and infants [9]. A study found that serious threats to pregnant women and children (0-6 months) in flood-affected sub-centers were reduced by providing delivery kits to the Auxiliary Nurse Midwives, as this lowered the individual risk of being exposed to waterborne-and skin-diseases [11].
Community people recommended alerting people in authority of the need for special support for pregnant women during disaster and emergency management of transport of high-risk pregnant women. Some studies also support such suggestions for the welfare of the mothers and infants for healthy pregnancy and safe delivery during disasters [21].
Conclusions
Maternal deaths mostly occur during the rainy season in flood-affected areas. Negligence of maternal healthcare, unavailability of facilities and proper care services, dependency on unqualified doctors, communication and transportation problems, and barriers to referral of the pregnant women experiencing complications during floods cause maternal deaths. To prevent such unwanted maternal deaths during disaster, policy-makers need to take special initiatives including community awareness about preparations for maternal care, providing support for proper care and treatment by qualified service providers, and quick referral of the pregnant women experiencing complications to the hospitals. Special attention of people in authority is essential to decrease the decision-making and transport delays of the risky pregnant women to reach to the hospital from their homes and communities. | 2019-11-22T01:04:50.634Z | 2019-11-20T00:00:00.000 | {
"year": 2019,
"sha1": "f5be7d73c4ffa94d0150c861c7f8919ea1e0caba",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1660-4601/16/23/4594/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "93394f7b63db69b11552872ce4b755f3de0468fb",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
247985404 | pes2o/s2orc | v3-fos-license | Transformative gestures
Douglas Yacek's recent book The Transformative Classroom proposes a useful aspirational model of transformative education. In this critical commentary, I review this model and suggest that, while it succeeds in overcoming some ethical shortcomings of other dominant models of transformative education, focusing on more subtle transformative gestures could have the benefit of being less dependent on the teacher's intention to transform and less constrained by the expectation that transformation should take place primarily in the classroom. When transformation is conceived as an educational fiction, it may be understood as a retroactive experience constructed around memories of the teacher's transformative gestures, thereby adding to Yacek's aspirational model by allowing transformation to continue beyond the walls of the classroom.
and so I will not dwell on this incident. I mean to return to it below, however, as I believe there is a sense in which it speaks to the sometimes nearly imperceptible ways in which transformative experiences can sneak into our lives, nesting there quietly, only to grow more significant over time. However transformative experiences are conceived and explained, there seems to be a deep sense in which they are part of a near-universal pedagogical grammar available to teachers (and to others as well?). As such, they hold a certain pedagogical promise, to be acted on or not. In his recent book, Yacek sets out to 'provide a philosophically grounded sketch of how such transformative experiences can be fostered in the contemporary classroom' (p. 1).
The structure of Yacek's book is straightforward and sensible. First, Yacek outlines three competing paradigms of transformation -conversion, emancipation, and reconstruction -in order to then critique them from an ethical standpoint. Having done so, Yacek suggests a different conception of transformation as aspiration, seeking to overcome some of the ethical shortcomings of the previously outlined paradigms. The main problem that Yacek identifies with regard to the three dominant paradigms or types of transformation is that they risk stifling student autonomy in the process of unsettling taken for granted assumptions and habits of thought, leading to existential trauma and self-alienation rather than emancipation and autonomy. Let me briefly flesh out the rationale of the dominant paradigms as perceived by Yacek.
The conversion paradigm of transformation brings together a religious sense of awakening and atonement with a political desire to change the status quo. It is the task of the teacher to bring to the fore certain illegitimate preconceptions of the human social world, and through a necessarily painful process of revaluation, bring the student to act on this realization. On Yacek's account, this turns education into an instrument for creating political actors geared at realigning themselves according to a set of predetermined social ideals. Despite its benefits (of alerting students to existing injustices in the world), transformation as conversion reveals a contradiction between its preferred method -dialogue -and the fixed nature of the goals to strive for. While open dialogue is typically promoted in social justice education, it seems that the foundational values at the core of it are not generally open for critical discussions. Accordingly, Yacek asks (somewhat rhetorically), 'whether directive dialogue is open-ended enough for it to constitute an instance of pedagogical non-oppression' (p. 35, emphasis in original)? Here, Yacek reminds me of Nigel Tubbs' (2005) salient critique of the liberation narrative in critical pedagogy, where the teacher as servant (to the political emancipation of the student) inevitably reemerges as the master (that it was once construed to dethrone), holding the keys to the uncovering of pervasive social injustices and oppressive structures. Teacher authority, it seems, has a tendency to reassert itself in spite of the teacher's ambition to surrender it. In addition to the ambivalent relationship to teacher authority, Yacek also notes the danger that focusing primarily on demystifying and deconstructing various prejudices embedded in the student's current worldview might not help transform them into critically minded activists but can actually leave them suffering from substantial existential trauma (as a result of driving an emotional wedge between them and their community).
Moving on to the emancipation paradigm, Yacek locates this trend in a more personally (and less overtly political) geared tradition where the teacher stages various interventions to help students break with preexisting values and ideals that are taken to hinder their 'authentic identity formation ' (p. 56). Key words here are 'authenticity' and 'true self', and there is an obvious therapeutic dimension to these educational programs. The overarching rationale seems to be that drastic transformative interventions can help students shed various values and habits stemming from external influences so as to instead begin to construe new values that are somehow truer to themselves. This, of course, sets up a well-known dichotomy between external and internal values, and as such it hinges on what I would critically describe as a rather unrealistic ideal of self as epistemically self-sufficient. While Yacek notes that this sets the students up for a difficult-to-resolve conflict between their old selves and their new selves, I would perhaps add that it also seems to build on an understanding of autonomy that appears to demand something akin to self-causation. To my mind, a more relational conception of autonomy would certainly be equipped to handle the seeming tension between external and internal values, but this is something that we will have cause to return to.
In the third and last of the dominant paradigms outlined by Yacek, focus is placed on disrupting the student's cognitive apparatus so as to reconstruct it in a way that is better equipped to deal with the many challenges and problems encountered in the world. Accordingly, it is simply called the reconstruction paradigm. What Yacek finds lacking from this approach is any clear sense in which disruption and reconstruction actually lead to a better and more stable sense of self. Instead, the student might well be set on a path of perpetual disruption/reconstruction and, as Yacek warns, 'The danger is that disruption and reconstruction become ends in themselves rather than means of attaining deeper knowledge and wisdom' (p. 85). Having concluded that all three dominant paradigms have their different shortcomings in relation to instigating a form of transformation that is ethically sound and sufficiently mindful of the development of personal agency, Yacek turns to his own proposed model, labeled transformation as aspiration. Aspiration, Yacek suggests, 'constitutes a form of transformation designed specifically to expand the horizons of student agency' (p. 95).
In a sense, Yacek's aspirational model is less dramatically conceived than the previously described paradigms. This is a bit deceptive, however. The transformation related to aspiration may not be as overtly drastic -it does not seem to hinge on religious or political awakening, liberation, or on the sudden disruption of cognitive structures -but it does aim for a foundational kind of transformation where the student's life takes on a radically different quality after having been inspired to pursue different values than those previously known and adhered to. After all, to entice students to aspire for a life previously unknown to them is the core of this kind of educational transformation. To be sure, this may involve disrupting the way students have previously thought and acted, but it is conceived as a disruptive process coupled with a more positive aspect of striving for something better. Yacek here relies on the notion of epiphany, describing a specific kind of disruptive transformation that is conceived as less traumatizing than those involved in the previously described paradigms. The difference being that '[s]tudents are not only disturbed or pulled up short by their epiphanies', but that they 'can envision, or at least sense, a different path forward for themselves -a path that will ennoble and enrich the life they are currently leading' (p. 145, emphasis in original).
Epiphanies can, on Yacek's account, amount to an awakening that leads up to a form of transformation that is not conceived in strictly instrumental terms as being geared for social justice or for any other extrinsic aim (however laudable). Rather, epiphanies can 'invite students into a wholly different way of seeing the subject they are studying, as sources of cognitive insight and ethical inspiration' (p. 147, emphasis in original). It can do this, Yacek argues, while also potentially breaking through the psychological barriers of apathy, distraction, and akrasia; barriers that are conceived as particularly palpable threats to the aspiration of the contemporary student. The role of the teacher is key here as the teacher is the one who can illustrate for the students what epiphany looks like and who can in fact embody the process of reorienting your life by aspiring for new values. Mr Keating of Dead Poets Society becomes a case in point. Mr Keating goes to great lengths to stage an epiphany on the part of his students as he 'wants his students to have a different kind of relationship to poetry than the one that is typically encouraged in Literature classes' (p. 151). This is a dramatic example and one that may strike us as a bit romantic and far removed from the everyday experience of teaching. If we scale it down a bit, I think we will find it quite relatable however. Let us return momentarily to Mr Möller.
While Mr Möller did not act out his call for ethical reorientation by standing on his desk and shouting it for all the world to hear, he did nevertheless communicate it in writing. When writing on the inside of the cover that the book he gave me had everything, he was also, and at the same time, conveying a deep sense in which I still had much to learn and much to experience in life. He was encouraging me to be on the lookout for things that would add to my life and he was, quite subtly, indicating that, for him at least, great literature was a thing that would allow a person to experience the world without having to travel very far. What may be interesting to note is that, for me, the epiphany and eventual transformation induced by Mr Möller's act happened much later than for Mr Keating's students. You might say there was a considerable delay between the gesture of giving me the book and the experiences I needed to go through in order to be able to appreciate it (or construe it) as truly transformative. While this may of course be coincidental, it may also indicate something important about transformative experiences.
One important aspect of the aspirational brand of transformative education that Yacek fleshes out in the penultimate chapter of the book is the ability of the teacher to construct a classroom ethos. The example Yacek uses to illustrate this is of a teacher acting as if students were already transformed. It seems to me that this is a crucial aspect of transformation, necessary for bridging the felt distance between the gesture intended to spark an epiphany and the arduous journey of the aspirant to live differently in light of new values. Addressing the student as someone who is already on the inside (of R. S. Peters' metaphorical gates to the citadel) appears to render transformation into a challenge to be met head on while also communicating a faith in the ability of the student to take on and overcome this challenge. The simple phrase written in Mr Möller's hand on the inside of the cover of Ragtime could be read as a message from an expert reader to a novice reader (saying that there are books containing more than meets the eye), but it could also be read as a challenge extended from one who has lived long and seen much, to one who has only seen a fraction of all there is to see. From this perspective, the challenge extended says something like this: 'Go on, cast your net wide! Here's a book to give you a first glimpse of all that life has to offer. Be inspired by it and go explore the world'. I am fairly certain that I am reading far too much into Mr Möller's gesture here. But that is precisely my point. It matters less to me what Mr Möller intended and more what his students could make of the few clues he left them with. On the surface of things, this seems to be a case of one individual (a teacher) handing something over to another (a student), in the hope of affecting some desirable change in that student. But, in fact, there are several things going on at the same time here. There are several things interactingprobably too many to keep track of them all -and so we might feel that we need to simplify the scenario, settling for the fact that someone intends to influence someone else in a particular way. How much of this is a reconstruction done in hindsight, using the few clues we have at hand, we will never know. But at least it allows us to retain a conception of autonomy that is epistemically self-sufficient and that fits with our conception of how individuals influence one another in a straightforward sense. There seems to be a strong tradition of assuming this kind of autonomy in the different paradigms of transformative education (as mentioned above). While it is perhaps intuitively appealing, I wonder whether it might not be called for to challenge it?
As a contrast, it might be useful to look briefly at a more fundamentally relational understanding of autonomy. For Étienne Balibar (2020), it is crucial to note that 'the processes that make individuals relatively autonomous or separate are not themselves separate, but reciprocal or interdependent ' (p. 44, emphasis in original). This means that while we can certainly talk about individuals and individual influences, these are never completely self-sufficient, but always informed, and at least in part constituted, by external influences. I wonder how this relational conception of autonomy would agree with Yacek's transformation as aspiration?
When Yacek closes his discourse on the aspirational classroom, he emphasizes the importance of establishing aspirational communities. I cannot help thinking that this is key for understanding transformation, whether inside or outside of the classroom. When we focus hard on understanding how one person can be made to change from one state of being to another, we often forget to account for the fact that people never really undergo changes in isolation. Insofar as education is an inherently relational affair, community is always already there. From this standpoint, we might ask whether the different virtues that we find attractive in people are perhaps also relationally rather than individually constituted? In a sense, then, this turns the table on transformation, urging us to look at how and why educational communities undergo transformation and what this might mean for the individual student, rather than seeking to understand the transformative experience as one that is strictly personal in scope and meaning.
Returning one last time to the incident with Mr Möller, I wish to make a few remarks that may (or may not) be valuable for making some distinctions with regard to Yacek's account of the pedagogical promise of transformative experiences in the contemporary classroom.
1. It is not at all clear to me whether Mr Möller intended for his act of slipping me the book to constitute a transformative experience on my part. Perhaps he simply had an impulse to give me a book that he liked as a token of appreciation -no strings attached. If so, it is not clear that the teacher's intention is really constitutive of the transformative experience at all.
2. It is not at all clear to me whether this experience was transformative in itself, or whether it became transformative in hindsight, as part of me looking back on my life, trying to make sense of the winding path that I had been going down for several years since the event. If it is the latter, then the transformation seems to be just as much about me making sense of past experiences as it is about the transformative potential of these experiences. 3. From the above, it is not at all clear to me whether transformations can be planned or whether we are in fact dealing with an unforeseeable combination of more or less isolated events and sustained retrospection, gradually coming together over time. If this is so, can transformative experiences really be conceived as an important part of schooling or are they rather part of a much broader educational process of formation, one that does not necessarily (or predominantly) take place in school and one that only very rarely is recognized as transformation during the actual period of formal schooling?
Granted, there are a lot of 'what-ifs' and 'if that's so, then what-ifs' in the above remarks. Be that as it may, I believe these kinds of reservations are important for setting some provisional limits on the kind of control we can assume to assert over transformative experiences (however educational these may be). It does not take away from the fact that transformative experiences are deeply educational, and that they are legitimately recognized as holding great pedagogical and ethical value, but it may cast some small doubt over the degree to which they can play a meaningful role in the teacher's day-to-day planning of classroom activities.
What I find myself attracted to in a scenario where transformation is loosened from the tight grip of the teacher is that it allows for the simple beauty of a teacher who does not already assume or expect to be able to witness (or even be made aware of) the eventual transformation of the student. Sometimes it happens, but oftentimes it does not, and there appears to be precious little a teacher can do about controlling it. Another aspect that I believe is important is that transformation, at least in part, appears to be an imaginative move that we make in hindsight. In a sense, we do violence to our memories so as to make them accord with a narrative of transformation much like I am in some sense doing violence to Yacek's book so as to make it fit better with my own recollection of a transformative experience. In this sense, I am inclined to view transformation as an educational fiction, albeit a very valuable one. It is valuable in the sense that it allows me to act on it even if the grounds for action appear contradictory or if they are anything but clear to me (cf. Vaihinger, 2021 [1924]). This means that we might benefit from acting 'as if' transformation is real, but at the same time we should probably be wary of thinking that we can ever control it. It also means that I am more inclined to think of transformation in terms of (sometimes subtle) individual gestures than in terms of formalized educational programs.
There is something contradictory, perhaps even manipulative, about the transformative gesture conceived as an educational fiction. This makes it considerably less pedagogically transparent than the model proposed by Yacek, but it also makes it exciting in a forbidden sort of way. Granted, it becomes difficult to stage it in the classroom, but it is attractive as it seems to open up for something beyond what the teacher has already planned for. Writing something on the inside of a book without explaining the message is a bit like handing over a treasure map that you can only partially decipher, to be followed or discarded depending on the treasure seeker's disposition. Think of the used bookseller in Michael Ende's The Neverending Story. He explicitly tells Bastian that he has no intention of selling his books to children, yet he leaves the alluring volume out in the open, fully expecting Bastian to sneak it out from the store to read it. Yacek offers us a comprehensive -and eminently useful -map to a transformative classroom that is no doubt ethically richer than what is currently on the market. What I would like to add to that map is simply a footnote pointing outside of the classroom, to the gradual transformation awaiting the student who has perhaps already parted ways with the teacher.
Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article. | 2022-04-07T15:14:33.321Z | 2022-03-01T00:00:00.000 | {
"year": 2022,
"sha1": "0baa918234f72d8640c92ed2400db1fef9b05bfb",
"oa_license": "CCBY",
"oa_url": "https://journals.sagepub.com/doi/pdf/10.1177/14778785221087009",
"oa_status": "HYBRID",
"pdf_src": "Sage",
"pdf_hash": "3d615243ed05a62378830ae693a0304b5cdc09f5",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": []
} |
15982538 | pes2o/s2orc | v3-fos-license | Do We Need to Put Society First? The Potential for Tragedy in Antimicrobial Resistance
Antimicrobial use presents a dilemma, say Foster and Grundmann. Appropriate use can benefit individual patients but carries a cost to society by selecting for resistant strains that are difficult to treat.
The use of antimicrobials has caused a proliferation of resistant pathogens (Figure 1), and most worryingly, some bacterial strains are resistant to multiple classes of drugs [1,2]. Policies are now being implemented to reduce antimicrobial use, with some encouraging successes [3,4]. However, here we argue that current policies may only partly solve the problem. In particular, they do not address the conundrum at the heart of antimicrobial resistance: the solution may ultimately require us to put society before the individual. That is, halting the rise of resistance may only be achievable if some patients go untreated. We defend this uncomfortable conclusion using the logic of the well-known social dilemma "the tragedy of the commons" [5]. More data on the societal costs of resistance are required to evaluate the potential for a tragedy of antimicrobial resistance and the moral dilemma that it would present.
In the late 19th century, pioneering microbiologists laid the foundations of germ theory, which became one of the most powerful explanations for epidemic disease [6,7]. It was quickly understood that chemical substances that kill microbes could defeat infectious diseases. In the middle of the last century, an apparently endless stream of newly developed antimicrobial compounds, most famously penicillin, left the impression that humanity had established superiority over the microbial world once and for all [7]. But it has since emerged that this is far from the truth [3,7,8]. For decades, we have created an environment where any pathogens that can survive antimicrobial treatment have a strong selective advantage. The result has been the proliferation of resistant strains [6,7] and the origin of bacteria resistant to multiple antibiotic classes [1,2].
Antimicrobial chemicals are frequently used where they are not needed. For example, antibacterials are often prescribed for viral infections [3,4,8] and the widespread availability of over-the-counter antimicrobials in many countries can result in ineffective self-medication [8]. Large volumes of antimicrobials are also used in agriculture and veterinary medicine [9,10], and in many consumer products in which they do not always have a documented function [11]. The emergence of antimicrobial resistance, therefore, can be greatly slowed by reducing inappropriate antimicrobial use [4,12] and considerable efforts are currently underway to promote this goal. These include the development of guidelines [13], and educating physicians and the public about best practice [3,4,8]. Another priority is to develop improved diagnostic tools that allow rapid identification of pathogens and the appropriate antimicrobial treatment [12]. The pressing need for these programs is clear. However, here we argue that they do not address a conundrum that is central to the problem of resistance. Protecting the effectiveness of antimicrobials may only be possible if we put society before the individual.
The Potential for Tragedy
Antimicrobial use presents a dilemma [14]. Appropriate use can benefit individual patients but carry a cost to societal health by selecting for resistant strains that are difficult to treat [15] (Figure 2). Baquero and Campos [16] recently argued that this dilemma mirrors what Hardin termed "the tragedy of the commons" [5,17-19]. Hardin's phrase refers to common land to which many people have rights. Every herdsman knows that putting too many cows upon a pasture will eventually destroy it by overgrazing. However, when pastures are a shared commons, the benefit of adding a cow goes entirely to the owner (the individual) but all herders share the cost (society). The rational solution for an individual is to keep adding cows, even though this leads to the deterioration and possible collapse of the pasture, at a large cost for all [5,17-19].
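The arithmetic behind the herdsman's incentive can be made explicit with a few invented numbers: the gain from one extra cow accrues entirely to its owner, while the cost is split across the whole group. The figures below are purely illustrative and are not taken from the article.

```python
# Illustrative (made-up) numbers for Hardin's herdsman logic.
n_herders = 10
benefit_per_cow = 100      # gain to the owner of one extra cow
shared_cost_per_cow = 300  # damage to the pasture, borne by all herders together

net_to_owner = benefit_per_cow - shared_cost_per_cow / n_herders   # 100 - 30 = +70
net_to_community = benefit_per_cow - shared_cost_per_cow            # 100 - 300 = -200

print(f"owner's net gain per extra cow:       {net_to_owner:+.0f}")
print(f"community's net change per extra cow: {net_to_community:+.0f}")
```

Even though each extra cow makes the group as a whole worse off, the owner still comes out ahead, which is why the individually rational strategy erodes the shared resource.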
Hardin applied this analogy to the problems of overpopulation, shared fisheries, and taxation [5]. Baquero and Campos [16] have argued that the similarity to the problem of antimicrobial resistance means that we can make use of reputation effects to limit antimicrobial prescription (i.e. if overprescription is seen as damaging to the reputation of the doctors), as discussed below. What is most important for our discussion, however, is Hardin's key insight that a tragedy of the commons lacks a technical solution, which he defined as "one that requires a change only in the techniques of the natural sciences, demanding little or nothing in the way of change in human values or ideas of morality." This insight is important because the current campaign to ensure that antimicrobials are only used where they will work is such a technical solution. This campaign is very important and will help to slow the evolution of resistance, but Hardin's argument indicates that we may need to go further. Protecting the antimicrobial commons, and hence the collective best interest, may require society sometimes to act against an individual patient's best interests (Figure 2A).
Clearly, any policy that acts against a patient's interest should be a last resort and would raise serious ethical concerns that need careful consideration [14,20]. That said, the importance of restricting diagnostic and therapeutic options to patients is already well understood by general practitioners who are increasingly obliged to divide medical resources among patients along firm budget lines [18,21]. The unfortunate reality is that individuals do not always receive the full extent of the treatment that they desire.
But what would putting society first mean for antimicrobial use? This is not yet clear. In the best-case scenario, the individual and societal optima for antimicrobial use will turn out to be similar, and the current focus on stopping inappropriate use [3,4,8] will indeed be sufficient (Figure 2B). However, if it emerges that what is good for society is markedly different to what is good for the individual, then society will benefit from reductions in use beyond those currently planned (Figure 2A). That is, society will benefit from further reductions in the number of times that each patient takes a course of antimicrobials in order to limit evolutionary selection for resistant strains. Such reductions might include severe limits on the use of new and broad-spectrum antimicrobials [14], or leaving milder, mostly self-limiting bacterial infections untreated. In the extreme case that we face a complete loss of antimicrobial effectiveness, some antimicrobials might be reserved only for dangerous and potentially life-threatening infections.
[Figure 2 caption (fragment): (ii) Cost to societal health from decreased effectiveness of antimicrobials E(u) as a result of the evolution of antimicrobial-resistant pathogens. We assume that the societal benefit from reducing transmission rates is small and do not include it here. (iii) Overall effect of antimicrobial use on societal health S(u) = I(u)E(u). Ensuring that antimicrobials are only used for infections they can treat is a technical solution that will only take us to the individual optimum, which in this illustrative example is far from the real optimum. (B) Best-case scenario. Low to moderate antimicrobial use has little impact on our ability to treat later infections and is only weakly costly to societal health E(u). This means there is no tragedy. The true nature of E(u) is unknown but it is a function of investment in both new antimicrobials and infection control, which may be able to shift us from scenario A to scenario B.]
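The figure caption above defines the overall societal effect of antimicrobial use as S(u) = I(u)E(u). The toy calculation below illustrates how an individual optimum and a societal optimum can separate; the functional forms for I(u) and E(u) are invented for the sketch and are not taken from the article or from any data.

```python
import numpy as np

# u: per-capita antimicrobial use (arbitrary units in [0, 1]).
u = np.linspace(0.0, 1.0, 1001)

# I(u): benefit to individual health from treatment, with diminishing returns (assumed shape).
I = 1.0 - np.exp(-5.0 * u)

# E(u): remaining effectiveness of antimicrobials after resistance evolves (assumed shape).
E = np.exp(-2.0 * u)

# S(u) = I(u) * E(u): overall effect of use on societal health, as in the figure caption.
S = I * E

u_individual = u[np.argmax(I)]   # ignoring resistance, an individual prefers maximal use
u_societal = u[np.argmax(S)]     # society's optimum balances treatment against resistance

print(f"individual optimum: u = {u_individual:.2f}")
print(f"societal optimum:   u = {u_societal:.2f}")
```

With these particular (arbitrary) choices the societal optimum sits well below the individual one, the scenario of Figure 2A; flattening E(u) pushes the two optima together, as in Figure 2B.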
Is Antimicrobial Use a Tragedy of the Commons?
Understanding just how far antimicrobial use should be restricted is a major challenge for the future. The problem is that the optimal solution of a tragedy of the commons requires a clear idea of the relative costs and benefits to both the individual and society [17] (Figure 2), and it is here that large gaps in our knowledge exist [15,22]. While we have an idea of what the cost to a patient is for leaving an infection untreated, we need to better understand the benefits, which are likely to include leaving gut flora unharmed [8,13] and reducing the risk of resistance in later infections [4,8,23].
More data on the societal effects of antimicrobial use are also required to understand the potential for a tragedy of antimicrobial resistance [15,22]. Antimicrobial use by a patient can benefit others by preventing the pathogen being passed on but also carries the cost of promoting resistance [6,7]. And while it is clear that antimicrobial use increases the frequency of resistant strains that cause casualties [2], the full impact of resistance upon society is still poorly understood [15,22]. Although challenging, attempts to assess the societal costs of resistance will be helped by the strong differences in antimicrobial use between countries [24], which means that the effects of resistance can be monitored in communities with differing levels of use (Figure 2Aii). Agricultural studies may also provide valuable data, because a strategy that leaves animals untreated raises fewer ethical concerns than an equivalent strategy in our own society. The case for strong reductions in human antimicrobial use would be strengthened by evidence from agriculture that such reductions greatly prolong their effectiveness. Furthermore, any reductions in agricultural antimicrobials may have a knock-on benefit in reducing the incidence of resistant strains in human infections [9,10].
A better understanding of the costs and benefits of antimicrobial use, therefore, is a highly desirable goal for future research. However, it should also be emphasized that public policy can affect the severity of societal costs and the basis for any tragedy of antimicrobial resistance (Figure 2). Until now, we have been able to avoid many health effects of resistance through the development of new antimicrobial compounds [1,2]. Unfortunately, development by private firms is decreasing rapidly as the discovery of new compounds becomes more and more challenging [1], and consequently, financially costly. And these costs are set to increase as the campaigns to limit antimicrobial use further reduce profits from the sales of new compounds [14]. The impact of resistant strains on society, therefore, can be reduced by policies that promote antimicrobial development such as government investment in public-private partnerships [2,25] and the careful use of patents [26-28].
Another strategy that can reduce the health impact of resistant pathogens is hospital infection control, where a resistant pathogen is carefully monitored and targeted for special contact isolation and decontamination measures in a "search and destroy" policy [29-31]. This strategy has resulted in some notable successes with outbreaks of methicillin-resistant Staphylococcus aureus (MRSA) [31] (see Figure 1) and vancomycin-resistant enterococci [29], but it cannot contain the spread of resistance outside of institutions and, like development of new drugs, the strategy comes at considerable economic cost [32,33]. So while it is clear that investment in development and infection control will play an important role in reducing the health impact of resistance (Figure 2Aii), it is less clear that these strategies will eliminate the basis for tragedy altogether. We may, therefore, have to face up to the reality of a tragedy of antimicrobial resistance (Figure 2A).
Could Further Restrictions in Antimicrobial Use Be Achieved?
The recent campaigns to discourage antibiotic use for common colds and to limit antibacterials to bacterial infections have met with success, but they also underline the difficulty in changing society's attitudes and behavior [3,4]. If further restrictions were deemed appropriate, could these be achieved? Here we might again turn to Hardin, who proposed two candidate solutions to a tragedy of the commons: "mutual coercion mutually agreed upon," and privatization. However uncomfortable, in the event that antimicrobial use must be further restricted, both might play a role.
Coercion in society frequently takes the form of taxation, such as the use of parking fees when space is limited [5]. Similarly, prescribers or patients might be offered the choice of paying an antimicrobial-use levy or instead using alternative remedies. Coercion might also be achieved through new government regulations, which have already proved effective at reducing antimicrobial use in several countries [34]. More local regulation can be achieved by exploiting the preexisting management structures that exist in many health care settings to ensure that medical resources are divided equally [8,21]. Privatization solves the tragedy of the commons by dividing up the resource so that costs from selfishness feed back directly upon the individual owner [5]. But antimicrobial effectiveness cannot be divided in this way, making true privatization impossible. However, by analogy, careful tracking of antimicrobial prescriptions would create a feedback that enables individuals to be held accountable for extremes of use.
Hardin's solutions to a tragedy of the commons assume that humans behave as selfish, rational individuals who-if unmanaged-will display no regard for the interests of society. Although there is no doubt that humans are capable of selfishness, this assumption of rational "Homo economicus" behavior is being increasingly challenged [19,35-37]. For example, increased cooperativity is predicted whenever selfishness is damaging to reputation [19,37], and Baquero and Campos have argued that antimicrobial use will be decreased if we can establish a context in which overprescription is damaging to the reputation of doctors [16].
In addition, studies in which participants are asked to divide up shared resources show that humans behave less selfishly than simplistic self-interested strategies predict. This highlights the importance of human norms for cooperation [19,36]. If these norms translate to health care decisions (although it is not certain that they will), then educating patients about societal benefits will help decrease antimicrobial use [4]. The power of any societal argument is likely to be greatest when benefits accrue on a local scale [14]. Local benefits are realistic given the strong regional effects of differences in antimicrobial use and resistance, which suggest that reducing antimicrobial use can benefit a region or nation even if its neighbors adopt a less effective program [38].
Conclusion
Hardin's tragedy of the commons has proved to be a powerful analogy for understanding the problem of protecting the benefit we all receive from public goods [5,17-19,36]. It finds particular relevance in the growing crisis of antimicrobial resistance, where use of antimicrobials threatens to undermine the protection they provide to society as a whole. The questions of how far to reduce antimicrobial use, and at what cost to individual patients, represent a central unanswered problem in the battle against resistance (Figure 2). The answer requires a better understanding of the effects of antimicrobial use on both the individual patient and society as a whole. In particular, we need to better understand the societal costs of resistant pathogens and the potential for investments into new antimicrobials and infection control to limit these costs. These empirical challenges exist alongside the ethical question of whether we should ever resort to a strategy that leaves patients untreated. It is the challenging task of physicians, public health agencies and governments to evaluate the severity of the situation and decide what should be done. Perhaps the strongest message from Hardin's analogy is that difficult choices may lie ahead [5,18]. Solutions to a tragedy of the commons do not come easily and are likely to require brave policy decisions. | 2014-10-01T00:00:00.000Z | 2006-01-10T00:00:00.000 | {
"year": 2006,
"sha1": "dcbd1c4ca5c7aa692e606eef9b2bb1c498443061",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosmedicine/article/file?id=10.1371/journal.pmed.0030029&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f4497875559c1bcca4b74fa7cc7d9c9eef64bef6",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
118476350 | pes2o/s2orc | v3-fos-license | Interaction-range effects for fermions in one dimension
Experiments on quasi-one-dimensional systems such as quantum wires and metallic chains on surfaces suggest the existence of electron-electron interactions of substantial range and hence physics beyond the Hubbard model. We therefore investigate one-dimensional, quarter-filled chains with a Coulomb potential with variable screening length by quantum Monte Carlo methods and exact diagonalization. The Luttinger liquid interaction parameter K_rho decreases with increasing interaction strength and range. Experimentally observed values close to 1/4 require strong interactions and/or large screening lengths. As predicted by bosonization, we find a metal-insulator transition at K_rho=1/4. Upon increasing the screening length, the charge and spin correlation functions reveal the crossover from dominant 2k_F spin correlations to dominant 4k_F charge correlations, and a strong enhancement of the charge velocity. In the metallic phase, the signatures of spin-charge separation in the single-particle spectrum, spinon and holon bands, remain robust even for rather long-ranged interactions. The charge-density-wave state exhibits backfolded shadow bands.
I. INTRODUCTION
The Hubbard model has served as a framework to study strongly correlated electrons for almost five decades. 1 Its relative simplicity compared to more realistic models is largely based on approximating the electron-electron Coulomb interaction by an onsite repulsion U between electrons of opposite spin. The resulting Hamiltonian captures many aspects of strong correlations, including the Mott transition at half filling. Detailed knowledge about the model can be obtained by combining the Bethe ansatz with the bosonization technique. 2 However, experiments on quasi-one-dimensional (1D) systems such as quantum wires, 3 carbon nanotubes, 4 or self-organized atom chains 5 fall outside the range of validity of the Hubbard model. This is evinced by the possibility of an insulating, charge-ordered state at quarter filling, substantial 4k F charge correlations, or by a Luttinger liquid (LL) interaction parameter smaller than 1/2. Within a quasi-1D description, these features imply electron-electron interactions of finite range.
The case of one dimension is particularly interesting due to the breakdown of Fermi liquid theory, the importance of collective excitations, and the emergence of spin-charge separation. These phenomena can be understood in the framework of bosonization, 2,6-8 which provides a description in terms of a few nonuniversal parameters valid asymptotically at long wavelengths and low energies. In particular, knowledge of these parameters fully characterizes the correlation functions. The 1D Hubbard model, describing a screened, onsite interaction, is a Mott insulator for any U > 0 at half filling. Away from half filling, umklapp scattering is not allowed and the system remains metallic. The LL interaction parameter takes on values 1/2 ≤ K ρ ≤ 1, leading to dominant spin density wave correlations. A finite interaction range permits Mott or charge-density-wave (CDW) transitions of the Kosterlitz-Thouless type at other commensurate fillings n, for example at quarter filling in the U -V model with onsite (U ) and nearest-neighbor (V ) repulsion. 2 In contrast to the Hubbard model, such transitions occur at a finite critical U determined by the condition K ρ = n 2 . The effects of extended-range interactions depend on the details. For example, the intuitive picture of long-range interactions driving the system to strong coupling does not always apply: for spinless fermions, the critical interaction for the metal-CDW transition is larger for the 1/r potential than for a nearest-neighbor repulsion; 9 for spinfull fermions, a transition seems to be absent for the unscreened potential up to very strong interactions. 10 The 1/r Coulomb potential realized in, e.g., nanotubes and quantum wires, represents the extreme limit of long-range interactions. The logarithmic divergence of its Fourier transform gives rise to remarkable differences, most notably the metallic Wigner crystal (WC) state with quasi-long-range 4k F charge correlations, 11,12 and the existence of plasmon excitations. Strictly speaking, the divergence only exists for infinite systems and in the absence of screening. Consequently, the above phenomena are absent for any large but finite interaction range, and the bare Coulomb potential can be regarded as a special point in parameter space distinct from the LL fixed point. The 1/r potential has been studied analytically 12-21 and numerically. 9,10,22,23 The typical experimental situation is most likely intermediate between the Hubbard limit and the bare 1/r potential. Within bosonization, a finite interaction range only leads to a renormalization of the LL parameters. 2,24 However, in contrast to the Hubbard model, there exist no analytical methods to calculate the LL parameters exactly for nontrivial cases. Besides, the bosonization results rely on a linear band dispersion, and are valid only at low energies and long wavelengths, a limit which is nontrivial to achieve both in experiment and in numerical simulations. On the other hand, exact numerical methods are valid at all energies and distances and permit, e.g., the calculation of spectral weights of excitations. They provide a quantitative connection to microscopic model parameters, and can be used to study intermediate interaction ranges. The 1D nature of the problem makes numerical methods particularly powerful.
In this work we study the effect of the electron-electron interaction range using exact, large-scale quantum Monte Carlo (QMC) simulations and exact diagonalization. The model chosen here makes significant simplifications over typical experimental situations, but we believe that our findings are rather general. One of the key results is the LL interaction parameter K ρ , which allows us to estimate the interaction strength and range required to reproduce the experimentally observed values. We also study the evolution of static and dynamical correlation functions as a function of the interaction range. Importantly, we find that spin-charge separation in the single-particle spectrum is robust against increasing the interaction range. Our work extends previous investigations of spinfull and spinless lattice models, 9,14,21,22,25,26 and continuum simulations. 27-29 The paper is organized in the following way. In Sec. II we introduce the model and discuss related previous work. Section III gives details of the numerical methods. Our results are discussed in Sec. IV. Sec. V contains the conclusions. The appendix provides details about the application of the continuous-time (CT)QMC method.
II. MODEL
We consider a 1D chain of length L with the Hamiltonian given in Eq. (1). The kinetic term contains the usual 1D tight-binding band structure, ε(k) = −2t cos k. The electron density operator (summed over spin σ) at wavevector k (Wannier site i) is given by n̂_k (n̂_i), with n̂_iσ = c†_iσ c_iσ. We have set the lattice constant and k_B equal to one, and take t as the unit of energy.
The interaction matrix element V (r) is defined in Eq. (2). The screening length ξ permits us to interpolate between the Hubbard model (ξ = 0, U = 2V ) and long-range Coulomb interaction [ξ = ∞, V (r) ∼ 1/r]. The choice (2) appears more natural than gradually adding more and more matrix elements for increasing distances. The condition r < L/2 is due to the use of periodic boundary conditions. V (r) as defined by Eq. (2) satisfies V (r) → 0 as r → ∞ as well as the convexity condition V (r+1)+V (r−1) ≥ 2V (r) for r > 1. In the classical limit (no hopping), this guarantees a 4k F CDW ground state. 14 If the second condition is not met, the competition between 2k F and 4k F charge order can lead to enhanced metallic behavior or even a CDW-metal transition with increasing interaction range, as observed in quarter-filled extended Hubbard models. 25,30 As we show below, our choice of V (r) excludes such phenomena. We have also compared the choice of potential (2) to an Ewald summation for the case of Fig. 10, where the cutoff is expected to be most relevant, but found only minor changes in the form of energy shifts. The bosonization picture for the model (1), taking into account the lattice, is as follows. At half filling, any V > 0 produces a Mott insulator. For commensurate densities n away from half filling and an interaction range greater than or equal to the average particle spacing 1/n, strong enough interactions cause a CDW transition at the critical point K ρ = n 2 , beyond which umklapp scattering becomes a relevant perturbation. 2 The CDW state is characterized by long-range 4k F charge order. In the LL phase, the dominant correlations are 2k F spin-density fluctuations for K ρ > 1/3, and 4k F charge correlations for K ρ < 1/3. For the unscreened Coulomb potential with divergent Fourier transform (ξ = ∞), we formally have K ρ = 0, which would suggest an insulating ground state, in contrast to the continuum prediction of a metallic quasi-WC made by Schulz. 12 The existence of a metal-insulator transition at K ρ = n 2 has been verified numerically for the U -V model and the U -V 1 -V 2 model. In contrast, for lattice fermions with a long-range potential (more specifically, the Pariser-Parr-Pople model), numerical results 10 suggest a metallic ground state with the properties predicted in the absence of umklapp scattering. 12 This rather surprising result, obtained on large but finite systems, is attributed to the reduction of the umklapp matrix element g 3 due to long-range interactions. 9 Within bosonization, there are subtle but important differences between spinfull and spinless models (concerning umklapp scattering), and between odd and even filling factors (e.g., n = 1/2 and n = 1/3 are not equivalent when considering the Luther-Emery point). 2 These differences seem to manifest themselves also in numerical studies of lattice models. For example, whereas spinfull fermions interacting via a 1/r potential remain metallic even for large V , 10 a metal-insulator transition has been observed in the spinless case, 25 with the critical interaction being larger than for the extended Hubbard model.
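Since Eq. (2) itself is not reproduced in this text, the short sketch below uses a generic Yukawa-type screened Coulomb form, V(r) = V e^{−r/ξ}/r, purely as a stand-in, and checks numerically the two properties emphasized here: monotonic decay and the convexity condition V(r+1) + V(r−1) ≥ 2V(r).

```python
import numpy as np

def screened_coulomb(r, V=1.0, xi=10.0):
    """Hypothetical screened Coulomb matrix element for r >= 1 (lattice constant = 1).
    The exact Eq. (2) is not reproduced above; this Yukawa-like choice is only a
    stand-in that decays to zero and approaches ~V/r for large xi."""
    r = np.asarray(r, dtype=float)
    return V * np.exp(-r / xi) / r

L = 140
r = np.arange(1, L // 2)          # interactions kept for r < L/2 (periodic boundaries)
V_r = screened_coulomb(r)

# Convexity check for r > 1, which together with monotonic decay favours a
# 4k_F CDW in the classical limit.
inner = r[1:-1]
convex = screened_coulomb(inner + 1) + screened_coulomb(inner - 1) >= 2 * screened_coulomb(inner)

print("monotonically decreasing:", bool(np.all(np.diff(V_r) <= 0)))
print("convex for all r > 1:    ", bool(np.all(convex)))
```

For this stand-in potential both conditions hold for any ξ > 0, which matches the qualitative behaviour the text ascribes to Eq. (2).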
For simplicity, we consider in the following only the case ξ < ∞, so that no divergence in the Fourier transform V (q) occurs. We further focus on quarter filling n = 0.5, and will see below that the model (1) is then either a LL (for K ρ > 1/4) or a CDW insulator (for K ρ < 1/4).
For quarter filling, n = 0.5, most of the physics of the model (1) (with ξ < ∞) can also be captured by simpler U -V or U -V 1 -V 2 models provided the convexity condition is satisfied. 14 In particular, these models realize the non-Hubbard regime K ρ < 1/2, and a metal-insulator transition at K ρ = 1/4. In the metallic phase, the LL conjecture implies that given the same LL parameters, the extended Hubbard models and Eq. (1) produce identical results, albeit with different microscopic parameters. However, in connection with experiments, it is crucial to know how strongly the LL parameters, and hence the static and dynamical correlation functions, depend on the interaction range. We will show below that in order to reach the same value of K ρ , the U -V model requires much larger (and thus rather unrealistic) interactions than a model with a larger interaction range.
III. METHODS AND OBSERVABLES
The majority of our results were obtained from simulations in the stochastic series expansion (SSE) representation with directed loop updates. 31,32 The inclusion of the long-range interaction terms in Eq. (2) is straightforward. Due to the linear scaling of computing time with the average expansion order, this method permits us to study low temperatures and long chains (up to L = 140 here) even in the strong-coupling regime. We also show results obtained with the continuous-time QMC method. 33,34 The latter is restricted to weak and intermediate interactions due to a less favorable scaling of computer time with temperature and system size, and additional numerical difficulties (see the Appendix). Both QMC methods are exact.
The single-particle spectral function is of particular interest in relation to photoemission results. Since the calculation of the single-particle Green's function in SSE is hampered by a minus-sign problem (for periodic boundaries), we instead present results from exact diagonalization on clusters with L = 20.
We consider the static charge (ρ) and spin (σ) structure factors [Eq. (3)], where Ŝ^z_j = (1/2)(n̂_j↑ − n̂_j↓), and the dynamical charge and spin structure factors [Eq. (4)], where ρ̂_q = Σ_r e^{iqr}(n̂_r − n)/√L, and |i⟩ and |j⟩ are eigenstates with energies E_i and E_j. These dynamical correlation functions can be calculated in the SSE representation at fixed particle density and for periodic boundaries without a sign problem. For the analytical continuation we have used the maximum entropy method. 35 The T = 0 single-particle spectral function is given in Eq. (6), where A^− (A^+) is related to photoemission (inverse photoemission), and |ψ^(N_e)_0,k⟩ denotes the ground state for the sector with N_e electrons and total momentum k; the corresponding energy is E^(N_e)_0,k. In order to measure energies relative to the Fermi energy, we show A(k, ω − µ).
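As a concrete illustration of how a static structure factor can be obtained from measured real-space density correlations, the sketch below uses one common Fourier-transform convention; the exact normalization of Eq. (3) is not reproduced above, and the toy correlation data are invented.

```python
import numpy as np

def static_structure_factor(corr, q):
    """S(q) = sum_r e^{iqr} C(r) for a translation-invariant correlation function
    C(r) = <(n_0 - n)(n_r - n)>, r = 0..L-1 (a common convention; the exact
    prefactors of Eq. (3) may differ)."""
    r = np.arange(len(corr))
    return float(np.real(np.sum(corr * np.exp(1j * q * r))))

# Quarter filling: n = 0.5 electrons per site, k_F = pi*n/2, so 2k_F = pi/2 and
# 4k_F = pi, the wavevectors discussed in the text.
L, n = 64, 0.5
kF = np.pi * n / 2
qs = 2 * np.pi * np.arange(1, L) / L

# Toy correlations: purely on-site fluctuations give a q-independent S(q).
corr = np.zeros(L)
corr[0] = 0.25                    # illustrative on-site variance, not QMC data
S = [static_structure_factor(corr, q) for q in qs]
print(f"2k_F = {2 * kF:.3f}, 4k_F = {4 * kF:.3f}, S(q1) = {S[0]:.3f}")
```

With real QMC data, peaks of S_ρ(q) at 4k_F and of S_σ(q) at 2k_F correspond to the charge and spin correlations discussed in Sec. IV.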
IV. RESULTS
Since we used three different methods, let us state here that the results of Figs. 1-5, 7 and 8 were obtained using the SSE representation, Fig. 6 with the CTQMC method, and Figs. 9-11 by exact diagonalization. Except for Fig. 2(b), results are for quarter filling n = 0.5.
A. Luttinger liquid interaction parameter
In the metallic regime of the model (1), the knowledge of the LL interaction parameter K ρ together with the bosonization results for the correlation functions provides a complete description of the low-energy, long-wavelength physics. The crossover between the Hubbard and long-range cases as a function of ξ, and the quantitative relation between microscopic parameters and LL parameters, can be studied exactly by means of numerical methods. The LL parameter has previously been calculated, for example, for spinless fermions with a 1/r potential, 9 for the U -V model, 36 and for the U -V 1 -V 2 model. 30 We extract K ρ from SSE QMC results for the charge structure factor using the proportionality between K ρ and S ρ (q 1 )/q 1 , where q 1 = 2π/L is the smallest nonzero wavevector for a given system size and the static structure factor is defined in Eq. (3). We then perform a finite-size scaling to obtain K ρ . The extrapolation is shown for selected values of ξ in the case V /t = 3, n = 0.5 in Fig. 1. We find that for large enough system sizes, the finite-size dependence is dominated by the lowest order 1/L, and have therefore used a linear fit for the extrapolation. Figure 2(a) shows the dependence of K ρ on V /t and ξ at quarter filling n = 0.5. The V /t = 1 results fall into the Hubbard regime K ρ ≥ 1/2 for all values of ξ shown. For a stronger interaction V /t = 3, K ρ becomes smaller than 1/2 for ξ ≈ 2, but remains larger than 1/3, thereby implying dominant 2k F correlations [see Eq. (8) and discussion below]. At V /t = 6, the values of K ρ span the Hubbard, non-Hubbard and dominant 4k F (i.e., K ρ < 1/3) regimes. For the largest ξ = 20, the LL parameter takes on almost exactly the critical value K ρ = 1/4 of the LL-CDW transition. The numerical results therefore suggest that the experimentally observed values of K ρ ≈ 0.25 require surprisingly large values of V /t and ξ. Finally, for V /t = 9, the system undergoes the metal-insulator transition for ξ ≈ 3.5. Independent of V , we expect K ρ → 0 for ξ → ∞ in the thermodynamic limit, corresponding to the quasi-WC. A theoretical prediction, K ρ ∼ ln −1/2 ξ, was made by Schulz, 12 and the numerical results for the charge structure factor by Fano et al. 10 are consistent with K ρ = 0. Figure 2(a) reveals that K ρ decreases with increasing ξ, thereby bringing the system closer to the insulating phase. In previous work on extended Hubbard models, it was found that adding interactions at distances beyond the interparticle spacing 1/n can increase K ρ and hence enhance the metallic character of the system. 9,25 Similarly, in the U -V 1 -V 2 model with U fixed, varying the relative strength of V 1 and V 2 leads to a competition between 2k F and 4k F charge fluctuations. 25,30 As a result, K ρ takes on a maximum close to V 2 = V 1 /2. The condition V (2) = V (1)/2 is also realized for the unscreened Coulomb potential, and numerical results suggest that the system remains metallic up to very strong interactions even in the presence of a lattice. 10,25 The experimentally motivated form (2), fulfilling the monotonicity and the convexity condition, 14 favors a 4k F CDW state in the limit V /t → ∞. 14 Similar to previous results for spinless fermions with a 1/r potential, 9 K ρ in Fig. 2 decreases with increasing V /t. A common feature of the curves in Fig. 2(a) is a pronounced decrease at small values of ξ, followed by a much slower decrease for larger ξ. The numerical results indicate that the change in behavior occurs when the interaction range ξ equals the average particle spacing 1/n = 2.
To verify this hypothesis, we compare in Fig. 2(b) the ξ dependence of K ρ for two different densities n = 0.5 and n = 0.1 at V /t = 3. The curve for n = 0.1 indeed exhibits a significant ξ dependence up to much larger ξ. The results for n = 0.1 further reveal that for a given V /t, a smaller density requires a significantly larger interaction strength and/or range to reach the critical K ρ = n 2 for the metal-insulator transition, see also Ref. 21.
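The finite-size extrapolation just described can be summarized in a few lines. The sketch below assumes the commonly used estimator K_ρ(L) ≈ π S_ρ(q_1)/q_1 with q_1 = 2π/L; the exact prefactor of the relation used in the paper is not reproduced above, and the K_ρ(L) values are invented purely for illustration of the linear-in-1/L fit.

```python
import numpy as np

def k_rho_estimate(S_q1, L):
    """Finite-size estimator K_rho(L) ~ pi * S_rho(q1) / q1 at q1 = 2*pi/L.
    The pi prefactor is a common convention and is assumed here."""
    q1 = 2.0 * np.pi / L
    return np.pi * S_q1 / q1

# Illustrative extrapolation K_rho(L) = K_inf + a/L to L -> infinity,
# mimicking the linear fit described for Fig. 1 (numbers are made up).
Ls = np.array([60, 84, 100, 120, 140])
K_L = np.array([0.315, 0.305, 0.300, 0.296, 0.293])   # hypothetical K_rho(L) values
a, K_inf = np.polyfit(1.0 / Ls, K_L, 1)               # slope, intercept
print(f"extrapolated K_rho = {K_inf:.3f}")
```

The extrapolated value is then compared against the thresholds K_ρ = 1/2 (Hubbard regime), 1/3 (dominant 4k_F correlations), and 1/4 (CDW transition at quarter filling).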
B. Charge and spin correlation functions
For a model with SU(2) spin symmetry such as Hamiltonian (1), bosonization predicts the decay of charge and spin correlation functions to be determined solely by the parameter K ρ (since K σ = 1); 37 the corresponding expressions are given in Eq. (8). 38 In the opposite limit of a 1/r Coulomb potential (ξ = ∞) with divergent Fourier transform V (q) ∼ ln(1/q), Schulz 12 obtained the forms given in Eq. (9). Apart from the absence of the 1/x 2 Fermi liquid contribution, the most notable difference is that charge correlations are dominated by an unusually slow decay of the 4k F component (slower than any power law). These quasi-long-range 4k F charge oscillations led to the notion of a fluctuating WC, where the wavelength λ = 2π/4k F = 1/n is the average distance between fermions. In contrast, the spin sector retains a power-law decay. These continuum results are consistent with numerical work. 10,27 As emphasized before, the WC results (9) rely on the divergence of the Fourier transform of the potential V (r). Such a divergence only occurs in the thermodynamic limit, and for ξ = ∞. If either of these conditions is not met, the LL forms (8) can be recovered in the long-wavelength limit. Here we only consider large but finite values of ξ, for which a metal-insulator transition occurs at K ρ = n 2 = 1/4. The CDW state exhibits long-range 4k F charge order. The closest analog of the metallic quasi-WC state in our case is therefore the metallic regime 1/3 > K ρ > 1/4 with dominant (power-law) 4k F correlations. As shown in Fig. 2, K ρ < 1/3 is realized for V /t = 6 and large ξ, and we explore the similarities to the WC below. Figure 3 shows the charge and spin structure factors as defined in Eq. (3). At V /t = 3 and with increasing ξ, we see a slight increase of the 4k F = π charge correlations, see Fig. 3(a). This effect becomes more noticeable for a stronger repulsion V /t = 6, as shown in Fig. 3(b). The inherent length scale 1/n again appears in Fig. 3, with the results saturating on the scale of the plots for ξ ≳ 2. The spin structure factor [Figs. 3(c) and (d)] reveals an enhancement of 2k F = π/2 antiferromagnetic correlations with increasing ξ, which according to Eq. (8) can be related to the reduction of K ρ . This enhancement is again more pronounced for V /t = 6 than for V /t = 3.
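For orientation, the standard bosonization expressions for an SU(2)-symmetric Luttinger liquid (K_σ = 1), which Eq. (8) presumably parallels, are reproduced below from the textbook literature; the amplitudes A and B are nonuniversal, and the exact prefactors and logarithmic corrections of the paper's Eq. (8) may differ from this generic form.

```latex
% Generic SU(2)-symmetric LL correlation functions (textbook form, assumed here):
\begin{align}
\langle n(x)\,n(0)\rangle - n^2 &\simeq \frac{A_0}{x^{2}}
  + A_2\,\cos(2k_F x)\, x^{-(1+K_\rho)}\,\ln^{-3/2}\!x
  + A_4\,\cos(4k_F x)\, x^{-4K_\rho}, \\
\langle S^z(x)\, S^z(0)\rangle &\simeq \frac{B_0}{x^{2}}
  + B_2\,\cos(2k_F x)\, x^{-(1+K_\rho)}\,\ln^{1/2}\!x .
\end{align}
```

Comparing the oscillating exponents, 4K_ρ < 1 + K_ρ precisely when K_ρ < 1/3, which is why 4k_F charge correlations dominate below that value, while the transition to the CDW insulator at quarter filling occurs at the smaller value K_ρ = n² = 1/4.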
Let us now turn to the long-wavelength behavior. For a LL we have S ρ (q) ∼ qK ρ , whereas for the WC S ρ (q) ∼ q| ln q| −1/2 (see Ref. 2). Following Ref. 10, we plot in Fig. 3(e) S ρ (q)| ln q| 1/2 /q. This quantity shows a logarithmic divergence at q = 0 as long as S ρ (q) ∼ q and tends to a constant as q → 0 for ξ = ∞. 10 Our numerical results show that a divergence occurs throughout the metallic phase, and that the approach to the WC result is rather slow. In particular, given the finite values of ξ, the LL nature of the system reemerges eventually in the limit q → 0, although the system sizes required to see this effect become larger and larger. A nonlinear (at long wavelengths) density structure factor corresponding to K ρ = 0 has been observed for the 1/r potential. 10 In contrast, for finite ξ, Fig. 3 shows that the linear behavior of S ρ (q) is preserved. The long-wavelength spin structure factor is not affected by the interactions [Fig. 3(c) and (d)]; the slope in the limit q → 0 remains fixed, as required by K σ = 1 [cf. Eq. (8)]. Schulz 12 suggested that for a finite ξ, one should be able to observe WC-like correlations at distances x < ξ and LL-like correlations at x > ξ. Although the bosonization results are only valid for large distances, this prediction can in principle be tested numerically. Figure 4 shows the density-density correlation function in real space. We have chosen V /t = 5, and ξ = 10 or ξ = 20. This choice was made for the following reasons. First, deviations from the LL form given by Eq. (8) are most visible in the regime where 4k F oscillations dominate, that is for K ρ < 1/3. However, for the bosonization results to apply, it is important to avoid the insulating state expected for K ρ < 1/4. Close to K ρ = 1/4, pre- vious work on the extended (U -V ) Hubbard model has shown the importance of logarithmic corrections. 39 For the parameters chosen, we have K ρ ≈ 0.29 for ξ = 10 and K ρ ≈ 0.28 for ξ = 20. The results in Fig. 4 show dominant 4k F correlations but no long-range order, as expected in the LL regime.
Based on the idea that the LL form for ⟨n_x n_0⟩ should hold at distances larger than ξ, we fit the numerical data to Eq. (8) using two fitting parameters (the 2k F and 4k F amplitudes) as well as the above values of K ρ . The fitting intervals are chosen as [ξ + 5, 35] and we used βt = L = 84. Figure 4(a) shows that we indeed have good agreement between the fit and the QMC data at large distances. However, for r ≲ ξ = 10, significant deviations become visible. To discriminate between short-distance effects coming from the continuum approximation underlying Eq. (8) and genuine deviations from LL theory we consider ξ = 20 in Fig. 4(b). Again there is reasonable agreement at large distances, but clear differences at r ≲ ξ = 20. Hence, keeping in mind the difficulties mentioned above, our results are consistent with the picture proposed by Schulz. 12 As can be seen from Fig. 2, the insulating CDW phase can be reached for ξ ≳ 3.5 and V /t = 9. The CDW state is characterized by long-range 4k F charge order at T = 0, as formally reflected by Eq. (8) for K ρ = 0, and may be regarded as a WC pinned to the lattice. Figure 5 shows the amplitude of 4k F charge correlations divided by system size, i.e., S ρ (4k F )/L. At fixed ξ = 10, we find that this quantity extrapolates to zero in the thermodynamic limit in the LL phase [K ρ = 0.383(1), V /t = 3], and to a finite value in the CDW state (V /t = 9). Near the phase boundary, the Kosterlitz-Thouless nature of the transition makes numerical studies difficult and we see that, assuming a linear scaling, S ρ (4k F )/L extrapolates to a finite but very small value despite K ρ = 0.270(3) > 1/4. For the unscreened Coulomb potential, S ρ (4k F )/L increases logarithmically with system size, and there is no long-range order.
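A minimal version of the two-amplitude fit described above might look as follows; the functional form omits the 1/x² and logarithmic terms of Eq. (8), K_ρ is held fixed at a value in the range quoted above, and the "data" are synthetic rather than QMC results.

```python
import numpy as np
from scipy.optimize import curve_fit

def ll_density_corr(x, A2, A4, K_rho=0.29, kF=np.pi / 4):
    """Simplified large-distance LL form with only the 2k_F and 4k_F amplitudes free;
    K_rho is fixed at its independently determined value (quarter filling: k_F = pi/4).
    The 1/x^2 and log-correction terms of Eq. (8) are omitted in this sketch."""
    return (A2 * np.cos(2 * kF * x) * x ** (-(1 + K_rho))
            + A4 * np.cos(4 * kF * x) * x ** (-4 * K_rho))

# Hypothetical data for <n_x n_0> - n^2 on the fitting window [xi + 5, 35] with xi = 10:
x = np.arange(15, 36, dtype=float)
data = ll_density_corr(x, 0.05, 0.12) + 1e-4 * np.random.default_rng(0).normal(size=x.size)

popt, _ = curve_fit(lambda x, A2, A4: ll_density_corr(x, A2, A4), x, data, p0=(0.1, 0.1))
print("fitted 2k_F and 4k_F amplitudes:", popt)
```

Restricting the fit window to r > ξ, as done in the paper, is what allows the comparison between LL-like behaviour at large distances and WC-like deviations at shorter ones.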
C. Dynamical charge and spin correlations
We now discuss the dynamical spin and charge correlation functions, defined in Eq. (4), as obtained from QMC simulations. We begin with a rather weak interaction V /t = 1 and a large screening length ξ = 10. CTQMC results for these parameters which, according to Fig. 2, fall into the Hubbard regime, are presented in Fig. 6. Despite the long-range interaction, the spectra closely resemble previous results for the Hubbard model, see, e.g., Ref. 40. In particular, the particle-hole continuum is clearly visible in both the charge and the spin channels. As a result of interactions, the velocities of long-wavelength charge and spin excitations differ by about a factor of 2.
To investigate larger values of V /t, we use the SSE representation. The latter can also be used for the parameters of Fig. 5, but we chose the CTQMC method to demonstrate its applicability to models with long-range interactions. Taking V /t = 6, we can explore the whole metallic regime of the model (1) by varying the screening length ξ. Results are shown in Fig. 7.
We first discuss the charge sector. For ξ = 0.1, corresponding to the strong-coupling regime of the Hubbard model [U = 2V(0) = 12t], the results in Fig. 7(a) look qualitatively similar to Fig. 6(a). However, the distribution of spectral weight over the particle-hole continuum is much more inhomogeneous, with pronounced excitation features along the edges. The charge velocity v ρ is only slightly smaller than in Fig. 6(a). Upon increasing ξ, we observe a substantial increase of v ρ , as indicated by the dashed lines; between ξ = 0.1 and ξ = 1, v ρ increases from 1.97(2)t to 2.64(2)t. A small charge gap of order 0.1t, which extrapolates to zero for L → ∞ in the LL phase, is visible in Fig. 7(c), but we can estimate the velocity as v ρ > 3.5t. The increase of v ρ reflects the fact that the extended interaction promotes 4k F charge order, and thereby increases the stiffness of the charges with respect to long-wavelength excitations. This gap is a finite-size effect caused by the close proximity of the CDW transition. The onset of 4k F fluctuations is also reflected in an incomplete but well visible softening of the excitations at q = 4k F . We will see below that this feature develops into a Bragg peak in the CDW state. A plasmon excitation, one of the hallmark features of the 1/r Coulomb potential, is not expected for finite values of ξ, and would in general be very difficult to distinguish from a linear mode in numerical simulations. In contrast to the charge sector, the effect of ξ on the spin dynamics is very small. In accordance with LL theory, the velocity v σ of long-wavelength spin excitations remains virtually unchanged upon increasing ξ from 0.1 to 10 [Fig. 7(d) and (f)]. However, v σ is strongly renormalized in going from V /t = 1 [Fig. 6(b)] to V /t = 6 [Fig. 7(d)]. At fixed V /t, the screening length hence provides a natural way of changing the ratio of charge and spin energy scales, and opens a route to explore the spin-incoherent LL. 41 Figure 8 shows results for the charge and spin dynamics in the CDW phase, for V /t = 9 and ξ = 10. As demonstrated in Fig. 5, for these parameters, the system is in a CDW state with long-range 4k F order. In addition to a charge gap at q = 0, the charge structure factor has become almost perfectly symmetric with respect to q = π/2. This doubling of the unit cell results from the softening at q = 4k F , and is a typical signature of the CDW state. Except for a smaller velocity v σ , the spin structure factor in Fig. 8 is similar to the metallic regime (i.e., gapless), see for example Fig. 7(c).
D. Single-particle spectral function
The single-particle spectrum is of particular interest in the search for experimental realizations of LLs because it can reveal the signatures of spin-charge separation (spinon and holon bands). 42,43 Although LL theory is a low-energy description, spin-charge separation may be observed up to rather high energies. For example, spinon and holon bands are visible over an energy range of the order of the bandwidth in the Hubbard model, 40,44,45 and also experimentally for TTF-TCNQ 46 and 1D cuprates. 47,48 In contrast, such clear features of spin-charge separation seem to be absent in recent measurements on self-organized gold chains, although the density of states reveals the scaling expected for a LL. 5,49 To understand the role of the interaction range and small values of K ρ , we calculate the single-particle spectral function A(k, ω − µ) [Eq. (6)] for different values of V and ξ.
To simplify the interpretation of the complex structures, we use exact diagonalization on chains with L = 20 sites, and use a different graphical representation. Figure 9 shows the single-particle spectrum in the Hubbard regime for V /t = 1 and ξ = 10. To highlight the spinon, holon and shadow bands previously observed for the Hubbard model away from half filling, 40,44,45 we include the holon and shadow band dispersions for the U = ∞ Hubbard model, 45 −2t cos(|k| + k F ) and −2t cos(|k| − k F ), as well as a linear spinon branch v σ (k −k F ) with v σ determined from S σ (q, ω). These analytical results have well-defined corresponding excitations in the numerical spectra, and establish the signatures of spin-charge separation in the Hubbard regime of the phase diagram. The spectral weight of the shadow band at large k is rather small in Fig. 9. The finite spectral weight between the holon and spinon excitation peaks is due to the finite system size. 45 Taking V /t = 6, we can study the spectral function across the Hubbard, non-Hubbard and dominant 4k F regimes with increasing ξ. The results are shown in Fig. 10, and reveal that the signatures of spin-charge separation are fully preserved. Whereas the holon dispersion reflects the noticeable increase of the charge velocity with increasing ξ, see Fig. 7, the spinon excitations remain virtually unchanged by the interaction range, again in accordance with the results for S σ (q, ω) in Fig. 7. The spectral weight of the shadow band is significantly enhanced compared to V /t = 1 (Fig. 9). On approaching the strong-coupling region at larger ξ, the upper Hubbard band (visible in the insets of Fig. 10) becomes almost completely flat. Similar to Fig. 10, a gap is visible at k F in Fig. 10(c) [and also in (b) but much smaller]; we have verified that this gap is a finite-size effect.
Our findings in the metallic region of the phase diagram are consistent with the experimentally observed coexistence of a small K ρ (implying extended-range interactions) with signatures of spin-charge separation in photoemission measurements; a good example is TTF-TCNQ. 46 On the other hand, the finite interaction range does not provide an explanation for the possible absence of clear spin-charge separation in self-organized gold chains. 5,49 We comment on the latter case in the conclusions.
Finally, we show in Fig. 11 the single-particle spectrum in the insulating CDW phase at V /t = 9 and ξ = 10. The dynamical charge and spin structure factors for these parameters were presented in Fig. 8. We find a charge gap [equal to 0.2(1) in the thermodynamic limit], and backfolded shadow bands related to the 4k F charge order which are visible in the inset of Fig. 11. The spectrum appears to evolve continuously across the CDW transition. In particular, the holon band is well visible in Fig. 11, whereas it has been found to separate into two domain walls for much stronger Coulomb interaction. 22 The single-particle spectrum of a quarter-filled CDW state has also been calculated using the bosonization method. 50 In the absence of dimerization, no singularities exist near k F (note that in our numerical calculations, we cannot distinguish between singularities and excitation peaks of finite width). The spectrum may also depend on the details of the interaction potential.
[Fig. 11 caption: as Fig. 9 but for V /t = 9 and ξ = 10, corresponding to the insulating CDW phase (see Fig. 2). The inset shows a logarithmic density plot of the spectrum, revealing backfolded shadow bands related to the 4k F charge order.]
V. CONCLUSIONS
In this work, we have studied the effects of the electronelectron interaction range in one dimension using exact numerical methods. We have obtained the Luttinger liquid interaction parameter K ρ as a function of the Coulomb matrix element V and the screening length ξ which, in combination with Luttinger liquid theory, defines the phase diagram of the model. In addition to the Hubbard regime 1 ≥ K ρ ≥ 1/2, we have explored the non-Hubbard regime K ρ < 1/2, the case K ρ < 1/3 with dominant 4k F charge correlations, and the insulating CDW state which exists at quarter filling for K ρ < 1/4. We identified an important length scale 1/k F for K ρ ; K ρ strongly depends on the screening length for ξ 1/k F , whereas it decays very slowly for ξ 1/k F . Our results indicate that the lattice model with a finite (but possibly large) interaction range can be described by Luttinger liquid theory if higher-order umklapp terms are taken into account. This case is therefore distinct from the unscreened 1/r potential which falls outside the Luttinger liquid description. 11,12 For the unscreened potential, numerical results suggest the existence of a metallic quasi Wigner crystal state with K ρ = 0. 10 For our choice of a screened Coulomb potential, which is both convex and monotonically decreasing with increasing distance, K ρ always decreases with increasing interaction strength or range, as compared to enhanced metallic behavior observed in extended Hubbard models as a result of competing nearest-neighbor and next-nearest neighbor interactions. Interestingly, the small values of K ρ ≈ 1/4 observed in recent experiments on gold chains, as well as previously in quantum wires, carbon nanotubes and quasi-1D materials, can only be achieved for large values of the interaction strength and/or range.
We have calculated the static and dynamical charge and spin correlation functions, and found good agreement with the expectations based on Luttinger liquid theory. Upon decreasing K ρ by increasing V and/or ξ, 4k F charge correlations become strongly enhanced, reminiscent of although not identical to the quasi Wigner crystal. Our results for the real-space density-density correlations are consistent with Luttinger liquid behavior on length scales beyond the screening length and deviations on smaller length scales.
The 4k F correlations lead to a pronounced Bragg peak in the dynamical density structure factor. The interaction range strongly modifies the velocity of longwavelength charge excitations, whereas the spin velocity only depends on the onsite repulsion. Throughout the Luttinger liquid phase, spin-charge separation is clearly visible in the single-particle spectrum. Finally, in the insulating charge-density-wave phase, we observe backfolded shadow bands.
An important question to be addressed in future work, motivated by experiments on self-organized gold chains, 5,49 is the impact of spin incoherence on the spinon and holon signatures in photoemission spectra. The energy scales for low-energy charge and spin excitations are determined by the corresponding velocities v ρ and v σ . As explicitly shown in this work, v ρ increases with increasing ξ, whereas v σ does not depend on the interaction range. Therefore, the charge and spin energy scales can be well separated for sufficiently large ξ. In the regime v ρ v σ , the 2k F spin correlations can be suppressed at finite temperatures, whereas the charge sector remains coherent. 41 This scenario may explain the rather incoherent angle-resolved spectrum of gold chains, which at the same time show clean LL power-law behavior in the angle-integrated density of states. 5
ACKNOWLEDGMENTS
We thank D. Baeriswyl, R. Claessen, S. Eggert, S. Ejima, F. Essler, H. Fehske, V. Meden, J. Schäfer, and D. Schuricht for helpful discussions. This work was supported by the DFG Grants No. FOR1162 and WE 3639/2-1, as well as by the Emmy Noether Programme. We are grateful to the LRZ Munich and the Jülich Supercomputing Centre for generous computer time.
APPENDIX: CTQMC
The general formulation of the weak-coupling CTQMC method allows the simulation of problems with long-range interactions in imaginary time and/or space. 33,34,51 Retarded interactions (i.e., nonlocal in time), which essentially correspond to the electron-phonon problem, have been considered in Refs. 52-54. In this appendix, we provide technical details for the application of the CTQMC method to a Hamiltonian of the form (1).
Although such simulations are in principle straightforward, we have encountered difficulties which are ultimately related to the strong-coupling character of the problem considered in this paper. The algorithm is quite similar to the case of electron-phonon interactions, and has been implemented both at finite temperatures and at T = 0 (with a projection parameter θ). 34 Our starting point is Eq. (1), which we write as
H = Σ_k ε(k) n_k + V Σ_{i,r} P(r) (n_i − n̄)(n_{i+r} − n̄). (A.10)
Here n̄ is the average density, the interaction accounts for fluctuations around the paramagnetic saddle point, and P (r) is a probability distribution; we also defined V = Σ_r V (r). During the simulation, vertices corresponding to interactions over a distance r are proposed with probability P (r) = V (r)/V .
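To make the vertex-proposal step concrete, the following sketch samples interaction distances from P(r) = V(r)/V as described above. It is only an illustration: the specific screened-potential shape V(r) = V₀ e^(−r/ξ)/r, the chain length L, and all numerical values are assumptions introduced here, not values taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def proposal_distribution(V_of_r):
    """Normalize interaction matrix elements into proposal probabilities
    P(r) = V(r) / V, with V = sum_r V(r)."""
    V_total = V_of_r.sum()
    return V_of_r / V_total, V_total

# Illustrative screened potential on distances r = 1..L/2 (assumed form).
L, xi, V0 = 64, 8.0, 2.0
r = np.arange(1, L // 2 + 1)
V_of_r = V0 * np.exp(-r / xi) / r   # hypothetical screened Coulomb shape

P_of_r, V_total = proposal_distribution(V_of_r)

def propose_vertex():
    """Draw the distance of a proposed interaction vertex from P(r);
    the site index is drawn uniformly."""
    distance = rng.choice(r, p=P_of_r)
    site = rng.integers(L)
    return site, distance

print(propose_vertex(), V_total)
```

In an actual simulation this proposal step would be combined with the acceptance ratios of the weak-coupling expansion; here it only demonstrates how the distance distribution enters.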
To circumvent the negative sign problem, and following Ref. 34, we rewrite the interaction as Eq. (A.11). Here we have introduced an Ising variable s = ±1. Up to a constant, Eq. (A.11) is equivalent to the original interaction. To avoid the sign problem for V > 0 we have the condition n̄/2 + δ > 1.
The average expansion order, which determines the computer time, can be evaluated within the finite-temperature approach, giving
M = βV L [ 4δ² − Σ_r P(r) ⟨(n_0 − n̄)(n_r − n̄)⟩ ]. (A.12)
The fact that the form (A.11) is beneficial for the simulations at quarter filling and for rather strong interactions confirms an empirically derived rule. In order to obtain optimal results away from half filling, it is often useful to increase the value of δ at the expense of a larger average expansion order. With the above formulation, we were able to extend the parameter regime of applicability for the weak-coupling CTQMC method, and exemplary results are shown in Fig. 6. However, the strong-coupling regime remains out of reach. | 2012-05-09T14:55:28.000Z | 2012-01-17T00:00:00.000 | {
"year": 2012,
"sha1": "be240e3a01dd1d000678a1eb861e09409e2985d7",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1201.3626",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "be240e3a01dd1d000678a1eb861e09409e2985d7",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
117547 | pes2o/s2orc | v3-fos-license | Daily rhythms of cloacal temperature in broiler chickens of different age groups administered with zinc gluconate and probiotic during the hot‐dry season
Abstract The aim of the experiment was to evaluate effects of zinc gluconate (ZnGlu) and probiotic administration on the daily rhythm of cloacal temperature (t cloacal) in broiler chickens of different age groups during the hot-dry season. One-day-old broiler chicks (n = 60) were divided into groups I–IV of 15 chicks per group, and treated for 35 days: Group I (control) was given deionized water; Group II, ZnGlu (50 mg/kg); Group III, probiotic (4.125 × 10^6 cfu/100 mL); and Group IV, ZnGlu (50 mg/kg) + probiotic (4.125 × 10^6 cfu/100 mL). Air dry-bulb temperature (t db), relative humidity (RH), and temperature-humidity index (THI) inside the pen, and t cloacal of each broiler chick were obtained bihourly over a 24-h period on days 21, 28, and 35 of the study. Values of t db (32.10 ± 0.49°C), RH (49.94 ± 1.91%), and THI (38.85 ± 0.42) obtained were outside the thermoneutral zone for broiler chickens, and suggested that the birds were subjected to heat stress. Application of the periodic model showed disruption of the daily rhythm of t cloacal in broilers on day 21, which was synchronized by probiotic administration. The administration of probiotic or ZnGlu + probiotic decreased, to a greater extent, the mesor and amplitude and delayed the acrophases of t cloacal in broilers, especially at day 35, as compared to the controls. Overall, the t cloacal values in broiler chickens administered with probiotic alone (41.25 ± 0.05°C) and ZnGlu + probiotic (41.52 ± 0.05°C) were lower (P < 0.001) than that of the controls (41.94 ± 0.06°C). In conclusion, probiotic alone synchronized t cloacal of the birds at day 21 and decreased the t cloacal response most, followed by its coadministration with ZnGlu; the antioxidants may be beneficial in modulating the daily rhythmicity of t cloacal and alleviating adverse effects of heat stress on broiler chickens during the hot-dry season.
Introduction
The thermal environmental conditions during the hot-dry season in the Northern Guinea Savannah zone of Nigeria, which prevails from March to May (Dzenda et al. 2011), induce heat stress in pullets (Sinkalu and Ayo 2008), and directly exert adverse effects on the health and welfare of birds (Minka and Ayo 2013;Sinkalu et al. 2015a). Exposure to heat stress affects the circadian rhythms of many physiological variables in livestock, which may disorganize the circadian system, and consequently the productivity, welfare, and health status of animals (Ayo et al. 1998;Piccione et al. 2013;Minka and Ayo 2016b). The thermoneutral zone for poultry is 18-24°C in the tropics (Dei and Bumbie 2011), but the upper limit of this range is often exceeded. In the tropics, the thermoneutral zone for broilers that are 5 weeks old is 18-24°C, whereas for broilers that are 1, 2, 3, and 4 weeks old, the thermoneutral zones are 29-33°C, 30-29.5°C, 26-28.5°C, and 23-27°C, respectively (Meltzer 1983;Scheele et al. 2014). High air dry-bulb temperature (t db ) and high relative humidity (RH), characteristic of the hot-humid season, result in heat stress (Dei and Bumbie 2011). High t db decreases feed intake, live weight gain, and feed efficiency in broiler chickens (Niu et al. 2009;Azad et al. 2010a), and egg production in laying hens (Franco-Jimenez and Beck 2007;Ajakaiye et al. 2010). It increases cloacal temperature (t cloacal ) responses (Chowdhury et al. 2012a,b;Egbuniwe et al. 2015) and may cause heat stress in broiler chickens (Soleimani et al. 2010;Singh et al. 2015). After 21 days of age, heat stress causes a mortality rate of up to 92.4% (Vale et al. 2010), and high susceptibility persists until the broiler chickens attain market age, at days 35-42 (Chepete et al. 2005). Thus, thermal sensitivity of broiler chickens to high t db increases with a rise in body weight (Lin et al. 2004a).
Combating heat stress remains a challenge for the broiler industry in the tropics, which is even aggravated by the changing climatic conditions. The development of novel dietary measures may be beneficial in ameliorating heat stress and enhancing optimum performance in broiler chickens. Harmful effects of different stressors acting on broiler chickens during the hot-dry season may be ameliorated by supplementing the diet of the birds with antistress agents (Erwan et al. 2012) that also possess some antioxidant activity, such as zinc gluconate (ZnGlu) and probiotic (Hasan et al. 2015), which have been shown to suppress oxidative stress and improve the health and growth performance of broiler chickens (Aluwong et al. 2013;Hasan et al. 2015). The effects of ZnGlu alone and its coadministration with probiotic on the daily rhythm of t cloacal in broiler chickens reared during the hot-dry season have not been investigated. The t cloacal is one of the indices of heat stress, reflecting the core body temperature. It indicates the balance between heat loss and heat gain in broiler chickens (Edgar et al. 2013). Thus, changes in t cloacal during heat stress are used to evaluate the degree of adaptation of broiler chickens to hot-dry conditions (Chen et al. 2013), and the level of reactive oxygen species (ROS) in broilers (Azad et al. 2010b), generated in excess during heat stress (Lin et al. 2000).
The aim of this study was to investigate effects of ZnGlu and/or probiotic administration on daily rhythms of t cloacal in broiler chickens of different age groups during the hot-dry season.
Experimental site and thermal environmental conditions
The experiment was conducted at the Department of Physiology, Faculty of Veterinary Medicine, Ahmadu Bello University, Zaria (11°10′N, 07°38′E, altitude 686 m), located in the Northern Guinea Savannah zone of Nigeria. The broiler chickens, after the brooding period, were kept under natural conditions, without artificial control of the microenvironment. They were, thus, subjected to the naturally prevailing thermal environmental conditions of high t db and high RH, characteristic of the peak of the hot-dry season in the zone, in April-May, 2015 (Dzenda et al. 2011).
Experimental birds, management, and administration of zinc gluconate and probiotic
A total of 60 apparently healthy broiler chickens (Arbor Acres), comprising both sexes, were used for the experiment. They were kept under an intensive management system in a standard poultry pen, littered with wood shavings. The broiler chickens were given access to water and feed ad libitum. They were fed with commercial broiler starter (day 0-28) and broiler finisher (day 29-35), produced by Grand Cereals Limited, Jos, Nigeria. The poultry house was made of concrete floor and cement block with aluminum roofing and cardboard ceiling. The dimensions of the pen were 8.4 m × 5.6 m × 1.91 m, and the broiler chickens were stocked at 15 birds/m² (Muniz et al. 2006) in order to obtain higher production volumes. The broiler chickens were randomly divided into four groups (I-IV) of 15 birds each: Control (I), ZnGlu (II), probiotic (III), and combination of probiotic + ZnGlu (IV). Both probiotic and ZnGlu were administered daily to the birds individually, using a 1 mL tuberculin syringe, for 35 days by the oral route, starting at 1 day old. Each broiler chick was tagged on the leg, using a masking tape, for identification and proper recording. The study was approved by the Ahmadu Bello University Committee on Animal Welfare and Use, and the management system adhered to the new European Union (EU) Council Directive 2007/43/EC laying down minimum rules for the protection of chickens kept for meat production (European Commission, 2007).
Zinc gluconate 70 mg (PHARMEDIC JSC: Ho Chi Minh City, Vietnam) was dissolved in 50 mL of deionized water and administered at a dosage of 50 mg/kg (NRC, 1994), whereas 1.5 mL/L of the probiotic (Saccharomyces cerevisiae) (Montajat Pharmaceuticals, Biosciences Division, Dammam 31491, Saudi Arabia) was administered daily at the concentration of 4.125 × 10^6 cfu/100 mL, using the competitive exclusion method for 1 week, according to the manufacturer's instructions.
Thermal environmental parameters
The t db values inside the pen were measured by a dry- and wet-bulb thermometer (Brannan®, Cumbria, England). The t db and RH were recorded every 2 h daily for 3 days, 1 week apart, on days 21, 28, and 35 of the experiment. The thermal environmental parameters were recorded inside the poultry house on each day of the experiment. The temperature-humidity index (THI) for the broiler chickens was determined using the formula of Tao and Xin (2003), where THI = temperature-humidity index for broilers, t db = dry-bulb temperature (°C), and t wb = wet-bulb temperature (°C).
Measurements of cloacal temperature
The t cloacal values were recorded as an indicator of the core body temperature (Sinkalu et al. 2015b), with the aid of a digital clinical thermometer (KRAUSE digital thermometer (R) , DK-5550, Langeskov, Denmark). The t cloacal was measured, concurrently with the thermal environmental parameters for 3 days only, 1 week apart, in order to reduce the adverse effect of stress due to handling on the birds, known to increase the body temperature (Edgar et al. 2013). On each day of the recording, measurements of t cloacal were taken 12 times using standard procedures (Minka and Ayo 2013) over a 22-h period. After gentle catching and restraining the birds, the t cloacal of each bird was taken by inserting the thermometer about 3-cm deep into the cloaca for 2 min and tilting it to ensure direct contact with the wall of the cloaca. The values of thermal environment and t cloacal were recorded concurrently and bihourly from 07:00 to 05:00 h (GMT + 1) on days 21, 28, and 35 of the study.
Statistical analysis
Data obtained were expressed as mean ± standard error of the mean (mean ± SEM). Cosinor analysis was used to determine the t cloacal daily rhythms of individual birds. The mean mesor (rhythm-adjusted mean), amplitude (half the range of excursion, or a measure of the extent of predictable change within a cycle), and acrophase (time of peak) values of the variables of daily rhythm were calculated for each bird and for each time series of the study period. Values were subjected to repeated-measures one-way analysis of variance (ANOVA model-3) and to the cosinor procedure (Refinetti et al. 2007;Piccione et al. 2013), followed by Tukey's multiple comparison post hoc test, using GraphPad Prism 4.0 for Windows (GraphPad Software, San Diego, CA) to compare the differences between the means obtained from the control and treated broilers. Values of P < 0.05 were considered significant.
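A minimal illustration of the single-component cosinor fit used to extract mesor, amplitude, and acrophase is sketched below. It assumes a 24-h cosine model and uses hypothetical temperature values in place of the experimental recordings; scipy's curve_fit stands in for the cosinor/GraphPad software actually used in the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def cosinor(t, mesor, amplitude, acrophase):
    """Single-component cosinor: 24-h cosine peaking at the acrophase (hours)."""
    return mesor + amplitude * np.cos(2 * np.pi * (t - acrophase) / 24.0)

# Bihourly sampling times from 07:00 h onward (12 time points, as in the study);
# times past midnight are expressed as 24 + hour.
t = np.array([7, 9, 11, 13, 15, 17, 19, 21, 23, 25, 27, 29], dtype=float)

# Hypothetical cloacal temperatures (°C) for one bird; real data come from the experiment.
y = np.array([41.0, 41.2, 41.5, 41.8, 42.0, 42.1, 41.9, 41.6, 41.4, 41.2, 41.1, 41.0])

params, _ = curve_fit(cosinor, t, y, p0=[41.5, 0.5, 16.0])
mesor, amplitude, acrophase = params
print(f"mesor={mesor:.2f} °C, amplitude={amplitude:.2f} °C, acrophase={acrophase % 24:.1f} h")
```

Fitting each bird separately and averaging the resulting parameters per group reproduces, in outline, the rhythm characteristics compared between treatments in Table 2.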
Results
Variations in thermal environmental parameters on selected days of the study period
The t db on day 28 varied between 29.00 and 36.00°C, and was not different compared to the values obtained on day 21 (27.00-34.00°C) or day 35 (27.00-35.00°C). There was no significant difference in RH and THI values between days 21, 28, and 35 of the study (Fig. 1). From 13:00 h to 15:00 h, t db and THI varied from 31 to 34°C and from 30.30 to 34.40, respectively (Table 1).
Variations in cloacal temperature of 21-day-old broiler chickens during the hot-dry season
The application of the periodic model showed that the t cloacal of the broilers on day 21 did not exhibit a clear daily rhythm, except for broilers administered with probiotic (Fig. 4). The characteristics of cloacal temperature daily rhythms in broiler chickens of different age groups administered with ZnGlu, probiotic, and ZnGlu + probiotic during the hot-dry season are shown in Table 2.
Variations in cloacal temperature of 28-day-old broiler chickens during the hot-dry season
The characteristics of t cloacal in the control group showed a higher mesor, greater amplitude, and delayed acrophases as compared to the probiotic and ZnGlu + probiotic groups (Table 2). Specifically, the t cloacal value was higher (P < 0.05) in control broiler chickens at 17:00-21:00 h and 1:00 h, but the lowest t cloacal value was obtained in probiotic-treated broiler chickens at 11:00 h. The ZnGlu-treated group had lower values of t cloacal from 19:00 h to 5:00 h (Fig. 3). The probiotic (Fig. 4) and ZnGlu + probiotic (Fig. 5) groups had the lowest t cloacal at each hour of recording, both during the day and night.
Discussion
The results showed that the t db values obtained during the study period were predominantly outside the thermoneutral zone of 18-24°C (Dei and Bumbie 2011) for mature broiler chickens (days 28-42), reared in a hot tropical climate. The t cloacal was measured for 3 days only, 1 week apart, in order to reduce the adverse effects of stress due to handling on the birds, known to increase the body temperature (Edgar et al. 2013). The results of this study showed that thermal environmental conditions of high t db (27.00-36.00°C) and relatively high RH (32-70%), prevailing during the hot-dry season, were unfavorable for the rearing of broiler chickens, and that they induced heat stress. Elson (1995) reported that the ideal values of t cloacal for broiler chickens vary between 41 and 42°C for a comfortable physiological state, and maximal growth rate and feed intake were observed between the ages of 4 and 8 weeks (Yahav et al. 1995). These values of t cloacal concur with the values of 41-41.8°C recorded in this study. Robinson et al. (2016) reported that the ideal t db for broiler chickens within the third week of life is between 26 and 28°C. Similarly, the ideal t db for broilers within the fourth and fifth weeks of life is between 12 and 26°C (Sturkie 1976; Purswell et al. 2012; Sinkalu et al. 2015b). The high THI may render the evaporative cooling mechanism ineffective in the broiler chickens. The results agree with the findings of Purswell et al. (2012), who reported that as THI exceeds approximately 21°C, the performance of birds significantly declines and their body temperature increases. Since heat stress induces excess production of ROS and, consequently, oxidative stress (Lara and Rostagno 2013), the high THI obtained in this study strongly suggests that the birds were subjected to heat stress, especially on day 35, and, consequently, oxidative stress. The finding serves as the rationale to mitigate the adverse effects of heat stress by administration of ZnGlu and/or probiotic, which are potent antioxidants (Zhang et al. 2014;Hasan et al. 2015).
The t cloacal values obtained in the broiler chickens on day 21, except for the probiotic group, did not exhibit daily rhythmicity and had the highest range of 4.40°C, which indicates an inability of the broiler chickens to maintain a stable body temperature at this age. The finding was evidence that the thermoregulatory mechanism of the broiler chickens was more stressed in the control birds than in any other group. The lowest t cloacal range of 1.4°C recorded in the probiotic group showed that probiotic stabilized the t cloacal values in the broiler chickens, thus maintaining the values at this age at a lower range (40.50-41.90°C) than in any other group. It, therefore, appears that the probiotic exerted a thermogenic effect on the broiler chickens and modulated the t cloacal by synchronizing the circadian rhythm of t cloacal at the age of 21 days to a normal daily rhythm. Although the acrophase of the t cloacal at day 21 did not differ between the groups, the finding that the probiotic group showed the lowest t cloacal value at the acrophase period and had the smallest amplitude as compared to other groups suggests that probiotic may be used in synchronizing dysfunctional t cloacal rhythms. This finding demonstrates the interrelationship between circadian clocks and metabolism and opens new possibilities for the adoption of nutritional interventions to modulate the circadian clock's function. This requires further investigations. Probiotic facilitates thermogenic activity by increasing the sympathetic activity of brown adipose tissue, through enhanced brown adipose tissue thermogenesis in rats (Tanida et al. 2008), and increases expression of thermogenic proteins (uncoupling protein-2) in mice (Pothuraju et al. 2016). This is a desirable effect. The results of this study showed that probiotic alone and ZnGlu + probiotic lowered t cloacal values, when compared with the controls. Similarly, Egbuniwe et al. (2015) demonstrated a decrease in t cloacal values in broiler chickens administered with the antioxidants betaine and ascorbic acid during the hot-dry season. Although the mechanism of action of zinc was not elucidated in this study, it may be linked partly to Zn induction of the ultimate antioxidants, metallothioneins, and protection of protein sulfhydryls. It may also be linked to reduction in •OH formation from H2O2 through the antagonism of redox-active transition metals, such as iron and copper (Powell 2000), and of other ROS generated in excess in heat-stressed broilers (Hao et al. 2012). Probiotic, shown to enhance growth in broiler chickens (Aluwong et al. 2013;Zhang et al. 2014), decreased the t cloacal values, apparently by improving the intestinal microarchitecture in terms of villus height and crypt depth in heat-stressed broiler chickens (Silva et al. 2010). Further investigations are required at the molecular level to elucidate effects of probiotics on broiler chickens exposed to heat stress during the hot-dry season.
On day 28, results of t cloacal values showed that ZnGlu and probiotic, either singly or in combination, significantly reduced t cloacal values in broiler chickens, especially from 17:00 h to 5:00 h, indicating the beneficial effect of the antioxidants in combating the adverse effect of heat stress on broiler chickens during the hot-dry season. The result, unlike on day 21, shows a clear ascent in t cloacal during the photophase and a descent during the scotophase. Furthermore, the results showed that the lowest t cloacal range (1.5-1.7°C) values were obtained in the probiotic and ZnGlu + probiotic groups, indicating that probiotic administration stabilized the t cloacal fluctuations and may be most beneficial in normalizing the t cloacal fluctuations in broiler chickens at the age of 28 days. By decreasing the t cloacal values of the birds, the response of the broiler chickens to administration of antioxidants at this age was beneficial. At the age of 28 days, the thermoregulatory mechanisms of broiler chickens are better developed and their metabolic rate increases rapidly, as evidenced by their rapid growth (Singh et al. 2015). The result of this study showed, for the first time, that the antioxidants probiotic and ZnGlu have a tendency to decrease the body temperature of the broiler chickens during the last part of the photophase, when the t cloacal values are known to rise, starting from 17:00 h (Fig. 4). The present findings showed that the antioxidants were able to offset the adverse effects of thermal load on the broilers at an earlier stage. The antioxidants are, therefore, beneficial and recommended in modulating and combating adverse effects of exposure of broiler chickens to heat stress, particularly starting from day 28. At the age of 35 days, when the broiler chickens were due for slaughter, the t cloacal of the control group was well above normal reference values of 40-42°C, whereas probiotic and/or ZnGlu administration decreased t cloacal values, from 07:00 h to 05:00 h. The finding that probiotic and/or ZnGlu groups had smaller amplitude and delayed acrophases demonstrated that probiotic exerted the most potent effect in reducing the t cloacal values of broiler chickens during the hot-dry season. With increase in age of the broiler chickens, the effect of the antioxidant varied; the lowest reduction in t cloacal values was recorded in broiler chickens treated with ZnGlu at 28 days, but at the age of 35 days, the lowest t cloacal value was obtained in broiler chickens given only probiotic (Fig. 4). The finding showed that age is a crucial factor in the manifestation of responses of broiler chickens to the stressful thermal environmental conditions of the hot-dry season. The t cloacal responses showed that broiler chickens administered with probiotic had the least t cloacal value (41.25 ± 0.05°C), indicating that probiotic exerted the most potent decrease in body temperature in 35-day-old broiler chickens.
In general, the overall mean t cloacal values recorded in all the groups were within the established normal physiological range (40-42°C) for broiler chickens in the Northern Guinea Savannah zone of Nigeria (Minka and Ayo 2016a). However, at the age of 35 days, especially during the day time, the thermoregulatory mechanism of the birds was adversely challenged, apparently due to an increase in metabolic rate and the concomitant effect of heat stress. Thus, the t cloacal values obtained at this period were above the normal reference limits. This finding was evidence that the Arbor Acres breed of chickens at day 35, by its inability to maintain homeothermy, has not successfully adapted to the unfavorable thermal environmental conditions in the zone during the day time. Thus, at 28 and 35 days old, it is best to administer ZnGlu and probiotic, respectively, to combat the adverse impact of heat stress on broiler chickens during the hot-dry season. Further studies are required to elucidate the effects of ZnGlu and/or probiotic on other body parameters of broiler chickens involved in the response of the birds to heat stress, especially the hematological, biochemical, and performance indices. The results of this study, for the first time, demonstrated that ZnGlu, and especially probiotic, decreased t cloacal responses in broiler chickens exposed to heat stress during the hot-dry season and, in addition, modulated the circadian rhythm of t cloacal in broiler chickens. Consequently, the antioxidants may be beneficial in reducing economic losses, often incurred by farmers due to heat stress in broiler production. Furthermore, the age of the birds is crucial in determining the best antioxidant to be administered, with ZnGlu alone being the best before the age of 28 days and probiotic alone thereafter.
In conclusion, probiotic alone decreased t cloacal response most, followed by its coadministration with ZnGlu; and the antioxidants may be beneficial in alleviating adverse effects of heat stress on broiler chickens during the hot-dry season. For this purpose, ZnGlu and probiotic are best administered at the age of 28 and 35 days, respectively. | 2017-08-15T02:01:11.415Z | 2017-06-01T00:00:00.000 | {
"year": 2017,
"sha1": "52da0faf64c623b995a4d596b988bd99b81ecf2e",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.14814/phy2.13314",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "52da0faf64c623b995a4d596b988bd99b81ecf2e",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
12387863 | pes2o/s2orc | v3-fos-license | Detection of methoxylated and hydroxylated polychlorinated biphenyls in sewage sludge in China with evidence for their microbial transformation
The concentrations of methoxylated polychlorinated biphenyls (MeO-PCBs) and hydroxylated polychlorinated biphenyls (OH-PCBs) were measured in the sewage sludge samples collected from twelve wastewater treatment plants in China. Two MeO-PCB congeners, including 3′-MeO-CB-65 and 4′-MeO-CB-101, were detected in three sludge with mean concentrations of 0.58 and 0.52 ng/g dry weight, respectively. OH-PCBs were detected in eight sludge samples, with an average total concentration of 4.2 ng/g dry weight. Furthermore, laboratory exposure was conducted to determine the possible source of OH-PCBs and MeO-PCBs in the sewage sludge, and their metabolism by the microbes. Both 4′-OH-CB-101 and 4′-MeO-CB-101 were detected as metabolites of CB-101 at a limited conversion rate after 5 days. Importantly, microbial interconversion between OH-PCBs and MeO-PCBs was observed in sewage sludge. Demethylation of MeO-PCBs was favored over methylation of OH-PCBs. The abundant and diverse microbes in sludge play a key role in the transformation processes of the PCB analogues. To our knowledge, this is the first report on MeO-PCBs in environmental matrices and on OH-PCBs in sewage sludge. The findings are important to understand the environmental fate of PCBs.
Results and Discussion
Concentrations and compositions of MeO-PCBs and OH-PCBs in sludge. The measured concentrations of the targeted analytes in the sewage sludge are summarized in Table 1. The spatial distributions of Σ PCBs, Σ OH-PCBs, and Σ MeO-PCBs in the sewage sludge from the different sampling locations in China were also determined. OH-PCBs were found in 8 sludge samples with a detection rate of 67%. Four OH-PCB congeners, including 3′ -OH-CB-65, 4′ -OH-CB-101, 4′ -OH-CB-18, and 4′ -OH-CB-26, were identified. The total concentrations of the OH-PCBs ranged from < 0.1 to 11.5 ng/g, with a mean of 4.23 ng/g. The dominant congeners were 3′ -OH-CB-65 (mean 49% of the total concentration of OH-PCBs) and 4′ -OH-CB-101 (mean 33% of the total concentration of OH-PCBs). The number of reports on the occurrence of OH-PCBs in abiotic samples is limited. The detected concentrations of OH-PCBs in the sludge of the present study were comparable to those detected in the sediment from Lake Michigan, USA (0.20 to 26 ng/g with a mean of 8.5 ng/g) 10 .
To investigate the relationships among PCBs, OH-PCBs, and MeO-PCBs, the contamination status of the PCBs in the sludge samples was determined. The total concentrations of the PCBs in the sludge samples ranged from 3.0 to 170 ng/g with a mean of 35.8 ng/g and a detection rate of 100%. The dominant PCBs were CB-28, 52, and 101, which were commonly used to indicate the PCB contamination in the environment 32,33 . The results show that low-chlorinated PCBs are the major PCB homologue group residing in sewage sludge in Chinese WWTPs. There were also several peaks of unknown compounds detected in the chromatograms, which might be the metabolites of CB-28, CB-52 or other PCBs, though none of them were identified due to the lack of authentic standards at the time of analysis. This study focused on only ten low-chlorinated OH-PCBs that were found in sediment and commercial PCB mixtures, together with ten homologous MeO-PCBs. More research is needed to identify more potential OH-PCBs and MeO-PCBs in the ambient environment.
The PCBs, OH-PCBs, and MeO-PCBs showed higher levels in Zhejiang Province, Shanghai Municipality, and Guangdong Province. The concentration of PCBs in the surface soil of Shanghai Municipality was found higher than other regions of China 34 . In Zhejiang and Guangdong Provinces, electronic waste recycling activities are considered a very important emission source of PCBs 35,36 . The concentrations of the Σ PCBs were significantly correlated with those of the Σ OH-PCBs (R = 0.755, p < 0.01) and Σ MeO-PCBs (R = 0.762, p < 0.01). The concentrations of the Σ OH-PCBs were also significantly correlated with those of the Σ MeO-PCBs (R = 0.776, p < 0.01). For the individual homologous congeners, the concentrations of 3′ -OH-CB-65 were significantly correlated with those of 3′ -MeO-CB-65 (R = 0.791, p < 0.05). Close correlations were found among 4′ -OH-CB-101, 4′ -MeO-CB-101 and CB-101 (R = 0.762-0.839, p < 0.01). The concentrations of high chlorinated CB-153 were also significantly correlated with those of low chlorinated 4′ -OH-PCB-101 and 4′ -MeO-PCB-101 (R = 0.895-0.946, p < 0.01). A strong correlation between concentrations of two contaminants may suggest transformation relationships and/or common sources. No significant relationship was observed between the concentrations of OH-PCB and the total organic carbon (TOC) content, or between the concentrations of MeO-PCBs and the TOC content in the sludge samples (p > 0.05).
The highest levels of the three compound groups were all in the WWTP from Zhejiang Province. This WWTP, which was located near an electronic waste dismantling area, treated a mixture of domestic and industrial wastewater. Influent and effluent samples in this WWTP were analyzed to explore the possible source of the analytes (i.e., OH-PCBs and MeO-PCBs). PCBs were detected in the suspended particulate matter (SPM) of the influent and effluent at concentrations of 6.9 ng/g and 1.3 ng/g, respectively. PCBs were also found in water of the influent and effluent with concentrations of 679 pg/L and 141 pg/L, respectively. The concentrations of PCBs in the sludge were higher than those in the wastewater. Only one OH-PCB congener, i.e., 3′ -OH-CB-65, was identified in the influent, with concentrations of 0.9 ng/g in the SPM and 65 pg/L in the water. MeO-PCBs were not found in the influent and effluent samples, corroborating their formation in sludge.
The hypothetical precursor of 3′ -OH-CB-65 and 3′ -MeO-CB-65, namely CB-65, was not found in the sludge or wastewater, suggesting that 3′ -OH-CB-65 and 3′ -MeO-CB-65 might not be formed as metabolites of CB-65 in the wastewater treatment process. A previous study reported the presence of several OH-PCBs in the original Aroclors, and found 3′ -OH-PCB-65 to be the most prominent congener in Aroclors 1221, 1242, 1248, and 1254 10 . This indicated that the accumulation of 3′ -OH-CB-65 in sewage sludge was at least partially due to OH-PCB contamination of the original Aroclors. Although PCBs have been banned from use, there is still considerable emission from the disposal of PCB-containing materials 37 . Therefore, WWTPs are possible receivers of PCBs and the coexisting OH-PCBs. The persistence of 3′ -OH-CB-65 should also be of concern, since it was recently detected in the sediment of Lake Michigan. 4′ -OH-CB-101, 4′ -MeO-CB-101, and their parent compound, CB-101, were all detected in the sludge samples. In our previous study, where rice plants were used as the model, CB-61 was biotransformed to 4′ -OH-CB-61 (major metabolite) and 4′ -MeO-CB-61 (minor metabolite) 21 . Moreover, the interconversion between OH-PCBs and MeO-PCBs is an important metabolic pathway. On the basis of these observations, we hypothesized that microbes in sludge play a key role in the formation of OH-PCBs and MeO-PCBs. The results of the exposure study provide compelling evidence for this hypothesis.
Metabolism of PCBs, MeO-PCBs and OH-PCBs by microbes in sludge.
The hydroxylated and methoxylated metabolites of CB-101 and CB-65 in exposed sludge were analyzed. The 4′ -OH-CB-101 and 4′ -MeO-CB-101 were identified after the sludge was exposed to CB-101, whereas 3′ -OH-PCB-65 and 3′ -MeO-PCB-65 were not detected as metabolites of CB-65 (Fig. 2). The metabolic properties of PCBs may depend on the number and location of chlorine atoms on the ring structure of the PCB molecule 38 . Moreover, the hydroxyl and methoxyl were likely to preferentially occur at the para position, which needs the least energy during enzymatic reaction 39 .
The interconversion between OH-PCBs and the related MeO-PCBs mediated by microbes in sludge was observed. Transformations from OH-PCBs to MeO-PCBs and from MeO-PCBs to OH-PCBs both occurred after sludge exposure (Fig. 2). The results were consistent with those of the plant exposure study, further supporting that the recently found metabolic pathway of PCBs is ubiquitous and may reflect real PCB biotransformation in the environment 21 . Moreover, similar hydroxylation and methoxylation pathways of PBDEs in plants and animals have been proposed in previous studies [40][41][42] . The comprehensive results show that a reciprocal transformation between hydroxylated and methoxylated metabolites of other compounds may also exist in the environment.
The conversion percentages, i.e., the mass of the metabolite after 5-day exposure over that of the initial parent compound (M/P), are shown in Table 2. Overall, the demethylation of MeO-PCBs was favored over the methylation of OH-PCBs. Consequently, the concentrations and detection rates of OH-PCBs were higher than those of MeO-PCBs in the collected sludge samples. This may also explain why OH-PCBs have been widely detected, whereas MeO-PCBs have never been observed in the environment. There might be co-elution of isomers in the analysis of sludge samples from WWTPs, which could not be completely avoided due to the large number of isomers and lack of authentic standards. However, the results of these exposure experiments, to some extent, verified the detection of the targeted compounds.
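As a worked illustration of the conversion percentage M/P defined above, the sketch below divides the metabolite mass recovered after 5 days by the 10 μg parent dose used in the exposure design; the metabolite masses shown are hypothetical, not the values reported in Table 2.

```python
def conversion_percentage(metabolite_mass_ug, parent_mass_ug=10.0):
    """M/P: mass of the metabolite after 5-day exposure over the initial parent mass.
    The 10 ug default follows the exposure design described in the Experimental section."""
    return 100.0 * metabolite_mass_ug / parent_mass_ug

# Hypothetical recovered masses (ug) after 5 days.
print(conversion_percentage(0.8))   # e.g., an OH-PCB formed from a MeO-PCB (demethylation)
print(conversion_percentage(0.1))   # e.g., a MeO-PCB formed from an OH-PCB (methylation)
```

Comparing the two percentages in this way is how a preference for demethylation over methylation, as stated above, would show up in the data.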
All the transformation processes occurred rapidly in the sludge, and the metabolites were detected after exposure for only one day (Fig. 2). Generally, the mean residual amount of exposure compounds in sludge slightly decreased, and the mean amount of metabolites gradually increased over 5 days, though no statistically significant difference between the different time points was observed (p > 0.05). The metabolism between OH-PCBs and MeO-PCBs may occur simultaneously in the sludge. The mean recoveries of these compounds ranged from 74% to 83% for the exposure groups. This indicated that these compounds may be utilized by microbes in sludge as carbon sources. Some other unknown metabolites, such as diOH-PCBs may also be generated, though none of them have been identified 30 .
None of the metabolites were found in the blank control, water control or sterile control, suggesting that the microbes in the sludge were responsible for the transformation of PCBs, OH-PCBs, and MeO-PCBs. No cross-contamination was found between the reactors. The purity of the six exposure chemical standards was verified, and there was no undesirable OH-PCB and MeO-PCB detected as impurities that would affect the metabolic results in this study. The metabolic results of PCBs, OH-PCBs, and MeO-PCBs by microbes in sludge are illustrated in Fig. 3. MeO-PCBs may be reaction intermediates in the formation of OH-PCBs from PCBs, making them difficult to detect in the environment. MeO-PCBs may also be the final transformation product of OH-PCBs, though the conversion rate was relatively low.
Source estimation and environmental implications. Two major reasons why MeO-PCBs and OH-PCBs were found in the collected sludge samples in this study are as follows: (i) the levels of PCBs were relatively high in some selected WWTPs, such as in Zhejiang Province, Shanghai Municipality, and Guangdong Province; and (ii) abundant and diverse microbes existed in the sewage sludge, which is a special medium and functioned in the entire metabolic process of PCBs, OH-PCBs, and MeO-PCBs. Several studies show that anaerobic and aerobic processes mediate PCB degradation. Highly chlorinated PCBs are dechlorinated under anaerobic conditions and then mineralized under aerobic conditions. The factors influencing the transformation included the complexity of the PCB congener, the type of microorganism employed, and the interaction among the microorganisms 29,30 . Our ongoing studies include exploring the key microbe species that are involved in the proposed metabolic pathway in this work.
4′ -OH-CB-101 has been found as a major metabolite in various animals [43][44] . Although the concentration of 4′ -OH-CB-101 was below the detection limit in the influent sample, we could not exclude the possibility that a proportion of 4′ -OH-CB-101 might be formed by humans at trace concentration and entered the WWTPs from human excretion. The 4′ -MeO-CB-101 could be generated from both CB-101 and 4′ -OH-CB-101 by microbes in the sludge. The 4′ -MeO-CB-101 was also a potential intermediate that requires further confirmation.
Compared with the conversion rate in the present exposure study, the calculated concentration ratios of both MeO-PCB/PCB and MeO-PCB/OH-PCB were higher in the sludge samples collected from the WWTPs. Several reasons can explain this. First, the microbial reaction in the WWTPs might be more active than in the laboratory. Second, a portion of the parent compounds in the sewage might be discharged with the effluent without full contact with the sludge. Finally, hydrophobic MeO-PCBs were more easily preserved in the sludge than OH-PCBs. This work was not meant to establish the WWTP sludge as the only source of MeO-PCBs and OH-PCBs; however, it is one source. Previous studies have shown that sludge amendment can be a source of elevated levels of a variety of pollutants to agricultural soils 45 . The presence of OH-PCBs and MeO-PCBs in the sewage sludge may therefore be another cause for concern.
In summary, this is the first study on the detection of MeO-PCBs and OH-PCBs in sewage sludge, which is important because MeO-PCBs are a class of previously undiscovered chemicals in the environment. Microbes in sewage sludge play a key role in the transformation of the PCB analogues, including the hydroxylation and methoxylation of PCBs, as well as the interconversion between OH-PCBs and MeO-PCBs. Wastewater treatment plants are overlooked producers of widespread OH-PCBs in the environment. Other than microbial transformation, the potential sources of OH-PCBs and MeO-PCBs in sewage sludge also include the accumulation from original commercial Aroclors and human excretion. Wastewater treatment plants are a possible emission source of OH-PCBs and MeO-PCBs to the surrounding environment.
Experimental section
Materials. The low-chlorinated PCBs (mainly di- to penta-PCBs) are the major PCB homologue group residing in the environment in China 34 . Accordingly, the selected OH-PCB analytes in this study were mainly low-chlorinated congeners, which have been found in sediment samples and original commercial Aroclors at relatively high concentrations 10 . The homologous MeO-PCBs were also selected as targeted analytes. The full names, abbreviations, and chemical structures for the target compounds are shown in Tables S1 and S2. The freshly digested sludge samples (approximately 1 kg for each sample) from the dewatering process were packed in aluminum foil, sealed in kraft bags, and immediately delivered to a laboratory. The samples were then freeze-dried, homogenized, sieved through a stainless steel 100-mesh sieve, and preserved at −20 °C until analysis. Two grams of each sludge sample were used in the determination experiments. The calculations of the concentrations of the targeted compounds in the sludge were based on the dry weight (dw). The influent and effluent water samples (approximately 1 L for each sample) were taken from a WWTP in Zhejiang Province in September 2014, from which a sludge sample was previously collected. These water samples were extracted promptly after centrifugation at 5000 rpm for 10 min, and the entire remaining SPM was also collected for further analysis.
Table 2. Detection of metabolites after exposure of the sludge for 5 days, and their rates of concentration in WWTP sludge. a Mean ± standard deviation (n = 3). b Nondetectable.
Laboratory-simulated sludge exposure. Laboratory-based exposure studies were conducted to identify the possible reason for the occurrence of the targeted chemicals in the sewage sludge. Based on the results of the field investigation, two PCB congeners, CB-65 and CB-101, two MeO-PCB congeners, 3′ -MeO-CB-65 and 4′ -MeO-CB-101, and two OH-PCB congeners, 3′ -OH-CB-65 and 4′ -OH-CB-101, were selected as the exposure compounds. The six compounds were added separately (10 μ g) to 65 mL of laboratory-simulated sewage sludge and mixed thoroughly in a 100 mL brown incubator bottle. Seed sludge (15 mL) was added to each of the incubator bottles as a cosubstrate to begin the digestion 46 . The sewage consisted of yeast extract, meat extract, peptone, urea, (NH 4 ) 2 SO 4 , K 2 HPO 4 , CaCl 2 , MgSO 4 , and trace element solution 47 .
The blank control (in the absence of the exposure compounds), water control (exposure compounds only in deionized water), and the sterile control (exposure compounds in fully sterile sludge) were prepared similarly to the exposure groups. Each of the bottles was placed simultaneously on an incubated shaker-table at 35 ± 2 °C that was kept under the same conditions. The total exposure time was 5 days, which was similar to the common sludge retention time in WWTPs. The sludge of the exposure group was sampled at intervals of 1, 2, 3, 4 and 5 days. At the end of the exposure time, the control groups were sampled. The exposure and control groups were prepared in triplicate, including the ones for different time intervals. The sludge was freeze-dried, homogenized, and stored at − 20 °C prior to analysis. No targeted compounds existed in the simulated sludge that was used in this study before the exposure experiment.
Sample pretreatment and analysis. The sample pretreatment and analysis were adapted from the previously reported method 21 . Briefly, the solid sample (sludge and SPM) was spiked with surrogate standards and ultrasonically extracted for 60 min twice using hexane/MTBE (1:1 v/v; 40 mL). The extracts were combined and evaporated to dryness and redissolved in 50 mL of DCM. Acidified silica gel (10 g) was added, and the mixture was shaken vigorously for 10 min to remove the lipids. The acidified silica gel was then removed via an anhydrous Na 2 SO 4 column (15 g). An additional 40 mL of DCM was used to further elute the compounds. A secondary purification cycle was performed following the same operational steps. Then, sulfur was eliminated by the addition of activated copper powder (2 g). The extract was concentrated to dryness and dissolved with 400 μ L hexane. A half of the extract was transferred into a vial for subsequent analysis of the PCBs and MeO-PCBs by gas chromatography/mass spectrometry (GC/MS). The other half of the extract was dried under a nitrogen stream and redissolved in 200 μ L of acetonitrile for analyzing OH-PCBs with liquid chromatography/tandem mass spectrometry (LC/ MS/MS). Without the commonly used prior derivatization of OH-PCBs 17 , the entire sample preparation was simplified with satisfying sensitivity of the method.
The water samples were extracted using a liquid-liquid extraction method. The water sample (1L) was spiked with surrogates, mixed with 100 mL of DCM, and shaken for 10 min. Then, the DCM was transferred to another glass bottle, and the extraction was repeated twice. The combined extract was concentrated by rotary evaporation to 50 mL, and further purified and analyzed as described for the solid samples.
The quantitative analysis of the PCBs and MeO-PCBs was conducted on a 7890B/5977A GC/MS instrument (Agilent Technologies, Santa Clara, CA, USA) operated with an electron impact source. The GC was fitted with a DB-5 MS capillary column (30 m, 0.25 mm i.d., 0.25 μm film thickness; J&W Scientific, Folsom, CA, USA) with helium as the carrier gas at a constant flow rate of 1.0 mL/min. The oven temperature was initially set at 80 °C, ramped to 140 °C at 10 °C/min, and increased to 300 °C at 2.5 °C/min. The selected ion monitoring (SIM) mode was used for the quantitative determination. The ions that were used to analyze the targeted MeO-PCBs and PCBs are listed in Table S3. The GC/MS chromatograms of PCB and MeO-PCB standards are presented in Figures S1 and S2. A GC with tandem MS (GC/MS/MS) (Agilent 7890B-7000C) was employed to further confirm the identification accuracy of MeO-PCBs. The precursor and product ions of the targeted MeO-PCBs are listed in Table S4. The quantification of the OH-PCBs was performed on an Agilent 1260-6460 LC/MS/MS instrument. A C18 column (100 mm × 2.1 mm, 2.2 μm particle size, Thermo Fisher Scientific, Waltham, MA, USA) was chosen for chromatographic separation and quantification. The mobile phase consisted of acetonitrile and water, used with a gradient elution ranging from 45:55 to 90:10 over 35 min at a flow rate of 0.3 mL/min. The MS was operated with a negative electrospray ionization (ESI) source in multiple-reaction monitoring (MRM) mode. Detailed information on the ion transitions that were monitored for each OH-PCB is provided in Table S5. The LC/MS/MS chromatograms of OH-PCB standards are presented in Figure S3. Another Agilent ZORBAX SB-C18 column (150 mm × 2.1 mm, 3.5 μm particle size) was used for further identification of the analytes. The corresponding mobile phase consisted of methanol and water with a gradient elution ranging from 55:45 to 85:15 over 50 min at a flow rate of 0.3 mL/min.
Quality assurance and quality control. All of the reported data were subject to strict quality assurance and control procedures. No mutual interference was observed in the instrumental analysis of the phenolic and neutral compounds. A procedural blank, a spiked blank, and a sample duplicate were processed in parallel with each batch of six samples. The procedural blank, Na 2 SO 4 , was used to monitor for background contamination levels, and all analytes were under the detection limits. The average recoveries of the PCBs, MeO-PCBs, and OH-PCBs in the spiked samples were 81.3-90.9%, 80.5-91.4% and 73.2-104.2%, respectively, where the relative standard deviation (RSD) was lower than 18% (n = 3). The recoveries of the surrogate standards were 85.1-96.4% for 4′ -MeO-CB-159, and 83.5-102.2% for 4′ -OH-CB-159. Duplicates were included in the sludge sample analysis, and the RSD of the detected concentration was lower than 15% (n = 3). The instrumental calibration was verified by injecting five calibration standards, and the linearity of the calibration curve (R 2 ) was > 0.99. The method limits of detection (MLODs) were calculated at a signal-to-noise ratio of 3. The MLODs for the PCBs, MeO-PCBs and OH-PCBs in the solid samples were 0.2-1.5, 0.1-1.2, and 0.05-0.8 ng/g, respectively. The MLODs for the three groups of compounds in water were 45-180, 60-200, and 20-80 pg/L, respectively. The statistical analysis including the Pearson's correlation analysis was performed using SPSS 18.0 and Origin 8.0. Statistical significance was considered as p < 0.05. | 2018-04-03T01:42:15.545Z | 2016-07-15T00:00:00.000 | {
"year": 2016,
"sha1": "3f0aa66e156ad6625c34df1af1d843497f3709ad",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/srep29782.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "427c712a43536b1e128b993f6e71bb7ac468e1bc",
"s2fieldsofstudy": [
"Chemistry",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
27722528 | pes2o/s2orc | v3-fos-license | Mortality in Swedish patients with Hirschsprung disease
Purpose Hirschsprung disease (HSCR) has previously been associated with increased mortality. The aim of this study was to assess mortality in patients with Hirschsprung disease in a population-based cohort. Methods This was a nationwide, population-based cohort study. The study exposure was HSCR and the study outcome was death. The cohort included all individuals with HSCR registered in the Swedish National Patient Register between 1964 and 2013 and ten age- and sex-matched controls per patient, randomly selected from the Population Register. Mortality and cause of death were assessed using the Swedish National Causes of Death Register. Results The cohort comprised 739 individuals with HSCR (565 male) and 7390 controls (5650 male). Median age of the cohort was 19 years (range 2–49). Twenty-two (3.0%) individuals with HSCR had died at median age 2.5 years (range 0–35) compared to 49 (0.7%) controls at median age 20 years (0–44), p < 0.001. Hazard ratio for death in HSCR patients compared to healthy controls was 4.77 (confidence interval (CI) 95% 2.87–7.91), and when adjusted for Down syndrome, the hazard ratio was 3.6 (CI 95% 2.04–6.37). Conclusions The mortality rate in the HSCR cohort was 3%, which was higher than in controls also when data were adjusted for Down syndrome.
Introduction
Hirschsprung disease (HSCR) is a developmental defect of the enteric nervous system caused by incomplete migration, differentiation, and survival of enteric nervous progenitors. The birth prevalence is 1 in 5000 living newborns [1]. HSCR can be a part of a syndrome, most commonly trisomy 21 (Down syndrome). The etiology is still unknown, but HSCR is a multifactorial disease, probably caused by both environmental and genetic factors [2]. Before the era of possible surgical treatment for HSCR, the mortality rate was very high and only patients with short-segment aganglionosis had any chance of survival. Since the surgical procedure became available in the 1950s, the mortality rate has decreased significantly. Postoperative mortality after the Swenson procedure has been reported to be 2.4% between 1947 and 1986 [3]. The mortality in HSCR patients undergoing one-stage transanal pull-through varies between 0 and 2% [4,5]. Patients with Down syndrome (DS), total colonic aganglionosis (TCA), and Hirschsprung-associated enterocolitis (HAEC) seem to have an increased risk of mortality, as well as patients with anastomotic leakage after the pullthrough [3,6,7]. HAEC is the most threatening complication of HSCR, since morbidity and mortality are possible outcomes [8]. The pathogenesis remains unknown. HAEC occurs in 5-42% of cases and may develop both before and after surgery for HSCR [9].
The aim of this study was to assess the mortality rate among Swedish patients diagnosed with Hirschsprung disease and to compare the mortality rate with an age-and gender-matched cohort.
Study design and settings
This was a nationwide, population-based cohort study covering the observational period from the 1st of January 1964 to the 31st of December 2013. The study exposure was HSCR and the primary study outcome was death. Exposure and outcomes were assessed through linkage between the Swedish National Patient Register and the Swedish National Causes of Death Register. All residents in Sweden are assigned a unique ten-digit personal identification number at birth or immigration, which enables linkage between the national registers.
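A simplified sketch of the register linkage described above is shown below, using pandas as an illustrative tool. The column names and records are invented for the example; the only element taken from the text is the idea of joining the Patient Register and the Causes of Death Register on the personal identification number.

```python
import pandas as pd

# Hypothetical extracts from the two registers; "pin" stands in for the
# personal identification number used as the linkage key.
patient_register = pd.DataFrame({
    "pin": ["A1", "A2", "A3"],
    "icd_code": ["Q431", "Q431", "Q431"],
    "admission_date": pd.to_datetime(["1998-03-02", "2005-07-19", "2011-01-05"]),
})
causes_of_death = pd.DataFrame({
    "pin": ["A2"],
    "death_date": pd.to_datetime(["2006-02-11"]),
    "underlying_cause": ["Q431"],
})

# Left join keeps every exposed individual and flags those who died.
linked = patient_register.merge(causes_of_death, on="pin", how="left")
linked["dead"] = linked["death_date"].notna()
print(linked[["pin", "dead"]])
```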
Data resources/registers
The Swedish National Patient Register contains prospectively collected information from all hospital admissions in Sweden and is maintained by the Swedish National Board of Health and Welfare. The register was initiated in 1964 and it covers all hospitals in Sweden from 1987. The data include gender, age, geographical data, surgical procedures, date of admission and discharge, and primary and secondary diagnosis. The International Classification of Diseases (ICD) is used to register diagnosis. This classification has been modified over the years: ICD-7 in 1964-1968, ICD-8 in 1969-1986, ICD-9 in 1987-1996, and ICD-10 since 1997. From 2001, data on outpatient specialist care were also included in the register. The most recent validation of the register showed that the diagnoses are valid in 85-95% of the cases [10].
The Swedish National Causes of Death Register is also maintained by the Swedish National Board of Health and Welfare. The register was initiated in 1961 and contains information about all deaths in Swedish citizens since then. Data as cause of death according to ICD classification, date of death, age at death, and place of death are recorded in the register for each death.
Participants
The cohort was collected from Statistics Sweden and the Swedish National Patient Register. Data on the exposure, HSCR, were collected from the Swedish National Patient Register (ICD-7: 756.31, ICD-8: 751.39, ICD-9: 751D, ICD-10: Q431) during the study period. A total of 1267 individuals with these ICD codes were found. To confirm that they had HSCR and were not misdiagnosed, each case had to satisfy one of the following inclusion criteria: 1. HSCR as main diagnosis and a surgical intervention number specific for HSCR; 2. admission to a pediatric surgical center at least twice, with a hospital stay of at least 4 days at least once, and HSCR as main diagnosis for both hospital stays; 3. one long admission (≥4 days) at a pediatric surgical center and more than one outpatient visit at a pediatric surgical center with HSCR as main diagnosis.
For instance, we wanted to avoid including neonates with suspected HSCR admitted for rectal suction biopsies, where the biopsies turned out to be negative, or patients admitted only to a hospital without pediatric surgery.
Using these criteria, 528 individuals were excluded, ending up with 739 exposed cases. The unexposed individuals in the cohort were collected from the Swedish National Population Register and comprised ten unexposed individuals for each exposed individual matched for birth year and gender (n = 7390) (Fig. 1).
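The three inclusion criteria listed above can be expressed as a simple per-patient rule, as in the hypothetical sketch below. The data frame layout, column names, and example records are assumptions made for illustration and do not reflect the register's actual structure.

```python
import pandas as pd

# Hypothetical contact-level extract from the patient register.
contacts = pd.DataFrame({
    "pin":            ["A1", "A1", "A2", "A2", "A2", "A3"],
    "main_dx":        ["Q431"] * 6,
    "ped_surg":       [True, True, True, True, True, False],
    "contact_type":   ["inpatient", "inpatient", "inpatient", "outpatient", "outpatient", "inpatient"],
    "stay_days":      [6, 5, 4, 0, 0, 2],
    "hscr_operation": [True, False, False, False, False, False],
})

def meets_inclusion(g: pd.DataFrame) -> bool:
    hscr_main = g["main_dx"].eq("Q431")
    inpat = g[hscr_main & g["ped_surg"] & g["contact_type"].eq("inpatient")]
    outpat = g[hscr_main & g["ped_surg"] & g["contact_type"].eq("outpatient")]
    long_stay = (inpat["stay_days"] >= 4).any()
    c1 = (hscr_main & g["hscr_operation"]).any()   # criterion 1: HSCR-specific operation code
    c2 = len(inpat) >= 2 and long_stay             # criterion 2: >=2 admissions, one >=4 days
    c3 = long_stay and len(outpat) > 1             # criterion 3: one long admission + >1 outpatient visit
    return c1 or c2 or c3

included = contacts.groupby("pin").apply(meets_inclusion)
print(included)  # A1: True (criteria 1/2), A2: True (criterion 3), A3: False
```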
Variables
The study outcome death was defined as any registration of death in the Swedish National Causes of Death Register. The cause of death was based on ICD classification from the Swedish National Causes of Death Register. HSCR is associated with trisomy 21, which was considered a potential bias. Individuals with Down syndrome were identified in both cohorts in the Swedish National Patient Register (ICD8: 759.3, ICD9: 758A, and ICD10: Q90.0-90.9).
Statistical analysis
The association between exposed and unexposed individuals was analyzed with the R program [11]. Categorical data are presented as frequencies or proportions and analyzed with the two-tailed Fisher's exact test. Numerical data are presented as median and range, and the two-sided Mann-Whitney U test was used for analysis. p < 0.05 was considered statistically significant. The hazard ratio was used for calculations of the risk of death, and a logistic regression model, presented as odds ratio (OR) and 95% CI, was used for calculation of changes in death over time.
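The sketch below illustrates, in outline, how the reported comparisons could be set up: a Cox proportional hazards model for the hazard ratio (here via the Python lifelines package rather than the R program actually used) and Fisher's exact test on the death counts. The person-level data are invented; only the death counts (22/739 vs 49/7390) are taken from the study.

```python
import pandas as pd
from lifelines import CoxPHFitter
from scipy.stats import fisher_exact

# Hypothetical person-level data (follow-up times in years); values are illustrative only.
df = pd.DataFrame({
    "time":  [2.5, 19.0, 35.0, 1.0, 22.0, 28.0, 20.0, 18.0, 30.0, 25.0, 40.0, 12.0],
    "death": [1,   0,    1,    1,   0,    0,    1,    0,    0,    1,    0,    1],
    "hscr":  [1,   1,    1,    1,   1,    1,    0,    0,    0,    0,    0,    0],
    "down":  [1,   0,    0,    1,   0,    1,    0,    0,    0,    0,    1,    0],
})

# Cox proportional hazards model: hazard ratio for HSCR, adjusted for Down syndrome.
cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="death")
print(cph.hazard_ratios_)

# Two-tailed Fisher's exact test on the reported death counts (22/739 vs 49/7390).
odds_ratio, p_value = fisher_exact([[22, 739 - 22], [49, 7390 - 49]])
print(odds_ratio, p_value)
```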
Ethics
The Regional Ethics Review Board in Stockholm approved the study.
Key results
This is a large national population-based register cohort study, showing that mortality rate was 3% among the HSCR patients. The risk for death was significantly higher in the HSCR cohort compared to the unexposed cohort, but there was no difference in age at death.
Interpretation
In the literature, mortality among HSCR patients is reported between 0 and 2.4% [3][4][5]. The follow-up time in these studies varies. Our study shows a mortality rate of 3% in data based on a national register with a long-term median follow-up. Data from another population-based study of patients with HSCR between 1990 and 2008 in the North of England showed that 9% of the children died during their first year of life [13]. In our study, four children died within their first year of life (0.5%), indicating a higher survival rate in our cohort. As a speculation, this may reflect changes in recent years, such as early treatment of patients with suspected HAEC, as well as changes in surgical and anesthetic procedures.
Data have shown that trisomy 21, HAEC, and TCA increase the risk for death, and in this study, the risk for death is still significantly increased even when adjusted for Down syndrome [3,6,7]. In addition, assessing the ICD classifications of causes of death indicates a difference between the exposed cohort and the unexposed. For every death, there is at least one ICD classification in the register explaining the main cause of death. Looking at cause of death in the HSCR cohort, four of the patients had HSCR as their main diagnosis. Unfortunately, HAEC does not have its own ICD code. HAEC could potentially have been the underlying cause in patients with HSCR registered as the cause of death. In this study, we were not able to study if TCA increased the risk for death due to lack of clinical data in the national registers.
Limitations
This study was based on prospectively collected national register data, which have previously been shown to have high validity. Since this is a register-based study, no histopathology reports were available to confirm the HSCR diagnosis. To reduce the risk of misclassification, specific inclusion criteria were set in advance to identify the exposure of HSCR. This is a limitation of the study, since we may have included patients without HSCR, but also excluded patients with HSCR. Another limitation is that data on HAEC or the level of aganglionosis cannot be collected from the registers. Since we know that these factors increase mortality among patients with HSCR, it would have been interesting to include these data in a subanalysis.
The control cohort was randomly selected from Statistics Sweden, reducing the risk of selection bias. To decrease the risk of confounding, the controls were matched for birth year and gender. Another confounder is the fact that HSCR is associated with Down syndrome. Individuals with Down syndrome often have other congenital malformations and have an increased mortality rate [12]. We analyzed both unadjusted data and data adjusted for Down syndrome, and could not show any effect on the mortality rate.
Generalizability
Because this is a national population-based study, the results are considered highly generalizable. Individuals with HSCR have an increased risk of mortality compared with an unexposed cohort.
Author contributions Dr. Anna Löf Granström conceptualized and designed the study, analyzed the data, drafted the article and revised the manuscript, and approved the final manuscript as submitted. Professor Tomas Wester conceptualized and designed the study, analyzed the data, critically reviewed and revised the manuscript, and approved the final manuscript as submitted.
Compliance with ethical standards
Funding source This study was supported by the Foundation Frimurare Barnhuset, Her Royal Highness Crown Princess Lovisa Foundation, and the Sällskapet Barnavård Foundation. | 2017-11-04T17:04:47.841Z | 2017-09-07T00:00:00.000 | {
"year": 2017,
"sha1": "4cc7eeb7d92f52b907bd3c36d421e5ad970e7be2",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s00383-017-4150-z.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "f8f323d9df399a2b07587e8bd7c023287f7a122c",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
233818845 | pes2o/s2orc | v3-fos-license | Modeling the Decision Making of Vehicle Control in Case of Emergency Situations Based on Game Theory
Modern transport is a highly energy-intensive system that poses increased danger. Special instructions govern compliance with the rules of operation and use of such systems, ensuring safety in accordance with the requirements, and drivers of any vehicle are trained according to the regulations for that type of transport. The article deals with issues of life safety in flying aircraft, specifically the problem of landing under manual control in the event of emergency situations. A pilot must make the right decision under time pressure: whether to land or to go around. In order to gain experience, it is proposed to model decision making in emergency situations using game theory methods. Game theory usually considers conflicts between opposing parties, such as military operations, sports games, and market relations. Contingencies, referred to as "nature", may occur while operating a vehicle. Although nature does not mount a conscious antagonistic reaction, it can be treated as the opposing side. On the basis of game theory, it is possible to conduct safety simulations for transport control.
Introduction
Ensuring transport safety is an important component of people's lives. It is especially important when flying an aircraft, since in the event of an accident a large number of people are immediately exposed to mortal danger. The safety of the aircraft and the people on board naturally depends on the experience and level of training of the crew. Although modern aircraft are equipped with automation, the pilot is indispensable during landing. The landing is the most critical phase of the flight, and emergency situations often arise in which the role of the crew commander is most significant. The pilot must make the right decision as soon as possible: whether to continue the landing or to go around. When landing in emergency situations, the pilot has to evaluate a number of factors: a change in speed, deviation from the glide path, a change in pitch angle, a strong crosswind, engine malfunction, depressurization, and various conditions on the ground [1][2][3][4]. We propose to use game theory as the mathematical basis for the decision-making rules. Clearly, this approach is intended not for real flight conditions but for training.
A mathematical model based on game theory
Game theory formally refers to the mathematical discipline associated with decision-making in conflict situations arising from the clash of interests of opposing sides [5,6]. The opponents may be participants in sports games, opposing parties in hostilities, or participants in market relations between seller and client. When a problem is formulated on the basis of game theory, the relations between the conflicting parties are described using a payoff matrix with the structure shown in Figure 1.
Figure 1. Payoff matrix.
In the matrix of Figure 1, A and B are the opposing players. They choose strategies Ai and Bj, where i ranges from 1 to m and j ranges from 1 to n. The value aij indicates the payoff, i.e., the gain or loss of one of the players, for the corresponding pair of strategies. To obtain the maximum gain or the minimum loss, each player must adhere to a certain strategy: player A follows the maximin strategy, whose guaranteed value is denoted α (the lower value of the game), while player B follows the minimax strategy, whose value is denoted β (the upper value of the game) [3].
If α and β are equal, then the game is stable. This is a game with a "saddle" point, in which both players know all the moves. If α is not equal to β, then a mixed strategy can be used, and the mathematical expectation of the gain of player A can be evaluated according to the expression

E(A) = Σ_{i=1}^{m} Σ_{j=1}^{n} a_{ij} p_i q_j.   (1)

The elements P = (p1, p2, ..., pm) and Q = (q1, q2, ..., qn) in formula (1) set the probabilities with which players A and B adopt their strategies, respectively [5,6]. For games of dimension 2x2 or 2xn, the solution can be demonstrated geometrically. In the case of a payoff matrix of dimension 2x2, let player A have alternatives A1 and A2 with probabilities p1 and p2, respectively; if p1 is equal to p, then p2 is equal to 1 − p. The expected payoff against strategy B_j (in particular B1) then depends linearly on the probability p:

w_j(p) = a_{1j} p + a_{2j} (1 − p).   (2)

Figure 2 gives an illustrative representation of decision making with a mixed strategy, where player A takes strategies A1 and A2 with probabilities p1 and p2, and the value V corresponds to the maximum of the guaranteed expected payoff. Antagonistic conflicts in pure form take place in cases of deliberate confrontation between opponents, such as military opposition, sports, or market relations. In reality, we can also consider abnormal, unexpected situations as an adversary; in decision theory they are called "nature". Although games with "nature" do not imply conscious opposition, due to uncertainty they can have an even greater negative effect.
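As a minimal numeric sketch of formula (1), with made-up payoff values and probabilities (not taken from the paper), the expectation is simply a bilinear form:

import numpy as np

A = np.array([[4.0, 1.0],   # a_ij: payoff for the strategy pair (A_i, B_j); illustrative values
              [2.0, 3.0]])
P = np.array([0.5, 0.5])    # probabilities of player A's strategies
Q = np.array([0.25, 0.75])  # probabilities of player B's strategies

expected_gain = P @ A @ Q   # sum over i, j of a_ij * p_i * q_j, as in formula (1)
print(expected_gain)        # 2.25 for these illustrative numbers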
Decision-making modeling for aircraft control in a landing
As a rule, aircraft are flown in an automatic flight mode. However, for a number of reasons pilots have to switch to manual control when landing. This may be because not all runways are equipped with an Instrument Landing System; even where one exists, it may be switched off for repair work. Some airfields have glide path angles at which automatic landing may be prohibited. Furthermore, equipment malfunctions are possible. In such cases, the pilot has to perform a visual approach in manual mode, and in an emergency he must decide as quickly as possible whether to land, go around, or divert to another airfield [7,8,9].
Let the following notation be introduced for the decision made: X1 means to land and X2 means to go around. The following notation is introduced for the possible emergency situations: S1 - speed change; S2 - pitch change; S3 - glide path deviation; S4 - engine failure; S5 - onboard equipment malfunction; S6 - depressurization; S7 - fire on board.
Various abnormal situations can lead to various degrees of harm, up to catastrophe. Let a conditional scale with values from 0 to 9 be adopted to assess the severity of damage [5]. The numerical values characterize the consequences that may occur on board after particular decisions are taken, as shown in Table 1: zero means a successful landing and 9 means a catastrophic one (for example, serious consequences that could lead to the loss of the aircraft and its passengers correspond to values 8-9). These values should be formulated by highly qualified experts. The decision table is presented in the form of a payoff matrix in Table 2, where element X corresponds to the alternatives, element S corresponds to the contingencies, and R(x) is the value used for decision making. Consequently, with the values given in the example, the first alternative X1 should be followed, i.e., an attempt should be made to land the aircraft. Based on game theory, the lower bound of the min-max strategy is α = 1 and the upper bound is β = 5. Therefore, a mixed strategy can be used. Let us introduce the system of linear equations (4) for the abnormal situations based on the values of Table 2.
These linear equations can be represented as lines on the graph in Figure 3. The solution region is the space enclosed by lines 2 and 5 and the lower horizontal axis. The maximum value is determined at the intersection of lines w2 and w5, i.e., where w2 = w5. That means 6p = p + 6(1 − p), which gives p = 6/11, or p1 = 6/11 and p2 = 5/11. The mixed strategy will therefore be P* = (6/11, 5/11). Substituting p1 = 6/11 into the equation for w5 gives the maximum value v = 36/11 ≈ 3.27, which also corresponds to the graphical interpretation.
Thus, for the initial numerical values given in Table 2, game theory recommends the adoption of the first alternative [10, 11], which means landing. In this case, the expected damage will be no more than 3.27, which is the minimum value.
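A short script reproducing the numbers quoted above can be useful for training exercises. Only the two binding lines named in the text (w2 and w5) are used here; the full payoff values of Table 2 are not reproduced in this excerpt, so the coefficients below are taken directly from the worked equation 6p = p + 6(1 − p):

def w2(p):
    return 6.0 * p                    # expected damage against situation S2

def w5(p):
    return 1.0 * p + 6.0 * (1.0 - p)  # expected damage against situation S5

p_star = 6.0 / 11.0                   # intersection of the two lines: 6p = p + 6(1 - p)
v_star = w5(p_star)                   # guaranteed expected damage at the optimum

print(p_star, 1.0 - p_star)           # mixed strategy (p1, p2) = (6/11, 5/11)
print(round(v_star, 2))               # 3.27, matching the graphical solution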
Conclusion
Transport safety is associated with human, technical, and environmental factors. Human factors can be associated with inadequate preparation for vehicle operation or with psychophysiological stress, leading to errors. Technical factors are associated with failures, malfunctions, or damage to individual components of the vehicle. Environmental factors include meteorological conditions (thunderstorm, wind, fog, etc.). All of these factors affect driving safety. Pilots are most at risk when landing an aircraft. During a landing under manual control, various emergency situations can arise, and under time pressure the pilot must make the right decision: to land or to go around. To develop these decision rules, it is proposed to model the situations using the methods of game theory under uncertainty [12]. Decision-making modeling using game theory can also | 2021-05-07T00:04:07.522Z | 2021-03-01T00:00:00.000 | {
"year": 2021,
"sha1": "ac35ee468f6195f9962ffa83f4028feb3fdddeea",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1755-1315/666/3/032043",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "94a71ecbdf8635b1a44256ae540f7bdfe56ef89f",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Computer Science",
"Physics"
]
} |
210254069 | pes2o/s2orc | v3-fos-license | Three‐Dimensional Structure of a Cold‐Core Arctic Eddy Interacting with the Chukchi Slope Current
A rapid, high‐resolution shipboard survey, using a combination of lowered and expendable hydrographic measurements and vessel‐mounted acoustic Doppler current profiler data, provided a unique three‐dimensional view of an Arctic anti‐cyclonic cold‐core eddy. The eddy was situated 50‐km seaward of the Chukchi Sea shelfbreak over the 1,000 m isobath, embedded in the offshore side of the Chukchi slope current. The eddy core, centered near 150‐m depth, consisted of newly ventilated Pacific winter water which was high in nitrate and dissolved oxygen. Its fluorescence signal was due to phaeopigments rather than chlorophyll, indicating that photosynthesis was no longer active, consistent with an eddy age on the order of months. Subtracting out the slope current signal demonstrated that the eddy velocity field was symmetrical with a peak azimuthal speed of order 10 cm s. Its Rossby number was ~0.4, consistent with the fact that the measured cyclogeostrophic velocity was dominated by the geostrophic component. Different scenarios are discussed regarding how the eddy became embedded in the slope current, and what the associated ramifications are with respect to eddy spin‐down and ventilation of the Canada Basin halocline. Plain Language Summary A critical feature of the interior Arctic Ocean is the sharp vertical change in salinity between roughly 100‐m to 200‐m depth, known as the cold halocline. This shields the warm Atlantic‐origin water below from mixing upward to the surface and melting the pack ice. The cold halocline is believed to be partially maintained by eddies of cold water emanating from the Chukchi Sea continental shelf. This paper presents measurements from a rapid, high‐resolution shipboard survey of a cold‐core Arctic eddy offshore of the shelf edge, providing a unique three‐dimensional view of the feature. The eddy's core contained water near the freezing point with a high level of nitrate, but the biological activity had largely ceased because the eddy had descended below the part of the water column exposed to sunlight. The eddy was imbedded in the offshore edge of the westward‐flowing Chukchi slope current. Different scenarios are discussed regarding how the eddy became embedded in the slope current, and what the associated ramifications are with respect to disintegration of the eddy and the manner in which the cold water feeds the halocline.
Introduction
The cold halocline is an important part of the water column of the Arctic Ocean, as its enhanced stratification prevents the warm Atlantic Water below from mixing vertically to the sea surface and melting the pack ice (Carmack et al., 2015). It is now well established that the cold halocline is ventilated laterally from the edges of the Arctic basin, rather than vertically from above (Aagaard et al., 1981). In the Canadian sector of the Arctic, dense winter water near the freezing point formed on the Bering, Chukchi, Beaufort, and East Siberian shelves provides source water for the cold halocline (Anderson et al., 2013;Melling, 1993;Muench et al., 1988;Weingartner et al., 1998). The cold water is also high in nutrients, in part because, during formation, regenerated nutrients from the sediments are mixed into the water column (Arrigo et al., 2017;Cooper et al., 1997). Hence the winter water contributes as well to the nutricline of the Canada Basin (Jones & Anderson, 1986), which impacts primary production. Above the cold halocline resides the summer Pacific halocline (e.g. Steele et al., 2004), whose constituent water masses originate from different portions of the Bering Sea shelf during the warm months of the year. These warm waters are not considered in this paper, and from here-on the term halocline refers to the cold halocline, which has an upper portion and lower portion, separated by a salinity of roughly 33.5 (Pickart, 2004).
In order to ventilate the cold halocline, the dense winter water must be fluxed from the shelves into the interior basin. Figure 1 presents a circulation schematic for the vicinity of the Chukchi and Western Beaufort Seas. Pacific water is known to exit the Chukchi shelf through Barrow Canyon in the east (Pickart et al., 2005;Weingartner et al., 2017) and Herald Canyon in the west (Linders et al., 2017;Pickart et al., 2010). Both of these outflows form a shelfbreak jet that flows to the east. The Chukchi shelfbreak jet transports between 0.01 and 0.10 Sv of Pacific water (Corlett & Pickart, 2017;Li et al., 2019), while the transport of the Beaufort shelfbreak jet is in the range 0.02-0.13 Sv (Brugler et al., 2014;Nikolopoulos et al., 2009). Recently it has been documented that Pacific water is advected in a westward-flowing boundary current along the continental slope of the Chukchi Sea (Corlett & Pickart, 2017;Li et al., 2019). This current has been named the Chukchi slope current and is believed to emanate from Barrow Canyon (Spall et al., 2018). Corlett and Pickart (2017) constructed a mass budget of the Chukchi shelf, in which the slope current constitutes the dominant outflow. Transport estimates of the Pacific water component of the current range from 0.50 to 0.57 Sv (Corlett & Pickart, 2017;Li et al., 2019).
There are various mechanisms by which the Pacific water carried by the system of boundary currents stemming from the Chukchi shelf can be fluxed seaward into the basin. This can occur via wind-forced upwelling and downwelling at the shelfbreak. The former process has been studied extensively in the Beaufort Sea and occurs throughout the year during all ice conditions (e.g., Pickart et al., 2009, 2011, 2013; Lin et al., 2016). The offshore Ekman transport during these events carries Pacific water from the surface layer on the shelf into the basin (Ekman depth approximately 45 m, Schulze & Pickart, 2012). Although this water can be as cold as Pacific winter water during much of the year, it is typically too fresh (and light) to ventilate the halocline. Upwelling has also been observed on the Chukchi slope, although there is less evidence of an offshore surface Ekman circulation (Li et al., 2019). Downwelling, on the other hand, does transport Pacific winter water offshore in the density range of the upper halocline. This has been demonstrated in the Canadian Beaufort Sea (Dmitrenko et al., 2016) and in the Alaskan Beaufort Sea (Foukal et al., 2019).
Another mechanism for transporting Pacific winter water offshore is via eddies. Halocline eddies are a ubiquitous feature of the Canada Basin and are commonly observed by a variety of measurement platforms (e.g., Manley & Hunkins, 1985; Plueddemann et al., 1999; Muench et al., 2000; Mathis et al., 2007; Kawaguchi et al., 2012; Zhao & Timmermans, 2015; Fine et al., 2018). The vast majority of the eddies are cold-core anti-cyclones with lateral scales on the order of the Rossby deformation radius, which is between 10 and 15 km in this region (Zhao et al., 2014). The eddies that reside in the northern part of the Canada Basin are generally shallow (centered above 80 m) and are believed to be spawned via baroclinic instability of the hydrographic front that divides Canadian Arctic waters from Eurasian Arctic waters (Timmermans et al., 2008). By contrast, the cold-core anti-cyclones observed in the southern portion of the Canada Basin are deeper (centered below 80 m), saltier, and denser. These features are believed to last up to a year before spinning down, and their population has increased in recent years (Zhao et al., 2016).
The deeper cold-core eddies found in the southern Canada Basin are thought to emanate from the two canyons on the outer Chukchi shelf or from the boundary currents that emerge from these canyons. Numerical, laboratory, theoretical, and observational studies have argued that the dense water flowing down Barrow Canyon should form anti-cyclonic eddies, either through sidewall friction (D'Asaro, 1988), flow-topography interactions (Cenedese & Whitehead, 2000;Chao & Shaw, 2003), or baroclinic instability (Pickart et al., 2005). Pickart and Stossmeister (2008) present MODIS satellite ice images showing a train of anti-cyclones being generated from the canyon outflow, which provides observational support for these mechanisms. Presumably the same argument applies to the Herald Canyon outflow. Indeed, Pisareva et al. (2015) observed a cold-core anticyclone of Pacific winter water situated directly offshore of the mouth of the canyon.
It is also believed that Pacific winter water anti-cyclones are spawned from the shelfbreak jet of the Chukchi and Beaufort Seas via baroclinic instability. This process was investigated by Spall et al. (2008) who used mooring observations to assess the stability characteristics of the Beaufort shelfbreak jet and to calculate the mean-to-eddy energy fluxes. This implied that baroclinic instability was active, and the numerical model they employed showed how the unstable boundary current readily formed eddies with the same characteristics of those observed in the basin. The Chukchi shelfbreak jet should be similarly unstable, and Pickart et al. (2005) presented evidence of a cold-core anti-cyclone being spawned from the current. It remains to be determined if the Chukchi slope current can form Pacific water eddies. Corlett and Pickart (2017) showed that the potential vorticity structure of the current satisfies the necessary condition for baroclinic instability, and the current undergoes meanders which are suggestive of an unstable current. The eddies spawned from the canyon outflows and boundary currents are represented schematically in Figure 1. Pickart et al. (2005) argued that, in order for the locus of these eddies to be the dominant ventilation source of the Canada Basin upper halocline, 100-200 eddies must be formed each year. This does not seem unreasonable in light of the large numbers of cold-core anti-cyclones observed in the basin (e.g. Zhao et al., 2014).
While anti-cyclonic cold-core eddies have been observed extensively in the Canada Basin, most notably using the ice-tethered profiler database, to date there have been no surveys revealing the full threedimensional structure of one of these features. In addition, no studies have investigated the role of the Chukchi slope current in either generating or influencing the eddies. In this paper we present results from a high-resolution shipboard survey of a cold-core anti-cyclonic Pacific water eddy. The feature was situated on the Chukchi continental slope to the northwest of Barrow Canyon, adjacent to the seaward edge of the Chukchi slope current. The eddy, and its immediate surroundings, was mapped using a combination of expendable probes, lowered instrumentation, and underway sensors. The uniform horizontal grid spacing in both longitude and latitude provided an unprecedented three-dimensional view of the feature. We begin with a description of the data, followed by an analysis of the eddy's water mass, kinematic, and dynamical structure. The Chukchi slope current is then investigated in an effort to better understand the interaction between the current and the eddy. Finally, we discuss some of the implications of our findings.
Hydrographic Data
The in situ data used in this study were collected in September 2004, on the USCGC Healy, as part of the Western Arctic Shelf Basin Interactions program. The ship sampled both the Chukchi and Alaskan Beaufort Seas, but the geographical focus of the present study is the Chukchi continental slope to the south of the Northwind Ridge ( Figure 2). Different aspects of this eddy have been reported on earlier, addressing the off-shelf flux of carbon (Mathis et al., 2007), the age of the eddy determined by radium dating (Kadko et al., 2008), and the different zooplankton species contained within the feature (Llinás et al., 2009). Here we focus mainly on the physical attributes of the eddy.
The eddy was initially revealed by occupying a series of expendable-bathythermograph (XBT) sections across the Chukchi continental slope (sections x1-x5; Figure 3a). Only the western-most line (x1, which was occupied first) showed any evidence of an eddy. After completing section x5, we steamed back to section x2 and did a back-and-forth XBT line (x6), in order to pinpoint the location of the eddy core and determine its along slope length scale. Using this information, we laid out a high-resolution grid to be occupied as quickly as possible using expendable conductivity-temperature-depth (XCTD) probes ( Figure 3b). Due to inventory constraints, Healy's CTD package was used to complete Transect 1 and extend XCTD Transect 2 (green squares). The CTD casts were taken to 300 m, and no water sampling was done in order to save time.
The average station spacing of the eddy grid was 5 km, and it took approximately 24 hr to complete. This resulted in a synoptic snapshot encompassing the eddy with uniform spatial coverage. This is the only such survey resolving the complete three-dimensional structure of an Arctic eddy of which we are aware. Approximately 4 hr after the XCTD survey was completed, a CTD section was occupied across the center of the eddy. This took 25 hr to complete and included water sampling for dissolved oxygen, total CO 2 , nutrients, total alkalinity, chlorophyll, dissolved and particulate organic matter, and salinity. At six of the stations a multi-net cast was done to sample for zooplankton. Many of the biochemical aspects of the eddy are reported elsewhere (Mathis et al., 2007;Kadko et al., 2008;Llinás et al., 2009).
The CTD used on the Healy was a Sea-Bird 911+ with dual temperature and conductivity sensors mounted on a 24-place rosette with 12-L Niskin bottles. Laboratory calibrations were done on the temperature sensors, and the conductivity data were calibrated using the bottle salinity data. The accuracies were deemed to be 0.001°C and 0.007 for temperature and salinity, respectively. Additional CTD variables included transmissivity and fluorescence, although these were not calibrated. The final CTD data were used to create 1-db averaged downcast files. Kadko et al. (2008) describe the processing and calibration of the XCTD data. The XCTD temperature, salinity, and depth are deemed accurate to within 0.02°C, 0.04, and 1 m, respectively. The final XCTD data were used to construct 2-m averaged profiles of temperature and salinity. Both the CTD and XCTD data were used to calibrate Healy's multi-beam system, which produced the bathymetry used in the "area of detail" figures (Figures 3b, 6, 9, and 11).
Figure 2. Study domain and place names. The bathymetry is from IBCAO v3 (Jakobsson et al., 2012). The area outlined in red is shown in Figure 3. The yellow shaded area on the Chukchi shelf is where the wind stress curl was averaged to construct the time series of Figure 15.
Vertical sections of various properties were constructed using Laplacian-Spline interpolation with a vertical grid spacing of 5 m and horizontal grid spacing of 1 km, where the meridional distance is relative to the latitude of 73.24°N (which is just south of the XCTD survey minimum latitude). Lateral plots were constructed using the same interpolator, with a grid spacing of 0.01°in latitude and 0.02°in longitude. The lateral maps do not include the hydrographic or velocity data from central CTD section because, as noted above, it took approximately 25 hr to occupy the section after the eddy survey was completed.
Velocity Data
Vessel-mounted acoustic Doppler current profiler (VMADCP) data were obtained from Healy's 75-kHz phased-array Ocean Surveyor instrument. Attitude information (heading, pitch, and roll) was provided by an Ashtech ADU2-3DGPS receiver, and the ship's position was determined by a Trimble Centurion p-code DGPS system. Processing and merging of these data streams resulted in calibrated, earth-referenced profiles of horizontal currents from about 20 m below the surface to a maximum depth of 400 m, every 2 min in 15-m vertical bins. The reader is referred to Münchow et al. (2006, 2015) for details about the system and its overall performance. As part of the quality control of the velocity data set, we sorted the data within each 2-min interval and discarded extreme values from the record. We thus forced the data distribution towards a normal distribution for which the standard deviation decreases as N −1/2 , where N is the number of independent estimates (pings). With a single ping error of about 14 cm s −1 and N = 50 pings within each 2-min ensemble, we estimate absolute random errors to be about 2 cm s −1 . For the analysis we created 10-min averages of the ensembles.
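The quoted ensemble error follows directly from the N**(-1/2) scaling; a one-line check with the numbers given above:

import math

single_ping_error = 14.0                                   # cm/s
n_pings = 50                                               # pings per 2-min ensemble
print(round(single_ping_error / math.sqrt(n_pings), 1))    # ~2.0 cm/s, consistent with the text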
Based on mooring data from Chukchi continental slope, tidal velocities in this region are small (<2.2 cm s −1 ; Li et al., 2019). Nonetheless, the major barotropic tidal signals were removed from the VMADCP profiles using the Oregon State University Arctic tidal model, which has a resolution of 5 km and predicted similarly small tides (<2 cm s −1 ; Padman & Erofeeva, 2004;Llinás et al., 2009). Sections of relative geostrophic velocity were calculated from the dynamic height relative to the sea surface using the gridded CTD data. These velocities were subsequently interpolated to the original grid, then made absolute by referencing them to the analogously gridded VMADCP velocities. In particular, for each grid point, the vertically averaged relative geostrophic velocity was matched to the vertically averaged cross-track VMADCP velocity over their common depth range. This resulted in vertical sections of absolute geostrophic velocity along each transect.
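A hedged sketch of the referencing step described above is given below; the profile shapes and array names are illustrative only and are not taken from the original processing code. The idea is to shift each relative geostrophic profile so that its vertical mean matches the vertical mean of the cross-track VMADCP velocity over their common depth range:

import numpy as np

def reference_geostrophic(v_rel, v_adcp):
    """v_rel, v_adcp: 1-D velocity profiles on a common depth grid (NaN where no data)."""
    common = ~np.isnan(v_rel) & ~np.isnan(v_adcp)
    offset = np.mean(v_adcp[common]) - np.mean(v_rel[common])
    return v_rel + offset                     # absolute geostrophic velocity profile

# Toy example (m/s) on a 5-m depth grid:
depth = np.arange(0.0, 250.0, 5.0)
v_rel = -0.05 * np.exp(-depth / 100.0)        # surface-intensified relative geostrophic shear
v_adcp = v_rel + 0.02                         # same shear plus an unknown barotropic offset
v_adcp[depth > 200.0] = np.nan                # no ADCP coverage below ~200 m
v_abs = reference_geostrophic(v_rel, v_adcp)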
Biochemical Data
Chlorophyll a and phaeopigments were determined fluorometrically (Holm-Hansen et al., 1965) by filtration through 25-mm Whatman GF/F filters as outlined in Evans et al. (1987). Samples were collected from the CTD casts and immediately filtered. The filters were placed in vials on ice, sonicated in 90% acetone, and extracted for 24 hr. Extracted fluorescence was measured before and after acidification (10% HCL), with a Turner Designs model AU-10 fluorometer calibrated with commercially purified Chlorophyll a (Turner Designs).
Sample methods for nutrients have already been described in detail elsewhere (see Codispoti et al., 2005). In brief, nutrient analyses (phosphate, silicate, nitrate + nitrite, urea, ammonium, and nitrite) were performed on an ODF-modified 6-channel Technicon AutoAnalyzer II. The samples collected from CTD casts were generally analyzed within a few hours after sample collection. Methodologies and modification for the individual nutrient species are also described in detail in Codispoti et al. (2005).
Wind and Surface Geostrophic Velocity Data
Timeseries of wind stress curl over the Chukchi shelf were constructed using 10-m wind fields from the ERA-Interim reanalysis provided by the European Center for Medium-Range Weather Forecasts (Berrisford et al., 2009). The data have a temporal and spatial resolution of 6 hr and 0.75°, respectively. The surface geostrophic velocity during the time period of the eddy survey was provided by Copernicus Marine and Environment Monitoring Service (http://marine.copernicus.eu/). This product consists of daily gridded fields with a resolution of 0.25°in latitude and longitude, based on data from multiple altimeter missions.
Eddy Hydrographic Characteristics
Cross-slope vertical sections from the XCTD grid reveal that the feature was a cold-core eddy centered vertically near 150-m depth (Figure 4), embedded within the halocline, roughly confined to the density layer 26.4-27.2 kg m−3. On the western side (Transect 6, Figure 4 top row) there is only a slight widening of the density layer, while on the eastern side (Transect 3, Figure 4 bottom row) the cold layer is only ~30-m thick, and there is little to no widening of the density layer. At these cold temperatures the density is dictated by salinity, and only the transect through the eddy center shows a noticeable signature in the isohalines, which spread apart from the core absolute salinity of 33.3 g kg−1. This core value corresponds to the upper halocline in the southern Canada Basin (Melling, 1998; Pickart, 2004).
The cold water within the eddy corresponds to newly ventilated winter water (NVWW), which is formed via convection in the Bering Sea (e.g. Muench et al., 1988) and Chukchi Sea (e.g. Pickart et al., 2016), and is colder than −1.6°C and saltier than 31.5 (e.g. Pisareva et al., 2015; Corlett & Pickart, 2017). The other type of winter water is known as remnant winter water (RWW), which is NVWW that has warmed either by mixing or via solar heating (Gong & Pickart, 2016). It should be noted that most of the NVWW is typically flushed off of the Chukchi shelf by late summer (Shroyer & Pickart, 2019).
The CTD section through the center of the cold-core eddy (Figure 3b, magenta squares) provides additional information about the feature (Figure 5). Transmissivity within the eddy is lower than the surrounding water (Figure 5a). Some of this is likely due to suspended sediments since the dense NVWW flows along the bottom of the Chukchi shelf as it progresses northward (prior to eddy formation near the shelf edge). However, the elevated values of fluorescence (Figure 5b) and dissolved oxygen (Figure 5c) indicate recent biological activity. It is well documented that primary production on the Chukchi shelf is strongly tied to the presence of winter water (e.g., Lowry et al., 2015, 2018). This is because winter waters are generally high in nutrients. In fact, the NVWW has the highest levels of nitrate on the Chukchi shelf during early summer. The eddy also contained elevated levels of nitrate sufficient to spur primary production (Figure 5d). However, the depth of the core was well beneath the euphotic zone (typically <25 m at this time of year), implying that photosynthesis completely ceased once the eddy left the edge of the shelf and descended to its equilibrium depth.
The lack of photosynthesis is consistent with the low levels of chlorophyll within the eddy (not shown), while there was an enhanced phaeopigment signal (Figure 5e). This implies that the chlorophyll cells in the feature were either dead or in the process of dying, which is expected when nutrients are drawn down or, in this case, when access to sunlight is cut off. It should be noted, however, that phaeopigments do fluoresce, which accounts for the signal in Figure 5b. Using radium isotope data, Kadko et al. (2008) estimated the age of the eddy to be on the order of months (i.e., the time since it left the shelf). The chlorophyll to phaeopigment differential that we measured is consistent with this time frame (i.e., significantly shorter than a year). In addition, the weak stratification of the eddy core (Figure 5f), and the fact that it contains NVWW, suggests that the water in the feature was ventilated earlier in the year during the winter months, as opposed to the previous winter. This is because NVWW is modified fairly quickly into RWW (Gong & Pickart, 2016). This supports the radium age estimate as well.
A lateral map of the mean temperature within the density layer bounding the eddy (26.4-27.2 kg m−3, shown in the vertical sections of Figures 4 and 5) indicates that the XCTD survey completely encompassed the feature (Figure 6). The eddy's core, defined here as within the −1.6°C isotherm, has a quasi-circular shape, with a zonal diameter of ~19 km and a meridional diameter of ~14 km. It is located over the deep continental slope, centered on the 1,000-m isobath, roughly 50 km seaward of the shelfbreak. The water surrounding the eddy displays some patchiness in temperature, particularly to the west and south, which could be a reflection of mixing processes as the eddy spins down (see the Discussion section).
Figure 6. The −1.6°C contour, taken to delimit the core of the eddy, is highlighted red. The expendable conductivity-temperature-depth stations are marked by the blue squares, and the conductivity-temperature-depth stations are marked by the green squares. The transect numbers are labeled along the top. The bathymetry is from the ship's multi-beam system.
Figure 7. The data points within the core of the eddy are highlighted red (see Figure 6 for where the core of the eddy is situated). The thin grey lines are contours of potential density (kg m−3). The thick black line shows the division between the newly ventilated winter water (NVWW) and remnant winter water (RWW); see text for details. The freezing point is marked by the dashed blue line.
A conservative temperature-absolute salinity (Θ/S_A) diagram characterizes the water in the survey region between the bounding isopycnals of the eddy (Figure 7). As noted above, the water in the core of the eddy is NVWW. The coldest water in the eddy is predominantly confined to the density layer 26.6-26.8 kg m−3, and, at the center of the feature, the temperature is near the freezing point (−1.8°C at this salinity). Even outside of the eddy some of the water is NVWW, although most of the surrounding water is slightly warmer RWW, which is the dominant water mass of the cold halocline in the Canada Basin. As the eddy spins down, the anomalously cold NVWW in its core will moderate to RWW.
The rapid, high-resolution XCTD grid allows us to present the first three-dimensional view of an Arctic coldcore anti-cyclone. In Figure 8 we show the topography of the bounding density layers 26.5 and 27 kg m −3 (these surfaces are slightly more restrictive than those used above in order to highlight the deflection of the isopycnals), where the viewer is looking to the southwest. The maximum layer thickness is 85 m, compared to a thickness of 37 m outside of the eddy. Note that the entire feature is slanted in the vertical; that is, the density surfaces are shallower on the onshore side of the eddy versus the offshore side. This background density tilt is also evident in the vertical sections of Figure 4. It is due to the presence of the Chukchi slope current, as explained below.
Eddy Kinematics and Dynamics
The vertical coverage of the VMADCP data extended on average to approximately 225-m depth, which enabled us to capture the threedimensional velocity structure of the eddy and the surrounding flow (see Figures 3a and 3b for the lateral coverage of the VMADCP data). Figure 9 shows the vertically averaged velocity within the density range 26.4-27.2 kg m −3 (the same density layer as in Figure 6) in relation to the thickness of the layer. The azimuthal anti-cyclonic flow of the eddy is evident. However, it appears that the circulation of the feature is asymmetric, with enhanced flow on the southern side of the eddy versus the northern side. In addition, there is strong flow in both the western (sections 5-7) and eastern (section 3) parts of the domain. These aspects of the circulation are due to the fact that the eddy is embedded in the offshore part of the Chukchi slope current.
To demonstrate this more clearly, in Figure 10 we present vertical sections of absolute geostrophic velocity corresponding to the same three transects shown in Figure 4. The azimuthal flow of the eddy is clearly evident in Transect 2 through the center of the feature (compare to the temperature section through the center of the feature, Figure 4, middle row). However, in all three transects of Figure 10 the strongest westward flow is in the southern portion of the section above 50 m, which is the signature of the Chukchi slope current. At this time of year the slope current is surface-intensified and extends to 150-200-m depth (Corlett & Pickart, 2017; Li et al., 2019). Based on data from a year-long mooring array across the Chukchi Slope, the mean transport of the current in early fall is 1.0 ± 0.48 Sv. This compares well to our September survey data; the transports of the slope current in Transects 6 and 3 (away from the eddy core) were both 0.83 Sv, where transports were calculated over the domain 0-21 km in distance and 0-185 m in depth.
Figure 9. Vessel-mounted acoustic Doppler current profiler velocity averaged within the density layer 26.4-27.2 kg m−3 (black vectors; the key is located in the lower right). The color represents the thickness of the density layer, and the red line denotes the core of the eddy (see Figure 6). The expendable conductivity-temperature-depth stations are marked by the blue squares, and the conductivity-temperature-depth stations are marked by the green squares. The transect numbers are labeled along the top. The bathymetry is from the ship's multi-beam system. The 400 and 600 m bathymetry contours are thicker, highlighting the sharp bend in bathymetry.
As discussed above, the vertical sections of hydrographic properties for Transects 6 and 3, through the western and eastern sides of the eddy, respectively, show relatively little signature in the density field but a clear signal of cold NVWW (Figure 4). The absolute geostrophic velocity sections on the two sides of the eddy show a much more muted signal than the central transect ( Figure 10). On the western side ( Figure 10, Transect 6) there is just a hint of positive flow on the northern side of the eddy, while on the southern side there is no extremum of negative flow; the flow of the slope current masks any signature of this half of the eddy. On the eastern side ( Figure 10, Transect 3), one would not notice that there is an eddy signature at all. These sections clearly demonstrate that the eddy is caught in the seaward side of the Chukchi slope current. Figure 9 indicates that the strong westward flow of the slope current in the southern portion of the domain (Transects 1-4) bends to the northwest on the western side of the domain (Transects 5-7). This deflection of the slope current is likely in response to the northward diversion of the isobaths due to Hanna Canyon (see Figures 2 and 3a).
In order to isolate the velocity signature of the eddy, we attempted to remove the signature of the slope current from the VMADCP data. The layer-averaged velocity vectors on Transect 4 (Figure 9) display the signature of a Rankine vortex for the northern half of the feature. This structure of intrahalocline Arctic eddies has been noted previously (Zhao et al., 2014). In particular, the eastward velocity increases from the eddy center until a maximum at the edge of the eddy (indicated by the red line in Figure 9). We assume therefore that the northern half of the eddy for Transect 4 is not significantly influenced by the slope current. We thus mirror the eddy signature about its center and subtract it from the full velocity section. This leaves values of 0 cm s−1 north of the eddy's center and an undisturbed signature of the slope current in the southern half of the section. Following this, we subtract the undisturbed slope current signal from the southern halves of Transects 1, 2, and 4, which encompass the strongest part of the eddy. (We are unable to objectively remove the slope current signature in Transect 3 and after the current bends to the northwest in Transects 5-7.) A more symmetrical eddy signature with peak azimuthal speeds of 8 cm s−1 appears in the layer-averaged velocity plot once it is isolated as described above (Figure 11; by definition it has to be symmetrical at Transect 4). A three-dimensional view, displaying both the eddy velocity field and the temperature field, is shown in Figure 12. At the topmost surface (100 m) there is little indication of the eddy: no NVWW is present, and there is no consistent swirl signature. As one progresses downward through the feature (125, 150, 175 m) the cold core becomes evident, as does the anti-cyclonic azimuthal flow. At the bottom of the feature (200 m) the swirl speed remains strong. This is consistent with the modeling results of Spall et al. (2008) that show that the velocity signal of cold-core Arctic eddies often extends deeper into the water column than the property signature.
Figure 10 (caption fragment): see Figure 3b for the locations of the three transects. The viewer is looking to the west. Positive flow is to the east. Station numbers are marked along the top (Station 119 is a conductivity-temperature-depth cast). The highlighted density contours correspond to the layer averages in Figures 9 and 11. The Chukchi Slope Current is labeled, as is the center of the eddy.
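A schematic version of this mirror-and-subtract decomposition is sketched below, assuming the layer-averaged velocity is antisymmetric about the eddy center (Rankine-like swirl) and that the northern half of the transect is free of the slope current; the function and grid are illustrative, not the actual processing code:

import numpy as np

def isolate_eddy(v_layer, i_center):
    """v_layer: layer-averaged along-slope velocity on a transect (index 0 = southern end);
    i_center: index of the eddy center."""
    v = np.asarray(v_layer, dtype=float)
    eddy = np.zeros_like(v)
    eddy[i_center:] = v[i_center:]                    # northern half taken as pure eddy signal
    for k in range(1, min(i_center, v.size - 1 - i_center) + 1):
        eddy[i_center - k] = -v[i_center + k]         # mirror with a sign flip (antisymmetric swirl)
    slope_current = v - eddy                          # residual attributed to the slope current
    return eddy, slope_current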
Previous studies have demonstrated that intrahalocline eddies are in cyclostrophic balance for large Rossby numbers (near 1) and in approximate geostrophic balance for small Rossby numbers (Fine et al., 2018; Zhao et al., 2014). To assess the dynamical balance in the eddy sampled here, we evaluated the cyclogeostrophic equation

v^2/r = −fv + (1/ρ) ∂p/∂r,   (1)

where v is the eddy azimuthal velocity, r is distance from the eddy core, f is the Coriolis parameter (1.3 × 10^−4 s^−1), p is pressure, and ρ is the water density. In equation (1) the first term on the left is the centrifugal acceleration, the second term is the Coriolis acceleration, and the third term is the pressure gradient force.
We computed the three terms for Transect 2 (the XCTD line that passes through the eddy's center), averaged between the isopycnals 26.4-27.2 kg m−3 (the same density layer as in Figure 6). We find that the cyclogeostrophic velocity, the full solution to equation (1), is dominated by the geostrophic component (the balance of the two right-hand terms in equation (1); Figure 13a, where both velocity terms are referenced to 250 m). This is not surprising in light of the relatively weak swirl speeds of the feature (Figure 11). The Rossby number can be calculated as Ro = ζ/f, where in cylindrical coordinates the relative vorticity ζ = (1/r) ∂(rv)/∂r. Figure 13b shows how Ro varies across Transect 2. The maximum value on the two sides of the feature is 0.4, suggesting a geostrophic balance. This is consistent with the warm-core eddy sampled by Fine et al. (2018), which had a similarly small Rossby number and in which the geostrophic term accounted for most of the cyclogeostrophic velocity.
Figure 11. Same as Figure 9 except that the slope current signature has been removed (see text for details).
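A hedged sketch of these diagnostics is given below. It assumes the reconstructed form of equation (1) above, writes the pressure gradient term as f times a geostrophic azimuthal velocity v_g, and uses an idealized swirl profile rather than the survey data:

import numpy as np

f = 1.3e-4                                         # Coriolis parameter (s^-1), as quoted above
r = np.linspace(1e3, 15e3, 200)                    # distance from the eddy center (m)
v_g = -0.08 * (r / 7e3) * np.exp(1.0 - r / 7e3)    # idealized anticyclonic geostrophic swirl (m/s)

# Regular root of v**2 + f*r*v - f*r*v_g = 0; it reduces to v ~ v_g when |v| << f*r.
v_cg = 0.5 * (-f * r + np.sqrt((f * r) ** 2 + 4.0 * f * r * v_g))

zeta = np.gradient(r * v_cg, r) / r                # relative vorticity, (1/r) d(r v)/dr
Ro = zeta / f                                      # Rossby number
print(float(Ro.min()), float(Ro.max()))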
Summary and Discussion
In this study we have presented a unique three-dimensional view of an Arctic cold-core eddy located roughly 50-km seaward of the Chukchi Sea shelfbreak over the 1,000-m isobath. It had a zonal diameter of 19 km and a meridional diameter of~14 km, with cold NVWW in its core centered at 150-m depth. Elevated levels of nutrients, fluorescence, and dissolved oxygen indicated biological activity within the eddy. However, the fluorescence was due to phaeopigments instead of healthy chlorophyll, indicating that photosynthesis was no longer active. This is to be expected since the eddy descended to a depth well below the euphotic depth after leaving the shelf. The chlorophyll to phaeopigment differential is consistent with the eddy age being on the order of months, which was previously deduced using radium dating (Kadko et al., 2008).
The shipboard velocity data indicated that the eddy was embedded in the offshore edge of the westward-flowing Chukchi slope current, which was surface-intensified with a volume transport of roughly 0.8 Sv, similar to previous autumn measurements of the current. Subtracting out the slope current signal, we demonstrated that the eddy velocity field was symmetrical with a peak azimuthal speed of order 10 cm s−1. Its Rossby number was small (~0.4), consistent with the fact that the measured cyclogeostrophic velocity was dominated by the geostrophic component. The swirl speed extended below the depth of the property core of the eddy, in line with previous modeling results (Spall et al., 2008).
Using the surface velocity field derived from the satellite absolute dynamic topography during the period of observation (see section 2.4), we are able to map out the path of the Chukchi slope current in relation to the location of the eddy (Figure 14). The eddy is situated precisely where the slope current is diverted to the north (see also Figure 9). As such, one might be tempted to think that the anti-cyclonic eddy is altering the path of the current (causing the slope current to partially wrap around the eddy). It is more likely, however, that the slope current is bending northward in response to the local bathymetry of Hanna Canyon, which causes the isobaths to bend to the north.
Ours are not the only measurements revealing an intrahalocline eddy embedded in the seaward side of the Chukchi slope current. Kawaguchi et al. (2012) reported on a large, warm-core anti-cyclone similarly situated (the authors referred to the strong westward flow as a southern branch of the Beaufort Gyre). How do such eddies end up here? As noted in the introduction, the Chukchi shelfbreak jet is highly unstable and readily spawns anti-cyclonic eddies (Pickart et al., 2005; Spall et al., 2008). However, the relative locations of the Chukchi slope current and shelfbreak jet suggest that eddies formed by this process would end up getting entrained into the onshore side of the slope current, not its offshore side; that is, as the eddies progress northward, they encounter the southern side of the slope current. It should be remembered, however, that the slope current is strongly influenced by wind. In particular, Li et al. (2019) demonstrated that the current is enhanced when the wind stress curl is positive on the Chukchi shelf. This is due to the decrease in sea surface height (SSH) on the shelf relative to the slope, which results in a stronger northward SSH gradient and hence stronger westward flow. By contrast, when the wind stress curl is negative on the Chukchi shelf, the slope current is retarded or absent due to the increase in SSH on the shelf versus the slope, which weakens or flattens the northward SSH gradient. Thus, eddies emanating from the Chukchi shelfbreak jet after a negative wind stress curl event might be able to progress offshore before the slope current re-establishes itself.
To investigate this, we computed the time series of wind stress curl over the same region of the Chukchi shelf considered by Li et al. (2019; see Figure 2 for the region) for the time period of June to August 2004, the hypothesis being that the eddy was formed some months before it was measured. Figure 15 shows that there were plenty of periods over this 3-month time span when the slope current might have been disrupted, allowing the eddy to move well offshore. As discussed by Spall et al. (2008), the shelfbreak eddies are originally formed as dipole pairs that self-propagate seaward. Fairly soon after formation the cyclone partner spins down, which is why there is a preponderance of anti-cyclones in the basin.
Figure 14. Surface velocity vectors derived from the absolute dynamic topography for the day of the eddy survey. The red line denotes the approximate path of the Chukchi Slope Current. The blue contour denotes the edge of the eddy (the −1.6°C contour from Figure 6). The bathymetry is from IBCAO v3 (Jakobsson et al., 2012). The study area outlined in red is shown in Figure 3.
It is perhaps more likely that the eddies embedded on the seaward side of the Chukchi slope current emanated from Barrow Canyon. As discussed in the Introduction, different mechanisms are believed to result in eddy formation from the canyon outflow, and eddies have been observed seaward of the canyon mouth. Hence, as the slope current forms from westward-turning flow leaving the canyon, it could influence a previously formed eddy residing farther offshore. Based on our slope current velocity data (5-10 cm s−1 at the offshore edge of the current, Figure 10), in this scenario the eddy would have been advected to our measurement site within 1-2 months, consistent with the estimated age of the eddy.
If this were a regular occurrence, it would mean that some fraction of the turbulent outflow from Barrow Canyon is also transported to the west, in addition to the Pacific water directly forming the slope current. Furthermore, any eddies stemming from the Chukchi shelfbreak jet under positive (or weak) wind stress curl conditions would likely follow a similar pathway via the inshore side of the slope current. One implication is that the primary geographical region of halocline ventilation in the Canada Basin (via cold-core eddies plus the Chukchi slope current) is to the west of Barrow Canyon. However, it is unknown how many eddies emanating from the canyon are entrained by the slope current, and, as noted in the Introduction, cold-core eddies are also spawned to the east by the Beaufort shelfbreak jet, although the transport of the Beaufort shelfbreak jet is roughly five times smaller than that of the Chukchi slope current. Further investigation is necessary to elucidate precisely where the halocline ventilation occurs as well as the degree to which the ventilation is accomplished via mesoscale eddies versus the advective source of the Chukchi slope current. Assessing this will be challenging in part because we do not know how often eddies are formed during the winter months.
Previous papers have addressed the spin-down of eddies in the Canada Basin. Ou and Gordon (1986) considered the effect of pack ice in retarding the eddy flow due to ocean-ice stress and estimated a decay timescale of 1-10 years. Using this methodology, Zhao et al. (2014) estimated the lifetime of their observed small, coldcore anti-cyclones to be from 0.9 to 5 years. In our case we note that the velocity signature of the eddy is absent at a depth of 150 m ( Figure 12) as well as shallower than this (not shown), so it is unlikely that ice friction will impact its ultimate decay. Padman et al. (1990) considered the effect of background dissipation on the spin-down of a small cyclone located in the cold halocline layer and deduced a decay timescale on the order of 10 years. We are unable to assess the impact of small-scale mixing using our data but have no reason to suspect that the conclusion would be different than that reached by Padman et al. (1990).
Another mechanism for spinning down a cold-core anti-cyclone is due to convergence/divergence of the radial flow which can lead to vertical and horizontal exchanges of water masses. This in turn would flatten the displaced isopycnals of the eddy. Zhao et al. (2014) assessed this process for a representative eddy in their data set and deduced a much shorter decay time of roughly 7 months. Following the same methodology for the radial velocities along Transect 2 through the center of our eddy, we come up with a spin-down time on the order of half a year, which is comparable to Zhao et al.'s (2014) result. As noted above, there is NVWW outside of the eddy (i.e. the patch of cold water to the west of the eddy in the lateral map of Figure 6), which could be a reflection of this process.
A final thing to consider is, does the interaction of an eddy with the Chukchi slope current impact the eddy's decay process and spin-down time? For the cold-core eddy observed here, the onshore side of the feature was in contact with the slope current, while the offshore side of the eddy was not (or was less impacted). This would suggest that the eddy will become sheared at some point; recall that the zonal diameter of the eddy we measured was longer than its meridional diameter. This in turn implies that it would spin-down more quickly as the lateral gradients are enhanced. Further work is necessary to address this and other ramifications of eddy-slope current interactions. | 2019-11-14T17:13:38.558Z | 2019-11-01T00:00:00.000 | {
"year": 2019,
"sha1": "ceea86d22568a86639127e2d2fb88c8fe5fd2187",
"oa_license": "CCBY",
"oa_url": "https://agupubs.onlinelibrary.wiley.com/doi/pdfdirect/10.1029/2019JC015523",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "63679ede4443900f1e7c8514019b6a869e289fd0",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Geology"
]
} |
38286773 | pes2o/s2orc | v3-fos-license | Correction: Implications of an Absolute Simultaneity Theory for Cosmology and Universe Acceleration
This article was republished on February 3, 2015, to replace the missing Equation 5 in the online version and to correct errors in the legend of S1 Table in both the online and PDF versions. The publisher apologizes for the errors. The S1 Table legend has now been moved to within the file itself. Please download this article again to view the correct version. The originally published, uncorrected article and the republished, corrected article are provided here for reference.
Introduction
The Absolute Lorentz Transformation (ALT) is an alternate Lorentz transformation that has similar kinematics to special relativity (SR), but is distinct in describing absolute simultaneity and invoking a preferred reference frame (PRF) relative to which time dilation and length contraction occur in a directional manner [1][2][3]. The key insights in this study are the following. ALT is compatible with current experimental data if it is embedded in the theoretical framework that PRFs are locally associated with centers of gravitational mass. Experimental strategies that focus on light speed anisotropies and time dilation in relation to local centers of gravitational mass can distinguish between the ALT framework and SR. The ALT framework is more compatible with the interpretation of cosmological redshift as kinematic Doppler shift than with the conventional interpretation of photons being modified directly by the expansion of space. Combining the ALT framework with the kinematic interpretation of cosmological redshift creates a scenario in which Hubble expansion is linked to time dilation on a universal scale. Analysis of Type Ia supernovae in the context of this scenario provides an alternate explanation for the reduced luminosity of high redshift Type Ia supernovae that does not invoke an acceleration in the rate of universe expansion.
The Absolute Lorentz Transformation
The Lorentz transformation equations were first described by J. Larmor [4], H.A. Lorentz [5], and J.H. Poincaré [6] as directional transformations for objects in motion relative to the ether as a PRF. Einstein's 1905 paper describing SR independently derived the Lorentz transformation with the stipulation that all inertial reference frames are equivalent [7]. In SR, Lorentz transformations are reciprocal, and occur in the context of differential simultaneity.
R. Mansouri & R.U. Sexl created a widely used test theory for SR [3]. The Mansouri & Sexl (MS) test theory describes transformations between an "ether frame" S (with space-time coordinates X, T) and a moving inertial reference frame S′ (with space-time coordinates x, t). The transformation equations include arbitrary functions of velocity: 1/a(v) is the time dilation factor; b(v) is the length contraction factor; and ε(v) is determined by the convention of clock synchronization. The MS test theory is described in an unconventional format in which t is calculated relative to T and x (rather than T and X).
SR and ALT have similar kinematics. The form of the Lorentz transformation equation that is generally used in experimental settings to calculate time dilation is identical to the ALT time dilation equation. As described in Einstein's 1905 paper [7], the Lorentz time dilation equation t′ = (t − vx/c^2)/(1 − v^2/c^2)^(1/2), with the value x = vt, produces t′ = t(1 − v^2/c^2)^(1/2), which is the ALT equation (3). Mansouri & Sexl noted that ALT is the very relation one would write down if one has to formulate a theory in which rods shrink by a factor (1 − v^2/c^2)^(1/2) and clocks are slow by a factor (1 − v^2/c^2)^(1/2) when moving with respect to a PRF [3].
Differences between ALT and SR
ALT differs from SR in several respects. ALT maintains absolute simultaneity for all observers, while SR implies local differential simultaneity [2,3]. The corollary to this is that SR maintains light speed isotropy between inertial reference frames, while ALT implies anisotropies in the one-way speed of light, although the two-way speed of light for ALT is c [3,9]. The two theories also differ in that time dilation between inertial reference frames is reciprocal for SR and directional for ALT [2,3]. With directional time dilation, observers in a PRF will observe that clocks moving relative to the PRF run slower, while observers in non-PRF reference frames will observe that clocks in the PRF run faster (i.e., exhibit time contraction) [2,3]. The directional time dilation specified by ALT is absolute, and clocks can be compared directly for time differences that reflect the extent of time dilation. Further, time dilation in the two theories is calculated relative to different reference frames [2,3]. In SR, Lorentz transformations are calculated reciprocally using the relative velocity between inertial reference frames. In contrast, ALT is calculated relative to the PRF for each observer.
SR does not preclude an absolute reference frame. Lorentz and Poincaré believed in the existence of an absolute reference frame in the context of the Lorentz transformation [6,10]. However, unlike ALT, SR cannot distinguish between an absolute reference frame and other inertial reference frames. This is because SR predicts equivalent, reciprocal time dilation and length contraction between any two inertial reference frames, including a potential absolute reference frame.
Throughout the remainder of this study, 'PRF' will not be used in the sense of an absolute reference frame, but rather in the broader sense to refer to any reference frame relative to which Lorentz/ALT transformations occur in a directional manner.
Evidence supporting directional time dilation relative to the ECI

Experimental evidence from Hafele & Keating indicates that the Earth-centered non-rotating inertial reference frame (ECI) can act as a local reference frame to direct time dilation (i.e., a PRF in the broader sense). In their experiment, atomic clocks were flown in airplanes eastward and westward around the Earth, and time dilation was calculated relative to the ECI [11,12]. Flying eastward, in the direction of the Earth's rotation, increased the speed of the airplane relative to the non-rotating ECI; while flying westward, in the direction opposite of the Earth's rotation, produced a slower speed relative to the ECI. The Lorentz/ALT time dilation formula was applied to the velocity of the ground-based clocks relative to the ECI and to velocities of the flying clocks relative to the ECI in order to calculate the extent of time dilation [11]. The flying clocks recorded the expected loss of time on the eastward flight, and the expected gain of time on the westward flight when compared to the ground-based clocks. More accurate repetitions of the Hafele & Keating experiment have similarly obtained the expected time dilations for movements relative to the ECI [13][14][15].
In the Hafele & Keating experiment, the time dilation was absolute and directional, as the flying and ground-based clocks showed different elapsed times when brought together for side-by-side comparisons. Hafele & Keating suggested that the directional time dilation arose within the context of SR because objects in non-inertial reference frames experience directional time dilation relative to inertial reference frames [11]. However, the section below will show that absolute directional time dilation is also observed between inertial reference frames.
Satellites of the global positioning system (GPS) are in inertial reference frames because they are in free-fall orbits around the Earth, similar to the inertial reference frame of the ECI that arises from its free-fall orbit around the Sun. It is well established that the ECI functions as a PRF for GPS satellites, with the satellites experiencing directional time dilation based on their velocity relative to the ECI [16]. Clocks on GPS satellites undergo time dilation of ~7 μs per day relative to the Earth's surface, which is calculated by applying the Lorentz/ALT time dilation formula independently to the speed of the satellite relative to the ECI and to the speed of the Earth's surface relative to the ECI [17]. Correcting for the Lorentz/ALT time dilation is essential for proper positioning in the GPS system, as the 7 μs/day difference translates to a localization error of 2.1 km per day [17]. The Sagnac effect, which is important for the communication of GPS satellites with rotating ground-based receivers, is irrelevant to the time dilation experienced by the satellites as they move relative to the non-rotating ECI [16]. The communication between GPS satellites and ground-based clocks continuously reveals the absolute and directional nature of the time dilation.
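As a rough numerical check of the figures quoted above (and not part of the original paper), the short sketch below applies the Lorentz/ALT velocity time dilation factor to representative speeds relative to the ECI; the GPS orbital speed of about 3.9 km/s and the equatorial surface speed of about 0.47 km/s are assumed, typical values rather than numbers taken from the cited references.

```python
import math

C = 299_792_458.0          # speed of light, m/s
SECONDS_PER_DAY = 86_400.0

def velocity_time_dilation(v):
    """Fractional clock slowing, 1 - sqrt(1 - v^2/c^2), for speed v relative to the ECI."""
    return 1.0 - math.sqrt(1.0 - (v / C) ** 2)

v_satellite = 3_874.0      # assumed GPS orbital speed relative to the ECI, m/s
v_ground = 465.0           # assumed equatorial surface speed relative to the ECI, m/s

# Net daily offset between a satellite clock and a ground clock (velocity effect only;
# the larger, opposite-signed gravitational effect is treated separately in the text).
delta_per_day = (velocity_time_dilation(v_satellite)
                 - velocity_time_dilation(v_ground)) * SECONDS_PER_DAY

print(f"velocity time dilation: {delta_per_day * 1e6:.1f} microseconds/day")
print(f"uncorrected ranging error: {delta_per_day * C / 1000:.1f} km/day")
```

Running this gives about 7 microseconds per day and roughly 2 km of accumulated ranging error per day, consistent with the figures cited above.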
The interpretation of cosmological redshift as kinematic movement
In 1929, Edwin Hubble provided evidence that the recession velocities of galaxies increase linearly with distance, thereby inferring that the Universe is expanding [18]. The Hubble constant, recently estimated to be 73±2 km/s/Mpc [19], defines the rate at which objects separate from each other with increasing cosmological distance.
Cosmological redshift (z) can be correlated with the change in universe scale during expansion [20,21]. The lengthening of the wavelength of the cosmic microwave background (CMB) (and its consequent cooling) correlates with the cosmic scale factor a(t) = 1/(1+z) [22]. The conventional interpretation of cosmological redshift is that it arises as the wavelengths of photons are lengthened while they traverse through expanding space [23].
Cosmological redshift can be interpreted as kinematic relativistic Doppler shift by a mathematical treatment of transporting the velocity four-vector from the source to the observer [24], and through analyses of Friedmann-Lemaître-Robertson-Walker (FLRW) models [25,26]. While the kinematic interpretation of cosmological redshift is unconventional, it incorporates a well-characterized mechanism, relativistic Doppler shift, and can also explain the lengthening of light wavelengths with universe expansion. Application of the relativistic Doppler shift equation and the relativistic law of addition of velocities to the kinematic motion of cosmological objects produces the same linkage between the cosmic scale factor and changes in wavelength [27]. The kinematic interpretation of redshift therefore provides an alternate explanation for the observed lengthening of wavelength and cooling of the CMB radiation.
Conditions under which ALT is compatible with experimental evidence
There is a large body of published data that shows no violations of Lorentz invariance for experiments carried out on the Earth or in the local Earth environment [28]. These experiments observed the predicted Lorentz time dilations regardless of the Earth's movement, which would be expected to alter the speed of the experimental instrument relative to an external PRF. With ALT, time dilation is calculated using the velocity of the reference frame relative to the PRF, so in a valid ALT scenario, an external PRF would affect time dilation on the Earth as the Earth moved relative to the PRF. Tests of Lorentz invariance often use the MS test theory to provide a lower limit on the confidence of Lorentz invariance [29]. These lower confidence limits are equivalent to increasingly restricting the movement (drift) of a potential PRF relative to the experimental apparatus [30,31].
Mansouri & Sexl suggested that the CMB frame is the obvious candidate for a possible "ether frame" [3]. However, the CMB cannot be the PRF for a viable ALT, as the movement of the Solar System relative to the CMB (~368 km/s, [22]) greatly exceeds the allowable PRF drift that is calculated using the MS test theory [30,31]. Based on the extensive tests of Lorentz invariance that have been carried out on or near the Earth [28,29], the only viable scenario for ALT is that a PRF must be locally associated with the Earth, in particular, with the ECI.
The requirement for the PRF to be locally centered on the ECI has implications for the concept of the ether. The ether is defined as the medium for the propagation of electromagnetic radiation [32]. The concept of the ether has been considered for more than 100 years, yet during this period, no compelling experimental evidence has supported the existence of a specific medium for the propagation of light. Therefore, the viability of the ether concept is tenuous. The observation of stellar aberration indicates that starlight does not move in the same reference frame as the Earth, and this implies that the ether cannot be dragged/ entrained by the Earth [32]. Both ALT and SR have the same formula for the angle of stellar aberration [33]. Therefore, in the scenario of a valid ALT, the ether cannot be equivalent to the PRF because ALT is only compatible with a PRF that is locally centered on the ECI, and yet the ether, if it exists, cannot be locally centered on the ECI.
The observation of directional time dilation relative to the ECI indicates that the ECI functions locally as a PRF (broadly defined). Both the ECI and GPS satellites are in ''free fall'' inertial reference frames, and yet GPS satellites experience directional time dilation relative to the ECI. This indicates that directional time dilation is not limited to the interaction of non-inertial and inertial reference frames but is also observed between inertial reference frames. It therefore raises the issue of why the ECI functions as a PRF. The force of gravity connects the ECI and the objects that experience directional time dilation as a result of motion relative to the ECI. A plausible hypothesis is that the ECI functions as a PRF because it is the local center of mass with the dominant gravitational field in its local environment. The combination of ALT and PRFs linked to local centers of gravitational mass will be referred to as absolute simultaneity theory (AST).
Experimental approaches to distinguish SR and AST
Mansouri & Sexl state that an 'experimentum crucis' capable of distinguishing between SR and ALT is impossible because both theories have similar kinematics [30]. There are, however, two differences between SR and ALT (in the context of AST) that can be distinguished experimentally.
The first experimentally distinguishable difference between the two theories is that ALT allows anisotropies in the one-way speed of light, while light speed is isotropic with SR [3,9]. However, the designs of experiments to analyze one-way light speeds have been incapable of detecting the light speed anisotropies predicted by the AST framework. With the exception of a space flight experiment that could not distinguish between potential anisotropies in the speed of light and gravitational effects [34,35], all of the modern experiments to detect the one-way speed of light have relied on changes in the Earth's movement to alter the speed of the test equipment relative to a potential external PRF [36][37][38][39][40][41][42][43][44]. The null results of these experiments are compatible with the ECI as the PRF, as the movement of the Earth would not alter the location of the test equipment relative to the ECI.
Experimental approaches using one-way light paths have demonstrated that light speeds are anisotropic when measured from the rotating Earth surface; these approaches include the Michelson-Gale experiment [45,46] as well as other experiments that reveal the Sagnac effect relative to the ECI, including GPS satellite communications [16,47]. The Sagnac effect is consistent with AST because light is predicted to propagate isotropically only in PRFs, but not in reference frames moving relative to a PRF, such as the rotation of the Earth's surface relative to the non-rotating ECI [3]. The Sagnac effect does not conflict with SR because rotational movements are considered to be exempt from the relativity principle [48]. Therefore, current experiments to analyze the speed of light do not distinguish between the two theories.
It is possible to design experiments that would be capable of detecting light speed anisotropies predicted by AST in the context of a proposed gravitational mass-based PRF moving relative to an inertial reference frame. Consider two observers at rest in the heliocentric reference frame who are separated from each other parallel to and near Earth's orbit. When the Earth is next to the observers, they send light signals between themselves so that the light signals move in the direction of the Earth's orbital motion or opposite to the Earth's motion. Viewed from the ECI perspective, the observers are in an inertial reference frame moving past the ECI, and one observer appears to move toward the light signal sent in the direction of Earth's orbital motion, while the other observer moves away from the light signal sent in the other direction. This situation can be considered analogous to the AST perspective on the Sagnac effect, where observers on the rotating Earth move toward or away from light beams that propagate isotropically in the ECI. Just as observers on the Earth surface or in orbit around the Earth calculate light speed anisotropies when sending light signals among themselves [16,[45][46][47], in an AST framework, the heliocentric observers would similarly experience light speed anisotropies: light sent in the direction of the Earth's motion would appear faster than c, and light sent in the direction opposite of the Earth's motion would appear slower than c. The same experiment conducted when the Earth was distant from the two heliocentric observers (so that their main gravitational influence becomes the Sun, with which they are at rest) would predict isotropic light speeds within the AST framework. In contrast, SR predicts isotropic light speeds in all situations.
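To give a sense of the size of the effect this proposed experiment targets, the sketch below (not part of the original study) evaluates the one-way transit times that the AST framework would predict over an assumed 1000 km baseline if the passing ECI, moving with the Earth's orbital speed of about 30 km/s, acted as the local PRF for the heliocentric observers; the baseline length and orbital speed are illustrative assumptions.

```python
C = 299_792_458.0   # speed of light, m/s
V_ORBIT = 29_800.0  # assumed Earth orbital speed relative to the heliocentric frame, m/s
L = 1.0e6           # assumed 1000 km baseline between the two heliocentric observers, m

# AST sketch: light propagates isotropically at c in the (passing) ECI, so observers moving
# at V_ORBIT relative to that frame infer apparent one-way speeds of c + V_ORBIT and c - V_ORBIT.
t_along = L / (C + V_ORBIT)     # signal sent in the direction of the Earth's orbital motion
t_against = L / (C - V_ORBIT)   # signal sent opposite to the Earth's orbital motion
t_isotropic = L / C             # SR prediction for either direction

print(f"SR, either direction : {t_isotropic * 1e3:.9f} ms")
print(f"AST, along motion    : {t_along * 1e3:.9f} ms")
print(f"AST, against motion  : {t_against * 1e3:.9f} ms")
print(f"fractional anisotropy ~ {V_ORBIT / C:.1e}")
```

Over this baseline the two directions differ by roughly two-thirds of a microsecond, a fractional anisotropy of order 10^-4, which sets the timing precision such an experiment would need to resolve.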
The second experimentally distinguishable difference between the two theories is that AST predicts directional time dilation for inertial reference frames moving relative to a proposed PRF [2,3], while SR predicts reciprocal time dilation.
Experiments that utilize atomic clocks traveling in inertial reference frames near a proposed gravitational mass-based PRF can be used to probe for differences in time dilation. For example, clocks could be sent past the Earth in the direction of and opposite to the Earth's orbital motion in linear inertial paths. For each clock, the time dilation due to gravitational effects would be calculated and subtracted from the total observed time dilation to determine the time dilation due to motion. This can be accomplished because time dilation due to gravity (calculated using general relativity, GR) and motion (calculated using the Lorentz transformation/ALT) are, in practice, independent and additive [11,17]. In the proposed experiment, AST predicts that the clock traveling in the direction of PRF motion would experience less time dilation than the clock moving in the direction opposite of PRF motion. This is because the former clock would have a lower velocity relative to the PRF, and the latter clock would have a higher velocity. In contrast, SR does not predict directional time dilation between objects moving in inertial reference frames, and there is no theoretical basis within SR for assigning a different velocity based on the motion of a nearby gravitational mass. The clocks can be considered to be traveling in inertial reference frames because their constant-speed linear trajectories would only be affected to a limited extent by free fall in Earth's gravity, which would also be inertial.
The application of ALT to cosmological data
Historically, SR has not been used extensively in general relativistic cosmology (GRC). This can be attributed in part to the historical view that Minkowski spacetime applies only in situations devoid of mass and energy [49], and the designation of SR as a limiting case of GR that is only valid in small, local settings [50]. These historical considerations would not apply to ALT, which is not encompassed by Minkowski spacetime or current GRC theories.
The Lorentz transformation/ALT time dilation equation functions robustly in conditions that have classically not been associated with Minkowski spacetime. The Lorentz transformation/ALT equation can accurately calculate the time dilation of objects traveling in non-inertial frames [12]. It can also accurately predict the time dilation of muons traveling in a circular cyclotron using only the speed of the muons as input; and this motion is, by definition, accelerated motion [51]. Further, the Lorentz transformation/ALT equation accurately predicts the time dilation of subatomic particles traveling through Earth's atmosphere [52], which is neither empty nor flat, with densities of matter and curvature of space that are significantly higher than that found in intergalactic space. This wide applicability is consistent with ALT for which there is no theoretical basis to limit its application to inertial reference frames.
AST implies universal time dilation
The convention in cosmology is to use a comoving universe coordinate system that expands in sync with the Hubble expansion [49]. However, AST implies that PRFs are linked to centers of gravitational mass, which implies that an AST coordinate system would be non-comoving. In a non-comoving coordinate system, the interpretation of cosmological redshift as kinematic relativistic Doppler shift can be applied to objects separating due to Hubble expansion. In this context, higher redshifts linked to Hubble expansion signify increased velocities of separation between observers (at the time the light is received) and cosmological objects (at the time the light was emitted in the past). Thus objects in the present Universe can be interpreted to have increased kinematic velocities relative to objects in the past. The application of ALT to recession velocities would imply that objects in the present Universe experience time dilation relative to objects in the past. Conversely, when viewed from the present, objects in the past would have experienced time contraction.
Time contraction would have effects on both redshift and luminosity. From the vantage point of our present time scale, photons emitted in the past under time-contracted conditions would have been emitted at a faster rate, with blueshifted wavelengths (as the frequency of the emitted light was increased relative to our time scale).
Universal time dilation implies a non-accelerating universe
Type Ia supernovae (SNe Ia) function as standard candles, and the analysis of their redshift and luminosity has provided unique insights into universe evolution [20]. The effect of time contraction (TC) on the placement of SNe Ia in a Hubble-type diagram will be analyzed using data from the Supernova Cosmology Project (SCP) Union 2.1 compilation [53,54].
The relativistic Doppler shift formula is used to calculate the effective recession velocity (v_er) of SNe Ia based on their observed redshifts.
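The display form of equation (4) did not survive into this version; from the way it is used below, it is presumably the standard relativistic Doppler relation, inverted to give v_er in terms of the observed redshift:

1 + z = [(1 + v_er/c) / (1 - v_er/c)]^(1/2),   so that   v_er/c = [(1 + z)^2 - 1] / [(1 + z)^2 + 1]    (4)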
Based on their apparent magnitudes, SNe Ia at high redshift are separating with velocities greater than c, as expected for an expansion rate based on the Hubble constant [55]. While the relativistic Doppler shift equation (4) will not produce velocities greater than c, it can be used as a conduit between the redshift and time dilation formulas; i.e., v_er is the effective velocity embedded in the redshift value for time dilation calculations. The ALT time dilation formula (3) is used to calculate the time-contraction ratio (TC), which represents the ratio of the number of time intervals for an object emitting light in the past (Δt_e) relative to the number of time intervals for an observer in the present (Δt_o).
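Equation (5) is likewise not shown here; from the definitions just given, it is presumably the ALT time dilation factor evaluated at v_er:

TC = Δt_e / Δt_o = 1 / (1 - v_er^2/c^2)^(1/2)    (5)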
TC increases above 1 as v_er increases, reflecting that at high v_er values, more than one unit of time occurred in the past for every present-day time unit (e.g., for a v_er of 0.6 c, 1.25 s elapsed in the past for every 1 s in the present). Substituting the definition of v_er from equation (4) into equation (5) produces the formula for TC in terms of z.
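Carrying out that substitution and simplifying presumably gives the missing equation (6):

TC = [(1 + z)^2 + 1] / [2(1 + z)]    (6)

As a check, z = 1 corresponds to v_er = 0.6 c and gives TC = 1.25, matching the example above.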
Time contraction on the scale of the Universe is linked to Hubble expansion. A direct link between time contraction and universe expansion can be illustrated by expressing the equation for time contraction (6) in terms of the scale factor a(t).
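Equation (7) is also not shown; writing 1 + z = 1/a(t) in equation (6) presumably yields the time-contraction ratio directly in terms of the scale factor at the time of emission:

TC = [1 + a(t)^2] / [2 a(t)]    (7)

This expression equals 1 when a(t) = 1 (the present) and grows as a(t) decreases, consistent with the later statement that time-contraction effects only become appreciable at higher redshifts.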
While not widely considered, the normal interpretation of Hubble diagrams has the embedded inference that the positions of SNe Ia reflect their distance and luminosity based on the SNe Ia emitting light at their normal rate (e.g., the redshift value denotes the change in redshift from the observed redshift to the normal emission redshift). The effects of time contraction alter the proper placement of SNe Ia on a plot of redshift and distance modulus. Under a time-contraction scenario, the wavelengths of SNe Ia at higher redshifts were blueshifted at the time of emission. Therefore, the light from these SNe Ia underwent a larger change in wavelength (from blueshift to redshift) than is reported in the Hubble diagram. The total change in z value from the time-contracted, blueshifted emission to the observed redshift is given by z_TC, which will be derived below. It is known that

1 + z = f_e / f_o    (8)

where f_e is the inferred frequency of light emitted and f_o is the frequency of light observed. Rearranging equation (8) gives

f_o = f_e / (1 + z)    (9)

The effect of time contraction increases f_e in equation (8) by the time contraction ratio (TC) to give

1 + z_TC = TC f_e / f_o    (10)

Substituting the value of f_o from equation (9) into equation (10), and simplifying, gives

z_TC = TC (1 + z) - 1    (11)

To reflect the larger change in redshift from emission to detection, SNe Ia are shifted to the higher z_TC redshift position (rightward) on the diagram (Fig. 1). Under time-contraction conditions, the rates of photon emissions for SNe Ia in the past were increased when viewed from our current, time-dilated perspective. To compensate for the increased emission rates, SNe Ia are shifted to higher distance modulus values (upward) on the diagram to reflect the lower level of luminosity that would have occurred if the SNe Ia were emitting at the current (non-time-contracted) rate (Fig. 1). This adjustment is necessary because the use of SNe Ia as standard candles inherently requires that all SNe Ia have the same emission rate. The formula for apparent magnitude (m) is

m = -2.5 log10(f_x / f_x0)    (12)

where f_x / f_x0 is the observed flux. Multiplying the observed flux by 1/TC gives the apparent magnitude if the effect of time contraction is removed (m_TC):

m_TC = -2.5 log10((1/TC) f_x / f_x0)    (13)

In 1998 and 1999, two groups showed that SNe Ia with redshifts greater than 0.3-0.4 are dimmer than predicted from the linear application of the Hubble constant [56,57]. This suggested that at earlier times in universe evolution, the rate of expansion was less than that of the Hubble constant. The shift from a slower rate of expansion to the current, faster Hubble constant rate provided evidence for an accelerating universe. In the Hubble diagram, SNe Ia at higher redshifts are located above the Hubble line (Fig. 1). Significantly, in the diagram adjusted for the effects of time contraction, the SNe Ia distribution straddles the Hubble line across all redshift values (Fig. 1).
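To make the two adjustments concrete, the sketch below (not part of the original analysis, which was performed on the SCP Union 2.1 compilation with Prism 5 software) applies the reconstructed equations (6), (11), and (13) to a single example point; the input redshift and distance modulus are made-up illustrative values only.

```python
import math

def time_contraction(z):
    """Time-contraction ratio TC(z), i.e. the reconstructed equation (6)."""
    return ((1.0 + z) ** 2 + 1.0) / (2.0 * (1.0 + z))

def tc_adjusted_redshift(z):
    """z_TC = TC*(1 + z) - 1 (equation 11): total shift from blueshifted emission to observed redshift."""
    return time_contraction(z) * (1.0 + z) - 1.0

def tc_adjusted_distance_modulus(mu, z):
    """Add 2.5*log10(TC) (from equations 12-13) to undo the faster, time-contracted emission rate."""
    return mu + 2.5 * math.log10(time_contraction(z))

# Made-up illustrative point, NOT taken from the SCP Union 2.1 compilation.
z_obs, mu_obs = 0.8, 43.2
print(f"TC    = {time_contraction(z_obs):.3f}")
print(f"z_TC  = {tc_adjusted_redshift(z_obs):.3f}")
print(f"mu_TC = {tc_adjusted_distance_modulus(mu_obs, z_obs):.3f} mag")
```

Both shifts grow with redshift, which is why, as described in the text, the adjustment has essentially no effect on low-redshift SNe Ia but moves the high-redshift points rightward and upward on the diagram.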
Statistical analysis was performed to determine if the distribution of the TC-adjusted SNe Ia is consistent with a linear distribution. In agreement with previous reports [56,57], the conventional Hubble SNe Ia distribution does not lie on a straight line (Wald-Wolfowitz Runs test, P < 0.0001 using either weighted data that incorporates m-M errors from the SCP Union 2.1 compilation, or unweighted data; analyzed with Prism 5 software by GraphPad Software). In contrast, the distribution of TC-adjusted SNe Ia does not statistically differ from the straight line derived from linear regression of the data set (Wald-Wolfowitz Runs test, P = 0.5507 with weighted data, and P = 0.1695 with unweighted data).
To further confirm that the TC-adjusted high-redshift SNe Ia are linear with low-redshift SNe Ia, the high-redshift SNe Ia were compared to a line derived from linear regression of low-redshift SNe Ia. The cut-off for low-redshift SNe Ia was set to z < 0.14 because this is the largest redshift value that contains the same number of SNe Ia in both data sets (194 of the 580 SNe Ia). Comparing the z < 0.14 low-redshift Hubble line to the 100 highest-redshift SNe Ia using the Extra Sum-of-Squares F test shows that the distribution of the high-redshift SNe Ia in the conventional Hubble diagram is statistically different from the low-redshift Hubble line (P = 0.0004 with weighted data, and P = 0.0048 with unweighted data).
In contrast, the distribution of the TC-adjusted high-redshift SNe Ia is not statistically different from the Hubble line (P = 0.4486 with weighted data, and P = 0.7863 with unweighted data). Therefore, adjusting the placement of SNe Ia to account for the effects of time contraction eliminates the statistical support for high-redshift SNe Ia that are dimmer than predicted from linear Hubble expansion.
SNe Ia light curve durations are maintained in the time contraction scenario
SNe Ia have characteristic light curves that increase and decrease in intensity over a set time period. Cosmological time dilation alters the duration of the light curves that are observed on Earth by a factor of 1+z [58,59]. The universal time dilation (UTD) scenario considered here implies that the duration of light curves for distant SNe Ia were time contracted at the time of emission when viewed from our current time scale. A central requirement for this scenario to be valid is that it must match the observed data; in this case, the duration of observed light curves for time-contracted SNe Ia must match the normally-observed duration. This requirement is met because while the duration of the light curve would have been compressed at the time of emission (relative to our time scale), there would be a correspondingly larger cosmological time dilation prior to the light being observed on Earth (as the light traversed from blueshift to redshift).
Changes in the light period correlate directly to changes in the duration of the light curve. To illustrate that the light periods of distant time-contracted SNe Ia have the normal periods upon observation, a specific SN Ia, sn2002fw, is used as an example, with its values listed in S1 Table. The observed light period is thus the same under both non-time-contracted conditions (T_o) and time-contracted conditions (T_oTC), and therefore both conditions will have the same observed light curve duration.
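The specific numbers for sn2002fw from S1 Table are not reproduced here; in general terms, and writing T_e for the light period in the supernova's rest frame, the equality asserted above follows from the reconstructed equation (11):

T_oTC = (T_e / TC)(1 + z_TC) = (T_e / TC) TC (1 + z) = T_e (1 + z) = T_o

The factor 1/TC is the compression of the emitted period when viewed on our present time scale, and (1 + z_TC) is the correspondingly larger cosmological stretching the light undergoes before observation; the two factors cancel, leaving the conventional (1 + z) dilation of the light curve.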
Discussion
This study explores the potential validity of ALT, an alternate Lorentz transformation that is not widely known, and its implications for cosmology when integrated into the AST framework in which PRFs are linked to centers of gravitational mass. The failure to identify violations of Lorentz invariance has been used to support the widely-accepted SR theory. However, these experiments do not invalidate ALT, but rather act to restrict the localization of a potential PRF. Multiple experiments to test SR (analyzing light or subatomic particles moving at high relative speeds) have had the effect of restricting the localization of a putative PRF to the ECI. Complementary time dilation experiments that studied objects traveling at slower speeds for longer durations (e.g., airplanes and satellites) have provided evidence that the ECI acts as a PRF (broadly defined) to direct Lorentz/ ALT transformations. Thus, the first class of experiments provides evidence that the only viable scenario for ALT is a PRF that is locally centered on the ECI, while the second class of experiments shows that the ECI does in fact act as a PRF for Lorentz/ALT transformations. Notably, GPS satellites traveling in inertial reference frames also experience directional time dilation relative to the ECI, and this finding is more compatible with AST than with SR.
The current situation, where there is a lack of compelling experimental evidence that distinguishes between SR and AST, allows one to countenance the possibility and implications of a valid AST. In the context of a valid AST, one can ask why the ECI functions as a PRF. The observation that objects moving in inertial reference frames experience directional time dilation relative to the ECI suggests that inertial reference frame status is not sufficient to confer PRF status. The most compelling hypothesis is that the ECI functions as a PRF because it is the local center of gravitational mass. This suggests that in an AST scenario, PRFs would not have fixed positions in the Universe, but would vary temporally and spatially based on the distribution of gravitational mass. New experimental data is required to definitively distinguish between SR and AST; and if the latter is supported, to inform theoretical models that describe how the effects of PRFs extend spatially and overlap.
The published interpretation of redshifts as kinematic recession velocities suggests that cosmological redshifts arise because cosmological objects in the present Universe move faster than objects in the past due to Hubble expansion. Combining this with ALT leads to a scenario of universal time dilation (UTD) in which the present Universe experiences time dilation relative to the past Universe. When viewed from our present (time-dilated) vantage point, cosmological objects in the past would have experienced time contraction that was associated with increased rates of light emissions and increased frequencies of emitted light. The UTD scenario would apply throughout the Universe, e.g., to observers in other PRFs or at rest with the CMB. The proposed universal nature of UTD is illustrated by equation (7), where the extent of time contraction is described in relation to the scale factor.
UTD has several implications, foremost of which is that the rate of time is not constant, and is linked to the rate of universe expansion. Because the effect of past time contraction includes the blueshifting of emissions (relative to our current time scale), light from distant cosmological objects would have undergone further changes in wavelength prior to reaching us (a greater redshift value); and therefore cosmological objects at high redshift would be older and more distant than currently envisioned.
Currently, the strongest and most direct evidence for an acceleration in the rate of universe expansion is that distant SNe Ia are less luminous than predicted by a linear regression of the Hubble constant [56,57]. In the UTD scenario, SNe Ia that emitted light in the distant past would have experienced time contraction relative to our current time scale. To place time-contracted SNe Ia accurately on a Hubble-type diagram, the positions of the SNe Ia must be shifted to higher z values to reflect the increased change in z between the blueshifted emission and the observed redshift. Additionally, to compensate for the increased rate of light emissions of time-contracted SNe Ia, the SNe Ia must be shifted to higher distance modulus values to reflect the lower level of luminosity that would have resulted if the SNe Ia were emitting light at our current, slower, time-dilated rate. This latter adjustment is required so that all SNe Ia throughout the redshift spectrum have the same emission rate to allow them to function as standard candles with the same initial luminosity. Incorporating adjustments for the effects of time contraction produces a linear distribution of SNe Ia that has the effect of eliminating the signature of an accelerating universe. Given that the SNe Ia data is a direct readout of universe expansion [60], a linear distribution would have the effect of invalidating universe acceleration within the z < 1.4 period.
Dark energy is proposed to drive the accelerated universe expansion, but its composition and mechanism of action are unknown. As stated in a review of dark energy: "… through most of the history of the universe dark matter or radiation dominated dark energy by many orders of magnitude. We happen to live at a time when dark energy has become important."; "The universe has gone through three distinct eras: radiation dominated, z ≳ 3000; matter dominated, 3000 ≳ z ≳ 0.5; and dark energy dominated, z ≲ 0.5."; and "… we expect that its effects at high redshift were very small, as otherwise it would have been difficult for large-scale structure to have formed…" [60]. The prevailing theory, while it can accurately model the effects of dark energy, is mechanistically not understood at multiple levels, including the nature of dark energy, and why it has significantly increased activity only in the most recent era. The UTD scenario is much simpler: universe expansion occurred at the Hubble constant through at least z < 1.4 with no evidence for universe acceleration. In this scenario, the apparent non-linearity of high-redshift SNe on a Hubble diagram arises from a failure to incorporate the effects of time contraction, as only at higher redshifts are recession velocities large enough to produce appreciable time contraction effects.
Experimental support for a role of dark energy in universe acceleration comes from the analysis of four types of data: SNe Ia luminosity and redshift; the distribution of galaxy clusters; baryon acoustic oscillations; and the analysis of cosmic shear caused by gravitational lensing [60]. Of these, the SNe Ia data provides the most direct evidence for universe acceleration [60]. Notably, the signature of dark energy has only been observed with data for distant, high-redshift events. In contrast, the expected effect of dark energy on expansion within the solar system has not been observed [61]. This apparent contradiction does not apply to the UTD scenario, where the effects of time contraction manifest only at higher redshifts. Note that while the UTD scenario provides an alternate view of the recent increased effects of dark energy, it does not address the mechanistic basis for linear Hubble expansion, which may involve the cosmological constant/dark energy.
One argument against UTD is that it has the potential to disrupt current GRC theories, which are able to accurately model cosmological observations. In this regard, it should be noted that GRC theories have substantial inherent flexibilities that allow the theories to model diverse observations. The flexibility in these models derives from the ability to alter parameter values; and it is not unusual for these values to change in response to new experimental observations [62]. Historically, new GRC theories have been created when the prevailing theories were no longer able to accurately model new cosmological data, e.g., the creation of the ΛCDM model allowed the incorporation of the recently proposed expansion in the role of dark energy [63]. Presumably, if UTD is confirmed, it could be incorporated into future cosmological models.
In summary, current experimental evidence fails to definitively distinguish between SR and AST. This study shows that a valid AST would have significant implications for cosmology, including universal time dilation, increased ages and distances for high-redshift objects, and a linear, non-accelerating rate of universe expansion during the most recent era.
Supporting Information
S1 Table. SNe Ia data with modifications for time contraction. | 2016-05-12T22:15:10.714Z | 2015-03-20T00:00:00.000 | {
"year": 2015,
"sha1": "fe1cb20af06e8e76d03a1d19764053f7d73c703f",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0120187&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "13e80d17fec5363ab4e902fd104fd56d48090160",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Medicine"
]
} |
29923950 | pes2o/s2orc | v3-fos-license | Structure and expression analysis of the OsCam1-1 calmodulin gene from Oryza sativa L.
Calmodulin (CaM) proteins, members of the EF-hand family of Ca2+-binding proteins, represent important relays in plant calcium signals. Here, OsCam1-1 was isolated by PCR amplification from the rice genome. The gene contains an ORF of 450 base pairs with a single intron at the same position found in other plant Cam genes. A promoter region with a TATA box at position -26 was predicted and fused to a gus reporter gene, and this construct was used to produce transgenic rice by Agrobacterium-mediated transformation. GUS activity was observed in all organs examined and throughout tissues in cross-sections, but activity was strongest in the vascular bundles of leaves and the vascular cylinders of roots. To examine the properties of OsCaM1-1, the encoding cDNA was expressed in Escherichia coli. The electrophoretic mobility shift observed when the protein was incubated with Ca2+ indicates that recombinant OsCaM1-1 is a functional Ca2+-binding protein. In addition, OsCaM1-1 bound the CaMKII target peptide, confirming its likely functionality as a calmodulin. [BMB reports 2008; 41(11): 771-777]
All eukaryotic cells utilize changes in Ca2+ concentration as a second messenger to generate cellular responses to extracellular stimuli. In plants, Ca2+ signals are utilized in response to a diverse array of stimuli and have been implicated in transducing signals from environmental changes into adaptive responses (1,2). These intracellular Ca2+ signals are not only transient, but they also vary temporally and spatially, with different organelles or cytoplasmic regions acting as distinct compartments. Thus, within cells a diverse array of changes in the cytosolic Ca2+ concentration must be correctly perceived and discriminated so as to elicit the correct subsequent cellular response, a task performed in part by the EF-hand family of Ca2+-modulated proteins. Calmodulin (CaM) proteins, members of the EF-hand family, are small multifunctional proteins that transduce the signal of increased Ca2+ concentration by binding to and altering the activities of a variety of target proteins. The activities of these proteins affect the physiological responses to a vast array of specific stimuli received by plant cells (3,4).
A large family of Cam genes has been identified from several plants, including the two model plants Arabidopsis (Arabidopsis thaliana (L.) Heynh) and rice (Oryza sativa L.) (5,6), in whose genomes Cam and Cam-like (CML) genes have been extensively identified. Although multiple CaM isoforms can be postulated to be important in distinguishing between the Ca2+ signals from different stimuli, and thus in eliciting the correct responses, their actual significance is not clearly understood. Nevertheless, accumulating evidence suggests that each of the different Cam genes may have distinct and significant functions. To date, there is no detailed information on Cam gene functions in response to any particular stress in rice, which is considered a model plant for monocots (7). In this study, the OsCam1-1 gene along with its promoter was isolated from Oryza sativa L. cv. Khao Dok Ma Li 105 (KDML105), and its expression was examined using lines of transgenic rice plants that harbor the OsCam1-1 promoter fused to a β-glucuronidase (gus) reporter gene (8). To further examine OsCaM1-1, the protein encoded by OsCam1-1 was produced in E. coli, purified to apparent homogeneity, and assessed for its functional properties.
Isolation of the OsCam1-1 gene from Oryza sativa L. cv. KDML105
Through extensive analyses of the rice genome, five OsCam genes that encode three closely related CaM proteins were identified (6). Here, we isolated the OsCam1-1 gene from the KDML105 rice cultivar by PCR amplification, attaining a product of 1,528 base pairs. Following determination of its sequence and assembly with its upstream region, which was isolated as described below, the gene structure and its predicted mRNA were obtained (Fig. 1). Similar to the homologous OsCam1-1 sequence from the Nipponbare rice cultivar (9), it contains a predicted open reading frame of 450 base pairs interrupted by a single intron of 828 base pairs at the location corresponding to the glycine codon at position 26. This arrangement has been found in numerous plant Cam and CML genes (5,6). The intron-exon junctions conform to the GT/AG rule (10) and the intron sequence is highly AT-rich (60% A + T, compared with 40% A + T in the coding sequence), which is characteristic of plant introns and has been shown to be required for splicing (11).
The sequence upstream of the OsCam1-1 coding region was isolated by PCR amplification, yielding a product of 1,370 base pairs. The sequence was determined and the contig assembled with the coding region as shown in Fig. 1. The upstream sequence was bioinformatically analyzed, which revealed a likely promoter region with a TATA box (sequence TATAAA) and a predicted transcription start site within a few base pairs of the postulated transcription start site defined by the 5'-end of the available cDNA sequences from various databases. When the nucleotide position of the predicted transcription start site is designated as +1, the TATA box is centered at position -26, which corresponds to the location of TATA boxes found in most genes. The region just upstream of the putative promoter is very GC-rich and contains several potential overlapping sites (-40 to -30) for the Sp1 transcription factor (12,13) and a consensus binding site sequence (-96 to -88) for the mammalian Krox-24 (14). In addition, several potential regulatory sites, such as the Adh1 GC and GT elements found in the maize alcohol dehydrogenase promoter (15) and binding sequences for a tobacco TGA1 transcription factor (16), were found and are indicated in Fig. 1b. These predictions present good candidate regions for further examination of the promoter functionality.
To confirm the predicted structural organization of the OsCam1-1 gene obtained from the KDML105 rice, the OsCam1-1 cDNA was isolated by RT-PCR. The nucleotide sequence obtained confirmed the predicted sequence of the deduced ORF and mRNA derived from the genomic DNA sequence depicted in Fig. 1. Within the KDML105 OsCam1-1 coding region, only one synonymous nucleotide difference from that of the Nipponbare cultivar of japonica rice (9) was found, and thus no differences exist in the predicted amino acid sequences between the two rice subspecies. More differences are seen within the introns between Nipponbare and KDML105, in which seven nucleotide substitutions or deletions occur (data not shown). Similarly, we have isolated from the KDML105 rice the OsCML1 gene, which encodes the CaM-like protein that, among the different OsCML proteins, shares the highest degree of amino acid identity with the OsCaM proteins (6). Its coding region has nine nucleotide differences from that of Nipponbare, which result in four predicted amino acid differences (data not shown). Considering the similarity in their sizes (450 and 564 nucleotides and 149 and 187 amino acids for the OsCam1-1 and OsCML1 genes, respectively), the OsCam1-1 gene exhibits a very high degree of conservation between the two subspecies at both the nucleotide and the amino acid sequence levels.
OsCam1-1 promoter analysis in OsCam1-1::gus transgenic rice plants
To determine whether the isolated OsCam1-1 upstream sequence can function as a promoter, lines of transgenic rice plants that harbor the OsCam1-1::gus construct were generated. Rice plants from three independent transgenic lines were obtained, which showed no abnormal morphological characteristics, and were analyzed for expression of the reporter gene product. GUS staining patterns, detected using X-Gluc as a substrate, were consistent among all transgenic lines, and representative images are shown in Fig. 2 and 3. Under normal growth conditions, GUS activity was histochemically observed in whole organs and tissues including leaf blades, leaf sheaths, roots, lateral roots, and several floral parts including stigmas, anthers, and pollen (Fig. 2a-g). To visualize the localization of the GUS activity in leaves and roots, they were sectioned (70-100 μm) and stained with X-Gluc. GUS staining was observed throughout the cross-sections of leaf blades, but staining was most strongly detected in the large and small vascular bundles, specifically in the phloem and the bundle sheath cells (Fig. 3a-d). Some epidermal cells were stained more strongly than others, but hair cells were always stained. It should be noted that GUS staining was also found in the guard cells. Similar to leaf blades, GUS staining was predominantly observed in the vascular bundles in cross-sections of leaf sheaths (Fig. 3e-h), in the vascular cylinder of roots (Fig. 3i-k), and during the development of lateral roots. In agreement with our results, expression in vascular tissues was reported in cherry rootstock transformed with the apple Cam::gus construct (17), although higher levels of expression and more cell types were observed in our experiments. Overall, gus was found to be expressed in all organs and many tissues throughout the OsCam1-1::gus transgenic rice plants. Assuming under these conditions that the OsCam1-1 promoter is driving gus gene expression in the same manner as it drives endogenous OsCam1-1 gene expression, these data suggest a likely and important role for OsCam1-1 under normal growing conditions. Moreover, the OsCam1-1-driven gus expression in the outer parts of the plant body, such as the epidermises (including hair cells and guard cells), suggests that OsCaM1-1 may contribute to Ca2+-mediated responses to both biotic and abiotic stresses perceived by the leaf, but this remains to be confirmed.
Production in E. coli and properties of the rOsCaM1-1 protein
To examine the molecular properties of OsCaM1-1, its coding sequence was engineered by PCR amplification and cloned into the T7-based expression plasmid, pET21a. The resulting recombinant expression plasmid encoding rOsCaM1-1 was introduced into the E. coli (K12) strain BL21(DE3) and used to produce the recombinant protein following IPTG induction. SDS-PAGE-based analysis of the protein product induced by 0.2 mM IPTG displayed a distinct band of the expected size at 17.6 kDa in the soluble fraction. The protein was purified by Ca2+-dependent hydrophobic chromatography on phenyl-Sepharose and the purity of CaM was judged by SDS-PAGE (Fig. 4a). One of the characteristics of CaM is its ability to bind Ca2+ in the presence of SDS, which increases its electrophoretic mobility relative to CaM in the absence of Ca2+. Fig. 4a shows that rOsCaM1-1 displayed this characteristic electrophoretic mobility shift when incubated with 1 mM Ca2+ prior to electrophoresis. This result indicates that the rOsCaM1-1 protein produced in E. coli and purified by these methods is likely to be a functional Ca2+-binding protein. To further examine the properties of the OsCaM1-1 protein, its ability to bind the peptide derived from CaM kinase II (CaMKII) was assessed by gel mobility shift assay. Incubation of 100 picomoles of rOsCaM1-1 protein in the presence of 1 mM Ca2+ with different molar equivalents of the peptide (Fig. 4b), prior to resolution by PAGE containing 4 M urea, showed a clear dose-dependent band shift consistent with the notion that rOsCaM1-1 binds the CaMKII peptide with a 1:1 stoichiometry, suggesting that its mechanisms of action are likely to be similar to those of known CaMs.
In conclusion, this work has verified that OsCam1-1 encodes a functional Ca2+-binding calmodulin protein and identified its promoter region from the KDML105 rice. Analyses of the OsCam1-1::gus transgenic rice plants suggest that OsCam1-1 is highly expressed in vascular tissues and during the emergence of lateral roots. These expression patterns highlight the tissues and developmental stages worth assaying by techniques such as immunocytochemistry in the future. Knowledge of the expression patterns and properties of OsCam1-1 obtained in this study will help facilitate further investigations into its roles.
Materials
Enzymes used for manipulating recombinant DNA were from Fermentas (Hanover, MD, USA). Kits for plasmid purification and gel extraction were purchased from Qiagen (Hilden, Germany). The pGEM®-T vector and oligo(dT)15 primers were obtained from Promega (Madison, WI, USA). The expression plasmid pET21a and host E. coli (K12) strain BL21(DE3) were from Novagen (Madison, WI, USA). The E. coli (K12) strain XL1-Blue was from Stratagene (Cedar Creek, TX, USA). Phenyl-Sepharose was purchased from Amersham Biosciences (Piscataway, NJ, USA). Synthetic oligonucleotides were obtained from Operon Technologies (Cologne, Germany). The synthetic peptide representing the CaM-binding domain of CaMKII was purchased from Sigma (St. Louis, MO, USA). Plasmid pCAMBIA1381Z was obtained from CAMBIA (Canberra, Australia). Seed of Oryza sativa L. cultivar KDML105 was provided by the Department of Agriculture, Ministry of Agriculture and Cooperatives (Bangkok, Thailand).
Cloning of the OsCam1-1 gene
Oryza sativa L. cv. KDML105 seedlings were ground in liquid nitrogen using chilled mortars and pestles. Genomic DNA was isolated according to Agrawal et al. (18). To isolate the gene, PCR amplification was conducted using oligonucleotides designed from the homologous OsCam1-1 sequences from the Nipponbare cultivar of Oryza sativa L. ssp. japonica (9), with 5'-GAAGCCAGGCTAAGCCCAGC-3' and 5'-GCAAGCCTTAACAGATTCAC-3' as the sense and antisense primers, respectively. To isolate the upstream promoter region of the presumed OsCam1-1 first exon, PCR amplification was conducted as above, except using oligonucleotides designed from the genomic DNA sequence of the 93-11 cultivar of Oryza sativa L. ssp. indica (19): 5'-TCCCAATCCTCCCTGCTGATGTTGC-3' and 5'-CCATGCCGCGGGGCTTAGCCTGGCT-3' as the sense and antisense primers, respectively.
To clone the OsCam1-1 cDNA, Oryza sativa L. tissues were ground in liquid nitrogen using chilled mortars and pestles. Total RNA was isolated according to Verwoerd et al. (20) and used as a template for reverse transcription primed with oligo(dT)15 primers in a 20-μl reaction with 200 units of M-MLV reverse transcriptase (Promega) at 42°C for 1 hour. PCR amplification of the total cDNA was conducted as above to amplify the OsCam1-1 transcript, using the same primers as were used to isolate the gene, except that a denaturation time of 2 min and an annealing temperature of 55°C were used. All PCR products were cloned into the pGEM®-T vector, transformed into XL1-Blue cells for cloning and propagation of the plasmid, and their sequences were then determined from four independent clones.
Sequence retrieval and analyses
Sequences from Oryza sativa L. ssp. were retrieved from the Rice Annotation Project Database at the NIAS (http://rapdb.dna.affrc.go.jp/) and the Rice Information System at the Beijing Genomics Institute (http://rise.genomics.org.cn/rice/index2.jsp). Alignments were performed using EMBOSS pairwise alignment algorithms at the European Bioinformatics Institute (http://www.ebi.ac.uk/). To identify promoter and transcriptional elements, sequences were analyzed using the computer programs "Promoter Scan" and "Signal Scan" at the Bioinformatics and Molecular Analysis Section (http://bimas.dcrt.nih.gov/molbiol/), Computational Bioscience and Engineering Lab, Division of Computational Bioscience, Center for Information Technology, at the National Institutes of Health (21).
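The alignments above were run with the EMBOSS tools hosted at the EBI; the snippet below is only an illustrative alternative, not the authors' pipeline, showing how a comparable global pairwise alignment of two short, hypothetical coding-region fragments could be scored locally with Biopython's PairwiseAligner.

```python
from Bio import Align

# Global (Needleman-Wunsch-style) alignment, roughly analogous to EMBOSS 'needle'.
aligner = Align.PairwiseAligner()
aligner.mode = "global"
aligner.match_score = 1
aligner.mismatch_score = 0
aligner.open_gap_score = -1
aligner.extend_gap_score = -0.5

# Hypothetical fragments standing in for the KDML105 and Nipponbare OsCam1-1 sequences.
kdml105_fragment = "ATGGCGGACCAGCTCACCGACGAC"
nipponbare_fragment = "ATGGCGGACCAGCTTACCGACGAC"

alignment = aligner.align(kdml105_fragment, nipponbare_fragment)[0]
print(alignment)
print("alignment score:", alignment.score)
```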
Generation of OsCam1-1::gus transgenic plants
Rice seeds were dehusked and sterilized with 70% (v/v) ethanol for two minutes and then with 2% (w/v) sodium hypochlorite for 20 minutes. The seeds were rinsed three times with sterile water and placed on NB medium (22) containing 2 mg/L 2,4-dichlorophenoxyacetic acid (2,4-D) and incubated in the dark at 28°C for two weeks. Before transformation, the growing calli were subcultured on fresh medium and incubated under the same conditions for four days. To generate an OsCam1-1::gus construct, BamHI and NcoI recognition sequences (bold) were introduced into the promoter by PCR using the oligonucleotides 5'-GGATCCCAATCCTCCCTGCTGATG-3' and 5'-CCATGGCGCGGGGCTTAGCCTGGC-3' as the sense and antisense primers, respectively. The amplified promoter fragment was cloned into the pGEM®-T vector prior to subsequent ligation into a BamHI-NcoI-cleaved pCAMBIA1381Z vector. The sequence of the resulting OsCam1-1::gus construct was then determined to confirm its insertion and integrity. The recombinant plasmid was introduced into the Agrobacterium tumefaciens strain EHA105 by electrotransformation. When plants were ready for transformation, A. tumefaciens cells were streaked on solid AB medium (23) containing 25 μg/ml rifampicin and 50 μg/ml kanamycin. The cells were incubated at 28°C for 2-3 days, collected by scraping with a loop, and resuspended in AAM medium (24) supplemented with 100 μM acetosyringone. The optical density of the bacterial suspension at 600 nm was adjusted to 1.0 by addition of fresh medium. Embryonic calli were immersed in the bacterial suspension for 30 minutes with occasional shaking and blotted dry on sterile filter papers. The calli were then transferred to NB medium supplemented with 10 g/L glucose, 2 mg/L 2,4-D, and 100 μM acetosyringone and incubated at 25°C for three days. Calli were then first washed with sterile 250 μg/ml cefotaxime to remove the excess Agrobacterium, followed by several sterile water rinses, before being blotted dry on sterile filter papers and transferred to NB medium containing 250 μg/ml cefotaxime and 50 μg/ml hygromycin. After incubation at 28°C for four weeks, the hygromycin-resistant calli so obtained were subcultured on fresh medium as above for another round of selection and then transferred to NB medium containing 4 mg/L 6-benzylaminopurine and incubated at 28°C under a 16/8-hour light/dark photoperiod for 3-4 weeks. When the green shoots reached 2-3 cm in height, they were cut and transferred to NB medium to stimulate root and stem elongation.
GUS histochemical assays
Staining solution [100 mM phosphate buffer, pH 7.5, 0.5 mM potassium ferrocyanide, 0.5 mM potassium ferricyanide, 0.1% (v/v) Triton X-100, 10 mM EDTA, and 1 mM 5-bromo-4-chloro-3-indolyl-β-D-glucuronide (X-Gluc)] was added in a tightly capped tube to cover the tissues to be stained. To remove air trapped in the tissues and allow for maximum stain penetration, a 20-mmHg vacuum was applied to the tissues twice for five minutes. The tube was then capped and placed in a 37 °C incubator. After 1 to 2 days, the staining solution was removed and the sample was decolorized with several changes of 70% (v/v) ethanol over 1 to 2 days at 37 °C.
Recombinant protein production and gel-mobility shift analysis
Recombinant pET21a plasmids were introduced into the E. coli (K12) strain BL21(DE3) to produce recombinant proteins. Protein production and purification were carried out using methods employed previously for recombinant plant CaM (25). To examine the Ca2+-binding ability, 1 mM (final concentration) of either CaCl2 or EGTA was added to three micrograms of the protein and mixed. The samples were then resolved through a 12% (w/v) SDS-polyacrylamide gel and detected by Coomassie blue staining. To examine the peptide-binding ability, 100 picomoles of the recombinant protein was mixed with the peptide derived from CaMKII (Sigma) at different molar equivalents and then fractionated in a 12% (w/v) polyacrylamide gel containing 4 M urea and detected by Coomassie blue staining.
Fig. 1. Organization of the OsCam1-1 gene from Oryza sativa L. cv. KDML105. (a) Schematic diagram displaying OsCam1-1 genomic DNA and mRNA sequences. Exons are indicated by boxed regions; the intron and the 5' and 3' untranslated regions are indicated by solid lines. The predicted promoter is indicated by a grey arrow, and the sequences encoding the EF hands are depicted by black rectangles. (b) Nucleotide sequence of the OsCam1-1 gene and deduced amino acid sequence of its predicted open reading frame excluding the intron. The predicted TATA box is in bold and double underlined, and the potential regulatory sites for the Sp1 transcription factor are shown in bold and underlined letters; Adh1 GC and GT elements are shown in bold and italic, and potential binding sequences of the TGA1 and Krox-24 transcription factors are underlined and double underlined, respectively. The predicted transcription start site is boxed and designated as +1, and the resulting upstream nucleotide positions are shown on the left. The closed triangle represents the location of the single intron. Likely start and stop codons are shown in bold. Residues in the EF-hand Ca2+-binding loops are highlighted in grey, and residues serving as likely Ca2+-binding ligands are underlined and correspond to those in typical CaM proteins. Nucleotides underlined with thick lines are the ones that differ from those of the Nipponbare OsCam1-1 sequence.
Fig. 2. Localization of gus activity in whole seedlings and organs of transgenic rice plants harboring OsCam1-1::gus. GUS staining was detected in a) leaf blade, b)-c) roots and lateral roots, d) whole seedling, e) panicle, f) anther, and g) stigma. Representative staining images of the three transgenic rice plants are shown.
Fig. 3. Localization of gus activity in cross-sections of leaves and roots of transgenic rice plants harboring OsCam1-1::gus. GUS staining was observed in cross-sections of a)-d) leaf blades, e)-h) leaf sheaths, and i)-k) roots. Inner regions in the sections of the leaf blade (a), leaf sheath (e), and root (i) were magnified and shown in b)-d), f)-g), and j)-k), respectively. Representative staining images of the three transgenic rice plants are shown.
Fig. 4. Recombinant OsCaM1-1 possesses Ca2+- and peptide-binding properties in vitro. (a) Purification and Ca2+-induced electrophoretic mobility shift of the rOsCaM1-1. Separation by 12% (w/v) SDS-PAGE of the protein extracted from E. coli harboring pET21a expression plasmids following phenyl-Sepharose hydrophobic chromatography. To analyze the calcium-induced electrophoretic mobility of the rOsCaM1-1, three micrograms of the eluted rOsCaM1-1 and 1 mM of either EGTA (lane +EGTA) or CaCl2 (lane +CaCl2) were resolved. Protein bands were detected by Coomassie blue staining. The lane marked M contained molecular mass standard proteins (Fermentas). (b) Gel mobility shift analysis of rOsCaM1-1 interaction with a peptide from CaMKII. Different molar equivalents of peptide (indicated) mixed with 100 picomoles of rOsCaM1-1 were fractionated in a 12% (w/v) PAGE-4 M urea gel and detected by Coomassie blue staining to reveal band shifts.
PCR amplification utilized Taq polymerase (Fermentas) and consisted of 30 cycles, with a final elongation phase of 72 °C for 10 minutes. The PCR product was ligated into pGEM-T | 2018-04-03T01:31:45.462Z | 2008-11-30T00:00:00.000 | {
"year": 2008,
"sha1": "0600845f40640e174a72c1c9d1a75546c0ac327e",
"oa_license": "CCBYNC",
"oa_url": "http://society.kisti.re.kr/sv/SV_svpsbs03V.do?cn1=JAKO200810103432891&method=download",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "3ed03f9aaf87d38b4dee2336b056cd0c0408184b",
"s2fieldsofstudy": [
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
219807893 | pes2o/s2orc | v3-fos-license | The Complex Maze of the Informed Consent Process: Helping to Improve Comprehension in Clinical Trial Participants with Alzheimer’s Disease
We intend for this article to provide a foundation toward the creation of a more patient-centric approach to the informed consent process. Our overall objectives are to promote ethical clinical research standards and procedures toward enhanced supportive systems for clinical trial participants. We provide a suggested format which multidisciplinary clinical trial researchers can adapt for their own clinical trial setting.
People who have been given the devastating diagnosis of Alzheimer's disease live with the hope for a cure. There are thousands of individuals willing to try an innovative treatment in hopes of finding the secret to halting the disease progression. Many receive a physician's recommendation to enroll in a clinical trial, and others search www.clinicaltrials.gov to locate a hospital conducting a clinical trial. There are hundreds of clinical trials looking for study participants, and too-many-to-count potential participants who would be eager to join in the effort to find a cure or delay the onset of symptoms.

*Correspondence to: Louis X. Wong, MS, University of California, San Francisco, 550 16th Street, 6th Floor, Mailstop 0981, San Francisco, CA 94143, USA. E-mail: louiswong415@gwu.edu.
Over 5 million Americans are currently living with the diagnosis of Alzheimer's disease. Worldwide, approximately 30 million people have been diagnosed with this condition, and that number is projected to triple by 2050 [2]. There are a variety of ways to attempt to manage the disease process, but even with today's medical interventions, the decline of Alzheimer's disease is irreversible. Most people who are diagnosed with this progressive neurodegenerative disease do not notice the presence of symptoms when the disease is at the earliest stages [8]. Common disease-related changes include cognitive deterioration, memory loss, psychosocial incapacity, and decision-making difficulties [1]. Alzheimer's disease is a leading cause of death in Americans 65 years of age or older [4]. Although many researchers are hard at work, there is currently no cure for Alzheimer's disease [3], nor any known effective and affordable treatment for the many people whose lives it affects.
A physician or other researcher will go through a process of informed consent before a potential participant can sign up for participation in a clinical trial [6]. A clinical trial is a closely controlled environment where an innovative treatment, drug, or medical device is tested for effectiveness and safety [9]. The informed consent protocol can vary from requiring only a few minutes for a quick review of a couple of pages to hours of reading a detailed document of dozens of pages. The informed consent process includes, but is not limited to, a description of anticipated procedures with the probable risks, benefits, and financial costs. Participants are told that they have the right to stop any procedure at any time during the trial period.
In all phases of clinical research (Phase 0-III), the informed consent process is important to protect the human participants from potential harm [6]. Moreover, this consent process is not merely a one-time signing of a document, and it is not solely for researchers to obtain initial permission to conduct clinical research on human participants. This ever-changing process is ongoing throughout the clinical and regulatory phases, with a primary aim to adequately inform the participants during the complete duration of the clinical trial [10]. This continuous informed consent process should emphasize and delineate all possible modifications to procedures and any change to risks and benefits. Any information that may help participants decide whether they should continue participation should be shared [3]. Furthermore, the informed consent process is an indication of respect for the individual participant and an acknowledgement of all participants' right of autonomy, to either participate or not participate in the clinical trial. Investigators are ultimately responsible for the conduct of everyone involved in the informed consent process, including the clinical investigators themselves.
The informed consent process is intended to protect participant rights and safety. The consent process for a study participant with Alzheimer's disease or other cognitive limitation requires the engagement of an additional witness as an added level of protection for the study participant and the researcher. The investigator should attempt to maximize opportunity for clinical trial participants to not only learn about pertinent potential benefits and possible risks but to also comprehend the study goals. Accordingly, the informed consent process minimizes the scientific knowledge gap, aids in assessment, and promotes decision-making [7]. In this informed consent process, the participant is encouraged to adequately assess the risks of participation, such as the likely side effects and the time commitment, while the benefits of participation, such as the therapeutic value of feeling less anxious or improved concentration, can be more thoroughly discussed.
It is normal for a potential participant to feel overwhelmed or confused when presented with the amount of information shared as part of the informed consent process prior to clinical trial participation. Potential participants may be overeager to quickly provide consent to join the clinical trial due to the physical and emotional toll of Alzheimer's disease and the lack of available treatment and cure options. Concerns may also stem from a fear of decreased ability to make correct decisions as the disease progresses. The symptoms associated with Alzheimer's disease may negatively influence the ability to comprehend and remember the information shared during the informed consent process. A better understanding of the informed consent process can help a potential clinical trial participant be better prepared to give permission for participation and to make a truly informed decision.
Our intent in designing this list of pointers and suggestions is to provide some guidance for potential clinical trial participants as well as to assist the multidisciplinary clinical trial researchers mandated to obtain informed consent. The investigator is responsible for establishing an environment that promotes a patient-centric approach to the informed consent process, and providing a guide or set of suggestions directed specifically toward the participant supports that approach. In this setting, investigators are better able to align specific decisions with each participant's wants, needs, and preferences. A patient-centric approach inherently increases participant ownership and autonomy, and this type of structured guidance builds a higher level of trust between participant and investigator. Investigators must create a suitable practice with standard procedures to encourage the proper participant mindset for a true informed consent process.
"year": 2020,
"sha1": "9c9446a9eed95ba1fcf6c8e92bd9f4da9b8ebdec",
"oa_license": "CCBYNC",
"oa_url": "https://europepmc.org/articles/pmc7306923?pdf=render",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "620e55cd32602d63eec1d14388971916006c8e2c",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"extfieldsofstudy": [
"Medicine",
"Psychology"
]
} |
259706005 | pes2o/s2orc | v3-fos-license | Mycobacterium avium subsp. paratuberculosis Antigens Elicit a Strong IgG4 Response in Patients with Multiple Sclerosis and Exacerbate Experimental Autoimmune Encephalomyelitis
Neuroinflammation can be triggered by microbial products disrupting immune regulation. In this study, we investigated the levels of IgG1, IgG2, IgG3, and IgG4 subclasses against the heat shock protein (HSP)70533–545 peptide and lipopentapeptide (MAP_Lp5) derived from Mycobacterium avium subsp. paratuberculosis (MAP) in the blood samples of Japanese and Italian individuals with relapsing remitting multiple sclerosis (MS). Additionally, we examined the impact of this peptide on MOG-induced experimental autoimmune encephalomyelitis (EAE). A total of 130 Japanese and 130 Italian subjects were retrospectively analyzed using the indirect ELISA method. Furthermore, a group of C57BL/6J mice received immunization with the MAP_HSP70533–545 peptide two weeks prior to the active induction of MOG35–55 EAE. The results revealed a significantly robust antibody response against MAP_HSP70533–545 in serum of both Japanese and Italian MS patients compared to their respective control groups. Moreover, heightened levels of serum IgG4 antibodies specific to MAP antigens were correlated with the severity of the disease. Additionally, EAE mice that were immunized with MAP_HSP70533–545 peptide exhibited more severe disease symptoms and increased reactivity of MOG35–55-specific T-cell compared to untreated mice. These findings provide evidence suggesting a potential link between MAP and the development or exacerbation of MS, particularly in a subgroup of MS patients with elevated serum IgG4 levels.
Introduction
While the precise pathological mechanism remains unclear, the influence of pathogen exposure as an environmental trigger for multiple sclerosis (MS), a chronic disease impacting the central nervous system (CNS), has been recognized. MS is characterized by inflammatory demyelination in the early phase of relapsing-remitting MS (RR-MS), followed by progressive phases dominated by neurodegenerative processes, resulting in the continuous loss of neurons and axons [1].
Although MS is not hereditary, genetic factors play a significant role in determining susceptibility to the disease, but they alone cannot fully explain its incidence [2]. The hygiene hypothesis proposes that early childhood exposure to pathogens may provide protective immunity, while infections during adulthood could act as triggers for autoimmunity, particularly in individuals with specific genetic predispositions [3]. Another possibility is the reactivation of viral or bacterial pathogens in individuals who had asymptomatic infections years earlier, potentially due to a weakened immune system or other external factors [4]. For instance, it has been suggested that the Epstein-Barr virus (EBV), which is considered the most influential risk factor for MS, could induce the expression of endogenous retroviruses from the HERV-W family, leading to the onset of MS [5]. Moreover, the variation in MS incidence and prevalence across different geographical regions suggests that an abnormal immune response might be triggered by a region-specific pathogen prevalent in areas with high MS rates [1].
In the absence of a confirmed pathogen directly causing MS, the interplay between pathogens could have a significant impact on the pathogenesis of the disease. It is highly plausible that bacterial-viral coinfections could contribute to the disparity in MS risk between different regions.
Among bacterial factors, exposure to antigenic determinants of Mycobacterium avium subsp. paratuberculosis (MAP), the etiological agent of paratuberculosis (commonly known as Johne's disease) in animals, has been linked to the risk of developing MS [6][7][8][9]. Multiple clinical studies conducted in various countries consistently demonstrated an association between MAP and MS, based on the detection of mycobacterial DNA, as well as the presence of humoral and antigen-specific immune responses against MAP antigens in the sera and cerebrospinal fluids of patients with RR-MS [9]. It is important to note that the bacterium has never been isolated from any MS patient, suggesting that its role in MS, particularly in regions with low paratuberculosis prevalence, may be more related to the ingestion of antigenic components through contaminated food rather than active infection [10].
The potential of MAP antigens to exacerbate the progression of experimental autoimmune encephalomyelitis (EAE), a commonly used animal model of neuroinflammation, has been demonstrated. MAP can serve as an adjuvant, replacing Mycobacterium tuberculosis, to enhance EAE [11]. Additionally, it has been observed that oral administration of heat-killed mycobacteria activates mucosal immunity, modulates dendritic cells, and influences the trafficking of CD4 T-cells from mesenteric lymph nodes and spleen to the CNS, thereby worsening active EAE [12].
Furthermore, immune reactivity has been identified against cross-epitopes of MAP and EBV in individuals with MS, suggesting that both pathogens, through molecular mimicry, may activate a shared pathway leading to neuroinflammation in genetically susceptible individuals [13,14]. Studies have reported the presence of autoantibodies that recognize peptides from human myelin basic protein and interferon regulatory factor 5, which also cross-react with homologous peptides from EBV latent and lytic proteins, as well as MAP, in the sera and cerebrospinal fluid of MS patients [14].
In a recent study, specific antibodies against an epitope of the EBV nuclear antigen 1 (EBNA1) protein, specifically EBNA1 386-405, which binds to the glial cell adhesion molecule (GlialCAM), exhibited increased immunoreactivity in both the serum and CSF of patients with RR-MS from the United States and Germany when compared to HCs [15].
In our previous study, we highlighted the high recognition of MAP_0106c 121-132 and its homologous peptide EBNA1 400-413 , which shares a 5-amino acid overlap (GRRPF) with EBNA1 386-405 , in the serum and CSF of patients with RR-MS [14]. Furthermore, these peptides were found to induce both humoral and cell-mediated responses in RR-MS patients with a history of infectious mononucleosis [16].
Through the use of the Basic Local Alignment Search Tool (BLAST), we conducted an in silico analysis and discovered regions of local similarity between EBNA1 386-405 and an epitope of the MAP heat shock protein (HSP) 70, a protein that has previously been associated with MS [17].
Based on these findings, the objectives of our current study were as follows:
1. To evaluate the humoral response against the peptide MAP_HSP70 533-545 in Japanese and Italian patients with RR-MS through in vitro experiments.
2. To evaluate the impact of MAP_HSP70 533-545 on neuroinflammation using an active EAE model.
Peptide-Based Enzyme-Linked Immunosorbent Assays (ELISAs)
To perform the peptide-based indirect ELISA, we utilized the Imject maleimide-activated bovine serum albumin (BSA) spin kit (Thermo Fisher Scientific, Waltham, MA, USA). The kit was chosen to prevent the masking of antigenic epitopes, ensuring their accessibility for antibody binding. The procedure involved activating the BSA carrier protein with reactive sulfhydryl maleimide, purifying it, and then crosslinking it with the sulfhydryl (-SH) group of the cysteine-containing peptide antigen MAP_HSP70 533-545, following the manufacturer's instructions.
To determine the optimal coating conditions, titration experiments were conducted. Nunc-immuno-MicroWell-96-well solid plates (Thermo Fisher Scientific, Waltham, MA, USA) were coated with 50 µL/well of MAP_HSP70 533-545 peptide diluted in ELISA coating buffer (Bio-Rad, Tokyo, Japan) at a final concentration of 10 µg/mL. The plates were incubated overnight at 4 °C. Subsequently, the microplate was blocked with 200 µL/well of Blocking One (Nakalai Tesque, Kyoto, Japan) for 1 h at room temperature. Serum samples were then added to duplicate wells, diluted 1:100 in Blocking One, and incubated for 2 h at room temperature (25 °C).
Following four washes with phosphate-buffered saline with 0.05% Tween 20 (PBS-T), the plates were incubated with 100 µL/well of horseradish peroxidase-labeled goat anti-human total IgG, IgG1, IgG2, IgG3, or IgG4 antibodies (Southern Biotech Associates, Inc., Birmingham, AL, USA) for 1 h at room temperature. After incubation, the microplates were washed again, and the wells were incubated with 100 µL/well of ABTS Peroxidase System (SeraCare Life Sciences, KPL, Gaithersburg, MD, USA) for 10 min at room temperature in the dark. The optical density was measured at 650 nm using a Benchmark Plus Microplate Reader (Bio-Rad, Tokyo, Japan). Wells coated with BSA were included as a negative control, and the mean value obtained from these wells was subtracted from all other data points. The results were normalized against a positive control serum, which was included in all experiments.
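For illustration only, the blank subtraction and positive-control normalization just described could be implemented as in the short sketch below; the table layout, column names, and numbers are assumptions and do not come from the study.

```python
# Minimal sketch of the plate post-processing described above (assumed layout:
# one row per well, with a "sample" label and the OD reading at 650 nm).
import pandas as pd

def normalize_plate(plate: pd.DataFrame, blank_mean: float, pos_ctrl_mean: float) -> pd.Series:
    """Average duplicate wells, subtract the BSA-coated blank, normalize to the positive control."""
    means = plate.groupby("sample")["od_650"].mean()      # mean of the duplicate wells
    corrected = (means - blank_mean).clip(lower=0.0)      # subtract mean OD of BSA-coated wells
    return corrected / (pos_ctrl_mean - blank_mean)       # express relative to the positive-control serum

# Example with made-up numbers:
# plate = pd.DataFrame({"sample": ["MS_01", "MS_01", "HC_01", "HC_01"],
#                       "od_650": [0.82, 0.78, 0.21, 0.19]})
# normalize_plate(plate, blank_mean=0.05, pos_ctrl_mean=1.10)
```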
Indirect ELISA for Anti-Lipopentapeptide (MAP_Lp5) Antibodies
In order to evaluate the specificity of our findings, all individuals who tested positive to MAP_HSP70 533-545 were further tested for reactivity against the specific antigen MAP_Lp5. MAP_Lp5 is a distinctive antigenic lipoprotein found in the cell wall of MAP [19].
To eliminate any potential cross-reactivity with other mycobacterial components, all sera were pre-incubated with lyophilized Mycobacterium phlei obtained from the commercially available ELISA kit, Johnelisa II kit (Kyoritsu Seiyaku Corporation, Tokyo, Japan) [20]. This step ensured that the antibody response observed was specific to the antigens of interest.
Inhibition ELISA
To determine the presence of cross-reactive antibodies between MAP_HSP70 533-545 and EBNA1 386-405 peptides in the sera of RR-MS patients, we conducted an inhibition ELISA. Serum samples were first pre-absorbed with saturating concentrations [10-15 mM] of EBNA1 386-405 , MAP_HSP70 533-545 , or scramble peptide overnight. This pre-absorption step aimed to block any cross-reactivity or binding of antibodies to these specific peptides.
After the incubation, the serum samples containing the antibody-antigen mixture were added to microplates coated with the MAP_HSP70 533-545 peptide. The plates were then subjected to an indirect ELISA, following the previously described procedure.
By pre-absorbing the serum samples with the respective peptides, any antibodies present in the samples that were specific to MAP_HSP70 533-545 and EBNA1 386-405 would already be bound to their corresponding peptide agent. As a result, the binding reaction in the wells of the ELISA microplate is reduced, and the reduction in absorbance in the wells is inversely proportional to the concentration of the analyte (cross-reactive antibodies) in the patient samples.
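As a rough numerical illustration of this read-out, the residual binding can be expressed as a percent inhibition relative to the untreated serum; the optical densities used below are hypothetical, not measurements from this study.

```python
def percent_inhibition(od_untreated: float, od_preabsorbed: float) -> float:
    """Drop in absorbance caused by pre-absorbing the serum with a competing peptide."""
    return 100.0 * (1.0 - od_preabsorbed / od_untreated)

# Hypothetical example: an OD falling from 1.20 to 0.70 corresponds to ~42% inhibition,
# which would lie within the 35-47% range reported in the Results for EBNA1 386-405.
print(round(percent_inhibition(1.20, 0.70), 1))
```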
Animal and Mouse Immunization
Groups of 9-week-old female wild-type C57BL/6J mice, with a total of 20 mice, were obtained from Charles River Laboratories, Yokohama, Japan, Inc. The mice were housed under pathogen-free conditions with a 12-h light/dark cycle. The animal experiments were conducted in accordance with the guidelines and approval of the Institutional Animal Care and Use Committee of Juntendo University School of Medicine (No. 290238).
To induce EAE, mice were subcutaneously immunized with 200 µg of MAP_HSP70 533-545 peptide emulsified in incomplete Freund's adjuvant supplemented with 4 mg/mL of Mycobacterium tuberculosis H37Ra (CFA) two weeks prior to the induction of EAE. Placebo mice were immunized with a scrambled control peptide.
For the induction of active EAE, mice were subcutaneously immunized with 200 µg of MOG peptide emulsified in CFA, followed by an intraperitoneal dose of 200 ng of pertussis toxin immediately after immunization and another dose 48 h later. The mice were monitored daily for clinical symptoms of EAE, and their disease severity was scored as follows: (1) for flaccid tail, (2) for impaired righting reflex and hind limb weakness, (3) for complete hind limb paralysis, (4) for complete hind limb paralysis with partial forelimb paralysis, and (5) for death.
Histological Analysis and T-Cell Proliferation Assay
Histological analysis was performed on the spinal cord sections of EAE mice euthanized during the peak (14-16 days post-immunization) of clinical symptoms. The sections were stained with hematoxylin/eosin to detect inflammatory foci. The severity of inflammation was analyzed using a semiquantitative scale, where 1 represented a small infiltrate (<10 cells/field), 2 represented a medium infiltrate (>15 cells/field), and 3 represented a large infiltrate (>100 cells/field).
T-cell proliferation was assessed by measuring the incorporation of radioactive 3H-thymidine (1 µCi/well) (PerkinElmer, Waltham, MA, USA) [21]. Spleen cells (4 × 10^5 cells/well) from EAE mice at the peak of clinical symptoms were cultured for 2 days with 50 µg/mL MOG in the presence of gamma-irradiated (30.0 Gy) spleen cells (1 × 10^6 cells/mL) syngeneic to the responding T-cells. DNA synthesis was determined by measuring the radioactivity of the incorporated 3H-thymidine with a scintillation plate counter (MicroBeta TriLux, PerkinElmer, Waltham, MA, USA) 18 h later. The proliferative response was expressed as a stimulation index (SI): counts per minute (cpm) of stimulated cells divided by cpm of unstimulated cells.
Statistics
Statistical analysis was performed using Graphpad Prism 10 software (GraphPad Software, La Jolla, CA, USA). The non-parametric Mann-Whitney's u-test was used to compare ELISA results between patients and HCs. Spearman's correlation analysis was conducted to verify cross-reactivity between antibodies. Receiver operating characteristic (ROC) analysis was performed to assess the diagnostic accuracy of the ELISA and determine the cut-off for positivity with a specificity of 95%.
Clinical EAE scores were analyzed using the non-parametric Mann-Whitney u-test. Histological analysis of spinal cords was analyzed using one-way analysis of variance (ANOVA) followed by post-hoc Dunnett's multiple comparison test. T-cell proliferation was analyzed using a two-tailed Student's t-test. A p-value less than 0.05 was considered statistically significant.
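For readers who prefer open-source tools, the sketch below reproduces the same serological comparisons with SciPy and scikit-learn instead of GraphPad Prism; the input arrays are placeholders rather than the study data, and the 95%-specificity cut-off rule is implemented as described above.

```python
import numpy as np
from scipy.stats import mannwhitneyu, spearmanr
from sklearn.metrics import roc_curve

def compare_groups(values_ms, values_hc):
    """Non-parametric comparison of ELISA values between patients and controls."""
    return mannwhitneyu(values_ms, values_hc, alternative="two-sided")

def cutoff_at_95_specificity(values_ms, values_hc):
    """ROC-based positivity cut-off: the lowest threshold that keeps specificity >= 95%."""
    y_true = np.r_[np.ones(len(values_ms)), np.zeros(len(values_hc))]
    scores = np.r_[values_ms, values_hc]
    fpr, _, thresholds = roc_curve(y_true, scores)
    return thresholds[fpr <= 0.05][-1]     # specificity = 1 - false positive rate

def igg4_edss_correlation(igg4_levels, edss_scores):
    """Spearman correlation between antibody levels and disease severity."""
    return spearmanr(igg4_levels, edss_scores)
```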
To confirm the specificity of the MAP_HSP70 533-545 epitope, all subject sera were pre-adsorbed with Mycobacterium phlei antigen and tested against the MAP_Lp5 antigen. Linear regression analysis demonstrated a good correlation between the ELISA based on MAP_HSP70 533-545 and the MAP_Lp5-ELISA for both Italian (r = 0.73, p < 0.005) (Figure 1C) and Japanese (r = 0.46, p = 0.01) (Figure 1F) subjects, indicating the specificity of MAP detection. Similarly, in the Japanese RR-MS patients, at the same cut-off level, the MAP_HSP70 533-545 peptide elicited strong antibody titers in the serum of 27 out of 65 (41%; 95% confidence interval [CI]: 23-46%), while no controls displayed positive results (p < 0.0001) (Figure 1D). IgG1 and IgG4 antibodies were elevated in 14 (52%) and 13 (48%) of the sera, respectively (Figure 1E). A significant correlation between IgG4 and EDSS was also observed in Japanese RR-MS patients (r = 0.73, p < 0.0001).
The indirect ELISA test showed good reproducibility, with an inter-assay coefficient of variation (CV) of 8% and an intra-assay CV of 4%.
Cross Reactivity between HSP70 and EBNA
Inhibition immunoassays were conducted to investigate the presence of cross-reactive antibodies to MAP_HSP70 533-545 and EBNA1 386-405 peptides in the sera of RR-MS patients from Italy (Figure 2A) and Japan (Figure 2B). The results showed that the EBNA1 386-405 peptide inhibited the binding signal on MAP_HSP70 533-545-coated plates by 35-47% (p < 0.0001). In comparison, the positive control (MAP_HSP70 533-545 peptide) caused a decrease in the binding signal of 64-76%, while the negative control (scramble peptide) resulted in a 15-17% reduction. Furthermore, a strong correlation was observed between the levels of anti-EBNA1 386-405 antibodies and MAP_HSP70 533-545 antibodies detected by peptide-based ELISA (r = 0.82, p < 0.0034) (Figure 2C). These findings not only support the association between EBV and MS but also suggest that MAP might be involved in the etiology or progression of the disease through a mechanism of molecular mimicry.
MAP_HSP70533-545 Immunization Exacerbates Active EAE
To assess the encephalitogenic potential of MAP_HSP70 533-545 in active EAE, groups of wild-type mice (C57BL/6J) were immunized with the MAP_HSP70 533-545 peptide two weeks prior to EAE induction. The immunized mice exhibited a slightly delayed onset of the disease (9 ± 0.4 placebo vs. 10 ± 0.5 immunized) but developed a more severe peak of the disease (3.0 ± 0.5 placebo vs. 3.5 ± 0.4 immunized) compared to non-immunized control mice (Figure 3A). The disease incidence (9/10 placebo vs. 9/10 immunized) and clinical course were comparable to placebo controls (Table 2).
Furthermore, histological examination of spinal cords revealed an increased infiltration of mononuclear cells in MAP_HSP70533-545-immunized mice compared to placebo-treated mice ( Figure 3C,D).
Discussion
In this study, we have provided further evidence for the involvement of mycobacteria in the pathology of MS by identifying a peptide derived from the MAP_HSP70 protein antigen that can elicit a strong immune response in patients with RR-MS. Additionally, we have demonstrated the role of this peptide in the neuroinflammation induced by the EAE model.
HSP70s are a group of highly conserved protein families that are induced under stress conditions, such as inflammation, and have been associated with neuroinflammation and neurodegeneration, particularly in RR-MS [22]. Through in silico analysis and the transcriptional profiling of MS patients compared to healthy controls, we observed a significant upregulation of several genes encoding HSP70s in various brain regions affected by MS, including the corpus callosum, hippocampus, internal capsule, and optic chiasm [22].
A separate study conducted on a cohort of 268 MS patients and 231 HCs from Sardinia demonstrated the presence of serum anti-MAP_HSP70 IgG in 23% of the patients, whereas only 6.5% of controls showed the presence of these antibodies [17]. BLASTp analysis revealed 28% amino acid identity between human HSP70 and MAP_HSP70, although no immunodominant epitope from the recombinant protein was identified.
Here, we have demonstrated the presence of cross-reactivity between MAP_HSP70 533-545 and EBNA1 386-405, which is a peptide involved in molecular mimicry with GlialCAM [15]. GlialCAM is an adhesion molecule primarily expressed in oligodendrocytes and astrocytes [23]. Mutations in the GlialCAM protein have been associated with megalencephalic leukodystrophy, a genetic neurodegenerative disorder that affects the white matter of the CNS, which consists of glial cells and myelinated axons [24]. The shared linear amino acid sequence or structural similarities between MAP/EBV and GlialCAM epitopes may contribute to autoimmune processes.
When exposed to bacterial or viral antigens, the immune system undergoes adaptation, including changes in antibody affinity and isotype of peripheral B cells [25]. In patients with MS, the somatic hypermutation of B cells has been observed [26], potentially leading to the production of self-reactive antibodies that strongly interact with host proteins, such as GlialCAM. This interaction may trigger an immune response against GlialCAM-expressing cells in the CNS, leading to inflammation and demyelination.
In support of this hypothesis, we have detected IgG4 antibodies directed against MAP_HSP70 533-545 in the serum of RR-MS patients (~50%), particularly in patients with higher EDSS scores. Normally, IgG4 antibodies constitute around 5% of total IgG immunoglobulins [27]. The presence of IgG4 in RR-MS patients may be attributed to chronic exposure to MAP, primarily through the fecal-oral route. In Italy, paratuberculosis is endemic in domestic livestock in Sardinia [28], and several studies have demonstrated the presence of specific antibodies and MAP DNA in patients with autoimmune disorders, including MS, Crohn's disease, and type 1 diabetes [29]. As for the epidemiological situation of paratuberculosis in Japan, the disease's prevalence is relatively low compared to Western countries [30]. However, seroprevalence studies have indicated the presence of IgG1 and IgG4 antibodies against MAP antigens in the serum of healthy individuals [20], as well as IgE in individuals with allergies [31], suggesting that the Japanese population may also be exposed to the mycobacterium, likely through the consumption of contaminated dairy products [20].
Furthermore, certain HLA DRB1*0405 alleles have been associated with an increased risk of developing MS in specific countries, such as Sardinia and Japan [32,33], as well as an increased susceptibility to IgG4-related disease in the Japanese populations [34].
Although the prevalence of MS is higher in Western countries compared to East Asia [35], one hypothesis is that individuals carrying this specific haplotype and exposed to MAP antigens, including MAP_HSP70 533-545 , may be more susceptible to developing autoimmune disorders.
While there is limited literature on the effect of therapy on IgG4 in MS, a clinical study involving 29 Greek RR-MS patients demonstrated an increase in IgG4 levels after 24 months of treatment with alemtuzumab, a monoclonal antibody targeting CD52 [36]. Moreover, patients with higher IgG4 titers were more likely to develop secondary autoimmune disorders such as Crohn's disease and thyroid-related disorders [36].
We have also demonstrated the direct impact of MAP_HSP70 533-545 on the development of EAE. In C57Bl/6J mice, the exacerbation of EAE was likely attributed to autoreactive MAP_HSP70 533-545 T-cells generated in response to immunization. Previous studies on EAE have revealed that various self-antigen specific T-cells contribute to autoimmune inflammation [37]. The delayed onset but more severe disease observed in mice may be associated with the chaperoning effect of HSP70 on regulatory T (Treg) cells' function [38], potentially modulating the trafficking of Tregs between the periphery (where their accumulation may delay the onset of EAE) and the CNS (where a defective number is linked to disease severity).
Interestingly, recent research has suggested that Bacillus Calmette-Guérin vaccination may reduce the incidence of MS through cytokine-induced IL-10 secreting CD8 T-cells, indicating that certain mycobacteria may have a role in disease prevention [21].
Although we have provided evidence supporting the association between MAP and the pathogenesis of MS, further investigation is needed to determine whether the antigenic components of MAP are recognized by T-cells and/or disease-causing autoreactive antibodies or whether these antigens solely contribute to epitope spreading. To assess the encephalitogenic effect of MAP_HSP70 533-545-specific T-cells during the effector phase of EAE, future experiments will involve inducing passive EAE through the adoptive transfer of these cells.
Moreover, additional research will be conducted to comprehend the mechanism of infection-mediated neuroinflammation, with a specific focus on the role of mitochondrial dysfunction. Mitochondrial dysfunction has been associated with the development of both MS and EAE [39,40], as well as with mycobacterial infection [41].

Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.
Data Availability Statement:
The data underlying this article will be shared on reasonable request to the corresponding author. | 2023-07-12T08:15:57.747Z | 2023-06-25T00:00:00.000 | {
"year": 2023,
"sha1": "a66d1733b4812fd859d5ae1589d0c0ec75a6a1f3",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2075-1729/13/7/1437/pdf?version=1687674738",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "97228c82eb721e6698137388ab2d67d0aa305a90",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": []
} |
214270243 | pes2o/s2orc | v3-fos-license | Uncertainty Study of Reflectance Measurements for Concentrating Solar Reflectors
The solar reflector is one of the main components of concentrated solar thermal systems. Therefore, accurate knowledge of its solar-weighted, near-specular reflectance is highly important. Currently, this parameter cannot be properly measured with a single commercial instrument. There is a great interest in having a suitable procedure that can guarantee the accuracy of reflector quality analysis, which already led to the publication of an international measurement guideline (title “Parameters and method to evaluate reflectance properties of reflector materials for concentrating solar power technology”). Still, more research work is needed to improve the state of the art. At present, both the specular reflectance and the spectral hemispherical reflectance are measured by using commercial portable reflectometers and spectrophotometers, respectively, to gain enough information. This article concentrates on the evaluation and calculation of the type-B (nonstatistical) uncertainties associated with these employed instruments and, therefore, leads to a more accurate definition of the measurement uncertainty. Considering type-B uncertainty, the expanded uncertainties of measurements for most of the reflector types are $U_{\mathrm {B,ref}} = 0.006$ for monochromatic specular reflectance and $U_{\mathrm {B,spec}} = 0.016$ for solar-weighted hemispherical reflectance.
On the one hand, photovoltaic energy uses solar cells to directly generate electricity; this technology has been widely used for electricity production for many years [2]. On the other hand, CST technology has been under investigation for more than a century, but it has been commercialized only in the last four decades [3]. CST technology has been delayed in market development since the 1980s because of market resistance to large plant sizes and poor political and financial support from incentive programs [4]. In the last decade, public programs in several countries around the world (led by Spain and the USA) have promoted rapid growth in both the basic technology and the market establishment [5].
All CST systems, from line focusing to point focusing ones, are based on large areas covered by solar concentrators (i.e., reflecting mirrors with the proper shape) that concentrate the direct solar radiation into a receiver where a circulating fluid increases its enthalpy. The deployment of this technology is linked to the development of cost-effective components, which, in the case of the optical concentrator, means durable reflectors with high solar-weighted near-specular reflectance. Consequently, the proper assessment of this optical parameter is a crucial issue that is receiving special attention from the international solar community [6].
Research work of the last few years regarding the reflectance measurement procedure [7]- [11] and instruments [12]- [16] of solar mirror materials has advanced as far as the publication of a reflectance measurement guideline within the SolarPACES Task III [17]. This procedure is already established as the standard protocol to be followed by the official norms of solar reflectors [18]. According to this guideline and due to the lack of appropriate measurement equipment [19], the evaluation of nonhighly specular mirrors (such as aluminum or polymer film ones), as well as aged and soiled mirrors, must consider the results obtained separately from instruments such as specular reflectometers and spectrophotometers. In addition, a proper optical characterization of solar reflectors involves not only defining a precise measurement protocol to achieve a correct real value but also providing the specific and detailed uncertainty of the measurement instruments.
This article is focused on a thorough study of the evaluation and calculation of the type-B (nonstatistical) uncertainties associated with the commercial instruments typically employed and, thus, leads to a more accurate and deeper definition of the reflectance uncertainty. Moreover, the goal of this article is to provide proof and clarification about certain details of the reflectance measurement procedure and to offer suggestions on how to enable higher accuracy. This study was conducted with the most commonly used reflectance measurement devices, that is, a Perkin Elmer (PE) Lambda 1050 spectrophotometer and a 15R-USB Devices and Services (D&S) portable reflectometer.
II. METHODS
This section describes a frequently employed method to measure the reflectance of concentrating solar reflectors, the instruments and samples used in this study, and the methodology followed to verify the measurement procedure and calculate its uncertainty. Nomenclature applied is according to [18] and [20].
A. Reflectance Definitions
The most precise way to characterize a reflector material for CST applications is to measure its specular reflectance, ρ λ,ϕ (λ, θ i , ϕ), as a function of the wavelength, λ, the incidence angle, θ i , and the acceptance angle, ϕ, in a proper range [21]. It is not possible to directly obtain this parameter with a unique instrument at the current state of the art. As a compromise solution, the required information to optically characterize solar reflectors (mainly in nonspecular, aged, and soiled mirrors) is typically obtained by using two different commercial instruments [7], [17].
Spectral reflectance is generally obtained with a commercial spectrophotometer that measures the hemispherical reflectance, thus covering the ultraviolet (UV), visible (Vis), and near-infrared (NIR) ranges. This means that all light reflected in the hemisphere is measured regardless of its directionality. This optical parameter is the spectral hemispherical reflectance, ρ λ,h (λ, θ i , h), which depends on λ and θ i , where, in this case, ϕ is denoted by h to indicate the complete hemisphere. Hemispherical spectrum is weighted with the solar spectrum to obtain the solar-weighted value in a specific λ range, ρ s,h ([λ a , λ b ], θ i , h) [22]. This parameter allows assessing the optical behavior in the solar spectrum but presents the disadvantage of missing the required information about its specular performance.
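As an illustration of this weighting step, the solar-weighted value can be computed numerically once the measured spectrum and a reference solar spectrum (for example, ASTM G173 direct normal irradiance) are interpolated onto a common wavelength grid; the function, variable names, and integration limits below are illustrative assumptions, not prescriptions of the guideline.

```python
import numpy as np

def _trapz(y: np.ndarray, x: np.ndarray) -> float:
    """Plain trapezoidal integration (kept explicit to avoid NumPy version differences)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def solar_weighted(lam_nm: np.ndarray, rho_hem: np.ndarray, g_solar: np.ndarray,
                   lam_a: float = 320.0, lam_b: float = 2500.0) -> float:
    """rho_s,h = integral(rho * G) dlam / integral(G) dlam over [lam_a, lam_b]."""
    m = (lam_nm >= lam_a) & (lam_nm <= lam_b)
    return _trapz(rho_hem[m] * g_solar[m], lam_nm[m]) / _trapz(g_solar[m], lam_nm[m])
```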
Since solar concentrators only use the reflected radiation in the near-specular direction, their performance highly depends on the amount of beam spread. Therefore, the solar specular reflectance, ρ s,ϕ ([λ a , λ b ], θ i , ϕ), must also be characterized. This value describes the amount of energy reflected around the specular direction according to the law of reflection and is bounded by ϕ. Measurement of spectral specular reflectance is problematic at this time because adequate instrumentation that can define representative ϕ for CST technologies and in a wide λ range is not marketed although, in the past years, several prototype instruments and methods have been developed to determine it [10], [13]- [15]. In commercial reflectometers, specular reflectance can only be appropriately measured at a certain selected λ and a fixed near-normal ϕ. The value supplied is the monochromatic specular reflectance, ρ λ,ϕ (λ, θ i , ϕ), which gives useful information about the specular behavior but is insufficient because only a small λ range is registered.
B. Measurement Devices
This section presents a description of the measurement devices analyzed in this work. In both cases, the two specific instruments selected were chosen because they are the most commonly used at present [19], and to the best of the authors' knowledge, they are the most appropriate for measuring the reflectance of solar reflectors.
1) Reflectometer: The 15R-USB by D&S Co. was selected to measure ρ λ,ϕ (see Fig. 1). This portable reflectometer was developed in cooperation with the Sandia National Laboratories [23] with the specific purpose of measuring reflectors for CST systems. Table I presents the main features of this device. The reasons for choosing this equipment are that the specular reflectance is directly measured by choosing ϕ in the appropriate range; it allows for the adjustment of the beam path, so that the first and second surfaces or curved mirrors can be measured; and it has no influence on external stray light and it is suitable for field measurements. Although the instrument comes with a reference mirror of known reflectance (see bottom right of Fig. 1), which can be inserted in a fixed position for calibration, an external calibration is recommended [19]. The instrument produces a collimated beam to a diameter of 10 mm, so that all of the reflected beam can be collected by the 22-mm-diameter receiver lens. This device has been extensively employed in research activities by a number of institutions [28]- [32].
2) Spectrophotometer: ρ λ,h was measured with a Lambda 1050 two-beam scanning spectrophotometer by PE, equipped with an integrating sphere accessory of 150 mm diameter (see Fig. 2). In general, this PE device is frequently used for a wide range of applications to measure the transmittance, the absorptance, and the reflectance of solutions and opaque materials and has been extensively employed to characterize solar reflectors [33]- [35]. Table I presents the main features of this device. Measurements were taken at a 5-nm step.
C. Sample Description
Several solar reflector types are employed in CST applications. They can be classified as silvered-glass reflectors, aluminum reflectors, and silvered-polymer films [36]. All types of solar reflectors were included in this article. Table II and Fig. 3 show the sample identification (ID) and a brief description of the samples used in the tests. Polymer films were applied to a glass substrate to give them enough rigidity. Table III presents the main features of the coupons used as reference standards, as given by the manufacturers, who also provided the calibration data. The three standard mirrors listed in Table III are named external because they can be used for all the instruments. However, there are other kinds of reference mirrors labeled as internal mirrors, which are provided by the manufacturer (D&S) for each specific reflectometer. The uncertainty data do not include the coverage factor (see Section II-D).
Finally, Fig. 4 shows the calibrated hemispherical spectra of all reference standards described in Table III. As can be observed, the highest reflectance values are reached by the two glass references, and the aluminum spectrum is considerably lower in the visible range.
D. Uncertainty Calculation
The process to calculate uncertainties associated with experimentally measured magnitudes was done by following the international "Guide to the Expression of Uncertainty in Measurement" [38]. The reflectance uncertainties refer to the specular and hemispherical reflectance measurements performed with the corresponding instruments. The uncertainty of the monochromatic specular reflectance, u_ρλ,ϕ(λ,θi,ϕ), is obtained by considering the type-A and type-B uncertainties of the reflectometer, u_refl:

$u_{\mathrm{refl}} = \sqrt{u_{\mathrm{A,refl}}^2 + u_{\mathrm{B,refl}}^2}$ (1)

On the other hand, the uncertainty of the spectral or solar hemispherical reflectance, u_ρλ,h(λ,θi,h), measured with the spectrophotometer, u_spec, is calculated as follows:

$u_{\mathrm{spec}} = \sqrt{u_{\mathrm{A,spec}}^2 + u_{\mathrm{B,spec}}^2}$ (2)

where u_A,refl and u_A,spec are the objective uncertainties obtained from the statistical analysis of series of observations, and u_B,refl and u_B,spec have a subjective character and are calculated by means other than the statistical analysis of series of observations. Therefore, the purpose of the type-A and type-B classifications is to indicate the two different ways of evaluating uncertainty components and is for convenience of discussion only; the classification is not meant to indicate that there is any difference in the nature of the components resulting from the two types of evaluation [38]. Both of them are based on probability distributions (Gaussian, rectangular, triangular, and trapezoidal). The calculation of the uncertainty linked to a set of observations depends on the probability distribution, the most common one being the Gaussian or normal distribution, whose uncertainty is the standard deviation [38]. The normality of the distributions was checked by hypothesis testing through the Shapiro-Wilk, ANOVA, and Wilcoxon tests. The type-B uncertainty of the measurement instruments was derived from the available information given by the manufacturers and from statistical observations, performed through specific experiments, of those key factors whose influence is otherwise missing. Ten repetitions were performed in each experiment with both instruments.
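A minimal numerical sketch of this combination is given below; the helper names are illustrative, the type-A estimate is taken as the sample standard deviation of the repetitions (as stated above), and the coverage factor k = 2 used for the expanded uncertainty is an assumption for illustration, not a value taken from the article.

```python
import numpy as np

def type_a(repetitions) -> float:
    """Type-A uncertainty from repeated observations (here, the sample standard deviation)."""
    return float(np.std(np.asarray(repetitions, dtype=float), ddof=1))

def combined(u_a: float, u_b: float) -> float:
    """Eqs. (1)/(2): root sum of squares of the type-A and type-B contributions."""
    return float(np.hypot(u_a, u_b))

def expanded(u_c: float, k: float = 2.0) -> float:
    """Expanded uncertainty U = k * u_c (k = 2 assumed for illustration)."""
    return k * u_c

# Hypothetical example with ten reflectance repetitions and a type-B value of 0.005:
# u_a = type_a([0.953, 0.954, 0.952, 0.953, 0.955, 0.954, 0.953, 0.952, 0.954, 0.953])
# print(expanded(combined(u_a, 0.005)))
```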
E. Type-B Uncertainty of the Reflectometer
The uncertainties considered relevant in the ρλ,ϕ measurement process with the reflectometer are given as follows:
1) accuracy, u_refl,acc (given by the manufacturer as reproducibility, Table I);
2) resolution, u_refl,res (given by the manufacturer, Table I);
3) calibration quality of the reference mirror, u_refl,cal (given by the manufacturer, Table III);
4) influence of the ambient temperature, u_refl,tem;
5) influence of the reflectometer unit itself, u_refl,unit;
6) influence of the reference mirror (external or internal), u_refl,ref;
7) stability over time, u_refl,time;
8) influence of the ambient light, u_refl,light;
9) influence of the acceptance angle, u_refl,ϕ;
10) influence of the reflectometer's central screw position, u_refl,scr;
11) influence of the operator, u_refl,ope;
12) influence of the curvature, u_refl,curv.
As there is not any known relationship between the abovementioned uncertainties, the type-B uncertainty of the reflectometer, u_B,refl, can be calculated as follows:

$u_{\mathrm{B,refl}} = \sqrt{u_{\mathrm{refl,acc}}^2 + u_{\mathrm{refl,res}}^2 + u_{\mathrm{refl,cal}}^2 + u_{\mathrm{refl,tem}}^2 + u_{\mathrm{refl,unit}}^2 + u_{\mathrm{refl,ref}}^2 + u_{\mathrm{refl,time}}^2 + u_{\mathrm{refl,light}}^2 + u_{\mathrm{refl,\phi}}^2 + u_{\mathrm{refl,scr}}^2 + u_{\mathrm{refl,ope}}^2 + u_{\mathrm{refl,curv}}^2}$ (3)

Only the uncertainties associated with the first three influences are known. The following sections include a detailed description of all the related tests performed to calculate the influence of the other parameters and the corresponding uncertainties where necessary.
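Because (3), like its spectrophotometer counterpart (4) below, is simply a root sum of squares over independent contributions, it can be evaluated directly from a table of component uncertainties; the sketch below uses hypothetical values, not the results of this article.

```python
from math import sqrt

def type_b_rss(components: dict) -> float:
    """Root sum of squares of mutually independent type-B contributions (Eqs. (3) and (4))."""
    return sqrt(sum(u ** 2 for u in components.values()))

# Hypothetical contributions (not the values measured in this article); any influence
# found to be negligible can simply be set to zero.
u_b_refl = type_b_rss({"acc": 0.002, "res": 0.001, "cal": 0.002,
                       "tem": 0.0, "unit": 0.001, "ref": 0.001,
                       "time": 0.0, "light": 0.0, "phi": 0.001,
                       "scr": 0.0, "ope": 0.001, "curv": 0.0})
```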
1) Ambient Temperature: Although the reflectance is an optical parameter influenced by the temperature [39], according to the instrument manufacturer, any temperature effect is internally minimized [25]. An experiment was performed to verify it, which consisted in measuring all the mirror samples ten times every 5 °C within the operating temperature range, T = [10, 45] °C. The test was performed inside a weathering chamber model SC 340 manufactured by ATLAS (see Fig. 5). For each temperature step, the mirror sample stood inside the chamber together with the reflectometer for 30 min before the measurement. The experiment was done by calibrating the instrument only at the standard ambient temperature (T = 22.5 °C) as well as by performing intermediate recalibrations at each temperature step. In the rest of the tests, an ambient temperature of 22.5 °C was kept.
2) Reflectometer Unit Itself:
This test is focused on the calculation of the uncertainty derived from the use of one specific instrument. It was performed with five different D&S reflectometers with serial numbers 060, 110, 116, 117, and 119 (see Fig. 6). To avoid any influence of the sample behavior, the most stable and homogeneous sample (4.0-mm silvered-glass) was employed in the measurements (ten repetitions with each unit), and the results obtained are considered valid for the rest of the materials. The device with serial number 117 was chosen for the rest of the tests.
3) Reference Mirror: The aim of this test was to calculate the uncertainty associated with the calibration mirror by comparing the internal one with the 4.0-mm silvered-glass external reference mirror (ext4) (see Fig. 7). To avoid any influence of the sample behavior, the most stable and homogeneous sample (4-mm silvered-glass) was measured (ten times with each reference mirror), and the results achieved are valid for the rest of the materials. The reflectometers were calibrated with this external reference mirror in the rest of the testing campaign.
4) Stability Over Time:
This test was performed to check the stability of the instrument over time while keeping the rest of the operating conditions constant. The 4-mm silvered-glass sample was measured ten times during a full working day (i.e., 7 h).
5) Ambient Light:
According to the reflectometer's manufacturer, the light source is chopped electronically at a rate of about 90 Hz, so that the stray light will not cause measurement errors [24]. A test was conducted to check the validity of this system. The 4.0-mm silvered-glass sample was measured ten times in an illuminated room (ambient light) and another ten times in dark conditions by covering the instrument and the operator with an opaque fabric (see Fig. 8). The rest of the tests were performed with ambient light.
6) Acceptance Angle: The operator of the D&S reflectometer can select one out of three apertures that define ϕ in the path of the reflected beam by rotating the thumbwheel on the side of the instrument [25]. A test was performed to evaluate the influence of ϕ on the reflectance of all samples because it is well-known that the scattering phenomenon depends on the material type [14]. Different ϕ used were ϕ = {7.5, 12.5, 23.0} mrad (ten repetitions were done with each ϕ). The ϕ selected in the rest of the study was 12.5 mrad because it is the one recommended for parabolic-trough collectors [7].
7) Central Screw's Position:
The D&S reflectometer has a central screw that must be adjusted depending on the frontlayer thickness. The length of the central screw controls the distance of the incidence beam path before reaching the reflective surface of the sample. According to the manufacturer instructions [25], for the first-surface mirrors, it has to be screwed to the maximum extended position. For the secondsurface mirrors, it should be adjusted by an amount equal to the thickness of the mirror divided by the index of refraction of the material. Assuming that the index of the refraction of glass is 1.5, the adjustment is about 1 turn for each 1.27 mm thickness. An experiment was done to verify if modifications of the central screw position cause variations in the reflectance. The reflectance of three mirror samples with different frontlayer thicknesses (aluminum #1-first surface and 0.95-mm silvered-glass and 4.0-mm silvered-glass-second surface) was checked at ten positions of the central screw, including the corresponding turn according to the mirror thickness. The results obtained are considered valid for the other samples with similar front-layer thickness. The rest of the tests were performed with the central screw in a fixed position.
8) Operator:
This test was performed to calculate the uncertainty linked to the experience of the operator. The samples were measured by three different operators with high, medium, and low experience (ten repetitions with each operator). In this case, all the samples were measured because the difficulty of the calibration process depends on the material type. The rest of the studies were carried out by the expert operator.

9) Curvature of the Mirror: The influence of the mirror curvature was studied with three different flat solar mirrors (see Table II): 0.95-mm silvered-glass, silvered polymer film #1, and aluminum #2. A bending machine was used to modify the shape of the flat samples, thus providing the same curvature as a real parabolic-trough collector (PTC) facet. To do that, the samples were glued to a real PTC facet piece (30 × 40 cm²) and submitted to a vacuum atmosphere (see Fig. 9). In this case, the selected mirror sample size was 30 × 13 cm² to have enough size to copy the curvature of a real facet. Every sample was measured in the same spot in both the flat and curved states. The comparison of the results for the flat and curved shapes gives the possible influence of the curvature. To have enough statistical information, this measurement was repeated ten times.
F. Type-B Uncertainty of the Spectrophotometer

The ρ λ,h measurements done with the spectrophotometer also depend on several nonstatistical uncertainties. For this article, the type-B uncertainties considered relevant in the whole reflectance measurement process with the spectrophotometer are given as follows: 1) accuracy, u spec,acc (given by the manufacturer, Table I); 2) resolution, u spec,res ; 3) calibration quality of the reference mirror, u spec,cal (given by the manufacturer, Table III); 4) influence of the ambient temperature, u spec,tem ; 5) influence of the spectrophotometer unit itself, u spec,unit ; 6) influence of the reference mirror (external), u spec,ref ; 7) stability over time, u spec,time ; 8) influence of the ambient light, u spec,light ; 9) influence of the detector response time, u spec,det ; 10) influence of the curvature, u spec,curv .
There is not any known relationship between the abovementioned uncertainties. Therefore, the combined type-B uncertainty of the spectrophotometer, u B,spec , can be calculated by the following equation:

u B,spec = √(u spec,acc² + u spec,cal² + u spec,res² + u spec,tem² + u spec,unit² + u spec,ref² + u spec,time² + u spec,light² + u spec,det² + u spec,curv²).    (4)

The following sections include a detailed description of all the tests performed to evaluate the unknown uncertainties of the spectrophotometer (i.e., all the previously listed except the first two).
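Because the same root-sum-of-squares rule is used in (3) for the reflectometer and in (4) for the spectrophotometer, a minimal Python sketch of the combination step may be helpful. It is an illustration written for this text; the component names and values below are placeholders, not the measured results of this study.

```python
import math

def combined_type_b(components):
    """Combine independent type-B uncertainty components by root-sum-of-squares."""
    return math.sqrt(sum(u ** 2 for u in components.values()))

# Hypothetical component values (in reflectance units), used only to show the call.
components = {
    "accuracy": 0.002,
    "resolution": 0.0006,
    "calibration": 0.0015,
    "stability": 0.0004,
}

print(round(combined_type_b(components), 4))
```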
1) Ambient Temperature: According to [17], the temperature is a factor that could significantly change the accuracy of the spectrophotometer. To check this influence, the device was situated in a room where the temperature was controlled by a thermostat and verified by a thermometer. All samples were measured ten times at T = {16, 26} °C. These minimum and maximum ambient temperatures were chosen according to the usual working temperature conditions in a laboratory. In the rest of the tests, T = 22.5 °C was kept.
2) Spectrophotometer Unit Itself: In a round-robin test performed in an earlier research work in 2010 [7], a set of samples similar to the ones selected in this study was evaluated using the spectrophotometer of this study and a Lambda 950 by PE at the DLR Quarz Laboratory, Cologne, Germany, keeping the rest of the measurement conditions constant. This gives hints of the uncertainty associated with using different instruments at different laboratories. The results followed a normal distribution with a standard deviation of σ ≤ 0.002. Therefore, an uncertainty of u spec,unit = 0.002 is considered in this study. This indicates that this kind of measurement can be performed very stably if a good calibration of the reference mirror is ensured and the same method is applied. However, it is advisable to always use the same spectrophotometer because, in this case, the uncertainty would be negligible. In this article, all the experiments were performed with the same spectrophotometer.
3) Reference Mirror: It is pointed out in the literature [40], [41] that the reference sample for measurements with an integrating sphere should have the same properties as the test sample to be measured to acquire accurate results. This refers to its specularity, the grade of reflectance, and also if it is a first or second surface mirror. The purchase of several kinds of calibrated reference mirrors may be expensive, and it is also hindered by the lack of products in the market. It would be easier and cheaper to settle on only one stable reference mirror that can be used for all types of test samples.
To find out whether this can be realized without compromising the accuracy, a set of tests was performed with two representative samples, the aluminum #1 and 4-mm silvered-glass samples. Each sample was measured with all three reference mirrors listed in Table III. For the rest of the experiment, the reference mirror selected was the 4-mm silvered-glass one (ext4).
4) Stability Over Time:
The SolarPACES Reflectance Guideline suggests that the stability over time is a parameter that could affect the spectrophotometer uncertainty [17]. Therefore, a test was carried out to assess this influence. In this case, the 4-mm silvered-glass sample was measured (to avoid any influence of the material itself) every 3 min during 14 h (i.e., 280 times), while keeping the rest of the measurement conditions constant.
5) Ambient Light:
Based on years of experience, it has been observed that external light has a major influence on the reliability of a measurement with the spectrophotometer. For this reason, a test was performed to assess the importance of controlling this parameter. A homemade cover made of an opaque plastic was manufactured by the OPAC operators. The spectrophotometer was totally covered with this cover, and ten measurements of the 4.0-mm silvered-glass sample were done with and without it. For the rest of the tests, the cover was used.
6) Detector Response Time: Another important factor observed in the laboratory that could affect the result is the detector response time, which can be varied in a wide range. Actually, it can be changed both within a wavelength range and across the whole spectrum. In this test, the reflectance variability was studied when the spectrophotometer worked with a detector response time of 1 and 0.04 s, which are the slowest and fastest, respectively. In addition, a mix of these two detector response times was checked. Ten measurements were performed for each response time with the 4-mm silvered-glass sample. It should be taken into account that at 1 s, the measurements involved around 10 min, four times more than at 0.04 s. For the rest of the tests, a combination of detector response times was selected, according to the results obtained (see Section III-B6).
7) Curvature of the Mirror: In this article, reflectance measurements were taken in the same spot for the flat and curved mirror samples in order to verify the uncertainty that could provoke the curvature, as it was previously explained in Section II-E9.
III. RESULTS
This section includes the results obtained from all the tests performed to calculate the unknown type-B uncertainties associated with the reflectometer and the spectrophotometer measurements.
A. Type-B Uncertainties of the Reflectometer
Results of the tests described in Section II-E to calculate the type-B uncertainties of the reflectometer are presented in this section.
1) Ambient Temperature: In general, differences due to temperature changes are not treated as an uncertainty but as a measurement correction. This behavior depends on the reflector materials because their chemical structure may be affected by thermal expansion. The results of the ambient temperature study are shown in Fig. 10, where the reflectance differences, Δρ λ,ϕ , with respect to the value at the lowest temperature (T = 10 °C) are presented for all reflector samples.
As shown in Fig. 10, temperature variations affect the reflectance when the calibration is performed only once. The graph shows that the measured reflectance of silvered-glass and polymer samples increases when the ambient temperature increases, mainly for values above 25 °C. However, the measured reflectance of the aluminum #2 sample shows the opposite behavior, as it decreases when the temperature increases. Finally, the measured reflectance of the aluminum #1 sample does not show a clear tendency.
In contrast, when an intermediate calibration is carried out at every temperature step (see Fig. 11), no influence on the reflectance was observed for silvered materials (both silvered-glass and polymer film reflectors). However, in the case of the aluminum #1 and #2 samples, it was noticed that their reflectance decreases when the temperature rises. The change in the measured reflectance between 10 °C and 45 °C is very pronounced in aluminum materials, reaching values up to −0.0068 ppt for aluminum #1 and −0.017 ppt for aluminum #2. This change originates from a change in sample flatness, which can strongly affect specularity (the mirror thickness is only 0.5 mm). The changes in sample flatness cannot be compensated by the calibration mirror.
Taking into consideration the results achieved in the two tests performed, it is concluded that the influence of the temperature on the reflectance measurement can be perfectly compensated for silvered materials (both silvered-glass and polymer film reflectors) when the reflectometer is recalibrated using a reference mirror of the same material type, but there is an increase in the measured value if no recalibrations are applied (because the instrument itself is affected by the temperature). Consequently, no corrections are needed for these silvered materials if the reflectometer is recalibrated when temperature variations occur. A frequent calibration is recommended (every 5 °C) if relevant temperature changes occur between one measurement and another (i.e., outdoor measurements in large solar plants).
On the other hand, when the effect of the ambient temperature on the reflectometer response is balanced through frequent recalibrations (Fig. 11), the effect of the temperature on the measured reflectance of aluminum reflectors is clearly seen. This might be due to the different nature of the reference mirror used. This phenomenon is attenuated by the reflectometer behavior due to temperature, provoking the tendency observed in Fig. 10, when no recalibrations are applied. As frequent calibrations shall be done (to eliminate the influence of the temperature on the device), the following corrections should be applied to the reflectance measurement to obtain the correct value, ρ λ,ϕ , for the aluminum #1 (5) and aluminum #2 (6) samples:

Δρ λ,ϕ = −0.0001 · T + 0.0006;  R² = 0.9041    (5)
Δρ λ,ϕ = −0.0004 · T + 0.0001;  R² = 0.9641.    (6)
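To show how these linear fits could be evaluated in practice, the short Python sketch below is offered as an illustration written for this summary: only the coefficients of (5) and (6) are taken from the text, the example temperatures are arbitrary, and the sign convention for combining the correction term with a raw reading should be taken from the original figures.

```python
# Illustrative evaluation of the temperature-correction terms of (5) and (6).
# Coefficients are the fitted values quoted above; temperatures are example inputs.

def correction_aluminum_1(temperature_c):
    return -0.0001 * temperature_c + 0.0006   # Eq. (5)

def correction_aluminum_2(temperature_c):
    return -0.0004 * temperature_c + 0.0001   # Eq. (6)

for T in (15.0, 25.0, 40.0):
    print(T, round(correction_aluminum_1(T), 4), round(correction_aluminum_2(T), 4))
```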
As the correction equation depends on the aluminum type, it is advisable to perform a similar experiment for each aluminum reflector.
2) Instrument Unit Itself: The results of the test carried out with the five reflectometers are presented in Table IV, as the average and standard deviation of the ten reflectance measurements taken. As can be seen, the measurements taken with every individual reflectometer have a null standard deviation, and when the results of the five units are considered together, the uncertainty associated with the instrument unit is u refl,unit = 0.00014 (see Section IV).

3) Reference Mirror: Table V includes the results of the test performed to study the influence of the calibration mirror type, thus indicating the average and standard deviation of the ten reflectance measurements. As can be observed, the standard deviation of the measurements is slightly higher when the reflectometer's own reference is used. This could be because the internal mirror normally undergoes higher degradation than the external one (because it cannot be recalibrated or replaced in the lab in the case of deterioration). Therefore, it is recommended to use an appropriate external reference mirror for the calibration process of the reflectometer in order to obtain more stable results. If the results of this test are treated independently of the reference mirror used (i.e., considering the 20 values obtained), the standard deviation of the Gaussian distribution achieved is σ = 0.0010. Consequently, an uncertainty of u refl,ref = 0.0010 can be considered when reflectance values measured with different calibration mirrors are compared.
Regarding practical issues, it is important to consider that the internal calibration mirrors are subjected to certain factors that might influence the calibration quality: soiling (sometimes difficult to eliminate), positioning instability, misalignment, etc. On the other hand, the use of an external calibration mirror might lead to inconveniences for outdoor measurements. This external reference must be recalibrated periodically with a master standard and replaced in the case of deterioration.
4) Stability Over Time:
The mean reflectance after 7 h is ρ s,h = 0.961 and its σ = 0.073. Data follow a triangular distribution, and consequently, the uncertainty associated with this parameter is calculated from the variance of this probability distribution by applying the following equation:

u refl,time = a/√6,    (7)

where a is half of the triangle base. As in this case a = 0.001, u refl,time = 0.0004. From a practical point of view, this means that this uncertainty should be added when the instrument is being used during a full working day (without any recalibration).

5) Ambient Light: Table VI shows the results of the test carried out to check the influence of the ambient light on the reflectometer. In this case, no changes were detected in the measured reflectance values and, as a consequence, the standard deviation is null and u refl,light = 0.000. From a practical point of view, this means that the measurements can be taken with ambient light without any problem.

6) Acceptance Angle: In this case, all samples were included because it was observed that materials with lower specularity are more affected by changes in ϕ. As shown in Table VII, the reflectance measurements of the three silvered-glass samples present null standard deviations at ϕ = {12.5, 23.0} mrad and negligible ones at ϕ = 7.5 mrad, which indicates a good homogeneity for every ϕ. Also, the average reflectance is quite similar for the different ϕ, indicating that the scattering in this type of reflector is very low. As a consequence, these results confirm that silvered-glass reflectors are highly specular and that they can even be measured independently of ϕ (as was already suggested in [11]). If this is the case, the uncertainty to be considered, u refl,ϕ , was derived from the standard deviation of the normal distributions (i.e., the series of 30 data for each sample type), as indicated in the last column of Table VII. Regarding the silvered-polymer films, the behavior of the two samples analyzed is quite different. On the one hand, the standard deviations at each ϕ are much lower for polymer #1 than those for polymer #2. In addition, the differences in the average reflectance from one ϕ to another are higher in polymer #2, thus indicating a higher scattering for this sample. Hence, it is demonstrated that the scattering in this type of mirror depends on the polymer layer deposited onto the silver layer and on the degradation status. Finally, both aluminum mirrors present similar standard deviations for the three ϕ, with higher values than the rest of the materials, as well as significant differences between average reflectance values, thus indicating greater scattering in aluminum reflectors than in silvered ones. In general, the discrepancies among the values obtained at different ϕ for silvered-polymer film and aluminum samples point out that this parameter must be properly selected to measure these kinds of reflectors. Therefore, the calculation of u refl,ϕ makes no sense for aluminum and polymer films.
7) Central Screw Position:
The results of the study about the influence of the central support are shown in Table VIII. As observed, null variations were noticed regarding the mirror thickness and the number of central screw turns in the two silvered-glass samples. In the case of the aluminum sample, a normal distribution with a very low standard deviation (σ = 0.0007) was obtained. If this σ is compared to the corresponding one in Table VII (aluminum #1 sample at ϕ = 12.5 mrad, ten repetitions with the central screw fixed), a similar value is achieved, which indicates that the modification of the position of the central screw is not affecting the results. Hence, the screw position showed a negligible influence on the reflectance, and it is considered that this factor does not contribute to the type-B uncertainty of the device (u refl,screw = 0.000). From a practical point of view, it is safe to always maintain the screw in one specific position and adjust the optical beam only with the two outer screws.

8) Operator: Table IX presents the results of the test performed to study the influence of the operator experience. As can be seen, the standard deviation of the measurements is slightly higher for a less experienced operator (mainly in nonglass mirrors). A more experienced operator reaches more homogeneous results, and consequently, it is highly recommended that the operator who is going to use the reflectometer receives proper training to assure stable values. In addition, it is important that all the measurements belonging to the same test campaign are done by the same operator. Finally, if reflectance measurements are compared for each sample type and regardless of the operator's level of expertise (i.e., considering the 30 values for each sample), Gaussian distributions are detected in all samples, and the uncertainty to be considered is presented in the last column of Table IX.

9) Curvature of the Mirrors: The results obtained for this experiment are presented in Table X, where the mean and the standard deviation of the specular reflectance values are depicted for both the flat and curved states of the mirror samples. In the silvered-glass mirror, no influence of the curvature is detected, as the specularity is very high. However, for silvered-polymer film #1 and aluminum #2, if the flat- and curved-shape measurements are compared, the values of u refl,curv obtained are 0.0006 and 0.001, respectively. The reason behind this influence is that the equipment is not able to compensate for the curvature of the mirror sample when it presents a certain scattering. Thus, if several specular reflectance measurements are taken at several positions of a concentrator with different curvatures, an uncertainty should be added for materials with scattering, such as aluminum or polymers.

10) Combined Type-B Uncertainty of the Reflectometer: The combined type-B uncertainty of the reflectometer, u B,refl , is calculated with the information presented in Sections III-A1-III-A9 and also given in Tables I and III, by applying (3). Considering the data given by the manufacturer (Table I), the corresponding uncertainties are u refl,acc = 0.002 and u refl,res = 0.0006, since the resolution follows a rectangular distribution. According to Table III, the uncertainty to be considered for the calibration is u refl,cal = 0.0015 if the external reference mirror used is OMT-216035-01 or OMT-214044-02 (manufactured by OMT), while the value is u refl,cal = 0.0013 for the reference PAV-D-2 (manufactured by NRC).
Consequently, the minimum type-B uncertainty for the reflectometer is obtained by (8) for the OMT references and by (9) for the NRC reference:

u B,refl = √(u refl,acc² + u refl,res² + u refl,cal²) = √(0.002² + 0.0006² + 0.0015²) = 0.003    (8)
u B,refl = √(u refl,acc² + u refl,res² + u refl,cal²) = √(0.002² + 0.0006² + 0.0013²) = 0.003.    (9)

If a coverage factor of k = 2 is considered, the minimum expanded uncertainty obtained for the reflectometer is U B,refl = 0.006 for all material samples. With respect to the other parameters studied, the ambient light and the position of the central screw did not show any impact on the results, giving null uncertainties, u refl,light = u refl,screw = 0.000. In the case of the influence of ambient temperature, no uncertainty or correction should be added for silvered reflectors when recalibrations are done every time that the temperature changes by around 5 °C, while a correction should be applied to the reflectance results for aluminum mirrors (see Section III-A1). Finally, Table XI presents the uncertainties that must be combined with the values presented in (8) or (9) for the different samples when the rest of the parameters analyzed are involved in the measurement process.
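As a quick numerical cross-check of these steps, the following Python lines (illustrative only; they re-evaluate the numbers quoted above rather than implement any tool from the original study) convert distribution half-widths into standard uncertainties, reproduce (8), and form the expanded uncertainty with k = 2.

```python
import math

# Half-width -> standard uncertainty for the distributions assumed in the text.
u_triangular = 0.001 / math.sqrt(6)   # stability test, a = 0.001  -> ~0.0004
u_rectangular = 0.001 / math.sqrt(3)  # resolution-type contribution -> ~0.0006

# Re-evaluation of Eq. (8) for the OMT external reference case.
u_b_refl = math.sqrt(0.002**2 + 0.0006**2 + 0.0015**2)
u_b_rounded = round(u_b_refl, 3)      # 0.003, as quoted in Eq. (8)
U_b_expanded = 2 * u_b_rounded        # 0.006, the expanded uncertainty (k = 2)

print(round(u_triangular, 4), round(u_rectangular, 4), u_b_rounded, U_b_expanded)
```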
Finally, if all different individual uncertainties are considered and included in (3), the value obtained is u B,refl = 0.003 for all materials, except for aluminum #2, whose uncertainty is u B,refl = 0.004. This means that in general, the uncertainty is quite independent of the material (for those typically used in CST applications) and the main source of uncertainty is the accuracy of the instrument reported by the manufacturer. If a coverage factor of k = 2 is considered, the maximum expanded uncertainty calculated for the reflectometer is U B,refl = 0.006 for all material samples, except for aluminum #2, whose expanded uncertainty is U B,refl = 0.008.
B. Type-B Uncertainties of the Spectrophotometer
This section includes the results of the tests described in Section II-F to calculate the type-B uncertainties of the spectrophotometer.
1) Ambient Temperature: According to the results obtained (see Table XII), the differences in the reflectance values measured at the two ambient temperatures studied, T = {16, 26} °C, were null for all materials. Consequently, this device is very stable in the range of operating temperatures suitable for an air-conditioned laboratory, and neither a correction nor an uncertainty must be considered in relation to this parameter (u spec,tem = 0.000) if standard laboratory ambient conditions are assured.
2) Reference Mirror: Table XIII shows the reflectance and standard deviation of the ten reflectance repetitions done when measuring the 4-mm silvered-glass sample and the aluminum #1 sample using the three different reference mirrors (see Table III). As appreciated in Table XIII, the standard deviations of the measured reflectance of the silvered-glass and aluminum samples are null when the two silvered-glass references are used (ext2 and ext4). However, the standard deviation of the measurements when the aluminum reference mirror is used is higher in both cases. Hence, it is advisable to work with a silvered-glass reference mirror (independently of the glass thickness). If no distinction between reference mirrors is made when the spectrophotometer measurements are performed, a maximum uncertainty of u spec,ref = 0.002 and u spec,ref = 0.001 can be obtained for silvered-glass and aluminum samples, respectively.
3) Stability Over Time: The mean reflectance and standard deviation of the 280 measurements performed in this test were ρ s,h = 0.947 and 0.000, respectively. This means that the spectrophotometer showed great stability over time, and no uncertainty is derived from this parameter, that is, u spec,time = 0.000.

4) Ambient Light: Fig. 12 shows the results of the test performed to check the effect of the ambient light on the spectrophotometer. This graph represents the standard deviation of the ten repetitions performed both with and without the opaque cover, as a function of the wavelength. As can be seen, slightly smaller standard deviation values were achieved when the cover was utilized (blue curve), thus indicating a higher stability of the measurements in this case.
For an easier comparison, Table XIV shows the results of the effect of the cover as the average and standard deviation values of ρ s,h . As can be observed, null variations were detected in ρ s,h when the cover was employed, while a slight influence was observed without the cover (σ = 0.0005). Consequently, it is recommended to use a cover to avoid any possible influence of the ambient light on the reflectance results. If results are compared independently of the use of this kind of protection (i.e., considering the 20 reflectance values), a Gaussian distribution is obtained, which gives an uncertainty of u spec,light = 0.0006.

5) Detector Response Time: Fig. 13 shows the standard deviation of the ten repetitions at three detector response times (the minimum, the maximum, and a mix of them) as a function of the wavelength. The mixed curve represents the result of the measurement done at the maximum response time in the whole solar wavelength range, except in λ = [600, 880] nm, where the minimum response time was selected. As appreciated, a higher standard deviation was obtained at the fastest response time (0.04 s) than at the slowest one (1 s), with the mixed curve being intermediate between them. Table XV shows the results of ρ s,h for the three response times, as well as the time consumed in each case. As the detector response time should be a compromise between the accuracy and the testing time, the mixed solution is recommended. If no attention is paid to the detector response time (i.e., the 30 measurements are considered), the maximum uncertainty associated is u spec,det = 0.0004.

6) Curvature of the Mirrors: Regarding u spec,curv , as represented in Table XVI, the three types of mirrors did not show any differences in hemispherical reflectance between the flat and curved surfaces because, in this case, the effect of the scattering provoked by the aluminum and polymer is mitigated.

Fig. 13. Standard deviation spectra of the ten repetitions for the measurements at the minimum and maximum detector response time.
7) Combined Type-B Uncertainty of the Spectrophotometer:
The combined type-B uncertainty of the spectrophotometer, u B,spec , is calculated with the information presented in the previous sections and also given in Tables I and III, by applying (4). Considering the data given by the manufacturer (Table I), the corresponding uncertainty due to the instrument accuracy is u spec,acc = 0.007. Although the resolution of the spectrophotometer is higher than that of the reflectometer, to avoid confusion when data from the two instruments are provided, it is common practice to consider the same number of decimals for both ρ λ,ϕ and ρ s,h . This means that u spec,res = 0.001. According to Table III, the uncertainty to be considered is u spec,cal = 0.0015 if the external reference mirror used is OMT-216035-01 or OMT-214044-02 (manufactured by OMT), while for the reference PAV-D-2 (manufactured by NRC) the value is u spec,cal = 0.0013. Consequently, the minimum type-B uncertainty for the spectrophotometer is calculated by (10) for the OMT references and by (11) for the NRC reference:

u B,spec = √(u spec,acc² + u spec,res² + u spec,cal²) = √(0.007² + 0.001² + 0.0015²) = 0.007    (10)
u B,spec = √(u spec,acc² + u spec,res² + u spec,cal²) = √(0.007² + 0.001² + 0.0013²) = 0.007.    (11)

If a coverage factor of k = 2 is considered, the minimum expanded uncertainty obtained for the spectrophotometer is U B,spec = 0.014 for all material samples. With respect to the other parameters studied, the ambient temperature and the stability over time did not show any impact on the results, thus giving null uncertainties (u spec,tem = u spec,time = 0.000). Finally, if all the individual uncertainties are considered and included in (4), the value obtained is u B,spec = 0.008 for all materials. This means that, in general, the uncertainty is independent of the material (for those typically used in CST applications) and the main source of the uncertainty is the accuracy of the device, reported by the manufacturer. If a coverage factor of k = 2 is considered, U B,spec = 0.016 for all material samples.
IV. DISCUSSION
The following specific remarks from the tests performed with the reflectometer might be highlighted.
1) The combined type-B uncertainty is u B,refl = 0.003 if the only factors considered are the accuracy of the instrument, its resolution, and the reference mirror used to calibrate the device. 2) Ambient temperature changes do not influence the reflectance measurement process of silvered-glass mirrors when intermediate recalibrations are carried out. However, corrections are needed for aluminum materials when temperature changes exist between measurements. Hence, a frequent calibration is recommended if relevant ambient temperature fluctuations occur. 3) Although the influence of the instrument unit is not critical (with u refl,unit = 0.00014), it is recommended to use the same instrument when a set of measurements is going to be taken. 4) It is advisable to use an appropriate external reference mirror to calibrate the reflectometer (instead of its own calibration mirror) because higher measurement stability is obtained. Moreover, internal references are more prone to deteriorate. If both types of reference mirrors are used indistinctly, the uncertainty to be added is u refl,ref = 0.0010.
5) If no recalibrations are carried out over the testing time, the contribution of this factor to the type-B uncertainty should also be considered, that is, u refl,time = 0.0004. 6) It is demonstrated that the influence of the external light on this device is null (u refl,light = 0.000). 7) The specular reflectance of silvered-glass reflectors is not highly affected by ϕ, due to their high specularity (u refl,ϕ < 0.0011). However, the reflectance values obtained for silvered polymer film and aluminum reflector samples fluctuate significantly depending on ϕ. Therefore, it is recommended to measure nonglass reflectors with the appropriate ϕ (depending on the technology). 8) It is safe to keep the central screw always in one specific position (u refl,screw = 0.000). 9) The more experienced the operator is, the more homogeneous the results are. The experience of the operator has only little influence on the accuracy of the result if he or she is trained on the correct procedure before measuring. The uncertainty in this case depends on the material type. 10) An uncertainty should be added for materials with scattering, such as aluminum or polymers, when a concentrator with different curvatures is measured, being u refl,curv = 0.001 for the aluminum and u refl,curv = 0.0006 for the polymer. 11) Finally, if all the parameters studied are considered in the type-B uncertainty calculation, u B,refl = 0.003 for all the reflectors considered in this study, except for aluminum #2, whose uncertainty is u B,refl = 0.004.
In addition, the main results obtained from the tests performed with the spectrophotometer are given as follows.
1) The combined type-B uncertainty is u B,spec = 0.007 if the only factors considered are the accuracy of the device, its resolution, and the reference mirror. 2) In the range of ambient temperatures from 16 °C to 26 °C, the spectrophotometer does not suffer reflectance changes due to the temperature (u spec,tem = 0.000). 3) Using the same measurement method but different units of the spectrophotometer gives a high reproducibility, that is, u spec,unit = 0.002. 4) Silvered-glass mirror references are the most adequate alternative for taking the measurements because the uncertainty added to the measurement is lower than that for an aluminum reference. If samples are measured indistinctly with both types of references, u spec,ref = 0.002 should be considered for silvered-glass samples and u spec,ref = 0.001 for aluminum samples. 5) Regarding the stability over time, it has been evidenced that the equipment is really stable if laboratory ambient conditions are constant (u spec,time = 0.000). 6) The influence of the ambient light is a parameter to take into account because an excessive illumination in the laboratory might affect the measurement. If measurements are carried out in bright rooms, u spec,light = 0.0006 should be added. 7) The detector response time affects the spectrophotometer measurements. The differences in standard deviation between the faster and slower detector response times are quite insignificant along the whole solar spectrum, except in the range λ = [600, 880] nm, where the slower method obtains much better results. Thus, a compromise between standard deviation and measurement time should be reached. If no attention is paid to the detector response time for this specific device, the maximum uncertainty associated is u spec,det = 0.0004, but for other types of spectrophotometers, the contribution of this factor to the uncertainty could be more relevant. 8) The influence of the curvature, u spec,curv , does not affect the uncertainty in the spectrophotometer measurements. 9) When all the influences are considered in the uncertainty calculation, the maximum uncertainty of u B,spec = 0.008 is obtained, regardless of the type of sample.
V. CONCLUSION
This article demonstrates that both the reflectometer and the spectrophotometer are adequate devices for optical measurements of reflector materials for CST technologies. Two specific commercial devices were used to perform this study. The maximum expanded type-B uncertainty calculated is U B,refl = 0.006 for monochromatic specular reflectance and U B,spec = 0.016 for solar-weighted hemispherical reflectance. Moreover, it is recommended to recalibrate the reflectometer regularly when ambient temperature fluctuations exist between measurements, to use the same equipment and the same reference mirror to calibrate the device in all the measurement processes, and to properly train the operators before using the device. Regarding the spectrophotometer, it is advisable to employ different detector response times along the spectrum in order to obtain a suitable measurement for the sake of both the time invested and the accuracy. In addition, silvered-glass reference mirrors should be used to measure both silvered and aluminum reflector specimens because of their lower uncertainty compared to aluminum references.
Francisco Buendía-Martínez was born in Larva, Spain, in 1994. He received the degree in chemistry from the University of Córdoba, Córdoba, Spain, in 2016, and the master's degree in energies and fuels for the future from the University Autónoma of Madrid, Madrid, Spain, in 2017. He is currently pursuing the Ph.D. degree with CIEMAT. His Ph.D. thesis is on "Lifetime estimation of materials for solar reflectors," where the aim is to provide correlations that predict the solar mirror lifetime.
He has participated in several scientific articles and he actively collaborates in the RAISELIFE Project as a Scientist (Grant Agreement Nr. 686008), carrying out durability tests in primary and secondary mirrors.

She has developed her research activity in CIEMAT-PSA, Almería, since 2002. She is the author of two books and three book chapters. She is the coauthor of 34 publications in international journals, 78 contributions to different international conferences and symposiums, and more than 300 technical reports. She has attended 16 international conferences. She has supervised three Ph.D. students, six master's students, and 12 undergraduate students. She has been involved in 18 EU and four Spanish funded Research and Development grants and has been the scientific responsible of 35 Research and Development cooperation agreements with industries. She is currently the Site Manager of the laboratories for optical characterization and durability testing of solar reflectors at the PSA (OPAC). Her research activities are mainly focused on the optical measurement and durability testing of solar reflectors (both in outdoor conditions and accelerated aging) and the optimization of the cleaning methods for concentrating solar power plants.
"year": 2020,
"sha1": "f0d26df56c48450c8da86ba3fb899576e849d4ae",
"oa_license": "CCBY",
"oa_url": "https://elib.dlr.de/136765/1/2020%20Buendia%20Uncertainty%20study%20of%20reflectance.pdf",
"oa_status": "GREEN",
"pdf_src": "Anansi",
"pdf_hash": "8a2824f8c244ff773c137890ef4fa73d5dc69a72",
"s2fieldsofstudy": [
"Engineering",
"Environmental Science",
"Physics"
],
"extfieldsofstudy": [
"Computer Science",
"Materials Science"
]
} |
Optical Polarization-Based Measurement Methods for Characterization of Self-Assembled Peptides' and Amino Acids' Micro- and Nanostructures
In recent years, self-assembled peptides’ and amino acids’ (SAPA) micro- and nanostructures have gained much research interest. Here, description of how SAPA architectures can be characterized using polarization-based optical measurement methods is provided. The measurement methods discussed include: polarized Raman spectroscopy, polarized imaging microscopy, birefringence imaging, and fluorescence polarization. An example of linear polarized waveguiding in an amino acid Histidine microstructure is discussed. The implementation of a polarization-based measurement method for monitoring peptide self-assembly processes and for deriving molecular orientation of peptides is also described.
Introduction
The increasing interest in the field of "organic photonics" has accelerated the search for novel optical materials at the nano-and micro-scales [1][2][3]. In organic photonics, optical elements are made of or comprise organic materials to achieve certain improved functionality aspects. One well-known example of devices that use organic photonics is organic light-emitting diode (OLED) displays [2].
One common property of many organic materials is their ability to self-assemble into ordered architectures, a property that may be considered as a natural "bottom-up" fabrication approach. The self-assembly process is reversible and does not require external energy; it is governed by specific interactions or forces between the assembled entities [4]. A self-assembly process occurs in many biological systems, and it relies upon the basic concept of supramolecular chemistry [5], which was first described by the 1987 Nobel Prize laureate Jean-Marie Lehn [6].
Amino acids serve as basic building blocks of peptides and complex biological structures and are also able to self-assemble into various ordered architectures. Decoding the amino acids and their sequences opened the avenue for the development of new materials, which are called "bioinspired" and are based on chemically synthesized peptides [7]. These synthesized peptides are also supramolecular materials, which are capable of self-organization into nanostructures with different shapes and sizes. Theoretical and experimental works have shown that peptide micro-and nanostructures are created by weak, dynamic, and reversible non-covalent interactions [1,8,9].
Fabrication of SAPA nano-and microstructures is performed by dissolving peptide or amino acid powders in different solvents [10] and allowing the resulting solutions to dry. The various SAPA architectures can be obtained by controlling different conditions that affect the self-assembly process, such as altering the amino acid's sequence [11], protecting the amine group in the peptide's molecule [12], and changing the solvent properties (e.g.,
Key Concepts in Optical Polarization
The polarization of light is one of the most remarkable phenomena in nature and has led to numerous discoveries and applications. The theory of optical polarization is reviewed in numerous books on optics [36][37][38][39]. In this paper, a brief description of some of the key concepts in optical polarization is provided in order to lay the foundation for the next sections.
Polarization describes the direction of the oscillating electric field [36][37][38][39]. The reason the electric field vector, E, was chosen to define the state of the polarization of light waves is because the electric field is involved in most light-matter interactions, and, in many media, the refractive index depends on the direction of the electric field.
There are several combinations of the amplitudes and phases of light waves that lead to two important types of polarization. These combinations, which are known as degenerate polarization states, include linearly horizontal (or vertical) polarized light (LHP/LVP) and right (or left) circularly polarized light (RCP/LCP). Figure 1a shows a representation of linear, circular, and elliptical polarized light, where the phase differences between the electric fields parallel to the x and y axes are 0, π/2, and π/4, respectively [37].
The state-of-polarization (SOP) may be represented by the polarization ellipse ( Figure 1b) or by the Poincaré Sphere [36,38,40] (Figure 1c). The SOP of light is determined by the shape of the polarization ellipse (the direction of the major axis) and its ellipticity (the ratio of the minor axis to the major axis of the ellipse). The size of the polarization ellipse determines the intensity of the electric field. The Poincaré Sphere is a three-dimensional (3D) representation of the polarization ellipse. Any point on the sphere can be expressed in terms of the spherical coordinates of the sphere, i.e., the orientation angle, θ, and the ellipticity angle, β.
The main problem with the orientation and ellipticity angles is that the angles are not directly measurable [36]. In 1852, George Gabriel Stokes laid out a set of four measurable parameters, grouped into a column vector, which were derived by time averaging the polarization ellipse [38]. These parameters are known as the Stokes polarization parameters, and the vector of these parameters is known as the Stokes vector. The Stokes parameters are written as S 0 , S 1 , S 2 , S 3 , and they can be expressed as below [38,41]:

S 0 = I H + I V
S 1 = I H − I V
S 2 = I +45 − I −45
S 3 = I R − I L

Here, I H is the intensity measured in the horizontal direction, I V is the intensity measured in the vertical direction, I ±45 is the intensity measured at ±45°, and I L and I R are the intensities measured in the left and right circular polarizations, respectively [41].
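As a practical illustration of these definitions (a sketch written for this review, not code from any of the cited works), the following Python function assembles a Stokes vector from the six intensity measurements; the input values are hypothetical.

```python
import numpy as np

def stokes_vector(i_h, i_v, i_p45, i_m45, i_r, i_l):
    """Stokes parameters from the six polarization-filtered intensity measurements."""
    s0 = i_h + i_v        # total intensity
    s1 = i_h - i_v        # horizontal vs. vertical preference
    s2 = i_p45 - i_m45    # +45 deg vs. -45 deg preference
    s3 = i_r - i_l        # right vs. left circular preference
    return np.array([s0, s1, s2, s3])

# Hypothetical example: light close to linear horizontal polarization.
print(stokes_vector(i_h=0.9, i_v=0.1, i_p45=0.5, i_m45=0.5, i_r=0.5, i_l=0.5))
```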
When polarized light travels through a polarizing material, a new polarization state is formed at the output of the material. As such, the transformation between the input polarization state (marked as the vector S in ) and the output polarization state (marked as the vector S out ) is given by a 4 × 4 matrix known as a Mueller matrix [38,42]. The input polarization is generated by a polarization-state generator (PSG), and the output polarization is determined by a polarization-state analyzer (PSA) [41]. Figure 1d depicts the input-to-output polarization system representation that comprises the PSG, the Mueller matrix M, and the PSA. Knowledge of all 16 Mueller matrix elements provides the full amount of information about the polarized-light-matter interaction, and, therefore, the mathematical analysis and experimental measurement of the Mueller matrix is of great importance.
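To make the relation S out = M · S in concrete, the short Python sketch below applies a Mueller matrix to an input Stokes vector. It is an illustrative example added here, using an ideal horizontal linear polarizer as the sample, and is not an element taken from the works discussed in this review.

```python
import numpy as np

# Mueller matrix of an ideal linear polarizer with its transmission axis horizontal.
M_polarizer = 0.5 * np.array([
    [1, 1, 0, 0],
    [1, 1, 0, 0],
    [0, 0, 0, 0],
    [0, 0, 0, 0],
])

s_in = np.array([1.0, 0.0, 0.0, 0.0])   # unpolarized light of unit intensity
s_out = M_polarizer @ s_in              # S_out = M . S_in

print(s_out)  # [0.5, 0.5, 0., 0.]: half the intensity, fully horizontally polarized
```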
Polarization-Based Measurement Methods for Characterization of Peptides and Amino Acids
In this section, a review of common polarization-based optical measurement methods that are used to characterize peptides' and amino acids' self-assembled micro-and nanostructures is provided. Most of these methods are aimed at estimating the molecular orientations of various SAPA architectures, but some methods are also used for other applications, as is detailed in the following sections.
Polarized Raman Spectroscopy
Raman spectroscopy [43] is a non-destructive, label-free light-scattering technique that is used to probe light-matter interactions. Specifically, Raman spectroscopy (that relies on inelastic light scattering), is used to analyze the vibrational and rotational molecular modes of a material by monitoring perturbations in the molecular polarizability caused by incident (laser) light [43]. When a polarized laser light is used, information about the molecular orientation can be obtained, and the technique is known as Polarized Raman Spectroscopy [44]. Polarized Raman spectra allow for the extraction of information on the molecular conformation of materials, since the Raman scattering intensity depends on the molecular polarizability tensor [45]. In the following paragraphs, examples of the implementation of polarized Raman spectroscopy for the study of the molecular conformations of SAPA micro-and nanostructures are given. Notingher et al. derived the molecular orientation in the dipeptide L-diphenylalanine (FF) self-assembled nanotubes by applying polarized Raman spectroscopy [46]. As is evident from their findings (presented in Figure 2a), the 1002 cm −1 (phenyl group) and the 1686 cm −1 (C=O) band exhibited stronger intensities in ZZ rather than in XX configurations (the Z-axis was selected as the direction along the FF nanotubes), while the Raman band at 1249 cm −1 (C-N) exhibited maximum intensity in the crossed configuration, XX. As such, the authors concluded that the C=O side chains have parallel alignment with the nanotube axis, and that the C-N backbone vibrations are aligned perpendicular to the nanotube axis.
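As a rough illustration of how such polarization-resolved band comparisons can be made (a sketch for this review with made-up Gaussian "spectra", not data or code from [46]), one can compare the integrated intensity of a given Raman band in the ZZ and XX configurations.

```python
import numpy as np

def band_intensity(wavenumbers, spectrum, center, half_width=5.0):
    """Integrated intensity of a Raman band around 'center' (cm^-1)."""
    mask = np.abs(wavenumbers - center) <= half_width
    return float(np.trapz(spectrum[mask], wavenumbers[mask]))

# Hypothetical spectra: two bands at 1002 and 1249 cm^-1 with different ZZ/XX weights,
# standing in for real polarized Raman measurements.
wn = np.linspace(900, 1400, 2001)
zz = 1.0 * np.exp(-((wn - 1002) / 4) ** 2) + 0.3 * np.exp(-((wn - 1249) / 4) ** 2)
xx = 0.4 * np.exp(-((wn - 1002) / 4) ** 2) + 0.9 * np.exp(-((wn - 1249) / 4) ** 2)

for band in (1002, 1249):
    ratio = band_intensity(wn, zz, band) / band_intensity(wn, xx, band)
    print(band, round(ratio, 2))  # ratio > 1 suggests a stronger response in ZZ
```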
In a later study by Lednev et al. [47], the researchers used polarized Raman spectroscopy at a larger spectral range (300-1800 cm −1 ) to determine the orientation of di-D-phenylalanine (D-FF) molecules within a nanotube. As evident from Figure 2b, there are differences in several band intensities (1418 cm −1 , 1131 cm −1 , 495 cm −1 ) in the XX and ZZ polarizations. The application of polarized Raman spectroscopy allowed the authors of this study to determine the cylindrical symmetry of the nanotube, the orientation of the NH 3 + rocking mode, and the orientation of the COO − group in relation to the nanotube axis.
However, polarized Raman spectroscopy has been used to deduce the molecular orientation not only of peptide structures but also of amino acid structures. For example, polarized Raman spectra of the α-form of the amino acid glycine [48] were recorded by Filho et al. for two scattering geometries, as seen in Figure 2c. The implementation of polarized Raman spectroscopy revealed stretching modes that are specific to this glycine polymorphic form; these cannot be differentiated without using a polarization technique.
In another study [49], Tischler et al. used low-frequency polarized Raman spectroscopy in order to reveal the molecular orientation of a single organic microcrystal made of the amino acid L-alanine. The Raman spectra at the polarization directions 0 • , 45 • , and 90 • are shown in Figure 2d, relative to the (101) plane. The conclusion from the Raman spectra presented in Figure 2d is that the polarization direction has a major effect on the distribution of photons in both spectral regions of the hydrogen bond stretching modes (parallel beam direction) and shear modes (perpendicular beam direction). These findings allowed the authors to construct a simulation of the hydrogen bond's network within the single microcrystal.
The works mentioned above are merely examples showing the kind of information that was obtained using polarized Raman spectroscopy in the study of some SAPA architectures.
Polarized Imaging and Birefringence Monitoring
Another optical polarization-based measurement method that is used for studying peptides' and amino acids' micro-and nanostructures is polarized optical microscopy (POM) [50,51]. Depending on the setup and components, one can obtain quantitative information with respect to the SAPA architectures (such as the 16 elements of a Mueller matrix [52], retardation, refractive index, thickness, and birefringence [50]) and also polarization imaging information.
Simple POM is based on the common brightfield microscope with additional components [50,52]. The additional components usually include a (rotatable) PSG (used to polarize the incident light), strain-free microscope objectives (used to reduce unwanted birefringence of the objective's lens), and an analyzer that is rotated at 90° with respect to the polarizer. A polarizer is an optical element that generates a specific type of polarized light (linear, circular, or elliptical) [38]; the analyzer (also known as a PSA or polarization-state detector, PSD) is simply another polarizer placed after the specimen.
A further component that can be added to a POM setup is a compensator (also known as a retarder). A retarder is an optical device that creates a phase shift between two orthogonal components of polarized light. If the retarder is a quarter-wave plate, circularly polarized light is produced from linearly polarized light.
The PSA component can be replaced by a polarization camera that is composed of a complementary metal-oxide semiconductor (CMOS) or a charge-coupled device (CCD) sensor and a grid of polarizer arrays arranged at the 0°, 45°, −45°, and 90° transmission axes. A major advantage of using a polarization camera as a PSA is that it allows the easy and rapid extraction of the intensity, azimuth of linear polarization (AoLP), and degree of linear polarization (DoLP). From these triplet parameters, the 12 linear elements of the Mueller matrix that characterize the specimen can be deduced [53]. In the following paragraphs, some examples of the use of POM and a polarization camera to study amino acids and peptides are given.
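As an illustration of how these triplet parameters follow from the four polarizer-array intensities, the sketch below was written for this review under the common convention that the camera samples the 0°, 45°, −45°, and 90° transmission axes; it is not code from the cited studies, and the input intensities are hypothetical.

```python
import numpy as np

def triplet_from_camera(i0, i45, i_m45, i90):
    """Intensity, DoLP, and AoLP from the four linear-polarizer channels of a polarization camera."""
    s0 = i0 + i90              # total intensity from the 0 deg and 90 deg channels
    s1 = i0 - i90
    s2 = i45 - i_m45
    dolp = np.sqrt(s1**2 + s2**2) / s0
    aolp = 0.5 * np.arctan2(s2, s1)   # radians; arctan2 keeps the quadrant information
    return s0, dolp, aolp

intensity, dolp, aolp = triplet_from_camera(i0=0.8, i45=0.5, i_m45=0.5, i90=0.2)
print(intensity, round(dolp, 2), round(np.degrees(aolp), 1))  # 1.0, 0.6, 0.0
```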
In the study of amino acids, a polarization camera was deployed by Ellis et al. [54] in order to determine the L- and D-enantiomeric abundances of the Serine (Ser, S) and phenylalanine (Phe, F) amino acids. This was performed by computing their optical rotation (which is the rotation angle of plane-polarized light after passing through a molecule) as a function of concentration. The polarization camera yielded the AoLP, and, from it, the optical rotation was calculated [54]. Note that the AoLP is related to the Stokes vector by AoLP = (1/2) tan −1 (S 2 /S 1 ) [51]. Ellis et al. found a lower bound on the amino acid concentration, above which their optical rotation can be detected [54]. Such a method for detecting small amounts of amino acids is of great importance, for example, in extra-terrestrial biosignature research [54].
Another amino acid that was studied using POM is Histidine (His, H), a polar hydrophilic α-amino acid that contains an imidazole side chain and is capable of self-assembly into two polymorphic crystal structures: monoclinic (P2 1 ) and orthorhombic (P2 1 2 1 2 1 ) [55]. The interaction of linear polarized light with self-assembled Histidine microplates (with an orthorhombic crystal structure) was studied by Handelman et al. using POM that included a polarization camera [56]. Using this setup, the triplet parameters (intensity, DoLP, and AoLP) at the output (camera) plane were extracted as a function of the PSG rotation angle. Note that the DoLP is related to the Stokes vector by DoLP = √(S 1 ² + S 2 ²)/S 0 [51]. Figure 3a shows the DoLP, AoLP, and intensity images of several His-microplates of different sizes, thicknesses, and orientations at five PSG angles (−87°, −45°, 0°, 45°, 87°). The different colors of the His-microplates that are seen in the DoLP and AoLP images result from the different thicknesses and orientations of the His-microplates. Further, in that study [56], the birefringence of the His-microplates was extracted by elimination of the corresponding thickness and orientation values of the His-microplates and by considering their optical symmetry (biaxial) and crystal system (orthorhombic).
Besides amino acids, self-assembled peptide microstructures were also investigated by POM methods [34] (Figure 3b). For example, Stupp et al. [57] developed a method (based on thermal treatment) for the fabrication of long arrays of aligned peptide nanofiber bundles. In order to evaluate their method, Stupp et al. used POM to image the birefringence of these peptide amphiphile gel nanofibers. It was shown that the fabrication using the thermal treatment presented in [57] yields macroscopic birefringent domains of the order of tenths of millimeters.
POM was also used to image the birefringence of SAPA micro-and nanostructures. For example, Li et al. [58] developed a cryogenic-treatment-based technique to control the self-assembly of dipeptide diphenylalanine (FF) microstructures (Figure 3c). Using POM, Li et al. found that birefringence was strong and angle-dependent after the cryogenic treatment, which proved the feasibility of their method to form well-ordered, chiral crystalline dipeptide fibers from their organogel phase.
Some works use POM to image peptide fibrils that were stained with organic dyes (such as Congo red) in order to detect amyloid formation by imaging the peptides' birefringence. Such works include, for example, [59,60]: in [59], birefringence was detected in stained lipopeptide fibrils (Figure 3d), and in [60], birefringence was imaged in stained (N-fluorenylmethoxycarbonyl) Fmoc-RGD peptide hydrogels.
POM was also used to monitor the birefringence of peptides as a function of time and temperature. This monitoring allows for the examination of the effect of external conditions on the self-assembly process of the peptides' micro-and nanostructures.
For example, Rosenman, Apter et al. studied the evolution of the birefringence of triphenylalanine (FFF) tripeptide microplates as a function of temperature, using POM [61]. It was shown that birefringence decreases substantially as the heating temperature of the FFF microplates rises (Figure 4a). Their conclusion, from the decrease in birefringence of heated FFF microplates, was that thermally induced FFF microplates exhibit a transformation from a helix to a β-sheet secondary structure, which correlates with the circular dichroism (CD) spectra observations of such a transformation [61]. A further discovery made using POM was that the FFF microplates undergo an intermediate melt-like state before they complete their full transformation into a β-sheet secondary structure.

In addition to the temperature-dependent birefringence discussed above, time-dependent birefringence was measured by Yan et al. in other peptide nanostructures (such as Fmoc-FF) in order to track their formation process [62]. The transformation of Fmoc-FF triclinic aggregates to nanofibers and to monoclinic nanobelts was initiated by ultrasound irradiation and monitored by POM imaging. Figure 4b shows the polarization images of Fmoc-FF aggregates after a 1 min ultrasound sonication (first row) and polarization images of Fmoc-FF nanofibers after a 3 min ultrasound sonication (second row). It can be noted that no obvious anisotropic birefringence was observed after the 1 min sonication of the Fmoc-FF aggregates, but strong birefringence was observed after the 3 min sonication of the Fmoc-FF nanofibers. This clearly shows the ultrasound-dependent evolution of the nanofibers from Fmoc-FF aggregates.

POM was also used to monitor the transformations of disordered peptide structures into highly ordered crystalline structures. Zhang et al. introduced a method (based on the differential evaporation rates of peptide solution) of uniformly aligning naphthalene-FF (Nap-FF) peptide nanofibrils [63]. POM was used in this work to track the orientation transition of these nanofibers in real time. Figure 4c depicts images (acquired by POM) of the casting of a Nap-FF-based solution after various time periods of evaporation.
As can be seen in Figure 4c, the evolution of the self-assembly of peptide-nanofibrils can be divided into several time intervals. Within the first 540 min, the birefringence area increases, suggesting the condensation of the peptide nanofibrils. In the next 60 min, the birefringence area shrinks, and the nanofibrils accelerate their solidification, showing a white interference color [63]. After the completion of solidification, four birefringence lamellar domains are formed. This formation can be seen in the different colors of the domains in the POM image.
The examples mentioned in this section show that POM has been used to monitor the self-assembly of peptide nanostructures, derive the birefringence of SAPA microstructures, track changes in the secondary structures of peptides, and verify the feasibility of various peptide structures' fabrication processes.
Polarization and Fluorescence
Fluorescence is the emission of light as a result of molecular excitation by light absorption [64]. If the excitation light is polarized, the absorption of the fluorophore is proportional to cos²θ, where θ is the angle between the electric field vector of the excitation light and the absorption transition moment vector. This means that when θ = 90°, i.e., the polarized electric field vector is oriented at 90° in relation to the orientation of the transition dipole moment of the molecules [65], the probability of excitation will be minimal. When the polarized electric field vector is aligned (i.e., parallel) with the transition dipole moment of the molecules, the probability of excitation will be maximal. As such, polarization-based fluorescence measurement tools can be used to study the molecular organization of fluorophores [64] and the effect of the chemical environment on the fluorophore.
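To make the cos²θ relation above concrete, the short sketch below (a minimal illustration, not taken from the cited works) evaluates the relative excitation probability of a fluorophore for a few angles between the polarized electric field and the absorption transition dipole moment; the normalization to the parallel case is an assumption made purely for illustration.

```python
import numpy as np

def relative_excitation(theta_deg: float) -> float:
    """Relative excitation probability under linearly polarized light,
    assuming absorption proportional to cos^2(theta)."""
    return float(np.cos(np.deg2rad(theta_deg)) ** 2)

for angle in (0, 30, 45, 60, 90):
    print(f"theta = {angle:2d} deg -> relative excitation = {relative_excitation(angle):.3f}")
# theta = 0 deg (parallel) gives the maximum; theta = 90 deg gives zero.
```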
Common polarization-based fluorescence measurement methods include Fluorescence Polarization Microscopy (FPM), Mueller Fluorescence Spectroscopy (MFS), and Circularly Polarized Luminescence (CPL) Spectroscopy. These instruments are widely used in life-science applications [65], for example, for the study of protein structures [66,67] and disease diagnostics [68]. FPM and MFS measurements can also be used in cases where fluorescent dyes (such as Thioflavine-T and Congo Red) are incorporated with non-fluorescent molecules [69]. In the following paragraphs, some recent examples of the use of these instruments for the characterization of SAPA micro- and nanostructures are provided.
Haldar et al. used MFS to probe the anisotropic molecular organization and orientation of Boc-Xaa-Met-OMe (Xaa = Val/Leu) peptide nanotubes decorated with the organic dye 2,3,6,7-tetrabromonaphthalene diimide (TB-NDI) [70]. The full 4 × 4 fluorescence spectroscopic Mueller matrix (Figure 5a) was derived, and, by performing inverse analysis, Haldar et al. were able to quantify the fluorescence linear diattenuation, the linear polarizance, and the average fluorescent dipolar orientation angles for the ground and excited molecular states [70]. Eventually, these parameters were used to determine the molecular angular distribution function and the molecular orientational order.
He et al. developed a method of generating CPL with inverted handedness from a Fmoc-tripeptide film [71]. They used a CPL spectrometer to show that, by changing the middle amino acid (Phe or Trp) of the Fmoc-tripeptides, and with the addition of achiral fluorescent dyes, CPL emission was observed after the peptides self-assembled into long-range-ordered hierarchical helical arrays (Figure 5b). The generation of CPL from peptide microstructures extends the diversity of optical materials that are able to generate CPL, a feature that is used for bioimaging [72], optical devices [73], and chirality transfer and energy transfer studies [74].
The examples mentioned in this section show that polarization-based fluorescence measurement tools have been used to derive the anisotropic molecular organization of peptides and to test the CPL capability of peptide nanostructures.
Polarized Waveguiding in Self-Assembled Amino Acid Microstructures
Bio-organic optical waveguides show a great potential in various biomedical applications, such as photodynamic therapy [75], photobiomodulation [76,77], and bioresorbable photonics [78]. Polarized waveguiding at the microscale is also of great importance for a variety of photonic applications, such as for increasing the polarization efficiency of liquid crystal displays [79,80] and for polarized data decoding [81]. Thus, major research efforts are aimed at finding more organic materials that can guide light at various scale sizes [82].
One study that recently demonstrated polarized optical waveguiding in organic SAPA microstructures is described in [56]. In that work, Handelman et al. showed the ability of an amino acid Histidine irregular convex hexagonal (His-ICH) microstructure to passively guide linearly polarized light [56]. Figure 6 depicts passive (polarized) waveguiding in a His-ICH microstructure [56]. The passive waveguiding capability can be clearly seen in Figure 6a: only when light impinges upon the His-ICH microstructure (presented in Figure 6b) at a specific location does a small light spot appear at the opposite end of the microstructure (marked by a white circle). When light impinges upon the His-ICH microstructure at other locations, no light spot can be observed at the end of the microstructure.
The capability of guiding polarized light is evident from the azimuth of linear polarization (AoLP) images presented in Figure 6c,d. A light spot at the opposite end of the His-ICH microstructure is observed only at a specific input polarization state (Figure 6d). This demonstrates that His-ICH plates can guide polarized light.
Final Remarks and Future Directions
The great interest in SAPA micro- and nanostructures has accelerated the use of various optical measurement methods for analyzing their physical and chemical properties. In this review, it was shown that polarization-based optical measurement methods can deduce additional information regarding the inner structures of SAPA. Examples of the specific information obtained by polarization-based measurement methods and described in this review are birefringence, secondary-structure tracking, molecular orientation, and the monitoring of the peptide self-assembly processes. These physical parameters may be useful for future SAPA-based photonic applications. An example of a property that is discussed in this paper and was discovered using one such polarization-based measurement method is polarized optical waveguiding in a microstructure of the amino acid Histidine. This polarized optical waveguiding capability has the potential for light-guiding applications within or between organic elements. Figure 7 shows an infographic diagram that summarizes the optical applications and the parameters that were extracted from the polarization-based measurement methods discussed in this paper. There are many future directions that can be taken in this research field. For example, other SAPA-based microstructures can be explored in the context of linear or circular polarization waveguiding. Furthermore, polarized fluorescence at various wavelengths can be investigated in other SAPA structures. Precise birefringence studies, including monitoring changes in birefringence as a function of external parameters such as temperature, evaporation rate, and pH environment, can also be performed for many types of SAPA crystals. It is, therefore, recommended to expand the implementation of these polarization-based measurement methods for further SAPA research.
Conflicts of Interest:
The authors declare no conflict of interest. | 2022-03-14T15:17:56.167Z | 2022-03-01T00:00:00.000 | {
"year": 2022,
"sha1": "65aa1c7bff1eef1f293a0a651255c99c8d6dab9c",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1420-3049/27/6/1802/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "fe085121c059db0ca22e15052cc58c407d0b90cc",
"s2fieldsofstudy": [
"Physics",
"Chemistry"
],
"extfieldsofstudy": [
"Medicine"
]
} |
59045626 | pes2o/s2orc | v3-fos-license | Geoheritage sites with palaeogeographical value : some geotourism perspectives with examples from Mountainous Adygeja ( Russia )
Geoheritage sites with palaeogeographical value are excellent venues for geotourism. These sites preserve information about ancient environments, ecosystems, and their dynamics that may be of interest to professionals, students, amateur scientists, and the general public. Palaeogeographical geoheritage sites (geosites) can be used to successfully increase public awareness of past and future climate changes. However, because palaeogeographical information is typically complex and not directly visible, professional interpretation is necessary. Successful interpretive tools include posted signs and education activities that engage visitors in scientific research. Using modern analogues to help visitors visualize past environments and ecosystems may be particularly effective. Professional interpretation helps foster visitor awareness of a geosite’s value. We suggest that some geosites can be visited sequentially on a guided excursion and propose a route for observing five geosites that exemplify the geodiversity of Mountainous Adygeja (Western Caucasus, southwestern Russia). Guided geosite excursions would introduce visitors to a broad diversity of palaeoenvironments and deepen their understanding of palaeogeographical phenomena. However, carrying capacity should be evaluated seriously for any geosites that are incorporated into palaeogeographical tourist excursions.
Introduction
Owing to the activity of individual researchers, research institutions, and international organizations such as the European Association for the Conservation of the Geological Heritage (ProGEO), studies of geological heritage (geoheritage) have become an important direction of Earth Science over the past two decades (e.g., WIMBLEDON & SMITH-MEYER 2012; PROSSER 2013). Yet despite numerous achievements and certain standardization of the relevant term definitions, concepts, and methods at both international and national levels (WIMBLEDON & SMITH-MEYER 2012), further progress is necessary. Inconsistencies in classifications and approaches remain (e.g., BRADBURY 2014; GARCIA-ORTIZ et al. 2014), and the perspectives of geoheritage for academic and public policies still need discussion.
Palaeogeographical information is preserved in many geological heritage sites (geosites). Palaeogeographical geosites are different from the other types of geosites because of the presence of valuable information about palaeoenvironments, palaeoecosystems, etc. (BRUNO et al. 2014; see also below). These sites are also valuable from the point of view of geotourism (DOWLING & NEWSOME 2010; NEWSOME & DOWLING 2010; DOWLING 2011; GRAY 2013; HENRIET et al. 2014; BRUNO et al. 2014; RUBAN 2015). Geotourists, who may include nature enthusiasts, students, amateur scientists, or professionals on vacation or participating in conference excursions (see also HOSE 1996, 2000; HOSE & WICKENS 2004; DOWLING & NEWSOME 2010), are excited by the possibility of seeing features that reflect the history of the Earth, its ancient life, and past environments. The modern increase in geotourism activities on the international scale (DOWLING & NEWSOME 2010; NEWSOME & DOWLING 2010; DOWLING 2011; HOSE & VASILJEVIĆ 2012; RUBAN 2015) contributes to the importance of palaeogeographical geosites as tourist attractions. Deeper interest in the Earth's dynamics stimulates curiosity in phenomena more complex than solely collecting minerals and fossils.
This paper continues a discussion started in previous papers by BRUNO et al. (2014) and HENRIET et al. (2014). In this brief review, we address three topics related to palaeogeographical geosites and geotourism: 1) the importance of palaeogeographical geosites for increasing climate change awareness; 2) the challenges of facilitating and managing geotourism; 3) the opportunity of including multiple palaeogeographical geosites in guided excursions. Our goal is to alert specialists in geology as well as geoconservation to the immense potential of palaeogeographical geosites for geotourism development. However, we do not intend to propose something new to tourism. In contrast, we consider that brochures, guided excursions, and other "standard" attributes of tourism activity can be employed successfully for the purposes of palaeogeography-based geotourism, which itself is a kind of novelty.
Terminology
The terms "geoheritage" and "geosites" were defined by ProGEO.Geoheritage "encompasses the special places and objects that have a key role in our understanding of the history of the Earth -its rocks, minerals and fossils, and landscapes" (WIMBLEDON & SMITH-MEYER 2012, p. 18).A geosite is "a key locality ... or area showing geological features of intrinsic scientific interest, features that allow us to understand the key stages in the evolution of the Earth" (WIMBLEDON & SMITH-MEYER 2012, p. 19).Our definition of geotourism follows HOSE (2000), DOWNLING &NEWSOME (2010), andHOSE &VASILJEVIĆ (2012).Generally, geotourism refers to any kind of tourism activity related to geoheritage.
The value of palaeogeographical features and even the palaeogeographical type of geoheritage are widely recognized (WIMBLEDON et al. 2008; REYNARD et al. 2007; BRUSCHI & CENDRERO 2009; RUBAN 2010; BRUNO et al. 2014). We follow the relevant definitions proposed by BRUNO et al. (2014). Particularly, palaeogeographical geosites are understood as "geological heritage sites that represent paleoenvironments in general or highlight particular paleoenvironmental features, which are of special interest for science, education, or tourism/recreation" (BRUNO et al. 2014, p. 301). The use of these geosites for the purposes of geotourism is defined provisionally as palaeogeography-related geotourism. Palaeogeographical geosites are diverse, and several subtypes can be distinguished (BRUNO et al. 2014).
Palaeogeographical geosites can preserve information about ancient climates (BRUNO et al. 2014). Some geosites exhibit features that reflect climate extremes reached in the past, providing clues for understanding the factors that trigger unusual climatic regimes, and demonstrating the consequences of icehouse and greenhouse conditions. As shown by ARCHER (2008), HAY (2011), and BOTTJER (2012), extreme climate shifts that are comparable to current climate change and its consequences can be found in the geological history of our planet. Palaeogeographical geosites could, therefore, serve as educational tools, facilitating public awareness and comprehension of past and current climate change, and stimulating mitigation and adaptation efforts. For instance, fluvial deposits, palaeosols, and fossils preserved at the Agate Fossil Beds National Monument (Nebraska, USA) document significant climatic fluctuations and their ecological ramifications from the Oligocene into the Holocene (JOHNSGARD et al. 2007).
Similarly, marine terraces that border many Italian coasts were formed by frequent marine transgressions and regressions during Pleistocene glacial and interglacial phases. These terraces (e.g., BIANCA et al. 2011), which are currently exposed high above sea level, contain an abundance of molluscs and corals, providing evidence for how climatically induced sea-level changes (balanced with local tectonics) can affect nearshore ecosystems (CAROBENE & DAI PRA 1990). The corestones, or boulders, of the Sila Massif (Calabria, Italy) provide another example of fluctuating climate in the past. These boulders are embedded in roughly 100 m of saprolite and regolith of granitoid and low-grade metamorphic rocks, representing ancient tropical weathering on a massive scale (GUZZETTA 1974; see LE PERA & SORRISO-VALVO 2000 and SCARCIGLIA et al. 2005 for other explanations). In Puglia (Italy), most of the coast is characterized by numerous caves resulting from the interaction between karstic phenomena and sea-level fluctuations during glacial and interglacial episodes of the Quaternary (CANORA et al. 2012). In the same region, red bauxite deposits fill old palaeokarst basins developed in the Bari Limestone (mid-Cenomanian) during the continental meso-Cretacic phase. These deposits represent residual rocks that occur on carbonate rocks formed in tropical to subtropical climates (BARDOSSY 1982). The bauxites mark local or regional unconformities associated with subaerially exposed carbonates. These deposits are important for provenance studies (BONI et al. 2012) and palaeogeographic reconstructions (MONGELLI et al. 2014). A similar example can be found at the famous Giant's Causeway World Heritage Site, Ireland. Here, a thick palaeosol between Paleogene basalt lava flows provides evidence for a tropical palaeoclimate in a place that currently experiences temperate conditions (LYLE 1996; SMITH 2005). Such sites can facilitate public understanding of the magnitude of regional changes in climate as well as climate extremes.
Challenges of palaeogeography-related geotourism activities
The necessity of professional interpretation for geoheritage is a serious challenge for geotourism because many visitors of geosites and geoparks are occasional tourists with no background in the Earth Sciences (HOSE 1996, 2000; HOSE & WICKENS 2004). This is particularly true for palaeogeographical geosites, which are inherently complex. "Palaeogeography" could potentially become a key word attracting tourists, but these tourists will need to know what this word means. Understanding the preserved feature may be beyond the abilities of most people without proper guidance. Geoscientists offer interpretation of features that are not easily visualized by ordinary visitors. In addition, these sites may appear unspectacular, and therefore would be unlikely to generate excitement, with some exceptions. Providing an explanation of the connections between observed rocks and fossils and the environments and ecosystems of the past and present to such geotourists is crucial. The above-mentioned Agate Fossil Beds National Monument offers an excellent example of proper tourist guidance. Park visitors are presented with abundant information about the geologic history of the site, the palaeoclimatic and paleoenvironmental information it preserves, and the ecology of its fossil mammals (http://www.nps.gov/agfo/naturescience/). Conversely, a well-established tourist trail offering a 360° panoramic view of the Oshten Mountain, which is an impressive Late Jurassic reef in Mountainous Adygeja (Western Caucasus, Russia) with outstanding heritage value (BRUNO et al. 2014), lacks any accompanying interpretative information. This trail is used daily by dozens of tourists travelling individually or in groups, generally for holiday outdoor recreation, but also for adventure tourism and ecotourism. However, without a guide or any interpretative signs, few visitors will recognize that the exposed carbonate rocks and their fossil content preserve an ancient coral reef.
There are many interpretative approaches that could be used in geotourism to help the public appreciate palaeogeographical geosites. These include distribution of posters and brochures (these have been used successfully in many countries for decades - e.g., PURI & VERNON (1959); for the general importance of brochures in tourism see MOLINA & ESTEBAN (2006) and QUELHAS BRITO & PRATAS (2015)), installation of interpretative signs, and interpretation by professional excursion guides (see HOSE (2000), HUGHES & BALLANTYNE (2010), CARDOZO MOREIRA (2012), and GORDON (2012) for an evaluation of the efficacy of these approaches). An example of a well-designed and useful brochure is the field guidebook to the "Jurassic Coast", which is a famous World Heritage Site in southern England. This brochure provides informative explanations of geological features exposed at the site, for instance Triassic cross-bedding and Jurassic tree stumps that were preserved due to algal growth on ancient trees (WESTWOOD 2011; BRUNSDEN 2013). On-line tools may also work well for the purposes of palaeogeographical interpretations*.
In our opinion, interpretative approaches to palaeogeographical geosites are most useful if they provide visitors with modern examples to visualize palaeoenvironments and palaeoecosystems. This requires some simplifications and imagination, but finding approximate analogues is possible, even for ancient environments and ecosystems (e.g., RUSSELL 2009). On rare occasions, such analogues might exist near the interpreted geosites, which is an outstanding opportunity for geotourism. An example is the Merzhanovo section (northern Azov Sea, southwestern Russia), where upper Miocene deposits representing a cliffed coast facies are exposed in a modern steep slope situated on a very similar seashore (RUBAN 2011). Such coincidence of palaeogeographical phenomena with their modern analogue(s) greatly facilitates visitor comprehension. Additionally, souvenir vendors, local restaurants, etc. may offer products explaining the essence of palaeogeographical geosites and promoting deeper knowledge (cf. the idea of "geoproducts" presented by RODRIGUEZ & NETO DE CARVALHO (2009)). For instance, the traditional food of the Adygejans is sold at the tourism destination "Rufabgo" in the Western Caucasus (Russia), which is known for its splendid waterfalls as well as outstanding geology (see below). Boxes of this food accompanied by an explanation could potentially be used to promote the picturesque geological features of the canyon, including those linked to palaeogeography.
Geosites where a person or family can actively view or take part in scientific research can also greatly enhance public appreciation and awareness of these valuable natural historic resources. With increased public interest follows the increased likelihood of preservation of important geosites (although without proper conservation measures, there is also the increased potential for geosite destruction). An excellent example of a geosite where visitors can view scientific research is the Dinosaur National Monument (Colorado and Utah, USA) (www.nps.gov/dino/parkmgmt/statistics.htm). This actively excavated palaeontological site works like a museum in the field. The site contains an enclosure of a large quarry of fossils comprised of hundreds of bones from 10 different species of dinosaurs and has an open viewing area for visitors to see how an active, scientific dig site works. Archaeological materials such as petroglyphs and pictographs from local Native Americans are also available for viewing.
At some geosites, visitors are given the opportunity to receive rudimentary training in fieldwork methods and then participate in the scientific process. For example, the Two Medicine Dinosaur Center (Montana, USA) is dedicated to hands-on education of the public through experience in active scientific research (www.timescale.org/about.html). Visitors are trained in some of the basics of geological and local history as well as palaeontological field prospecting, and then participate in documenting, uncovering and relocating dinosaur bones to the museum. All fossils and documentation are retained by the museum for scientific study and perhaps later museum display. At places like this, visitors gain a clearer understanding of various aspects of the procedures used to properly find and excavate fossils as well as how excavated material can be utilized to enhance scientific knowledge. They also gain an appreciation for the importance of this type of work, including the value of documentation and site preservation.
Similar to other geosites (GRAY 2013), palaeogeographical geosites are prone to anthropogenic influences. An increase in their exploitation for geotourism purposes can have negative consequences, including irreparable damage. This concern can be clearly seen in Iceland, where geotourism is greatly on the rise in response to the decline of traditional economies, such as fishing, and the country's 2008 banking crisis (BRAUN 1999; JÓHANNESSON & HUIJBENS 2010). Iceland sits directly on the Mid-Atlantic rift and resides on two tectonic plates and a hot spot. This unique geographic setting offers numerous nationwide opportunities to see active volcanoes, geothermal phenomena (i.e., geysers and "mudpots"), and glaciers (DÓRASAEÞÓSDÓTTIR 2010). These geological phenomena make Iceland an important geotourism destination (it should be noted that large numbers of visitors to a few popular attractions can endanger the natural environment and ecosystems surrounding sites there (JÓHANNESSON & HUIJBENS 2010)).
Attempts to minimize anthropogenic influences may be challenging. The community of the largest Westman Island, Vestmannaeyjar, is currently constructing a state-sanctioned museum at the remains of several partially-excavated homes that were buried during the last large volcanic eruption in 1973. This is a useful and informative way to observe how the environment is perturbed by a natural hazard as well as exploit a devastating natural phenomenon.
Despite the above-mentioned problems, it should be noted that promoting awareness of palaeogeographical heritage in schools and other educational centres can increase the awareness of regional residents and visitors to the heritage value of these sites and the necessity of their protection, including safety and conservation concerns (e.g., PROSSER et al. 2006). Among other benefits, this increased awareness may help reduce the need for excessive signage or protective barriers.
Consideration of the consequences of geotourism activities is very important at any geosite; proper policy and careful management are always required. Such concerns, however, are typical for all kinds of nature-based tourism (e.g., KRÜGER 2005; STOLTON et al. 2010). Unfortunately, the legal basis for adequate management and conservation of palaeogeographical geosites is ambiguous. As shown by some examples (e.g., CAIRNCROSS 2011; TIESS & RUBAN 2013), even those policies that recognize geoheritage as a special legal category frequently use very general terms, or restrict the heritage to include only minerals and fossils. Proper conservation of palaeogeographical heritage will require a more comprehensive approach, and, at the very least, recognition of the fact that geological phenomena exposed today represent important, irreplaceable fragments of past environments. Rapidly evolving geoconservation legislation in European countries (WIMBLEDON & SMITH-MEYER 2012) leaves hope that the problem will be resolved successfully. Additionally, development of an on-line dictionary and thesaurus providing proper and broadly-accepted definitions of all terminology related to palaeogeography, geoconservation, and geotourism will help improve existing policies. This would be a single website maintained by an international organization that would be accessible to both researchers and the public from around the world (see example in RAPISARDI et al. 2013). It should be noted that not only specialists in geoconservation and geotourism should be involved, but also stratigraphers and palaeontologists. We envision that this on-line resource would serve as a "participatory open space" that is constantly updated following the growing requests for revised terminology in this topic, combined with linked data. Of course, edits to this resource would require some moderation (e.g., to prevent the development of superficial or incorrect definitions). This is an effort that will probably require collaboration between multiple research institutions, but would likely have a large payoff. ProGEO has made a lot of relevant developments (e.g., WIMBLEDON & SMITH-MEYER 2012). Organizations like this may help to establish research networks and resolve international debates about terminology.
Potential for guided palaeogeographical excursions
Because palaeogeographical geosites reflect various palaeoenvironments and palaeoecosystems (BRUNO et al. 2014), a series of different geosites located within the same territory could be combined to illustrate a more complete geological history or diversity of ancient environments. For example, in the same general area, there may be one outcrop that exhibits Paleocene continental rocks and fossils, a second that shows Eocene shallow-marine rocks and fossils, and a third that exposes Oligocene deep-marine rocks and fossils. If these outcrops are located close to one another, they could be used to demonstrate the spectrum of regional palaeoenvironments associated with bathymetrical changes through the Paleogene. In other words, we propose that local or even regional palaeogeographical geosites can be linked to form geotourism excursion routes. Due to the common necessity of professional geosite interpretation, such excursions would be most valuable if guided. We use the excellent example of Mountainous Adygeja (Western Caucasus, Russia) to consider the opportunities and challenges of organizing such excursions. This geodiversity hotspot, recognized by RUBAN (2010), would be ideal for palaeogeography-related guided excursions. The study area includes several important geoheritage sites with palaeogeographical value, and it is a nationally important destination for nature-based tourism and recreation.
We have selected five geosites for a proposed palaeogeographical excursion route (Fig. 2). Specific information about these sites has been previously published (RUBAN 2010; PLYUSNINA et al. 2015) and is not repeated here. The main selection criterion is their significant and complementary palaeogeographical value. Following this route, a geotourist would be exposed to a large spectrum of palaeoenvironments and their fossil assemblages preserved in sedimentary rocks (Table 1). The one-day excursion would start at the Khamyshki Section representing continental strata (geosite 1), then lead to the Little Khadzhokh Valley with lagoonal sandstones and clays (geosite 2). The excursion would next stop at two geosites representing shelf deposits (the Lago-Naki Highlands and the Rufabgo Canyon; geosites 3 and 4, respectively) and finish at the Partisan Glade Section, where deep-marine organic-rich shales outcrop (geosite 5). Because of the loop-like configuration of its route (Fig. 2), this excursion could be split into two parts (Part 1: geosites 1 and 2; Part 2: geosites 3, 4, and 5) or shortened (i.e., starting with geosite 2, where some evidence of a continental palaeoenvironment can be demonstrated). This excursion would contribute significantly to the local development of geotourism because it provides an exceptional opportunity to present information about the diversity of palaeoenvironments that existed in Mountainous Adygeja. Mountainous Adygeja is a significant Russian tourist destination that is visited by numerous "occasional" geotourists. Moreover, several large universities use this territory for field educational programs in geology, geography, and tourism. Thus, one should expect a large number of visitors to potentially be interested in learning about its geological past.
Undoubtedly, the possible palaeogeographical excursion mentioned above should be guided.
Table 1. Geosites to be included in the possible guided palaeogeographical excursion in Mountainous Adygeja (Western Caucasus).
Professional geologists may understand the geological setting without guides. However, students and various non-professional visitors would need some explanation of what the observed deposits and fossils mean. For instance, understanding the nature of Triassic quasi-flysch strata (e.g., GAETANI et al. 2005) or Jurassic lagoonal and carbonate platform deposits (e.g., RUBAN 2006) might be difficult even for geologists. This proposed excursion might be especially suitable for a conference field experience or a student field trip. Professional guidance could be provided by the staff of a university camp (specially created for student field practice), which is located in the midst of the considered territory, or by the staff of the Caucasus State Natural Biosphere Reserve that is situated in southern Mountainous Adygeja. Interpretative signs installed near the geosites may also help, although their efficacy would be limited.
The other possibility for palaeogeography-related geotourism in Mountainous Adygeja exists in the Lago-Naki Highlands. There, on the top of the Stonesea Range, one can observe a 360° panoramic view of the mountains of the Western Caucasus. Two tall mountains are visible: the Big Tkhatch Mountain and the Oshten Mountain (Fig. 3). Both are ancient reefs of Late Triassic and Late Jurassic age, respectively. Thus, a geotourist can view the carbonate build-ups of different palaeoseas in one place by just turning the head. This site has great potential as a geotourism locality. However, the importance of this panoramic view for understanding these ancient reefs cannot be appreciated without professional guidance.

Fig. 2. Outline of a possible palaeogeographical excursion in Mountainous Adygeja (Western Caucasus). Numbers for photos correspond to geosite numbers on the map. See Table 1 for geosite names and more details.
Organization of guided palaeogeographical excursions faces an additional challenge, which is not limited to Mountainous Adygeja. The carrying capacity of geosites, which is used for the purposes of crowd management and stipulates the maximum number of visitors that can visit a site at once (JIN & RUBAN 2011), is very limited. Efficient communication of palaeogeographical information requires small, compact groups of tourists. The carrying capacity for groups at selected geosites should always be carefully considered when planning palaeogeography-related geotourism excursions (Fig. 4). The geometry of the geosites, as well as safety and accessibility issues, may leave only a few places for groups to gather. In the case of Mountainous Adygeja, the maximum size of a group at any given locality should not exceed 10 persons in most cases (Table 1), even if some of the geosites (e.g., the Khamyshki Section) are very large and can host dozens if not hundreds of individual visitors. Of course, the accessibility and tourist perception of the above-mentioned (and all other) palaeogeographical geosites can be improved with "standard" geoconservation procedures like vegetation removal (full or partial), renewal of road sections, etc. (see PROSSER et al. 2006). Various factors that affect the "natural beauty" of these sites should also be taken into consideration (KIRILLOVA et al. 2014).
Conclusions
Palaeogeographical geoheritage sites can facilitate understanding of the Earth's ancient environments and ecosystems, and they can also enhance awareness of past and future climate change. However, effective communication of palaeogeographical information to tourists requires professional explanation and use of interpretative tools. Palaeogeographical geosites can be visited sequentially on guided excursions that enable deeper appreciation of the geological past. An important topic for further research is discussion of the tourism potential of palaeogeographical geosites based on quantitative assessment of tourist preferences.
Fig. 3. Big Tkhatch Mountain (1) and Oshten Mountain (2), which are Late Triassic and Late Jurassic reefs, respectively, are visible from the same place on the top of the Stonesea Range of the Lago-Naki Highlands.
Fig. 4. Differences in carrying capacity of the Little Khadzhokh Valley and the Rufabgo Canyon (see Fig. 2 and Table 1 for location and general characteristics of these geosites): 1 (Little Khadzhokh Valley) - a student (first author) at the toe of the slope and near the stream to indicate a discontinuity in the Upper Jurassic siliciclastics that probably marks the palaeorelief surface (note that the space is very limited); 2 (Rufabgo Canyon) - a geologist (second author) who has enough space to comfortably examine folds in the Triassic carbonates. | 2018-12-18T19:58:37.381Z | 2015-01-01T00:00:00.000 | {
"year": 2015,
"sha1": "17942bd453f783af0ae655d110c4497e23c5c2e0",
"oa_license": "CCBY",
"oa_url": "http://www.doiserbia.nb.rs/ft.aspx?id=0350-06081576093G",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "17942bd453f783af0ae655d110c4497e23c5c2e0",
"s2fieldsofstudy": [
"Environmental Science",
"Geography"
],
"extfieldsofstudy": [
"Biology"
]
} |
246493158 | pes2o/s2orc | v3-fos-license | Influential nodes identification using network local structural properties
With the rapid development of information technology, the scale of complex networks is increasing, which makes the spread of diseases and rumors harder to control. Identifying the influential nodes effectively and accurately is critical to predict and control the network system pertinently. Some existing influential nodes detection algorithms do not consider the impact of edges, resulting in the algorithm effect deviating from the expected. Some consider the global structure of the network, resulting in high computational complexity. To solve the above problems, based on information entropy theory, we propose an influential nodes evaluation algorithm based on the entropy of a node and the weight distribution of the edges connecting it, to calculate the difference of edge weights and the influence of edge weights on neighbor nodes. We select eight real-world networks to verify the effectiveness and accuracy of the algorithm. We verify the infection size of each node and the top-10 nodes according to the ranking results by the SIR model. In addition, the Kendall τ coefficient is used to examine the consistency of our algorithm with the SIR model. Based on the above experiments, the performance of the LENC algorithm is verified.
... spreading rate and the number of the target node's neighbors into account. The above entropy-based algorithms for identifying influential nodes have all proved their accuracy through experiments. In the time complexity analysis section, we will compare the computational complexity of these algorithms with our proposed algorithm. In this paper, considering the computational time of large-scale complex networks, we propose an algorithm that has low time complexity and can identify influential nodes more accurately. The algorithm can directly identify influential nodes without setting any parameters, because some parameters are set to reasonable constants.
The rest of the paper is organized as follows. The "Preliminaries" section gives a brief introduction to the preliminaries. The "Methods" section presents the proposed LENC ranking algorithm, including its main idea and calculation process. The "Experiments" section presents the experimental verification. The "Discussion" section concludes the paper.
Preliminaries
A network can be denoted by G = (V, E), where V and E represent the sets of nodes and edges, respectively.
Equilateral triangle. The edge between nodes V_m and V_n is expressed by E_mn. Assuming that the neighbor node sets of V_m and V_n are Γ(V_m) and Γ(V_n), respectively, the number of triangles that can form on the edge between the two nodes is the number of their common neighbors, which can be defined as T_mn = |Γ(V_m) ∩ Γ(V_n)|.

Edge weight. On the one hand, the more paths connected by a node, the greater its information load, and the greater the influence of the corresponding edges. On the other hand, the more alternative paths that are available, the more the influence of the edge is reduced correspondingly 23. Besides, the contribution of the edge to information transmission is proportional to the information load of the node. Based on the above considerations, the influence of an edge depends on the information-carrying capacity of the connected nodes and the possibility of the edge being replaced by other paths. The weight of edge E_mn between nodes V_m and V_n is computed from the following quantities: k(v_m) represents the degree of node v_m, T_mn represents the number of triangles formed by the edge E_mn, w_mn represents the weight of the edge between nodes m and n, and R_mn represents the contribution coefficient of the edge. For simplicity, we set w_mn = 1. Weight(E_mn) is abbreviated as W_mn in this paper. Since the contribution of an edge to the influence of the two nodes it connects is different, the same edge reflects different influences on the two nodes, expressed as W_mn ≠ W_nm.
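As a small illustration of the triangle count T_mn used in the edge-weight definition above, the following Python sketch counts, for every edge, the common neighbors of its two endpoints; the example graph is hypothetical, and the full edge-weight expression of the paper (involving w_mn and R_mn) is not reproduced here.

```python
import networkx as nx

def edge_triangle_counts(G: nx.Graph) -> dict:
    """Number of triangles on each edge (m, n), i.e., |Γ(m) ∩ Γ(n)|."""
    return {(m, n): len(set(G.neighbors(m)) & set(G.neighbors(n)))
            for m, n in G.edges()}

# Hypothetical 6-node, 7-edge toy graph (not the paper's example network).
G = nx.Graph([(1, 2), (2, 3), (2, 4), (3, 4), (4, 5), (4, 6), (5, 6)])
print(edge_triangle_counts(G))
```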
The virtual node V′ and the other nodes in the network are joined by virtual edges E_mv′, and the number of triangles constituted by a virtual edge is assumed to be 0 in the algorithm. The weight calculation method of a virtual edge is the same as that of the other edges, with k(v′) representing the information load (degree value) of the virtual node. Since the virtual node is connected to all nodes in the network, here k(v′) = N, where N is the number of nodes in the network.
The influences of all edges around a node are added; the sum of the weights of the first-order edges of a node is denoted W_m, where W_m represents the sum of the weights of all edges connected to the node V_m.

Information entropy. Claude Shannon 24 pointed out that information entropy is monotone, non-negative and additive. The only form of the uncertainty measurement function of a random variable that has been proved to satisfy these three conditions is H(X) = −C Σ P(x) log_2 P(x), which holds for a constant C; in this case, we set C = 1. The entropy of a node based on the weight distribution of the edges connecting it to the virtual node and to its neighbor nodes is defined as Entropy(E_mn) = −(W_mn / W_m) log_2(W_mn / W_m). The sum of the information entropy of all first-order edges of a node is obtained by summing these terms over all first-order edges, which is similar to the method of calculating and evaluating the influence of nodes based on degree entropy 25. In this algorithm, the entropy of the node based on the weight distribution of the edges connected to it is used to evaluate the influence of nodes, and this sum gives the edge entropy weight of the node.

Node influence. According to the topological structure of the network, nodes at the center of the network have a higher influence than nodes at the edge of the network. In the case of two nodes with the same entropy value, the node at the center is more important than the node at the edge. When calculating the local influence of nodes, the position influence coefficient k-core is introduced. The first-order entropy of a node based on the weight distribution of the edges connected to it is the contribution of the first-order edges to the node influence. To ensure the accuracy of the algorithm, the second-order edge entropy should also be considered. The first-order term is the entropy based on the weight distribution of the edges connected to the node itself, and the second-order term is the entropy based on the weight distribution of the edges connected to the neighbor nodes, which is the contribution of the neighbors to the influence of the node 26. The total influence of a node v_m in the network is then obtained by adding the contributions of its neighbor nodes v_n to its own local influence.
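A minimal sketch of the first-order edge entropy of a node, assuming each term takes the Shannon form −(W_mn/W_m)·log_2(W_mn/W_m) with W_m the sum of the incident edge weights (this form is inferred from the H(X) expression above and the worked example later in the text); the weights used below are hypothetical.

```python
import math

def first_order_edge_entropy(edge_weights):
    """Sum of -(W_mn / W_m) * log2(W_mn / W_m) over the edges incident to one node.

    edge_weights: weights W_mn of all edges connected to the node
    (including its virtual edge).  The Shannon form of each term is an
    assumption based on the definitions given in the text.
    """
    w_m = sum(edge_weights)
    return -sum((w / w_m) * math.log2(w / w_m) for w in edge_weights)

# Hypothetical weights (virtual edge first), for illustration only.
print(round(first_order_edge_entropy([2.3, 9.6, 0.5714, 3.2, 1.3333]), 4))
```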
Methods
Main idea. According to the properties of information entropy, information entropy can measure the uncertainty of a system: the more stable the system is, the higher the information entropy is; otherwise, the information entropy is lower. Therefore, the entropy of a node based on the weight distribution of the edges connected to it can be used as an indicator to evaluate the local influence of nodes. The higher the entropy is, the higher the complexity of the node is, and the more influential it is in the network. There are two extreme cases when ranking the influence of nodes using information entropy: the entropy value of a node with one edge is 0, and the entropy values of nodes with a similar structure are equal. As shown in Fig. 1, node a and node b in the network have the same number of neighbor nodes. It is assumed that the degrees of nodes j and k are not equal, so the weights are different, but their distributions are the same. If we calculate the information entropy from these distributions, the weighted entropies of both nodes are the same, with value E = −log_2(1/3), and hence their influence is indistinguishable. However, with the introduction of the virtual node V′, the weight and the connecting edge of the virtual node to the two nodes are the same, without changing the network attributes. The weight of a virtual edge differs from that of a real edge because the nodes differ, which breaks the original balanced distribution so that the influence of the two nodes can be well distinguished. The influence of a node can then be obtained from the entropy weight of multi-order edge information.
Information entropy model. By introducing a virtual node to reconstruct the network, assigning edge weights, and calculating the entropy of each node based on the weight distribution of the edges connected to it, the contribution of edges to the influence of nodes is determined. On this basis, the location of the nodes in the network is introduced as a parameter to ensure the rationality of the proposed algorithm. The calculation considers the two layers of edges around a node to ensure the accuracy and efficiency of the algorithm. We mainly introduce the construction process of the LENC algorithm model from three aspects: algorithm definition, algorithm flow, and time complexity analysis.
The algorithm flowchart. The idea of the LENC algorithm is as follows. Firstly, a virtual node V′ is introduced to reconstruct the network, forming a new network G = (V, E). Node V′ has a virtual edge with all nodes in the network, and the degree of node V′ is the total number of nodes in the network. Secondly, according to the number of adjacent triangles and the effect of the nodes on the influence of adjacent triangles, the weights of the edges between a node and its neighbor nodes and the virtual node are calculated. Then, the entropy value of each edge is obtained according to the information entropy formula, and the entropy values of the first-order edges of a node are summed to obtain the local influence of the node. Finally, the local influence attributes of the neighbor nodes are added to obtain the entropy weight of the first- and second-order edges of the node, which can serve as an indicator of the influence of the node in the network. Figure 2 shows the calculation process of the model.
Time complexity.
The time complexity of the LENC algorithm has three main components. In the first part, to calculate the weight of an edge, we need to consider the number of common neighbors between a node and its neighbors: first calculate the number of triangles on the edge, then calculate the weight of the edge. The time complexity is O(N⟨k⟩), where ⟨k⟩ is the average degree of the network and N is the number of nodes. The second part is to calculate the local influence of nodes, which requires the introduction of the location attribute k-core of nodes. According to the K-shell algorithm, this step requires the traversal of all edges in the network; the time complexity is O(|E|), where |E| is the number of edges. In the third part, to calculate the total influence of a node, it is necessary to accumulate the weighted entropy of the first- and second-order edges of the node. To calculate the weighted entropy of the neighbor edges, it is necessary to traverse two layers of neighbor nodes, with a time complexity of O(N⟨k⟩²). Therefore, the time complexity of the LENC algorithm is O(N⟨k⟩² + |E|). Table 1 lists the time complexity of several state-of-the-art algorithms and some popular entropy-based centrality measures. We can see that the time complexity of LENC is low.
Computation process. To further explain the specific calculation process of the LENC algorithm, a simple network that contains 6 nodes and 7 edges is taken as an example. Node V′ is a virtual node introduced into the network. As shown in Fig. 3, take the influence of node v_4 in the network as an example. By calculating the entropy weight contributions of the first-order and second-order edges, the influence of the node is obtained. The specific calculation steps are as follows.
Step 1: Calculate edge weights. Calculate the weight of the edge between node v_4 and the virtual node V′, and the weight of the edge between node v_4 and neighbor node v_2. In the same way, the weights of the edges to the neighbor nodes v_3, v_5, and v_6 can be calculated as 0.5714, 3.2, and 1.3333, respectively.
Step 2: Calculate the total weight. Add the edge weights of node v_4 and all its neighbors to obtain the total first-order edge weight of node v_4.

Step 3: Calculate the entropy values. The entropy weights of the edges between node v_4 and its neighbors and the virtual node are calculated respectively. Take the information entropy of one of the edges of node v_4 as an example: −(9.6/16.9904) · log_2(9.6/16.9904) = 0.4654. Following this method, the entropy weights of the edges between node v_4 and all neighbor nodes are added (including the entropy value of the edge to the virtual node V′), giving the sum of the entropy values of all first-order edges of the node.

Step 4: Calculate the total influence. Combined with the location parameter of the node, the local influence of the node in the network is calculated.
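As a quick arithmetic check of the entropy term quoted above, assuming 9.6 is the weight of a single edge and 16.9904 the total first-order edge weight of node v_4, the value 0.4654 is reproduced by:

```python
import math

w_edge, w_total = 9.6, 16.9904  # values quoted in the worked example above
p = w_edge / w_total
print(round(-p * math.log2(p), 4))  # 0.4654
```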
According to the "three-degree separation" theory 27, edges beyond the third order have less impact on the influence of nodes and may even have a negative effect. To ensure accuracy, the second-order edges are considered. In addition to its local influence, the ultimate influence of a node in the network should be extended to its neighbor nodes. The total influence of node v_4 in the network is then calculated.
In the same way, we can calculate the influence of all nodes in the network. The results are shown in Table 2. Next, we analyze the influence of nodes in the network according to the node deletion method. Firstly, nodes 3 and 4 are at the center of the network. The degree of node 4 is greater than that of node 3, and removing node 4 has a greater impact on the network structure. Therefore, the influence of node 4 is greater than that of node 3. Secondly, for node 2 and node 6, removing node 2 will cause the entire network to be disconnected, which has a greater impact on the network structure, so node 2 is more important than node 6. Finally, the influence of node 1 and node 5 at the edge of the network is similar. Therefore, according to the node deletion method, the ranking of the influence of nodes is 4, 3, 2, 6, 5, 1. As shown in Table 2, the ranking results of our proposed algorithm are consistent with this analysis. Therefore, the accuracy of LENC is preliminarily verified.
Experiments
Data sets. In this experiment, eight real-world networks with different properties are selected. The basic statistics of these networks are summarized in Table 3, and the networks can be downloaded from KONECT (http://konect.uni-koblenz.de/networks/) and NETWORK (http://networkrepository.com/).
Zachary. The Karate club network contains a total of 34 nodes and 78 edges. The nodes represent the club members, and the edges represent the bonds between two club members.
Arenas-email. This is the E-mail network of Rovira I Virgili University in Tarragona, southern Catalonia, Spain. It consists of 1133 nodes and 5451 edges. In the network, the nodes represent e-mail users, and the edges represent at least one e-mail message that has been sent between two users.
Moreno-blogs. The Blog network contains hyperlinks on the front pages of blogs in the context of the 2004 USA election. A node represents a blog, and an edge represents a reference relationship between two blogs. There are 1224 nodes and 16715 edges in the network.
Web-spam. The network is provided by the Purdue university network repository and contains 4767 nodes and 37375 edges.
Bio-dmela. In biological networks, the nodes are proteins, and the edges are interactions between proteins. The nodes are individual proteins with a total of 7393 nodes. The edges represent the interactions between proteins with a total of 25569 edges.
CC (Closeness Centrality).
Closeness centrality uses the global information of the network to determine the influence of nodes. The smaller the relative distance between a node and all other nodes, the stronger the accessibility of the node's information, and the more important the node is. It has been widely used in research, but its time complexity is high.
EC (Eigenvector Centrality) 28 . This method considers that the influence of nodes in the network depends on both the number of neighbor nodes and the influence of neighbor nodes themselves. Its essence is to increase the influence of the node itself by connecting other nodes of relative influence. However, when there are many nodes with a large degree in the network, the phenomenon of fractional convergence will occur 29 .
HITS. The HITS algorithm uses two different metrics to assess the influence of nodes in the network: each node is assigned a hub value and an authority value. The authority value measures the originality of the information a node produces, and the hub value reflects the role of the node in information transmission. The two values interact and converge iteratively.
Hindex. This index was originally used to evaluate a scholar's academic achievements. A higher Hindex value indicates a greater influence of the node.
DIL. DIL is a new algorithm 23 . The method considers not only the degree attribute of the node but also the edge attributes of the node.

Evaluation indicators. SIR model. The SIR model of Kermack and McKendrick divides the nodes of a network into susceptible, infected, and recovered states, where S(t), I(t) and R(t) represent the number of susceptible nodes, infected nodes, and recovered nodes at time t respectively. β represents the probability of infection and γ represents the probability of recovery.
Kendall coefficient. The Kendall τ coefficient 31 is used to measure the correlation between two sequences; the coefficient reflects how close the two rankings are. Suppose two related sequences have the same number of elements, expressed as X = (x1, x2, ..., xn), Y = (y1, y2, ..., yn). For any pair of tuples (xi, yi) and (xj, yj), i ≠ j, if xi > xj and yi > yj, or xi < xj and yi < yj, the pair is considered concordant; if xi < xj and yi > yj, or xi > xj and yi < yj, it is considered discordant; if xi = xj or yi = yj, it is considered neither concordant nor discordant. The Kendall τ coefficient is defined as τ = (nc − nd)/n, where n is the total number of pairs formed from the sequences, and nc and nd indicate the number of concordant and discordant pairs, respectively. It reflects the correlation and matching between two sequences. In general, τ ∈ [−1, 1], where τ > 0 indicates a positive correlation and τ < 0 a negative correlation. That is, the higher the τ value, the more accurate the ranking.
Experimental analysis.
To verify the ability of the LENC algorithm to identify influential nodes, the SIR model and the Kendall correlation coefficient are used as evaluation indicators to compare the accuracy and effectiveness of the different algorithms.
Case test. First, take the karate network as an example. The topology of the karate network is shown in Fig. 4. Table 4 shows the top-10 node ranking results of the different algorithms and the SIR model. It can be seen from Table 4 that the ranking results of the CC, Hindex, and DIL algorithms differ from those of the SIR model, which indicates that the ranking results of these three algorithms on the Zachary network are not accurate enough. The ranking results of the LENC, EC, and HITS algorithms are consistent with the SIR model, so these algorithms can identify the influential nodes in the network accurately. Therefore, the accuracy of the LENC algorithm is demonstrated preliminarily.
Correlation analysis. In this experiment, the SIR model was used to evaluate the rationality and correctness of the different algorithms. The infection probability 32 is set as β = 2 < k > / < k 2 >, where < k > is the average degree of nodes in the network and < k 2 > is the second moment of the degree distribution, with recovery probability γ = 1, running independently for 1000 times. Figure 5 shows how the number of infected nodes varies with the influence of the nodes. The x-axis represents the influence of the nodes under the different algorithms, and the y-axis represents the corresponding average number of infected nodes. The more linear the curve, the more accurate the ranking result. For ease of observation, the axes are scaled, as shown in Fig. 5.
In the Arenas-email network, the linear growth trend of the LENC algorithm is obvious, which indicates a positive correlation between the node influence and the SIR model. The CC, EC, and Hindex algorithms perform well, but they cannot accurately distinguish nodes with the same influence. In the HITS algorithm, the distribution of nodes is loose, and the influence of nodes in the same position differs significantly, which indicates that the algorithm is coarse-grained. In the Moreno-blogs network, the HITS algorithm performs worst. In the Web-spam, Bio-dmela, Opsahl-powergrid, and Email-EU networks, the LENC algorithm is better than the other algorithms because it has a significant positive correlation with the SIR model. Therefore, the LENC algorithm is suitable for different networks, and its ranking of node influence is more accurate and reasonable. Overall, the LENC algorithm has the best positive correlation with the SIR model: as the influence score increases, the number of infected nodes in the SIR model increases steadily. Moreover, nodes with the same influence are relatively concentrated, which indicates that this method can rank the influence of nodes more precisely.
Transmission capacity. In this experiment, the top-10 nodes detected by the different algorithms are used as initially infected nodes, and the number of infected nodes at each time step is used to distinguish the influence of the nodes. To verify the initial infection ability of each algorithm, we set the infection probability β = 0.01 and the recovery probability γ = 1. t is the time step, and F(t) represents the number of infected nodes in the network at time step t, as shown in Fig. 6. In the Web-spam network, the number of nodes infected from the top-10 nodes of the LENC algorithm is greater than for the other algorithms, indicating that the LENC algorithm can identify the influential nodes in the network more accurately. In the Bio-dmela and Email-EU networks, the infection effect of the LENC and DIL algorithms is the best. In the Arenas-email, Moreno-blogs and Ca-astroph networks, the results of the CC, Hindex, HITS, and LENC algorithms are similar; it is worth noting, however, that the infection curve of the LENC algorithm shows better infection performance. Besides, the ranking of influential nodes identified by the EC algorithm does not meet the expected results. In the Opsahl-powergrid network, it can be seen from Fig. 6 that the infection performance of the LENC algorithm is significantly better than that of the other algorithms, while the infection size of the top-10 influential nodes of the CC algorithm is relatively smaller than that of the other algorithms. In summary, the positive correlation between the number of nodes infected by the LENC algorithm and the SIR model is the most obvious, which verifies the accuracy of this method.
Consistency analysis.
The Kendall coefficient is used to express the similarity and consistency of two sequences 33 . In this experiment, the infection probability of the SIR model is set in [0.01, 0.1], and the infection sequence was obtained through 500 iterations. The higher the Kendall coefficient, the more consistent the algorithm is with the real ranking result, as shown in Fig. 7. In the Arenas-email and Bio-dmela networks, the Kendall correlation coefficient between the LENC algorithm and the SIR model is significantly higher. When the infection probability is greater than 0.06, the results of the proposed algorithm and the SIR model are relatively consistent, which shows that the evaluation results of the LENC algorithm are accurate. In the Moreno-blogs and Ca-astroph networks, the Kendall coefficient of the LENC algorithm is the maximum when β ≤ 0.05 and then drops to the same value as the other algorithms when β > 0.05, but it still retains a certain advantage, indicating that the algorithm can accurately identify influential nodes in the network. In the Bio-dmela and Email-EU networks, the Kendall coefficient of the LENC algorithm is the largest, indicating that the recognition accuracy of the LENC algorithm is high. In the Opsahl-powergrid network, the Kendall coefficient of the LENC and HITS algorithms is the highest, significantly better than that of the other algorithms. In summary, the ranking results of the LENC algorithm are highly consistent with those of the SIR model, which verifies the accuracy of the algorithm.
Discussion
The paper mainly introduces a model for identifying influential nodes based on the entropy of the weight distribution of the edges connected to each node. It analyzes the time complexity of the model and the node influence evaluation process, and selects eight real-world networks with different structural attributes. Experimental verification is carried out in four aspects: case test, correlation analysis, transmission capacity, and consistency analysis. The experiments verify that the proposed LENC algorithm has obvious advantages. However, when calculating the influence of nodes, only the first-order and second-order edges of each node are considered in order to control the computational cost, so the accuracy of the node influence ranking still has considerable room for improvement. We will further improve the algorithm in future work. | 2022-02-04T14:32:46.258Z | 2022-02-03T00:00:00.000 | {
"year": 2022,
"sha1": "f442325f0c8a89f7282317b593712a4cd265aaf0",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-022-05564-6.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c7fa779a923cc8d2a2b0ebe9a91e23d7f8b45d54",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
13502189 | pes2o/s2orc | v3-fos-license | Photocatalytic Oxidation Processes for Toluene Oxidation over TiO2 Catalysts
Gas-solid heterogeneous photooxidation of toluene over TiO2 catalyst was studied to investigate the factors controlling the catalytic activities. The toluene photooxidation behavior on TiO2 was strongly affected by the formation and oxidation behavior of intermediate compounds on TiO2, and their accumulation decreased the reaction rate for toluene photooxidation. The formation and oxidation behavior of the byproduct compounds depended on the initial concentration of toluene and water vapor. In situ Fourier transform infrared (FTIR) studies revealed that water vapor promoted the cleavage of the aromatic ring and facilitated CO2 formation. At the reaction temperature of 300 K, the deposition of Pt on TiO2 suppressed CO formation, whereas catalytic activity was decreased due to the increase in the amount of intermediate compounds. On the other hand, Pt/TiO2 showed higher activity than TiO2 at 353 K, in spite of the increase of the intermediate compounds.
Introduction
Control of air quality is still one of the important topics in the research area of environmental science and technology.Photocatalytic oxidation (PCO) processes have been extensively studied for air quality control when the target compounds are CO, NO, volatile organic compounds (VOCs) and bioaerosols in closed environment [1][2][3][4][5].PCO is effective for the control of these compounds, especially under their diluted conditions.In the case of VOC removal from polluted air, TiO 2 catalysts have been generally used because of their high activities, high safety and low cost.The photooxidation behavior of various kinds of organic compounds, including aromatic compounds on near UV irradiated TiO 2 catalysts, has been clarified [6][7][8][9][10][11][12][13][14][15][16][17][18][19].
It has been generally accepted that the factors controlling photocatalytic activities involve the efficiency for electron-hole pair generation and their separation, formation of active oxygen species on the TiO 2 surface and their reaction with organic compounds adsorbed on the surface [1,19].Therefore, many efforts have been made to improve the photocatalytic properties of TiO 2 catalysts by controlling their crystallinity, band gaps and surface area.The other catalyst-modification methods are metal doping/deposition in/on the TiO 2 catalyst surface [20,21].Deposition of Pt metals on TiO 2 was effective in improving the rate for VOC photooxidation when the photoreaction was conducted with heating processes [22,23].
We have carried out the photocatalytic oxidation of hydrocarbons with TiO 2 to investigate their photooxidation behavior by using a gas-solid heterogeneous flow system [24][25][26][27][28].Benzene and toluene were oxidized to CO 2 on TiO 2 under near-UV irradiation; the TiO 2 catalysts suffered from severe deactivation when aromatic compounds were used as the substrate under their high concentrations or dry conditions [7,9].Under these conditions, intermediate compounds, including oxygen-containing species and carbonaceous materials, were generally formed on the TiO 2 catalyst surface, which covered catalytic active sites, inhibiting the adsorption and oxidation of toluene on the sites.On the basis of these findings, we have suggested that the formation and oxidation behavior of such intermediate compounds is also one of the important factors that controls the photocatalytic activities [26].
We herein report the photocatalytic oxidation of toluene over TiO 2 catalysts by focusing on the formation and oxidation behavior of intermediate compounds on the catalyst surface.The amount of intermediate compounds on TiO 2 was compared under various conditions to investigate the relationship between the photooxidation behavior of toluene and that of intermediate compounds in detail.We also conducted in situ Fourier transform infrared (FTIR) spectroscopic studies for toluene photooxidation on TiO 2 to reveal the effect of water vapor on the behavior of toluene and the intermediate compounds.
Photooxidation Behavior of Toluene on TiO 2
Figure 1a shows the time course for toluene photooxidation with TiO 2 under humid conditions ([toluene] 0 = 8.4 μmol/L (210 ppmv), [H 2 O] = 1.0%) with a circulation system at 300 K. After the adsorption-desorption equilibrium was achieved in the dark, photoirradiation of the TiO 2 catalyst was started. Toluene was oxidized on TiO 2 to form CO 2 and CO without an induction period on irradiation. The toluene concentration rapidly decreased in the initial period (~4 min), and then the reaction rate decreased to a constant value until gaseous toluene was almost completely consumed. The formation of intermediate compounds on TiO 2 in toluene photooxidation has been reported, and oxygen-containing byproducts such as benzoic acid and benzaldehyde have been identified [15]. The formation of carbonaceous species was also confirmed by browning of the TiO 2 catalyst during the toluene photooxidation. Figure 1b shows the time course for the changes in the amount of byproduct compounds on TiO 2 , which can be estimated from the balance between the amount of toluene consumed and that of CO x formed, as expressed by Equation (1):

amount of intermediate compounds on the catalyst surface = {([toluene] 0 − [toluene]) − ([CO 2 ] + [CO])/7} × V/W, (1)

where V and W are the volume of the circulation system and the catalyst mass, and the factor 7 accounts for the seven carbon atoms of toluene. This estimation was valid because no other C-containing byproducts were observed in the gas phase with our system, indicating that the byproducts were present on the catalyst surface.
(1) compounds on the catalyst surface The amount of byproduct compounds greatly increased in the initial period.The amount reached the maximum at 76 min, where the amount of byproducts and the C-density on TiO 2 were estimated to be 423 μmol/g-catalyst and 4.7 C-atom/nm 2 , respectively, indicating that the catalysts were significantly covered by the intermediate compounds.Comparison of the behavior of intermediate compounds with that of toluene indicates that the build-up of the byproducts caused the decrease in the reaction rate by blocking the catalyst active sites in the initial period.The intermediate compounds then gradually decreased with time, and their amount reached almost zero after 300 min, indicating that these compounds were further oxidized to CO 2 and CO during the reaction.
The formation behavior of these intermediate compounds greatly depended on the initial concentration of toluene in the gas phase. When the toluene concentration was decreased to 4.2 μmol/L (Figure 2a), the toluene concentration monotonically decreased with time on photoirradiation. In this case, a similar trend was observed in the initial period of the reaction: the rate of toluene photooxidation significantly decreased due to the formation of intermediate compounds on TiO 2 (Figure 2b). However, the amount of intermediate compounds was lower and decreased monotonically with time. It has been reported that the rate of toluene photooxidation on a TiO 2 catalyst can be fitted to Langmuir-Hinshelwood (L-H) kinetics, Equation (2),

rate = kK[toluene]/(1 + K[toluene]), (2)

where k and K are a constant and the equilibrium constant for toluene adsorption on TiO 2 , respectively [16]. In the present study, however, the decay in toluene concentration was not fitted by the kinetic equation. This discrepancy was ascribed to the reaction conditions. When the intermediate compounds were formed on the TiO 2 surface, the catalytic active sites were poisoned by these compounds, and the fraction of surface coverage of toluene (θ in Equation (3)) decreased. Hence, the reaction rate did not obey L-H kinetics under conditions where the catalyst surface was significantly covered with the intermediate compounds.
The formation mechanism of the intermediate compounds on TiO 2 in toluene photooxidation has been already studied by many groups [10,11,13,15].It is well accepted that electron-hole pairs photoformed on TiO 2 diffused to surface sites and generated active oxygen species [1,2].So, the highly-reactive species attacked the aromatic ring of toluene to form organic radicals, which were then oxidized by molecular O 2 to peroxy radicals, leading to the formation of ring-opening compounds.The methyl groups were also oxidized by the active oxygen species to form benzoic acid.When toluene concentration increased, the intermediate radicals also reacted with toluene to form dimerized species, which were then transformed to oligomeric and polymeric species in the presence of toluene.They were the first step for the formation of carbonaceous materials on TiO 2 .As mentioned above, however, these compounds were further oxidized to CO 2 and CO under photoirradiation.
The formation behavior of the intermediate compounds was also greatly affected by the concentration of water vapor in the gas phase, as reported earlier [9]. Figure 3 compares the time course plots for toluene conversion, CO x formation and the amount of byproduct compounds on TiO 2 under different humidity conditions. Water vapor in the reaction gas has both positive and negative effects on toluene photooxidation. As a positive effect, water vapor inhibited the build-up of intermediate compounds on TiO 2 , giving rise to an increase in toluene photooxidation activity [24,26]. On the other hand, it inhibited the adsorption of toluene on the TiO 2 surface, leading to a decrease in toluene photooxidation activity. At a water vapor concentration of 0.5%, the two effects competed, and the former effect was prominent at 1.0%. The positive effect of water vapor has been explained in terms of radical species: OH radicals were formed on the TiO 2 surface by photoirradiation in the presence of water vapor. However, this mechanism has been questioned because of the low reactivity of OH radicals on TiO 2 [29]. In addition, no evidence was obtained for the formation of OH radicals in the gas-solid heterogeneous TiO 2 photooxidation system. Further investigation should be conducted to reveal the reaction mechanism.
In Situ FTIR Studies for Toluene Photooxidation
In situ FTIR studies were conducted to pursue the photooxidation behavior of toluene on TiO 2 and, especially, to investigate the effect of water vapor on toluene photooxidation. Figure 4 shows the changes in the FTIR spectra of intermediate compounds on TiO 2 during toluene photooxidation in air in the absence of water vapor. Here, it should be noted that the reaction rate was much lower than that in Figure 1, because the catalyst weight was much lower (see Experimental Section). In the toluene photooxidation ([toluene] 0 = 30 ppm), bands appeared in the wavenumber range 1000-2000 cm −1 (Figure 4a). In the initial period of reaction (0~5 h), bands due to various kinds of intermediate compounds were observed. The bands at 1222, 1582, 1646 and 1682 cm −1 were assignable to benzaldehyde [30][31][32][33][34]. The strong band at 1414 cm −1 and the bands at 1448 and 1516 cm −1 were assignable to benzoic acid adsorbed on TiO 2 [30][31][32][33][34]. The formation of these oxygen-containing compounds indicated that the active oxygen species attacked toluene and the intermediate compounds on the catalyst surface. In a prolonged reaction, the intensities of these bands increased with time, implying that the intermediate compounds were continuously accumulated on the TiO 2 catalyst surface (Figure 4b). These behaviors were consistent with those observed for the catalytic reactions described in Section 2.1. Then, the bands in the range of 1680-1720 cm −1 , which were assignable to C=O stretchings, increased in intensity. The appearance of the band at 1716 cm −1 due to aliphatic C=O groups indicated that aromatic ring cleavage occurred for the byproduct compounds. The bands at 2740, 2956 and 3072 cm −1 appeared, and their intensities increased with time, indicating that aromatic and aliphatic C−H groups were present in the intermediate compounds.

On the other hand, the intermediate compounds formed on TiO 2 in the presence of H 2 O ([toluene] = 0.12 μmol/L (30 ppmv), [H 2 O] = 1.0%) were much different, as shown in Figure 5. After photoirradiation, bands due to intermediate compounds were also observed. Although the bands due to adsorbed benzoic acid (1416, 1450, 1514 and 1602 cm −1 ) were observed, the bands due to benzaldehyde were not detected, indicating that the oxidation of benzaldehyde to benzoic acid was promoted by the presence of water vapor. The difference was more prominent for the band of aliphatic C=O groups (1716 cm −1 ). This band appeared in the initial period in the presence of water vapor, in marked contrast with the reaction in the absence of water vapor, where there was an induction period before the appearance of the band. Therefore, benzene ring cleavage proceeded immediately when water vapor was present in the reaction gas. The intensities of these bands increased with time until 5 h and then decreased, indicating that these intermediate compounds were oxidized to CO 2 and CO under photoirradiation in the presence of water vapor.
In the toluene photooxidation, active oxygen species photoformed on the TiO 2 oxidize methyl group to form benzaldehyde and benzoic acid.The oxidation of aromatic rings gives rise to the formation of phenols or cleavage of the aromatic rings, which is the first step for complete oxidation to CO 2 .The findings described above show that the presence of water vapor promoted the sequential oxidizing of benzaldehyde to benzoic acid and the cleavage of aromatic rings.Therefore, the accumulation of the intermediate compounds was greatly suppressed by the presence of water vapor.
Effect of Pt Deposition and Catalyst Heating
Platinization of TiO 2 is one of the effective methods to improve the photocatalytic oxidation activities, because it is reported to suppress the recombination of the electron-hole pair formed by photoirradiation in TiO 2 .In the case of benzene oxidation, the deposition of Pt did not change the rate for benzene photooxidation on TiO 2 [25].Figure 6 shows time course plots for toluene conversion, CO x formation and the amount of intermediate compounds on Pt/TiO 2 catalyst.Toluene concentration monotonically decreased with time, and the profile was similar to that with TiO 2 (Figure 1).However, the amount of intermediate compounds formed on Pt/TiO 2 was much larger than that on TiO 2 .Pt/TiO 2 catalyst showed lower activity than TiO 2 due to the increase in the amount of intermediate compounds on the catalyst.The intermediate compounds existed on the catalyst even after toluene was completely consumed (300~360 min).Thus, deposition of Pt on TiO 2 promoted the formation of intermediate compounds at room temperature.On the other hand, CO formation was greatly suppressed when the Pt/TiO 2 catalyst was used for the reaction.CO was in all cases detected in toluene photooxidation on TiO 2 catalyst, even when the reaction conditions were variously changed.However, CO concentration greatly decreased with Pt/TiO 2 catalyst.This behavior was ascribed to the facile oxidation of CO oxidation on Pt/TiO 2 .The byproduct CO was oxidized on Pt sites under photoirradiation at room temperature [35].This finding also indicates that the Pt sites were not significantly poisoned by the intermediate compounds, and thus, they can contribute to the CO photooxidation.
In our circulation recycling system, Pt/TiO 2 catalysts were effective under briefly heated condition.Figure 7 shows time course plots for toluene photooxidation over TiO 2 and platinized catalyst (1 wt%-Pt/TiO 2 ) at 353 K. Here, the reaction rate was much lower than that in Figure 1, because a different kind of photochemical reactor was used for toluene photooxidation (see Experimental Section).Photocatalytic activity of TiO 2 decreased and the amount of intermediate compounds increased by heating the catalyst up to 353 K.This behavior was ascribed to the decrease in H 2 O adsorption capacity of TiO 2 at elevated temperature, which resulted in the promotion of the accumulation of intermediate compounds on TiO 2 .On the other hand, Pt deposition on TiO 2 increased the toluene photooxidation at the same reaction temperature, although the amount of intermediate compounds formed on Pt/TiO 2 was larger than that on TiO 2 .Pt/TiO 2 catalyst was also effective for suppression of CO formation at 353 K, because CO concentration was lower than the detection limit (<3 ppm).
Catalyst Materials
Commercially available TiO 2 (P25; Nippon Aerosil Co. Ltd.; Tokyo, Japan) supplied from Catalysis Society of Japan as JRC-TIO-4) was used as catalyst and precursor of Pt/TiO 2 catalyst.The BET surface area was 54 m 2 /g.Pt deposition on TiO 2 was carried out by the photodeposition method, according to the procedure reported earlier [25,35].The Pt loading was 1.0 wt%.CO chemisorption measurements revealed that Pt dispersion of Pt/TiO 2 was 16%.
Photocatalytic Test Reaction
Photoreactions were carried out with a circulation system (Figure 8a).The total volume of the system was 3.5 L. The photochemical reactor was fabricated from a Pyrex glass tube with a quartz glass window and contained a square-shaped glass plate (50 mm × 45 mm) (Figure 8b).The catalyst was coated on one side of the glass plate from an aqueous slurry of catalyst powder and dried at 383 K. Catalyst weight on the plate was 0.10 g.Photoirradiation was carried out for the catalyst through the quartz glass window by a 300 W Xe lamp equipped with a UV-30 cut filter (λ > 300 nm).Reaction gases were prepared by mixing toluene vapor with pure air.When reactions were carried out under humidified conditions, toluene and water vapor were introduced to the reaction gases with a syringe from the injection port.The reaction temperature was around 295 K.As a pretreatment, the catalyst was photoirradiated in air to decompose the organic impurities adsorbed on the TiO 2 surface.After the gas-solid adsorption equilibrium of the substrate was achieved in the reactor, photoirradiation was started.The circulating flow rate was 2.0 L/min.Concentrations of toluene, CO 2 and CO were simultaneously determined by an FTIR spectrometer (PerkinElmer Spectrum One) equipped with a gas cell (2.4 m path length, 0.20 L volume).In the case of catalytic toluene photooxidation under heated condition, a photochemical reactor with a cylindrical shape was used.The catalysts were loaded on a glass plate with the area of 9 cm 2 (3 cm × 3 cm) and then mounted in the photochemical reactor.A ceramic heater was contacted with the bottom side of the reactor to control the reaction temperature.This system can identify organic byproduct at the ppm level.Carbon balance was defined as 7-times the amount of the CO 2 (and CO) formed and divided by the amount of toluene reacted.
FTIR Studies
FTIR spectra of catalysts were measured on an FTIR spectrometer (PerkinElmer Spectrum One) equipped with a TGS detector and an in situ FTIR cell, which was connected to the circulation recycling system described above.The cell had a quartz glass window on the top and a set of connectors to allow reaction gases to flow through the cell (Figure 8c).On both sides of the cell, KBr windows were placed.TiO 2 samples were pressed into thin disk (20 mm Ф in diameter) and placed in the cell.Photoirradiation was carried out with a 300 W Xe lamp equipped with a UV-30 cut filter from the quartz window.The catalyst sample was photoirradiated through the quartz window.FTIR spectra were taken from the KBr windows.Prior to the FTIR measurements, the catalyst sample was photoirradiated in humidified air.
Figure 8. Schematic of photocatalytic reaction system and equipment for catalytic reaction and FTIR measurement.(a) Photocatalytic system; (b) reactor for toluene photooxidation and (c) in-situ FTIR cell. | 2014-10-01T00:00:00.000Z | 2013-03-04T00:00:00.000 | {
"year": 2013,
"sha1": "d1a786e9452426fedc4b823cc987a04eeb5ffad8",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2073-4344/3/1/219/pdf?version=1362404115",
"oa_status": "GOLD",
"pdf_src": "CiteSeerX",
"pdf_hash": "d1a786e9452426fedc4b823cc987a04eeb5ffad8",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
119314383 | pes2o/s2orc | v3-fos-license | Thresholds of Prox-Boundedness of PLQ functions
Introduced in the 1960s, the Moreau envelope has grown to become a key tool in non\-smooth analysis and optimization. Essentially an infimal convolution with a parametrized norm squared, the Moreau envelope is used in many applications and optimization algorithms. An important aspect in applying the Moreau envelope to nonconvex functions is determining if the function is prox-bounded, that is, if there exists a point $x$ and a parameter $r$ such that the Moreau envelope is finite. The infimum of all such $r$ is called the threshold of prox-boundedness (prox-threshold) of the function $f.$ In this paper, we seek to understand the prox-thresholds of piecewise linear-quadratic (PLQ) functions. (A PLQ function is a function whose domain is a union of finitely many polyhedral sets, and that is linear or quadratic on each piece.) The main result provides a computational technique for determining the prox-threshold for a PLQ function, and further analyzes the behavior of the Moreau envelope of the function using the prox-threshold. We provide several examples to illustrate the techniques and challenges.
Introduction
The Moreau envelope e r f of a proper lower-semicontinuous (lsc) function f, is a smoothing, approximating function that made its first appearance in the mid-1960s [23,24]. It was presented by Jean-Jacques Moreau, together with its associated proximal mapping P r f, as a tool in locating and studying the minima of convex functions. A parametrized function of the proxparameter r, the Moreau envelope is defined as the infimal convolution of f with the scaled norm-squared function r 2 · −x 2 . It is largely used in matters of minimization of convex functions [1,2,5,6,15,17,19,20,31,33,34], and more recently it has found a place in non-convex optimization as well [4,10,11,12,13,14,16,25,26].
Given a function f and a prox-parameter r, the Moreau envelope is formally defined as e r f (x) = inf y { f (y) + (r/2)|y − x| 2 }. One of the most inviting properties of the Moreau envelope is that of regularization. Starting with a sufficiently well-behaved function f, such as a convex and lower semicontinuous function, the Moreau envelope is continuously differentiable. In fact, f does not have to be differentiable, or even continuous for that matter, in order for this to happen [13, Proposition 2.1]. Moreover, the global minimum of e r f coincides with that of f, in the case where it exists [31, Proposition 13.37]. So the value of this regularization is clear in matters of minimization of nonsmooth functions. This paper explores the properties of the threshold of prox-boundedness (hereafter referred to simply as threshold where convenient). A function f is called prox-bounded if there exist r ≥ 0 and x ∈ dom f such that e r f (x) ∈ R. The infimum of all such r is called the threshold of prox-boundedness of f, and throughout this paper is denoted by r̄. This number r̄ is of interest, as any r > r̄ yields e r f (x) ∈ R for all x [31, Theorem 1.25], and (if r̄ > 0) any r such that 0 ≤ r < r̄ yields e r f (x) = −∞ for all x. At the threshold itself, the Moreau envelope may be −∞ everywhere, a real number everywhere, or some combination of the two, depending on the characteristics of f. In this paper we seek to identify the proximal threshold and understand the behavior of the envelope at the threshold.
Thresholds are also of interest due to their importance when dealing with certain programmable tasks in optimization. A prime example is the proximal point method, a well-known algorithm used for minimizing functions [22,24,29]. The algorithm starts at an arbitrary point x 0 ∈ dom f and iteratively calculates the proximal mapping, x i+1 ∈ P r i f (x i ). This method is known to converge to the solution point for convex functions [9], and for certain nonconvex functions as well [14,18,32]. There is a question of how to choose the sequence r i , and it appears that an ideal starting choice is to use the threshold r̄ [28]. So for this algorithm, and others that use variants of the proximal point method, it is desirable to be able to calculate the threshold for the function in question. With that in mind, the main result of this work is a computational method of identifying and classifying the thresholds of piecewise linear-quadratic (PLQ) functions. A PLQ function is a function whose domain is a union of polyhedral sets, and that is linear or quadratic on each of those sets [31, Definition 10.20] (see Definition 2.1 herein). This is a logical family of functions on which to focus in the present work, as they are commonly used in applications and computational optimization [7,8,21,27,30]. They are easily programmable, but complex enough to allow us to illustrate the variety of situations that arise at the threshold.
The organization of this work is as follows. Section 2 provides background definitions and presents the method we use to identify the domain of the Moreau envelope, on R. In Section 3, we consider full-domain, quadratic functions on R n . In Section 4 we work with functions that have conic or general polyhedral domains, and we present the main result: computation and classification of the thresholds for PLQ functions. Section 5 provides several examples that illustrate some special cases and the procedures given in previous sections. Section 6 provides some concluding thoughts, and proposes areas of future research.
Notation
For all that follows, we use the notation S n for the set of symmetric matrices, S n + for the set of symmetric positive-semidefinite matrices, and S n ++ for the set of symmetric positive-definite matrices. We introduce the notation D n , D n + , and D n ++ to represent the sets of diagonal matrices that are arbitrary, positive semidefinite, and positive definite, respectively. For a function f : R n → R ∪ {−∞, +∞}, we will denote by dom f the set of points where f is finite, that is,
Definitions
Definition 2.1. A function f : R n → R ∪ {+∞} is called piecewise linear-quadratic (PLQ) if dom f can be represented as the union of finitely many polyhedral sets, relative to each of which f (x) is given by an expression of the form 1 2 x Ax + b x + c for some scalar c ∈ R, vector b ∈ R n , and symmetric matrix A ∈ S n . The parameter r ≥ 0 is called the prox-parameter, and x is called the prox-center.
The infimum of all such r is called the threshold of prox-boundedness, and is denotedr.
For brevity's sake, we refer to the threshold of prox-boundedness of a function simply as its threshold. The goal of this paper is to be able to identify the threshold of any PLQ function, and to describe the behavior of the Moreau envelope at the threshold. We want to be able to say, given any pointx ∈ R n , whether or notx ∈ dom erf. It is known that for all r >r, dom e r f = R n , and (ifr > 0) for any r ∈ [0,r), dom e r f = ∅. At the threshold itself, however, a variety of situations arise. Depending on the function f, as we see in Examples 2.5, 2.6, and 2.7 below, we can have dom erf = R n , dom erf = ∅, or ∅ dom erf R n . We conclude this subsection with a lemma that will be useful in proving some of the results that follow. Proof: Notice that
Full-domain single-variable quadratic functions
We present three examples here, without proof, to show that all three cases above exist in the form of basic functions. The proofs of the example statements are covered by Lemma 2.8. Example 2.6 also demonstrates the importance of the "dom e r f = R n " component of Lemma 2.4. b) If a < 0, then for r ≠ −a we find the vertex of (1/2)ay 2 + by + c + (r/2)(y − x̄) 2 by setting the derivative with respect to y equal to 0. This gives a critical point y = (rx̄ − b)/(a + r). The second derivative is a + r, so the critical point gives a minimum for all r > −a, and a maximum for all r < −a. Indeed, r < −a results in (1/2)ay 2 + by + c + (r/2)(y − x̄) 2 being unbounded below. Hence, r̄ = −a. Then we evaluate the Moreau envelope at the threshold: e r̄ f (x̄) = inf y {(b + ax̄)y + c − (a/2)x̄ 2 }, which is finite if and only if b + ax̄ = 0, that is, x̄ = −b/a. Hence, dom e r̄ f = {−b/a}. c) If a = 0 and b ≠ 0, then for any r > 0 we have e r f (x̄) > −∞ for all x̄ ∈ R. This tells us that r̄ = 0. Then e r̄ f (x̄) = inf y {by + c} = −∞ for every x̄. Therefore, dom e r̄ f = ∅.
Full-Domain Quadratic Functions
Lemma 2.8 can be extended to the case x ∈ R n , as we see in Lemma 3.1 and Theorem 3.3. We begin this section by considering the special case of a quadratic function on R n with full domain, whose quadratic coefficient is a diagonal matrix. Recall that we use D n , D + n , and D ++ n to denote the sets of n-dimensional diagonal, diagonal positive-semidefinite, and diagonal positive definite matrices, respectively.
Suppose that (without loss of generality) for i = 1, 2, . . . , n the diagonal elements λ i of A are in non-increasing order. Then the threshold of f is r = max{0, −λ n }, and dom erf depends on A and b in the following manner.
c) If A ∈ D n + \ D n ++ and there exists i such that λ i = 0 and b i = 0, then dom erf = ∅.
d) If
A ∈ D n + \ D n ++ and b i = 0 for every i such that λ i = 0, then dom erf = R n .
Proof: We have then λ n is the negative eigenvalue of largest magnitude, since A is ordered. Fixx ∈ R n and r < −λ n , and consider the following limit: This gives us that the threshold of f is at least −λ n .
Since r > −λ n , then (A + rI) ∈ D n ++ . So f (x) + r 2 |x −x| 2 is strictly convex quadratic, and is therefore bounded below. Hence,r = −λ n . Now we consider the Moreau envelope at the threshold: ≥ 0 for all i, so that the argument of the infimum above consists of a sum of n single-variable functions, one function of each y i , that are either strictly convex quadratic (when λ i > λ n ) or linear (when λ i = λ n ). In particular, the n th such function is linear. Suppose the first k functions are quadratic, and the last n − k functions are linear. Then to find the infimum, we must choose y 1 through y k to be those numbers that give us the vertices of the parabolas λ i −λn That gives us the minimum values for the first k components of the sum in equation (3.2). For the remaining components, however, we must choose the y i that give the infima of (b i +λ ixi )y i . This means that we will have a finite infimum whenx c) Suppose A ∈ D n + \ D n ++ , and let k be such that λ k = 0 and b k = 0. Fixx ∈ R n and consider the Moreau envelope: inf For any r > 0 the argument is strictly convex quadratic, so the infimum is a real number. Hence,r = 0. Now we consider Again we have a finite sum of strictly convex quadratic functions and linear functions, but since b i = 0 for every corresponding λ i = 0, the linear functions are in fact constant. Hence, the function is bounded below, and we apply Lemma 2.4 to conclude thatr = 0 and dom erf = R n .
In order to generalize Lemma 3.1 to include all real symmetric matrices, we use the spectral decomposition. Recall that a square matrix A is orthogonally diagonalizable if and only if there exists an orthogonal matrix Q and a diagonal matrix D such that A = Q DQ. [3]). A square matrix A is orthogonally diagonalizable if and only if A is symmetric. Moreover, D is the matrix generated by diagonalizing the vector of eigenvalues of A. This is referred to as the spectral decomposition of A.
, we are always able to diagonalize A, and the eigenvalues of the resulting diagonal matrix are the same as those of A. The consequence of this is that with a change of variable we will be able to apply Lemma 3.1 to any quadratic, full-domain function. With this tool at our disposal, we present the general form of Lemma 3.1 in Theorem 3.3.
Let Q DQ be the spectral decomposition of A, and suppose (without loss of generality) that for i = 1, 2, . . . , n the diagonal elements λ i of D are in non-increasing order. Then the threshold of f isr = max{0, −λ n }, and dom erf depends on D, Q and b in the following manner.
Proof: We implement the variable changes y = Qx andȳ = Qx. These changes do not affect the threshold, as Q is invertible and, by orthogonality, Further, Now we consider the Moreau envelope, Since D is diagonal, we have the form of Lemma 3.1, with b replaced by Qb. The rest of the proof is analogous to that of Lemma 3.1.
PLQ Functions
We next generalize the results we have so far to include functions that have polyhedral domains. We begin by stating some results about the domain of the Moreau envelope; they will be useful in later sections.
The Domain of the Moreau Envelope
In this subsection, we include some useful lemmas about the domain of e r f. In our first result, we see that the more we restrict the domain of a function, the bigger the domain of the Moreau envelope can be.
Then dom e r f ⊆ dom e rf .
Proof:
We have Combining Theorem 3.3 with Lemma 4.1, we have the following corollary.
Proof: Using equation (3.4), we see that substitutingx i = b i satisfies the condition, which gives us that 1 r b ∈ dom erf. Lemma 4.1 completes the proof.
So for any quadratic function f with dom erf = ∅, Corollary 4.2 gives us a point in the domain of the Moreau envelope.
Polyhedral Conic Domains
Now we are ready to generalize the results of the previous section. We start with a simple case, f quadratic where dom f is a single, closed, unbounded, conic region. We will change variables to the generalized spherical coordinate form, also known as n-spherical coordinates. The variable change is as follows: x ∈ R n ↔ For ease of notation, we introduce the capital sine-k function Sin k φ.
Next, we show that there exists (ρ,φ) such that Hr(ρ,φ; φ) < 0 for some φ ∈ Φ. This means that (ρ,φ) meets the conditions of Theorem 4.4 (c), hence dom erf = R n . To see this, select any φ ∈ Φ. Consider the summation Notice that not all of the factors Sin i−1 φ cos φ i can be zero. We see this by writing out these terms, . . .
General polyhedral domains
Theorem 4.4 covers the case where dom f is an unbounded polyhedral cone. We now generalize to include all unbounded polyhedral domains. For this, we will need the recession cone, defined as follows. If S is polyhedral, then R(x) is the same independent of the choice ofx [31, Exercise 6.34], and we use simply R. If S is bounded, then R = {0}. If S is unbounded, then R represents all unbounded directions of S. We will see that in order to understand the threshold, it suffices to focus solely on what happens on R. We first prove that the thresholds themselves are the same on R as on S, in Theorem 4.8 below.
Theorem 4.8. Let f : S → R be a quadratic function with S polyhedral. For anyx ∈ S, define R := R(x) +x. Definef Letr f andrf be the thresholds of f andf , respectively. Thenrf =r f .
PLQ Functions
For a quadatic function f whose domain is a single, closed, unbounded polyheral region, we use Theorems 4.8 and 4.9 to identify the thresholdr, and dom erf. We will now use this as a basis for doing the same with a PLQ function. Since a PLQ function is continuous [31,Proposition 10.21], every piece is bounded below except possibly those whose domains are unbounded sets. Theorem 4.11 explicitly identifies the thresholds, and the domains of the Moreau envelopes at the thresholds where possible, of PLQ functions.
Then the threshold of f isr = max Moreover, if we define the active set A := {i :r i =r}, then Proof: We will make use of the following equation in the proof:
Examples
We now provide a few examples that illustrate some of the nuances of the results and highlight the procedures given in this paper. The first example illustrates the basic techniques for a full-domain quadratic function. otherwise.
Now we usex = Q ȳ to find that otherwise.
Hence, we have Finally, in accordance with Corollary 4.2, we observe that 1 r b ∈ dom erf. Our next example shows the difficultly in computing dom erf when non-conic sets are involved.
Next we have a simple example that shows it possible to construct PLQ functions with equal, positive thresholds, whose Moreau envelope domains are different.
Example 5.3. Define two regions on R : Then the PLQ functions both have thresholdr f =r g = 2, but dom e 2 F = {0}, whereas dom e 2 G = ∅. Finally, we have an example of a six-piece PLQ function on R 2 . We identify the threshold of each piece, and that of the PLQ function. We also make some conclusions with respect to the domain of the Moreau envelope for each piece, and for that of the PLQ function.
Details: Figure 2 shows the six regions of the domain of f, and Figure 3 is the graph of f. It is left to the reader to verify that f is indeed a PLQ function, that is, it is continuous at all boundary points.
Conclusion
In this paper, a variety of methods for identifying the thresholds and domains of Moreau envelopes for functions built on quadratics was presented. Several examples were given to illustrate the techniques. The results found in this paper are applicable to areas of ongoing computational research, wherever calculation of prox-thresholds is needed. This research raises several questions for further study. For example: i) Is it possible to determine computationally the exact threshold of prox-boundedness for some other useful class of functions?
ii) Any threshold found in this paper, when the domain of the Moreau envelope was the whole space, was equal to zero; does there exist a function f with dom erf = R n such thatr > 0?
iii) Can a calculus of proximal thresholds be created? I.e., given the proximal thresholds of two lsc functions f and g, could the proximal thresholds (or bounds for the proximal thresholds) be determined for their sum, product, and composition? iv) We relied on the partitioning of R n being polyhedral (each region convex, in particular) in order to employ the recession cone for each piece; can this restriction be relaxed? v) We also required n-dimensional functions, so as to take advantage of the compactness of closed, bounded sets. Can any or all of these results be extended to infinite-dimensional spaces? | 2016-11-02T19:01:41.000Z | 2016-11-02T00:00:00.000 | {
"year": 2016,
"sha1": "2cbf6aa2245658eed218940071e173ba1902afed",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "676d885a3de32edb5e354cfd0633c9d9b0394ada",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
21722851 | pes2o/s2orc | v3-fos-license | Comparison of incidence and cost of influenza between healthy and high-risk children <60 months old in Thailand, 2011-2015
Introduction Thailand recommends influenza vaccination for children aged 6 months to <36 months, but investment in vaccine purchase is limited. To inform policy decision with respect to influenza disease burden and associated cost in young children and to support the continued inclusion of children as the recommended group for influenza vaccination, we conducted a prospective cohort study of children in Bangkok hospital to estimate and compare influenza incidence and cost between healthy and high-risk children. Methods Caregivers of healthy children and children with medical conditions (‘high-risk’) aged <36 months were called weekly for two years to identify acute respiratory illness (ARI) episodes and collect illness-associated costs. Children with ARI were tested for influenza viruses by polymerase chain reaction. Illnesses were categorized as mild or severe depending on whether children were hospitalized. Population-averaged Poisson models were used to compare influenza incidence by risk group. Quantile regression was used to examine differences in the median illness expenses. Results During August 2011-September 2015, 659 healthy and 490 high-risk children were enrolled; median age was 10 months. Incidence of mild influenza-associated ARI was higher among healthy than high-risk children (incidence rate ratio [IRR]: 1.67; 95% confidence interval [CI]: 1.13–2.48). Incidence of severe influenza-associated ARI did not differ (IRR: 0.40; 95% CI: 0.11–1.38). The median cost per mild influenza-associated ARI episode was $22 among healthy and $25 among high-risk children (3–4% of monthly household income; difference in medians: -$1; 95% CI for difference in medians: -$9 to $6). The median cost per severe influenza-associated ARI episode was $232 among healthy and $318 among high-risk children (26–40% and 36–54% of monthly household income, respectively; difference in medians: 110; 95% CI for difference in medians: -$352 to $571). Conclusions Compared to high-risk children, healthy children had higher incidence of mild influenza-associated ARI but not severe influenza-associated ARI. Costs of severe influenza-associated ARI were substantial. These findings support the benefit of annual influenza vaccination in reducing the burden of influenza and associated cost in young children.
Introduction
Influenza virus infections account for 3-5 million severe illnesses and an average of 250,000-500,000 deaths globally each year. [1] Influenza vaccination remains the most effective method of influenza prevention. Although influenza affects persons of all ages, young children, older adults, persons with chronic underlying medical conditions, and pregnant women have been identified as groups at increased risk of severe influenza. [2,3] Studies have documented that children aged <60 months have higher rates of influenza-associated hospitalization compared to older children. [4,5] Additionally, compared to healthy children, children with underlying cardiac, pulmonary, and neurologic conditions are at even greater risk for influenza-associated hospitalization and severe outcomes. [4,[6][7][8][9] However, although children with underlying conditions are known to be at increased risk of severe influenza if infected, it is not known whether they are at increased risk for influenza virus acquisition. There are few data on differences in influenza-associated costs between children with underlying medical conditions compared to healthy children. In Thailand, increasing evidence suggests that influenza virus plays an important role in childhood respiratory illness. A population-based study in rural Thailand documented that persons hospitalized with influenza pneumonia were more likely to be young children or persons aged !65 years old compared to the general Thai population. [10] In this study, the annual incidence of influenza A in children aged <60 months with radiographically-confirmed pneumonia was 90 per 100,000 persons during 2003-2005. [11] However, there are few data on the incidence of influenza among children in urban Bangkok, where almost one fifth of the national population resides.
The Thai Ministry of Public Health currently recommends annual influenza vaccination for certain target groups, including children aged 6 months to <36 months. Each year, approximately three million doses of trivalent inactivated influenza vaccine are provided free of charge by the government to the estimated 11 million high-risk individuals (excluding healthcare workers which are covered by different source of vaccine) on a first-come, first-served basis. Besides this, vaccine is also available for purchase in both public and private sectors. Although recommended, influenza vaccine is not included the routine Expanded Program on Immunization schedule which comprises the required vaccinations for children in Thailand, and is considered an optional vaccine. Finding suggested only 1-2% of children aged 6 months to <36 months were immunized nationwide in recent years. [12] To inform policy decision with respect to influenza disease burden and associated cost in young children and to support the continued inclusion of children as the recommended group for influenza vaccination, we conducted an observational cohort study of Thai children in Bangkok who were enrolled before 36 months of age and followed for up to 24 months to estimate incidences and costs of influenza among healthy children compared to those with underlying medical conditions.
Ethical consideration
The study was approved by the ethics committees of the Queen Sirikit National Institute of Child Health (QSNICH; Bangkok, Thailand) and the Armed Forces Research Institute of Medical Sciences (AFRIMS; Bangkok, Thailand), with the U.S. Centers for Disease Control and Prevention's Institutional Review Board relied on the QSNICH's determination. Written parental informed consent was sought for all children.
Study population
The study population consisted of children receiving healthcare at the QSNICH, the largest children's tertiary care public hospital in Thailand serving an exclusively pediatric population from birth to 18 years of age. All study participants were residents of the metropolitan area of Bangkok and its vicinity.
Study design
This study used a prospective cohort design with rolling enrollment of children aged <36 months and aimed to enroll 500 pairs of healthy and high-risk children. Sample size calculation was based on the assumptions that influenza attack rates among healthy and high-risk children were 5% and 10%, respectively, and a type I error of 5% (one-sided hypothesis). With these assumptions, we would need to enroll 381 healthy children and 381 high-risk children to be able to reject the null hypothesis that the attack rates for healthy and high-risk children were equal with probability (power) 0.8. With expected 20% cohort attrition (i.e., 10% per year) and the round-up, 500 pairs of children would be needed. Healthy children were matched by calendar time and age at enrollment with children with underlying medical conditions. Each child whose parent/guardian provided written informed consent was followed up for 24 months (i.e., until up to <60 months of age) with weekly surveillance for acute respiratory illness (ARI) that included collection of respiratory specimens for testing for influenza and respiratory syncytial viruses (RSV). Enrolled children for whom matches could not be identified were allowed to participate in the study throughout the course of the study. Children exited the cohort when 24 months of observation had concluded or upon request of the caregivers.
Enrollment and eligibility
Enrollment occurred between August 2011 and September 2013. Children were eligible if they were residents of the Bangkok metropolitan area and its vicinity and did not plan to move out over the course of the study, were <36 months of age, routinely sought care at QSNICH, and were not acutely ill at enrollment. Children who already had a sibling enrolled in the study were excluded. Children were considered "high-risk" if they had ≥1 of the following conditions: prematurity (born at <37 weeks gestation), low birth weight (<2,500 grams), asthma, cardiovascular (excluding hypertension) or congenital heart disease, chronic lung or airway disease, kidney disease, liver disease, neurologic or neuromuscular disease, hemoglobinopathy, metabolic disease including diabetes, Down's syndrome, immunosuppressive conditions including long-term steroid use, HIV infection, or cancer. Healthy children were defined as those without any of the abovementioned conditions. High-risk children were enrolled when seeking care at specialty clinics, whereas healthy children were enrolled from the well-baby clinic or other outpatient clinics.
At enrollment, caregivers were asked about children's medical and breastfeeding histories and were given axillary thermometers and instructions on how to take children's temperatures. Within two months of enrollment, study staff visited children's homes to record family and household size, request information on household income and assets, ascertain family smoking status, observe characteristics of children's homes (type of housing, ownership of durable assets, access to sanitation facilities, and source of water), and record childhood immunizations from children's vaccine books (including influenza vaccination if ≥6 months old). Latitude and longitude of the household location were recorded using a global positioning system to determine the distance between the residence and the QSNICH.
Active surveillance for acute respiratory illness and collection of cost data
Caregivers were called weekly and asked about ARI symptoms, defined as the presence of ≥2 symptoms (documented fever, cough, sore throat or runny nose) with onset during the preceding seven days, among enrolled children. Caregivers were encouraged to call study staff if children developed an ARI between surveillance calls. Three attempts were made to reach caregivers on different days before the caregivers were considered unreachable for that week. Those missing calls for four consecutive weeks were considered lost to follow-up.
For children with ARI, study staff collected information on their highest measured temperature if they had fever and on the presence of ARI in other family members. A 14-day symptom-free period was required between two ARI episodes. Caregivers were encouraged to bring children with ARI to the QSNICH for examination, respiratory specimen collection, and care. Clinicians determined treatment with antivirals based on national treatment guidelines, which state that empirical treatment should be given to suspected influenza cases without waiting for laboratory confirmation. If children received medical attention at another facility, medical records were requested, reviewed, and abstracted.
At the post-illness survey, conducted 1-2 weeks following illness onset, study staff collected information from caregivers on children's absence from daycare or school, symptom duration, and any hospitalizations or costs associated with illness episodes. Caregivers were asked to provide estimates of all direct medical, laboratory, and transportation costs paid out of pocket as well as the indirect costs of reported lost income or time due to care of ill children. Additionally, study staff obtained the costs of medical care covered by health insurance from the QSNICH's financial department. These medical care costs excluded the salaries of healthcare personnel, which are paid on a monthly basis by the Thai government and were not charged to patients or the health insurance scheme.
Specimen collection and laboratory testing
Children with ARI had combined nasal and throat swabs collected by study nurses for testing for influenza viruses and RSV by real-time reverse transcription polymerase chain reaction (rRT-PCR) at the AFRIMS. [13]
Influenza vaccination status
Throughout the follow-up period, the influenza vaccination status of children ≥6 months old was periodically updated using the child's vaccine book or medical records.
Analysis
Severity of illness was classified as mild if the child required no more than outpatient care and severe if the child required hospitalization. The influenza season was defined as June through May of the following year (e.g., the 2013 influenza season was June 2013-May 2014). [14] The Chi-Square test was used to compare baseline demographic, socio-economic (i.e., a wealth index calculated using the Principal Component Analysis method from the characteristics of children's homes mentioned above), and clinical characteristics between healthy and high-risk children. Population-averaged Poisson regression models [15][16][17], adjusting for potential confounders (age at ARI, influenza vaccination, recent history of ARI in the household, and influenza season), were fitted for the number of events, with person-time entered into the models as an offset. Parsimonious models were constructed (i.e., covariates that were significantly different in bivariate analyses but not in multivariate models were not retained). Adjusted incidence per 1,000 person-years (PY) of events (e.g., ARI, rRT-PCR-confirmed influenza-associated ARI) was calculated from Poisson parameters using least-squares means methods. [18] The 95% Poisson confidence intervals (95% CIs) were calculated using the exact method. The adjusted incidence for healthy and high-risk children, age group at the time of the ARI episodes, and influenza season was compared in Poisson models. All incidence estimates reported were adjusted incidence unless noted otherwise. Log-transformed lengths of illness (i.e., onset to illness resolution based on caregivers' reports) in healthy and high-risk children with influenza were compared using Student's t-test.
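A minimal sketch of this incidence model is shown below, assuming a hypothetical long-format table df with one row per child-period (the column names events, py, high_risk, age_grp, flu_vax, hh_ari and season are illustrative, not taken from the paper). A plain Poisson GLM with a log person-time offset stands in here for the population-averaged models fitted in the study.

```python
# Poisson regression of event counts with person-time as an offset.
import numpy as np
import statsmodels.api as sm
import statsmodels.formula.api as smf

model = smf.glm(
    "events ~ high_risk + C(age_grp) + flu_vax + hh_ari + C(season)",
    data=df,
    family=sm.families.Poisson(),
    offset=np.log(df["py"]),        # person-years at risk enter on the log scale
).fit()

# Incidence rate ratio for high-risk vs. healthy children, with 95% CI.
irr = np.exp(model.params["high_risk"])
lo, hi = np.exp(model.conf_int().loc["high_risk"])
print(f"IRR = {irr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```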
All direct medical, laboratory and transportation costs and indirect costs of lost income or time due to care of ill children for an ARI episode were summed. Costs paid by caregivers were collected from post-illness interviews, and the value of caregiver time was calculated using the human capital approach. [19] Reported healthcare-related costs paid for by the health insurance system were checked against the hospital's database. For salaried or wage-earning caregivers, daily gross pay was multiplied by the number of days that caregivers missed work to calculate lost income. For caregivers who did not work outside the home, time spent caring for ill children was valued at the minimum daily wage of 300 Baht/day (approximately $9) as established by the Thai government. [20][21][22] ARI-associated costs incurred before 2015 were adjusted to 2015 values using an inflation rate of 3%. [20,21] Costs were converted using an exchange rate of 34.24 Baht to $1 [23] and reported in U.S. dollars, in absolute terms and in relation to the median monthly household income of children in the cohort (reported as a range of $584-$876/month). The median cost and interquartile range per ARI episode were calculated. Because high-risk children were significantly more likely to live further away and incurred higher travel expenses than healthy children, differences in the costs between the two groups were compared by quantile regression adjusted for distance from residence to the QSNICH. [24] All data analyses were conducted using Stata software version 12 (StataCorp LLC, College Station, Texas, USA).
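The cost handling described above can be sketched as follows (Python, with assumed column names cost_baht, year, high_risk and distance_km); the 3% annual inflation to 2015, the 34.24 Baht/$ conversion, and the distance-adjusted quantile regression mirror the description, but this is an illustration rather than the authors' Stata code.

```python
# Inflate Baht costs to 2015, convert to USD, and compare group medians.
import statsmodels.formula.api as smf

INFLATION, FX = 0.03, 34.24      # 3%/year inflation; Baht per US$

def cost_usd_2015(cost_baht, year):
    """Inflate a Baht cost incurred in `year` to its 2015 value, then convert to USD."""
    return cost_baht * (1 + INFLATION) ** (2015 - year) / FX

df["cost_usd"] = cost_usd_2015(df["cost_baht"], df["year"])

# Median (q = 0.5) regression of cost on risk group, adjusted for distance
# from residence to the hospital, echoing the quantile regression above.
fit = smf.quantreg("cost_usd ~ high_risk + distance_km", df).fit(q=0.5)
print(fit.params["high_risk"], fit.pvalues["high_risk"])
```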
Characteristics of study population
Between August 2011 and September 2015, 1,149 children were enrolled (649 healthy and 500 high-risk children; Table 1). Upon review of medical records, 4 healthy children were reassigned to the high-risk group due to misunderstanding of the disease definitions, and 14 high-risk children were reassigned to the healthy group, resulting in an analytic dataset of 659 healthy and 490 high-risk children. Of these, 621 children (54%) were enrolled at <12 months old and 624 (54%) were males. Healthy children were statistically significantly more likely than high-risk children to have ever been breastfed, live in households with higher monthly income, have primary caregivers with more years of education, live closer to QSNICH, live in an apartment complex, have electricity and air conditioning in the house, and pay for the illness out of pocket. For children who were eligible for influenza vaccination at the beginning of each season (i.e., ≥6 months of age), coverage ranged from 3% to 29% from 2011 through 2015 (Table 1).
ARI episodes and detection of influenza viruses and RSV
We identified 3,108 ARI cases in 861 children (Fig 1). The influenza A viruses identified were subtypes H3N2 (59/111, 53%) and H1N1pdm09.
[Table 1 footnotes: QSNICH, Queen Sirikit National Institute of Child Health; ARI, acute respiratory illness. The wealth index was created among 1,125 of the 1,129 children with household visits and complete data, using housing characteristics, ownership of durable assets, access to sanitation facilities, and source of water.]
The incidence of mild influenza-associated ARI adjusted for age at ARI, influenza vaccination status, recent history of ARI in the household, and influenza season was 37/1,000 PY (95% CI: 29-47) and was statistically higher in healthy than high-risk children (45/1,000 PY; 95% CI: 34-58 vs. 27/1,000 PY; 95% CI: 18-38). The incidence rate ratio was 1.67 (95% CI: 1.13-2.48). The incidence of mild influenza-associated ARI adjusted for influenza vaccination status, recent history of ARI in the household, and influenza season was lowest in infants aged <12 months and highest in those aged 36-47 months (Table 2A). In both healthy and high-risk children, the incidence of mild influenza-associated ARI adjusted for age at ARI, influenza vaccination status, and recent history of ARI in the household was highest in 2013 and lowest in 2012 (Table 2B).
The adjusted incidence of severe influenza-associated ARI was 4/1,000 PY (95% CI: 2-9). The incidence of severe influenza-associated ARI was 3/1,000 PY (95% CI: 1-8) among healthy children and 7/1,000 PY (95% CI: 3-16) among high-risk children. The incidence rate ratio was 0.40 (95% CI: 0.11-1.38). The adjusted incidence of severe influenza-associated ARI was lowest in children aged 48-59 months and highest in those aged 36-47 months (Table 2A). We could not reliably calculate the adjusted incidence of severe influenza-associated ARI by risk group and influenza season due to the low number of influenza cases.
[Table 2: Incidence per 1,000 person-years of mild and severe influenza-associated acute respiratory illness (influenza cases and adjusted incidence with 95% confidence intervals for healthy, high-risk, and all children) in children enrolled in a pediatric respiratory infection cohort in Thailand.]
Length of ARI and influenza-associated ARI
Among 3,092 ARI episodes that resolved by the time of the post-illness interviews (99% of ARIs identified), the median length of illness was 8 days (IQR: 8-11) among all children, with no difference in illness duration among healthy versus high-risk children (8 days; IQR: 8-11 vs. 8 days; IQR: 7-12; p-value for difference in medians = 0.06). The length of mild influenza-associated ARI was 9 days (IQR: 8-12) among all children, with no difference in illness duration among healthy versus high-risk children (9 days; IQR: 7-12 vs. 10 days; IQR: 8-12; p-value for difference in medians = 0.34). The length of severe influenza-associated ARI was 10 days (IQR: 9-15) among all children, with no difference in illness duration among healthy versus high-risk children (9 days; IQR: 7-12 vs. 11 days; IQR: 9-13; p-value for difference in medians = 0.84). Table 3 shows the median cost per ARI episode adjusted for distance from residence to the study site. In general, the adjusted median costs were similar in healthy and high-risk children.
Cost of influenza-associated ARI
The adjusted median costs for mild influenza-associated ARI were approximately 2-4% of the monthly household income of the children, while those of severe influenza-associated ARI ranged from 28% to 42% of monthly household income. [Table 3 footnote: median costs were adjusted for distance from residence to the study site and reported in US dollars; differences in the adjusted median costs between the two groups were compared by quantile regression.] Among mild influenza cases, 37% of the total adjusted cost per episode was attributable to healthcare-related direct costs (i.e., medicine, laboratory testing, etc.), 36% to non-healthcare-related direct costs (i.e., travel costs), and 27% to indirect costs (22% opportunity cost and 5% reported income loss). Among severe influenza cases, 82% of the total adjusted cost per episode was attributable to healthcare-related direct costs, 7% to non-healthcare-related direct costs, and 11% to indirect costs (9% opportunity cost and 2% reported income loss).
Discussion
In this study spanning three full influenza seasons (2012-2014), healthy children had a significantly higher incidence of mild influenza-associated ARI than children with high-risk conditions. The length of illness and cost of influenza-associated ARI were similar in healthy and high-risk children. Costs per episode of severe influenza-associated ARI, which may be borne either by the families or the healthcare system, were substantial. The overall incidence of mild influenza-associated ARI among healthy children in this study (45/1,000 PY) was comparable to the most recent global estimate of influenza incidence among children aged <60 months in developed countries (55/1,000 PY) [25], possibly reflecting the high influenza vaccination coverage among cohort children. However, our overall incidence estimate was lower than estimates reported in some single country studies conducted among children of the same age group. [26,27] For example, a study among community-dwelling children aged 6-59 months in Wisconsin, USA in 2006-2010, reported an estimated incidence of influenza-associated outpatient visits of 77/1,000 PY. [26] Another study conducted in Suzhou, China reported an influenza virus infection incidence among children aged <60 months ranging from 146 to 214/1,000 PY for the 2011 to 2014 seasons. [27] In studies conducted among children aged <60 months in Bangladesh (2009-2011) and India (2010-2012), higher incidence of severe influenza virus infection was reported. [28,29] The differences in incidences in these studies may be attributable to differences in study design, geographic and climate pattern, source population, influenza vaccination coverage, and severity of influenza seasons during which the studies were conducted.
In this study, the higher incidence of mild influenza-associated ARI in healthy compared to high-risk children may seem counter-intuitive, but healthy children might have been allowed to come into greater contact with contagious ARI cases than high-risk children. There was a trend towards healthy children in our study being more likely to attend daycare than high-risk children. Healthy children in our study were also more likely to live in buildings with other households, potentially allowing for more commingling with other children and increasing the risk of exposure to respiratory infections. Our study did not identify a statistically significant difference in the incidence of severe ARI between healthy and high-risk children, but this study may not have been powered to detect a meaningful difference in severe influenza incidence between the two groups. Additionally, for the 2012, 2013, and 2014 seasons with complete follow-up data, we did not identify a statistically significant difference in the estimated incidence across the three seasons.
Our study showed that the cost of treating an episode of severe influenza-associated ARI could exceed $200/episode. While this cost is lower than figures reported from studies conducted in high-income countries [30,31], it is higher than that reported in other middle-income countries such as Bangladesh and India (both of which reported total costs of less than $100/episode). [32,33] We found that the cost of treating an episode of severe influenza-associated ARI is about two fifths of the median monthly household income of children in the cohort. This sizable cost is borne either by the parents/caregivers themselves or by the healthcare system. This finding supports the benefit of annual influenza vaccination in reducing the burden of influenza and its associated cost.
Our study has several strengths. It was conducted in a defined cohort with nearly 100% of ARI episodes managed at the QSNICH, reducing potential for missed influenza cases. Children with underlying medical conditions were oversampled, enabling us to estimate incidence of influenza in this sub-population. Further, we used rRT-PCR, an assay that is maximally sensitive and specific for influenza viruses, to ascertain influenza virus infection status, thus minimizing misclassification bias.
However, several limitations should be considered when interpreting our findings. First, all children with ARI were encouraged to visit the study hospital including those with mild symptoms who might not require a hospital visit and about 10% of ARI cases received empirical treatment using Oseltamivir (of which 64% were tested positive based on rapid influenza diagnostic testing) [34], thereby lessening disease severity. Therefore, the associated costs may be imprecise. Second, influenza vaccination coverage was higher among cohort children than estimates among the general Thai pediatric population (1-2%), possibly because the QSNICH is an academic pediatric facility that emphasizes influenza vaccination for children seeking care at the facility. The higher influenza vaccination coverage may have resulted in a lower incidence of influenza among cohort children compared to the general population. Third, the study was not powered to measure the incidences of influenza outcomes by type of underlying medical condition. It was also not powered to measure the incidences of severe influenza outcomes.
Data are needed to guide prioritization of target groups for seasonal influenza vaccination in less wealthy countries where influenza vaccine supplies may not always be adequate to cover all recommended target groups. In Thailand, although influenza vaccination is recommended for children aged 6 months to <36 months, the vaccine is given infrequently to children in this age group who do not have underlying conditions. Our finding that healthy children have a higher incidence of mild influenza-associated ARI than high-risk children could be used to generate cost-effectiveness data and to estimate the marginal value of increasing influenza vaccine coverage among children aged 6 months to <36 months. Additionally, the finding that children >36 months of age have a similar influenza incidence to younger children suggests the potential utility of evaluating the marginal benefit of expanding influenza vaccine use to older children.
Supporting information S1 Disclaimer: The findings and conclusions in this report are those of the authors and do not necessarily represent the official position of the U.S. Centers for Disease Control and Prevention or the U.S. Government. Additionally, material has been reviewed by the Walter Reed Army Institute of Research. There is no objection to its presentation and/or publication. The opinions or assertions contained herein are the private views of the author, and are not to be construed as official, or as reflecting true views of the Department of the Army or the Department of Defense. The investigators have adhered to the policies for protection of human subjects as prescribed in AR 70-25. | 2018-05-21T22:38:44.855Z | 2018-05-17T00:00:00.000 | {
"year": 2018,
"sha1": "c96729aa415a12778e5b9fc0c65edf1a4c376ca3",
"oa_license": "CC0",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0197207&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c96729aa415a12778e5b9fc0c65edf1a4c376ca3",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
7580263 | pes2o/s2orc | v3-fos-license | Real-Time PCR in HIV/Trypanosoma cruzi Coinfection with and without Chagas Disease Reactivation: Association with HIV Viral Load and CD4+ Level
Background Reactivation of chronic Chagas disease, which occurs in approximately 20% of patients coinfected with HIV/Trypanosoma cruzi (T. cruzi), is commonly characterized by severe meningoencephalitis and myocarditis. The use of quantitative molecular tests to monitor Chagas disease reactivation was analyzed. Methodology Polymerase chain reaction (PCR) of kDNA sequences, competitive (C-) PCR and real-time quantitative (q) PCR were compared with blood cultures and xenodiagnosis in samples from 91 patients (57 patients with chronic Chagas disease and 34 with HIV/T. cruzi coinfection), of whom 5 had reactivation of Chagas disease and 29 did not. Principal Findings qRT-PCR showed significant differences between groups; the highest parasitemia was observed in patients infected with HIV/T. cruzi with Chagas disease reactivation (median 1428.90 T. cruzi/mL), followed by patients with HIV/T. cruzi infection without reactivation (median 1.57 T. cruzi/mL) and patients with Chagas disease without HIV (median 0.00 T. cruzi/mL). Spearman's correlation coefficient showed that xenodiagnosis was correlated with blood culture, C-PCR and qRT-PCR. A stronger Spearman correlation was found between C-PCR and qRT-PCR; the number of parasites also correlated with the HIV viral load and with immune status, expressed as the number of CD4+ cells or the CD4+/CD8+ ratio. Conclusions qRT-PCR distinguished the groups of HIV/T. cruzi coinfected patients with and without reactivation. Therefore, this new method of qRT-PCR is proposed as a tool for prospective studies to analyze the importance of parasitemia (persistent and/or increased) as a criterion for recommending pre-emptive therapy in patients with chronic Chagas disease with HIV infection or immunosuppression. As seen in this study, an increase in HIV viral load and decreases in the number of CD4+ cells/mm3 and the CD4+/CD8+ ratio were identified as cofactors for increased parasitemia that can be used to target the introduction of early, pre-emptive therapy.
Introduction
Chagas disease is endemic in Latin America, where fewer than 8 million people, many of whom live in urban centers, are infected by T. cruzi [1]. In Brazil, the control of the Chagas disease insect vector Triatoma infestans and prevention of the transmission of T. cruzi parasitosis by blood transfusion have led to epidemiologic changes, shifting the predominant T. cruzi transmission routes to oral, congenital, and organ transplant transmission. HIV/T. cruzi coinfection has been found in urban centers, and HIV infection [2] has spread to regions in which Chagas disease is endemic. In addition, Chagas disease is now an emerging disease in developed countries, with active congenital and organ transplant transmission and reactivation of the chronic disease [3,4].
Acute Chagas disease is characterized by high levels of parasitemia, which is detected by direct microscopy of fresh buffy coat, a quantitative buffy coat (QBC) test, or a microhematocrit test [5,6]. In the chronic disease, low-level parasitemia is observed and can be detected only by indirect parasitological methods (xenodiagnosis and blood culture) [7]. Anti-T. cruzi IgG antibodies are found in almost 100% of these patients [8]. Most chronically infected patients do not develop clinical symptoms of Chagas disease, but approximately 20-30% suffer from heart and/or digestive tract disease [8].
T. cruzi parasites are detected more frequently and with higher parasitemia levels in HIV coinfected patients than in those with chronic Chagas disease alone [9,10]. Reactivation of chronic Chagas disease, which occurs in approximately 20% of individuals coinfected with HIV/T. cruzi, is characterized by high parasitemia levels, similar to an acute infection [11]. More severe disease (meningoencephalitis and/or myocarditis) has been commonly described in patients infected with HIV/T. cruzi [12][13][14][15]; the involvement of other organs, such as the skin [16], gastrointestinal tract, and pericardium, has also been reported [14]. The diagnosis of Chagas disease reactivation is based on direct observation methods [5,6]. However, this diagnosis is not usually made during the early phase of reactivation, and many patients die soon after diagnosis or during treatment [11]. Case fatality is higher in patients with late diagnosis of reactivation because they die soon after the introduction of the therapy [6,[8][9][10][11][12].
Sensitive and rapid methods are required to monitor parasitemia in immunosuppressed patients with Chagas disease. Xenodiagnosis and blood culture are highly sensitive for the acute disease but are labor-intensive and time-consuming methods, and the results take 30-120 days to be analyzed. In addition, technical expertise is required to manipulate live parasites, due to the risk of infection of laboratory staff [7,9,17]. In HIV/T. cruzi-infected patients, semi-quantitative xenodiagnosis that indicates the percentage of positive nymphs per assay may predict the occurrence of Chagas disease reactivation. Episodes of reactivation have been observed in 50% of patients who show ≥20% positive nymphs per assay in a follow-up period of 5 years [11].
A competitive polymerase chain reaction (C-PCR) method [18,19] has been reported for monitoring the treatment of children with congenital Chagas disease, patients with chronic Chagas heart disease, and a patient with HIV and meningoencephalitis. It was used to demonstrate clearance or early detection of the parasite [19][20][21]. Another molecular method, quantitative real-time PCR (qRT-PCR), has been used to diagnose congenital [22,23] and chronic Chagas disease and showed 41% positive detection of the chronic disease [22]. Using this method, low parasitemia was found in mothers (93.3%, <10 parasites/mL) and higher parasitemia was found in neonates (76.3%, >1,000 parasites/mL) [23]; parasitemia correlated negatively with age (0.01-640 parasites/mL) [24].
The aim of this study was to evaluate the use of a new quantitative molecular method (qRT-PCR) to monitor T. cruzi parasitemia in HIV-infected patients with or without Chagas disease reactivation. In addition, the sensitivities of different molecular and parasitological tests were compared.
Participants
The study included 91 samples that were collected between 1996 and 2008 from patients ≥18 years old with Chagas disease who were admitted to the AIDS Clinic and/or Clinic of Infectious and Parasitic Diseases at the Hospital das Clinicas, a tertiary hospital attached to the School of Medicine of the University of São Paulo, Brazil. The patients were classified into two groups: (1) 57 immunocompetent patients with chronic Chagas disease (CR) and (2) 34 patients with chronic Chagas disease coinfected with HIV, of whom 29 lacked reactivation (CO) and 5 had reactivation of Chagas disease (RE). The inclusion criterion for patients with Chagas disease with or without HIV infection was the presence of antibodies in two or three conventional serological tests for Chagas disease (indirect immunofluorescence (≥1/40), indirect hemagglutination (≥1/40) or enzyme-linked immunoassay (ELISA)) [24]. HIV patients were included after detection of anti-HIV antibodies by ELISA and confirmation by immunoblot [25]. Chagas disease reactivation was diagnosed if at least one of the following tests was positive: direct blood microscopy or QBC for T. cruzi (two patients) or direct cerebrospinal fluid (CSF) examination for T. cruzi (three patients). A control group of 58 healthy individuals without Chagas disease (indicated by negative conventional serological tests for Chagas disease) was used to check for contamination during the sample extraction process; the control samples were paired with samples from patients.
Direct and indirect parasitological assays
Trypomastigotes were identified by direct microscopy of peripheral blood mononuclear cells (PBMCs) or through QBC analysis [6]. For QBC, the blood was collected in a microhematocrit tube containing acridine orange (BD Biosciences). After centrifugation, the parasites remaining in the platelet layer at the top of the buffy coat were identified by immunofluorescence microscopy. The blood culture assay was performed as previously described [17]. Six culture tubes were examined after 10, 20, 30, 60, 90 and 120 days of culture. The results are expressed as the number of positive tubes divided by the total number of tubes examined (% positive tubes); the result was considered positive if any tube was positive and negative if all were negative. Xenodiagnosis was performed with 20-40 nymphs of T. infestans fed in vitro with 10 mL of patient blood. The search for T. cruzi in the gut contents of each triatomine was performed 30, 60 and 90 days later, and the results are expressed as the percentage of positive insects (semi-quantitative xenodiagnosis), or as a positive result if at least one insect was positive and negative if all of them were negative [7,26].
Sample preparation and DNA extraction
DNA was extracted with the QIAamp™ DNA Mini Kit (Qiagen, Hilden, Germany) from whole blood collected in 6 M guanidine HCl plus 0.2 M EDTA buffer (pH 8); in a few cases, DNA was extracted from blood collected in EDTA (PBMC) or from CSF, which was collected from two patients with central nervous system reactivation, as previously reported [27,28]; samples were stored at −20 °C. The quantity and purity of the DNA were determined with a spectrophotometer (Gene-Quant, Pharmacia Biotech, Cambridge, England), and only samples with high purity were used in the experiments.
Author Summary
Chagas disease is endemic in Latin America and is caused by the flagellate protozoan T. cruzi. The acute phase is asymptomatic in the majority of the cases and rarely causes inflammation of the heart or the central nervous system. Most infected patients progress to a chronic phase, characterized by cardiac or digestive involvement when not asymptomatic. However, when patients are also exposed to an immunosuppressant (such as chemotherapy), neoplasia, or other infections such as HIV, T. cruzi infection may develop into a severe disease (Chagas disease reactivation) involving the heart and central nervous system. The current microscopic methods for diagnosing Chagas disease reactivation are not sensitive enough to prevent the high rate of death observed in these cases. Therefore, we propose a quantitative method to monitor blood levels of the parasite, which will allow therapy to be administered as early as possible, even if the patient has not yet presented symptoms.
Qualitative PCR
PCR was performed using the S35 and S36 primer pair, which amplifies a 330-bp minicircle sequence from T. cruzi (Gibco™ Life Technologies, CA, USA) [29]. The reactions contained Taq polymerase, 0.2 µM of each primer, 1.4 mM MgCl2 and 50-150 ng of DNA. Negative controls for the master mix preparation and DNA addition and a positive control, which consisted of 2 × 10⁻¹⁵ mg of DNA from the Y strain of T. cruzi, were used. The presence of inhibitors of DNA amplification was verified by β-actin amplification and by amplification of duplicate patient samples containing parasite DNA. To assess the analytical sensitivity of the qualitative PCR assays, 10-fold dilutions from 0.2 pg to 0.002 fg of parasite DNA were processed; the detection limit of the assay was 0.2 fg of T. cruzi, which corresponded to 0.01 parasite/assay in an agarose gel.
Competitive PCR
A 280 bp DNA fragment with binding sites for the S34/S35 oligonucleotides was used for competitive PCR with the kDNA 330 bp product and was cloned into the pT7 Blue vector (kindly provided by the Laboratório Multidisciplinar de Pesquisa em Doença de Chagas, Universidade de Brasília). The assay was performed in the Laboratory of Immunology, Faculdade de Medicina da USP, as previously described [18]. Known concentrations of the competitor (150, 15, 1.5 and 0.15 fg), or no competitor, were mixed with four aliquots of DNA from patient samples that had previously shown positive PCR results for kDNA. For each competitive PCR analysis, we included five samples per patient.
The equivalence point was determined by visually comparing the intensities of the 280 and 330 bp products. The number of T. cruzi/mL of blood was calculated based on the blood volume used for extraction, the dilution of the sample, and the amount of patient DNA used in the PCR reaction.
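The back-calculation from the equivalence point to parasites/mL can be expressed as a simple scaling, sketched below; the conversion factor from femtograms of target DNA to parasite equivalents and all argument names are assumptions made for illustration, not values taken from the paper.

```python
# Illustrative conversion of a C-PCR equivalence point to parasites/mL of blood.
def parasites_per_ml(equivalence_fg, fg_per_parasite, reaction_dna_ng,
                     total_dna_ng, blood_volume_ml, dilution_factor=1.0):
    """Scale the parasite equivalents detected in one reaction back to the
    original blood volume used for DNA extraction (all inputs assumed)."""
    parasites_in_reaction = equivalence_fg / fg_per_parasite
    fraction_of_extract = reaction_dna_ng / total_dna_ng
    parasites_in_extract = parasites_in_reaction / fraction_of_extract
    return parasites_in_extract * dilution_factor / blood_volume_ml

# Example with made-up numbers: 15 fg of competitor at the equivalence point,
# assuming 200 fg of target kDNA per parasite equivalent.
print(parasites_per_ml(15, 200, reaction_dna_ng=100, total_dna_ng=5000,
                       blood_volume_ml=5))
```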
Real-time PCR
PCR for the microsatellite sequence TCZ3/TCZ4 (TGCTGCAGTCGGCTGATCGTTTTCGA/CAAGCTTGTTTGGTGTCCAGTGTGTGA), which was previously described by Ochs et al. (1996) [30] as internal primers for TCZ1 and TCZ2 (Gibco™ Life Technologies, CA, USA), was performed using 20 µl of SYBR™ Advantage™ qRT-PCR Premix (Clontech, CA, USA), according to the manufacturer's instructions. The mixture contained Taq polymerase, 0.2 µM of each primer, 1.4 mM MgCl2 and 50-150 ng of DNA. We amplified a 149 bp sequence using 45 PCR cycles with a denaturation temperature of 94 °C, an annealing temperature of 64 °C and an extension temperature of 72 °C on a RotorGene 3000™ (Corbett Research, Australia).
All DNA extractions and amplification reactions were performed with the appropriate negative controls to detect contamination at any stage of the procedure and with positive controls that gave reproducible results during all of the experiments.
The standard amplification curve was prepared from 10-fold dilutions of DNA from blood spiked with 5 × 10⁵ to 5 × 10⁻³ T. cruzi epimastigotes/mL (Y strain; human, Brazil). The detection limit was found at 0.005 parasite/mL (Figure 1). The initial number of parasites (10⁷/mL) was counted by microscopic examination (Neubauer chamber). The first blood aliquot, mixed with guanidine HCl-EDTA (v/v), was spiked with 5 × 10⁵ parasites/mL. After homogenization, this tube was used as the starter for preparing ten 10-fold serial dilutions ranging from 5 × 10⁵ to 5 × 10⁻³ T. cruzi epimastigotes/mL.
The final concentration of the patient sample was calculated based on the volume of the blood extracted, the amount of DNA amplified and the volume and dilution of the sample analyzed.
Dilution of the samples was necessary for patients with reactivation and for some coinfected patients with large numbers of parasites.
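A sketch of the standard-curve analysis and sample quantification is given below (Python with SciPy); the Ct values are placeholders chosen only to illustrate a curve with roughly the reported slope and efficiency, not data from this study.

```python
# qPCR standard curve: regress Ct on log10(concentration), derive efficiency,
# and interpolate unknown samples, rescaling for any dilution applied.
import numpy as np
from scipy import stats

conc = np.array([5e5, 5e4, 5e3, 5e2, 5e1, 5e0, 5e-1, 5e-2, 5e-3])   # parasites/mL
ct = np.array([12.1, 15.4, 18.7, 22.0, 25.3, 28.6, 31.9, 35.2, 38.5])  # placeholder Cts

slope, intercept, r, _, _ = stats.linregress(np.log10(conc), ct)
efficiency = 10 ** (-1.0 / slope) - 1        # 1.0 corresponds to 100% efficiency
print(f"slope={slope:.2f}  R2={r**2:.3f}  efficiency={efficiency:.2f}")

def quantify(ct_sample, dilution_factor=1.0):
    """Interpolate parasites/mL for a sample Ct and rescale by any dilution."""
    return 10 ** ((ct_sample - intercept) / slope) * dilution_factor

print(quantify(24.0, dilution_factor=10))    # example: a 1:10 diluted sample
```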
High levels of parasitemia were used for comparison among the different methods or groups but were not necessarily indicative of reactivation. Three trained individuals performed and read the tests (one was responsible for the xenodiagnosis, one for the hemoculture and the other for the molecular tests). The results were read blind.
Viral load
The HIV plasma viral load was determined by reverse-transcriptase (RT)-PCR using an Amplicor™ HIV-1 Monitor Test (Roche Diagnostic Systems, NJ, USA) in the Central Laboratory of Hospital das Clínicas da Faculdade de Medicina da Universidade de São Paulo; the assay had a lower detection limit of 200 copies of HIV/mm3.
Ethics
All protocols were approved by the ethics committee (Comissão de Ética para Análise de Projetos de Pesquisa - CAPPesq) of the Hospital das Clínicas, Faculdade de Medicina, University of São Paulo. Written informed consent was obtained from all participants.
Statistical analysis
SPSS version 17 was used for the statistical analyses. The chi-squared test and Fisher exact test were used to compare the qualitative results of the tests and the differences between the patient groups. The McNemar test was used for comparisons between paired proportions. For non-parametric data, Spearman's rank correlation coefficient was used for quantitative variables (P<0.01) [31]. Kruskal-Wallis one-way analysis of variance was also applied to non-parametric data to compare the three groups of patients, followed by pairwise comparison of independent groups using Dunn's multiple comparison post test (Prism 3.0, GraphPad Software, CA, USA). P values <0.05 were considered significant.
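For readers reproducing these comparisons with open-source tools, equivalent tests are available in SciPy, sketched below with assumed inputs (cr, co, re_ are arrays of parasites/mL for the three groups; table_2x2 is a 2x2 contingency table of qualitative results). The original analysis used SPSS and Prism; Dunn's post test and McNemar's test are available in, e.g., scikit-posthocs and statsmodels, respectively.

```python
# Non-parametric group comparisons and correlations with SciPy.
from scipy import stats

h_stat, p_kw = stats.kruskal(cr, co, re_)                 # Kruskal-Wallis across groups
rho, p_rho = stats.spearmanr(qrtpcr_counts, cpcr_counts)  # Spearman rank correlation
chi2, p_chi2, _, _ = stats.chi2_contingency(table_2x2)    # chi-squared on a 2x2 table
odds_ratio, p_fisher = stats.fisher_exact(table_2x2)      # Fisher exact test
print(p_kw, rho, p_chi2, p_fisher)
```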
Demographic characteristics
There were no significant differences in gender (CR male frequency = 38.6%; CO male frequency = 48.3%; RE male frequency = 60.0%) among the groups with and without HIV infection.
Quantitative tests
Competitive PCR. The semi-quantitative assay was performed on 38 samples with positive S35/36 PCR results: 17 CR, 16 CO and 5 RE (Figure 2A). A higher number of copies was observed in patients with coinfection and Chagas disease reactivation, followed by the group with both infections but no reactivation and chronic Chagas disease patients without HIV infection ( Table 2). The test was not performed on all samples because it requires a large amount of purified DNA, which was not always available in the amount necessary for the second measurement required when the equivalence point could not be determined.
Real-time PCR. The standard amplification curve was prepared from 10-fold dilutions of DNA from blood spiked with 5 × 10⁵ to 5 × 10⁻³ epimastigotes/mL. Figure 1A shows the limit of detection of 0.005 parasite/mL, found by Probit analysis; the slope was −3.2, the reaction efficiency was 1.05, and the linear regression curve showed R² = 0.986 (Figure 1B) and Tm = 89.66 ± 0.25 °C (Figure 1C). This assay was reproducible at ≥0.005 parasite/mL, as observed by Ct and Tm values in 5 different assays using triplicates and duplicates. Amplification products from patient samples with low parasitemia and doubtful melting temperature were analyzed by agarose gel electrophoresis to check the size of the expected 149 bp product (data not shown).
All 91 patients with Chagas disease with or without HIV infection were analyzed by qualitative PCR and qRT-PCR (one sample per patient; Table 1). The level of parasitemia, expressed as DNA copies/mL, was highest in the RE group, followed by the CO and CR groups (Figure 2B, Table 2). In the CSF of two patients with HIV/T. cruzi coinfection and Chagas disease reactivation, the number of parasite copies was greater than 5 × 10⁵ copies/mL.
Patients in the HIV/T. cruzi-coinfected group without Chagas disease reactivation presented different levels of parasitemia (Table 2); the majority were similar to the chronic cases, but a small portion (<10.0%) had the highest levels in this group, and the remainder had intermediate levels (data not shown).
The amplification of T. cruzi DNA extracted from the blood of four patients with HIV/T. cruzi coinfection (CO) is shown in Figure 3.
Correlation analysis of the quantitative results
Table 3 shows the Spearman correlation indices (rs) of the quantitative molecular, parasitological and immunological tests. A strong positive correlation between the number of parasites/mL detected by C-PCR and qRT-PCR is shown in Figure 4.
To study the influence of CD4 + and CD8 + T cells on the level of parasitemia, we calculated the Spearman correlation coefficient for 30 samples from patients infected with HIV/T. cruzi with and without parasite reactivation and found negative correlations with the number of CD4 + T cells (data not shown) and the CD4 + / CD8 + ratio ( Figure 5). However, no correlation was observed with the number of CD8 + T cells (data not shown).
A positive correlation was found between the HIV viral load and the level of T. cruzi parasitemia in 20 samples from individuals infected with HIV/T. cruzi with and without Chagas disease reactivation ( Figure 6).
Discussion
In this study, the utility of the molecular methods C-PCR and qRT-PCR for the diagnosis and quantification of the number of parasites was evaluated (Table 2).
[Table 2: Results of xenodiagnosis, C-PCR and qRT-PCR in the three groups of patients (CO, CR, RE).]
High levels of parasitemia have been reported in three chronic heart disease patients after heart transplantation [24], but no report has compared parasitemia in reactivated and nonreactivated HIV/T. cruzi-infected groups. The inclusion of the latter group indicated that there are different levels of parasitemia in these patients. The majority had similar levels to the chronic cases, less than 10% had the highest levels of the group and the remainder showed intermediate levels of parasitemia. The patients with higher parasitemia might be targeted for therapy.
Additionally, unlike what we would expect based on the immunopathogenesis of HIV/T. cruzi coinfection, we found that one HIV-infected patient with reactivation had much lower parasitemia than the majority of the RE group. This episode of reactivation was characterized by the presence of trypomastigotes, as detected by direct microscopy of the blood [32]. The patient only presented mild symptoms, without meningoencephalitis, myocarditis or other tissue lesions. The level of CD4+ T cells was more than 300/mm3, and, during this period, the patient showed >20% of nymphs positive on xenodiagnosis and an increased viral load. The lineage type of the parasite was no different from the majority of the cases. It is possible that the level of parasitemia and the level of CD4+ cells/mm3 did not change because the reactivation was diagnosed in the initial phase of the disease and early therapy with benznidazole was administered. Although the number of patients with reactivation was small, the high number of DNA copies observed in the blood (Figure 2B) or cerebrospinal fluid of the remaining RE patients with myocarditis or meningoencephalitis is impressive. In our study, qRT-PCR showed high performance with a low detection limit (0.005 parasite/mL) and good efficiency, as previously described [32][33][34]. We suggest that patients showing increasing parasitemia in subsequent examinations, and/or stabilization at levels higher than previously seen in the same patient, should be carefully monitored for CD4+ counts and viral load.
The two quantitative molecular methods, C-PCR and qRT-PCR, used in the present study were strongly correlated by Spearman analysis (Table 3).
[Table 3: Spearman's rank correlation coefficients (rs), P values and sample numbers (n) observed between tests: blood culture, xenodiagnosis, competitive PCR and quantitative real-time PCR.]
The previous result of 41% positivity for chronic Chagas disease by qRT-PCR [22] is similar to the results observed in our study. In addition, qRT-PCR with the S35/S36 primers was used to measure parasitemia in neonates with congenital Chagas disease [23]; this analysis yielded similar data to those observed here in the RE group compared with the CR group using satellite sequences. Those authors reported a higher level of parasitemia (>1,000 copies/mL blood) in neonates compared with their mothers with chronic Chagas disease (<10 copies/mL blood) [23]. In our study, the level of parasitemia in the RE group (median 1428.90 parasites) was higher than in the CO group (median 1.57 parasites) and CR group (median 0.00 parasite). A previous study on the reactivation of Chagas disease in heart transplant patients with positive Strout tests [24] reported a lower concentration of parasites in the blood than that shown in our study (9.07 and 468.0 parasites/mL).
Our data show that C-PCR and qRT-PCR had higher sensitivities than the parasitological tests (xenodiagnosis and blood culture) and confirmed the previously described higher sensitivity of S35/S36 PCR [35][36][37]. Moreover, PCR takes less time (a few hours) and has a low risk of infection, in contrast to the labor-intensive and time-consuming parasitological tests (30-120 days), which have high specificity but require the manipulation of live parasites. The risk of DNA contamination in molecular tests needs to be minimized by using negative controls at each stage of the analysis. The risk of contamination is lower with qRT-PCR because it employs extraction kits and avoids post-PCR processing, although the high cost is a disadvantage.
Analyses of the demographic characteristics of the different clinical groups showed no differences in terms of gender, but RE patients were younger than the other groups, possibly due to the epidemiological characteristics of HIV-infected patients in Brazil [2]. A limitation of our study was that the small number of patients with Chagas disease reactivation did not allow for age-matched controls.
An analysis of the level of parasitemia in different groups represents a good strategy to monitor the host protozoan/virus imbalance. Our data were not influenced by the lineage of the parasite, which was similar for most of the isolates (data not shown), as previously observed [24]. In our study, the parasite level was lower in the CR group and higher in the CO and RE groups, possibly due to ability of the parasite to evade the host immune response in patients without HIV infection. Cellular immunity and macrophage deficiencies in HIV infection could explain the increased parasitemia observed in HIV/T. cruzi-infected patients. The highest level of parasitemia was seen in the RE group, which was associated with increased HIV viral load, a decreased number of CD4 + cells, and a decreased CD4 + /CD8 + cell ratio; these results confirm the failure of immune mechanisms in the RE group. These data are corroborated by clinical studies that showed a relationship between the reactivation of trypanosomiasis, increased HIV viral load, and decreased CD4 + counts in peripheral blood [11].
The correlation between HIV viral load and the concentration of parasites was demonstrated for the first time in our study, although it had been previously suggested by the relationship observed between an increased viral load and an increased rate of positive xenodiagnosis in HIV/T. cruzi-infected patients [11]. These data are also consistent with the rapid evolution of murine leukemia virus in mice infected by T. cruzi [38]. Although no relationship was found between CD8+ cells and the concentration of parasites, their ability to control infection via IFN-γ secretion [39] has been demonstrated previously. The strong correlation between the number of parasites and the CD4+/CD8+ ratio suggests that both cell types play a role in the control of parasitemia. The data from a previous report [40] in mice, which showed high parasitemia in CD4−/CD8+ animals and low parasitemia and high survival in CD4+/CD8+ mice, help to explain the results of this study.
Reactivation of chronic Chagas disease or increased parasitemia has been reported in T. cruzi-infected patients with hematological malignancies or autoimmune diseases receiving cytotoxic or antiinflammatory therapy with corticosteroids [4,24,41]. Severe reactivation of trypanosomiasis has been described in about 20% [11] of patients infected with HIV (although these data are possibly overestimated by the inclusion of one referral center). In these cases, if treatment is delayed for at least 30 days, the mortality rate of such patients is 80%, but mortality decreases to 20% for patients treated within 30 days, indicating that earlier diagnosis and treatment increase patient survival [8][9][10][11][12][13][14][15][16].
One of the limitations of this study is the cross-sectional design, which does not allow for an investigation of the evolution of parasitemia. Another limitation is the low number of HIV-infected patients with Chagas disease reactivation, which constitutes an important challenge for prospective studies. Nevertheless, we observed that the high levels of parasitemia seen in the majority of HIV-infected patients with reactivation were not found in coinfected patients without reactivation.
Considering the imbalance of host-parasite interactions in HIV/T. cruzi-coinfected patients and the fact that HIV infection might favor parasite growth by itself, therapy might be considered in these coinfected patients on the basis of high parasitemia, a low CD4+ count and a decreased CD4+/CD8+ ratio, even in the absence of symptoms. The adverse effects of the drugs; previous immunosuppression and the immunosuppressive effects of Chagas disease; impaired surveillance against neoplasia; and therapy efficacy, which is lower in the presence of low levels of parasitemia, must all be taken into consideration when recommending universal therapy for any coinfected patient.
We propose that prospective multicenter studies are warranted to address important questions regarding the management of HIV/T. cruzi coinfection, including determining why Chagas disease reactivation occurs in some individuals with lower levels of parasitemia, characterizing the influence of parasite lineage and immune responses on reactivation, and evaluating the outcome of initiating therapy on the basis of serology [42] versus treating with pre-emptive therapy (parasitemia versus uniquely or persistently high levels of parasitemia).
Finally, in the present study, we demonstrated for the first time that qRT-PCR shows different levels of parasitemia in groups of HIV/T. cruzi-infected patients with and without Chagas disease reactivation. The highest concentrations of parasites were found in the latter, followed by coinfected patients and, finally, patients with chronic Chagas disease. We propose that this new test be evaluated under standardized conditions in prospective controlled studies to determine the importance of parasitemia (persistent and/or increased) as a criterion for initiating pre-emptive therapy in chronic Chagas disease patients with HIV infection or immunosuppression. The association of increased parasitemia with increased viral HIV load and a decreased CD4 + count and CD4 + /CD8 + ratio in peripheral blood suggests that these could be analyzed as cofactors of increased parasitemia to further support any intervention.
In addition, this association (increased parasitemia, increased HIV viral load and decreased number of CD4 + cells/mm 3 and decreased CD4 + /CD8 + ratio) reinforces the need to monitor parasitemia using quantitative methods to determine when to start therapy for the better management of Chagas disease in patients with immunosuppression. | 2016-05-12T22:15:10.714Z | 2011-08-01T00:00:00.000 | {
"year": 2011,
"sha1": "0e15882ee2f0a194af1e1ac61aee0391e7fd1656",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosntds/article/file?id=10.1371/journal.pntd.0001277&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "0e15882ee2f0a194af1e1ac61aee0391e7fd1656",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
15712662 | pes2o/s2orc | v3-fos-license | Functional characterization of drought-responsive modules and genes in Oryza sativa: a network-based approach
Drought is one of the major environmental stress conditions affecting the yield of rice across the globe. Unraveling the functional roles of the drought-responsive genes and their underlying molecular mechanisms will provide important leads to improve the yield of rice. Co-expression relationships derived from condition-dependent gene expression data is an effective way to identify the functional associations between genes that are part of the same biological process and may be under similar transcriptional control. For this purpose, vast amount of freely available transcriptomic data may be used. In this study, we consider gene expression data for different tissues and developmental stages in response to drought stress. We analyze the network of co-expressed genes to identify drought-responsive genes modules in a tissue and stage-specific manner based on differential expression and gene enrichment analysis. Taking cues from the systems-level behavior of these modules, we propose two approaches to identify clusters of tightly co-expressed/co-regulated genes. Using graph-centrality measures and differential gene expression, we identify biologically informative genes that lack any functional annotation. We show that using orthologous information from other plant species, the conserved co-expression patterns of the uncharacterized genes can be identified. Presence of a conserved neighborhood enables us to extrapolate functional annotation. Alternatively, we show that single ‘guide-gene’ approach can help in understanding tissue-specific transcriptional regulation of uncharacterized genes. Finally, we confirm the predicted roles of uncharacterized genes by the analysis of conserved cis-elements and explain the possible roles of these genes toward drought tolerance.
Introduction
Interpretation of high-throughput data for rice toward improving its yield under varying environmental conditions is largely limited by the incomplete functional annotation of rice genes. With the seventh release of the MSU Rice Genome Annotation project (http://rice.plantbiology.msu.edu/index.shtml), out of the 55,986 loci identified, around 40.9% are putative, 24.7% are expressed, 3.9% are hypothetical, 0.2% are conserved hypothetical proteins while 30.9% are transposable-element related genes. With over 50% of the genes in rice lacking annotation for biological processes (Rhee and Mutwil, 2014), there is an urgent need for annotation pipelines to be developed. Several approaches such as artificial mutagenesis followed by the analysis of phenotypic variations (Jiang and Ramachandran, 2010), systems-level annotations of genes to characterize the tissue, condition and developmental stage-specificity (Childs et al., 2011;Movahedi et al., 2011), and sequence homology-based annotation pipelines (Conesa and Götz, 2008) have been used in the functional characterization of genes. Moreover, systems level information, viz., transcriptome and proteome data (Cao et al., 2012;Franceschini et al., 2013), gene regulatory information (Higo et al., 1999), metabolic and pathway level information (Dharmawardhana et al., 2013), including sequence-level information (Monaco et al., 2014) allow for an integrated and more accurate level of functional annotation.
Recently, a new approach termed co-expression network analysis is being used for functional annotations of genes (Childs et al., 2011;Liang et al., 2014). This is based on the observation that a coordinated participation of multiple genes is required to bring about any biological process within the cell and genes which are part of the same biological process may have similar expression profiles across different conditions. Such genes are said to be co-expressed. One of the most important applications of gene co-expression network analysis is to identify functional gene modules. In this study, a gene co-expression network is constructed using weighted gene correlation network analysis (WGCNA), a package in R. This method is built on the principles of graph theory where nodes correspond to genes and edges connecting them reflect correlations between gene expression profiles across samples (by typically using a weighted adjacency matrix). In general, two genes are connected if the similarity between their expression profiles is above a certain threshold, measured by Pearson correlation coefficient or other metrics. The network is then clustered into modules based on the topological overlap measure (TOM) which takes into account the correlation between two genes, as well as, the number of shared neighbors between them. This approach has been widely used on a range of systems, e.g., co-expression network analysis of adipose genes to find genes correlated with high serum triglyceride (TG) levels (Haas et al., 2012), identify differences in transcriptome organizations between normal and autistic brain (Voineagu et al., 2011), transcriptional changes in Alzheimer's disease and normal aging (Miller et al., 2008), biotic and abiotic stress responses using whole transcriptome sequencing in potato (Massa et al., 2013), understand seed germination in Arabidopsis (Bassel et al., 2011), transcriptional reprogramming in Arabidopsis due to mechanical wounding and insect herbivores (Appel et al., 2014), miRNA-regulated biological pathways relevant to pathogenic and symbiotic interactions in Medicago truncatula (Formey et al., 2014), to name a few.
Here, starting with the co-expression network, we propose an approach for the characterization of functional gene modules, and functional prediction of the uncharacterized genes. We consider gene expression data of a drought tolerant rice line from different tissues and developmental stages. First, we present the analysis of the gene modules to identify tissue-specific drought-responsive gene modules based on differential expression profiles of genes comprising these modules. The modules include previously reported stress-associated genes and transcription factors along with several uncharacterized genes. Next, we identify biologically informative genes that lack functional annotation using two approaches. First, by graph-topological measures such as degree centrality and tissue-dependent differential expression of genes (viz., fold-change) we filter the genes. Uncharacterized genes in such a set of well-connected genes differentially expressed above a certain threshold across all tissues are considered for functional annotation. Alternatively, using a guide-gene approach, we select transcription factors that are up-regulated and construct a subnetwork from their strongly connected neighbors, to identify drought-stress responsive genes that lack functional annotation. Here we show that sequence-based homology search and comparison with functional networks from model plant organisms, followed by motif analysis of the promoters of the uncharacterized genes and their neighbors, give us an insight into the role of these genes in response to drought stress.
Dataset Preprocessing
In this analysis, genome-wide temporal-spatial gene expression data of a drought-tolerant rice line (GSE26280) from three tissues (leaf, root, and young panicle) at three developmental stages (tillering, panicle elongation, and booting) exposed to drought stress are obtained from GEO-NCBI. The dataset consists of 36 samples (18 drought-treated and 18 control) with 57,381 probes, which are mapped to the Affymetrix annotation file for rice. Invariant set normalization, log2 transformation, and filtering are performed using dChip (Li and Wong, 2001) to remove systematic variation, scale the data, and eliminate probes with very low intensity values, respectively. The probes are filtered on criteria such as an expression level of more than 20 in at least 50% of the samples and a 'present' call in at least 20% of the arrays. In total, 25,804 probes satisfying the above filtering criteria are obtained. Probes without annotation or mapping to more than one gene are discarded. Finally, for multiple probes mapping to the same gene, the one showing the higher fold-change across all the samples is retained. After preprocessing, 18,799 unique probe-gene pairs are used for further analysis.
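A sketch of the probe-filtering logic described above, written in R for illustration; the dChip analysis itself is a separate tool, and the objects expr (probe-by-sample intensities), calls (present/absent calls), probe2gene (probe-to-gene map), and the drought/control column indices are hypothetical placeholders.

    # Keep probes expressed above 20 in >= 50% of samples and 'present' in >= 20% of arrays
    keep <- rowMeans(expr > 20) >= 0.50 & rowMeans(calls == "P") >= 0.20
    expr <- expr[keep, ]
    # For multiple probes mapping to the same gene, retain the probe with the
    # largest fold-change between drought and control samples (log2 data assumed)
    fc <- abs(rowMeans(expr[, drought_cols]) - rowMeans(expr[, control_cols]))
    gene_of <- probe2gene[rownames(expr)]          # named vector: probe -> gene
    best <- tapply(seq_len(nrow(expr)), gene_of, function(i) i[which.max(fc[i])])
    expr_gene <- expr[unlist(best), ]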
Network Construction and Module Detection
The WGCNA software package in R is used to construct the gene co-expression network of the drought-tolerant rice line from the normalized, log2-transformed expression matrix of 18,799 genes. In WGCNA, soft-thresholding is used to define similarity relationships between gene pairs. This is carried out by computing the unsigned Pearson correlation matrix and raising it to the power β = 8 (the soft threshold, chosen based on the approximate scale-free topology criterion). Subsequently, the function blockwiseModules is used for hierarchical clustering of genes with the Dynamic Tree Cut approach, using a maximum block size of 8000, minimum module size of 200, a "cut height" of 0.995, and "deep split" = 2. This results in 16 co-expressed modules ranging from 3798 (turquoise) to 296 (lightcyan) genes, with 360 genes left unclustered (grouped as the gray module).
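A minimal R sketch of this step, assuming expr_gene is the samples-by-genes log2 expression matrix produced by the preprocessing above; the parameter values mirror those stated in the text, but this is an illustrative call rather than the exact pipeline.

    library(WGCNA)
    net <- blockwiseModules(
      expr_gene,
      power           = 8,        # soft threshold from the scale-free topology criterion
      networkType     = "unsigned",
      TOMType         = "unsigned",
      maxBlockSize    = 8000,
      minModuleSize   = 200,
      detectCutHeight = 0.995,    # "cut height"
      deepSplit       = 2,
      numericLabels   = FALSE)    # return color labels (turquoise, red, ...)
    table(net$colors)             # module sizes; "grey" collects unassigned genes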
Statistical Significance of Network Modules
To test the robustness of the co-expressed modules obtained in our network, re-sampling of the dataset is carried out to estimate module quality statistics. Using the modulePreservation function in WGCNA (Langfelder et al., 2011), 200 permutations are performed, and log p-values and Z-scores for various network quality statistics (such as density, module membership, and connectivity) are computed for each module and summarized as psummary and Zsummary (see Supplementary Table S1). The Z-score provides evidence that a module is preserved more significantly than a random sample of all network genes, while the p-value gives the probability of observing the module quality statistic in a random sample of genes of the same size.
Here we observe that the psummary value is very low (∼0.0) and Zsummary > 10, thus providing strong evidence of network connectivity preservation and robustness of all the co-expressed modules.
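A hedged R sketch of the permutation test described above; the expr_gene matrix and net$colors labels are carried over from the earlier sketches, and the exact accessor names for the summary tables may differ across WGCNA versions.

    library(WGCNA)
    multiExpr  <- list(ref = list(data = expr_gene))
    multiColor <- list(ref = net$colors)
    mp <- modulePreservation(multiExpr, multiColor,
                             referenceNetworks = 1,
                             nPermutations     = 200,
                             verbose           = 3)
    # The returned list holds per-module quality statistics; Zsummary > 10 with
    # log p-values near zero supports module robustness
    str(mp$quality, max.level = 2)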
Biological Relevance of Network Modules
The tissue-specificity of the modules, based on the percentage of differentially expressed genes (DEGs) and their functions, defines the relevance of these modules to the various biological processes that are switched on, switched off, or unaffected in response to drought. For this purpose, differentially regulated genes are identified in a tissue- and stage-specific manner at fourfold change with p-value ≤ 0.05 using dChip. We observe that ∼17% of genes are differentially expressed in at least one of the tissues. Gene ontology and enrichment analysis of each of the modules is performed using agriGO (Du et al., 2010), RiceNetDB (Liu et al., 2013a), and RGAP (Kawahara et al., 2013). The RGAP database is also used for comparison of co-expression profiles and gene-pair associations with other rice datasets. The modules are visualized using Cytoscape (Shannon et al., 2003).
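The fourfold-change, p ≤ 0.05 criterion was applied in dChip; an equivalent filter in R might look like the following, where expr_gene is the samples-by-genes log2 matrix and drought_idx/control_idx are hypothetical sample indices for one tissue-stage combination.

    # Fourfold change on the log2 scale corresponds to |log2 fold-change| >= 2
    log2fc <- colMeans(expr_gene[drought_idx, ]) - colMeans(expr_gene[control_idx, ])
    pval   <- apply(expr_gene, 2, function(g)
                    t.test(g[drought_idx], g[control_idx])$p.value)
    degs   <- colnames(expr_gene)[abs(log2fc) >= 2 & pval <= 0.05]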
Alternative Network Inference Methods
To assess the biological significance of the gene-pair associations in the conserved gene clusters used for functional extrapolation, we constructed four co-expression networks based on different methods: correlation-based methods and the context likelihood of relatedness (CLR) method (Faith et al., 2007). For the correlation-based networks, two association rules, viz., Pearson correlation and Spearman rank correlation, were used; for the CLR-based networks, two reverse-engineering measures, viz., mutual information (MI) and the maximal information coefficient (MIC), were used (Reshef et al., 2011). The alternative networks were constructed using DeGNServer (Li et al., 2013), and the Markov Cluster Algorithm (MCL; Enright et al., 2002) was used for clustering of co-expressed genes in these networks. The parameters used for construction and other details of these four networks are given in Supplementary Table S2.
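As an analogous (not identical) sketch of the MI/CLR construction, the R code below uses the minet package for the mutual-information and CLR steps and a generic igraph clustering in place of MCL, since the study's actual networks were built with DeGNServer; the expression matrix and edge-weight cutoff are placeholders.

    library(minet)
    library(igraph)
    # expr_gene: samples x genes matrix (a subset keeps this tractable)
    mim     <- build.mim(expr_gene, estimator = "mi.empirical", disc = "equalfreq")
    clr_net <- clr(mim)                                    # background-corrected MI scores
    # Retain only the strongest edges, then cluster the resulting graph
    adj <- clr_net * (clr_net > quantile(clr_net, 0.99))
    g   <- graph_from_adjacency_matrix(adj, mode = "undirected",
                                       weighted = TRUE, diag = FALSE)
    modules <- cluster_louvain(g)                          # stand-in for MCL clustering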
Identification of Drought-Responsive Modules: Tissue Specificity
The 16 co-expressed gene modules obtained using WGCNA are analyzed to gain insight into the function of these co-expressed gene clusters. The GO enrichment analysis of the modules is performed by submitting the complete gene list of each module to agriGO, and statistical significance is determined using Fisher's exact test at p-value < 0.05. In Figure 1, the general function of the sixteen co-expressed modules (represented in different colors) is given based on the most enriched GO term and p-value. Since the general function of the modules does not indicate the core set of genes/modules that may respond to drought stress, we next analyzed the differential expression of genes in a tissue-specific manner. The percentage of DEGs in each module at various developmental stages in the three tissues is identified and depicted in Figure 2. It may be noted that the red and midnightblue modules exhibit a high percentage of DEGs ubiquitously across all three tissues and developmental stages; that is, these two modules comprise important drought-responsive genes, and we discuss their analysis in detail below. On the other hand, the purple, salmon, and magenta modules show a very low percentage of DEGs across the various tissues, indicating a negligible role in drought response. In the panicle elongation stage in leaves, almost all the modules exhibit a high percentage of DEGs (∼18.5%), suggesting it is the stage most affected by drought in the plant. The analysis of DEGs gives insight into the various drought-responsive molecular processes activated during this stage in rice. For example, in the brown module, a high percentage of the genes down-regulated in the panicle elongation stage in leaves are associated with gene expression, translation, and protein metabolic processes. This may be because various high-energy-requiring processes are shut down in leaves during drought. Similarly, the pink module exhibits a significant number of down-regulated genes in the tillering stage in roots and the panicle elongation stage in leaves. Some of these genes are involved in oxidoreductase activity, some are auxin-responsive genes [suggesting a decrease in lateral root development (Casimiro et al., 2001)], and some are involved in the biosynthesis of the secondary cell wall. In the tan module, genes up-regulated in both stages in roots are involved in ubiquitin-mediated proteolysis, plant hormone signal transduction (phosphatases), and polymeric compound degradation including starch (chitinase and β-amylases). These have been implicated in the remobilization of complex polymers to provide soluble sugars during stress conditions (Seiler et al., 2011).
In leaves, the tan module has a higher percentage of down-regulated genes in the tillering and panicle elongation stages compared to the booting stage. Some of these down-regulated genes are associated with primary and secondary cell wall biosynthesis and signal transduction (ras-related proteins). In the panicle elongation stage, all the modules except purple and salmon exhibit a large percentage of DEGs. Of these, the green, red, midnightblue, greenyellow, and lightcyan modules show a high percentage of up-regulated genes, while the remaining modules exhibit a higher percentage of down-regulated genes. As this is a reproductive stage, the plant is particularly sensitive to water requirements, and certain processes are preferentially activated or shut down, as apparent from the subsequent functional analysis. In the booting stage, when the panicle has already grown to a certain height, the percentage of down-regulated genes is reduced in most of the modules. Red and midnightblue are the most important modules in terms of DEGs in both the tillering and panicle elongation stages in roots, while the pink module exhibits a higher number of DEGs only in the tillering stage. In the panicle booting stage, the percentage of DEGs is much lower than in the other tissues. About 10-12% of genes are differentially expressed in four modules, namely red and midnightblue (mostly up-regulated) and tan and turquoise (mostly down-regulated). Thus, we see that by a systematic, module-wise analysis of DEGs, we can identify tissue-specific functional roles of the modules in response to environmental stress. The analysis of clusters or subnetworks of DEGs in these modules may provide insight into the molecular processes activated in various tissues and stages.
Below we present a detailed analysis of some of these modules to understand the underlying mechanisms in various tissues at different developmental stages in response to drought. In particular, we discuss the red and midnightblue modules, which show a significant percentage of DEGs across all tissues; the green, yellow, and blue modules, which show differential expression in the panicle elongation stage in leaves; and the turquoise module for the panicle booting stage. We also use graph-based approaches to identify important genes in the drought-induced modules (red, midnightblue, and green) that lack functional annotation. Two alternative approaches are proposed for the function prediction of these uncharacterized genes.
Essential Drought-Responsive Modules
As is evident from Figure 2, a significant fraction of genes in the red and midnightblue modules are differentially expressed (mostly up-regulated) ubiquitously across all three tissues and developmental stages. In fact, the percentage of DEGs common between root and leaf tissues is ∼40% in the red module and ∼47% in the midnightblue module. This suggests that the two modules comprise a core set of genes involved in drought-responsive processes at various developmental stages in the plant. Below we present a detailed analysis of these two modules to understand the functional role of the DEGs.
Analysis of Red Module
The red module consists of 1344 genes, a large fraction of which are involved in metabolism (∼45%), with ∼18% in response to stimulus and ∼9% in abiotic stress (data obtained from RiceNetDB). We observe that ∼34.3% (461) of the genes exhibit fourfold or higher differential expression in at least one of the tissues (14.6, 16, and 9% up-regulated and 8.4, 7, and 3% down-regulated in root, leaf, and panicle, respectively). Of these, 95 DEGs (∼21%) are annotated as "expressed proteins," having transcript-level evidence but lacking specific GO annotation for biological processes. We also observe that 57 genes are differentially expressed across all tissues and stages at fourfold or more (56 up-regulated and 1 down-regulated), suggesting a ubiquitous role in drought stress response. The phospholipase D (PLDα4) gene is down-regulated across all the stages. PLDs are known to have a role in lipid metabolism, growth, and development, and PLDα4 is reported to be suppressed by most plant hormones including abscisic acid (ABA; Li et al., 2007). Functional analysis of the 56 up-regulated genes in agriGO and RGAP suggests their association with reproduction and post-embryonic development (LEAs, seed maturation proteins, CBS domain containing membrane protein, embryonic protein DC-8), stress-responsive proteins (e.g., OsRCI2-5 and OsRCI2-7, dehydrins), and metabolic processes (e.g., phosphoglycerate mutase, dehydrogenase E1 component domain protein, transketolase-chloroplast precursor, glutathione S-transferase, cytokinin-O-glucosyltransferase 2, etc.).
Degree centrality analysis
To analyze whether the DEGs are also well-connected with other genes, we carried out a centrality-based analysis of the top 20% (269) high-degree ('hub') genes in the red module. Numerous studies have shown that genes/proteins with high degree tend to be essential for the organism (Jeong et al., 2001; Barabási and Oltvai, 2004). We observe that of the 269 hub genes, about 145 are differentially expressed in at least one of the tissues. Interestingly, 52 of the 56 up-regulated DEGs in the red module are also hub genes. Of these, 13 are uncharacterized "expressed proteins." These 13 genes, which are well-connected and up-regulated across all tissues and stages, are thus ideal candidates for further functional analysis.
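A sketch of the hub-gene filter, assuming adj_red is the weighted adjacency matrix restricted to red-module genes and up_all is the set of genes up-regulated at fourfold across all tissues and stages (both placeholders carried over from the earlier sketches).

    # Intramodular connectivity (degree) of each gene within the red module
    k_in <- rowSums(adj_red) - 1
    hubs <- names(k_in)[k_in >= quantile(k_in, 0.80)]   # top 20% by degree
    # Candidates for annotation: hub genes that are also ubiquitously up-regulated
    # and currently annotated only as "expressed protein"
    candidates <- intersect(hubs, up_all)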
As a first step toward understanding the functional role of these 13 uncharacterized genes, a homology search for orthologs in other plant species was carried out using AraNet and the RGAP database. AraNet is a genome-wide, condition-independent functional network of Arabidopsis genes reconstructed by integrating functional genomics, proteomics, and comparative genomics datasets. The functional linkages among gene pairs are weighted by the log likelihood of the linked genes participating in the same biological processes (inferred from direct assays, protein-protein interactions, sequence/structure similarity, literature mining, etc.). The Rice Genome Annotation Project (RGAP; Kawahara et al., 2013) is another important resource that provides sequence and annotation data for the rice genome, including information about rice orthologous groups in Arabidopsis, maize, grapevine, poplar, etc. It also provides co-expression patterns between gene pairs from 15 different rice gene expression experiments. On searching AraNet, orthologs were found for only 6 of the 13 uncharacterized rice genes; however, these orthologs also lacked specific functional annotation for biological processes. Next, from the co-function network in AraNet, we extracted the top 100 neighbors (ranked by total edge weight score) of the six Arabidopsis orthologs and mapped them onto our rice co-expression network. Rice orthologs for 82 of these 100 Arabidopsis neighbors were identified in our co-expression network. As expected, the majority of the high-ranked neighbors of the Arabidopsis orthologs mapped to the red module (∼34%), with smaller fractions mapping to the turquoise (∼21%) and blue (∼8%) modules. We observe that for the 6 uncharacterized genes, 27 neighbors and 60 edges in this cluster are conserved between Arabidopsis and the red module in our network. In Figure 3, the conserved edges are depicted in brown between the six uncharacterized genes (shown in green) and the 27 conserved network neighbors mapped onto the red module. These 27 genes are mostly up-regulated, especially in root and leaf, and are well-connected in the network, as shown by the gray edges in Figure 3. The majority of these genes are involved in seed development (embryonic protein DC-8, LEAs, seed maturation protein PM41, small hydrophilic plant seed protein) and in biotic and abiotic stress response (pathogen-related protein, OsRCI2-5, DnaK family protein, etc.). AraNet prioritizes the putative ontology of a gene based on the most enriched ontologies among its neighbors. Extrapolating the annotation from the Arabidopsis orthologs in AraNet to the six uncharacterized rice genes, we assign the following GO terms to these genes: "regulation of transcription," "response to ABA," "seed development," "response to water deprivation," and "response to chitin," as shown in Table 1. For the remaining seven uncharacterized genes, ortholog searches in other plant species yielded orthologs for four genes in maize, but again with no specific functional annotation.
A search of these 13 uncharacterized genes and 27 conserved neighbors was carried out in the RGAP database to further confirm their co-expression. It was observed that 10 of the 13 uncharacterized genes and 11 of the 27 annotated genes are co-expressed and belong to the same module (turquoise) in experiment GSE6901 (7-day-old rice seedlings grown in the presence of light under control and stress conditions: drought, cold, and salinity). This includes five of the remaining seven uncharacterized genes with no Arabidopsis orthologs (the co-expression profiles are given in Supplementary Figure S1). The GSE6901 turquoise module was shown by Childs et al. (2011) to be associated with genes differentially expressed due to drought and salt stress. The conservation of co-expression profiles of 21 of the 40 genes in an independent experimental study further supports the biological significance of the associations between these genes.
Further evidence for biological relevance is provided by analyzing these associations in four different network constructions. For the Spearman rank correlation network and the MI-based CLR network, all 40 genes cluster in the same module, with 329 and 293 edges, respectively, conserved between the 13 uncharacterized and 27 characterized genes. For the Pearson correlation network and the MIC-based CLR network, 37 genes cluster together in the same module, with 297 and 140 edges between them, respectively (details in Supplementary Table S3). Thus, based on the inference from alternative network construction methods, the conserved neighborhood from AraNet, and the conserved co-expression profiles in an independent experimental study in RGAP, we conclude that these 40 genes are functionally related under drought stress.
Analysis of cis-regulatory elements
Co-expressed genes that are densely connected to each other in a functional module are likely to share similar short responsive elements. From the annotation of the network neighbors of the uncharacterized genes (Figure 3), we observe that these are LEAs and other stress-responsive genes with a known role in ABA response, so it is highly likely that the uncharacterized genes also have a role in the ABA signaling pathway. ABA is a regulatory molecule involved in drought stress tolerance, and its main function is to regulate osmotic stress tolerance via cellular dehydration tolerance genes. ABA-inducible genes have the ABA-responsive element (ABRE) in their promoters (Debnath et al., 2011). To check for the presence of ABREs in the uncharacterized genes, the 1 kb upstream regions of 40 gene sequences (13 uncharacterized genes and 27 conserved network neighbors from Figure 3) are analyzed using the Plant Cis-acting Regulatory DNA Elements (PLACE) database (Higo, 1998). All 40 genes had a number of ABREs in the promoter region, e.g., ABRELATERD1 (Simpson et al., 2003; Nakashima et al., 2006), ABRERATCAL (Kaplan et al., 2006), ABREATCONSENSUS (Choi et al., 2000), etc. To see whether these genes share any other regulatory motifs, the 1 kb upstream regions of these gene sequences were analyzed using the motif discovery tool MEME (Bailey et al., 2009). The predicted motifs were then searched in the STAMP server (Mahony and Benos, 2007), which performs motif alignments against various motif databases. Almost all the sequences were enriched for the AGTACSAO element, which is probably linked to auxin (Kisu et al., 1998), and for C2GMAUX28, which is also associated with auxin-responsive genes (Nagao et al., 1993). The presence of both ABRE and auxin-associated cis-elements suggests a possible link between these two phytohormones. The interdependency between these two hormones has recently been studied by Liu et al. (2013b), who reported that crosstalk exists between auxin action in seed dormancy and the ABA signaling pathway and showed that auxin acts upstream of ABI3 (a major regulator of seed dormancy) by recruiting ARF10 and ARF16 to control the expression of ABI3 during seed germination.

FIGURE 3 | Uncharacterized genes in the red module. Genes that are high-degree and up-regulated across all developmental stages and tissues are depicted in green (having Arabidopsis orthologs) and purple (having no Arabidopsis orthologs). Their neighbors, identified as orthologs in the co-function network in AraNet, are colored according to the average fold-change in drought samples. Brown edges denote conserved co-expressed links among genes in rice and Arabidopsis, while gray edges correspond to co-expressed links in the red module. The numbers in brackets indicate the rank of the neighbors (based on edge weight) in AraNet.
Analysis of the Midnightblue Module
The midnightblue module is one of the smaller modules, with only 383 genes. Functional enrichment analysis of this module in RiceNetDB indicates that about 29% of the genes belong to primary metabolism, 16% to response to stimulus, and 14% to nucleobase, nucleoside, nucleotide, and nucleic acid metabolism. Of these, 98 genes (∼26%) are differentially expressed at fourfold in at least one of the tissues. In all three tissues, the percentage of up-regulated genes (∼12.8, 12.5, and 9.7% in root, leaf, and panicle, respectively) is higher than that of down-regulated genes (∼4.4, 3.9, and 1.8%). About 18% of the DEGs are 'expressed proteins' with no functional annotation. The GO analysis revealed that this module contains a number of proteins involved in nucleotide binding, specifically ATP binding. Some of the up-regulated genes of this module involved in ATP-binding activities are an AAA-type ATPase family protein, an ABC transporter ATP-binding protein, a plant PDR-ABC transporter associated protein, an NBS-LRR disease resistance protein, a plasma membrane ATPase, etc. Apart from these, genes involved in RNA biosynthetic processes, such as AP2/EREBP transcription factors (LOC_Os08g36920 and LOC_Os06g07030), a bZIP transcription factor (LOC_Os01g64730), translation initiation factor SUI1, and a homeobox domain containing protein (LOC_Os01g19694), are observed to be up-regulated. AP2/EREBP (APETALA2/ethylene-responsive element-binding protein) is a large family of TF genes in the plant kingdom involved in a myriad of functions such as seed development, organ development, and response to biotic and abiotic stress (Sharoni et al., 2011). The bZIP transcription factors belong to a large family of regulatory proteins involved in seed development and maturation and in stress response, primarily through ABA-dependent signaling pathways (Jakoby et al., 2002; Xu et al., 2012).
Degree Centrality Analysis
From topological network analysis, we observe that of the top 20% of high-degree genes, ∼56% are up-regulated in at least one of the tissues. Analysis of the down-regulated genes shows that the invertase/pectin methylesterase inhibitor family protein (LOC_Os10g36500) is down-regulated, especially in root and leaf, and has a role in basal disease resistance and tolerance to oxidative stress, as shown in a recent meta-analytic study on rice biotic and abiotic stress conditions (Shaik and Ramakrishna, 2013). The gene lachrymatory factor synthase (LOC_Os01g10210), found to be significantly down-regulated across all tissues, has an ortholog in onion reported to be involved in secondary metabolism (Jones et al., 2004). We observe that 18 genes are up-regulated at fourfold across all tissues and stages. These genes are involved in nucleobase-containing compound metabolic processes, including those associated with ATP (AAA-type ATPase family protein, ATP-dependent protease La, ABC transporter ATP-binding protein), transcription factors such as bZIP (LOC_Os01g64730), HSF (LOC_Os06g35960), and HB (LOC_Os01g19694), and genes involved in various responses to stimulus such as the stem-specific protein TSJT1, protein phosphatase 2C, protein disulfide isomerase (involved in cell growth and differentiation), cytochrome c oxidase subunit, etc. Apart from these, three "expressed proteins" are up-regulated across all tissues and developmental stages. From homology-based analysis of the three uncharacterized genes in AraNet and the RGAP database, we found that two of these have characterized orthologs: the ortholog of LOC_Os10g36180 is "responsive to desiccation-29B" (RD29B), involved in the ABA signaling pathway, leaf senescence, and responses to salt, cold, and water deprivation, and the ortholog of LOC_Os08g01370 is a seed maturation protein in Arabidopsis. The analysis of the third uncharacterized gene (LOC_Os01g73110) in AraNet and RGAP suggests a role in floral organ abscission and response to ABA. Identification of cis-regulatory elements in the PLACE database indicates that all three uncharacterized genes have ABREs in their promoter regions, suggesting that they are activated in an ABA-dependent manner. The promoter analysis is in accordance with the predicted annotation, and the results are summarized in Table 1.
Leaves-Specific Module
In the tillering stage in leaves, the tan, green, and yellow modules show a significant fraction of DEGs apart from the red and midnightblue modules. In the panicle elongation stage, all the modules except purple, salmon, and magenta show a very high fraction of DEGs, and a similar trend (but with a lower percentage) is observed in the booting stage in leaves. The onset of the reproductive stage is marked by the elongation of the panicle, and by the booting stage the panicle is completely developed. The morphology of the panicle is one of the main determinants of rice yield (Furutani et al., 2006). Previous studies have shown that a large number of genes, especially transcription factors, are differentially expressed during the early stages of panicle development due to the series of rapid morphological changes occurring in the plant (Zhang et al., 2005; Furutani et al., 2006). In our analysis, we observe that the modules exhibiting a high percentage of DEGs specific to the panicle elongation stage in leaves are the green (∼23% up-regulated and 16.6% down-regulated), greenyellow (∼15% up-regulated and 13.7% down-regulated), yellow (∼7.5% up-regulated and 18.6% down-regulated), and blue (∼6.6% up-regulated and ∼21.4% down-regulated) modules. While the DEGs in the greenyellow module did not show significant GO enrichment, the down-regulated genes in the blue module were found to be involved in photosynthesis and the tetrapyrrole (chlorophyll) biosynthetic process, indicating that photosynthesis is inhibited during drought. A similar trend is observed in the yellow module, with genes associated with photosynthesis, localization, and transport being down-regulated.
Analysis of Green Module
The green module exhibits a high percentage of DEGs in leaves (∼14.3% up-regulated and 8% down-regulated) compared to roots and panicle. This module is characterized by the presence of many differentially expressed transcription factors, probably due to the transition from a vegetative stage (tillering) to a reproductive stage (panicle elongation), as the plant balances countering drought with sustaining growth. About 57 genes of this module are observed to be up-regulated in all stages in leaves. These include genes involved in response to stimulus as well as a number of transcription factors such as MYB, AP2-EREBP, WRKY (WRKY72 and WRKY55), and bZIP, along with a U-box domain containing protein, WIP3 (wound-induced protein precursor), thaumatin, ZOS12-09 (C2H2 zinc finger protein), etc. About 31 genes of this module are observed to be down-regulated across all stages in leaves. These include genes involved in metabolism (dehydrogenase, OsSub12 - a putative subtilisin homolog, phosphoribosyltransferase, ZOS7-13 - C2H2 zinc finger protein, chlorophyll a/b binding protein, ent-kaurene synthase, chloroplast precursors, omega-3 fatty acid desaturase - chloroplast precursor, etc.).
Analysis of transcription factor: OsMYB2
The green module has a number of transcription factors that are differentially expressed in leaves across various stages. Here we present an alternative approach for the functional annotation of genes that are tightly coupled, differentially expressed, and likely to be co-regulated by a transcription factor. As a representative example, we consider the R2R3-MYB transcription factor OsMYB2 (LOC_Os07g48870) as the 'guide gene', which is known to play a role in salt, cold, and dehydration stress (Yang et al., 2012). The top 5% of first-degree neighbors (69) of this transcription factor are considered for further analysis. We observe that 47 of these first neighbors of OsMYB2 are up-regulated at twofold change or higher, 15 are down-regulated, and seven genes do not show any significant fold-change.
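A sketch of the guide-gene neighborhood selection, assuming tom is the topological overlap (or adjacency) matrix over green-module genes, with "OsMYB2" standing in for the guide gene's row name and log2fc_leaf for per-gene leaf fold-changes; all objects are placeholders.

    w <- tom["OsMYB2", ]
    w <- w[names(w) != "OsMYB2"]
    n_top     <- ceiling(0.05 * length(w))                    # top 5% strongest neighbors
    neighbors <- names(sort(w, decreasing = TRUE))[seq_len(n_top)]
    # Split neighbors by differential expression in leaves (twofold = |log2FC| >= 1)
    up_nb   <- neighbors[log2fc_leaf[neighbors] >=  1]
    down_nb <- neighbors[log2fc_leaf[neighbors] <= -1]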
The GO analysis of the 47 up-regulated genes indicates that 37 of these are involved in stress response (such as ABA stress-ripening protein, pleiotropic drug resistance protein, receptor-like protein kinase HAIKU2 precursor, harpin-induced proteins, white-brown complex homolog protein 11, oxidoreductase, aldo/keto reductase family protein, hypoxia-responsive family protein, glycosyl hydrolase, WIP4 - wound-induced protein precursor, etc.) and 10 are uncharacterized genes, as shown in Figure 4. Apart from these, stress-induced transcription factors such as ZOS12-09 (C2H2 zinc finger protein), WRKY72, WRKY55, and a MYB family transcription factor (LOC_Os01g03720) are also included, suggesting that OsMYB2, which exhibits a higher fold-change than the other differentially expressed transcription factors, may function as a master regulator of the drought response in leaves. Of the 10 uncharacterized genes, only three have characterized orthologs in RGAP: a PIGA (phosphatidylinositol N-acetylglucosaminyltransferase) subunit P involved in lipid metabolism (LOC_Os03g60520), a heme-binding protein (LOC_Os03g19580), and a metal-binding protein probably involved in chlorophyll binding (LOC_Os02g37180). The functional extrapolation for the three uncharacterized genes from AraNet is given in Table 1. Since no ortholog of OsMYB2 is known in Arabidopsis, we searched these 47 genes in the RGAP database. We found that 26 of the 37 annotated genes and 7 of the 10 uncharacterized genes are reported to be positively correlated with OsMYB2 in at least one of the experiments in RGAP (GSE17245, GSE6901, GSE6893, and GSE19024), providing evidence for the association of these genes with OsMYB2 (Childs et al., 2011). The association between the 48 genes (OsMYB2 and its 47 neighbors) was also examined in the four alternative network constructions. We observe that in the Pearson and Spearman rank correlation-based networks, 48 and 47 genes, respectively, are clustered together, with 47 and 46 edges to OsMYB2 in the two networks. Similarly, in the MI- and MIC-based CLR networks, 48 and 45 genes are clustered together in the same module, with 46 and 32 edges to OsMYB2, respectively (details in Supplementary Table S3). The conserved associations in independent experiments in RGAP and in the four alternative inference methods suggest that OsMYB2 may indeed be regulating these 47 genes. Thus, in this case with no known ortholog in the model organism, the single guide-gene approach provides a reliable route to function annotation. For further confirmation and functional annotation, promoter analysis of these 47 genes for the presence of the MYB motif is carried out.
To predict the functional role of the remaining seven uncharacterized genes, the 1 kb upstream sequences of the up-regulated neighbors (47 genes, including the 10 uncharacterized genes) of the transcription factor OsMYB2 were analyzed using MEME. The predicted cis-regulatory elements from MEME were filtered based on their frequency of occurrence in the promoter regions of the 47 genes and searched against databases of known motifs using STAMP. The motif analysis suggests that these genes may be targets of the AtSPL8 transcription factor family [involved in sporogenesis in anthers and ovules (Unte et al., 2003)], AtMYB15 [involved in enhanced sensitivity to ABA and improved drought tolerance (Ding et al., 2009)], and ABI4_2 [involved in seed germination, plastid-to-nucleus signaling, and sugar signaling (Bossi et al., 2009; Zhang et al., 2013)]. The 1 kb upstream sequences of the 10 uncharacterized genes were individually searched against the PLACE database. In all 10 sequences, motifs for ABRE and MYB binding sites were observed. The role of the ABA signaling pathway in drought response is well known, and the co-occurrence of MYB sites suggests a role in the ABA signaling pathway. Two other interesting motifs observed were WRKY71 and CACTFTPPCA1. The gene OsWRKY71 is known to be a transcriptional repressor of gibberellin (GA) signaling. Recent studies indicate that inhibition of GA signaling promotes drought tolerance by forming smaller stomatal pores and reducing leaf desiccation (Nir et al., 2014; Zawaski and Busov, 2014). The CACTFTPPCA1 motif occurred at high frequency in all 10 sequences, and 'CACT' is a key component of mesophyll (leaf)-specific gene expression in C4 plants (Gowik et al., 2004). Thus, the presence of these regulatory elements further supports a leaf-specific role for these uncharacterized genes, probably in desiccation tolerance. The results of the analysis are summarized in Table 1.
Panicle-Specific Module
In the panicle booting stage, we observe very few DEGs compared to other tissues. About 10% of the DEGs belong to four modules: red, midnightblue, tan, and turquoise. The turquoise module has a large fraction of down-regulated genes in this stage compared to other tissues. Based on the DEGs, we observe a number of processes to be switched off in this module. For example, genes involved in carbohydrate metabolic processes, including several glycosyl hydrolases and cellulose synthases involved in cellulose biosynthesis, are down-regulated. Genes involved in microtubule-based movement (having kinesin motor domains) are down-regulated, suggesting that processes associated with the cell cycle, cell elongation, and tissue expansion are probably affected by drought in the panicle. A number of peroxidase precursors involved in oxidoreductase activities, such as ROS scavenging, are down-regulated. Another interesting cluster of genes involved in auxin response (OsSAUR57, OsSAUR33, OsIAA31, etc.) is also down-regulated in the panicle. As auxin-responsive pathways are commonly associated with differentiation and development (Zhao, 2010) and their repression has been linked to plant defense responses (Wang et al., 2007), panicle growth and development is probably limited in this stage in response to drought. The few genes up-regulated at greater than twofold change in this tissue include the transcription factors AP2-EREBP (LOC_Os09g11480, LOC_Os05g29810, and LOC_Os01g49830) and genes involved in stress response (FAD binding domain of DNA photolyase domain containing protein, universal stress protein domain containing proteins, uvrB/uvrC motif family protein, dehydrins, etc.). The AP2-EREBP transcription factor family is specific to plants and has been shown to be involved in various developmental processes, pathogen response, and abiotic stress response.
Another conspicuous module in this tissue is the tan module, with ∼4.7% of genes up-regulated and ∼8.6% down-regulated. Carbohydrate metabolic processes are down-regulated in the panicle, as evident from the down-regulated glycosyl hydrolases, glycosyl transferase, sucrose synthase, and β-galactosidase. An important gene, GASR7 (gibberellin-regulated GASA precursor protein), is down-regulated. It has been identified as a candidate gene determining grain length in rice (Huang et al., 2012), suggesting smaller grain size under drought conditions. In the red module, ∼9% of the genes are up-regulated in the panicle tissue and are mostly associated with seed maturation, LEA proteins, dehydrins (involved in abiotic stress), and protein phosphatase 2Cs (involved in hormonal signaling). The midnightblue module has ∼9.7% of its genes up-regulated in the panicle tissue, which include the stress-induced transcription factors AP2-EREBP, bZIP (involved in hormone signaling), and HSF, and other nucleotide-binding proteins (ABC transporter, AAA-type ATPase, and ATP-dependent protease La). These nucleotide-binding proteins are associated with chaperone-like activities, ATP hydrolysis, and proteolytic activities during drought stress.
Discussion
In the past, several methods have successfully used the 'guilt by association' approach to transfer annotations among genes based on features such as sequence similarity, similarity in mRNA expression profiles, common biological processes, shared protein domains, membership in the same protein complex, or co-regulation or co-evolution (Vandepoele et al., 2009; Ficklin and Feltus, 2011; Lee et al., 2011; Wong et al., 2014). In rice, with only ∼1% of the protein-coding genes having experimental evidence for their functions, researchers are increasingly turning to integrated solutions that link genomic, transcriptomic, proteomic, and metabolomic information under different experimental conditions to understand plant response mechanisms related to abiotic and biotic stress tolerance, cell wall biology, photosynthesis, hormone regulation, immune response, etc. This requires planned experimental studies carried out under different conditions such as tissues, developmental stages, or environmental conditions. Here we consider one such experimental study wherein expression profiles in different tissues and developmental stages are obtained under drought stress. The objective of the study is to identify and annotate stress-induced genes that lack functional annotation. The co-expressed gene clusters are analyzed using a topology-based approach for identifying tightly coupled DEGs that show a prevalence of stress-regulatory cis-elements or are co-regulated by a common transcription factor.
Understanding the molecular mechanisms of drought response can be challenging due to the large number of complex interactions between hundreds of DEGs. Dissecting these complex interactions into a modular view in a tissue- or stage-specific manner can provide a systems-level understanding of drought response. As a first step toward identifying drought stress-induced genes, we construct a network of co-expressed gene modules to identify drought-responsive modules. This is carried out by analyzing the percentage of DEGs in various tissues and developmental stages and mapping them onto each module. We observe that two modules, red and midnightblue, exhibit a large fraction of DEGs across all developmental stages in the three tissues, root, leaf, and panicle. These modules consist of a high percentage of genes involved in metabolism, ∼45% (red) and ∼29% (midnightblue), respectively. This is in agreement with the emerging view that stress-adaptive signaling is tightly linked to cellular primary metabolism, energy supply, and developmental processes. It is observed that many of the top-ranked genes based on degree centrality in these two modules also exhibit a high positive fold-change in all the tissues. Hence, for functional annotation, we select candidate 'uncharacterized' genes that are high-degree nodes (top 20%) and up-regulated (fourfold) across all tissues and developmental stages.
Conserved co-expression patterns in functional networks across species provide an effective way to transfer annotations from a model organism to the organism of interest. With the availability of large amounts of high-throughput data, a number of such systems-based resources are now available, viz., AraNet (Lee et al., 2014), PlaNet (Mutwil et al., 2011), MetNet Online (Sucaet et al., 2012), etc. The first step in any functional annotation transfer is homology-based analysis. In the analysis of the red module, we observed that homologs of the uncharacterized rice genes in other plant organisms are reported as "conserved" or "expressed" proteins, i.e., they lack functional annotation in other species as well. We therefore next analyzed the conserved co-expression patterns of homologous genes in the model plant Arabidopsis thaliana. A direct advantage of such an analysis is the elimination of irrelevant gene connections arising in the network due to noise. The subnetwork of the homologs of the 13 uncharacterized rice genes is extracted from the Arabidopsis co-function network in AraNet and mapped onto the red module. This resulted in a network of 40 genes (13 uncharacterized genes and their 27 conserved neighbors), shown in Figure 3. Analysis of such conserved subnetworks provides reliable extrapolation of functional annotation to the uncharacterized genes from their conserved neighborhood. We observe terms such as "seed development," "response to ABA," and "water deprivation" as a common theme among the closely connected genes in this subnetwork. Motif analysis of the genes in this subnetwork shows that they all share ABREs and auxin-associated cis-element motifs in their promoter sequences. It is known that the majority of ABA-regulated genes share the conserved ABRE motif. In a study by Seo and Park (2009), it was reported that ABA and auxin play critical roles in root growth under drought through complex signaling networks, suggesting a strong relationship between these two phytohormones. Thus, by the above approach, we are able to show that 13 previously uncharacterized genes are involved in ABA- and auxin-mediated signaling pathways.
The midnightblue module is another important drought-responsive module across all stages and tissues, especially root and leaf, with the top GO term indicating nucleotide-binding activities. High-degree, up-regulated genes in this module include many stress-responsive genes, namely those involved in nucleobase-containing compound metabolic processes including ATP binding, suggesting a role in the regulation of ATP synthesis and transport as well as in proteolytic activities, which are significant during drought. Combining the information from AraNet with the analysis of cis-regulatory elements, we predict that the three uncharacterized genes in this module are also induced in an ABA-dependent manner. The association of these ABA-regulated genes with other high-degree, up-regulated genes such as the bZIP (LOC_Os01g64730) and HSF (LOC_Os06g35960) transcription factors and stress-responsive genes such as the AAA-type ATPase and ATP-dependent protease suggests a possible cross-talk in the proteolytic activities during drought.
We observed that the green module displays a tissue-specific response in leaves, especially in the panicle elongation stage. A number of up-regulated genes involved in small-molecule metabolic processes, protein amino acid phosphorylation, and regulation of gene expression (transcription factors) are observed in this module. It is interesting that the blue and yellow modules, which have a role in photosynthesis and associated metabolic processes, are down-regulated, and this effect is most pronounced in the panicle elongation stage (Figure 2). The growth and morphology of the panicle is a key factor in determining yield (Li et al., 2010). A reduction in photosynthetic activity during this stage suggests that grain yield may be affected by drought.
An alternative approach to the identification of tissue-specific, drought-responsive genes is discussed in the analysis of the green module. Considering a transcription factor as a guide gene, a subnetwork of co-regulated genes is obtained. A number of studies (Mao et al., 2009; Fu and Xue, 2010; Wong et al., 2014) have used this approach, where 'bait genes' with known functions are used to query the co-expression network. The resulting subnetwork consists of the guide gene along with its first neighbors, which are associated with each other possibly due to a common biological process. Here we consider the OsMYB2 transcription factor as the guide gene and construct its subnetwork from its differentially expressed, tightly coupled first neighbors. OsMYB2 does not have a characterized ortholog in other plant species. However, functional analysis of its first neighbors shows their involvement in various biotic and abiotic stress responses. A few of its neighbors in the subnetwork are uncharacterized. Promoter analysis of these sequences confirmed the presence of MYB binding sites in the uncharacterized genes and other up-regulated neighboring genes. In plants, MYB transcription factors are known to be involved in key processes such as development, secondary metabolism, hormone signaling, disease resistance, and abiotic stress response (Allan et al., 2008; Cominelli and Tonelli, 2009). In the promoter regions of the uncharacterized genes, a number of ABRE motifs were also detected. Several studies have indicated the accumulation of ABA in vegetative tissues during drought, and stomatal closure is one of its key functions in leaves (Trejo et al., 1993; Xiong and Zhu, 2003).
Conclusion
In this study, we present a co-expression network-based approach for the functional annotation of uncharacterized genes in rice under drought stress. The study demonstrates how topological properties of gene co-expression networks can be used to identify drought-responsive modules in a tissue-specific manner. Here we consider clusters of co-expressed genes/transcription factors that are well-connected, have a conserved neighborhood across species, and share common cis-elements. Analysis of such clusters provides a powerful approach for the functional annotation of genes in response to environmental stress. By this approach, our functional annotation of 13 uncharacterized genes in the red module indicates their involvement in ABA- and auxin-mediated signaling pathways and suggests a cross-talk between ABA-regulated and auxin-responsive genes in response to drought stress. Similarly, the functional annotation of three uncharacterized genes in the midnightblue module suggests that they are activated in an ABA-dependent manner, and their association with other transcription factors and protease genes suggests a possible cross-talk in proteolytic activities during drought. Alternatively, when homologs of a transcription factor and its first-degree neighbors are not present in a model plant organism, we propose a single guide-gene approach. Based on this analysis, the 10 uncharacterized neighbors of the OsMYB2 transcription factor are shown to be associated with ABA response, leaf desiccation, and photosynthesis. The proposed approach is particularly useful when genes lack a known domain or functionally characterized homologs in the databases. We expect that integrating other types of information, such as protein-protein interactions, phylogeny, and RNA-seq data, may yield more reliable function predictions.
"year": 2015,
"sha1": "61d3ec31bdc17df77ef7ff476c564f92d537fb46",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fgene.2015.00256/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "61d3ec31bdc17df77ef7ff476c564f92d537fb46",
"s2fieldsofstudy": [
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
Genomics driven precision oncology in advanced biliary tract cancer improves survival
Highlights
• Biliary tract cancers (BTCs), including intrahepatic, perihilar, and distal cholangiocarcinoma as well as gallbladder cancer, are rare but aggressive malignancies with few effective therapies.
• Integrative clinical sequencing of 124 BTC patients at the University of Michigan identified actionable/potentially actionable aberrations in a majority of cases.
• Patients treated with molecularly matched targeted therapies showed significantly improved survival compared to those who could not be treated with matched therapy.
Introduction
Biliary tract cancers (BTCs) arise from the epithelial lining of the biliary ducts and comprise intrahepatic and extrahepatic (perihilar and distal) cholangiocarcinoma as well as gallbladder cancer. These are aggressive malignancies, with a median overall survival (OS) from diagnosis of less than 12 months [4,5] and a five-year survival rate of about 5% despite therapy [6]. Current systemic chemotherapy options for patients with advanced BTCs remain nonspecific and suboptimal; it is therefore imperative to further our understanding of the molecular biology of this disease and to define more targeted and effective therapeutic options.
Molecular profiling of BTCs has identified many common drivers [7-11], as well as molecular aberrations associated with specific anatomical subgroups: for example, FGFR2 fusions and mutations in IDH1/2, BAP1, ARID1A, and KRAS are predominantly seen in intrahepatic CCA (iCCA) [12-14]; KRAS, TP53, and ARID1A mutations or amplification of ERBB2 or ERBB3 in extrahepatic CCA; and TP53, ERBB2/3, CDKN2A/B, and ARID1A aberrations in gallbladder cancer [13,15]. Among these, targeted therapy options have recently become available for BTC patients with FGFR2 and IDH1 aberrations. The pan-FGFR inhibitors pemigatinib and infigratinib received accelerated FDA approval for use in patients with FGFR2 fusion or translocation who had progressed on first-line therapy, following multiple clinical trials that demonstrated clinical benefit in refractory BTC patients, with objective response rates (ORR) ranging from 25% to 36% and disease control rates as high as 70% to 80% [16-18]. Ivosidenib, an IDH1 inhibitor, demonstrated a statistically significant improvement in median progression-free survival (PFS) from 1.4 months to 2.7 months and in disease control rate from 28% to 53% compared to placebo, which led to its FDA approval in 2021 [19]. Other promising targets in BTC under clinical investigation include BRAF V600E [20] and ERBB2/HER-2 amplification [21]. Additionally, the application of immune checkpoint inhibitors (ICI) in BTCs has yielded variable benefit across several trials evaluating ICI as single-agent or dual therapy, and in combination with chemotherapy. In the frontline setting, the combination of chemotherapy with ICI resulted in response rates ranging from 27-73%, median PFS of 4.3-11.0 months, and median OS of 10.6-20.7 months [22-25]. In the refractory population, single-agent or combination immunotherapy demonstrated an ORR ranging from 5.8 to 23%, with median PFS of 1.5-3.6 months and median OS of 4.3-14.23 months [24,26-30]. In many of these trials, the median duration of response had not been reached, suggesting that a subset of patients had durable responses. Fewer than 5% of patients with BTC have underlying microsatellite instability/deficient mismatch repair or high tumor mutational burden, for which ICI has received FDA approval in a tissue-type agnostic manner.
Herein, we summarize findings from clinical sequencing of 124 BTC patients, with a focus on defining the spectrum of molecularly matched therapeutic options for this rare cancer and assessing their impact on the clinical management of patients.
Materials and methods
Sequencing was performed via the MI-ONCOSEQ program using standard protocols under Institutional Review Board-approved studies (IRB HUM00046018, HUM00067928, HUM00056496) at the Michigan Center for Translational Pathology, a Clinical Laboratory Improvement Amendments (CLIA)-compliant sequencing lab at the University of Michigan [31-34]. Patients enrolled in the MI-ONCOSEQ study provided written informed consent for comprehensive molecular profiling of tumor/germline exomes and the tumor transcriptome on either fresh tumor biopsies or formalin-fixed paraffin-embedded (FFPE) tissue blocks. In addition, patient data were collected from the electronic medical records under IRB application HUM00165244.
Next generation sequencing library preparation
Sample details, including age, gender, and disease stage, are summarized in Table 1 and Supplementary Table S1. Tissue acquisition, pathology review, and preparation of matched-pair (tumor/normal DNA) exome and tumor-only transcriptome libraries followed previously described protocols [32]. Samples with low tumor content were macro-dissected to enrich for tumor tissue based on pathologist assessment. Agilent "Human All Exon v4" exome probes and a selected target capture panel were used to capture tumor DNA, which was enriched following the manufacturer's protocol (Agilent/Roche). DNA/RNA paired-end sequencing libraries were sequenced on the Illumina HiSeq 2000 or HiSeq 2500 (2 × 100 nucleotide read length) (Illumina Inc., San Diego, CA).
Exome sequencing analysis
Whole-exome paired-end FASTQ files were aligned to the GRCh37 genome build using multithreaded Novoalign (version 2.08.02, Novocraft). Novosort and Picard (version 1.93) were used to sort and index the aligned BAM files and remove duplicates. Mutation analysis was carried out on matched tumor-normal pairs using freebayes (version 1.0.1) and pindel (version 0.2.5b9) as previously described [31,32,35]. Somatic SNV and indel calls from freebayes and pindel were post-filtered, requiring at least 5% variant allelic fraction, a minimum of six variant reads, and < 2% variant allelic fraction in the normal sample with at least 20X coverage. The indel thresholds were optimized using a pool of hundreds of matched normal samples sequenced with the same protocol and platform as described [35]. Germline mutation analysis required at least 10 variant reads in the normal sample, ≥ 20% allelic fraction, and < 1% population frequency in 1000 Genomes and ExAC. Variant annotation was performed using snpEff and snpSift (version 4.1g) based on RefSeq (from the UCSC genome browser, retrieved on 8/22/2016), COSMIC v79, dbSNP v146, ExAC v0.3, and 1000 Genomes phase 3 databases.
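A sketch of the post-filtering thresholds described above, written in R over a hypothetical table of raw variant calls; the column names are illustrative and do not correspond to actual freebayes or pindel output fields.

    # somatic: data.frame with tumor_vaf, tumor_alt_reads, normal_vaf, normal_depth
    somatic_pass <- subset(somatic,
                           tumor_vaf       >= 0.05 &   # >= 5% variant allelic fraction
                           tumor_alt_reads >= 6    &   # >= 6 supporting reads
                           normal_vaf      <  0.02 &   # < 2% VAF in matched normal
                           normal_depth    >= 20)      # with >= 20X normal coverage
    # germline: data.frame with normal_alt_reads, normal_vaf, pop_af (1000G/ExAC)
    germline_pass <- subset(germline,
                            normal_alt_reads >= 10 &
                            normal_vaf       >= 0.20 &
                            pop_af           <  0.01)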
Copy number aberration analysis was performed on exome data using DNAcopy (version 1.48.0) to obtain circular binary segmentation (CBS) segments; regions were normalized for GC content, and the log2-transformed exon coverage ratio between tumor and normal samples across the targeted regions was calculated as previously described [32,35]. Cohort-wide copy number analysis was performed by merging all the segment files, which were used as input to GISTIC version 2, and maftools was used to generate the cumulative copy number plot.
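A minimal DNAcopy sketch of the segmentation step for a single sample; the logratio, chrom, and pos vectors (GC-normalized log2 tumor/normal exon coverage ratios and their coordinates) are placeholders.

    library(DNAcopy)
    cna  <- CNA(genomdat = logratio, chrom = chrom, maploc = pos,
                data.type = "logratio", sampleid = "example_case")
    cna  <- smooth.CNA(cna)              # outlier smoothing before segmentation
    segs <- segment(cna, verbose = 0)    # circular binary segmentation (CBS)
    head(segs$output)                    # per-segment mean log2 ratios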
RNA sequencing data analysis
Strand-specific RNA sequencing (RNA-seq) libraries were used for gene expression and fusion analysis. Gene expression quantification was performed using kallisto version 0.43.1, and transcripts per million (TPM) values were used as input to the Qlucore Omics software ( https://www.qlucore.com/ ) for downstream expression analysis. Genes with < 1 TPM in at least 95% of the cohort were removed, and the data were log2-transformed. The expression data were normalized for preservation method (FFPE/fresh frozen), biopsy site, and tumor content. Unsupervised hierarchical clustering was performed on 69 immune marker genes, including 66 genes recently evaluated (Cancer Genome Atlas Research Network) plus the IFN-responsive chemokines (CXCL9-11). Fusion calling was performed using a combination of CRISP and CODAC from the MI-ONCOSEQ pipeline [32,35], fusioncatcher v1.10 [36], and arriba v1.1.0 [37]. The fusion calls were compiled and reported in Supplementary Table S8.
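A sketch of the expression filtering and transformation described above, assuming tpm is a gene-by-sample matrix of kallisto-derived TPM values aggregated to gene level (a placeholder object).

    # Remove genes with < 1 TPM in at least 95% of the cohort, then log2-transform
    keep   <- rowMeans(tpm < 1) < 0.95
    logtpm <- log2(tpm[keep, , drop = FALSE] + 1)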
Mutation burden estimation
Freebayes mutation calls were used for the mutation burden estimation. Mutations were filtered for coverage (≥ 10x) and variant allelic fraction (≥ 6%). Mutation burden was expressed as (number of mutations / total covered bases) × 10^6. VarScan2-processed VCF files from the TCGA CCA cohort (N = 51) were downloaded from the GDC data portal and lifted over from the GRCh38 to the GRCh37 reference genome using CrossMap for comparison with the MI-ONCOSEQ cohort.
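A minimal R sketch of the tumor mutation burden calculation as defined above; the calls table and its depth/vaf columns are hypothetical placeholders.

    tmb_per_mb <- function(calls, covered_bases) {
      passing <- subset(calls, depth >= 10 & vaf >= 0.06)   # coverage and VAF filters
      (nrow(passing) / covered_bases) * 1e6                 # mutations per megabase
    }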
Pathogenic germline variant analysis
Pathogenicity of germline variants was determined through review of the published literature and public databases, including but not limited to
Survival analysis
Subject efficacy data were manually extracted from review of electronic medical records. OS was defined as the duration of time from the date of advanced unresectable or metastatic disease until death from any cause. Follow-up time was censored at the date of last disease evaluation. Survival was estimated using the product-limit (Kaplan-Meier) method (GraphPad Prism 8, San Diego, CA). The analyses should be considered post hoc, and the results herein exploratory, with the intention to guide further definitive studies. A significance threshold for the P value was arbitrarily set to 0.05 for all statistical tests.
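The survival analysis was run in GraphPad Prism; an equivalent Kaplan-Meier estimate in R with the survival package might look like the following, where clin is a hypothetical per-patient table with OS time in months from advanced/metastatic diagnosis, an event indicator (1 = death, 0 = censored at last evaluation), and a matched-therapy group label.

    library(survival)
    fit <- survfit(Surv(os_months, event) ~ matched, data = clin)
    summary(fit)$table                                        # median OS per group
    survdiff(Surv(os_months, event) ~ matched, data = clin)   # log-rank test, alpha = 0.05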
In order to associate patient outcome with reported molecular alterations, we included all consecutive subjects with BTC with targeted gene panel analysis completed using alternative CLIA platforms at our institution. Gene alterations predictive of response to an FDA approved drug(s) were classified as 'actionable (Tier 1)', and aberrations associated with potential responsiveness to experimental drugs based on emerging data from ongoing clinical trials or compelling pre-clinical evidence were designated as 'potentially actionable (Tier 2)', and frequent aberrations noted in this cohort for which currently no therapeutic approach is available were deemed non-actionable (Supplementary Table S9).
Somatic aberration landscape of advanced BTCs presents diverse therapeutic avenues
Clinical sequencing data were obtained from a total of 124 consecutive patients with advanced BTC (from a total of 239 patients enrolled between September 2011 and February 2020; Fig. S1). The sequencing cohort was comprised of 52% women with a median age of 59 (range, 17-80) years, including intrahepatic (N = 88), perihilar (N = 10), and distal (N = 8) CCA, mixed hepatocellular/CCA (N = 5), and gallbladder cancer (N = 13), with 83 (67%) cases being post-chemotherapy and 63 (51%) metastatic ( Table 1 and Supplementary Table S1). We obtained high-quality exome sequencing data from 92 tumor/normal samples at MI-ONCOSEQ, as indicated by the 94% median alignment rate (range, 53-97%), mean coverage of 203X for whole exome (WXS) and 506X for the target capture panel, and an overall low PCR duplication rate averaging 8% (range 0.6-79%) (Supplementary Table S2). In parallel, high-quality capture transcriptome sequencing data from 85 tumor tissues were analyzed for gene fusions (Supplementary Table S7) and gene expression (Supplementary Table S8). Additionally, tumors from 32 cases were analyzed through other CLIA-approved gene panels from commercial vendors, including 30 from Foundation Medicine and 2 from Guardant Health (Supplementary Table S4).
Somatic mutations (Supplementary Tables S3 and S4), copy number aberrations (Supplementary Table S6), gene fusions (Supplementary Table S7), and tumor mutation burden (Supplementary Table S5) were characterized across the cohort. Apart from mutations, a high mutation burden in tumors also defines an actionable aberration that can potentially be matched with checkpoint blockade immunotherapy. Enumeration of mutation burden in the MI-ONCOSEQ cohort identified three cases with high tumor mutation burden, defined as > 10 mutations/Mb (Supplementary Table S5). These included MO_1347, a 46-year-old male with metastatic CCA (and a history of ampullary carcinoma) previously treated with gemcitabine and cisplatin; a lymph node biopsy from this case, histologically seen as poorly differentiated high-grade adenocarcinoma admixed with prominent inflammation, was found to harbor 225 mutations/Mb and a high microsatellite instability (MSI-high) score, consistent with biallelic loss of function of the mismatch repair gene MSH2 (truncating germline mutation MSH2 c.2494G > T; p.Glu832Ter; dbSNP: rs863225396), coupled with somatic loss of heterozygosity through the splice acceptor mutation MSH2 c.1662-1G > A. No specific mutation or extrinsic etiology could be associated with the high mutation burden of 109 mutations/Mb in the tumor from TP_2475, a 62-year-old female with stage IV metastatic CCA, "mixed" subtype (CMS-HCC), previously treated with CDDP/gemcitabine. The third case with high mutation burden, TP_2703 with 25.3 mutations/Mb, displayed mutation signature 4 (associated with tobacco smoking [18, 19]), consistent with the patient's 30 pack-year history of smoking. The average mutation burden in the MI-ONCOSEQ cohort, after excluding two cases with low tumor content and one, MO_1347, with MSI-high associated outlier mutation burden (Supplementary Table S5), was calculated as 4.3 mutations/Mb (range 0.45 to 108.9 mutations/Mb). This mutation burden in the cohort of advanced, metastatic tumors was found to be significantly higher than that of the TCGA-BTC cohort comprised of primary tumors (Wilcoxon rank test p-value 0.05*, Fig. 2A), consistent with similar observations across tumor types (for example, Robinson et al [32]). No significant difference was noted in the mutation burden of tumor samples post-chemotherapy (N = 52) compared to advanced tumors prior to chemotherapy (N = 39).
Gene fusion analysis using RNA-seq data identified known [12] and novel translocation events in 12 (9.7%) patients, including FGFR2, FGFR3, and YAP1 fused in frame with known and novel partners (Fig. S2). FGFR translocations were enriched in the iCCA subtype (N = 9), with three cases of FGFR2-BICC1; one each with FGFR2-KIAA1967, FGFR2-AFF4, FGFR2-AHCYL1, and FGFR2-CCDC6 fusions; and one each with the novel partners FGFR2-TAX1BP1 and MATN4-FGFR2. One gallbladder cancer patient was identified to have an FGFR3-TACC3 fusion, and one patient with mixed hepatocellular and CCA subtype had an FGFR2-BICC1 fusion (Fig. S1). All the FGFR rearrangements were found to retain the kinase domain, and all the FGFR fusion partners potentially exhibited oligomerization capability, suggesting a shared mode of kinase activation as noted previously [12].
In addition to FGFR2 gene fusions, two samples had hotspot activating mutations p.Y375C (also reported as p.Y276C) and p.C382R (also reported as p.C383R), and two cases had a novel in-frame indel p.H167_N173del located in the extracellular domain ( Figs. 2 B and S3). Importantly, a significant upregulation of FGFR2 gene expression (P = 0.029) was noted in the FGFR2 mutants (N = 4) as compared to the wild type cases ( Fig. 2 C), suggesting that the two patients with the novel indels represent a potentially activating aberration. Moreover, the median OS of patients in the fusion cohort (N = 12; 21.3 months) and mutation cohort was similar (N = 4; 21.5 months).
Patient MO_1778 with perihilar CCA exhibited two known, recurrent driver oncogenic fusions: FGFR2-BICC1 and YAP1-MAML2. While the FGFR2 fusion is a known driver in iCCA, recurrent YAP1-MAML2 fusion associated with aberrant Hippo pathway signaling has not been reported in CCA, but has been previously identified in other cancers [38][39][40][41] . The YAP1-MAML2 fusion encodes TEAD1, WW1 and WW2 domains from YAP1 and loss of Notch interaction domain in MAML2 , associated with transactivation of TEAD target genes leading to dedifferentiation or proliferation [39 , 42 , 43] .
Potentially actionable aberrations and novel avenues for targeted therapies in advanced BTCs
A total of 79 (63.7%) cases harbored one or more potentially actionable (Tier 2) aberrations for which preliminary clinical/preclinical rationale is available to match with experimental targeted therapeutics in ongoing clinical trials (Supplementary Table S9). Among the most common aberrations in this category were 35 patients with homozygous deletion or biallelic loss of Cyclin Dependent Kinase Inhibitor 2A, CDKN2A (p16INK4), associated with potential sensitivity to CDK4/6 inhibitors; 21 patients with activating mutations in KRAS/NRAS (and one case with a deleterious mutation in NF1 ), which may be considered for treatment with novel KRAS and/or MEK inhibitors; and 23 patients with truncating mutations in ARID1A , a SWI/SNF pathway regulator, potentially associated with synthetic lethality to PARPi, ATRi, or EZH2i. Additional cases with potentially actionable aberrations included tumors with amplification of MDM2 (with wild-type TP53 ); CDK4 and CCND1 (with wild-type RB1 ); NTRK1; MYC; and CCNE1 ( Fig. 2 D), supported by outlier expression of these genes (data not shown). We identified recurrent in-frame indels in FGFR2 that may represent gain-of-function mutations responsive to FGFR inhibitors, as recently shown, and non-BRAF V600 mutations (class II and class III) [44] that may respond to MEK inhibitors. Overall, 105 of 124 (84.7%) cases analyzed were determined to harbor one or more actionable or potentially actionable aberrations that could be matched with FDA-approved or experimental therapies in ongoing clinical trials.
Germline alterations
Pathogenic germline mutations were noted in 6 patients in the MI_Oncoseq cohort (6.5%) with majority in DNA damage repair pathway genes (2 cases of MUTYH , one each of BRCA1, BRCA2, ATM and MSH2) , and one case with germline mutation in FH, an essential gene in the tricarboxylic acid cycle ( Fig. 1 ; Supplementary Tables S3 and S4). Three patients with pathogenic germline mutations in MSH2, MUTYH and BRCA2 were found to harbor a second somatic aberration in the tumor resulting in biallelic loss.
Targeted therapies and survival
The median OS for the expanded cohort (MI-ONCOSEQ and other CLIA platforms) from date of diagnosis of advanced unresectable or metastatic disease was 15.2 months (range, 1.5-96.9), and from date of diagnosis was 19.2 months (range, 1.3-166). The median follow-up from date of release of genomic analysis report was 8.6 months (range, -1.4-61.3). We observed no significant imbalances in the baseline characteristics and treatment variables between the actionable matched and unmatched cohorts, including gender, age, ECOG performance status, FGFR status, exposure to platinum therapy, or number of lines of therapy, or distance from cancer center ( P > 0.05 by Fisher's exact test or Wilcoxon test; data not shown).
In the actionable cohort (N = 54; 43.5%), defined by the presence of a molecular aberration that can be matched with an FDA-approved therapeutic, 22 (40.7%) subjects received a molecularly matched therapy (matched treated cohort) off-label or on clinical trials ( Table 2 ), while 32 (59.3%) patients did not receive molecularly matched therapy (matched untreated cohort). The remaining patients were defined as non-actionable for this analysis (70; 56.5%). In the matched treated group, patients received matched therapy after failure of systemic chemotherapy, with the exception of one subject who received cobimetinib and vemurafenib off-label as first-line therapy for a BRAF V600E mutation ( Table 2 , Fig. 3 A). Patients with actionable mutations who received a matched therapy (N = 22) had significantly longer OS than the 70 patients in the non-actionable group or the 32 patients in the matched untreated group (28.1 months, 13.3 months, and 13.9 months, respectively; P < 0.01). The median OS between the matched treated and untreated arms of the actionable cohort had a hazard ratio of 0.33 (95% CI, 0.18-0.60, P < 0.01). However, median OS did not differ between the untreated actionable group and the non-actionable group (HR 1.13, 95% CI, 0.80-2.00, P = 0.31) ( Fig. 3 B).
A novel association between BRAF/ KRAS mutations and immune-modulator NT5E
Apart from somatic mutations or copy number aberrations, we used RNAseq data (in MI_Oncoseq cohort) to help inform precision oncology avenues. This included sensitive detection of FGFR2 gene fusions in a partner-agnostic manner, and corroboration of outlier expression in cases with amplification of targetable genes such as ERBB2, CCND1, CCNE1 , and MDM2 . Additionally, querying individual driver aberrations for therapeutically informative gene expression correlates, we discovered a remarkable association between tumors with RAS/RAF mutations and expression of 5 ′ -Nucleotidase Ecto, NT5E (CD73), a membrane protein that converts extracellular nucleotides to membrane-permeable nucleosides, associated with promotion of tumor immunosuppression. Fig. 4 A shows a significantly higher level of NT5E in BTC cases in the MI_Oncoseq cohort with activating mutations in KRAS and BRAF , the latter being significantly higher than KRAS . To assess this correlation in an external dataset, we accessed TCGA pan-cancer dataset from cBioportal, and compared NT5E expression in tumors with (1) BRAF V600E mutation, (2) KRAS G12/13 or Q61 substitutions, and (3) wild-type BRAF and KRAS . Tumors with other mutations in BRAF or KRAS , and cases with mutations in NRAS or HRAS , as well as cases with amplification or deletion of NT5E were excluded from this analysis to ensure relatively discreet comparisons. As seen in Fig. 4 B, tumors with activating KRAS/BRAF mutations showed significantly higher levels of NT5E expression, with BRAF mutated tumors showing relatively higher expression than KRAS mutated. In the context of BTCs, we corroborated the association between BRAF/KRAS mutations and NT5E expression level by IHC staining of select tumor tissue sections, as indicated ( Fig. 4 C-D). This association suggests follow up investigations for combination therapy with MEK and NT5E inhibitors in KRAS/BRAF mutant cases.
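The group-wise comparison behind this observation can be sketched as follows; the expression values are invented for illustration, and the test choice (Mann-Whitney U) is an assumption consistent with the non-parametric comparisons used elsewhere in the text.

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Illustrative log2 NT5E (CD73) expression values per mutation group (not cohort data).
nt5e_braf = np.array([9.8, 10.4, 11.1, 10.7])
nt5e_kras = np.array([8.9, 9.5, 9.1, 9.9, 9.3])
nt5e_wt   = np.array([7.2, 8.1, 7.8, 7.5, 8.4, 7.9])

for label, (a, b) in [("BRAF vs WT", (nt5e_braf, nt5e_wt)),
                      ("KRAS vs WT", (nt5e_kras, nt5e_wt)),
                      ("BRAF vs KRAS", (nt5e_braf, nt5e_kras))]:
    stat, p = mannwhitneyu(a, b, alternative="greater")
    print(f"{label}: U = {stat:.1f}, one-sided p = {p:.3f}")
```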
Discussion
Recent large-scale sequencing efforts such as TCGA, ICGC, and TARGET have provided insights into the underlying molecular mechanisms in a variety of cancer types. In this study, we analyzed a cohort of 124 patients with advanced BTC using integrative clinical sequencing. Overall, a sizeable 43% of BTC patients harbored actionable mutations; of these, the 40.7% who received matched therapy had significantly longer OS, by approximately 15 months, compared to the patients with actionable mutations who did not receive matched therapy. This suggests that patients with well-defined actionable molecular alterations derive considerable survival benefit from receiving matched targeted therapy.
Admittedly, the definition of actionability varies in the literature, but we used a common and perhaps stringent interpretation that includes only FDA-approved therapies for specific molecular alterations in any cancer, unless BTC-specific data suggested a lack of benefit, such as palbociclib monotherapy in cases with CDKN2A deletion [45]. Unfortunately, only 40.7% of the actionable cohort received matched targeted therapy. The most common reason was lack of an available early-phase clinical trial (n = 15), but other reasons included the molecular analysis report preceding clinical trial investigation/availability (n = 9), inability to obtain off-label targeted therapy for those who did not meet trial eligibility (n = 5), decline in functional status or demise of the patient prior to release of the molecular analysis report (n = 2), and patient refusal to participate in a clinical trial (n = 1). These outcomes suggest that precision oncology has a substantial clinical impact in patients with biliary cancer and warrants consideration of genomic analysis in all patients, particularly earlier in their treatment course, as well as continued investigation of novel biomarkers and therapeutics in this rare cancer.
We found the mutational burden in our cohort to be significantly higher than in the TCGA cohort, perhaps because the majority of the patients in our cohort had sequencing on tissue obtained at advanced disease (89%), biopsies mostly included metastatic sites (55%), and patients had had prior exposure to chemotherapy (56%). In comparison, the TCGA cohort includes tissue obtained at primary resection. This finding supports the hypothesis that tumor mutational burden may increase with prior exposure to chemotherapy and perhaps during the natural progression of the cancer. The overall tumor mutational burden is still low, however, compared to other cancers [46], and only a small percentage of tumors have a high enough mutational burden (3% with ≥ 10 mutations/Mb in our cohort) to leverage potential therapeutic benefit from immune checkpoint blockade [47, 48].
Tumors with DDR gene mutations have been associated with sensitivity to DNA damaging chemotherapy, including platinum agents, as well as PARP inhibition [49][50][51] . Germline mutants of BRCA1 and 2 without defined locus specific loss of heterozygosity (LOH) in tumors have been associated with functional homologous recombination deficiency as the wild-type allele may be inactivated via alternative mechanisms, such as promoter methylation. However, in absence of LOH inactivation they may lack sensitivity to DNA damaging agents [52] . Results from the phase 3 POLO trial showed significant improvement in median PFS when patients with germline BRCA1 or 2 mutated metastatic pancreatic adenocarcinoma were treated with olaparib as maintenance therapy after platinum-based chemotherapy in patients compared to placebo [53] . It is worthwhile to hypothesize a similar benefit in BRCA-mutated BTC treated with PARP inhibitors, and indeed multiple clinical trials with PARP inhibitors alone (NCT04298021, NCT04042831), or in combination with anti-PD1 antibody (NCT03639935) are accruing patients with BTC. In addition to BRCA1 or 2 (incidence of 3-5%) [54] in BTC, other DDR mutated genes have also been identified including ATM and PALB2 (5 patients in our cohort; 4%) that may also benefit with PARP inhibitors [55 , 56] . Furthermore, patients with IDH1 or IDH2 hotspot mutations (10-15%) may also be susceptible to PARP inhibitors due to production of (R)-2hydroxyglutarate (2HG), an oncometabolite that may impair homologous recombination by inhibiting the function of histone demethylases [57] . Thus, there is a potential for significant benefit in up to 30% of patients with BTC with PARP inhibitors. A significantly high frequency (34.6%) of deleterious mutations in epigenetic modifiers, ARID1A, BAP1 and PBRM1 in the SWI/SNF accessory subunit highlights the role of dysregulated chromatin remodeling in BTC. ARID1A, BAP1 and PBRM1 encode subunits of the SWI/SNF chromatin-remodeling genes and were mutated in 20%, 15% and 10% of the samples, respectively; these have previously been shown to be drivers of progression in iCCA [15] . ARID1A and BAP1 have been shown to impair homologous repair in vitro [58 , 59] and therefore increase susceptibility to PARP inhibitors; clinical trials to test this hypothesis are ongoing (e.g. NCT03207347). In addition, epigenetic inhibitors such as HDAC and EZH2 inhibitors [60] , proteolysis targeting chimera (PRO-TAC) degraders [61] , anti PD-1 antibodies [62] , and Aurora kinase A inhibitors [63] may also hold promise in targeting these mutations in BTC.
In addition to the mutations in the SWI/SNF complex, other epigenetic regulators such as IDH1 (and less commonly IDH2, FH ) hotspot mutations have been described in iCCA [64 , 65] . As noted above, the 2-HG oncometabolite is a byproduct of the IDH1 mutation and is known to dysregulate the function of the histone methylases [57] . Recently an IDH1 inhibitor, ivosidenib showed significant improvement in median PFS and OS compared to placebo in a phase 3 clinical trial [66] . Interestingly, 6 out of 23 (26%) patients with IDH1/2 mutations had concurrent mutations in either ARID1A, BAP1 or PBRM1 thus suggesting potential benefit from a combination of an IDH inhibitor and histone modifying agents such as HDAC or demethylating inhibitors in this subset, similar to AML [67 , 68 , 69] .
ERBB2 amplification was identified in 4% of our cohort consistent with other studies with fluke-negative BTCs [70] . Data from in vitro experiments [71] , retrospective case series [72] , and prospective phase 1/2 trial [73] support the ongoing investigation of ERBB2 targeted therapies in clinical trials in BTC (NCT02693535, NCT03613168, NCT01953926, NCT04466891). We also identified amplifications in CCND1 [74] , MDM2 [75] and NTRK [76 , 77] and when targeted have shown modest preliminary clinical data noted in other cancers, and are under further investigation in integral biomarker trials.
Other molecular alterations with less than 5% incidence in BTC that have shown promising activity include BRAF V600E mutation [20] . We identified 12 (9.7%) patients with BRAF mutation in our cohort of which 7 patients had non-V600E activating mutations, including class III (D594N, D594E, D594G), and undefined kinase domain mutations, K483E, M693V, G466E as well as N661K. Cells with BRAF class III mutations have been shown to be responsive to MEK inhibitors [44] . BRAF K483E is a recurrent mutation, shown to be transforming in culture [78] , and thus may represent a therapeutic target. Additionally, one case had a gene TRIM24-BRAF fusion, previously reported in a case of melanoma, sensitive to MEK inhibitor [79] .
The MI-ONCOSEQ study first described the FGFR2 fusions across diverse cancers in 2013. Herein, we describe that FGFR2 activating mutations also lead to upregulation of gene expression similar to the fusions. Moreover, the median OS in the FGFR fusion cohort was similar to the FGFR activating mutation cohort (21.3 versus 21.5 months, respectively; data not shown) although the cohort sizes are small (N = 15 and 4, respectively). The median OS of patients in the FGFR cohort (fusions or activating mutations) was higher compared to the FGFR wild type (21.3 versus 14.0 months, respectively; p value 0.07; data not shown). Of the 19 patients in the FGFR fusion/activating mutation cohort, 8 patients were treated with pan FGFR inhibitors and had a median OS of 22.8 months compared to 17.3 months in the untreated arm (p value of 0.31; data not shown). These data suggest that FGFR fusions (and potentially activating mutations) are both prognostic and predictive biomarkers in this rare cancer. We also identified a FGFR3-TACC3 fusion in a patient with gallbladder cancer, and to our knowledge this is the first report of a FGFR3 fusion in gallbladder cancer. Multiple FGFR fusion partners have been previously identified of which BICC1 is the most commonly noted [16] . Herein, we describe additional novel fusion partners, specifically the FGFR2-TAX1BP1, and MATN4-FGFR2 . Clinical sequencing efforts like MI-ONCOSEQ which incorporate transcriptome analysis for gene fusions are important to identify targetable FGFR fusions due to the combinatorial possibilities of FGFR family fusion to a variety of oligomerization partners, as well as other rare fusions [80 , 81] .
The discovery of a novel association between BRAF/KRAS mutations and the expression of the immunomodulatory target NT5E may define dual-precision therapeutic targets in a subset of cancers, including the relatively intractable KRAS-driven cancers. Notably, CD73 inhibitors are under intense clinical investigation across various cancers, with some exciting results noted in pancreatic cancer, a predominantly KRAS-driven malignancy. In the phase I ARC-8 trial (NCT04104672), treatment with the small-molecule CD73 inhibitor AB680 in combination with gemcitabine, nab-paclitaxel, and the PD-1 inhibitor zimberelimab in previously untreated patients with metastatic pancreatic adenocarcinoma demonstrated effectiveness, with an ORR of 41%. Tumors reportedly shrank or stabilized in 11 of 13 patients who received the treatment for at least 16 weeks [82], spurring dose-expansion and placebo-controlled phase II trials.
We acknowledge the limitations of sample resources, including neoplastic cellularity, which reduced the sample size in the RNA-seq and immune cluster analyses. Our study also merged data from different sequencing platforms (whole exome and targeted sequencing), thus limiting our analysis across the cohort to genomic regions common to the platforms. However, a uniform MI-ONCOSEQ analysis pipeline was used to ensure consistency and concordance across samples. These results may not be generalizable to the community setting for multiple reasons: the use of a more inclusive genomic analysis platform such as MI-ONCOSEQ, lack of clinical trials at many non-academic sites, patient willingness to travel to an academic institution, which may represent a more motivated sub-group (preserved performance status, younger age), and the fact that the non-MI-ONCOSEQ genomic analysis reports in our expanded cohort reflect a biased group of patients referred specifically for open clinical trials.
Conclusion
This study highlights the importance of integrative clinical sequencing in defining molecularly matched targeted therapy options for biliary tract cancer, a rare yet anatomically and molecularly diverse malignancy with an aggressive clinical course and poor long-term prognosis due to limited therapeutic options. We observed a significant improvement in survival when patients with actionable targets received matched therapies, and we also enumerate several potentially actionable targets that provide a basis for matching with investigational drugs in ongoing clinical trials. Furthermore, we describe novel FGFR activating mutations and novel FGFR2 fusion partners that are likely to have a direct impact on patient care and on diagnostic and therapeutic investigation. The novel association between KRAS/BRAF mutant tumors and the immunomodulatory target NT5E merits further investigation as a potential dual targeting modality in subsets of BTCs (as well as other cancers). These data provide evidence to strongly consider molecular analysis of tumors in patients with this rare cancer and to consider the role of investigational therapies.
Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Supplementary materials
Supplementary material associated with this article can be found, in the online version, at doi: 10.1016/j.neo.2023.100910 .
"year": 2023,
"sha1": "addeb0602597e518e6ba6e1496b4f48102b3a4f8",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.neo.2023.100910",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "a3f9b8d8dd865402bd02af2a69cb62f746ab6ffc",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Compound Mapping and Filter Algorithm for Hybrid SSD Structure
With the recent development of byte-unit nonvolatile random access memory (RAM), various methods utilizing quad level cell (QLC) not-AND (NAND) flash memory with non-volatile RAM have been proposed. However, tests have shown that these hybrid structures lead to a reduction in the performance of a hybrid solid state disk (SSD) owing to issues regarding space efficiency. This study proposes a compound address method and filter algorithm suitable for the next generation of NAND flash, called hybrid storage media, where QLCs and phase-change memory (PCM) are used together. The filter-mapping algorithm includes a management method that stores data in phase-change memory or flash memory according to the next command, which is accessed when a write command that is half or less than half a page in length is received from the file system. Tests have shown that the compound mapping and filter algorithm reduces the wasted pages by more than half and the number of merge operations is also significantly decreased. This leads to a decrease in the number of delete operations and improves the overall processing speed of the hardware. Keywords—Pram; hybrid architecture; QLC NAND flash
I. INTRODUCTION
There have been rapid changes affecting the memory layer in recent years with the development of byte-unit non-volatile (NV) RAM (phase-change memory). Phase-change memory (PCM) is similar to not-AND (NAND) flash memory, but also includes fast read and write operations, which are characteristics of main memory units. Moreover, its lifespan is approximately 10 times higher than that of NAND flash memory. Key examples of NVRAM include ferroelectric RAM (FeRAM), phase-change memory, and resistive RAM (ReRAM). A hybrid solid state drive (SSD) includes a flash translation layer (FTL), which is a software layer used to efficiently exchange information between hardware components by considering the hardware properties of the phase-change memory and NAND flash memory [1].
The existing hybrid SSD [2,3] categorizes data into hot and cold data depending on the reading and writing frequency, and then stores high-frequency hot data and metadata in phase-change memory and stores low-frequency cold data in flash memory. When commands are given for duplicate locations, it is possible to overwrite the phase-change memory and thus reduce the number of merge operations in the flash memory, achieving the ultimate goal of improving overall performance. These existing hybrid SSD structures have the disadvantage of reduced overall space utilization efficiency, because if a write operation delivers less than one page (a write unit) of data from the file system, the entire page will not be filled. To resolve the above issue, this paper proposes a compound mapping and filter algorithm for a hybrid SSD structure. Hybrid filtering refers to an algorithm that differentiates and stores data in the proper memory unit using two types of chips. This filter can be implemented through a buffer, such as a DRAM or register. The algorithm performs two major functions: first, it gathers short write commands and stores them on a single page to improve space efficiency. When a write command is one half of 8 KB or less (namely, 4 KB or less), where 8 KB is the standard page size in QLC NAND flash memory, the write command information is stored in a hybrid filter to await the next command. If the next command is 4 KB or less and would be written on a different page, both the existing command in the filter and the new command are stored in the phase-change memory. Second, the number of merge operations is reduced through the hybrid filter, which improves overall performance. When a command is present in the same sector as the command stored in the filter, so that the data must be overwritten, there is no need for a separate operation because the command information is immediately overwritten in the filter, unlike a log block system, in which free blocks must be allocated to the log block to merge the commands. This reduces the number of merge operations, thereby improving the overall performance. Section 2 analyzes existing studies and their limitations. Section 3 describes the newly proposed filter algorithm and its implementation examples. Finally, Section 4 analyzes the test results and presents future research directions.
A. Information Update Connection
Existing FTL algorithms are categorized into 1:1 [4], 1:N [5,6], and M:N [7,8,9] schemes depending on the number of data blocks that are connected to a single log block. A data block is where the data are first written, and a log block delays the merge operations as long as possible by recording the overwritten data in different locations, according to each algorithm, in the event of a store command involving overlapping pages. In the 1:1 connection, if a write command occurs on overlapping pages, a new log block is allocated from the free blocks, and the duplicate sector is recorded in that block to delay the merge operation. However, because only one data block is linked to a single log block, if repeated write commands occur on the same page, merge operations occur much more frequently, thereby reducing the overall performance of the flash memory. In the 1:N connection, a total of N data blocks are linked to one log block. In other words, several data blocks can share a single log block. In addition, because it generally uses an "out-of-place" method that fills the space in any order, the space utilization efficiency is extremely high. However, in the worst-case scenario, as many data blocks as there are pages will be connected to a single log block, and a significant delay will occur when conducting merge operations. The M:N connection attempts to overcome the disadvantages of the 1:N connection. The main concept is to limit the number of data blocks that can be linked to a single log block.
B. Limitations of Previous Studies
The algorithms for the connection schemes mentioned in Section II.A are difficult to implement in hybrid SSDs, or do not provide optimum efficiency when implemented. Regardless of the algorithm applied, if the size of a write command is eight sectors (4 KB) or less, at least half of the 16 sectors that make up a standard QLC NAND flash memory page will inevitably be wasted. If a sector mapping method is applied to resolve this phenomenon using only the NAND flash memory, it will require an extremely large memory volume in the main memory device. However, if a hybrid structure comprised of phase-change memory and NAND flash memory is applied, and a phase-change memory of a certain size is mapped on sector units, a relatively small volume will be required instead.
A. Issue Analysis
Analysis of the traces of existing file system commands available in the UMass Trace Repository [10] indicated that 25% of all traces were not written chronologically. Of these, 24.7% were write commands of one half page size or less. Based on the characteristics of flash memory, if the next page is used after processing a write command, it is impossible to go back to the previously written page. Pages with wasted sectors after writing fewer than eight sectors (4 KB) accounted for 7% of all write commands. In conclusion, only 26.8% of the volume in all blocks was used, indicating that approximately 73% of the total volume was wasted.
B. Filter Algorithm
To resolve the issues discussed above, a compound mapping and filter algorithm is proposed. The overall structure is shown in Fig. 1. If a command is given to the file system, the command is stored in the appropriate storage space of the PCM and NAND flash memory after passing through the filter algorithm area. The NAND flash memory has data blocks and log blocks. Data passes through the registers before being stored in these blocks. Finally, the PCM contains only data blocks. The general characteristic of the flash memory and PCM is that one block consists of four pages and one page consists of sixteen sectors. In this architecture, PCM uses sector mapping and NAND flash memory uses block mapping. The filter algorithm is called compound mapping because it uses both types of mappings.
For existing 1:N association algorithms, only data blocks and log blocks are used, which places a heavy burden on these two block types. This can result in a significant number of merge operations, shortening a device's lifetime and reducing its performance. To mitigate these issues, we added a new PCM storage space and a filter area. The filter area identifies the data according to Algorithm 1, described later, selects either the PCM or the flash memory, and stores the data in the most suitable device. This reduces the merging workload and increases storage space efficiency, ultimately improving overall performance. As shown in Fig. 2, a command contains "command, logical sector number, data, size" information. A "command" is an operation issued by the file system to the flash memory; "W" means write and "R" means read, but only "W" is used here because only write commands are needed. The "logical sector number" is the number of the sector corresponding to the write command. "Data" is the content to be saved, and "size" is the capacity of the write command. The unit of size is bytes by default.
Given the data to process, divide the logical sector number (LSN) by the number of sectors per page to obtain the quotient, which is called the logical page number (LPN). The LSN and LPN reside in the DRAM in the filter area only for a short time, until the next command is given. The size of the buffer in which the instruction can be stored for this short time is equal to the maximum capacity of the filter specified in Algorithm 1.
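The LSN-to-LPN mapping reduces to integer division, as in the short sketch below; the constants follow the page and sector geometry stated in the text.

```python
SECTORS_PER_PAGE = 16        # one page = 16 sectors, as stated above
SECTOR_SIZE = 512            # bytes; 16 sectors x 512 B = 8 KB per QLC page

def locate(lsn):
    """Map a logical sector number to (logical page number, sector offset within page)."""
    return lsn // SECTORS_PER_PAGE, lsn % SECTORS_PER_PAGE

print(locate(1))    # (0, 1): LSN 1 falls on LPN 0, sector 1 (the Fig. 2 example)
print(locate(30))   # (1, 14): LSN 30 falls on LPN 1
```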
In order to simplify the filter, it is represented as a single page.In this paper, the LSN is recorded in parentheses for intuitive confirmation.On an actual system, the LSN is not stored in the filter.
For example, in Fig. 2, if LSN 1 is divided by 16, the quotient is 0 and the remainder is 1. Therefore, the LPN is written as 0 and the data is written to sector 1 of the filter. This command is the "Filter_command" in Algorithm 1, and a detailed description of this is provided in the next paragraph. A "Filter_command" will be saved to the PCM or flash memory, or remain stored in the filter, as determined by Algorithm 1 when processing the next command.
Algorithm 1 describes the overall processing of the filter and example commands are provided in Fig. 3.We used "OLTP Application I / O", a collection of I / O command information given to storage among the traces provided publicly in UMass Trace Repository [10].In Algorithm 1, a 'Filter_command' implies that the command is already stored in the filter and a "Next_command" refers to the command that is currently being processed.
The first item to check when given a command to process is the size of the command.If the size of the "Next_command" is less than or equal to the maximum value that can be stored in the filter, verify that the filter has a "Filter_command" already stored.
In the case of an instruction given as "(1) W, 1, A, 6144" in Fig. 3, the size of the instruction is larger than 4096 B, so it does not go through the filter (Algorithm 1, lines 17-18). This is because, when a trace is analyzed, very few consecutive write commands that exceed the maximum size of the filter appear in the same sector. When a scenario occurs in which a command is to be stored in flash memory, the filter collects as many identical-page commands into registers as possible before they are stored in flash memory. If the page currently being collected in the register is equal to the LPN of the command, or if the register is empty, the command is stored in this register. This includes the processing of the "(3) W, 9, C, 512" and "(4) W, 4, D, 1536" commands, for example. However, if the page number being collected in the register differs from the LPN of the next instruction, or if the next instruction would cause a register overflow, the data of the existing register is stored in the flash memory before the next instruction is stored in the register.
If a scenario occurs that saves a command to the filter, such as processing the "(2) W, 9, B, 512" command, the command can be saved to the filter immediately if the filter is empty (lines 1 and 15-16). If a "Next_command" must be saved in the filter (lines 3-7) but the filter is not empty (a "Filter_command" is already stored in the filter), the LPNs of both commands are compared (line 3). If the LPNs are the same, the LSNs are compared. If the LSNs are also the same, the filter is overwritten (lines 4-5). In the figure, the command "(3) W, 9, C, 512" would overwrite command "(2) W, 9, B, 512".
If the LSNs are not the same, two write commands, such as the "(3) W, 9, C, 512" and "(4) W, 4, D, 1536" commands, are stored in the flash memory on the same page and the filter state changes to empty (lines 6-7).
When the "(6) W, 30, F, 512" command is given as the "Next command", the LPNs of the "Filter command" and "Next command" are different (line 8).Therefore, the sector mapping table is referred to and the PCM checks whether this is the same sector as the "Filter command".In the current situation, because the PCM is empty, there is no identical sector, so the "Filter command" is stored in the PCM and the "Next command" is stored in the filter (lines 12-14).
When the command "(7) W, 15, G, 1536" is given, the PCM is not empty but there is no same sector for the filter command, so data "F" corresponding to the "Filter command" is newly saved in the PCM.However, if the "(8) W, 30, H, 512" command is given, the PCM overwrites existing data 'E' with filter command data "G" because this is the same sector as that of the filter command.If the algorithm used were the 1:N association, it would have already used a significant amount of log block space due to overwriting.
C. Example Execution and Limitations
Fig. 4(a) and (b) show the results of the 1:N algorithm and the proposed filter algorithm performed on the same command.The command is one of the "OLTP Application I / O" commands publicly available from the UMass Trace Repository used for performance evaluations [10].
In the results analysis of Fig. 4, when running the compound mapping and filter algorithm, three pages were used for NAND flash memory and 55 sectors were used for PCM, resulting in 52,682 B. However, using 1:N concatenation, 10 pages were used for the data block and 13 pages for the 188,416 B log block.As a result, the space utilization efficiency of the filter algorithm is three times higher than for the 1:N association algorithm.In addition, the 1:N association algorithm wastes approximately 17 times more space than the filter algorithm.
Compared to the 1:N connection algorithm, not only does the compound mapping algorithm conduct much fewer merge operations, its use of partial sector mapping greatly improves the space utilization.Fig. 4(a) shows the processing of a command by this method.At a glance, the space utilization efficiency and the data storage density are much higher than the conventional 1:N association algorithm shown in Fig. 4(b).
Use of the conventional 1:N association algorithm results in many page allocations, as shown in Fig. 4(b).However, the amount of data that is actually stored in this space is very small, resulting in wasted capacity and lower space utilization efficiency.
The filter algorithm can store data on a sector-by-sector basis, and data of less than half a page (4 KB) can be algorithmically executed in the phase-change memory, where it is possible to overwrite data immediately when a write command occurs for the same position, and data are managed using sector-by-sector mappings.Complex flash memory mapping can be accomplished through a block-mapping application.With this approach, redundant sectors, which account for 79% of all traces, can be effectively managed.
However, from a cost point of view, there is a limit to the capacity of the phase change memory because this memory is expensive.Therefore, a small amount of space should be allocated to maximize the cost efficiency of the phase change memory.A minimum amount of space should also be allocated for the merge operator because if data should be stored in the phase change memory, but its space is not sufficient or the amount of invalid data that can be overwritten is too low, a merge operation will be performed.That is, since the size of the phase change memory is small, the number of merging operations increases.Therefore, the cost of the merge operator should be minimized.
As limitations, the volume used in the sector-mapping table is relatively large, and the cost of the phase-change memory is high.Therefore, a means to reduce the size of the mapping table and at the same time the amount of phasechange memory that provides the greatest efficiency for each NAND flash memory capacity should be sought.It is also necessary to consider a more efficient method of merging and to check detailed conditions on how to exchange information between the flash memory and PCM.
A. Test Results
This section compares and analyzes the efficiency of the proposed compound mapping and filter algorithm against the 1:N connection algorithm on the basis of traces, and measures the number of operations performed and the time needed to complete them. The operation times for the flash memory are assumed in the simulation by referring to a technical note provided by Micron Technology [11]: the time required for a random write per sector is 55 μs, and the time required for a block erasure is 500 μs. Trace analysis also indicates that the average size of one write operation is 3584 B. We used the "OLTP Application I/O" trace, a collection of I/O command information given to storage, provided publicly by the UMass Trace Repository [10]. We analyzed the read/write commands and the corresponding sector numbers and sizes, and conducted performance evaluations based on these commands. Because QLC is still in the development stage, it was not possible to perform a hardware performance evaluation, so the evaluation was performed in software.
As common characteristics for the two algorithms, one block is composed of 64 pages, and each page consists of 16 sectors, as shown in Table I. In both cases, the data block and log block domains use NAND flash memory. The 1:N connection algorithm uses a 1 GB data domain and a 10 MB log block domain; the filter algorithm was set up with a data domain of 1 GB, a log block domain of 5 MB, and a filter domain of 5 MB, where the filter domain uses dynamic, static, or phase-change RAM (DRAM, SRAM, or PRAM).
Table II shows the results of analyzing 378,914 write commands on a single chip. Here, a merge operation refers to merges between the data domain and log blocks in the NAND flash memory. For the 1:N connection algorithm, 1,112,738 write operations were required. This represents approximately 7,789,166 sectors (55 microseconds per sector), or approximately 428.4 seconds in total. On the other hand, the filter algorithm required 200.9 seconds because 522,012 sectors were involved. Therefore, using the filter algorithm, it is possible to reduce the number of write operations and their associated time by 46% compared with the conventional method. Erase operations also yielded significant differences. In the 1:N connection algorithm, 546 block deletion operations and 273 merge operations were performed. However, the filter algorithm required only 34 block deletions and 17 merge operations. These numbers indicate that, when using the filter algorithm, the numbers of delete and merge operations are reduced by 93% compared to the 1:N connection algorithm.
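For reference, the NAND-side timing figures above follow directly from the assumed per-operation costs, as in the sketch below; the PCM write latency is not specified in the text, so only the flash component is reproduced here.

```python
SECTOR_WRITE_US, BLOCK_ERASE_US = 55, 500   # simulation assumptions quoted above

def nand_time_seconds(sectors_written, blocks_erased):
    """Total NAND flash time from sector writes plus block erasures."""
    return (sectors_written * SECTOR_WRITE_US + blocks_erased * BLOCK_ERASE_US) / 1e6

# 1:N connection algorithm: ~7.79 M sector writes and 546 block erasures
print(round(nand_time_seconds(7_789_166, 546), 1))   # ~428.7 s (~428.4 s from writes alone)
# Filter algorithm, flash component only: 522,012 sector writes and 34 block erasures
print(round(nand_time_seconds(522_012, 34), 1))      # flash-only portion of the reported total
```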
B. Directions for Future Research
This study assumed that the filter will use DRAM or SRAM.However, such memory types are relatively expensive compared to PRAM, and hence the memory volume must be reduced as much as possible for greater cost efficiency.Therefore, a method that uses PRAM should be considered.PRAM has a slower access speed compared to DRAM or SRAM, and hence an algorithm that uses a two-or four-step pipeline technique must be designed to improve the speed.
In the two-step pipeline, two filters (Filter 1 and Filter 2), each composed of eight connected sectors, operate within the phase-change memory. After the write command is read in Filter 1, the write command is also read in Filter 2. Because differences in the delay time may occur depending on the input command, the filter that finishes its operation first will read the new command and process it according to the algorithm.
To further elaborate, if the domain in a phase-change memory uses a filter, and phase-change memory is used for storage, the filter is converted into the data domain immediately, and the eight sectors that are connected out of the extra domains in the phase-change memory will be used as a new filter.The algorithm described in this paper requires two operations when data are stored in the NAND flash memory or phase-change memory because of data passage through the filter.However, when PRAM is used, the filter is incorporated in the phase-change memory architecture, so only one write operation is needed to store data in the phasechange memory.
Fig. 3. Algorithm 1 Process with Examples (Proceeding Left to Right on the First Line).
ACKNOWLEDGMENT
This work was supported by Basic Science Research through the National Research Foundation of Korea (NRF). This study was also supported by a 2018 Research Grant from Kangwon National University (No. 000000000).
TABLE I. ESTABLISHMENT OF TEST HYPOTHESIS
"year": 2019,
"sha1": "3253d01c7d894f8b5e0c72f6976b51899768a7ee",
"oa_license": "CCBY",
"oa_url": "http://thesai.org/Downloads/Volume10No4/Paper_14-Compound_Mapping_and_Filter_Algorithm.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "3253d01c7d894f8b5e0c72f6976b51899768a7ee",
"s2fieldsofstudy": [
"Engineering",
"Computer Science",
"Materials Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
Best Values for the CP-odd Meson-Nucleon Couplings from Supersymmetry
In the supersymmetric models, the dominant sources of the hadronic flavor-diagonal CP violation at low energy are the theta term and the chromoelectric dipole moments of quarks. Using QCD sum rules, we estimate the preferred range and the best values for the CP-odd meson-nucleon coupling constants induced by these operators. When the theta term is removed by the axion mechanism, the size of the most important isospin-triplet pion-nucleon coupling is estimated to be $\bar g_{\pi NN}^{(1)} = 2\times 10^{-12}\,(\tilde d_u - \tilde d_d)$, where the chromoelectric dipole moments are given in units of $10^{-26}$ cm.
The search for CP violation in flavor-conserving processes is of paramount scientific importance. The suppression of CP-violating effects induced by the complex phase of the Kobayashi-Maskawa matrix allows the use of electric dipole moments (EDMs) of neutrons or heavy atoms as well as T-odd asymmetries in the decays and scattering of baryons as powerful tools for probing new physics beyond the Standard Model.
The wide separation between the energy scale of "new physics" (superpartners, technicolor, etc.) and the characteristic momenta of particles in non-accelerator experiments permits consideration of only the first few terms in the effective CP-odd Lagrangian. In the minimal supersymmetric models only the theta term, the three-gluon operator, and the EDMs and color EDMs of light quarks are important: where $G\tilde G \equiv G^{a}_{\mu\nu}\tilde G^{a}_{\mu\nu}$, $G\sigma \equiv t^{a}G^{a}_{\mu\nu}\sigma^{\mu\nu}$, and $GG\tilde G \equiv f^{abc}G^{a}_{\mu\nu}G^{b}_{\nu\alpha}\tilde G^{c}_{\alpha\mu}$. The coefficients in (1) are generated by the CP violation in the SUSY-breaking sector and evolved down to 1 GeV, which is the borderline of viability of the perturbative quark-gluon description.
In this Letter we present a systematic study of the transition from this Lagrangian to the effective T-odd meson-nucleon interactions which determine the magnitude of the CP-violating nuclear moments and the T-odd asymmetries in nucleon scattering. T-odd nuclear forces are the main source for the EDMs of heavy diamagnetic atoms (see e.q. [1]). The quality of constraints imposed on supersymmetric models from a recently improved measurement of the EDM of the xenon and mercury atoms [2,3] (as well as of future experimental efforts with the EDMs of diamagnetic atoms and the T-odd nucleon scattering [4]) depends crucially on the treatment of QCD and nuclear effects, i.e. on the extraction of limits ond i from the experimental bound on the atomic EDM. The implications of this powerful constraint for the CP violation in the supersymmetric models have been emphasized in Refs. [5,6], and numerically exploited in [7]. The purpose of this work is to give "state-of-the-art" estimates for various T-odd nucleon-meson coupling constants, i.e to find their best values in terms of the coefficients in eq.
(1). This problem is reminiscent of Ref. [8] which estimates various P-odd meson-nucleon couplings in terms of the Fermi constant.
T-odd nuclear forces inside the nucleus can be approximated by a meson exchange with one of the meson-nucleon couplings being T-violating [9]. It is natural to expect that pion exchange dominates in the T-odd channel. The coupling of nucleons with pions can be conveniently parametrized [10,11] as in eq. (2). These couplings are generated by the theta term and by the color EDMs of quarks, $\bar g \equiv \bar g(\bar\theta, \tilde d_i)$, where $\bar\theta = \theta_G + \theta_q$. Couplings that change the isospin by two units can be generated only at the expense of an additional $m_u - m_d$ suppression and are ignored in the present analysis. $\bar g^{(0)}_{\pi NN}(\bar\theta)$ is rather well known [12], as it can be deduced from the size of the $\langle N|\bar u u - \bar d d|N\rangle$ matrix element. For most of the models of CP violation, including minimal SUSY models, $\bar\theta$ has to be removed by the Peccei-Quinn (PQ) symmetry, leaving quark color EDMs as the dominant source for CP-odd nuclear forces. The contribution of the three-gluon operator $GG\tilde G$ to $\bar g$ is additionally suppressed by $m_q$ and can be neglected.
The first step in the calculation of $\bar g^{(0)}_{\pi NN}(\bar\theta, \tilde d_u + \tilde d_d)$ and $\bar g^{(1)}_{\pi NN}(\tilde d_u - \tilde d_d)$ is the reduction of the pion field by means of PCAC [13], Fig. 1a. The smallness of the t-channel pion momentum compared to the characteristic hadronic scale justifies this procedure. The commutator of the zero component of the axial current with the CP-violating operators $O_{CP} = \bar q g_s (G\sigma)\gamma_5 q$ can be easily computed, leading to the matrix elements of the $\bar q g_s (G\sigma) q$ operators over the nucleon state [13]. However, eq. (3) is an incomplete result. A second class of contributions was pointed out in Ref. [6] and in Refs. [14,15] in the context of the neutron EDM problem. They consist of the pion pole diagrams, Fig. 1b, which contribute at the same order of chiral perturbation theory. Indeed, the quantum numbers of the $\bar q g_s (G\sigma) q$ operators allow them to produce zero-momentum $\pi^0$'s from the vacuum. The pion-nucleon scattering amplitude at vanishing pion momentum is proportional to the first power of the quark mass, whereas the pion propagator contains $1/m_q$, so that Fig. 1b and Fig. 1a both contribute at the same $O(\tilde d_q\, m_q^0)$ order.
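For orientation, the soft-pion (PCAC) reduction invoked for the diagrams of type 1a has the schematic form below; the sign convention and the normalization of $f_\pi$ are assumptions of this sketch rather than quotations from the text.

```latex
% Schematic soft-pion reduction for a CP-odd operator O_CP (conventions assumed):
\lim_{q_\pi \to 0}\,
\langle N \pi^{a} |\, O_{CP} \,| N \rangle
 \;=\; -\,\frac{i}{f_\pi}\,
\langle N |\, [\, Q_5^{a},\, O_{CP} \,] \,| N \rangle
 \;+\; \text{pion-pole terms (Fig.~1b)},
\qquad
Q_5^{a} = \int d^3x\; \bar q\,\gamma_0\gamma_5\,\tfrac{\tau^{a}}{2}\, q .
```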
Figure 1: Two classes of diagrams, (a) and (b), contributing to the CP-odd pion-nucleon coupling constant.
Using the low-energy theorems that relate the pion-nucleon scattering amplitude to the matrix elements of $m_q\bar q q$ over the nucleon state, we arrive at the following intermediate result for the $NN\pi^0$ vertex: In this expression, $m_* = m_u m_d/(m_u + m_d)$, and $m_0^2 = \langle 0|\bar q g_s (G\sigma) q|0\rangle/\langle \bar q q\rangle = -(0.8 \pm 0.1)\,{\rm GeV}^2$ [17] parametrizes the strength of the dimension-5 quark-gluon vacuum condensate. In our case, this originates from the $\langle \pi^0|\bar q g_s (G\sigma)\gamma_5 q|0\rangle$ matrix element, and the minus sign is included in the definition of $m_0^2$ for convenience. An alternative way of obtaining the amplitude (4) is to chirally rotate the quark masses to the basis where pions cannot be produced from the vacuum, $\langle \pi^0|\, 2\bar q\, m_q\theta_q \gamma_5 q + \tilde d_q\, \bar q g_s (G\sigma)\gamma_5 q\, |0\rangle = 0$, while keeping the theta term fixed, $\theta_q = {\rm const}$. This eliminates diagrams 1b, but creates an additional contribution to 1a, leading to the same result (4).
When the PQ mechanism is activated, removing the theta term, the minimum of the axion vacuum is shifted from $\bar\theta = 0$ by the color EDM operators [16]. It turns out that the true minimum is such that the square bracket in eq. (4) is zero, so that only the first line survives. This leads to a relatively simple expression for the couplings in terms of matrix elements of the $H_u$ and $H_d$ operators: Previously, using a combination of QCD sum rules and scaling arguments, Ref. [13] estimated that $\langle N|\bar q g_s (G\sigma) q|N\rangle \sim \frac{5}{3}\, m_0^2\, \langle N|\bar q q|N\rangle$. Another analysis [18] finds the similar estimate $\langle N|\bar q g_s (G\sigma) q|N\rangle \sim m_0^2\, \langle N|\bar q q|N\rangle$. Obviously, these estimates are not sufficient to derive a reliable answer for $\bar g^{(0)}_{\pi NN}$ and $\bar g^{(1)}_{\pi NN}$ because of the additional $-m_0^2\langle N|\bar q q|N\rangle$ contribution coming from diagrams 1b. The danger of mutual cancelation between the two contributions was realized in Ref. [6], where the need for a dedicated analysis of $\langle N|H_{u(d)}|N\rangle$ was emphasized. In the rest of this paper we derive the QCD sum rules [19] for the matrix elements of the $H_{u(d)}$ operators. The advantage of this approach is that the operator product expansion (OPE) will contain similar vacuum condensates for both sources, $\bar q g_s (G\sigma) q$ and $-m_0^2\,\bar q q$, which allows us to trace possible cancelations. Following Refs. [15,20], we introduce the generalized nucleon interpolating current, which combines the two Lorentz structures $j_1 = 2\epsilon_{abc}(u_a^T C\gamma_5 d_b)u_c$ and $j_2 = 2\epsilon_{abc}(u_a^T C d_b)\gamma_5 u_c$. We compute the OPE for the correlator of this current in the presence of the CP-odd sources, where $Q^2 = -p^2$, with $p$ the current momentum. We limit our calculation to the Lorentz structure proportional to $\not\! p$ because it is less susceptible to direct instanton contributions and excited resonances than the chirally even structure proportional to 1. Relevant diagrams for this correlator are shown in Fig. 2. After a straightforward calculation, we find that the leading-order term is given by the diagrams 2a-2b. Here we have introduced the combinations $d_+ = \tilde d_u + \tilde d_d$ and $d_- = \tilde d_u - \tilde d_d$. It turns out that the diagrams 2a, where the external source enters through the $\langle 0|\bar q \not\! D q|0\rangle$ structure, give large and opposite-sign contributions for the $g_s\bar q (G\sigma) q$ and $-m_0^2\,\bar q q$ sources, so that their combined effect in $H_q$ is nil. Fortunately, this cancelation does not hold for diagrams 2b, which give (10).
The next-to-leading term corresponds to diagrams 2c, which contribute to the OPE (9) with a logarithm of the infrared cutoff $\Lambda_{IR}$. Here $\langle g_s^2 GG\rangle \simeq 0.4-1\,{\rm GeV}^4$ is the vacuum gluon condensate. The next-to-next-to-leading order contains the vacuum polarizabilities, and the vacuum factorization assumption has been made in (12). The sum rules prescription involves matching the OPE with the phenomenological part, $\Pi_{\rm OPE}(Q^2) = \Pi_{\rm phen}(Q^2)$, where the latter contains double- and single-pole contributions and the continuum. After a Borel transformation of the sum rule $\Pi_{\rm OPE}(Q^2) = \Pi_{\rm phen}(Q^2)$ we obtain the sum rule (15). Here $s$ is the continuum threshold and $E_0 = 1 - e^{-s/M^2}$. $A$ and $B$ parametrize the contribution of excited states and are assumed to be independent of $M$.
It is reasonable to start the numerical treatment from a simple estimate, à la Ioffe [21], which assumes the dominance of the ground state and the LO OPE term, and eliminates λ using the nucleon mass sum rule for the p̸ structure. Separating the different isospin structures, we find expressions in which F_1(β) = (5 − 2β − 3β^2)/(5 + 2β + 5β^2) and F_0(β) = 5(1 − β)^2/(5 + 2β + 5β^2), with F_1(0) = F_0(0) = 1. To get numerical estimates, we choose β = 0, extensively used in lattice simulations. It is well known that the j_1 current has a much better overlap with the nucleon ground state, and λ_1 ≫ λ_2. Substituting M = 1 GeV, we obtain the numerical estimates (18)-(19). In most SUSY models, d̃_{u(d)} = loop factor × M_SUSY^{-2} × a linear combination of m_u and m_d. When combined with ⟨q̄q⟩ from eqs. (18)-(19), this forms m_π^2 f_π^2 times a function of m_u/m_d, thus eliminating a major source of uncertainty in EDM calculations due to the poor knowledge of m_u + m_d [14,15,20]. The estimate (18) is twice smaller than the value of ḡ^(1)_{πNN} used in [6]; eqs. (18)-(19) are otherwise in agreement with [6]. For a more systematic analysis, one has to include NLO and NNLO terms in the OPE. Here we immediately face the problem of the unknown vacuum condensates χ_{S,T}. Even though the vacuum correlators ⟨q̄q, q̄q⟩ can be determined using chiral perturbation theory [22], there is no direct information on ⟨q̄q, q̄ g_s Gσ q⟩ other than that it is likely to be comparable with m_0^2 ⟨q̄q, q̄q⟩. At this point we would like to take advantage of the possibility of choosing β in such a way as to minimize higher-order terms in the OPE. We note that χ_S in (12) is multiplied by 1 + 2β − 3β^2, which becomes 0 at β = −1/3 and β = 1. The choice of β = 1 also suppresses the leading order, while β = −1/3 maximizes it. For the expected size of χ_S [22], χ_S ∼ ±0.16 × m_0^2 ⟨q̄q⟩ f_π^{-4}, we can choose β in a range such that the whole square bracket in front of d̃_− in eq. (12) is zero. This gives a range of interpolating currents around β = −1/3, where we can tune the NNLO terms to zero in the ḡ^(1)_{πNN} channel. Variation of β in this range contributes to an estimate of the uncertainty in our analysis. In the ḡ^(0)_{πNN} channel there is no obvious choice of β that would remove χ_T and leave the leading-order term unsuppressed, so we will choose the same β as for ḡ^(1)_{πNN}. We note that this range is close to β = 0 as used for (18)-(19). One should also worry about the dependence on Λ_IR at NLO. Remarkably, in the range (20), this dependence is softened by a cancelation of the m_0^4 and ⟨g_s^2 GG⟩ terms. The preferred ranges for ḡ^(1)_{πNN} and ḡ^(0)_{πNN} are determined according to the following procedure. We take the OPE side of (15) at the lower point of the usual Borel window, M^2 = 0.8 GeV^2, and vary it through the range of parameters −0.5 ≤ β ≤ 0, 300 MeV ≤ Λ_IR ≤ 500 MeV, 0.7 GeV^2 ≤ |m_0^2| ≤ 0.9 GeV^2, 0.4 GeV^4 ≤ ⟨g_s^2 GG⟩ ≤ 1 GeV^4, 2 GeV^2 ≤ s ≤ 3 GeV^2, 0.8 GeV^6 ≤ (2π)^4 λ^2 ≤ 0.9 GeV^6, and finally −6 GeV^{-1} ≤ χ_T/|⟨q̄q⟩| ≤ 6 GeV^{-1}. On the r.h.s. of (15) we assume the dominance of the double-pole contributions for M^2 = 0.8 GeV^2 and allow for a 50% correction due to the presence of the unknown parameters A and B, thus effectively widening the allowed range for ḡ_{πNN}. As expected, the couplings are most sensitive to the value of m_0^2. The final results are presented in Table 1. Our "best" value for ḡ^(1)_{πNN} is determined by averaging over β and choosing the central values for the condensates, which also suppresses the logarithmic term.
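As a small worked check of the algebra quoted above (our own check, not part of the original derivation), the NNLO coefficient of χ_S factorizes,

    1 + 2\beta - 3\beta^2 = (1 - \beta)(1 + 3\beta),

so it indeed vanishes at β = 1 and β = −1/3. For the isospin factors of the Ioffe-type estimate, F_1(0) = F_0(0) = 1 at the lattice-inspired choice β = 0, while at β = −1/3

    F_1(-\tfrac{1}{3}) = \frac{5 + \tfrac{2}{3} - \tfrac{1}{3}}{5 - \tfrac{2}{3} + \tfrac{5}{9}} = \frac{48/9}{44/9} = \frac{12}{11} \approx 1.09, \qquad
    F_0(-\tfrac{1}{3}) = \frac{5\,(4/3)^2}{5 - \tfrac{2}{3} + \tfrac{5}{9}} = \frac{80/9}{44/9} = \frac{20}{11} \approx 1.82,

consistent with the statement above that this choice does not suppress the leading-order term.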
In order to separate the contribution of ḡ^(1)_{πNN} from the A and B terms, we impose a relation among A, B and Π_LO obtained by requiring the same large-M^2 asymptotic behavior for both sides of (15). The resulting sum rule is fitted numerically and produces a result 1.5 times smaller than the naive estimate (18). For ḡ^(0)_{πNN} the best value cannot be determined, as the OPE side changes sign depending on the value of χ_T.
Also included in this table are the preferred ranges for the CP-odd couplings of nucleons with the η, ρ and ω mesons. The couplings with ρ and ω have the EDM-like structure −(i/2) N̄ (∂_ν V_µ − ∂_µ V_ν) σ^{µν} γ_5 N with properly arranged isospin indices. They can be easily extracted from the calculation of the neutron EDM d_n induced by d̃_{u(d)} [15] after a simple reassignment of charges for the external vector currents. Best values for ḡ_{ρNN} and ḡ_{ωNN} follow from the central values of d_n(d̃_u, d̃_d) given in [15]. Finally, the coupling to the η meson is dominated by the strange quark chromoelectric dipole moment d̃_s in the isospin-singlet channel and by d̃_u − d̃_d in the isospin-triplet channel, and in both cases only the expected range can be quoted.
In conclusion, we have shown that the size of the CP-odd pion-nucleon constant generated by quark chromoelectric dipole moments is given by the matrix element of q̄ g_s Gσ q − m_0^2 q̄q over the nucleon state. We have constructed a QCD sum rule for this matrix element and determined the preferred range and the best value for the ḡ^(1)_{πNN} coupling. The upper part of the preferred range agrees with previous estimates. However, in the interpretation of the experimental limit on the EDM of the mercury atom [3] in terms of limits imposed on new CP-violating physics, a more conservative value ḡ^(1)_{πNN} = 2(d̃_u − d̃_d)/10^{-14} cm should be used. This translates the result of Ref. [3] (see [1,6] for details) into the bound |d̃_u − d̃_d| < 2 × 10^{-26} cm. This constraint provides a sensitive probe of CP violation in the supersymmetric spectrum up to M_SUSY of a few TeV. | 2014-10-01T00:00:00.000Z | 2001-09-06T00:00:00.000 | {
"year": 2001,
"sha1": "afee5984321fe0e05584f7efdfd014ad52982e62",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/hep-ph/0109044",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "fbcb956e4e131f9766fde1b9bcb1d87c58549cd2",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
232404268 | pes2o/s2orc | v3-fos-license | Quantum Self-Supervised Learning
The resurgence of self-supervised learning, whereby a deep learning model generates its own supervisory signal from the data, promises a scalable way to tackle the dramatically increasing size of real-world data sets without human annotation. However, the staggering computational complexity of these methods is such that for state-of-the-art performance, classical hardware requirements represent a significant bottleneck to further progress. Here we take the first steps to understanding whether quantum neural networks could meet the demand for more powerful architectures and test its effectiveness in proof-of-principle hybrid experiments. Interestingly, we observe a numerical advantage for the learning of visual representations using small-scale quantum neural networks over equivalently structured classical networks, even when the quantum circuits are sampled with only 100 shots. Furthermore, we apply our best quantum model to classify unseen images on the ibmq\_paris quantum computer and find that current noisy devices can already achieve equal accuracy to the equivalent classical model on downstream tasks.
I. INTRODUCTION
In the past decade, machine learning has revolutionised scientific analysis, yielding breakthrough results in protein folding [1], black hole imaging [2] and heart disease treatment [3]. At the forefront of this progress is deep learning [4], characterised by the successive application of artificial neural network layers [5,6]. Notably, its use in computer vision has seen the top-1 accuracy on benchmark datasets such as ImageNet soar from 52% [7] to over 90% [8], fuelled by shifts in the underlying techniques used [9,10]. However, what has remained consistent in these top performing models is the use of labelled data to supervise the representation learning process. Whilst effective, the reliance on large quantities of human-provided annotations presents a significant challenge as to whether such approaches will scale into the future. Crucially, modern datasets such as the billions of images uploaded to social media are both vast and unbounded in their subject, quickly making the task of labelling unfeasible. This has reignited interest in an alternative approach, termed self-supervised learning [11], which seeks instead to exploit structure in the data itself as a learning signal. Rather than predict human annotations, a model is trained to perform a proxy task that makes use of attributes of the data that can be inferred without labelling. Furthermore, the proxy task should encourage the model to learn representations that capture useful factors of variation in the visual input, such that solving it ultimately correlates with solving tasks of interest after training. Recent progress in the self-supervised learning of visual data has been driven by the success of contrastive learning [12][13][14][15][16], in which the proxy task is differentiating augmented instances of the same image from all other images. Provided the correct choice of augmentations, this produces a model which is invariant to transformations that do not change the semantic meaning of the image, allowing the learning of recognisable features and patterns in unlabelled datasets. With these techniques, contrastive learning is able to learn visual representations with comparable quality to supervised learning [16,17], without the bottleneck of labelling. However, it is a fundamentally more difficult task than its supervised counterpart [18], and capturing complex correlations between augmented views requires more training data, more training time and larger network capacity [14,15]. Therefore, it is important to consider whether emerging technologies can contribute to the growing requirement for more powerful neural networks [19]. Variational quantum algorithms (VQAs) [20], a near term application of quantum computing, are one such new paradigm. While VQAs have been used to solve many types of optimisation problems [21][22][23][24][25], it is their application to supervised learning [26][27][28], unsupervised learning [29], generative models [30,31] and reinforcement learning [32][33][34] which has led to them being referred to as quantum neural networks (QNNs) [35][36][37]. In theory, the power of these models comes from their access to an exponentially large feature space [27] and ability to represent complex high-dimensional distributions, as formalised by the effective dimension [38]. (* These authors contributed equally to this work.)
Importantly, early evidence suggests that quantum models can achieve an advantage over their classical counterparts, yet these works focus only on the supervised learning of either artificial data [36,39] or simple historical datasets [38]. For example, whilst widely used to study QNNs [40,41], classical supervised learning of MNIST can already achieve 99.3% top-1 accuracy with a twolayer 784-800 width multi-layer perceptron (MLP) [42]. Thus, it is highly unlikely that this problem would practically benefit from a quantum model with access to a > 2 50 dimensional feature space and careful consideration should be made about whether supervised learning is the best setting to try to achieve quantum advantage. By comparison, self-supervised learning of ImageNet with the widely-used ResNet 50 architecture [43] (with maximum channel width 2048) achieves only 76.5% top-1 accuracy [17]. The necessity for large capacity models means that self-supervised learning may be a better setting in which to seek useful quantum advantage through quantum neural networks [38]. In this work, we construct a contrastive learning architecture in which classical and quantum neural networks are trained together. By randomly augmenting each image in the dataset, our hybrid network learns visual representations which groups different views of the same image together in both classical and Hilbert space. Afterwards, we test the quality of the representations by using them to train a linear classifier, which then makes predictions on an unseen test set. We find that our hybrid encoder, constrained in both size and training time by quantum simulation overheads, achieves an average test accuracy of (46.51 ± 1.37)%. In contrast, replacing the QNN with a classical neural network of equivalent width and depth results in a model which obtains (43.49±1.31)% accuracy. Thus, our results provide the first indication that a quantum model may better capture the complex correlations required for self-supervised learning. We then apply the best performing quantum model to classify test images on a real quantum computer. Notably, the accuracy achieved using the ibmq paris [44] device equals the best performing classical model, despite significant device noise. This illustrates the capability of our algorithm for real-world applications using current devices, with flexibility to assign more of the encoding to QNNs as quantum hardware improves. While further research is required to demonstrate scalability, our scheme provides a strong foundation for quantum self-supervised learning. Excitingly, given that contrastive learning has also been successfully applied to non-visual data [15,[45][46][47][48], our work opens the possibility of using QNNs to learn large, unlabelled datasets across a range of disciplines.
A. Contrastive learning architecture
Given an unlabelled dataset, the objective of self-supervised learning is to find low-dimensional encodings of the images which retain important higher-level features. In this work, we train a model to do this by adapting the widely used SimCLR algorithm [17], the steps of which can be seen in Fig. 1 (caption: for a given image x_i, a pair of random augmentations is generated and applied to form a positive pair ⃗x_i^1, ⃗x_i^2; these are transformed by the encoder network, consisting of classical convolutional layers and a quantum or classical representation network, into representation vectors ⃗y_i^1, ⃗y_i^2; the projection head subsequently maps the representations to the vectors ⃗z_i^1, ⃗z_i^2, such that contrastive loss can be applied without inducing loss of information on the encoder). Firstly, for a given image i, the data of which is contained within ⃗x_i, we generate two augmentation functions. Each one randomly crops, rotates, blurs and colour distorts the picture, such that two augmented views ⃗x_i^1, ⃗x_i^2 of the same base image are produced. Importantly, these augmentations still allow the underlying object to remain visually distinguishable. This enables us to assert that these two views contain a recognisable description of the same class, which we call a positive pair. Once this positive pair is generated, each view is passed through a set of neural networks. First, an encoder network is applied, which maps the high-dimensional input data ⃗x_i^α to a representation vector ⃗y_i^α. Then the output of the encoder network is passed to the projection head, a small multi-layer perceptron (MLP) [49] consisting of two fully connected layers. This produces the final representations ⃗z_i^1, ⃗z_i^2. Given a batch of N images, the above process is repeated such that we are left with 2N representations corresponding to 2N augmented views. Looking at all possible pairings of these representations, we have not only positive pairs (e.g., ⃗z_i^1, ⃗z_i^2) but also negative pairs (e.g., ⃗z_i^1, ⃗z_j^2 with j ≠ i), which we cannot definitely say contain the same class. For each training step, all of these possible pairs are used to calculate the normalised temperature-scaled cross entropy loss (NT-Xent) [50] (see Appendix A), which is minimised via stochastic gradient descent [51]. Intuitively, minimising this loss function can be understood as training the network to produce representations in which positive pairs are mapped close together and negative pairs far apart, as measured by their cosine similarity. This idea is a core concept in contrastive learning and many machine learning techniques [52]. Note that whilst it is possible to train the network by applying NT-Xent directly to the output of the encoder, the contrastive loss function is known to induce loss of information on the layer it is applied to [17]. Therefore, the addition of the projection head ensures that the encoder remains sensitive to image characteristics (e.g., colour, orientation) that improve performance on downstream tasks. In order to incorporate QNNs, we modify the encoder to contain both classical and quantum layers working together. The first part of the encoder consists of a convolutional neural network, which in this work is the widely used ResNet-18. This produces a 512-length feature vector, which is already an initial encoding of the augmented image. However, we then extend the encoder with a second network, which we call the representation network as it acts directly on the representation space.
This consists of either a multi-layer QNN of width W , or a classical fully connected MLP with equivalent width and depth. Ideally the representation network would have width W = 512, so as to minimise loss of information. However, we instead look to work in a regime which is realisable on current quantum computers, and as such in this work we use W = 8. This is achieved by following the convolutional network with a single classical layer that compresses the vector, a common technique used to link classical and quantum networks together [53,54]. After the representation network is applied, the resultant encoding is passed onto the previously described projection head. To maintain the structure of the original Sim-CLR architecture, we limit the projection head to be no wider than the width of the QNN.
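To make the encoder structure concrete, the following minimal PyTorch sketch (ours, for illustration; the authors' actual code is linked in the Code Availability section) wires together the pieces described above: a ResNet-18 backbone producing a 512-length feature vector, a single compression layer down to W = 8, a width-8 representation network, and a projection head no wider than W. The classical two-layer MLP shown is the baseline variant; for the hybrid model the representation network would be replaced by a QNN wrapper with the same input and output width. Layer and activation details beyond those stated in the text are assumptions.

    import torch
    import torch.nn as nn
    import torchvision

    W = 8  # width of the representation network (number of qubits in the quantum case)

    class HybridEncoder(nn.Module):
        """ResNet-18 backbone -> compression to W -> representation network."""
        def __init__(self, representation_net):
            super().__init__()
            backbone = torchvision.models.resnet18()
            backbone.fc = nn.Identity()              # keep the 512-dim pooled features
            self.backbone = backbone
            self.compress = nn.Linear(512, W)        # single classical compression layer
            self.representation_net = representation_net

        def forward(self, x):
            return self.representation_net(self.compress(self.backbone(x)))

    # classical baseline: two-layer width-W MLP with bias and Leaky ReLU (144 parameters for W = 8)
    classical_rep_net = nn.Sequential(
        nn.Linear(W, W), nn.LeakyReLU(),
        nn.Linear(W, W), nn.LeakyReLU(),
    )

    # projection head: small two-layer MLP, limited to width W
    projection_head = nn.Sequential(nn.Linear(W, W), nn.ReLU(), nn.Linear(W, W))

    encoder = HybridEncoder(classical_rep_net)
    z = projection_head(encoder(torch.randn(4, 3, 32, 32)))   # four CIFAR-sized images -> (4, W)

Keeping the representation network behind a plain module interface is what allows the classical MLP and the QNN to be swapped without changing the rest of the pipeline.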
B. Quantum representation network
The quantum representation network follows the structure shown in Fig. 2, beginning with a data loading unitary D̂(⃗v). Whilst schemes exist to encode data into quantum circuits with exponential compression [55,56], these require a prohibitively large number of logic gates compared to current hardware capabilities. By compressing the output of the ConvNet as described in section II A, we need only solve the simpler issue of loading a vector ⃗v of length W into equally as many qubits. This is achieved by applying a single-qubit rotation R̂_x to each qubit in the register; here, v_k is the kth element of the input vector ⃗v and is mapped to the range [0, π] to prevent large values wrapping back around the Bloch sphere. Once the input data is loaded, we apply the learning component of our QNN, a parameterised quantum circuit ansatz. In applications where the ansatz is used to solve optimisation problems relating to a physical system (e.g., the simulation of molecules), the circuit structure and choice of logic gates can be inspired by the underlying Hamiltonian [57]. However, without such symmetries to guide our choice, we use a variational ansatz based on recent theoretical findings in expressibility and entangling capability [58]. The ansatz is shown in Fig. 3; its structure is derived from circuit 14 of Ref. [58] and was chosen due to its performance in both these metrics. After the application of several ansatz layers, the network is finished by measuring each qubit to obtain an expectation value in the σ̂_z basis. When evaluated on a real quantum computer or sampling-based simulator, the expectation value is constructed by averaging the sampled eigenvalues over a finite number of shots. If evaluated on a statevector simulator, the expectation value is calculated exactly. The gradients of the QNN output with respect to the trainable parameters and the input parameters are calculated using the parameter shift rule [35,59], which we describe here. Consider an observable Ô measured on the state resulting from the application of M parameterised gates Û_1, Û_2, …, Û_M and M fixed gates V̂_1, V̂_2, …, V̂_M, where the gates Û_i = e^{iθ_i P̂_i/2} are generated by operators P̂_i ∈ {1, σ̂_x, σ̂_y, σ̂_z}^{⊗n} that are tensor products of the Pauli operators. According to the parameter shift rule, the gradient of the expectation value f = ⟨ψ(θ⃗)| Ô |ψ(θ⃗)⟩ with respect to parameter θ_i is given by Eq. (2). For each parameterised gate within the circuit, including both the variational ansatz and data loading unitary, an unbiased estimator for the gradient is calculated by measuring the QNN with the two shifted parameter values given in Eq. (2). Once the QNN gradients have been calculated, we combine them with the gradients of the classical components to obtain gradients of the loss function with respect to all trainable quantum and classical parameters via backpropagation [60]. In this way, the QNN is trained simultaneously with the classical networks, and the quality of the gradients produced on quantum hardware plays a crucial role in the training ability of the whole network.
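As a concrete illustration of the two ingredients described above — the R_x data-loading layer and the parameter-shift gradient — the following minimal sketch (ours, not the authors' implementation) builds the loading circuit in Qiskit and estimates gradients with the ±π/2 shift. The min–max rescaling of the input onto [0, π], the helper names, and the use of an exact statevector expectation are assumptions made for illustration; the shift rule itself holds for any expectation value that depends sinusoidally on a Pauli-generated rotation angle.

    import numpy as np
    from qiskit import QuantumCircuit
    from qiskit.quantum_info import Statevector

    def loading_circuit(angles):
        """One R_x rotation per qubit; 'angles' are the already-rescaled loading angles."""
        qc = QuantumCircuit(len(angles))
        for k, a in enumerate(angles):
            qc.rx(a, k)
        return qc

    def rescale(v):
        """Assumed min-max map of the compressed feature vector onto [0, pi]."""
        return np.pi * (v - v.min()) / (v.max() - v.min() + 1e-12)

    def expectation_z(qc, qubit):
        """Exact <sigma_z> of one qubit from the statevector (the infinite-shot limit)."""
        p0, p1 = Statevector(qc).probabilities([qubit])
        return p0 - p1

    def parameter_shift_grad(expectation, theta, shift=np.pi / 2):
        """df/dtheta_k = (f(theta + pi/2 e_k) - f(theta - pi/2 e_k)) / 2 for Pauli-generated rotations."""
        grad = np.zeros_like(theta)
        for k in range(len(theta)):
            plus, minus = theta.copy(), theta.copy()
            plus[k] += shift
            minus[k] -= shift
            grad[k] = 0.5 * (expectation(plus) - expectation(minus))
        return grad

    # toy check: <Z> on qubit 0 after R_x(theta_0) is cos(theta_0), so the gradient is -sin(theta_0)
    angles = rescale(np.array([0.3, -1.2, 0.7, 2.1]))
    f = lambda a: expectation_z(loading_circuit(a), 0)
    print(parameter_shift_grad(f, angles), -np.sin(angles[0]))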
A. Training
To examine whether the proposed architecture can successfully train, we apply it to the CIFAR-10 dataset [61].
In this preliminary experiment we restrict the dataset to the first two classes, leaving 10,000 32×32 colour images containing either an aeroplane or an automobile. We also train this initial model without a projection head, since it is not being used for classification later. The quantum representation network here is a simulated two-layer QNN and is trained together with the classical components from scratch by integrating [62] the Qiskit [63] and PyTorch [64] frameworks. The full list of training hyperparameters can be found in Appendix B. Fig. 4 shows the results of several key metrics after training for 100 batches. Firstly, we record the loss after each batch, the minimisation of which represents the ability to produce representations in the classical W-dimensional space whereby positive pairs have high similarity. Our results show that the loss decreases from 9.13 to 5.16 over the course of training, indicating that our model is able to learn. Importantly, since the quantum and classical parameters are trained together, this shows that information is successfully passed both forwards and backwards between these different network paradigms. Secondly, we log the Hilbert-Schmidt distance (D_HS), a metric that has been applied in quantum machine learning previously to study data embedding in Hilbert space [54]. Here, we use it to track the separation between our pseudo classes in the 2^W-dimensional quantum state space while optimising the classical loss function. For a given positive pair ⃗x_i^1, ⃗x_i^2 we form the statistical ensembles ρ_i and σ_i, where |ψ_i^α⟩ is the statevector produced by the hybrid encoder given augmented view ⃗x_i^α. The Hilbert-Schmidt distance is then given by D_HS,i = tr[(ρ_i − σ_i)^2] (Eq. (4)). We repeat this for each positive pair in the batch and record the mean, D̄_HS = (1/N) Σ_i D_HS,i. Focusing on the inset of Fig. 4, we see in the upper-left panel that D̄_HS increases consistently across the range of training, indicating that the QNN successfully learns to separate positive and negative pairs in Hilbert space. Expanding out the quadratic in Eq. (4), we can break down the metric into the so-called purity terms tr(ρ^2) and tr(σ^2), which are measures of the intra-cluster overlaps, and the term tr(ρσ), which is the inter-cluster overlap. Looking at the upper-right panel, we see that the average positive pair clustering tr(ρ^2) increases rapidly at the start of training, before steadying at a value around 0.85. This demonstrates one mechanism by which D̄_HS increases, through the QNN producing representations which group positive pairs close together in Hilbert space. The bottom panels of Fig. 4 show the average negative pair clustering tr(σ^2) and the average negative-positive pair overlap tr(ρσ), which decrease consistently throughout training. This demonstrates a second behaviour, whereby the QNN produces representations in which negative pairs are well separated. We note that these two values are very similar, which occurs in our self-supervised learning algorithm because of both the need to average over all positive pairs and the fixed size of ρ_i. Thus, in the limit N → ∞ the ensemble σ_i contains the entire batch, and both metrics are effectively measuring the clustering of all data points.
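For readers who want to reproduce the metric, a minimal numpy sketch of the Hilbert-Schmidt distance as used here is given below (an illustration, not the authors' code): the ensembles are density matrices built from the encoder statevectors, and the quadratic expands into exactly the purity and overlap terms tr(ρ²), tr(σ²) and tr(ρσ) discussed in the text. Building ρ_i from the two views of image i and σ_i from the remaining views in the batch, with equal weights, is our assumption about the precise construction.

    import numpy as np

    def density_matrix(statevectors):
        """Equal-weight mixed state rho = (1/m) sum_j |psi_j><psi_j| from m statevectors."""
        d = len(statevectors[0])
        rho = np.zeros((d, d), dtype=complex)
        for psi in statevectors:
            rho += np.outer(psi, psi.conj())
        return rho / len(statevectors)

    def hilbert_schmidt_distance(rho, sigma):
        """D_HS = tr[(rho - sigma)^2] = tr(rho^2) + tr(sigma^2) - 2 tr(rho sigma)."""
        diff = rho - sigma
        return float(np.real(np.trace(diff @ diff)))

    # usage for one positive pair i: psi_1, psi_2 are the two encoded views of image i,
    # negatives is the list of encoded views of all other images in the batch:
    #   rho_i = density_matrix([psi_1, psi_2]); sigma_i = density_matrix(negatives)
    #   D_i = hilbert_schmidt_distance(rho_i, sigma_i)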
Overall, Fig. 4 shows that the quantum component of the encoder contributes to the overall learning process, despite the network's parameters being optimised explicitly in a classical space. It is notable that the training time presented here is significantly less than classical benchmarks, which would typically be 100s of epochs. Due to its technological infancy, executing quantum circuits on real or simulated hardware is computationally expensive. Thus, the 1-2 epochs of training used in this work represents the limit of our current experiment, although we expect this to improve dramatically in the coming years with the release of GPU-enhanced simulators [65]. This also justifies our choice of dataset, since CIFAR-10 is both a modern relevant dataset [66][67][68] yet contains few enough images that we can complete at least one epoch.
B. Linear probing
Once training is complete, we require a way to test the quality of the image representations learnt by the encoder. Specifically, a good encoding will produce representations whereby different classes are linearly separable in the representation space [69]. Therefore, we numerically test the encoder using the established linear evaluation protocol [69], in which a linear classifier is trained on the output of the encoder network, whilst the encoder is frozen to stop it training any further. Once this linear probe experiment has trained for 100 epochs, we apply the whole network to unseen test data and record the classification accuracy.
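The linear evaluation protocol amounts to training a linear classifier on frozen representations and reporting its accuracy on held-out data. The paper trains this probe for 100 epochs; the sketch below uses scikit-learn's logistic regression as a stand-in, purely to illustrate the protocol, and the random arrays are placeholders for frozen-encoder outputs (width W = 8) and class labels.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    train_reprs, train_labels = rng.normal(size=(500, 8)), rng.integers(0, 5, 500)
    test_reprs, test_labels = rng.normal(size=(100, 8)), rng.integers(0, 5, 100)

    # the encoder stays frozen; only this linear classifier is fitted on its outputs
    probe = LogisticRegression(max_iter=1000).fit(train_reprs, train_labels)
    print("linear probe accuracy:", probe.score(test_reprs, test_labels))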
C. Quantum and classical results on the simulator
We repeat training, this time with the first five classes of CIFAR-10 and a projection head. We train models with three different types of representation networks: a classical MLP with bias and Leaky ReLU activation functions after each layer, a quantum network trained on a statevector simulator, and a quantum network trained on a sampling-based simulator. We choose the representation networks to be of width W = 8 in order to minimise the simulation overhead, whilst still being in a compression regime where training is stable (see Appendix C). Quantitatively, this means our two-layer classical and quantum representation networks have 144 and 32 learnable parameters respectively. Fig. 5 shows the linear probe accuracy at checkpoints across 176 batches of contrastive training. We find that when the quantum circuits are evaluated using a statevector simulator, the quantum representation network produces higher average accuracy on the test set than the equivalent classical network at all points probed throughout training, and is separated by more than one standard deviation for over half of these. The highest accuracy is obtained at the end of training, where the quantum model achieves an accuracy of (46.51 ± 1.37)% compared to (43.49 ± 1.31)% for the classical model. In these results, the confidence interval corresponds to one standard deviation on the mean of six independently trained models. Furthermore, we also find that this numerical advantage holds for a range of smaller width models and is highly dependent on the correct choice of ansatz, more details of which are given in Appendices D and E. Subsequently, we explore whether using a finite number of shots limits this advantage. We train another quantum model on a simulator where the expectation values of measured qubits are sampled from 100 shots, both in the forward pass (generating the representations) as well as the backwards pass (calculating gradients). We find that beyond the first batch, the average accuracy of this model is still above what is achieved by the classical representation network, reaching (46.34 ± 2.07)% by the end of training. Significantly, this matches the performance of the statevector simulator, which represents the limit of infinite shots, demonstrating resilience of our scheme to shot noise. However, we note that the additional uncertainty introduced by the sampling does manifest as a larger standard deviation between repeated runs, compromising the consistency of the advantage.
FIG. 6. Confusion matrix from classifying 900 images using the best performing (a) classical model evaluated on a classical computer, (b) quantum model evaluated on a real quantum computer with 100 shots per circuit. For a given true label (rows) and predicted label (columns), the number in each box shows the total number of times that prediction was made.
D. Real device experiments
In section III C, we showed that a numerical advantage can be achieved for self-supervised learning with a quantum representation network, even when sampling the quantum circuits with only 100 shots. However, it does not follow that such an improvement can necessarily be realised on current quantum devices. The biggest barrier to this is the complex noise present on quantum hardware, a product of both the finite lifetime that qubits can be held in coherent states for and imperfections in the application of logic gates. To this end, we test the ability of real devices to accurately prepare representations produced by a pretrained quantum model and how this changes downstream accuracy on the test set. We construct a linear probe experiment with a quantum representation network and load in weights from the best performing pretrained model in which circuits were evaluated with 100 shots. Freezing all of the layers so that the entire network no longer trains, we repeat classification of images from the test set, however this time the circuits are executed on IBM's 27-qubit ibmq paris quantum computer. To reduce the number of gates, particularly SWAP operations caused by a mismatch between the ansatz and physical qubit connectivities, the circuits are recompiled using incremental structural learning [70] before execution, the details of which can be found in Appendix F. Fig. 6a shows the result of classifying 900 images randomly sampled from the test set, using the best performing classical model and evaluated on a classical computer. Fig. 6b shows the result when classifying the same images using the best performing 100-shot quantum model, evaluated on ibmq paris. Overall, the classical and quantum models achieve an accuracy of 47.27% and 47.00% respectively. Excitingly, this demonstrates that in this experiment, error induced by noise on the quantum computer is able to be offset by the enhanced theoretical performance of quantum neural networks, provided the circuit depth is reduced with recompilation techniques. Furthermore, in both setups the most correctly predicted class was aeroplanes (71.1% and 71.7%) whilst the most incorrectly predicted class was birds (15.9% and 20.5%), both of which the quantum model performed better on. We propose that birds and deer were most likely to be mistaken with one another due to the images sharing a common background of the outdoor natural environment.
IV. CONCLUSION
In this work, we propose a hybrid quantum-classical architecture for self-supervised learning and demonstrate a numerical advantage in the learning of visual representations using small-scale QNNs. We train quantum and classical neural networks together, such that encodings are learnt that maximise the similarity of augmented views of the same image in the representation space, as well as implicitly in Hilbert space. After training is complete, we determine the quality of the embedding by tasking a linear probe to classify images from different classes. We find that an encoder with a QNN acting in the representation space achieves higher average test set accuracy than one in which the QNN is replaced by a classical neural network with equivalent width and depth, even when evaluating quantum circuits with only 100 shots. We note that although making such a comparison has been established in previous works [38], how to fairly compare quantum and classical neural networks still remains a significant open question. We then apply our best performing pretrained classical and quantum models to downstream classification, whereby the quantum circuits were evaluated on a real quantum computer. The observation of a quantum predictive signal with equivalent accuracy to that of the classical model, despite the complex noise present on current quantum devices, is representative of the potential practical benefit of our setup. If recent progress in superconducting qubit hardware continues [71][72][73], it is possible that QNNs running on real devices will outperform equally sized classical neural networks in the near future in this experiment. One advantage of the hybrid approach taken in this paper is the resulting flexibility in how much of the encoder is quantum or classical. In fact, there now exist numerous software solutions for producing and testing such hybrid architectures [63,74,75]. As the quality and size of quantum hardware improves, our scheme allows classical capacity to be substituted for quantum, eventually replacing ResNet entirely. By optimising directly for the Hilbert-Schmidt distance, it is also possible with a fully quantum encoder to apply our setup to problems in which the data is itself quantum [76][77][78][79][80]. Promisingly, in this regime it may prove that the advantage observed in this work is further extended, given the ability of a quantum model to inherently exploit the dimensionality of the input [81]. Recently developed data sets consisting of entangled quantum states [82,83] serve as an obvious target for such work. In this case contrastive augmentations could be quantum operations that change the state but conserve the properties of interest, for example LOCC operations that do not affect the amount of entanglement in the system. With classical contrastive learning having been applied to non-visual problems in biology [47] and chemistry [48], our work provides a strong foundation for applying quantum self-supervised learning to fundamentally quantum problems in the natural sciences [84].
An open question remains as to whether a general quantum advantage for self-supervised learning may prove possible [71,85], in which no classical computer of any size can produce accuracies equal to that of a quantum model. In [38,86], the authors define effective-dimension, a metric measuring the expressive power of classical and quantum neural networks. In general, quantum models are able to achieve a higher effective dimension, and therefore capture a larger space of functions, than classical models with comparable width and number of parameters. Although it does not necessarily increase monotonically, the effective dimension of quantum models can remain larger than classical as the model and data set size are increased. Such behaviour indicates that the expressive power available to QNNs may allow for an advantage over classical neural networks, particularly for a problem such as self-supervised learning where highly expressive, large capacity models are believed to be particularly important for achieving highly accurate predictions [17].
Achieving experimental quantum advantage would require, as a minimum, a QNN with width greater than 60 qubits, such that the dimensionality of the accessible feature space becomes classically intractable. Furthermore, the QNNs would need to be trained on real devices, which remains a challenge due to short qubit lifetimes and low gate fidelity. Therefore, considerable research still remains into the scalability of our scheme, which was only demonstrated at the small sizes feasible on current quantum hardware. Promisingly, however, our method can be adapted to use different QNN structures that avoid the scaling issue of barren plateaus [87][88][89], which could be tested already using more efficient simulators [90]. Looking forward, the rate at which quantum hardware continues to progress provides the possibility of representing intractable distributions using QNNs. In this way, quantum computers may yet push self-supervised learning beyond the performance afforded by classical hardware.
V. DATA AVAILABILITY Data used to generate the above figures are available upon request from the authors.
VI. CODE AVAILABILITY
The code used to train the models described in this work can be found at https://github.com/bjader/QSSL. The code used to incorporate and train Qiskit quantum neural networks into PyTorch can be found at https:// github.com/bjader/quantum-neural-network and is required to build quantum representation networks.
Appendix A: Contrastive Loss Function
Here we formally define the process of contrastive learning. Let us have an augmentation function ξ(·; a). This augmentation combines cropping, rotation, Gaussian blurring and colour distortion of the image, and the amount by which each of these operations is performed is governed by a list of continuous random variables a. Each time we apply an augmentation, we randomly sample a from a distribution A, such that applications of the augmentation function are independent from one another. For a particular image ⃗x_i, we now have a pair of views P_i = {ξ(⃗x_i; a_1), ξ(⃗x_i; a_2) | a_1, a_2 ∼ A} which came from the same base image. We call this a positive pair. We define the negative pairs as the set of all augmented versions of different images. During contrastive training, all augmented views within the batch are passed through our architecture. The encoder network f(·): ⃗x → ⃗y and projection head g(·): ⃗y → ⃗z are applied to give outputs ⃗z_i^α = g(f(ξ(⃗x_i; a_α))), a_α ∼ A, for each of the two arms (labelled by α = 1, 2). For simplicity, we define the NT-Xent loss for each input image separately, labelled by index i. The overall loss function corresponds to the sum of these terms over all input images (and correspondingly defined positive and negative pairs). A single term in the loss is defined for each i = 1, 2, …, N labelling the input image, with α, β = 1, 2 labelling the (arbitrary) distinction between the first and second augmentation making up the positive pair. The overall loss L is given by the sum over each i.
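Because the explicit NT-Xent expression is not reproduced in this text, the sketch below illustrates the loss exactly as described: augmented views are pooled, similarities are temperature-scaled cosine similarities, self-similarities are excluded, and the positive for each view is the other augmentation of the same image. It also shows one way to compose the crop/rotate/blur/colour-distortion augmentation with torchvision. This is a standard illustrative implementation, not the authors' code; the temperature value and the augmentation parameters are assumptions.

    import torch
    import torch.nn.functional as F
    from torchvision import transforms

    # xi(.; a): random crop, rotation, blur and colour distortion (parameter values assumed)
    augment = transforms.Compose([
        transforms.RandomResizedCrop(32),
        transforms.RandomRotation(15),
        transforms.ColorJitter(0.4, 0.4, 0.4, 0.1),
        transforms.GaussianBlur(kernel_size=3),
    ])

    def nt_xent(z1, z2, temperature=0.5):
        """z1, z2: (N, d) projection-head outputs of the two augmented views of a batch."""
        N = z1.size(0)
        z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)            # 2N unit vectors
        sim = z @ z.t() / temperature                                  # scaled cosine similarities
        sim = sim.masked_fill(torch.eye(2 * N, dtype=torch.bool), float("-inf"))  # drop self-pairs
        targets = torch.cat([torch.arange(N) + N, torch.arange(N)])   # index of each view's positive
        return F.cross_entropy(sim, targets)

    # usage: loss = nt_xent(g(f(view1)), g(f(view2))) for a batch of N base images
    loss = nt_xent(torch.randn(16, 8), torch.randn(16, 8))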
Appendix C: Classical width ablation
In order to incorporate QNNs that can be run on current quantum devices into contrastive learning, a compression of the feature vector is required after ConvNet. Since this would not be necessary in a purely classical setting, its impact on final performance is not well understood. To this end, we perform a study of the accuracy achieved by models with different representation network widths. We do this with classical representation networks to remove the quantum specific considerations of statistical noise and optimal circuit architecture, focusing purely on width. The classical representation network is a twolayer, width W MLP, with Leaky ReLu activation functions after each layer and with bias. Each model is trained on the first five classes of the CIFAR-10 dataset and a linear probe experiment evaluates the performance at regular checkpoints during training. Fig. 7 shows the result comparing models with different representation network widths, including the W = 512 case which corresponds to no compression. Starting from W = 2, we see that increasing the width of the representation network improves the test accuracy. Furthermore, we find that W = 8 is the lowest width network in which test accuracy retains the same qualitative behaviour as the uncompressed network. Therefore, in our proof-of-principle quantum experiments, we use an eight width representation network corresponding to eight qubits.
Appendix D: Quantum and classical results at different widths In section III C we demonstrate that for an architecture with a W = 8 representation network, using a QNN to form a hybrid model leads to higher performance in linear probing experiments than the purely classical case. Here we supplement this with additional experiments for the W = 2, 4 and 6 cases alongside an equally sized classical comparison for each one. The same problem setup and training parameters are used as in Fig. 5. Fig. 8 shows the accuracy achieved by these additional models in linear probing experiments at intervals across training, as well as the W = 8 results from the main text. Focusing on the circle markers representing the quantum models, we see that the accuracy improves consistently when increasing the QNN width. This matches the behaviour of the classical models, represented in this figure by the crossed markers, illustrating that our intuition for how compression of the network affects performance can be applied to both the quantum and classical regimes. Secondly, we compare between quantum and classical models of the same width, as shown by the lines of the same colour. Here we see that for the new cases of W = 2, 4, 6, there is a numerical improvement in the average accuracy achieved across all training checkpoints sampled, consistent with the W = 8 case. Whilst these are still small models, they provide further impetus to consider whether this improvement would remain for models with width W > 8, eventually competing directly with the uncompressed SimCLR algorithm at W = 512. Looking forward, testing this hypothesis towards the W = 60 qubit range may be possible with more efficient simulators [92,93] as well as by employing training shortcuts such as calculating gradients directly with the quantum state rather than using the parameter shift rule [40].
Appendix E: Performance of alternative ansatz
In section III all QNNs are constructed using the variational ansatz seen in Fig. 3, which connects the qubits in a ring of parameterised controlled rotation gates. Here we introduce a second ansatz, as seen in Fig. 9, which is different in that it connects all of the qubits together and only has single qubit parameterised gates. Notably, this ansatz was recently shown to exhibit a larger effective dimension when applied to supervised learning than equivalent classical networks [38]. Therefore, we test whether this circuit structure is also a good candidate for improved performance in a self-supervised setting. We train a model with a quantum representation network structured as the new all-to-all ansatz, simulated on a statevector simulator. The dataset consists of the first five classes of CIFAR-10 and the model is trained with a projection head. Importantly, for a fair comparison, we apply three layers of the all-to-all ansatz, so that it has the same number of learnable parameters as two layers of the ring ansatz. The result of the linear probe experiments can be seen in Fig. 10, along with the previous models for comparison. We see that for the all-to-all ansatz, test accuracy is no higher than the classical model beyond the statistical variance of repeating training with different initial parameters, and below the ring ansatz. Indeed, by the end of training, the all-toall ansatz achieves a final accuracy of (43.46 ± 1.68)%, which is similar to the classical model. Thus, we show that achieving an advantage using quantum neural networks in contrastive learning is highly dependent on the correct choice of quantum circuit structure.
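To give a concrete picture of the two circuit families compared in this appendix, the sketch below builds simplified stand-ins in Qiskit: a "ring" layer with parameterised controlled rotations between neighbouring qubits (in the spirit of Fig. 3, derived from circuit 14 of Ref. [58]) and an "all-to-all" layer with single-qubit parameterised rotations followed by unparameterised entangling gates between every pair of qubits (in the spirit of Fig. 9 and Ref. [38]). The exact gate types, orderings and parameter counts of the published circuits are not reproduced; these are illustrative approximations only.

    from qiskit import QuantumCircuit
    from qiskit.circuit import ParameterVector

    def ring_layer(n_qubits, layer):
        """Single-qubit rotations, then a ring of parameterised controlled rotations."""
        theta = ParameterVector(f"ring{layer}", 2 * n_qubits)
        qc = QuantumCircuit(n_qubits)
        for q in range(n_qubits):
            qc.ry(theta[q], q)
        for q in range(n_qubits):
            qc.crx(theta[n_qubits + q], q, (q + 1) % n_qubits)
        return qc

    def all_to_all_layer(n_qubits, layer):
        """Single-qubit parameterised rotations, then fixed CNOTs between every qubit pair."""
        theta = ParameterVector(f"a2a{layer}", n_qubits)
        qc = QuantumCircuit(n_qubits)
        for q in range(n_qubits):
            qc.ry(theta[q], q)
        for c in range(n_qubits):
            for t in range(c + 1, n_qubits):
                qc.cx(c, t)
        return qc

    ring = ring_layer(8, 0)          # 16 parameters per layer in this sketch (two per qubit)
    a2a = all_to_all_layer(8, 0)     # 8 parameters per layer and 28 fixed CNOTs in this sketch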
Appendix F: Recompilation of quantum neural networks
When executing QNNs on the ibmq paris device, translating the ring topology of our variational ansatz to the honeycomb structure that the qubits are physically connected by requires a significant number of SWAP operations. Quantitatively this increases the number of twoqubit gates in the circuit from 16 to 143, which poses a significant challenge to obtaining a predictive signal beyond random noise since the total circuit error scales exponentially with the number of gates. To mitigate this, for each image evaluated we approximately recompile the QNN using incremental structural learning (ISL) [70], adapted so that only two-qubit connections available on the real device can be applied. Using this method, for over half of the executed circuits, an equivalent circuit is found which produces the same statevector with at least 99% overlap using on average 14 CNOT gates. For the re-maining images, we apply ISL once again, but this time without any constraints on the connectivity of the circuit. This produces a shallower equivalent circuit with at least 99% overlap using on average 8 CNOT gates. Although some of these two qubit gates require SWAPs when implemented on the real device, they still represent a significant reduction in the depth of the circuit and total error incurred. | 2021-03-30T01:16:17.702Z | 2021-03-26T00:00:00.000 | {
"year": 2021,
"sha1": "fe280c23eabdf88c0204359a0a42b11f7d78352f",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "fe280c23eabdf88c0204359a0a42b11f7d78352f",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Physics",
"Computer Science"
]
} |
234169083 | pes2o/s2orc | v3-fos-license | Gourd (Lagenaria siceraria (Mol.) Standl) seedling production and transplanting in different containers
INTRODUCTION
Throughout the world, the gourd (Lagenaria siceraria (Mol.) Standl.) crop has a broad range of uses.
For example, this species has food potential (VAN WYK, 2011) and has medicinal properties (CHEN et al., 2008), with an antioxidant effect (MOHAN et al., 2012), and works as an agent to combat stress (VINOD & SHIVAKUMAR, 2012). Santos et al. (2014) found high oil quantity and quality in gourd seeds, comparable in quality to olive oil with respect to omega-type oils. In Brazil, more specifically in the state of Rio Grande do Sul (RS), on small family-type rural properties, gourds are cultivated in order to produce gourd vessels for drinking mate - a traditional gaucho drink. Since 2010, the RS Forestry Program has incentivized actions in order to improve the structure of the gourd production chain, increasing the use of the species and recognition of the importance of the species for the region and for the country. Despite its nutritional and cultural importance, the gourd is still underutilized within the exploitation of horticultural crops. Thus, we have the challenge of inserting the gourd in new agricultural scenarios. The species is also used as a rootstock for watermelon [Citrullus lanatus (Thunb.) Matsum. & Nakai] growing, with eight varieties registered in China (LEE et al., 2010), being resistant to fusarium wilt (Fusarium oxysporum f. sp. lagenariae, FOLag) (LOUWS et al., 2010) and tolerant of waterlogged soils (YETISIR et al., 2006) and of high salinity conditions (HUANG et al., 2013). The species also stands out as an ornamental plant, being found in residential gardens in Africa (VAN WYK, 2011) and in the Orient (TRINH et al., 2003), and has landscaping potential for use on walls, roofs and balconies because individual plants grow vertically, without competing for occupation of the soil by building constructions, a characteristic which is currently greatly appreciated in landscaping (GUILLAUIC, 2010). Since it is a rustic species, it may be used more successfully in contemporary landscaping projects.
Traditionally, gourd propagation occurs sexually, with seeds extracted early from fruit selected from the crop itself and used for the following crop (BISOGNIN & STORCK, 2000). Sowing occurs in the spring and summer seasons (i.e., August to October) and under the conventional system, with high soil disturbance (use of plow and disc harrow, for example). Under this combination of factors, the gourd crop encounters several adversities, such as excess rainfall and low temperature (SANTOS et al., 2010).
Thus, adopting technologies that minimize these inconveniences and that enhance the gourd productive chain, such as the production of quality seedlings, is a key factor to encourage the use of the species and provide profitability to producers.
As one of the major agronomic challenges is to improve the establishment of crops, the production of seedlings of this Cucurbitaceous in a protected environment is an alternative for crop applications, as well as for horticulture, since it has the advantage of allowing better control of edaphoclimatic conditions at the beginning of the cycle (REGHIN et al., 2007).In horticultural crops, the supply of quality seedlings to producers is important to obtain high production after the establishment of plants in their cultivation environment (CHIOMENTO et al., 2019;COSTA et al., 2020).This quality refers to the plant robustness in the face of abiotic and biotic stresses (ZHAO et al., 2016).However, one difficulty in producing seedlings in containers is to ensure the production of aerial biomass with a limited portion of roots (LEMAIRE, 1995), restricted to a small volume of substrate.Thus, in order to obtain quality seedlings, the size of the container is important since root development is directly affected (REGHIN et al., 2007), and it will affect photosynthesis, chlorophyll content, growth, and nutrient acquisition (NESMITH & DUVAL, 1998), as well as production and aesthetic presentation for commercialization in the case of ornamental use (PINTO et al., 2010).
Therefore, here we investigate the viability of gourd seedling production in different types and sizes of containers, as well as the seedling response to transplanting, with the purpose of contributing to enhance the use of this species in sustainable landscaping, as rootstock and even in direct planting in crops. Seed evaluation followed what the Rules for Seed Analysis (BRAZIL, 1992) indicate for the gourd.
EXPERIMENTAL DESIGN
Experiment #1 consisted of the evaluation of two containers for sowing the gourd: tray cells (TC), with a volume of 143 cm3 per cell (polystyrene tray with 72 cells/plugs), and black plastic bags (PB), with a volume of 339 cm3, arranged in a completely randomized design with 6 replications, each plot consisting of six plants.
To evaluate the transplanting procedure (Experiment #2), a completely randomized design with 4 replications was used, consisting of a bifactorial (2 x 2) arrangement, evaluating the effect of two containers (from sowing to 24 DAE = TC and PB) and two sowing densities in the final pot (1 and 2 seedlings).The four treatments were thus designated: TC2, pots with 2 seedlings coming from sowing in polystyrene tray cells; TC1, with 1 seedling coming from sowing in a polystyrene tray cell; PB2, pots with 2 seedlings coming from sowing in plastic bags; PB1, with only 1 seedling transplanted.Each pot constituted a plot.
PROCEDURES
In both experiments, irrigation was performed manually every two days. The internal temperature of the protected environment was not monitored and, even during the winter, there were no losses to the frosts that occur outdoors in this period. To carry out the experiments, no synthetic chemical product was used, not even soluble chemical fertilizer, and no plant health treatment recommended in organic production was necessary.
Manual sowing was performed on July (winter) 14, and each TC or PB container received 2 seeds at a depth of 1.5 cm. From 14 days after sowing (DAS), as of the first seedling to emerge (which occurred on July 28), the emergence dates of each seed were registered for calculation of the emergence percentage. As soon as the second seedling in each container (TC or PB) emerged, thinning was carried out so as to leave only the most vigorous seedling. At 24 days after emergence (DAE), part of the seedlings of each plot (3 plants) were collected for analysis. The attributes we evaluated regarding the shoot of the seedlings were plant height (H, cm), with a caliper rule, fresh mass (FMS, g) and dry mass (DMS, g). Regarding the root system of the seedlings, we also evaluated the fresh mass (FMR, g) and dry mass (DMR, g). Evaluation of fresh mass (shoot and root) consisted of quantification of the weight of recently collected seedlings, washed and dried with paper toweling, using a precision balance. These samples were placed in a laboratory oven at 105 °C for 24 h and then quantification of the dry mass (shoot and root) was carried out with a balance (0.01 g precision). After that, we determined the total fresh (TFM, g) and dry (TDM, g) masses of the seedlings. The remainder of the seedlings of the plot were used in transplanting to pots, thus generating Experiment #2.
Experiment #2 began on August 20, the ending date of Experiment #1, at which time the height of all the plants coming from the two types of containers was measured through the use of a caliper rule, and the seedlings were transplanted to larger black plastic pots with a volume of 15,168.5 cm3 (Figure 1).
Removal of the plants from the plastic bags and the trays and transplanting to pots was performed manually, without damage to the earth around the roots.The difficulty experienced in removing the plants produced in the polystyrene tray cells is noteworthy.
9.6 mg.dm-3; boron = 0.7 mg.dm-3. To evaluate the survival and growth response of the gourd to transplanting over the 37 days after transplanting (DAT), in the period from August 20 to September (spring) 26, the percentage of viable live plants and the plant height (H, cm) were evaluated weekly, at 8, 15, 22, 30 and 37 DAT, with a measuring tape, measuring the length from the root collar to the tip of the main stem.
STATISTICAL ANALYSIS
The data obtained from the variables analyzed in Experiment #1 were subjected to analysis of variance (Anova) and the t test. In Experiment #2, over the time periods, the data obtained from the variables were subjected to regression analysis and, for each time period, the Tukey test was applied when Anova showed significance at 5% probability of error.
In relation to production of fresh mass and dry mass of the root at 24 DAE, the container of greater volume (PB) exhibited seedlings with values greater than those produced in tray cells (Table 1). The fresh mass and dry mass of the above-ground part and of the whole plant (Table 1) showed the same trend as the root biomass; in other words, lower production in the smaller containers. Table 1 footnotes: PE = percentage of emergence; H = shoot height; FMR = fresh mass of root; FMS = fresh mass of shoot; TFM = total fresh mass; DMR = dry mass of root; DMS = dry mass of shoot; TDM = total dry mass; CV = coefficient of variation.
EXPERIMENT #2: VIABILITY OF TRANSPLANTED GOURD SEEDLINGS
Due to the occurrence of interaction between containers and time periods, the development of the seedlings coming from tray cells and plastic bags when transplanted to pots are shown in Table 2. Comparing the time periods, we observed that on the transplanting date and at the first evaluation at 8 days after transplanting (DAT) (August 20 and 28, respectively) there was no significant difference among the types of containers in relation to the plant height, since this was an establishment phase for the seedlings after transplantation.
Table 2. Plant height (cm) and percentage of viable plants (%) at 37 days after transplanting (DAT) of gourd (L. siceraria) seedlings when transplanted to pots (15,168.5 cm3).
At 15 DAT, plant height in the PB2 treatment was greater only in comparison to treatment TC1. At 22 DAT, the PB2 treatment, without differing from PB1, exhibited greater height only in comparison to TC2, and at 30 DAT it did not differ from PB1 and was greater than the others. The development of these seedlings over the 37 days of growth, when transplanted to pots, follows regression equations with quadratic behavior (Table 3).
DISCUSSION
The data presented in our study concern the gourd seedling production process as an alternative to using seed directly in the crop system. The study aimed to verify whether the production and transplanting of gourd seedlings in different types and sizes of containers may be an agronomic alternative for the gourd crop. Our results suggest that the seedling stage can be anticipated on the farm, protecting the topsoil more rapidly while maintaining high plant development.
The results of the pre-test to determine the sowing density of gourd in pots showed that the seeds did not exhibit good physiological quality, which increases the risk of losses during the establishment of seedlings (NASCIMENTO, 2005). Bisognin et al. (1999), upon evaluating the physiological quality of gourd seeds extracted from fruits at different times after harvest, observed that maximum germinating power (GP) was obtained when seeds were extracted at 58 days, with 82.8% normal seedlings, and the lowest percentage of non-germinated seeds at 62 days, with only 1.2% of non-germinated seeds. In the gourd, there is a flow of nutrients from the fruit to the seed after harvest, and the fruit must remain in a dry, shaded and well-ventilated place for a period of around 60 days before seed extraction (BISOGNIN et al., 1999). As the germplasm used did not allow greater understanding of the process through which it was extracted, the low physiological quality of the seeds explains the low percentage of emergence (50.68%) obtained afterwards in Experiment #1, conducted in a protected environment, requiring the sowing of two seeds per container to make up for such a low percentage. This is one of the problems in the production of native species, mixed varieties or unconventional garden crops that do not use standardized genetic breeding material, and it therefore constitutes a concern of a branch of horticulture which deserves greater appreciation.
Up to 24 days after emergence (DAE) the seedlings coming from TC were not different from the seedlings produced in PB (Table 1).Bisognin et al. (2004) observed that squash (Cucurbita spp.), watermelon and cucumber show greater rates of increase in leaf area as of the appearance of the first true leaves in relation to the gourd, and that the gourd after attaining equivalence between leaf area and cotyledon area shows a slow increase of leaf area.Since slow growth of leaf area is a characteristic fact of the species, this may also be reflected in plant height and, for that reason, there was no difference in height among the containers studied (Table 1).
Seedling response after transplantation to the final cultivation environment is directly influenced by root system morphology, as the root is a source of hormones responsible for shoot growth (MARCHIORETTO et al., 2020). The restriction of the root system caused by tray cells of restricted volume, as verified in our study (Table 1), causes the taproot of dicot seedlings to take on a matted and spiraled aspect of the secondary roots, making the seedling more susceptible to abiotic stresses (TSAKALDIMI & GANATSAS, 2006). In addition, this restriction of the root system harms development of the above-ground part (PEREIRA & MARTINEZ, 1999). Thus, it is important that nurserymen choose larger containers for seedling production, which will result in plants of better quality during acclimatization and greater viability after transplantation. At 37 DAT, the pot with one plant coming from the plastic bag (PB1) exhibited greater height, without differing from that which contained two plants (PB2), both greater than the other treatments in tray cells (Table 2). The plants that grew least at 30 and 37 DAT were those pots that contained two seedlings coming from tray cells (TC2), indicating that competition in the pot accentuated the deleterious effect of producing this cucurbit sown and initially grown in tray cells. In relation to the percentage of viable plants after transplanting (Table 2), we observed that only the TC2 treatment showed a lower percentage, explained by the low root mass of the seedlings coming from tray cells (Table 1) and by the difficulty of removing these seedlings for transplanting.
The greatest growth of seedlings after transplantation is strongly related to the root system, as nonmatted roots have the potential to grow faster and acquire more water and nutrients, reducing the ecophysiological stresses promoted by the transplanting (GROSSNICKLE & MACDONALD, 2018).In our study, we confirmed that gourd seedlings (produced in larger containers) with higher root biomass (Table 1) showed greater viability after transplantation (Table 2).The survival of transplanted seedlings is correlated to internal nutrient status (LIU et al., 2016).This means that seedlings with an enhanced nutrient content are able to remobilize it to biomass growth and then speed up plant growth in the field (MARCHIORETTO et al., 2020).
Therefore, we confirmed that smaller plants were produced in the smaller containers (TC), and that the seedlings produced in containers of greater volume exhibited better development after transplanting, because these seedlings had greater height and greater root fresh mass (Table 1). Although similar results have been found in cucumbers (SEABRA JÚNIOR et al., 2004), our findings on the viability of transplanting in the gourd crop are unprecedented. The findings of our study can be useful to support the choice of containers by nurserymen so that these professionals can produce better quality seedlings. Thus, it will be possible to minimize the inconveniences caused by the traditional propagation of the gourd crop [soil disturbance and bare soil condition, for example (Figure 2)] and to strengthen the productive chain of this horticultural crop. We emphasize that the production of quality seedlings is a key factor to encourage the use of the species and to provide profitability to gourd producers, who will certainly choose to acquire more robust seedlings, with higher survival rates and better adaptation of the plants after transplantation in the growing environment. Seedlings with a more developed root system, as seen in those produced in PB containers (Table 1), suffer less from abiotic and biotic stresses after transplantation (GROSSNICKLE, 2005). Finally, these investigations help to fill the gap between the production of gourd seedlings and their viability after plant establishment in the field.
CONCLUSIONS
The present study, considering together the type of container and the number of seedlings transplanted per container, leads to the conclusion that the production of gourd seedlings transplanted at 24 days after emergence is viable, providing an alternative management of the species to direct sowing in the field.
The species accepts transplanting, with more vigorous seedlings when produced in large containers from the beginning of production.In addition to ensuring its use as rootstock in horticulture, the viability of transplanted seedlings thus expands use of the species as multifunctional ornamental in sustainable landscaping and for direct planting in crops.
Gourd seedlings are used as rootstock for watermelon [Citrullus lanatus (Thunb.) Matsum. & Nakai].
Figure 1. Establishment of Experiment #2 with the remaining seedlings of Experiment #1 produced in TC (A) and PB (B). After transplanting to the final cultivation site (C and D), the seedlings were grown in a greenhouse.
Figure 2. Illustrative images of topsoil (~1 m²) after plowing and disc harrowing in the two systems: direct sowing and seedling transplanting. DAE = days after emergence; DAT = days after transplanting. Photos: D. B. Santos. Unpublished images, used for determining the soil cover vegetation presented in Santos et al. (2010).
MATERIAL AND METHODS
This experiment ended on August 20 (winter), 24 days after emergence (DAE) of the seedlings (or 38 days after sowing), based on a study performed by Seabra Júnior et al. (2004), who observed a trend toward interruption of seedling growth of cucumbers (Cucumis sativus L.) produced in a smaller container (34.6 cm³) as of 29 days of age of the plants. At that time, half the plants from each plot
PRE-TEST TO DETERMINE SOWING DENSITY OF GOURD IN POTS
Regarding the pre-test to determine the sowing density of gourd in pots, the results of the germination test (± standard deviation) performed in the laboratory showed, at the first count, 59% (±5.81) normal seedlings and, at the second count, 13% (±3.80) normal seedlings, 2% (±2.37) abnormal seedlings, 19% (±4.14) dead seedlings and 7% (±3.22)
RESULTS
* Values relative to the mean values of height of the two plants.
Table 3. Regression equations and determination coefficients from quantitative data of gourd (L. siceraria) seedlings when transplanted to pots, for each treatment. | 2021-05-11T00:07:04.956Z | 2021-01-01T00:00:00.000 | {
"year": 2021,
"sha1": "47a1dfce37d44f825ecc15df05b8781540c081e6",
"oa_license": null,
"oa_url": "https://www.brazilianjournals.com/index.php/BRJD/article/download/25621/20379",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "b0bc50455d7026ea49109582bf20c01cf2b145dc",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Biology"
]
} |
230622914 | pes2o/s2orc | v3-fos-license | Systematical Analysis of Alpha-active Nuclides
The systematic analysis of alpha-active nuclides is useful both for nuclear technology applications and for nuclear structure studies. The Geiger-Nuttall law, which describes the dependence of the disintegration constant on the range of α-particles, can be deduced from the Gamow theory, which explains the passage of α-particles through the Coulomb barrier by the quantum-mechanical tunneling effect. Ground-to-ground state α-transitions for natural and artificial α-active nuclides were analyzed using the Geiger-Nuttall rule. From a rough analysis, five group-like branches were observed in the dependence of the α-decay half-lives on the α-particle energy. Detailed analysis shows that there are precise linear dependences of the logarithm of the α-decay half-lives on the reciprocal of the square root of the α-particle energy for even-even isotopes of U, Pu and Cm. However, for some even-even isotopes of Po, Ra and Th the regular behaviour of the mass numbers was broken. This non-regularity of the mass numbers on the Geiger-Nuttall line is explained by the nuclear shell model.
Introduction
Alpha decay is the disintegration of a radioactive nucleus that emits an α-particle consisting of two protons and two neutrons. The systematic analysis of alpha-active nuclides is useful both for nuclear technology applications and for nuclear structure studies. For example, alpha decay leads to the accumulation of helium gas in the structural materials of nuclear fission and fusion reactors.
In 1911 Geiger and Nuttall established [1] an empirical law which describes the dependence of the disintegration constant on the range of α-particles. The energy of the outgoing α-particle is usually lower than the potential energy of the daughter nucleus [2]. Although from the viewpoint of classical mechanics it is unclear how the alpha particle can escape from the nuclear potential, the Gamow theory [3,4] can describe the passage of α-particles through the Coulomb potential barrier by the quantum-mechanical tunneling effect.
In this work, the Geiger-Nuttall law was deduced from the Gamow theory, and ground-to-ground state α-decay data were systematically analyzed using this law together with deductions from the nuclear shell model.
Theoretical background
The Geiger and Nuttall law [1] relates the decay constant of a radioactive isotope to the range (and hence the energy) of the α-particle, and can be written as

ln λ = A + B/√E_α ,    (1)

where E_α is the α-particle energy and A and B are constants. Then, the Geiger-Nuttall law can be rewritten as

ln T_1/2 = a/√E_α + b .    (2)

Formula (2) can be deduced from Gamow theory [3-5]. The penetration probability of the α-particle through the potential barrier is determined by [5,6]

P = exp{ −(2/ħ) ∫_R^{R₀} [2 m_α (V(r) − E_α)]^{1/2} dr } ,    (3)

where m_α is the mass of the α-particle, V(r) is the potential energy of the daughter nucleus, and R and R₀ are the inner and outer classical turning points, respectively (Fig. 1). The inner classical turning point, R, can be obtained for a square potential well as the daughter nuclear radius,

R = r₀ A_D^{1/3} ,    (4)

where A_D is the mass number of the daughter nucleus and r₀ = 1.25·10⁻¹³ cm. For the potential energy of the daughter nucleus, V(r), we can use the Coulomb potential as a first approximation,

V(r) = 2Ze²/r ,    (5)

where e is the elementary charge and Z is the proton number of the daughter nucleus. Then, from Fig. 1 the outer classical turning point can be determined from the expression

R₀ = 2Ze²/E_α .    (6)

So, formula (3) can be rewritten in the form

P = exp{ −(2/ħ) (2 m_α)^{1/2} ∫_R^{R₀} (2Ze²/r − E_α)^{1/2} dr } .    (7)

The following simple substitutions are used to calculate the integral in Eq. (7):

x = r/R₀ ,  x₀ = R/R₀ .    (8)

Then, from expression (7) the following formula can be obtained:

P = exp{ −(2/ħ) (2 m_α E_α)^{1/2} R₀ I } ,  I = ∫_{x₀}^{1} (1/x − 1)^{1/2} dx .    (9), (10)

If we use the substitution x = sin²θ, the integral in Eq. (10) is taken as

I = arccos√x₀ − [x₀(1 − x₀)]^{1/2} .    (11)

Here the approximation x₀ ≪ 1 is used, so the integral in (9) is given by

I ≈ π/2 − 2√x₀ .    (12)

Then, from Eqs. (7), (9) and (12) the following formula can be obtained:

P = exp{ −(2/ħ) (2 m_α E_α)^{1/2} R₀ [π/2 − 2(R/R₀)^{1/2}] } .    (13)

Taking into account the α-clustering effect, the disintegration constant for α-decay can be expressed as

λ = φ_α f_α P ,    (14)

where φ_α is the α-clustering factor and f_α is the collision frequency of the α-particle in the potential barrier of the daughter nucleus. In the one-body approximation [7] the α-clustering factor can be assumed to be φ_α = 1. Then, Eq. (14) can be rewritten in the form

λ = f_α P .    (15)

The collision frequency of the α-particle can be obtained as

f_α = v_α/(2R) = (2E_α/m_α)^{1/2}/(2R) .    (16)

From Eqs. (15) and (16) the half-life is given by

T_1/2 = ln 2/λ = ln 2/(f_α P) .    (17)

So, from Eqs. (13) and (17) the following expression can be obtained:

ln T_1/2 = ln(ln 2/f_α) + [2πZe²(2m_α)^{1/2}/ħ]·E_α^{−1/2} − (8/ħ)(m_α Z e² R)^{1/2} ,    (18)

that is,

ln T_1/2 = a/√E_α + b ,    (19)

where

a = 2πZe²(2m_α)^{1/2}/ħ ,    (20)

b = ln(ln 2/f_α) − (8/ħ)(m_α Z e² R)^{1/2} .    (21)

It can be seen that formula (19) is the same as expression (2), which was written directly from the Geiger and Nuttall law (1). It should be noted that the α-particle energy under the logarithm is included in the parameter b, which can be considered almost constant in comparison with the first term a/√E_α. Also, the proton number Z in Eqs. (20) and (21) can be taken as an effective, average value for all considered nuclides. Thus, Eq. (19) will be utilized for the systematic analysis of known experimental data on α-decay.
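Equations (19)-(21) are straightforward to evaluate numerically; the sketch below does so in CGS units within the one-body approximation used above, with the daughter-nucleus parameters and α-particle energy chosen purely for illustration (such an estimate typically agrees with experimental half-lives only to within a few orders of magnitude).

```python
# Rough numerical evaluation of Eqs. (19)-(21) in CGS units. The nuclide
# parameters below (daughter Z, A_D and the alpha-particle energy) are chosen
# only for illustration; the one-body approximation generally reproduces
# experimental half-lives only within a few orders of magnitude.
import math

HBAR    = 1.0546e-27     # erg s
E_CHG   = 4.8032e-10     # esu (elementary charge)
M_ALPHA = 6.6447e-24     # g
MEV     = 1.6022e-6      # erg per MeV
R0      = 1.25e-13       # cm, nuclear radius parameter r0

def half_life_seconds(Z_daughter, A_daughter, E_alpha_MeV):
    E = E_alpha_MeV * MEV
    R = R0 * A_daughter ** (1.0 / 3.0)                       # Eq. (4)
    # Collision frequency of the alpha particle inside the well, Eq. (16)
    f_alpha = math.sqrt(2.0 * E / M_ALPHA) / (2.0 * R)
    # Geiger-Nuttall coefficients, Eqs. (20)-(21)
    a = 2.0 * math.pi * Z_daughter * E_CHG**2 * math.sqrt(2.0 * M_ALPHA) / HBAR
    b = math.log(math.log(2.0) / f_alpha) - (8.0 / HBAR) * math.sqrt(
        M_ALPHA * Z_daughter * E_CHG**2 * R)
    return math.exp(a / math.sqrt(E) + b)                    # Eq. (19)

# Illustrative call (values assumed, not taken from the tables in [8-10]):
print(half_life_seconds(Z_daughter=90, A_daughter=234, E_alpha_MeV=4.2))
```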
Result of analysis and discussion
Decay data of the ground-to-ground state α-transitions for over 450 natural and artificial alpha-active nuclides [8-10], including rare-earth and super-heavy elements, were analyzed using the Geiger-Nuttall law (19). The dependence of the logarithm of the α-decay half-lives, T_1/2 (s), on the reciprocal of the square root of the α-particle energy, E_α (MeV), for the studied isotopes is shown in Fig. 2. From a preliminary and rough analysis, five group-like branches were observed in the dependence of the half-life on the α-particle energy [11]. The detailed analysis shows that there are precise linear dependences of the logarithm of the α-decay half-lives on the reciprocal of the square root of the α-particle energy for even-even isotopes of U, Pu and Cm (Fig. 3). Also, the mass numbers of these isotopes increase regularly along the line corresponding to the Geiger-Nuttall law. At the same time, for some even-even isotopes of Po, Ra and Th such regular behaviour of the dependence of ln T_1/2 versus 1/√E_α was broken (Fig. 4). It can be seen from Fig. 4 that 196,198,208,210Po, 214Ra and 216Th are off the regular behaviour of mass numbers increasing along the Geiger-Nuttall line. REN Zhong-Zhou et al. [12] attempted to explain this effect by the nuclear shell model. For the isotopes 210Po, 214Ra and 216Th the number of neutrons is, indeed, N = 126 (a magic number) and the neutron shell is closed (see Fig. 5). The next energy levels are usually split from the closed shell by an appreciable energy gap. Thus, the sudden breaks of the regular behaviour of the mass numbers for these Po, Ra and Th isotopes can be attributed to the closed neutron shell; similar breaks are also seen for isotopes of Fm and Cf (Fig. 6).
Figure 6. The same as in Fig. 2 for isotopes of the Fm and Cf.
In these cases, the neutron number is N = 154 and the subshell 1i11/2 is closed. In addition, a straight-line relation between ln T_1/2 and 1/√E_α appears for isotonic chains with N = 124, 126, 150 and 152 [12] (see Fig. 7). However, a theoretical explanation of this regularity is, as far as we know, not available.
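The straight-line systematics described above (ln T_1/2 versus 1/√E_α for a given isotopic or isotonic chain) reduce to an ordinary least-squares fit; the sketch below shows such a fit, with placeholder arrays standing in for the tabulated (E_α, T_1/2) values of [8-10].

```python
# Least-squares fit of ln(T1/2) versus 1/sqrt(E_alpha) for one chain of
# nuclides. The arrays below are placeholders; substitute the tabulated
# alpha energies (MeV) and half-lives (s) from [8-10] for the chain studied.
import numpy as np
from scipy.stats import linregress

E_alpha = np.array([4.2, 4.5, 4.8, 5.3])       # MeV, placeholder values
T_half  = np.array([1e17, 1e15, 1e13, 1e9])    # s,   placeholder values

x = 1.0 / np.sqrt(E_alpha)
y = np.log(T_half)

fit = linregress(x, y)
print(f"ln(T1/2) = {fit.slope:.2f} / sqrt(E_alpha) + ({fit.intercept:.2f})")
print(f"correlation coefficient r = {fit.rvalue:.4f}")
```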
Conclusions
1. The Geiger-Nuttall law was deduced from the quantum Gamow theory. The ground-to-ground state α-transitions for ~450 natural and artificial α-active nuclides were analyzed using the Geiger-Nuttall rule. Five group-like branches for the considered nuclides were observed. 2. Precise linear dependences of the logarithm of the α-decay half-lives on the reciprocal of the square root of the α-particle energy were established for even-even isotopes of U, Pu and Cm. The mass numbers of the isotopes increase regularly along the Geiger-Nuttall line. | 2020-12-10T09:04:22.631Z | 2020-12-05T00:00:00.000 | {
"year": 2020,
"sha1": "3d4e29342204192d94b3bf450a1eb7a4f41a3502",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1757-899x/1000/1/012002",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "32665c6c97b43ff7db5d4fd93f6fb59eaa3b8969",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
40598815 | pes2o/s2orc | v3-fos-license | Sensitive Electrochemical Detection of Dopamine With a Nitrogen-doped Graphene Modified Glassy Carbon Electrode
In this paper nitrogen-doped graphene (NG) nanosheets were used as the modifier on the surface of glassy carbon electrode (GCE). The modified electrode (NG/GCE) was further applied to the sensitive detection of dopamine (DA) by voltammetric method. Due to the unique properties of NG such as large surface area and excellent electrocatalytic activity, electrochemical response of DA was greatly enhanced on NG/GCE with a pair of well-defined redox peaks appeared on cyclic voltammogram. Electrochemical behaviors of DA on NG/GCE were carefully investigated with the electrochemical parameters calculated. Under the selected conditions the oxidation peak currents of DA had a good linear relationship with its concentration in the range from 8.0 ×10–7 mol L–1 to 8.0 ×10–4 mol L–1 with a detection limit of 2.55 ×10–7 mol L–1 (3σ). The proposed method was further applied to the DA injection samples determination with satisfactory results.
INTRODUCTION
As a two-dimensional carbon nanomaterial with an atomically thick honeycomb lattice, graphene (GR) has aroused great interest due to its unique properties such as a large surface area, high electrochemical conductivity and good biocompatibility [1,2]. GR can also be used as the matrix for the synthesis of different kinds of composite materials. GR and GR-based composites have been synthesized and used in various fields such as biosensors, lithium-ion batteries, fuel cells and supercapacitors [3,4]. Due to the specific electrochemical characteristics of GR, the presence of GR and its related materials on the electrode surface can provide a highly conductive interface with electrocatalytic activity, which can accelerate the electron transfer rate [5,6]. The applications of GR and its related composites in electrochemistry have been reviewed recently [7,8]. Recently, nitrogen-doped graphene (NG) has been used in the fields of electrochemistry and molecular sensing [9,10]. As an important strategy to tailor the structure and properties of GR, chemical doping of GR can be realized by different synthetic methods [11]. Because the nitrogen atom has a comparable atomic size and contains five valence electrons available to form strong valence bonds with carbon atoms, NG exhibits different properties compared with pristine GR. There are three common bonding configurations of the nitrogen atom that can be found in NG, namely pyridinic N, pyrrolic N and graphitic N [12]. Wang et al. reviewed the recent progress of NG and its potential applications [13]. Shao et al. applied NG in electrochemical applications such as electrochemical energy devices and biosensors [14]. Wang et al. applied NG in electrochemical biosensing for glucose [15]. Sun et al. fabricated an NG-modified carbon ionic liquid electrode for the detection of rutin [16]. Therefore, NG has potential applications in the fields of electrochemistry and electrochemical sensors. Dopamine (DA) is a biogenic amine that acts as a neurotransmitter in the physiological system. It has been
widely studied due to its important functions in the renal, cardiovascular, hormonal and nervous systems [17]. Because of the electroactivity of DA in biological samples, the electrochemical detection of DA has gained increasing attention [18]. Kan et al. applied a multi-walled carbon nanotube composite with a homogeneous molecularly imprinted polymer outer layer on a modified glassy carbon electrode (GCE) for the recognition of DA in the presence of ascorbic acid (AA) [19]. Öztekin et al. used a copper nanoparticle (CuNP) modified GCE for the selective determination of DA in the presence of AA, uric acid and p-acetamidophenol [20]. Niu et al. applied a 3,4,9,10-perylene tetracarboxylic acid functionalized GR sheets/multi-walled carbon nanotubes/ionic liquid modified electrode for DA detection [21]. These electrochemical sensors with different kinds of modifiers have exhibited advantages including simple instrumentation, fast response, low cost, high sensitivity and good selectivity.
In this paper, NG nanosheets were used to modify the commonly used GCE, and the fabricated NG/GCE was further applied to the sensitive electrochemical detection of DA. Due to the electrocatalytic activity of NG, with its unique structure and large surface area, DA exhibited an enhanced electrochemical response on NG/GCE, with a pair of well-defined redox peaks appearing. The electrochemical behaviors of DA were carefully investigated on NG/GCE and the electrochemical parameters were calculated. The proposed method was further applied to the detection of DA injection samples with satisfactory results.
Apparatus and Reagents
Electrochemical measurements were performed on a CHI 750B electrochemical workstation (Shanghai CH Instrument, China) with conventional three-electrode cell.A bare GCE or NG/GCE was used as the working electrode.A saturated calomel electrode (SCE) and a platinum wire were used as the reference and counter electrodes, respectively.Scanning electron microscopy (SEM) was recorded on a JSM-6700F scanning electron microscope (Japan Electron Company, Japan).Transmission electron microscopy (TEM) image was acquired by a JEM-2100 transmission electron microscope (JEOL, Japan) at a 200 kV acceleration potential.Dopamine (DA) hydrochloride was purchased from Aladdin Chemical Reagent Co. Ltd. (Shanghai, China).NG was synthesized in accordance with recent reported procedure. [22]A 1.0 mg mL -1 NG dispersion solution was prepared by ultrasonication for 2 hours in water.0.1 mol L -1 phosphate buffer solution (PBS) was used as the supporting electrolyte.All other reagents were of analytical grade and all aqueous solutions were prepared with doubly distilled water.
Preparation of NG/GCE
Prior to use, GCE with a diameter of 3 mm was polished on a polishing cloth with 1.0, 0.3 and 0.05 μm alumina powder, respectively, and rinsed with doubly distilled water, followed by sonication in ethanol solution and doubly distilled water successively.Then the electrode was dried in a stream of nitrogen.Afterwards, a 5 μL of 0.5 mg mL -1 NG solution was dropped to fully cover the surface of the polished GCE and dried at room temperature to get the modified electrode (NG/GCE).
Electrochemical Detection
A certain concentration of DA solution prepared with 0.1 mol L -1 PBS of pH = 7.0 was added into a 10 mL electrochemical cell and the three-electrode system was inserted into the solution.Then cyclic voltammetry (CV) was performed in the potential range from -0.2 V to 0.6 V at the scan rate of 100 mV s -1 .Differential pulse voltammetric (DPV) measurements were carried out for the quantitative analysis with the parameters set as: step increment potential of 0.004 V, pulse amplitude of 0.05 V, pulse width of 0.05 s, and pulse period of 0.2 s.
Characteristics of NG/GCE
Figure 1A shows the SEM image of NG/GCE, with the inset showing the TEM image of NG. As shown in the inset of Figure 1A, NG was present as nanosheets, some of them folded and rippled. After NG was modified onto the surface of the GCE, large amounts of nanosheets were present and a porous structure appeared, indicating that the presence of NG nanosheets on the electrode surface resulted in an increase of the effective electrode area.
The electrochemical behaviors of the different modified electrodes were further investigated in the ferricyanide solution, with the cyclic voltammograms shown in Figure 1B. The redox peak currents of [Fe(CN)6]3-/4- on GCE (curve a) were much smaller than those on NG/GCE (curve b), with a decrease of the peak-to-peak separation (∆Ep) on NG/GCE. The results indicated that the presence of NG on the GCE surface can greatly enhance the electrochemical responses, which may be attributed to the specific characteristics of NG that can accelerate the electron transfer rate. EIS experiments were further carried out, with the results shown in Figure 1C. The semicircular portion at high frequencies in the Nyquist diagrams corresponds to the electron-transfer-limited process, and its diameter is equal to the electron-transfer resistance (Ret), which controls the electron-transfer kinetics of the redox probe at the electrode. Meanwhile, the linear part at lower frequencies corresponds to the diffusion process. The Randles circuit model was chosen to fit the impedance data obtained in the experiment. The Ret value of GCE was obtained as 76.1 Ω (curve a), which was larger than that of NG/GCE (46.6 Ω). The results indicated that the decrease of the interfacial resistance was due to the presence of NG. The electrochemical behavior of ferricyanide on NG/GCE was further investigated as a function of scan rate, and the corresponding cyclic voltammograms are shown in Figure 1D. It can be seen that a pair of well-defined redox peaks appeared at the different scan rates. The redox peak currents exhibited good linear relationships with the square root of the scan rate, and the linear regression equations were Ipc/μA = 66.99 (υ/(V s⁻¹))^1/2 - 3.13 (n = 11, γ = 0.999) and Ipa/μA = -65.76 (υ/(V s⁻¹))^1/2 + 2.74 (n = 11, γ = 0.998). So the electrochemical reaction of ferricyanide on NG/GCE was a diffusion-controlled process, which could be attributed to the presence of highly conductive NG with a large surface area on the electrode surface. NG has been proven to exhibit excellent electrocatalytic activity with a fast electron transfer rate. The relationships of the redox peak potentials with ln υ were also obtained, with the regression equations Epc/V = -0.037 ln(υ/(V s⁻¹)) + 0.21 (n = 11, γ = 0.998) and Epa/V = 0.064 ln(υ/(V s⁻¹)) + 0.31 (n = 11, γ = 0.997). Based on Nicholson's equations [23], which relate the peak potentials to the scan rate, the electrochemical parameters of the ferricyanide reaction, namely the electron transfer coefficient (α), the electron transfer number (n) and the apparent heterogeneous electron transfer rate constant (ks), were calculated as 0.34, 0.98 and 1.04 s⁻¹, respectively.
Cyclic Voltammograms of DA on NG/GCE
Cyclic voltammograms of 1.0 × 10⁻⁴ mol L⁻¹ DA on the different working electrodes in PBS of pH = 7.0 were recorded, with the results shown in Figure 2. It can be seen that a pair of well-defined redox peaks appeared on the voltammograms, which is the typical result of the DA electrochemical reaction [24]. On GCE the redox peak potentials were located at 0.246 V (Epa) and 0.139 V (Epc), with redox peak currents of 1.308 µA (Ipa) and 0.992 µA (Ipc). The peak-to-peak separation (∆Ep) was calculated as 107 mV with an Ipa/Ipc value of 1.318, which is a typical result for a quasi-reversible reaction. On NG/GCE, in contrast, the redox peak potentials were located at 0.218 V (Epa) and 0.156 V (Epc), with redox peak currents of 5.231 µA (Ipa) and 4.897 µA (Ipc). The values of ∆Ep and Ipa/Ipc were obtained as 62 mV and 1.068, indicating a more reversible electrochemical process. Also, the redox peak currents increased by factors of 4.00 and 5.03 relative to GCE, respectively, which is a typical electrocatalytic effect. This result was attributed to the presence of NG on the surface of the GCE. Chemical doping is a strategy for the preparation of functionalized carbon materials, and the presence of nitrogen in the carbon structure can modulate the properties [25]. The nitrogen atom has an atomic size comparable to that of carbon and contains five valence electrons, which allows it to form strong bonds with carbon atoms, so chemical doping with nitrogen can partly restore the conductivity of GR. The presence of pyridinic N, pyrrolic N and quaternary N on the GR surface exhibits a certain catalytic ability [26], and the two-dimensional structure of GR, with its large surface area, still remains. Also, NG has aromatic rings with rich delocalized π electrons, which can interact through π-π stacking with DA molecules, which possess an aromatic ring. So the presence of NG on the electrode surface exhibited excellent electrocatalytic activity toward DA electro-oxidation, with a decrease of the overpotentials and an increase of the redox peak currents. Therefore, NG/GCE is a suitable working electrode for the sensitive detection of DA.
Influence of Scan Rate
The kinetics of the electrode reaction were investigated by studying the effect of scan rate on the electrochemical responses of DA on NG/GCE. Figure 3A shows the cyclic voltammograms of 1.0 × 10⁻⁴ mol L⁻¹ DA on NG/GCE in the scan rate range from 20.0 to 500.0 mV s⁻¹. It can be observed that a pair of well-defined redox peaks appeared at the different scan rates, with the redox peak potentials and currents changing gradually. With the increase of scan rate the redox peak currents increased, and good linear relationships of the redox peak currents (Ip) with the square root of the scan rate (υ^1/2) were obtained. The linear regression equations were Ipa/μA = -6.31 (υ/(V s⁻¹))^1/2 - 1.28 (n = 15, γ = 0.997) and Ipc/μA = 9.38 (υ/(V s⁻¹))^1/2 + 0.57 (n = 15, γ = 0.998), illustrating a diffusion-controlled process. The result indicated that the electrode reaction of DA was fast and that DA molecules diffusing to the electrode surface could undergo the electrochemical reaction quickly, which could be attributed to the high conductivity of NG on the electrode surface. The relationship of the redox peak potentials with ln υ was also constructed, and the linear regression equations were Epa/V = 0.0274 ln(υ/(V s⁻¹)) + 0.3205 (n = 15, γ = 0.997) and Epc/V = -0.0205 ln(υ/(V s⁻¹)) + 0.0910 (n = 10, γ = 0.998). According to Nicholson's equations [23], the electrochemical parameters of DA on NG/GCE, namely the charge transfer coefficient (α), the number of electrons transferred (n) and the electrode reaction rate constant (ks), were calculated as 0.49, 2.2 and 1.174 s⁻¹, respectively.
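As a concrete illustration of the diffusion-control test used here (linearity of Ip against υ^1/2), a minimal sketch is given below; the scan rates and peak currents are placeholders rather than the measured values behind Fig. 3.

```python
# Check whether the peak current scales with the square root of the scan rate,
# as expected for a diffusion-controlled process. The values below are
# placeholders, not the measured data of Fig. 3.
import numpy as np
from scipy.stats import linregress

scan_rate_V_s = np.array([0.02, 0.05, 0.1, 0.2, 0.3, 0.5])      # V s^-1
Ipa_uA        = np.array([-2.2, -2.8, -3.3, -4.1, -4.7, -5.8])  # anodic peak, uA

fit = linregress(np.sqrt(scan_rate_V_s), Ipa_uA)
print(f"Ipa/uA = {fit.slope:.2f} * v^0.5 + ({fit.intercept:.2f}), r = {fit.rvalue:.3f}")
# A correlation coefficient close to 1 supports a diffusion-controlled process.
```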
Influence of Buffer pH
The influence of buffer pH on the cyclic voltammetric responses of 1.0 × 10⁻⁴ mol L⁻¹ DA was investigated in the pH range from 4.5 to 9.0. The relationships of the oxidation peak current and the formal peak potential (E0') with buffer pH were plotted, with the results shown in Figure 4. The maximum value of the oxidation peak current appeared at pH = 7.0 and decreased gradually with a further increase of buffer pH (Figure 4A). Therefore, pH = 7.0 was selected as the optimal pH for detection in the following experiments. With the increase of buffer pH, the value of E0' also shifted in the negative direction, indicating that protons participated in the reaction. The relationship between E0' and pH was calculated as E0'/V = -0.052 pH + 0.54 (n = 10, γ = 0.996) (Figure 4B). The slope of -52.0 mV pH⁻¹ was close to the theoretical value of -59.0 mV pH⁻¹ [27], indicating that the ratio of electrons to protons taking part in the electrode reaction was 1:1. Since the number of electrons transferred was calculated above as 2, the number of protons involved in the electrode reaction was also taken as 2.
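The proton count follows from the measured slope through the Nernstian relation dE0'/dpH = -59.0(m/n) mV per pH unit (room temperature assumed), where m is the number of protons and n the number of electrons; the short sketch below carries out that arithmetic using the slope and electron number quoted above.

```python
# Interpret the slope of E0' versus pH with the Nernstian relation
# dE0'/dpH = -59.0 * (m/n) mV per pH unit, where m is the number of protons
# and n the number of electrons. The slope and n are taken from the text above.
slope_mV_per_pH = -52.0      # measured slope reported in this work
NERNST_mV = -59.0            # theoretical value for m/n = 1
n_electrons = 2              # from the scan-rate analysis above

m_over_n = slope_mV_per_pH / NERNST_mV
print(f"m/n = {m_over_n:.2f}  ->  protons ~= {round(m_over_n * n_electrons)}")
```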
Chronocoulometric Experiments
Since the electrode reaction was diffusion-controlled, the chronocoulometric response of DA on NG/GCE was investigated to calculate the diffusion coefficient (D). Figure 5A shows the chronocoulometric curves of NG/GCE in the given solutions, and a good linear relationship between Q and t^1/2 is shown in Figure 5B. According to Anson's equation [28], Q = 2nFAcD^1/2 t^1/2/π^1/2 + Qdl + nFAΓ, the D value of DA was calculated as 9.86 × 10⁻⁵ cm² s⁻¹.
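Given the slope of the Q versus t^1/2 line, the Anson equation can be inverted for D; the sketch below assumes the geometric area of the 3 mm glassy carbon disc and a placeholder slope, since the fitted slope itself is not quoted in the text.

```python
# Invert the Anson equation, Q = 2 n F A c (D t / pi)^0.5 + Qdl + n F A Gamma,
# for the diffusion coefficient D using the slope of Q versus t^0.5.
# The slope is a placeholder; the electrode area is the geometric area of the
# 3 mm glassy carbon disc (an assumption), and n and c follow the text above.
import math

F = 96485.0                   # C mol^-1
n = 2                         # electrons, from the analysis above
A = math.pi * (0.15 ** 2)     # cm^2, 3 mm diameter disc
c = 1.0e-7                    # mol cm^-3 (1.0e-4 mol L^-1 DA)
slope = 5.0e-6                # C s^-0.5, placeholder for the fitted slope

D = (slope * math.sqrt(math.pi) / (2 * n * F * A * c)) ** 2
print(f"D = {D:.2e} cm^2 s^-1")
```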
Interferences
The influence of some interfering materials on the determination of 1.0 × 10⁻⁴ mol L⁻¹ DA was investigated. The proposed method showed good selectivity for DA detection without interference from common coexisting materials in samples, such as ions (e.g. Zn²⁺, K⁺, Ca²⁺, Cl⁻, Na⁺, Cu²⁺), glucose, deoxyribonucleic acid, ribonucleic acid, etc., with signal changes below ±5% (as shown in Table 1). Thus, the electrochemical signals of these substances did not disturb that of DA, and hence NG/GCE showed good selectivity with the ability to distinguish DA from other electrochemical responses.
Sample Determination
The developed method was applied to the detection of the DA content in injection samples, which were diluted and analyzed by the proposed procedure with the results shown in Table 2. The standard addition method was also used to calculate the recovery. From Table 2 it can be seen that the DA injection samples were determined with satisfactory results. The relative standard deviation (RSD) values were in the range from 1.76% to 2.79%, with recoveries in the range from 97.5% to 104.7%, which indicated that this modified electrode could be used for the detection of DA in samples.
Stability and Repeatability of the Modified Electrode
The stability and repeatability of the modified electrode were evaluated. The RSD of eleven successive scans for 1.0 × 10⁻⁴ mol L⁻¹ DA was 1.89%, indicating the good reproducibility of NG/GCE. In the dry state, only a 2.4% loss of the DPV peak current value was found even after two weeks of storage, indicating the good stability of NG/GCE. The repeatability of eight independently fabricated electrodes showed a satisfactory RSD value of 3.72% for the detection of 1.0 × 10⁻⁴ mol L⁻¹ DA. All these results indicated that NG/GCE was stable for electrochemical applications.
CONCLUSION
In this paper, an NG-modified GCE was fabricated and used for a detailed investigation of the electrochemistry of DA. The electrochemical behaviors of DA on NG/GCE were carefully studied and the electrochemical parameters were calculated. The presence of NG on the electrode strongly promoted the electro-oxidation of DA, which could be attributed to the specific properties and unique structure of NG. Based on the oxidation peak current, DA can be detected in the concentration range from 8.0 × 10⁻⁷ mol L⁻¹ to 8.0 × 10⁻⁴ mol L⁻¹ with a detection limit of 2.55 × 10⁻⁷ mol L⁻¹ (3σ) by differential pulse voltammetry. Under the selected conditions a new electrochemical method for DA detection was established and successfully applied to DA injection sample analysis with satisfactory results.
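For completeness, the 3σ detection-limit criterion quoted above corresponds to the simple calculation sketched below; the blank currents and calibration slope are placeholders, not the values measured in this work.

```python
# Estimate the detection limit as 3*sigma of the blank divided by the
# calibration slope (the 3-sigma criterion quoted above). The blank readings
# and the slope are placeholders, not the values of this work.
import numpy as np

blank_currents_uA = np.array([0.051, 0.048, 0.053, 0.047, 0.050,
                              0.052, 0.049, 0.051, 0.048, 0.050])  # 10 blanks
slope_uA_per_molL = 2.4e4     # placeholder calibration slope (uA per mol L^-1)

sigma = blank_currents_uA.std(ddof=1)
lod = 3 * sigma / slope_uA_per_molL
print(f"LOD = {lod:.2e} mol L^-1")
```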
Figure 4. (A) The relationship between the oxidation peak current (Ipa) and pH; (B) linear relationship of the formal peak potential (E0') with pH.
Table 2. Determination of DA in the injection samples (n = 6). | 2018-05-31T10:56:12.231Z | 2016-09-28T00:00:00.000 | {
"year": 2016,
"sha1": "6747d520da25bd6dcb361568b90acec423f5cad1",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.5562/cca2679",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "6747d520da25bd6dcb361568b90acec423f5cad1",
"s2fieldsofstudy": [
"Chemistry",
"Materials Science"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
266467320 | pes2o/s2orc | v3-fos-license | The role of ion migration, octahedral tilt, and the A-site cation on the instability of Cs1-xFAxPbI3
Organic-inorganic hybrid perovskites are promising materials for the next generation photovoltaics and optoelectronics; however, their practical application has been hindered by poor structural stability mainly caused by ion migration and external stimuli. Understanding the mechanism(s) of ion migration and structure decomposition is thus critical. Here we observe the sequence of structural changes at the atomic level that precede structural decomposition in the technologically important Cs1-xFAxPbI3 using ultralow dose transmission electron microscopy. We find that these changes differ, depending upon the A-site composition. Initially, there is a random loss of FA+, complemented by the loss of I-. The remaining FA+ and I- ions then migrate, unit cell by unit cell, into an ordered and more stable phase with a √2 x √2 superstructure. Further ion loss is accompanied by A-site dependent octahedral tilt modes and associated tetragonal phases with different stabilities. These observations of the loss of FA+/I- ion pairs, ion migration, octahedral tilt modes, and the role of the A-cation, provide insights into the atomic-scale structural mechanisms that drive and block ion loss and ion migration, opening pathways to inhibit ion loss, migration and improve structural stability.
measured damage thresholds.Under these controlled circumstances, there is an opportunity.While the physical mechanism of damage with light and electrons may be different, electron-induced damage can act as a proxy to provide insights into possible pathways for structural degradation.With careful control of additional electron dose above damage thresholds, sequences of structural change can be observed, revealing possible mechanisms for vacancy formation, ion migration, phase transitions and ultimately structural decomposition.
In the case of FAPbI 3 , an intermediate phase was observed recently using moderately low dose atomic resolution scanning transmission electron microscopy (STEM) (dose ~200 e/Å 2 /s) 20 .This revealed a √2 × √2 ordered structure of FA + vacancies (V − FA ) in the decomposition from cubic FAPbI 3 into PbI 2 .This specific superstructure stabilized by ordered A-site vacancies has been proposed to explain the unusual regenerative properties of hybrid perovskite solar cells when degraded MAPbI 3 solar cells are post-treated with gaseous MAI 27 .
In the case of MAPbX 3, intermediate phases were first detected from the appearance of additional (sometimes called forbidden) reflections in selected area diffraction (SAD) patterns (~1-2 e/Å 2 / s) 18,23,24,28 .Different structural models were proposed for these intermediate phases, such as ordered halogen vacancies 18,24,28 and octahedral tilt or rotation 23 , however, these models cannot be distinguished easily from SAD alone.Recently, further information was obtained for the specific case of MAPbI 3 using low-dose high-resolution TEM (HR-TEM) combined with a direct-detection electron counting camera (DDEC) which observed an intermediate phase of MA 0.5 PbI 3 similar to that observed in FAPbI 3 19 .Despite these important observations, the mechanisms underpinning structural instability and ion migration remain unclear, including the mechanism by which ordered A-site vacancies are formed and whether there is any associated ordering of I − vacancies (V + I ) and/or octahedral tilting.Furthermore, we need to understand what role, if any, the A-cation might play, as A-site engineering may provide an avenue for improving device stability 29 .We investigate these questions in the present paper for the technologically important mixed-cation perovskite Cs 1−x FA x PbI 3 .Such mixed-cation perovskites (A 1−x A' x BX 3 ) have attracted great interest in photovoltaic applications due to their relatively good stability and charge transport properties [30][31][32] .The crystal structure of bulk and quantum dot Cs 1−x FA x PbI 3 is the same, so either is suitable for this study [33][34][35][36] .We choose to examine quantum dots because they align consistently along a major zone axis, facilitating minimization of electron dose.We examine a high-quality synthesis that previously achieved a certified record power conversion efficiency of 16.6% at a composition of Cs 0.5 FA 0.5 PbI 3 37 .We also note that quantum dots have their own exciting applications in photo-active devices 37,38 .
We first examine the pristine structures for pure FAPbI 3 and Cs 0.5 FA 0.5 PbI 3 , and then the subsequent structural evolution at the atomic level, through ion-vacancy formation and ion migration, using ultra-low-dose HR-TEM with DDEC and low-dose annular dark-field STEM (STEM-ADF).
Intermediate phase 1: A-cation and halogen vacancies (V−A and V+I) and ordering
To identify unambiguously any atomic-level compositional variations, we first use low-dose STEM-ADF because the image intensity is directly related to the number and species of atoms in the atomic column (Fig. 1). The total electron dose was carefully minimized and measured by a direct electron detector to be 44 e/Å² per image (Supplementary Note 1). This dose was the lowest we could use while still obtaining interpretable STEM-ADF images with atomic-level information (that is, sufficient signal-to-noise ratio (SNR)). However, we note that while this dose is several orders of magnitude lower than for conventional STEM-ADF images, it still represents a significant dose for this class of materials, and we anticipate some electron-beam damage. We consider first FAPbI3, Fig. 1A-D. The pristine structure has previously been determined to be cubic (space group Pm-3m) using synchrotron X-ray diffraction (XRD)37,39. In the 〈001〉 zone axis atomic-resolution STEM-ADF image, the highest intensity maxima correspond to atomic columns comprising Pb2+/I−, with small local maxima in between corresponding to I− columns (see intensity line scan along 〈100〉, Fig. 1C); the middle-intensity maxima correspond to I− columns and the lowest intensity maxima correspond to FA+ columns, due to less scattering to high angles from the organic molecule (Fig. 1B). In this image, it is evident that the image intensity at the FA+ column positions alternates (high-low-high-low), suggesting a doubling of the original cubic unit cell. This is confirmed by the intensity line profile across the FA+ column positions (Fig. 1D). Moreover, we also observe similar ordering of the intensity at the Pb2+/I− columns and I− columns (Fig. 1C, D). These lower-intensity FA+ columns and I− columns suggest the presence of vacancies. Moreover, an ordered pattern of both FA+ and I− vacancies is evident, making a coordinated √2 × √2 superstructure of V−FA and V+I (Fig. 1I). We have confirmed this image interpretation using STEM-ADF image simulations. In particular, simulations show that the coordinated intensity modulations at the FA+, Pb2+/I− and I− sites are not due to the effects of dynamical electron scattering (Supplementary Figure 1).
We note in passing that despite the evident superstructure in the STEM images, the image SNR is too low to generate any detectable superlattice reflections in the corresponding Fourier transform (FT).Hence, we do not use FTs as a method to identify the minimum dose at which damage occurs (Supplementary Note 3).We also confirm that these vacancies and ordering are evident in the lowest dose raw data images and are not introduced by the post-filtering process (Supplementary Note 4).
Let us now consider the Cs 0.5 FA 0.5 PbI 3 (Fig. 1E-H).In this case, the A-site contains a mix of FA + and Cs + with the pristine structure previously being determined to be cubic (space group: pm 3m) 37 .As with FAPbI 3 , we observe an alternating modulation of image intensity at all three atomic column sites (Pb 2+ /I − , I − and at the A-site, in this case, FA + / Cs + (Fig. 1E, F).Interestingly, the intensity line profiles indicate a smaller difference between high-intensity A-site columns and lowintensity A-site columns than for FAPbI 3 , Fig. 1G, H.This suggests fewer FA + /Cs + and I − vacancies have occurred in the mixed cation Cs 0.5 FA 0.5 PbI 3 , compared with FAPbI 3 for the same electron dose.For Cs 0.5 FA 0.5 PbI 3 , the A-site columns are nominally half occupied by FA + and half by Cs + .We hypothesize that FA + vacancies occur more readily than Cs + vacancies (because FA + , is known to readily break down into smaller molecules (such as NH 3 and CH 2 N) 19,40 ), so while FA + cations may be lost, Cs + cations remain on the A-site in sufficient numbers to generate intensity peaks in the ADF image.Hence, lower intensity peaks are still visible in the image at the vacancy-containing FA + /Cs + columns, due to the remaining Cs + cations (Fig. 1J and Supplementary Note 5).Whereas for FAPbI 3 , the lower intensity 'peaks' at the vacancycontaining A-site columns are barely visible or invisible.
To summarize the observations thus far, a low electron dose of 44 e/Å 2 applied with a scanned focussed electron probe, is sufficient to induce an ordered and coordinated √2 × √2 superstructure of A-site and I − vacancies in both FAPbI 3 and Cs 0.5 FA 0.5 PbI 3. Furthermore, image contrast is consistent with the A-site vacancies in Cs 0.5 FA 0.5 PbI 3 being predominantly FA + cations, rather than Cs + .
Formation mechanism of intermediate phase 1: FA+/I− ion migration and ordering in FAPbI3
A key question arises, namely, how do these ordered A-site and I− vacancy superstructures form from the initial undamaged cubic phase? We address this question by examining FAPbI3 using an even lower-dose imaging method, namely phase-contrast HR-TEM combined with a DDEC. This technique can be performed with an order of magnitude lower dose than that of STEM-ADF and offers much better temporal resolution. However, the image contrast mechanism is different from STEM-ADF and the relationship between atomic column composition and image contrast is less direct. For this reason, we only consider images of FAPbI3, where the A-site comprises only FA+, so intensity variations at the A-site can be exclusively related to the occupancy of FA+. With the DDEC, we obtained successive images, each with a dose of 1.5 e/Å², and then we summed sequences of these to provide images corresponding to a dose of our choice. This allows an identification of the dose at which structural changes begin and, critically, allows us to observe the subsequent structural changes, step-by-step, at the atomic level. We note that due to the different illumination conditions in STEM-ADF and in HR-TEM and the different ways of estimating electron dose, the absolute dose may not be directly comparable and the critical dose for inducing structural changes may differ across the two techniques.
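The dose-series construction described above is essentially bookkeeping over a stack of counted frames; a minimal sketch is given below, assuming the DDEC frames are already drift-corrected and saved as a NumPy array (the file name and the 1.5 e/Å² per-frame dose are placeholders).

```python
# Build a dose series by summing successive DDEC frames. Assumes the frames
# are already drift-corrected and stored as a (n_frames, ny, nx) NumPy stack;
# the file name and the dose per frame are placeholders.
import numpy as np

DOSE_PER_FRAME = 1.5                      # e/A^2, nominal dose of one frame
frames = np.load("ddec_frames.npy")       # shape (n_frames, ny, nx)

cumulative = np.cumsum(frames, axis=0)    # frame-by-frame running sum
doses = DOSE_PER_FRAME * np.arange(1, frames.shape[0] + 1)

# e.g. pick the summed image closest to a target total dose of 35 e/A^2
target = 35.0
idx = int(np.argmin(np.abs(doses - target)))
image_at_target = cumulative[idx]
print(f"summed {idx + 1} frames -> ~{doses[idx]:.1f} e/A^2")
```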
In the first instance, we sum six successive HR-TEM image frames, each acquired with a dose of ~1.5 e/Å 2 , to form an image where the atomic structure can just be resolved with sufficient signal-to-noise ratio to permit quantitative measurements (Supplementary Notes 6 and 7).In the raw image, the FA + columns appear to have uniform image intensity and the corresponding FT is consistent with the pristine cubic perovskite structure, with no additional reflections.However, in the integrated column intensity map, some intensity variations are revealed (even though the precision of the column intensity analysis is affected by shot noise at such a low dose (7.8 e/Å 2 )).These variations are consistent with the presence of random V − FA and V + I .Occasionally, there are even ordered vacancies in some local regions.Given the low dose, it could be that these vacancies are intrinsic to the pristine FAPbI 3 , particularly if prepared with insufficient surface ligand (oleic acid).Or it could be that, even at this low dose, there are sufficient electrons to generate a few vacancies.Both explanations may apply.However, the number of vacancies that were observed at this stage (HR-TEM at 7.8 e/Å 2 ) is much less than in the first acquired STEM-ADF images taken with 44 e/Å 2 (Fig. 1).This suggests at least some of the vacancies observed in the first STEM-ADF images (Fig. 1) were induced by the electron beam, even though the STEM-ADF image was taken with the lowest achievable dose for STEM-ADF.We cannot know whether the vacancies incurred in the HR-TEM (taken at 7.8 e/Å 2 ) are intrinsic to the specimen or induced by the electron beam.We hypothesize that it is likely both are true.
To improve the SNR, we sum additional frames to generate an image corresponding to a total dose of 35 e/Å². The same observations apply as for the 7.8 e/Å² images, but with greater clarity (Fig. 2I). At this stage, vacancies are still largely random and have not ordered into a √2 × √2 superstructure, confirming that there is a random initial loss of FA+ and I− pairs. With a further increase in the total dose (105 e/Å²), the contrast in the image changes, most evidently at the FA+ column positions (Fig. 2B). In the corresponding Fourier transform, additional ½,½,0c and ½,3/2,0c reflections are evident, inconsistent with the initial cubic structure with space group Pm-3m (Fig. 2F). The integrated intensity map (Fig. 2J) shows significant variations in the FA+ column intensity and, in some regions, these have formed into an ordered pattern (e.g., region 1). With further continuous beam exposure (to 175 e/Å² and then 245 e/Å²), V−FA and its ordering can be observed directly in the HR-TEM images (Fig. 2C, D). In the FT, the number of forbidden reflections also increases, and the intensity of the forbidden reflections becomes stronger (Fig. 2G, H). Most interestingly, the formation of a fully ordered √2 × √2 pattern of V−FA is clearly visualized in the corresponding column intensity maps (Fig. 2K, L).
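The column intensity maps referred to here can, in principle, be generated by integrating counts in a small disc around each projected A-site position; the sketch below is a schematic illustration of that idea, with lattice vectors, origin and integration radius as placeholder inputs rather than values from this work.

```python
# Integrate image intensity in a small disc around each projected A-site
# column to build a column-intensity map. The lattice vectors, origin and
# integration radius are placeholders that would be measured from the image.
import numpy as np

def column_intensity_map(image, origin, a_vec, b_vec, n_a, n_b, radius_px=4):
    ny, nx = image.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    intensities = np.zeros((n_a, n_b))
    for i in range(n_a):
        for j in range(n_b):
            cx = origin[0] + i * a_vec[0] + j * b_vec[0]
            cy = origin[1] + i * a_vec[1] + j * b_vec[1]
            mask = (xx - cx) ** 2 + (yy - cy) ** 2 <= radius_px ** 2
            intensities[i, j] = image[mask].sum()
    return intensities

# Columns whose integrated intensity falls well below the local median are
# candidates for vacancy-containing sites; a checkerboard of low/high values
# would indicate the sqrt(2) x sqrt(2) ordering discussed above.
```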
These HR-TEM image series reveal a step-by-step process of migration of FA + ions via A-site vacancies to form and an ordered √2 × √2 superstructure.For example, in region 1 from Fig. 2I, J, an ordered square of V − FA is formed.This square of vacancies further diffuses by one unit cell (Fig. 2J and K), so the intensity is reversed (i.e., low-intensity FA + vacancy columns become high-intensity FA + occupied columns and vice versa).The same V − FA diffusion process is observed in region 2 (Fig. 2K and L), resulting in a fully ordered pattern across the combined region.Our observations here suggest that the initial loss of FA + cations is random, and an ordered pattern is formed by the subsequent migration of FA + ions via V − FA .This demonstrates the mechanism of loss, migration, and ordering of FA + and is schematically illustrated in Fig. 2M-O.
In parallel with the above analysis of FA + vacancies, we performed a similar analysis of the intensity change of Pb 2+ /I − and I − columns to study the presence of I − vacancies (Supplementary Note 8).Consistent with the observations from ADF-STEM, we find that the I -vacancies are correlated with the FA + vacancies.In addition, and significantly, we observe the same process of migration of iodine ions via V + I to form a √2 × √2 V + I superstructure.This appears to occur in consort with the FA + vacancies and the formation of the √2 × √2 V − FA superstructure.
Intermediate phases 2: A-site dependent octahedral tilting in FAPbI3 and Cs0.5FA0.5PbI3 - initial observations
Thus far, we have observed the initial FA+/I− vacancies and examined the associated FA+/I− ion migration to form an ordered √2 × √2 V−FA and V+I vacancy superlattice, the intermediate phase 1. We now investigate whether there are any subsequent structural changes with further exposure to the electron beam, before the established final decomposition to PbI2. The first phase change, the ordered pattern of vacancies, was evident in ADF-STEM after the first scan at 44 e/Å² (and at lower doses in HR-TEM). We now examine a sequence of subsequent ADF-STEM images of FAPbI3 and Cs0.5FA0.5PbI3 taken with increasing electron dose.
In the case of FAPbI3 (Fig. 3), the first scan (at ~44 e/Å²) exhibits the beginnings of a √2 × √2 V−FA and V+I vacancy superstructure (just as we found in Fig. 1, confirmed by intensity line profiles in Supplementary Note 9). In the second scan (at ~88 e/Å²), very weak additional ½,3/2,0c reflections (and their symmetry equivalents, highlighted by red circles) appear in the FT (Fig. 3B, at 88 e/Å²). After three scans (Fig. 3C, at 132 e/Å²), these reflections are much stronger. Moreover, a distortion of the perovskite framework is clearly evident in the zoomed-in image (Fig. 3M), consistent with octahedral tilting (see later). We expose for a further two scans and find there are no newly formed reflections nor any additional structural changes (Fig. 3D, E). We will call this FAPbI3-intermediate phase 2. (Note that this is not fully stoichiometric due to the loss of FA+/I−.) In the case of Cs0.5FA0.5PbI3 (Fig. 4), the first scan (at ~44 e/Å²) exhibits the start of a √2 × √2 V−FA and V+I vacancy superstructure, plus weak additional ½,−½,0c reflections (along the 〈1−10〉 direction) that are just evident in the FT (Fig. 4A). These forbidden reflections correspond to an extra lattice frequency in the image in the 〈1−10〉 direction (perpendicular to the red arrows). We have carefully examined the first STEM-ADF scans of many Cs0.5FA0.5PbI3 and find that the threshold dose at which such forbidden reflections can first be observed is in the range 44-132 e/Å² (as the first scan is at 44 e/Å², we cannot exclude the possibility that these forbidden reflections might appear below 44 e/Å²). We suspect these small variations in the threshold dose for observing these additional ±½,±½,0c reflections are related to small composition variations in Cs0.5FA0.5PbI3 (i.e., the Cs+/FA+ ratio).
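Checking for such ±½,±½,0c-type spots amounts to sampling the image power spectrum at half-order positions of the fundamental perovskite reflections; the following sketch assumes a 〈001〉-oriented image with a known projected lattice spacing in pixels, and all numerical inputs are placeholders.

```python
# Look for 1/2{110}-type superlattice spots in the Fourier transform of an
# HR-TEM or ADF image. Assumes a <001>-oriented cubic lattice with a known
# projected lattice spacing in pixels; all values are placeholders.
import numpy as np

def reflection_intensity(image, spot_cycles_per_image, box=2):
    """Integrate |FFT|^2 in a small box around a reciprocal-space position."""
    power = np.abs(np.fft.fftshift(np.fft.fft2(image))) ** 2
    cy, cx = np.array(power.shape) // 2
    u, v = spot_cycles_per_image
    y, x = int(round(cy + v)), int(round(cx + u))
    return power[y - box:y + box + 1, x - box:x + box + 1].sum()

# Example usage (image and a_px are placeholders measured from the data):
#   N = image.shape[0]; g100 = N / a_px            # fundamental (100)c spot
#   fundamental = reflection_intensity(image, (g100, 0))
#   forbidden   = reflection_intensity(image, (g100 / 2, g100 / 2))  # 1/2,1/2,0c
#   print(forbidden / fundamental)   # rises as the superstructure develops
```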
In the next scan at 88 e/Å² (Fig. S13B), these ½,−½,0c reflections (along the 〈1−10〉 direction) disappear and a new set appears, −½,½,0c, along the perpendicular direction, 〈−110〉. By 750 e/Å², the structure appears to have stabilized with all ±½,±½,0c reflections present (Fig. 4C). We will call this Cs0.5FA0.5PbI3-intermediate phase 2. (Again, note that this is not fully stoichiometric due to the loss of A+/I−.) Forbidden reflections then gradually disappear with a further increase in total dose, and the structure stabilizes into a square perovskite framework (Fig. 4D, E). By square, we mean a = b and a 180° angle between octahedra, in contrast to the octahedrally tilted intermediate phase 2. (We cannot determine the third dimension nor the space group from this single projection.) It is surprising that, instead of decomposing quickly into hexagonal PbI2, the square perovskite framework remains, even up to a total dose of 3400 e/Å² (Supplementary Note 11). Although the FT of this square perovskite framework structure (Fig. 4E) is similar to that of the pristine cubic perovskite phase, a high density of vacancies (V−FA, V+I and likely V−Cs) must be present. Moreover, throughout the progression from the 1st to the 2nd intermediate phase and then to the square perovskite framework structure in Cs0.5FA0.5PbI3 (Fig. 4A-D), we notice the formation of local PbI2 spherical clusters that fit coherently into the perovskite structure, as well as incoherent Pb clusters which quickly transform into coherent PbI2 (Supplementary Note 12).
Intermediate phases 2-Identification of A-site dependent octahedral tilt phases
The structures of intermediate phase 2 for FAPbI 3 and Cs 0.5 FA 0.5 PbI 3 are different, giving rise to different additional reflections as reported above.These two different phase 2 structures are determined here and shown in Fig. 5.
In the case of FAPbI3, phase 2 can be attributed to an in-phase octahedral tilt (a⁰a⁰c⁺ in Glazer notation), thus leading to ½,3/2,0c forbidden reflections in the FT (Fig. 5A, B). We further find that this is consistent with a tetragonal P4/mbm perovskite structure viewed in the [001] direction (Fig. 5C-E). We note that a similar tetragonal perovskite structure has been observed recently in Cs0.05FA0.78MA0.17Pb(I0.83Br0.17)3 by scanning electron diffraction41. The octahedral tilt in that system was proposed by the authors to be intrinsic to the pristine structure and to offer a stabilization mechanism for FA+-rich mixed-cation perovskites.
Our observations here for FAPbI 3 are very different.In the case of FAPbI 3 , this octahedral tilt phase is not present in the pristine structure.It is unequivocally electron beam induced and occurs at or below 88 e/Å 2 (at 300 kV in STEM mode).
In the mixed-cation Cs0.5FA0.5PbI3, the ADF-STEM images of phase 2 show an elongation of the intensity maxima at the Pb2+/I− column positions. This is found to result from an octahedral tilt mode (a⁺a⁺c⁰ in Glazer notation). (Note that we cannot determine the phase, + or −, of the octahedral tilts from the projection.) Specifically, this image is consistent with the [001] projection of a tetragonal I4/mmm perovskite structure (Fig. 5F-J). This specific octahedral tilt mode results in strong forbidden reflections at ½,½,0c. We further notice that the P4/mbm tetragonal phase observed for FAPbI3 is also present in local regions of Cs0.5FA0.5PbI3 (Supplementary Note 13). This suggests a local segregation of the A-cations to generate FA+-rich regions.
We note it has been reported in some parts of the literature that the pristine structure of Cs1−xFAxPbI3 is consistent with the I4/mmm tetragonal phase42, the same as in Fig. 5F. Our observations here on the 50/50 Cs/FA composition, Cs0.5FA0.5PbI3, identify the pristine structure to be the Pm-3m cubic phase and the I4/mmm tetragonal phase to be unequivocally electron-beam induced, occurring in the range <44 to 132 e/Å² (at 300 kV in STEM mode).
Discussion
This study provides direct insights, at the atomic-level, into the structural response to stimuli of the mixed cation perovskite Cs 1−x FA x PbI 3 and its dependence on A-site composition (Fig. 6).The stimulus used here is an applied electron beam with extremely low current-density.This is used as a proxy with which to study the structural response to light, heat, and electric currents, and to understand at the atomic-level, how these stimuli cause the ion migration and structural degradation that are currently limiting device applications.
Point defects, such as vacancies, in photoactive perovskites are generally believed to be electronically benign, due to the observed long carrier diffusion lengths and low recombination rates 43 .However, while vacancies may be electronically benign, this study shows that they can be structurally toxic, being pivotal to ion migration and structural degradation and thereby undermining the potential of these materials for use in solar cell devices.
Even in the nominally pristine structure, occasional vacancy pairs are evident (Supplementary Fig. 8).Although low density, these provide the initial space essential to permit ion movement and rearrangement.Once a stimulus is applied, additional vacancy pairs can form, further facilitating ion migration and local ordering.These are the first atomic-scale steps to device hysteresis and ultimately structural degradation.
We observe the loss of ion pairs, facilitating subsequent ion migration, unit cell by unit cell, leading sequentially to two intermediate phases, a vacancy-ordered phase, and then A-site dependent octahedral-tilt phases.These insights into the mechanisms of ion loss and migration suggest strategies for reducing ion migration and increasing structural stability which we discuss below.
1. Our first observation is that the structural change commences with a coincident and random loss of cation/anion (FA + and I − ) pairs, resulting in the formation of vacancies (V − FA and V + I ). That is, the loss of an FA + cation will stimulate the loss of an I − anion and vice versa. This applies to both FAPbI 3 and Cs 0.5 FA 0.5 PbI 3 ; however, we observed a slower rate of A-cation loss in the mixed-cation compound because the bonding of FA + with the PbI 6 octahedra is weaker than that of Cs + . The initial loss of ions would result in a nominal formula of (FA (1−x) PbI (3−x) , x = 0−0.5) or (Cs 0.5 FA (0.5−x) PbI (3−x) , x = 0−0.25). This loss of ions in pairs demonstrates that maintaining local charge neutrality is a dominant driving force within the crystal structure. This suggests that to enhance the structural stability of halide perovskites, it is crucial to suppress vacancies of either type. This in turn suggests that to engineer maximum structural stability of the photoactive phase, it is critical to introduce sources of both cations and anions (such as AX) that can limit or block both vacancy types, both during the initial perovskite material synthesis and for device fabrication purposes. A possible strategy for suppressing vacancy formation is the pinning of A-cation and halide sites by enhancing the ionic bonding, for example, through the introduction of B-site metal dopants, 2D lattices, or core-shell structures.
2. Our second observation is that in the FA + /I − deficient structure with randomly distributed vacancies, FA + /I − ions demonstrate high mobility and promptly migrate toward an ordered superstructure. This shows that vacancies play a pivotal role in facilitating ion migration; indeed, they may be a necessary condition for ion migration. This further emphasises the need to minimize vacancies in sample preparation, in order to passivate ion migration, improve structural stability, and promote superior device performance. We have performed density functional theory (DFT) calculations to better understand the A-site cation migration and the role of vacancies in the ion migration. A-site cation migration is not realistic in a perfect perovskite structure with fully occupied sites. DFT results show that the introduction of A-site vacancies will activate the migration of the remaining A-site cations (Supplementary Note 14). This suggests that vacancies provide a driving force for the subsequent re-ordering of cations through ion migration. We have also attempted to perform preliminary ab-initio molecular dynamics simulations to model the formation of the superstructure and its diffusion on a larger supercell scale. However, conclusive results have not been reached due to the complexity of the system, as discussed in Supplementary Note 14.
3. Our third observation is that once vacancies have formed a √2 × √2 superstructure, a relatively stable intermediate phase (with the nominal formula of FA 0.5 PbI 2.5 or Cs 0.5 FA 0.25 PbI 2.75 ) is established in which ion migration is substantially impeded, despite the presence of a significant number of FA + /I − vacancies. This suggests the ordered configuration is energetically favourable, requiring a high activation energy to move an ion pair and break the local symmetry. It demonstrates how a well-ordered structure can discourage ion migration and suggests the design of well-ordered crystal structures, even those with ordered vacancies, might be one avenue for mitigating ion migration. Furthermore, it also indicates a potential atomic-level mechanism for how non-reversible ion migration and hysteresis might occur, namely, by driving ions into an ordered, energetically favourable superlattice from which a larger energy is required to subsequently move them.
4. Our fourth observation is that, with the further loss of ions, the pristine perovskite Pb-I framework can no longer be maintained, and a second intermediate phase is formed through octahedral tilt, resulting in a tetragonal phase transition. At this stage the nominal formula corresponds to (FA 0.17 ). The octahedral tilt mode (or symmetry of the tetragonal perovskite phase) depends upon the type of A-site cation. The observed correlations between A-site cations (type and occupancy) and the Pb-I framework highlight how the A-site can influence structural stability by tuning the octahedral tilt mode.
In summary, these observations reveal at the atomic scale the mechanisms by which ion migration occurs. This in turn suggests several strategies for designing the structure of perovskite photoabsorbers that will inhibit ion migration and promote structural stability, as follows:
• We find ion migration requires vacancies and these occur as cation/anion pairs. Chemical processes that block the formation of either vacancy type (cation or anion) will inhibit ion migration and enhance structural stability. Chemical processes and/or composition engineering that block both types (cation and anion) may block ion migration altogether.
• Well-ordered vacancy superstructures have higher stability and might be incorporated deliberately into the structure to discourage ion migration (for example, through low-temperature annealing in chemical processing).
• Structural stability can be enhanced through octahedral tilting, which might be induced through A-site composition engineering.
Since other stimuli such as high-intensity light, heat, or electrical fields may induce vacancy formation and ion migration in an analogous manner, these findings may provide fundamental instructions to guide the development of stable optoelectronic devices under various conditions.
Synthesis of CsPbI 3 QDs
Cs-oleate was obtained by dissolving 0.1 g of Cs 2 CO 3 into 0.4 ml of OA and 10 ml of ODE, and the mixture was loaded into a 50-ml three-neck flask and stirred under vacuum for 30 min at 120 °C. After fully dissolving, the Cs-oleate in ODE was stored under nitrogen until it was used. PbI 2 (0.4 g), ODE (20 ml), OA (2 ml), and OLA (2 ml) were stirred in a 100-ml flask and degassed under vacuum at 120 °C for 1 h. The flask was then filled with N 2 and kept under constant N 2 flow. The temperature was increased to 170 °C, and then 3.4 ml of the Cs-oleate precursor was swiftly injected into the mixture. After 10 s, the reaction was quenched by immediate immersion of the flask into an ice bath. After cooling to room temperature, 30 ml of MeOAc was added, and the mixture was centrifuged at 6440×g for 10 min. The resulting QD precipitate was dispersed well in 2 ml of hexane and was centrifuged again at 906×g for 2 min to remove agglomerations. The concentration of the obtained QD ink was further adjusted to 50 mg ml −1 by adding the proper amount of hexane. Then the CsPbI 3 QD ink was stored under nitrogen until use.
Synthesis of FAPbI 3 QDs
Pb(acetate) 2 •3H 2 O (0.152 g), FA-acetate (0.157 g), ODE (16 ml) and OA (4 ml) were added in a 100-ml three-neck flask and dried under vacuum for 30 min at 40 °C.The mixture was then heated to 80 °C under an N 2 atmosphere, followed by an injection of OLA-I (0.474 g dissolved in 4 ml of toluene).After 30 s, the reaction mixture was cooled in the water bath.After cooling to room temperature, 20 ml of MeOAc was added, and the mixture was centrifuged at 6440×g for 5 min.The resulting QD precipitate was dispersed in hexane and was centrifuged again at 1610×g for 4 min to remove agglomerations and impurities.The concentration of the purified QDs ink was further adjusted to 50 mg ml −1 by adding a proper amount of hexane.Then the QDs ink was stored under nitrogen until use.
Synthesis of Cs 0.5 FA 0.5 PbI 3 QDs
Cs 0.5 FA 0.5 PbI 3 QDs were obtained by cation-exchange reactions: the stored CsPbI 3 QDs and FAPbI 3 QDs were mixed under an N 2 atmosphere with a calculated volume ratio to guarantee the desired composition of the QDs. The ligand-assisted cation-exchange reaction was completed in 60 min at room temperature. The obtained QD ink was kept in an N 2 -filled glovebox for an additional 12 h to guarantee the even distribution of surface ligands.
Ligand density reduction
The surface ligand density of the obtained FAPbI 3 QDs and Cs 0.5 FA 0.5 PbI 3 QDs were further reduced by adding EtOAc (volume ratio of QD solution to EtOAc was 1:1) into the QD inks and centrifuged at 6440×g for 5 min.The resulting QD precipitate was dispersed in hexane and was centrifuged again at 1610×g for 4 min to remove agglomerations.The concentration of the QD inks was further adjusted to 50 mg ml −1 by adding a proper amount of hexane.The purification process, including the mixing of QD inks with MeOAc or EtOAc, the centrifuge, and the dispersion of QDs in hexane was conducted in dry air with a relative humidity between 20% and 25%.
TEM specimen preparation
QD solutions were stored in the glove box filled with dry N 2 .Ultrathin carbon-coated Cu TEM grids were plasma-cleaned under H 2 /O 2 for 30 s before use.A sample of QD solution was taken and immediately dropped onto the TEM grid and allowed to dry for 30 s.This whole sample preparation was conducted in the glove box (N 2 ).The prepared TEM specimen was transferred from the glove box to the TEM room, using a homemade stainless steel vacuum transfer unit.The total time of TEM specimen exposure in the atmosphere is <30 s.
TEM characterization
STEM-ADF was carried out using an FEI Titan 3 80-300 FEG-TEM equipped with probe and imaging spherical aberration correctors.All images were acquired at 300 kV, a 15 mrad probe-forming aperture, and 39-200 mrad detector collection angle.Electron dose was measured using an electron microscope pixel array detector (EMPAD) based on a measurement of 10,000 frames of the vacuum probe.HR-TEM was performed using a Thermo Fisher Scientific Spectra φ FEG-TEM equipped with a monochromator and probe and imaging C5 aberration correctors.HR-TEM images were acquired on a Gatan K3 camera in counting mode at 75 fps.All TEM experiments followed a strict "shoot blind" protocol whereby a fresh region of the specimen is first exposed to the electron beam at the start of data acquisition and only exposed for the duration of data acquisition.In particular, no electron dose was applied to the material for tilting to a zone axis or adjusting imaging parameters.Images were post-filtered with a combined Bragg filter and a Butterworth filter and corresponding raw images are given in the supplementary material.
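The post-filtering mentioned above combines a Bragg mask with a Butterworth low-pass filter. As a rough illustration only, the Python snippet below sketches the Butterworth low-pass component of such a filter; the cutoff and order values are arbitrary assumptions, and the Bragg-mask step used by the authors is omitted.

```python
# Rough illustration of a radially symmetric Butterworth low-pass filter in
# Fourier space (the Bragg-mask step is omitted); cutoff/order are assumptions.
import numpy as np

def butterworth_lowpass(image, cutoff=0.25, order=4):
    """Low-pass filter an image; cutoff is a fraction of the Nyquist frequency."""
    ny, nx = image.shape
    ky = np.fft.fftfreq(ny)[:, None]
    kx = np.fft.fftfreq(nx)[None, :]
    k = np.sqrt(kx**2 + ky**2)
    # Smooth roll-off around the cutoff frequency (Nyquist corresponds to 0.5).
    window = 1.0 / (1.0 + (k / (0.5 * cutoff)) ** (2 * order))
    filtered = np.fft.ifft2(np.fft.fft2(image) * window)
    return np.real(filtered)
```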
STEM simulations
STEM-ADF images were simulated using a GPU-enhanced frozen-phonon multislice code (µSTEM).The simulations employed supercells by tiling perovskite unit cells by 8 × 8 (supercells in size 5 × 5-10 × 10 nm), combined with 1024 × 1024 pixels to ensure accuracy.Experimental conditions were used as the parameters for the calculation of the STEM images.30 frozen phonon passes were calculated.
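The core of any multislice simulation is repeated transmission through thin projected-potential slices followed by Fresnel propagation between them. As a schematic illustration only (it omits probe scanning, the frozen-phonon averaging, and the annular-detector integration performed by µSTEM), the snippet below sketches that propagation loop in plain NumPy; the array names and the supplied potential are placeholders.

```python
# Schematic multislice propagation loop (illustration only, not the µSTEM code).
import numpy as np

def multislice(psi, potentials, wavelength, dz, sampling, sigma):
    """Propagate a complex wavefunction psi through a stack of projected
    potential slices (each a 2D array), with slice thickness dz and real-space
    sampling in the same length units as the wavelength."""
    ny, nx = psi.shape
    kx = np.fft.fftfreq(nx, d=sampling)
    ky = np.fft.fftfreq(ny, d=sampling)
    k2 = kx[None, :] ** 2 + ky[:, None] ** 2
    # Fresnel propagator for free-space propagation over one slice.
    propagator = np.exp(-1j * np.pi * wavelength * dz * k2)
    for v_slice in potentials:
        psi = psi * np.exp(1j * sigma * v_slice)           # transmission through the slice
        psi = np.fft.ifft2(np.fft.fft2(psi) * propagator)  # propagation to the next slice
    return psi
```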
DFT calculations
DFT as implemented in the Vienna Ab initio Simulation Package (VASP) 44 is used to study the Cs x FA 1−x PbI 3 perovskite structure. The GGA-PBE 45 functional is considered for all the calculations. To minimize the effect of periodic boundary conditions on atomic interactions, a supercell consisting of 8 unit cells with 96 atoms (12.703 Å × 12.703 Å × 12.703 Å) for FAPbI 3 and a supercell with 40 atoms (12.564 Å × 12.564 Å × 12.564 Å) for CsPbI 3 are considered for this study. A 1 × 1 × 1 k-point mesh at the Γ-point is applied for all the calculations. To compare the migration energy barriers of FA and Cs ions in the structures of FAPbI 3 and CsPbI 3 , nudged elastic band (NEB) calculations with a force tolerance of 0.03 eV Å −1 and an energy cutoff of 520 eV are carried out. To make the initial and final structures, one FA and one I (and similarly one Cs and one I) are removed from the FAPbI 3 and CsPbI 3 structures (FA 0.875 PbI 2.875 and Cs 0.875 PbI 2.875 ), and DFT ground-state optimization is used to converge the initial and final migration points. The energy-optimized structures are used to generate six intermediate images for the NEB calculations.
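For readers wishing to reproduce a comparable migration-barrier workflow, the following Python sketch shows one way such an NEB calculation could be driven through ASE with a VASP backend. It is only a schematic outline based on the parameters stated above (PBE, 520 eV cutoff, Γ-point sampling, 0.03 eV Å −1 force tolerance, six intermediate images); the structure file names and VASP settings are placeholder assumptions, not the authors' actual scripts.

```python
# Hypothetical sketch of the NEB workflow described above, using ASE + VASP.
# File names and exact VASP settings are illustrative assumptions.
from ase.io import read
from ase.calculators.vasp import Vasp
from ase.neb import NEB
from ase.optimize import BFGS

def make_calc():
    # Gamma-point PBE calculation with the 520 eV cutoff quoted in the text.
    return Vasp(xc='PBE', encut=520, kpts=(1, 1, 1), ediff=1e-5)

# Relaxed endpoint structures: A-site-deficient supercells with the migrating
# cation at its initial and final lattice sites (prepared separately).
initial = read('neb_initial.vasp')
final = read('neb_final.vasp')

# Six intermediate images between the two relaxed endpoints.
images = [initial] + [initial.copy() for _ in range(6)] + [final]
for image in images[1:-1]:
    image.calc = make_calc()

neb = NEB(images)
neb.interpolate()  # linear interpolation of the migration path

# Relax the band until the maximum force falls below 0.03 eV/A.
optimizer = BFGS(neb, trajectory='neb.traj')
optimizer.run(fmax=0.03)
```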
Ab-initio molecular dynamics (MD) simulation
The samples that were used for NEB calculations are also being used for Born-Oppenheimer molecular dynamics simulations.Ab initio MD simulations were performed using the CP2K/Quickstep package 46 .The Perdew-Burke-Ernzerhof (PBE) generalized gradient approximation (GGA) 45 was selected for the DFT exchange-correlation functional.To correct for van der Waals interactions the DFT-D3 47 method was used.Pseudopotentials of Goedecker, Teter, and Hutter (GTH) 48 were employed and the DZVP-MOLOPT-SR-GTH 49 was selected as basis set.This is a Gaussian and plane-wave (GPW) 50 basis and a cutoff energy of 280 Ry was selected.A 1 × 1 × 1 k-point mesh (Γ point) was used in all calculations.For ab initio MD simulations, the equations of motions were integrated using a velocity Verlet algorithm with a time step of 1 fs.
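As a minimal sketch of how such a Born-Oppenheimer MD run could be set up, the snippet below strings together the stated parameters (PBE, GTH pseudopotentials, DZVP-MOLOPT-SR-GTH basis, 280 Ry cutoff, velocity Verlet, 1 fs time step) through ASE's CP2K interface. The input structure, initial temperature, run length, and the omission of the DFT-D3 correction (which would be supplied via the CP2K input template) are all assumptions made for illustration, not the authors' production setup.

```python
# Minimal sketch (not the authors' setup) of a Born-Oppenheimer MD run with the
# parameters quoted above, using ASE's CP2K interface.
from ase.io import read
from ase import units
from ase.calculators.cp2k import CP2K
from ase.md.verlet import VelocityVerlet
from ase.md.velocitydistribution import MaxwellBoltzmannDistribution

atoms = read('supercell.xyz')  # placeholder: the vacancy-containing supercell

# PBE with GTH pseudopotentials and the DZVP-MOLOPT-SR-GTH basis; 280 Ry cutoff.
# The DFT-D3 dispersion correction would be added via the CP2K input template.
atoms.calc = CP2K(
    xc='PBE',
    basis_set='DZVP-MOLOPT-SR-GTH',
    pseudo_potential='GTH-PBE',
    cutoff=280 * units.Rydberg,
)

MaxwellBoltzmannDistribution(atoms, temperature_K=300)  # assumed initial temperature

# Velocity Verlet integration with the 1 fs time step stated in the text.
dyn = VelocityVerlet(atoms, timestep=1 * units.fs, trajectory='md.traj')
dyn.run(1000)  # number of steps is an arbitrary placeholder
```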
Fig. 1 |
Fig. 1 | A-site and I − vacancy ordering in FAPbI 3 and Cs 0.5 FA 0.5 PbI 3 -Low dose STEM-ADF images in the 〈001〉zone axis.A-D FAPbI 3 .E-H Cs 0.5 FA 0.5 PbI 3 .A, E Lower-magnification images.Total dose is 44 e/Å 2 .B, F Enlarged images of regions marked in (A, E).C, G Intensity line profiles integrated over Pb 2+ /I − and I − columns as marked in region 1 in (B, F).D, H Intensity line profiles integrated over I − and FA + (or Cs + /FA + , in the case of Cs 0 .5 FA 0.5 PbI 3 ) columns as marked in region 2 in (B, F).Arrows highlight atomic columns: red-Pb 2+ /I -; purple-I − ; blue-FA + (or Cs +/ FA + ).I, J schematic diagrams of vacancy ordering in FAPbI 3 and Cs 0.5 FA 0.5 PbI 3 .Blue diamonds indicate the ordered remaining A-site cations.Low-dose STEM-ADF images are filtered by a combined Bragg filter and Butterworth filter to enhance image contrast.All structures and orderings present in the filtered images were also observed to exist in the raw images (details and raw images in Supplementary Note 4).Source data are provided as a Source Data file.
Fig. 2 |
Fig. 2 | Atomic-scale HR-TEM images of FAPbI 3 show the initial random FA + vacancies and subsequent ordering via FA + ion migration.A-D HR-TEM images of FAPbI 3 with increasing electron beam exposure.Red circles correspond to the highest intensity Pb 2+ /I − columns; orange circles correspond to I − columns and blue circles correspond to the FA + column.Red squares in (D) highlight ordered V − FA .E-H FTs of images (A-D).Forbidden 1=2,1=2,0 and 1=2,3=2,0 reflections are marked by red circles.I-L Integrated column intensity maps based on HR-TEM images in (A-D).Colour bar represents integrated intensity from 5000 to 10,000 in arb.units.Red solid diamonds highlight ordered V − FA pattern in the image and red dashed diamonds highlight ordered V − FA pattern in the previous image.M-O schematic diagrams illustrating the process of loss, migration and ordering of FA + revealed in (A-D) and (I-L).Selected regions of ordered V − FA are marked by blue diamonds.Red arrows indicate the migration of FA + .Black arrows between M, N and N, O indicate the loss and migration of ions with time series (or dose).
Fig. 3 |
Fig. 3 | Sequence of STEM-ADF images of FAPbI 3 shows the structural change of octahedral framework.A-E STEM-ADF images with increasing total dose.F-J FTs of STEM-ADF images in (A-E).Reflections forbidden in the cubic structure are highlighted by red circles.K-O enlarged images of the regions marked in (A-E).
Fig. 5 |
Fig. 5 | Structural identification of intermediate phase 2 octahedral tilt modes in (A-E) FAPbI 3 and (F-J) Cs 0.5 FA 0.5 PbI 3 . A, F STEM-ADF image of fully established intermediate phase 2. B, G FTs from (A, F). Circles highlight reflections that are forbidden in the pristine cubic structure. C, H Unit cell structures of the region marked in (A, F). D, I The proposed structures showing different octahedral tilt modes for FAI-deficient FAPbI 3 (a 0 a 0 c + ) and Cs 0.5 FA 0.5 PbI 3 (a + a + c 0 ). Green, grey, and purple atoms represent Cs + /FA + , Pb 2+ , and I − atomic columns, respectively. E, J STEM-ADF simulations based on the crystal structures proposed in (D, I).
Fig. 4 |
Fig. 4 | Sequence of STEM-ADF images of Cs 0.5 FA 0.5 PbI 3 shows a structural change of octahedral framework that is different from FAPbI 3 .A-E STEM-ADF images with increasing total dose.F-J FTs of STEM-ADF images in (A-E).Reflections forbidden in the cubic structure are highlighted by red circles.(K-O) enlarged images of the regions marked in (A-E).Red arrows indicate extra lattice frequency that does not exist in the cubic perovskite structure.STEM-ADF images between 44 and 220 e/Å 2 are shown and discussed in Supplementary Note 10.
Fig. 6 |
Fig. 6 | Schematics of the observed ion migration mechanisms and associated phase changes of Cs 1−x FA x PbI 3 under the electron beam.A FAPbI 3 .It involves an initial loss of FA + and I − (green arrows), followed by the formation of vacancyordered superstructure (intermediate phase 1, indicated by blue diamond) through ion migration (red arrows).With the further loss of ions, the pristine cubic phase transforms into a tetragonal phase through octahedral tilt (intermediate phase 2, indicated by yellow arrows).B Cs 0.5 FA 0.5 PbI 3 .The loss and migration of ions are thought to be similar to FAPbI 3 , however, the octahedral tilt mode for the intermediate phase 2 is different due to the alloying of Cs + at A-site. | 2023-12-23T06:17:05.792Z | 2023-12-22T00:00:00.000 | {
"year": 2023,
"sha1": "009932ee17654aa7e9624c6eba72f432d215cc89",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "f1c4591a430b140d97582f61d6a08826e39099cf",
"s2fieldsofstudy": [
"Materials Science",
"Physics"
],
"extfieldsofstudy": [
"Medicine"
]
} |
187999749 | pes2o/s2orc | v3-fos-license | Assessment of Labor Potential on the Regional Level by the Index Method
Analysis and tendencies in labor potential development are part of a complex socio-economic system. This system as well as its state is of great importance for justification of investments and further vectors of development. At present, however, there is no generally accepted methodology for labor potential assessment. The authors introduce the index method to assess basic characteristics of labor potential taking the Samara region as an example. The research is based on the calculation of labor potential indices in the Samara region. The authors detect changes in labor potential structure which are due to the growth of indices of education, labor efficiency and scientific research with some decrease in the index of demographic performance. The obtained results serve as a basis for determining the possibilities of area development and help to see the logic of advanced labor potential formation.
Introduction
Economic development of a country is determined, first of all, by those people who live and work on its territory. The level and quality of life, well-being and industrial growth mostly depend on the working capacity of the population. There is a need for a progressive increase in the work potential and a constant search for reserves for labor potential formation to ensure the sustainable development of key sectors of the economy. These considerations determine the relevance of this study. The assessment of labor potential characteristics is of special importance. The central issues here are education and skills of workers. There is also a need to develop conceptual approaches that implement opportunities and prospects of labor potential development in the context of key measures of social and economic policy of the state [2].
Different regions of the Russian Federation have different structure, level of economic development, level and quality of life, which create conditions for regional differentiation of labor resources and hinder regional development. The lack of opportunities for labor potential formation and realization can serve as a threat to the effective development of the region's economy and strengthen intraregional inequalities caused by different natural conditions and historical background. There exist significant regional and industry-specific differences in quantitative and qualitative parameters of labor potential which can hinder complex development, create economic disproportion in the system of economic interaction. All this prevents effective use of available natural resources. The increasing demand of the economy in innovative development and labor potential realization stimulate research studies which aim at changing labor potential structure and reveal tendencies of its development [3]. Despite the active development and implementation of target state and regional labor management programmes, there is often no comprehensive approach to different aspects of labor potential formation and use. In addition, there is no correlation between education results and earned income, there is no long-term policy of planning the needs of the economy in qualified personnel, there is a decline in the quality of labor potential in certain regions. The solution of these problems is hampered by the lack of consistent approach to methods of labor potential assessment, incoherence of goals and tasks of advanced labor potential formation. The development of approaches to labor potential assessment requires scientific rationale and methodical support, which confirms the relevance of the selected research topic.
Assessment of labor potential on the regional level
The purpose of the research is to reveal tendencies in labor potential development. The study is based on an analysis of the existing situation at the regional level; calculations of index potentials are taken as an example. To achieve the goal of the study the researchers plan to fulfil the following tasks: to examine the structure of labor potential in the Samara region and to implement the index approach for labor potential assessment; to define quantitative characteristics of labor potential in the Samara region on the basis of demographic, migration, labor, educational and structural data; to assess the current situation on the basis of the calculation of index potentials and characteristics of labor potential in the Samara region; to see the logic of advanced labor potential formation for determining directions of labor potential development. The analysis of demographic, migration, and structural indicators of labor potential, drawn from official statistics and studies of the Samara region labor market, served as the research base for the present study. To assess the current state of labor potential at the regional level, the authors took the Samara region as one of the industrially developed regions of the Russian Federation, which has significant natural and human resources. In the beginning of 2017, the Samara region was ahead of other regions both in the Volga Federal District and in the Russian Federation in terms of employment and economic activity indicators. Its unemployment rates were among the lowest in the whole country [4]. However, there is a shortage of qualified personnel in the region, which particularly affects the priority sectors of the economy. It is of scientific and practical interest to reveal tendencies in the structure of labor potential to identify development opportunities.
Tendencies of economic development in Russia and its regions are greatly determined by quantity and quality of human resources which can be used in certain territories [5]. The reason for that is the transition of the economy to the industrial phase, the increase of the influence of information technologies on all spheres of life, production intellectualization, the dynamics of which depends on the quality of the employed workforce [6]. It is through the innovative qualities of the workforce that you can increase the competitiveness of all areas of the economy and create breakthrough technologies. Education system, which is the basis for the formation and development of labor force quality characteristics, plays a decisive role here. Qualitative and quantitative characteristics of labor resources are closely related. Quantitative indicators serve as a basis for planning the structure of the workforce quality and for all directions and levels of economic activities staffing [8].
One of the important characteristics of quantitative parameters is the age structure of the labor force. It has a tendency to reduce the proportion of young people and to increase the number of older people, which leads to an increase in the economic burden on economically active population, risks in social security, distortion in employment (see Fig. 1).
Fig. 1. Distribution of the number of employed in the Russian Federation according to age groups (calculated on the basis of Samarastat data [9])
The most significant indicator of the dynamics of labor quality is the level of education of the population in the region. Vocational education in the modern world economy plays the role of the catalyst for many socio-economic processes.
It is a significant social institution, shaping the environment and determining the ways of society development [10,11].
Graduates educated in the Samara region work all over Russia and abroad. The educational system of the Volga region attracts many people from different cities and villages of Russia and from abroad. Assessment of educational level of the population and the received education structure allow to carry out qualitative analysis of labor resources and to estimate prospects of development.
Labor migration has an increasing impact on the state of labor potential [12]. The influx or outflow of labor migrants can have a significant impact on the state of labor potential. Regions are interested in the influx of skilled labor, but there should be favorable conditions created in the region to attract skilled labor force. Effective workplaces, intensive technologies, development of automation and robotic, developed education system are critical elements of labor potential formation.
One of the most reliable methods of assessing the real state of labor potential is the factor analysis of statistical indicators with the highest degree of certainty. Data analysis can be carried out by using several calculation groups: they are natural, cost and index approaches. Let's start with the index method, which allows to describe the main characteristics of labor potential. The main characteristics of labor potential and approaches to their calculation are shown in Table 1.
To calculate the labor potential index (LPI) and its component indices, the following variables are used (the formulas themselves are given in Table 1):
- FR - scope of fundamental scientific research, mln. rubles; AR - scope of applied scientific research, mln. rubles; SW - amount of scientific research works, mln. rubles; ps - index potential of qualification.
- Ilm - labor migration index. It allows changes in labor potential caused by the migration of the population to be assessed. Pw - the number of migrants moving within the region, people; Po - the number of migrants coming to the region from outside, people; Mw - the number of departing migrants within the region, people; Mo - the number of migrants leaving the region, people; plm - index potential of labor migration.
- Ile - labor efficiency index. It allows the volume of the gross regional product to be related to the income of the population who have produced this GRP; the dependence between labor volumes and the income of the population is accepted as an axiom. GRP - gross regional product, mln. rubles; GI - gross income of the population, mln. rubles; pe - index potential of labor efficiency.
The calculation of the indices for the Samara region was made on the basis of regional statistics for 2015 and 2016 (see Table 2). Calculation of the labor potential indices for 2015-2016 and of the index potentials makes it possible to determine the dynamics of the indices constituting the characteristics of labor potential and to identify the indices for which there is growth or decline. The results of the data analysis presented in Table 1 show that the components of the labor potential in the Samara region have changed unevenly, and in 2016 there was an increase in all index values except for the index of demographic productivity. This could be a consequence of the demographic trough of the 1990s. The calculations can also be used to compare the labor potential indices of the region with those of other regions and of the Russian Federation in general. For example, the education index of the Samara region shows a positive trend: it is constantly increasing and holds the 9th position in the country, which is good enough considering the number of regions [13]. In general, due to the growth of economic indicators, the labor potential index in 2016 was 1.042, that is, the labor potential of the Samara region grew by 4.2%. However, the calculation of labor potential has limited characteristics and does not allow a full assessment of the quality of labor potential across many qualitative parameters, as there is not enough data. That is why additional studies with the use of on-site monitoring of personnel at enterprises are required, and monitoring of the quality of management decisions regarding labor potential at the regional and federal levels also seems useful. At the same time, the calculations do not contain such important qualitative indicators of labor potential as health, cultural and moral levels, and social activity of the population. These indicators are represented in the statistical data, and they have a significant impact on the quality of labor potential. These parameters can be assessed through expert assessments, representative surveys and indirect indicators. Nevertheless, the availability and ease of use of this technique are valuable, because the statistical data used in the calculations can be selected from available sources. This methodology makes it possible to assess the labor potential of the region with a fairly high degree of objectivity, which is an essential requirement for public administration.
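The exact aggregation formulas of Table 1 are not reproduced here, so the short Python sketch below only illustrates the general index logic described in the text: each characteristic index is a ratio of the current period to the previous one, component values above 1 indicate growth, and the aggregate LPI for 2016 was reported as 1.042. The component values shown and the simple unweighted averaging are placeholder assumptions for illustration, not the authors' formulas or data.

```python
# Illustrative sketch of the index approach (not the authors' exact formulas);
# the component values and equal weighting are placeholder assumptions.
def component_index(current: float, previous: float) -> float:
    """A characteristic index as the ratio of the current to the previous period."""
    return current / previous

def labor_potential_index(components: dict) -> float:
    """Aggregate the component indices; a simple unweighted mean is assumed here."""
    return sum(components.values()) / len(components)

indices_2016 = {
    "demographic_productivity": 0.98,  # hypothetical: the only declining component
    "education": 1.05,                 # hypothetical values for illustration
    "scientific_research": 1.06,
    "labor_migration": 1.04,
    "labor_efficiency": 1.08,
}

lpi = labor_potential_index(indices_2016)
print(f"Labor potential index: {lpi:.3f}")  # values above 1.0 indicate growth
```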
The formation of labor potential of the region is connected with the organization of complex interaction between different branches of power, business and education. This interaction is necessary to achieve a balance of quality of labor at all stages of management and production of consumer goods. The advanced labor potential, characterized by high quality of labor force, is required today by high-tech branches of economy. They need qualified labor force to achieve breakthrough results, allowing to make a complex technological breakthrough as it is at this level the global challenges facing the whole economy of the country can be faced [14]. The formation of advanced labor potential occurs in a certain sequence and under the influence of various objective and subjective factors (see Fig. 2).
Figure 2. The logic of advanced labor potential formation
Natural abilities of a person, determined genetically, together with his living environment, a level of society development, both in technical-technological, and in social sphere, determine this person's choice of a profession and a career. In different periods of society development there is cyclical redistribution of labor potential into more effective spheres of economy, which makes a significant impact to the education system as far as labor force retraining is concerned [15]. This circumstance adds certain peculiarities to the process of professional education and influences skilled labor force and redistribution of labor potential.
Conclusion
The research demonstrates that existing tendencies of change in the structure of labor potential are caused mainly by a changing demographic situation. The authors based their study on the calculations of labor potential characteristics in the Samara region, a large industrial region of the Russian Federation with developed infrastructure, and found index potentials for the following significant labor potential characteristics: demographic productivity, education of labor resources, scale of research work, labor migration and labor efficiency. The analysis of labor potential in the Samara region, on the basis of characteristics assessment with the index method, showed a slight growth of indices of main indicators, except for the index of demographic productivity, and revealed a change in its structure that had occurred in recent years. It means that there is some potential for more intensive growth in the workforce, which requires an advanced target programme to increase its rate of growth. At the same time, there is a risk of reducing labor potential through reduced demographic productivity, which must be taken into account in the draft programme. The research also helps recognize growth potential of the labor force and suggests necessary measures to compensate for negative trends. The result obtained in the study is important for the development of the state policy of labor potential formation. To achieve this goal, the researchers have applied the logic of advanced labor potential formation, which can serve as a starting point for further studies.
It is also determined that the system of education, which forms the basis of labor quality and has a special influence on labor potential formation, creates additional opportunities for the development of the region's economy. On the basis of the index of labor resources education and its growth, it is shown that a proportion of population in the Samara region with higher and secondary vocational education is comparatively high. However, the rate of growth of labor force indices is negligible. It cannot satisfy the growing need of the region's economy for skilled labor. For a further research we suggest determining prospective needs for qualified labor force and calculating amount of labor potential. Such perspective results will be important for the formation of practice-oriented educational programs, for determination of the state funds for labor training, and for plans of professional retraining in conditions of enterprises restructuring.
Thus, there are real opportunities for increasing labor potential and its use in the Samara region for improving the quality of labor activity. Further measures in this direction require additional research. On the basis of such studies a unique, effective system of labor potential management of the region can be formed. | 2019-06-13T13:17:13.404Z | 2018-06-13T00:00:00.000 | {
"year": 2018,
"sha1": "0958084aaa70b0f55dda319c927979fb4de5d8a9",
"oa_license": "CCBY",
"oa_url": "https://www.matec-conferences.org/articles/matecconf/pdf/2018/29/matecconf_spbwosce2018_01069.pdf",
"oa_status": "GOLD",
"pdf_src": "ElsevierPush",
"pdf_hash": "e0adfe0b8d933ef5cdcba4c4d078b2406c556547",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": []
} |
166845017 | pes2o/s2orc | v3-fos-license | User Frustrations as Opportunities
User frustrations are an excellent source of new product ideas. Starting with this observation, this article describes an approach that entrepreneurs can use to discover business opportunities. Opportunity discovery starts with a problem that the user has, but may not be able to articulate. User-centered design techniques can help elicit those latent needs. The entrepreneur should then try to understand how users are solving their problem today, before proposing a solution that draws on the unique skills and technical capabilities available to the entrepreneur. Finally, an in-depth understanding of the user allows the entrepreneur to hone in on the points of difference and resonance that are the foundation of a strong customer value proposition.
Introduction
Any business opportunity starts with a good understanding of the current or potential user of a product.As an entrepreneur, you need to understand what problems the user faces, and how you can use your skills and technical capabilities to solve them.It is critical to keep those two aspects of developing a new opportunity apart.On one hand, users are just looking for a solution to their problem.They are not interested in the technology underlying your solution.On the other hand, you can only solve problems that match your skills.Often, entrepreneurs make one of two mistakes: they either assume that their technical solution will "wow" the customer or they target an opportunity on which they cannot deliver, because they do not have access to the required skills and capabilities.
To learn about your users, you should answer these questions: 1. What problem are you solving for your user?
2. What frustrations do users experience with current solutions?
3. How are users solving their problem today?
4. What better ways are there to solve the user's problem?Do you have the required skills?
5. How is your solution different from other solutions on the market?
What problem are you solving?
If we could just ask customers what they need, developing new products would be simple.Traditional market research relies on customer input obtained through surveys and focus groups.However, users often cannot articulate their needs, and their imagination of what solutions can be provided to their problems is limited by what they have come to know.Asking customers about their needs will lead to incremental improvements, not new ways of solving their problems.
In order to understand what problem the user faces, you need to put yourself into the user's shoes.From the user's perspective, your product needs to address needs the user has.User needs come in two types.
Needs that the user can articulate are also known as perceived needs. An example of a perceived need is a user looking for a faster portable scanner or one with greater memory capacity. Most needs, however, are difficult to articulate.
For example, the user's experience with current products may limit their ability to imagine a different type of solution. These needs are called latent needs. An example of a latent need is that users really want to limit the number of gadgets they have to carry with them.
Continuing with the scanner example, we note that currently, most portable gadgets have a single purpose. So, an industrial designer may need to take a potpourri of gadgets wherever he goes, including a digital camera for taking photos, a voice recorder for conducting interviews or sampling sounds, a portable scanner to scan photos and articles, a sketchbook for capturing ideas when the inspiration strikes, and a collection of pencils of different strength. I happened to sit next to a well-known designer once at an event, when he emptied his bag on the table to make this very point. Our designer's latent need is: there are too many gadgets to carry, but if he leaves one of them at home, it may be the one he needs most. So, he has learned to live with this constraint; he is not content, but he lacks a viable alternative.
What frustrations do users experience with current solutions?
To discover latent needs, look for frustrations that the user experiences.They are often hiding behind workarounds that the users have adopted to make do with current solutions.Users may also simply be unaware of which alternatives are technically feasible and have come to except the limitations of current products.
Their experience with existing products also frames how they can articulate their needs (Leonard and Rayport, 1997; tinyurl.com/7qvfakd).Thus, for discovering latent needs, a different approach from surveying users is required.
User Intuit (intuit.com), the developer of the Quicken personal financial software, requires its developers to spend a few days each year shadowing new users using the software.From this exercise, not only does Intuit learn how to improve the documentation and usability of its software, it also gains insights into the environment in which users are using Quicken.One of the lessons for Intuit from its "Follow Me Home" program (tinyurl.com/32u7pxr)was that small business owners were using Quicken to keep their books.As a result of this observation, Intuit created the QuickBooks financial software product for small businesses, which allowed the company to enter a lucrative new market.
How are users solving their problem today?
Understanding how users help themselves when they face a problem also makes you aware of the alternative solutions available to them.Additionally, the Internet is an excellent resource for finding information about competing solutions, not only in terms of their features, but in terms of user feedback and the frustrations users experience using those competing solutions.Many entrepreneurs limit their attention to products that directly compete with their solution.Doing so, they fail to recognize what the user is trying to achieve, in other words, what job the user would be "hiring" their product to do (Christensen and Raynor, 2003: tinyurl.com/7n7x5rd;Christensen, 2006, tinyurl.com/mdazmc).
For example, if your product is a portable scanner, you might just be comparing it to other portable scanners on the market. However, your real competition may be far broader than originally conceived, but so are your solutions.
A new solution to a problem that the customer faces may involve another type of technology or an alternative approach. Solutions competing with a portable scanner include copiers (if one is nearby), the user's memory (often unreliable), pen and paper (slow and tedious), as well as a camera-equipped smartphone (a very viable alternative, as we will see).
What better ways are there to solve the problem?
What you bring to the table as an entrepreneur are skills and technical capabilities.When you learn about the customer's problem, you are actually constantly looking for opportunities to match your skills and technical capabilities to the user's needs.This process enables you to imagine solutions that users cannot conceive, given that their experience is limited to products that exist.Users may not be able to imagine solutions that are within your reach.In other words, you are a peddler of possibilities.
For example, users like our industrial designer may need to scan documents on the go.Existing solutions to this problem have been cumbersome (e.g., are difficult to use, force the user to carry an extra piece of equipment, require battery power, produce low-quality results, require transferring scanned images to other computers).Using a smartphone as a scanner is an effective alternative.It is a device users already carry with them, so no extra equipment is required.The user already keeps it charged regularly.Smartphones have built-in cameras that are often of high-enough quality to capture a sufficient level of detail.The functionality of a scanner can be emulated by an application on the smartphone.The smartphone solution makes a tradeoff between quality (high-resolution scans) and convenience (many devices in one).
How is your solution different from other solutions on the market?
However, it is not enough merely to solve the problem as effectively as other solutions. Your solution must excel in some dimensions. Look for points of difference that set you apart from your competition. In fact, if you are doing this well, what you want to emphasize are the points of difference where you demonstrate an intimate understanding of your customer. You can do this through a resonating focus on just the dimensions that matter most (Anderson et al., 2006: tinyurl.com/6tmrqvv; see also Shankar, 2012: timreview.ca/article/525, in the February issue of the TIM Review). The time you spent earlier, observing users and trying to understand their latent needs, will pay off handsomely now. The better you understand your customer, the better you will be able to identify just what features and attributes of your product matter to them most, which is why they will want to buy the product from you rather than your competition.
The first company to offer a smartphone application that effectively turns a smartphone into a portable scanner demonstrated a superior understanding of one of the most pressing user needs.Rather than innovating, as its competitors did, on dimensions that customers were well-aware of, such as modifying the design of a portable scanner so it can operate independently from a computer, this company recognized something important that had eluded its competitors.It understood that, for many users, carrying a separate piece of equipment that they did not use regularly, and keeping it charged at all times, was a major nuisance.This understanding could only be obtained by close observation of users in their working environment.Armed with the knowledge of the frustration that existing solutions created, the company was able to recalibrate the trade-off between quality and convenience in its favour.
Conclusion
This article described an approach that entrepreneurs can use to discover business opportunities. In summary, to learn about your (current or potential) users, answer these questions: 1. What problem are you solving for your user?
2. What frustrations do users experience with current solutions?
3. How are users solving their problem today?
4. What better ways are there to solve the user's problem?Do you have the required skills?
5. How is your solution different from other solutions on the market?
Your answers to the first three questions will tell you whether the problem is big enough to become the foundation of a new business.Your solution needs to be a significant improvement over the solutions currently available to users on the market.Your answers to the fourth and fifth questions will tell you whether the opportunity you discovered is something that you can and want to act on.If there is no match with your skills or future goals, the opportunity may not be the right one for you.Finally, your answer to the last question will give you insights into why users will buy the solution from you.If you are a new player, you cannot build on an existing relationship with your users, but you need to demonstrate a level of understanding of your users' needs that surpasses the competition.Once you have the answers to these questions, you are well-prepared to create a compelling customer value proposition, which will be the centrepiece of your business opportunity.
About the Author
Michael Weiss holds a faculty appointment in the Department of Systems and Computer Engineering at Carleton University, and he is a member of the Technology Innovation Management program. His research interests include open source business models, collective innovation, mashups and end-user development, product line engineering, and business patterns. Michael has published over 100 papers in conferences and journals.
"year": 2012,
"sha1": "b119868aa319c5917cfcb1115cc5f544d444e9fd",
"oa_license": "CCBY",
"oa_url": "https://timreview.ca/sites/default/files/article_PDF/Weiss_TIMReview_April2012.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "b119868aa319c5917cfcb1115cc5f544d444e9fd",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
249837198 | pes2o/s2orc | v3-fos-license | Perspectives on Wider Integration of the Health-Assistive Smart Home
: Most older adults desire to be as independent as possible and remain living in their ancestral home as they age. Aging-in-place maximizes the independence of older adults, enhancing their wellbeing and quality of life while decreasing the financial burden of residential care costs. However, due to chronic disease, multimorbidity, and age-related changes, appropriate conditions are required to make aging-in-place possible. Remote monitoring with smart home technologies could provide the infrastructure that enables older adults to remain living independently in their own homes safely. The health-assistive smart home shows great promise, but there are challenges to integrating smart homes on a larger scale. The purpose of this discussion paper is to propose a Design Thinking (DT) process to improve the possibility of integrating a smart home for health monitoring more widely and making it more accessible to all older adults wishing to continue living independently in their ancestral homes. From a nursing perspective, we discuss the necessary stakeholder groups and describe how these stakeholders should engage to accelerate the integration of health smart homes into real-world settings.
Introduction
Health systems around the world are faced with challenges including an explosion in the number of older adults. The proportion of the global population aged 60 and older is expected to nearly double, from 12% to 22%, between 2015 and 2050 [1]. Older adults typically desire to remain living in their ancestral homes for as long as possible [2,3]. However, complex interacting challenges such as chronic disease, multimorbidity, and age-related changes necessitate a certain level of oversight of the health and wellness of the older adult who is aging-in-place [4,5]. Many older adults will require specialized care [6] to manage chronic conditions, and many typically have one or more chronic conditions requiring ongoing medical management [4,5,7]. The Health Smart Home (HSH) could serve as the infrastructure that enables the convergence of independent living with health, wellness, and medical care. The HSH could be central to the development of new, innovative models of aged care with a focus on dignity, independence, autonomy, and the avoidance of institutionalization of older adults, which could enable cost-effective quality health outcomes [8][9][10][11].
Health-Smart Home
Smart homes for health monitoring are made possible by the Internet of Things (IoT) which connects a variety of electronic devices to enable the collection of large amounts of sensor-collected data which is then stored and processed using machine learning algorithms without human intervention [12]. Smart home sensors are described in various ways in the extant literature. They usually involve continuous, unobtrusive, in-home sensorbased health monitoring of social, behavioral, physical, and biological (e.g., vital signs) monitoring that is represented in activities of daily living (ADLs) which have been described as everyday activities such as sleeping, eating, toileting, and cooking [13]. Bennett, Roka, and Chen (2017) [14] define smart health care in the home as: "A home or dwelling with a set of networked sensors and devices that extend the functionality of the home. . . in the pursuit of improving the health and wellbeing of its occupants and assisting in the delivery of healthcare services" (p. 2). We will use the term HSH to refer to sensor technology that is embedded or deployed in the home environment (e.g., attached to ceilings, walls, furniture, appliances) to detect motion in persons living in private dwellings in the community or retirement villages with an emphasis on remotely monitoring the health and wellness of the occupant to detect changes to facilitate early intervention.
A primary goal for the HSH is forecasting and predicting potential changes in health because with this knowledge the clinical team can implement early interventions prior to a health crisis [15,16]. For example, studies have shown that smart home sensors can recognize clinically significant changes in ADLs, showing potential for the HSH to support pain management and monitor function, falls, and sleep [17][18][19][20]. Some smart home products that offer services to customers that provide alerts to family caregivers or medical service providers if changes in the baseline are detected are already available on the market [21]. Despite some smart home companies catering to the needs of the older adult end-user, the HSH is not currently designated as a medical device, and the full potential of artificial intelligence for precision health using this technology is yet to be realized and widely adopted by older adults and caregivers [21].
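The "alert on change from baseline" idea mentioned above is often implemented with fairly simple statistics before more sophisticated machine learning is applied. The Python sketch below is a generic illustration of that principle (a z-score of daily activity counts against a rolling baseline); the window length, threshold, and example values are assumptions for illustration and do not describe any specific commercial product.

```python
# Generic illustration of baseline-deviation alerting on daily ADL counts.
# Window length and threshold are illustrative assumptions.
from statistics import mean, stdev

def check_for_change(daily_counts, window=14, z_threshold=2.5):
    """Flag the most recent day if it deviates strongly from the rolling baseline.

    daily_counts: daily activity counts (e.g., kitchen sensor firings),
    oldest first, most recent day last.
    """
    if len(daily_counts) <= window:
        return False  # not enough history yet to form a baseline
    baseline = daily_counts[-(window + 1):-1]
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return daily_counts[-1] != mu
    z = (daily_counts[-1] - mu) / sigma
    return abs(z) > z_threshold  # True -> notify caregiver or clinical team

# Example: a sudden drop in kitchen activity might indicate reduced meal preparation.
history = [34, 31, 36, 33, 35, 32, 30, 34, 33, 36, 35, 31, 32, 34, 12]
if check_for_change(history):
    print("Deviation from baseline detected - review with the care team")
```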
Accelerating the Integration of the Health Smart Home
The World Health Organizations Global Strategy on Digital Health 2020-2025 guiding principles state that successful digital health initiatives require an integrated strategy [22].
The HSH should be viewed as part of the digital health ecosystem that can address the wider health needs of aging older adults.
Accordingly, there needs to be both coordination and collaboration among different stakeholders. Although interdisciplinary and multidisciplinary collaboration are not new concepts in smart home research, nor in aging research, the fruits of this collaboration remain elusive. For example, the integration of the HSH as a standard intervention for older adults living at home with chronic disease and age-related changes is currently not evident. This could be because older adults may not be sufficiently informed of the possibilities of the HSH, but the gross lack of adoption reveals that there are larger issues [23]. Given the HSH's potential to support positive health outcomes and extended independence for older adults and its potential to support the healthcare system by providing data about patients' health during complex and unprecedented challenges like COVID-19, more widespread adoption would be expected. In addition, there is little evidence that new models of home care such as those provided by HSH systems are being introduced to frontline health professionals to prepare them for clinical practice settings where home monitoring technologies are used.
In response to this perplexing situation, we reached out to multiple stakeholders including nurses with home care expertise to elicit their thoughts on how our disciplines can work together to advance the use of HSH data and the wider adoption of the technology. With this paper, we aim to stimulate a discussion leading to meaningful advancement of the HSH's use for extending older adults' independence. We propose a Design Thinking (DT) process for specific stakeholder groups that we (as nurses) believe should be engaged to accelerate the integration of the HSH into real-world settings.
Design Thinking
There are many types of DT processes, models, and frameworks that can be used to develop products and solutions [24]. We prefer the DT framework used by the Hasso Plattner Institute of Design at Stanford, which includes five design phases: empathize, define the problem, ideate, prototype, and test. According to Auernhammer and Roth (2021), DT is made possible because it is based on psychological theories that integrate creative and human values. DT has been described as a strategic big-idea process that results in a product that addresses or solves a problem [25]. DT includes gaining a deep understanding of end-users' perspectives, requirements, and needs. DT processes have not been used to the full extent possible in the development of smart homes [26]. Thus, we propose such a process below.
Whilst user-centered design is important [27], it is futile if the HSH is not accessible to the population it was designed for or if it cannot be integrated into everyday clinical aged care. User-centered design models are the most widely accepted and may have, at least in part, contributed to the current low uptake. These designs have not holistically considered accessibility (financial, acquisition), wants versus needs, privacy concerns, and end-user support and maintenance [23,28,29]. These design models may also not take into consideration that the end-user is likely not the primary barrier to wider integration [30].
It is noteworthy that, whilst there is potential for wider integration, to our knowledge the HSH is currently not in the toolkit of healthcare professionals or medical service providers (e.g., home care, primary care clinics) for routine recommendation to older adults who have chronic disease, multimorbidity, and decreased function and who live alone; nor has it become part of new models of home care post-hospitalization or been used as a routine intervention in retirement apartments to ensure the health, wellness, and safety of occupants. From a gerontologic nursing perspective, both wider accessibility and integration of the HSH into everyday clinical aged care are required for the HSH to be widely adopted. We hypothesize that the lack of wide integration is likely due to certain stakeholder groups not being included in the design process from the beginning. We believe the stakeholders highlighted here should be involved in the design. However, practically speaking, it is impossible to get all stakeholders together (across disciplines, industries, and patient populations) at the same design table for every meeting at every phase.
In Figure 1, we present a stakeholder-based DT process aimed at facilitating wider integration of the HSH so older adults can be supported in their quest to live independently in their ancestral home for as long as possible. As Auernhammer and Roth (2021) indicate, moving from product design to product integration requires fluency in thinking and flexible approaches. Product development requires the capabilities and dynamic innovative synergy of a variety of stakeholders and a robust design culture capable of removing hindrances or barriers from the team [24].
Stakeholder Groups
It is well-known that a variety of disciplines and stakeholder groups must collaborate to provide quality care to older adults [31]. However, to facilitate the integration of HSHs in everyday clinical practice, wider groups of stakeholders must be included in the DT processes ( Figure 1). From a nursing perspective, we believe that certain stakeholder groups are key to each design phase (Table 1). Stakeholders need to begin by empathizing with older adults living in poverty, suffering from chronic disease, multimorbidity, and natural age-related changes who wish to remain living in their ancestral home. This phase could be achieved using world-café methods to generate dialogue which should be thematically analyzed.
Next, the stakeholders need to begin defining the problem and analyzing the associated root causes. This could be achieved through blended activities such as brainstorming, diagrammatic illustrations, and drawing on the expertise and knowledge of stakeholders. After ideas are generated using humanistic and creative processes, a multi-faceted set of solutions and accompanying mechanisms can be developed and approaches to testing can be determined. To facilitate the DT process, the relationship between all primary stakeholder groups needs to be transparent and collaborative (Figure 2). Members of the DT team should be aware that some stakeholder groups may be in closer proximity to the older adult and family caregiver at the microsystem level, whereas other stakeholders such as health care policymakers, whilst important, may be situated at the macrosystem level. The stakeholder group's role in the DT process should be commensurate with where in the system the stakeholder groups are situated, and their input and contribution may vary in intensity depending on the phase they are most critically needed for in the DT process.
Older Adult, Family Caregivers & Frontline Healthcare Professionals
As the older adult population increases and alternative aging-in-place solutions emerge, it is important that HSHs are designed for the unique needs of the older adult end-user [32]. The older generation may not be universally comfortable with technology, leading to varied perceptions about living with sensor-based monitoring [23]. Older adults and their family caregivers need to be better informed about how the HSH could support older adults to remain living independently, and HSHs should be designed to consider user preferences, add value, and not impinge on quality of life or privacy [33]. Frontline health care professionals such as nurses, recognized as a trusted profession [34], are optimally positioned to provide education about the HSH to older adults and family caregivers and to advocate for the implementation of the HSH [35]. Community-based nurses are familiar with a person's medical history, psychosocial context, desires, and needs and can use this holistic view to make recommendations for the HSH to doctors on behalf of the older adult. Accordingly, front-line health care professionals like nurses, as well as older adults and family caregivers, must become valued members of the DT process, with opportunities to participate in every DT phase [36].
The readiness of HSH adoption among older adults is questionable, and more studies are needed. Previous studies have reported that older adults have limited knowledge of how the HSH works and how it can be used to support them to age-in-place [23,29]. However, others suggest that whilst older adults also have concerns about maintaining privacy, they are willing to trade perceived loss of privacy for independence if they can remain living in their own home with the HSH [23,[37][38][39]. The utilization of HSH technology to support family caregivers' roles in providing care for older adults is also important and could reduce the family caregiver burden [40].
However, more research involving both family caregivers and older adults is needed, because they may have different perceptions about the factors that influence their readiness to adopt the HSH [41]. In addition, family caregivers may be influential in supporting the adoption of the HSH. Accordingly, the lived experience, desires, and needs of both older adults and family caregivers, and how the HSH could support them, are critical knowledge for the DT process; this knowledge can help overcome the barriers impeding the adoption of HSHs, guide knowledge development, and create pragmatic solutions to make the technology more acceptable and adaptable.
Multidisciplinary Research and Academic or Educator Roles
Some healthcare professionals work in academic institutions to educate the next generation of healthcare professionals and conduct research, and some are educators in a variety of healthcare settings [42]. Whether the educator role is an academic role or a role in a health care setting, these professionals are optimally positioned to educate students, clients, and family caregivers about how the HSH works, the benefits, and how it can be accessed. This stakeholder group is positioned to develop a curriculum for the allied health professions and develop age-appropriate educational material for older adults and family caregivers which could support the integration of the HSH. The engagement of these stakeholder groups in defining the problem and idea generation is critical because the university environment is a natural and optimal place to explore novel solutions to real-world problems.
Social Workers
Because the home environment sets the stage for the older person's health and wellbeing, social workers could be instrumental in assessing the complex social needs of this population by ensuring social justice and social program access [43].
Therefore, social workers are critical at the idea phase as they can share insights about the pathways that can make the HSH accessible to older adults living in poverty. However, to our knowledge, no HSH development teams include social workers. These insights may be missed if this stakeholder group is not included. Social worker stakeholders should also collaborate with governmental and non-governmental agencies to develop policies or services which can support the creation of health care policies and the integration of the HSH.
Health Care Organizations
Health care organizations that provide home services and operate primary care clinics routinely care for older adults. Health care professionals that work for these providers follow the model of care required by the health care organization [44,45]. If the HSH is not integrated into the toolkit of health care professionals to be used in everyday practice, it is unlikely that a health care professional would recommend the HSH to their patients/clients. If the HSH is to be financially paid for by private or public insurance, medical doctors, nurse practitioners, and other allied health care professionals with prescribing ability may need to "prescribe" the HSH as a medical device. In addition, mechanisms to make the smart home data accessible to health care professionals to support care coordination would have to be created.
Whilst health care professionals could use the smart home data to identify changes in ADLs that may warrant early interventions, integration into the existing electronic medical record system and a dashboard to visualize that data with the healthcare professional in mind is still needed. Currently, health care professionals do not receive education on the use of smart home sensor data to augment clinical decision-making.
Accordingly, health care professionals need training and practical tools, work processes, and policies for the HSH to be integrated into in-home care services, primary care clinics, and similar settings that provide home services and telehealth to older adults. Many care costs are managed within healthcare organizations, which makes these organizations a key partner in defining the problem. Cost-effective solutions will be more widely adopted, and the systems (and the people within them) that manage those costs understand where solutions should be applied.
Aged Care Industry
The aged care industry provides a range of services and support to older adults wherever they live, including home care, aged care villages, long-term, and respite care. In 2018, more than 1.2 million people received aged care services in Australia alone, with most (77%) receiving support in their home or other community-based settings. Over 70% of older Australians live at home and will require clinical and supportive services at some point [11]. Insights on the global aged care market confirm that home-based care dominates the markets across not only Australia but also North America, Europe, Asia Pacific, Latin America, the Middle East, and Africa [46]. Importantly, many aged care organizations are early adopters of novel technologies as they can see the potential uses for their consumers. Despite its potential, the aged care sector has struggled to integrate innovative technologies into the aged-care system [47]. Whilst the aged care industry is a key stakeholder in the adoption of HSH technology across the aging spectrum, digital immaturity may inhibit some aged care providers from engaging with HSH technology. For example, one report found that cost, concerns about data privacy and security, and questions about the reliability and validity of the HSH technology are barriers to adoption among some aged care providers [48,49].
Partnering with these organizations to define the problem will be key to developing technology solutions that will be widely adopted. Likewise, these organizations are an optimal environment within which to test prototype solutions.
Computer Science & Electrical Engineering Teams
Researchers such as electrical engineers and computer scientists, along with start-ups and big organizations (e.g., Google), are leading development and research in the smart home sensor technology space [15,50]. Although computer scientists use sophisticated computer-based modeling techniques and statistical analysis to understand sensor data of older adults in the home, they need health care professionals to provide real-world context about the meaning of the data. Without this context, it is difficult to produce reliable HSH monitoring and interventions [16]. Data visualization dashboards that make sense for health care professionals and can integrate with existing electronic health record systems are needed for the HSH to be integrated into healthcare settings. The consistent engagement of electrical engineers and computer scientists with health care professional stakeholders will be useful for further innovation and machine learning [16,17]. Collaboration with both start-up and established companies is needed to manufacture and market the most effective and user-friendly HSH system at the lowest possible cost. This stakeholder group is critical to every DT phase but should focus on more robust engagement in the empathizing phase. Nursing collaborators and other frontline caregivers are optimally positioned to facilitate first-hand empathizing experiences for the engineering group.
Companies and Start-Ups
This stakeholder group is critical at all phases. Companies usually include business strategists that can establish the business case for the HSH, implement the design plan effectively, and highlight its successes [21].
While the cost-benefit seems obvious to the health discipline stakeholders, making a positive business case to health policymakers and health care organizations can be challenging [51]. Collaborating with health care economists could prove useful in establishing a positive business case which can be evaluated during the testing phase. In addition to supporting the need for the HSH, business strategists establish timelines and address logistical considerations such as project schedules, team dynamics, and change management to optimize each HSH rollout to the health care market.
However, academic and commercial HSH development without front-line health care professionals will likely result in limited clinical application and insufficient efficacy, constraining how much can be achieved to support aging-in-place, quality of care, safety, and quality of life for the older adult, and ultimately slowing widespread adoption and real-world integration [35,52]. As noted earlier, a primary goal for the HSH is forecasting and predicting potential changes in health so that the clinical team can implement early interventions prior to a health crisis [15,16], with smart home sensors shown to recognize changes in ADLs relevant to pain management, function, falls, and sleep [17][18][19][20]. The consistent integration of stakeholders, including technical and clinical teams, has a symbiotic effect, supporting the development of HSHs that can predict potential changes in health and mitigate or eliminate health problems; this has implications for health care utilization and can subsequently be used to make a positive business case for scaling up.
Healthcare Policy Makers and Special Interest Groups
Health care policies at the government level are needed to support the development of mechanisms that will facilitate new models of digitally enhanced care [45].
Government policies are needed for the integration of the HSH into everyday clinical practice and to ensure that all older adults, regardless of financial means, have access to the HSH if they desire to use it to support aging-in-place. Policies and procedures governing the use, operationalization, and expectations of HSHs must be articulated to provide end-users with realistic expectations of the HSHs' capabilities. This is important to protect older adults and their families and to support informed decision-making. Because health care policymakers function at the macrosystem level, they should leave their helicopter view and engage in the empathize phase, which will be a driver for taking policy recommendations forward during the define, ideate, prototype, and test phases.
There are a variety of special interest groups that could be helpful to disseminate information about the HSH to older adults and family caregivers during the testing phase. Special interest groups such as the Alzheimer's Association or Dementia Australia could also use their influence to advocate for specific HSH requirements for certain sub-populations, for example, older adults with dementia. Special interest groups are important stakeholders in the testing phase because they can use successful outcomes obtained in the testing phase and influence the development and implementation of health policies [53] which could enable and solidify mechanisms for access and coverage of the HSH.
Master System Integrator
The smart building or technical design stakeholders have a critical role in the implementation of technology within the built environment, assessing current systems and stakeholder needs to develop a realistic gap analysis-driven approach to getting from the current state to a desired HSH future state. The technology specialist determines how software and hardware components should be architected to satisfy the requirements captured from stakeholder groups [54].
A Master System Integrator (MSI) should be incorporated into the DT process during the prototype and testing phases because this role brings the depth and breadth of knowledge required to design and manage smart building systems on both a short- and long-term basis. The multiskilled MSI is described as the "glue" and could work closely with all stakeholder groups to support the integration of the HSH into real-world settings such as private homes and retirement villages by connecting the multidisciplinary team members with trade teams for effective design, implementation, and evaluation [55].
Implications for Design Thinking for Real-World Integration of the HSH
A DT process that includes critical stakeholders could be effective in finding solutions and providing equitable access to the HSH. However, there are advantages and limitations to using this approach. The advantage of consistent engagement in a DT process is that each relevant stakeholder group is able to give feedback and contribute equally through humanistic and creative practices. Importantly, older adults, family caregivers, and front-line healthcare professionals will be able to provide valuable insights on wide integration and the issues surrounding accessibility. In addition, using a DT approach with the aim of HSH integration could also help address global health system challenges such as workforce supply issues and pandemics, and support cost-effective, individualized care without overburdening the family caregiver. Finding solutions to make the HSH widely accessible is a complex undertaking, as it is embedded in social and healthcare policy situated at the macrosystem level. Healthcare and social policies are needed to enable the development of mechanisms that support HSH integration into everyday care approaches.
Yet, the enactment of healthcare policy to drive HSH development and to provide wide access should not be viewed as a panacea for the wider adoption of the HSH. For example, whilst a policy-driven approach in China has indeed led to rapid development and scaling up of HSH to meet the older populations' needs, the demand for HSH by older adults has remained very low [29]. Primary concerns include older adults' lack of knowledge about the HSH, limited understanding of how this technology can support them, and concerns about cost and user-friendliness. Accordingly, potential barriers impeding HSH adoption should be examined, and addressed in tandem with the DT process to overcome these barriers [23,29,56]. Whilst most DT processes are time-limited, the complexity of making the HSH widely accessible will likely require a greater time commitment which may result in inconsistent stakeholder input, which could fragment the process. Another limitation of using the DT process is that it may be challenging to find organizational commitment that is also matched with a design culture and capabilities that can pragmatically support the process and remove barriers that may hinder the DT process and the actions that stakeholders may need to take.
Finally, further discussion is needed regarding how the integration of the HSH will be tested, and the type of health outcomes that could be measured to determine the effectiveness of the HSH in terms of achieving positive health outcomes on a population's health level. More research is needed to understand the readiness for adoption of the HSH in older adults and their family caregivers, and pragmatic clinical trials are needed to determine the value of the HSH.
Conclusions
The development of the HSH does not end with a final product but continues with the integration of the HSH into the lives of everyday older adults who desire to remain living in their ancestral home as they age. Whilst the HSH can make this desire a reality, equitable access for all older adults, regardless of financial means, and scalability can only be achieved if critical stakeholder groups use a process like DT when developing the necessary mechanisms that make access for all older adults possible. | 2022-06-19T15:14:11.975Z | 2022-06-16T00:00:00.000 | {
"year": 2022,
"sha1": "2fe2575b1da3a1ef03e6d560374f7992142d2ef0",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2673-9259/2/2/13/pdf?version=1655373839",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "bd5b0f972d426293db4c0c5a6a4a255b556007ed",
"s2fieldsofstudy": [
"Medicine",
"Engineering"
],
"extfieldsofstudy": []
} |
261540520 | pes2o/s2orc | v3-fos-license | ALK, ROS1, RET and NTRK1–3 Gene Fusions in Colorectal and Non-Colorectal Microsatellite-Unstable Cancers
This study aimed to conduct a comprehensive analysis of actionable gene rearrangements in tumors with microsatellite instability (MSI). The detection of translocations involved tests for 5′/3′-end expression imbalance, variant-specific PCR and RNA-based next generation sequencing (NGS). Gene fusions were detected in 58/471 (12.3%) colorectal carcinomas (CRCs), 4/69 (5.8%) gastric cancers (GCs) and 3/65 (4.6%) endometrial cancers (ECs) (ALK: 8; RET: 12; NTRK1: 24; NTRK2: 2; NTRK3: 19), while none of these alterations were observed in five cervical carcinomas (CCs), four pancreatic cancers (PanCs), three cholangiocarcinomas (ChCs) and two ovarian cancers (OCs). The highest frequency of gene rearrangements was seen in KRAS/NRAS/BRAF wild-type colorectal carcinomas (53/204 (26%)). Surprisingly, as many as 5/267 (1.9%) KRAS/NRAS/BRAF-mutated CRCs also carried tyrosine kinase fusions. Droplet digital PCR (ddPCR) analysis of the fraction of KRAS/NRAS/BRAF mutated gene copies in kinase-rearranged tumors indicated that there was simultaneous co-occurrence of two activating events in cancer cells, but not genetic mosaicism. CRC patients aged above 50 years had a strikingly higher frequency of translocations as compared to younger subjects (56/365 (15.3%) vs. 2/106 (1.9%), p = 0.002), and this difference was particularly pronounced for tumors with normal KRAS/NRAS/BRAF status (52/150 (34.7%) vs. 1/54 (1.9%), p = 0.001). There were no instances of MSI in 56 non-colorectal tumors carrying ALK, ROS1, RET or NTRK1 rearrangements. An analysis of tyrosine kinase gene translocations is particularly feasible in KRAS/NRAS/BRAF wild-type microsatellite-unstable CRCs, although other categories of tumors with MSI also demonstrate moderate occurrence of these events.
Introduction
Microsatellite instability (MSI), being a consequence of deficient mismatch repair (dMMR), is manifested by multiple mutations affecting repetitive DNA sequences [1,2]. MSI may occur in tumors associated with Lynch hereditary cancer syndrome. These malignancies arise in subjects with inherited pathogenic variants in the MLH1, MSH2, MSH6, PMS2 or EPCAM genes, and their development involves a somatic second-hit inactivation of the involved member of the dMMR pathway [3]. MSI is also characteristic of some sporadic malignancies, being attributed to MLH1 promoter hypermethylation [4]. Microsatellite-unstable carcinomas have an increased tumor mutation burden (TMB) and are responsive to inhibitors of immune checkpoints [5,6].
MSI is particularly common for colorectal carcinomas, with approximately 5-15% of tumors displaying this phenotype. Microsatellite-unstable CRCs often carry mutations leading to MAPK signaling pathway activation, particularly amino acid substitutions in the KRAS, NRAS and BRAF genes [4,7]. Somewhat unexpectedly, MSI-CRCs were repeatedly shown to contain rearrangements in genes encoding receptor tyrosine kinases [2,[8][9][10]. A recent study revealed that these translocations are related to an increased frequency of mutations within the G:C-rich intronic regions of the involved genes [11]. There are several drugs targeting ALK, ROS1, RET and NTRK1-3 tyrosine kinases; therefore, the detection of these gene fusions is of high clinical importance [10,12]. In addition to CRC, MSI is characteristic of several non-colorectal cancer types, particularly gastric and endometrial carcinomas [13,14]. It has not yet been systematically studied whether tyrosine kinase gene rearrangements occur at noticeable frequencies in microsatellite-unstable cancers arising in organs other than the colon.
A comprehensive analysis of ALK, ROS1, RET and NTRK1-3 gene rearrangements requires RNA-based next generation sequencing, which is an expensive technique [2]. We have developed an efficient laboratory screening procedure for ALK, ROS1, RET and NTRK1-3 translocations, which is largely based on an analysis of 5′/3′-end unbalanced expression of these genes [15,16]. When the gene is not affected, the numbers of transcripts corresponding to its kinase portion and to the upstream nucleotide sequences are equal. ALK, ROS1, RET and NTRK1-3 rearrangements usually result in fusion of the kinase domain to an actively transcribed gene. In the latter case, the expression of the kinase-domain-related portion of the receptor tyrosine kinase is elevated as compared to sequences located upstream of the breakpoint. This pipeline, coupled with the identification of the ALK, ROS1, RET and NTRK1-3 fusion variants, allows for the analysis of a large number of tumor samples.
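As a rough illustration of the screening logic described above, the following Python sketch converts paired qPCR cycle-threshold (Ct) readings for the 5′- and 3′-end assays of one gene into an approximate 3′/5′ transcript ratio and a provisional imbalance call. The cutoff of three cycles (roughly an eight-fold difference) and the example Ct values are illustrative assumptions; the published assays were calibrated per gene and are described in the cited methodology papers [15,16].

```python
def expression_imbalance(ct_5prime, ct_3prime, delta_ct_cutoff=3.0):
    """Crude 5'/3'-end imbalance call from qPCR cycle-threshold (Ct) values.

    Lower Ct means higher expression, so a 3'-end (kinase-domain) Ct that is
    much lower than the 5'-end Ct suggests the kinase portion is transcribed
    from a fusion partner's promoter. The cutoff of ~3 cycles (~8-fold) is
    illustrative only and not taken from the article.
    """
    delta_ct = ct_5prime - ct_3prime          # positive => 3' end over-represented
    fold_ratio = 2 ** delta_ct                # approximate 3'/5' transcript ratio
    return {"delta_ct": delta_ct,
            "fold_3prime_over_5prime": round(fold_ratio, 1),
            "imbalanced": delta_ct >= delta_ct_cutoff}

# Hypothetical NTRK1 readings: 5' probe Ct 34.2, 3' (kinase) probe Ct 27.8
print(expression_imbalance(34.2, 27.8))
```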
In addition to the above-mentioned well-known gene rearrangements, we performed an analysis of novel fusions (BCR::PKHD1 and CLIP1::LTK) that were recently identified in lung carcinomas [17,18]. None of the microsatellite-unstable carcinomas carried these translocations.
Somewhat surprisingly, as many as 5/267 (1.9%) CRCs with activating mutations in the KRAS, NRAS or BRAF genes were found to have kinase gene rearrangements (Figure 2, Table 1). In addition, the gastric tumor with the TPM3::NTRK1 (T8;N10) rearrangement also simultaneously carried the p.G12C mutation in the KRAS oncogene. Activating genetic lesions in genes involved in the MAPK pathway are usually mutually exclusive; therefore, the coincident occurrence of gene fusions and KRAS/NRAS/BRAF mutations is intriguing. This coincidence may occur due to the presence of the above events in distinct cells, i.e., the mosaicism of activating genetic lesions, or due to the simultaneous occurrence of two mutations in the same cell. We evaluated the fraction of KRAS/BRAF-mutated cells in these tumors using droplet digital PCR (Supplementary Table S2). The obtained data were highly concordant with the results of the visual inspection of the slides, thus suggesting that KRAS/BRAF mutations are not mosaic but present in all tumor cells. It is noteworthy that all but one of the above-described tumors with tyrosine kinase rearrangements showed 5′/3′-end unbalanced expression of the affected gene; this imbalance could not be detected if only minor fractions of the tumor cells carried a fusion (Table 1). Furthermore, the only tumor with a non-altered expression pattern carried a translocation in the NTRK3 gene; as mentioned above, alterations in this kinase are not always accompanied by changes in the 5′/3′-end transcript ratio.
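A minimal sketch of the reasoning behind the ddPCR check is given below: under the assumption of a diploid locus and a heterozygous point mutation, the variant allele fraction can be converted into an estimated fraction of mutation-carrying tumor cells and compared with the histologically assessed tumor cell content. The droplet counts and tumor cell content used here are hypothetical, and copy-number alterations would bias the estimate.

```python
def mutant_cell_fraction(mutant_droplets, wildtype_droplets, tumor_cell_content):
    """Rough estimate of the fraction of tumor cells carrying a heterozygous
    KRAS/BRAF point mutation from droplet digital PCR counts.

    Assumes a diploid locus and one mutated allele per mutated cell, so the
    variant allele fraction (VAF) is doubled and corrected for the
    histologically estimated tumor cell content. Copy-number changes would
    bias this estimate; the numbers below are hypothetical.
    """
    vaf = mutant_droplets / (mutant_droplets + wildtype_droplets)
    fraction_of_tumor_cells = min(1.0, 2 * vaf / tumor_cell_content)
    return round(fraction_of_tumor_cells, 2)

# Hypothetical sample: 1,900 mutant vs. 6,100 wild-type droplets, 50% tumor cells.
# A value close to 1.0 argues for a truncal (non-mosaic) mutation co-occurring
# with the kinase fusion, in line with the conclusion drawn from Supplementary Table S2.
print(mutant_cell_fraction(1900, 6100, 0.50))
```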
MSI Analysis in Non-Colorectal Tumors Carrying ALK/ROS/RET/NTRK Rearrangements
This and other studies have demonstrated that a subset of microsatellite-unstable tumors carries tyrosine kinase gene fusions. We questioned whether the same overlap between these two events is observed when MSI testing is applied to non-colorectal kinase-rearranged carcinomas. We utilized pentaplex panel testing for 23 lung carcinomas, 23 sarcomas, 4 thyroid carcinomas, 3 salivary gland tumors, and 3 pancreatic carcinomas with oncogenic gene translocations (ALK: 31; ROS1: 10; RET: 9; NTRK1: 2; NTRK3: 4). None of the above tumors demonstrated microsatellite alterations.
Discussion
This is apparently the largest single-center study to systematically evaluate gene rearrangements in microsatellite-unstable tumors belonging to various cancer types. It has been confirmed that ALK, RET and NTRK1-3 gene rearrangements are very frequent in KRAS/NRAS/BRAF mutation-negative CRCs, especially in patients aged above 50 years, although these events may also occur in colorectal tumors carrying RAS/RAF mutations as well as in non-colorectal MSI-positive cancers.
The results of this report are consistent with a recent study by Madison et al. [11], who demonstrated that the emergence of gene rearrangements in CRCs is attributed to the mutagenic effects of microbiota-derived butyrate and the consequent generation of 8-oxoguanine. These data convincingly explain the significant differences in the incidence of gene rearrangements between colorectal and non-colorectal malignancies. However, moderate frequencies of tyrosine kinase fusions were observed in gastric and endometrial tumors, suggesting that microsatellite-unstable cells may acquire translocations even in the absence of the influence of gut microbes.
Our report confirms that alterations in the genes belonging to the MAPK pathway are generally mutually exclusive. Indeed, activation of a single member of this molecular cascade, be it a receptor tyrosine kinase or the KRAS, NRAS or BRAF oncogene, is usually sufficient to drive the signaling. Furthermore, the above-mentioned genetic alterations are usually considered to be more or less equivalent in terms of phenotypic consequences [7]. Interestingly, despite this apparent equivalence, the frequencies of alterations in particular genes vary considerably between MSI-positive and MSI-negative CRCs. Microsatellite-unstable CRCs have approximately half the frequency of KRAS mutations but an approximately four times higher incidence of BRAF mutations as compared to MSI-negative tumors [7,19]. Furthermore, while kinase gene rearrangements are common in CRCs with MSI, they are exceptionally rare in MSI-negative colorectal carcinomas [11]. Overall, the cumulative frequency of activation of the MAPK cascade is similar in CRCs with and without MSI; for example, 320/471 (67.9%) microsatellite-unstable CRCs analyzed in this study had evidence of genetic alteration of the MAPK pathway (Figure 2), which is very close to the estimates obtained for microsatellite-stable colorectal carcinomas [19]. The remaining 30-40% of CRCs do not have overt genetic alterations within this signaling cascade and deserve further investigation [7]. Interestingly, the analysis of our dataset revealed a few instances of the simultaneous occurrence of genetic events affecting two distinct oncogenes. Several prior investigations produced similar examples [19][20][21]. Single-cell sequencing may permit reliable discrimination between mutation mosaicism and the true co-occurrence of several MAPK activating events within the same cell [22]. Here, we utilized ddPCR for a rough analysis of the fraction of KRAS/BRAF-mutated cells, which strongly suggested that these mutations are truncal but not mosaic (Supplementary Table S2). Interestingly, there were two CRCs with simultaneous occurrences of BRAF p.V600E mutations and NTRK translocations. These tumors are unique with regard to clinical opportunities, as they have three highly actionable targets (MSI for immune therapy, BRAF p.V600E substitution for combined EGFR/BRAF inhibition and NTRK activation for the use of TRK inhibitors) [4,7,12].
Gene fusions occurred at high frequencies in CRC patients aged above 50 years, but were uncommon in younger subjects (Supplementary Tables S3 and S4). This age threshold is commonly utilized for discrimination between sporadic CRCs, which develop microsatellite instability due to somatic hypermethylation of the MLH1 promoter, and hereditary CRCs, which are attributed to the biallelic mutation-driven inactivation of DNA mismatch repair genes [3,4,7]. In this respect, tyrosine kinase translocations are similar to the BRAF p.V600E mutation, which is a validated marker for the exclusion of microsatellite-unstable CRCs from Lynch syndrome germline testing [3,7].
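The age-group comparison reported above is a simple 2×2 contingency problem. The snippet below shows how such a comparison can be evaluated with a Fisher's exact test in Python; the article does not state which test was used, so the resulting p-value is only expected to be of the same order as the reported figure, not identical to it.

```python
from scipy.stats import fisher_exact

# 2x2 table for CRC patients above vs. at/below 50 years of age:
# rows = age group, columns = (fusion-positive, fusion-negative)
table = [[56, 365 - 56],   # >50 years: 56 of 365 tumors carried a fusion
         [2, 106 - 2]]     # <=50 years: 2 of 106 tumors carried a fusion

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio ~ {odds_ratio:.1f}, p = {p_value:.4g}")
```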
The spectrum of gene fusions in microsatellite-unstable tumors has some characteristic features. NTRK1-3 gene fusions are relatively frequent in some pediatric tumors and sarcomas, although they are exceptionally rare in common cancer types [23][24][25]. However, NTRK1-3 rearrangements compose the majority of the gene translocations observed in MSI-positive carcinomas. Similar to other tumor types [25], TPM3::NTRK1, EML4::NTRK3 and ETV6::NTRK3 fusion variants represented the majority of NTRK1-3 translocations identified in this study. ALK and RET gene rearrangements are particularly frequent in lung carcinomas. While common translocation variants constitute over 90% of ALK fusions in pulmonary malignancies [26], only one such rearrangement was detected in our dataset (EML4::ALK (E6;A20)). Similarly, KIF5B::RET fusions represent more than 70% of RET alterations in lung cancers [16]; however, none of the RET-rearranged microsatellite-unstable tumors carried this variant.
This study has some limitations. In particular, we did not have survival data for the patients; therefore, we could not evaluate whether the presence of gene rearrangements affects disease outcomes. MSI detection was based on a standard PCR protocol. Although this approach is reliable for the detection of tumors with highly unstable microsatellite repeats, it may miss a subset of mismatch repair-deficient carcinomas, which have low rates of cell proliferation and, therefore, low numbers of mutations in mononucleotide tracts [5,27]. The discordance between the immunohistochemical (IHC) evaluation of MMR proteins and the PCR analysis of MSI is mainly observed for tumors arising outside the gastrointestinal tract [5,27]. Therefore, given the preferential occurrence of gene fusions in CRCs, it is highly unlikely that these MMR-deficient but microsatellite-stable tumors would carry kinase-activating rearrangements.
This study detected 31 samples with 5′/3′-end unbalanced expression, in which variant-specific PCR failed to identify known gene rearrangements. Only 13 of these tumors were available for NGS, and all these samples contained rare fusion variants. It is highly likely that the majority of the remaining 18 tumors (ALK: 2; RET: 3; NTRK1: 5; NTRK3: 8) also carry uncommon types of rearrangements in the mentioned genes. The poor availability of microsatellite-unstable tumor samples for RNA-based NGS is attributed to the study design. The standard procedure for sample processing in our laboratory involves the simultaneous isolation of DNA and RNA, followed by cDNA synthesis in the same tube. This protocol is not compatible with subsequent RNA-based NGS sequencing; therefore, we needed to retrieve the tissue samples and subject them to a new round of RNA isolation. This effort turned out to be inefficient, as the majority of archival blocks were returned to the primary hospital immediately after the completion of standard molecular testing. This described drawback can be easily resolved if the aliquot of the DNA/RNA sample is stored without subsequent cDNA synthesis and, therefore, used for NGS whenever necessary. We have now incorporated this amendment in the processing of those tumors that may potentially require NGS testing. It also has to be acknowledged that the administration of ALK and NTRK inhibitors is not necessarily based on the identification of particular translocation variants, as the FISH assay is an approved method for the analysis of these genes [28]. Overall, 5′/3′-end unbalanced expression very rarely produces false-positive results, i.e., virtually all tumors identified by this assay indeed carry gene rearrangement [15,16]. Therefore, in theory, the results of 5′/3′-end unbalanced expression per se may be sufficient to guide therapy in some circumstances.
Materials and Methods
The analysis of MSI was applied to 14,111 CRCs, 1756 gastric carcinomas and 506 endometrial carcinomas, which were diagnosed on the basis of current World Health Organization (WHO) classification [29] and referred for molecular analysis to the N.N. Petrov Institute of Oncology (St. Petersburg, Russia) within the years 2013-2023. The majority of the CRCs included in this study, as well as a subset of GCs, were analyzed for mutations in the KRAS, NRAS and BRAF oncogenes [19]. In addition, MSI testing was applied to 459 cervical carcinomas, 64 cholangiocarcinomas, 339 ovarian cancers and 474 pancreatic cancers. In the years 2013-2021, microsatellite analysis was largely based on the use of a single marker, BAT26, given the evidence for its high accuracy for MSI detection [30]. In the years 2022-2023, this assay was replaced by the standard MSI test involving 5 markers (BAT25, BAT26, NR21, NR22 and NR24). MSI analysis was generally performed using tumor tissues only; in exceptionally rare instances of ambiguous results, corresponding normal cells were utilized for comparison. The primers and probes for these assays are described in Supplementary Table S5. For the pentaplex panel, tumors with two or more shifts were classified as MSI-positive [31]. Capillary electrophoresis was performed using the GenomeLab GeXP Genetic Analysis System (Beckman Coulter, Brea, CA, USA) or the Nanophore-05 instrument (Syntol, Moscow, Russia).
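The pentaplex classification rule quoted above (two or more shifted markers define an MSI-positive tumor) can be expressed compactly. The following sketch is a schematic illustration; the MSI-low and microsatellite-stable labels are added for completeness and are not taken from the article.

```python
def classify_msi(marker_shifts):
    """Classify a tumor from pentaplex mononucleotide-repeat results.

    marker_shifts maps each marker (BAT25, BAT26, NR21, NR22, NR24) to True if
    a length shift relative to the reference allele pattern was observed.
    Following the rule used in the study, two or more shifted markers are
    called MSI-positive; the remaining labels are a simplification.
    """
    shifted = sum(bool(v) for v in marker_shifts.values())
    if shifted >= 2:
        return "MSI-high (MSI-positive)"
    if shifted == 1:
        return "MSI-low"
    return "microsatellite stable"

example = {"BAT25": True, "BAT26": True, "NR21": False, "NR22": True, "NR24": False}
print(classify_msi(example))  # -> MSI-high (MSI-positive)
```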
The detection of ALK, ROS1, RET and NTRK1-3 rearrangements was carried out for 619 microsatellite-unstable tumors. The methodology of the analysis of translocations in the above genes has been described in [15,16]. Briefly, tumor blocks were compared against corresponding histological slides, and areas with sufficient content of malignant cells were dissected from the specimens. The extraction of nucleic acids from manually dissected tumor cells involved the simultaneous isolation of both DNA and RNA, followed by cDNA synthesis. The quality of cDNA was controlled by PCR amplification of the SDHA-specific transcript; samples with a cycle threshold (Ct) above 35 were considered unreliable for further analysis. The ALK, ROS1, RET, NTRK1, NTRK2 and NTRK3 genes were subjected to tests for 5′/3′-end unbalanced expression. The primers and probes for these assays are described in Supplementary Table S6. In addition to the 5′/3′-end unbalanced expression screening test, all MSI-positive tumors were analyzed by variant-specific PCR for the most common translocations affecting the ALK (4 variants), ROS1 (10 variants) and RET (11 variants) genes, as well as BCR::PKHD1 and CLIP1::LTK gene fusions [17,18]. The design and multiplexing of the variant-specific tests are described in Supplementary Tables S7-S10. Tumors with unbalanced 5′/3′-end ALK, ROS1, RET, NTRK1, NTRK2 or NTRK3 expression, which lacked the above-mentioned common rearrangements, were further tested for rare translocation variants (see the list in Supplementary Tables S11-S15). In addition, we applied this testing for all MSI+ tumors with detectable RET and NTRK1-3 expression, even in the absence of 5′/3′-end expression imbalance. Finally, samples with unbalanced expression, which did not have PCR-detectable rearrangements, were subjected to RNA-based NGS.
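The screening cascade described in this section (cDNA quality check, 5′/3′-end imbalance screen, variant-specific PCR, and reflex RNA-based NGS) can be summarized as a simple triage function. The sketch below is schematic only: the Ct cutoff of 35 is taken from the text, but the input structures and reporting labels are assumptions made for illustration rather than the laboratory's actual reporting categories.

```python
def triage_sample(sdha_ct, imbalance_calls, variant_pcr_results):
    """Schematic triage of one MSI-positive sample through the screening pipeline.

    sdha_ct: cycle threshold of the SDHA control transcript (cDNA quality check).
    imbalance_calls: dict of gene -> True/False for 5'/3'-end expression imbalance.
    variant_pcr_results: dict of gene -> fusion variant name, or None if the
    variant-specific PCR panel was negative for that gene.
    """
    if sdha_ct > 35:
        return ["cDNA unreliable - repeat extraction"]
    findings = []
    for gene, imbalanced in imbalance_calls.items():
        variant = variant_pcr_results.get(gene)
        if variant:
            findings.append(f"{gene}: {variant} (variant-specific PCR)")
        elif imbalanced:
            findings.append(f"{gene}: imbalance without known variant -> RNA-based NGS")
    return findings or ["no evidence of rearrangement"]

# Hypothetical sample: adequate cDNA, NTRK1 imbalance, no common variant detected.
print(triage_sample(28.4, {"NTRK1": True, "RET": False}, {"NTRK1": None, "RET": None}))
```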
Conclusions
Overall, this study demonstrates that the comprehensive analysis of ALK, RET, NTRK1, NTRK2 and NTRK3 rearrangements is particularly feasible in MSI-positive KRAS/NRAS/BRAF mutation-negative CRCs, although there is also a moderate frequency of these events in other categories of microsatellite-unstable tumors. The spectrum of the involved tyrosine kinases and their partners is characterized by a high level of diversity; therefore, the utilization of indirect methods, such as IHC or FISH, may be associated with some uncertainty. While RNA-based NGS is the gold standard for the detection of gene rearrangements, the diagnostic pipeline presented here may be considered as a cost-efficient alternative for facilities with limited access to massive parallel sequencing.
Supplementary Materials:
The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/ijms241713610/s1.
Funding: This research has been supported by the Russian Science Foundation (grant number 23-15-20032). Funding sources did not influence how the study was conducted or the description of its results.
Figure 2. Flowchart of the detection of gene rearrangements in MSI-positive samples of colorectal cancers. RAS/RAF: KRAS/NRAS/BRAF mutation status.
Table 1. Clinical data for MSI-positive tumors with tyrosine kinase gene fusions.
CRC-colorectal cancer, EC-endometrial cancer, GC-gastric cancer, f-female, m-male, n/a for NGS-material not available for next generation sequencing. | 2023-09-06T15:16:40.010Z | 2023-09-01T00:00:00.000 | {
"year": 2023,
"sha1": "837ee277d6412b8b5c5d91845c98e04429531805",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1422-0067/24/17/13610/pdf?version=1693795865",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "235c61007072a16bdf21d80a668c3ca9d6ee0318",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
7682281 | pes2o/s2orc | v3-fos-license | Amphetamine Containing Dietary Supplements and Acute Myocardial Infarction
Weight loss is one of the most researched and marketed topics in American society. Dietary regimens, medications that claim to boost the metabolism, and the constant pressure to fit into society all play a role in our patients' choices regarding new dietary products. One class of products well known to suppress appetite and cause weight loss is amphetamines. While these medications suppress appetite, most people are not aware of the detrimental side effects of amphetamines, including hypertension, tachycardia, arrhythmias, and in certain instances acute myocardial infarction. Here we present the uncommon entity of an acute myocardial infarction due to chronic use of an amphetamine-containing dietary supplement in conjunction with an exercise regimen. Our case raises further awareness regarding the use of amphetamines. Clinicians should have a high index of suspicion for use of these substances when young patients with no risk factors for coronary artery disease present with acute arrhythmias, heart failure, and myocardial infarctions.
Introduction
Amphetamines are widely known to cause appetite suppression and encourage weight loss. Their other known side effects are hypertension, tachycardia, arrhythmias, and myocardial infarctions [1]. Herein we describe the case of a 35-year-old female who was undergoing a weight loss regimen of daily exercise with a dietary supplement and who subsequently suffered an acute myocardial infarction and tested positive for amphetamines.
Case Presentation
A 35-year-old African American female with no prior history of coronary artery disease and no significant family history presented with sudden onset of exertional chest discomfort with radiation to the back. The patient became unresponsive shortly after arrival to the emergency department and was subsequently found to be in ventricular fibrillation-cardiac arrest (V-fib). The patient was in V-fib for 6 minutes, with conversion after electrical cardioversion and subsequent development of PEA-arrest for a total of 4 minutes. Repeat EKG after return of spontaneous circulation demonstrated inferolateral STEMI (Figure 1). The patient received tenecteplase and heparin prior to urgent transfer to the catheterization laboratory. Left heart catheterization showed 99% thrombotic occlusion of the mid-distal LAD (Figures 2 and 3). Two overlapping drug-eluting stents (Xience RX 2.5 mm and Xience RX 2.75 mm) were placed in the distal LAD, and TIMI III flow was achieved (Figure 4). Due to severely decreased ventricular function with an ejection fraction of 10-15%, the patient received ventricular support with an Impella, with discontinuation soon after secondary to hemolysis. Further evaluation revealed a positive toxicology screen for amphetamines. Pertinent laboratory results for possible autoimmune causes of the acute myocardial infarction were negative, including antinuclear antibody (ANA), ribonucleoprotein antibody (RNP), RA latex turbid, anti-chromatin IgG antibodies, SSA (Ro) and SSB (La) antibodies, and double-stranded DNA antibody (dsDNA).
The patient's cardiac function recovered with medical management. Subsequent transthoracic echocardiograms revealed improved ejection fraction to 60-65% after 11 days and a new finding of left ventricular apical thrombus. The patient received anticoagulation with intravenous heparin, as well as continuous treatment with dual antiplatelet therapy with aspirin and clopidogrel.
The patient remained in the coronary care unit (CCU) for a total of 17 days. The CCU course was further complicated by the development of pulmonary edema with diffuse alveolar hemorrhage and by MRSA and Pseudomonas pneumonia.
Due to the need for prolonged mechanical ventilation, the patient received a tracheostomy and continued to improve in terms of her pulmonary function while treated with antibiotics for ventilator-associated pneumonia. Her neurological status improved significantly, and on interview, she denied any use of Adderall, amphetamines, or illicit drugs that could have precipitated this event. She reported that recently she had increased her level of physical activity in order to lose weight and was supplementing such efforts with the addition of a natural weight loss dietary supplement.
Discussion
Amphetamine use is strongly associated with coronary artery disease [2]. The immediate cardiovascular effects of amphetamine use include tachycardia and hypertension, both of which are caused by the increase in circulating catecholamines. These can lead to life-threatening arrhythmias, enhance coronary vascular tone, increase platelet aggregation, and ultimately promote plaque rupture with subsequent development of an acute myocardial infarction [1,3]. The mechanism of myocardial injury due to amphetamine use is believed to be acute coronary vasospasm, with subsequent decreased perfusion and development of an acute myocardial infarction. Chronic use of amphetamines can also lead to accelerated atherosclerosis and increased thrombogenicity [4], both of which can lead to thromboocclusive acute coronary syndromes in young individuals. Bashour reported the first documented case of intracoronary thrombus as the culprit of acute myocardial infarction in a patient with amphetamine abuse and postulated this increase in thrombogenicity to be secondary to catecholamine-induced platelet aggregation [5]. Westover and colleagues, in a cross-sectional study which evaluated the link between amphetamine abuse and the incidence of acute myocardial infarction, revealed a significant association between amphetamine use and acute myocardial infarction in young adults (adjusted odds ratio = 1.61; 95% CI = 1.24-2.04, p = 0.0004) [6].
There are multiple cases reported in the literature involving the development of an acute myocardial infarction due to amphetamine abuse. Chang and colleagues reported an unusual case of a silent ST elevation myocardial infarction following amphetamine use in a 61-year-old diabetic patient. In their case, the patient presented to the hospital without chest pain and with normal cardiac enzymes; however, EKG revealed ST elevations in the inferior leads with reciprocal changes in the precordial leads. Subsequent percutaneous coronary angiography revealed total occlusion of the posterior-lateral segment of the right coronary artery. On further history, the patient had reported abusing amphetamines via inhalation prior to presentation [7]. In a similar case, Waksman and colleagues reported the occurrence of an acute anterior wall myocardial infarction in a 31-year-old patient who was using amphetamine intravenously and presented to the hospital with generalized discomfort after 4 doses 48 hours prior to presentation [8]. In this particular case, myocardial infarction was diagnosed via electrocardiogram changes, which were reported as T wave inversions in inferior and anterior leads, with subsequent transition to a new left bundle branch block in a repeat electrocardiogram 5 minutes after the prior. Conservative treatment was instituted and the patient was subsequently transferred to the intensive care unit. Transthoracic echocardiogram 3 days after admission revealed decreased anterior wall motion as well as a reduced ejection fraction of 25%. The patient was unable to undergo cardiac catheterization due to leaving the hospital prematurely [8]. Watts and McCollester reported the case of a 23-year-old patient who presented to the emergency room with abdominal pain and generalized malaise less than 24 hours after inhalation of amphetamines. Electrocardiogram revealed ST elevations in the precordial leads V1-V4, a junctional rhythm, and a complete heart block, with troponin I noted to be acutely elevated. The patient underwent cardiac catheterization, which revealed normal coronaries and a decreased ejection fraction of 15-20% [9]. The subsequent clinical course included evaluation with a transesophageal echocardiogram revealing ventricular asynergy, placement of a dual chamber pacemaker, and a follow-up transthoracic echocardiogram revealing an improved ejection fraction of 35-40% and an electrocardiogram revealing resolution of ST elevation [9]. An additional case reported in the literature by Furst and colleagues involves a 41-year-old patient who presented to the hospital with chest pain after use of intranasal methamphetamine [3]. Electrocardiogram revealed ST elevations in II, III, and AVF, with reciprocal changes in the precordial leads. The patient was initially treated with thrombolytics, with resolution of ST elevations and chest pain, but suffered a recurrence of chest pain 24 hours after treatment. Cardiac catheterization was performed, which showed subtotal occlusion of the mid-segment of the right coronary artery, with the patient undergoing percutaneous coronary intervention [3]. This case also illustrates how, despite treatment, the risk of subsequent myocardial infarction after amphetamine use remains a real concern.
Turnipseed and colleagues, in a study that aimed to determine the frequency of acute coronary syndrome in patients presenting to the hospital with chest pain after methamphetamine use, concluded that acute coronary syndrome is common in this population and that a normal electrocardiogram does not necessarily rule out the possibility of myocardial infarction in patients known to be methamphetamine abusers [10].
One of the side effects of amphetamine use is decreased appetite, a side effect that is desirable to some patients. The realization of this effect and its potential for inducing weight loss led to the introduction of amphetamines as appetite suppressants in the 1950s and to the development of the combination of phentermine and fenfluramine in the 1990s [11]. The combination was approved and widely used in the early 1990s due to its significant effect on weight reduction; however, following subsequent reports of increased cardiovascular events, the drug was withdrawn from the market in 1997 [12]. With the increased use of amphetamines for the treatment of Attention Deficit Hyperactivity Disorder and Adult Attention Deficit Disorder, there has been a substantial increase in their abuse due to the desirable side effect of decreased appetite and weight loss. In view of this side effect profile, manufacturers of dietary supplements marketed to promote weight loss are including β-methylphenethylamine (β-Methyl), a positional isomer of amphetamine with an effect profile similar to that of amphetamines [13], in their dietary supplements. This compound, which is similar in structure and composition to amphetamine, can be detected in routine toxicology screens as amphetamine.
In view of the positive toxicology screen for amphetamines and the lack of any history of abuse or use by our patient, we propose that weight-loss dietary supplements may in fact contain amphetamines or amphetamine-like substances. Given the popularity of such products among patients searching for aids in weight loss, a portion of the population who regularly use these products may be exposed to unregulated levels of amphetamines or amphetamine-like substances. The acute and chronic consequences of the use of these substances can be detrimental, as affected patients are at higher risk for acute myocardial infarction, accelerated atherosclerosis, and early development of cardiac dysfunction due to recurrent myocardial injury [1,4,5,12].
Conclusion
Our case illustrates how inadvertent use of amphetamines by patients with no history, risk factors, or significant family history of coronary artery disease can be the culprit of life-threatening events. Patients often struggle with weight management and look for alternatives to supplement their efforts to lose weight. Given the lack of proper disclosure and the recent trend of adding amphetamines to dietary supplements [13], it is important to educate patients and to maintain a high index of suspicion when young adults present with acute myocardial infarction and have no history or laboratory values significant for illicit drug use. The detrimental effects of the use of such substances can be seen in the acute phase as myocardial infarction and left ventricular dysfunction [3,9] and potentially in the chronic phase as subsequent valvular heart disease and congestive heart failure [12].
"year": 2016,
"sha1": "b8bd6527e389a261cf608dde395f13f64888680c",
"oa_license": "CCBY",
"oa_url": "http://downloads.hindawi.com/journals/cric/2016/6404856.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "1bc322059d2160e064730c3ed141596e225d9e40",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Pierre Auger Data, Photons, and Top-Down Cosmic Ray Models
We consider the ultra-high energy cosmic ray (UHECR) spectrum as measured by the Pierre Auger Observatory. Top-down models for the origin of UHECRs predict an increasing photon component at energies above about $10^{19.7}$eV. Here we present a simple prescription to compare the Auger data with a prediction assuming a pure proton component or a prediction assuming a changing primary component appropriate for a top-down model. We find that the UHECR spectrum predicted in top-down models is a good fit to the Auger data. Eventually, Auger will measure a composition-independent spectrum and will be capable of either confirming or excluding the quantity of photons predicted in top-down models.
INTRODUCTION
The origin of the highest energy cosmic rays has been a subject of great interest for some time.
The particular aspect of Ultra-High Energy Cosmic Ray (UHECR) physics that has raised the greatest degree of interest is the question of whether the cosmic ray spectrum exhibits a feature known as the GZK cutoff [1]. This cutoff is the result of the suppression of the cosmic ray spectrum above a few times 10^19 eV due to protons interacting with the cosmic microwave background. If a GZK suppression is not observed, it would indicate that the sources of these ultra-high energy events are of local origin, cosmologically speaking (within 10 to 50 Mpc). However, no nearby astrophysical sources capable of accelerating particles to such high energies are known to exist.
Measurements of the UHECR spectrum have not clearly settled the issue of whether a GZK suppression is present. On one hand, the spectrum measured by the AGASA experiment shows no indication of a GZK suppression [2]. In particular, in Ref. [3] it is shown that the number of events with energies above 10^20 eV expected in a GZK scenario is 3.6, while the number of events observed by AGASA is 11, corresponding to a significance of 3 standard deviations. In contrast, the HiRes experiment appears to have observed the presence of a GZK suppression: Ref.
[4] concludes that HiRes data exclude a non-GZK scenario with a significance of 3 to 4 standard deviations. Given this discrepancy, it appears that further data would be required to resolve the question at hand. In particular, the Pierre Auger Observatory (or simply Auger) is currently under construction at its southern site in Argentina. Auger combines the techniques used by AGASA (an air shower ground array) and HiRes (fluorescence detectors) allowing it to make energy spectrum measurements which are less composition and model dependent than either HiRes or AGASA.
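As a rough check of the AGASA excess quoted above (11 events observed against an expectation of 3.6), the one-sided Poisson tail probability can be converted into an equivalent Gaussian significance. The snippet below only illustrates that arithmetic; it makes no attempt to reproduce the full statistical treatment of Ref. [3].

```python
# Quick significance check for the quoted AGASA excess: 11 observed vs. 3.6
# expected. A simple one-sided Poisson tail converted to an equivalent
# Gaussian sigma, not the full analysis used in the literature.

from scipy.stats import poisson, norm

expected, observed = 3.6, 11
p_value = poisson.sf(observed - 1, expected)  # P(N >= 11 | mu = 3.6)
significance = norm.isf(p_value)              # one-sided Gaussian equivalent

print(f"p-value = {p_value:.2e}, ~{significance:.1f} sigma")
# Gives roughly 3 sigma, consistent with the significance quoted above.
```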
Auger's first results were released in 2005, but did not clearly resolve the question of whether a GZK feature is present in the UHECR spectrum. These data were collected with only a fraction of the southern site completed, yet the total exposure at this point was slightly larger than the total accumulated by AGASA. Auger calibrated their ground array data using their fluorescence detectors, resulting in a largely composition-independent energy measurement. Due to a lack of events, however, this hybrid calibration was only possible at energies well below the GZK cutoff, and if the composition of the UHECR spectrum changes between the calibration energies and higher energies, then the highest energy bins in the published Auger spectrum must be modified.
This effect is particularly pronounced in the case of photon primaries. Auger's ground array measures energy with a parameter known as S(1000) [5], which is proportional to the water Cherenkov signal in the surface array at a distance of 1000 meters from the shower axis. Due to the lack of muons generated by photon primaries, photon-induced events would produce a smaller S(1000) than proton-induced events of the same primary energy. Therefore, the actual cosmic ray flux may be considerably larger at the highest energies than reported by Auger if a substantial fraction of the highest energy cosmic rays are photons.
A substantial fraction of the highest energy cosmic rays are expected to be photons in top-down cosmic ray models. In this article, we consider the effect that this will have on the UHECR spectrum observed by Auger. In particular, we show that Auger's data result in a spectrum without the appearance of a GZK cutoff if the photon fraction of UHECRs follows the prediction of top-down models.
TOP-DOWN COSMIC RAY MODELS
If no GZK cutoff is found to be present in the UHECR spectrum, then either local (within 10 to 50 Mpc) sources of UHECRs must exist [6], or some kind of exotic physics must be invoked to evade the GZK effect. Among exotic possibilities, proposals have included UHECRs composed of exotic hadrons [7], or strongly-interacting neutrinos [8], or that protons can travel super-GZK distances due to a violation of Lorentz invariance [9]. The solution to the UHECR problem that we focus on in this article is a top-down scenario, in which the highest energy cosmic rays are generated locally (in our galaxy) by the decays of supermassive particles [10] or topological defects [11].
Unlike other proposed scenarios, in top-down models the highest energy cosmic rays are mostly photons. For this reason, it can be misleading to compare the spectrum presented by the Auger collaboration to the predicted spectrum in these models. In Fig. 1, we compare the published Auger data to a spectrum of protons from homogeneous astrophysical sources (left frame) and to the same spectrum plus a top-down component (right frame). We find, somewhat unexpectedly, that the data fits both scenarios reasonably well.
In order to compare the Auger data with the predictions of a top-down model, we consider the effect a proton-photon mixed primary composition would have on the Auger energy calibration curve. If at the present energy calibration range most of the primaries are protons, then the energy of a primary photon would be underestimated by a factor of two [5]. Consider a photon fraction f(E, P_0, M_X), which is a function of the energy E, the injection power P_0, and the mass of the decaying particle M_X, and which is small in the Auger energy calibration range (10^18 to 10^19.4 eV). Above this range, a change in the calibration curve shifts each reconstructed energy E to a shifted energy E_shift. For each bin of mean energy E in the published Auger spectrum, we solved this relation to find the corresponding shifted energy E_shift. We finally chose the values of P_0 and M_X that best fit the data.

(Figure 1: the left frame compares the published Auger data to the conventional proton source spectrum; the right frame shows the same spectrum plus the spectrum from the decay of supermassive particles or topological defects of mass M_X = 6 x 10^21 eV, with the Auger data shifted to account for the photon composition in the top-down spectrum. Both models fit the data quite well.)
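The exact shift relation is not reproduced in the text above, so the sketch below assumes one simple form consistent with the description: photon primaries are reconstructed at half their true energy, so a bin with reconstructed mean energy E and photon fraction f at the shifted energy obeys E = E_shift (1 - f(E_shift)/2). The photon_fraction function is a hypothetical placeholder for the model prediction f(E, P_0, M_X), not the SHDECAY output.

```python
# A minimal sketch of the bin-by-bin energy shift described above, under the
# stated assumption E_rec = E_shift * (1 - f(E_shift) / 2). photon_fraction()
# is a hypothetical placeholder, not the SHDECAY-based prediction.

import numpy as np

def photon_fraction(e_shift_ev):
    """Hypothetical photon fraction rising above ~10^19.4 eV (illustrative only)."""
    log10_e = np.log10(e_shift_ev)
    return 1.0 / (1.0 + np.exp(-(log10_e - 19.9) / 0.15))

def shifted_energy(e_rec_ev, n_iter=50):
    """Solve E_rec = E_shift * (1 - f(E_shift)/2) by fixed-point iteration."""
    e_shift = e_rec_ev
    for _ in range(n_iter):
        e_shift = e_rec_ev / (1.0 - 0.5 * photon_fraction(e_shift))
    return e_shift

# Example: shift the mean energies of three high-energy bins.
for e_rec in (1e19, 10**19.6, 1e20):
    print(f"E_rec = {e_rec:.2e} eV -> E_shift = {shifted_energy(e_rec):.2e} eV")
```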
The photon fraction of the UHECRs in the model shown in the right frame of Fig. 1 is shown in Fig. 2. The spectrum and composition of UHECRs generated in top-down models have been calculated using the publicly available program SHDECAY [15]. We have shown results here for the case of decays to quark-antiquark pairs, and assumed the presence of supersymmetry, although our conclusions are largely insensitive to these choices. This predicted photon fraction is clearly below the limits set by the Auger [12], Haverah Park (HP) [13], and AGASA (A1, A2) data [14].
FUTURE PROSPECTS FOR CONFIRMING OR EXCLUDING TOP-DOWN MODELS
Although we have shown here that present UHECR data (Auger data in particular) are not capable of confirming or excluding top-down models, the prospects for testing such models in the near future are very promising. These prospects come from at least four types of observations: future improvement of the systematic uncertainty on the energy measurement, future UHECR photon fraction measurements, future UHECR anisotropy measurements, and future ultra-high energy neutrino measurements.
Auger will certainly improve its systematic uncertainties in the near future, leading to a better established energy spectrum. Such a spectrum would clearly resolve the issue of the existence of a GZK cutoff and confirm or rule out the hypothesis of an additional high energy component.
As Auger accumulates more data, its hybrid detector will be able to place limits on the photon fraction at increasingly high energies. Currently, the Auger photon limit only constrains the composition above 10^19 eV to be less than 26% photons (at the 95% confidence level) [12]. To test most top-down scenarios, a similar constraint would be needed at a much higher energy, perhaps around 10^19.7 eV. Auger will accumulate an exposure sufficient to accomplish this only after about 5-6 years of operation with a full southern array.
Auger will also be capable of studying the isotropy of the UHECR spectrum to unprecedented levels of precision. If a substantial fraction of the highest energy events are generated in top-down decays within the galactic halo, this can lead to an observable level of anisotropy directed toward the center of our galaxy. To identify such an anisotropy, several hundred events of the highest energies will be required. It has been estimated that such a signal could be resolved at Auger South after 3 years of operation with a full array [16]. Current Auger data only constrain anisotropies at 0.8-3.2 EeV [17], well below the range affected by top-down models.
Auger is also expected to reach the sensitivity needed to detect top-down neutrinos [18]. Such neutrinos will be diffuse and difficult to identify as being of top-down origin, however. The rates anticipated from ultra-high energy proton interactions with the cosmic microwave background (the cosmogenic neutrino flux) are similar to those for top-down models, and are virtually impossible to distinguish from each other. The lack of such events, on the other hand, would be a fairly compelling piece of evidence against top-down models, and may imply a substantial component of heavy nuclei in the UHECR spectrum [19].
Through these four classes of observation, top-down models should be testable within the next several years, and are likely to be either experimentally excluded or confirmed.
SUMMARY AND CONCLUSIONS
The calibration of the Pierre Auger Observatory has been made using a hybrid technique at sub-GZK energies. This fact leads to a large systematic uncertainty at the highest energies of the order of about 40%. A change in composition above Auger calibration energies might lead to a systematic shift in their higher energy events, even to the full extent of this uncertainty. This is particularly true in top-down cosmic ray models, in which the highest energy cosmic rays are generated in the decays of super-massive particles or topological defects. In these models many of the highest energy cosmic rays are photons, which have their energies underestimated by about 50% at Auger.
In this article, we calculated the expected shift in the Auger spectrum assuming the photon content of a typical top-down model. This shift is consistent with the quoted experimental systematic uncertainty and is toward higher energies. We find that the resulting spectrum agrees quite well with the top-down prediction. We also showed that the spectrum is consistent with a pure extragalactic proton hypothesis in which no shift is needed.
As the Pierre Auger Observatory accumulates more data, its ability to calibrate in a composition-independent fashion will be applied at increasingly higher energies. At least 5-6 years of exposure with a full southern array will be required to reach the energy at which photons begin to dominate the UHECR spectrum in top-down models, however. Anisotropy measurements by Auger may also be able to test top-down models after a few years of observation, and upcoming ultra-high energy neutrino measurements will be relevant to top-down models as well.
"year": 2006,
"sha1": "99391cde677b27092f68b80a9e14b13674730d87",
"oa_license": null,
"oa_url": "https://digital.library.unt.edu/ark:/67531/metadc883280/m2/1/high_res_d/892523.pdf",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "99391cde677b27092f68b80a9e14b13674730d87",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
Application of GPS Trajectory Data for Investigating the Interaction between Human Activity and Landscape Pattern: A Case Study of the Lijiang River Basin, China
The interaction between human activity and landscape pattern has been a hot research topic during the last few decades. However, scholars used to measure human activity by social, economic and humanistic indexes. These indexes cannot directly reflect human activity and are not suitable for fine-grained analysis due to the coarse spatial resolution. In view of the above problems, this paper proposes a method that obtains the intensity of human activity from GPS trajectory data, collects landscape information from remote sensing images and further analyzes the interaction between human activity and landscape pattern at a fine-grained scale. The Lijiang River Basin is selected as the study area. Experimental results show that human activity and landscape pattern interact synergistically in this area. Built-up land and water boost human activity, while woodland restrains human activity. The effect of human activity on landscape pattern differs by the land cover category. Overall, human activities make natural land, such as woodland and water, scattered and fragmented, but cause man-built land, such as built-up land and farmland, clustered and regular. Nevertheless, human activities inside and outside urban areas are the opposite. The research findings in this paper are helpful for designing and implementing sustainable management plans.
Introduction
Human activity has a direct and indirect impact on landscape pattern [1], and their mutual relationship shows different characteristics with the development of society [2]. With the rapid growth of population and the improvement of the engineering skills of man, the impact of people on the natural environment is continuously increasing [3,4], whereas in prehistoric times, humans' ability to change the environment was limited. Particularly since the Industrial Revolution of the 1740s, the scale of human impact has been considerably larger than at any point previously [5], and the landscape pattern of the Earth's surface has significantly changed since then. According to [6], one-third of the humid forest in Southeast Asia disappeared between the beginning of the twentieth century and World War II. The arable land area has decreased by more than 80% in the Gorce Mountains area in the past 50 years [7]. In addition, about 1% of the global coastal wetland stock is destroyed each year by direct human reclamation [8]. Human activity and related socioeconomic variables have become very important factors influencing the change of the landscape [9]. On the other hand, the landscape also exerts an effect on human activity [10]. The lack of knowledge about the relationship between human activity and landscape could hinder the implementation of sustainable management plans [11]; therefore, understanding the interaction between them has become an urgent need [12-14].
The advent of various kinds of airborne and satellite-based Earth observation platforms makes it possible to collect ground information over large areas, including landscape information, quickly and frequently [15,16]. This technology has triggered research on the coupling relationship between landscape and human activity [17-19]. There are two main ways of studying the impact of human activity on landscape: (1) making an assumption that a certain type of landscape change is caused by human activity and then analyzing the impact of humans based on the change of this type of landscape. Siyuan et al. analyzed the effect of human activities on the landscape in the Yellow River Basin based on local soil erosion [20]. Geri et al. assumed that human activity is the primary driving force of landscape pattern change and studied the effects of human activity by analyzing the heterogeneity of the Mediterranean landscape [21]. (2) The other is measuring human activity using indicators, such as social and economic indexes, and then inferring human activity and the resulting impact on the landscape. In order to assess the intensity of human activity, Lü et al. used road density and the area ratios of human settlements, industrial land and farmland [22]. Guo et al. calculated the human disturbance degree based on the proportion of construction, tourists, town and country effects [23]. Gu et al. selected four variables (the ratio of native and non-native species, wetland uses, surrounding landscape and wetland landscape characteristics) to represent human activity [24], while Hoang et al. chose socio-economic factors, including engagement in tourism, ethnic group, poverty rate, population growth and effect of preservation policy [25]. Zeng et al. and Garbarino et al. used Euclidean distance-based factors, including distance from buildings, roads and tourism lodges, as assessment indexes of human activity [26,27]. A few scholars have also investigated the impact of landscape pattern on human activity. Di Giulio et al. summarized the related literature and concluded that landscapes meeting people's biological and cultural needs tend to be preferred, while land use types such as roads with high traffic frequencies have a barrier effect on humans [10].
The first approach can only show an overall trend, since human activity is not quantified. As for the second approach, there are two problems. Firstly, the social and economic factors cannot reflect human activity accurately and directly. For instance, the poverty ratio is related to human activity, but the specific relationship between them is unknown. Secondly, the human activity indexes derived from socio-economic factors are usually at the scale of the administrative unit, far coarser than the scale of the land cover classification result, which hinders a deeper analysis of the coupling relationship between human activity and landscape pattern. Therefore, finding an approach to directly monitor human activity at a fine spatial scale is necessary. As location and wireless communication technologies develop and gradually become ubiquitous, more and more sensors are being used in various walks of life and are producing a large amount of data. This type of data collection process is called participatory sensing by [28]. Participatory sensing data take various forms, including GPS trajectory data [29,30], cell phone positioning data [31,32], RFID data, etc., and provide an effective way to collect and represent a long time series of real-time human activity.
The objective of this paper is to analyze the fine-grained interaction between human activity and landscape pattern based on remote sensing images and GPS trajectory data. We hypothesize that: (1) the impacts of landscape on human activities are influenced by a landscape composition characterized by the composition percentage of land classes; (2) the impact of human activities on the landscape varies from land class to land class and also changes with the intensity of activities. Our major contribution is to propose using GPS trajectories to quantify human activity, to generate the spatial distribution of human activity with the same resolution as the land cover result and to discover the otherwise invisible fine-grained interaction characteristics.
The next section introduces the study area and experimental datasets and also presents the workflow of the methodology and the detailed methods. Section 3 demonstrates and discusses the experimental results in the Lijiang River Basin, China. Section 4 concludes with the limitations of this paper and points out future directions.
Study Area
The Lijiang River Basin is located in Guilin, China's Guangxi Zhuang Autonomous Region, covering the geographic extent 110°3′55″-110°56′58″E, 24°37′12″-25°55′13″N (Figure 1). The beautiful rivers and mountains in this basin constitute hundreds of miles of famous Lijiang River karst landscape. The unique landform, water and cultural landscape provide rich resources for the rise and development of the local tourism industry [33]. Over 30,000,000 tourists visited this region in 2014. The fast development of the tourism industry greatly promoted the local economy and society during the last two decades, but the Lijiang River Basin has witnessed severe environmental issues, especially the significant change of land cover and landscape pattern [34,35].
Data Acquisition
Six datasets are used in this study: Landsat images, the ASTER DEM product, high spatial resolution remote sensing images from Google Earth, land use and cover products, GPS trajectory data and road vector data.
Landsat images of the study area in 2009 and 2013 were downloaded from the USGS data archive. The images in 2009 were collected by the Thematic Mapper (TM) on Landsat 5, while those in 2013 were collected by the Operational Land Imager (OLI) on Landsat 8. The spatial resolution of both TM and OLI images is 30 m. Four scenes of Landsat images (path: 124-125; row: 42-43) cover the study area, and the mosaic images are shown in Figure 2 in the form of a false color composite. The weather in the study area tends to be cloudy and foggy, which would not allow for a proper comparison. Thus, the Landsat images acquired between late October and early December, when few clouds are visible, are used, and the cloud-detecting algorithm proposed by [36] is applied to remove clouds before land cover classification.
The ASTER GDEM is produced based on the observation data of NASA's new generation of Earth observation satellites called Terra, which are available at the website of NASA's Jet Propulsion Laboratory. The spatial resolution is 30 m. This dataset is used to compute the slope and aspect of the ground surface to facilitate land cover classification. The elevation-related information together with the spectral information can help improve the classification accuracy.
Google Earth is a virtual Earth product developed by Google Inc. and provides multi-temporal high resolution remote sensing images of many important regions in the world. The spatial resolution of the images can be up to the meter level in some populated regions. These images are used as a land cover reference due to their timely and detailed information.
The land use/cover products include land use/cover maps and the topographic maps of nature reserves from the local tourism bureau and statistics bureau. Although these products are not collected at the same time as the Landsat images, they can be used as supplementary reference data after digitization and geometric correction. Besides, our research group made several on-the-spot investigations of land cover in the study area using GPS receivers and digital cameras and generated land use/cover data of a part of the study area as ground truth data.
GPS trajectory data are a typical type of participatory sensing data. The trajectory data used in this paper are collected by the National Commercial Vehicle Monitoring Platform (NCVMP) operated by the Ministry of Transportation of China. In the NCVMP, tourist shuttles and coaches are equipped with GPS receivers and wireless communication equipment and send their real-time location and motion parameters to a monitoring center while moving. GPS trajectory data across the country are accumulated in this center. The sampling interval varies from 30 s to 5 min, and the positioning accuracy is from 5 to 10 m. Figure 3a shows the spatial distribution of the raw vehicle tracking data in the study area, which were collected in 2012. The tourist shuttles and coaches are two dominant transportation tools within Guilin, since the number of private cars per capita in China and Guangxi is extremely small [37,38], and thus can well reflect the activity of both local residents and outside visitors. The dataset is composed of the following information: vehicle ID, province ID, latitude, longitude, speed, direction, status and collection time. Part of the GPS tracking data is shown in Table 1. Road vector data are from the 1:10,000 basic geographical database of Guilin City collected by the Guangxi Bureau of Surveying, Mapping and Geoinformation. The vector road network is used to filter out noisy GPS data.
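For readers who want to work with records of this kind, the sketch below shows one way to load them. The field names follow the list above, but the CSV layout and timestamp format are assumptions, since the raw NCVMP file format is not described in the paper.

```python
# A minimal sketch of reading NCVMP-style tracking records. The field list
# follows the paper (vehicle ID, province ID, latitude, longitude, speed,
# direction, status, collection time); the CSV layout and timestamp format
# are assumptions.

import csv
from dataclasses import dataclass
from datetime import datetime

@dataclass
class TrackPoint:
    vehicle_id: str
    province_id: str
    lat: float
    lon: float
    speed: float      # km/h
    direction: float  # degrees clockwise from north
    status: str
    time: datetime

def read_track_points(path):
    """Load one CSV file of tracking records into a list of TrackPoint objects."""
    points = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            points.append(TrackPoint(
                vehicle_id=row["vehicle_id"],
                province_id=row["province_id"],
                lat=float(row["lat"]),
                lon=float(row["lon"]),
                speed=float(row["speed"]),
                direction=float(row["direction"]),
                status=row["status"],
                time=datetime.strptime(row["time"], "%Y-%m-%d %H:%M:%S"),
            ))
    return points
```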
Workflow
The overall workflow is composed of three steps, as shown in Figure 4: landscape pattern calculation, activity intensity calculation and interaction analysis. Four datasets (Landsat images, ASTER GDEM, Google Earth images, land use/cover products) are used in the first step, and they are split into two parts: one part for classification model training and another for validation. The land use/cover result is generated through the object-oriented image classification method and further used to derive landscape pattern status based on a group of selected landscape metrics. In the second step, we first conduct map matching between raw GPS trajectories and road networks to filter out noisy GPS points. We then use the kernel density estimation method to calculate the spatial distribution of activity intensity and further obtain its grading map by an established grading standard. Finally, based on the above three results, we conduct correlation analysis at both the landscape level and the class level to explore the interaction characteristics between these two components.
Landsat Image Classification
Remote sensing images of the Lijiang River Basin are collected on two different satellite paths, which makes the image acquisition conditions (sunlight strength, image acquisition angle, etc.) different. Using the same set of classification parameters to process the images would lead to inconsistencies; therefore, the image classification is conducted scene by scene, and then the classification results for each image are mosaicked. Land use/cover is divided into five types (woodland, water, built-up land, farmland and others) according to the local situation and our research needs. eCognition is used to implement the object-oriented classification [39], in which multi-resolution segmentation [40] is selected with the Landsat images, the derived NDVI products and the DEM as input and the scale parameter set to 5. Before image classification, the high-resolution satellite images on Google Earth, the land use/cover products from government agencies and field surveying results are used as ground verification data for model training and validation. The overall classification accuracy is 84.25% by comparing the classification results to the validation dataset. Figure 5 shows the land use/cover classification result.
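As a small illustration of one of the segmentation inputs mentioned above, the sketch below computes NDVI from red and near-infrared reflectance arrays. Band numbering differs between the two sensors (TM: red = band 3, NIR = band 4; OLI: red = band 4, NIR = band 5); the reflectance preprocessing and array shapes are assumed.

```python
# A minimal sketch of the NDVI layer used as a segmentation input:
# NDVI = (NIR - Red) / (NIR + Red). Inputs are assumed to be co-registered
# reflectance arrays of equal shape.

import numpy as np

def ndvi(nir, red, eps=1e-6):
    """Compute NDVI from NIR and red reflectance arrays of equal shape."""
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    return (nir - red) / (nir + red + eps)  # eps avoids division by zero
```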
Landscape Pattern Analysis
The landscape metric is a commonly-used landscape pattern analysis method based on categorical maps, especially the land cover/use classification result derived from remote sensing images. The basis or building blocks of categorical maps are patches, whose internal heterogeneity is often ignored. Landscape metrics are a set of important mathematical indicators to quantify the spatial characteristics and distribution of patches. A large number of metrics have been proposed so far, such as the size, perimeter and shape of patches or patch density calculated by the number of patches per hectare. However, we choose only a few metrics, since some of them are correlated or have limited ability in describing specific landscape patterns [41,42]. We select five groups of metrics based on conceptual category: (1) area/density/edge metrics: patch density, edge density; (2) shape metrics: shape index; (3) core area metrics: total core area and core area density; (4) aggregation metrics: proximity index and contagion index; (5) diversity index: Shannon's diversity index. Using the land cover/use classification result in Section 2.4, we apply the spatial analysis program Fragstats 4.1 [43] to compute the landscape metrics in the Lijiang River Basin between 2009 and 2013, with an 8-cell neighborhood rule. The landscape pattern analysis is performed at two scales, the landscape level and the class level; however, the metrics used for these two scales are slightly different due to the applicability of the metrics. The output can either be a value for each metric or a continuous surface grid when the metric is calculated for each pixel.
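To make the patch-based idea concrete, the sketch below computes a class-level patch density from a categorical raster using the same 8-cell neighborhood rule. It is only an illustration and is not meant to replace Fragstats, whose exact metric definitions the analysis relies on.

```python
# A minimal sketch of class-level patch density (PD) from a categorical raster,
# mirroring the 8-cell neighborhood rule. PD is expressed here as patches per
# 100 ha of total landscape area; not a Fragstats replacement.

import numpy as np
from scipy import ndimage

EIGHT_CONNECTED = np.ones((3, 3), dtype=int)  # 8-cell neighborhood

def patch_density(land_cover, class_id, cell_size_m=30.0):
    """Number of patches of `class_id` per 100 ha, for a numpy land-cover array."""
    mask = (land_cover == class_id)
    _, n_patches = ndimage.label(mask, structure=EIGHT_CONNECTED)
    area_ha = land_cover.size * (cell_size_m ** 2) / 10_000.0
    return n_patches / area_ha * 100.0
```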
Human Activity Analysis
Noisy points exist in the raw GPS trajectory data due to various factors affecting positioning accuracy, such as the multipath effect and signal blocking. Before conducting the activity analysis, we use the map matching algorithm [44] to filter out the noisy points that do not match the road networks.
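The cited map-matching algorithm [44] is considerably more involved than what is shown here. As a rough stand-in, the sketch below simply drops points that lie farther than a tolerance from any road, assuming both the points and the road segments are already in a projected (metric) coordinate system.

```python
# A greatly simplified stand-in for the map-matching filter: drop points that
# lie farther than a tolerance from any road. The cited algorithm [44] is more
# sophisticated; this only illustrates the idea, and assumes projected
# (metric) coordinates for both points and road geometries.

from shapely.geometry import Point
from shapely.ops import unary_union

def filter_off_road_points(points_xy, road_lines, tolerance_m=30.0):
    """Keep (x, y) points within `tolerance_m` of the merged road geometry."""
    roads = unary_union(road_lines)  # merge road segments into one geometry
    return [p for p in points_xy if roads.distance(Point(p)) <= tolerance_m]
```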
Human activity intensity is the amount of human activity per unit area per unit time. GPS trajectory data can reflect fine-grained human activity well and, considering their characteristics, the metric based on GPS trajectory data is defined as the number of trajectory points per unit area per unit time. This metric is intrinsically the activity density of vehicles. Considering that the simple density calculation method is sensitive to the analysis scale and ignores the impact of a subject on its surrounding area, we choose the kernel density calculation method [45,46]. The Gaussian kernel is used to represent the uniform decay of human influence, where h is the bandwidth, n is the number of points within the bandwidth, x_i is the location of the point objects and x is the location at which the density is calculated. The bandwidth is set to 1000 m according to [47], in which Kong et al. studied the spatial distribution characteristics of human-impacted landscape and found the distance of significant impact on landscape to be between 1000 and 1200 m. The grid cell size for the calculation is set to 30 m so that the calculation result is consistent with the land cover result.
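The kernel density formula itself is not reproduced in the extracted text, so the sketch below uses the standard two-dimensional Gaussian kernel, lambda(x) = sum_i (1 / (2*pi*h^2)) * exp(-||x - x_i||^2 / (2*h^2)), with the stated 1000 m bandwidth and 30 m cells; the exact normalization used in the paper is an assumption.

```python
# A minimal sketch of the Gaussian kernel density surface with the parameters
# stated above (h = 1000 m bandwidth, 30 m output cells). The normalization
# convention is assumed; coordinates are taken as projected metres.

import numpy as np

def gaussian_kde_grid(points_xy, xmin, ymin, xmax, ymax, h=1000.0, cell=30.0):
    """Return (xs, ys, density): each point adds a Gaussian bump of bandwidth h."""
    xs = np.arange(xmin, xmax, cell)
    ys = np.arange(ymin, ymax, cell)
    gx, gy = np.meshgrid(xs, ys)
    density = np.zeros_like(gx, dtype=np.float64)
    norm = 1.0 / (2.0 * np.pi * h ** 2)
    for px, py in points_xy:  # brute-force loop; fine for a sketch, slow at scale
        d2 = (gx - px) ** 2 + (gy - py) ** 2
        density += norm * np.exp(-d2 / (2.0 * h ** 2))
    return xs, ys, density
```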
Interaction Analysis between Human Activity and Landscape Pattern
In order to better explore the interaction and coupling mechanism between these two components, we investigate the two effects separately based on the following observations: the difference in the spatial distribution of human activity intensity at one time is mainly determined by the landscape, and so it can reflect the impact of the landscape pattern on human activity; yet, the evolution of the landscape pattern with time in a region is mainly caused by human activity and can reflect the effect of human activity on the landscape. In order to understand the difference of density values in a semantic context, we classify the intensity values into five grades. The approach to choosing the threshold values is as follows: first, intensity contour lines are generated based on the intensity image; then, the contours that best fit the boundaries of the geographical functional regions of Guilin are selected and their associated values used to establish the grading standard shown in Table 2. After grading, the difference of human activity intensity matches the difference of functional characteristics. Correlation and evolution analysis is performed between human activity intensity (or intensity grade) and the landscape pattern indexes to explore the fine-grained interaction between the two components.
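The actual cut values come from Table 2, which is not reproduced here. The sketch below only illustrates how such contour-derived thresholds could be applied to the intensity surface; the numbers are placeholders, not the paper's values.

```python
# A minimal sketch of converting the intensity surface into five grades.
# GRADE_THRESHOLDS are hypothetical placeholders for the Table 2 cut values.

import numpy as np

GRADE_THRESHOLDS = [7000, 20000, 30000, 50000]  # hypothetical cut values

def grade_intensity(density_grid, thresholds=GRADE_THRESHOLDS):
    """Map continuous intensity values to grades 1 (weakest) .. 5 (strongest)."""
    return np.digitize(density_grid, bins=thresholds) + 1
```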
Landscape Pattern in the Lijiang River Basin
Table 3 shows the change of landscape indexes at the landscape level in the Lijiang River Basin from 2009 to 2013, including patch density (PD), edge density (ED), Shannon's diversity index (SHDI), contagion index (CONTAG), mean proximity index (PROX_MN), mean shape index (SHAPE_MN), total core area (TCA) and disjunct core area density (DCAD). The PD goes up from 1.7139 to 2.3316, and the ED also experiences a slight increase from 30.5714 to 34.3983. The CONTAG drops from 66.3172% to 66.1679%. The change of these indexes shows that the landscape became more and more fragmented during the four years. On the other hand, the SHDI keeps stable, which indicates that the diversity of the landscape does not change much, even though the landscape is experiencing fragmentation. Table 4 demonstrates the change of landscape indexes at the class level. The patch densities of built-up land and farmland are larger than those of woodland and water in both 2009 and 2013, indicating that the fragmentation degree of built-up land and farmland is more severe. Comparing the differences of landscape indexes between 2009 and 2013, it can be found that NP (number of patches) and PD are increasing for every land class.
Human Activity Distribution
Figure 3b shows the GPS trajectory data after preprocessing. The intensity image of human activity is calculated using the method in Section 2.6, and the result is shown in Figure 3c. The highest intensity value is 70,494. Different colors are used to display the strength of human activity: red regions represent strong human activity, yellow represents medium strength, and green shows weak human activity. Figure 3d shows the spatial distribution of human activity for the different intensity grades. It can be seen that the human activity of Grade 1 mainly appears in the remote natural areas; the Grade 2 region spreads along major roads; the Grade 3 region is the suburban area; and the Grade 4 and Grade 5 regions are a belt around downtown and the downtown area, respectively. The covering area percentages of human activity from Grades 1-5 are 83.6%, 13.9%, 1.6%, 0.6% and 0.3%, respectively. It can be concluded that the highly intensive human activities are distributed in a relatively small portion of the basin. The most important land class in the Grade 1 region is woodland, accounting for nearly 80% of the total area, which indicates that very few people live in woodlands. There is very little built-up land and water in this region, each accounting for less than 2%. In the Grade 2 region, the percentage of woodland drops significantly to about 36% compared with that in the nature-dominant area, while farmland and built-up land account for about 40% and 12%, respectively. The PLAND for each land class is relatively balanced, which is likely because there are many roads and scenic spots in this region. In the Grade 4 and Grade 5 regions, characterized by intensive human activities, the PLAND of built-up land increases to around 70%, and the areas of woodland, water and farmland are close. These two regions together are the urban area of Guilin, indicating that the dominant land class is built-up land. Overall, the dominant land class gradually changes from woodland to built-up land from the Grade 1 region with the weakest human activity to the Grade 5 region with the most intensive activity.
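The Grade 1-5 comparison above is essentially a cross-tabulation of land-cover composition (PLAND) by activity grade. The sketch below shows one way to compute it, assuming the grade map and the land-cover classification are co-registered 30 m rasters of integer codes.

```python
# A minimal sketch of cross-tabulating land-cover composition (PLAND) by
# activity grade. Both inputs are assumed to be co-registered numpy rasters
# of integer codes; class_ids are placeholders for the five land classes.

import numpy as np

def pland_by_grade(grade_grid, land_cover, n_grades=5, class_ids=(1, 2, 3, 4, 5)):
    """Return {grade: {class_id: percent of that grade's area}}."""
    result = {}
    for g in range(1, n_grades + 1):
        cells = land_cover[grade_grid == g]
        total = cells.size
        result[g] = {
            c: (100.0 * np.count_nonzero(cells == c) / total) if total else 0.0
            for c in class_ids
        }
    return result
```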
Interaction Analysis Result between Human Activity and Landscape Pattern
By analyzing the change of human activity intensity with PLAND, we find that the effects of woodland, water, built-up land and farmland on human activity show an obvious monotonic pattern. The intensity of human activity significantly increases as the PLAND of built-up land grows, and it also increases as the PLAND of water goes up, although at a slower rate. In other words, the people in the Lijiang River Basin prefer to live in the regions with more built-up land and water, but the need for built-up land is larger than that for water. A contrary trend is found for woodland compared to built-up land and water: as the woodland percentage decreases, the intensity of human activity significantly increases. This indicates that woodland restrains human activity. The above results reinforce two previous research findings in a more precise way: water is a key factor evoking interest, calm and positive feelings and, therefore, has a high aesthetic preference [48]; in contrast, dense forests are less preferred [49]. It can be inferred that the most attractive landscape element composition is a high portion of built-up land and a certain amount of woodland and water.
We analyzed the impact of human activity on the landscape based on the change of the landscape elements in the same region. Figure 7a shows the change of the PLAND of woodland during the four years from 2009 to 2013. It can be seen that in the region with intensity values between zero and 7000, which covers the nature-dominant areas and the outskirts of Guilin, the PLAND of woodland increases as time goes by, and the amount of the increment grows as the activity intensity increases. This indicates that this region witnessed afforestation during this time period. In addition, the PLAND of woodland increases in the region with intensity values between 20,000 and 50,000, but decreases in the region with intensity values over 50,000. It can be concluded that the greening rate increases in most urban areas, but drops in crowded areas where human activity is most intensive.
Figure 7b shows the change of water during the same time period. We can find that the total area of water decreases over the four years, and the regions where the water area is decreasing are mainly those with strong human activity (intensity values over 20,000). In the regions with less intensive human activity, the area of water keeps stable, indicating that the shrinking of water happens mainly within the urban area and is probably caused by human activity.
Figure 7c demonstrates the change of built-up land during the four years. Overall, the built-up land shows a growth trend. In the regions with weak human activity, the increment of built-up land is obvious, and the amount of increment rises as the activity intensity increases. The PLAND of built-up land does not change much in the regions with intensity values over 30,000. The large gap between 20,000 and 30,000 reflects that the regions where the built-up land significantly increases are not the most developed areas of the city, but the fast developing outskirts of the city.
Figure 7d shows the change of farmland. Overall, the total area of farmland decreases from 2009 to 2013, and the regions where the decrement happens are mainly suburban areas. Looking at the increment of woodland in the same region, it can be inferred that the Grain for Green policy, a program undertaken by China's government for converting sloped cropland to forest or grassland in order to tackle deforestation, has had an effect in the Lijiang River Basin, which is among the first regions to implement this program [50].
Figure 7e shows the change of other types of land. Overall, the percentage of other land shows a decreasing trend, especially in the nature-dominant area and the suburban area, and keeps stable in other regions.

PD is also selected as a metric for further analyzing the relationship between human activity and landscape. Figure 8 shows the change of PD with human activity density. It can be seen that as the intensity of human activity increases from the nature-dominant area (Grade 1) to the road-dominant and suburban areas (Grades 2 and 3) and to the city core belt (Grade 4), the PD increases as well; in other words, the landscape becomes more and more fragmented, and the degree of fragmentation grows as the activity intensity increases. However, in the city core area (Grade 5), although the human activity intensity increases, the PD drops, which is contrary to the overall trend. It can be inferred that the effect of human activity on the landscape within the urban area is influenced by city planning rules or regulations, and the human activity there is more regular than in other areas. In addition, the comparison of PD between 2009 and 2013 shows that the landscape in the non-urban area is becoming more fragmented, while that in the urban area is becoming more regular as time goes by.

Compared to the landscape-level result, the result at the class level provides an insight into the relationship between human activity and the degree of fragmentation of each land class, as shown in Figure 9. It can be seen that the PD of woodland, water and other land classes (mainly grassland and shrub in the study area) increases with activity intensity. On the contrary, the PD of built-up land and farmland decreases with the growth of activity intensity. The former three land classes are human activity-influenced natural land, while the latter two are man-built land. Therefore, we can infer that, overall, human activity causes the natural land to become fragmented while transforming it, but makes the man-built land more regular. However, in urban areas, woodland and other land classes are becoming more clustered, which is contrary to the overall trend. This further enriches the research results on landscape at a finer scale: human activity generally imposes opposite impacts on natural land and man-built land, but its impacts on natural and man-built land are the same (clustered and regular) within urban areas due to the city and landscape planning regulations.
Conclusions
The advent of remote sensing technology characterized by airborne and satellite-based Earth observation platforms provides an effective way to continuously monitor the change process of landscape; meanwhile, participatory sensing data generated by humans have become a direct way of collecting human activity at high spatial and temporal resolution. Inspired by the cross and integration of geographic processes, ecological processes and emerging subjects envisioned by [51,52], this paper proposes the idea of the integrated use of remote sensing and participatory sensing data to analyze the interaction between human activity and landscape pattern at a fine-grained scale. The Lijiang River Basin in China's Guilin City is selected as the study area. Experimental results show that human activity and landscape pattern are mutual impact factors in Lijiang River Basin. The research findings are two-fold: (1) as for the impact of landscape on human activity, by analyzing the change of human activity with the change of land percentage for each land class, we find that built-up land and water boost human activity, and humans are clustered in the regions with a large portion of built-up land and water; on the contrary, woodlands restrain human activity; (2) as for the impact of human activity on landscape, its impact on landscape differs from land class to land class. Overall, human activities tend to cause natural land, such as woodland and water, to become scattered and fragmented, and the degree of fragmentation increases with the growth of activity intensity, while they make man-built land, such as built-up land and farmland, clustered and regular. Nevertheless, the human activity within the urban area is opposite from that outside the urban area: the human activity in the suburban area is relatively unconstrained, while that within cities is standardized and regular.
Although this work has made progress in applying emerging technologies to landscape research, we need to note the following aspects: (1) compared to the socioeconomic indexes in the existing literature, the trajectory intensity is a more direct and accurate indicator of human activity and makes it possible to analyze the interaction between human activity and landscape at the same spatial scale. However, similar to other indexes, the trajectory intensity is not a comprehensive indicator yet, and it mainly represents mobility information. The analysis results in this paper reveal the interaction characteristics between the two components from the perspective of mobility. (2) Different from the bus schedules, the GPS trajectory data contain the whole journey information and can help derive the spatial distribution of human activity in continuous space. However, the rate of occupied seats is not taken into account due to data access difficulties. Although the number of bus trips is not equal to the number of bus rides, these two numbers can be deemed linearly related in the study area, because the tourist shuttles and coaches are the two dominant transportation tools and are crowded most of the time. Therefore, the analysis result can still give an insight into the relationship between human activity and landscape. The use of more data, such as ticket statistics or mobile phone positioning data, if possible, would be helpful for a more detailed analysis.
More research work needs to be conducted to gain a more comprehensive and deeper understanding of the human-landscape interaction, including the following: (1) Participatory sensing data are field-based, while remote sensing images are raster-based.
These two datasets represent information in two completely different forms, which brings difficulties to the integrated analysis of data. How to build a model or devise a method to compare, overlay and fuse these two types of data would be a key problem to solve. (2) Every type of participatory sensing data is collected or created by a certain group of people and, thus, represents the activity of part of the entire population. In order to allow the analysis result based on participatory sensing data to be more representative, more sources or forms of participatory sensing data need to be used. Therefore, the fusion analysis of multiple sources of data needs to be considered. (3) This study explores the interaction between human activity and landscape pattern from the point of view of intensity and ignores the type difference of the population. In fact, personal experience and the utility function also play a role in the effect of the landscape on humans. For example, favorite sites attract visitors because of the restorative effect caused by feelings such as calm, happiness, and being away from everyday life [53,54], but for local people, the visual characteristics of the landscapes are not as important as their functions [10]. Therefore, how the type of population influences the interaction between human and landscape will be a focus of future work.
Figure 1. Location of the study area.
Landsat images of the study area in 2009 and 2013 were downloaded from the USGS data archive. The images in 2009 were collected by the Thematic Mapper (TM) on Landsat 5, while those in 2013 were collected by the Operational Land Imager (OLI) on Landsat 8. The spatial resolution of both TM and OLI images is 30 m. Four scenes of Landsat images (path: 124-125; row: 42-43) cover the study area, and the mosaic images are shown in Figure 2 in the form of a false color composite.
Figure 4. Flowchart of the interaction analysis between human activity and landscape.
Figure 6a,b illustrates the percentage of land (PLAND) for different intensity grades of human activity for each landscape element in 2009 and 2013, respectively. Overall, the change trend of PLAND with intensity grade in 2009 is very similar to that in 2013 for every land class. In addition, the composition percentage of the five land classes for different intensity grades is also similar in the two time periods. The above phenomena manifest that the relationship between landscape and human activity in Lijiang River Basin is time independent.
Figure 6 also illustrates the dominant land class in regions with different grades of human activity. The most important land class in the Grade 1 region is woodland, accounting for nearly 80% of total area, which indicates that very few people live in woodlands. There are very little built-up land and water in this region, the percentage of which is less than 2%. In the Grade 2 region, the percentage of woodland significantly drops down to about 36% compared to that in the nature dominant area. The farmland and built-up land account for about 40% and 12%, respectively. The PLAND for each land class is relatively balanced, which is likely because there are many roads and
Figure 8. Change of patch density with human activity intensity at the landscape level.
Figure 9. Change of patch density with human activity intensity at the class level: (a) 2009; (b) 2013.
Table 1. Part of the GPS tracking data in Lijiang River Basin.
Table 2. Grading standard of human activity intensity.
Table 4. Metrics at the class level in Lijiang River Basin between 2009 and 2013. | 2016-07-09T08:41:28.331Z | 2016-06-29T00:00:00.000 | {
"year": 2016,
"sha1": "d47eb0dd523edb2fe7524740b573cecc2bb17838",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2220-9964/5/7/104/pdf?version=1467192044",
"oa_status": "GOLD",
"pdf_src": "Crawler",
"pdf_hash": "d47eb0dd523edb2fe7524740b573cecc2bb17838",
"s2fieldsofstudy": [
"Environmental Science",
"Geography"
],
"extfieldsofstudy": [
"Computer Science",
"Geography"
]
} |
247787934 | pes2o/s2orc | v3-fos-license | Recent advances in pharmacogenomics research of anti-asthmatic drugs: a narrative review
Background and Objective Bronchial asthma, a common respiratory disease in children and young adults, is characterized by hyperresponsiveness and reversible narrowing of the airways which manifest clinically as shortness of breath, cough, and/or wheezing. Although its pathogenic mechanism remains unknown, it is known that asthma patients have substantial interindividual variability in drug responsiveness, in which genetic factors play key roles. To improve the understanding of the biological mechanisms of asthma and support the recognition of diagnostic and therapeutic targets, the main purpose of this article is to optimize drug selection by analyzing genes associated with different drug responsiveness in asthmatic patients through the use of genomic techniques. Methods β2-agonists, inhaled corticosteroids (ICS), and leukotriene modulators are the drugs most commonly used to treat asthma, and major genetic variations associated with differential response to these three drugs were identified via candidate gene association analysis, genome-wide association study (GWAS), and RNA sequencing. Key Content and Findings Genomics focuses on the effects of genetic variations in a group of genes. Most current studies have focused on the effect of single gene polymorphisms on drug efficacy, but the pharmacogenomics of asthma is inherently complex, with each factor having a small effect on drug responsiveness, and no single locus has yet been able to predict the variability in drug responsiveness. Conclusions Epidemiological research has shown a worldwide increase in the prevalence of bronchial asthma over the past four decades. Genomic approaches can be used to screen for genetic variants associated with drug response. Stratifying patients prior to treatment helps to optimize drug selection, maximize the effectiveness of individual treatment, and improve clinical outcomes.
Introduction
Bronchial asthma, a common respiratory disease in children and young adults, is characterized by hyperresponsiveness and reversible narrowing of the airways which manifest clinically as shortness of breath, cough, and/or wheezing. Airway inflammation is the pathological feature of asthma and is closely related to immune response cells, inflammatory mediators, cytokines, and adhesion molecules. The past 3 decades have witnessed a rapid increase in the prevalence of bronchial asthma. It is projected that approximately 400 million people worldwide will experience asthma by 2025 (1). An epidemiological study in the United States revealed that there were more than 20 million people (or 8% of the overall population) with bronchial asthma nationwide in 2016, and the cost of diagnosing and treating the disease was estimated to be 20 billion USD (2). Asthma is mainly treated medically; however, as with most medications used in disease treatment, anti-asthmatic drugs show notably different responsiveness among individual patients, which limits the clinical efficacy of the medications. Many factors may affect individual response to medications, including gender, age, diet, smoking, disease status, and drug interactions (3). A large proportion of such interindividual variability in drug responsiveness can be explained by genetic factors, and genetic variation across an array of genes has been shown to be associated with differences in patients' response to anti-asthmatic drugs. Genomics has been used to study the effects of genetic variation of many genes, at the levels of both DNA and RNA. Genomics in the field of drug therapy has focused on how individual genetic differences affect interindividual variability in drug responsiveness. Pharmacogenomics is a new discipline that offers the possibility of personalized drug selection with genetic information to improve effectiveness or avoid adverse reactions. Here, we have summarized the recent advances in the application of genomics to anti-asthmatic medications. Specifically, several genes associated with common asthma drugs are elaborated. We present the following article in accordance with the Narrative Review reporting checklist (available at https://atm.amegroups.com/article/view/10.21037/atm-22-291/rc).
Methods
Information used to write this paper was collected from the sources listed in Table 1.
Genomics: overview and analytic methods
Genomics focuses on the effects of genetic variations in a group of genes. Such changes include single nucleotide polymorphism (SNP), base insertion or deletion, copy number variation (CNV), and variable number of tandem repeats (VNTR). Several of these variants influence the number, timing, and function of encoded proteins, thus affecting certain physiological and pathological processes of the organism and its response to the outside world (4). As shown in Figure 1, the commonly used analytical methods in genomics include candidate gene association, genomewide association studies (GWAS), RNA sequencing, and biological pathway analysis. Candidate gene association analysis is the study of associations between variants in genes of interest and disease phenotypes and is commonly used to analyze alleles in patients with different drug responses. Building on the existing knowledge of the function of a specific gene, it identifies and selects SNPs with potential function, detects their presence in patients and controls having the feature, and finally correlates the gene variants with drug response profiles. The genetic association is large when the minor allele frequencies (MAF) of the SNPs are greater than 10%. The strength of candidate gene association analysis is that it needs a relatively small sample size and is simple and economical to conduct; however, it requires prior knowledge of the function of genes associated with drug response, and selection of genes can be difficult if only limited information is available (5). Meanwhile, GWAS allows the analysis of thousands of SNPs, associating them with specific phenotypes or drug responses. It typically detects SNPs in the set of genomic regions with the greatest inter-individual variation on each individual's DNA microarray, with each SNP being tested independently. The GWAS method is characterized by its powerful statistical ability as it can process large sample sizes and detect and analyze entire genomes (6). In contrast, RNA sequencing helps to discover new genes involved in drug response by analyzing differences in expression profiles, usually in cell lines (7). Biological pathway analysis means that after the identification of candidate genes, other potential genes that influence drug action can be discovered by analyzing genetic networks and pathways (8).
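As a minimal, hypothetical sketch of the candidate gene association idea described above, the snippet below tests a single SNP for association with drug response using a 2x2 contingency table (carriers versus non-carriers of the minor allele, good versus poor responders). The counts are invented purely for illustration, and the chi-square test with an odds ratio is only one of several statistics used in practice.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical counts for one candidate SNP (not real study data):
# rows = carriers / non-carriers of the minor allele
# columns = good responders / poor responders to the drug
table = np.array([
    [30, 45],   # carriers
    [80, 40],   # non-carriers
])

chi2, p_value, dof, expected = chi2_contingency(table)

# Odds ratio as a simple effect-size measure for the carrier group
odds_ratio = (table[0, 0] * table[1, 1]) / (table[0, 1] * table[1, 0])

print(f"chi2 = {chi2:.2f}, p = {p_value:.4f}, OR = {odds_ratio:.2f}")
```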
Genomic association of commonly used anti-asthmatic drugs
The analytic approaches shown in Figure 1 can be summarized as follows. Candidate gene association analysis: it identifies and selects SNPs with potential function, detects their presence in subjects and controls having the feature, and finally correlates the gene variants with drug response profiles. Genome-wide association studies (GWAS): GWAS detects SNPs in the genomic region with the greatest interindividual variation on each individual's DNA microarray, with each SNP being tested independently. RNA sequencing: by analyzing differences in expression profiles, usually in cell lines, it helps to discover new genes involved in drug response. Biological pathway analysis: after the identification of candidate genes, the analysis of genetic networks and pathways can contribute to discovering other potential genes that influence drug action.
β2-adrenergic receptor agonists, inhaled glucocorticoids
(ICS), leukotriene modulators, and anticholinergics are the most commonly used medications for asthma. These drugs can be divided into 2 groups: (I) anti-inflammatory drugs, which include ICS and long-acting beta agonists (LABA); and (II) drugs for rapid relief of symptoms such as acute bronchial stenosis, chest tightness, and wheezing, including short-acting beta agonists (SABA) (9). To date, most genomic studies on asthma pharmacotherapy have focused on 3 drug classes: beta agonists, ICS, and leukotriene modulators.
β2-agonist-related gene variations
Depending on the duration of action, β2-agonists fall into 3 classes: SABA (e.g., fenoterol, isoprenaline, levoproterenol, and salbutamol), LABA (e.g., salmeterol and formoterol), and ultra-long-acting agonists (e.g., vilanterol and indacaterol) (10). Discovered by Kobilka et al. in 1987, the adrenoreceptor beta 2 (ADRB2) gene is localized to chromosome 5q31-q32, an area linked with asthma-associated phenotypes (11). More than 80 SNPs have been identified for ADRB2, the most common being Arg16Gly (rs1042713) and Glu27Gln (rs1042714). The approximate frequency of the Arg16 variant is 39.3% in whites, 49.2% in blacks, and 51.0% in Han Chinese (12). It has been shown that homozygotes of Arg16 show a greater bronchodilating response to salbutamol compared to homozygotes of Gly16, with a significant increase in forced expiratory volume in 1 second (FEV1) after drug administration. However, the decline in FEV1 was also faster in individuals with the Arg16 genotype after LABA use, and several patients who received salmeterol treatment even suffered from severe asthma exacerbations (13). Children who are homozygous for Arg16 have poor outcomes while receiving treatment with LABA and ICS, and therefore montelukast has been recommended as an alternative to salmeterol as customized second-line asthma controller therapy in asthmatic children. Other uncommon nonsynonymous coding variants of ADRB2 have also been reported. For instance, the SNP rs1800888 results in the Thr164Ile substitution; compared with carriers of wild-type Thr164, individuals homozygous for Ile164 are 3- to 4-fold less responsive to LABA (14).
The adenylyl cyclase type 9 (ADCY9) gene is part of the signaling pathway of the β2-adrenergic receptor (β2-AR). Slob et al. found that the SNP Ile772Met (rs2230739 in ADCY9) was related to acute bronchodilation in response to SABA in asthmatic patients and also to changes in lung function in response to ICS (15). Arginases, which are encoded by ARG1 and ARG2, metabolize L-arginine in vivo; L-arginine in turn generates nitric oxide (NO) in the presence of nitric oxide synthase (NOS), and NO is an endogenous bronchodilator. Ziyab et al. found that ARG1 polymorphisms (rs2781659 and rs2781667) were related to acute SABA-induced bronchodilation in asthmatic patients (16). A Dutch asthma population-based cohort study demonstrated that 2 polymorphisms in ARG2 (rs17249437 and rs3742879) were related to asthma and more serious airway obstruction (17). The bioactivity of NO is mediated through the formation of S-nitrosothiols (SNOs), whereas S-nitrosoglutathione reductase (GSNOR) metabolizes SNO. A recent sequencing study of the GSNOR gene in the United States identified 13 SNPs with an allele frequency of >5%. The authors demonstrated an interaction between GSNOR and ADRB2 in Mexicans, which was believed to be associated with decreased bronchial responsiveness to bronchodilators (18). Through GWAS, Kabesch et al. identified 4 asthma-associated SNPs (rs350729, rs1840321, rs1384918, and rs1319797) in the spermatogenesis associated serine rich 2 like (SPATS2L) gene on chromosome 2, which may be associated with β2-adrenergic receptor downregulation (19).
ICS-related gene variations
The earliest studies on glucocorticoid responsiveness were focused on the glucocorticoid receptor gene nuclear receptor subfamily 3, group C, member 1 (NR3C1), which is located on chromosome 5q31. It has been shown that 2 SNPs of this gene have a potential impact on glucocorticoid responsiveness, one of which, Asn363Ser (rs56149945 in NR3C1), has been recognized in some populations, and lymphocytes of individuals carrying this genetic variant have a higher sensitivity to dexamethasone compared to noncarriers (20).
The corticotropin releasing hormone receptor 1 (CRHR1) gene encodes the main receptor for corticotropin-releasing hormone and is a core regulator of corticosteroid synthesis and catecholamine generation. Rijavec et al. observed a strong association between improved pulmonary function after ICS treatment and CRHR1 SNPs (rs1876828, rs242939, and rs242941), and individuals homozygous for these polymorphisms had significantly higher mean FEV1 than other patients (21). A low-affinity receptor for immunoglobulin E (IgE), a core molecule for B-cell stimulation, is encoded by the Fc epsilon receptor 2 (FCER2) gene. It was observed that the SNP rs28364072 of FCER2 is related to an increased risk of re-exacerbation after ICS treatment in asthmatic children, who also had significantly higher serum IgE levels, possibly by a mechanism in which FCER2 variants adversely affect the normal negative feedback mechanism on IgE synthesis (22). The stress-induced phosphoprotein 1 (STIP1) gene encodes a heat shock protein, which is essential for the assembly and activation of the glucocorticoid receptor. It was shown that SNPs (rs6591838, rs2236647, and rs1011219) in STIP1 are strongly related to improved FEV1 responses in asthmatic patients with reduced lung function after 4 weeks of glucocorticoid treatment (23). Weitzel et al. performed RNA-seq analysis of the transcriptome of 4 classes of human airway smooth muscle (ASM) cells and identified cysteine-rich secretory protein LCCL domain containing 2 (CRISPLD2), encoding a secreted protein associated with lung growth and endotoxin control (24). The CRISPLD2 gene was found to have an SNP associated with ICS resistance in asthmatic patients. Reverse transcription polymerase chain reaction (RT-PCR) and western blotting further showed that dexamethasone treatment increased CRISPLD2 messenger RNA (mRNA) and protein expression levels in ASM cells, and functional studies confirmed that CRISPLD2 could regulate the anti-inflammatory roles of glucocorticoids in ASM (25). Another candidate gene associated with ICS treatment response is T-box transcription factor 21 (TBX21). Mice with a targeted deletion of the TBX21 gene rapidly exhibited airway hyperresponsiveness, increased airway eosinophilia, and accelerated airway remodeling processes. The SNP rs2240017 of TBX21 was related to improved bronchoprotection (26). Hernandez-Pacheco et al. conducted a cohort study and found that patients heterozygous for rs2240017 had significantly lower airway hyperresponsiveness during ICS treatment compared to those homozygous for this SNP (27).
Leukotriene modulator-related gene variations
Leukotriene modulators have potent anti-inflammatory activity and can improve the clinical course of asthma with minimal side effects. Depending on their mechanism of action, they are divided into 2 classes: cysteinyl leukotriene receptor antagonists (e.g., montelukast, zafirlukast, pranlukast, and tomelukast) and 5-lipoxygenase inhibiting agents (e.g., zileuton).
To date, the vast majority of pharmacogenetic studies on leukotriene modulators have focused on variants of the 5-LOX gene (ALOX5) and LTC4 synthase (LTC4S). Located on chromosome 10q11.12, the ALOX5 gene has 14 exons. Its activity is related to many repetitive sequences in the promoter area Sp1/Erg1. A mutant ALOX5 repeat polymorphism has been related to reduced exacerbation rates in montelukast-treated asthma patients. Another study in Spain showed a reduced number of acute asthma exacerbations and increased FEV1 in patients with wild-type alleles or heterozygotes; in addition, these patients had increased urinary leukotriene E4 concentrations, reflecting increased leukotriene biosynthesis (28). Candidate gene analysis suggested that other ALOX5 SNPs (rs2115819, rs4987105, and rs4986832) might also affect the response to montelukast (29). The leukotriene C4 synthase gene (LTC4S) belongs to the S-glutathione synthase family, catalyzing the transformation of LTA4 to LTC4. The most significant SNP identified so far is rs730012, which is associated with increased generation of LTC4 in eosinophils (30). Pham et al. found a 73% reduction in the risk of acute asthma exacerbations in montelukast-treated patients homozygous for rs730012 (31). The ATP binding cassette C1 (ABCC1) gene, which encodes multidrug resistance protein 1 (MRP1) and exerts a significant effect on the transmembrane transport of LTC4, has also been studied. A polymorphism of this gene (rs119774 in LTC4) was associated with the montelukast treatment response, and individuals heterozygous for rs119774 had a 24% higher FEV1 compared to those homozygous for this polymorphism (32). Meanwhile, LTA4 hydrolase acts to convert LTA4 to LTB4, and the gene encoding it is located on chromosome 12q22. A polymorphism of this gene (rs2660845 in LTA4) is related to the risk of acute asthma exacerbations during montelukast treatment. Individuals heterozygous for rs2660845 have a 4-fold higher risk of acute asthma exacerbations than the homozygous individuals (33). The mechanism may be that this SNP lowers LTA4 hydrolase activity, leading to a decrease in LTB4 synthesis, which stimulates the LTC4-synthesis pathway to promote the synthesis of cysteinyl leukotrienes. The solute carrier organic anion transporter family member 2B1 (SLCO2B1) gene encodes protein 2B1, which exerts a significant effect on the active transport of organic anions by the intestinal wall. rs12422149 is associated with the transport and serum level of montelukast, and individuals with rs12422149 had a 39% lower serum level of montelukast than controls (34). A summary of the asthma drug treatment response-related genes is shown in Table 2. Existing studies are, however, limited by issues such as population stratification and shortage of reproducibility, which need to be addressed in future studies. High-throughput techniques have made large-size genotyping and expression studies possible in recent years. In addition, gene-environment interactions, mutual effects between variants in various genes and genetic pathways, epigenetic regulation, and transcriptional regulation by small interfering RNAs (siRNAs) and long non-coding RNAs (lncRNAs) are also topics for future pharmacogenomics studies. For instance, DNA methylation is an epigenetic alteration in which the addition of methyl groups to cytosine residues of cytosine- and guanine-rich DNA fragments (CpG islands) within gene promoters blocks the binding of transcription factors, which leads to downregulation of gene expression and may affect disease susceptibility (35).
Interferon (IFN) gene promoter hypermethylation and interleukin-4 (IL-4) promoter hypomethylation have been revealed as related to elevated airway IgE levels in asthmatic patients, and DNA methylation of the 5-LO promoter regulates the expressions of key genes in the leukotriene pathway (36).
Most current studies have focused on the effect of single gene polymorphisms on drug efficacy, but the pharmacogenomics of asthma is inherently complex, with each factor having a small effect on drug responsiveness, and no single locus has yet been able to predict the variability in drug responsiveness. Therefore, developing statistical models to predict treatment responsiveness based on multiple genetic loci is warranted. Integrative genomics approaches that combine genome-wide SNP data with gene expression profiles will also be useful tools for recognizing new genes or mechanisms that lead to inter-individual variability in drug response. In recent years, great strides have been made in human genome analysis technologies and international information sharing networks. Large whole-genome sequencing projects, such as the NHLBI Exome Sequencing Project, 1000 Genomes, and gene sequencing projects in African ancestral populations, have achieved excellent results and created databases of rare genetic variants that could serve pharmacogenetic studies in different racial and ethnic groups in the future.
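As an illustrative sketch of what such a multi-locus statistical model could look like, the snippet below fits a regularized logistic regression on several SNP genotypes coded as minor-allele counts. The genotype matrix, the simulated outcome, and the model choice are entirely hypothetical and are not taken from any of the studies cited above.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical data: 300 patients x 10 SNPs, genotypes coded as 0/1/2
# (number of minor alleles), plus a binary treatment-response label.
X = rng.integers(0, 3, size=(300, 10)).astype(float)
# Simulated outcome weakly influenced by two of the loci (illustration only)
logit = 0.6 * X[:, 0] - 0.4 * X[:, 3] - 0.2
y = (rng.random(300) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

# L2-regularized logistic regression over multiple loci at once
model = LogisticRegression(penalty="l2", C=1.0, max_iter=1000)
scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print("cross-validated AUC:", round(scores.mean(), 3))
```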
Summary
Although the etiology of asthma is still not fully elucidated, genetic factors have been demonstrated to play key roles. Response to anti-asthmatic drug therapy varies widely among patients, and some patients may even experience life-threatening adverse drug reactions.
Genomic approaches can screen for genetic variants associated with drug response. Stratifying patients prior to treatment helps to optimize drug selection, maximize the effectiveness of individual treatment, and minimize the risk of adverse reactions. Genomics can also offer new insights into the mechanisms of drug action and facilitate the development of novel therapeutic options in the future. | 2022-03-30T15:20:36.408Z | 2022-03-01T00:00:00.000 | {
"year": 2022,
"sha1": "175ff6656bdf9a2f18c264cc010ed635780ec52c",
"oa_license": "CCBYNCND",
"oa_url": "https://atm.amegroups.com/article/viewFile/91982/pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "dafdae689cf709521d368e45eb982d25b73290ad",
"s2fieldsofstudy": [],
"extfieldsofstudy": []
} |
117836968 | pes2o/s2orc | v3-fos-license | The Epidemics of Corruption
We study corruption as a generalized epidemic process on the graph of social relationships. The main difference to classical epidemic processes is the strong nonlinear dependence of the transmission probability on the local density of corruption and the mean field influence of the overall corruption in the society. Network clustering and the degree-degree correlation play an essential role in corruption dynamics. We discuss phase transitions, the influence of the graph structure and the implications for epidemic control. Structural and dynamical arguments are given why strongly hierarchically organized societies like systems with dictatorial tendency are more vulnerable to corruption than democracies. A similar type of modelling can be applied to other social contagion spreading processes like opinion formation, doping usage, social disorders or innovation dynamics.
Introduction
Corruption seems to be an unavoidable part of human social interaction, prevalent in every society at any time since the very beginning of human history till today. In sharp contrast to the high prevalence of corruption in many countries and the rather large literature on political, social and economical aspects of corruption there is only a small number of attempts to model the dynamics of corruption in a mathematically quantified way. The modelling approach in these few attempts essentially follows two paths. The first is in the sense of microeconomics and incorporates game theoretic aspects (for a recent model in this direction see the book by Steinrücken [17] and the references therein) or rules for maximizing a certain economically based profit functional ( [16] [10]). Then a set of differential equations for the evolution of the mean corruption is derived and a stability analysis done on that basis. In these models one usually makes rather detailed assumptions about the underlying organization structure on which the individuals interact. The second line of approach is more in the sense of cellular automata (CA) models with rather simple state variables and local interaction dynamics. For example in the article by Wirl [19] a simple 1-dimensional deterministic cellular lattice automata model is used to describe the propagation of corruption. Nevertheless, as is well known in CA-modelling, the global dynamical picture can be highly complex and nontrivial.
Up to now all these attempts did not take into account the complex network of social relationships as the underlying structure for the spread of corruption. In this article we will present a model for the spread of corruption on complex networks in the spirit of epidemiology. The model describes aspects of the evolution of corruption in a virtual population and incorporates some basic universal features of corruption. The local interaction dynamics of the model is similar to cellular automata but "lives" not on a lattice type graph like most of the CA-models but on complex networks.
Considering corruption as a nonstandard epidemic process relies on the plausible assumption that corruption rarely emerges out of nothing but is usually related to some already corrupt environment which may "infect" susceptibles. Of course the spontaneous decision of somebody to act corruptly is possible and could easily be handled in the model as an external weak source of infection. One of the very special features in corruption propagation which differs from what is used in describing classical epidemic processes is the threshold-like dependence of the local transition probabilities. By this we mean that a noncorrupt individual gets infected with high probability if the number of corrupt individuals in the group of his direct social contacts (encoded as the set of neighbors in a "friendship" or acquaintance graph) exceeds a certain threshold number. Otherwise, if the number of corrupt individuals in somebody's social neighborhood is below that threshold value, there is only a small probability to get corrupt via such "local" interactions. The second main difference to classical epidemic processes is the mean field dependence of the corruption process. By this we mean that an individual can get corrupt just because there is a high prevalence (or believed prevalence) in the society even when there is no corruption in the local neighborhood. There is another interesting mean field term entering the game, namely that the society strikes back at corruption with an efficiency proportional to the fraction of the noncorrupt people. Both mean field terms are nonlinear, and together with the local propagation mechanisms they give rise to a rather complex dynamical picture.
What is corruption?
Corruption is a substructure of human social interaction. Common sense associates corruption mainly with a deviation from fair play interaction in the development of social relations. Clearly what is meant be fair play depends on the cultural context of a given population/society. This vague description of corruption is in the spirit of sociology and psychology and differs from the more narrow corruption concepts usually considered in economics or political sciences. There, corruption is mainly seen as a misuse of public power to gain profit in a more or less illegal way. In any case, corruption has many different faces in its concrete appearance and no single model approach will be able to describe the whole picture in an adequate way. But this does not at all imply that mathematical models are useless in this situation. They can provide a substantial improvement in our understanding of corruption as long as one clearly defines the aim and limitations of the taken approach.
For the model approach developed in this article we will use the notion of corruption in the more general, first sense. More precisely, our intention is to describe changes in mind ranging from condemning corruption as a criminal act to accepting corruption as an attractive option. Therefore, in this paper we do not introduce the group of state representatives or officials, since we assume that the essential changes in mind which allow corrupt acts happen long before an individual is in the position to act corruptly. Empirical investigations about motives and "typology" of corrupt actors (see [3] for results from case studies in Germany) have shown that the majority of individuals involved in corruption affairs are highly educated, well positioned with respect to social status, and do not think they have done something wrong. There is a notorious problem in finding good empirical data which would allow one to estimate the prevalence of corruption. Probably the greatest effort over the last years to measure the degree of corruption in various countries was made by "Transparency International" (TA), a non-profit group of individuals and organizations which are highly concerned by the lack of sound data. Since 1995 they have published a yearly corruption report and a so-called Corruption Perception Index (CPI) [18]. Figure 1 gives a CPI-rank plot of the 2004 data from TA. Note that a value of 10 for the CPI corresponds to the absence of corruption. For 2004 Finland holds the top ranking and Germany is in place 15 with an index of 8.2.
It is not our aim to explain the values of the CPI or other corruption data sets, since this would require a semirealistic modelling of the social and economical structure of individual countries which is completely illusionary at the present stage of research. Rather we want to demonstrate which scenarios are dynamically possible and whether there are phase transitions.
Corruption as a generalized epidemic process
In this section we first describe the basic setting for our model structure.
Refinements and more detailed aspects will be discussed later on. Due to the common view, corruption is first of all a property of the relations between individuals, irrespective of which definition of corruption one uses. Since an act of corruption requires that at least one of the participants in a corrupt relation has a mental state which tolerates or even assigns a positive value to (his personal view of) corruption, we will focus mainly on the spread of this mental state change (from not accepting to accepting corrupt acts as an option for one's own activities). Therefore, to discuss corruption as an epidemic process in the aforementioned sense, it is useful to assign a corruption property to the individuals themselves. In the simplest case we just have a time dependent 0-1 state variable ω(x, t) assigned to each individual, encoding whether the vertex is corrupt (1) or not (0) at time t (of course more refined scales for the degree of corruption are possible and will be discussed in a forthcoming paper). The underlying structure on which corruption spreads is a given finite graph G, drawn from a random graph space, with fixed vertex set V = {1, ..., n}. Furthermore, we consider in this article only stationary graphs with no changes in time on the underlying graph structure (the study of corruption on evolutionary graphs requires a paper of its own). The dynamics is specified by conditional transition probabilities (p_ij(x)) which depend mainly on the states on B_1(x) = {y : d(x, y) ≤ 1} and a mean-field term reflecting the influence of the total prevalence of corruption in the society. Here d(·,·) is the usual graph metric on G and d(x) is the degree of x. We define b_t := (1/N) Σ_{y∈V} ω(y, t) as the density of corruption at time t. The standing assumptions on (p_ij(x)) are the following: in other words, the probability to become corrupt depends only on the local prevalence of corruption among the neighbors and the mean corruption in the society, and individuals who became corrupt can cure from corruption with a rate proportional to the density of the noncorrupt individuals in the society. In classical i.i.d. epidemics one would have the following functional dependence for the local part of the conditional probabilities: f(k) = 1 − (1 − ε)^k, which is for small ε and k proportional to εk. For corruption the function f is more like in voter models, that is, below a critical value Δ(x) of the number of corrupt individuals in B_1(x) the value of f is close to zero, and above Δ(x) it is a number α(x) much larger than zero. Due to this property local clustering can force the epidemics to spread, whereas in classical epidemic processes high clustering slows down the spread of an infection due to reinfection of the already infected. We want to illustrate this by two simple examples.
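To make the contrast between the two local infection mechanisms concrete, here is a small sketch comparing the classical i.i.d. infection probability with a threshold-type function of the kind used for corruption; the parameter values are illustrative assumptions, not fitted quantities.

```python
# Classical i.i.d. epidemics: each of the k corrupt neighbours independently
# infects with probability eps, so f(k) = 1 - (1 - eps)^k, roughly eps * k for small eps.
def f_classical(k, eps=0.01):
    return 1.0 - (1.0 - eps) ** k

# Threshold-type infection for corruption: almost no effect below the
# threshold Delta, a much larger probability alpha at or above it.
def f_threshold(k, delta=3, eps=0.01, alpha=0.8):
    if k == 0:
        return 0.0
    return alpha if k >= delta else eps

for k in range(6):
    print(k, round(f_classical(k), 4), f_threshold(k))
```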
Example 1
The simplest, almost trivial example is the one-dimensional lattice Z^1 with additional edges to the next-nearest neighbors. Setting f(1) = μ > 0 and f(i) = 1 for i > 1, it is easy to see that there is a nonzero probability of infecting all vertices starting with one infected individual at time 0.
Example 2
The infection function f will be the same as in example 1. We start with a regular tree of degree 3. Replacing each vertex by a triangle and gluing the triangles along the former edges of the regular tree gives a regular graph of degree 4 where the triangle corners act now as the new vertices. In each neighbor pair of triangles (A, B) (that are the triangles which have a common vertex) we form an edge randomly between the set of vertices lying in A \ B and B \ A (see Fig. 1). Once a triangle is infected the corruption jumps to all the three neighbor triangles due to the extra random edge present between each neighbor pairs of triangles. Hence again we have a nonzero probability that the whole graph becomes infected.
In the above examples we have used a very simple and somehow extreme form of the infection function f. In the following we will investigate the situation for two canonical subclasses of infection functions. We say that f is a vertex independent, fixed threshold infection function if there is a Δ such that f(i) = ε for 0 < i < Δ and f(i) = α ≫ ε for i ≥ Δ. For the second class of functions we assume the threshold to be degree dependent. Namely, we call f_x a vertex dependent, relative threshold function if for some δ ∈ (0, 1) we have f_x(i) = ε for 0 < i < δ·d(x) and f_x(i) = α ≫ ε for i ≥ δ·d(x). Furthermore, one could also consider voter-type infection functions. In this paper we will mainly investigate the spread of corruption for the fixed threshold case. To distinguish between the different ways in which an individual can become corrupt, we will speak about the α-, β-, ε- or γ-process. For convenience of the reader we give in Table 1 a summary of the different processes.
Table 1. Summary of the different processes:
- α-process: the local transmission process for # of corrupt neighbors ≥ Δ (α ≫ ε, β, γ)
- β-process: the mean field transmission process due to the total prevalence or perception of corruption (ε < β < γ)
- γ-process: the corruption recovery/elimination process due to the fight of the society against corruption (β ≤ γ < α)
- ε-process: the classical local epidemic process for # of corrupt neighbors < Δ (ε ≪ α, β, γ)
Note that in contrast to standard voter models we do not have the possibility of a locally induced backflip from the corrupt state to the noncorrupt. A kind of quenched disorder could easily be introduced by randomizing the relevant parameters individually but this will be the subject of a forthcoming paper. Generalizations of classical epidemic dynamics to processes with a local threshold have recently also been studied in the context of models of contagion (see [9] and references therein) but not yet been mixed with global mean field processes.
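The following sketch puts the four processes together in one synchronous update step on a networkx graph. The parameter values, the G(N, M) test graph, the synchronous updating, the linear form β·b of the mean-field infection term, and the way the local and global infection channels are combined are all illustrative assumptions rather than choices fixed by the model description above.

```python
import random
import networkx as nx

def corruption_step(G, state, delta=3, alpha=0.8, beta=0.05, gamma=0.1, eps=0.005):
    """One synchronous update of the corruption dynamics.
    state[x] is 1 (corrupt) or 0 (noncorrupt)."""
    N = G.number_of_nodes()
    b = sum(state.values()) / N          # current density of corruption b_t
    new_state = dict(state)
    for x in G.nodes():
        corrupt_nbrs = sum(state[y] for y in G.neighbors(x))
        if state[x] == 0:
            # local alpha / eps process, depending on the threshold Delta
            p_local = alpha if corrupt_nbrs >= delta else (eps if corrupt_nbrs > 0 else 0.0)
            # global beta process driven by the overall prevalence b
            p_global = beta * b
            if random.random() < max(p_local, p_global):
                new_state[x] = 1
        else:
            # gamma process: society strikes back with efficiency proportional
            # to the density of noncorrupt individuals
            if random.random() < gamma * (1.0 - b):
                new_state[x] = 0
    return new_state

# Example run on a G(N, M) random graph with 5% initially corrupt vertices
random.seed(1)
G = nx.gnm_random_graph(1000, 3000, seed=1)
state = {x: (1 if random.random() < 0.05 else 0) for x in G.nodes()}
for t in range(50):
    state = corruption_step(G, state)
print("final corruption density:", sum(state.values()) / G.number_of_nodes())
```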
The structure of social networks
In the last 10 years there has been enormous progress in understanding the fine structure of social networks. This is mostly due to the availability of large data sets for some special social networks like e-mail correspondence, coauthorship networks in scientific publications and movie actor networks, to name just a few prominent examples. All these networks of social relations share three remarkable properties of the associated graph: (1) the diameter scales at most logarithmically in size; (2) the graphs have a very high, asymptotically non-vanishing clustering coefficient, in other words the graphs are locally far from being tree-like; (3) the degree distribution follows a power law (scale-free graphs). Properties 2 and 3 have striking consequences for the spread of corruption, as will be discussed later on.
There exists meanwhile a large collection of algorithms to generate complex networks with the above mentioned properties.
A widely used quantity to measure the local clustering is the triangle number A(x) := #{triangles containing x} and its averaged value Ā. A natural generalization is the k-clique number C_k(x), defined as the number of complete graphs of order k containing x. In social network graphs A(x) is usually proportional to d(x), and Ā becomes independent of the population size for large N and stays bounded away from zero. Another very remarkable property of real networks is the power-law distribution of the degree. By an asymptotic power-law distribution for a discrete random variable d we denote every functional behavior of the form Pr{d = k} = k^(−λ + o_k(1)) with exponent λ > 1. Most real networks have exponents between 2 and 4 (see [1] for an excellent overview). Classical epidemic processes on such graphs have been studied by many authors, and perhaps the most astonishing result in this context is the absence of an epidemic threshold in case the exponent is below 3 ([15]). This phenomenon is related to the existence of a massive center of size independent diameter induced by the high number of hubs (vertices with an exceptionally large degree). Hubs also play a significant role for the α-process, as will be explained in the next section.
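As a small illustration of these quantities, the snippet below builds a scale-free-like graph with networkx and computes the triangle number A(x), its average, the average clustering coefficient, and the empirical degree distribution. The generator and its parameters are arbitrary choices for illustration, not the graph construction used in the paper.

```python
import networkx as nx
from collections import Counter

# A powerlaw-cluster graph has both a heavy-tailed degree distribution and a
# non-vanishing clustering coefficient (parameters are illustrative only).
G = nx.powerlaw_cluster_graph(n=2000, m=3, p=0.4, seed=7)

triangles = nx.triangles(G)                 # A(x) for every vertex x
A_bar = sum(triangles.values()) / G.number_of_nodes()
print("average triangle number:", round(A_bar, 2))
print("average clustering coefficient:", round(nx.average_clustering(G), 3))

# Empirical degree distribution (the heavy tail corresponds to a few hubs)
degree_counts = Counter(dict(G.degree()).values())
for k in sorted(degree_counts)[:10]:
    print("degree", k, "count", degree_counts[k])
```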
One of the main differences between corruption epidemics and classical epidemics is the different effect of clustering on the epidemic threshold and the total number of infected individuals. In the classical situation any epidemics will be slowed down by the presence of local cycles due to the high probability of reinfection. In corruption epidemics local clustering may speed up the propagation of corruption due to the nonlinear dependence of the infection probability on the number of infected neighbors as was already demonstrated in example 1 in the previous section. In the next section we will give two further examples where the strength of this effect can explicitly be computed.
Phase transitions for the α -process
In this section we want to look at some threshold properties associated with the α-process. We are still far from a good understanding of the quantitative picture of this kind of process for a given type of graph, which is mainly a consequence of our lack of knowledge of how to handle graphs with high local clustering in a mathematically satisfactory way. In this section we want to state just some general observations and numerical results concerning the spread of threshold-like dynamics. Furthermore, we will analyze two examples of tree-like graphs which might serve as an illustration. A more careful mathematical analysis of α-processes requires a paper of its own.
One of the remarkable differences between a classical epidemic process and a process based on local threshold dynamics is the dependence on the initial number of "infected" vertices in the latter case. Classical epidemics does not know such things: either an epidemic process is overcritical (reproduction number R_0 > 1) and a single initially infected vertex infects with positive probability a positive fraction of the whole population, or the process is below criticality (R_0 < 1) and all infected will die out respectively become healthy. In corruption epidemics both parts, the mean field process as well as the local α-process, can have phase transitions with respect to the initial number of corrupt vertices. That means there is a critical initial density of corrupt vertices b_0^c such that for initial densities below b_0^c the number of infected stays as it is or goes down to zero. Above b_0^c the entire population becomes corrupt with high probability. As an illustration we give in Fig. 3 the dependence of b_0^c on the edge density M/N on a classical random graph space G(N, M) with N vertices and M edges.
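A simple way to reproduce this kind of curve numerically is to sweep the initial density and record the final prevalence. The sketch below estimates the critical initial density on a G(N, M) graph by bisection; for simplicity it implements only the deterministic α-process with α = 1 (a threshold cascade) and ignores the β, γ and ε processes, and all sizes, thresholds and the 90% cutoff are illustrative assumptions.

```python
import random
import networkx as nx

def final_prevalence(G, b0, delta=2, seed=None):
    """Deterministic threshold cascade (alpha-process only, alpha = 1):
    a vertex becomes corrupt as soon as it has >= delta corrupt neighbours."""
    rng = random.Random(seed)
    corrupt = {x for x in G.nodes() if rng.random() < b0}
    changed = True
    while changed:
        changed = False
        for x in G.nodes():
            if x not in corrupt:
                if sum(1 for y in G.neighbors(x) if y in corrupt) >= delta:
                    corrupt.add(x)
                    changed = True
    return len(corrupt) / G.number_of_nodes()

# Estimate the critical initial density by bisection (illustrative sizes)
G = nx.gnm_random_graph(2000, 7000, seed=3)
lo, hi = 0.0, 1.0
for _ in range(12):
    mid = (lo + hi) / 2
    if final_prevalence(G, mid, delta=2, seed=3) > 0.9:   # "almost everyone corrupt"
        hi = mid
    else:
        lo = mid
print("estimated critical initial density:", round((lo + hi) / 2, 3))
```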
Although in this paper we mainly concentrate on the case of absolute threshold values Δ, we give for comparison in Fig. 4 the edge density dependence result for a relative, degree dependent threshold Δ(x) = ⌈0.8·d(x)⌉. There is still a critical density, but its value increases with the edge density since the mean threshold now increases proportionally to the mean degree. As already mentioned in section 3, one expects that the presence of clustering (respectively many triangles) decreases the critical density b_0^c since the α-process can propagate more easily. In Fig. 5 the effect of the increase of the triangle number can clearly be seen. Here we used a modified G(N, M) random graph space where triangles are added at random (keeping the total number of edges constant). The threshold value Δ was chosen to be 2 since for higher Δ one has to add higher order complete subgraphs instead of triangles. The next figure (Fig. 6) shows the dependence of the critical density on Δ. The two curves represent the threshold values for an end-prevalence of 10 respectively 90 percent. Since the mean degree in this simulation is about 6.5, one has a vanishing contribution of the α-process above Δ = 8. The critical threshold b_0^c then stays essentially at a value given by the mean field process (see next section for details). To get an impression of the contribution of the different kinds of processes (local α and ε, global β and γ; for details see next section) to the end-prevalence, we give in Fig. 7 the accumulated number of state changes caused by each of the subprocesses till saturation. For small values of Δ the α-process dominates all others. We turn now to a more theoretical consideration, namely which type of vertices (type in the sense of degree and local clustering) are especially well suited for the propagation of corruption via the α-process. Assume we have given a random scale-free graph space with N vertices. We further assume that there are two types of edges (according to the way they were generated): the independent ones, generated at random with just preferences to the degree (like the preferential attachment rule by Albert&Barabasi or the "Cameo-Principle" in [4]), and local ones which are relevant for the creation of triangles. Let further the (asymptotic) degree distribution be given by φ(k). The independent edges are generated with probability p_0 and each individual generates k_0 > 2 edges by himself. From [4] one knows that the triangle number A(x) is proportional to d(x) with a proportionality constant C(p_0). A basic quantity in highly clustered networks is the probability q(x) that two randomly chosen elements from
Assuming that the generation of triangles is a sufficiently independent process, one obtains an expression for the conditional l-clique number. Here C_l(x) is the number of l-cliques (complete graphs of order l) containing x. For l > 4 the power in the k-dependence gets negative and hence the high degree vertices contribute almost nothing to the clique-clustering. Of course all these considerations rely on the assumption of some kind of independence in the triangle-formation process. In any case these results indicate that highly clustered medium degree vertices are especially well suited for the spread of corruption. A similar kind of analysis can be carried out for random graph models which have an intrinsic high probability to generate local cliques, e.g. intersection graphs (for an introduction to random intersection graphs and comparison with Erdös-Renyi random graphs see [12] and [11]).
The above arguments seem to support the conjecture that in corruption epidemics the vertices from the tail of the degree distribution play a less dominant role. This is indeed true in the case of a relative, degree-dependent threshold where hubs are much more difficult to infect than medium or low degree vertices. For absolute thresholds in the αprocess the situation is more complex since for scale free degree distributions with small exponents (λ < 3) there are other mechanisms than local clustering which can cause a radical dropdown of the critical initial density. In Fig.9 we give numerical results for the relation between the critical density b c 0 and the exponent λ keeping the edge density fixed. There is a clear phase transition around λ ∼ 2.3 for ∆ = 5 and λ ∼ 2.9 for ∆ = 2. The explanation of this observation is closely related to a structural phase transition in scale free random graphs at λ = 3 -namely that for most vertices x an asymptotically positive fraction of all vertices has bounded distance to x. To link this property with the α -process one has to look more closely on the degree-degree correlation in scale-free graphs. Depending on the choice of the model one can have very different correlations like: . Formula 2 holds for instance for the Cameo -model ( [4]) whereas formula 3 is valid for scale-free graphs generated via the Molloy&Reed algorithm (the later one represents the random graph space containing all graphs with a given scale-free degree distribution equipped with the uniform measure and was used for the simulations in Fig.8 and 9). Evolutionary graphs like the Albert&Barabasi model have usually asymmetric and more complicated correlations. Since a detailed analysis of the α -process for scale-free graphs is beyond the scope of this paper we just give a heuristic outline why in graphs with a correlation as in formula 3 the threshold density b c 0 tends to zero as N → ∞ for exponents λ < 3. For fixed b 0 > N 1 λ −ν and ν > 0 (note that the typical maximal degree is about N 1 λ ) it is obvious that vertices x with d (x) ≥ k 0 >> ∆ b 0 get almost surely infected (as N → ∞) via the αprocess as soon as γ < α. Let A k 0 be the set of such vertices. One the other side it follows from 3 that a vertex y with d (y) = k < k 0 is linked to the set A k 0 with probability . Since q k is close to 1 for k > k λ−2 0 one has an almost sure multiple linkage of vertices y with d (y) > k λ−2 0 < k 0 to the set A k 0 . These vertices get now again infected via the α -process. By iterating this procedure one may arrive at an positive N-independent infection density b t >> b 0 such that the β -process is overcritical and finally the whole vertex set becomes corrupt. The mechanism described requires N to be large and therefore we conjecture that the difference to the numerical results depicted in Fig.8 (phase transition at λ < 2.3 instead of 3) is due to finite size effects. In the case of ∆ = 2 (Fig.9) the finite size effects are smaller and the phase transition is closer to 3. A similar kind of arguments shows, that the expected path-length is finite for λ < 3. Namely since the expected number S l of vertices at distance l from a vertex x with degree k 0 is approximately given γ · log k 0 < 1). The essential diameter diam e (a large fraction if the whole vertex set is within a ball of diameter diam e ) is then given by the smallest l such that (l−1)(3−λ) γ > 1 (for a more extensive discussion of the notion of essential diameter see [5]). For λ = 2 one obtains therefore diam e = 3. 
For λ > 3 the essential diameter is no longer bounded but grows logarithmically in N. It is interesting that the jump in the critical density at λ ≈ 2.3 in Fig. 8 coincides with a jump in the diameter from 4 to 5. A small essential diameter can have fatal consequences for corruption epidemics, since most vertices are closely linked to hubs and, as outlined above, the hubs are corrupt with high probability. A precise estimation of the dependence of b_0^c on N, M and λ requires a careful discussion of the constants involved. For scale-free graphs with additive degree correlation, like Cameo graphs, one still has a bounded essential diameter for exponents less than 3. But the first argument, about chains of almost sure linkages from high-degree to low-degree vertex sets, cannot be carried over, and one therefore expects a higher value of the critical density b_0^c. In Fig. 10 and Fig. 11 we give numerical results for a scale-free graph with additive degree correlation, generated via a modified Molloy & Reed algorithm¹. To compare with the multiplicative case we have chosen the same parameters and degree distribution as for Fig. 9. There is a clear increase of b_0^c for exponents below a value λ_c, where λ_c depends on the concrete model. It is remarkable that low λ and a tendency towards multiplicative correlation are mainly expected to hold in societies with strong hierarchical structures of social dependencies, e.g. dictatorships (see [8] for details), whereas democracies are characterized by weaker degree correlation. Finally we discuss two examples of graph structures for which the critical infection density can be computed explicitly. The first one is a regular infinite tree of degree 4, where of course no triangles are present (see Fig. 12). The second structure is a regular infinite graph, again of degree 4, with positive local cluster coefficient (A(x) = 2) and a global tree-like structure (see Fig. 13).

¹ In the usual Molloy & Reed algorithm one generates d(x) virtual vertices for each vertex x and then makes a random matching between the virtual vertices; two vertices x and y are connected by an edge if there is an edge between two of their corresponding virtual vertices. To generate an additive degree correlation we mark M virtual vertices as red, such that each vertex x has at least one and at most C·M/N red associated virtual vertices (if there are not too many vertices with very small degree, the constant C can be chosen as M/N). Then the marked red virtual vertices are randomly matched with the unmarked ones.
4-tree with initial white and black vertices
In both cases an exact computation of the critical infection density is possible. We give a short outline for the case of threshold value ∆ = 2 and α = 1 (the case α < 1 requires lengthier computations but can be handled in a similar fashion) and start with the regular 4-tree. An initial configuration is given by marking each vertex with probability p as noncorrupt (black) and with probability 1 − p as corrupt (white). We ask for the critical probability p_c such that for p < p_c almost surely the entire tree becomes white (corrupt), whereas for p > p_c there remains, with probability one, an infinite cluster of noncorrupt (black) vertices. Note that no finite cluster of black vertices (that is, a finite black subgraph surrounded by white vertices) can survive, so there are either infinite black clusters or none. We call an invariant infinite black cluster immune. Since ∆ = 2, any vertex in an immune cluster must have at least three black neighbors from that cluster. Denote by T_R(3) the rooted tree with outdegree 3 (fixing a root gives a canonical direction to the edges of the tree, so it makes sense to speak about the outdegree of a vertex); every vertex has degree 4 except the root, which has degree 3. Let x be the p-dependent probability that the root is contained in an immune cluster (as a subgraph of T_R(3)), conditioned on the root vertex being initially black. By arguments from the general theory of branching processes, x equals the largest solution of the corresponding recursion equation. The clustered structure is treated analogously with the rooted graph T_R(2, 1): again let x be the probability that the root vertex is in an immune cluster, conditioned on the root being initially black. One gets a recursion equation (see Fig. 15) whose solutions are x = (1/(2p^4)) · (p^2/2 + p^3 ± (1/2)·√(−7p^4 + 4p^5 + 4p^6)) and x = 0.
Again, since −7p^4 + 4p^5 + 4p^6 ≥ 0 is needed to obtain a positive nonzero solution, we get for the critical probability p_c = √2 − 1/2 ≃ 0.91421. That means the presence of clustering in this example lowers the critical initial density needed to infect the whole graph by almost a factor of 3/4. The study of the regular 4-tree generalizes easily to the case of regular (n + 1)-trees (n > 2). The recursion equation in this case is the direct analogue of the one above, and a straightforward but lengthy computation gives for the critical probability p_c = (n − 1)^{2n−3} / (n^{n−1} (n − 2)^{n−2}), n > 2.
In the special case of a 3-tree (n = 2) one obtains p_c = 1/2. For completeness we give, without proof, the formula for the computation of the critical probability in the case of a rooted random tree with arbitrary outdegree distribution. Let g(z) = Σ_{i≥2} a_i z^i be the generating function of the outdegree; that is, a_i is the probability that a randomly chosen vertex has outdegree i (and hence total degree i + 1). The critical probability p_c is given by the smallest p such that the corresponding fixed-point equation has a positive real solution.
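Since the displayed fixed-point equation is not reproduced in the text, the sketch below uses a reconstructed recursion, x = Σ_i a_i [ i(px)^{i−1}(1 − px) + (px)^i ], for threshold ∆ = 2 and α = 1. This form is only one reading of the branching-process argument, but it does reproduce the two special cases quoted above (p_c = 1/2 for the 3-tree and p_c = 8/9 for the 4-tree).

```python
def pc_rooted_tree(a, tol=1e-4):
    """Smallest p for which the (reconstructed) fixed-point equation
        x = sum_i a_i * [ i*(p*x)**(i-1)*(1-p*x) + (p*x)**i ]
    has a positive solution.  `a` maps outdegree i -> probability a_i."""
    def has_positive_fixed_point(p):
        x = 1.0                      # iterate downward from x = 1
        for _ in range(5000):
            x = sum(ai * (i * (p * x) ** (i - 1) * (1 - p * x) + (p * x) ** i)
                    for i, ai in a.items())
        return x > 1e-6
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if has_positive_fixed_point(mid) else (mid, hi)
    return hi

print(pc_rooted_tree({2: 1.0}))   # regular 3-tree: ~0.5
print(pc_rooted_tree({3: 1.0}))   # regular 4-tree: ~0.889 = 8/9
```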
A careful reader may have noticed that there is a big structural difference between the generalized tree in example 2 of section 3 and the generalized tree just discussed above. Namely, the graph of the first example has the property that any two vertices can be linked by a chain of triangles in which neighbouring triangles always have a common edge. The graphs in the examples of this section do not have this property, since neighbouring triangles have only a common vertex. For threshold values ∆ > 2 one has to consider chains of (∆ + 1)-cliques. We say that a graph is well k-linked if any pair of vertices can be linked by a chain of complete graphs of order k such that all neighbouring k-cliques have a (k − 1)-clique in common. For well k-linked graphs the critical density b_0^c is zero (a finite number of initially infected vertices can already infect a positive fraction of the vertex set) for α-processes with ∆ < k, whereas for graphs which are not well linked one needs a positive critical density.
The above study of trees and generalized trees is important insofar as, in most random graph models used for complex networks, the typical local structure around a randomly chosen vertex is a tree or a generalized tree. Furthermore, the dependence of the corruption dynamics on graph properties like edge density or degree distribution is, in large parts of the parameter space, entirely caused by the α-process.

6 State and individual (mean field β- and γ-process)

In this section we take a closer look at the mean field part of the corruption process. To gain some insight into the possible types of behavior we start with some simple assumptions, which will be refined later on. Again we argue in a discrete time model, but the transition to continuous time poses no problem and gives the same results. Let b_t be the density of corrupt people at time t. We assume that the affinity of an individual to change its behavior from noncorrupt to corrupt increases in proportion to the corruption prevalence. Furthermore, to become really corruptly minded an individual has to overcome some fear, which we take proportional to (1 − b_t). Formally this reads as Pr{ω(x, t + 1) = 1 | ω(x, t) = 0} = βb_t² with β ∈ [0, 1]. Corrupt individuals can recover due to state and police effects (uncovering, fear etc.). Again it seems reasonable to assume that the probability to recover is proportional to 1 − b_t, since only the noncorrupt part of a society is willing to fight corruption; formally, Pr{ω(x, t + 1) = 0 | ω(x, t) = 1} = γ(1 − b_t). The resulting mean field equation has the two obvious fixed points 0 and 1. For β ≠ 0 there is a third, intermediate fixed point b* := γ/β. An interesting phenomenon occurs for parameter pairs (β, γ) with γ < β, since under this condition both fixed points, at 0 and at 1, are locally stable. Hence there are two basins of attraction, one for 0 and one for 1, with b* as the boundary point. In other words, if the initial percentage of corruption is less than b*, corruption stays under control, whereas for an initial value larger than b* things run out of control and a corruption collapse takes place. Of course this mean field part of the model is still very simplistic and one should not expect any quantitative fit with empirical data. But the qualitative statement seems to be quite stable with respect to modifications. For instance, there are good reasons to believe that neither the mean field infection nor the mean field recovery process is linear in b_t.
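A minimal numerical sketch of this mean field recursion (with the recovery probability taken as γ(1 − b_t), as assumed above) makes the bistability for γ < β visible; the parameter values below are arbitrary illustrations.

```python
def mean_field(b0, beta, gamma, steps=500):
    """Iterate b_{t+1} = b_t + beta*b_t^2*(1-b_t) - gamma*b_t*(1-b_t),
    i.e. infection probability beta*b^2 and recovery probability gamma*(1-b)."""
    b = b0
    for _ in range(steps):
        b = b + beta * b**2 * (1 - b) - gamma * b * (1 - b)
    return b

# bistability for gamma < beta: the basin boundary is b* = gamma/beta
beta, gamma = 0.4, 0.2                      # b* = 0.5
print(mean_field(0.49, beta, gamma))        # -> ~0  (corruption dies out)
print(mean_field(0.51, beta, gamma))        # -> ~1  (corruption collapse)
```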
We want to end this section with a small modification of the mean field ansatz in which we include social weights. This is a natural and by now very common approach in network dynamics and can easily be adapted to the corruption model. In the above argumentation on the attraction of becoming corrupt, it is plausible to assume that corrupt individuals with high social influence have a stronger influence on the mean field probability of becoming corrupt than individuals with low social importance. A similar argument holds for the recovery probability. As a simple measure of social strength we use the degree of the vertices, since high-degree vertices are more likely to play a dominant social role than low-degree vertices. Formally we introduce the weighted density b_t^w at time t as b_t^w = Σ_k k·I_t^(k) / Σ_k k·d_k, where d_k is the number of vertices with degree k and I_t^(k) the number of corrupt (state 1) vertices with degree k at time t. The mean field equation for group k is the obvious analogue of the unweighted one, with b_t^w in place of b_t. Multiplying the group-k equation by the weight k·d_k / Σ_k k·d_k and summing over k gives the same equation as equation (11). Therefore the introduction of social weights does not add anything new to the dynamical picture. There is of course a difference in the interpretation, since a small real initial prevalence of corruption can give rise to a high initial value of b_0^w as soon as the corruption is concentrated at the high-degree vertices. Here also a difference between scale-free networks and classical random networks becomes apparent, since in the scale-free case high-degree vertices (hubs) are much more frequent than in the classical case.
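The closure of the weighted equations can be checked numerically. The sketch below assumes the group-k equation written above (with b_t^w replacing b_t), which is a reconstruction since the displayed formula is missing; under that assumption the degree-weighted density follows exactly the scalar equation of the previous section, while the raw prevalence can be much smaller than b_0^w when corruption is concentrated on the hubs.

```python
import numpy as np

# degree groups: d_k vertices of degree k, per-group corruption density b_k;
# assumed group equation: b_k <- b_k + beta*b_w^2*(1-b_k) - gamma*b_k*(1-b_w)
ks  = np.array([2, 4, 50])
d_k = np.array([5000, 2000, 200])
w   = ks * d_k / np.sum(ks * d_k)           # social (degree) weights

beta, gamma = 0.5, 0.15                     # scalar basin boundary b* = 0.3
b_k = np.array([0.02, 0.02, 0.9])           # corruption concentrated on hubs
# raw prevalence ~ 0.04, but degree-weighted density ~ 0.33 > b*
b_scalar = float(w @ b_k)

for _ in range(200):
    b_w = float(w @ b_k)
    b_k = b_k + beta * b_w**2 * (1 - b_k) - gamma * b_k * (1 - b_w)
    b_scalar = b_scalar + beta*b_scalar**2*(1-b_scalar) - gamma*b_scalar*(1-b_scalar)

# the two weighted densities coincide (up to floating-point rounding)
print(float(w @ b_k), b_scalar)
```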
7 Interaction between the mean field process and the local threshold dynamics (β + γ versus α)

In this section we investigate some aspects of the interplay between the mean field process described in the previous section and the local, threshold-dependent corruption propagation. For α > γ there is a core infected component generated via the α-process. To gain some insight into how such a core infected part of the population changes the mean field dynamics, we assume that a certain fraction, say a, of the population is permanently infected and resistant to the γ-deletion process. Denoting by q_t = b_t − a the corruption density in the noncore part of the population (the normalization here is still with respect to the total population size), we obtain a mean field dynamics for q_t in which the infection term is proportional to β(a + q_t)² and the recovery term acts only on the noncore part. Since the state where all individuals are infected is stationary, q* = 1 − a is always a fixed point; the remaining fixed points are the solutions of β(a + q)² = γq. For −4aβγ + γ² < 0 there are no real fixed points except q* = 1 − a, which becomes globally stable under this condition. Since we deal with a polynomial of degree 3, we get (1/β)(γ/2 − aβ + (1/2)√(−4aβγ + γ²)) < 1 as the condition for the fixed point at 1 − a to be locally stable. Furthermore, in this case also the smaller of the two interior fixed points becomes a local attractor. This is for instance the case when a becomes very small and β > γ, which brings us back essentially to the situation of the previous section. In the case −4aβγ + γ² < 0, the fixed point at 1 − a becomes a global attractor (to see this, just note that the derivative at q_t = 0 is always positive for the relevant parameter intervals). The above considerations show that the possible dynamical evolution scenarios are the same for a = 0 and a ≠ 0. But there is a very strong influence of a on the parameter regimes of β and γ for which one has a corruption collapse. Whereas in the case a = 0 one is always in the basin of attraction of zero for b_0 sufficiently small and γ ≠ 0 (in other words, b = 1 is never a global attractor), one can now have the phenomenon that only the complete saturation with corruption is stable (q = 1 − a). As an example, let us look at the case β = 2γ. For a = 0 there is a fixed point at b* = 0.5, and hence for an initial infection density b_0 < 0.5 the pure mean field dynamics converges to zero. In the case a ≠ 0 one has, for a > 1/8, only the stable fixed point b* = a + q* = 1. At a = 1/8 there is a phase transition, since a new indifferent (slope 1) fixed point at b* = 1/4 emerges. For a < 1/8 this fixed point bifurcates into two fixed points, where the first one, at b* = 1/4 − (1/4)√(1 − 8a), becomes locally stable with a basin of attraction given by b_0 < 1/4 + (1/4)√(1 − 8a). We close this section by presenting a numerical result showing the different contributions of the local and mean field processes to the overall infection (end-prevalence) as a function of the edge density in the random graph space G(N, M). Fig. 16 gives the accumulated number of state changes (divided by N) caused by the α-, β-, γ- and ε-processes at initial density values slightly above the critical one. Up to an edge density of 2 (corresponding to a mean degree of 4) the β-process gives the major contribution to the end-prevalence in the overcritical situation.
Parallel to the increase in edge density, the contributions of the α- and ε-processes increase (in the intermediate density range between 2 and 3 the α-process dominates), up to a sharp peak at edge density 4.5, where the ε-process outperforms all the others (at the same time the critical initial corruption density b_0^c drops and becomes almost zero). The peak is easy to understand, since for the chosen parameters we have, at an edge density of 4, equality between the recovery rate γ and the expected number of new corruptions caused by a single corrupt vertex via the ε-process (which is E(d(x)) · ε). In terms of classical epidemic processes this corresponds to the case of reproduction number R_0 = 1. Above this value a single initially corrupt vertex is already enough to cause, in conjunction with the mean field process, a total infection of the network.
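A small sketch of the mean field dynamics with a permanently corrupt core fraction a, as analysed at the beginning of this section, is given below. The update rule is a reconstruction of the (missing) displayed equation; it reproduces the fixed points with discriminant γ² − 4aβγ and the β = 2γ example values quoted above, but the parameter names and chosen numbers are illustrative only.

```python
def core_mean_field(q0, a, beta, gamma, steps=2000):
    """q_t = corrupt density outside the permanently corrupt core of size a
    (normalised to the total population); the total density is b_t = a + q_t."""
    q = q0
    for _ in range(steps):
        q = q + beta * (a + q)**2 * (1 - a - q) - gamma * q * (1 - a - q)
    return a + q

beta, gamma = 0.4, 0.2                  # beta = 2*gamma, cf. the example above
for a in (0.0, 0.05, 0.2):
    print(a, core_mean_field(0.05, a, beta, gamma))
# a = 0.0  : b_0 = 0.05 < 0.5     -> corruption dies out
# a = 0.05 : a < 1/8              -> settles near b* = 1/4 - (1/4)*sqrt(1-8a) ~ 0.056
# a = 0.2  : a > 1/8              -> only b = 1 is stable, full corruption collapse
```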
Single run simulation results
In this section we present some simulation results for the corruption process taking place on medium-size complex networks. Small graph sizes are interesting as they are typical for communities in highly socially structured populations. As a simple-to-generate random graph space with high clustering and a power-law degree distribution we have chosen so-called intersection graphs. Intersection graphs can easily be defined as follows. First one forms random sets from a finite base set of N elements (random means in this context that the set elements are chosen uniformly i.i.d. from the base set). These sets constitute the vertices of a random graph. Edges are defined via the set intersection property, namely there is an edge between i and j if the associated sets A_i and A_j have nonempty intersection. The size (cardinality) |A| of a set A is itself a random variable drawn i.i.d. from a pre-given probability distribution ϕ(k). To get interesting graph spaces one furthermore requires the set sizes |A_i| to scale with a suitable power of N. For theoretical results about the structure of random intersection graphs see [7], [12] and [11]. It is worth noting that intersection graphs have high clustering by definition (if an element is contained in, say, k sets simultaneously, these k sets form a complete subgraph). Most simulations were done for the case when ϕ is an asymptotic power-law distribution with exponent 3, or when ϕ is singular (all sets have the same size). Random intersection graphs have a multiplicative degree correlation, and therefore the critical threshold should be very low for exponents less than 3 by the arguments from section 5. Above that value the form of the degree distribution has only little influence on the corruption propagation.
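A minimal generator for such random intersection graphs, following the verbal definition above, might look as follows; the particular size distribution, the cap on set sizes and the O(n²) intersection test are ad hoc choices for illustration.

```python
import random
from itertools import combinations

def random_intersection_graph(n_vertices, base_size, size_dist, seed=0):
    """Each vertex is a random subset of a base set of `base_size` elements;
    two vertices are joined when their subsets intersect.  `size_dist(rng)`
    draws the subset size.  Shared elements automatically create cliques,
    hence the high clustering mentioned above."""
    rng = random.Random(seed)
    sets = [frozenset(rng.choices(range(base_size), k=size_dist(rng)))
            for _ in range(n_vertices)]
    edges = [(i, j) for i, j in combinations(range(n_vertices), 2)
             if sets[i] & sets[j]]
    return sets, edges

def pareto_size(rng, s_min=3, alpha=2.0):
    """Heavy-tailed subset sizes (capped), giving a broad degree distribution."""
    return min(60, int(s_min * (1 - rng.random()) ** (-1.0 / alpha)))

sets, edges = random_intersection_graph(500, base_size=2000, size_dist=pareto_size)
print(len(edges))
```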
Besides random intersection graphs generated according to some degree specifications, we also used a collection of real collaboration graphs. These graphs come from a database about research and development projects funded by the European Community (FP2-3). Its vertices are organizations involved in European research projects; two organizations are linked if they have a joint project (see Table II for the main graph characteristics). In total the database contains about 8000 projects and 13000 participating organizations. In essence the network shows all the main characteristics known from other complex network structures, such as a scale-free degree distribution (with exponent between 2 and 3), small diameter, high clustering and vertex correlation. The initial fraction of infected individuals was either distributed at random over the vertex set or clumped together in a sufficiently large ball with a randomly chosen vertex as center.
In the following we give a small sample of simulations on the graphs just mentioned and discuss their main features. Fig. 17 displays the prevalence of corruption on the real network FP2. The absolute threshold value ∆ = 30 is very high and does not allow for a big outbreak of corruption. But there is a metastable small community of individuals, highly linked and almost resistant to the γ-process. It took more than 800 complete updates until this structure broke down. The next figure (Fig. 18) presents a similar situation on an almost twice as large real graph (FP3). In contrast to the previous case we have a much smaller α-value and an only slightly reduced threshold ∆.
The network FP3 is extremely highly clustered (mean degree 48.6, mean triangle number 418, with a total of 7710 nodes and 187704 edges) and stays metastable, with a very small corruption cluster, for about 200 updates, until it jumps by a factor of 10 to another metastable state. Fig. 19 gives a more detailed view of the accumulated contributions of the different processes for a time interval around the jump in prevalence. In the initial phase the ε-process dominated the β-process, and vice versa in the second phase. The next pictures show a situation where, after an initial phase of slow growth, a corruption collapse happened. It seems that the absolute threshold value ∆ = 20 is well below the critical value at which the system can still stabilize. It is surprising that the system semi-stabilizes for a rather long time after an initial rapid increase in the prevalence (Fig. 20). To a certain extent the results can be explained along the argumentation in section 7. In Fig. 22 the accumulated infection processes for the initial phase are shown. Here the ε-process, although undercritical, causes a redistribution of infection until a clustered configuration is reached from which the α-process can start. Then the system stays in almost complete balance until the β-process (which is slow in all our examples) wins (Fig. 21). Note the difference to Fig. 19, where the β-process never really contributes to the infection. Finally we show two simulations for a sample of a random set graph model with about 1000 vertices (Fig. 23 and Fig. 24). Although both prevalence curves look similar, there is a clear difference in the process fine-structure (Fig. 25 and Fig. 26).
In the first instance the ε- and β-processes cause the collapse, whereas in the second case the α-process in conjunction with the ε-process is the main booster.
The few examples of single simulation runs given in this section already show that there are many different routes to high corruption prevalence, typically interrupted by long phases of metastability. As in other complex systems with hidden phase transitions (e.g. the climate), there can be an unnoticed small accumulation of infection until a critical density of corruption, a point of no return, is reached, from which on an almost complete saturation of the society (or of a corresponding subsystem) by corruption becomes the norm.
Epidemic control
One of the basic questions in classical epidemics as well as in corruption dynamics is: what can be done to slow down the propagation or prevalence of the "infection"? Knowing the different phase transitions and their dependence on structural properties and social parameters is of great help in designing proper prevention scenarios. In the following we try to relate some of the findings from our model to what practitioners consider useful in corruption reduction. First we would like to emphasize again that the present model deals in a rather abstract way with the propagation of the mental willingness to be corrupt, and not so much with realized corruption, which always requires a specific environment and additional structural assumptions. Hence, concerning corruption control, we will only be able to support certain prevention scenarios in the sense that they go in the right direction and that their effect is strong or weak, but without being able to make quantitative statements.
The model presented in this paper contains, besides structural parameters for the underlying network, five relevant parameters: α, characterizing the strength of the local threshold process; β, characterizing the strength of the mean field attraction towards becoming corrupt; γ, the strength of the "society strikes back" term; ε, the strength of the classical epidemic process (assumed to be very small); and ∆, the height of the local threshold. Three of the parameters (α, β and ε) are positively correlated with the spread of corruption, whereas two parameters (∆ and γ) are negatively correlated. As is well known from the classical epidemic control of infectious diseases, it is very hard, if not impossible, to change basic social parameters in a short time; this can only be achieved in a long-running educational process. Therefore not much can be done quickly to avoid high clustering in certain relevant areas of society in order to prevent the emergence of highly connected corruption nets.
As the name already indicates, Transparency International favours, as an effective tool to decrease corruption, the increase of transparency in all forms of administrative decision making as well as transparency in the financial affairs of socially exposed persons, institutions and companies. An increase of transparency translates in our model into an increase of the value of ∆ and a decrease of the values of β, α and ε. Strengthening justice, the police and similar instruments to fight and uncover corruption again has the effect of lowering β (via an increase of fear) but may also increase the value of γ (the uncovering rate). Since raising γ above the values of β and α would perhaps require a total police state, it is illusory to try to overcome corruption just by means of law, justice and police. Besides the necessary long-term educational efforts in schools and in public to strengthen the moral resistance against corruption (increase of ∆ and decrease of α), it seems a good strategy to make administrative and political decision hierarchies as independent and decentralized as possible, in order to avoid high clustering.
We would like to end these short remarks with a few comments on the role of hubs (the very high degree vertices typically present in scale-free graphs) in corruption dynamics. While a priori not especially well suited to transmit corruption via the α-process, due to the locally tree-like structure around hubs (compared with low-degree vertices), they are nevertheless more often exposed to corruption and therefore have a higher probability of becoming corrupt. If the hub density is sufficiently high (as is the case for scale-free degree distributions with exponent λ < 3) and the degree correlation is stronger than additive, many vertices are linked to the hubs via social dependencies and in turn can also become corrupt. Furthermore, hubs may play a fatal role in increasing the weighted corruption density relevant for the mean field process, as was explained at the end of section 6. The described situation is probably typical for strongly hierarchically organized countries or regional substructures, e.g. systems with a dictatorial or monarchical tendency. In such societies a high prevalence of corruption seems almost unavoidable, since the threshold b_0^c is close to zero. For democratic societies it therefore seems wise to watch the behavior of hubs, whatever their social interpretation might be, more intensively than the "normal" part of society.
Summary and perspectives
In this article we have presented a first study of the spread of corruption on scale-free and highly clustered networks. One of the main observations so far is the strong dependence of the asymptotic dynamics on the initial number of corrupt individuals. This holds for the mean field process as well as for the local dynamics. Second, there is a fatal resonance effect between global and local dynamics which dramatically lowers the critical density of initial infection. As expected, clustering correlates positively with the spread of corruption and lowers the critical initial density. Scale-freeness seems to play an important role for the corruption process for distributions with small exponent (λ < 3) and multiplicative degree correlation, due to the high prevalence of infected hubs and the strong linkage of medium- and low-degree vertices to them. For higher exponents the dynamics is rather insensitive to the degree distribution. The strength of the degree correlation (from weak, additive, to strong, multiplicative or even higher powers) in networks of social acquaintances seems to be related to the political and institutional structure of a society; this favours liberal forms of organization as being less vulnerable to corruption.
There is a whole range of natural continuations and generalizations which should be investigated next. Clearly a deeper understanding of the pure α-process and its phase transitions is necessary; the mathematical problem is already highly nontrivial on trees. The following short list gives a selection of natural generalizations and refinements:
- quenched disorder in all parameters
- inclusion of geographical or regional structure into the network
- inclusion of administrative or political substructures in which corruption typically will be realized
- evolving networks
- interaction between the corruption process and the network structure
- more heterogeneity in the social networks, e.g. by incorporating family-like structures or social profiles
- weighted networks
- refined transition rules, e.g. asymmetry between infecting and getting infected
- different kinds and strengths of corruption and their interplay
- economic impacts in a virtual population.
Besides the specific context of corruption dynamics, there is a multitude of topics to which the model presented in this paper could easily be adapted. These include such different themes as political opinion formation, social disorder processes, strategies for advertisement, doping usage, the spread of prejudices, migration dynamics, global terrorist networks and innovation processes. In all these examples one has a local and a global dynamics very similar to the one described here. Of course there are differences. For instance, in many opinion formation problems the state space of individuals is rather complex and the local dynamics allows for many transitions, not just 0-1 as in the corruption model. Furthermore, aging phenomena and limits of resources could be included. But despite this additional structure and complexity and the various interpretations, a good part of the findings of this work remains valid. There will be phase transitions in the initial density of certain properties and there can be resonance effects between the nonlinear global and local dynamics, both making the prediction of the future difficult and challenging. | 2019-04-14T03:14:17.135Z | 2005-05-04T00:00:00.000 | {
"year": 2005,
"sha1": "6f0e1cbf90677478b4daaded5b5a6af106f7264f",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "989e8d6c648b7625a79070f7e61379e57ebdbe51",
"s2fieldsofstudy": [
"Political Science",
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Mathematics"
]
} |
216239748 | pes2o/s2orc | v3-fos-license | Experiments Research on Electrical Discharge Grinding Polycrystalline Diamond Tool based on Surface Quality Analysis
As the hardest material in the field of synthetic materials, polycrystalline diamond has attracted more and more attention in the field of cutting tools, reflecting the basic characteristics of 'high efficiency, precision and flexibility' of modern advanced cutting technology and clean production. Electrical discharge grinding (EDG) is often used for polycrystalline diamond tools, especially for special-shaped tools and tools with thin, fine cutting edges. Taking polycrystalline diamond of two grain sizes as the research object, and using machining tests and material characterization as the means of analysis, this paper briefly compares the surface quality produced by disc-electrode and wire-electrode machining, and studies the influence of diamond particle size, electrode polarity and electrode rotation speed on the surface quality and material removal rate in precision electrical discharge grinding of polycrystalline diamond. The results show that negative-polarity machining produces no porous structure, good surface quality and no selectivity in discharge removal. With the increase of electrode rotation speed, the removal of polycrystalline diamond material increases gradually, while the surface roughness first decreases and then increases. When the electrode rotation speed reaches 80 m/min, the surface roughness of the polycrystalline diamond sample reaches its minimum value, and the material removal also tends to stabilize.
Introduction
Polycrystalline diamond (PCD) is synthesized from diamond powder grown with a catalyst under high temperature and high pressure. Its structure is very similar to that of natural diamond: it is formed by C-C bonds and has good toughness. PCD tools have higher hardness than high-speed steel and cemented carbide tools, as well as better wear resistance, thermal stability, chemical stability and thermal conductivity, and thus embody modern advanced cutting technology well [1,2]. In developed countries, precision machining industries such as aerospace, automobile manufacturing and mechanical processing, especially the machining of non-ferrous metals and their alloys, widely adopt PCD tools instead of cemented carbide tools, more than doubling processing efficiency [1]. Because the hardness and wear resistance of polycrystalline diamond are very high, it is a common method to use electrical discharge machining (EDM), in which a relatively soft electrode can remove material from the hard workpiece. EDM is a self-excited discharge process based on the principle of pulsed discharge erosion. The physical process of discharge erosion is a comprehensive process involving electromagnetism, thermodynamics, fluid dynamics, etc. [5,6]. Different from mechanical machining, EDM is a non-contact process without mechanical cutting force, with the advantages of simple tool electrode forming and low relative electrode wear, so it can be effectively applied to PCD cutting tools with sharp or complex-shaped edges. However, there are also differences between disc-electrode and wire-electrode machining: the tools each is suited to are different, and the machined cutting edges also show different morphologies.
Experts have made many explorations of PCD machining by EDG and wire EDM and obtained much valuable research. These studies can be divided into three categories. The first, fundamental type of research concerns the material removal mechanism of PCD in EDM [2,6-9]: by combining machining tests with material characterization (SEM observation before and after machining, XRD analysis, energy spectrum analysis and Raman spectrum analysis), the elements, phases and grain boundaries before and after machining are compared and analysed in order to judge the PCD material removal mechanism in EDM. The second category is the study of PCD machining processes [10-12], again mainly based on experiments: with the surface quality of the workpiece (such as surface roughness and straightness), machining efficiency and material removal rate as evaluation indexes, models relating the process parameters (such as open-circuit voltage, peak current, pulse width and pulse interval) to these indexes are established and the process window is explored. The third category is the selection of electrode materials for EDM [2,13,14], covering the influence of white copper, red copper and graphite electrodes on EDM and the effect of a rotating electrode on machining efficiency. These studies all focus on factors already known to be important for discharge machining and ignore seemingly unimportant processing parameters, such as electrode polarity, electrode rotation speed and diamond particle size. For precision EDM, these "minor factors" also have an important influence on surface quality and machining efficiency. In addition, the particle size of the polycrystalline diamond also affects the quality and efficiency of processing to a certain extent.
In this study, polycrystalline diamond of two grain sizes is taken as the research object. Through machining tests combined with materials analysis, the surface quality produced by disc-electrode and wire-electrode machining is briefly compared, and the influence of diamond particle size, electrode polarity and electrode rotation speed on the surface quality and material removal rate of electrical-discharge-ground polycrystalline diamond is explored. This provides an experimental basis for designing a reasonable electrical discharge grinding process for polycrystalline diamond. PCD composite compacts with particle sizes of 2 and 10 microns were selected as the polycrystalline diamond samples (their material properties are shown in Table 1). A disc-shaped copper electrode, whose end face is used for grinding the PCD layer, and a copper wire, used for cutting the PCD tool edge, were adopted as electrodes. The BDM-903 electrical discharge machine (as shown in Fig. 1), produced by the Beijing Institute of Electro-machining, was selected as the machining equipment, and the ALN400Q machine from Sodick Co. (as shown in Fig. 2) was used for wire-electrode machining. HITACHI's S-4800 scanning electron microscope, a D/max-RA X-ray diffraction analyser and a domestic TR240 surface roughness tester were selected as the analysis instruments.
Surface quality analysis of EDG disk electrode and wire electrode
The cutting edge of a PCD sample with a particle size of 10 microns machined with a 0.2 mm brass wire is shown in Fig. 3; rough machining, semi-finishing and finishing modes were adopted, respectively. It can easily be seen from Fig. 3 that after machining the cutting edge is in an obviously melted state, with corrosion craters of varying severity along the edge, while the straightness of the edge is good. The size and number of the craters are related to the machining parameters. Figure 4 shows the surface morphology of PCD machined with a 200 mm diameter copper disc electrode, again for rough machining, semi-finishing and finishing. It can easily be seen from the figure that the machined surface also carries molten material and craters of different sizes, but the straightness of the cutting edge is only moderate, and a curved surface appears at the edge. Comparing Fig. 3 and Fig. 4, it is obvious that the straightness of the cutting edge machined by the disc electrode is worse than that machined by the wire electrode, while the size of the erosion craters is basically the same. In terms of surface roughness, disc-electrode machining is better than wire-electrode machining. Fig. 5 and Fig. 6 show, for a unified set of discharge machining parameters, the relationship between electrode speed, diamond particle size and the quality and efficiency of the discharge grinding process. As can be seen from Fig. 5, with the increase of electrode rotation speed the surface roughness of the PCD samples first decreases and then increases; in particular, for the 2 micron samples the surface roughness reaches its minimum value at an electrode speed of 60 m/min. Similarly, the surface roughness of the 10 micron PCD samples first decreases and then increases, but reaches its minimum at an electrode speed of 80 m/min. This also suggests that, under the same discharge machining conditions, workpieces with different particle sizes show different surface roughness, and the optimized process should be adjusted according to the material particle size.
Fig. 6 The relation curve between electrode speed, diamond particle size and workpiece material removal
Influence of electrode rotation speed and diamond particle size
As can be seen from Fig. 6, when the electrode does not rotate, the removal of PCD material is lowest. With the increase of electrode rotation speed, the removal of PCD material gradually increases; when the electrode speed reaches 80 m/min, the growth of PCD material removal gradually slows down. This is because the rotation of the electrode improves the discharge machining conditions and accelerates material removal; however, once the linear velocity of the electrode matches the discharge parameters, the effect of electrode rotation speed on the material removal rate is no longer obvious. It can also be seen from Fig. 6 that, with the increase of electrode rotation speed, the removal of large-grain diamond is faster, but the trend is the same.
Conclusions
From the above analysis, the following conclusions can be drawn: (1) The straightness of the cutting edge machined by the disc electrode is worse than that machined by the wire electrode, while the size of the erosion craters is basically the same.
(2) With the increase of electrode rotation speed, the surface roughness of the PCD material first decreases and then increases. When the electrode speed is 60-80 m/min, the surface roughness of the sample reaches its minimum value. With the increase of particle size, the electrode speed at which the best surface quality is obtained also increases under the same discharge machining process.
(3) With the increase of electrode rotation speed, the removal of PCD material gradually increases. When the electrode speed reaches 80 m/min, the increase in PCD material removal gradually slows down. | 2020-04-02T09:20:58.150Z | 2020-03-31T00:00:00.000 | {
"year": 2020,
"sha1": "110cefe858aed289a946c106b1dd06cbb7c0eeb7",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1757-899x/772/1/012092",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "4e5ac5b0d2ed6d884024a690bfbe0ab8c24b3467",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
202537740 | pes2o/s2orc | v3-fos-license | Size diversity of old Large Magellanic Cloud clusters as determined by internal dynamical evolution
The distribution of size as a function of age observed for star clusters in the Large Magellanic Cloud (LMC) is very puzzling: young clusters are all compact, while the oldest systems show both small and large sizes. It is commonly interpreted as due to a population of binary black holes driving a progressive expansion of cluster cores. Here we propose, instead, that it is the natural consequence of the fact that only relatively low-mass clusters have formed in the last ~3 Gyr in the LMC and only the most compact systems survived and are observable. The spread in size displayed by the oldest (and most massive) clusters, instead, can be explained in terms of initial conditions and internal dynamical evolution. To quantitatively explore the role of the latter, we selected a sample of five coeval and old LMC clusters with different sizes, and we estimated their dynamical age from the level of central segregation of blue straggler stars (the so-called dynamical clock). Similarly to what is found in the Milky Way, we indeed measure different levels of dynamical evolution among the selected coeval clusters, with large-core systems being dynamically younger than those with small size. This behaviour is fully consistent with what is expected from internal dynamical evolution processes over timescales mainly set by the structure of each system at formation.
values, from a fraction of a parsec to almost 10 pc (Figure 1, panel a), similar to what is measured for the Milky Way (old) GCs. After ruling out any possible bias due to selection effects, the observed trend has been interpreted in terms of an evolutionary sequence 4. In this scenario, all clusters formed with compact cores (r c ∼2-3 pc); then most of them maintained small cores, while several others experienced core expansion and moved to the upper-right corner of the diagram.
Such an expansion, however, needs to be powered by some "ad hoc" mechanism. Among the different possibilities discussed in the literature 5,6 , one often quoted scenario 7 is that the core expansion is due to the heating action of a population of stellar-mass binary black holes (BHs) retained after the supernova explosions. Dynamical interactions among single and binary BHs led to multiple BH scatterings and ejections, thus driving the expansion of the central cluster regions.
An alternative reading of the cluster size-age distribution
Although intriguing, the proposed scenario implicitly requires an evolutionary link between the younger and the older GCs in the LMC, with the former being representative of the progenitors of the oldest ones. However, the two groups show different masses and positions within the LMC: all the young clusters are light stellar systems (with M<10 5 M⊙), while old clusters are all more massive than 10 5 M⊙ (panel b in Figure 1); moreover, the young objects are observed in the innermost regions of the host galaxy (within ~5 kpc from the centre), while the old ones are orbiting at any distance (panel c in Figure 1). These pieces of evidence strongly indicate that the progenitors of the old LMC clusters must have been more massive (up to a factor of 100) than the currently young systems, hence there does not appear to be a direct evolutionary connection between the two groups. In turn, this seriously challenges the reading of the LMC r c -age distribution in terms of an "evolutionary sequence".
On the other hand, the observed distributions ( Figure 1) show how the cluster parameters changed over the time in the LMC: • during the early formation epoch of the LMC (~ 13 Gyr ago), many star clusters more massive than 10 5 M⊙ formed over a quite short time scale (the old clusters, in fact, all formed within a period of ~ 1 Gyr -see Methods section 'The age of the five LMC clusters') at any distance within the galaxy; • after a long period of quiescence (Δt ~10 Gyrs, the so-called "age-gap") [8][9][10] , about 3 Gyr ago cluster formation was reactivated (likely because of a strong tidal interaction with the Small Magellanic Cloud) 11 and only less massive structures have been generated since then (i.e., over a much more extended period, of several Gyrs) essentially in the innermost region of the galaxy (Rg< 4-5 kpc) around the LMC bar 11 .
Within this scenario, the lack of young clusters with large r c would be the natural consequence of the observed mass-age and distance-age distributions: since all recent clusters are light systems formed in the innermost region of the LMC, only the most compact ones can survive the tidal effects of the host galaxy, while any loose and light system that might have formed had been already disrupted. This directly explains why the upper-left portion of the r c -age diagram is empty.
According to the observed mass distribution of the old clusters, none of the young light systems currently observable in the LMC will probably survive over the next 10 Gyr.
Following these considerations, it remains to be understood why old GCs span a wide range of r c values. Here we propose that this is primarily due to a combination of different properties at the moment of cluster formation and different stages of internal dynamical evolution (different dynamical ages) currently reached by each system, with the larger-core GCs being dynamically less evolved (younger) than those with small r c. Indeed, it is well known that GCs are dynamically active stellar systems, where gravitational interactions among stars can significantly alter the overall energy budget and lead to a progressive internal dynamical evolution 12 through processes like mass segregation, evaporation of light stars, core collapse, etc. Thus, star clusters formed at the same cosmic time (i.e., with the same chronological age) may have reached quite different stages of dynamical evolution, corresponding to different modifications of their internal structure with respect to the initial conditions. An innovative method to empirically measure the level of dynamical evolution suffered by a stellar system has been recently proposed 13-15 based on blue straggler stars (BSSs). These peculiar objects are thought to be generated by some mass-enhancement processes, like mass-transfer in binary systems 16. Being more massive than the average cluster stars, BSSs progressively sink toward the cluster centre under the action of dynamical friction, and the level of their central segregation (quantified by the A+ parameter) has been measured in a large sample of Galactic GCs 14,15, finding a strong correlation with the core relaxation time (t rc) and thus confirming that the level of BSS central segregation is a powerful indicator of the dynamical age of the parent cluster: the method is therefore dubbed 13 the "dynamical clock". Following this line of reasoning, we propose that the r c spread observed for old GCs in the LMC could be explained in terms of different levels of dynamical evolution reached by systems of fixed chronological age.
The dynamical ages of five old star clusters in the LMC
To provide arguments to support this scenario, we determined (through the A+ parameter) the dynamical age of a sample of old LMC clusters, for which Hubble Space Telescope observations deep enough to properly study the BSS population and reliably evaluate the LMC field star contamination are available (see Figure 1). The photometric catalogues were first used to re-determine the gravitational centre of each system. In fact, a correct location of the cluster centre is a key step, especially in such distant stellar systems, since even small errors can significantly affect the derived radial behaviour of the observed stellar populations. With respect to previous works we found differences up to several arcseconds (see Table 1). Note that one arcsecond corresponds to 0.24 pc at the distance of the LMC (we assumed d=50 kpc) 23. We then determined new star density profiles and structural parameters (namely the core, half-mass and tidal radii, the concentration parameter, etc.) from resolved star counts and by properly taking into account the LMC field contamination (see Methods section 'Field Decontamination' and Table S1). According to similar works 15,24 performed on Galactic GCs, the BSS population has been selected in the "normalized Colour Magnitude Diagram" (n-CMD), where the magnitudes of all the measured stars are shifted to assign coordinates (0,0) to the colour and magnitude of the Main Sequence Turn-off (MS-TO) point. The co-added n-CMD of the 5 target clusters is plotted in Figure 2 (grey dots): as apparent, the main stellar evolutionary sequences of the five GCs are remarkably well superposed one on another, suggesting that these systems are all coeval. Moreover, the perfect match with the CMD of M30, one of the oldest Milky Way GCs with comparable metallicity 25, suggests a common age of ~13 Gyr (see Methods section 'The age of the five LMC clusters'). Hence, the BSS population has been identified by adopting the same selection box in all the target clusters. The same holds for the selection of the reference population, i.e., a sample of normal cluster stars tracing the overall star density profile of the system. In particular, to be consistent with previous works performed in Galactic GCs 15,24: (1) We only considered BSSs with normalized V magnitude V* < −0.6. This selection includes only the most massive portion of the BSS population, thus maximizing the sensitivity of the A+ parameter to the dynamical friction effect. Moreover, it excludes the faintest portion, where increasing photometric errors and blends can make the BSS selection more problematic.
(2) As reference population we adopted the lower portion of the Red Giant Branch and the Sub Giant Branch, in the same range of magnitudes of the selected BSSs. This indeed provides the ideal reference population, as it includes several hundred stars (thus making statistical fluctuations negligible), and it assures the same level of completeness of the BSS sample.
(3) We measured the A+ parameter within one half-mass radius (r h ). This assumption allows a direct comparison among the five different systems and with the large sample of Galactic GCs studied in the literature 14,15 .
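For reference, a possible implementation of the A+ measurement is sketched below. The operational definition (the area enclosed between the cumulative radial distributions of BSSs and reference stars, with radii expressed as log10(r/r_h) and integrated out to one half-mass radius) is taken from the "dynamical clock" literature cited above rather than spelled out in this text, so details of the convention may differ; all variable names and the toy data are hypothetical.

```python
import numpy as np

def a_plus(r_bss, r_ref, r_h):
    """Area between the cumulative radial distribution of blue stragglers and
    that of the reference population, out to one half-mass radius, using
    x = log10(r/r_h) as the radial coordinate (assumed convention)."""
    x_bss = np.sort(np.log10(np.asarray(r_bss) / r_h))
    x_ref = np.sort(np.log10(np.asarray(r_ref) / r_h))
    grid = np.linspace(min(x_bss.min(), x_ref.min()), 0.0, 1000)   # up to r = r_h
    cdf = lambda x, xs: np.searchsorted(xs, x, side="right") / len(xs)
    diff = cdf(grid, x_bss) - cdf(grid, x_ref)
    return float(diff.mean() * (grid[-1] - grid[0]))   # > 0 if BSSs are more segregated

# toy example: centrally concentrated BSSs give a positive A+
rng = np.random.default_rng(1)
r_h = 10.0
r_bss = r_h * rng.power(0.5, 50)    # steeper, more centrally concentrated profile
r_ref = r_h * rng.power(1.0, 500)   # shallower reference profile
print(a_plus(r_bss, r_ref, r_h))
```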
The n-CMDs of all the stars measured within one r h in the five programme clusters are shown in the corresponding figure; the number of selected BSSs varies from cluster to cluster, down to just a few units for Hodge 11 and NGC 1841. The figure also shows an impressive match between the results obtained here for the five LMC clusters and those previously found for a sample of 48 old and coeval Galactic GCs (grey circles) 15, demonstrating that the "dynamical clock" can be efficiently used in any stellar environment. The right panel of Figure 5 shows the effect of dynamical evolution on the core size for the entire sample of 48 Galactic GCs (grey dots) and the five LMC systems studied here (large red squares). As can be seen from the nice correlation, clusters with large core radius are dynamically younger (with lower values of the A+ parameter) than compact systems. The former have possibly maintained unchanged, or only slightly modified, their initial structure (in terms of core size, concentration, central density), while all dynamically old clusters currently appear as quite compact objects, although they possibly formed with a larger core. Hence, internal dynamical evolution tends to generate compact clusters, systematically moving large-core systems toward small-size compact objects over a timescale that mainly depends on the cluster structure. Panels (a) and (c) in Figure 1 suggest that also the local environment might have had some impact on the cluster dynamical evolution: in fact, the old GCs with the smallest core radii are located at the smallest galactocentric distances, indicating that their internal evolution has been accelerated by increased evaporation/tidal stripping of low-mass stars in the innermost region of the LMC. Of course, also the fraction of dark remnants (such as BHs and neutron stars) retained within each cluster and their ejection timescale have an impact on the dynamical evolution of the system (in the sense of slowing it down for an increasing retention fraction) 20. However, both these quantities are unknown at the moment, and only little observational evidence of BH candidates in GCs has been found so far 26,27. Hence, here we consider the action of dark remnants as a second-order effect on the dynamical cluster aging.
Conclusion and future perspectives
On the basis of these results, we conclude that the observed spread of r c at a given chronological age can be interpreted as the "natural" consequence of GC internal dynamical evolution, which brings systems with relaxation time significantly shorter than their age to populate the small core radius region of the diagram. It is also somehow "natural" that chronologically old GCs display the largest spread of core sizes, since in this case a variety of initial configurations (with intermediate/short relaxation times) could have evolved toward small r c configurations. Of course, the proposed scenario leaves completely unaffected the portion of the diagram corresponding to small chronological ages (t=10 7 -10 8 yr), because all young clusters have relaxation times comparable to (or larger than) their age and their internal dynamical evolution processes have not had enough time to move them toward the small r c portion of the diagram. It will be interesting to extend this study to the intermediate-age clusters (logt>8-9), which could also show evidence of different levels of dynamical evolution.
Indeed, a first attempt 28 to measure the dynamical age of 7 LMC clusters in this age range suggests quite modest levels of dynamical evolution. However, a detailed analysis of the oldest clusters in this age range (with ages larger than 2-3 Gyr, such as NGC 2121, NGC 2155 and SL 663) is still lacking, and it could certainly provide further hints on this topic.
The evidence presented in this paper provides a new interpretative scenario for the age-size distribution of the LMC clusters that does not require the action of BHs, but is essentially driven by the cluster internal dynamical evolution. This scenario removes the necessity of an evolutionary path in which compact young clusters evolve into old globulars with a wide range of radii.
Moreover, it provides further support to the other structural (see Figure 1) and chemical 29-31 pieces of evidence that already challenged such an evolutionary connection. Hence, this result redirects our attention to the cluster formation history in the LMC, its dramatic changes over cosmic time and the environmental conditions under which this process is occurring.
Methods
The Data-set: For this study we used a set of high-resolution images acquired with the Wide Field Channel of the Advanced Camera for Survey (ACS/WFC) on board the Hubble Space Telescope, secured under proposal GO14164 (PI: Sarajedini). We used the images acquired through the filters F606W (V) and F814W (I) to sample the cluster population, and those (typically located 5' from the cluster centre) obtained through the filters F606W and F435W (B) to sample the Large Magellanic Cloud (LMC) field population. For both data-sets, an appropriate dither pattern of a few arcseconds has been adopted in each pointing in order to fill the inter-chip gaps and avoid spurious effects due to bad pixels. The photometric analysis was performed via the point-spread function (PSF) fitting method, by using DAOPHOT IV 32 , following the "standard" approach used in previous works 33,34 .
Briefly, PSF models were derived for each image and chip by using some dozens of stars, and then applied to all the sources with flux peaks at least 3σ above the local background. A master list including stars detected in at least four images was then created. At the position of each star in the master-list, a fit was forced with DAOPHOT/ALLFRAME 35 in each frame. For each star thus recovered, multiple magnitude estimates obtained in each chip with the same filter were homogenised by using DAOMATCH and DAOMASTER, and their weighted mean and standard deviation were finally adopted as star magnitude and photometric error. Instrumental magnitudes were calibrated onto the VEGAMAG photometric system 36 by using the recipes and zero-points reported in the HST web-sites. Instrumental coordinates were first corrected for geometric distortions by using the most updated Distortion Correction Tables IDCTAB provided on the dedicated page of the Space Telescope Science Institute for the ACS/WFC images. Then, they were reported to the absolute coordinate system (α, δ) as defined by the World Coordinate System by using the stars in common with the publicly available Gaia DR2 catalog 37 . The resulting 1σ astrometric accuracy is typically ≤ 0.1 mas.
The age of the five LMC clusters -The CMDs obtained in the present work allowed us to tightly constrain the age of the five target clusters, which are considered as old stellar systems (with ages larger than 10 Gyr) in all the compilations present in the literature.
The co-added n-CMDs of the 5 targets are shown in Figure 2 (grey dots), where only stars within the half-mass radius of each system have been plotted to better highlight the cluster populations. As can be appreciated, the match among the main evolutionary sequences is impressive: the co-added n-CMD appears as a single population, thus demonstrating that the 5 clusters are indeed coeval within less than 1 Gyr. To quantify their age, we superimposed the n-CMD of M30 38, a Galactic GC with comparable metallicity ([Fe/H]=−1.9) 25 and very well constrained age (13 Gyr) 39,40. Another impressive match of the main evolutionary sequences is found, demonstrating that M30 is coeval to the 5 LMC clusters. Thus an age of 13 Gyr ± 1 Gyr (with a conservative estimate of the error) can be assumed for the 5 LMC clusters.
Cluster structural parameters - Many papers in the literature 41-43 underline the advantages of using star count profiles, instead of surface brightness profiles, to derive the cluster structural parameters.
In fact, surface brightness profiles are known to suffer from possible biases due to the presence of a few very bright stars, which instead do not affect the number density profiles. In spite of this, most of the morphological parameter estimates (including those for the 5 LMC clusters considered here) are still based on surface brightness profiles. We thus performed new determinations based on star count profiles. The full analysis, including artificial star experiments for the photometric completeness estimate, is described and discussed elsewhere. Here we just summarize its main steps and the structural parameters relevant for the present discussion. According to the procedure adopted in previous works 44,45 , we first determined the centre of gravity (Cgrav) of each system by averaging the right ascension (α) and declination (δ) of all stars brighter than a given threshold magnitude (to avoid incompleteness biases) and lying within a circle of radius r. For the five clusters discussed here, the threshold magnitude is around the main sequence turnoff level, while the typical radius r varies from 6"-65" depending on the cluster morphology. The derived values of Cgrav differ by ~2"-3" from previous determinations, but for NGC 2257 where the difference amounts to almost 6" (see Table 1). To build the number density profile, we thus divided the photometric sample in (typically [15][16][17][18][19][20] concentric annuli centred on Cgrav, each one split into an adequate number of sub-sectors. The number of stars lying within each sub-sector (and with magnitude above a threshold adopted to avoid incompleteness biases) was then counted, and the star surface density was obtained by dividing these values by the corresponding sub-sector area.
The stellar density in each annulus was then obtained as the average of the sub-sector densities, and the standard deviation was adopted as the uncertainty. The LMC background level was estimated from the parallel observations, typically located 5' from each cluster (see the section 'The Data-set'). These have the F606W filter in common with the cluster observations, thus allowing a consistent estimate of the level of LMC field contamination at any fixed magnitude limit. Once estimated, the background level was subtracted from the stellar density measured in each annulus to obtain the density profile of the cluster. Finally, this profile was compared with the family of King models 46 characterized by different values of the dimensionless parameter W_0, which is proportional to the gravitational potential at the center of the system. The best-fit solution was determined through a procedure that minimizes the sum of the unweighted squares of the residuals and evaluates the corresponding reduced χ². The uncertainties on the derived structural parameters were estimated in agreement with other studies in the literature 22,42: they correspond to the maximum variations of the parameter within the subset of models that provide χ²_min ≤ χ²_best + 1, where χ²_best is the best-fit χ², while χ²_min is the minimum χ² obtained for each value of W_0 explored. The core and half-mass radii of the 5 clusters, which are relevant for the present discussion, are listed in Table 1, while the full discussion of the adopted procedure and the results obtained will be given in a forthcoming paper.
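A minimal sketch of the star-count density profile construction described above is given below (concentric annuli centred on Cgrav, each split into azimuthal sub-sectors; the annulus density is the mean of the sub-sector densities, its error their standard deviation, and the LMC field level is subtracted at the end). The function signature, the azimuthal definition of the sub-sectors and the input format are illustrative assumptions, not the authors' pipeline, and the King-model fitting step is not reproduced here.

```python
import numpy as np

def density_profile(x, y, edges, n_sectors=4, background=0.0):
    """Background-subtracted number density profile from star counts.

    x, y       : offsets (arcsec) from the centre of gravity Cgrav of all
                 stars brighter than the adopted threshold magnitude.
    edges      : radii (arcsec) delimiting the concentric annuli.
    n_sectors  : number of azimuthal sub-sectors per annulus.
    background : LMC field density (stars/arcsec^2) from the parallel field.
    Returns mid-annulus radii, densities and their uncertainties."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    r = np.hypot(x, y)
    theta = np.mod(np.arctan2(y, x), 2.0 * np.pi)
    sector_edges = np.linspace(0.0, 2.0 * np.pi, n_sectors + 1)
    radii, density, error = [], [], []
    for r_in, r_out in zip(edges[:-1], edges[1:]):
        in_annulus = (r >= r_in) & (r < r_out)
        sector_area = np.pi * (r_out**2 - r_in**2) / n_sectors
        counts = np.histogram(theta[in_annulus], bins=sector_edges)[0]
        dens = counts / sector_area
        radii.append(0.5 * (r_in + r_out))
        density.append(dens.mean() - background)   # field subtraction
        error.append(dens.std(ddof=1))             # sub-sector scatter
    return np.array(radii), np.array(density), np.array(error)
```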
Central relaxation time - Central relaxation times have been computed by adopting the newly determined structural parameters and following the well-known relation 47, where ρ_c is the central mass density in M⊙/pc³, M_cl is the cluster mass in M⊙, m* is the average stellar mass (here we adopted 0.3 M⊙), and r_c is the core radius in pc. The values of the central relaxation times for the five clusters are listed in Table 1.
Field decontamination - It is well known that the CMDs of LMC clusters can be significantly contaminated by field star interlopers observed along the line of sight. Unfortunately, given the LMC distance, a detailed separation between field and cluster stars based on proper motions is possible only in a few cases. Moreover, accurate Gaia DR2 proper motions are available only for the brightest stars. As a consequence, to assess the impact of field contamination in the five cases discussed here we used a statistical approach based on the comparison between the CMD stellar distribution observed in the innermost regions of each cluster and that of a region representative of the surrounding LMC field.
To this end, we accurately analysed all the available observations in the vicinity of the program clusters. For three of them (namely NGC 1466, NGC 1841 and NGC 2257) the field contamination turns out to be negligible, with only a few stars measured over the entire field of view (11 square arcmin) of the ACS/WFC parallel observations sampling the nearby LMC field.
In the case of NGC 2210 and Hodge 11, the LMC field contamination appears to be more pronounced and we thus performed a statistical decontamination procedure. This required us to transform the boxes used for the population selection (see Figure 3) onto the CMD of the adjacent LMC field and to count the number of field stars falling within each box; the resulting numbers of expected interlopers are reported in Table 1 for three radial bins (r < rc, rc < r < rh/2, rh/2 < r < rh) adopted to preserve the radial information. To determine reliable (i.e., field-decontaminated) values of A+ we then randomly removed these numbers of stars from the BSS population sampled in each bin, and we repeated this random decontamination procedure 5000 times, each time registering the resulting value of A+. Supplementary Figure 2 shows the histogram of the obtained values. As can be seen, a peaked distribution with a small dispersion (smaller than 0.01) is obtained in both cases, thus testifying that the value of A+ is solidly estimated also in these contaminated clusters.
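The random decontamination described above can be sketched as follows. The numbers of expected field interlopers per radial bin are taken from Table 1, the removal is repeated 5000 times, and the spread of the resulting A+ values gives the distribution shown in Supplementary Figure 2. The A+ computation itself is not restated in this excerpt; the helper below follows its usual definition as the area between the cumulative radial distributions of the blue stragglers and of a reference population out to the half-mass radius, and the choice of a linear (rather than logarithmic) radial variable is an assumption of this sketch.

```python
import numpy as np

def a_plus(r_bss, r_ref, r_half):
    """Area between the cumulative radial distributions of BSS and of a
    reference population, evaluated out to one half-mass radius."""
    x = np.sort(np.concatenate([r_bss, r_ref])) / r_half
    x = x[x <= 1.0]
    cdf_bss = np.searchsorted(np.sort(r_bss) / r_half, x, side="right") / len(r_bss)
    cdf_ref = np.searchsorted(np.sort(r_ref) / r_half, x, side="right") / len(r_ref)
    return np.trapz(cdf_bss - cdf_ref, x)

def decontaminated_a_plus(r_bss_by_bin, n_field_by_bin, r_ref, r_half,
                          n_trials=5000, seed=1):
    """Randomly remove the expected number of field interlopers from the BSS
    sample of each radial bin, recompute A+, and repeat n_trials times."""
    rng = np.random.default_rng(seed)
    values = np.empty(n_trials)
    for k in range(n_trials):
        kept = []
        for r_bin, n_field in zip(r_bss_by_bin, n_field_by_bin):
            r_bin = np.asarray(r_bin, dtype=float)
            keep = rng.choice(r_bin.size, size=r_bin.size - n_field, replace=False)
            kept.append(r_bin[keep])
        values[k] = a_plus(np.concatenate(kept), np.asarray(r_ref, dtype=float), r_half)
    return values  # its histogram corresponds to Supplementary Figure 2
```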
Errors in the measure of A+ - The uncertainties on A+ have been estimated as discussed in previous papers 14; they are listed in Table 1 and reported in each panel of Figure 4.
Data Availability:
The data that support the plots within this paper and other findings of this study are available from the corresponding author upon reasonable request.
[Figure caption fragments: the co-added n-CMD "is compared with that of the Galactic globular cluster M30 38 (t=13 Gyr) 39,40. The comparison clearly demonstrates that the 5 clusters are all old and coeval, with an age of ~13 Gyr."; and a panel "illustrating that cluster sizes move toward smaller values with the long-term internal dynamical evolution of the system: compact clusters are dynamically more evolved than large-rc GCs."] | 2019-09-04T18:42:25.000Z | 2019-09-04T00:00:00.000 | {
"year": 2019,
"sha1": "a9724871884459e9a717a551e00da0207893d79f",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "a9724871884459e9a717a551e00da0207893d79f",
"s2fieldsofstudy": [],
"extfieldsofstudy": [
"Physics"
]
} |
258720399 | pes2o/s2orc | v3-fos-license | Asymmetric presentation with a novel RP2 gene mutation in X-Linked retinitis pigmentosa: a case report
Background We present the detailed multimodal imaging analysis in a case of X-linked retinitis pigmentosa (XLRP) exhibiting a markedly asymmetric presentation with a novel RP2 mutation. Case presentation A 25-year-old woman complained of decreased vision in the right eye as well as night blindness. Her visual acuity was 20/100 (OD) and 20/20 (OS). Fundus examination revealed bone spicule pigmentation with tessellated changes within the posterior pole. Optical coherence tomography (OCT) showed generalized disruption of foveal microstructures in the OD. In the OS, no abnormal fundus findings were identified, but localized ellipsoid zone band losses were observed on OCT. Fundus autofluorescence revealed multiple patchy hypo-autofluorescent lesions in the OD and a tapetal-like radial reflex against a dark background in the OS. Fluorescein angiography and OCT angiography revealed diffuse mottled hyperfluorescence with reduced retinal vessel density in the OD and no evidence of vascular compromise in the OS. Goldmann perimetry demonstrated a constricted visual field, and electrophysiological assessment revealed an extinguished rod response and a severely impaired cone response in the OD. Molecular genetic tests via next-generation sequencing revealed the pathogenic variant to be a heterozygous frameshift mutation in RP2 (RP2, p.Glu269Glyfs*7), resulting in premature termination of the protein. Conclusions Interocular differences in the severity of XLRP in female carriers may be attributed to random X-inactivation. A novel frameshift mutation in the RP2 gene and a comprehensive phenotypic evaluation in the current study may broaden the spectrum of the disease in XLRP carriers. Supplementary Information The online version contains supplementary material available at 10.1186/s12886-023-02968-4.
Background
Retinitis pigmentosa (RP) is the most common inherited retinal disease and is characterized by the progressive degeneration of the rod and cone photoreceptors. The classic triad of RP comprises a pale waxy optic disc, attenuation of the retinal vessels, and bone spicule pigmentation [1,2]. As retinal pigment epithelium (RPE) and photoreceptor degeneration progresses, nyctalopia, gradual vision loss, and constriction of the visual field develop, causing subsequent irreversible vision loss [3]. The disease is genetically heterogeneous and has been linked to nearly 100 different genes [4,5].
X-linked retinitis pigmentosa (XLRP) is particularly severe, with an early onset in childhood and rapid progression. Retinitis pigmentosa GTPase regulator (RPGR) gene (OMIM #312610) variants account for 70 − 80% of XLRP, RP2 (OMIM #312600) variants account for a further 5 − 20%, and OFD1 (OMIM #300170) has been identified as a rare cause of XLRP [6][7][8]. Mostly, genetic RP presents bilaterally, although some cases show interocular asymmetry. Female carriers of XLRP show variable clinical symptoms of the disease with asymmetric manifestations [9]. The wide spectrum of phenotypes in female carriers of XLRP is likely attributable to random X chromosome inactivation during embryogenesis [10]. This random inactivation, a physiological phenomenon called lyonization, persists in daughter cells and results in a mosaic distribution [10][11][12].
The human RP2 protein comprises 350 amino acids and is widely expressed. The function of RP2 is not fully understood, but it is widely considered to localize to the plasma membrane in all retinal cell types, and the acylation of RP2 is thought to be critical for its function in the retina [13]. The encoded RP2 protein is implicated in the ciliary trafficking of myristoylated and prenylated proteins in photoreceptor cells. In the present study, we identified a novel RP2 mutation in a female XLRP carrier with a markedly asymmetric presentation. However, little is known regarding the relationship between RP and interocular asymmetry. Since the number of reported case series of asymmetric RP is limited, the elucidation of this condition requires more information. Herein, we present a detailed multimodal imaging analysis of an RP case showing a markedly asymmetric presentation and a previously unreported null mutation in the RP2 gene.
Case presentation
A 25-year-old woman presented to our clinic with decreased vision in the right eye as well as night blindness. The patient was systemically healthy and had no history of ocular trauma or uveitis. Since high school, she had been unable to easily navigate a dark movie theater without assistance, but this symptom had not worsened and had remained stable. She reported a family history of RP with poor vision and nyctalopia involving her maternal grandfather and maternal cousin. However, we were unable to ophthalmologically examine her family members. The family pedigree based on history and symptoms is shown in Fig. 1.
On examination, the best-corrected visual acuity was 20/100 in the right eye (OD) and 20/20 in the left eye (OS). Her refractive error was -4.0 Dsph = -4.0 Dcyl x A180 OD and -1.25 Dsph = -3.25 Dcyl x A20 OS. A slitlamp examination revealed no specific findings. Fundus examination revealed multiple bone spicule pigmentations and attenuation of retinal vessels in the OD. Tessellated fundus changes within the posterior pole were also observed. However, no conspicuous bone spicule pigmentation was found, with an almost normal appearance of the retina in the OS ( Fig. 2A, B). Optical coherence tomography (OCT) revealed visible thinning of the entire outer retina and generalized disruption of foveal microstructures, with a small area of preserved faint ellipsoid zone subfoveally in the OD. Localized attenuation and losses in the ellipsoid zone band were observed in OS (Fig. 2C, D). Fundus autofluorescence (FAF) revealed multiple patchy and reticular hypoautofluorescent lesions with abnormal hyperautofluorescence in the fovea of the OD. Interestingly, a completely different aspect of the FAF findings was noted in the OS: a tapetal-like reflex showing a characteristic bright radial reflex against a dark background (Fig. 3A, B). Ultra-widefield fluorescein angiography (FA) imaging showed diffuse, blotchy, or mottled hyperfluorescence corresponding to the affected whole retinal areas in the OD. No specific abnormalities were identified in the OS (Fig. 3C, D). Optical coherence tomography angiography (OCTA) revealed significantly reduced retinal vessel density, blood flow, and retinal thinning in the OD (Fig. 4).
Goldmann perimetry demonstrated a central visual field within 10° and islands of the visual field, particularly on the inferior side of the OD. A central scotoma was also identified in the remaining central visual field. A normal visual field with a physiological blind spot was observed in the OS (Fig. 5). Electrophysiologic assessment was performed, and electroretinography (ERG) revealed an extinguished rod response and a severely impaired cone response in the OD. The amplitudes of the rod and cone responses were unremarkable in the OS, revealing marked asymmetry in both the structure and function of the retina (Fig. 6).
Molecular genetic tests using next-generation sequencing (NGS)-based gene panels were performed on peripheral blood samples obtained from the patient after informed consent was provided. The exome-based targeted panel comprised 244 candidate genes associated with inherited retinal diseases, and the coding regions and their flanking regions were screened using the NovaSeq system (Illumina, USA) (Supplementary Table). Variant interpretation was performed using the guidelines of the American College of Medical Genetics and Genomics (ACMG) [14]. We identified a novel RP2 mutation (c.803dup) in exon 3 at position 803 that caused a frameshift and a premature termination signal at codon 269 (RP2, p.Glu269Glyfs*7, heterozygote). This change has not been previously reported in the literature. The finding corresponded to carrier status for XLRP2, and no other pathogenic or likely pathogenic variants were identified among the 244 inherited retinal disease-related genes. The possibility of pathogenic mutations in genes or regions not assessed by the panel cannot be excluded.
Discussion and conclusion
Heterozygous female carriers of XLRP may exhibit signs of distinctive mosaic retinopathy and variable phenotype. Several studies have identified some characteristics of mosaic retinopathies. However, most cases exhibit varying degrees of fundus changes with bilateral symmetry, and reports regarding XLRP carriers showing discordance are limited [11,12,15]. Furthermore, few studies have presented detailed multimodal imaging findings with functional assessments in this unique population.
To our knowledge, the present study is the first to report a novel RP2 mutation and to present the detailed multimodal imaging characteristics of an XLRP carrier patient, exhibiting remarkable asymmetry between the eyes.
Ophthalmologic findings in XLRP-carrier females can range from a tapetal-like reflex and isolated regions of peripheral pigment atrophy and clumping to extensive retinal degeneration, including diffuse bone spicule pigmentation and vessel attenuation [16]. Comander et al. [16] showed a wide range of visual function among XLRP carriers; most carriers had mildly or moderately reduced visual function but rarely became legally blind. Patients with RP2 variants predominantly exhibit an RP phenotype. One study comparing the phenotypic features of XLRP caused by RP2 and RPGR variants revealed that, on average, visual acuity at all ages was lower in the RP2 group than in the RPGR group [17]. This was likely due to early macular involvement in RP2. Jayasundera et al. [18] presented two cases of RP2 female carriers with a phenotype similar to that of affected males. They exhibited atrophic macular changes, poor visual acuity, and central scotoma. The right eye of the patient in this study also manifested macular involvement, poor visual acuity, and central scotoma, which corresponds with the findings of previous reports. Jayasundera et al. [18] have also shown that one female carrier demonstrating asymmetrical disease had anisometropia of 8.00 D, the severely affected eye being myopic. In our patient, anisometropia of approximately 3.00 D, with the severely affected eye showing a myopic tessellated fundus, further supported the association of myopia with RP2 retinopathy. On FAF images, the mildly affected eye exhibited the classic tapetal-like reflex of XLRP carriers, with bright radial hyperautofluorescence. FA and OCTA revealed no evidence of vascular compromise in the retina, and visual function on ERG and visual field testing was also not affected in the OS. However, the OCT findings from her mildly affected eye support previous reports of increased reflectivity from the RPE and irregularities or disruption of the ellipsoid zone band [19,20]. In patients with inherited retinal disease and marked interocular asymmetry, microperimetry or multifocal ERG could be useful in detecting localized abnormalities in the better eye. Decaying visual function in the left eye over time may occur during long-term follow-up. The phenomenon of XLRP patients exhibiting various phenotypes is commonly explained by mosaicism, which is the consequence of lyonization or random X-inactivation [10,11]. During embryonic development, especially at the two-cell stage, a genetic and epigenetic mosaic embryo can arise because of asymmetric cell division [21]. Gametic half-chromatid mutations, mitotic selective chromatid segregation, chromosomal disjunction, asymmetric mitosis, and post-zygotic mutations are well-known mechanisms underlying this phenomenon [21]. Through the embryonic left-right separation mechanisms explained above, a mosaic embryo with different proportions of cells carrying the disease-causing mutation on the left and right sides would develop into an individual in whom the disease manifests asymmetrically. In this case, we assumed that the proportion of cells carrying the wild-type X chromosome and those carrying the mutant X chromosome differed between the left and right eyes from the mosaic embryonic stage. It is reasonable to hypothesize that the activation ratio of the X chromosome with the RP2 gene mutation was higher in the right eye in our case.
Lyonization during clonal expansion early in photoreceptor cell differentiation and peripheral migration could result in mosaic patterns within the retina, random variation in the total amount of retinal tissue affected, and interocular differences in severity [12].
The RP2 gene encodes the RP2 protein, which has two domains: an N-terminal tubulin folding cofactor C-like (TBCC) domain and a C-terminal nucleoside diphosphate kinase-like (NDPK) domain [13,22]. RP2 acts as a GTPase-activating protein specifically for ADP-ribosylation factor like GTPase 3 (ARL3), which interacts with the N-terminal TBCC domain of the RP2 protein [23]. These proteins play an important role in the assembly and trafficking of membrane-associated proteins in the photoreceptor cilium [17,24,25]. RP2 mutations include nonsense, missense, frameshift, insertion and deletion changes, which in most cases result in a severely truncated form of the protein [26][27][28][29][30][31]. In the current study, a novel mutation, c.803dup (p.Glu269Glyfs*7), of the RP2 gene was identified, expanding the spectrum of RP2 mutations that cause XLRP. Premature termination of translation due to the frameshift mutation would most likely result in the absence of functional RP2 protein. In cases where premature stop codons are located in the terminal exon, the truncation of the C-terminus of RP2 results in a misfolded and nonfunctional protein. Rather than localizing to the plasma membrane, as is the case for the wild-type protein, it localizes to the cytoplasm and is susceptible to enhanced lysosomal degradation [32][33][34].
In conclusion, we have presented the detailed multimodal imaging analysis of an XLRP carrier female showing marked asymmetrical retinal involvement and identified a novel mutation in the RP2 gene. Interocular differences in severity in this unique population may be attributed to random X-inactivation. A novel frameshift mutation in the RP2 gene and a comprehensive phenotypic evaluation in the current study may broaden the spectrum of RP2 mutations and the phenotypic spectrum of the disease in XLRP carriers. | 2023-05-17T14:06:59.106Z | 2023-05-17T00:00:00.000 | {
"year": 2023,
"sha1": "f92bbf7ace43255254aac94e88586d97fd02ef6f",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Springer",
"pdf_hash": "f92bbf7ace43255254aac94e88586d97fd02ef6f",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
220058526 | pes2o/s2orc | v3-fos-license | Undifferentiated carcinoma with osteoclast-like giant cells of the pancreas harboring KRAS and BRCA mutations: case report and whole exome sequencing analysis
Background Undifferentiated carcinoma with osteoclast-like giant cells (UC-OGC) is an extremely uncommon pancreatic neoplasm that comprises less than 1% of all exocrine pancreatic tumors. To date, only a few cases and limited data from whole-exome sequencing (WES) analysis have been reported. We report a case of pancreatic UC-OGC with a literature review, and provide novel insights into the molecular characteristics of this tumor entity. Case presentation A 31-year-old male presented with intermittent abdominal pain for several months, and positron emission tomography (PET) showed isolated high metabolic nodules in the pancreatic uncinate process that were likely to represent malignant disease. Pathological examination after radical excision revealed UC-OGC associated with poorly differentiated adenocarcinoma at the head of the pancreas. The disease recurred 7.4 months after radical surgery. The KRAS p.G12D (c.35G > A) and somatic BRCA2 p.R2896C (c.8686C > T) mutations were detected by subsequent WES analysis. The patient showed no response to platinum-based systemic chemotherapy, and his condition quickly worsened. He finally died, with an overall survival of 1 year. Conclusions As an extremely uncommon tumor entity, UC-OGC appears to be a unique variant of conventional pancreatic ductal adenocarcinoma, given the genomic similarities revealed by WES analysis. Clinical examination and molecular analysis by WES could further indicate potential treatment strategies for UC-OGC.
conventional PDAC [8]. In addition, a few molecular studies of UC-OGC reported that KRAS mutations occurred most frequently, similar to what is observed in PDAC [11][12][13]. Additionally, one detailed study reported the molecular features of UC-OGC by performing whole-exome sequencing (WES) analysis [14], and all these results implied that pancreatic UC-OGC is analogous to PDAC. To date, more cohorts of patients are needed to investigate the pathological and genetic features of this unique tumor variant. Herein, we report a case of pancreatic UC-OGC harboring the KRAS p.G12D mutation and a somatic BRCA2 mutation, as detected by WES, in a patient who experienced short disease-free survival (DFS) and overall survival (OS). Furthermore, we provide a literature review of UC-OGC studies and analyze them to obtain novel insights regarding the molecular characteristics of this tumor entity.
Case presentation
A 31-year-old male with no past medical or family history of disease presented with intermittent abdominal pain lasting almost 2 months, and he was admitted to the local hospital on February 28, 2017. Positron emission tomography (PET) showed isolated high metabolic nodules in the pancreatic uncinate process that were likely to represent malignant disease (Fig. 1a, b).
The patient then underwent radical pancreaticoduodenectomy on March 9, 2017. Pathological examination after radical excision showed poorly differentiated ductal adenocarcinoma associated with UC-OGC at the head of the pancreas (Fig.2a-d). Immunohistochemistry staining revealed that the cells were positive for CD68 and CK7, whereas the cells were negative for vimentin and S-100 (Fig. 2e, f). The tumor was measured to be 3 × 3 × 2 cm in size and exhibited invasion of the nerves, nearby pancreatic tissues, duodenum and the lower part of the common bile duct. The surgical margins were negative, and there was no discovery of lymph node metastasis. The surgical-pathological staging of the tumor was IIA (T3N0M0) according to the 7th edition of the American Joint Committee on Cancer (AJCC)/Union for International Cancer Control (UICC) TNM staging system.
Adjuvant chemotherapy with gemcitabine and albumin-bound paclitaxel was administered starting on April 10, 2017 for six cycles, and the toxicity was acceptable. However, the patient developed a backache 2 months after the termination of adjuvant chemotherapy. A contrast-enhanced computed tomography (CT) scan performed on November 27, 2017 showed multiple lymph node metastases in the mesenteric region (Fig. 1c) and peritoneum (Fig. 1d), with a serum CA199 level > 900 U/ml. Exploratory laparotomy was performed on November 29, 2017, and peritoneal metastasis was confirmed by peritoneal biopsy. (Fig. 1 caption: PET showed high metabolic nodules in the pancreatic uncinate process, likely to represent malignant disease, at baseline (a, b); the contrast-enhanced CT scan showed multiple lymphatic metastases in the mesenteric region (c) and peritoneum (d) after termination of adjuvant chemotherapy.) The patient afterwards received systemic chemotherapy with the FOLFIRINOX regimen (a combination of oxaliplatin, irinotecan, fluorouracil and leucovorin) for two cycles. Unfortunately, the serum tumor marker CA199 level was elevated to 1595 U/ml after two treatment cycles, and the patient's condition deteriorated due to obvious myelosuppression and digestive tract toxicity caused by the chemotherapeutic drugs. Finally, he had to suspend chemotherapy and was admitted to our hospital on January 11, 2018.
WES analysis was performed, and the KRAS p. G12D (c. 35G > A) and somatic BRCA2 p. R2896C (c. 8686C > T) mutations were detected in both surgical formalinfixed paraffin-embedded (FFPE) tumor tissues and plasma ctDNA samples. Additionally, WES indicated that the tumor did not show microsatellite instability (MSI) and did not present a high tumor mutational burden (TMB). Considering the poor condition of the patient and the fact that the polyadenosine diphosphate-ribose polymerase (PARP) inhibitor olaparib was not available, we administered apatinib combined with tegafur/gimeracil/oteracil potassium capsules (S-1) for his disease. However, the patient's condition worsened rapidly with the occurrence of fever, jaundice and vomiting after 1 month of treatment with this regimen, and eventually he died on March 12, 2018. The disease-free survival (DFS), which was defined as the time from radical surgery to disease recurrence, was just 7.4 months. The overall survival (OS), which was defined as the time between the primary diagnosis of UC-OGC and death, was only 12.6 months.
Discussion and conclusion
Undifferentiated carcinoma of the pancreas is a highly malignant tumor that tends to exhibit perineural, lymph node and blood vessel invasion and has been called "giant cell carcinoma" or "pleomorphic large cell carcinoma" [15]. Tumors with osteoclast-like giant cells (OGCs) have been documented in a variety of organs, including the kidney, breast, thyroid gland, heart, parotid gland and skin [7,[16][17][18]. UC-OGC is composed of pleomorphic neoplastic mononuclear cells that are intermixed with large non-neoplastic multinucleated giant cells, as observed under microscopy [19], and it has been suggested that UC-OGC is derived from epithelial tumors and contains vimentin-positive carcinoma components, which represent the mesenchymal transition of ductal cells [20,21]. Based on these pathological features, the World Health Organization (WHO) classified UC-OGC as a unique PDAC variant in 2010 [22].
The OGCs within the background of anaplastic malignant cells in UC-OGC are commonly considered to be of benign histiocytic origin, which has been supported in several cases by their immunoreactivity with CD68 [16]. Currently, it is hypothesized that OGC recruitment is a result of chemotactic factors produced by neoplastic cells and is indicative of a better prognosis [16]. Notably, such tumors can be classified as pure UC-OGC if they are not associated with a distinct neoplasm with a different morphology [14]. Luchini et al. [14] reported that the median OS (mOS) of 16 analyzed UC-OGC patients was 20 months, and the mOS of patients with pure UC-OGC was significantly higher than that of patients with associated PDAC (36 vs. 15 months, P = 0.04). Furthermore, it was revealed that UC-OGC associated with PDAC conferred a five-fold increased risk of death [14], in accordance with the survival data reported by Muraki et al. [8]. The presence of UC-OGC in our case was confirmed by CD68 staining in the margin of the undifferentiated tumor, and immunoreactivity with CK7 showed the presence of an associated adenocarcinoma component, which proved that this particular case was not pure UC-OGC. The 31-year-old male patient in our case survived for only 1 year, which was similar to the length of survival previously reported above [8,14].
WES analysis of 8 UC-OGC patients revealed that KRAS oncogenic mutations were identified in all analyzed cases, which implied that this tumor entity shares similar genomic features with conventional PDAC [14]. Other previous studies also indicated the prevalence of KRAS mutations in UC-OGC [11][12][13]23]. Based on the WES outcome for the UC-OGC cohort reported by Luchini et al. [14], all KRAS mutation variants were found in codon 12, including the G12V, G12D and G12R mutations. Additional somatic mutations in the tumor suppressor genes TP53, CDKN2A and SMAD4 were detected in these UC-OGC cases, which further indicated that UC-OGC is a unique phenotype of PDAC, because these alterations also commonly appear in PDAC [14]. Additionally, Luchini et al. found the same SERPINA3 variant (p.M290L) in a hotspot region in two UC-OGC cases and suggested that it may be an oncogene, as previously reported in squamous cell carcinoma of the cervix [14]. SERPINA3 encodes α-1-antichymotrypsin, a plasma protease inhibitor belonging to the serine protease inhibitor (serpin) class [24]. Of note, the upregulation of SERPINA3 is correlated with increases in cancer cell migration and invasion, and indicates a poor prognosis for several cancer types [25,26]. WES analysis also suggested that GLI3 was a driver gene of UC-OGC, as it was detected in two cases [14]. GLI3, as a target of microRNAs and transcription factors of the Hedgehog signalling pathway, is known to be upregulated in multiple cancers, in which it results in cancerous cell behaviour such as anchorage-independent growth, angiogenesis, proliferation and migration [27]. Apart from the above mutations, it was difficult to interpret the importance of the other nonsynonymous mutations in MEGF8, MAGEB4 and TTN detected by WES [14]. Muller et al. reported that KRAS p.G12D dosage gain was not only related to early tumor progression but also associated with metastasis in PDAC [28]. Unfortunately, there is currently no highly selective agent to suppress KRAS-mutated cancer. The WES analysis of our case indicated that the KRAS p.G12D mutation functioned as a major driver that resulted in the activation of downstream signalling pathways and high-grade disease malignancy. The patient suffered a pancreatic tumor at a young age and his disease progressed rapidly within an extremely short time after the previous radical operation. These results indicate that KRAS mutations in both UC-OGC and PDAC result in the activation of oncogenic pathways, which leads to a poor prognosis, and that targeted agents against KRAS oncogenic mutations are urgently needed.
PDAC has been reported to have an immunosuppressive tumor microenvironment with high programmed cell death-ligand 1 (PD-L1) expression; in turn, the overexpression of PD-L1 inhibits the cytotoxic effects of activated T cells [29]. Several studies have indicated that PD-L1 expression in PDAC is associated with a significantly poorer prognosis compared with that in patients without PD-L1 expression [29][30][31][32][33][34]. Luchini et al. investigated the PD-L1 expression patterns in pancreatic UC-OGC and found that PD-L1 was more frequently expressed in cases associated with PDAC than in cases of pure UC-OGC (P = 0.04), and PD-L1-positive UC-OGC was associated with a three-fold (P = 0.034) higher risk of mortality than PD-L1-negative UC-OGC [35]. In addition, the mismatch repair (MMR) system plays a crucial role in the repair of DNA sequence mismatches during replication. Defects in the MMR system (dMMR) can lead to errors in DNA replication, resulting in a high TMB or increased MSI [36]. Thus, somatic mutations lead to the accumulation of a high neoantigen load, which increases proinflammatory cytokine levels and T-cell activation, contributing to the immunogenicity of MSI tumors and their sensitivity to immune checkpoint blockade [37]. Nevertheless, the prevalence of MSI/dMMR in PDAC is likely to be much lower than that in other gastrointestinal cancers, with only a 0-0.8% prevalence rate, as previously reported [38,39]. Salem et al. analyzed 870 PDAC cases and found a low prevalence (1.4%) of high TMB in PDAC, and the majority of cases had a low TMB, whether MSI-high or MSI-low [40]. A genomic profile analysis with a large sample size including 3594 PDAC cases [6] demonstrated that MSI-high and/or TMB-high status was detected in only 0.5% of samples [6]. In addition, KRAS, TP53, CDKN2A and SMAD4 were the most frequently altered genes, and KRAS mutations ranked first, with a prevalence of 88%. Additionally, alterations of the BRCA and FANC genes, which encode DNA damage repair proteins, were found in 14% of PDAC cases [6]. The tumor in our case did not show MSI or a high TMB, and its PD-L1 expression was unknown. Based on the description given above, the patient in our case had no indication for immunotherapy.
In addition to the common KRAS oncogenic mutation, a somatic BRCA2 alteration was detected by WES in this case. Pancreatic cancer has been reported to be the third most common cancer associated with BRCA mutations [41]. Approximately 7% of patients with pancreatic cancer carry germline mutations in BRCA1/2, and the frequency of BRCA1/2 mutation carriers in familial pancreatic cancer has been estimated at 4.9 to 26% [42]. To date, the largest reported PDAC case series involving patients with germline BRCA mutations showed that the median OS was 27.6 months [43]. Ashkenazi Jews are the population with the highest prevalence of BRCA1/2 mutations in pancreatic cancer; approximately 96% of the BRCA1/2 mutations in this population are one of three founder variants (BRCA1 185delAG, BRCA1 5382insC, or BRCA2 6174delT), and the BRCA2 6174delT variant is the most common variant in familial pancreatic cancer [44]. The PARP inhibitor olaparib had an objective response rate (ORR) of 21.7% in heavily pretreated pancreatic cancer patients with germline BRCA1/2 mutations in a phase II study [45]. A randomized phase III study [46] showed that, after first-line platinum-based chemotherapy, olaparib used as maintenance therapy in pancreatic cancer patients with germline BRCA1/2 mutations significantly prolonged the median PFS compared with placebo maintenance (7.4 vs. 3.8 months, P = 0.004).
Advances in pancreatic cancer treatment are lacking, as it is a highly heterogeneous disease that is resistant to conventional cytotoxic chemotherapeutic drugs and targeted agents [47]. The FOLFIRINOX regimen (a combination of oxaliplatin, irinotecan, fluorouracil and leucovorin) [48] or gemcitabine plus albumin-bound paclitaxel [49] is the preferred first-line recommendation for the treatment of metastatic PDAC. Some evidence has also shown that BRCA-deficient cells are more susceptible to platinum than BRCA-proficient cells [50,51], which has been supported by several clinical trials [52,53]. The latest version of the National Comprehensive Cancer Network (NCCN) Guidelines recommends gemcitabine/cisplatin chemotherapy as one of the first-line regimens for BRCA1/BRCA2-mutated PDAC [54]. Waddell et al. reported that, among 8 PDAC patients who received platinum-based chemotherapy, 4 patients with unstable genomes or a high BRCA mutational signature burden had robust complete or partial responses, while 3 patients without these characteristics did not respond. Subsequent research also indicated that BRCA2-mutant patient-derived xenografts (PDXs) responded to cisplatin, whereas PDXs without mutations in a BRCA pathway gene failed to respond [55]. All these findings demonstrate that mutations in BRCA pathway genes or genomic instability have potential implications for the selection of PDAC treatment. In our case, the patient carried a somatic BRCA2 variant (p.R2896C) without known functional consequences. Subsequent bioinformatics analysis with various prediction software packages predicted the BRCA2 p.R2896C mutation to be neutral. The disease in this patient progressed rapidly after only two cycles of platinum-based chemotherapy, and treatment with a PARP inhibitor was not possible owing to the presence of a non-germline BRCA2 mutation.
Based on the genomic mutational landscape revealed by WES, Waddell et al. [55] classified PDAC into four subtypes of potential clinical utility according to exome and copy number variation (CNV) analyses: stable, locally rearranged, scattered and unstable. In the stable subtype, tumor genomes showed evidence of ≤50 structural variations that were located randomly throughout the genome. The locally rearranged subtype exhibited at least 50 focal variations on one or two chromosomes, and nearly one third of the tumors of this subtype contained regions of copy number gain harboring certain oncogenes. The scattered subtype exhibited nonrandom chromosomal damage and fewer than 200 structural variations. The unstable subtype exhibited a large number of structural variations (> 200); this high level of genomic instability suggests defects in DNA maintenance and potential sensitivity to DNA-damaging agents. In addition, Bailey et al. defined another four pancreatic cancer subtypes: squamous, pancreatic progenitor, immunogenic and aberrantly differentiated endocrine exocrine [5]. These different types are associated with distinct histopathological characteristics, and each implies different mechanisms of molecular evolution of pancreatic cancer. To some degree, the assessment of the subtype can guide accurate therapeutic selection for pancreatic cancer. Furthermore, researchers have identified five new susceptibility loci for pancreatic cancer in the Chinese population, providing effective markers for the early screening and diagnosis of this highly malignant cancer [56]. In this case, WES analysis revealed that the CNV in the SOX9 gene showed a gain of approximately 1.11% variation, whereas the CNV results for the KRAS and BRCA2 genes were normal. Based on the mutational landscape of pancreatic cancer illustrated above, the case in this study could be classified as the stable subtype owing to the presence of fewer than 50 structural variation events in the CNV analysis.
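To make the count-based part of these criteria concrete, the toy classifier below maps a structural-variation summary onto the four Waddell et al. subtypes exactly as quoted in the paragraph above. It is a hedged illustration only: deciding whether events are "focal" or "nonrandom" requires inspecting the actual rearrangement map, so those judgements are passed in as flags, and the original study's decision rules are more detailed than these thresholds.

```python
def classify_sv_subtype(n_sv, focal_on_few_chromosomes=False, nonrandom=False):
    """Toy mapping of structural-variation (SV) counts onto the four genomic
    subtypes summarised above (Waddell et al.).  The 'focal' and 'nonrandom'
    judgements are inputs here, which is an assumption of this sketch rather
    than part of the original classification."""
    if n_sv > 200:
        return "unstable"
    if n_sv >= 50 and focal_on_few_chromosomes:
        return "locally rearranged"
    if n_sv <= 50 and not nonrandom:
        return "stable"
    return "scattered"   # nonrandom damage with fewer than 200 SVs

# The case reported above showed fewer than 50 SV events in the CNV analysis:
print(classify_sv_subtype(n_sv=30))   # -> "stable"
```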
In conclusion, although pancreatic UC-OGC is extremely uncommon and complex, the current evidence has clarified that it is a unique variant of conventional PDAC due to the genomic similarities between it and PDAC revealed by WES analysis. Assessment of the clinical and molecular characteristics by WES would further provide potential treatment strategies for this tumor entity. | 2020-06-26T14:48:46.794Z | 2020-06-26T00:00:00.000 | {
"year": 2020,
"sha1": "7bb9e32794eb854ce96e8894c60d5eda020d1d55",
"oa_license": "CCBY",
"oa_url": "https://bmcgastroenterol.biomedcentral.com/track/pdf/10.1186/s12876-020-01351-7",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "7bb9e32794eb854ce96e8894c60d5eda020d1d55",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
5827459 | pes2o/s2orc | v3-fos-license | Magnetometer calibration using inertial sensors
In this work we present a practical algorithm for calibrating a magnetometer for the presence of magnetic disturbances and for magnetometer sensor errors. To allow for combining the magnetometer measurements with inertial measurements for orientation estimation, the algorithm also corrects for misalignment between the magnetometer and the inertial sensor axes. The calibration algorithm is formulated as the solution to a maximum likelihood problem and the computations are performed offline. The algorithm is shown to give good results using data from two different commercially available sensor units. Using the calibrated magnetometer measurements in combination with the inertial sensors to determine the sensor's orientation is shown to lead to significantly improved heading estimates.
Introduction
Nowadays, magnetometers and inertial sensors (gyroscopes and accelerometers) are widely available, for instance in dedicated sensor units and in smartphones. Magnetometers measure the local magnetic field. When no magnetic disturbances are present, the magnetometer measures a constant local magnetic field vector. This vector points to the local magnetic north and can hence be used for heading estimation. Gyroscopes measure the angular velocity of the sensor. Integration of the gyroscope measurements gives information about the change in orientation. However, it does not provide absolute orientation estimates. Furthermore, the orientation estimates suffer from integration drift. Accelerometers measure the sensor's acceleration in combination with the earth's gravity. In the case of small or zero acceleration, the measurements are dominated by the gravity component. Hence, they can be used to estimate the inclination of the sensor.
Inertial sensors and magnetometers have successfully been used to obtain accurate 3D orientation estimates for a wide range of applications. For this, however, it is imperative that the sensors are properly calibrated and that the sensor axes are aligned. Calibration is specifically of concern for the magnetometer, which needs recalibration whenever it is placed in a (magnetically) different environment. When the magnetic disturbance is a result of the mounting of the magnetometer onto a magnetic object, the magnetometer can be calibrated to compensate for the presence of this disturbance. This is the focus of this work.
Our main contribution is a practical magnetometer calibration algorithm that is designed to improve orientation estimates when combining calibrated magnetometer data with inertial data. The word practical refers to the fact that the calibration does not require specialized additional equipment and can therefore be performed by any user. More specifically, this means that the orientation of the sensor is not assumed to be known. Instead, the calibration problem is formulated as an orientation estimation problem in the presence of unknown parameters and is posed as a maximum likelihood (ML) problem. The algorithm calibrates the magnetometer for the presence of magnetic disturbances, for magnetometer sensor errors and for misalignment between the magnetometer and the inertial sensor axes. Using the calibrated magnetometer measurements to estimate the sensor's orientation is experimentally shown to lead to significantly improved heading estimates. We aggregate and extend the work from [1] and [2] with improvements on the implementation of the algorithm. Furthermore, we include a more complete description and analysis, more experimental results and a simulation study illustrating the heading accuracy that can be obtained with a properly calibrated sensor.
To perform the calibration, the sensor needs to be rotated in all possible orientations. A perfectly calibrated magnetometer would in that case measure rotated versions of the local magnetic field vector. Hence, the magnetometer data would lie on a sphere. In practice, however, the magnetometer will often measure an ellipsoid of data instead. The calibration maps the ellipsoid of data to a sphere as illustrated in Figure 1. The alignment of the inertial and magnetometer sensor axes determines the orientation of the sphere. Since we are interested in improving the heading estimates, the actual magnitude of the local magnetic field is of no concern. Hence, we assume without loss of generality that the norm is equal to 1, i.e. the sphere in Figure 1 is a unit sphere.
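To make the sphere-to-ellipsoid picture concrete, the following sketch applies a calibration of the form introduced later in the paper (Section 4), where the raw measurements are modelled as a distorted and offset version of the rotated, normalized local magnetic field. The variable names and the synthetic data are illustrative assumptions; only the correction formula m_hat = D^{-1}(y_m - o) reflects the model discussed in this work.

```python
import numpy as np

def calibrate_magnetometer(y_m, D_hat, o_hat):
    """Map raw magnetometer samples (the ellipsoid of Figure 1) back onto the
    unit sphere, given estimates of the distortion matrix D and offset o from
    the model of Section 4:  y_m ~ D R^{bn} m^n + o,  hence
    m_hat = D^{-1} (y_m - o)."""
    return np.linalg.solve(D_hat, (np.asarray(y_m) - o_hat).T).T

# Self-check on synthetic data: distort unit vectors, then undo the distortion.
rng = np.random.default_rng(0)
m_true = rng.normal(size=(1000, 3))
m_true /= np.linalg.norm(m_true, axis=1, keepdims=True)
D_true = np.array([[1.2, 0.1, 0.0],
                   [0.0, 0.9, 0.05],
                   [0.0, 0.0, 1.1]])
o_true = np.array([0.3, -0.2, 0.1])
y_m = m_true @ D_true.T + o_true                        # lies on an ellipsoid
m_cal = calibrate_magnetometer(y_m, D_true, o_true)
print(np.allclose(np.linalg.norm(m_cal, axis=1), 1.0))  # True
```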
Related work
Traditional magnetometer calibration approaches assume that a reference sensor is available which is able to provide accurate heading information. A well-known example of this is compass swinging [3]. To allow for any user to perform the calibration, however, a large number of approaches have been developed that remove the need for a source of orientation information. One class of these magnetometer calibration algorithms focuses on minimizing the difference between the magnitude of the measured magnetic field and that of the local magnetic field, see e.g. [4]. This approach is also referred to as scalar checking [5]. Another class formulates the calibration problem as an ellipsoid fitting problem, i.e. as the problem of mapping an ellipsoid of data to a sphere, see e.g. [6,7,8]. The benefit of using this formulation, is that there is a vast literature on solving ellipsoid fitting problems, see e.g. [9,10]. Outside of these two classes, a large number of other calibration approaches is also available, for instance [11], where different formulations of the calibration problem in terms of an ML problem are considered.
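For reference, an elementary version of the ellipsoid-fitting idea used by this class of methods (and, in this paper, as the initialisation in Step 1b of Algorithm 1) is sketched below. It is one common algebraic least-squares formulation, chosen for brevity; the cited references use their own, more careful variants, and the rotation of the resulting calibration matrix remains undetermined by magnetometer data alone, as discussed in the next paragraph.

```python
import numpy as np

def fit_ellipsoid(y):
    """Algebraic least-squares fit of the quadric  y^T A y + b^T y = 1
    to magnetometer samples y (N x 3).  Returns (A, b)."""
    x1, x2, x3 = y[:, 0], y[:, 1], y[:, 2]
    M = np.column_stack([x1 * x1, x2 * x2, x3 * x3,
                         2 * x1 * x2, 2 * x1 * x3, 2 * x2 * x3,
                         x1, x2, x3])
    p, *_ = np.linalg.lstsq(M, np.ones(len(y)), rcond=None)
    A = np.array([[p[0], p[3], p[4]],
                  [p[3], p[1], p[5]],
                  [p[4], p[5], p[2]]])
    return A, p[6:9]

def initial_D_o(A, b):
    """Turn the fitted quadric into an initial offset o0 (ellipsoid centre) and
    matrix D0 such that D0^{-1}(y - o0) lies close to the unit sphere.  The
    rotation of D0 is not identifiable from magnetometer data alone; the
    overall scale is fixed by the unit-norm convention."""
    o = -0.5 * np.linalg.solve(A, b)
    k = o @ A @ o + 1.0               # (y - o)^T A (y - o) = k on the ellipsoid
    L = np.linalg.cholesky(A / k)     # assumes A/k is positive definite
    return np.linalg.inv(L.T), o      # one valid choice of D0
```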
The benefit of the approaches discussed above is that they can be used with data from a magnetometer only. Our interest, however, lies in calibrating a magnetometer for improved heading estimation in combination with inertial sensors. Alignment of the sensor axes of the inertial sensors and the magnetometer is in this case crucial. This alignment can be seen as determining the orientation of the blue sphere of calibrated magnetometer data in Figure 1. Algorithms that only use magnetometer data can map the red ellipsoid of data to a sphere, but without additional information, the rotation of this sphere remains unknown.
A number of recent approaches include a second step in the calibration algorithm to determine the misalignment [6,12,13,14] between different sensor axes. A common choice to align the magnetometer and inertial sensor axes, is to use accelerometer measurements from periods of fairly small accelerations [12,13]. The downside of this approach is that a threshold for using accelerometer measurements needs to be determined. Furthermore, data from the gyroscope is hereby omitted. In [15] on the other hand, the problem is reformulated in terms of the change in orientation, allowing for direct use of the gyroscope data.
In our algorithm we instead formulate the magnetometer calibration problem as a problem of estimating the sensor's orientation in the presence of unknown (calibration) parameters. This formulation naturally follows from the fact that the problem of orientation estimation and that of magnetometer calibration are inherently connected: If the magnetometer is properly calibrated, good orientation estimates can be obtained. Reversely, if the orientation of the sensor is known accurately, the rotation of the sphere in Figure 1 can accurately be determined, resulting in a good magnetometer calibration. In this formulation, data from the accelerometer and the gyroscope is used to aid the magnetometer calibration.
Our formulation of the calibration problem requires solving a non-convex optimization problem to obtain ML estimates of the calibration parameters. To obtain good initial values of the parameters, an ellipsoid fitting problem and a misalignment estimation problem are solved. Solving the calibration problem as a two-step procedure is similar to the approaches in [12,13]. We analyze the quality of the initial estimates and of the ML estimates in terms of their heading accuracy, both for experimental and simulated data. Based on this analysis, we show that significant heading accuracy improvements can be obtained by using the ML estimates of the parameters.
Problem formulation
Our magnetometer calibration algorithm is formulated as a problem of determining the sensor's orientation in the presence of unknown model parameters θ. It can hence be considered to be a grey-box system identification problem. A nonlinear state space model of the following form is used,

x_{t+1} = f_t(x_t, y_{ω,t}, δ_ω, e_{ω,t}),   (1a)
y_t = ( y_{a,t} ; y_{m,t} ) = ( h_{a,t}(x_t) ; h_{m,t}(x_t, θ) ) + ( e_{a,t} ; e_{m,t} ),   (1b)

where the state x_t represents the sensor's orientation at time t. We use the change in orientation, i.e. the angular velocity ω_t, as an input to the dynamic model f_t(·). The angular velocity is measured by the gyroscope. However, the measurements y_{ω,t} are corrupted by a constant bias δ_ω and Gaussian i.i.d. measurement noise with zero mean and covariance Σ_ω, i.e. e_{ω,t} ∼ N(0_{3×1}, Σ_ω). The measurement models h_{a,t}(·) and h_{m,t}(·) in (1b) describe the accelerometer measurements y_{a,t} and the magnetometer measurements y_{m,t}, respectively. The accelerometer measurement model assumes that the acceleration of the sensor is small compared to the earth's gravity. Since the magnetometer is not assumed to be properly calibrated, the magnetometer measurement model h_{m,t}(·) depends on the parameter vector θ. The exact details of the magnetometer measurement model will be introduced in Section 4. The accelerometer and magnetometer measurements are corrupted by Gaussian i.i.d. measurement noise, e_{a,t} ∼ N(0_{3×1}, Σ_a) and e_{m,t} ∼ N(0_{3×1}, Σ_m).   (2)
The calibration problem is formulated as an ML problem. Hence, the parameters θ in (1) are found by maximizing the likelihood function p_θ(y_{1:N}), where y_{1:N} = {y_1, ..., y_N},

θ̂_ML = arg max_{θ ∈ Θ} p_θ(y_{1:N}),   (3)

with Θ ⊆ R^{n_θ}. Using conditional probabilities and the fact that the logarithm is a monotonic function, we have the following equivalent formulation of (3),

θ̂_ML = arg min_{θ ∈ Θ} − ∑_{t=1}^{N} log p_θ(y_t | y_{1:t−1}),   (4)

where we use the convention that y_{1:0} denotes the empty set. The ML estimator (4) enjoys well-understood theoretical properties, including strong consistency, asymptotic normality, and asymptotic efficiency [16]. The state space model (1) is nonlinear, implying that there is no closed-form solution available for the one-step-ahead predictor p_θ(y_t | y_{1:t−1}) in (4). This can systematically be handled using sequential Monte Carlo methods (e.g. particle filters and particle smoothers), see e.g. [17,18]. However, for the magnetometer calibration problem it is sufficient to make use of a more pragmatic approach; we simply approximate the one-step-ahead predictor using an extended Kalman filter (EKF). The result is

p_θ(y_t | y_{1:t−1}) ≈ N( ŷ_{t|t−1}(θ), S_t(θ) ),   (5)

where the mean value ŷ_{t|t−1}(θ) and the covariance S_t(θ) are obtained from the EKF [19]. Inserting (5) into (4) and neglecting all constants not depending on θ results in the following optimization problem,

min_{θ ∈ Θ}  (1/2) ∑_{t=1}^{N} ( log det S_t(θ) + (y_t − ŷ_{t|t−1}(θ))^T S_t^{−1}(θ) (y_t − ŷ_{t|t−1}(θ)) ),   (6)

which we can solve for the unknown parameters θ. The problem (6) is non-convex, implying that a good initial value for θ is required.
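As an illustration of how the cost function in (6) is evaluated in practice, the sketch below accumulates the EKF innovations over the data set. The EKF itself (orientation state, dynamic and measurement Jacobians) is application specific and is only assumed here through the ekf_step callable; the names are placeholders rather than the authors' implementation.

```python
import numpy as np

def negative_log_likelihood(theta, y, ekf_step):
    """Evaluate the ML cost (6): 0.5 * sum_t ( log det S_t + e_t^T S_t^{-1} e_t ),
    where e_t = y_t - y_hat_{t|t-1}(theta) is the EKF innovation.

    theta    : parameter vector (magnetometer calibration parameters, gyroscope
               bias, local magnetic field, ...).
    y        : (N, ny) array of stacked accelerometer/magnetometer samples.
    ekf_step : callable (theta, y_t, state) -> (y_pred, S, new_state) performing
               one EKF time and measurement update; its internals are omitted."""
    cost, state = 0.0, None
    for y_t in y:
        y_pred, S, state = ekf_step(theta, y_t, state)
        e = y_t - y_pred
        _, logdet = np.linalg.slogdet(S)
        cost += 0.5 * (logdet + e @ np.linalg.solve(S, e))
    return cost
```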
Magnetometer measurement model
In the case of perfect calibration, a magnetometer measures the local magnetic field and its measurements therefore lie on a sphere with a radius equal to the magnitude of the local magnetic field. Since we are interested in using the magnetometer measurements to improve the orientation estimates from the state space model (1), the actual magnitude of the local magnetic field is of no concern. Hence, we assume without loss of generality that its norm is equal to one. We denote the normalized local magnetic field by m^n. Ideally, the magnetometer measurements then lie on a sphere with radius equal to one,

h_{m,t} = R^{bn}_t m^n,   (7)

where h_{m,t} is defined in (1b). The explicit dependence on x_t and θ has been omitted for notational simplicity. The matrix R^{bn}_t is the rotation matrix representation of the orientation at time t. The superscript bn denotes that the rotation is from the navigation frame n to the body frame b. The body frame b is aligned with the sensor axes. The navigation frame n is aligned with the earth's gravity and the local magnetic field. In case the coordinate frame in which a vector is defined can be ambiguous, we explicitly indicate in which coordinate frame the vector is expressed by adding a superscript b or n. Hence, m^n denotes the normalized local magnetic field in the navigation frame n, while m^b_t denotes the normalized local magnetic field in the body frame b. The latter is time-dependent and therefore also has a subscript t. Note that the rotation from the body frame to the navigation frame is denoted R^{nb}_t, and R^{bn}_t = (R^{nb}_t)^T. In outdoor environments, the local magnetic field is equal to the local earth magnetic field. Its horizontal component points towards the earth's magnetic north pole. The ratio between the horizontal and vertical components depends on the location on the earth and can be expressed in terms of the dip angle δ. In indoor environments, the magnetic field can locally be assumed to be constant and points towards a local magnetic north, which is not necessarily the earth's magnetic north pole. Choosing the navigation frame n such that the x-axis points towards the local magnetic north, m^n can be parametrized either in terms of its vertical component, as in (8a), or in terms of the dip angle δ, as in (8b). Note that the two parametrizations do not encode exactly the same knowledge about the magnetic field; the first component of m^n in (8a) is positive by construction, while this is not true for (8b). However, both parametrizations will be used in the remainder. It will be argued that no information is lost by using (8b) if the parameter estimates are properly initialized. The main need for magnetometer calibration arises from the fact that a magnetometer needs recalibration each time it is placed in a magnetically different environment. Specifically, a magnetometer measures a superposition of the local magnetic field and of the magnetic field due to the presence of magnetic material in the vicinity of the sensor. In case this magnetic material is rigidly attached to the magnetometer, it is possible to calibrate the magnetometer measurements for this. The magnetic material can give rise to both hard and soft iron contributions to the magnetic field. Hard iron effects are due to permanent magnetization of the magnetic material and lead to a constant 3 × 1 offset vector o_hi. Soft iron effects are due to magnetization of the material as a result of an external magnetic field and therefore depend on the orientation of the material with respect to the local magnetic field.
We model this in terms of a 3 × 3 matrix C_si. Hence, the magnetometer measurements do not lie on a sphere as in (7); instead, they lie on a translated ellipsoid,

h_{m,t} = C_si R^{bn}_t m^n + o_hi.   (9)

As discussed in Section 2, when calibrating the magnetometer to obtain better orientation estimates, it is important that the magnetometer and the inertial sensor axes are aligned. Let us now be more specific about the definition of the body frame b and define it to be located in the center of the accelerometer triad and aligned with the accelerometer sensor axes. Furthermore, let us assume that the accelerometer and gyroscope axes are aligned. Defining the rotation between the body frame b and the magnetometer sensor frame b_m as R^{b_m b}, the model (9) can be extended to

h_{m,t} = C_si R^{b_m b} R^{bn}_t m^n + o_hi.   (10)

Finally, the magnetometer calibration can also correct for the presence of sensor errors in the magnetometer. These errors are sensor-specific and can differ for each individual magnetometer. They can be subdivided into three components, see e.g. [8,7,6]:
1. Non-orthogonality of the magnetometer axes, represented by a matrix C_no.
2. Presence of a zero bias or null shift, implying that the magnetometer will measure a non-zero magnetic field even if the magnetic field is zero, represented by an offset vector o_zb.
3. Difference in sensitivity of the three magnetometer axes, represented by a diagonal matrix C_sc.
We can therefore extend the model (10) to also include the magnetometer sensor errors, leading to (11). To obtain a correct calibration, it is fortunately not necessary to identify all individual contributions of the different components in (11). Instead, they can be combined into a 3 × 3 distortion matrix D and a 3 × 1 offset vector o, as defined in (12). The resulting magnetometer measurement model in (1b) can then be written as (13). In deriving the model we have made two important assumptions:

Assumption 1. The calibration matrix D and offset vector o in (12) are assumed to be time-independent. This implies that we assume that the magnetic distortions are constant and rigidly attached to the sensor. Also, the inertial and the magnetometer sensor axes are assumed to be rigidly attached to each other, i.e. their misalignment is constant over time. Additionally, in our algorithm we will assume that their misalignment can be described by a rotation matrix, i.e. that their axes are not mirrored with respect to each other.
Assumption 2. The local magnetic field m^n is assumed to be constant. In outdoor environments, this is typically a physically reasonable assumption. In indoor environments, however, the local magnetic field can differ between locations in a building, and care should be taken to fulfill the assumption.
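To make the combined model concrete, the following Python/NumPy sketch simulates a magnetometer measurement of the form y_m,t = D R^bn_t m^n + o + e_m,t, which is the form the combined model takes given the quantities defined above, and then inverts the calibration. The numerical values of D, o, the dip angle, and the Euler-angle construction of the rotation are illustrative assumptions, not values taken from the paper.

```python
# Sketch of the combined magnetometer measurement model and its inverse.
# All specific numbers below (D, o, dip angle, orientation) are made up for the example.
import numpy as np

rng = np.random.default_rng(0)

D = np.array([[1.10, 0.02, 0.00],      # combined distortion matrix (soft iron, non-orthogonality,
              [0.01, 0.95, 0.03],      # axis scaling and sensor-axis misalignment)
              [0.00, 0.02, 1.05]])
o = np.array([0.20, -0.10, 0.05])      # combined offset (hard iron and zero bias)
# One possible dip-angle parametrization of the normalized local field (sign convention assumed).
m_n = np.array([np.cos(np.deg2rad(67.0)), 0.0, -np.sin(np.deg2rad(67.0))])

def R_bn(roll, pitch, yaw):
    """Rotation from navigation to body frame (transpose of a ZYX body-to-navigation rotation)."""
    cr, sr, cp, sp, cy, sy = np.cos(roll), np.sin(roll), np.cos(pitch), np.sin(pitch), np.cos(yaw), np.sin(yaw)
    R_nb = np.array([[cy*cp, cy*sp*sr - sy*cr, cy*sp*cr + sy*sr],
                     [sy*cp, sy*sp*sr + cy*cr, sy*sp*cr - cy*sr],
                     [-sp,   cp*sr,            cp*cr]])
    return R_nb.T

# Simulate one measurement and undo the calibration.
Rbn = R_bn(0.1, -0.2, 0.5)
y_m = D @ Rbn @ m_n + o + 0.005 * rng.standard_normal(3)   # measured field
m_b_hat = np.linalg.solve(D, y_m - o)                       # calibrated measurement, approx. Rbn @ m_n
print(np.linalg.norm(m_b_hat))                               # close to 1 after calibration
```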
5 Calibration algorithm
In our magnetometer calibration algorithm we solve the optimization problem (6) to estimate the parameter vector θ. In this section we introduce the resulting calibration algorithm which is summarized in Algorithm 1. In Section 5.1, we first discuss our optimization strategy. A crucial part of this optimization strategy is the evaluation of the cost function. Some details related to this are discussed in Section 5.2. Finally, in Section 5.3 we introduce the parameter vector θ in more detail.
Algorithm 1 (magnetometer calibration)
1. Initialize the parameters:
(a) Obtain initial estimates of the gyroscope bias δ_ω and of the noise covariance matrices Σ_ω, Σ_a and Σ_m, e.g. from a short batch of stationary data (see Section 6).
(b) Obtain an initial D̃_0 and o_0 based on ellipsoid fitting (see Section 6.1).
(c) Obtain initial D_0, o_0 and m^n_0 by initial determination of the sensor axis misalignment (see Section 6.2).
2. Set i = 0 and repeat,
(a) Run the EKF using the current estimates and evaluate the cost function in (6).
(b) Determine θ_{i+1} using the numerical gradient of the cost function in (6), its approximate Hessian and a backtracking line search algorithm.
5.1 Optimization algorithm
The optimization problem (6) is solved in Step 2 of Algorithm 1. Standard unconstrained minimization techniques are used, which iteratively update the parameter estimates as

θ_{i+1} = θ_i + α_i p_i,

where α_i denotes the step length and the direction of the parameter update at iteration i is determined by

p_i = -[H(θ_i)]^{-1} G(θ_i).

Typical choices for the search direction include choosing G(θ_i) to be the gradient of the cost function in (6) and H(θ_i) to be its Hessian. This leads to a Newton optimization algorithm. However, computing the gradient and Hessian of (6) is not straightforward. Possible approaches are discussed in [20,21] for the case of linear models. In the case of nonlinear models, however, they only lead to approximate gradients, see e.g. [22,23]. For this reason we make use of a numerical approximation of G(θ_i) instead, and use a Broyden-Fletcher-Goldfarb-Shanno (BFGS) method with damped updating [24] to approximate the Hessian. Hence, the minimization is performed using a quasi-Newton optimization algorithm. A backtracking line search is used to find a good step length α_i.
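The sketch below illustrates the kind of quasi-Newton iteration described above: a numerical gradient, a BFGS Hessian approximation, and a backtracking line search. For simplicity it uses a plain (undamped) BFGS update and a stand-in quadratic cost so that it runs on its own; in Algorithm 1, the cost function would instead run the EKF and evaluate (6).

```python
# Minimal quasi-Newton sketch: numerical gradient + BFGS Hessian approximation + backtracking line search.
import numpy as np

def numerical_gradient(f, theta, eps=1e-6):
    g = np.zeros_like(theta)
    f0 = f(theta)
    for k in range(theta.size):             # one extra cost evaluation per parameter
        dtheta = np.zeros_like(theta)
        dtheta[k] = eps
        g[k] = (f(theta + dtheta) - f0) / eps
    return g

def quasi_newton(f, theta0, n_iter=50):
    theta, B = theta0.astype(float), np.eye(theta0.size)   # B approximates the Hessian
    g = numerical_gradient(f, theta)
    for _ in range(n_iter):
        p = -np.linalg.solve(B, g)                          # search direction
        alpha = 1.0
        while f(theta + alpha * p) > f(theta) + 1e-4 * alpha * g @ p:
            alpha *= 0.5                                    # backtracking line search (Armijo condition)
        s = alpha * p
        theta_new = theta + s
        g_new = numerical_gradient(f, theta_new)
        y = g_new - g
        if y @ s > 1e-12:                                   # plain BFGS update (no damping in this sketch)
            B = B - np.outer(B @ s, B @ s) / (s @ B @ s) + np.outer(y, y) / (y @ s)
        theta, g = theta_new, g_new
    return theta

# Stand-in cost with known minimum at (1, 2, 3); a real calibration would use the EKF-based cost.
f = lambda th: np.sum((th - np.array([1.0, 2.0, 3.0]))**2)
print(quasi_newton(f, np.zeros(3)))   # approaches [1, 2, 3]
```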
Proper initialization of the parameters is crucial since the optimization problem (6) is non-convex.
Step 1 summarizes the three-step process used to obtain good initial estimates of all parameters.
5.2 Evaluation of the cost function
An important part of the optimization procedure is the evaluation of the cost function in (6). This requires running an EKF using the state space model (1) to estimate the orientation of the sensor. The EKF uses the angular velocity ω_t as an input to the dynamic model (1a). An estimate of the angular velocity is obtained from the gyroscope measurements y_{ω,t}, which are modeled as

y_{ω,t} = ω_t + δ_ω + e_{ω,t},

where δ_ω denotes the gyroscope bias and e_{ω,t} the gyroscope measurement noise. The measurement model (1b) comprises the accelerometer measurements and the magnetometer measurements. The magnetometer measurement model can be found in (13). The accelerometer measurements y_{a,t} are modeled as

y_{a,t} = R^bn_t (a^n_t − g^n) + e_{a,t} ≈ −R^bn_t g^n + e_{a,t},

where a^n_t denotes the sensor's acceleration in the navigation frame and g^n denotes the earth's gravity. The rotation matrix R^bn_t has previously been introduced in Section 4.

The state in the EKF, which represents the sensor orientation, can be parametrized in different ways. In previous work we have used a quaternion representation as a 4-dimensional state vector [1]. In this work we instead use an implementation of the EKF which is sometimes called a multiplicative EKF [25,26,27]. Here, a 3-dimensional state vector represents the orientation deviation from a linearization point. More details on this implementation can be found in [28].
The EKF returns the one step ahead predicted measurements { y t|t−1 (θ)} N t=1 and their covariance {S t (θ)} N t=1 which can be used to evaluate (6). The cost function needs to be evaluated for the current parameter estimates in Step 2a but also needs to be evaluated once for each component of the parameter vector θ to compute the numerical gradient. Hence, each iteration i requires running the EKF at least n θ + 1 times. Note that the actual number of evaluations can be higher since the backtracking line search algorithm used to determine α i can require a varying number of additional evaluations. Since n θ = 34, computing the numerical gradient is computationally rather expensive. However, it is possible to parallelize the computations.
5.3 The parameter vector θ
As apparent from Section 4, our main interest lies in determining the calibration matrix D and the offset vector o, which can be used to correct the magnetometer measurements to obtain more accurate orientation estimates. To solve the calibration problem, however, we also estimate a number of other parameters.
First, the local magnetic field m^n introduced in Section 4 is in general unknown and needs to be estimated. In outdoor environments, m^n is equal to the local earth magnetic field and is accurately known from geophysical studies, see e.g. [29]. In indoor environments, however, the local magnetic field can differ quite significantly from the local earth magnetic field. Because of that, we treat m^n as an unknown constant. Second, the gyroscope measurements that are used to describe the change in orientation of the sensor in (1a) are corrupted by a bias δ_ω. This bias is slowly time-varying, but for our relatively short experiments it can be assumed to be constant. Hence, it is treated as part of the parameter vector θ. Finally, we treat the noise covariance matrices Σ_ω, Σ_a and Σ_m as unknown. In summary, the parameter vector θ consists of the calibration matrix D, the offset vector o, the local magnetic field m^n, the gyroscope bias δ_ω and the three noise covariance matrices, as listed in (17), where m^n_x and m^n_y denote the x- and y-components of m^n, respectively. The notation Σ ⪰ 0 denotes the assumption that the matrix Σ is positive semi-definite.
Although (17c) and (17e) -(17g) suggest that constrained optimization is needed, it is possible to circumvent this via suitable reparametrizations. The covariance matrices can be parametrized in terms of their Cholesky factorization, leading to only 6 parameters for each 3 × 3 covariance matrix. The local magnetic field can be parametrized using only one parameter as in (8). Note that in our implementation we prefer to use the representation (8b) for the ML problem (6). Although this latter parametrization does not account for the constraint m n x > 0, this is of no concern due to proper initialization. The procedure to obtain good initial estimates of all parameters is the topic of the next section.
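As an illustration of these reparametrizations, the sketch below maps 6 unconstrained parameters to a positive semi-definite 3 × 3 covariance through a Cholesky factor, and a single dip-angle parameter to a unit-norm m^n. The sign convention of the dip-angle parametrization in the sketch is an assumption; the key point is simply that one unconstrained parameter suffices.

```python
# Sketch of the reparametrizations: Cholesky-factor covariance parameters and a dip-angle magnetic field.
import numpy as np

def cov_from_params(p6):
    """Map 6 unconstrained parameters to a positive semi-definite 3x3 covariance."""
    L = np.zeros((3, 3))
    L[np.tril_indices(3)] = p6          # fill the lower triangle
    return L @ L.T

def params_from_cov(Sigma):
    """Inverse map: entries of the lower-triangular Cholesky factor."""
    return np.linalg.cholesky(Sigma)[np.tril_indices(3)]

def m_n_from_dip(delta):
    # Assumed sign convention for the dip-angle parametrization; unit norm by construction.
    return np.array([np.cos(delta), 0.0, -np.sin(delta)])

Sigma = np.diag([1e-3, 2e-3, 1.5e-3])
p = params_from_cov(Sigma)
print(np.allclose(cov_from_params(p), Sigma))            # True: round trip recovers the covariance
print(np.linalg.norm(m_n_from_dip(np.deg2rad(67.0))))    # 1.0
```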
6 Finding good initial estimates
Since the optimization problem is non-convex, the parameter vector θ introduced in Section 5 needs proper initialization. An initial estimate θ 0 is obtained using a three-step method. As a first step, the gyroscope bias δ ω and the noise covariances of the inertial sensors, Σ ω , Σ a , and of the magnetometer, Σ m , are initialized. This is done using a short batch of stationary data. Alternatively, they can be initialized based on prior sensor knowledge. As a second step, described in Section 6.1, an ellipsoid fitting problem is solved using the magnetometer data. This maps the ellipsoid of data to a sphere but can not determine the rotation of the sphere. The rotation of the sphere is determined in a third step of the initialization procedure. This step also determines an initial estimate of the normalized local magnetic field m n .
6.1 Ellipsoid fitting
Using the definition of the normalized local magnetic field m^n, we would expect all calibrated magnetometer measurements to lie on the unit sphere, as expressed in (18). In practice, the measurements are corrupted by noise, and the equality (18) does not hold exactly. The ellipsoid fitting problem can therefore be written as in (19), with the parameters defined in (20). Assuming that the matrix A is positive definite, (19) can be recognized as the definition of an ellipsoid with parameters A, b and c (see e.g. [9]). We can rewrite (19) as a relation that is linear in the parameter vector ξ, as in (21), where ⊗ denotes the Kronecker product and vec denotes the vectorization operator. This problem has infinitely many solutions, and without constraining the length of the vector ξ, the trivial solution ξ = 0 would be obtained.

A possible approach to solve the ellipsoid fitting problem is to make use of a singular value decomposition [9,2]. This approach inherently poses a length constraint on the vector ξ, assuming that its norm is equal to 1. It does, however, not guarantee positive definiteness of the matrix A. Although positive definiteness of A is not guaranteed, there are only very few practical scenarios in which the estimated matrix A will not be positive definite; a non-positive definite matrix A can for instance be obtained in cases of very limited rotation of the sensor. The problem of obtaining a non-positive definite matrix A can be circumvented by instead solving the ellipsoid fitting problem as the semidefinite program (23), where S^{3×3}_{++} denotes the set of 3 × 3 positive definite symmetric matrices. By constraining the trace of the matrix A, (23) avoids the trivial solution ξ = 0. The problem (23) is a convex optimization problem and therefore has a globally optimal solution and does not require an accurate initial guess of the parameter vector ξ. It can easily be formulated and efficiently solved using freely available software packages like YALMIP [32] or CVX [33].
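The sketch below implements the SVD-based variant of the ellipsoid fit mentioned above (the constraint ||ξ|| = 1 is imposed implicitly by taking the right singular vector of the smallest singular value), applied to synthetic data. The semidefinite formulation (23), which additionally guarantees a positive definite A, is not reproduced here; it could be set up with a convex-optimization package in a few extra lines.

```python
# SVD-based ellipsoid fit: find A, b, c such that y^T A y + b^T y + c ~ 0 for all measurements y.
import numpy as np

def fit_ellipsoid(Y):
    """Y is an N x 3 array of magnetometer measurements. Returns A (3x3), b (3,), c."""
    rows = [np.concatenate([np.kron(y, y), y, [1.0]]) for y in Y]
    M = np.asarray(rows)
    _, _, Vt = np.linalg.svd(M, full_matrices=False)
    xi = Vt[-1]                                # right singular vector of the smallest singular value
    A = xi[:9].reshape(3, 3)
    A = 0.5 * (A + A.T)                        # symmetrize
    b, c = xi[9:12], xi[12]
    if np.trace(A) < 0:                        # fix the overall sign of the solution
        A, b, c = -A, -b, -c
    return A, b, c

# Toy check: noisy points on an offset, axis-scaled sphere.
rng = np.random.default_rng(1)
u = rng.standard_normal((500, 3))
u /= np.linalg.norm(u, axis=1, keepdims=True)
Y = u * np.array([1.2, 0.9, 1.1]) + np.array([0.3, -0.2, 0.1]) + 0.005 * rng.standard_normal((500, 3))
A, b, c = fit_ellipsoid(Y)
print(np.linalg.eigvalsh(A))                   # all positive for this well-conditioned example
```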
Initial estimates of the calibration matrix D and the offset vector o can be obtained from the estimated A, b and c as in (24), where o_0 denotes the initial estimate of the offset vector o. From (24b) it is not possible to uniquely determine the initial estimate of the calibration matrix D. We determine an intermediate estimate D̃_0 using a Cholesky decomposition, leading to a lower triangular D̃_0. However, any D̃_0 U where U U^T = I_3 will also fulfill (24b). As discussed in Assumption 1 in Section 4, we assume that the sensor axes of the inertial sensors and the magnetometers are related by a rotation, implying that we restrict the matrix U to be a rotation matrix. The initial estimate D_0 can therefore be defined in terms of D̃_0 and an unknown rotation matrix R_D, as in (25). The rotation matrix R_D will be determined in Section 6.2.
6.2 Determining the misalignment of the inertial and magnetometer sensor axes
The third step of the initial estimation aims at determining the misalignment between the inertial and the magnetometer sensor axes. It also determines an initial estimate of the normalized local magnetic field, m^n_0. These estimates are obtained by combining the magnetometer measurements with the inertial sensor measurements. The approach is based on the fact that the inner product of two vectors is invariant under rotation. The two vectors considered here are m^n and the vertical v^n = (0 0 1)^T. Hence, it is assumed that the inner product of the vertical v^b_t in the body frame b, defined in (26a), and the normalized local magnetic field m^b_t in the body frame, defined in (26b), is constant. The matrix R_D in (26b) denotes the rotation needed to align the inertial and magnetometer sensor axes. The rotation matrix R^nb_t in (26a) is a rotation matrix representation of the orientation estimate at time t obtained from an EKF. This EKF is similar to the one described in Section 5.2, but it does not use the magnetometer measurements, since they have not yet been properly calibrated and can therefore not result in accurate heading estimates. However, to determine the vertical v^b_t, only the sensor's inclination is of concern, which can be determined using the inertial measurements only.
The inner product between m^n and v^n is equal to m^n_z (see also (8a)). Since this inner product is invariant under rotation, we can formulate the minimization problem (27) over R_D and m^n_{z,0}. The rotation matrix R_D can be parametrized using an orientation deviation from a linearization point, similar to the approach described in Section 5.2. Hence, (27) can be solved as an unconstrained optimization problem. Based on these results and (25), we obtain the initial estimates D_0 and m^n_0 in (28). Hence, we have obtained an initial estimate θ_0 of the entire parameter vector θ as introduced in Section 5.
7 Experimental results
7.1 Experimental setup
Experiments have been performed using two commercially available inertial measurement units (IMUs), an Xsens MTi-100 [34] and a Trivisio Colibri Wireless IMU [35]. The experimental setup of both experiments can be found in Figure 2. The experiment with the Xsens IMU was performed outdoors to ensure a homogeneous local magnetic field. The experiment with the Trivisio IMU was performed indoors; however, it was performed relatively far away from any magnetic materials such that the local magnetic field is as homogeneous as possible. The Xsens IMU was placed in an aluminum block with right angles which can be used to rotate the sensor by 90° to verify the heading results. For both sensors, inertial and magnetometer measurements were collected at 100 Hz.
7.2 Calibration results
For calibration, the IMU needs to be slowly rotated such that the assumption of zero acceleration is reasonably valid. This leads to an ellipsoid of magnetometer data, as depicted in red in Figs. 1 and 3. Note that for plotting purposes the data has been downsampled to 1 Hz. To emphasize the deviation of the norm from 1, the norm of the magnetometer data is depicted in red in Figure 4 for both experiments. The calibration parameters are then estimated using Algorithm 1. Applying the calibration result to the magnetometer data leads to the unit sphere of data in blue in Figure 1. The norm of the magnetometer data after calibration can indeed be seen to lie around 1, as depicted in blue in Figure 4.

As a measure of the calibration quality, we analyze the normalized residuals S_t^{-1/2}(y_t − y_{t|t−1}) from the EKF after calibration. For each time t, this is a vector in R^6. In the case of correctly calibrated parameters that sufficiently model the magnetic disturbances, we expect the stacked normalized residuals {S_t^{-1/2}(y_t − y_{t|t−1})}_{t=1}^N ∈ R^{6N} to be normally distributed with zero mean and standard deviation 1. The histogram of the residuals for the estimation data set and a fitted Gaussian distribution can be found in Figure 5a. The residuals resemble a N(0, 1) distribution, except for the large peak around zero and (not visible in the plot) a small number of outliers outside of the plotting interval. This small number of outliers is due to a few measurement outliers in the accelerometer data: large accelerations can for instance be measured when the setup is accidentally bumped into something, violating our assumption that the acceleration of the sensor is approximately zero. We believe that the peak around zero is due to the fact that the algorithm compensates for the presence of the large residuals.

[Figure 5 caption: Normalized residuals from the EKF after calibration for the estimation data set (left) and for a validation data set (right) for the experiments performed with the Xsens IMU. A Gaussian distribution (red) is fitted to the data.]
To analyze if the calibration is also valid for a different (validation) data set with the same experimental setup, the calibrated parameters have been used on a second data set. Figures of the ellipsoid of magnetometer data and the sphere of calibrated magnetometer data are not included since they look very similar to Figs. 1 and 4. The residuals after calibration of this validation data set can be found in Figure 5b. The fact that these residuals look very similar to the ones for the original data suggests that the calibration parameters obtained are also valid for this validation data set.
The Trivisio IMU outputs the magnetometer data in microtesla. Since our algorithm scales the calibrated measurements to a unit norm, the obtained calibration matrix D and offset vector o from Algorithm 1 are in this case of much larger magnitude. The sphere of calibrated data and its norm can be found in blue in Figs. 3 and 4. Note that for plotting purposes, the magnetometer data before calibration is scaled such that its mean lies around 1. The obtained D and o are scaled accordingly to plot the red ellipsoid in Figure 3. The normalized residuals S_t^{-1/2}(y_t − y_{t|t−1}) of the EKF using both the estimation and a validation data set are depicted in Figure 6. For this data set, the accelerometer data does not contain any outliers, and the residuals resemble a N(0, 1) distribution fairly well.
From these results we can conclude that Algorithm 1 gives good magnetometer calibration results for experimental data from two different commercially available IMUs. A good fit of the ellipsoid of data to a sphere is obtained and the algorithm seems to give good estimates analyzed in terms of its normalized residuals. Since magnetometer calibration is generally done to obtain improved heading estimates, it is important to also interpret the quality of the calibration in terms of the resulting heading estimates. In Section 7.3 this will be done based on experimental results. The heading performance will also be analyzed based on simulations in Section 8.
7.3 Heading estimation
An important goal of magnetometer calibration is to facilitate good heading estimates. To check the quality of the heading estimates after calibration, the block in which the Xsens IMU was placed (shown in Figure 2) is rotated around all axes. This block has right angles and can therefore be placed in 24 orientations that differ from each other by 90°. The experiment was conducted in Enschede, the Netherlands. The dip angle δ at this location is approximately 67° (see also (7) and (8b)). The calibrated magnetometer data from the experiment is shown in Figure 7 and consists of the following stationary time periods:

z-axis up: During the period 0−105 s, the magnetometer is flat with its z-axis pointing upwards. Hence, the z-axis (red) of the magnetometer measures the vertical component of the local magnetic field, m^n_z. During this period, the sensor is rotated by 90° around the z-axis into 4 different orientations and subsequently back to its initial orientation. This results in the 5 steps for the measurements in the x- (blue) and y-axis (green) of the magnetometer.

z-axis down: A similar rotation sequence is performed with the block upside down at 110−195 s, resulting in a similar pattern for the measurements in the x- and y-axis of the magnetometer. During this time period, the z-axis of the magnetometer measures −m^n_z instead.
x-axis up: The procedure is repeated with the x-axis of the sensor pointing upwards during the period 200−255 s, rotating around the x-axis into 4 different orientations and back to the initial position. This results in the 5 steps for the measurements in the y- and z-axis of the magnetometer.

x-axis down: A similar rotation sequence is performed with the x-axis pointing downwards at 265−325 s.

y-axis down: Placing the sensor with the y-axis downwards and rotating around the y-axis results in the data at 350−430 s. The rotation results in the 5 steps for the measurements in the x- and z-axis of the magnetometer.

y-axis up: A similar rotation sequence is performed with the y-axis pointing upwards at 460−520 s.
Since the experimental setup was not placed exactly vertical, it is not possible to compare the absolute orientations. However, it is possible to compare the differences in orientation, which are known to be 90° due to the properties of the block in which the sensor was placed. To exclude the effect of measurement noise, for each of the stationary periods in Figure 7, 500 samples of magnetometer and accelerometer data are selected. Their mean values are used to estimate the orientation of the sensor. Here, the accelerometer data is used to estimate the inclination, and the heading is estimated from the horizontal component of the magnetometer data. This procedure makes use of the fact that the orientation of the sensor can be determined from two linearly independent vectors that are known both in the navigation frame (the gravity and the direction of the magnetic north) and in the body frame (the mean accelerometer and magnetometer data). It is referred to as the TRIAD algorithm [36]. Table 1 reports the deviation from 90° between two subsequent rotations. Note that the metal object causing the magnetic disturbance, shown in Figure 2, physically prevents the setup from being properly placed in all orientations around the y-axis. Rotation around the y-axis with the y-axis pointing upwards has therefore not been included in Table 1.
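The following sketch illustrates a TRIAD-style orientation estimate of the kind described above, using gravity and the local magnetic field as the two reference vectors. The numerical values (dip angle, true orientation) are made up for the example, and the heading is extracted assuming a ZYX Euler-angle convention.

```python
# TRIAD-style orientation estimate from two vector pairs, followed by heading extraction.
import numpy as np

def triad(v1, v2, w1, w2):
    """Rotation R with w ~= R v, from vector pairs (v1, v2) in one frame and (w1, w2) in the other."""
    def make_frame(a, b):
        t1 = a / np.linalg.norm(a)
        t2 = np.cross(a, b); t2 /= np.linalg.norm(t2)
        t3 = np.cross(t1, t2)
        return np.column_stack([t1, t2, t3])
    return make_frame(w1, w2) @ make_frame(v1, v2).T

# Navigation-frame references: gravity (pointing down) and a local magnetic field with a 67 degree dip.
g_n = np.array([0.0, 0.0, -9.81])
m_n = np.array([np.cos(np.deg2rad(67.0)), 0.0, -np.sin(np.deg2rad(67.0))])

# Hypothetical body-frame observations for a true 90 degree yaw rotation.
true_R_bn = np.array([[0.0, 1.0, 0.0], [-1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
acc_b = -true_R_bn @ g_n          # a stationary accelerometer measures -g in the body frame
mag_b = true_R_bn @ m_n

R_bn_est = triad(-g_n, m_n, acc_b, mag_b)
R_nb_est = R_bn_est.T
heading = np.degrees(np.arctan2(R_nb_est[1, 0], R_nb_est[0, 0]))   # yaw for a ZYX convention
print(round(heading, 1))          # ~90 degrees for this example
```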
Our experiment investigates both the heading errors themselves and the improvement of the heading estimates over the ones obtained after the initial calibration, i.e. Step 1 in Algorithm 1. Table 1 includes both the heading errors using the initial parameter estimates D_0 (28a) and o_0 (24c) and the heading errors using the ML parameter estimates D and o (29) obtained using Algorithm 1. As can be seen, the deviation from 90° is small, indicating that good heading estimates are obtained after calibration. Also, the heading estimates using the initial parameter estimates are already fairly good. The mean error is reduced from 1.28° for the initial estimate to 0.76° for the ML estimate. The maximum error is reduced from 4.36° for the initial estimate to 2.48° for the ML estimate. Note that the results of the ML estimate from Algorithm 1 are slightly better than the results previously reported in [1]. This can be attributed to the fact that we now use orientation error states instead of quaternion states in the EKF (see Section 5.2). This results in slightly better estimates, but also in smoother convergence of the optimization problem. The quality of the heading estimates is studied further in Section 8 based on a simulation study.
8 Simulated heading accuracy
Magnetometer calibration is typically performed to improve the heading estimates. It is, however, difficult to check the heading accuracy experimentally. In Section 7.3, for instance, we are limited to doing the heading validation on a different data set and we have a limited number of available data points. To get more insight into the orientation accuracy that is gained by executing all of Algorithm 1, compared to just its initialization phase (Step 1 in the algorithm), we engage in a simulation study. In this study we focus on the root mean square (RMS) heading error for different simulated sensor qualities (in terms of the noise covariances and the gyroscope bias) and different magnetic field disturbances (in terms of different values for the calibration matrix D and offset vector o).
In our simulation study, we assume that the local magnetic field is equal to that in Linköping, Sweden. The calibration matrix D, the offset vector o and the sensor properties in terms of the gyroscope bias and noise covariances are all sampled from uniform distributions. The parameters of these distributions are chosen as physically reasonable values based on the authors' experience. The noise covariance matrices Σ_ω, Σ_a and Σ_m are assumed to be diagonal with three different values on the diagonal. The calibration matrix D is assumed to be composed of three parts, D_diag, D_rot and D_skew, where D_diag is a diagonal matrix with elements D_11, D_22, D_33 and D_rot is a rotation matrix around the angles ψ, θ, φ. The matrix D_skew models the non-orthogonality of the magnetometer axes, parametrized by the angles ζ, η, ρ that represent the different non-orthogonality angles. The exact simulation conditions are summarized in Table 2.
The simulated data consists of 100 samples of stationary data and subsequently 300 samples of rotation around all three axes. It is assumed that the rotation is exactly around the origin of the accelerometer triad, resulting in zero acceleration during the rotation. The first 100 samples are used to obtain an initial estimate of the gyroscope bias δ_{ω,0} by computing the mean of the stationary gyroscope samples. The covariance matrices Σ_{ω,0}, Σ_{a,0} and Σ_{m,0} are initialized based on the covariance of these first 100 samples. The initial estimate then consists of these initial estimates δ_{ω,0}, Σ_{ω,0}, Σ_{a,0}, Σ_{m,0}, the initial calibration matrix D_0 (28a), the initial offset vector o_0 (24c) and the initial estimate of the local magnetic field m^n_0 (28b). To study the heading accuracy, the EKF as described in Section 5.2 is run with both the initial parameter values θ_0 and their ML values θ_ML. The orientation errors Δq_t, encoded as unit quaternions, are computed as

Δq_t = q^nb_t ⊙ (q^nb_ref,t)^c,

where ⊙ denotes quaternion multiplication and the superscript c denotes the quaternion conjugate (see e.g. [27]). Here, q^nb_t is the orientation estimated by the EKF and q^nb_ref,t is the ground truth orientation. Computing the orientation errors in this way is equivalent to subtracting Euler angles in the case of small angles; however, it avoids subtraction problems due to ambiguities in the Euler angle representation. To interpret the orientation errors Δq_t, they are converted to Euler angles. We focus our analysis on the heading error, i.e. on the third component of the Euler angles.
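A sketch of this orientation-error computation is given below, using unit quaternions in (w, x, y, z) order and extracting the heading (yaw) component of the error; the ZYX Euler-angle convention used for the yaw extraction is an assumption of the sketch.

```python
# Orientation error as a quaternion product with the conjugate reference, then yaw extraction.
import numpy as np

def q_mult(p, q):
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return np.array([pw*qw - px*qx - py*qy - pz*qz,
                     pw*qx + px*qw + py*qz - pz*qy,
                     pw*qy - px*qz + py*qw + pz*qx,
                     pw*qz + px*qy - py*qx + pz*qw])

def q_conj(q):
    return np.array([q[0], -q[1], -q[2], -q[3]])

def yaw_from_quat(q):
    w, x, y, z = q
    return np.degrees(np.arctan2(2*(w*z + x*y), 1 - 2*(y**2 + z**2)))

def heading_error(q_est, q_ref):
    dq = q_mult(q_est, q_conj(q_ref))   # error quaternion
    return yaw_from_quat(dq)

# Example: the estimate differs from the reference by a 2 degree yaw rotation.
q_ref = np.array([1.0, 0.0, 0.0, 0.0])
a = np.deg2rad(2.0) / 2
q_est = np.array([np.cos(a), 0.0, 0.0, np.sin(a)])
print(round(heading_error(q_est, q_ref), 3))   # ~2.0
```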
The RMS of the heading error is plotted for 150 Monte Carlo simulations in Figure 8. As can be seen, the heading root mean square error (RMSE) using the estimate of the calibration parameters from Algorithm 1 is consistently small. The heading RMSE based on the initialization phase in Step 1 of the algorithm, however, has a significantly larger spread. This clearly shows that orientation accuracy can be gained by executing all of Algorithm 1. Note that in all simulations, analysis of the norm of the calibrated magnetometer measurements as done in Figure 4 does not indicate that the ML estimate is to be preferred over the estimate from the initialization phase. Hence, analysis of the norm of the calibrated magnetometer measurements does not seem to be a sufficient analysis to determine the quality of the calibration in the case when the calibration is performed to improve the heading estimates.
9 Conclusions
We have developed a practical algorithm to calibrate a magnetometer using inertial sensors. It calibrates the magnetometer for the presence of magnetic disturbances, for magnetometer sensor errors and for misalignment between the inertial and magnetometer sensor axes. The problem is formulated as an ML problem. The algorithm is shown to perform well on real data collected with two different commercially available inertial measurement units.
In future work the approach can be extended to include GPS measurements. In that case it is not necessary to assume that the acceleration is zero. The algorithm can hence be applied to a wider range of problems, like for instance the flight test example discussed in [2]. The computational cost of the algorithm would, however, increase, since to facilitate the inclusion of the GPS measurements, the state vector in the EKF needs to be extended.
Another interesting direction for future work would be to investigate ways of reducing the computational cost of the algorithm. The computational cost of the initialization steps is very small but actually solving the ML problem in Step 2 of Algorithm 1 is computationally expensive. The algorithm both needs quite a large number of iterations and each iteration is fairly expensive due to the computation of the numerical gradients. Interesting lines of future work would either explore different optimization methods or different ways to obtain gradient estimates.
Finally, it would be interesting to extend the work to online estimation of calibration parameters. This would allow for a slowly time-varying magnetic field and online processing of the data.
"year": 2016,
"sha1": "893bd4c1263a47c9730205533be133f7056e8022",
"oa_license": null,
"oa_url": "http://liu.diva-portal.org/smash/get/diva2:719169/FULLTEXT01",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "18cc31b7c84242fe729a38b670ba32f925e57290",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Computer Science",
"Mathematics"
]
} |
Orthogonal neural representations support perceptual judgements of natural stimuli
In natural behavior, observers must separate relevant information from a barrage of irrelevant information. Many studies have investigated the neural underpinnings of this ability using artificial stimuli presented on simple backgrounds. Natural viewing, however, carries a set of challenges that are inaccessible using artificial stimuli, including neural responses to background objects that are task-irrelevant. An emerging body of evidence suggests that the visual abilities of humans and animals can be modeled through the linear decoding of task-relevant information from visual cortex. This idea suggests the hypothesis that irrelevant features of a natural scene should impair performance on a visual task only if their neural representations intrude on the linear readout of the task relevant feature, as would occur if the representations of task-relevant and irrelevant features are not orthogonal in the underlying neural population. We tested this hypothesis using human psychophysics and monkey neurophysiology, in response to parametrically variable naturalistic stimuli. We demonstrate that 1) the neural representation of one feature (the position of a central object) in visual area V4 is orthogonal to those of several background features, 2) the ability of human observers to precisely judge object position was largely unaffected by task-irrelevant variation in those background features, and 3) many features of the object and the background are orthogonally represented by V4 neural responses. Our observations are consistent with the hypothesis that orthogonal neural representations can support stable perception of objects and features despite the tremendous richness of natural visual scenes.
Significance Statement
We studied how the structure of the mid-level neural representation of multiple visual features supports robust perceptual decisions. We combined array recording with parametrically controlled naturalistic images to demonstrate that the representation of a central object's position in monkey visual area V4 is orthogonal to that of several background features. In addition, we used human psychophysics with the same stimulus set to show that observers' ability to judge a central object's position is largely unaffected by variation in the same background features. This result supports the hypothesis that orthogonal neural representations can enable stable and robust perception in naturalistic visual environments and advances our understanding of how visual processing operates in the real world.
Introduction
A major function of the visual system is to infer properties of currently relevant stimuli, without interference from the tremendous amount of task-irrelevant information that bombards our retinas. Many laboratory studies of the neural basis of this ability use, for good reasons, simple stimuli (von der Heydt et al., 1984; Peterhans and Heydt, 1991; Gallant et al., 1993; Leopold and Logothetis, 1996; Pasupathy and Connor, 2002; Rust and Movshon, 2005; Martinez-Garcia et al., 2019; Peters and Kriegeskorte, 2021; Snow and Culham, 2021). An advantage of this approach is experimental control: one can parametrically vary stimuli and completely specify the input to the visual system. A downside of using such stimuli, however, is that their very simplicity prevents them from fully illuminating the neural algorithms by which the brain sorts through the large quantity of visual information that is characteristic of natural viewing (see simulations in (Ruff et al., 2018a)).
In contrast to simple artificial stimuli, natural images can vary in many features, and these features are jointly encoded by the responses of populations of neurons in visual cortex (Cadieu et al., 2007; Oleskiw et al., 2018; Kim et al., 2019; Yamane et al., 2020; Srinath et al., 2021; Hatanaka et al., 2022). To investigate the relationship between the representation of multiple features within such populations, we consider a high-dimensional neural-response space in which each dimension represents the firing rate of one neuron (Kohn et al., 2020; Vyas et al., 2020). The response of the entire population at any given moment (e.g. in response to one visual scene) is a point in this space. Systematically varying one scene feature, such as the position of a banana, traces out a continuous trajectory in the response space, which can typically be approximated by a line (Misaki et al., 2010; Okazawa et al., 2021a). If just one scene feature varies, then that feature can be read out by projecting the response onto this line (aka linear decoding). If multiple features can vary, they can each be linearly decoded without interference from the others if their variation traces out orthogonal lines. But robust readout will be difficult, if not impossible, if multiple features trace out similar lines. Intermediate cases are also possible, in which robust readout is possible but requires processing more complex than linear projection (e.g., quadratic classification (Burge, 2020)).
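The geometry described in this paragraph can be illustrated with a small synthetic simulation: two features are encoded along two axes in a population-response space, and a least-squares linear decoder for one feature is evaluated when the other feature's axis is either orthogonal or nearly parallel to it. All numbers below are synthetic and purely illustrative; when the axes are nearly parallel, linear readout of the first feature becomes much noisier.

```python
# Toy illustration: linear decoding of one feature when a second feature's axis is orthogonal vs. nearly parallel.
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_trials = 50, 2000

# Random encoding axis for feature A (e.g., object position) and two candidate axes for feature B.
axis_a = rng.standard_normal(n_neurons); axis_a /= np.linalg.norm(axis_a)
axis_b_orth = rng.standard_normal(n_neurons)
axis_b_orth -= (axis_b_orth @ axis_a) * axis_a          # orthogonalize with respect to axis_a
axis_b_orth /= np.linalg.norm(axis_b_orth)
axis_b_parallel = 0.95 * axis_a + 0.05 * axis_b_orth    # nearly collinear with axis_a

for axis_b, label in [(axis_b_orth, "orthogonal"), (axis_b_parallel, "nearly parallel")]:
    feat_a = rng.uniform(-1, 1, n_trials)
    feat_b = rng.uniform(-1, 1, n_trials)
    responses = np.outer(feat_a, axis_a) + np.outer(feat_b, axis_b) \
                + 0.1 * rng.standard_normal((n_trials, n_neurons))
    # Linear decoder for feature A trained by least squares, ignoring feature B.
    w, *_ = np.linalg.lstsq(responses, feat_a, rcond=None)
    decoded = responses @ w
    print(label, "decoding accuracy (r):", round(np.corrcoef(decoded, feat_a)[0, 1], 3))
```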
Following Hong et al., 2016, we reasoned that variation in task-irrelevant features of a natural scene should not impair performance on a visual task if two conditions are met: visual information is read out of a neural population in a way that approximates a linear decoder, and the representations of relevant and irrelevant features are orthogonal in the relevant neural populations.
Here, we leverage the power of computer graphics to take parametric control of stimulus features in naturalistic stimuli, enabling us to vary many naturalistic stimulus dimensions and test the hypothesis that observers' ability to make fine perceptual distinctions in a threshold-level judgment task will not be perturbed by task-irrelevant variations in stimuli and backgrounds if the neural representations of task-relevant and irrelevant features are orthogonal.
Using a combination of human psychophysics and monkey neurophysiology, we demonstrate that 1) the population representation of object position in V4 is orthogonal to those of several background features, 2) the ability of human subjects to make precise perceptual judgments about object position was largely unaffected by task-irrelevant variation in those background features, and 3) many features of the object and the background (position, color, luminance, rotation, and depth) are independently decodable from V4 population responses. Together, these observations support the idea that orthogonal neuronal representations enable stable perception of objects and features despite the tremendous irrelevant variation inherent in natural scenes.
Data Availability Statement
The data and code that generate the figures in this study have been deposited in a public GitHub repository, https://github.com/ramanujansrinath/UntanglingBananas. MATLAB code for creating and displaying the images for the human psychophysical experiments, as well as for analyzing the raw data from these experiments, can be found at https://github.com/AmyMNi/NaturalImageThresholds. Requests for further information should be directed to and will be fulfilled by the corresponding author, David H. Brainard (brainard@psych.upenn.edu), in consultation with the other authors.
Monkey electrophysiology
Two adult male rhesus monkeys (Macaca mulatta, 10 and 11 kg) were implanted with titanium head posts before behavioral training.Subsequently, multielectrode arrays were implanted in cortical area V4 identified by visualizing the sulci and using stereotactic coordinates.
All animal procedures were approved by the Institutional Animal Care and Use Committees of the University of Pittsburgh and Carnegie Mellon University.
Human Psychophysics
This study was preregistered at ClinicalTrials.gov (NCT number NCT05004649, https://clinicaltrials.gov/ct2/show/NCT05004649). The experimental protocols were approved by the University of Pennsylvania Institutional Review Board. Participants were invited to volunteer to participate in this study. Participants provided informed consent and filled out a lab participant survey. We also screened for visual acuity using a Snellen eye chart and for color deficiencies using the Ishihara plate test. Participants were excluded prior to the experiment if their best-corrected visual acuity was worse than 20/40 in either eye or if they made any errors on the Ishihara plate test.
Participants were excluded after the conclusion of their first session if their horizontal position discrimination threshold in the no variation condition (see description of conditions below) was higher than 0.6 degrees of visual angle, and participants excluded at this point did not participate in any further experimental sessions.
Image Generation (for both human psychophysics and monkey electrophysiology)
All the stimuli were variants of the same natural visual scene: a square image with a central object (a banana) presented on an approximately circular array of overlapping background objects (made up of overlapping branches and leaves). The central object and/or the background objects changed in horizontal position, rotation, and/or depth across different stimuli. In the larger set of stimuli (detailed below) the luminance and color of the central and background objects also changed. The central object and background objects are presented in the context of other objects (a rock ledge, a skyline, and three moss-covered stumps) that remain unchanged across all stimulus conditions. This natural visual scene was created using Blender, an open-source 3D creation suite (https://www.blender.org, Version 2.81a). The object and background parameters were varied using ISET3d, an open-source software package (https://github.com/ISET/iset3d) that works with a modified version of PBRT (https://github.com/scienstanford/pbrt-v3-spectral; unmodified version at https://github.com/mmp/pbrt-v3).
To convert a hyperspectral image created using ISET3d to an RGB image for presentation on the calibrated monitor, the hyperspectral image data were first used to compute LMS cone excitations.The LMS cone excitations were converted to a metameric rendered image in the RGB color space of the monitor, based on the monitor calibration data.A scale factor was applied to this image so that its maximum RGB value was 1 and the image was then gamma corrected, again using monitor calibration data.This process was completed separately for the two different monitors used, one for the psychophysics and one for the neurophysiology.
Array implantation, task parameters
Both animals were implanted with titanium headposts prior to behavioral training.After training, microelectrode arrays were implanted in area V4 (96 recording sites; Blackrock Microsystems).Array placement was guided by stereotactic coordinates and visual inspection of the sulci and gyri.The monkeys were trained to perform a fixation task along with other behavioral tasks that were not relevant to this study.The stimulus images used in this study were not displayed outside of the context of this task.The monkeys fixated a central spot for a pre-stimulus blank period of 150-400ms followed by stimulus presentations (200-250ms) interleaved with blank intervals (200-250ms).The stimuli were presented one at a time at a peripheral location that overlapped the receptive fields of the recorded neurons.In each trial, 6-8 stimuli were presented, after which the monkey received a liquid reward for having maintained fixation on the central spot until the end of the stimulus presentations.If the monkey broke fixation before the end of the stimulus presentations, the trial was terminated.The intertrial interval was at least 500ms.The stimuli were presented pseudo-randomly.
The visual stimuli were presented on a calibrated (X-Rite calibrator) 24" ViewPixx LCD monitor (1920 x 1080 pixels; 120 Hz refresh rate) placed 54 cm (monkey 1) or 56 cm (monkey 2) from the monkey, using custom software written in MATLAB (Psychophysics Toolbox;Brainard, 1997;Pelli, 1997).Eye position was monitored using an infrared eye tracker (EyeLink 1000; SR Research).Eye position (1000 samples/s), neuronal activity (30,000 samples/s) and the signal from a photodiode was recorded to align neuronal responses to stimulus presentation times (30,000 samples/s) using Blackrock CerePlex hardware.
Neural responses
The filtered electrical activity (bandpass 250-5000 Hz) was thresholded at 2-3% RMS value for each recording site and the threshold crossing timestamps were saved (along with the raw electrical signal, waveforms at each crossing, and other signals). Spikes were not sorted for these experiments, and 'unit' refers to the multiunit activity at each recording electrode. The stimulus-evoked firing rate of each V4 unit was calculated based on the spike count responses between 50-250 ms after stimulus onset, to account for V4 response latency. The baseline firing rates were calculated based on the spike count responses in the 100 ms time period prior to the onset of the stimulus.
Neuron exclusion
For each experimental session, for each unit, the average stimulus-evoked responses across all stimuli were compared with its average baseline activity.The unit was included in further analyses if the average evoked activity was at least 1.1x the baseline activity.This lenient inclusion criterion was chosen because, for the chosen experimental design and stimuli, dimensionality reduced decoding analyses are resilient to noise and benefit from information distributed across many neurons.Each recording experiment yielded data from 90-95 units (mean 94.1).
Receptive Field Mapping
A set of 2D closed contours, 3D solid objects, and black-and-white Gabor images were flashed with the same timing as described above in the lower left quadrant of the screen.The positions and sizes were chosen manually across several experiments to home in on the receptive fields of each V4 recording site.Typically, a grid of 5x5 positions and 2 image sizes were chosen such that the images overlapped partially.The spikes were counted within a 50-250ms window after stimulus onset and a RF heat map was constructed for each site.The center of mass of this heat map was chosen as the center of the RF and an ellipse was fit to circumscribe the central two standard deviations.This resulted in centers and extents of the RF of each recording site.The naturalistic image sets for the experiments described below were scaled such that the circular aperture within which the background objects were contained fully overlapped the population RF.This necessitated that the image boundary exceeded the RF of some neurons but the image information outside of the circular aperture was held constant across images.
Experiment 1: Effect of task-irrelevant stimulus changes on the ability of V4 neurons to encode a feature of interest about the central object

The first goal of the electrophysiology experiments was to determine whether the information about the chosen parameter of the central object (banana position) interferes with information about distracting parameters (background object rotation and depth). To do this, we systematically varied the horizontal position of the object and the background parameters in an uncorrelated fashion. The values and ranges of the object and background parameters were customized for each monkey such that there was a differential response to each condition on average across all other conditions, i.e., a 3-way ANOVA showed a significant main effect of object position and of each of the two background parameters (p<0.01). Five values of object position, background depth, and background rotation were chosen and permuted, yielding 125 image stimuli. Further details of the stimuli can be found in Figure 1 and Figure 1-1, and the associated code and data repositories.
The data were collected in 26 recording experiments (17 sessions across 11 days from monkey 1, 9 sessions across 8 days from monkey 2). Recording experiments with fewer than 3 repetitions per stimulus image were excluded. Each stimulus was therefore presented between 3 and 16 times, yielding between 381 and 2084 presentations (mean 831).
Experiment 2: Relationships between multiple feature dimensions

The second goal of the monkey electrophysiology was to determine whether different visual features are encoded orthogonally in neuronal population responses. We therefore measured responses to stimuli that varied many features of the central object (banana), including its horizontal position, depth, orientation, and two surface parameters (color and luminance). We also varied the same five features of the background objects (branches and leaves) in an independent way. We used two values of each of the ten features, which we chose to make the ten features equally decodable by the population of V4 neurons (see Figure 5). We therefore measured responses to five repetitions of each of the 2^10 = 1024 stimuli. Each stimulus image was repeated between two and three times. Because of the large dataset required for this experiment, the data analyzed in Figure 5 were collected from one session from monkey 1.
Apparatus
A calibrated LCD color monitor (27-inch NEC MultiSync PA271Q QHD Color Critical Desktop W-LED Monitor with SpectraView Engine; NEC Display Solutions) displayed the stimuli in an otherwise dark room, after participants dark-adapted in the experimental room for a minimum of 5 minutes.The monitor was driven at a pixel resolution of 1920 x 1080, with a refresh rate of 60 Hz and with 8-bit resolution for each RGB channel.The host computer for this monitor was an Apple Macintosh with an Intel Core i7 processor.The head position of each participant was stabilized using a chin cup (Headspot, UHCOTech, Houston, TX).The participant's eyes were centered horizontally and vertically with respect to the monitor, which was 75 cm from the participant's eyes.The participant indicated their responses using a Logitech F310 gamepad controller.
Stimulus parameters
The entire image subtended 8 degrees in both width and height, the central object subtended ~4 degrees in the longest dimension, and the circular array of background objects (branches and leaves) subtended ~5 degrees of visual angle.The images were created using ISET3d at a resolution of 1920 x 1920 with 100 samples per pixel, at 31 equally spaced wavelengths between 400 nm and 700 nm.
Psychophysical task
The psychophysical task was a two-interval forced choice task with one stimulus per interval. Each stimulus interval had a duration of 250 ms. Stimuli were presented at the center of the monitor. Between the two stimulus intervals, two masks were shown in succession at the center of the monitor (Figure 5). Each mask was presented for a duration of 400 ms, for a total interstimulus interval of 800 ms (see Session organization below for mask details). Display times are approximate as the actual display times were quantized by the hardware to integer multiples of the 16.67 ms frame rate.
The task of the participant was to determine whether, compared to the central object presented in the first interval, the central object presented in the second interval was to the left or to the right. Following the two intervals, the participant had an unlimited amount of time to press one of two response buttons on a gamepad to indicate their choice. Feedback was provided via auditory tones. Trials were separated by an intertrial interval of approximately 1 second.
Session organization
The first experimental session for each participant included participant enrollment procedures (informed consent, vision tests, etc.; see Participants above for details) as well as familiarization trials (see next paragraph) and lasted one and a half hours. The additional experimental sessions lasted approximately one hour each.
For the first session only, the participant began with 30 familiarization trials. The familiarization trials comprised, in order: 10 randomly selected easy trials (the largest position-change comparisons), 10 randomly selected medium-difficulty trials (the 4th and 5th largest position-change comparisons), and 10 randomly selected trials from all possible position-change comparisons. The familiarization trials did not include any task-irrelevant variability, and data from these trials were not saved.
In each session, there were two reference positions for the banana, and for each reference position there were 11 comparison positions: five comparison positions in the positive horizontal direction, five comparison positions in the negative horizontal direction, and a comparison position of 0 indicating no change. On each trial, one interval contained one of the two reference stimuli and the other interval contained one of that reference stimulus's comparison stimuli. The order in which these two stimuli were presented within a trial was selected randomly per trial.
A block of trials consisted of presentation of the 11 comparison positions for each of the two reference positions, for a total of 22 trials per block. The trials within a block were run in randomized order. Each block was completed before the next block began. Each block was repeated 7 times in a run of trials, for a total of 154 trials per run.
Within each run of 154 trials, a single background variation condition was studied. There were three such conditions, as described in more detail below: "no variation", "rotation only", and "rotation and depth". Two runs for each of the three conditions were completed in each experimental session, and except as noted in the results, each subject completed 6 sessions. The six runs were conducted in random order within each session, and each run was separated by a break that lasted at least one minute and during which the participant was encouraged to stand or stretch as needed. After a minimum of one minute, the next run was initiated when the participant was ready.
Additionally, each session began with four practice trials (including the first experimental session, where these practice trials were preceded by the familiarization trials as described). Each run after the first also started with one practice trial. The practice trials were all easy trials as described above and did not include any task-irrelevant variability. The data from the practice trials were saved. The maximum variation in background features was matched to the maximum variation in the neurophysiology experiments but sampled more finely, as described for each of the variation blocks below.
For the "no variation" condition, there were not any changes to the background objects (the branches and leaves).This run determines the participant's threshold for discriminating the horizontal position of the central object without any task-irrelevant stimulus variation.
The "rotation only" run introduced task-irrelevant variability single task-irrelevant feature: rotation of the background objects.For each trial, a single rotation amount was drawn randomly from a pool of 51 rotations, and the background objects (leaves and sticks) in the stimulus were all rotated by that amount around their own centers.The rotation was drawn separately (randomly with replacement) for each of the two stimuli presented on a trial (the reference position stimulus and the comparison position stimulus).Thus subjects had to judge the position of the central object across a change in the background, so that any effect of background variation on the positional representation of the central object would be expected to elevate threshold.The pool of 51 rotations comprised: a rotation of zero (no change to the background objects), 25 equally spaced rotations in the clockwise direction in 2-degree intervals, and 25 equally spaced rotation amounts in the counterclockwise direction in 2-degree intervals.
"Rotation and depth" runs had variation in two task-irrelevant features: rotation and depth of the background objects.For this run, there was a pool of 51 rotations, but along with the rotation of the background objects, these objects also varied in depth.There were 51 possible depth amounts (one depth amount of zero, 25 equally spaced depth amounts in the positive depth direction, and 25 equally spaced depth amounts in the negative direction; depth amounts ranged from -500 mm to 500 mm in the rendering scene space).One of the of images was a rotation of zero and a depth amount of zero.For the remaining 50 images in the pool, each of the remaining 50 rotation amounts was randomly assigned (without replacement) to one of the remaining 50 depth amounts.The same depth shift was applied to each of the background objects.From this pool of 51 images, a single image was randomly drawn (with replacement) for each of the two stimuli presented in the trial.
Finally, as noted above (see Psychophysical task), two masks were shown per trial during the interstimulus interval. All masks across all background variation conditions were created from the same distribution of stimuli (stimuli with "no variation", thus containing no task-irrelevant noise). To create each of the two masks, first the central object positions in the first and second intervals of the trial were determined. The two stimuli that matched the central object positions in the first and second intervals were then used to create the trial masks. For each of these two stimuli, the average intensity was calculated in each RGB channel per 16 x 16 block of the stimulus. Next, each 16 x 16 block of a mask was randomly drawn from the corresponding block of one of these two block-averaged stimuli. Thus, the two masks shown per trial were each a random mixture of 16 x 16 blocks from stimuli with the two central object positions for that trial.
Monkey electrophysiology
Cross-validated Parameter Decoding (Figure 2)

First, the response matrix (multiunit spike rates for each site for each image stimulus presentation) was reduced to 10 dimensions of activity. This ensured sufficient dimensionality for the decoding of object and background parameters and explained between 87.8% and 94.8% (mean 91.2%) of the variance across stimulus responses. (Parameter decoding without dimensionality reduction produced qualitatively similar results.) Then, for each background condition (each unique combination of background rotation and depth; "specific decoding"), the object position in each presentation/trial was decoded from neural responses by learning regression weights from all other trials (leave-one-out cross-validation). The same procedure was repeated for "general decoding", where the background parameters were ignored (Figure 2b).
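A simplified sketch of this decoding procedure (dimensionality reduction to 10 components followed by leave-one-out linear regression, with accuracy quantified as the correlation between decoded and actual values) is given below, using synthetic responses in place of the recorded V4 data.

```python
# Cross-validated decoding sketch: PCA to 10 dimensions, then leave-one-out linear regression.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut

rng = np.random.default_rng(0)
n_trials, n_units = 400, 94
object_position = rng.choice(np.linspace(-1, 1, 5), size=n_trials)       # 5 possible positions
responses = np.outer(object_position, rng.standard_normal(n_units)) \
            + rng.standard_normal((n_trials, n_units))                    # toy stand-in for V4 responses

X = PCA(n_components=10).fit_transform(responses)   # dimensionality reduction

decoded = np.zeros(n_trials)
for train_idx, test_idx in LeaveOneOut().split(X):
    model = LinearRegression().fit(X[train_idx], object_position[train_idx])
    decoded[test_idx] = model.predict(X[test_idx])

accuracy = np.corrcoef(decoded, object_position)[0, 1]   # correlation between decoded and actual values
print(round(accuracy, 3))
```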
Decoding accuracy was defined as the correlation between the actual values and the decoded values. Perfect decoding would result in an accuracy of 1 and chance decoding in an accuracy of 0. We did not encounter decoding accuracies below 0. We also calculated other decoding performance measures, like mean squared error, cosine distance, etc. While other measures provide more sensitivity to the specific kinds of decoding error, their estimate of aggregate performance was qualitatively similar to correlation-based measures. Error in decoding was defined as the difference between the predicted object position and the actual position (Figure 2d).
The same procedure for specific and general decoding was repeated for each of the two background conditions as well (Figure 2-1).
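The decoding procedure just described can be sketched as follows (Python with scikit-learn); this is our illustrative reconstruction, not the original analysis code. It assumes a trials x units spike-rate matrix and a vector of banana positions; "specific" decoding applies the function to the trials of one background configuration at a time, while "general" decoding applies it once to all trials.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut

def decode_position(rates, positions, n_dims=10):
    """Leave-one-out linear decoding of object position from population responses."""
    X = PCA(n_components=n_dims).fit_transform(rates)  # reduce to 10 dimensions
    predictions = np.empty(len(positions), dtype=float)
    for train, test in LeaveOneOut().split(X):
        model = LinearRegression().fit(X[train], positions[train])
        predictions[test] = model.predict(X[test])
    accuracy = np.corrcoef(predictions, positions)[0, 1]  # decoding accuracy
    errors = predictions - positions                      # trial-wise decoding error
    return accuracy, errors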
Angle calculation (Figure 3)
To calculate the angle between the specific decoders, an n-dimensional line was fit to the dimensionality-reduced responses and the unit vector along it was found. The angle between each specific decoder and the decoder for the central condition was calculated as the arccosine of the dot product of the two unit vectors.
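One way to carry out this calculation is sketched below (our notation, not the original code); here the axis for each background condition is taken from a regression of the dimensionality-reduced responses on object position, and the absolute value folds angles into the range 0 to 90 degrees, both of which are assumptions on our part.

import numpy as np

def position_axis(X, positions):
    """Unit vector along the line that best explains responses as a function of position."""
    Xc = X - X.mean(axis=0)
    # slope of each response dimension with respect to position defines the axis
    slopes = np.array([np.polyfit(positions, Xc[:, k], 1)[0] for k in range(X.shape[1])])
    return slopes / np.linalg.norm(slopes)

def decoder_angle_degrees(u, v):
    """Angle between two decoder axes: arccosine of the dot product of unit vectors."""
    return np.degrees(np.arccos(np.clip(abs(np.dot(u, v)), 0.0, 1.0)))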
Linear discriminant analysis and comparison with human psychophysics (Figure 4c) To directly compare human psychophysical discrimination accuracy with decoding results, we matched the three blocked conditions - no background variation, rotation only, and rotation and depth variation - by subsampling trials from the 5x5x5 stimulus set from experiment 1. For the three conditions, we either found all pairs of trials, pairs of trials that varied in rotation only (by holding background depth at the central value), or pairs of trials that varied in depth only (by holding background rotation at the central value). Then, for 200 folds, we sampled a maximum of 500 pairs of trials and, depending upon the object position on those trials, assigned a left or right choice. If the positions were identical, we randomly assigned the choice for that pair. We then collated the responses across the pairs of trials and fit a linear discriminant in a leave-one-out fashion to predict the correct choice. The classification prediction accuracy for each of the three blocked conditions was calculated independently.
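A rough Python sketch of this classification step is given below; it is our simplification rather than the original code (in particular, each pair of trials is summarized by the difference of its two population responses, whereas the text only says the responses were collated), and the pair-sampling step is assumed to have been done already.

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def pair_classification_accuracy(X, positions, pair_idx, rng=None):
    """Leave-one-out left/right classification for a set of sampled trial pairs.

    pair_idx: (n_pairs, 2) array of trial indices drawn for one background
    variation condition; the label is the sign of the position difference,
    assigned at random when the two positions are identical.
    """
    rng = np.random.default_rng() if rng is None else rng
    features = X[pair_idx[:, 1]] - X[pair_idx[:, 0]]
    dpos = positions[pair_idx[:, 1]] - positions[pair_idx[:, 0]]
    random_labels = rng.integers(0, 2, size=len(dpos)) * 2 - 1
    labels = np.where(dpos == 0, random_labels, np.sign(dpos))
    correct = 0
    for i in range(len(features)):
        train = np.arange(len(features)) != i
        lda = LinearDiscriminantAnalysis().fit(features[train], labels[train])
        correct += int(lda.predict(features[i:i + 1])[0] == labels[i])
    return correct / len(features)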
Cross-decoding analysis (Figure 5) For experiment 2, even though only two values were chosen for each of the five object and five background parameters, linear regression was chosen instead of classification using discriminant analysis, for comparison with the decoding analyses in the previous experiment. Even though each stimulus image was only repeated 2-3 times, since each parameter could take one of two values, all unique pairs of images would be informative about at least one parameter change. To enable cross-decoding, we altered the cross-validation procedure. For each parameter pair, for each of 100 folds, we randomly split all image presentations evenly into training and testing sets (uneven splits also produced qualitatively similar results). We then trained a linear regression model for one parameter using the training trials and used it to predict the values of the other parameter for the held-out testing trials. The decoding accuracy was calculated as the average correlation across folds between the actual and decoded parameter values. Since each parameter decoder was trained while ignoring all other parameter variations, the diagonals in Figure 5b are akin to the general decoder accuracy for those parameters, and the off-diagonals correspond to how well those general decoders are aligned to the representations of the other parameters. The diagonal correlations were all significantly above 0 (p < 10^-80; t-test across folds) and none of the off-diagonal correlations were, except for the cross-decoding of background and object color.
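The cross-decoding procedure can be sketched as below, again as an illustrative Python reconstruction under the same assumptions as the earlier sketches: train a linear regression on one parameter, evaluate its predictions against every parameter, and average the correlations over random even train/test splits.

import numpy as np
from sklearn.linear_model import LinearRegression

def cross_decoding_matrix(X, params, n_folds=100, rng=None):
    """params: (n_trials, n_params) array of parameter values (two levels each).

    Entry [i, j] is the mean correlation between the values of parameter j and
    the predictions of a decoder trained on parameter i (diagonal = general
    decoding accuracy; off-diagonal = alignment between representations).
    """
    rng = np.random.default_rng() if rng is None else rng
    n_trials, n_params = params.shape
    acc = np.zeros((n_params, n_params))
    for _ in range(n_folds):
        order = rng.permutation(n_trials)
        train, test = order[: n_trials // 2], order[n_trials // 2:]
        for i in range(n_params):
            model = LinearRegression().fit(X[train], params[train, i])
            pred = model.predict(X[test])
            for j in range(n_params):
                acc[i, j] += np.corrcoef(pred, params[test, j])[0, 1]
    return acc / n_folds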
Human psychophysics (Figure 4a-b)
Per session, the participant's threshold for discriminating object position was measured for each background variation condition. First, for each comparison position, the proportion of trials on which the participant responded that the comparison stimulus was located to the right of the reference stimulus was calculated. Next, this proportion (the proportion of trials on which the comparison was chosen as rightwards) was fit with a cumulative normal function using the Palamedes Toolbox (http://www.palamedestoolbox.org). To estimate all four parameters of the psychometric function (threshold, slope, lapse rate, and guess rate), the lapse rate was constrained to be equal to the guess rate and to lie in the range [0, 0.05], and the maximum-likelihood fit was determined. Threshold was calculated as the difference between the stimulus levels at performances (proportion the comparison was chosen as rightwards) equal to 0.7602 and 0.5, as determined by the cumulative normal fit.
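The original fits used the Palamedes Toolbox; the sketch below shows the same kind of constrained maximum-likelihood cumulative-normal fit in Python and is only our approximation of that procedure (the parameterization, starting values, and function names are ours).

import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def fit_threshold(levels, n_rightward, n_total):
    """ML cumulative-normal fit with lapse rate = guess rate constrained to [0, 0.05]."""
    def negloglik(theta):
        mu, sigma, lam = theta
        p = lam + (1 - 2 * lam) * norm.cdf(levels, mu, sigma)
        p = np.clip(p, 1e-9, 1 - 1e-9)
        return -np.sum(n_rightward * np.log(p) + (n_total - n_rightward) * np.log(1 - p))

    start = [np.median(levels), max(np.ptp(levels) / 4, 1e-3), 0.01]
    fit = minimize(negloglik, start,
                   bounds=[(None, None), (1e-6, None), (0.0, 0.05)])
    mu, sigma, lam = fit.x

    def level_at(prop):  # stimulus level at a given proportion-rightward
        return mu + sigma * norm.ppf((prop - lam) / (1 - 2 * lam))

    # threshold: difference between levels at proportions 0.7602 and 0.5
    return level_at(0.7602) - level_at(0.5)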
Central hypothesis: orthogonal representations enable observers to ignore irrelevant visual information
We tested the hypothesis that task-irrelevant information will not affect a perceptual judgment if the representations of the task-relevant and irrelevant features are orthogonal (Hong et al., 2016; Figure 1a-c).
Naturalistic stimuli with parameterizable properties
To test these predictions, we created naturalistic stimuli that had many parameterizable features (Figure 1a-b).We parametrically varied the position of a central object (banana), and the rotation and depth of background objects (leaves and branches) set against a larger fixed contextual scene (rocks, moss-covered stumps, mountains, and skyline).We presented these stimuli within the joint receptive fields of recorded V4 neurons (Figure 1-1) while each of two monkeys fixated on a central point.In sum, we recorded V4 responses in 26 experimental sessions across two animals (85-94 visually responsive multiunits per session).Most of the units in our measured population were modulated by the position of the central object and the variations in background rotation and depth (Figure 1-1d).
V4 neurons robustly encode stimulus position for each stimulus background
We first measured the extent to which V4 neurons encode banana position by linearly decoding that position for each unique background stimulus. Figure 2 shows that for each unique background configuration (rotation and depth), V4 neurons from a single session support good linear decoding of the position of the banana (each of the 25 panels in Figure 2a is for a specific background configuration; each gray point in the panels shows decoded banana position for a single presentation; the mean and standard deviation for each position - open circles and error bars - summarize our ability to predict the position of the banana from the activity of V4 neurons). The numbers at the upper left of each panel provide the correlation between the predicted and actual banana stimulus position (mean performance = 0.698).
V4 representations of stimulus position and background features are approximately orthogonal
Three lines of evidence support the view that the neural representation of the position of the central object (banana) in V4 is orthogonal to the representations of features of the background (depth and rotation of the leaves and branches) in our stimuli.
First, we compared our ability to decode stimulus position in each background (Figure 2a) with our ability to decode the position of the banana across the entire stimulus set (Figure 2b), where the variability in terms of background depth and rotation has the opportunity to intrude.The ability of a single decoder to read out banana position across all background variations is similar to that of the decoders that were optimized for each unique background stimulus (compare Figure 2b with 2a) and is high across all sessions for two monkeys (Figure 2c).We also found that our ability to decode background rotation (Figure 2-1a and 2-1c) and depth (Figure 2-1b and 2-1d) was similar using a single decoder for all stimuli and using a unique decoder for each combination of the other two features, suggesting that the same population of V4 neurons encodes all three axes of image variation well and orthogonally.
Second, we found that on a trial-by-trial basis, errors in the decoded estimates of banana position are not correlated with errors in decoding of background rotation and depth (Figure 2d and Figure 2-1e). This lack of correlation also suggests that the representations of banana position in V4 are independent of the representations of background rotation and depth.
Finally, if the representations of stimulus position and background parameters are orthogonal, then the decoders optimized for each unique stimulus (Figure 2a) should be mutually aligned in neural population space.Put another way, if the representation of stimulus position is robust to variation in the background, this representation should vary along the same direction across backgrounds.To probe this, we calculated the line in neural population space that best explains population responses to each stimulus position for each background.These are depicted in Figure 3a for each unique background condition (plotted for the first two principal components of the population neural responses for visualization purposes only; each point is a trial, each bright point is the average population response to a particular object position, and gray to yellow point colors represent the five object position values).The angle in the population space between the decoders for each unique background and the decoder for the background configuration whose decoder is shown in the center of 3a marked with * (chosen as a reference simply to define the origin of the angular measure) is indicated in degrees.The distribution of angles for this example session (Figure 3b) and across all sessions in both animals (Figure 3c) is skewed toward much smaller angles than expected by chance -gray distributions in Figure 3c depicting a median of ~90° for decoders trained on shuffled (randomizing trial labels within background configuration) responses to each unique background condition (labeled "shuffle"; dark gray) and angles between random vectors in a response space with the same dimensionality as the neural decoding space (labeled "random"; light gray).Together, these recording results support the idea that the neural representation of stimulus position is orthogonal to the representations of background rotation and depth in V4.
Human subjects discriminate stimulus position robustly with respect to background variation
A prediction of our central hypothesis is that when the representations of two stimulus features are orthogonal in the brain, varying one should not impact the ability of subjects to discriminate the other. We tested this hypothesis by measuring the ability of human observers to discriminate the position of the central banana in our stimuli amid variation in the background rotation and depth. By using a threshold paradigm, we test this idea for stimulus step sizes that approach the limits of perception.
Human subjects viewed two images of the banana separated by two different masks (Figure 4a) and reported whether the banana in the second image was positioned to the left or right of the banana in the first presentation. The offset between the two banana positions was varied systematically, to allow determination of discrimination threshold. Across blocks of trials, we varied the amount of within-trial image-to-image variability in the background objects across the two presentations of the banana. When there was background variation, intrusion of that variation on decoding of banana position would manifest itself as an elevated discrimination threshold if the representation of the banana position was not orthogonal to that of the background features (Singh et al., 2022; Reynolds and Singh, 2023). However, consistent with the idea that the orthogonal representations of object position and background features that we found in the neural recordings enable background-independent perception, introducing variability into the background did not significantly impact the position discrimination performance of human subjects (Figure 4b and Figure 4-1). To make a direct comparison between parameter decoding of neural representations and human psychophysical performance, we partitioned neural data into "no variation", "rotation variation only", and "depth and rotation variation" groups and trained linear discriminants to classify left or right position difference between pairs of presentations (Figure 4c). The cross-validated discrimination performance of these classifiers also did not differ across the three background variation groups, as with the human psychophysical performance. Thresholds for the neural classifiers are higher than those for the human subjects (note difference in x-axis scale between Figure 4b and 4c), but this is not surprising given the difference in visual field location and the fact that it seems unlikely that we recorded from all of the neurons that support psychophysical performance.
At least ten object and background features are represented approximately orthogonally in V4
To test the extent to which the orthogonality of representations of different features generalizes to other features of the banana and background in our stimuli, we measured V4 responses to a large image set where the color, luminance, position, rotation, and depth of both the background and object each took one of two values (this yields 2^10 = 1,024 unique images; Figure 5a). If any two parameters are encoded orthogonally in neural population space, then it should be possible to linearly decode those parameters successfully despite the variation in the others. Conversely, a decoder trained on one parameter should not provide information about the others. To test these predictions, we trained linear decoders for each of the object or background features and then tested our ability to decode each of the ten features with each decoder. Each of the ten features was encoded in the V4 population despite the variation in the other features, meaning that the correlation between the actual value of the feature parameter and the value predicted by a cross-validated linear decoder was above chance (diagonals in Figure 5b). In addition, the correlation between a given parameter and the value predicted by a decoder trained on a different parameter (off-diagonals in Figure 5b) was indistinguishable from chance except in one case (the color of the central object and background objects). These observations suggest that a population of V4 neurons can encode a relatively large number of natural scene parameters independently, enabling observers to avoid distraction by task-irrelevant stimulus features. The observation that there is an interaction between central and background object color representations presents an opportunity for future work to test the prediction that task-irrelevant variation in background object color should affect psychophysical discrimination of central object color, an outcome that would be consistent with the results of Singh et al. 2022.
Discussion
Using a combination of multi-neuron electrophysiology in monkeys and human psychophysics, we tested the hypothesis that features of irrelevant objects in naturalistic scenes will not interfere with the perception of target object features when their representations are orthogonal in visual cortex.We demonstrated that 1) in monkey area V4, the representation of object position is orthogonal to the representations of many irrelevant features of that object and the background, and 2) consistent with our hypothesis, threshold for human observers to judge a change in object position was unaffected by the variations in the background stimulus that were shown neuronally to have orthogonal representations in monkey V4.
Relationship to the notion of untangling and representational geometry
The conditions under which objects can be disambiguated from neural population responses have been studied using the concept of untangling (DiCarlo and Cox, 2007;Rust and Dicarlo, 2010;DiCarlo et al., 2012;Pagan et al., 2013).Untangling has been primarily discussed in the context of object classification.The hypothesis is that different objects can be appropriately classified (e.g.discriminating images of bananas from images of leaves) when the neural population representations of those objects are linearly separable in the face of irrelevant variations in the images (e.g.changes in position, orientation, size, or background).Support for this hypothesis comes from the observation that as one moves from early to late stages of the primate ventral visual stream, representations of different object categories become more linearly separable (DiCarlo et al., 2012;Yamins et al., 2014;Majaj et al., 2015;Hong et al., 2016;Hénaff et al., 2019).Progress has been made in understanding how the tuning functions and mixed selectivities of neurons support untangled population representations (Rigotti et al., 2013;Fusi et al., 2016;Kriegeskorte and Wei, 2021).
The untangling framework has also been extended to the linear readout of object properties from orthogonal neuronal representations (Hong et al., 2016).Here, we employ this extension to explore the ability of observers to represent the continuous values taken on by lower-level visual features like the ones we studied.To connect the untangling idea as applied to object classification with the formulation here, note that the linear separability employed for object-class discrimination is effective when the effect of irrelevant image variations is orthogonal to the hyperplane that separates the classes in the neural population space.Thus we use the same concept of orthogonality but apply it to linear decoding of feature parameter values rather than to linear separation of object classes.Additionally, we vary the irrelevant features (background depth and rotation) parametrically to estimate independent position axes and demonstrate that across irrelevant feature variation, the position encoding axes are parallel to each other.We link the neural orthogonality to behavioral performance by deploying a recently developed psychophysical paradigm (Singh et al., 2022;Kramer et al., 2023;Reynolds and Singh, 2023) to quantify the degree of perceptual orthogonality between the same task-relevant and task-irrelevant stimulus features that we study neurally.
Other studies consistent with this line of thinking have also considered features and have demonstrated that neural responses to relevant and distracting features of simple stimuli are linearly separable in the brain areas (or analogous layers of deep network models of vision) that are thought to mediate that aspect of vision (DiCarlo and Cox, 2007; Khaligh-Razavi and Kriegeskorte, 2014; Yamins et al., 2014; Chung et al., 2016; Cohen et al., 2020). Indeed, a previous study in our lab also found a relationship between our ability to linearly decode visual information from the activity of neural populations in monkeys and the ability of human observers to discriminate the same stimuli (Kramer et al., 2023). Our conclusions are also consistent with those reached in a study that considered neuronal representations in V4 and IT and behavioral estimates of the properties of objects presented against natural image backgrounds (Hong et al., 2016). That study found increasing orthogonality of representation from V4 to IT, and good behavioral estimation of the orthogonally represented properties. Given those results, it seems possible that our stimuli would have revealed increased orthogonality in areas further along the processing hierarchy than the V4 site of our electrode array; such a result would not change the general conclusions we draw about the relation between orthogonality and behavioral performance.
Opportunities from studying parameterizable naturalistic images
The present study extends the measurements of the relationship between the neural untangling of lower-level features and visually guided behavior with respect to features in naturalistic images.Our view is that the perception of object features in complex natural images provides increased power for testing the untangling hypothesis in the context of feature decoding.
Unlike the case of simpler stimuli, the number of task-irrelevant features available for manipulation is larger and is likely to more fully challenge the coding capacity of neural populations whose representations are of limited dimensionality (Chung and Abbott, 2021). Furthermore, visual distractors (like variation in the background) heavily influence scene categorization performance in artificial stimuli but not natural stimuli, suggesting that orthogonal feature representations in natural stimuli are more resilient to noise (Zhou et al., 2000; Chung et al., 2018). Studying the relationship between neurons and visually guided behavior using parameterizable naturalistic images solves many of the challenges inherent in using simple artificial stimuli on the one hand or natural images on the other (Felsen and Dan, 2005; Rust and Movshon, 2005; Martinez-Garcia et al., 2019; Cowley et al., 2023; Ding et al., 2023; Maheswaranathan et al., 2023). The graphics-generated stimuli we employ strike a balance between the experimental control available through parameterization and the ability to measure principles governing neural responses to and perception of features of natural images that are difficult or impossible to glean using artificial stimuli.
Opportunities from cross-species investigations of visual perception
Our results highlight the power of pairing neural population recordings in animals with behavior in humans for understanding the neural basis of visual perception.Although simultaneously recording neurons and measuring behavior has many advantages, comparison with human performance provides some assurance that the neural results obtained in an animal model generalize to humans.In addition, our approach links observations from the more peripheral visual field locations where for technical reasons the neural recordings are most often made, to the central visual field locations that are typically the focus of studies with human subjects.
Since the monkeys were simply rewarded for fixating during the recordings, our experiments focus on neural population activity that is stimulus driven, rather than reflecting internally driven processes like attention or motivation.In future work, it will be interesting to merge our knowledge of how stimulus-driven and internal processes combine to influence neuronal responses and performance on visual tasks.
Mechanisms supporting orthogonality
Our study quantifies how neural populations represent multiple naturalistic stimulus variations, but it does not provide direct insight about how the encoding and processing of visual stimuli produce those representations.Under biologically realistic assumptions, simulations show that although it is possible to learn about the orthogonality of feature representations within a population from small population recordings, it is generally not possible to characterize the role of each recorded neuron (Ruff et al., 2018b).
In recent years, a large number of studies have demonstrated that neural networks trained to categorize natural images produce representations that bear strong resemblance to neural representations in the ventral visual stream (Oleskiw et al., 2018; Pospisil et al., 2018; Bashivan et al., 2019; Srinath et al., 2021; Cowley et al., 2023). These models provide an opportunity to understand the conditions under which aspects of natural stimuli are represented orthogonally, which is a subject of ongoing work (Majaj et al., 2015; Chung et al., 2016, 2018; Hong et al., 2016; Cohen et al., 2020; Ni et al., 2022; Kramer et al., 2023). We hope that our results will lead to a productive coupling of computational analysis of the mechanisms by which orthogonal representations emerge with behavioral experiments using the same parametrically varied computer-graphics stimuli.
Conclusion
Our results provide behavioral and neurophysiological evidence supporting the powerful untangling hypothesis, further extend the study of untangling to representations of features of objects and backgrounds and demonstrate the value of parameterizable naturalistic images for studying the neural basis of visual perception. They also suggest a promising future of investigating the neural basis of perceptual and cognitive phenomena by leveraging the complementary strengths of multiple species.
Extended Data
Figure 1a-b depict how we used computer graphics to parametrically vary different scene features, such as the position of a central banana, the rotational position of objects in the background, and the depth position of objects in the background. Using this set of stimulus variations, consider the effect of varying the position of the banana on a hypothetical neural population response as illustrated in the left panel of Figure 1c. Each point in the plot represents the noisy population response to one presentation of an image, illustrating how varying banana position against a fixed background can trace out a line in the high-dimensional neural population space. For this background, the position of the banana could be read out by projecting the population response onto the line shown (labeled 'position axis for one background' in the figure). The middle panel of 1c shows a way that varying the background objects could affect this line in an orthogonal manner. Here the line tracing out the neural population response to the banana at various positions is shifted in a direction orthogonal to the position axis shown in the left panel. Although a different line is swept out by varying banana position against this second background, projecting onto the line for the first background continues to accurately decode the banana position. If, on the other hand, changing the background causes a change in the position axis that is not orthogonal (right panel of Figure 1c), projecting onto the line for the first background will not provide an accurate linear position readout. Thus we test the hypothesis that changing an irrelevant feature of the background (e.g. the position of background objects) will not impact perception of the task-relevant feature if the irrelevant background changes are orthogonal to the relevant ones (Figure 1c middle).
Figure 1: Stimulus design and hypotheses about how neural representations enable generalizable decoding. a: We generated photorealistic images with permuted central object (the banana) and background properties using a Blender-based image generation pipeline that gave us control over central- and background-object properties (their position, size, pose, color, depth, luminance, etc.). b: Example images showing variations in three parameters - central-object position in the horizontal direction, a rotation of the background objects (leaves and branches), and the depth of the background objects. Five values of each of the three parameters were chosen for each monkey based on receptive field properties (see below), yielding an image set of 5x5x5=125 images. c: Hypothesized implications of the neural formatting of visual information on the ability to decode a visual feature. Consider the responses of a population of neurons in a high-dimensional space in which the response of each neuron is one dimension. The population responses to a series of stimuli that differ only in one parameter (e.g. the position of the central object) change smoothly in this space (left). Responses to a set of stimuli that differ in the same parameter but also have, for example, a difference in the background will trace out a different path in this space (e.g. the red points in the center and right panels). Relative to the first (blue) path, changing the same parameter on a different background could change the population response in a parallel way; more specifically, changing the background could move the population along a dimension that is orthogonal to the dimension encoding the parameter of interest (center). This scenario would enable linear decoding of the parameter of interest that is invariant to changes in the background. Alternatively, the direction that encodes the parameter of interest could depend on the background (right). Under the linear readout hypothesis, in this case varying the background would impair the ability of a population of neurons to support psychophysical estimation of the parameter of interest.
Figure 2: Object position decoding from V4 population responses is consistent across background variations. a: We can linearly decode object position for each background stimulus (for the example session shown here). Each panel represents a unique configuration of background rotation and depth, with rows representing variations in rotation and columns representing variations in depth. Each gray point shows the decoded position for a single image presentation in this session. These points depict the actual object position (x-axis, in visual degrees relative to the center of the image) and the decoded position (y-axis) using a separate, cross-validated linear decoder for each unique background. The open circles represent the trial-averaged predicted position (vertical length is the standard deviation). The number in the top-left is the correlation between the actual and decoded positions and the yellow dashed line is a linear fit. Gray to yellow gradient is a redundant cue for stimulus position variation. The gray dashed line represents the identity. b: Position decoding is largely consistent across background variations. This plot is in the same format as those in A. Here, a common decoder that ignores variations in the background and therefore incorporates all stimulus presentations is used. The data are the same as those shown in A. Compare with Figure 2-1a and 2-1c for background rotation and depth decoding. c: Distribution of specific decoder accuracies (correlation) across all sessions for each monkey (each session contributed 25 values to the histogram). Blue and red arrows represent the median accuracy (0.662 for monkey 1, 0.703 for monkey 2). The box plots above the histograms summarize general decoder accuracy across sessions for each monkey. The central line indicates the median (0.735 for monkey 1, 0.724 for monkey 2), box edges indicate the 25th and 75th percentiles, whiskers indicate minimum and maximum values, and + symbols indicate outliers. Compare with Figure 2-1b and 2-1d for background rotation and depth decoding. d: Error in decoding object position (across background variations) for each trial compared with the error in decoding background rotation. See Figure 2-1e for comparison with error in trial-wise background depth decoding.
Figure 3: Object position axes across background variations are aligned with each other.Since object position decoding is tolerant to background variations, we tested whether the linear decoding axes for each background configuration were aligned by visualizing the decoders in the first two principal components of the neural response space.These dimensions were computed for the full set of neural responses obtained in each session.a: As with Figure 2a, each panel represents a unique configuration of the background rotation and depth with rows representing variations in rotation and columns representing variations in depth.Each dim point represents a single image presentation, and bright points represent trial-averaged responses.Gray-to-yellow gradient represents monotonic variation in object position.A gradient line was fit to the responses for each background condition, shown here in two dimensions for illustration.The lines shown have been normalized to have the same length in the projected space shown.The text label at the top left represents the relative angle between each decoder and the central decoder (the middle background condition plot, marked with * ) calculated in the full dimensional space of responses used for decoding.b: Distribution of angles in A as a histogram.Arrow at the top represents the median angle for this session (7.78°).c: Distribution of relative decoder angles across all object position decoders (like those in A) across sessions for both monkeys (blue and red distributions).Blue and red arrows represent the median of angles across sessions (16.4° for monkey 1, 12.07° for monkey 2).Dark gray distribution represents the angles of object position decoders after shuffling the position values for each trial (median 88.13°, shown as dark gray arrow).Light gray distribution represents the angles between randomly chosen vectors of the same dimensionality as the neural population space (median 89.99°, shown as light gray arrow).
Figure 4: Human psychophysics experiments suggest that discrimination of object position is unaffected by variation in the stimulus background.Since object position representation in V4 neurons is robust to background variation, we tested whether changing background properties affects the decoding of object position in humans.a: Human psychophysical task.Two images containing a banana were presented, with two masks in between.Participants were instructed to report whether the relative position of the banana in image 2 was left or right of that in image 1.In blocks, background rotation and/or depth were held constant or varied as described below.b: Averaged (across participants, N = 10) psychometric functions for each background variation condition (individual participant performance shown in Figure4-1).Three background conditions were tested: black: no background variation between the two presentations of the banana on each trial; red: background rotation changed randomly across the two presentations of the banana, but background depth was held fixed; blue: both background rotation and depth were randomized across the two presentations of the banana.Across participants, object position change detection performance was not substantially different across the three background variation conditions.c: To compare human behavior and monkey electrophysiology, we selected stimulus presentations in the monkey experiments to approximate the three background variation levels used in the human psychophysics experiment (see Methods for details of sample matching).We trained linear discriminants to separate trials into right or left position shift, for each background variation condition.During training, the trials with the banana in the central position were randomly assigned to be left or right.Classifier performance also did not differ substantially across the background variation conditions.
Figure 5: Orthogonal representations for variations in up to 10 object and background features. We generated a large image set where the color, luminance, position, rotation, and depth of both the background and object each took one of two values, yielding 2^10 = 1,024 images. We collected V4 population responses to these images as in Figure 2c. a: Example images illustrating object and background parameter variation. b: If two features are encoded orthogonally (independently) in neural population space, then a decoder trained on one feature should not support decoding of the other feature. We trained linear decoders of V4 responses for each of the object or background features (x-axis) and tested the ability to decode each of the 10 features. The diagonal entries provide, for each feature, the correlation between the decoded and actual feature parameter values for a decoder trained on that feature. Correlations were obtained through cross-validation. Decoding performance was above chance (correlation of 0) for all features (p < 10^-80; t-test across folds). The off-diagonal values depict the performance of a classifier trained on one feature (x-axis) for decoding another (y-axis). This cross-decoding performance was not distinguishable from chance except in the case of the color of the object and background.
Figure 1-1: Recording locations, stimulus positions, behavioral task, and unit responsivity for monkeys.a: Multielectrode arrays (96 channels each) were chronically implanted in V4 of two monkeys.b:The receptive fields of the recorded multiunits were mapped using Gabor and two-dimensional shape stimuli.The estimated receptive field (RF) center of each visually responsive multiunit is depicted by a black point.The mean size of the RF for each population is depicted by the dashed circle.The black square indicates the position of the image on the screen.The size and position of the images for each session were chosen such that all the variations in stimulus position and background depth were within the estimated RF of the recorded V4 population.The variation in object position is indicated by the false color image in each panel.c: While monkeys fixated a central dot, stimuli were flashed on (200-250 ms) and off (150-200 ms) up to eight times before the monkey received a juice reward.Each of the 125 images was repeated between 8-10 times in every session.Electrophysiological recordings were collected using multielectrode arrays implanted in V4.Receptive fields (RFs) were mapped in an independent experiment using Gabor and 2D shape stimuli.Images were placed such that the variation in the central object (the banana) and background overlapped a large majority of RFs.d: Comparison of response sensitivity for banana position, background depth, and background rotation.Here, sensitivity is defined as the modulation index (rmax-rmin)/(rmax+rmin) where rmax and rmin are respectively the maximum and minimum responses to variation in the corresponding parameter.The white cross represents the mean of the sensitivities across all visually responsive multiunits (n=2377) across all sessions.
Figure 2-1: Background depth and rotation can also be decoded well from V4 population responses. Compare with Figure 2. a: Same as Figure 2b but for background rotation. Here, a general linear decoder was used to estimate the background rotation value in the face of variation in background depth and banana position. b: Same as Figure 2b but for background depth. Here, a general linear decoder was used to estimate the background depth value in the face of variation in background rotation and object position. c: Same as Figure 2c for specific background rotation decoding (each session contributes 25 values to the distribution). Here, the background rotation value was decoded while ignoring the other parameter variations across images. Arrows represent median decoding accuracy (0.558 for monkey 1, 0.344 for monkey 2). Box plots above the histograms show the distribution of general decoder performance. d: Same as Figure 2c for specific background depth decoding (each session contributes 25 values to the distribution). Arrows represent median decoding accuracy (0.703 for monkey 1, 0.634 for monkey 2). Box plots above the histograms show the distribution of general decoder performance. e: Comparison of errors for general decoding of object position and general decoding of background depth across trials. Compare with Figure 2d.
"year": 2024,
"sha1": "1f20608d57bd92894bbea22334d4cee96768f6fb",
"oa_license": "CCBYNCND",
"oa_url": "https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10925131",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "e036e038e4944314003f5c13ca21f6da45a91a61",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
Maple umbral calculus package
We are developing a Maple package of functions related to Rota's Umbral Calculus. A Mathematica version of this package is being developed in parallel.
Introduction
Umbral calculus is the study of the analogies between various polynomial sequences and the powers sequence x^n. For example, x^n has many parallels with the lower factorial sequence (x)_n = x(x − 1) · · · (x − n + 1): • The forward difference operator ∆ : p(x) → p(x + 1) − p(x) plays a role with respect to (x)_n analogous to that played by the derivative d with respect to x^n.
• Taylor's theorem is analogous to Newton's theorem.
• The binomial theorem for (x+y)^n is replaced by Vandermonde's identity for (x+y)_n.
Although Umbral Calculus dates back to the 18th century, it was only put on a rigorous foundation by Gian-Carlo Rota and his collaborators [6,12] in the 1970's. We now characterize each polynomial sequence under study by one or more polynomial operators (usually shift-invariant 1 ) associated with it. The duality between operators and polynomials is the key tool to deriving umbral calculus results.
Umbral Calculus has many applications in enumerative combinatorics. The powers x^n count all functions from an n-element set to an x-element set, while the lower factorials (x)_n count injections. Similarly, given any species of combinatorial structures (or quasispecies), let p_n(x) be the number of functions from an n-element set to an x-element set enriched by this species. A function is enriched by associating a (weighted) structure with each of its fibers. The resulting sequence of polynomials (p_n)_{n in IN} is said to be of binomial type since it obeys the "binomial" identity p_n(x + y) = Σ_{k=0}^{n} C(n,k) p_k(x) p_{n−k}(y). For example, given the species of rooted forests, the enriched functions are called persistent functions and are enumerated by the Abel polynomials A_n(x) = x(x + n)^{n−1}. Other applications include lattice path counting [7,8,15].
Our Maple package provides a number of different tools by which to enter operators. These operators can then be manipulated in many different ways. In particular, the polynomial sequences associated with them can be explicitly calculated.
This package has already aided us in our research [3]; we hope that it will help you too. We expect to release a Mathematica version of this package in the near future.
Polynomial Operators
Polynomial operators (shift-invariant or not) can be specified in several convenient manners: Explicitly by their action on polynomials. For example, the shift operator is defined < subs(x = x+a, p) | p > using the "angle-bracket" notation for functional operators. (See ?operators[functional] and ?unapply for details.) Similarly, the Bernoulli operator p(x) → ∫_x^{x+1} p(t) dt is defined <int(subs(x=t, p), t=x..(x+1)) | p | t>. (A short illustrative session using these forms appears after this list.)
As an analytic function of the derivative, by the expansion theorem. Abstractly, as an unspecified function of d. For example, f(d) or f(d,x) in the case of a non-shift-invariant operator [5].
Using the powseries package. If the coefficients of the formal power series given by the expansion theorem are all known, then use powcreate. For example, powseries [powcreate] (f(n) = a^n/n!);.
See ?linear for more information.
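For readers unfamiliar with the angle-bracket notation, the short session below simply instantiates the two definitions quoted above; the choice of a = 1 and of the test polynomial x^2 is ours, and exact output formatting may differ between Maple releases.

> S := < subs(x = x+1, p) | p >:                    # shift operator p(x) -> p(x+1)
> S(x^2);                                           # expect (x+1)^2
> B := <int(subs(x=t, p), t=x..(x+1)) | p | t>:     # Bernoulli operator
> B(x^2);                                           # expect x^2 + x + 1/3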
Operators can be converted easily from one form to another with convert. A delta operator Q is a shift-invariant operator such that Q(x) is a non-zero constant. An abstract operator is assumed to be invertible unless indicated otherwise. Our package also allows the expansion of linear operators which are not shift-invariant. Such expansions [5] express the operator as a formal power series in d whose coefficients are polynomials in x. For example, if Q is the operator Q: p(x) → ∫_0^x p(t) dt, then the expansion convert(Q,function,5,x) or convert(Q,powseries,x) of Q in terms of multiplication by x and the derivative d gives an elementary proof of Bourbaki's method of asymptotic integration [1, Sections 3.5 and 3.6].
Polynomial Sequences
Given the necessary operators, the program can calculate polynomials of binomial type (bfo), Sheffer sequences (sfo), Steffensen sequences (steff), and cross-sequences (cseq) (see [12, Sections 5 and 8]). For example, the sequence of binomial type bfo(p(d), x, n) associated with a delta operator p(d) is defined by the conditions
dp(p(d), bfo(p(d), x, n)) = n * bfo(p(d), x, n) for n > 0,
bfo(p(d), 0, n) = 0 for n > 0,
bfo(p(d), x, 0) = 1,
or equivalently by its exponential generating function exp(x q(t)), where q is the compositional inverse of p. Note that q(t) is the generating function of the associated species.
For example, the lower factorial (x)_n is the basic sequence of binomial type for the forward difference operator ∆.
> factor(bfo(delta,x,4)); x (x - 1) (x - 2) (x - 3) If the degree is not explicitly given, then only the most significant terms will be computed. Several functions in the package do further operations on polynomial sequences. Arbitrary polynomials can be expressed in terms of such sequences (polynomialExpansion, shefferExpansion, basicExpansion). Connection constants can be determined between arbitrary polynomial sequences; for example, the Stirling numbers are given by cc(topseq(powerx, 5, x), topseq(lower, 5, x), x), where powerx(n,x) is x^n and lower(n,x) is (x)_n. Other features include umbral composition (uc) and umbral inversion (ui).
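As a worked illustration of the connection-constant call quoted above (using only the functions named there; the assignment to S is our own addition), one would compute the matrix of Stirling numbers as:

> S := cc(topseq(powerx, 5, x), topseq(lower, 5, x), x);   # Stirling numbers connecting x^n and (x)_n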
Several authors (e.g. [10,14]) have generalized the umbral calculus by considering not only sequences of binomial type with generating function exp(g(x)t) but also those whose generating function is Φ(g(x)t), where Φ(t) = Σ_{n=0}^{∞} t^n/[n]! and [n]! denotes the generalized factorial [n]! = a(1)a(2) · · · a(n). Most of the functions in the umbral calculus package allow an optional argument a which is either left undefined, or defines the coefficients used by the "generalized derivative." Thus, > dp(d, x^3, x, proc(n) 1 end); x^2 The following possible choices for a are predefined.
Umbral Calculus              dp(d,p,x,a)          a
Classical [6,12]             dp(x)/dx             classical(n) = n
q-umbral calculus [10,11]    --                   --
Divided Difference [4,13]    (p(x) - p(0))/x      divided(n) = 1
Hyperbolic                   --                   hyperbolic(n) = 2*n*(2*n-1)
See ?genderiv for details. Generalizations of the umbral calculus to several variables [9,15] are supported. Most functions included in the package have an alternate syntax for use in multivariate umbral calculi. In particular, d[i] represents the partial derivative with respect to the ith variable. Instead of a single delta operator, a collection of operators is required to define a sequence of binomial type. This generalization is completely compatible with the one described above. See ?multilinear and ?moe for details.
For further instructions consult the on-line help and examples provided in the package. For help, type ?key-word. An index of key-words is available via ?umbral.
See [2] for an extensive survey and bibliography of the umbral calculus.
"year": 1995,
"sha1": "87b99f891d93951a4c63341fe677938b8fc31fbc",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "87b99f891d93951a4c63341fe677938b8fc31fbc",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
The 19S proteasome is directly involved in the regulation of heterochromatin spreading in fission yeast
Cumulative evidence suggests that non-proteolytic functions of the proteasome are involved in transcriptional regulation, mRNA export, and ubiquitin-dependent histone modification and thereby modulate the intracellular levels of regulatory proteins implicated in controlling key cellular functions. To date, the non-proteolytic roles of the proteasome have been mainly investigated in euchromatin; their effects on heterochromatin are largely unknown. Here, using fission yeast as a model, we randomly mutagenized the subunits of the 19S proteasome subcomplex and sought to uncover a direct role of the proteasome in heterochromatin regulation. We identified a mutant allele, rpt4-1, that disrupts a non-proteolytic function of the proteasome (that is, a non-proteolytic allele). Experiments performed using rpt4-1 cells revealed that the proteasome is involved in the regulation of heterochromatin spreading to prevent its uncontrolled invasion into neighboring euchromatin regions. Intriguingly, the phenotype of the non-proteolytic rpt4-1 mutant resembled that of epe1Δ cells, which lack the Epe1 protein that counteracts heterochromatin spreading. Both mutants exhibited variegated gene-silencing phenotypes across yeast colonies, spreading of heterochromatin, bypassing of the requirement for RNAi in heterochromatin formation at the outer repeat region (otr), and up-regulation of RNA polymerase II. Further analysis revealed that Mst2, another factor that antagonizes heterochromatin spreading, may function redundantly with Rpt4. These observations suggest that the 19S proteasome may be involved in modulating the activities of Epe1 and Mst2. In conclusion, our findings indicate that the proteasome appears to have a heterochromatin-regulating function that is independent of its canonical function in proteolysis.
The proteasome is a highly conserved multiprotein complex that engages in various cellular processes (1). The most well-known function of the proteasome is the degradation of polyubiquitylated proteins; this occurs via the collaborative efforts of two subcomplexes, the 19S regulatory particle (19S RP) and the 20S core particle (20S CP). The 19S RP may be subdivided into the lid and base subcomplexes (2,3). The lid recognizes ubiquitylated proteins, and the base deubiquitylates, unfolds, and translocates the substrate into the 20S CP, which is the site of protein degradation (4,5). Intriguingly, the 19S RP appears to exhibit a protein chaperone activity (6,7) that is independent of its proteolytic function (8,9). Other non-proteolytic functions of the proteasome have been reported, covering a range of cellular processes as follows: transcription initiation by regulating activators and co-activators (10-15); transcription elongation by remodeling/stabilizing the stalled polymerase complex (16); ubiquitin-dependent histone modification (17); and mRNA export (18). Furthermore, accumulating evidence suggests that the respective 19S RP subunits have distinct non-proteolytic roles. In particular, the six ATPases of the 19S RP (Rpt1-6) have been reported to play non-proteolytic roles, both as monomers and in combination. In the budding yeast, Saccharomyces cerevisiae, Rpt6p mediates the gene-promoter targeting and stimulation of the co-activator, SAGA (12). Rpt4p, together with Rpt6p, is recruited directly to the yeast GAL gene, where it binds to the Gal4p activation domain to strip the activator off the chromatin (14). Rpt2p was recently reported to engage in ejecting the H2Bub1-deubiquitylating module (Sgf73-DUBm) from the SAGA complex to facilitate mRNA export (18). In the fission yeast, Schizosaccharomyces pombe, Rpt1 contacts the PAF complex via Cks1 to facilitate efficient transcription elongation (19). In humans, heterodimers of Sug1, S7, and S6a (corresponding to fission yeast Rpt6, Rpt1, and Rpt5, respectively) are recruited to induced CIITA plV promoters (13). To date, however, the non-proteolytic functions of the proteasome have been mainly investigated with respect to euchromatin; their effects on heterochromatin remain poorly understood. Moreover, each 19S ATPase has a unique set of interaction partners, and the same subunit may act upon multiple processes (20), suggesting that there may be additional yet-undiscovered non-proteolytic roles of the proteasome.
In fission yeast, constitutive heterochromatin is formed at centromeres, telomeres, and the mating-type locus. RNA interference (RNAi) is the dominant mechanism through which heterochromatin is formed at centromeres, and it also contributes partly to forming heterochromatin at telomeres and the mating-type locus (21). Antiparallel transcription of the outer repeat by RNA polymerase II (pol II) produces non-coding RNA transcripts that are processed into small interfering RNAs (siRNAs) by the ribonuclease, Dicer (Dcr1). These siRNAs are subsequently loaded onto the RNA-induced transcriptional silencing (RITS) complex, which consists of Chp1, Argonaute (Ago1), and Tas3, and uses the siRNAs to target it to homologous chromatin for silencing (22-24). The RITS complex recruits the cryptic loci regulator complex to chromatin via the bridging protein, Stc1 (25), resulting in a targeted H3K9 methylation (H3K9me) that is mediated by the Clr4/Suv39h methyltransferase (26-28). The H3K9me mark provides a binding site for the heterochromatin protein 1 (HP1) orthologs, Swi6, Chp1, and Chp2 (29).
Once established, heterochromatin is tightly confined to a defined domain to prevent unwanted invasion of heterochromatin into neighboring euchromatin. The borders of the heterochromatin domains in the centromeres are characterized by sharp transitions in histone modification profiles that coincide with specific boundary elements called IRCs; the exception to this is seen at centromere 2, where clusters of tRNA genes act in place of IRCs (30). The IRCs are enriched for Epe1, which has been shown to counteract heterochromatin spreading (31). Epe1 is recruited throughout the heterochromatin via Swi6, but it is maintained only at the boundary regions; elsewhere, it undergoes Cul4-Ddb1 E3 ligase-dependent ubiquitination and subsequent proteasome-mediated proteolysis (32). The possibility that the proteasome could have a function in heterochromatin other than conventional proteolytic degradation has long been suggested (33), but its involvement in numerous aspects of protein homeostasis has made it difficult for researchers to pinpoint such a function.
Recent reports have identified several other factors that are involved in maintaining the heterochromatin boundary. Leo1, which is a component of the Paf1 complex, was identified as an antagonizing factor of heterochromatin spreading; several groups made this discovery but proposed different mechanisms (34–36). Mst1 and Mst2, which act as acetyltransferases of histone H4 lysine 16 (H4K16) and histone H3 lysine 14 (H3K14), respectively, have also been shown to antagonize heterochromatin spreading (37,38). Mst1-induced histone H4K16 acetylation (H4K16ac) provides a platform for the BET family bromodomain protein, Bdf2, to be recruited to IRCs by Epe1. Once recruited, Bdf2 antagonizes the Sir2-mediated deacetylation of histone H4K16ac to prevent heterochromatin spreading at IRCs (38). Mst2 genetically interacts with Epe1, such that the mst2Δ epe1Δ mutant shows a severe growth defect. This growth defect is also seen when the enzymatic activities of Mst2 and Epe1 are abolished, suggesting that the two proteins share redundant functions (39). A recent study found that Mst2 also acetylates Brl1, a component of the histone H2B ubiquitin ligase complex, and forms a positive feedback loop to hinder the formation of heterochromatin, adding another layer of complexity to the regulation of the heterochromatin boundary (40).
Here, we show for the first time that the proteasome functions at centromeric heterochromatin regions in a direct, non-proteolytic way. The non-proteolytic allele, rpt4-1, disrupts heterochromatin integrity and is associated with variegated heterochromatin spreading. Moreover, rpt4-1 acts in a regulatory pathway similar to that of Epe1 and Mst2, but not Leo1 or Bdf2, suggesting that the proteasome could possibly act as a protein chaperone in the regulation of Epe1 and Mst2.
Proteasomes are localized at centromeres and involved in heterochromatin regulation
The ATPase ring of the 19S base is composed of six AAA-type ATPases that effect the conformational changes of the ring (41). As mentioned earlier, most of the non-proteolytic proteasome alleles reported in fission yeast and other organisms are mutations in one of these ATPases (4,8,9). We therefore selected three of the ATPases, Rpt3, Rpt4, and Rpt6, as our targets for mutagenesis in our effort to screen for non-proteolytic proteasome mutants that affect heterochromatin (Fig. 1A).
We performed random mutagenesis of all three subunits and successfully isolated a pool of mutations that affected the integrity of heterochromatin by examining the expression of an ade6+ reporter inserted in the outer repeat region (otr). The expression of ade6+ was examined in low-adenine medium (YES-Ade) and adenine-deficient medium (PMG-Ade) (Fig. 1C). To our surprise, all three subunits produced mutants that affected the heterochromatin to varying degrees. Although we do not know whether the three subunits share the same mechanism for regulating heterochromatin, our results clearly indicate that the proteasome plays a general role in heterochromatin regulation.
The proteasome has been reported to be localized at pericentromeric heterochromatin in fission yeast, yet only the components of the 19S RP appear to be recruited (42). As the ChIP efficiency of the proteasome has been shown to be highly dependent on the utilized antibody or target subunit (43), we speculated that targeting a different subunit of the 20S CP might improve its ChIP efficiency and possibly reveal the physical residence of the intact 26S proteasome. To this end, we FLAG-tagged the 20S CP component, Pre1, and the 19S RP subunit, Rpn1, and performed ChIP-seq analysis. We observed a profound enrichment of the proteasome at all three centromeres. Moreover, the localization patterns of the 19S RP and 20S CP were the same, indicating that the full 26S proteasome, not just the 19S RP, is recruited to the centromeres (Fig. 1B). This further validates the previous observation that the proteasome exists as a 26S holoenzyme even at sites where the 20S CP might seem unnecessary (43).
Rpt4 mutant allele, rpt4-1, is non-proteolytic
From the generated mutant alleles, we selected rpt4-1 for further investigation because it showed the most severe de-repression of heterochromatin (Fig. 1C). We first tested whether there was any defect in the proteolytic function of rpt4-1 cells, by using a poly-ubiquitin antibody to test cell lysates for the accumulation of poly-ubiquitylated species (18). No accumulation of a poly-ubiquitylated product was observed in rpt4-1 cells, indicating that rpt4-1 is a non-proteolytic allele (Fig. 2A, lanes 1 and 3 versus lanes 2 and 4). To verify this observation, we examined the levels of FLAG-tagged Rum1, which is rapidly degraded by the proteasome (44,45). No accumulation of Rum1 was observed in rpt4-1 cells (Fig. 2B, compare lanes 1-6 with lanes 7-9), which is consistent with the results of our poly-ubiquitination assays and further confirmed the non-proteolytic nature of the rpt4-1 allele. Silver staining of proteasomes purified from rpt4-1 cells showed that all subunits were intact (Fig. 2C, compare lanes 2 and 3).
Sequencing of rpt4-1 revealed a single aspartic-acid-to-valine substitution at position 249 (D249V); this residue is highly conserved throughout all eukaryotes (Fig. 2D). The mutation is not located within the key ATPase domains, such as the Walker A, Walker B, Sensor, and R finger domains, where any mutation would significantly affect the ATPase function and subsequent proteolysis (46). Rather, the mutation is located adjacent to the first α-helix domain after the Walker B motif. We speculate that it may induce a conformational change that is subtle enough to disturb only the non-proteolytic functions of the proteasome.
[Figure 2 legend (residual): A, whole-cell extracts from wild-type, rpt4-1, mts2-1, and rpt3-1 cells blotted with FK2 anti-poly-ubiquitin antibodies (Rpt2 as loading control); B, Rum1-FLAG does not accumulate in rpt4-1 cells grown in YES at 30°C and shifted to 37°C (anti-FLAG blotting, Rpt2 as loading control); C, silver staining of purified Rpn1-TAP-tagged 26S proteasomes (asterisk, FLAG-tagged Rpn2 subunit in the rpt4-1 proteasome); D, schematic of the rpt4-1 mutation (D249V) with Rpt4p domains and partial sequence alignment of Rpt4 from eight species (mutation site highlighted in pink).]
rpt4-1 cells exhibit variegated silencing at pericentromeric heterochromatin
The repression status of the pericentromere can be easily identified by colors using the ade6+ reporter inserted at the otr region (Fig. 3, A, C, and D) (47). When cells are grown in a low-adenine medium (YE), the repressed state is reflected by a red color, whereas the de-repressed state is white. Unlike other mutant alleles that showed a uniform color change, rpt4-1 cells exhibited a mixture of red and white colonies when plated on YE medium, indicating the co-existence of both repressed and de-repressed cells (Fig. 1C). To verify this observation, we randomly selected rpt4-1 colonies grown on rich medium (YES) and spotted them onto low-adenine medium (YE). Indeed, the colonies differed in their degrees of repression, ranging from fully repressed (red) to fully de-repressed (white) colonies (Fig. 3A). To exclude the possibility that this variegation phenotype was an indirect effect of the ade6+ reporter, we used a ura4+ reporter inserted at the outer repeat region of rpt4-1 cells (Fig. 3B, top), and we tested whether the variegation phenotype persisted. The same variegation of silencing was observed with the ura4+ reporter, indicating that the variegation is a bona fide phenotype of rpt4-1 (Fig. 3B, bottom).
The variegation phenotype of rpt4-1 is reminiscent of the null mutation of Epe1, epe1Δ, which is well-known to exhibit variegation in heterochromatin silencing because of the oscillation of heterochromatin domains (48). Here, we found that this variegation status persisted in the double mutant of rpt4-1 epe1Δ (Fig. 3C), suggesting that Rpt4 and Epe1 may share the same pathway to prevent the stochastic dysregulation of heterochromatin. The stochastic nature of the rpt4-1 mutant was also apparent when we tried to analyze its sensitivity to the microtubule-destabilizing agent, thiabendazole (TBZ). Although a population of rpt4-1 cells seemed insensitive to TBZ (Fig. 3D, top), single colonies exhibiting full ade6+ de-repression (white colonies) showed TBZ sensitivity (Fig. 3D, bottom). Varying TBZ sensitivity is another reported phenotype of epe1Δ (48), which further supports the possible linkage between Rpt4 and Epe1.
rpt4-1 cells show heterochromatin spreading
Epe1 has been identified as an anti-silencing factor, as its inactivation stimulates the continuous spreading of heterochromatin beyond its natural boundaries (31). Because rpt4-1 showed a variegation phenotype similar to that of epe1Δ, we tested whether rpt4-1 also exhibited heterochromatin spreading. To this end, we generated rpt4-1 mutant cells in which the ura4+ reporter gene was inserted immediately outside the IRC heterochromatin boundary element on the left side of centromere 1 (IRC1L:ura4+) (Fig. 4A). This ura4+ reporter gene is euchromatic and is thus expressed. In wild-type cells, ura4+ expression confers sensitivity to 5-fluoroorotic acid (FOA), yielding poor growth in FOA medium. In cells with impaired boundary function, such as in the epe1Δ and leo1Δ mutants (34,36), heterochromatin spreads beyond the boundary to silence the ura4+ reporter, conferring resistance to FOA. As anticipated, the rpt4-1 mutant showed growth on FOA, which is indicative of heterochromatin spreading (Fig. 4B, top). The mild overexpression of wild-type rpt4+ suppressed this evidence of heterochromatin spreading in the rpt4-1 mutant (supplemental Fig. S1), indicating that the observed heterochromatin spreading is a direct consequence of the rpt4-1 mutation. The spreading in the rpt4-1 mutant seemed marginal compared with that of the epe1Δ or leo1Δ mutants, but the spreading in individual FOA-resistant colonies was just as robust as in the known boundary-regulator mutants (Fig. 4B, bottom). This indicates that heterochromatin spreads in a stochastic fashion in the rpt4-1 mutant, as reported previously in epe1Δ cells (48).
Beyond the epe1Δ mutant, the null mutations of Mst2, Bdf2, and Leo1 (mst2Δ, bdf2Δ, and leo1Δ, respectively) also reportedly exhibit heterochromatin spreading (36–38). Our genetic analysis revealed that rpt4-1 showed synergistic enhancement of heterochromatin spreading with the bdf2Δ and leo1Δ mutations (Fig. 4C) but not with the mst2Δ or epe1Δ mutations (Fig. 4D). These results suggest that, for heterochromatin spreading, Rpt4 functions redundantly with Bdf2 and Leo1 via a different pathway, although it shares a pathway with Epe1 and Mst2. Our RNA-seq analysis of the rpt4-1, epe1Δ, and mst2Δ mutants further revealed that there is functional redundancy among Rpt4, Epe1, and Mst2. The gene clusters that exhibited differential expression in the epe1Δ and mst2Δ mutants (>1.5-fold change relative to wild-type) showed significant overlaps with those exhibiting differential expression in rpt4-1 cells (p = 9 × 10^−91 and p = 7.3 × 10^−29 for epe1Δ and mst2Δ, respectively), indicating that molecular functions are shared among Rpt4, Epe1, and Mst2 (supplemental Fig. S2A). Gene ontology analysis revealed that the three proteins are all involved in regulating the gene groups related to the stress response and protein synthesis, suggesting that they share a role in the cellular response to environmental stimuli (supplemental Fig. S2, B and C).
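For readers interested in how the significance of such overlaps is typically computed, the short Python sketch below illustrates a hypergeometric test of the overlap between two differentially expressed gene sets. The gene counts used here are placeholders rather than the actual numbers from this study, and the analysis reported above may have used a different statistical procedure.

# Hypothetical sketch: hypergeometric test for the overlap between two sets of
# differentially expressed genes (e.g., rpt4-1 vs. epe1-delta), assuming the
# >1.5-fold DE calls relative to wild type are already available as gene lists.
from scipy.stats import hypergeom

background = 5100   # total genes considered (placeholder, not the real number)
de_rpt4 = 600       # genes changed >1.5-fold in rpt4-1 (placeholder)
de_epe1 = 450       # genes changed >1.5-fold in epe1-delta (placeholder)
overlap = 200       # genes present in both DE sets (placeholder)

# P(X >= overlap) when de_epe1 genes are drawn from a pool of `background`
# genes containing de_rpt4 "successes".
p_value = hypergeom.sf(overlap - 1, background, de_rpt4, de_epe1)
print(f"overlap = {overlap}, hypergeometric p = {p_value:.2e}")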
Because rpt4-1 is a non-proteolytic mutant and therefore does not affect the level of Epe1 or Mst2, we questioned whether it might affect the localizations of Epe1 and/or Mst2, particularly at the IRC boundary regions. However, Mst2 was recently reported to be absent from centromeric regions (40), making it most logical to examine the localization of Epe1 alone. We found that the localization of Epe1 was unaffected, or at most marginally increased, in rpt4-1 cells (supplemental Fig. S3). Together, our results suggest that the functions of Epe1 and/or Mst2 may be impaired in the rpt4-1 mutant. These possibilities are discussed further below.
rpt4-1 bypasses the requirement of RNAi in heterochromatin formation
In epe1Δ and mst2Δ mutants, heterochromatin formation is restored in the absence of RNAi, implying that an RNAi-independent heterochromatin formation pathway is activated (37,48). We tested whether the rpt4-1 mutant could bypass the requirement of RNAi for heterochromatin formation, and indeed we found that rpt4-1 cells bypassed the requirement of active RNAi. Compared with the ago1Δ and dcr1Δ mutants, rpt4-1 ago1Δ and rpt4-1 dcr1Δ mutants showed increased silencing of the ade6+ reporter, indicating that heterochromatin was restored in the double mutants (Fig. 5A). Likewise, rpt4-1 ago1Δ cells exhibited recovery of the poor growth seen in the ago1Δ single mutant.
rpt4-1 reduces transcription at pericentromeric heterochromatin in the absence of RNAi
Pericentromeric heterochromatin is readily transcribed by pol II during S phase of the cell cycle to produce nascent RNAs that RNAi machineries eventually transform into siRNAs (49,50). However, the accessibility of pol II is tightly regulated by the balance between the histone deacetylase protein, Clr3, and the anti-silencing protein, Epe1. Clr3 restricts the accessibility of pol II, whereas Epe1 counteracts this by promoting the accessibility of pol II (51). A previous study showed that the level of pol II was increased in dcr1Δ cells due to the de-condensation of heterochromatin; however, this was suppressed in dcr1Δ epe1Δ cells, demonstrating that Epe1 can promote pol II accessibility in pericentromeric heterochromatin (51). Similar to the dcr1Δ epe1Δ mutant, the dcr1Δ mst2Δ mutant also reportedly exhibits suppression of the increased pol II seen at the centromeres of dcr1Δ cells (37). We therefore anticipated that the rpt4-1 mutation would reduce the level of pol II in situations that are typically characterized by abnormally high levels of pol II. To test this hypothesis, we examined whether the rpt4-1 ago1Δ mutant exhibited suppression of the increased pol II level seen in the ago1Δ mutant. As expected, the pol II level was reduced in the rpt4-1 ago1Δ mutant relative to the ago1Δ mutant (Fig. 6A). Consistent with this finding, the transcript level in the otr was reduced in rpt4-1 ago1Δ cells (Fig. 6B). We also failed to observe siRNAs in rpt4-1 ago1Δ cells, indicating that the restoration of silencing (i.e. the decrease in pol II) does not reflect the activation of an alternative small RNA-producing pathway (Fig. 6C). Together, these results indicate that a non-proteolytic function of the proteasome promotes the pericentromeric level of pol II.
Discussion
In this study, we elucidated a new, non-proteolytic function of the proteasome in regulating heterochromatin via a mechanism similar to that mediated by Epe1 and Mst2 (Fig. 7). Our observation that the non-proteolytic allele, rpt4-1, showed variegation of heterochromatin silencing (Fig. 3, A and B) and TBZ sensitivity (Fig. 3D) led us to investigate the potential involvement of Epe1, which was previously shown to prevent the spreading of heterochromatin beyond its natural borders (31).
[Figure legend (residual): the last column of the spotting assay is enlarged for comparison of cell growth of the indicated strains; C, ChIP-qPCR analysis of H3K9me2 levels at dg relative to act1+; D, ChIP-qPCR analysis of Swi6 levels at dg relative to act1+.]
As in epe1Δ cells, heterochromatin spreading was observed in rpt4-1 cells (Fig. 4A). Genetic analysis revealed that rpt4-1 regulates heterochromatin spreading via a pathway that is distinct from those involving Bdf2 and Leo1 (Fig. 4C) but is shared with Epe1 and Mst2. Notably, rpt4-1 bypassed the requirement of RNAi for heterochromatin maintenance (Fig. 5) and reduced the accessibility of pol II (Fig. 6), which are two phenotypes also reported for epe1Δ and mst2Δ cells (48,51).
As our group previously showed that 19S RP can remodel protein complexes via chaperone activity (18), a simple hypothesis would be that the proteasome regulates the recruitment of Epe1 and/or Mst2. However, our ChIP analysis showed that the recruitment of Epe1 is not affected in rpt4-1 cells (supplemental Fig. S3), and Mst2 is either absent from centromeric regions, as reported (40), or its action at centromeric regions may be too transient to be detected by the current ChIP technique. This leaves the possibility that the proteasome may directly remodel Epe1 and/or Mst2 to alter the activity of one or both of these proteins. In the case of Epe1, this hypothesis is feasible to some extent because 19S RP and Epe1 have been reported to undergo a physical interaction (38), and we found that the proteasome was highly enriched at heterochromatin regions, including the Epe1-enriched IRC boundaries (Fig. 1B). However, a detailed biochemical study is needed to fully examine the potential proteasome-directed remodeling of Epe1. Although Epe1 contains a JmjC domain, it seems to lack any demethylase activity (48,51). It has been reported to promote histone turnover in vivo (52), but there is no conclusive biochemical evidence showing that histone turnover is a genuine function of Epe1. In contrast, Mst2 has well-characterized enzymatic activity as a histone H3K14 acetyltransferase. Like Epe1, Mst2 has also been reported to promote histone turnover in vivo, although the underlying mechanism remains unknown (39). The global histone H3K14ac level was not altered in the rpt4-1 mutant (data not shown), indicating that the histone acetylation activity of Mst2 is not affected by the rpt4-1 mutation. However, Mst2 was recently shown to acetylate Brl1 (40), a component of the histone H2B ubiquitin-ligase complex; thus, it might be possible that Rpt4 is involved in regulating the H2B mono-ubiquitylation pathway (17).
The recruitment of the proteasome to pericentromeric heterochromatin may arise through poly-ubiquitinated substrates, such as Epe1 (32). However, the enrichment of the proteasome at heterochromatin is above the genomic average, suggesting that there may be different modes of proteasome recruitment. Because the proteasome is enriched at the nuclear membrane (NM) (33) and both the proteasome and the centromere are localized at the inner NM in fission yeast (53,54), the same nuclear compartment could account for the high enrichment of the proteasome at the centromere. In fact, NM proteins are reportedly involved in both regulating heterochromatin and anchoring the proteasome to the NM (55,56). Thus, it may be plausible that NM proteins are involved in recruiting the proteasome to the centromere.
A previous study identified cep (centromere enhancer of position effect) mutants in a genetic screen for mutants that affect centromeric silencing within the central core region. The authors found that mutations in the 19S RP subunit genes, rpt2+ and rpn11+ (cep2-12 and cep1-1, respectively), were responsible for the enhancement of heterochromatic silencing (33). The cep mutants were distinguished from the conventional proteolytic mts2-1 or mts3-1 mutants in that they did not accumulate short spindles. The authors hypothesized that substrate-specific degradation may have caused the phenotypes in cep mutants, but we propose that these phenotypes may actually have reflected the non-proteolytic activity we observed in this study. In fact, the rpt4-1 mutation enhanced the centromere core silencing (data not shown), suggesting the tantalizing possibility that there may be functional redundancy between the cep mutants and rpt4-1.
A recent study found that the 19S proteasome subunit, Rpt3, regulates the distribution of CENP-A (42). The C-terminal truncation mutant, rpt3-1, showed defective regulation of CENP-A, wherein CENP-A crossed the borders of the centromere core and spread to the otr. In our screening for heterochromatin mutants, we also acquired the rpt3-1 mutant. However, the rpt4-1 and rpt3-1 mutants were clearly distinct from one another. Besides the distinction that they are alleles of Rpt4 and Rpt3, respectively, the rpt4-1 mutant did not show temperature sensitivity in Cnp1-overexpressing cells (supplemental Fig. S4), which is a key phenotype of the rpt3-1 mutant. We also failed to observe the spreading of CENP-A (characteristic of rpt3-1) to the otr in rpt4-1 mutant cells (data not shown). Finally, the rpt3-1 mutant was found to be significantly defective in proteolysis (Fig. 2A), which is another important distinction from the non-proteolytic rpt4-1 mutant. Our findings suggest that the phenotypes of rpt3-1 cells might be an indirect effect of defective proteolysis, whereas those of rpt4-1 cells appear to be the direct consequence of a non-proteolytic function of the proteasome.
In conclusion, the proteasome is a highly conserved protein complex from yeast to humans. Our study demonstrates for the first time that the proteasome plays a role in heterochromatin regulation that is independent of its canonical proteolytic function. In the future, it would be intriguing to investigate whether this non-proteolytic function is conserved in higher eukaryotes, such as humans.
Yeast strains and plasmids
[Figure 7 legend (residual model): in wild-type cells, the proteasome actively degrades poly-ubiquitylated Epe1 in the repeat regions while also regulating Epe1 and/or Mst2 to facilitate boundary maintenance, which is sufficient to antagonize heterochromatin spreading; in rpt4-1 cells, degradation of poly-ubiquitylated Epe1 is unaffected, but regulation of Epe1 and/or Mst2 is impaired, so that boundary maintenance is insufficient and heterochromatin spreads beyond the natural border.]
The fission yeast strains used in this study are listed in supplemental Table S1. Standard procedures were used for growth and genetic manipulations. All strains were grown at 30°C unless otherwise stated. The deletion strains and tagged strains were generated using a PCR-based method. The wild-type and mutated Rpt4 ORFs were cloned into an S. pombe expression plasmid under the control of the nmt41 promoter, and the wild-type and point-mutated Rpt4 ORFs were confirmed by sequencing.
Screening of proteasome mutants
The wild-type ORFs encoding Rpt3, Rpt4, and Rpt6 were subjected to random mutagenesis by error-prone PCR using GeneMorph II random mutagenesis kits (Stratagene) according to the manufacturer's protocol. The mutated ORFs were fused with the 3′- and 5′-UTRs of their respective genes and the KanR cassette, and the generated constructs were transformed into an otr1::ade6+ reporter strain. Colonies showing both a white color when grown on YE plates and survival on PMG-Ade plates were selected. gDNAs of the selected colonies were sequenced to exclude false positives, and only colonies with mutation(s) in the respective ORF were selected. The proteasome mutants were then respotted on YE and PMG-Ade plates to test the effect of each mutation on heterochromatin silencing. The mutant with the strongest phenotype (rpt4-1) was selected. The effect of the rpt4-1 mutation (D249V) was further verified by introducing it into the wild-type strain.
ChIP and ChIP-seq analysis
ChIP was performed as reported previously (18), with minor modifications. Briefly, 2.4 × 10^8 cells were fixed in 1% formaldehyde for 15 min at room temperature, and cell extracts were prepared using the standard bead-beating method. Immunoprecipitation was performed overnight at 4°C using the following antibodies: anti-FLAG (F3165, Sigma), anti-H3K9me2 (ab1220, Abcam), anti-H4K16ac (made in-house), and anti-Swi6 (made in-house). Immunoprecipitated DNA was recovered using the Chelex-100 resin (Bio-Rad) and quantified by quantitative PCR using the primers listed in supplemental Table S2. For Epe1-FLAG ChIP, a quantity of S. cerevisiae cells corresponding to 1/9 of the original input cells was added prior to the cell lysis step as a spike-in control. ChIP-seq analysis was performed as described previously (57).
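As a generic illustration of how ChIP-qPCR signals of the kind reported here (a target locus such as dg normalized to act1+) can be derived from raw Ct values, the sketch below uses a common percent-of-input formulation; the Ct values, input fraction, and the assumption of ~100% amplification efficiency are illustrative only and are not the exact quantification scheme used in this study.

# Illustrative sketch: percent-of-input ChIP-qPCR enrichment at dg relative to act1+.
import math

def percent_input(ct_ip, ct_input, input_fraction=0.1):
    """Percent of input recovered in the IP, assuming ~100% PCR efficiency."""
    adjusted_input_ct = ct_input - math.log2(1.0 / input_fraction)
    return 100.0 * 2.0 ** (adjusted_input_ct - ct_ip)

# Made-up Ct values for demonstration only.
dg = percent_input(ct_ip=24.8, ct_input=21.0)
act1 = percent_input(ct_ip=29.5, ct_input=22.3)
print(f"dg enrichment relative to act1+: {dg / act1:.1f}-fold")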
Serial dilution assays
Strains or single colonies were spotted in 5-fold dilutions onto the appropriate plates and incubated for 3–4 days at 30°C. To assess the sensitivity to TBZ, serial dilutions were spotted onto YES containing 10 μg/ml TBZ.
Western blot analysis
Whole-cell extracts were prepared from logarithmically growing cells. Cells were harvested and resuspended in either trichloroacetic acid (for the blotting of Epe1) or 2× SDS-PAGE loading buffer containing 1 mM PMSF (for all other blots). The resuspended cells were vortexed with beads, boiled in SDS-PAGE loading buffer, and used for immunoblotting.
RNA-seq, small RNA purification, and library preparation for small RNA sequencing
RNAs were purified using the previously described hot-phenol method (58) and subjected to library preparation using a NEXTflex Illumina RNA-seq library preparation kit version 2 (BIOO) according to the manufacturer's instructions. Small RNAs were purified as described previously (48), and the obtained small RNAs were subjected to library preparation using a NEXTflex small RNA-seq kit version 3 (BIOO) according to the manufacturer's instructions. Each library was sequenced on a HiSeq2500 using the single-end method (50-bp reads). The adaptor sequences were automatically trimmed, and the processed reads were aligned to the S. pombe genome (ASM294v2) using the STAR (for RNA-seq) or Novoalign software packages. The bam2wig.py Python script from RSeQC (59) was used to further analyze the aligned read data. RNA-seq data of mst2Δ cells were obtained from GEO, accession number GSE93432.
Data availability
The sRNA and ChIP-seq data reported in this paper are available from GEO under accession number GSE97865. | 2018-04-03T03:49:46.580Z | 2017-08-07T00:00:00.000 | {
"year": 2017,
"sha1": "6301dd6385a957d887522b0ea7795a14e7f59049",
"oa_license": "CCBY",
"oa_url": "http://www.jbc.org/content/292/41/17144.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "Highwire",
"pdf_hash": "f5c8bb196f79f893ecc81495ba0a547c07936893",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
243779854 | pes2o/s2orc | v3-fos-license | RET Proto-Oncogene Mutational Analysis in 45 Iranian Patients Affected with Medullary Thyroid Carcinoma: Report of a New Variant
Background The aim of this study was to identify germline mutations of the RET (rearranged during transfection) gene in patients with medullary thyroid carcinoma (MTC) and their first-degree relatives to find presymptomatic carriers for possible prophylactic thyroidectomy. Methods/Patients. We examined all six hot spot exons (exons 10, 11, 13, and 14–16) of the RET gene by PCR and bidirectional Sanger sequencing in 45 Iranian patients with MTC (either sporadic or familial form) from 7 unrelated kindreds and 38 apparently sporadic cases. First-degree relatives of RET-positive cases were also genotyped for the index mutation. Moreover, presymptomatic carriers were referred to the endocrinologist for further clinical management and prophylactic thyroidectomy if needed. Results Overall, the genetic status of all of the participants was determined by RET mutation screening, including 61 affected individuals, 22 presymptomatic carriers, and 29 genetically healthy subjects. In 37.5% (17 of 45) of the MTC referral index patients, 8 distinct RET germline mutations were found, including p.C634R (35.3%), p.M918T (17.6%), p.C634Y (11.8%), p.C634F (5.9%), p.C611Y (5.9%), p.C618R (5.9%), p.C630R (5.9%), p.L790F (5.9%), and one uncertain variant p.V648I (5.9%). Also, we found a novel variant, p.H658R, in one of our apparently sporadic patients. Conclusion RET mutation detection is a promising/golden screening test and provides an accurate presymptomatic diagnostic test for at-risk carriers (the siblings and offspring of the patients) to consider prophylactic thyroidectomy. Thus, according to the ATA recommendations, screening of the RET proto-oncogene is indicated for patients with MTC.
Introduction
Calcitonin-secreting parafollicular C cells of the thyroid gland are the origin of 5-10% of thyroid cancers, the so-called medullary thyroid carcinoma (MTC). MTC has a worse prognosis than the most common form of thyroid cancer, papillary thyroid carcinoma (PTC), which accounts for 60-80% of thyroid carcinomas and originates from follicular cells [1]. MTC has two forms, sporadic (isolated) and hereditary, which comprise about 75% and 25% of the cases, respectively. Hereditary forms are transmitted in an autosomal dominant pattern of inheritance and are seen either as isolated familial MTC (FMTC), with a prevalence of 10%, or as a syndromic form of cancer, also known as multiple endocrine neoplasia type 2 (MEN2A and MEN2B, with prevalences of 85% and 5%, respectively) [2,3]. In familial medullary thyroid carcinoma, the only lesion present is MTC, whereas MEN2A is characterized by MTC and pheochromocytoma with or without parathyroid hyperplasia or adenoma [1–3].
PTC and MTC have clear clinical and pathological differences, but the RET proto-oncogene contributes to the carcinogenesis of both types. Activation of the RET gene by rearrangement (inversion or translocation) appears to be seen only in patients suffering from PTC [4] or thyroid adenoma [5], whereas germline missense mutations of the RET gene have been shown to be the cause of the hereditary form of MTC (MEN2) [6]. Interestingly, somatic mutations in RET have also been found in 23-70% of sporadic MTCs and in 10-15% of sporadic pheochromocytoma patients [7,8]. Gain-of-function mutations in exons 10, 11, 13, 14, and 15 of the RET gene have mostly been reported in MEN2A (e.g., codons 609, 611, 618, 620, and 634), but about 95% of the patients with MEN2B have a mutation in exon 16 (codon 918). Therefore, to obtain a reliable genetic screening test for MTC patients, exons 10, 11, 13, 14, 15, and 16 should be considered as the hot spots for RET gene mutations.
Identification of the germline mutation of the RET gene in a patient with MTC provides an accurate presymptomatic diagnostic test for the siblings and offspring of the patients. Here, we report the results of the mutational analysis of the RET proto-oncogene in 45 Iranian patients with MTC (either sporadic or familial form) from 7 unrelated kindreds and 38 apparently sporadic cases.
Patients and Specimen Collection
Seven unrelated families and 38 apparently sporadic cases with medullary thyroid carcinoma referred to the Endocrinology and Metabolism Research Institute of Tehran University of Medical Sciences participated in this study. First, the 45 index cases, whose MTC had been diagnosed by an endocrinologist, confirmed histopathologically, and treated by thyroidectomy based on the ATA guideline, received genetic counseling. Then, each index case was evaluated for mutations in the six hot spot exons of the RET gene, and segregation analysis was performed for their at-risk relatives if a causative mutation was found [3]. Overall, 112 participants were examined for RET mutations. All procedures were in accordance with the ethical standards of Tehran University of Medical Sciences and the Helsinki Declaration of 1975, as revised in 1983. Written informed consent was obtained from all participants or their parents for their participation and for the publication of the results.
Molecular Analysis.
Genomic DNA was extracted from EDTA peripheral venous blood using the standard salting-out/proteinase K method. Primers were designed to amplify all six exons, 10, 11, 13, 14, 15, and 16, and their exon–intron boundaries using the NCBI primer design tool (Primer-BLAST; https://www.ncbi.nlm.nih.gov/tools/primer-blast/) and Gene Runner software (version 6.0.28). Fifty nanograms of the extracted DNA were used as the PCR template; PCR conditions are available upon request. The PCR products were run on a 2% agarose gel, and subsequently all six exons were bidirectionally sequenced using the ABI3130 automated sequencer and analyzed with Chromas version 2 (http://chromas.software.informer.com/2.0/). The sequence data were compared with the RET reference sequence (RefSeq NM_020975) from the NCBI BLAST human database (http://www.ncbi.nlm.nih.gov/BLAST/) and the Ensembl human database (GRCh37).
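To illustrate how a substitution observed in the Sanger traces maps onto the protein-level nomenclature used below (for example, c.1900T>C yielding p.C634R), a minimal Biopython sketch is given here; the FASTA file name is hypothetical, and the script assumes that the coding sequence of RefSeq NM_020975 has been saved locally.

# Hypothetical sketch: map a coding substitution (e.g., c.1900T>C) onto the
# protein level using the RET coding sequence of RefSeq NM_020975.
# "ret_cds.fasta" is an assumed local file containing only the CDS.
from Bio import SeqIO
from Bio.Seq import Seq

def annotate_substitution(cds, pos, alt):
    """pos: 1-based CDS coordinate (the 'c.' position); alt: substituted base."""
    ref = cds[pos - 1]
    codon_number = (pos - 1) // 3 + 1              # 1-based codon index
    codon_start = (codon_number - 1) * 3
    ref_codon = cds[codon_start:codon_start + 3]
    offset = (pos - 1) % 3                         # position within the codon
    alt_codon = ref_codon[:offset] + alt + ref_codon[offset + 1:]
    ref_aa = str(Seq(ref_codon).translate())
    alt_aa = str(Seq(alt_codon).translate())
    return f"c.{pos}{ref}>{alt} -> p.{ref_aa}{codon_number}{alt_aa}"

cds = str(SeqIO.read("ret_cds.fasta", "fasta").seq).upper()
print(annotate_substitution(cds, 1900, "C"))       # expected output: p.C634R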
Clinical and Molecular Findings.
Of the 45 referred MTC patients, 7 cases had FMTC or MEN syndrome with more than one affected member, and thirty-eight patients seemed to have apparently sporadic MTC. All six RET exons (exons 10, 11, 13, 14, 15, and 16) were sequenced in the 45 index patients. Eight known distinct RET mutations, one unclassified variant, and one novel variant were detected (Table 1).
The family IR-F4 was an extended kindred with two affected individuals in the second generation (II-2 and II-3), two affected individuals (III-3 and III-10) and three carriers (III-5 to III-7) in the third generation, and two carriers in the fourth generation (IV-2 and IV-8) (Figure 1(a)). The 44-year-old proband III-3 and the other affected individuals of the pedigree had a known heterozygous mutation (c.2370G>T) in exon 13 (p.L790F). According to the ATA risk classification, the p.L790F mutation is stratified as ATA-MOD (level A), indicating a moderate risk of aggressive MTC with an approximately 10% incidence of PHEO. Carriers of this mutation are advised to undergo prophylactic thyroidectomy only if it occurs as FMTC or if they have a high calcitonin level. Therefore, these carriers were referred to the endocrinologist for further clinical follow-up.
The proband of the family IR-F15 (II-2) also had a known mutation (c.1900T>C) in exon 11 (p.C634R). Her mother (I-2) had died of MTC and was not available, but two other family members did not carry any RET mutations in the six tested exons (Figure 1(b)).
The family IR-F18 was another family with three affected daughters (II-2, II-6, and II-7), one affected son (II-4), three affected grandchildren (III-3, III-4, and III-8), and one carrier granddaughter (III-6) (Figure 1(c)). A known heterozygous mutation in exon 11 (c.1900T>C, p.C634R) was found in all affected individuals (II-2, II-4, II-6, II-7, III-3, III-4, and III-8) and in the carrier granddaughter (III-6). Interestingly, individual III-6 had two genetically normal triplets. According to the ATA risk classification, she was advised to undergo prophylactic thyroidectomy before 5 years of age.
The family IR-F25 was an extended kindred consisting of a healthy father (II-3) and an affected mother (II-4) with one affected son (III-1), one affected daughter (III-2), two carrier daughters (III-4 and III-6), one phenotypically healthy son (III-3), one genetically healthy daughter (III-5), three carrier grandsons (IV-1, IV-4, and IV-9), one carrier granddaughter (IV-2), and three genetically healthy grandchildren (IV-3, IV-7, and IV-8) (Figure 1(d)). The 63-year-old proband (II-4) was diagnosed with MEN2A (MTC, pheochromocytoma (PHEO), and hyperparathyroidism) and underwent thyroidectomy, parathyroidectomy, and adrenalectomy. Sequencing of the six RET exons in the proband and her affected children revealed a heterozygous mutation (c.1888T>C) in exon 11 (p.C630R), classified as a pathogenic variant. All carrier relatives of the c.1888T>C mutation with elevated calcitonin levels (III-4, III-6, IV-1, IV-4, and IV-9) underwent prophylactic thyroidectomy at the time of diagnosis (at ages 36, 24, 13, 12, and 5 years, respectively). The 19-year-old carrier granddaughter (IV-2), who had a normal calcitonin level, did not undergo prophylactic thyroidectomy. However, according to the ATA risk level, the p.C630R RET mutation is stratified as ATA-MOD (level B) with a moderate risk of aggressive MTC, and patients are recommended to receive prophylactic thyroidectomy before the age of 5 years.
The family IR-F31 was another kindred comprising an affected father (I-1), a healthy mother (I-2), two affected daughters (II-2 and II-4), one affected son (II-5), one genetically healthy son (II-7), and two carrier grandsons (III-3 and III-4) (Figure 1(e)). MEN2A was diagnosed in the 45-year-old grandfather, the proband, with clinical manifestations of MTC and pheochromocytoma, and he underwent thyroidectomy and adrenalectomy. His affected children also had MEN2A, including MTC and pheochromocytoma. We identified a known heterozygous mutation in exon 11 (c.1900T>C, p.C634R) in this family, which has been classified as a pathogenic variant with a high ATA risk (ATA-H, level C). Prophylactic thyroidectomy is recommended for this mutation before 5 years of age because of the high risk of aggressive MTC. For follow-up, the two carrier grandsons (III-3 and III-4) were referred to the endocrinologist and underwent prophylactic thyroidectomy at age 5.
The family IR-F33 had a known RET mutation (c.1852T>C) in exon 10 (p.C618R) (Figure 1(f)). The proband (II-5) was first referred for RET genetic testing because of a high calcitonin level (above 1,500), and histopathological examination showed MTC. The heterozygous mutation (c.1852T>C, p.C618R) in exon 10 was initially detected in the proband (II-5) at age 27 and subsequently found in his affected father (I-1) and three carrier siblings (II-1, II-2, and II-3). According to the ATA risk classification, the p.C618R mutation is stratified as ATA-MOD (level B), and carriers are recommended to undergo prophylactic thyroidectomy before 5 years of age. Follow-up visits showed that the calcitonin level remained above the normal range (300 pg/ml) in II-5 even after prophylactic thyroidectomy.
Thirty-eight index patients who seemed to have apparently sporadic MTC were tested for RET mutations in the six mentioned exons, and the results showed that 10 out of 38 patients had germline RET mutations. In addition, in four families, we identified four presymptomatic carrier children of affected parents, including family IR-F14 with the c.1832G>A mutation in exon 10 (p.C611Y), families IR-F17 and IR-F29 with the c.1900T>C mutation in exon 11 (p.C634R), and family IR-F22 with the c.1901G>T mutation in exon 11 (p.C634F).
In the family IR-F17, a 32-year-old affected mother (II-8) with a p.C634R mutation had a 4-year-old daughter with elevated calcitonin who was found to carry the same mutation as her mother. The proband of the family IR-F29 (III-16) had an asymptomatic 10-year-old boy who carried the same mutation (p.C634R). Moreover, a 15-year-old carrier of p.C611Y in the family IR-F14, a mutation with a moderate ATA risk (ATA-MOD, level B), and a 12-year-old carrier of p.C634F in the family IR-F22 were found. All of the carriers in these four families were referred to the endocrinologist for further clinical follow-up and prophylactic thyroidectomy. Furthermore, three index patients in families IR-F11, IR-F23, and IR-F37 had a known RET germline mutation (c.2753T>C) in exon 16 (p.M918T), which is a MEN2B-specific mutation with the highest ATA risk (ATA-HST, level D); however, no available persons in their families carried this mutation. The apparently sporadic patient IR-F39 had a known RET mutation (c.1901G>A) in exon 11 (p.C634Y). Likewise, another apparently sporadic patient (IR-F38) had a RET germline mutation (p.C634Y) and a novel uncertain variant (c.1973A>G) causing p.H658R, both in exon 11. The novel RET variant p.H658R was predicted to be of uncertain significance by InterVar, benign by PolyPhen-2, disease-causing by MutationTaster, and tolerated (SIFT score 0.5) by the SIFT web server (Table 2). RET sequencing of another sporadic patient (IR-F19, II-7) revealed an uncertain variant, c.1942G>A, in exon 11 (p.V648I). This variant was predicted to be likely benign by InterVar, benign by PolyPhen-2, disease-causing by MutationTaster, and tolerated (SIFT score 0.43) by the SIFT web server (Table 2).
Discussion
Medullary thyroid cancer (MTC) is responsible for about 10% of thyroid cancers, but its prognosis is worse than that of the most common thyroid cancer, papillary thyroid cancer (PTC). However, 25% of MTCs are hereditary and are known as MEN2 syndromes, including MEN2A and MEN2B. Germline missense mutations in the hot spot exons of the RET gene (exons 10, 11, and 13–16) cause 90% of all MEN2 syndromes. Biochemical tests used for screening at-risk cases of MEN2 yield false positive/negative results; by contrast, RET mutation detection is a promising/golden screening test. Accurate identification of genotype–phenotype correlations in MEN2 syndromes is essential for presymptomatic detection of carriers and clinical management of affected subjects. In addition, RET genetic testing can contribute to distinguishing between the three types of MEN2 syndromes, which share some clinical manifestations. According to the results, 37.5% (17 out of 45) of the MTC index patients had nine distinct RET germline mutations, including p.C634R (35.3%, 6 patients), p.M918T (17.6%, 3 patients), p.C634Y (11.8%, 2 patients), p.C634F (5.9%, 1 patient), p.C611Y (5.9%, 1 patient), p.C618R (5.9%, 1 patient), p.C630R (5.9%, 1 patient), and p.L790F (5.9%, 1 patient). The distribution of the detected variants across the hot spot exons of the RET gene was as follows: 64.7% (11/17) in exon 11, 11.8% (2/17) in exon 10, 5.9% (1/17) in exon 13, and 17.6% (3/17) in exon 16. The ATA guideline also indicates that the p.C634R mutation is seen in FMTC (as a type of MEN2A). Substitution mutations at codon 634 in exon 11, especially cysteine to arginine, are predominantly seen in Iranian MEN families [10–14]. It has been stated that mutations at codon 634 are the most frequent mutations in Caucasians [15].
This mutation comprised 53% of all detected mutations in this study. The present study showed that mutations in exon 11, especially at codon 634, are commonly observed in the Iranian population, which is consistent with other studies [10–14].
Interestingly, follow-up studies in the IR-F25 family showed that a 19-year-old carrier, IV-2, did not manifest any MEN2A phenotypes, unlike the other carriers. The carrier IV-2 might harbor modifier genes that allow her to remain a healthy carrier with a normal calcitonin level until 19 years of age. In addition, a review of the literature showed that this is only the second report of the p.L790F mutation in Iranian families with FMTC [16]. Similarly, this is the second report of the p.C618R mutation in Iranian MTC families [13]; nevertheless, p.C618R is the most common MEN2A mutation in Saudi Arabian families [17]. The present study identified five distinct RET mutations in ten out of thirty-eight (26.3%) apparently sporadic Iranian MTC cases, including two p.C634R, two p.C634Y, one p.C634F, three p.M918T, and one p.C611Y mutations. Additionally, one uncertain RET variant (p.V648I) was detected in an apparently sporadic Iranian MTC patient for the first time. Moreover, this is the first report of an apparently sporadic Iranian MTC patient carrying a p.C634Y mutation together with the novel uncertain variant p.H658R. Further molecular studies are needed to clarify the clinical effect of these two variants. On the other hand, in this study, the p.M918T mutation was another common mutation in sporadic Iranian MTC cases, which is inconsistent with findings in European populations [11,16]. To the best of our knowledge, the p.M918T mutation has previously been reported in only two cases and has not been detected in the Iranian Azari and other populations [10–14]. Different genetic backgrounds could explain these conflicting results, which require further investigation. It also seems that some modifier polymorphisms found in several carriers of RET mutations in our study might contribute to MTC manifestations [18]; additional genetic evidence is required to clarify the additive effect of such modifier polymorphisms.
In conclusion, the evaluation of RET mutations in hot spot exons resulted in a genetic diagnosis in 37.5% of the Iranian MTC patients. The genotype of all participants in this study was precisely determined, and 22 presymptomatic carriers were referred for further clinical management and prophylactic thyroidectomy. Thus, RET mutation screening could serve as an essential detection method for any MTC patient to identify presymptomatic RET carriers.
Data Availability
The data used to support the findings of this study are included within the article. | 2021-11-06T15:14:40.435Z | 2021-11-03T00:00:00.000 | {
"year": 2021,
"sha1": "6f1df149a914933f066abc4e65b5e973da345df9",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1155/2021/7250870",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "467b7c463d8fe8b5dc96e588039c40e1dbc4d155",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
34253140 | pes2o/s2orc | v3-fos-license | Complexity Concepts and Non-Integer Dimensions in Climate and Paleoclimate Research
Introduction
The ongoing global climate change has severe effects on the entire biosphere of the Earth. According to the most recent IPCC report [1], it is very likely that anthropogenic influences (like the increased discharge of greenhouse gases and a gradually intensifying land-use) are important driving factors of the observed changes in both the mean state and variability of the climate system. However, anthropogenic climate change competes with the natural variability on very different time-scales, ranging from decades up to millions of years, which is known from paleoclimate reconstructions. Consequently, in order to understand the crucial role of man-made influences on the climate system, an overall understanding of the recent system-internal variations is necessary.
The climate during the Anthropocene, i.e. the most recent period of time in climate history that is characterized by industrialization and mechanization of the human society, is well recorded in direct instrumental measurements from numerous meteorological stations. In contrast to this, there is no such direct information available on the climate variability before this epoch. Besides enormous efforts regarding climate modeling, conclusions about climate dynamics during time intervals before the age of industrial revolution can only be derived from suitable secondary archives like tree rings, sedimentary sequences, or ice cores. The corresponding paleoclimate proxy data are given in terms of variations of physical, chemical, biological, or sedimentological observables that can be measured in these archives. While classical climate research mainly deals with understanding the functioning of the climate system based on statistical analyses of observational data and sophisticated climate models, paleoclimate studies aim to relate variations of such proxies to those of observables with a direct climatological meaning. Classical methods of time series analysis used for characterizing climate dynamics often neglect the associated multiplicity of processes and spatio-temporal scales, which result in a very high number of relevant, nonlinearly interacting variables that are necessary for fully describing the past, current, or future state of the climate system. As an alternative, during the last decades concepts for the analysis of complex data have been developed, which are mainly motivated by findings originated within the theory of nonlinear deterministic dynamical systems. Nowadays, a large variety of methods is available for the quantification of the nonlinear dynamics recorded in time series [2,3,4,5,6,7,8,9], including measures of predictability, dynamical complexity, or short-as well as long-term scaling properties, which characterize the dynamical properties of the underlying deterministic attractor. Among others, fractal dimensions and associated measures of structural as well as dynamical complexity are some of the most prominent nonlinear characteristics that have already found wide use for time series analysis in various fields of research.
This chapter reviews and discusses the potentials and problems of fractal dimensions and related concepts when applied to climate and paleoclimate data. Available approaches based on the general idea of characterizing the complexity of nonlinear dynamical systems in terms of dimensionality concepts can be classified according to various criteria. Firstly, one can distinguish between methods based on dynamical characteristics estimated directly from a given univariate record and those based on a (low-dimensional) multivariate projection of the system reconstructed from the univariate signal. Secondly, one can classify existing concepts related to non-integer or fractal dimensions into self-similarity approaches, complexity measures based on the auto-covariance structure of time series, and complex network approaches. Finally, an alternative classification takes into account whether or not the respective approach utilizes information on the temporal order of observations or just their mutual similarity or proximity. In the latter case, one can differentiate between correlative and geometric dimension or complexity measures [10]. Table 1 provides a tentative assignment of the specific approaches that will be further discussed in the following. It shall be noted that this chapter neither gives an exhaustive classification, nor provides a discussion of all existing or possible approaches. In turn, the development of new concepts for complexity and dimensionality analysis of observational data is still an active field of research.
In order to illustrate the specific properties of the different approaches discussed in this chapter, the behavior of surface air temperature data is studied. Specifically, the data utilized in the following are validated and homogenized daily mean temperatures for 2342 meteorological stations distributed over Germany (Figure 1) and covering the time period from 1951 to 2006. The raw data have been originally obtained by the German Weather Service for a somewhat lower number of stations before being interpolated and post-processed by the Potsdam Institute for Climate Impact Research for the purpose of validating regional climate simulations ("German baseline scenario"). Before any further analysis, the annual cycle has been removed by means of phase averaging (i.e. subtracting the long-term climatological mean for each calendar day of the year and dividing the residuals by the corresponding empirical standard deviation estimated from the same respective day of all years in the record). This pre-processing step is necessary since the annual cycle gives the main contribution to the intra-annual variability of surface air temperatures in the mid-latitudes and would thus lead to artificially strong correlations on short to intermediate time-scales (i.e. days to weeks) [11]. In addition, since some of the methods to be discussed can exhibit a considerable sensitivity to non-stationarity, linear trends for the residual mean temperatures are estimated by a classical ordinary least-squares approach and subtracted from the de-seasoned record.
The remainder of this chapter will follow the path from established self-similarity concepts and fractal dimensions (Section 2) over complexity measures based on the auto-covariance structure of time series (Section 3) to modern complex network based approaches of time series analysis (Section 4). Mutual similarities and differences between the individual approaches are addressed. The performance of the different approaches is illustrated using the aforementioned surface air temperature records. Subsequently, the problem of adapting the considered methods to time series with non-uniform (and possibly unknown) sampling as common in paleoclimatology is briefly discussed (Section 5).
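A minimal Python sketch of the pre-processing described above (phase averaging over the calendar-day climatology, followed by ordinary least-squares detrending) is given below; the synthetic input series and all variable names are illustrative assumptions and do not reproduce the actual station data.

# Illustrative sketch: phase averaging (calendar-day climatology) and linear
# detrending of a daily temperature series; all inputs are synthetic.
import numpy as np
import pandas as pd

def deseason_and_detrend(temps):
    """temps: pandas Series of daily means indexed by a DatetimeIndex."""
    day = temps.index.dayofyear
    clim_mean = temps.groupby(day).transform("mean")
    clim_std = temps.groupby(day).transform("std")
    anomalies = (temps - clim_mean) / clim_std              # standardized anomalies
    t = np.arange(len(anomalies))
    slope, intercept = np.polyfit(t, anomalies.values, 1)   # OLS linear trend
    return anomalies - (slope * t + intercept)

idx = pd.date_range("1951-01-01", "2006-12-31", freq="D")
raw = pd.Series(10 + 8 * np.sin(2 * np.pi * idx.dayofyear / 365.25)
                + np.random.randn(len(idx)), index=idx)
residual = deseason_and_detrend(raw)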
Self-similarity approach to fractal dimensions
The notion of fractal dimensions has originally emerged in connection with self-similar sets such as Cantor sets or self-similar curves or objects embedded in a metric space [9]. The most classical approach to quantifying the associated scaling properties is counting the number of boxes needed to cover the fractal object under study in dependence on the associated length scale, which behaves like a power-law for fractal systems. More formally, studying the asymptotic behavior of the double-logarithmic dependence between number and size of hypercubes necessary to cover a geometric object with ever decreasing box size defines the box-counting dimension (often also simply called "the" fractal dimension)
D_0 = -\lim_{\epsilon \to 0} \frac{\log N(\epsilon)}{\log \epsilon},    (1)
where N(\epsilon) is the number of hypercubes of side length \epsilon needed to cover the object. Given a trajectory of a complex system in a d-dimensional space that is supposed to correspond to an attractive set, covering the volume captured by this trajectory by hypercubes in the way described above allows estimating the fractal dimension of the associated attractor. More generally, considering the probability mass of the individual boxes, p_i, one can easily generalize the concept of box-counting dimensions to so-called Renyi dimensions [12,13]
D_q = \lim_{\epsilon \to 0} \frac{1}{q-1} \frac{\log \sum_i p_i^q}{\log \epsilon},    (2)
which give different weights to parts of phase space with high and low density (in fact, the box coverage probabilities p_i serve as naïve estimators of the coarse-grained invariant density p(x) of the dynamical system under study). The special cases q = 0, 1, 2 are referred to as the box-counting (or capacity), information, and correlation dimension.
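To make the box-counting formalism of Equations (1) and (2) concrete, the following naïve Python sketch estimates D_q from histogram-based box probabilities for a point set rescaled to the unit hypercube; it is meant purely as an illustration and not as an optimized or bias-corrected estimator.

# Naive sketch: Renyi (generalized) dimension D_q from histogram-based box
# probabilities for a point set rescaled to the unit hypercube.
import numpy as np

def renyi_dimension(points, q=2.0, box_sizes=(0.1, 0.05, 0.02, 0.01)):
    log_eps, log_sum = [], []
    for eps in box_sizes:
        bins = int(np.ceil(1.0 / eps))
        hist, _ = np.histogramdd(points, bins=bins,
                                 range=[(0.0, 1.0)] * points.shape[1])
        p = hist.ravel() / points.shape[0]
        p = p[p > 0]
        if np.isclose(q, 1.0):                   # information dimension limit
            log_sum.append(np.sum(p * np.log(p)))
        else:
            log_sum.append(np.log(np.sum(p ** q)) / (q - 1.0))
        log_eps.append(np.log(1.0 / bins))       # effective box size
    slope, _ = np.polyfit(log_eps, log_sum, 1)
    return slope

pts = np.random.rand(200000, 2)                  # uniformly filled unit square
print(renyi_dimension(pts, q=2.0))               # expected to be close to 2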
In typical situations, only a univariate time series is given, which can be understood as a low-dimensional projection of the dynamics in the true higher-dimensional phase space. In such cases, it is possible to reconstruct the unobserved components in a topologically equivalent way by means of so-called time-delay embedding [14], i.e. by considering vectors
\vec{x}_i = (x_i, x_{i+\tau}, \ldots, x_{i+(N-1)\tau}),    (3)
where the unknown parameters N and \tau (embedding dimension and delay, respectively) need to be appropriately determined. The basic idea is that the components of the thus reconstructed state vectors are considered to be independent of each other in some feasible sense, thus representing the dynamics of different observables of the studied system. There are some standard approaches for estimating proper values for the two embedding parameters. On the one hand, the delay can be inferred by considering the time after which the serial correlations have vanished (first root of the auto-correlation function) or become statistically insignificant (de-correlation time) - in these cases, the resulting components of the reconstructed state space are considered linearly independent. Alternatively, a measure for general statistical dependence such as mutual information can be considered to estimate the time after which all relevant statistical auto-dependences have vanished [15]. On the other hand, the embedding dimension is traditionally estimated by means of the false nearest-neighbor method, which considers the changes in neighborhood relationships among state vectors if the dimension of the reconstructed phase space is increased by one. Since such changes indicate the presence of projective effects occurring when considering a too low embedding dimension, looking for a value of N for which the neighborhood relationships between the sampled state vectors do not change anymore provides a feasible estimate of the embedding dimension [16]. An alternative approach is considering the so-called singular system analysis (SSA), which allows determining the number of statistically relevant eigenvalues of the correlation matrix of the high-dimensionally embedded original record as an estimate of the true topological dimension of the system under study [17,18].
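A compact Python sketch of the delay embedding of Equation (3) is shown below, with the delay chosen as the first root of the sample auto-correlation function as described above; the false nearest-neighbor criterion for the embedding dimension is not implemented here, and the surrogate input series is purely illustrative.

# Sketch: time-delay embedding (Equation (3)); the delay is taken as the first
# root of the sample auto-correlation function.
import numpy as np

def first_acf_root(x, max_lag=500):
    x = (x - x.mean()) / x.std()
    n = len(x)
    for lag in range(1, max_lag):
        if np.mean(x[:n - lag] * x[lag:]) <= 0:
            return lag
    return max_lag

def delay_embedding(x, dim, delay):
    """Return the (number of vectors, dim) array of delay vectors."""
    n_vectors = len(x) - (dim - 1) * delay
    return np.column_stack([x[i * delay:i * delay + n_vectors]
                            for i in range(dim)])

x = np.sin(0.05 * np.arange(5000)) + 0.5 * np.random.randn(5000)  # surrogate record
tau = first_acf_root(x)
vectors = delay_embedding(x, dim=3, delay=tau)
print(tau, vectors.shape)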
Having reconstructed the attractor by finding a reasonable approximation of its original phase space as described above, one may proceed with estimating the fractal dimensions by means of box-counting. However, since this approach requires studying the limit of many data, it may become unfeasible for analyzing real-world observational time series of a given length. As alternatives, other approaches have been proposed for estimating some of the generalized fractal dimensions D_q, with the Grassberger-Procaccia algorithm for the correlation dimension [19,20] as the probably most remarkable example. Details on corresponding approaches can be found in any contemporary textbook on nonlinear time series analysis.
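A naïve sketch of the Grassberger-Procaccia correlation sum, assuming Python and ignoring refinements such as a Theiler window for temporally correlated points and a careful selection of the scaling range, might look as follows (all names are illustrative):

```python
import numpy as np

def correlation_dimension(vectors, radii):
    """Rough Grassberger-Procaccia estimate of D_2 from embedded state vectors.

    The correlation sum C(r) is the fraction of pairs of state vectors closer
    than r; D_2 is the slope of log C(r) versus log r in the scaling range.
    The radii must be chosen such that C(r) > 0 for all of them."""
    vectors = np.asarray(vectors, dtype=float)
    # pairwise Euclidean distances (memory-intensive; fine for moderate sample sizes)
    diff = vectors[:, None, :] - vectors[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    pair_dist = dist[np.triu_indices(len(vectors), k=1)]
    corr_sum = np.array([(pair_dist < r).mean() for r in radii])
    slope, _ = np.polyfit(np.log(radii), np.log(corr_sum), 1)
    return slope
```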
A noteworthy alternative to considering fractal dimension estimates based on phase space reconstruction has been introduced by Higuchi [21,22], who studied the behavior of the curve length associated with a univariate time series in dependence on the level of coarse-graining k. For a record x_1, \ldots, x_T, the curve length at coarse-graining level k is obtained by averaging

L_m(k) = \frac{T-1}{\lfloor (T-m)/k \rfloor \, k^2} \sum_{i=1}^{\lfloor (T-m)/k \rfloor} \left| x_{m+ik} - x_{m+(i-1)k} \right|

over all offsets m = 1, \ldots, k (where \lfloor \cdot \rfloor denotes the integer part), which scales as L(k) \propto k^{-D_0} with a characteristic exponent corresponding to the fractal dimension D_0. Figure 2 shows the actual behavior of the thus computed curve length with varying coarse-graining level k (equivalent to the embedding delay \tau in Equation (3)) for the daily mean temperature record from Potsdam. One can see that there are two distinct scaling regimes corresponding to time scales up to about one week and above about ten days. For the shorter time-scales, the slope of the linear fit in the double-logarithmic plot yields values between 1.6 and 1.7, which are of the order of magnitude that is to be expected for low-dimensional chaotic systems with two topological dimensions (note that the drawing of the curve underlying the definition of the curve length L(k) corresponds to a two-dimensional space). In turn, for larger time scales, the slope of the considered function takes values around 2, implying that the dynamics on these time-scales is less structured and resembles a random walk without the distinct presence of an attractive set in phase space with a lower (fractal) dimension. It should be emphasized that the shorter range of time-scales appears to be coincident with typical durations of large-scale weather regimes, whereas the second range of time-scales exceeds the predictability limit of atmospheric dynamics. The difference between both scaling regimes becomes even more remarkable when studying the corresponding spatial pattern displayed by all 2342 meteorological stations in Germany (Figure 3). On the shorter time-scales, the fractal dimension is significantly enhanced in the easternmost part of the study area, whereas the same region shows the lowest values of D_0 on the longer time-scales. The presence of two different ranges of time-scales with distinctively different spatial patterns is actually not unique to the fractal dimension, but can also be observed for other complexity measures (see Section 3.5 of this chapter). The probable reason for this finding is the presence of atmospheric processes (related to more marine and continental climates as well as low- and highlands) affecting the different parts of the study area in different ways on short and long time-scales. A more detailed climatological interpretation of this finding is beyond the scope of the present work.
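A compact Python sketch of the Higuchi curve-length estimator described above (illustrative only; variable names are not from the original text) is:

```python
import numpy as np

def higuchi_fd(x, k_values):
    """Higuchi estimate of the fractal dimension of a univariate series.

    For each coarse-graining level k the curve lengths L_m(k) are averaged over
    all offsets m; the dimension is minus the slope of log L(k) versus log k."""
    x = np.asarray(x, dtype=float)
    T = len(x)
    mean_lengths = []
    for k in k_values:
        lengths = []
        for m in range(1, k + 1):
            n_steps = (T - m) // k
            if n_steps < 1:
                continue
            idx = m - 1 + np.arange(n_steps + 1) * k       # points m, m+k, m+2k, ...
            increments = np.abs(np.diff(x[idx])).sum()
            lengths.append(increments * (T - 1) / (n_steps * k * k))
        mean_lengths.append(np.mean(lengths))
    slope, _ = np.polyfit(np.log(list(k_values)), np.log(mean_lengths), 1)
    return -slope

# usage: Gaussian white noise should yield a value close to 2
rng = np.random.default_rng(0)
print(higuchi_fd(rng.standard_normal(10000), k_values=range(2, 20)))
```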
Complexity measures based on serial correlations
As an alternative to concepts based on classical fractal theory, scaling properties based on the linear auto-covariance structure of time series data also contain valuable information.
Corresponding approaches utilizing basic methods from multivariate statistics have been referred to as multivariate dimension estimates [11,23,24,25] and provide meaningful characteristics that can be reliably estimated even from rather short time series, which still constitute a fundamental limit for classical fractal dimension analysis.
The original motivation for the introduction of multivariate dimension estimates to climate research has been that the ''complete'' information about the climate of the past requires considering a set of complementary variables, which form a multivariate time series. The fraction of dynamically relevant observables, which is interpreted as a measure for the average information content of a given variable, can vary itself with time due to the nonstationarity of the climate system. Temporal changes of this information content, i.e. of the effective ''dimension'' of the record, can therefore serve as an indicator for changes in environmental conditions and the corresponding response of the climate system. Moreover, widely applicable ideas from the theory of nonlinear deterministic processes can be used to adapt this approach to univariate time series. In the following, the mathematical background of the corresponding approach will be detailed.
Dimensionality reduction of multivariate time series
Quantifying the number of dynamically relevant components in multivariate data sets commonly requires an appropriate statistical decomposition of the data into univariate components with a well-defined variance. In the most common case, these components are required to be orthogonal in the vector space spanned by the original observables, i.e. linearly independent. A corresponding decomposition (typically with the aim of achieving a suitable dimensionality reduction of a given high-dimensional data set) is commonly realized by means of principal component analysis (PCA) [26,27], which is also referred to as empirical orthogonal function (EOF) analysis or Karhunen-Loève decomposition (KLD) depending on the particular scientific context and application. The basic idea behind this technique is that a proper basis adjusted to the directions of strongest (co-)variation in a multivariate data set can be identified using a principal axis transform of the corresponding correlation matrix. In this case, the associated eigenvectors of the correlation matrix contain weights for linear superpositions of the original observables that result in the largest possible variance. The actual amplitude of this variance is characterized by the associated non-negative eigenvalues.
Technically, consider simultaneous records X_{ij} of different observables X^{(j)} at times t_i combined in a T x N-dimensional data set X = (X_{ij}) with column vectors representing T successive observations of the same quantity and row vectors containing the simultaneous measurements of N different observables. Here, the columns of X may represent different variables measured at the same location or object, or spatially distributed records of the same observable or different variables. The associated correlation matrix is given as the covariance (or scatter) matrix S = Y^T Y, where the matrix Y is derived from X by subtracting the column means from all columns of X and then dividing the residual column vectors by their standard deviations. It should be emphasized that column mean and standard deviation represent here estimates of the expectation value and expected standard deviation of the respective observable. The elements of S are the linear (Pearson) correlation coefficients between all pairs of variables, which provide reasonable insights into mutual linear interrelationships between the different variables if the observations are normally distributed or the sample size is sufficiently large to neglect the former requirement according to the central limit theorem. By definition, S is symmetric and positive semidefinite, i.e. has only non-negative eigenvalues \sigma_i^2. Without loss of generality, one may arrange these N eigenvalues in descending order and interpret them as the variances of the principal components of X given by the corresponding eigenvectors.
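As a minimal illustration (Python assumed; the explicit normalization of S by the sample size is a choice made here so that its entries are Pearson correlation coefficients), the eigenvalue spectrum underlying the PCA step can be obtained as:

```python
import numpy as np

def correlation_eigenvalues(X):
    """Eigenvalues (component variances) of the correlation matrix of a
    T x N data matrix X, sorted in descending order."""
    X = np.asarray(X, dtype=float)
    Y = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)   # standardized columns
    S = (Y.T @ Y) / (len(X) - 1)                       # Pearson correlation matrix
    return np.linalg.eigvalsh(S)[::-1]                 # real, non-negative, descending
```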
It shall be noted that there are various generalizations of linear PCA, involving decompositions of multivariate data sets into projections onto curved manifolds that take the place of the orthogonal eigenvectors describing the classical linear principal components. Due to the considerably higher computational efforts for identifying these objects in the underlying vector space and correctly attributing the associated component variances, corresponding methods like nonlinear PCA [28], isometric feature mapping (Isomap) [29], or independent component analysis (ICA) [30], to mention only a few examples, will not be further discussed here, but provide possibilities for generalizing the approach detailed in the following.
KLD dimension density
The idea of utilizing PCA for quantifying the number of dynamically relevant components, i.e. transferring this traditional multivariate statistical technique into a dynamical systems context, is not entirely new. In fact, it has been used as early as in the 1980s for identifying the proper embedding dimension for univariate records based on SSA (see Section 2 of this chapter). Ciliberti and Nicolaenko [31] used PCA for quantifying the number of degrees of freedom in spatially extended systems. Since these degrees of freedom can be directly associated with the fractal dimension or Lyapunov exponents of the underlying dynamical system [32,33,34], it is justified to interpret the number of dynamically relevant components in a multivariate record as a proxy for the effective dimensionality of the corresponding dynamical system.
More formally, Zoldi and Greenside [35,36,37,38] suggested using PCA for determining the number of degrees of freedom in spatially extended systems by considering the minimum number of principal components required to describe a fraction f (0<f<1) of the total variance of a multivariate record. Let \sigma_i^2, i=1,\ldots,N, again be the non-negative eigenvalues of the associated correlation matrix S given in descending order. The aforementioned number of degrees of freedom, which is referred to as the KLD dimension, can then be defined as follows [23]:

D_{KLD}(f) = \min \left\{ M : \sum_{i=1}^{M} \sigma_i^2 \Big/ \sum_{i=1}^{N} \sigma_i^2 \geq f \right\}.

For spatially extended chaotic systems, it has been shown that the KLD dimension increases linearly with the system size N, i.e. the number of simultaneously recorded variables [37]. This motivates the study of a normalized measure, the KLD dimension density \delta_{KLD} = D_{KLD}/N, instead of D_{KLD} itself.
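A short sketch of the corresponding computation (hypothetical Python helper, consistent with the definition above) is:

```python
import numpy as np

def kld_dimension_density(eigvals, f=0.9):
    """KLD dimension density delta_KLD = D_KLD / N: the fraction of principal
    components needed to explain a variance fraction f."""
    ev = np.sort(np.asarray(eigvals, dtype=float))[::-1]
    cumulative = np.cumsum(ev) / ev.sum()
    d_kld = int(np.searchsorted(cumulative, f) + 1)   # smallest M with explained variance >= f
    return d_kld / len(ev)
```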
LVD dimension density
While the KLD dimension density can be widely applied for characterizing complex spatiotemporal dynamics based on large data sets (i.e. both N and T are typically large), it reaches its conceptual limits when being applied to multivariate data sets with a small number of simultaneously measured variables (small N), or used for studying non-stationary dynamics in a moving-window framework (small T). On the one hand, small N implies that KLD can only have very few distinct values (i.e. multiples of 1/N), so that small changes in the covariance structure of the considered data set may lead to considerably large changes of the value of this measure. On the other hand, short data sets (small T) imply problems associated with the statistical estimation of correlation coefficients between individual variables (particularly large standard errors and the questionable reliability of the Pearson correlation coefficient as a measure for linear interrelationships in the presence of non-Gaussian distributions). However, both cases can have a considerable relevance in the field of geoscientific data analysis.
As an alternative, Donner and Witt [11,23,24,25] suggested studying the characteristic functional behavior of KLD in dependence on the explained variance fraction f. Specifically, if the residual variances decayed exponentially, i.e.
the KLD dimension density would scale as

\delta_{KLD}(f) \approx -\,\delta^*(f)\, \log(1-f)

in the limit of large N. The resulting coefficient \delta^*(f) can be understood as characterizing the effective dimensionality of the system. The derived quantity \delta^* (the dependence on f will be omitted for brevity from now on) has been termed the linear variance decay (LVD) dimension density of the underlying data set. Its estimation by means of linear regression according to Equation (9) has been discussed in detail elsewhere [11,25].
It should be mentioned that \delta^* does not yet give a properly normalized dimension density with values in the range between 0 and 1, which can already be observed for simple stochastic model systems [23,25]. However, using the limiting cases of identical (lowest possible value \delta^*_{min}) and completely uncorrelated (highest possible value \delta^*_{max}) component time series, one can derive analytical boundaries and properly renormalize the LVD dimension density to values within the desired range [39] as

\delta_{LVD} = \frac{\delta^* - \delta^*_{min}}{\delta^*_{max} - \delta^*_{min}}.

It shall be noted that using the LVD dimension density instead of the KLD dimension density solves the problem of discrete values in the limit of small N, but still shares the conceptual limitations with respect to the limit of small T. As another positive feature, the LVD dimension density has a continuous range and a much smaller variability with f than the KLD dimension density. This variability mainly originates from insufficiencies of the regression model (Equation (8)) and would vanish in the case of large N and an exactly exponential decay of the residual variances, which is a situation that is, however, hardly ever met in practice [27].
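The following rough Python sketch estimates the unnormalized LVD dimension density \delta^* under the exponential-decay assumption stated above; the actual regression procedure of [11,25] may differ in detail, so this should be read as an illustration under stated assumptions rather than the original algorithm:

```python
import numpy as np

def lvd_dimension_density(eigvals, f=0.95):
    """Unnormalized LVD dimension density delta*: slope-based fit of an
    exponential decay of the residual variances, using components up to the
    explained-variance fraction f."""
    ev = np.sort(np.asarray(eigvals, dtype=float))[::-1]
    n = len(ev)
    residual = 1.0 - np.cumsum(ev) / ev.sum()
    m = np.arange(1, n + 1)
    mask = (residual > 1.0 - f) & (residual > 0.0)
    # linear regression of log residual variance against M / N; slope = -1/delta*
    slope, _ = np.polyfit(m[mask] / n, np.log(residual[mask]), 1)
    return -1.0 / slope
```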
Possible modifications of the LVD dimension density approach include the consideration of alternative measures of pair-wise statistical association, such as Spearman's rank-order correlation or phase synchronization indices [40,41], which may be of interest in specific applications. Although the formalism described above can be applied in exactly the same way to such matrices of similarity measures, the statistical meaning of the corresponding decomposition is not necessarily clear.
Dimension densities from univariate time series
The previously discussed approach can be easily modified for applications to univariate time series [42]. For this purpose, the correlation matrix S of the multivariate record is replaced by the Toeplitz matrix of auto-correlations estimated from a univariate data set. In other words, the PCA commonly utilized for defining the KLD and LVD dimension densities is replaced by an SSA step (i.e. a "PCA for univariate data").
As a particular characteristic of the resulting "univariate dimension densities", it should be emphasized that the obtained results crucially depend on the particularly chosen "embedding" parameters, i.e. the "embedding dimension" N and time delay \tau. In case of SSA-based methods, it is common to use an "over-embedding", i.e. a number of time-shifted replications of the original record that is much larger than the actual supposed dimensionality of the studied data. Since serial correlations usually decay with increasing time delay, increasing N beyond a certain value (i.e. adding more and more dimensions to the embedded time series) will not change the number of relevant components in the record anymore. As a consequence, the LVD dimension density asymptotically takes stationary values. In turn, selecting the "embedding delay" \tau allows studying the dynamical complexity of time series on various time-scales (i.e. from the minimum temporal resolution of the record to larger scales limited only by the available amount of data). Consequently, the LVD dimension density can change considerably as \tau is varied.
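For illustration, a univariate variant can be sketched in Python as follows; for simplicity the sketch returns the KLD-type density obtained from the Toeplitz matrix of lagged auto-correlations rather than the full LVD regression (names and default parameters are illustrative only):

```python
import numpy as np

def univariate_dimension_density(x, dim, delay, f=0.95):
    """KLD-type dimension density of a univariate record: SSA replaces PCA,
    i.e. the Toeplitz matrix of lagged auto-correlations takes the place of
    the multivariate correlation matrix."""
    x = np.asarray(x, dtype=float)
    x = (x - x.mean()) / x.std()
    acf = [1.0] + [np.corrcoef(x[:-k * delay], x[k * delay:])[0, 1]
                   for k in range(1, dim)]
    S = np.array([[acf[abs(i - j)] for j in range(dim)] for i in range(dim)])
    eigvals = np.sort(np.linalg.eigvalsh(S))[::-1]
    cumulative = np.cumsum(eigvals) / eigvals.sum()
    return int(np.searchsorted(cumulative, f) + 1) / dim

# usage: white noise gives values close to 1, strongly correlated data much lower values
rng = np.random.default_rng(1)
print(univariate_dimension_density(rng.standard_normal(5000), dim=20, delay=1))
```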
Application: Surface air temperatures
For the purpose of discussing measures of dimensionality based on the auto-covariance structure of an observational record, it is useful to first examine the auto-correlation function itself. As a first example, let us consider again the daily mean temperature record from Potsdam, Germany (Figure 4a). For this time series, the auto-correlations decay within only about 7-10 days to values below 0.2 (Figure 4b). Consequently, using short time delays (below about one week) for embedding temperature records leads to components with considerable mutual correlations. In this case, one can expect a low LVD dimension density, since the information contained in one of the embedded components is already largely determined by the other components. In turn, for larger delays, the embedded components become approximately linearly independent of each other, implying that, since correlations are generally weaker, more components need to be taken into account for explaining a given fraction of variance from the multivariate embedded record. Hence, the LVD dimension density should considerably increase with the delay. Indeed, this expectation is confirmed by Figure 4c, which displays a sharp increase of LVD with increasing embedding delay especially at the scales below one week, whereas there is a saturation for larger delays at values rather close to one. Another interesting feature can be observed in the behavior of the LVD dimension density with increasing embedding dimension N (Figure 4c). For small delays (i.e. time scales with considerable serial correlations within the observational record), LVD increases with increasing N towards an asymptotic value that can be well approximated by estimating this measure for large, but fixed N. In contrast, for large delays, we find a decrease of the estimated LVD dimension density with increasing N without a marked saturation in the considered range of embedding dimensions. A probable reason for this is the insufficiency of the underlying exponential decay model. In fact, the exact functional form of the residual variances for random matrices clearly differs from an exponential behavior, but displays a much more complicated shape [27]. Furthermore, it should be noted that as both delay and embedding dimension increase, the number of available data decreases as T_eff = T - (N-1)\tau, which can contribute to stronger statistical fluctuations (however, the latter effect is most likely not relevant in the considered example). For intermediate delays, one can thus expect a certain crossover time scale between both types of behavior, which is related to the typical time scale of serial correlations.

In order to further support these findings, Figure 5 shows the spatial pattern displayed by the LVD dimension density at all 2342 stations. For larger embedding delays (right panel), the components of the reconstructed multivariate record are in reasonable approximation linearly independent, resulting in high values of the LVD dimension density close to 1 (the limiting case for perfectly uncorrelated records). However, one can observe a marked West/East gradient with high values of LVD in the western and central part and much lower values in the eastern part of Germany. Referring to the interpretation of this measure, this finding could indicate that the temporal correlations decay more slowly in the eastern part, which is subject to a more continental climate that typically varies on longer time scales than the more marine climate present in the western part of the study area.
It should be emphasized that the general spatial pattern closely resembles the behavior of the fractal dimension D0 (Figure 3b).
In turn, for low embedding delays (1 day), the observed spatial pattern is more complex with more fine-structure, yielding enhanced values (though still indicating considerable correlations) in the eastern and western parts of Germany and lower values in central Germany in a broad band from North to South, as well as in the southeastern part. The qualitative pattern again resembles that of the fractal dimension D0 (Figure 3a), with the exception that the enhanced values in the eastern part are less well-expressed, whereas the contrasts in the western part are considerably stronger.
In general, both characteristics display similar differences between the behavior on short and longer time scales, which are clearly related to the presence of auto-correlations with a spatially different decay behavior. Regarding the short-term dynamics, this statement is supported by the fact that a qualitatively similar spatial pattern as for the considered dimension estimates (but with opposite trend) can be obtained by coarsely approximating the temperature records by a first-order auto-regressive (AR(1)) process X_t = \varphi_1 X_{t-1} + \varepsilon_t, where \varepsilon_t is Gaussian white noise (Figure 5, left panel). In fact, for an AR(1) process, the Toeplitz matrix of auto-correlations has a very simple analytical form, S_{ij} = \varphi_1^{|i-j|}. Even though there is no closed-form solution for its eigenvalues [43], one can easily show by means of numerical simulations that the resulting LVD dimension density for such processes hardly depends on N, but strongly on the value of the characteristic parameter \varphi_1. Since the latter is related to the time-scale of the associated exponential decay of auto-correlations as t^* = -1/\log \varphi_1, low values of \varphi_1 give rise to a fast decay and, hence, high values of the LVD dimension density, whereas the opposite is true for high values close to 1 (see Figure 6). This behavior is in excellent agreement with the theoretical considerations made above.
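This dependence can be reproduced numerically with a few lines of (illustrative) Python, here again using the simpler KLD-type density computed from the analytical AR(1) Toeplitz matrix as a stand-in for the LVD dimension density:

```python
import numpy as np

def ar1_dimension_density(phi, dim, f=0.95):
    """Dimension density obtained from the analytical AR(1) auto-correlation
    Toeplitz matrix S_ij = phi**|i-j| (KLD variant for simplicity)."""
    S = phi ** np.abs(np.subtract.outer(np.arange(dim), np.arange(dim)))
    eigvals = np.sort(np.linalg.eigvalsh(S))[::-1]
    cumulative = np.cumsum(eigvals) / eigvals.sum()
    return (np.searchsorted(cumulative, f) + 1) / dim

# fast decay (small phi) yields high values, slow decay (phi close to 1) low values
for phi in (0.1, 0.5, 0.9):
    print(phi, ar1_dimension_density(phi, dim=50))
```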
Complex network-based approaches
These days, the analysis of network structures is a common task in many fields of science such as telecommunication or sociology, where physical or social interactions (wires, friendships, etc.) can be mathematically described as a graph. When the corresponding connectivity pattern contains a certain number of interacting units (referred to as network vertices or nodes) and is neither completely random nor fully regular (e.g. a chain or lattice), but displays some less obvious type of structure, the resulting system is called a complex network. The structural features of such systems can be described using the rich toolbox of quantitative characteristics provided by the so-called complex network theory [44,45,46,47].
Besides the analysis of network structures based on a clearly "visible" substrate (such as infrastructures or communication systems), it has been demonstrated by various authors that complex network approaches can be useful for extracting and understanding the dynamical backbone of systems composed of a large number of dynamically interrelated units or variables, such as financial markets [48], the neuro-physiological activity of different regions of the brain [49], or the functioning of the climate system [50,51,52,53]. In the aforementioned cases, a network structure is identified using suitable measures of statistical association (e.g., linear Pearson correlation or nonlinear mutual information) between records of activity in different areas or of different variables or coupled units. Information on the underlying functional connectivity of the large-scale system is inferred by considering only sufficiently strong interrelationships and studying the set of such connections among the variety of subsystems.
In parallel to the development of complex network methods as a complementary tool for multivariate time series analysis, a variety of different approaches has been suggested for studying single univariate time series from a network perspective [54]. Existing approaches include methods based on transition probabilities after coarse-graining the time series' range or the associated reconstructed phase space [55], convexity relationships between different observations in a record [56], or certain notions of spatial proximity between different parts of a trajectory [57,58,59,60,61,62,63], to mention only the most prominent existing concepts in this evolving area of research (for a more detailed recent review, see [54]). For two of these approaches, the so-called visibility graphs and recurrence networks discussed below, it has been shown that some of the resulting network properties can be related to the concept of fractal dimensions or, more generally, to scaling analysis. In the following, the corresponding recent findings are summarized.
Visibility graph analysis
Visibility graphs have been originally introduced as a versatile tool for studying visibility relationships between objects in architecture or robot motion planning [64,65,66,67]. Lacasa and co-workers [55] suggested transferring this idea to the analysis of time series from complex systems, where local maxima and minima of the considered observable play the role of hills and valleys in a one-dimensional landscape. Specifically, in a visibility graph constructed from a univariate time series, the individual observations are taken as network vertices, and edges are established between pairs of vertices x_i = x(t_i) and x_j = x(t_j) that are "mutually visible" from each other, i.e. where for all x_k = x(t_k) with t_i < t_k < t_j the following local convexity condition applies:

x_k < x_j + (x_i - x_j) \frac{t_j - t_k}{t_j - t_i}.

When describing the connectivity of this network in the most common way in terms of the binary adjacency matrix A_{ij} (here, A_{ij} = 1 implies that there exists an edge between vertices i and j), the latter can be consequently expressed as follows:

A_{ij} = \prod_{k:\, t_i < t_k < t_j} \Theta\!\left( x_j + (x_i - x_j) \frac{t_j - t_k}{t_j - t_i} - x_k \right),

where \Theta denotes the Heaviside function defined in the usual way.
As a simplification of the standard visibility graph algorithm, it can be useful considering the so-called horizontal visibility graph, in which the connectivity is defined according to the horizontal visibility between individual vertices, i.e. there is an edge (i,j) between two observations x_i and x_j if for all k with t_i < t_k < t_j, x_k < min(x_i, x_j). Consequently, the associated adjacency matrix reads

A_{ij} = \prod_{k:\, t_i < t_k < t_j} \Theta\!\left( \min(x_i, x_j) - x_k \right).

In other words, the horizontal visibility graph encodes the distribution of local maxima in a time series (i.e. short-term record-breaking events). Due to its simpler analytical form, it has the advantage that certain basic network properties can be more easily evaluated analytically than for the standard visibility graph.
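A straightforward (if not particularly efficient) Python sketch of the horizontal visibility graph construction and its degree sequence, offered purely as an illustration with invented names, is:

```python
import numpy as np

def horizontal_visibility_graph(x):
    """Adjacency matrix of the horizontal visibility graph of a time series:
    observations i and j (i < j) are linked if every value between them is
    smaller than min(x[i], x[j])."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    A = np.zeros((n, n), dtype=int)
    for i in range(n - 1):
        for j in range(i + 1, n):
            if j == i + 1 or x[i + 1:j].max() < min(x[i], x[j]):
                A[i, j] = A[j, i] = 1
            if x[j] >= x[i]:
                # x[j] blocks the horizontal view from i to all later points
                break
    return A

# usage: mean degree of the HVG of i.i.d. noise approaches 4 for long records
rng = np.random.default_rng(2)
A = horizontal_visibility_graph(rng.standard_normal(2000))
print(A.sum(axis=0).mean())
```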
As a particularly remarkable result, it has been demonstrated both analytically and numerically that for fractal as well as multifractal processes, the degree distributions p(k) of visibility graphs, i.e. the probabilities of finding vertices with a given number of connections (degree) k, exhibit a power-law (commonly called "scale-free property" in complex network theory) with a characteristic scaling exponent that is directly related to the associated Hurst exponent H [68,69]. Moreover, it can be shown that for a wide class of such processes, the Hurst exponent is itself related with the fractal dimension D_0 as D_0 = 2 - H; however, this relationship is not universal [70]. In this spirit, the scaling exponent obtained from visibility graphs can be considered as an alternative estimate of the fractal dimension. In turn, besides the validity of the aforementioned relationship between Hurst exponent and fractal dimension for the specific data set under study, the possible improvements with respect to computational efforts, required data volume and related issues still need to be systematically compared with those of existing estimators of the Hurst exponent.
In addition to the potentially ambiguous interdependence between Hurst exponent and fractal dimension, using visibility graph approaches for the purpose of estimating fractal dimensions from geoscientific time series may be affected by a further problem. Towards the ends of a time series, there is a systematic tendency to underestimate the actual degree of vertices just due to a lower number of possible neighbors [71]. While this feature will have negligible influence for long time series, it may considerably contribute to a bias in the degree distribution estimated from small data sets common to many geoscientific problems. In turn, a potential advantage of visibility graphs is that they do not require uniform sampling in time, which makes them applicable to typically problematic types of data such as paleoclimate records [71] or even marked point process data such as earthquake catalogues [72].
Recurrence network analysis
Recurrence networks are another well-studied approach for transforming time series into an associated complex network representation [60,61,62,63]. In contrast to visibility graphs, the basic idea is reconstructing the spatial structure of the attractor underlying the observed dynamics in the corresponding phase space. That is, given a univariate record, the dynamically relevant variables need to be reconstructed by means of time-delay embedding first if necessary. Consequently, the first step of recurrence network analysis consists of identifying the appropriate embedding parameters by means of the corresponding standard techniques. Having determined these parameters, time-delay embedding is performed. For the resulting multivariate time series, the mutual distances between all resulting sampled state vectors (measured in terms of a suitable norm in phase space, such as Manhattan, Euclidean, or maximum norm) are compared with a predefined global threshold value \varepsilon.
Interpreting the state vectors as vertices of a recurrence network, only such pairs of vertices are connected that are mutually closer than this threshold, resulting in the following definition of the adjacency matrix:

A_{ij} = \Theta\!\left( \varepsilon - \| \mathbf{x}_i - \mathbf{x}_j \| \right) - \delta_{ij},

where \delta_{ij} denotes Kronecker's delta defined in the usual way. To put it differently, in a recurrence network only neighboring state vectors taken from the sampled trajectory of the system under study are connected. In this spirit, the recurrence network forms the structural backbone of the associated dynamical system. Moreover, since no information on temporal relationships enters the construction of the recurrence network, its study corresponds to a completely geometric analysis method.
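A minimal Python sketch of this construction (illustrative names; the maximum norm is assumed here, as in the discussion of transitivity below) is:

```python
import numpy as np

def recurrence_network(vectors, eps):
    """Adjacency matrix A_ij = Theta(eps - ||x_i - x_j||) - delta_ij of a
    recurrence network built from embedded state vectors (maximum norm)."""
    vectors = np.asarray(vectors, dtype=float)
    dist = np.max(np.abs(vectors[:, None, :] - vectors[None, :, :]), axis=-1)
    A = (dist < eps).astype(int)
    np.fill_diagonal(A, 0)   # remove self-loops (the Kronecker delta term)
    return A
```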
The structural properties of recurrence networks have already been intensively studied.
Relating to the degree distributions, it has been demonstrated analytically as well as numerically that the presence of a power-law-shaped singularity of the invariant density p(x) of the studied dynamical system is a necessary condition for the emergence of scale-free degree distributions, the scaling exponent of which is, however, not necessarily associated with the system's fractal dimension, but with the characteristic behavior of the invariant density near its singularity [73]. More generally, recurrence networks are a special case of random geometric graphs (also known as spatial networks), where the network vertices have a distinct position in some metric space and the connectivity pattern is exclusively determined by the spatial density of vertices and their mutual distances [74]. The latter observation allows calculating expectation values of most relevant complex network characteristics given that the invariant density is exactly known or can at least be well approximated numerically [75]. Specifically, the transitivity properties of recurrence networks on both local and global scale can be computed analytically for some simple special cases [75]. A detailed inspection of these properties demonstrates that the global recurrence network transitivity,

\mathcal{T} = \frac{\sum_{i,j,k} A_{ij} A_{jk} A_{ki}}{\sum_{i,\, j \neq k} A_{ij} A_{ik}},

can be considered as an alternative measure for the effective dimensionality of the system under study [76], by means of the associated transitivity dimension D_T = \log \mathcal{T} / \log(3/4). In contrast to established notions of fractal dimensions, the estimation of the transitivity dimension does not require considering any scaling properties of some statistical characteristics. The definition in Equation (18) is motivated by the fact that, at least using the maximum norm, for random geometric graphs in integer dimension d the expectation value of the network transitivity scales as (3/4)^d [74,76]. However, it should be noted that the proper evaluation of the transitivity dimension is challenged by the fact that it alternates between two asymptotic values, referred to as upper and lower transitivity dimension, as the recurrence threshold \varepsilon is varied [76].
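For illustration, the transitivity and the resulting transitivity dimension can be computed from a binary adjacency matrix with a few lines of (hypothetical) Python; the log(3/4) relation is used here under the maximum-norm assumption mentioned above:

```python
import numpy as np

def transitivity(A):
    """Global network transitivity: ratio of closed triples to all connected
    triples, computed from a binary, symmetric adjacency matrix A."""
    A = np.asarray(A, dtype=float)
    closed = np.trace(A @ A @ A)                 # 6 * number of triangles
    degrees = A.sum(axis=1)
    triples = (degrees * (degrees - 1)).sum()    # 2 * number of connected triples
    return closed / triples if triples > 0 else 0.0

def transitivity_dimension(A):
    """Transitivity dimension D_T = log T / log(3/4)."""
    return np.log(transitivity(A)) / np.log(0.75)
```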
According to the aforementioned interpretation of the transitivity properties, it has been found that the associated local clustering coefficient providing a measure of transitivity on the level of an individual vertex is a sensitive tracer of dynamically invariant objects like supertrack functions or unstable periodic orbits [54,60,61,76]. In turn, global clustering coefficient (i.e. the arithmetic mean value of the local clustering coefficients of all vertices in the recurrence network) and network transitivity track changes in the dynamical complexity of a system under study that are related with bifurcations [10,61,77,78] or subtle changes in the dynamics not necessarily captured by traditional methods of time series analysis [10,79].
In a similar fashion, some other network measures based on the concept of shortest paths on the graph can be utilized for similar purposes. In summary, it has to be underlined that the recurrence network concept has already demonstrated its great potential for studying geoscientific time series; however, this potential has not yet been fully and systematically explored for different fields of geosciences.
Complexity and dimensionality analysis in paleoclimatology
Unlike for data obtained from meteorological observatories or climate models, the appropriate statistical analysis of paleoclimate proxy data is a challenging task. Particularly, a variety of technical problems arise due to the specific properties of this kind of data [24].
Firstly, paleoclimate data sets are usually very noisy due to significant measurement uncertainties, high-frequency variations, secondary (non-climatic) effects and the aggregation of the measurements over certain, not necessarily exactly known time intervals.
Secondly, in Earth history environmental conditions have changed both continuously and abruptly, on very long time-scales as well as on a set of different ''natural'' frequencies the influence of which has changed with time. Especially during the last million years, there has been a sequence of time intervals with cold (glacial) and moderate (interglacial) global climate conditions, which can be interpreted as disjoint states of the global climate system. Even more, these two types of states have alternated in a way that displays some complex regularity, i.e., the timing of the (rather abrupt) transitions between subsequent states (glacial terminations and inceptions) has been controlled by dominating frequencies of variations in the Earth's orbital parameters [80], which is commonly referred to as Milankovich variability. As a consequence of these multiple transitions, paleoclimate time series are intrinsically nonstationary with respect to variability on a variety of different time scales.
Finally, in the case of sedimentary and ice core sequences as the most common types of proxy records, the core depth has to be translated into an age value with usually rather coarse and uncertain age estimates [81,82]. Since the rate of material accumulation has typically varied with time as well, an equidistant sampling along the sequence does usually not imply a uniform spacing of observations along the time axis. Both unequal spacing of measurements and uncertainties in both timing and value pose additional challenges to any kind of time series analysis approach applied to paleoclimate data.
Analysis of time series with non-uniform sampling
As stated above, non-uniform sampling is an inherent feature of most paleoclimate records. Hence, the appropriate statistical analysis of such records requires a careful specific treatment, since standard estimators of even classical and conceptually simple linear characteristics are not directly applicable (or at least do not perform well) in case of unequally spaced time series data. Consequently, in the last decades there has been an increasing interest in developing alternative estimators that generalize the established ones in a sophisticated way.
Traditionally, many approaches for analyzing paleoclimate time series have implicitly assumed a linear-stochastic behavior of the underlying system, i.e. that the major features of the records can be described by ''classical'' statistical approaches like correlation or spectral analysis [83,84]. In particular, novel estimators for both time and frequency domain characteristics have been developed which do not require a uniform sampling [85,86,87,88,89]. In turn, many recent studies in the field of paleoclimatology, including those dealing with sophisticated statistical methods [84,90], have typically made use of interpolation to uniform spacing. It has to be underlined that this strategy, however, disregards important conceptual problems such as the appearance of spurious correlations in interpolated paleoclimate data [86] or the presence of time-scale uncertainty. At least the former problem can be solved by using improved, more generally applicable estimators, whereas the impact of time-scale uncertainty can be estimated using resampled (Monte Carlo) age models and distributions of statistical properties estimated from ensembles of perturbed age models consistent with the original one [71].
Moreover, classical statistical methods such as correlation or spectral analysis are typically based on the assumption that the observed system is in an equilibrium state, which is reflected by the stationarity of the observed time series. However, this stationarity condition is usually violated in the case of paleoclimate data due to the variable external forcing (solar irradiation) and multiple feedback mechanisms in the climate system that drive the system towards the edge of instability. Hence, more sophisticated methods are required allowing to cope with non-stationary data as well [91]. One prominent example for such approaches is wavelet analysis [92,93,94], which allows a time-dependent characterization of the variability of a time series on different time scales. As for the classical methods of correlation and spectral analysis designed for stationary data, estimators of the wavelet spectrogram are meanwhile also available for unevenly sampled data, for example, in terms of the weighted wavelet Z transform [95,96,97,98,99,100], gapped wavelets [101,102], or a generalized multiresolution analysis [103,104]. Similar as for classical spectral analysis, such wavelet-based methods often exhibit scaling laws associated with fractal or multifractal properties of the system under study.
Fractal dimensions and complexity concepts in paleoclimate studies
The question of whether climate can be approximately described as a low-dimensional chaotic system has stimulated a considerable amount of research in the last three decades.
Notably, much of the corresponding work has been related to the study of paleoclimate records. As a prominent example, the seminal paper "Is there a climatic attractor?" by Nicolis and Nicolis [105] considered the estimation of the correlation dimension D2 of the oxygen isotope record from an equatorial Pacific deep-sea sediment core. A direct follow-up [106] presented a thorough re-analysis of the same record utilizing the information dimension D1. Both manuscripts started an intensive debate on the conceptual as well as analytical limits of fractal dimension estimates for paleoclimate time series. Grassberger [107] analyzed different data sets and could not find any clear indication for low-dimensional chaos. This absence of positive results can be at least partially attributed to the problematic properties of paleoclimate records, particularly the relatively small amount of data and their non-uniform sampling resulting in the need for interpolating the observational time series. Grassberger's results were confirmed by Maasch [108], who analyzed 14 late Pleistocene oxygen isotope records and concluded that "the dimension cannot be measured accurately enough to determine whether or not it is fractal". Fluegeman and Snow [109] used R/S analysis to estimate the fractal dimension D0 of a marine sediment record via the associated Hurst exponent H, whereas Schulz et al. [110] used the Higuchi estimator for a similar purpose. Mudelsee and Stattegger [111,112,113] estimated the correlation dimension of various oxygen isotope records using the classical Grassberger-Procaccia algorithm.
Due to the inherent properties of paleoclimate data, estimating fractal dimensions and related complexity measures is a challenging task. Instead of using the classical fractal dimension concepts, in the last years it has therefore been suggested to consider alternative methods that allow quantifying the dimensionality of such records. Donner and Witt [11,23,24] utilized the multivariate version of the LVD dimension density (see Section 3.3) for studying long-term dynamical changes in the Antarctic offshore sediment decomposition associated with the establishment of significant oceanic currents across the Drake passage at the Oligocene-Miocene boundary. In a similar way, Donges et al. used recurrence network analysis for sliding windows in time for identifying time intervals of subtle large-scale changes in the terrigenous dust flux dynamics off North Africa during the last 5 million years [10,79]. These few examples underline the potentials of the corresponding approaches for a nonlinear characterization of paleoclimate records.
Perspectives and challenges
Both classical as well as novel approaches to characterizing the dimensionality of paleoclimate records still face considerable methodological challenges. While established methods typically rely on the availability of long time series, this requirement can be relaxed when using correlation-or network-based approaches, which are in principle suited for studying nonlinear properties in a running windows framework and thus characterizing the time-varying complexity of environmental conditions encoded in the respective proxy variable under study. However, some methodological challenges persist, which have been widely neglected in the recent literature.
Most prominently, the exact timing of observations is of paramount importance for essentially all methods of time series analysis. In the presence of time-scale uncertainty inherent to most paleoclimate records, this information is missing and can only be incorporated into the statistical analysis by explicitly accounting for the multiplicity of age-depth relationships consistent with the set of available dating points. The latter can be achieved by performing the same analysis for a large set of perturbed age models generated by Monte Carlo-type algorithms, or by incorporating the associated time-scale uncertainty by means of Bayesian methods. However, an analytical theory based on the Bayesian framework can hardly be achieved for all possible methods of time series analysis, so that it is most likely that one has to rely on numerical approximations.
Even when neglecting time-scale uncertainty, the non-uniformity of sampled data points in time typically persists. Among all methods discussed in this chapter, only the visibility graph approach is able by construction to directly work with arbitrarily sampled data. However, this method is faced with the conceptual problem of how to treat values between two successive observations that have not been observed for whatever reason. Donner and Donges [71] argued that simply neglecting such "missing values" may account for a considerable amount of error in all relevant network measures, so that the meaningful interpretability of the obtained results could become questionable.
For the other mentioned approaches, time-delay embedding is a typical preparatory step for all analyses. Since interpolation can result in spurious correlations [86] or at least ambiguous results depending on the specific procedure, alternatives need to be considered for circumventing this problem. In the case of uni- and multivariate LVD dimension density, it is possible to directly utilize alternative estimators of the correlation function, e.g. based on suitable kernel estimates [86], for obtaining the correlation matrix of the record under study. For methods requiring attractor reconstruction (e.g. the Grassberger-Procaccia algorithm for the correlation dimension or recurrence network analysis), there are prospective approaches for alternative embedding techniques, e.g. based on Legendre coordinates [114], that shall be further investigated in future work.
Conclusions
Since the introduction of fractal theory to the study of nonlinear dynamical systems, this field has continuously increased its importance. Besides providing a unified view on scaling properties of various statistical characteristics in space or time that can be found in many complex systems, fractal dimensions have demonstrated their great potential to quantitatively distinguish between time series obtained under different conditions or at different locations, thus contributing to a classification of behaviors based on nonlinear dynamical properties. However, as it has been demonstrated both empirically and numerically, established concepts of fractal dimensions reach their fundamental limits when being applied to relatively short and noisy geoscientific time series, e.g. climate records. As potential alternatives providing measures with comparable meaning, but different conceptual foundations, two promising approaches based on the evaluation of serial correlations and complex network theory have been discussed. Although both concepts still need to systematically prove their capabilities and require further methodological improvements as highlighted in this chapter, they constitute promising new research avenues for future problems in climate change research, other fields of geosciences, and even complex systems sciences in general.
"year": 2012,
"sha1": "425712bab2c9a73d4c59f4cb9490231addf20cc9",
"oa_license": "CCBY",
"oa_url": "https://openresearchlibrary.org/ext/api/media/45371724-efd9-4cba-a023-d4d486eecde4/assets/external_content.pdf",
"oa_status": "GREEN",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "09a47a02f77dcf4678935f8325c8fa167e154ee1",
"s2fieldsofstudy": [
"Environmental Science",
"Mathematics"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
The rise of the photosynthetic rate when light intensity increases is delayed in ndh gene-defective tobacco at high but not at low CO2 concentrations
The 11 plastid ndh genes have hovered frequently on the edge of dispensability, being absent in the plastid DNA of many algae and certain higher plants. We have compared the photosynthetic activity of tobacco (Nicotiana tabacum, cv. Petit Havana) with five transgenic lines (ΔndhF, pr-ΔndhF, T181D, T181A, and ndhF FC) and found that photosynthetic performance is impaired in transgenic ndhF-defective tobacco plants at rapidly fluctuating light intensities and higher than ambient CO2 concentrations. In contrast to wild type and ndhF FC, which reach the maximum photosynthetic rate in less than 1 min when light intensity suddenly increases, ndh-defective plants (ΔndhF and T181A) show up to a 5 min delay in reaching the maximum photosynthetic rate at CO2 concentrations higher than the ambient 360 ppm. Net photosynthesis was determined at different CO2 concentrations when sequences of 130, 870, 61, 870, and 130 μmol m-2 s-1 PAR sudden light changes were applied to leaves, and photosynthetic efficiency and entropy production (Sg) were determined as indicators of photosynthesis performance. The two ndh-defective plants, ΔndhF and T181A, had lower photosynthetic efficiency and higher Sg than wt, ndhF FC and T181D tobacco plants, containing fully functional ndh genes, at CO2 concentrations above 400 ppm. We propose that the Ndh complex improves cyclic electron transport by adjusting the redox level of transporters during the low light intensity stage. In ndhF-defective strains, the supply of electrons through the Ndh complex fails, transporters remain over-oxidized (especially at high CO2 concentrations) and the rate of cyclic electron transport is low, impairing the ATP level required to rapidly reach high CO2 fixation rates in the following high light phase. Hence, ndh genes could be dispensable at low but not at high atmospheric concentrations of CO2.
INTRODUCTION
Some 30 years after their discovery (Ohyama et al., 1986; Shinozaki et al., 1986), the functional role of the 11 plastid-encoded ndh genes (which are homologous to genes encoding components of Complex I of the mitochondrial respiratory chain) is still a mystery. Among eukaryotic algae, only a few Prasinophyceae and all Charophyceae (the green algae related to higher plants) contain ndh genes (Martín and Sabater, 2010). Most photosynthetic land plants contain the ndh genes, which are absent in parasitic non-photosynthetic species of the genera Cuscuta, Epiphagus, Orobanche and the Orchidaceae family. This suggests that the thylakoid Ndh complex, encoded by the 11 plastid ndh genes and an as yet unknown number of nuclear genes, has a role in land plant photosynthesis. However, plastid DNAs of the photosynthetic Gymnosperms Pinaceae and Gnetales and of a few species scattered among angiosperm genera, families, and orders (e.g., Erodium, Ericaceae, Alismatales, ...) lack ndh genes (Braukmann et al., 2009; Blazier et al., 2011; Braukmann and Stefanović, 2012), which suggests that ndh genes could be dispensable under certain environments. The 11 ndh genes account for about 50% of all C to U editing sites identified in the transcripts of plastid genes (Tillich et al., 2005), which again suggests that dispensable ndh genes in the ancestors of extant plants accumulated inactivating mutations (Martín and Sabater, 2010). Of these, T to C mutations have been neutralized by C to U transcript editing in most extant plants, which presumably recuperates the functionality of the ndh genes. Thus, tottering on the edge of dispensability, the ndh genes provide an excellent opportunity to test the natural selection of photosynthesis-related genes during plant evolution.

Abbreviations: EBI, Gibbs energy stored as biomass per mol CO2 fixed; PAR, photosynthetic active radiation; PCR, polymerase chain reaction; ppm, parts per million; Sg, entropy production; η, photosynthetic efficiency.
It is widely accepted that the Ndh complex is located in the stromal thylakoids (Casano et al., 2000; Lennon et al., 2003), transfers electrons to plastoquinone and is involved in cyclic electron transport. However, two different electron donors have been proposed: reduced ferredoxin and NADH. Ferredoxin as the electron donor would imply a role of the Ndh complex providing a cyclic photosynthetic electron transport (Yamamoto et al., 2011) in addition to the commonly accepted model in which ferredoxin directly donates electrons to the PQ/cyt b6f intermediary electron pool (Kurisu et al., 2003). However, as pointed out by Nandha et al. (2007) in a similar assay with the PGR5 protein, the rate of plastoquinone reduction is too low in the assay of the Ndh complex with reduced ferredoxin as electron donor. In contrast, spectrophotometric assays of activity and zymograms of NADH dehydrogenases after native electrophoresis, combined with immunodetection with antibodies raised against proteins encoded by chloroplast ndh genes, indicate NADH as the electron donor in different plants (Cuello et al., 1995; Corneille et al., 1998; Sazanov et al., 1998; Casano et al., 2000; Díaz et al., 2007; Martín et al., 2009; Serrot et al., 2012). Accordingly, the Ndh complex, in concerted action with electron draining reactions (Mehler, superoxide dismutase/peroxidase and terminal oxidase) in chlororespiration, would contribute to adjust (poise) the redox level of the cyclic photosynthetic electron transporters (Casano et al., 2000; Joët et al., 2002; Rumeau et al., 2007), hence optimizing (Heber and Walker, 1992) the transport rate necessary for cyclic photophosphorylation and, in general, the thylakoid polarization and lumen acidification that is also required to avoid the damage caused by excess light by dissipating energy through zeaxanthin (Eskling et al., 2001; Karpinski et al., 2001; Minagawa, 2013) in the process of non-photochemical quenching. Accordingly, by poising the redox level of the cyclic electron transporters, the Ndh complex contributes to the protection against photooxidative-related stresses (Sabater and Martín, 2013). The activity of superoxide dismutase, which is key for electron draining, decreases in adult-senescent photosynthetic tissues, when the over-expression of the ndh genes results in an over-reduction of electron transporters which triggers the accumulation of reactive oxygen species inducing cell death (Zapata et al., 2005; Nashilevitz et al., 2010; Nilo et al., 2010; Sabater and Martín, 2013).
Related to the function of the Ndh complex in cyclic electron transport, questions remain on the extent to which the ndh genes improve (if such is the case) photosynthesis and on the environmental conditions that made ndh genes dispensable in certain plant lines. Apart from a certain weakening under different stress conditions, transgenic plants defective in ndh genes usually show normal growth (Martín et al., 2004). However, the information provided by transgenic ndh-defective plants is sometimes debatable. Only a few of the claimed nuclear ndh genes have been unambiguously demonstrated to encode Ndh components (Darie et al., 2005; Rumeau et al., 2005; Shimizu et al., 2008). In fact, the basic respiratory complex I found in archaeal and eubacterial kingdoms may be functional with only 11 subunits (Moparthi et al., 2014) homologous to those encoded by the 11 plastid ndh genes. Frequently, Arabidopsis nuclear mutants defective in the thylakoid Ndh complex are affected in subunit assembly, plastid ndh transcript processing and, in general, processes that can have pleiotropic effects on several chloroplast functions (Meurer et al., 1996). On the other hand, the obtainment of homoplastomic plastid ndh transgenics is highly improbable. Although efficient technologies are available that insert foreign sequences, the large background provided by hundreds of copies of plastid DNA (Rauwolf et al., 2010; Matsushima et al., 2011) in mesophyll cells makes the selection of homoplastomic transformed cells difficult, even after several culture cycles. DNA-blot hybridizations are not sufficiently sensitive to establish homoplastomy, and even more sensitive approaches such as PCR amplification could be insufficient. Rapid replication of non-transformed plastid DNA makes it difficult to maintain plants with a high proportion of ndh-defective plastid DNA for several generations, unless selective culture conditions are regularly maintained during the 2 or 3 weeks after germination. Although not homoplastomic, the low level of non-transformed DNA has allowed us to investigate the functional properties of transformed tobacco plants that contain a high proportion of plastid DNA with defective ndh genes and show low amounts of, or a malfunctioning, thylakoid Ndh complex (Zapata et al., 2005).
The functioning of the photosynthetic machinery under rapidly changing environmental conditions (mainly light intensity) has recently been receiving considerable research interest (Tikkanen et al., 2012; Garab, 2014). Photosystem I protection, cyclic electron transport and the control of reactive oxygen species require strategies that are being intensely investigated and are different from those under constant high light (Suorsa et al., 2013). However, little is known about the final effect of rapidly fluctuating light on net photosynthesis. The slight delay in reaching full photosynthetic rates in transgenic ndh-defective tobacco plants after sudden increases of light intensity prompted us to investigate the contribution of the ndh genes to suppress that delay, and the consequences on the photosynthetic efficiency and Sg, as measures of fitness (Sabater, 2006; Marín et al., 2014), at different CO2 concentrations and rapidly fluctuating light intensity. To maintain the low entropy associated with leaf organization (Ksenzhek and Volkov, 1998; Davies et al., 2013; Marín et al., 2014), the entropy produced in photosynthesis must be exported, which increases the global entropy as required for all irreversible processes (Schrödinger, 1944). Therefore, comparisons of the entropy produced in photosynthesis by wt and ndh-deficient plants with the negative entropy associated with cell organization would help to evaluate the advantages provided by the ndh genes.
Measurements of net photosynthetic rates revealed that the increase of the rate of photosynthesis when the intensity of light suddenly increases is delayed in ndh-defective plants when compared with wt and control transformed (ndhF FC) plants at high but not at low concentrations of CO 2 . Probably, by balancing the redox level of transporters, the Ndh complex maintains high rates of photosynthetic cyclic electron transport during the low light intensity stage to maintain thylakoid polarization and the ATP level required to protect photosynthetic machinery and the rapid response of photosynthetic rate when light suddenly increases in the following stage. The consequence is comparatively low photosynthetic efficiency and high S g in ndh-deficient tobacco plants under rapid fluctuating light and high concentrations of CO 2 , which suggests that the ndh genes could be dispensable at low atmospheric concentrations of CO 2 , but not at higher CO 2 concentrations.
PLANTS CULTURE
Most assays were performed with wt tobacco (Nicotiana tabacum, cv. Petit Havana) and transgenics defective in the ndhF gene by intragenic insertion of the spectinomycin-selection gene aadA (Koop et al., 1996; Martín et al., 2004, 2009; the ndhF and pr-ndhF lines described later). Additional assays were carried out with different tobacco plants in which the aadA selection gene was inserted upstream of the ndhF gene, either maintaining an intact ndhF gene (ndhF FC, control) or carrying a site-directed mutation: T181A and T181D, in which the ACT triplet (positions 541-543) encoding the phosphorylatable Thr-181 of the NDH-F subunit has been substituted by GCT and GAT, encoding alanine and aspartic acid, respectively.
Tobacco plants were cultured as described by Martín et al. (2009). Seeds from non-transformed wt tobacco were sown in pots with compost soil substrate, germinated and grown in a glasshouse. Seeds from transformed plants were aseptically germinated and grown for 1-2 months in sterile Murashige/Skoog (MS) agar-solidified medium supplemented with 600 mg L-1 spectinomycin. Plantlets were transplanted to compost soil substrate in pots under controlled glasshouse conditions and irrigated with MS. The genetic identity of the different tobacco plants was regularly verified by primer-directed amplification of appropriate plastid DNA regions, size determination, and sequencing. Since 2002 (for ndhF) and 2007 (for the other transgenics), new seed generations of each transformed tobacco plant were produced at least once a year by completing the life cycle of the original transformed plants, obtained as detailed in the Supplementary Materials. Sample seeds of ndhF, T181D, and ndhF FC are available from the authors upon request.
MEASUREMENT OF NET PHOTOSYNTHESIS, TRANSPIRATION RATES, AND CHLOROPHYLL FLUORESCENCE INDUCTION
Photosynthesis and transpiration rates were determined in the glasshouse at 25 °C in 6.25 cm2 regions of intact, fully expanded healthy leaves (containing ~20 μg chlorophyll cm-2) from the mid-stem of plants at the beginning of flowering. The leaf was fitted in the chamber of the LCpro+ portable photosynthesis system (ADC BioScientific Ltd., Hertfordshire, UK) as previously described by Martín et al. (2009) and Marín et al. (2014), except that the CO2 concentration, which was programmed as fixed during each light-sequence treatment, was varied between treatments. Net photosynthetic activity (in μmol of CO2 consumed m-2 s-1) and transpiration rate (in mmol of H2O m-2 s-1) were measured during a light-sequence treatment in which the intensity (in μmol m-2 s-1 PAR at the leaf surface) changed abruptly according to the sequence: 15 min acclimation at 130, 6 min at 870, 6 min at 61, 6 min at 870, and 6 min at 130 μmol m-2 s-1 PAR. Data collected every minute and at the light-intensity transitions were represented directly using the GraFit Erithacus software (Surrey, UK) and the Origin software (Princeton, NJ, USA). The recorded data indicated that the sub-stomatal CO2 concentration was stabilized (<5% variation) from the end of the 15 min acclimation through the following 24 min of incubation. Experiments were repeated 2-10 times.
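For reference, the light-sequence treatment can be written down compactly. The following minimal sketch (our own illustration, not the authors' code) encodes the phases and the incident PAR photon dose they deliver; the dose values are used again in the thermodynamic calculations below.

# (duration in s, intensity in μmol photons m-2 s-1 PAR) for the light-sequence treatment
LIGHT_SEQUENCE = [
    (15 * 60, 130),   # acclimation
    (6 * 60, 870),
    (6 * 60, 61),
    (6 * 60, 870),
    (6 * 60, 130),
]

def photon_dose(phases):
    """Incident PAR photon dose in mol m-2 over the given phases."""
    return sum(t * i for t, i in phases) * 1e-6

print(photon_dose(LIGHT_SEQUENCE[1:]))   # last 24 min: ~0.695 mol m-2
print(photon_dose(LIGHT_SEQUENCE[2:4]))  # intermediate 12 min: ~0.335 mol m-2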
The rates of net photosynthesis and transpiration determined in attached leaf sections varied between days, probably due to variable environmental factors affecting the whole plant.
However, the relative rates at different CO2 concentrations, determined on the same day, did not differ by more than 5% over 2-10 determinations during the year in the wt and control ndhF FC tobacco lines. Therefore, the absolute rates shown in each figure correspond to measurements made on the same day, when 3-5 CO2 concentrations were assayed. Figures are representative of 2-10 experiments. The influence of the CO2 concentration on the photosynthetic efficiency and the production of entropy was expressed relative to the values at the reference 360 ppm CO2, and all experimental results were represented. More details on the statistical significance of the results are discussed in the appropriate sections.
Fluorescence assays were carried out in the glasshouse with leaves similar to those used for the photosynthesis rate determinations. Chlorophyll fluorescence changes were measured with an Opti-Sciences (ADC BioScientific Ltd., Hertfordshire, UK) OS1-FL modulated chlorophyll fluorometer. The standard assay was used, with the relative minimum and higher light intensities optimized to show differences among ndh mutants. Leaf disk regions were dark-adapted with clips for 30 min, after which they received 2 min of minimum light (0.1 μmol m-2 s-1 PAR), followed by 5 min of higher relative light (0.15 μmol m-2 s-1 PAR) and 9 min again of minimum light. Saturating flashes of 0.8 s (5,000 μmol m-2 s-1 PAR) were applied at 1, 3, 4, 5, and 6 min of light incubation. Fluorescence was recorded every 0.1 s and the collected data were represented using the GraFit Erithacus software (Surrey, UK) and the Origin software (Princeton, NJ, USA). Assays repeated at least three times showed no significant differences. The quantum yield (Y) of the light energy absorbed by photosystem II that is used in photosynthetic electron transport was calculated as Y = (Fms - Fs)/Fms, where Fms is the maximal fluorescence and Fs the steady-state fluorescence.
PHOTOSYNTHETIC EFFICIENCY AND ENTROPY PRODUCTION
As detailed previously (Marín et al., 2014), the photosynthetic efficiency (η, the fraction of absorbed radiant energy converted to biomass chemical energy) is η = 100 E_BI/E_in, where E_BI is the chemical Gibbs energy of the net photosynthesis products stored as biomass (CH2O) and E_in is the absorbed PAR measured at the leaf surface and corrected for the 7% transmitted. The entropy generated (S_g) was calculated from the net CO2 fixation data, the dimensions of the experimental design, Gibbs free energy and entropy values from data banks, and conventional thermodynamics. The entropy generation was expressed per J (Joule) of biomass chemical energy generated: S_g/E_BI.
For each photosynthesis assay, the net CO2 consumed, integrated over the last 24 min of light incubation and over the intermediate 12 min light phases (at 61 and the following 870 μmol m-2 s-1 PAR), was converted to C-equivalent biomass (CH2O) according to the reaction CO2 + H2O -> (CH2O) + O2. The integrated fixed CO2 (mol m-2) was multiplied by the Gibbs-energy equivalence per mol of fixed CO2 (calculated with G0 = 479.8 kJ mol-1, R = 8.314 J mol-1 K-1, T = 298 K, P_O2 = 0.21 bar, and with [CO2] expressed in ppm) to obtain the energy (E_BI) stored as biomass synthesized per square meter.
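The energy-equivalence expression itself did not survive text extraction. A plausible reconstruction, assuming the standard correction of the standard Gibbs energy for the actual O2 and CO2 partial pressures (this exact form is our assumption, not a verbatim quotation of the source), is:

% hedged reconstruction; [CO2] in ppm is converted to a partial pressure in bar
E_{\mathrm{mol}} = G^{0} + RT\,\ln\!\left(\frac{P_{\mathrm{O_2}}}{[\mathrm{CO_2}]\times 10^{-6}}\right)
\approx 479.8\ \mathrm{kJ\,mol^{-1}} + 15.8\ \mathrm{kJ\,mol^{-1}} \approx 495.6\ \mathrm{kJ\,mol^{-1}}\ \text{at } 360\ \mathrm{ppm\ CO_2},
\qquad E_{BI} = E_{\mathrm{mol}} \times \left(\text{integrated CO}_2\ \text{fixed, in}\ \mathrm{mol\,m^{-2}}\right)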
Similarly, the associated production of entropy (S_BI) was determined from the corresponding tabulated standard entropy values for the same reaction. By considering a mean PAR wavelength of 550 nm and applying the equivalence radiant energy (J) = 119.3 x 10^6/λ(nm) per mol of photons, the absorbed PAR energy (E_in) was estimated at 140.2 kJ m-2 and 67.6 kJ m-2 for the last 24 min and the intermediate 12 min light phases, respectively. Their associated entropies were determined as those of non-diffuse sunlight by considering an effective temperature of 5000 K for solar radiation (Ksenzhek and Volkov, 1998), giving 28.0 and 13.5 J K-1 m-2 for the 24 and 12 min light phases, respectively.
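These figures can be checked directly from the photon doses of the light sequence; the 93% absorption factor (7% transmitted) and the division of E_in by the 5000 K effective temperature are inferred from the stated values rather than quoted:

E_{in}^{24\,\mathrm{min}} = 0.93 \times \underbrace{360\,(870+61+870+130)\times 10^{-6}}_{\approx\,0.695\ \mathrm{mol\,m^{-2}}} \times \frac{119.3\times 10^{6}}{550}\ \mathrm{J\,mol^{-1}} \approx 140.2\ \mathrm{kJ\,m^{-2}}
E_{in}^{12\,\mathrm{min}} = 0.93 \times 0.335\ \mathrm{mol\,m^{-2}} \times 216.9\ \mathrm{kJ\,mol^{-1}} \approx 67.6\ \mathrm{kJ\,m^{-2}}
S_{in} = E_{in}/5000\ \mathrm{K} \;\Rightarrow\; 28.0\ \text{and}\ 13.5\ \mathrm{J\,K^{-1}\,m^{-2}}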
The S_g was obtained by subtracting S_in from the sum of all forms of wasted energy, which total E_in - E_BI, divided by the absolute temperature T, plus the entropy of the biomass produced (S_BI); thus: S_g = (E_in - E_BI)/T + S_BI - S_in. Measurements and derived calculations at different CO2 concentrations were expressed as percentages of the results at 360 ppm CO2 obtained with the same plant and group of experiments.
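As a rough consistency check, the following minimal calculation (our own sketch; it neglects S_BI, which is small compared with the other terms, and takes η ≈ 3.6% at 360 ppm CO2 from the Figure 4 reference values) reproduces the reported S_g/E_BI of about 0.08 K-1:

T, T_sun = 298.0, 5000.0          # K
E_in = 140.2e3                    # J m-2, absorbed PAR over the last 24 min
eta = 0.036                       # photosynthetic efficiency at 360 ppm CO2 (Figure 4 reference)
E_bi = eta * E_in                 # ~5.0 kJ m-2 stored as biomass
S_in = E_in / T_sun               # ~28 J K-1 m-2, entropy of the absorbed radiation
S_g = (E_in - E_bi) / T - S_in    # S_BI neglected in this rough check
print(S_g / E_bi)                 # ~0.084 K-1, close to the reported 0.08 K-1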
OTHER DETERMINATIONS AND ASSAYS
DNA isolation, PCR amplification and agarose gel electrophoresis were performed as described previously. Zymograms and immunoassays related to the thylakoid Ndh complex were also performed as described previously.
TOBACCO LINES WITH PARTIAL RECOVERY OF ndhF GENE COPIES
In addition to the previously described wt, ndhF FC, ndhF, T181A, and T181D tobacco plants, we assayed partially reverted phenotypes of ndhF (pr-ndhF) that we found among descendants of the ndh-deficient ndhF tobacco transgenic, identified by the increase of the 515 bp PCR-amplified band (Figure 1A, lane pr-ndhF) characteristic (Zapata et al., 2005) of the non-transformed plastid DNA of wt (Figure 1A, lane wt). The relative intensities of the amplified 1,928 and 515 bp bands should approximate and respectively mirror the relative abundance of transformed (ndhF) and non-transformed (wt) plastid DNA molecules among the hundreds of DNA copies contained in a single mesophyll cell. The presumably low proportion of the functional ndhF gene in pr-ndhF permitted only a slight recovery of the clear post-illumination fluorescence increase characteristic of wt (Figure 1B), which is absent in ndh-deficient plants (Martín et al., 2004). Accordingly, pr-ndhF phenotypes showed a thylakoid Ndh-dependent NADH dehydrogenase activity that was lower than in wt but higher than in the ndhF transgenic (not shown). In contrast to the clearly delayed leaf senescence phenotype of ndhF (Zapata et al., 2005), pr-ndhF showed only slightly delayed leaf senescence in comparison to wt tobacco (Figure S1). The frequency of the pr-ndhF phenotype is increasing in successive offspring derived from the original ndhF tobacco (despite the presence of spectinomycin during the initial culture of ndhF). Conceivably, unknown factors favor the replication of the few remaining copies of wt plastid DNA in ndhF tobacco over the transformed molecules defective in the ndhF gene. Although we are not yet able to control its emergence or inheritance, the finding of the pr-ndhF phenotype provides an additional retromutant control that confirms the involvement of ndh genes in photosynthesis and other processes. In the future, the ability to control (and determine by quantitative PCR) the wt to ndhF plastid DNA ratio will provide a deeper understanding of the influence of the copy proportion of the plastid ndh genes on different processes.
PHOTOSYNTHETIC RATES UNDER FLUCTUATING LIGHT INTENSITIES. EFFECT OF THE CONCENTRATION OF CO 2
In the field, leaves are exposed to frequent and rapid changes in light intensity (from 0 to about 2,500 μmol m-2 s-1 PAR) due to transitory shading produced by clouds, by other leaves fluttering in the wind, and by wandering animals (Külheim et al., 2002). To investigate the effect of rapid light-intensity variations on the photosynthetic performance of leaves, we established a reference light-fluctuation incubation consisting of 15 min of leaf acclimation at 130, followed by four 6-min light phases of 870, 61, 870, and 130 μmol m-2 s-1 PAR at the leaf surface. Rates of net photosynthesis varied for the same plant from one day and leaf to another. However, the relative photosynthetic rates at different CO2 concentrations were highly reproducible for a given tobacco line, with differences no higher than 5%, and resulted in similarly shaped rate-time curves characteristic of each CO2 concentration. Therefore, we determined photosynthetic rate curves for up to five different CO2 concentrations assayed successively with the same leaf section by changing the setting of the CO2 concentration. The 15 min acclimation was repeated for each CO2 concentration, and the order of the assays with different CO2 concentrations (increasing or decreasing) did not affect the responses of the photosynthetic rates. Experiments were repeated, without significant differences, 2-10 times, and each graph in the following Figure 2 corresponds to one representative group of assays carried out on the same day with each plant and variable concentrations of CO2. Figure 2 shows the photosynthetic rates during the four light phases after the 15 min acclimation of wt, one pr-ndhF phenotype and ndhF tobacco plants at different CO2 concentrations. In most repeated experiments (Figure S2), wt showed the highest and ndhF the lowest photosynthetic rates but, as stated above, no definitive conclusion could be drawn from the differences in activity among the three plants. However, the differences in the time with which each plant reached maximum activity after the light intensity increased to 870 μmol m-2 s-1 were highly reproducible. In contrast to wt, which rapidly reached the maximum photosynthetic rate (Figure 2, upper box), ndhF (middle box) showed a considerable delay in reaching the maximum photosynthetic rate at CO2 concentrations higher than the ambient 360 ppm and, paradoxically, the photosynthetic rate of this transgenic was higher at 475 than at 615 ppm CO2 during the first 3 min after the transition from the 130 μmol m-2 s-1 acclimation to 870 μmol m-2 s-1. Furthermore, during the first 3-5 min after the transition from 61 to 870 μmol m-2 s-1 (12 to 18 min in Figure 2), the photosynthetic rate was higher at 365 than at 475 and 615 ppm CO2. By comparison, during the fluctuating-light incubation, the rate of photosynthesis in wt was always higher the higher the concentration of CO2. At low CO2 concentrations, 260 to 312 ppm, there were no detectable differences between wt and ndhF tobacco plants in the increase of the rate of photosynthesis after the transition to high light. Figure S2 shows photosynthetic rate versus time curves for several groups of assays (each group corresponds to experimental determinations carried out within the same day and leaf) with wt (four groups), pr-ndhF (two groups), and ndhF (three groups) tobacco plants. Figure S2 also shows the phenotypic variability of the various pr-ndhF strains.
Rate curves are highly reproducible within the same day and leaf (see groups wt-3, wt-4, and pr-ndhF-2 in Figure S2) for the same CO 2 concentration, although absolute rate values for the same plant and similar CO 2 concentration significantly vary among the different days of assay (compare in Figure S2 the high activity at 470 ppm CO 2 in wt-1 with the low activity in wt-4 group of assays).
The rates of photosynthesis in pr-ndhF were determined at CO2 concentrations of 365 ppm and higher (Figure 2, bottom box), and the curves were closer to those of wt or to those of ndhF probably depending on the relative ndhF gene dose with respect to ndhF tobacco. Therefore, these results support the idea that the products of intact, functional ndh genes improve the photosynthetic performance under fluctuating light intensities at higher-than-ambient CO2 concentrations. The slow increase of the rate of photosynthesis at high concentrations of CO2 in ndhF is not due to an indirect effect of plastid DNA transformation, because the control ndhF FC transgenic tobacco, containing the inserted aadA gene and an intact ndhF gene, did not show a delay in reaching high photosynthetic rates when assayed at up to 535 ppm CO2 (Figure S3).
The slow increase of the rate of photosynthesis at high concentrations of CO2 cannot be attributed to an impaired stomatal response of ndhF with respect to wt. We previously found (Marín et al., 2014) that in wt tobacco, under the successive 6 min periods at 870, 61, 870, and 130 μmol m-2 s-1 PAR and different concentrations of CO2, the rate of photosynthesis varied strongly and rapidly between a minimum and an approximately 5-fold higher maximum, a result similar to that shown in Figure 2. In contrast, the rates of transpiration and the stomatal conductance changed slowly, with a maximum that barely doubled the minimum values. In this work, we found similar results for all tobacco plants: the rates of transpiration and the stomatal conductance change more slowly than the rates of photosynthesis, and the span between maximum and minimum values is significantly narrower for transpiration and conductance than for photosynthesis. As an example, Figure 3 shows that at 470 ppm CO2 the changes in transpiration (increase at high light and decrease at low light intensities) were slower in ndhF than in wt tobacco. Transpiration rates mirrored the stomatal conductance changes determined in parallel assays (not shown). As for photosynthesis, the transpiration responses in the partially reverted pr-ndhF tobacco lay between those of wt and ndhF plants. The comparison with Figure 2 indicates that, in both wt and ndhF, the rate responses are slower for transpiration than for photosynthesis. In general, there is an inverse relation between the internal CO2 concentration in the leaf and the stomatal opening and, consequently, transpiration (Wheeler et al., 1999). As the internal concentration of CO2 is lower at higher photosynthetic activity, it seems likely that the slower photosynthetic response of ndhF compared with wt is responsible for the even slower transpiration response of ndhF, and not the opposite.
DETERMINATIONS OF PHOTOSYNTHETIC EFFICIENCY AND ENTROPY PRODUCTION
We evaluated the efficiency of the conversion of radiant energy to chemical energy (biomass) and the S g of wt and mutant plants both during the total 24 min and for the intermediate 12 min light treatments (6 min at 61 and 6 min at 870 μmol m −2 s −1 PAR). Obviously, the relative differences between wt and ndhF were higher for the intermediate 12 min than for the 24 min incubation.
wt, ndhF, pr-ndhF, the point-mutant transgenics T181A and T181D, and the control ndhF FC (containing the aadA spectinomycin resistance gene near the 5' end of the unmodified ndhF gene) plants were assayed. In T181A and T181D, the phosphorylatable threonine-181 is substituted by alanine and aspartic acid, respectively. Thus, the thylakoid Ndh complex in T181A cannot be activated by phosphorylation, whereas the negative charge of aspartic acid (D) in T181D mimics the activating effect of threonine phosphorylation in wt, resulting in a highly active Ndh complex. As for wt, pr-ndhF, and ndhF (Figure S2) and ndhF FC (Figure S3), Figure S4 shows rate curves obtained from several groups of assays with T181D and T181A tobacco plants. Interestingly, T181D shows a slight delay in reaching the full photosynthesis rate at low CO2 concentrations (T181D-1 in Figure S4) but, similarly to wt and in contrast to ndhF (Figure 2) and T181A (T181A-1, T181A-3, and T181A-4 in Figure S4), no delay at high CO2. Therefore, in agreement with previous enzyme determinations, the Ndh complex of T181D is probably always active due to the negative charge of aspartate (D), while the Ndh complex of wt tobacco requires threonine-181 phosphorylation. Conceivably, the de-regulated, hyperactive Ndh complex in T181D tobacco could over-reduce the cyclic electron transporters at low CO2 concentrations, when the draining of electrons by the Benson-Calvin cycle is low. The consequences would be a low rate of cyclic transport and overproduction of reactive oxygen species. In this respect, T181D tobacco would provide an interesting tool for further investigation of the redox and protein-phosphorylation control of photosynthetic electron transport. The higher photosynthetic performance of T181D compared with the non-phosphorylatable T181A tobacco at high CO2 concentrations is also apparent when comparing efficiency and S_g at different CO2 concentrations.
Since the experimental approach did not provide a reliable comparison of absolute photosynthetic measurements made on different days, a reference assay at 360 ppm CO2 (sometimes interpolated from data at very similar concentrations) was always carried out within each group of experiments. Therefore, the net CO2 fixations integrated over the last 24 min or over the intermediate 12 min of light incubation at the other 2-4 CO2 concentrations (assayed in the same group of experiments) were referred to the CO2 fixation at 360 ppm CO2, and photosynthetic efficiency and S_g were expressed as percentages of the respective values at 360 ppm CO2. This approach allowed us to combine results from about 35 groups of assays carried out on different days with the six tobacco lines, totaling 139 rate-time curves.
Photosynthetic efficiency (η) increased almost linearly with the concentration of CO2 (upper boxes of Figure 4) up to 400 ppm, in a strikingly similar manner in all plants, which supports the statistical relevance of the approach. For CO2 concentrations higher than 400 ppm, significant differences in η were observed among the tobacco lines: in most assays, pr-ndhF and T181A showed lower η (and consequently higher S_g/E_BI) during the 24 min incubation period (left boxes; compare with the gray line corresponding to plants with a fully functional ndhF gene, described in the legend of Figure 4). For the intermediate 12 min incubation period (right boxes), in which a strong rise from 61 to 870 μmol m-2 s-1 PAR took place, all assays with ndhF (three assays) and T181A (one assay; encircled) failed to significantly increase η at CO2 concentrations higher than 400 ppm. For the intermediate 12 min period at higher than 400 ppm CO2, pr-ndhF showed only slightly lower efficiencies when compared with all the assays with tobacco plants containing a fully functional ndhF gene. Conversely, the entropy produced (S_g/E_BI, lower boxes of Figure 4) decreased as the concentration of CO2 increased. The decrease above 400 ppm CO2 was less pronounced, especially for the determinations over the intermediate 12 min (lower right box), for ndhF and T181A (encircled; around 90% of the 360 ppm value) than for the other tobacco plants (around 75% of the 360 ppm value). The variable ndhF gene dose of the pr-ndhF phenotypes in the different assays could explain the variable, although generally lower, efficiency of pr-ndhF at CO2 concentrations above 400 ppm compared with ndhF FC which, by containing the spectinomycin resistance gene, is a more representative control than wt.
As shown in Figure 2, the delay of the Ndh complex-deficient ndhF tobacco plants in reaching high photosynthetic rates when exposed to sudden increases of light intensity becomes clear at CO2 concentrations above the 400 ppm predicted for the next decades. Under the sun-fleck conditions common in open environments, this delay could impair the net photosynthetic performance of plants lacking ndh genes compared with normal plants.
With the exception of the plastid ndhF gene disruption, no genome alteration has been found in ndhF tobacco (Zapata et al., 2005), which strongly indicates that the low proportion of ndhF gene copies and the resulting low level of the Ndh complex are solely responsible for the low photosynthetic performance of ndhF tobacco plants at high CO2 concentrations. The rapid photosynthetic response (Figure S3) of the plastid transgenic control ndhF FC, which carries the same aadA gene insertion as ndhF but outside the ndhF reading frame, indicates that the ndhF phenotype is not due to an indirect effect of plastid transformation. The intermediate (although variable) photosynthetic performance of pr-ndhF tobacco plants (Figures 2 and 4) provides additional evidence of the necessity of the ndh genes and the thylakoid Ndh complex for rapid photosynthetic responses to light at high CO2 concentrations. On the other hand, the differences between wt and ndhF plants in the transpiration response (Figure 3) as compared with the photosynthetic response (Figure 2) to sudden light increases indicate that the ndh deficiency directly affects photosynthesis and is not mediated by a primary effect on the stomatal machinery.
As described in preliminary results, the similar photosynthetic responses of ndhF and T181A at high CO2 concentrations suggest that the impossibility of phosphorylating site 181 impairs the photosynthetic performance of T181A under rapidly fluctuating light intensities.
Redox balance of the electron transfer chain is critical for optimizing photosynthesis under fluctuating light in cyanobacteria and higher plants (Tikkanen et al., 2012; Allahverdiyeva et al., 2013). In Chlamydomonas (Alric, 2014), as in higher plants (Heber and Walker, 1992), redox equilibration is required for the maximal rate of cyclic electron flow. Therefore, the effect of the Ndh complex in accelerating the photosynthetic response when light intensity suddenly increases could be due to its proposed role in optimizing cyclic photosynthetic electron transport by adjusting the redox level of the transporters (Casano et al., 2000; Joët et al., 2002). Electrons supplied by the Ndh complex prevent the over-oxidation of the cyclic electron transporters that would otherwise occur due to the low supply of electrons from photosystem II between two high-light phases (Figure 2), especially at high CO2 concentrations, which would rapidly deplete electrons from the transporters by consuming NADPH in the Benson-Calvin cycle. In the case of ndhF, the supply of electrons from NADH through the Ndh complex fails; consequently, the photosynthetic electron transporters remain over-oxidized and the rate of cyclic electron transport around photosystem I is low, impairing the thylakoid membrane potential and the ATP level required to rapidly reach high CO2 fixation rates in the following high-light phase. As reported by Kanazawa and Kramer (2002), in comparison with linear electron transport, the contribution of cyclic electron transport to maintaining thylakoid polarization is negligible at low CO2 concentrations, which could explain the similar photosynthetic rate responses of ndh-defective and wt tobacco plants at low CO2 concentrations.
FIGURE 4 | Effect of the concentration of CO2 on the photosynthetic efficiency (η) and S_g/E_BI of different tobacco plants. As explained in the thermodynamic background, photosynthetic efficiency is the ratio of the chemical energy (E_BI) stored as photosynthesized biomass to the PAR energy absorbed by the leaf. S_g/E_BI is the ratio of the total entropy produced to the chemical energy (E_BI) stored as photosynthesized biomass. Calculations were performed for the total 24 min (left boxes) and the intermediate 12 min (right boxes) light incubation treatments. The represented values of η and S_g/E_BI at different CO2 concentrations are the percentages with respect to the corresponding values at 360 ppm CO2 (references at 360 ppm are in the range of 3.6% η and 0.08 K-1 S_g/E_BI for all plants). The concentrations of CO2 represented are those determined by the LCpro+ photosynthesis system in the leaf chamber and could differ slightly from those programmed. As repeated determinations at very similar CO2 concentrations in the same tobacco line produced very close points, only the mean value is represented. Therefore, the points in the figure result from 139 different assays, and several are the mean of 2-4 independent determinations. The inserted gray lines were obtained by second-degree polynomial fitting of the experimental points corresponding to the wt, ndhF FC, and T181D tobacco plants, which contain a full dose of the functional ndhF gene. Encircled points in the left boxes correspond to those of ndhF and T181A at high CO2 concentrations.
The over-oxidation of the cyclic electron transporters should be more pronounced the lower the light intensity and, accordingly, the delay in reaching a full photosynthetic rate at high CO2 concentrations in ndhF was longer for the 61-to-870 than for the 130-to-870 μmol m-2 s-1 PAR transition (Figure 2). Thus, the strong influence of the light intensity of the middle low-intensity phase seems more compatible with a role of the Ndh complex in improving cyclic electron transport by adjusting the redox level of the transporters than with its providing an additional cyclic electron transport chain, as proposed by Yamamoto et al. (2011), in which case the light intensity applied to the middle phase would conceivably not affect the subsequent photosynthetic rise of ndhF. The need to maintain the redox balance of the photosynthetic electron transport chain under fluctuating light has also been reported in cyanobacteria (Allahverdiyeva et al., 2013), in which the flavodiiron proteins Flv1 and Flv3 are dispensable under constant but not under fluctuating light conditions. Therefore, under fluctuating light, the inability of ndh-defective plants to provide enough electrons to balance the redox level of the transporters (especially at high concentrations of CO2) could determine a low rate of cyclic electron transport under low light intensity. The low rate impairs the thylakoid polarization and ATP level required for a rapid response of the net photosynthetic rate when the light intensity suddenly increases in the following stage. A transitory shortage of ATP would decrease the rate of NADPH oxidation in the Benson-Calvin cycle at the next exposure to high light intensity, resulting in hyper-reduction of the electron transfer chain and PSI photodamage. Significantly, the PGR5 protein is also involved in the redox poising of photosynthetic electron transport (Nandha et al., 2007) and probably plays a role in the protection of PSI from photodamage under fluctuating light (Suorsa et al., 2013).
The effect of the ndh gene products in accelerating the increase of photosynthetic rates during the high-intensity phases of fluctuating light reasonably indicates a definitive role for the Ndh complex. However, the mechanism involved and its relation to other factors, such as the PGR5 protein, require further investigation.
The differences in η and S g /E BI between wt and ndhF were higher for the intermediate 12 min incubation than for the total 24 min incubation (Figure 4) and, plausibly, would be greater still under the rapid and more frequent changes of light in natural environments.
As the result of many assays, an S_g/E_BI value of 0.08 K-1 was estimated at 360 ppm CO2, and the S_g per kg of leaf and per minute was estimated at approximately 80 J K-1, an amount 100-fold higher than the decrease of entropy associated with solute compartmentalization in cell organelles (Marín et al., 2009). Although roughly estimated, these values indicate that the ndh-associated S_g reduction achieved in less than 1 s at CO2 concentrations higher than 400 ppm is in the range of structural leaf entropy values and could be evolutionarily significant as a selectable trait (Sabater, 2006), as found in other systems (Ding et al., 2011; Davies et al., 2013). In this regard, higher yields of energy conversion are equivalent to lower S_g, and both are plausible selectable traits in photosynthesis.
CONCLUDING REMARKS
The functional relevance of the thylakoid Ndh complex has been investigated by determining the photosynthetic response under fluctuating light and several CO2 concentrations in different tobacco plants affected in the plastid ndhF gene, which encodes the NDH-F subunit of the Ndh complex. In contrast to wt, ndh-defective plants show a delay of up to 5 min in reaching the maximum photosynthetic rate at CO2 concentrations higher than the ambient 360 ppm. Accordingly, ndhF-defective tobacco plants show a lower photosynthetic efficiency and a higher S_g than wt under rapidly fluctuating light intensities and high CO2 concentrations. Based on our results and on the previous results of other groups, we postulate that the activity of the Ndh complex maintains high rates of cyclic photosynthetic electron transport by providing electrons to balance the redox level of the transporters during the low-light stage, and that it could be dispensable at low, but not at high, atmospheric concentrations of CO2 and under less intensely fluctuating light. For the first time, these results establish a definitive connection between ndh gene products and photosynthetic performance and predict the influence of changing atmospheric CO2 concentrations on the evolutionary conservation of the ndh genes.
ACKNOWLEDGMENTS
This work was supported by Grant BFU2010-15916 of the Spanish Dirección General de Investigación (Ministerio de Economía y Desarrollo). | 2016-06-17T23:52:17.447Z | 2015-02-09T00:00:00.000 | {
"year": 2015,
"sha1": "cf18bce4e5651e37ba9b1f94cab9eed514198b41",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fpls.2015.00034/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "cf18bce4e5651e37ba9b1f94cab9eed514198b41",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
221820241 | pes2o/s2orc | v3-fos-license | Saved by the SPY*: Ulnar Artery Reconstruction With LCFA Graft for Hypothenar Hammer Syndrome
Introduction: Hypothenar hammer syndrome refers to thrombosis/aneurysm of the ulnar artery at Guyon's canal in the wrist, with resultant arterial insufficiency in the ulnar artery distribution.1 Patients typically describe unilateral symptoms in the fourth and/or fifth fingers of the hand. Symptoms range from none to pain, pallor, paresthesia, weakness, cold intolerance, and eventually ulceration, necrosis, and gangrene of the distal digits.1 Treatment options range from conservative lifestyle management, to medication, and ultimately to surgical intervention. In this case report, we outline the second successful lateral circumflex femoral artery (LCFA) graft reconstruction of the ulnar artery in the setting of hypothenar hammer syndrome conducted by the senior author. During this procedure, intraoperative imaging with intravenously (IV) injected indocyanine green (ICG) dye identified an additional area of stenosis not seen on the preoperative MRA, enabling a more adequate resection and repair. To our knowledge, the use of intraoperative ICG for hypothenar hammer syndrome and/or ulnar artery reconstruction has not been documented in the literature.
CASE DESCRIPTION
A 62-year-old right-hand-dominant man presented with a few years' history of progressively worsening symptoms along the right fourth and fifth fingers, including pallor, discoloration, pain, paresthesia, "pins and needles," skin lesions, and ulcerations (which had healed by the time of the surgical evaluation). Magnetic resonance angiography (MRA) showed a 2- to 3-cm segmental occlusion at Guyon's canal and an incomplete superficial arch. After failed conservative management, operative intervention was planned. The authors performed a right-sided ulnar artery reconstruction with a lateral circumflex femoral artery (LCFA) arterial graft and sympathectomy. Intraoperative indocyanine green (ICG) imaging revealed a larger area of stenosis not seen on the preoperative MRA scan. This resulted in the need for a larger incision, segmental excision of the diseased artery, and ultimately a larger LCFA graft. Since the pathologic segment was larger than previously thought, this enabled a more adequate surgical intervention that would otherwise have been insufficient had it been based on MRA alone. Microsurgical anastomosis was performed, and ICG imaging revealed patent vessels.
QUESTIONS
1. What is hypothenar hammer syndrome?
2. How is hypothenar hammer syndrome typically treated?
3. Why the intraoperative ICG use? What is the big deal?
4. Should surgeons consider using intraoperative ICG more routinely?
DISCUSSION
Hypothenar hammer syndrome refers to thrombosis/aneurysm of the ulnar artery at Guyon's canal in the wrist, with resultant arterial insufficiency in the ulnar artery distribution. This condition was first described by Von Rosen in 1934. Overall, the disease is quite rare, with about a 1.6% incidence rate and a male predominance of M:F = 9:1. The term "hypothenar hammer syndrome" was coined by Conn et al in 1970,2-6 reflecting the way the hook of the hamate bone acts as an anvil for the ulnar artery, which is subjected to repetitive forces (the hammer). The cause, for all intents and purposes, is effectively trauma. Risk factors include repeated vibration and occupations such as carpentry and mechanics. Ferris and Stone's landmark study7 suggested that underlying vascular anomalies, such as intimal hyperplasia, can predispose individuals to developing this disease. Patients typically describe unilateral symptoms in the fourth and/or fifth fingers of the hand. Symptoms can range from none to pain, pallor, paresthesia, weakness, cold intolerance, and eventually ulceration, necrosis, and gangrene of the distal digits. If an aneurysm is present, some patients may present with a pulsatile mass. The workup usually includes a positive Tinel's test, in which tapping over the ulnar artery distribution elicits pain. However, some studies have shown that up to 17% of patients with this condition have normal Allen's tests.8 The diagnosis is confirmed with contrast imaging (magnetic resonance angiography [MRA]) showing segmental occlusion or aneurysm of the ulnar artery and a resultant incomplete superficial arch.
Treatment options range from conservative lifestyle management to medication and, ultimately, to surgical intervention. Regimens typically begin with nonoperative lifestyle management: smoking cessation, avoidance of recurrent trauma and exacerbating factors, or the use of padded/protective gloves. Medical treatments are the second step and traditionally target the Raynaud's-like phenomenon with calcium channel blockers (CCBs) and antiplatelet (anti-PLT) medications.1,5,6 If these interventions are unsuccessful, surgical treatment follows. Endovascular fibrinolysis is indicated for thrombotic lesions without aneurysm that have been present for less than 2 weeks. The most common operative treatment is arterial ligation and reconstruction; this procedure is indicated for patients with a digital-brachial index less than 0.7 in whom conservative treatment measures have failed.1 The last-resort option is typically the Leriche procedure, resection of the diseased arterial segment without reconstruction, indicated if the digital-brachial index is greater than 0.7. Surgical treatment typically consists of dissection and resection of the diseased arterial segment with arterial reconstruction. The repair was first done by end-to-end anastomosis of the ulnar artery. Lifchez and Higgins's 2009 study showed that venous graft reconstruction had better long-term outcomes than end-to-end anastomosis of the ulnar artery.9 Dethmers10 and Endress11 showed that most venous grafts used in ulnar artery reconstruction were occluded at various long-term study endpoints. Temming first described the use of a lateral circumflex femoral artery (LCFA) arterial graft for reconstruction of the ulnar artery in the setting of hypothenar hammer syndrome.12,13 Ultimately, in 2017, De Niet showed that arterial grafts had better long-term patency than venous grafts.14 At a 63-month follow-up after LCFA reconstruction of the ulnar artery, 11 of 11 grafts were patent and 9 of 11 patients showed clinical improvement.
To our knowledge, the use of intraoperative ICG imaging has not been previously described in this setting. There are no established indications for intraoperative ICG, although its postoperative use and efficacy have been well described. The authors decided to use ICG imaging before arterial ligation and sympathectomy. This revealed an additional area of stenosis in the ulnar artery not seen on the preoperative MRA scan. Because the pathologic segment was larger than originally thought, a more radical dissection, ligation, sympathectomy, and LCFA graft ensued. Although the procedure was more extensive, it was also more appropriate, as it successfully identified all pathologic segments of the artery. Had the authors relied solely on MRA, the resection of the ulnar artery would have been incomplete, and it is likely that the patient's symptoms would not have fully resolved.
The authors would strongly consider using intraoperative ICG imaging in future cases to ensure that all pathologic segments were correctly identified and that an adequate surgical intervention was planned. More research needs to be done, however, as to the success and efficacy of intraoperative ICG compared with preoperative MRA alone.
SUMMARY
A 62-year-old male carpenter with a long history of pain, pallor, discoloration, and skin lesions along the right fourth and fifth fingers was diagnosed with hypothenar hammer syndrome. MRA revealed a 2- to 3-cm segmental occlusion in the distal ulnar artery at Guyon's canal and an incomplete superficial arch. After failed conservative treatment, the authors planned resection of the pathologic segment, repair with an LCFA graft, and sympathectomy. Intraoperative ICG imaging revealed an additional area of stenosed artery not seen on the MRA scan. This newly identified pathologic segment called for a more extensive dissection, sympathectomy, arterial ligation and excision, and ultimately a larger LCFA graft. Although the resulting procedure was more extensive than originally planned, it was also more adequate for this patient's pathology. Without the intraoperative ICG imaging, the additional area of stenosis would not have been seen, and the surgical management would likely have been only partially adequate. Microsurgical anastomosis was performed and postoperative ICG imaging revealed patent vessels. | 2020-09-22T05:07:44.007Z | 2020-08-05T00:00:00.000 | {
"year": 2020,
"sha1": "e3c59ddd15435d7fd49189ac7bc609d973bbfc15",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "e3c59ddd15435d7fd49189ac7bc609d973bbfc15",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
239992804 | pes2o/s2orc | v3-fos-license | Is that really a question? Going beyond factoid questions in NLP
Research in NLP has mainly focused on factoid questions, with the goal of finding quick and reliable ways of matching a query to an answer. However, human discourse involves more than that: it contains non-canonical questions deployed to achieve specific communicative goals. In this paper, we investigate this under-studied aspect of NLP by introducing a targeted task, creating an appropriate corpus for the task and providing baseline models of diverse nature. With this, we are also able to generate useful insights on the task and open the way for future research in this direction.
Introduction
Recently, the field of human-machine interaction has seen ground-breaking progress, with the tasks of Question-Answering (QA) and Dialog achieving even human-like performance. Probably the most popular example is Watson (Ferrucci et al., 2013), IBM's QA system, which was able to compete on the US TV program Jeopardy! and beat the best players of the show. Since then, and particularly with the rise of Neural Networks (NN), various high-performance QA and Dialog systems have emerged. For example, on the QQP task of the GLUE benchmark (Wang et al., 2018), the currently best-performing system achieves an accuracy of 90.8%. Despite this success, current QA and Dialog systems cannot be claimed to be on a par with human communication. In this paper we address one core aspect of human discourse that is under-researched within NLP: non-canonical questions.
Research in NLP has mainly focused on factoid questions, e.g., When was Mozart born?, with the goal of finding quick and reliable ways of matching a query to terms found in a given text collection. There has been less focus on understanding the structure of questions per se and the communicative goal they aim to achieve. State-of-the-art parsers are mainly trained on Wikipedia entries or newspaper texts, e.g., the Wall Street Journal, genres which do not contain many questions. Thus, the tools trained on them are not effective in dealing with questions, let alone distinguishing between different types. Even within more computational settings that include deep linguistic knowledge, e.g., PARC's Bridge QA system (Bobrow et al., 2007) which uses a sophisticated LFG parser and semantic analysis, the actual nature and structure of different types of questions is not studied in detail.
However, if we are aiming at human-like NLP systems, it is essential to be able to efficiently deal with the fine nuances of non-factoid questions (Dayal, 2016). Questions might be posed
• as a (sarcastic, playful) comment, e.g., Have you ever cooked an egg? (rhetorical)
• to repeat what was said or to express incredulity/surprise, e.g., He went where?
The importance of these communicative goals in everyday discourse can be seen in systems like personal assistants, chatbots and social media. For example, personal assistants like Siri, Alexa and Google should be able to distinguish an ability question of the kind Can you play XYZ? from a rhetorical question such as Can you be even more stupid? Similarly, chatbots offering psychotherapeutic help (Ly et al., 2017;Håvik et al., 2019) should be able to differentiate between a factoid question such as Is this a symptom for my condition? and a self-addressed question, e.g., Why can't I do anything right? In social media platforms like Twitter, apart from the canonical questions of the type Do you know how to tell if a brachiopod is alive?, we also find non-canonical ones like why am I lucky? Paul et al. (2011) show that 42% of all questions on English Twitter are rhetorical.
To enable NLP systems to capture non-factoid uses of questions, we propose the task of Question-Type Identification (QTI). The task can be defined as follows: given a question, determine whether it is an information-seeking question (ISQ) or a non-information-seeking question (NISQ). The former type of question, also known as a canonical or factoid question, is posed to elicit information, e.g., What will the weather be like tomorrow? In contrast, questions that achieve other communicative goals are considered non-canonical, non-information-seeking. NISQs do not constitute a homogeneous class: they comprise sub-types that are sometimes difficult to keep apart (Dayal, 2016). But even at the coarse-grained level of distinguishing ISQs from NISQs, the task is difficult: surface forms and structural cues are not particularly helpful; instead, Bartels (1999) and Dayal (2016) find that prosody and context are key factors in question classification.
Our ultimate objective in this paper is to provide an empirical evaluation of learning-centered approaches to QTI, setting baselines for the task and proposing it as a tool for the evaluation of QA and Dialog systems. However, to the best of our knowledge, there are currently no openly available QTI corpora that can permit such an assessment. The little previous research on the task has not contributed suitable corpora, leading to comparability issues. To address this, this paper introduces RQueT (rocket), the Resource of Question Types, a collection of questions in-the-wild labeled for their ISQ-NISQ type. As the first of its kind, the resource of 2000 annotated questions allows for initial machine-/deep-learning experimentation and opens the way for more research in this direction.
In this paper, we use this corpus to evaluate a variety of models in a wide range of settings, including simple linear classifiers, language models and other neural network architectures. We find that simple linear classifiers can compete with state-of-the-art transformer models like BERT (Devlin et al., 2019), while a neural network model combining features from BERT and the simple classifiers outperforms the rest of the settings.
Our contributions in this paper are three-fold. First, we provide the first openly-available QTI corpus, aiming at introducing the task and comprising an initial benchmark. Second, we establish suitable baselines for QTI, comparing systems of very different nature. Finally, we generate linguistic insights on the task and set the scene for future research in this area.
Relevant Work
Within modern theoretical linguistics, a large body of research exists on questions. Some first analyses focused on the most well-known types, i.e., deliberative, rhetorical and tag questions (Wheatley, 1955;Sadock, 1971;Cattell, 1973;Bolinger, 1978, to name only a few). Recently, researchers have studied the effect of prosody on the type of question as well as the interaction of prosody and semantics on the different types (Bartels, 1999;Dayal, 2016;Biezma and Rawlins, 2017;Beltrama et al., 2019;Eckardt, 2020, to name a few). It should also be noted that research in developing detailed pragmatic annotation schemes for human dialogs, thus also addressing questions, has a long tradition, e.g., Jurafsky et al. (1997);Novielli and Strapparava (2009);Bunt et al. (2016); Asher et al. (2016). However, most of this work is too broad and at the same time too fine-grained for our purposes: on the one hand, it does not focus on questions and thus these are not studied in the desired depth and on the other, the annotation performed is sometimes too fine-grained for computational approaches. Thus, we do not report further on this literature.
In computational linguistics, questions have mainly been studied within QA/Dialog systems (e.g., Alloatti et al. (2019); Su et al. (2019)) and within Question Generation (e.g., Sasazawa et al. (2019); Chan and Fan (2019)). Only a limited amount of research has focused on (versions of) the QTI task. One strand of research has used social media data (mostly Twitter) to train simple classifier models (Harper et al., 2009; Li et al., 2011; Zhao and Mei, 2013; Ranganath et al., 2016). Although this body of work reports interesting methods and findings, the research does not follow a consistent task definition, analysing slightly different things that range from "distinguishing informational and conversational questions" and "analysis of information needs on Twitter" to the identification of rhetorical questions. Additionally, these studies do not evaluate on a common dataset, making comparisons difficult. Furthermore, they all deal with social media data which, despite its own challenges (e.g., shortness, ungrammaticality, typos), is enriched with further markers like usernames, hashtags and URLs that can be successfully used for the classification. A different approach to the task is pursued by Paul et al. (2011), who crowdsource human annotations for a large number of Twitter questions, without applying any automatic recognition. More recently, the efforts by Zymla (2014), Bhattasali et al. (2015) and Kalouli et al. (2018) are more reproducible: Zymla (2014) develops a rule-based approach to identify rhetorical questions in German Twitter data, Bhattasali et al. (2015) implement a machine-learning system to identify rhetorical questions in the Switchboard Dialogue Act Corpus, and Kalouli et al. (2018) apply a rule-based multilingual approach to a parallel corpus based on the Bible.
RQueT: a New Corpus for QTI
The above overview of relevant work indicates that creating suitable training datasets is challenging, mainly due to the sparsity of available data. Social media data can be found in large numbers and contains questions of both types (Wang and Chua, 2010), but often the context in which the questions are found is missing or very limited, making their classification difficult even for humans. On the other hand, corpora with well-edited text such as newspapers, books and speeches are generally less suitable, as questions, in particular NISQs, tend to appear more often in spontaneous, unedited communication. Thus, to create a suitable benchmark, we need to devise a corpus fulfilling three desiderata: a) containing naturally-occurring data, b) featuring enough questions of both types, and c) providing enough context for disambiguation.
Data Collection
To this end, we find that the CNN transcripts1 fulfill all three desiderata. We randomly sampled 2000 questions from the years 2006-2015, from settings featuring a live discussion/interview between the host of a show and guests. Questions are detected based on the presence of a question mark; this method misses the so-called "declarative" questions (Beun, 1989), which neither end with a question mark nor have the syntactic structure of a question, but this compromise is necessary for this first attempt at a larger-scale corpus. Given the importance of the context for the distinction of the question types (Dayal, 2016), along with the question we also extracted two sentences before and two sentences after the question as context. For each of these sentences, as well as for the question itself, we additionally collected speaker information. Table 1 shows an excerpt of our corpus. Unfortunately, due to copyright reasons, we can only provide a shortened version of this corpus containing 1768 questions; this can be obtained via the CNN transcripts corpus made available by Sood (2017).2 The results reported here concern this subcorpus, but we also provide the results for the entire corpus of 2000 questions in Appendix A. Our corpus is split in an 80/20 fashion, with a training set of 1588 and a test set of 180 questions (or 1800/200 for the entire corpus, respectively).
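The extraction step described above is straightforward to reproduce. A minimal sketch follows (field names and the shape of the transcript input are our assumptions, not the authors' actual code):

def extract_questions(turns, context_size=2):
    """turns: list of (speaker, sentence) pairs from one CNN transcript, in order of
    appearance. Returns one record per question-mark sentence, together with two
    sentences of context on each side and the speaker of every sentence involved."""
    records = []
    for i, (speaker, sentence) in enumerate(turns):
        if not sentence.strip().endswith("?"):
            continue  # only question-mark sentences are kept; declarative questions are missed
        records.append({
            "question": sentence,
            "speaker": speaker,
            "ctx_before": turns[max(0, i - context_size):i],   # [(speaker, sentence), ...]
            "ctx_after": turns[i + 1:i + 1 + context_size],
        })
    return records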
Data Annotation
The RQueT corpus is annotated with a binary ISQ/NISQ scheme and does not contain a finer-grained annotation of the specific sub-type of NISQ: we find it necessary to first establish the task in its binary formulation. Each question of our corpus was annotated by three graduate students of computational linguistics. The annotators were only given the definition of each type of question and an example, as presented in Section 1, and no further instructions. The lack of more detailed instructions was deliberate: for one, we wanted to see how easy and intuitive the task is for humans, given that they perform it in daily communication. For another, to the best of our knowledge, there are no previous annotation guidelines or best practices available.
The final label of each question was determined by majority vote, with an inter-annotator agreement of 89.3% and Fleiss Kappa at 0.58. This moderate agreement reflects the difficulty of the task even for humans and hints at the improvement potential of the corpus through further context, e.g., in the form of intonation and prosody (see e.g., Bartels 1999). The resulting corpus is an (almost) balanced set of 944 (1076 for the entire corpus) ISQ and 824 (924 for the entire corpus) NISQ. The same balance is also preserved in the training and test splits. Table 2 gives an overview of RQueT.
Footnote 1: http://transcripts.cnn.com/TRANSCRIPTS/
Footnote 2: See https://github.com/kkalouli/RQueT
Table 1: Sample of the corpus format. Each row contains a sentence and its context before and after. The question and its context also hold the speaker information. Each question is separately annotated for its type.
Sentence | Text | Speaker | QT
Ctx 2 Before | This is humor. | S. BAXTER |
Ctx 1 Before | I think women, female candidates, have to be able to take those shots. | S. BAXTER |
Question | John Edwards got joked at for his $400 hair cut, was it? | S. BAXTER | NISQ
Ctx 1 After | And you know, he was called a Brett Girl. | S. BAXTER |
Ctx 2 After | This, is you know, the cut and thrust of politics. | S. BAXTER |
RQueT as a Benchmarking Platform
We used the RQueT corpus to evaluate a variety of models, 3 establishing appropriate baselines and generating insights about the nature and peculiarities of the task.
Lexicalized and Unlexicalized Features
Following previous literature (Harper et al., 2009; Li et al., 2011; Zymla, 2014; Bhattasali et al., 2015; Ranganath et al., 2016) and our own intuitions, we extracted 6 kinds of features, 2 lexicalized and 4 unlexicalized, for a total of 16 distinct features (a sketch of how the unlexicalized features can be computed follows the list and table below):
1. lexicalized: bigrams and trigrams of the surface forms of the question itself (Q), of the context-before (ctxB1 and ctxB2, for the first and second sentence before the question, respectively) and of the context-after (ctxA1 and ctxA2, for the first and second sentence after the question, respectively)
2. lexicalized: bigrams and trigrams of the POS tags of the surface forms of the question itself (Q), of the context-before (ctxB1, ctxB2) and of the context-after (ctxA1 and ctxA2)
3. unlexicalized: the length difference between the question and its first context-before (lenDiffQB) and between the question and its first context-after (lenDiffQA), as real-valued features
4. unlexicalized: the overlap between the words in the question and its first context-before/after, both as an absolute count (wOverBAbs and wOverAAbs for context before/after, respectively) and as a percentage (wOverBPerc and wOverAPerc for context before/after, respectively)
5. unlexicalized: a binary feature capturing whether the speaker of the question is the same as the speaker of the context-before/after (speakerB and speakerA, respectively)
6. unlexicalized: the cosine similarity of the InferSent (Conneau et al., 2017) embedding of the question to the embedding of the first context-before/after4 (similQB and similQA, respectively)
Footnote 3: https://github.com/kkalouli/RQueT
Table 2: Distribution of question type in the shortened and the entire RQueT corpus, respectively.
Split | ISQ | NISQ | All
Train | 847 (969) | 741 (831) | 1588 (1800)
Test | 97 (107) | 83 (93) | 180 (200)
Total | 944 (1076) | 824 (924) | 1768 (2000)
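The unlexicalized features listed above can be computed as follows. This is an illustrative sketch only: the exact tokenization and the embedding function (any sentence encoder returning a 1-D numpy vector, e.g., InferSent) are our assumptions.

import numpy as np

def unlexicalized_features(question, q_speaker, ctx_before, ctx_after, embed):
    """ctx_before / ctx_after: (speaker, sentence) for the first context sentence
    on each side; embed: sentence-embedding function. Feature names follow the paper."""
    q_tok = set(question.lower().split())
    feats = {}
    for side, (spk, sent) in (("B", ctx_before), ("A", ctx_after)):
        c_tok = set(sent.lower().split())
        feats["lenDiffQ" + side] = len(question.split()) - len(sent.split())
        overlap = len(q_tok & c_tok)
        feats["wOver" + side + "Abs"] = overlap
        feats["wOver" + side + "Perc"] = overlap / max(len(q_tok), 1)
        feats["speaker" + side] = int(spk == q_speaker)
        q_vec, c_vec = embed(question), embed(sent)
        feats["similQ" + side] = float(
            np.dot(q_vec, c_vec) / (np.linalg.norm(q_vec) * np.linalg.norm(c_vec)))
    return feats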
We used these feature combinations to train three linear classifiers for each setting: a Naive Bayes classifier (NB), a Support Vector Machine (SVM) and a Decision Tree (DT). These traditional classifiers were trained with the LightSide workbench. The Stanford CoreNLP toolkit (Toutanova et al., 2003) was used for POS tagging.
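The paper trains these classifiers with the LightSide workbench; purely for illustration, an equivalent setup in scikit-learn might look as follows (the tiny example data are hypothetical placeholders, and the n-gram vectorizer covers only the surface-form features of the question):

from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline

# hypothetical placeholder data; in practice the RQueT training split would be used
train_questions = ["what do you make of that?", "how could we not act?"]
train_labels = ["ISQ", "NISQ"]

for clf in (MultinomialNB(), LinearSVC(), DecisionTreeClassifier()):
    model = make_pipeline(CountVectorizer(ngram_range=(2, 3)), clf)  # bigrams and trigrams
    model.fit(train_questions, train_labels)
    print(clf.__class__.__name__, model.predict(["do you agree with that?"]))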
Fine-tuning Pretrained BERT
Given the success of contextualized language models and their efficient modeling of semantic information, e.g., Jawahar et al. (2019); Lin et al. (2019), we experiment with BERT (Devlin et al., 2019) for this task. Since the semantic relations between the question and its context are considered the most significant predictors of QT, contextualized models should be able to establish a clear baseline. The QTI task can be largely seen as a sequence classification task, much as Natural Language Inference and QA. Thus, we format the corpus into appropriate BERT sequences, i.e., a question-only sequence, a question/context-before sequence or a question/context-after sequence, and fine-tune the pretrained BERT (base) model on that input. We fine-tune with the hyperparameters recommended by the authors. The best models train for 2 epochs, have a batch size of 32 and a learning rate of 2e-5. By fine-tuning the embeddings, we simultaneously solve the QTI task, which is the performance we report on in this setting. The fine-tuning is conducted through HuggingFace (https://huggingface.co/).
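As an illustration only (not the authors' exact training script), fine-tuning BERT on a question/context-after pair with the HuggingFace transformers library could be sketched as below; the label encoding and the single-example loop are assumptions made for brevity:

import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

question = "John Edwards got joked at for his $400 hair cut, was it?"
context_after = "And you know, he was called a Brett Girl."
enc = tokenizer(question, context_after, return_tensors="pt", truncation=True)
label = torch.tensor([1])  # assumed encoding: 1 = NISQ, 0 = ISQ

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)  # learning rate reported in the paper
model.train()
for _ in range(2):  # 2 epochs as in the paper; batching over the full corpus omitted
    loss = model(**enc, labels=label).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()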
BERT Embeddings as Fixed Features
The fine-tuned BERT embeddings of Section 4.2 can be extracted as fixed features to initialize further classifier models (cf. Devlin et al. 2019). We input them to the same linear classifiers used in section 4.1, i.e., NB, SVM and DT, but also use them for neural net (NN) classifiers because such architectures are particularly efficient in capturing the high-dimensionality of these inputs. To utilize the most representative fine-tuned BERT embeddings, we experiment with the average token embeddings of layer 11 and the [CLS] embedding of layer 11. We chose layer 11 as the higher layers of BERT have been shown to mostly capture semantic aspects, while the last layer has been found to be very close to the actual classification task and thus less suitable (Jawahar et al., 2019;Lin et al., 2019). We found that the [CLS] embedding performs better and thus, we only report on this setting.
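A minimal sketch of extracting the layer-11 [CLS] vector as a fixed feature is given below; it is shown with the off-the-shelf pretrained weights for brevity, whereas the paper uses the fine-tuned model from Section 4.2:

import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased", output_hidden_states=True)
model.eval()

enc = tokenizer("Why did you do that?", "Because I had to.", return_tensors="pt")
with torch.no_grad():
    out = model(**enc)
# hidden_states[0] is the embedding layer, so index 11 corresponds to layer 11
cls_layer11 = out.hidden_states[11][0, 0, :]  # [CLS] token vector of layer 11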
Moreover, as shown in Section 5, some of the unlexicalized features of Section 4.1 lead to competitive performance with the pretrained BERT models. Thus, we decided to investigate whether the most predictive unlexicalized feature can be efficiently combined with the BERT fine-tuned embeddings and lead to an even higher performance. To this end, each linear classifier and NN model was also trained on an extended vector, comprising the CLS-layer11 fine-tuned BERT embedding of the respective model, i.e., only of the question (Q-Embedding), of the question and its (first) context-before (Q-ctxB-Embedding) and of the question and its (first) context-after (Q-ctxA-Embedding) as a fixed vector, and an additional dimension for the binary encoded unlexicalized feature.
We experimented with three NN architectures and NN-specific parameters were determined via a grid search separately for each model. Each NN was optimized through a held-out validation set (20% of the training set). First, we trained a Multi-Layer Perceptron (MLP) with a ReLU activation and the Adam optimizer. Second, we trained a feedforward (FF) NN with 5 dense hidden layers and the RMSprop optimizer. Last, we trained an LSTM with 2 hidden layers and the RMSprop optimizer. Both the FF and the LSTM use a sigmoid activation for the output layer, suitable for the binary classification. All NNs were trained with sklearn.
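One of the NN settings, sketched with scikit-learn under the assumption of placeholder data (the real input would be the fine-tuned CLS-layer11 embeddings plus the binary speaker-after dimension), could look like this:

import numpy as np
from sklearn.neural_network import MLPClassifier

# hypothetical placeholders standing in for the real fine-tuned embeddings and labels
X_cls = np.random.rand(10, 768)          # CLS-layer11 embeddings (n_samples x 768)
speaker_a = np.random.randint(0, 2, 10)  # binary speaker-after feature
y = np.random.randint(0, 2, 10)          # question-type labels

X = np.hstack([X_cls, speaker_a.reshape(-1, 1)])  # extended vector: embedding + speaker dimension
clf = MLPClassifier(activation="relu", solver="adam", max_iter=500)
clf.fit(X, y)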
Quantitative Observations
The results of the training settings are presented in Table 3. Recall that these results concern the corpus of 1768 questions. The results on the entire corpus can be found in Appendix A. For space reasons, we only present the most significant settings and results. For the lexicalized features, all models use both the surface and the POS n-grams as their combination proved best -the separate settings are omitted for brevity, so e.g., Q tokens/POS stands for a) the question's bigrams and trigrams and b) the question's POS bigrams and trigrams. All performance reported in Table 3 represents the accuracy of the models.
The careful benchmarking presented in Table 3 allows for various observations. We start off with the diverse combinations of lexicalized and unlexicalized features. First, we see that training only on the question, i.e., on its n-grams and POS tags, can serve as a suitable baseline with an accuracy of 62.7% for NB. Adding the first context-before improves performance and further adding the second context-before improves it even further at 72.7% for NB. A similar performance leap is observed when the first context-after is added to the question (73.3% for NB), while further adding the second context-after does not change the picture. Since adding the first context-before and -after to the question increases accuracy, we also report on the setting where both first context-before and -after are added to the question. This does indeed boost the performance even more, reaching an accuracy of 75% for NB. Given that the second context-before is beneficial for the Q+ctxB1+ctxB2 setting, we add it to the previously best model of 75% and find out that their combination rather harms the accuracy. Experimenting with both contexts-before and -after and the question does not lead to any improvements either. The combinations of the lexicalized features show that the best setting is the one where the question is enriched by its first context-before and -after (75%).

[Table 3: Accuracy of the various classifiers and feature combinations (settings). A checkmark means that this feature was present in this setting. PT stands for the pretrained BERT embeddings and FN for the fine-tuned ones. Bolded figures are the best performances across types of classifiers. The starred figure is the best performing ensemble model across settings. wOverAbs and wOverPerc are omitted for brevity.]
We make a striking observation with respect to the unlexicalized features. Training only on the speaker-after, i.e., on whether the speaker of the question is the same as the speaker of the first context-after, and ignoring entirely the question and context representation is able to correctly predict the QT in 77.7% of the cases. This even outperforms the best setting of the lexicalized features. The speaker-before does not seem to have the same expressive power and training on both speaker features does not benefit performance either. We also find that the rest of the unlexicalized features do not have any impact on performance because training on each of them alone hardly outperforms the simple Q tokens/POS baseline, while by training on all unlexicalized features together we do not achieve better results than simply training on speaker-after.

Based on the finding that the speaker-after is so powerful, we trained hybrid combinations of lexicalized features and the speaker information. First, the speaker-before is added to the Q+ctxB1+ctxB2, which is the best setting of contexts-before, but we do not observe any significant performance change. This is expected given that speaker-before alone does not have a strong performance. Then, the speaker-after is added to the setting Q+ctxA1 and the performance reaches 76.1% (for DT), approaching the best score of speaker-after. The addition of speaker-before to this last setting does not improve performance. On the other hand, adding the speaker-after information to the best lexicalized setting (Q+ctxB1+ctxA1) does not have an effect, probably due to a complex interaction between the context-before and the speaker. This performance does not benefit either from adding the second context-before (which proved beneficial before) or adding the other unlexicalized features.

Moving on, we employ the pretrained BERT embeddings to solve the QTI task. Here, we can see that the model containing the question and the context-after (Q-ctxA-Embedding) is the best one with 80.1%, followed by the model containing the question and the context-before (Q-ctxB-Embedding, 78.3%). Worst-performing is the model based only on the question (Q-Embedding). This simple fine-tuning task shows that contextualized embeddings like BERT are able to capture the QT more efficiently than lexicalized and unlexicalized features; they even slightly outperform the powerful speaker feature. This means that utilizing these fine-tuned embeddings as fixed input vectors for further classifiers can lead to even better results, and especially, their combination with the predictive speaker information can prove beneficial.
In this last classification setting, we observe that the classifiers trained only on the fine-tuned BERT embeddings deliver similar performance to the finetuning task itself. This finding reproduces what is reported by Devlin et al. (2019). However, the real value of using this feature-based approach is highlighted through the addition of the speaker information to the contextualized vectors. The speaker information boosts performance both in the setting of fine-tuned Q-Embedding and in the setting finetuned Q-ctxA-Embedding. In fact, the latter is the best performing model of all with an accuracy of 84.4%. Adding the speaker-before information to the fine-tuned Q-ctxB-Embedding does not have an impact on performance due to the low impact of the speaker-before feature itself.
Qualitative Interpretation
The results presented offer us interesting insights for this novel task. First, they confirm the previous finding of the theoretical and computational literature that context is essential in determining the question type. Both the lexicalized and the embeddings settings improve when context is added. Concerning the lexicalized settings, we conclude that the surface and syntactic cues present within the question and its first context-after are more powerful than the cues present within the question and the first context-before. This is consistent with the intuition that whatever follows a question tends to have a more similar structure to the question itself than whatever precedes it: no matter if the utterer of the question continues talking or if another person addresses the question, the attempt is to stay as close to the question as possible, to either achieve a specific communication goal or to actually answer the question, respectively. However, our experiments also show that combining the first context-before and -after with the question does indeed capture the most structural cues, generating the insight that one sentence before and after the question is sufficient context for the task at hand. Interestingly, we can confirm that the second context-after is not useful to the classification of the QT, probably being too dissimilar to the question itself. Table 4 shows examples of the most predictive structural cues for the best setting of the lexicalized classifiers (Q+ctxB1+ctxA1).
Table 4:
ISQ   you feel, what do you, do you agree, make of that, you expect, me ask you, why did you, how did you
NISQ  why aren't, and should we, COMMA how about, how could, do we want, can we

[Figure 1: Interactive visualization of the wrongly predicted instances of the models fine-tuned Q-ctxB-Embedding and fine-tuned Q-ctxA-Embedding+speakerA. Based on this visualization, we can observe sentences with similar patterns and how these are learned from the models. Some sentences are ambiguous having both patterns; thus, we need a third model for our ensemble.]

Training on non-linguistic unlexicalized features does not boost performance. However, our work provides strong evidence that the speaker meta-information is of significant importance for the classification. This does not seem to be a peculiarity of this dataset, as later experimentation with a further English dataset and with a German corpus shows that the speaker information is consistently a powerful predictor. Additionally, we can confirm from Appendix A that the speaker feature has the same behavior when trained and tested on the entire corpus. To the best of our knowledge, previous literature has not detected the strength of this feature. From the prediction power of this feature, it might seem that information on the question and its context is not necessary at all. However, we show that the addition of the linguistic information of the question and its context through the fine-tuned embeddings provides a clear boost for the performance. The importance of similar linguistic unlexicalized features has to be investigated in future work. In fact, for the current work, we also experimented with the topic information, i.e., based on topic modeling, we extracted a binary feature capturing whether the topic of the question and the context-after is the same or not. However, this feature did not prove useful in any of the settings and was thus omitted from the analysis. Future work will have to investigate whether a better topic model leads to a more expressive binary feature and whether other such features, such as sentiment extracted from a sentiment classification model, can prove powerful predictors.
Concerning the distributional and NN methods, this is the first work employing such techniques for the task and confirming the findings of the more traditional machine learning settings. Fine-tuning the pretrained BERT embeddings reproduces what we showed for the standard classifiers: the context and especially the context-after boosts the performance. This finding is also confirmed when treating the fine-tuned BERT embeddings as standard feature vectors and further training on them. Most importantly, this setting allows for the expansion of the feature vector with the speaker information: this then leads to the best performance. Unsurprisingly, the speaker-before is not beneficial for the classification, as it was not itself a strong predictor. Finally, we also observe that the results reported for this smaller corpus are parallel to the results reported for the entire corpus (see Appendix A).
Further Extension & Optimization
Studying Table 3 raises the question of whether our best-performing model, fine-tuned Q-ctxA-Embedding+speakerA, can be further improved and, crucially, whether the context-before can be of value. With our lexicalized models, we show that the best models are those exploiting the information of the context-before, in addition to the question and the context-after. However, all of our BERT-based models have been trained either on the combination of question and context-before or on the combination of question and context-after, but never on the combination of all three. The inherent nature of the BERT model, which requires the input sequence to consist of a pair, i.e., at most two distinct sentences separated by the special token [SEP], is not optimized for a triple input. On the other hand, "tricking" BERT into considering the context-before and the question as one sentence delivers poor results. Thus, we decided to exploit the power of visualization to see whether an ensemble model combining our so-far best performing model, fine-tuned Q-ctxA-Embedding+speakerA, with our context-before BERT-based model, fine-tuned Q-ctxB-Embedding, would be beneficial.
To this end, we created a small interactive Python visualization to compare the two models, using UMAP (McInnes et al., 2018) as a dimensionality reduction technique and visualizing the datapoints in a 2D scatter plot. We computed positions jointly for both models and projected them into the same 2D space using cosine similarity as the distance measure. As we are interested in potential common wrong predictions between the models, we only visualize wrongly classified samples, and group them by two criteria: the model used (color-encoded) and the gold label (symbol-encoded).
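The visualization described above could be reproduced along the following lines (a sketch only; the placeholder arrays stand in for the embeddings of the misclassified samples of each model, and the symbol-encoding of gold labels is omitted for brevity):

import numpy as np
import umap
import matplotlib.pyplot as plt

# placeholders for the embeddings of the wrongly classified samples of the two models
X_a = np.random.rand(20, 768)
X_b = np.random.rand(15, 768)

reducer = umap.UMAP(metric="cosine", random_state=0)
coords = reducer.fit_transform(np.vstack([X_a, X_b]))  # joint 2D projection for both models

plt.scatter(coords[:len(X_a), 0], coords[:len(X_a), 1], label="Q-ctxA-Embedding+speakerA errors")
plt.scatter(coords[len(X_a):, 0], coords[len(X_a):, 1], label="Q-ctxB-Embedding errors")
plt.legend()
plt.show()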
Examining the visualization of Figure 1 (left) we observe that there is no overlap between the wrongly predicted labels of the two models. This means that training an ensemble model is a promising way forward. Additionally, through the interactive visualization, we are guided to the most suitable ensemble model. Particularly, we see some common patterns for the wrongly predicted labels for each of the models. The fine-tuned Q-ctxA-Embedding+speakerA has a better performance in predicting ISQ, whereby the decision seems to be influenced by the speaker feature (i.e., if the question and context-after have different speakers, the model predicts ISQ). However, the fine-tuned Q-ctxB-Embedding model seems to learn a pattern of a context-before being a question; in such cases, the target question is predicted as NISQ. In the ground truth we have ambiguous cases though, where questions have both patterns. Thus, although it seems that the two models fail on different instances and that they could thus be combined in an ensemble, they would alone likely fail in predicting the ambiguous/controversial question instances. Instead, surface and POS features of the questions and their contexts should be able to differentiate between some of the controversial cases. To test this, we created an ensemble model consisting of the two models and the best lexicalized model holding such features (Q+ctxB1+ctxA1). First, this ensemble model checks whether finetuned Q-ctxA-Embedding+speakerA and fine-tuned Q-ctxB-Embedding predict the same label. If so, it adopts this label too. Otherwise, it picks up the prediction of Q+ctxB1+ctxA1. With this ensemble approach, we are indeed able to improve our so-far best model by 4%, reaching an accuracy of 88.3%, as shown in the last entry of Table 3.
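The ensemble decision rule described above reduces to a few lines; the sketch below assumes the three per-question predictions are already available:

def ensemble_predict(pred_ctxA_speaker, pred_ctxB, pred_lexicalized):
    # if the two BERT-based models agree, keep their label;
    # otherwise fall back to the lexicalized Q+ctxB1+ctxA1 model
    if pred_ctxA_speaker == pred_ctxB:
        return pred_ctxA_speaker
    return pred_lexicalized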
At this point, two questions arise. First, the reader might wonder whether this result means that the task is virtually "solved". Recall that the inter-annotator agreement was measured at 89.3% and thus, it might seem that our ensemble model is competitive with that. However, this is not the case: if we observe the Fleiss Kappa, we see that it only demonstrates moderate agreement. This could be due to the difficulty of the task, as mentioned before, but it also shows that the task formulation has room for improvement. In a post-annotation session, our annotators reported that some of the uncertainty and disagreement could be tackled with multi-modal data, where audio or video data of the corresponding questions is also provided. Additionally, higher agreement could have been achieved with more annotators. Thus, our current work offers room for improvement, while providing strong baselines. Second, the question is raised whether this feature combination is indeed the best setting for all purposes of this task; the answer to this depends on what the ultimate goal of this task is. If the ultimate goal is application-based, where a model needs to determine whether a question requires a factoid answer (or not) in a real-life conversation, the trained model should not include the context-after as a feature, as this would exactly be what we want to determine based on the model's decision. However, if the goal is to automatically classify questions of a given corpus to generate linguistic insights, then the trained model can include all features. The evaluation undertaken here serves both these purposes by detailing all settings. On the one hand, we show that the models achieve high performance even when removing the context-after and that therefore an application-based setting is possible. On the other hand, we also discover which feature combination will lead to the best predictions, generating theoretical insights and enabling more research in this direction.
Conclusion
In this paper, we argued for the need for the Question-Type Identification task, in which questions are distinguished based on the communicative goals they are set to achieve. We also provided the first corpus to be used as a benchmark. Additionally, we studied the impact of different features and established diverse baselines, highlighting the peculiarities of the task. Finally, we were able to generate new insights, which we aim to take up in our future work.
Appendix A: Performance Results on the entire RQueT The following table collects all performance results when training on the entire RQueT corpus of 2000 questions. Although we cannot make this whole corpus available, we would like to report on the performance to show how our findings are parallel in both variants of the corpus and that the smaller size of the corpus we make available does not obscure the overall picture. Table 5: Accuracy of the various classifiers and feature combinations (settings) on the entire RQueT corpus of 2000 questions. A checkmark means that this feature was present in this setting. PT stands for the pretrained BERT embeddings and FN for the fine-tuned ones. Bolded figures are the best performances across types of classifiers. The stared figure is the best performing ensemble model across settings. wOverAbs and wOverPerc are omitted for brevity. | 2021-10-26T22:50:50.902Z | 2021-01-01T00:00:00.000 | {
"year": 2021,
"sha1": "d82d574f363ef801402819a4f3c922a54d8d90a6",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "ACL",
"pdf_hash": "d82d574f363ef801402819a4f3c922a54d8d90a6",
"s2fieldsofstudy": [
"Computer Science",
"Linguistics"
],
"extfieldsofstudy": []
} |
189068660 | pes2o/s2orc | v3-fos-license | TOPOGRAPHY DEPENDENT VERTICAL WIND DISTRIBUTION ESTIMATION
We propose a novel technique for the estimation of the vertical wind distribution. The proposed modification of the logarithmic profile relies on introducing a topography-dependent dispersion parameter calculated from the fractal dimension of the topography. Initial results compared against full-scale wind measurements show high agreement. The proposed methodology brings the promise of precise a priori calculation of the wind profile in the case of non-flat terrain with non-homogeneous surface roughness, improving the precision of wind potential estimation for the wind energy sector.
Introduction
Wind energy is nowadays an important part of the energy sector in many developed countries (EU, USA, China) and it is still growing. Although it is a mature technology, a lot of potential can still be found in the grid, in conversion and in basic aerodynamics as well. In opposition to other renewable sources, like solar or biomass, the initial estimation of potential for wind energy is challenging due to its highly inhomogeneous, geography-dependent distribution. Although detailed descriptions of the wind climate exist [2,6], they are not able to provide enough information for the detailed examination of candidate sites for wind development at the microscale [4].

One of the crucial parameters for wind energy potential estimation is the wind and its vertical velocity distribution, which is a function of the terrain (see Figure 1).
Fig. 1. Wind vertical distribution dependence on terrain characteristics
As visible in Fig. 1, the average velocity grows from zero at ground level (the no-slip boundary condition, a foundation of classical fluid mechanics) to the geostrophic value at some altitude. The growth rate depends on the terrain characteristics. As the power that can be extracted from the wind is proportional to the velocity cubed, knowledge of the velocity at the desired hub height, or of the vertical velocity distribution, is crucial.

The vertical distribution needed for extrapolation of the velocity to the desired altitude can be only roughly estimated before the measurement.

The distribution, typically a log law, is a function of the surface type; its parameter, the surface roughness, can be found in the literature categorized by terrain type. A comparison of values between sources [1,3,5] shows, however, significant discrepancy, leading to errors in wind power estimation. Moreover, the values available in the literature are valid only for flat homogeneous terrain, which in many cases is a highly idealized assumption.

The question of how to include the topography in wind analysis, incorporating the effects of non-homogeneous terrain, changes in surface roughness and complicated, non-flat topography, is a challenge and a live topic not only in wind energy but in general micrometeorology and pollution dispersion studies as well [1].
Purpose
The aim of our current work is therefore to propose a tool for incorporating the influence of the topography surrounding a site on the vertical wind distribution, allowing precise a priori calculation of that distribution.

Our idea relies on describing the level of complexity of the topography by its fractal dimension and on integrating it with the classical logarithmic relation for the vertical wind distribution.

Finally, we will show positive validation of the proposed methodology by comparing our results with real wind data gathered at the site of interest.
Fractal dimension
We are accustomed to the topological or geometric definition of dimension: we know that a line is one-dimensional, a plane is two-dimensional and our space is three-dimensional. The intuitive definition of dimension is the minimal number of coordinates required to describe the investigated shape (e.g. space, cube, torus, etc.).

We can imagine a lot of shapes. Some of them are smooth, e.g. an ice rink or glass; others are rough, e.g. mountains or clouds. To distinguish to what extent one shape is more complex than another, some measure is needed. This class of measure is the fractal dimension, which in the current work will be used for the description of the topography surrounding the site.

The fractal dimension is a correspondence which assigns a real number to any shape. In the case of a square, the fractal dimension is equal to the topological dimension, i.e. 2, but more complicated shapes are described by non-obvious, non-intuitive, non-integer values.
A fractal dimension can be applied to every set, not only in "our space", but generally in any Euclidean space, as well as in all metric spaces (although in the current work we focus on objects in our three-dimensional Euclidean space: the terrain surrounding the wind measurement station).

Briefly, the fractal dimension measures how complicated the examined form is. It characterizes the complexity of an object by the rate of volume growth as the measurements become more and more precise. The basic principle for the calculation of the fractal dimension is that volume and precision are related by a power law. Hence, based on these two parameters, it is possible to calculate the fractal dimension.

As fractal and fractal dimension are actually capacious terms, hard to define formally from a mathematical point of view, a diversity and multitude of definitions exist in the literature (at least ten to our knowledge). From this set we decided to use the Minkowski dimension, for the reason that its definition allows counting the fractal dimension of an arbitrary shape. That feature is crucial for our purpose: the natural terrain under investigation is of unknown shape and surely not simple to describe with a mathematical formula (see Section 2.3).
Calculation of fractal dimension: data extraction
In the first step, the topography of the place of interest, the surroundings of the wind measurement station, has to be digitized or extracted from an available database.

One of the advantages and strong points of the presented method is that, for most places of potential interest, GIS (geographic information system) data are publicly available.

In the presented case we used geospatial data from the Norwegian Mapping Authority via the website http://www.norgeskart.no.
Firstly, a grid was created and heights were located in that grid. To achieve a grid with a resolution of 1 m, determination of this distance on the map was necessary. The ground distance corresponding to a given difference in latitude (geographic coordinates) is constant everywhere on Earth, so the exact latitude values giving 1 m spacing could be determined directly. Next, the distances between longitudes were calculated on the north and south margins of the considered area (because this distance varies across the Earth). The differences at the extremes are negligible, hence the central value was assumed as the distance giving 1 m between longitudes. In this manner the mesh grid was created (see Table 1). For the sake of keeping the high resolution of 1 m and a finite computing time, we decided in this first step of the research to focus on the area within a 500 m radius of the measurement tower.
The data needed to create the height map were provided by a script written in the Ruby programming language. The script connected to the web application programming interface, then aggregated and deserialized the JSON (JavaScript Object Notation) data. This set was allocated to the adequate place in Table 1. In the next step, the acquired data were processed in MATLAB for visualization (see the map in Fig. 2) and for the further calculations of the fractal dimension.
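Purely as an illustration of this step (the actual Ruby script and the Norwegian Mapping Authority API calls used in the paper are not reproduced here), a height grid could be assembled in Python roughly as follows; the endpoint URL, the JSON field name and the coordinate ranges are hypothetical placeholders:

import json
import urllib.request
import numpy as np

# Hypothetical elevation endpoint; replace with the real API of the data provider.
ELEVATION_URL = "https://example.org/elevation?lat={lat}&lon={lon}"

def fetch_height(lat, lon):
    with urllib.request.urlopen(ELEVATION_URL.format(lat=lat, lon=lon)) as resp:
        return json.loads(resp.read())["height"]  # assumed JSON field name

# placeholder coordinate ranges; the paper uses a 1 m grid within a 500 m radius of the mast
lats = np.linspace(63.66, 63.67, 50)
lons = np.linspace(8.30, 8.31, 50)
heights = np.array([[fetch_height(la, lo) for lo in lons] for la in lats])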
Calculation of fractal dimension: box counting method
In the current step, based on the pre-processed data, the fractal dimension can be calculated.

Once the matrix of heights is prepared, it is possible to create a map of the terrain (Fig. 1). Here we can start with the investigation of the fractal dimension.
The definition of the Minkowski dimension is given as:

dim_M(S) = lim_{ε→0} [log N(ε) / log(1/ε)],

where ε is the size of the cube and N(ε) is the minimal number of cubes of size ε which are necessary to cover the considered shape S.
Unfortunately, due to the limitation of the available data to a 1 m resolution, we cannot, according to the definition, obtain convergence of the cube sizes to zero; nevertheless, the cut-off at 1 m will not influence the results in the considered case. The fractal dimension can be found by approximating the relation between the number of covering cubes and the cube size, which was done with the built-in MATLAB linear least squares (LLSQ) method, satisfactory for this purpose.

To improve the reliability of the calculations, an improvement of the standard box-counting procedure for fractal dimension calculation has been applied, which is presented schematically in Figure 3. This recipe is repeated n times; we have decided on n equal to 33. To achieve more comprehensive results, the procedure was realized on 100 subsets of A (A is the matrix including 100 sequences of cube sizes) with 33 elements in each subset. The first 30 elements are random integers from 1 to 120; the last three are always 250, 1 and 2 (to provide the start and end of the segment).

Such a procedure resulted in two sequences of numbers in each subset of A: log(N(ε)) and log(1/ε). The final function log(N(ε)) = F(log(1/ε)) can be presented on a log-log plot.

The choice of the base of the logarithm is free as a consequence of logarithm properties; we should only decide on a number from the segment (0, +∞)\{1}. The slope of this plot, found by the Linear Least Squares method, is the searched fractal dimension of the terrain.

Obtaining the fractal dimension in this way, we know the consequences of different cube sizes. To avoid controversy about the correctness and arbitrariness of the chosen sizes, we repeated the determination of the numbers for 100 different sets. Therefore we obtain 100 fractal dimensions of one and the same shape, calculated on different sets of cube sizes. From this set we select the central value (median) and this result serves us for further calculations.
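A minimal Python sketch of a basic box-counting estimate over a height map is given below for illustration; it does not reproduce the paper's exact procedure (MATLAB, 100 random subsets of cube sizes, median of the resulting dimensions), and the covering rule used here is one common variant:

import numpy as np

def box_count_dimension(heights, sizes):
    # heights: 2D array of terrain heights on a regular grid (e.g. 1 m spacing)
    # sizes:   iterable of integer cube edge lengths, in grid units
    counts = []
    for s in sizes:
        n_boxes = 0
        for i in range(0, heights.shape[0], s):
            for j in range(0, heights.shape[1], s):
                block = heights[i:i + s, j:j + s]
                # cubes of edge s needed to span the height range over this block
                n_boxes += int(np.ceil((block.max() - block.min()) / s)) + 1
        counts.append(n_boxes)
    # the slope of log N(s) versus log(1/s) estimates the box-counting dimension
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope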
Validation
To test our results we decided to compare the vertical velocity distribution extrapolated using our formula with measurement data taken from a wind measurement station over the 12 months of 2010 (courtesy of prof. Lars Seatran, Norwegian University of Science and Technology, Trondheim, Norway).

The 100 m measuring mast is located in the western part of the Skipheia region. The basic wind climate of the site can be described as typically coastal, with an average surface roughness of z0 = 0.00308 m and a mean wind velocity at 100 m of 8.31 m/s. The wind rose (for neutral stratification) is presented in Figure 5.

The data for comparison were filtered to include only neutral stratification (the logarithmic profile is valid only for neutral conditions). Stability was calculated based on 10 min intervals (only the samples with 100% data recovery were chosen) using the bulk Richardson number and the Monin-Obukhov length.
Surface roughness
Nowadays, the logarithmic law is typically used for the calculation of vertical wind profiles [3,5]; it can be presented in the form given by equation (2):

u(z) = (u*/κ) ln(z/z0),     (2)

where u(z) is the mean wind speed at height z, u* is the friction velocity, κ is the von Kármán constant and z0 is the surface roughness.
The term "surface roughness" can be misleadingthe logarithmic law is derived from earth momentum equation and , the surface roughness is an integration constant, not a measurable feature of "terrain surface".
Of course, knowing the average wind velocities at at least two altitudes, the distribution parameter can be fitted, but it is not known before the measurement. The only estimate is a tabulated literature value.

That is the most common situation: the wind velocity is known or measured at a given height and has to be extrapolated to the desired altitude. In such cases, standards [3,5] and books [1] propose the application of z0 from tables. In such tables the terrain description and the value of z0, which varies between sources, can be found. However, the biggest problem is how to apply the tables for non-homogeneous areas. Standard [5] allows up to 10% variation of the terrain type, but that is a rare situation.
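For illustration, the two standard operations implied here, fitting z0 from measurements at two heights and extrapolating the speed to another height with the logarithmic profile, can be sketched as follows (a minimal sketch for neutral stratification; variable names and example values are illustrative):

import math

def roughness_from_two_levels(z1, u1, z2, u2):
    # solve u1/u2 = ln(z1/z0) / ln(z2/z0) for z0
    ln_z0 = (u1 * math.log(z2) - u2 * math.log(z1)) / (u1 - u2)
    return math.exp(ln_z0)

def extrapolate_log_law(u_ref, z_ref, z, z0):
    # logarithmic profile: u(z)/u(z_ref) = ln(z/z0) / ln(z_ref/z0)
    return u_ref * math.log(z / z0) / math.log(z_ref / z0)

# example: extrapolate a 10 m measurement of 6 m/s to 100 m over smooth coastal terrain
z0 = 0.003
print(extrapolate_log_law(6.0, 10.0, 100.0, z0))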
Findings
Finally, regression analysis was performed to find the relation between the surface, described by its fractal dimension, and the vertical wind velocity dispersion parameter, the surface roughness.

To ensure that the solution is not site specific, we based the calculations on 16 direction sectors, each separately representing a different surface type. This has given us 16 datasets of z0 and fractal dimension.

After preliminary selection, we have decided to describe the regression with a custom modified exponential model in the form given by equation (3), where d is the fractal dimension of the terrain, z is the investigated height, z_ref is the reference height, u(z) is the wind speed at height z and u(z_ref) is the wind speed at the reference height; the remaining symbols are the fitted model constants.

Using MATLAB curve fitting tools, the surface roughness was associated with the fractal dimension of the terrain. The goodness of fit was more than satisfactory: R squared and adjusted R squared were both equal to 0.82. It is a noticeable improvement of the predictions compared to calculations of the velocity distribution based on surface roughness values available in the literature (e.g. ESDU and Eurocodes, see Figure 6).
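As an illustration only, the regression step could be reproduced in Python with scipy; note that the paper's exact "custom modified exponential model" is not given here, so the simple form z0 = a*exp(b*d) below, as well as the example numbers, are assumptions:

import numpy as np
from scipy.optimize import curve_fit

def z0_model(d, a, b):
    # assumed illustrative model form, not the paper's exact equation (3)
    return a * np.exp(b * d)

# hypothetical per-sector values; in the paper there are 16 direction sectors
d_sector = np.array([2.05, 2.10, 2.18, 2.25, 2.33])       # fractal dimensions
z0_sector = np.array([0.002, 0.004, 0.010, 0.030, 0.080])  # fitted surface roughness [m]

(a, b), _ = curve_fit(z0_model, d_sector, z0_sector, p0=(1e-6, 5.0))
print("z0(d) = %.3g * exp(%.3g * d)" % (a, b))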
Research limitation
Although the proposed methodology was developed based on 16 direction sectors representing different terrain types, we believe it should be tested on different sites for rigorous validation.

We are also aware of the limits of our findings due to the limited area investigated (500 m radius).

However, our formulation was positively tested against atmospheric wind data at six different altitudes (0-100 m) under neutral atmospheric stratification. Our further research will aim to investigate the listed limitations, improving the proposed methodology.
Practical implications
Our findings have important practical implications. The presented methodology leads to a better estimation of the vertical wind profile, one of the basic wind parameters needed not only for the estimation of wind potential in the energy sector, but also in meteorology and dispersion studies.

Most importantly, our technique relies on topographical data, bringing the promise of straightforwardly including the terrain shape in the estimation of basic wind parameters. Additionally, the proposed methodology can be applied easily in cities (urban wind) or in other complicated terrain: the fractal dimension is a universal measure and it is applicable to every type of terrain.
Originality
To the best of our knowledge, there are no reports in the literature describing the presented issues. The application of the fractal dimension as a surface characterization measure is not a new topic in metallurgy and geology, and the fractal dimension has been used as a characteristic time series feature in wind engineering as well, but as far as we know the relation between the fractal dimension and the vertical wind velocity distribution has not been presented yet. | 2019-06-13T13:09:40.553Z | 2017-06-30T00:00:00.000 | {
"year": 2017,
"sha1": "ba95e097e50fd5c082f8c9e11435771e1c65b808",
"oa_license": "CCBYSA",
"oa_url": "https://doi.org/10.5604/01.3001.0010.4830",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "e6cdf931d0a6edddf57b8b298a8a0b2d54129052",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Geology"
]
} |
165395887 | pes2o/s2orc | v3-fos-license | ‘Bound Coolies’ and Other Indentured Workers in the Caribbean: Implications for debates about human trafficking and modern slavery
Under systems of indenture in the Caribbean, Europeans such as Irish, Scots and Portuguese, as well as Asians, primarily Indians, Chinese and Indonesians, were recruited, often under false pretences, and transported to the ‘New World’, where they were bound to an employer and the plantation in a state of ‘interlocking incarceration’. Indentureship not only preceded, co-existed with, and survived slavery in the Caribbean, but was distinct in law and in practice from slavery. This article argues that the conditions of Caribbean indenture can be seen to be much more analogous to those represented in contemporary discussions about human trafficking and ‘modern slavery’ than those of slavery. Caribbean histories of indenture, it is proposed, can provide more appropriate conceptual tools for thinking about unfree labour today—whether state or privately sponsored—than the concept of slavery, given the parallels between this past migrant labour system in the Caribbean and those we witness and identify today as ‘modern slavery’ or human trafficking. This article thus urges a move away from the conflation of slavery and human trafficking with all forced, bonded and migrant labour, as is commonly the case, and for greater attention for historical evidence.
Introduction
Practically, an immigrant is in the hands of the employer to whom he is bound. He cannot leave him; he cannot live without work; he can only get such work and on such terms as the employer chooses to set him; and all those necessities are enforced, not only by the inevitable influence of his isolated and dependent position, but by the terrors of imprisonment and the prospect of losing both labour and wages (Beaumont 1871). 1 When it is remembered that the victim of the system-I can call them by no other name-are generally simple, ignorant, illiterate, resourceless people belonging to the poorest class of this country and that they are induced to enter-or it would be more correct to say they are entrapped into entering-into these agreements by the unscrupulous representations of wily professional recruiters who are paid so much a head for the labour they supply and whose interest in them ceases the moment they are handed to the emigration agents, no fair minded man will, I think, hesitate to say the system is a monstrous system, iniquitous in itself, based on fraud and maintained by force (Gokhale 1912). 2 Indian and Chinese indenture in the nineteenth century, as described by observers such as Gokhale and Beaumont, lend themselves to a reflection on that history for discussions about human trafficking today, given the striking similarities in conditions that are presented about both. The resurgence of the claim that Irish indentures were 'slaves' in the history of the Americas 3 urges us to consider the broader question of how indentureship can be understood as 'modern slavery'.
Given that Irish, Chinese and Indian indenture and the enslavement of Africans were important to the making of the Caribbean and have long been discussed by Caribbeanists, it seems appropriate to delve into the region's history and scholarship to think through such questions. I propose here that the simultaneous and serial histories of slavery and indentureship in the Caribbean, alongside centuries of observations, accounts, and analyses comparing the two systems, provide us with tools for a rethinking of current discourses of human trafficking and 'modern slavery'.
The questions I seek to engage here are not about which labour systemindentureship or slavery-was the more monstrous, for both were violent, coercive and inhumane. I am also not interested in contributing to scholarship that argues for a hierarchisation of oppression. Rather, I am preoccupied with questions about how histories of slavery and bound labour 4 in areas of the world such as the Caribbean can be seen as 'parallel lives and intertwined belongings', which produce different knowledge and understandings, 5 and which in turn could influence current thinking about human trafficking and 'modern slavery'. Moreover, as a critical sociologist, inspired by a historical materialist reading of the social and political, this paper does not engage with debates about the 'use and abuse' of history. Rather, it is an effort to present an alternative to those discourses on 'modern slavery' and human trafficking that lack a reflexivity about slavery in the past, and which currently dominate public and policy interpretations of forced and migrant labour. 6 4 I use the term 'bound labour' to point generally to the practice of indenture in the Caribbean as well as contemporary migrant labour systems-state and privately sponsored-that recruit people often through fraudulent means, tie a labourer to an employer or sponsor, require them to work at a specific job for a period of time, and involve some form of financial indebtedness. This may or may not be similar to the concepts of 'bonded', 'debt-bonded' or 'forced labour'. It is noted, however, that these and other such terms all present definitional problems, generate endless debate amongst labour historians, and are often used interchangeably in discussions about human trafficking and modern slavery. 5 K Nimako, 'Conceptual Clarity Please! On the uses and abuses of the concepts of "slave" and "trade" in the study of the transatlantic slave trade and slavery' in M Ara jo and S Rodr guez Maeso, Eurocentrism, Racism and Knowledge, Palgrave Macmillan, 2015, p. 189. 6 This paper was first presented at Durham University in May 2017, and while it has greatly benefitted from discussion with Siobhan McGrath, David Lambert, and Richard Huzzy, as well as from feedback from two anonymous reviewers and the Anti-Trafficking Review editors, it remains a work in progress.
Caribbean Indentureship
Caribbeanist historians, sociologists, anthropologists, storytellers, novelists, poets and the like, tend to agree that indentureship was a labour system that pre-dated, co-existed with, and survived slavery, and was organised with a considerable level of fraud and violence by colonial governments to enable farm and plantation owners access to and control of a large pool of low-cost wage labourers for the agri-industry. Indentureship in the Caribbean was integral to the globalisation of capitalism from the fifteenth century onwards. The drive to accumulate capital not only interfered with other modes of production/ways of life and produced social and economic dislocations and political conflicts, but moved labour from those areas that it impoverished and disrupted to new hubs of production, such as plantations in the 'New World'. The Caribbean system of indenture relied on the recruitment, often under false pretences, of dispossessed and marginalised people (mostly young adult men) from Europe and Asia, and contractually binding them to a fixed term of work for a single employer in the British, Dutch, Danish, Spanish and French colonies in exchange for transportation to (and sometimes from) the colonies, subsistence wages and in some instances, land. Indenture contracts varied between one and fourteen years, with possibilities or requirements for re-indenture after the initial contract. The indentured were shipped to the Caribbean and confined to a plantation or estate where they lived and worked under conditions comparable to those for Africans under slavery. They had no choice in employer, could not change employers or buy themselves out of, or negotiate their contract, nor could they move freely without the consent of their employers. Planters in collusion with colonial governments often managed to maintain them in states of indenture or dependency through creating economic conditions that demanded or required re-indenture after the initial contract. The indentured were, in Guyanese Indian vernacular, 'bung coolies'-bound to employer and the plantation-in a pattern of 'interlocking incarceration '. 7 While most attention goes to the system that followed the abolition of slavery by the British in 1834, indentureship also occurred both before and during the period of slavery. Impoverished, destitute or imprisoned white European men and women, as well as children, made up some of the first cohorts of labourers from the late 1620s to early 1700s, the majority of whom were indentured to tobacco and cotton farms in the Caribbean. Numbers are hard to come by, but estimates are that prior to 1660 around 190,000 whites arrived in the English colonies in the Caribbean, such as Barbados, St. Kitts and Nevis, Montserrat, Antigua and 7 B V Lal, 'The Odyssey of Indenture: Fragmentation and reconstitution in the Indian diaspora ', Diaspora, vol. 5, no. 2, 1996, p. 174. Jamaica. 8 Barbados, for example, received a large number of Irish indentures and some Scots, English and Welsh who, prior to the late 1640s, are said to have mostly left for the Caribbean 'willingly', in search of a better life. Some were under contract 'to work for their master for an agreed-upon period (usually between three and seven years) in exchange for the cost of their passage, clothing, provisions while in service, and the promise of between two and ten acres of land upon the completion of their term of indenture'. 9 Many thousands of others, including children, arrived without contracts. 
They were joined by vagrants, those considered felons and criminals who were exiled to the island, and political prisoners following conflicts such as the 1649 Cromwell invasion of Ireland, which led to several thousand Irish and others being 'Barbadosed'. 10 Women were also rounded up and taken off the streets of London and shipped to the colonies. All were pressed into indenture as agricultural labourers, often working alongside enslaved Africans. 11 With the expansion of the land-gobbling sugar plantation and the turn to Africa for a seemingly endless and more controlled labour supply, the promise of land as well as future work for the former indentured evaporated, leaving a mostly 'un(der)employed, poor and propertyless population'. 12 White, landless, formerly indentured workers sought to re-indenture themselves, tried to migrate elsewhere, or eked out a living in the marginal spaces. Barbados', Bim, vol. 15, no. 57, 1974, pp. 41-55;S O'Callaghan, To Hell Later cohorts of indentured workers arrived in the Caribbean from 1834, after the emancipation of Africans from slavery under the British, continuing well into the twentieth century, consisting primarily of Indians, Chinese and Indonesians ('Javanese'), with smaller numbers from Britain, Malta, France, Germany, and Madeira and the Azores ('Portuguese'). Some formerly enslaved Africans already in the Caribbean as well as Africans transported directly from Africa, were also indentured. Around half a million Indian workers replaced enslaved Africans on Caribbean plantations in this period with the majority in Guyana, Trinidad and Suriname, others in Jamaica, Guadeloupe, Grenada and French Guiana. 14 Around 120,000 Chinese were transported to Cuba, and between 1853 and 1884 about 18,000 to British colonies, especially Guyana. 15 Surinamese history captures the extent and diverse origins of the indentured population in the Caribbean from the nineteenth century on. The Dutch colony drew first on labour from China and Madeira, then from Dutch colonies in Indonesia and British colonies in the Caribbean, and from 1873 to 1916 from India. 16 The transportation of indentured labourers from Indonesia continued until 1939.
Around the region, indentureship and slavery were complexly intertwined. The indentured all started out as agricultural workers and domestic servants, sometimes working alongside enslaved Africans. Yet some, such as whites and Chinese, were encouraged to take up semi-skilled, artisanal, or shopkeeping positions, with some whites securing racial privilege through the 'public and psychological' reward of whiteness, 17 taking up appointments as lowly managers and overseers of the enslaved. Some former enslaved Africans, in seeking to survive after being freed from slavery, opted for or were driven into indentureship, often moving to colonies where the agri-industry was then expanding (particularly the Guyanas). 18 Trinidad and Guyana 1875-1917, Ian Randle Press, Kingston, 1994 Indies, no. 4, 1971, pp. 62-73. In recalling the voyage of the Cinq-Freres with Africans from Sierra Leone to French Guiana in 1854, Monica Schuller observes that, 'Three shipping companies recruited indentured workers to French Guiana… the first two were for the voluntary engagement of free Africans, while the other involved the purchase of slaves followed by a declaration of their freedom, and their immediate enrolment as contract labour for French Guiana.' 19 Celine Flory adds to this, noting that the French government established a 'repurchasing' programme-rachat-whereby private merchants could purchase captive Africans and force them into a ten-year indenture contract in French Caribbean colonies. 20 Such a switch from slavery to indenture also occurred when ships bound for Cuba and Brazil, carrying enslaved Africans in contravention to the European agreements of the time, were intercepted by British ships. The Africans were freed from slavery and, on arrival in the Caribbean, sold as indentured workers. 21 Caribbean history is thus marked by the overlapping of two distinct labour regimes for over three centuries, with people sometimes moving between the two, experiencing both, and with planters managing both, at times simultaneously.
Indentureship as Slavery?
Caribbean indentureship-both the early and later forms-has often been compared to and described as slavery. In 1667, for example, the indentured were being described as 'poor men that are just permitted to live, and a very great part Irish, derided by the Negroes, and branded with the epithet "white slaves"' 22 , or as sharing a common sufferance and a common grievance with enslaved Africans. 23 That with many of their countrymen, they were induced by certain evil disposed persons, under false pretenses, to quit their native country, Fayal, to become agricultural labourers in this Colony. Of the whole number thus cajoled, one third only are still in existence. The rest have fallen victims to the unhealthiness of the climate or the cruelties of the slavery system to which we, equally with the unfortunate blacks have been subjected. …Men, women and children have suffered the greatest misery and oppression on several estates where they have been forced to work far beyond their strength by coercion of the whip, without proper shelter at night or adequate food during the day. 24 Following the abolition of slavery, it was not uncommon for indentureship to be labelled 'the new slavery', especially by those agitating for the abolition of the system, with figures such as the former Chief Justice in British Guiana, Joseph Beaumont, publishing his observations in Britain under titles such as The New Slavery: An Account of the Indian and Chinese Immigrants in British Guiana. According to British abolitionist George Thompson in an address to the House of Commons in the 1880s about indentureship, 'The system of emigration has been false, and to attempt to carry it out extensively would only be to create a new slave trade under the false colours and a modified description.' 25 Similarly, a later trend in Caribbean historiography has been identified as 'neo-slave scholarship', in which indentured Indians in particular have been categorised and described in similar ways to enslaved Africans-as victims, forced and broken, and subject to intense violence, with little agency or ability to resist. 26 Likewise, it is argued that the recent resurgence of the white slavery narrative in the Americas appropriates a history of suffering and trauma, and stresses 'a sense of shared victimization' with enslaved Africans. 27 A large part of the claims of indenture-as-slavery lies in the material conditions of indentureship and de facto experiences of the enslaved and the indentured. Richard Ligon, in writing about his stay in Barbados from 1647 to 1650, remarked, 'if the masters be cruel, the servants have very wearisome and miserable lives…I have seen cruelty there done to Servants, as I did not think one Christian could 24 Williams, p. 97. 25 Ibid., p. 99. 26 An example of this scholarship can be found in H Tinker, A New System of Slavery: The export of Indian labour overseas, 1830-1920, Oxford University Press, London, 1974 have done to another.' 28 About the shipping of Chinese to the Caribbean, Mary Turner notes: The ships employed in the trade … were prepared like the slave ships with gratings over the holds to allow only one person on deck at a time: small cannon, ready loaded, guarded the mouth of the hatches and the steam ships had neat contrivances for letting steam into the hold in case of real trouble … only the chains were missing. Water shortages, disease and mutinies characterised the voyages… the sailors called them death voyages. 
29 On arrival in Cuba, Chinese indentures were 'subjected to the same discipline as slaves'. 30 Others have commented on the cultural similarities to the construction of the category 'slave', through a process of dehumanisation and violence. The British made 'coolies', Gaiutra Bahadur writes: the system took gardeners, palanquin bearers, gold-smiths, cow-minders, leather-makers, boatmen, soldiers, and priests with centuries-old identities based on religion, kin and occupation and turned them all in an indistinguishable, degraded mass of plantation laborers without caste and family… Like the slaves before them, they were an entirely new people, forged by suffering, created through destruction. 31 Slavery and indenture appear to share many dimensions in such first-and secondhand accounts, and slavery has been and continues to be evoked in ways to speak about the cruelties, coercions, and highly exploitative character of the indentureship system. Such attempts to describe Caribbean indentureship as a new form of slavery, or to equate it with slavery-like conditions, are analogous to the twenty-first century efforts to make forms of migrant labour coeval with human trafficking. It also signals the ease with which slavery worked then, as it does today, as a metaphor for a lack of freedom. However, despite similarities in some conditions and experiences of enslavement and indenture and the violence of both labour systems in the Caribbean, the two are widely recognised by scholars and writers alike to be quite distinct from each other-distinctions that have resonance for discussions about 'modern slavery' and human trafficking.
Modern Slavery and Human Trafficking as Indenture?
A most obvious distinction is that the indentured in the Caribbean were for the most part contracted, and it was their labour that was sold and traded as a commodity through the indenture contract that tied them to the employer. They were not, as enslaved Africans were, legally defined as property, chattel, or non-human, not excluded from property rights, nor were their 'owners' compensated for the loss of property at the point of their emancipation. Morally and legally the indentured were defined as human persons-albeit, in Mill's term, sub-persons 32 -who could make claims to legal rights both as citizens of their home country and under indenture laws in the colonies, 33 and could own property. The premise of a contractde facto or de jure-and the claim to rights that were experienced by the indentured in the Caribbean, echo today throughout discussions about human trafficking and 'modern slavery', where it is also widely acknowledged that the majority of 'trafficked victims' or 'slaves' are defined as being bound to specific types and terms of work, often through a debt, 34 retain basic citizenship rights in their countries of origin and can make claims to a range of human rights, even while they may be denied rights as (im)migrants at the new sites of work.
Perhaps as importantly for a comparison with slavery discourses, is that indenture for the most part rested upon 'choice'-that is, impoverished, destitute, dispossessed people were compelled to find some form of subsistence and even though were 'lured' to the 'New World' by recruiters with promises of 'a new life' and prosperity without usually knowing about the cost of living in the colonies or the conditions of work, went voluntarily. 35 The indentured, as is recounted for the Irish as well as later groups, had a choice between staying in places where conflict or famine ruled, or going along with a recruiter and accepting a contract to work for a fixed term overseas, at times being enticed by 'massive propaganda campaigns' about the opportunities in the Caribbean. 36 Bahadur observes about the recruitment of Indian 'coolies': Rights, vol. 6, no. 2, 2007, pp. 181-207. 35 Lal, p. 174. 36 Sheppard.
Recruiters lived in the local imagination as schemers, liars, even kidnappers. According to widespread belief, they did not inform. They misinformed. They gave recruits the false impression that they could return home from their jobs for the weekend: they promised work as easy as sifting sugar; and they exaggerated the gains to be had, inflating wages and conjuring lands of milk, honey and gold. In coolie folk songs, the recruiter is a cursed, vilified figure. 37 In such a process, it is argued, one can hardly speak of free choice, but instead a choice determined by need - a circumscribed agency. 38 The 'choice' for indenture rested preponderantly on a desire to find a better life, often to escape violence - family, spousal, and other - or starvation. Thus, as Dale Bisnauth concludes, even though the fear, in the case of Indians, of crossing 'the Black Waters' and hence becoming outcaste, was very strong, 'the stress of circumstances' proved for some to be stronger. 39 As with so-called modern slaves and trafficked persons today, they were, in Jo Doezema's words, 'forced to choose'. 40 Enslaved Africans, on the other hand, had no semblance of choice at any point in the process. They were kidnapped, stolen from their homes and villages, manacled, and taken in chains from Africa to the Americas. There is little scholarly or other disagreement about their forced departure from villages in Africa, or about the brutal conditions in the barracoons and forts of West Africa, on the ships on the middle passage, or on the auction block and sugar plantations in the 'New World'. Enslavement did not depend on Africans being pushed by famine, landlessness, domestic violence, or other miserable conditions - they were captives, denied any form of decision-making or agency in the process of being made a slave.
Historiography thus identifies the 'root causes' for indenture as similar to those identified for trafficked persons and the 'modern slave'. The poverty, food shortages, landlessness, family circumstances, domestic violence, war or religious persecution, or a search for security and safety that propelled people into indenture, occurred alongside but was not the same as the history of the capture and enslavement of Africans. However, both systems shared a global context of the 37 Bahadur, p. 38. Rights, resistance and redefinition, Routledge, New York, 1998, pp. 34-50. expansion of capitalist production and industries and capital's constant search for cheap labour and services, as well as the space of the plantations in the 'New World'. And it is the parallel history of indenture, with its tangle of dislocation, survival strategies, fraud, demands of capital, and hopes for a better life, that led many people to enter into formal and informal agreements with recruiters and employers. Contemporary migrant labour systems, such as work programmes in Canada that rely on agricultural labour from Jamaica and Mexico, domestic labour from the Philippines, and sexual labour from Latin America and post-socialist states in Europe, continue to manifest problems similar to those encountered by indentured workers in Caribbean history: recruitment under false pretences, repayment through labour for an overseas passage, low wages, agreements that tie them to one employer, and poor working and living conditions at the new site of employment. And while human trafficking is usually claimed to operate underground, the state continues today to regulate labour and capital, profiting from arrangements that enable conditions of unfreedom. 41 In this way, the role of the state in creating the conditions for trafficking resonates with the regulation of indenture by colonial governments.
Indenture was constructed as temporary and return home was promised and therefore sometimes possible. Between 20-25% of Asians are believed to have been repatriated after indenture in the Caribbean. Some were forced into another period of indenture in order to qualify for their passage home. Many of the migrant workers did not or would not return to their natal land once their indenture had ended, and having no other survival options, re-indentured themselves in the Caribbean. Others, once back in India, China or Java, re-indentured themselves and returned to the colonies. In researching her own family history, Bahadur notes, 'About 7 percent of emigrants arriving in Guiana in the dozen years before my great-grandmother did -2,075 people -had been indentured before, either there or somewhere else.' 42 Analogous to situations of 're-trafficking' today, the re-indentured knew they were to pay off the costs for their transportation and maintenance through hard labour, that their movements would be circumscribed, and that the work and living conditions in their place of employment were harsh. Still, hope for a better life prevailed, directing them into the hands of unscrupulous middle-persons, recruiters, transporters and employers, with the expectation that the difficulties along the way were for a finite period. Today's experiences of seasonal migration for wage labour in salt pans, export fish-processing zones, strawberry farms, sex industries, domestic and care work or the kafala system, in India, Denmark, the US, the Mekong, and the UAE, are most commonly held to represent the bulk of what is identified as human trafficking, forced labour and 'modern-slavery' in the twenty-first century. These exhibit similar qualities, with many people returning to or maintaining connections with home. 43 Slavery, on the other hand, was for life and was hereditary, where the enslaved were 'alienated from all rights or claims of birth', 44 and return to Africa was not an option. Few accounts of re-enslavement emerge in Caribbean history, and the concrete experience of moving from slavery into another form of unfree labour signals the distinction between the two systems. Even though plantation conditions might have been similar, there were clear boundaries between the conditions of indenture and those of slavery.
Race, Gender and Sexuality
Racialised and gendered dimensions of Caribbean indentureship can further elucidate analogies between historical and contemporary instantiations of migrant and forced labour. Caribbean history allows us to see that only certain racialised categories were deemed enslaveable (peoples who at the time were indigenous to the Americas and Africa), with blackness emerging as a critical category in the making of the 'slave' under modernity. The Caribbean indentureship experience, however, was 'colour-blind' - notions of race were not foundational to the system, even while constructs of racial difference saturated indentureship and were used to justify the harsh treatment of some workers, and at times the privileging of others. The arguments that today 'modern slavery' and human trafficking do not depend on race again point to the similarities between contemporary forms of bound labour and those of yesteryear, while also serving to erase the specificity of contemporary global racialised divisions of labour. Conflating twenty-first century bound labour with slavery thus elides the significance of blackness in the making of transatlantic slavery, as well as the legacy of that anti-black racism that manifests today in the Americas through the incarceration and disenfranchisement of millions of people of African descent. It is also argued that such an erasure works politically to deny reparation claims for slavery. 45 A paralleling of situations 43 described as human trafficking and 'modern slavery' with those of indenture could thus enhance our understandings about the ways in which race both informs and obscures labour relations under capitalism. 43 See for example: Rhacel Salazar Parreñas, 'The Indenture of Migrant Domestic Workers', Women's Studies Quarterly, vol. 45, no. 1&2, 2017, pp. 113-127.
Women's sexuality also played a large part in recruitment processes for indenture. Accounts or analyses of indentured European women are hard to find, yet women were documented amongst the destitute, the landless, and the deported political or religious prisoners, even while details are scant. Jill Sheppard's research suggests that in 1645 soldiers in England visited 'brothels and other places of ill-repute and press-ganged 400 women of loose life to join several hundreds already on board ship for Barbados', although what became of the women is not apparent. 46 From India and China historical evidence is more available. Women were deemed hard to recruit, and recruiters are recorded as having to pay up to double the amount for women than men. Moreover, Indian women were recruited not in the first place for their labour, but to tie men to the plantations-i.e. on the basis of their sexuality-to marry, provide care work, bring stability to the male labour force, and help eliminate the cost of remigration and the loss of workers. In India, 'Agents for indenture … circulated notices in the Bihari countryside promising women that, if they migrated to the sugar colonies, they would "find husbands at once among the wealthier of their countrymen"' and in China, 'prospective migrants to these colonies were given an incentive of 20 for wives', while 'women were not indentured but arrived officially as companions or wives of indentured Chinese men'. 47 Women's (hetero)sexuality under indenture was of prime interest to the employers, although not for reproductive purposes-the plantocracy was not concerned with reproducing the labour force through encouraging births. Adult labour was plentiful, could be obtained cheaply, and was renewed through constant importation. 48 In this way, sexual, emotional and care work for indentured Asian men was central to the women's recruitment and employment. As wage labourers they were deemed inferior to men, and were paid less even while they performed the same work in the fields, but their sexuality was highly prized by the employers. The sexualisation of, in particular, Asian indentured women, is not dissimilar to that which is described as 'sex trafficking' in the twenty-first century, in that sexual labour was, and is, an explicit part of the reasons for the recruitment and overseas employment of women. And as with the contemporary narrative, assertions of sexual agency by indentured Asian women located them in the view of the chroniclers of the time as 'immoral', 'loose' and prostitutes, leading planters at times to force women into monogamous unions. 49 So too, 46 Sheppard, p. 49-50. 47 Bahadur, p. 36; Sheppard, p. 119. See also: Sewradj-Debipersad, p. 20. 48 P P Mohapatra, '"Restoring the Family": Wife murders and the making of the sexual contract for Indian immigrant labour in the British Caribbean colonies, 1860-1920', Studies in History, vol. 11, issue 2, 1995 Mohapatra; Bahadur. regulation of women's sexuality was heightened through narratives about the 'evils' of indentureship, reminiscent of the ways in which discourses of human trafficking work to curtail women's mobility and sexual agency.
Conclusion
While indenture was a vicious and highly exploitative system, relying on false promises to recruit workers, and confinement, abuse and violence at the site of employment, little in the narrating of Caribbean history conflates indenture and slavery, even while a rhetoric of slavery has at times been mobilised to evoke outrage and moral indignation about the conditions of indentureship. The legal status of the indentured as persons, the rights they held, the apparent choice they had to migrate and take up work in a new land, the possibilities or promise of return home, and the temporariness of their condition, all indicate that indenture was significantly different from slavery. Chroniclers of the time as well as historians and other writers have maintained distinct terms and identifications for what took place in the Caribbean from the seventeenth to the mid-twentieth centuries. Those who experienced the move from slavery into indentureship could also likely have spoken about corporeal, physical and economic differences.
Moreover, histories of the simultaneity of indentureship and slavery in the Caribbean enable us to pinpoint important distinctions between these labour systems, and suggest that labelling unfree or forced labour today as human trafficking or 'modern slavery' elides and obscures specificities and differences in legal status and conditions of work and life. As Julia O'Connell Davidson notes, 'Historical evidence … underlines the dangers of de-contextualizing elements of human experience of relationships from entire bundles of rights, obligations, immunities and privileges that go with particular social statuses at particular moments in time.' 50 Even from this initial reading of a 'New World' past, indenture appears far more analogous to conditions of unfree labour today than transatlantic slavery, suggesting that it is a more useable and less salacious term than 'modern slavery' and its counterpart, human trafficking. Thus, rather than appealing to morality or fears about captivity through the notion of slavery or a discourse of human trafficking, we could seek to learn from the past as well as build strategies for change that perform critical analyses of everyday practice with care and respect for that past. In this regard, Caribbean history has much to offer to the contemporary debate. Nevertheless, this is not an argument to simply exchange terms. While a politics of indenture could deflate some of the hype and moral panic that comes with notions of 'modern slavery' and human trafficking, its adoption would not necessarily get 'to the bottom of things'. Migrant rights, 50 O'Connell Davidson, p. 69. | 2019-05-27T13:20:18.087Z | 2017-09-21T00:00:00.000 | {
"year": 2017,
"sha1": "264cb10efa6482a1ba7eab27f79384685097a61b",
"oa_license": "CCBY",
"oa_url": "https://www.antitraffickingreview.org/index.php/atrjournal/article/download/263/234",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "d67996bbf71c14853edf27e4b25ac3b957732a3a",
"s2fieldsofstudy": [
"History"
],
"extfieldsofstudy": [
"History"
]
} |
235666360 | pes2o/s2orc | v3-fos-license | Life Cycle Based Greenhouse Gas Footprint Assessment of a Smartphone
A life cycle assessment (LCA) technique has been used to evaluate the greenhouse gas impact of a smartphone. Smartphones are becoming popular due to the advancement in technology, improved connectivity, and cheaper prices. The production of smartphones on a mass scale has an impact on the environment. A Fairphone 1 from Fairphone enterprise has been chosen for this study as the company is trying to promote sustainability and phones with lower environmental impact. Different life cycle stages are chosen for the study, such as production, assembly, transportation, and end-of-life waste treatment. The use phase was excluded from this study for the sake of simplicity. The goal and scope are defined for the study, and the environmental effect is assessed using the Recipe 2016 method. The climate change impact category has been chosen for the study since it is a relatively well-known impact. The impact assessment results show that most of the contribution to the overall impact is due to the production phase (62%) followed by transportation (27%). The relative contribution of the production of the components is also represented, and it shows that the most likely impact is from the production of Printed Circuit boards (PCBs).
Introduction
With advancements in modern technology, humans have become highly dependent on the extraordinary capabilities of electronic devices. The planet now faces challenges connected with global change and its consequences: extreme climate conditions and the economic, social, and ecological burdens placed on governments are widely attributed to environmental change. There is therefore a strong push to develop greener technologies. According to the United Nations, the world population was estimated at 7 billion in October 2011 and is expected to grow by a further 2 billion over the next 30 years [1]. This growth places great pressure on Earth's natural capital, and resource scarcity, closely tied to the rising production and consumption rates of the modern world, is a growing vulnerability for governments. Over the past decades, cell phones have changed significantly in functionality, and their uses are now diverse: they are no longer used only for communication but also provide substantial computing capability. Even though manufacturers keep adding the newest and most technically advanced features to their smartphone models, product lifetimes are declining significantly. This technical enhancement of small-sized equipment comes at the expense of some of the rarest elements found on Earth. Figure 1 shows the number of smartphone users worldwide up to the year 2020; this number is expected to reach around 3.8 billion next year [2]. The growing demand for smartphone production places a considerable burden on the environment because it requires more resources, and with shorter lifetimes, electronic waste from obsolete smartphones is also a growing concern.
A typical smartphone contains a range of rare-earth and precious metals, including gold, silver, platinum, and palladium. These metals take considerable effort to mine from the Earth's crust and are too valuable to discard. Recycling the e-waste generated is therefore a sound practice for reducing the overall environmental impact. According to the Global E-waste Monitor 2020, about 53.6 million metric tonnes of e-waste were generated in 2019, of which only 17.4% was properly collected and sent for recycling [3].
The Central Pollution Control Board (CPCB) conducted a survey in 2005. The e-waste generated in India that year was around 135,000 metric tonnes and was projected to reach about 800,000 metric tonnes by 2012. At this growth rate, the amount of e-waste is expected to reach nearly 2 million metric tonnes by 2025, as shown in Figure 2 [4].
Most of the life cycle assessment studies reviewed reported only the Global Warming Potential (GWP) impact [5]. The greenhouse gas emission from a Sony Z5 smartphone with accessories is estimated at 19 kg of CO2 equivalent (CO2-e) per year [6]. It is therefore necessary to assess the environmental impact of such products and processes. This study focuses on the potential environmental impact of a smartphone across its life cycle stages; an impact assessment tool is used to evaluate this potential, and recommendations are made.
Life Cycle Analysis of a Fairphone/Smartphone
Life cycle assessment (LCA) is an internationally recognized environmental analysis technique with a set of standards for evaluating the possible environmental burdens and resource consumption at every step of a product or process [7]. In this way, the overall impact of a product or service on the environment is assessed. LCA consists of the following stages (a minimal sketch of the underlying calculation follows this list):
• Goal and scope definition - defining the functional unit and system boundary.
• Inventory analysis - selecting all life cycle stages and defining the input and output parameters and their quantities.
• Impact assessment - determining the type and magnitude of the environmental impacts.
• Interpretation - identifying the largest contributors to the impacts and redesigning the stages to reduce the burdens.
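As a rough illustration of the impact assessment step, the sketch below multiplies hypothetical inventory flow amounts by GWP characterization factors and sums them into a single climate change score. The flow names, quantities, and factor values are illustrative placeholders, not data from this study.

```python
# Minimal sketch of a midpoint LCIA calculation for the climate change category:
# each inventory flow is multiplied by its characterization factor (kg CO2-e per kg)
# and the results are summed. All numbers below are illustrative placeholders.

inventory = {                      # flow name -> amount emitted over the life cycle (kg)
    "carbon dioxide, fossil": 4.20,
    "methane, fossil": 0.010,
    "dinitrogen monoxide": 0.0005,
}

gwp_factors = {                    # characterization factors (kg CO2-e per kg of flow)
    "carbon dioxide, fossil": 1.0,
    "methane, fossil": 29.8,
    "dinitrogen monoxide": 273.0,
}

climate_change = sum(amount * gwp_factors[flow] for flow, amount in inventory.items())
print(f"Climate change impact: {climate_change:.2f} kg CO2-e")
```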
LCA requires extensive and detailed input data and is a time- and resource-intensive process. Data gathering is even more problematic for mobile phones, as little data is available for the components and end-of-life (EoL) stages [8].
Fairphone is a private company that aims to produce smartphones with minimal environmental impact. Fairphone smartphones are currently available in three versions (Fairphone 1, Fairphone 2, and Fairphone 3); the Fairphone 2 was the first modular smartphone available to users worldwide [9]. This study uses the LCA method to estimate the environmental load associated with the different life cycle stages of a Fairphone 1 smartphone. Data for the inventory analysis were modelled using datasets from the Ecoinvent database [10], and were derived from a product teardown used to generate the bill of materials (BOM) tables for the Fairphone 1 [11].
The product breakdown for the Fairphone 1 smartphone is shown in Figure 3 [12]. The detailed BOM is also available for modelling the life cycle inventory. The analysis shows the impact of each life cycle stage within this study's scope, and on this basis recommendations can be made for reducing the overall environmental impact. The next section describes the methodology used for the analysis.
Goal
This study aims to identify the stages in the life cycle of a Fairphone 1 smartphone that impose the greatest burden on the environment and to suggest specific improvements.
Scope
For this study, the scope covers the following life cycle stages:
• Manufacturing.
• Assembly of the smartphone.
• Transportation.
• Packaging.
• End-of-life treatment (recycling).
As mentioned earlier, the inventory data are produced from the BOM, the product breakdown, and the information provided by the suppliers. The impact category chosen for this study is GWP, defined as the heat absorbed by a given mass of a greenhouse gas relative to that absorbed by the same mass of carbon dioxide. The unit for this impact category is kg CO2 equivalent (CO2-e) [13].
The system boundaries are depicted in the flow chart in Figure 4, which also shows the relationships between the processes. The use phase of the Fairphone is not considered in this study and is therefore excluded. The system shown applies to mobile phones in general. Component manufacturing and transport are modelled using datasets from Ecoinvent, as is the energy required for the assembly process. The recycling processes are modelled separately for the Fairphone and the battery, although little relevant data is available on the environmental impact of EoL treatment of mobile phones.
Life cycle inventory analysis
For the inventory analysis, data were obtained from the literature for the available BOM of the Fairphone, and additional information such as component weights was obtained from the disassembled product. Initially, the open-source software tool OpenLCA by Green Delta was selected for modelling the system; however, because a suitable database was not available in OpenLCA, this approach was not adopted and the impact calculations were instead performed manually. The Ecoinvent version 3.6 database was used for the inventory analysis.
Extraction of materials
As seen from the system boundary flow chart, the boundary is defined for component production, and data from the Ecoinvent dataset are used for modelling. Components that are not available in the datasets are modelled from their material composition. These components are:
• The camera unit.
• The earpiece and the speaker unit.
• The vibrator unit.
Production of the components
The BOM for the Fairphone 1 lists all the components used to manufacture the smartphone. The components were weighed and catalogued during the product teardown, and each component was then matched to the corresponding dataset in Ecoinvent for modelling. Table 1 shows the inventory list of components used in producing a Fairphone smartphone and their weights. The material composition of the earpiece and speaker is given in Table 2, and the weights of the metals were used for the modelling. The weight of each remaining unidentified material from the speaker and vibrator was modelled using 'Electronic component, passive, unspecified, at plant' from the Ecoinvent dataset.
Assembly of the smartphone
Dividing the total electricity use by the number of smartphones produced gives an electricity use of about 0.44 kWh per phone. In the literature [14], the total electricity use for the assembly process was reported as 361 Wh, which is comparable to the figure used in this study. The electricity mix data of China is used.
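A back-of-the-envelope sketch of this assembly-stage figure is given below. The plant totals and the grid emission factor are assumed, illustrative values (the paper only reports the ~0.44 kWh per phone and the use of the Chinese electricity mix dataset), so the resulting CO2-e number is indicative only.

```python
# Assembly-stage electricity per phone and an indicative GWP contribution.
# total_electricity_kwh and units_produced are hypothetical plant figures chosen so that
# the per-phone value matches the ~0.44 kWh reported above; the grid emission factor is
# an assumed illustrative value for the Chinese mix, not the Ecoinvent dataset value.

total_electricity_kwh = 44_000     # hypothetical electricity use for a production run
units_produced = 100_000           # hypothetical number of phones assembled in that run

kwh_per_phone = total_electricity_kwh / units_produced        # ~0.44 kWh per phone
grid_factor_kg_co2e_per_kwh = 0.9                              # assumed CN grid factor

assembly_gwp = kwh_per_phone * grid_factor_kg_co2e_per_kwh
print(f"{kwh_per_phone:.2f} kWh/phone -> {assembly_gwp:.2f} kg CO2-e per phone")
```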
Packaging of the components
Packaging data for the smartphone were provided in the BOM. The smartphone guide booklet weighs 68 grams and is modelled with the light-weight coated paper dataset from Ecoinvent. The phone packaging uses kraft paper, represented by 'Kraft paper, unbleached, at plant' in the Ecoinvent dataset. For the packaging of the components, two packaging factors were applied based on each component's weight. If a component weighs more than 0.5 g, the factor was assumed to be 0.1, and the packaging is plastic, modelled using 'Packaging film, LDPE, at plant.' If a component weighs less than 0.5 g, the packaging factor was assumed to be 1.94, and the packaging material is mainly polystyrene, modelled using 'polystyrene, high impact, HIPS at plant.'
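The packaging rule above can be summarised in a few lines of code. This is a minimal sketch: the two example components and their weights are hypothetical, while the 0.5 g threshold and the factors 0.1 (LDPE film) and 1.94 (HIPS) are taken from the text.

```python
# Packaging weight per component, following the rule described above:
# components heavier than 0.5 g get an LDPE-film packaging factor of 0.1,
# lighter components a polystyrene (HIPS) factor of 1.94.

def packaging_weight(component_weight_g):
    """Return (packaging material, packaging weight in grams) for one component."""
    if component_weight_g > 0.5:
        return "Packaging film, LDPE", 0.1 * component_weight_g
    return "Polystyrene, high impact (HIPS)", 1.94 * component_weight_g

# Hypothetical example components, not the full BOM:
for name, weight_g in [("display module", 35.0), ("microphone", 0.2)]:
    material, pack_g = packaging_weight(weight_g)
    print(f"{name}: {weight_g} g -> {pack_g:.2f} g of {material}")
```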
Transportation
The locations of the suppliers were available from the BOM, and the distance between each supplier and the assembly plant was obtained from sourcemap.com. The transport modes were also obtained from the BOM. For road transport, 'transport, lorry, 16-32t, EURO5 [RE]' was used; for air freight, both 'transport, aircraft, freight, intercontinental' and 'transport, aircraft, freight' were used; for waterway transport, 'transport, barge [RER]' was used; and for rail transport, 'operation, coal freight train, diesel [CN]' was used in the modelling. For each component, the total shipped weight - the component weight plus the packaging weight obtained by multiplying the packaging factor by the component weight - is used.
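The functional amount fed to an Ecoinvent transport dataset is the mass moved multiplied by the distance, in tonne-kilometres (tkm). The sketch below shows this calculation for one component; the component weight, packaging factor, and distance are hypothetical examples, whereas in the study the real values come from the BOM and sourcemap.com.

```python
# Tonne-kilometre demand for shipping one component from supplier to assembly plant.
# Shipped mass = component weight plus packaging (packaging factor x component weight).
# All input values are hypothetical illustrations.

component_weight_g = 12.0      # hypothetical component weight from the BOM
packaging_factor = 0.1         # factor for components heavier than 0.5 g
distance_km = 1_200            # hypothetical supplier-to-plant distance (sourcemap.com)

shipped_mass_t = component_weight_g * (1 + packaging_factor) / 1_000_000   # g -> tonnes
transport_demand_tkm = shipped_mass_t * distance_km
print(f"Transport demand: {transport_demand_tkm:.6f} tkm for the chosen lorry dataset")
```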
Recycling of the smartphone
From the literature [11], it is found that nearly 60% of Fairphone users are in Germany. For simplicity, this study assumes that 100% of the smartphones collected for recycling are sent to the facilities. The total transport distance from the user to the recycling facility is assumed to be about 1500 km, of which 75% is by lorry and the remaining 25% by train. The respective Ecoinvent datasets used are 'transport, lorry 20-28t, fleet average' and 'transport, freight, rail [BE]'. The mobile phone and its battery are recycled separately after dismantling. Two technologies are generally used to recycle e-waste: the pyro-metallurgical process and the combined pyro-hydrometallurgical process.
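A short sketch of the end-of-life collection transport assumed above follows: 1500 km from user to recycling facility, split 75% lorry and 25% rail, applied to one handset. The handset mass used here (about 163 g including the battery) is an assumed illustrative value; the text only reports 124.88 g for the phone without its battery.

```python
# End-of-life collection transport per phone, using the assumptions stated above.
# phone_mass_t is an assumed illustrative handset mass (battery included).

phone_mass_t = 0.163 / 1000        # ~163 g handset, in tonnes (assumed)
total_distance_km = 1500           # user -> recycling facility

lorry_tkm = phone_mass_t * total_distance_km * 0.75   # 'transport, lorry 20-28t, fleet average'
rail_tkm = phone_mass_t * total_distance_km * 0.25    # 'transport, freight, rail [BE]'
print(f"Lorry: {lorry_tkm:.5f} tkm, rail: {rail_tkm:.5f} tkm per phone")
```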
For the recycling of the Fairphone without its battery (124.88 g), a material flow analysis (MFA) of one tonne of waste mobile phones is used [15]; the MFA mainly covers emissions from the pyro-metallurgical process. For modelling the recycling of the Fairphone battery, the process 'disposal, Li-ions batteries, mixed technology/GLO' from the database is used, which combines hydrometallurgical and pyrometallurgical recycling.
Life cycle Impact Assessment and Interpretation
The method chosen for the calculation of the impact assessment results is ReCiPe 2016. In the current study, midpoint characterization is used. This study's only impact category is climate change or GWP (kg CO2 equivalent). The results for the impact category are reported below. In Figure 5, the contribution of each life cycle stage is shown for the impact category.
Figure 5. Climate change impact
As seen in the figure, the production and transportation phases make the highest contribution to GWP, followed by the assembly, packaging, and recycling processes. From Figure 6, two observations can be made: (1) the production phase contributes more than 60% of the total GWP impact, and (2) the transportation and assembly phases collectively contribute more than 30% of the total impact.
Figure 6. Contribution of each process to GWP
The impact assessment of the Fairphone 1 smartphone for the climate change category is shown in Table 3. Climate change has its highest contributions from the production (62%) and transportation (27%) phases. The contributions of the production of the different components can be seen in Figure 7. PCBs have the highest contribution (49%) in the production phase, with 2.4884 kg CO2-e. Improvements can be made by:
• Reducing the PCB surface area during the manufacturing process for future smartphones.
• Focusing on the recycling of waste PCBs, as they contain precious and rare-earth metals.
Figure 7. Contribution of the production of components
A similar observation applies to the production of integrated circuits, which have the next highest contribution (20%) in the production phase. The contribution of each process to the overall climate change impact is shown in Figure 8.
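As a quick consistency check of these contributions, the reported PCB figure of 2.4884 kg CO2-e at 49% of the production phase implies a production-phase total close to the 5.09 kg CO2-e given in the conclusions, or roughly 62% of the 8.19 kg CO2-e total. The sketch below reproduces this arithmetic using only values stated in the text.

```python
# Consistency check of the reported contributions (all inputs taken from the text).

pcb_gwp = 2.4884                  # kg CO2-e attributed to PCB production
pcb_share_of_production = 0.49    # PCBs' share of the production phase

production_gwp = pcb_gwp / pcb_share_of_production   # implied production-phase total
total_gwp = 8.19                                     # kg CO2-e, total excluding use phase

print(f"Implied production phase: {production_gwp:.2f} kg CO2-e "
      f"({production_gwp / total_gwp:.0%} of the total)")
```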
Transportation has a significant impact (27%) on global warming, mostly caused by the transport of smartphones from China to Germany. This impact can be reduced by using other modes of transport, such as rail. The impact of the LCD screen can be reduced by keeping the display size modest, and using newer technologies such as OLED and AMOLED screens will reduce the smartphone's power consumption.
Conclusions
This study assessed the climate change impact of the Fairphone smartphone, excluding the use phase. The product emits about 8.19 kg CO2 equivalent, of which the production phase contributes about 62% (5.09 kg CO2-e), followed by transportation (2.23 kg CO2-e). Potential improvements can be made to reduce the overall climate change impact, and these modifications can be applied to future smartphones to reduce the burden on our environment. | 2021-06-29T20:03:43.260Z | 2021-01-01T00:00:00.000 | {
"year": 2021,
"sha1": "32c2ef031de7e0a36c280919b54596de51148cb9",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1755-1315/795/1/012028",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "32c2ef031de7e0a36c280919b54596de51148cb9",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Physics"
]
} |